
TFS 2018 RC2 is available


Today we shipped the second release candidate for Team Foundation Server 2018.  If all goes well, this should be the final release candidate.  We’ll make sure that all the issues you report are addressed and we’ll ship the final version in a couple of months.  From the release notes, you can see TFS 2018 has tons of new features.  RC2 adds to that list with a bunch more improvements.  At this point TFS 2018 is feature complete.  We have a few dozen bugs that we know about and we are still working on.  Add that to what you all find and that represents the work we have left to do to finish up.

TFS 2018 RC2, of course, has a “go-live” license.  You can upgrade from previous versions to RC2 and you will be able to upgrade from RC2 to the final release.

Important links

Important information on compatibility and support is here.

As I mentioned, there are a bunch of new features in RC2 that were not in RC1.  Some of the notable ones, at least to me, include:

GVFS support – RC2 contains the server side support that allows you to use our GVFS client extensions.  Together, the client and server enable you to use Git repos of virtually any size hosted in TFS or VSTS efficiently.  We’re working on a plan to provide client binaries so you don’t have to build them yourself.

Creating a Git or TFVC folder in the web – I know it seems like such a small thing, but it was one of the most popular extensions in our marketplace.  We decided to roll it into the product so people don’t have to keep going and getting the extension every time they create a new account.

Improved test parallelism – Our parallelizing of tests now considers previous run times to better optimize the distribution.  Instead of just counting tests for parallelization, now it counts execution time – yielding shorter critical paths and faster test runs.
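Time-aware distribution can be pictured with a simple greedy heuristic. The sketch below is hypothetical (it is not the actual TFS scheduler): it sorts tests by their last recorded duration and always hands the next-longest test to the least-loaded agent, which is what shortens the critical path compared with splitting by test count alone.

```python
import heapq

def distribute(tests, agents):
    """tests: dict of test name -> last recorded duration in seconds;
    agents: number of parallel agents.
    Returns a list of (total_time, [test names]) buckets, one per agent."""
    # One empty bucket per agent in a min-heap keyed by accumulated time.
    buckets = [(0.0, i, []) for i in range(agents)]
    heapq.heapify(buckets)
    # Longest tests first, each onto the currently least-loaded agent.
    for name, duration in sorted(tests.items(), key=lambda kv: -kv[1]):
        total, i, assigned = heapq.heappop(buckets)
        assigned.append(name)
        heapq.heappush(buckets, (total + duration, i, assigned))
    return [(total, assigned) for total, _, assigned in sorted(buckets)]

# Durations here are made up for illustration.
runs = {"slow_e2e": 120.0, "api": 40.0, "db": 35.0, "ui": 30.0, "unit": 5.0}
print(distribute(runs, 2))
```

With two agents, the single 120-second test gets an agent to itself while the four shorter tests share the other, so the run finishes when the longest bucket does.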

That’s, of course, just a few of the improvements.  Please try it out and give us any feedback you have.

Thank you,

Brian

 


The self-organizing team’s guide to micromanagers


This is part 2 of 2. Part 1 was an attempt to help micromanagers understand the value of self-organizing teams. Now we’ll take a look at how a self-organizing team can work with a micromanager.

Those “Agile” people are always telling teams how they should run things, as if they knew better. Don’t be a passive order-taker. Become an integral part of the value stream. Work as a partner with business stakeholders. Understand the problems to be solved, rather than just implementing whatever is requested. Then the magic will happen.

That advice can be pretty counter-intuitive for a team embedded deep in the bowels of a traditional organization. I wonder if the problem is the way those “Agile” people talk. They describe everything in glowing terms, all covered in sugar. Haven’t they ever been in the Real World? Let me see if I can describe the same thing without the sugar coating. Maybe that will help.

Anecdote

This is a story that pre-dates the Agile movement by a long stretch. There will be references to technologies the current generation of developers has never seen. Don’t worry about that. The same scene still plays out today, everywhere and always.

In the 1980s, I was part of a team that supported a large and complicated CICS application. This wasn’t a run-of-the-mill CICS application. It was a transaction processing application that “owned” three mainframe systems in three geographically-distributed locales.

It was conceptually analogous to contemporary microservices-based solutions in that operating system instances came and went while the application as a whole was operational 100% of the time. The technologies were different, but many of the challenges were the same: managing transaction integrity, data integrity, logging, performance and so forth across multiple distributed components while presenting customers with seamless, uninterrupted service; and the ability to deploy different components independently of one another in the middle of the day without an outage. Nothing new under the sun, I guess.

I like telling stories like that, because I enjoy reminding you young whippersnappers (you know, like under 45) that you didn’t invent everything in the universe last week.

Anyway, this story isn’t about the technology; it’s about the team’s response to a micromanagement request one fine day. It’s a story that still happens today. A lot. It probably happened to you yesterday. If not, it will probably happen to you today.

Our manager came to me one day and said, “I want you to put the entire master file into a temp storage queue.”

I looked at her without smiling, and nodded. After a thoughtful pause, I asked, “Are you sure that’s what you want? What problem are you trying to solve, really?”

She hesitated, and explained, “Well, I’m looking for a performance boost. Eliminate some I/O.”

“Okay. That sounds like a good goal. Why temp storage?”

“I read that temp storage allows you to keep data in memory under CICS.”

“True. But note the name: temp. It isn’t meant to keep entire master files in memory and hammer them with updates all day.”

“But you can still load them up, right?”

“Sure. You can tell CICS you want such-and-such a queue to be in memory, or on disk. But here’s the deal: If you put too much data in the queue, CICS will automatically change it into an on-disk queue. Then you have much, much heavier I/O overhead than you would have with normal VSAM access. You’d achieve the opposite of improved I/O performance.”

“Oh. Let’s not do that, then. What are our options?”

“Well, CICS has another feature we could use. We can back up a VSAM KSDS in memory. IBM calls it a ‘Shared Data Table.’ It’s a cache mechanism. Recently-referenced records will remain in the cache. Other records can age out, so we can optimize the size of it. CICS will keep the cache and the actual file synchronized without impacting transaction performance.”

“Will it take long to do?”

“Nah. It’s just a configuration setting. No change to source code.”

“Okay. Do that, then.”
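The Shared Data Table described in the dialogue is, conceptually, a bounded read-through cache. The Python sketch below is purely illustrative and has nothing to do with actual CICS internals: recently referenced records stay resident, and the least-recently-used record ages out when the cache is full.

```python
from collections import OrderedDict

class ReadThroughCache:
    """Bounded cache in front of a keyed file: recently referenced records
    stay in memory, the least-recently-used ones age out (the same idea a
    Shared Data Table applies to hot VSAM records)."""
    def __init__(self, read_record, capacity):
        self._read = read_record      # fetches a record from backing storage
        self._cap = capacity
        self._cache = OrderedDict()   # key -> record, oldest first

    def get(self, key):
        if key in self._cache:
            self._cache.move_to_end(key)     # mark as recently used
            return self._cache[key]
        record = self._read(key)             # cache miss: go to the file
        self._cache[key] = record
        if len(self._cache) > self._cap:
            self._cache.popitem(last=False)  # age out the coldest record
        return record

reads = []
cache = ReadThroughCache(lambda k: reads.append(k) or f"record-{k}", capacity=2)
cache.get("A"); cache.get("B"); cache.get("A")   # second "A" served from cache
cache.get("C")                                   # cache full: "B" ages out
print(reads)  # backing-store reads only: ['A', 'B', 'C']
```

The point of the anecdote holds: the caching gives you the I/O win without hammering a mechanism (temp storage) that was never designed to hold a whole master file.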

If you remember part 1 of this two-part piece, then you may recall the story about the soldiers who placed bets on how many quarter-turns of the wrench it would take to snap off a stud. We could have done the same thing in this case. We could have defined an in-memory temp storage queue and loaded it with the contents of the master file. It is technically possible to do that. Stupid as hell, but technically possible. Then we could have placed bets on how long it would take for the manager’s head to explode.

It seems those “Agile” people are right about the value of teams participating actively in the whole process. When we behave like passive order-takers, stupid things happen. So, when your manager tries to micromanage technical decisions, engage them in a discussion. Find out what problem they’re really trying to solve. Then you can use your technical expertise to explore viable solution options. Otherwise you’ll end up with broken studs.

The post The self-organizing team’s guide to micromanagers appeared first on LeadingAgile.


Service Fabric 6.0 Release


Version 6.0 of the Service Fabric runtime and version 2.8 of the .NET SDK are now available, packed with new features, improvements, and bug fixes.

General availability of Service Fabric on Linux

As part of this release, we are happy to announce the general availability of Service Fabric as a container orchestrator on Linux. The Linux environment is identical to the Windows runtime for container orchestration, and Service Fabric fully supports containerized applications running on Linux. We currently support Ubuntu 16.04 for Linux clusters; other Linux OSes, including Red Hat Enterprise Linux, are on the roadmap. For more information, please see this Azure blog post.

Container and Resource Governance Improvements

We've made a lot of improvements to our container support and resource governance across all hosted workloads in Service Fabric (Reliable Services, Guest executables and containers).

SDK Previews for Java and .NET

This release also includes a preview of both the Java 1.0 SDK and the .NET 3.0 SDK. The previews include features across Linux and Windows for Reliable Services written in Java or using .NET Core 2.0.

More details of the release will follow shortly; please check back.

In the meantime head over to our docs landing page for great quickstarts and tutorials: https://docs.microsoft.com/en-us/azure/service-fabric/

All the best,
The Service Fabric Team


Everything you need to know about F# classes in 23 view models


The view model is the centrepiece of MVVM, with no external dependencies. As a result, the view model is an excellent class to study: it requires no libraries while exercising many language features related to classes. These requirements make view models a great example for learning how to build classes in a new language.

If learning F# (FSharp) is on your bucket list, classes are an easy place to start, and everything you need to know about F# classes is covered here. This post shows 23 (mostly) useless view models to get you started with F#. At the end, you will know how to make any useful view model in F#, so let’s get started!

View Model: #1

As already stated, in the purest sense, a view model is just a class. Creating classes in F# is really easy. Here is our first view model:

type ViewModel1() = 
    member this.Foo() = 42

There is no need to create a new file. The only requirement is that it is declared before it is used.

type is synonymous with the class keyword, and the class is public by default, which is generally what the developer intended. Parentheses follow the class name.

Foo is a public method. The keyword member is used to declare anything that belongs to the class. Additionally, don’t forget the instance identifier this. required before the name of the method.

A few striking differences immediately stand out to most developers: the lack of curly braces and the lack of types. The braces are omitted in favour of indentation. Should any problems arise with indentation, the compiler will highlight the line. Secondly, the lack of types can seem daunting. The F# compiler is sophisticated enough that this is generally not a problem: all items are given a type and checked for consistent usage. This is known as type inference.

Variables: More than one way

Variables are possible in F#, and the language even has two ways of doing it. An example of each is listed below:

type ViewModel2() = 
    let mutable _variable = 42

type ViewModel3() = 
    let _variable = ref 42

A key aspect to highlight in the examples above is that mutation/variables require ‘opt in’ with keywords. By default, all items are immutable, which is what most developers prefer. Predictable code is the result of immutability.

Next up, our view model needs a getter to expose the variables.

type ViewModel4() = 
    let mutable _variable = 42
    member this.Foo = _variable

type ViewModel5() = 
    let _variable = ref 42
    member this.Foo = !_variable

As with the methods, the member keyword is used, but the parentheses are left off. For the ref variable, note the ! that is used to return the value contained. If this is missed, the wrong type will be returned. If a setter is also present (shown later), then a compiler error will result.

With the getters defined, a setter is the next step. As a side note, in F# <- is used for assignment. For a ref there is a shorthand := to update the value.

Adding a setter to the property is simple and straightforward: append the keyword and with set, and then name the setter function’s parameter; value is commonly used.

type ViewModel6() = 
    let mutable _variable = 42
    member this.Foo  
        with get() = _variable
        and set(value) = _variable <- value //Assignment in F#

type ViewModel7() = 
    let _variable = ref 42
    member this.Foo 
        with get() = !_variable // same as _variable.Value
        and set(value) = _variable := value //Same as `_variable.Value <- value`

When to use mutable or ref

You should prefer mutable over ref, as it is easier to work with. You can use ref when using a framework that updates the value of properties in a subclass. To illustrate this, here is our next view model:

type ViewModel8() = 
    inherit FrameworkViewModel()

    let _variable = ref 42
    member this.Foo 
        with get() = !_variable
        and set(value) = 
            if (this.SetProperty(_variable, value)) then 
                this.RaisePropertyChanged("Foo")

The key part of the code above is the call to SetProperty, a method on the base class. For a view model, there is no need to set a value if it has not changed. SetProperty performs this check, updates the value if it has changed, and returns true if the value was updated. A ref makes this possible.
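The base class side of this pattern (FrameworkViewModel above) is not shown in the article. As a language-neutral sketch (in Python for brevity, and hypothetical rather than any real framework's implementation), the compare-then-assign-then-notify logic looks like this:

```python
class ViewModelBase:
    """Minimal sketch of the set-only-if-changed pattern behind SetProperty."""
    def __init__(self):
        self._values = {}
        self.changed = []   # names of properties whose change was notified

    def set_property(self, name, value):
        # Skip the update (and the change notification) if the value is equal.
        if name in self._values and self._values[name] == value:
            return False
        self._values[name] = value
        return True

    def raise_property_changed(self, name):
        self.changed.append(name)

vm = ViewModelBase()
if vm.set_property("Foo", 42):
    vm.raise_property_changed("Foo")
if vm.set_property("Foo", 42):            # same value: no notification
    vm.raise_property_changed("Foo")
print(vm.changed)  # ['Foo'] — only the first assignment raised a notification
```

The point is the boolean return: the caller only raises the property-changed event when the stored value actually changed, so the UI is not re-rendered for no-op assignments.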

Auto Properties: A new keyword

Auto properties come with an implicit backing variable. In F# the syntax is a little different to methods. Here is an example with a getter only:

type ViewModel9() = 
    member val Foo = 42

// Long form, prefer the above format if no setter
type ViewModel10() = 
    member val Foo = 42 with get

The key difference here is the use of the keyword val. Additionally, the instance identifier this no longer needs to be declared before the name.

Adding a setter is also super easy, just add a comma and the keyword set:

type ViewModel11() = 
    member val Foo = 42 with get, set

Scoping rules: There’s a twist

Scoping rules are also available in F#. Methods can have their usage restricted to only inside the class with private.

type ViewModel12() = 
    member private this.Foo() = 42

There is no scoping for protected, as it creates potential problems with lambda functions accessing the base class. Instead, use interfaces/higher-order functions.

Prefer function over methods

It turns out there is another way to achieve the same effect as a private method in F#. A function declared in a class will, in most circumstances, behave the same as a private method. Because of this, it is best to prefer functions over methods, as functions have better type inference and can be chained together more easily using F#’s iconic pipe operator |>.

type ViewModel13() = 
    let foo = 42
    let addTen x = x + 10
    let data = foo |> addTen |> string

Closely related to access modifiers is overriding methods. This is also possible in F#, and is just a keyword change: use override instead of member.

type ViewModel14() = 
    inherit MvvmBase()
    override this.Foo() = 42

Adding constructor arguments

Constructor arguments are passed within the parentheses on the declaring line of the class. See below:

type ViewModel15(foo: int) = 
    member this.Foo: int = foo

Types in F# are declared after the name with a colon. The type can also be omitted in many cases and F# will infer the type.

type ViewModel16(foo) = 
    member this.Foo: int = foo

For this example, the member Foo has been declared to return an int, so F# can figure out that the type of foo must also be an int. Alternatively, the types could be declared the other way around. The types and meaning are identical in both cases:

type ViewModel17(foo: int) = 
    member this.Foo = foo

Another important point to highlight with constructor arguments in F# is that they are immutable:

// Will not compile. Error: `This value is not mutable`
type ViewModel18(foo: int) = 
    let fooPlusTen = 
        foo <- 10

If immutability is a problem and mutation is required, there is a simple solution: take a copy of the constructor argument and declare it with the mutable keyword highlighted above (though many developers prefer to avoid mutation as their functional competence grows).

type ViewModel19(foo: int) = 
    let mutable _foo = foo
    let fooPlusTen = 
        _foo <- 10

A Constructor without a Constructor

The syntax for constructors is quite different in F#, though much cleaner. To declare a constructor, after the local functions/values and before any member items, the do keyword signals the constructor, followed by the required statements.

type ViewModel20() = 

    let mutable foo: string = null
    do 
        foo <- "Hello, World"
    member this.Foo() = foo

In the example above, the class is initially constructed with foo set to null. The constructor (the statements after do) is evaluated, and foo is updated to be “Hello, World”.

The example above only aims to highlight the usage of do; using null and mutable is not encouraged. As a developer learns functional programming techniques, null and mutable are needed less and less. The reason is that concepts in functional programming model computation differently, resulting in code that is much clearer in intent and has fewer runtime errors. For a brief highlight of this, see my post on How to turn runtime exceptions into compiler errors.

Dependencies with interfaces

If you have read this far, we’re almost at the point where all the knowledge has been laid out to build any standard view model. To complete this, you need to know how to pass in dependencies, notably interfaces.

An interface can be declared in C# or F# (a C# declaration would need to be in a different project). Here is one written in F#. There are no parentheses after the name, and methods are declared with the abstract keyword and a type declaration.

type IService = 
    abstract member Foo: unit -> int

Foo is a method that, when invoked, returns a single integer. With an interface declared, it can now be used. Here is a view model that uses the interface as a dependency: simply declare the type and use it as you would any constructor argument.

type ViewModel21(foo: IService) = 
    member this.Foo: int = foo.Foo()

If you compare this to the C#/Java equivalent, you will see that a copy of the dependency does not need to be taken. No strange attributes, no local constructor parameters, no duplicate names.

Another key feature of these interfaces and view models is that they are fully compliant with any IOC framework. Register the interface (F# or C#), make sure the view model is appropriately named, and enjoy everything working. These F# interfaces/classes generate IL very similar to what the C# equivalent would output, so you benefit from clearer code.

Interfaces are optional

The last example showed building a view model with a dependency through an interface. In large apps, both the interfaces themselves and the number of interfaces a view model requires can grow large. Many developers have found this can make things difficult to work with, and hard to refactor, as the abstractions are no longer clear.

There is an alternative that many developers have already turned to: replace the interface with just a function. For a full read-up, Scott Wlaschin provides the details in Functional approaches to dependency injection. To follow the approach that Scott describes, all we need to do is pass in functions.

type ViewModel22(foo) = 
    let data = foo()
    member this.Foo: int = data

let vm = ViewModel22(fun () -> 42)  

As stated, no interface is needed, just a function. Type inference checks the type for us, so everything must be wired up correctly. Note, however, that construction of the class must be done explicitly (leave a comment if you know of an IOC container that can resolve functions). The result is code that is easier to read, especially for juniors, since an IOC container is an advanced topic.

Using this functional approach will result in many functions being passed into the view model. In case it gets hard to keep track of those names and types, an alias for each type can be created to keep dependencies readable. Here’s an example with a couple of dependencies:

type IFoo =  unit -> int
type IBar =  unit -> string
type ViewModel23(foo: IFoo, bar: IBar) = 
    let data = foo() |> string
    let message = bar() + " , " + data
    member this.Foo = message

The types of foo and bar are declared explicitly, rather than being inferred by the compiler. As arguments to the class, they now have readable types as opposed to raw function types. The alias only affects readability; it will not help the compiler find errors.

So there you go, 23 absolutely useless view models. F# makes it easy and clear to create view models. The code is shorter, and with a powerful compiler both typing and errors can be reduced!

Your challenge: how many ways can you combine the 23 view models to make useful view models?


The post Everything you need to know about F# classes in 23 view models appeared first on Coding with Sam.


Exciting new things for Docker with Windows Server 17.09


What a difference a year makes… last September, Microsoft and Docker launched Docker Enterprise Edition (EE), a Containers-as-a-Service platform for IT that manages and secures diverse applications across disparate infrastructures, for Windows Server 2016. Since then we’ve continued to work together and Windows Server 1709 contains several enhancements for Docker customers.

Docker Enterprise Edition Preview

To experiment with the new Docker and Windows features, a preview build of Docker is required. Here’s how to install it on Windows Server 1709 (this will also work on Insider builds):

Install-Module DockerProvider
Install-Package Docker -ProviderName DockerProvider -RequiredVersion preview

To run Docker Windows containers in production on any Windows Server version, please stick to Docker EE 17.06.

Docker Linux Containers on Windows

A key focus of Windows Server version 1709 is support for Linux containers on Windows. We’ve already blogged about how we’re supporting Linux containers on Windows with the LinuxKit project.

To try Linux Containers on Windows Server 1709, install the preview Docker package and enable the feature. The preview Docker EE package includes a full LinuxKit system (all 13MB of it) for use when running Docker Linux containers.

[Environment]::SetEnvironmentVariable("LCOW_SUPPORTED", "1", "Machine")
Restart-Service Docker

To disable, just remove the environment variable:

[Environment]::SetEnvironmentVariable("LCOW_SUPPORTED", $null, "Machine")
Restart-Service Docker

Docker Linux containers on Windows is in preview, with ongoing joint development by Microsoft and Docker. Linux containers are also available on Windows 10 version 1709 (the Fall Creators Update). To try it out, install the special Docker for Windows preview available here.

Docker ingress mode service publishing on Windows

Parity with Linux service publishing options has been highly requested by Windows customers. Adding support for service publishing using ingress mode in Windows Server 1709 enables use of Docker’s routing mesh, allowing external endpoints to access a service via any node in the swarm regardless of which nodes are running tasks for the service.

These networking improvements also unlock VIP-based service discovery when using overlay networks so that Windows users are not limited to DNS Round Robin.

Named pipes in Windows containers

A common and powerful Docker pattern is to run Docker containers that use the Docker API of the host that the container is running on, for example to start more Docker containers or to visualize the containers, networks and volumes on the Docker host. This pattern lets you ship, in a container, software that manages or visualizes what’s going on with Docker. This is great for building software like Docker Universal Control Plane.

Running Docker on Linux, the Docker API is usually hosted on a Unix domain socket, and since these live in the filesystem namespace, sockets can easily be bind-mounted into containers. On Windows, the Docker API is available on a named pipe. Previously, named pipes were not bind-mountable into Docker Windows containers, but starting with Windows 10 and Windows Server 1709, named pipes can now be bind-mounted.

Jenkins CI is a neat way to demonstrate this. With Docker and Windows Server 1709, you can now:

  1. Run Jenkins in a Docker Windows container (no more hand-installing and maintaining Java, Git and Jenkins on CI machines)
  2. Have that Jenkins container build Docker images and run Docker CI/CD jobs on the same host

I’ve built a Jenkins sample image (Windows Server 1709 required) that uses the new named-pipe mounting feature. To run it, simply start a container, grab the initial password and visit port 8080. You don’t have to set up any Jenkins plugins or extra users:

> docker run -d -p 8080:8080 -v \\.\pipe\docker_engine:\\.\pipe\docker_engine friism/jenkins
3c90fdf4ff3f5b371de451862e02f2b7e16be4311903649b3fc8ec9e566774ed
> docker exec 3c cmd /c type c:\.jenkins\secrets\initialAdminPassword
<password>

Now create a simple freestyle project and use the “Windows Batch Command” build step. We’ll build my fork of the Jenkins Docker project itself:

git clone --depth 1 --single-branch --branch add-windows-dockerfile https://github.com/friism/docker-3 %BUILD_NUMBER%
cd %BUILD_NUMBER%
docker build -f Dockerfile-windows -t jenkins-%BUILD_NUMBER% .
cd ..
rd /s /q %BUILD_NUMBER%

Hit “Build Now” and see Jenkins (running in a container) start to build a CI job to build a container image on the very host it’s running on!

Smaller Windows base images

When Docker and Microsoft launched Windows containers last year, some people noticed that Windows container base images are not as small as typical Linux ones. Microsoft has worked very hard to winnow down the base images, and with 1709, the Nanoserver download is now about 70MB (200MB expanded on the filesystem).

One of the things that’s gone from the Nanoserver Docker image is PowerShell. This can present some challenges when authoring Dockerfiles, but multi-stage builds make it fairly easy to do all the build and component assembly in a Windows Server Core image, and then move just the results into a nanoserver image. Here’s an example showing how to build a minimal Docker image containing just the Docker CLI:

# escape=`
FROM microsoft/windowsservercore as builder
SHELL ["powershell", "-Command", "$ErrorActionPreference = 'Stop'; $ProgressPreference = 'SilentlyContinue';"]
RUN Invoke-WebRequest -Uri https://download.docker.com/win/static/test/x86_64/docker-17.09.0-ce-rc1.zip -OutFile 'docker.zip'
RUN Expand-Archive -Path docker.zip -DestinationPath .

FROM microsoft/nanoserver
COPY --from=builder ["docker\\docker.exe", "C:\\Program Files\\docker\\docker.exe"]
RUN setx PATH "%PATH%;C:\Program Files\docker"
ENTRYPOINT ["docker"]

You now get the best of both worlds: an easy-to-use, full-featured build environment, and ultra-small, minimal runtime images that deploy and start quickly and have a minimal exploit surface area. Another good example of this pattern in action is the set of .NET Core base images maintained by the Microsoft .NET team.

Summary

It’s hard to believe that Docker Windows containers went GA on Windows Server 2016 and Windows 10 just one year ago. In those 12 months, we’ve seen broad adoption by the Docker community and strong uptake with customers and partners. The latest release adds more functionality to smooth the user experience, brings Windows overlay networking up to par with Linux, shrinks the base container images, and adds support for bind-mounting named pipes into containers.

The post Exciting new things for Docker with Windows Server 17.09 appeared first on Docker Blog.


Yes to databases in containers – Microsoft SQL Server available on Docker Store


Microsoft SQL Server 2017 is now available for the first time on multiple platforms: Windows, Linux and Docker. Your databases can run in containers with no lengthy setup and no prerequisites, and you can use Docker Enterprise Edition (EE) to modernize your database delivery. The speed and efficiency benefits of Docker and containerized apps that IT Pros and developers have enjoyed for years are now available to DBAs.

Try the Docker SQL Server lab now and see how database containers start in seconds, and how you can package your own schemas as Docker images.

If you’ve ever sat through a SQL Server install, you know why this is a big deal: SQL Server takes a while to set up, and running multiple independent SQL Server instances on the same host is not simple. This complicates maintaining dev, test and CI/CD systems where tests and experiments might break the SQL Server instance.

With SQL Server in Docker containers, all that changes. Getting SQL Server is as simple as running `docker image pull`, and you can start as many instances on a host as you want, each of them fresh and clean, and tear them back down when you’re done.
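
In practice that looks something like this (the image name and tag below match the pre-GA SQL Server 2017 Linux image on Docker Store at the time of writing, and the SA password is a placeholder you should replace):

```shell
# Pull the SQL Server 2017 Linux image
docker image pull microsoft/mssql-server-linux:2017-latest

# Start a fresh instance (ACCEPT_EULA and SA_PASSWORD are required by the image)
docker run -d --name sql1 \
  -e "ACCEPT_EULA=Y" -e "SA_PASSWORD=YourStrong!Passw0rd" \
  -p 1433:1433 \
  microsoft/mssql-server-linux:2017-latest

# Verify it is up using the sqlcmd tool bundled in the image
docker exec -it sql1 /opt/mssql-tools/bin/sqlcmd \
  -S localhost -U SA -P "YourStrong!Passw0rd" -Q "SELECT @@VERSION"

# Tear it down when you are done
docker rm -f sql1
```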

Database engines are just like any other server-side application: they run in a process that uses CPU and memory, they store state to disk, and they make services available to clients over the network. That all works the same in containers, with the added benefit that you can limit resources, manage state with volume plugins and restrict network access.
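
As a sketch with the docker CLI (the resource limits, volume name and network name here are illustrative, not prescriptive):

```shell
# A user-defined network so only attached app containers can reach the database
docker network create backend

# CPU and memory caps, durable state on a named volume,
# and no ports published outside the backend network
docker run -d --name sql-ci \
  -e "ACCEPT_EULA=Y" -e "SA_PASSWORD=YourStrong!Passw0rd" \
  --cpus 2 --memory 4g \
  -v mssql-data:/var/opt/mssql \
  --network backend \
  microsoft/mssql-server-linux:2017-latest
```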

Many Docker customers are already running highly-available production databases in containers, using technologies like Postgres. Now the portability, security and efficiency you get with Docker EE is available to SQL Server DBAs.

Modernize your database delivery with Docker


Traditional database delivery is difficult to fit into a modern CI/CD pipeline, but Docker makes it easy. You use Microsoft’s SQL Server Docker image and package your own schema on top, using an automated process. Anyone can run any version of the database schema, just by starting a container – they don’t even need to have SQL Server installed on their machine.
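
A minimal sketch of such a Dockerfile (the schema file names and the deploy script are hypothetical stand-ins for your own build process):

```dockerfile
# Build on top of Microsoft's SQL Server image
FROM microsoft/mssql-server-linux:2017-latest

# Bake the versioned schema scripts into the image
COPY schema/ /opt/schema/
COPY deploy-schema.sh /opt/

# On startup, launch SQL Server and apply any pending schema scripts
CMD ["/bin/bash", "/opt/deploy-schema.sh"]
```

Anyone who runs a container from the resulting image gets SQL Server with that exact schema version, with no local install.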

This is the database delivery workflow with Docker:

[Figure sql-ci: the database delivery workflow with Docker]

  1. DBA pushes schema changes to source control
  2. CI process packages the schema into a Docker image based on Microsoft-published SQL Server base images
  3. CI process runs test suites using disposable database containers created from the new image
  4. CD process upgrades the persistent database container in the test environment to the new image
  5. CD process runs a database container to upgrade the production database, applying diff scripts to align the schema to the new image

The whole process of packaging, testing, distributing and upgrading databases can be automated with Docker. You run database containers in development and test environments which are fast, isolated, and have identical schema versions. You can continue using your existing production database, but use the tested Docker image to deploy updates to production.

Support and availability

Docker Enterprise Edition is a supported platform for running SQL Server on Linux in containers in production. SQL Server on Linux is a certified container image, which means you have support from both Microsoft and Docker to resolve any issues.

On Windows Server and Windows 10 you can run SQL Server Express in containers with Docker, to modernize your database delivery process for existing SQL Server deployments, without changing your production infrastructure.
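
A minimal sketch on a Windows host (the image name follows Microsoft’s published Windows Express image at the time of writing; the sa_password value is a placeholder):

```shell
docker run -d -p 1433:1433 -e ACCEPT_EULA=Y -e sa_password=YourStrong!Passw0rd microsoft/mssql-server-windows-express
```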

The new SQL containers will be available for download in Docker Store in October – but you can start testing with the pre-GA containers in Store today. Already there have been over 1 million downloads from Docker Hub of the SQL Server preview for Linux containers.




The post Yes to databases in containers – Microsoft SQL Server available on Docker Store appeared first on Docker Blog.
