I posted about using .NET with Docker last year. With DockerCon being this week, it seemed like a great time to give you an update.
Since my last post, we've enabled a set of Docker workflows, with guidance and samples for development, CI/CD, and production. If you haven't taken a look at Docker and .NET recently, now is a good time. Docker and containers come up more and more in conversations we have with .NET developers. For many people, Docker has become the way to deploy server applications, due to its primary benefits: consistency across environments and a light-weight alternative to virtual machines. The DockerCon keynote included multiple .NET demos showing how you can use Docker for modern applications and for older applications that use traditional architectures.
Kubernetes support has also arrived: you can now run a single-node Kubernetes cluster from the Kubernetes pane in the Docker for Windows settings, and use kubectl commands as well as docker commands.
It has become a lot easier to containerize .NET applications using tools from both Microsoft and Docker.

## Trying out Docker

We maintain samples repositories for both .NET Core and .NET Framework. With just a few commands at the command line, you can try out these sample images.
The easiest sample (and the one supported on the most operating systems) is the .NET Core console app. All you need to do is type the following command:

docker run --rm microsoft/dotnet-samples

There are other samples you can try, both console and ASP.NET.

## How to Approach Using Docker

Docker is flexible, enabling you to use it in lots of different ways. There are three major scenarios to consider when looking at adopting Docker:

- Building source code
- Testing binaries
- Running applications/services in production

You can adopt Docker for all of these roles or just a subset. From what we've seen, most developers start with the production scenario and then adopt more of Docker in their build infrastructure as they find it useful.
This approach makes sense, since the choice to use Docker is usually centered around using it to run applications. On the .NET team, we've been making heavy use of Docker for both building code and testing; the value of a high-fidelity, instant-on computing environment is enormous. There is no need to put off a product investigation on Debian, for example, when you can boot the exact right environment in seconds. The following sections show a mixture of .NET Core and .NET Framework examples for these three scenarios.
## Building container images with built binaries

The primary requirement for running Docker in production is containerizing your application. The simplest way to create an image within existing build infrastructure is to copy build artifacts into an image. The primary value of this model is consistency between environments, like staging and production. A Dockerfile in this style copies build assets from the current directory into a new image based on a runtime image from Docker Hub, and the result can be run with:

docker run --rm app

Note: the --rm option removes the container after it terminates. Preserving containers is only useful when you want to investigate why they behaved a certain (undesired) way.
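A minimal sketch of such a Dockerfile, assuming a .NET Core app that has already been published to an `out` directory with `dotnet publish` (the base image tag and assembly name are illustrative assumptions, not from the original post):

```dockerfile
# Sketch: package binaries that were built outside of Docker.
# Base image tag and assembly name are assumptions.
FROM microsoft/dotnet:2.1-runtime
WORKDIR /app
# Copy the already-published build artifacts into the image.
COPY out/ .
ENTRYPOINT ["dotnet", "dotnetapp.dll"]
```

Built this way, the image changes only when the published output changes, which keeps staging and production images identical.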
## Building container images with source

Docker makes it easy to build the source for an application and produce a container image in one step. The value of building source within a container follows:
- Consistency between build and runtime/production.
- Potentially faster incremental builds than even your own build system, due to Docker layer caching.
- docker build doesn't rely on an external build to function (if you build from source within Docker).

In this model, a Dockerfile copies source files from the current directory into a new image based on the SDK image on Docker Hub, builds the source with NuGet and MSBuild, and then copies the binaries from the build stage into a new image based on the runtime image. The build-stage image is discarded; the selected image name is used only for the image generated from the last stage.

This is an example of where Docker shines. Each command in a Dockerfile creates a distinct layer in your image. If Docker finds that all the inputs for a given layer are unchanged, it doesn't rebuild that layer on subsequent invocations of docker build. The restore section of such a Dockerfile copies MSBuild assets, like project files, and then runs nuget restore; if the MSBuild assets have not changed, the RUN line that performs the restore is skipped. That ends up being a large time savings, and it explains why the Dockerfile is written the way it is.
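To make the layer-caching point concrete, here is a hedged sketch of such a multi-stage Dockerfile for a .NET Framework web app (the image tags, project names, and paths are assumptions):

```dockerfile
# Sketch of the multi-stage pattern described above; all names are illustrative.
FROM microsoft/dotnet-framework:4.7.2-sdk AS build
WORKDIR /app

# Copy only the MSBuild assets first so `nuget restore` gets its own cached
# layer; this RUN is skipped on rebuilds when the project files are unchanged.
COPY *.sln .
COPY aspnetapp/*.csproj ./aspnetapp/
COPY aspnetapp/*.config ./aspnetapp/
RUN nuget restore

# Copy the rest of the source and build.
COPY aspnetapp/ ./aspnetapp/
RUN msbuild /p:Configuration=Release

# Final stage: only the built output lands in the runtime image;
# the build stage above is discarded.
FROM microsoft/aspnet:4.7.2
WORKDIR /inetpub/wwwroot
COPY --from=build /app/aspnetapp/. ./
```

Because the project files are copied before the rest of the source, the restore layer is reused on rebuilds where only code files changed.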
The following commands create a new image called aspnetapp using the Dockerfile above and then run it, assuming they are executed from the directory where the Dockerfile and the source are located:

docker build --pull -t aspnetapp .
docker run --rm -it -p 8000:80 aspnetapp

The -p parameter maps ports on the local host machine to ports in the container. The samples repositories provide more detail on building source with Docker.
## Testing binaries with Docker

The testing scenario showcases the value of Docker, since testing is more valuable when the test environment has high fidelity with target environments. Imagine you support your application on multiple operating systems or operating system versions. You can test your application in each of them within Docker. It is easy to do and incredibly valuable.
Up until now in this post, you've seen Dockerfiles with RUN commands whose logic executes during docker build, with the final result executed via docker run. Running tests via docker build is useful as a means of getting early feedback, primarily as pass/fail results printed to the console/terminal.
This model works OK for testing but doesn't scale well, for two reasons:

- docker build will fail if there are errors, and errors are inherent to testing.
- docker build doesn't allow volume mounting, which is required to collect test logs.

Testing with docker run is a great alternative, since it doesn't suffer from either of these challenges; testing with docker build is only useful if you want your build to fail when tests fail. The instructions in this document show you how to test with docker run. The following Dockerfile, in its normal use, is similar to the Dockerfile for .NET Framework that you saw above. This one, however, includes something of a trick to enable testing.
It includes a testrunner stage that the final image never uses, but that is very useful for testing. For testing, build the image only up to the testrunner stage, which will include all the content that has been built to that point. The resulting image is based on the .NET Core SDK image, which includes all of the .NET Core testing infrastructure. The trick in this Dockerfile is that the testrunner stage presents an alternative ENTRYPOINT, which calls dotnet test to kick off testing. If you run the Dockerfile all the way through (not targeting a specific stage), then this first ENTRYPOINT is replaced by the last one, which is the ENTRYPOINT for the application. The following command creates a new image, called dotnetapp:test, using the Dockerfile above and building only up to and including the testrunner stage, assuming the command is run from the directory where the Dockerfile and the source are located.
docker build --pull --target testrunner -t dotnetapp:test .

In order to collect test logs on your local machine, you need to use volume mounting.
In short, you can project a directory on your machine into the container as the same directory. Volume mounting is a great way to get content in or out of a container. The following command creates a container based on the dotnetapp:test image.
It volume-mounts the local directory C:\app\TestResults as /app/tests/TestResults in the container. The local directory must already exist, and the C drive must be shared with Docker.

docker run --rm -v C:\app\TestResults:/app/tests/TestResults dotnetapp:test

After running the command, you should see a .trx file in the C:\app\TestResults directory. The samples repository shows how to test in a container in more detail.
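Putting the testing pieces together, the testrunner trick described above might look like this sketch for a .NET Core app (stage names, image tags, and paths are assumptions):

```dockerfile
# Sketch of the testrunner pattern; project and image names are illustrative.
FROM microsoft/dotnet:2.1-sdk AS build
WORKDIR /app
COPY dotnetapp/*.csproj ./dotnetapp/
COPY tests/*.csproj ./tests/
RUN dotnet restore dotnetapp && dotnet restore tests
COPY . .
RUN dotnet publish dotnetapp -c Release -o /app/out

# Testing stage: `docker build --target testrunner` stops here, so this
# alternative ENTRYPOINT (dotnet test) is what runs from the image.
FROM build AS testrunner
WORKDIR /app/tests
ENTRYPOINT ["dotnet", "test", "--logger:trx"]

# A full (untargeted) build continues past testrunner, so the application
# ENTRYPOINT below is the one that sticks.
FROM microsoft/dotnet:2.1-runtime AS runtime
WORKDIR /app
COPY --from=build /app/out .
ENTRYPOINT ["dotnet", "dotnetapp.dll"]
```

The same Dockerfile thus serves both the production build and the test run, depending only on the --target passed to docker build.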
The sample also includes instructions for Windows, macOS, and Linux, as well as the Dockerfile described in this section.

## Developing in a Container

The scenarios above are focused on producing or validating a container image. The use of Docker can be moved further upstream, to development. Visual Studio enables development in a container: you can add a Dockerfile to a .NET project, with either Windows or Linux containers.
The experience is nearly seamless. It is hard to tell that you are using Docker at all, as you can see in the following image. You can also develop in a container at the command line. The .NET Core SDK image includes a lot of functionality that you can use without bothering to create a Dockerfile. In fact, you can run, build, or test your application using only the command line.
You can also build and rebuild ASP.NET Core applications within Docker as you edit them on your local machine, from within Visual Studio Code, for example. The following command line hosts an ASP.NET Core application with dotnet watch on macOS or Linux (separate instructions are available for Windows). Every time you edit and save the application on your local machine, it is rebuilt within the container. I haven't tried doing that 1000 times in a row, but you probably can. This scenario relies on volume mounting to project locally resident source code into a running container. As you can see, volume mounting is a powerful alternative to going through the effort of writing a Dockerfile.

docker run --rm -it -p 8000:80 -v /git/aspnetapp:/app/ -w /app/aspnetapp microsoft/dotnet:2.1-sdk dotnet watch run

Similar instructions exist for .NET Core console applications.
## ASP.NET Core and HTTPS

It is important to host web applications with HTTPS. In many cases, you will terminate HTTPS before traffic reaches your ASP.NET Core site. In the case that ASP.NET Core needs to handle HTTPS traffic directly and you are running your site in a container, you need a solution. The model described here is very similar to how you would host your own images with your own certificate. The following commands run the ASP.NET Core sample image with a dev certificate on Windows with Linux containers.
dotnet dev-certs https -ep %USERPROFILE%\.aspnet\https\aspnetapp.pfx -p crypticpassword
dotnet dev-certs https --trust
docker pull microsoft/dotnet-samples:aspnetapp
docker run --rm -it -p 8000:80 -p 8001:443 -e ASPNETCORE_URLS="https://+;http://+" -e ASPNETCORE_HTTPS_PORT=8001 -e ASPNETCORE_Kestrel__Certificates__Default__Password="crypticpassword" -e ASPNETCORE_Kestrel__Certificates__Default__Path=/https/aspnetapp.pfx -v %USERPROFILE%\.aspnet\https:/https/ microsoft/dotnet-samples:aspnetapp

Variants of these instructions are available for Windows, macOS, and Linux.

## Closing

You can probably see that we're much farther along in our approach to using .NET and Docker together than when we started. We're far from done with everything one can imagine in the container space, but we have provided a much more complete foundation for you to use as you adopt Docker.
Tell us how you are using Docker and the improvements you would like to see, either with guidance and samples or with .NET itself. We'll continue to make improvements to make the container experience better.
This week Microsoft released Windows Server 2016, along with its ultra-light headless deployment option, Nano Server. The Nano Server images are many times smaller than what we have come to expect from a Windows server image.
A Nano box is just a few hundred megabytes. These machines also boot up VERY quickly and require fewer updates and reboots.
Earlier this year, I wrote about how to run the Chef client on Windows Nano Server. Things have come a long way since then, and this post serves as an update.
Now that the RTM Nano bits are out, we will look at:

- How to get and run a Nano server
- How to install the Chef client on Windows Nano
- How to use Test-Kitchen and Inspec to test your Windows Nano Server cookbooks
The cookbook I'll be demonstrating here highlights some of the new Windows container features in Nano Server. It installs Docker and lets you use your Nano server as a container host, where you can run, manipulate, and inspect Windows containers from any Windows client.

## How to Get Windows Nano Server

You have a few options here. One thing to understand about Windows Nano is that there is no separate Windows Nano ISO. Deploying a Nano server involves extracting a WIM and some PowerShell scripts from a Windows Server 2016 ISO. You can then use those scripts to generate a .vhd file from the WIM, or you can use the WIM to deploy Nano to a bare-metal server. There are some shortcuts available if you don't want to mess with the scripts and prefer a more instantly gratifying experience. Let's explore these scenarios.
### Using New-NanoServerImage to Create Your Nano Image

If you mount the Server 2016 ISO (free evaluation versions are available), you will find a 'NanoServer\NanoServerImageGenerator' folder containing a NanoServerImageGenerator PowerShell module. This module's core function is New-NanoServerImage, which can generate a Hyper-V capable image file of a Containers/DSC/IIS ready Nano server. You can read more about the details and other options of this function in the module's documentation.

### Direct EXE/VHD Download

As I briefly noted above, you can download evaluation copies of Windows Server 2016. Instead of downloading a full multi-gigabyte Windows ISO, you can choose the exe/vhd download option. This downloads an exe file that extracts a pre-made vhd, from which you can create a new Hyper-V VM. With that VM, just log in to the Nano console to set the administrator password and you are good to go.
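Returning to the New-NanoServerImage option, the invocation described earlier might look like this sketch (the drive letters, paths, password, and package list are all assumptions):

```powershell
# Sketch: generate a Hyper-V ready Nano VHD with Containers, DSC, and IIS
# from a Server 2016 ISO mounted at D:. All parameter values are illustrative.
Import-Module D:\NanoServer\NanoServerImageGenerator\NanoServerImageGenerator.psd1
New-NanoServerImage -DeploymentType Guest -Edition Standard `
  -MediaPath D:\ -TargetPath C:\nano\nano.vhd -ComputerName nano `
  -AdministratorPassword (ConvertTo-SecureString 'P@ssw0rd!' -AsPlainText -Force) `
  -Containers `
  -Packages Microsoft-NanoServer-DSC-Package, Microsoft-NanoServer-IIS-Package
```

The resulting .vhd can be attached to a new Hyper-V VM, as described above.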
### Vagrant

This is my installation method of choice. I use a template to automate the download of the 2016 Server ISO, the generation of the image file, and finally the packaging of the image for both Hyper-V and VirtualBox Vagrant providers. I keep the image publicly available on Atlas. The advantage of these images is that they are fully patched (key for Docker to work with Windows containers), work with VirtualBox, and enable file-sharing ports so you can map a drive to Nano.
### Vagrant Nano Bug

One challenge in working with Nano Server and cross-platform automation tools such as Vagrant is that Nano exposes a PowerShell.exe with no -EncodedCommand argument, which many cross-platform WinRM libraries leverage to invoke remote PowerShell on a Windows box. I rewrote the WinRM ruby gem to use PSRP (PowerShell Remoting Protocol) to talk PowerShell and allow it to interact with Nano Server. This has been integrated with all the Chef-based tools, and I will be porting it to Vagrant soon. In the meantime, a 'vagrant up' will hang after creating the VM. Know that the VM is in fact fully functional and connectable.
I'll mention a hack you can apply to get Test-Kitchen's Vagrant driver working later in this post.

## Connecting to Windows Nano Server

Once you have a Nano server VM up and running, you will probably want to actually use it. Note: there is no RDP available here. You can connect to Nano and run commands either using native PowerShell remoting from a Windows box, or using 'knife winrm' from Windows, Mac, or Linux. Note that knife winrm expects cmd.exe-style commands by default; use '--winrm-shell powershell' to send PowerShell commands.
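For the native option, a PowerShell remoting session from a Windows box might look like this sketch (the IP address is an assumption; it must match your Nano VM):

```powershell
# Sketch: connect to a Nano server with native PowerShell remoting.
# The IP address is illustrative; credentials are prompted for.
$ip = "192.168.137.25"
# Trust the Nano host for workgroup (non-domain) WinRM connections.
Set-Item WSMan:\localhost\Client\TrustedHosts $ip -Force
Enter-PSSession -ComputerName $ip -Credential administrator
```

Once the session is established, everything you type runs on the Nano box.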
## Installing Chef on Windows Nano Server

Quick tip: do not try to install a Chef client MSI. That will not work. Windows Nano Server jettisons many of the APIs and subsystems we have grown accustomed to in order to achieve a much more compact and cloud-friendly footprint. This includes the removal of the MSI subsystem. Nano Server does support the newer appx packaging system, currently best known as the format for packaging Windows Store apps. With Nano Server, new extensions have been added to the appx model to support what is now known as 'Windows Server Applications' (aka WSAs). At Chef, we have added the creation of appx packages to our build pipelines, but these are not yet exposed by our Artifactory- and Bintray-fed Omnitruck delivery mechanism. That will happen, but in the meantime I have uploaded one to a public AWS S3 bucket, where you can grab the current client (as of this post).
To install this .appx file (note: if you are using Test-Kitchen, this is all done automatically for you):

1. Either copy the .appx file via a mapped drive or download it from the Nano server.
2. Run 'Add-AppxPackage -Path <path to the .appx file>'.
3. Copy the appx install to c:\opscode\chef.

The last item is a bit unfortunate, but temporary. Microsoft has confirmed this to be an issue with running simple zipped appx applications: the ACLs on the appx install root are seriously restricted, and you cannot invoke the Chef client from that location. Until this is fixed, you need to copy the files from the appx location to somewhere else. We'll just copy them to the well-known default Chef location on Windows, c:\opscode\chef.

## Running Chef

With the Chef client installed, it's easiest to work with Chef when it's on your path, so add its bin directory to your PATH.

## Not All Resources May Work

I have to include this disclaimer.
Nano is a very different animal from our familiar 2012 R2. I am confident that the newly launched Windows Server 2016 should work just as 2012 R2 does today, but Nano has had APIs stripped away that we have previously leveraged heavily in Chef. One example is Get-WmiObject. This cmdlet is not available on Nano Server, so any usage that depends on it will fail. Most of the crucial areas surrounding installing and invoking Chef are patched and tested. However, there may be resources that either have not yet been patched or will simply never work.
The windows_package resource is a good example. It is used to install MSIs and EXE installers, which are not supported on Nano.

## Test-Kitchen and Inspec on Nano

The work to leverage PSRP allows our remote-execution ecosystem tools to access Windows Nano Server. We have also overhauled our WinRM gem to use .NET Core APIs (the .NET runtime supported on Nano) for the Chef provisioners. With those changes in place, Test-Kitchen can install and run Chef, and Inspec can test resources on your Nano instances. There are a few things to consider when using Test-Kitchen on Windows Nano.

### Specifying the Chef appx Installer

As I mentioned above, the Omnitruck system is not yet serving appx packages to Nano. However, you can tell Test-Kitchen in your .kitchen.yml to use a specific .msi or .appx installer.
Here is some example YAML for running Test-Kitchen with Nano. Inspec requires no configuration changes.
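A hedged sketch of such a .kitchen.yml follows; every value here is an illustrative assumption (box name, installer URL, cookbook name), and the installer attribute name should be checked against your Test-Kitchen version:

```yaml
# Sketch of a .kitchen.yml for a Windows Nano instance; values are illustrative.
driver:
  name: vagrant

provisioner:
  name: chef_zero
  # Point the provisioner at an .appx (or .msi) build of the Chef client,
  # since Omnitruck does not yet serve appx packages for Nano.
  # (Attribute name assumed; verify against your Test-Kitchen version.)
  install_msi_url: https://example.com/chef-client-nano.appx

platforms:
  - name: windows-nano
    driver:
      box: my-org/windows-nano   # hypothetical Vagrant box name

suites:
  - name: default
    run_list:
      - recipe[docker_host::default]   # hypothetical cookbook
```

With this in place, kitchen converge installs the appx client and runs the cookbook on the Nano instance.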
### Working Around Vagrant Hangs

Until I refactor Vagrant's winrm communicator, it cannot talk PowerShell with Windows Nano. Because Test-Kitchen and Inspec talk to Nano directly via the newly PSRP-supporting WinRM ruby gem, they make Vagrant's limitation nearly unnoticeable. However, the RTM Nano bits exacerbated the Vagrant bug, causing it to hang when it does its initial winrm auth check. This can, unfortunately, hang your kitchen create. You can work around this by applying a simple hack to your Vagrant install: update C:\HashiCorp\Vagrant\embedded\gems\gems\vagrant-1.8.5\plugins\communicators\winrm\communicator.rb (adjusting the Vagrant gem version number as necessary) to bypass the initial auth check. See the driver documentation for details regarding Azure authentication configuration. As of the date of this post, RTM images are not yet available there, but that's probably going to change very soon. In the meantime, use TP5.

## Using Chef to Configure a Docker Host

One of the exciting new features of Windows Server 2016 and Nano Server is their ability to host Windows containers.
They can do this using the same Docker API we are familiar with from Linux containers. You could walk through the manual steps for setting this up, or you could just have Chef do it for you.
### Updating the Nano Server

Note that in order for this to work on RTM Nano images, you must install the latest Windows updates. My Vagrant boxes come fully patched and ready, but if you are wondering how to install updates on a Nano server, here is how.
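One documented approach uses the Windows Update CIM methods, since the familiar update UIs and cmdlets are absent on Nano; a sketch, run from inside a remote PowerShell session to the Nano box:

```powershell
# Sketch: apply all applicable updates on Nano via the Windows Update
# CIM class, then reboot. Run within a remote session to the Nano server.
$sess = New-CimInstance -Namespace root/Microsoft/Windows/WindowsUpdate `
                        -ClassName MSFT_WUOperationsSession
Invoke-CimMethod -InputObject $sess -MethodName ApplyApplicableUpdates
Restart-Computer
```

After the reboot, the Windows container features behave as expected on RTM images.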
```
C:\dev\dockernanohost [master]> kitchen verify
-----> Starting Kitchen (v1.13.0)
-----> Creating...
       Bringing machine 'default' up with 'hyperv' provider...
       default: Verifying Hyper-V is enabled...
       default: Starting the machine...
       default: Waiting for the machine to report its IP address...
       default: Timeout: 240 seconds
       default: IP: 192.168.137.25
       default: Waiting for machine to boot. This may take a few minutes...
       default: WinRM address: 192.168.1...
       default: WinRM username: vagrant
       default: WinRM execution_time_limit: PT2H
       default: WinRM transport: negotiate
       default: Machine booted and ready!
       default: Machine not provisioned because `--no-provision` is specified.
       WinRM Established
       Vagrant instance created.
       Finished creating (1m15.86s).
-----> Converging...
```