Windows Docker Base Images

Are you new to Docker Windows images? Are you currently working in a Windows shop and curious to learn about Docker builds for container images? You have come to the right place. The best way to learn something new is by doing it, so let’s dive in with the docker build and docker build -t commands!

In this article, you are going to learn how to create your first Windows Docker image from a Dockerfile using the docker build command.

Let’s get started!

Understanding Docker Container Images

For years, the only way to test or perform development on multiple operating systems (OS) was to have several dedicated physical or virtual machines imaged with the OS version of your choice. This methodology required more hardware and overhead to provision new machines for each software and OS specification.

However, these days the usage of Docker container images has grown, partly due to the popularity of micro-service architecture. In response to the rise in Docker’s popularity, Microsoft has started to publicly support Docker images for several flagship products on their Docker Hub page. They have even added native support for Windows container images as a product feature in Windows 10 and Windows Server 2016!

A Docker image is run in a container by the Docker Engine. Docker images have many benefits, such as portability (they can be used across multiple environments and platforms), customizability, and high scalability. As you can see below, unlike traditional virtual machines, the Docker Engine runs on a layer between the host OS kernel and the isolated application services being containerized.

Understanding Docker Build and Images

The docker build command can be leveraged to automate container image creation, adopt a container-as-code DevOps practice, and integrate containerization into the development cycle of your projects. Dockerfiles are simply text files that contain build instructions used by Docker to create a new container image that is based on an existing image.

The user can specify the base image and the list of commands to run when a container image is deployed or started for the first time. In this article, you will learn how to create a Windows-based Docker image from a Dockerfile using a Windows container.

This process has several benefits over using a pre-built container image:

  1. You are able to rebuild a container image for several versions of Windows – which is great for testing code changes on several platforms.
  2. You will have more control over what is installed in the container. This will allow you to keep your container size to a minimum.
  3. For security reasons, you might want to check the container for vulnerabilities and apply security hardening to the base image.

Prerequisites/Requirements

This article is a walkthrough on how to build a Docker image using a Dockerfile. If you’d like to follow along, ensure that you have the following prerequisites in place.

  • Docker for Windows installed. I’ll be using the Docker Community Edition (CE) version 2.1.0.4 in my environment.
  • Internet access is needed for downloading the Docker images
  • Windows 10+ Operating System (version 1709 is being used for this tutorial)
  • Nested virtualization enabled
  • 5 GB of free disk space on your local machine
  • PowerShell 5.0+
  • This tutorial uses the Visual Studio Code IDE. However, feel free to use whatever IDE you prefer.

Note: Be sure to enable the Windows containers configuration option when installing Docker.

Getting Prepared

You’ll first need a folder to store all of the Docker images and containers you’ll be building from those images. To do so, open a PowerShell or cmd terminal (you’ll be using PowerShell throughout this article) and create a new directory called C:\Containers.

Once the folder is created, change to that directory. This sets the console’s current working directory to C:\Containers so that all downloads default to this directory.

PS51> mkdir C:\Containers
PS51> cd C:\Containers

In this article, you’ll get a head start: most of the files needed to work through this project are already available. With the folder created, perform a Git pull to copy the files needed for this article from the TechSnips GitHub repository to the C:\Containers folder. Once complete, check to make sure that the C:\Containers folder looks like the screenshot below.
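
If you prefer to run that copy from the command line, a clone into the (still empty) C:\Containers folder looks something like the following. The repository name here is only a placeholder; use the link in this article for the real repository.

PS51> cd C:\Containers
PS51> git clone https://github.com/TechSnips/<repository-name> .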

Tutorial files

Downloading the IIS Windows Docker Image

The first task to perform is to download a “template” or base image. You’ll be building your own Docker image later but first, you need an image to get started with. You’ll be downloading the latest IIS and Windows Server Core Images that are required for this tutorial. The updated list of images can be found on the official Microsoft Docker hub image page.

Reviewing the Current Docker Base Images

Before downloading the image from the image repository, let’s first review the Docker base images that you currently have on your local system. To do so, run a PowerShell console as Administrator and then type docker images. This command returns all images on your local system.

As you can see below, the list of images is initially empty.

Listing available Docker images

Downloading the Base Image

Now it’s time to download the base IIS image from Docker Hub. To do so, run docker pull as shown below. This process can take some time to complete depending on your internet speeds.

PS51> docker pull mcr.microsoft.com/windows/servercore/iis
Downloading an image from the Docker Hub
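
If you need an image for a particular Windows release rather than the latest, you can pull a specific tag instead. The tag below is only an example; check the Microsoft Docker Hub page for the current tag list.

PS51> docker pull mcr.microsoft.com/windows/servercore/iis:windowsservercore-ltsc2019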

Now run docker images and you should have the latest Microsoft Windows Core IIS image available for this tutorial.

Viewing available Docker images

Inspecting the Dockerfile

In an earlier step, you downloaded an existing Dockerfile for this tutorial. Let’s now take a look at exactly what it contains.

Open the C:\Containers\Container1\Dockerfile file in your favorite editor. The contents of this Dockerfile are used to define how the container image will be configured at build time.

You can see an explanation of what each piece of this file does in the in-line comments.

# Specifies that the latest microsoft/iis image will be used as the base image
# Used to specify which base container image will be used by the build process.

# Notice that the naming convention is "**owner/application name : tag name**"
# (shown as microsoft/iis:latest); so in our case the owner of the image is
# Microsoft and the application is IIS with the "latest" tag name being used
# to specify that you will pull the most recent image version available.
FROM microsoft/iis:latest

# Copies the contents of the wwwroot folder to the inetpub/wwwroot folder in the new container image
# Used to specify that you want to copy the wwwroot folder to the IIS inetpub wwwroot
# folder in the container. You don't have to specify the full path to your local
# files because Docker already has logic built in to reference files and folders
# relative to the Dockerfile location on your system. Also, note that Docker
# only recognizes forward slashes for file paths here, even though this is a
# Windows-based container rather than Linux.
COPY wwwroot c:/inetpub/wwwroot

# Run some PowerShell commands within the new container to set up the image

# Run the PowerShell commands to remove the default IIS files and create a new
# application pool called TestPool
RUN powershell Remove-Item c:/inetpub/wwwroot/iisstart.htm -force
RUN powershell Remove-Item c:/inetpub/wwwroot/iisstart.png -force
RUN powershell Import-Module WebAdministration
RUN powershell New-WebAppPool -Name 'TestPool'

# Exposes port 80 on the new container image
# Used to open TCP port 80 for allowing an http connection to the website.
# However, this line is commented out, because the IIS container has this port
# already open by default.
#EXPOSE 80

# Sets the main command of the container image
# This tells the image to run a service monitor for the w3svc service.
# When this is specified, the container will automatically stop running
# if the w3svc service stops. This line is commented out because the
# IIS container already has this entrypoint in place by default.
#ENTRYPOINT ["C:\\ServiceMonitor.exe", "w3svc"]

Building a New Docker Image

You’ve got the Dockerfile ready to go and a base IIS image downloaded. Now it’s time to build your new Docker image using the Dockerfile.

To build a new image, use the docker build command with the -t (“tag”) option. This command creates the image. For this article, you can see below that the -t option gives your new image a friendly tag name; you also reference the Dockerfile by specifying the folder path where it resides.

Below you can see an example of ensuring the console is in the C:\Containers directory and then building a new image from the Dockerfile in the C:\Containers\Container1 directory.

PS51> cd C:\Containers
PS51> docker build -t container1 .\Container1

Once started, you can see the progress of the command as it traverses each instruction in the Dockerfile line by line:

Building a new Docker image

Once done, you should now have a new Docker image!

Now run the docker images command to view the images that are available. You can see below an example of the container1 image created.

Viewing available Docker images

Note: The docker build --help command displays detailed information on the options available for the docker build command.

Running the Docker Container

At this point, you should have a new image created. It’s time to spin up a container using that image. To bring up a new container, use the docker run command.

The docker run command will bring up a new Docker container based on the container1 image that you created earlier. You can see an example of this below.

Notice that the -d parameter is used. This tells the Docker runtime to start the container in detached mode; the container then exits when the root process used to run it exits.

When docker run completes, it returns the ID of the container created. The example below is capturing this ID into a $containerID variable so we can easily reference it later.

PS51> $containerID = docker run -d container1
PS51> $containerID
Running a Docker container

Once the container is brought up, now run the docker ps command. This command allows you to see which containers are currently running and which image each one is using. Notice below that the running container has automatically been assigned a nickname (busy_haibt in this case). This nickname is sometimes used instead of the container ID to manage the container.
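
For reference, the command itself is simply:

PS51> docker ps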

Listing running Docker containers

Running Code Inside a Docker Container

A new container is now running from the image you just created. Let’s now start actually using that container to run code. Running code inside of a Docker container is done using the docker exec command.

In this example, run docker exec to view PowerShell output for the Get-ChildItem command in the container using the command syntax below. This will verify that the instructions in the Dockerfile to remove the default IIS files succeeded.

PS51> docker exec $containerID powershell Get-ChildItem c:\inetpub\wwwroot

You can see below that the only file that exists is index.html which means the default files were removed.

Running PowerShell commands in a Docker container

Now run the ipconfig command in the container to get the container's local IP address so that you can try to connect to the IIS website.

PS51> docker exec $containerID ipconfig

You can see below that ipconfig was run in the container just as if it were running on your local computer and has returned all of the IP information.

Running ipconfig in a Docker container

Inspecting the IIS Website

Now it’s time to reveal the fruits of your labor! It’s time to see if the IIS server running in the Docker container is properly serving up the index.html page.

Open a browser and paste the IPv4 address found via ipconfig into the address bar. If all is well, you should see a Hello World!! message like the one below.

IIS webpage running in a Docker container

Reviewing Docker History

One useful command to know when working with Docker containers is docker history. Although not necessarily related to creating an image or container itself, the docker history command allows you to review the changes made to a container image.

PS51> docker history container1

You can see below that docker history returns all of the Dockerfile and PowerShell activity used to build the container1 image you’ve been working with.

Inspecting container changes with docker history

Cleaning up the Running Docker Images

The steps below stop and remove all of the containers running on your machine. This will free up disk space and system resources.

Run the docker ps command to view a list of the containers running on your system:

Viewing available Docker containers

Now stop the running containers using the docker stop command:

PS51> docker stop <image nick name: busy_haibt in my case>
PS51> docker stop <image nick name: unruffled_driscoll in my case>
Stopping Docker containers
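
If you have several containers to stop, a PowerShell convenience is to pass every running container ID to docker stop in one go (the subexpression expands to the list of IDs):

PS51> docker stop (docker ps -q)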

Finally, you can permanently remove the stopped containers (along with other unused Docker resources, such as dangling images and unused networks) using the docker system prune command.

PS51> docker system prune
Removing Docker images

Further Reading

  • Creating Your First Docker Windows Server Container
  • How to Manage Docker Volumes on Windows

You can run any application in Docker as long as it can be installed and executed unattended, and the base operating system supports the app. Windows Server Core runs in Docker which means you can run pretty much any server or console application in Docker.

TL;DR

Update! For a full walkthrough on Dockerizing Windows apps, check out my book Docker on Windows and my Pluralsight course Modernizing .NET Apps with Docker.

Check out these examples:

  • openjdk:windowsservercore — Docker image with the Java runtime on Windows Server Core, by Docker Captain Stefan Scherer
  • elasticsearch:nanoserver — Docker image with a Java app on Nano Server
  • kibana:windowsservercore — Docker image with a Node.js app on Windows Server Core
  • nats:nanoserver — Docker image with a Go app on Nano Server
  • nerd-dinner — Docker image with an ASP.NET app on Windows Server Core
  • dotnetapp — Docker image with a .NET Core app on Nano Server

The 5 Steps

Lately I’ve been Dockerizing a variety of Windows apps — from legacy .NET 2.0 WebForms apps to Java, .NET Core, Go and Node.js. Packaging Windows apps as Docker images to run in containers is straightforward — here’s the 5-step guide.

1. Choose Your Base Image

Docker images for Windows apps need to be based on microsoft/nanoserver or microsoft/windowsservercore, or on another image based on one of those.

Which you use will depend on the application platform, runtime, and installation requirements. For any of the following you need Windows Server Core:

  • .NET Framework apps
  • MSI installers for apps or dependencies
  • 32-bit runtime support

For anything else, you should be able to use Nano Server. I’ve successfully used Nano Server as the base image for Go, Java and Node.js apps.

Nano Server is preferred because it is so drastically slimmed down. It’s easier to distribute, has a smaller attack surface, starts more quickly, and runs more leanly.

Being so slimmed down can cause problems though — certain Windows APIs just aren’t present in Nano Server, so while your app may build into a Docker image, it may not run correctly. You’ll only find that out by testing, but if you do find problems you can just switch to using Server Core.

Unless you know you need Server Core, you should start with Nano Server. Begin by running an interactive container with docker run -it --rm microsoft/nanoserver powershell and set up your app manually. If it all works, put the commands you ran into a Dockerfile. If something fails, try again with Server Core.
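
A scratch session for that approach might look like the sketch below; the download URL and application commands are placeholders for whatever your own app needs.

docker run -it --rm microsoft/nanoserver powershell

# inside the container, try the setup steps by hand, for example:
Invoke-WebRequest -OutFile app.zip https://example.com/app.zip -UseBasicParsing
Expand-Archive app.zip -DestinationPath C:\app
C:\app\app.exe

# if it all works, transcribe the same commands into RUN instructions in a Dockerfile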

Derived Images

You don’t have to use a base Windows image for your app. There are a growing number of images on Docker Hub which package app frameworks on top of Windows.

They are a good option if they get you started with the dependencies you need. These all come in Server Core and Nano Server variants:

  • microsoft/iis — basic Windows with IIS installed
  • microsoft/aspnet — ASP.NET installed on top of IIS
  • microsoft/aspnet:3.5 — .NET 3.5 installed and ASP.NET set up
  • openjdk — OpenJDK Java runtime installed
  • golang — Go runtime and SDK installed
  • microsoft/dotnet — .NET runtime and SDK installed.

A note of caution about derived images. When you have a Windows app running in a Docker container, you don’t connect to it and run Windows Update to apply security patches. Instead, you build a new image with the latest patches and replace your running container. To support that, Microsoft release regular updates to the base images on Docker Hub, tagging them with a full version number (10.0.14393.693 is the current version).

Base image updates usually happen monthly, so the latest Windows Server Core and Nano Server images have all the latest security patches applied. If you build your images from the Windows base image, you just need to rebuild to get the latest updates. If you use a derived image, you have a dependency on the image owner to update their image, before you can update yours.
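
One way to pick up the newest base layer when you rebuild is the --pull flag, which forces Docker to check for an updated base image before building (the image name here is illustrative):

docker build --pull -t my-app .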

If you use a derived image, make sure it has the same release cadence as the base images. Microsoft’s images are usually updated at the same time as the Windows image, but official images may not be.

Alternatively, use the Dockerfile from a derived image to make your own “golden” image. You’ll have to manage the updates for that image, but you will control the timescales. (And you can send in a PR for the official image if you get there first).

2. Install Dependencies

You’ll need to understand your application’s requirements, so you can set up all the dependencies in the image. Both Nano Server and Windows Server Core have PowerShell set up, so you can install any software you need using PowerShell cmdlets.

Remember that the Dockerfile will be the ultimate source of truth for how to deploy and run your application. It’s worth spending time on your Dockerfile so your Docker image is:

  • Repeatable. You should be able to rebuild the image at any time in the future and get exactly the same output. You should specify exact version numbers when you install software in the image.
  • Secure. Software installation is completely automated, so you should make sure you trust any packages you install. If you download files as part of your install, you can capture the checksum in the Dockerfile and make sure you verify the file after download.
  • Minimal. The Docker image you build for your app should be as small as possible, so it’s fast to distribute and has a small surface area. Don’t install anything more than you need, and clean up any installations as you go.

Adding Windows Features

Windows features can be installed with Add-WindowsFeature. If you want to see what features are available for an image, start an interactive container with docker run -it --rm microsoft/windowsservercore powershell and run Get-WindowsFeature.
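
For example, a quick interactive check (this is exploration, not something you put in the Dockerfile):

docker run -it --rm microsoft/windowsservercore powershell

# inside the container, list the features that are already installed:
Get-WindowsFeature | Where-Object Installed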

On Server Core you’ll see that .NET 4.6 is already installed, so you don’t need to add features to run .NET Framework applications.

.NET is backwards-compatible, so you can use the installed .NET 4.6 to run any .NET application, back to .NET 2.0. In theory .NET 1.x apps can run too. I haven’t tried that.

If you’re running an ASP.NET web app but you want to use the base Windows image and control all your dependencies, you can add the Web Server and ASP.NET features:

RUN Add-WindowsFeature Web-server, NET-Framework-45-ASPNET, Web-Asp-Net45

Downloading Files

There’s a standard pattern for installing dependencies from the Internet — here’s a simple example for downloading Node.js into your Docker image:

ENV NODE_VERSION="6.9.4" `
    NODE_SHA256="d546418b58ee6e9fefe3a2cf17cd735ef0c7ddb51605aaed8807d0833beccbf6"

WORKDIR C:/node

RUN Invoke-WebRequest -OutFile node.exe "https://nodejs.org/dist/v$($env:NODE_VERSION)/win-x64/node.exe" -UseBasicParsing; `
    if ((Get-FileHash node.exe -Algorithm sha256).Hash -ne $env:NODE_SHA256) {exit 1} ;

The version of Node to download and the expected SHA-256 checksum are captured as environment variables with the ENV instruction. That makes it easy to upgrade Node in the future — just change the values in the Dockerfile and rebuild. It also makes it easy to see what version is present in a running container: you can just check the environment variable.

The download and hash check is done in a single RUN instruction, using Invoke-WebRequest to download the file and then Get-FileHash to verify the checksum. If the hashes don’t match, the build fails.

After these instructions run, your image has the Node.js runtime in a known location — C:\node\node.exe. It’s a known version of Node, verified from a trusted download source.
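
To confirm which version a running container has, you can read that environment variable from a PowerShell prompt on the host; the container ID is a placeholder:

docker exec <container-id> powershell 'Write-Output $env:NODE_VERSION'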

Expanding Archives

For dependencies that come packaged, you’ll need to install them as part of the RUN instruction. Here’s an example for Elasticsearch which downloads and uncompresses a ZIP file:

ENV ES_VERSION="5.2.0" `
    ES_SHA1="243cce802055a06e810fc1939d9f8b22ee68d227" `
    ES_HOME="c:\elasticsearch"

RUN Invoke-WebRequest -outfile elasticsearch.zip "https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-$($env:ES_VERSION).zip" -UseBasicParsing; `
    if ((Get-FileHash elasticsearch.zip -Algorithm sha1).Hash -ne $env:ES_SHA1) {exit 1} ; `
    Expand-Archive elasticsearch.zip -DestinationPath C:\ ; `
    Move-Item c:/elasticsearch-$($env:ES_VERSION) 'c:\elasticsearch'; `
    Remove-Item elasticsearch.zip

It’s the same pattern as before, capturing the checksum, downloading the file and checking the hash. In this case, if the hash is good the file is uncompressed with Expand-Archive, moved to a known location and the Zip file is deleted.

Don’t be tempted to keep the Zip file in the image, “in case you need it”. You won’t need it — if there’s a problem with the image you’ll build a new one. And it’s important to remove the package in the same RUN command, so the Zip file is downloaded, expanded and deleted in a single image layer.

It may take several iterations to build your image. While you’re working on it, it’s a good idea to store any downloads locally and add them to the image with COPY. That saves you downloading large files every time. When you have your app working, replace the COPY with the proper download-verify-delete RUN pattern.
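
A sketch of that temporary shortcut, assuming you have already downloaded node.exe next to your Dockerfile:

# Development-only shortcut: copy a local download instead of fetching it on every build.
# Swap this back to the download-verify-delete RUN pattern before publishing the image.
COPY node.exe C:/node/node.exe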

Installing MSIs

You can download and run MSIs using the same approach. Be aware that not all MSIs will be built to support unattended installation. A well-built MSI will support command-line switches for any options available in the UI, but that isn’t always the case.

If you can install the app from an MSI you’ll also need to ensure that the install completed before you move on to the next Dockerfile instruction — some MSIs continue to run in the background. This example from Stefan Scherer’s iisnode Dockerfile uses Start-Process ... -Wait to run the MSI:

RUN Write-Host 'Downloading iisnode' ; \
    $MsiFile = $env:Temp + '\iisnode.msi' ; \
    (New-Object Net.WebClient).DownloadFile('https://github.com/tjanczuk/iisnode/releases/download/v0.2.21/iisnode-full-v0.2.21-x64.msi', $MsiFile) ; \
    Write-Host 'Installing iisnode' ; \
    Start-Process msiexec.exe -ArgumentList '/i', $MsiFile, '/quiet', '/norestart' -NoNewWindow -Wait

3. Deploy the Application

Packaging your own app will be a simplified version of step 2. If you already have a build process which generates an unattended-friendly MSI, you can copy it from the local machine into the image and install it with msiexec:

COPY UpgradeSample-1.0.0.0.msi /

RUN msiexec /i c:\UpgradeSample-1.0.0.0.msi RELEASENAME=2017.02 /qn

This example is from the Modernize ASP.NET Apps — Ops Lab from Docker Labs on GitHub. The MSI supports app configuration with the RELEASENAME option, and it runs unattended with the /qn flag.

With MSIs and other packaged deployment options (like Web Deploy) you need to choose between using what you currently have, or changing your build output to something more Docker friendly.

Web Deploy needs an agent installed into the image which adds an unnecessary piece of software. MSIs don’t need an agent, but they’re opaque, so it’s not clear what’s happening when the app gets installed. The Dockerfile isn’t an explicit deployment guide if some of the steps are hidden.

An xcopy deployment approach is better, where you package the application and its dependencies into a folder and copy that folder into the image. Your image will only run a single app, so there won’t be any dependency clashes.

This example copies an ASP.NET Web app folder into the image, and configures it with IIS using PowerShell:

RUN New-Item -Path 'C:\web-app' -Type Directory; `
    New-WebApplication -Name UpgradeSample -Site 'Default Web Site' -PhysicalPath 'C:\web-app'

COPY UpgradeSample.Web /web-app

If you’re looking at changing an existing build process to produce your app package, you should think about building your app in Docker too. Consolidating the build in a multi-stage Dockerfile means you can build your app anywhere without needing to install .NET or Visual Studio.

See Dockerizing .NET Apps with Microsoft’s Build Images on Docker Hub.
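
As a rough sketch of the idea (the image tags, solution name and output path below are assumptions, not taken from the lab):

# Stage 1: compile the app in an SDK image, so the build machine needs nothing but Docker
FROM mcr.microsoft.com/dotnet/framework/sdk:4.8 AS build
WORKDIR /src
COPY . .
RUN msbuild UpgradeSample.sln /p:Configuration=Release /p:OutputPath=c:/out

# Stage 2: copy only the compiled output into the runtime image
FROM mcr.microsoft.com/dotnet/framework/aspnet:4.8
COPY --from=build /out /inetpub/wwwroot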

4. Configure the Entrypoint

When you run a container from an image, Docker starts the process specified in the CMD or ENTRYPOINT instruction in the Dockerfile.

Modern app frameworks like .NET Core, Node and Go run as console apps — even for Web applications. That’s easy to set up in the Dockerfile. This is how to run the open source Docker Registry — which is a Go application — inside a container:

CMD ["registry", "serve", "config.yml"]

Here registry is the name of the executable, and the other values are passed as options to the exe.

ENTRYPOINT and CMD work differently and can be used in conjunction. See how CMD and ENTRYPOINT interact to learn how to use them effectively.
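
A small sketch of the two used together, based on the registry example above:

# ENTRYPOINT fixes the executable; CMD supplies default arguments
# that can be overridden on the docker run command line
ENTRYPOINT ["registry"]
CMD ["serve", "config.yml"]

With this split, docker run <image> serve other-config.yml replaces the default arguments while keeping the registry entrypoint.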

Starting a single process is the ideal way to run apps in Docker. The engine monitors the process running in the container, so if it stops Docker can raise an error. If it’s also a console app, then log entries written by the app are collected by Docker and can be viewed with docker logs.

For .NET web apps running in IIS, you need to take a different approach. The actual process serving your app is w3wp.exe, but that’s managed by the IIS Windows service, which is running in the background.

IIS will keep your web app running, but Docker needs a process to start and monitor. In Microsoft’s IIS image they use a tool called ServiceMonitor.exe as the entrypoint. That tool continually checks a Windows service is running, so if IIS does fail the monitor process raises the failure to Docker.

Alternatively, you could run a PowerShell startup script to monitor IIS and add extra functionality — like tailing the IIS log files so they get exposed to Docker.
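
A rough sketch of such a startup script is below; the log directory is an assumption and depends on how IIS logging is configured in your image. You would COPY the script into the image and set it as the ENTRYPOINT.

# start.ps1 - start IIS, then stream the newest IIS log file so entries show up in docker logs
Start-Service W3SVC

$logDir = 'C:\inetpub\logs\LogFiles\W3SVC1'
# wait for the first log file to appear, then tail it (this call blocks, which keeps the container alive)
while (-not ($log = Get-ChildItem $logDir -Filter '*.log' -ErrorAction SilentlyContinue | Select-Object -First 1)) {
    Start-Sleep -Seconds 1
}
Get-Content $log.FullName -Tail 1 -Wait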

5. Add a Healthcheck

HEALTHCHECK is one of the most useful instructions in the Dockerfile and you should include one in every app you Dockerize for production. Healthchecks are how you tell Docker if the app inside your container is healthy.

Docker monitors the process running in the container, but that’s just a basic liveness check. The process could be running, but your app could be in a failed state — for a .NET Core app, the dotnet executable may be up but returning 503 to every request. Without a healthcheck, Docker has no way to know the app is failing.

A healthcheck is a script you define in the Dockerfile, which the Docker engine executes inside the container at regular intervals (30 seconds by default, but configurable at the image and container level).

This is a simple healthcheck for a web application, which makes a web request to the local host (remember the healthcheck executes inside the container) and checks for a 200 response status:

HEALTHCHECK CMD powershell -command `
    try { `
     $response = iwr http://localhost:80 -UseBasicParsing; `
     if ($response.StatusCode -eq 200) { return 0} `
     else {return 1}; `
    } catch { return 1 }

Healthcheck commands need to return 0 if the app is healthy, and 1 if not. The check you make inside the healthcheck can be as complex as you like — having a diagnostics endpoint in your app and testing that is a thorough approach.

Make sure your HEALTHCHECK command is stable, and always returns 0 or 1. If the command itself fails, your container may not start.
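
The default interval and the other thresholds can be tuned with options on the instruction; the values below are arbitrary examples applied to the same web check:

HEALTHCHECK --interval=60s --timeout=10s --retries=3 CMD powershell -command `
    try { `
     $response = iwr http://localhost:80 -UseBasicParsing; `
     if ($response.StatusCode -eq 200) { return 0} `
     else {return 1}; `
    } catch { return 1 }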

Any type of app can have a healthcheck. Michael Friis added this simple but very useful check to the Microsoft SQL Server Express image:

HEALTHCHECK CMD ["sqlcmd", "-Q", "select 1"]

The command verifies that the SQL Server database engine is running, and is able to respond to a simple query.

There are additional advantages in having a comprehensive healthcheck. The command runs when the container starts, so if your check exercises the main path in your app, it acts as a warm-up. When the first user request hits, the app is already running warm so there’s no delay in sending the response.

Healthchecks are also very useful if you have expiry-based caching in your app. You can rely on the regular running of the healthcheck to keep your cache up-to date, so you could cache items for 25 seconds, knowing the healthcheck will run every 30 seconds and refresh them.

Summary

Dockerizing Windows apps is straightforward. The Dockerfile syntax is clean and simple, and you only need to learn a handful of instructions to build production-grade Docker images based on Windows Server Core or Nano Server.

Following these steps will get you a functioning Windows app in a Docker image — then you can look to optimizing your Dockerfile.

HostProcess container base image

Overview

This project produces a minimal base image that can be used with HostProcess containers.

This image cannot be used with any other type of Windows container (process isolated, Hyper-V isolated, etc…)

Benefits

Using this image as a base for HostProcess containers has a few advantages over using other base images for Windows containers including:

  • Size — This image is a few KB. Even the smallest official base image (NanoServer) is still a few hundred MB in size.
  • OS compatibility — HostProcess containers do not inherit the same compatibility requirements as Windows Server containers, so it does not make sense to include all of the runtime / system binaries that make up the different base layers. Using this image allows a single container image to be used on any Windows Server version, which can greatly simplify container build processes.

Usage

Build your container from mcr.microsoft.com/oss/kubernetes/windows-host-process-containers-base-image:v1.0.0.

Dockerfile example

Create hello-world.ps1 with the following content:

Write-output "Hello World!"

and Dockerfile.windows with the following content:

FROM mcr.microsoft.com/oss/kubernetes/windows-host-process-containers-base-image:v1.0.0

ADD hello-world.ps1 .

ENV PATH="C:\Windows\system32;C:\Windows;C:\WINDOWS\System32\WindowsPowerShell\v1.0\;"
ENTRYPOINT ["powershell.exe", "./hello-world.ps1"]

Build with BuildKit

Containers based on this image cannot currently be built with Docker Desktop. Instead, use BuildKit or other tools.

Example:

Create a builder

One time step

docker buildx create --name img-builder --use --platform windows/amd64

Build your image

Use the following command to build and push to a container repository

 docker buildx build --platform windows/amd64 --output=type=registry -f {Dockerfile} -t {ImageTag} .
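
For instance, using the Dockerfile.windows from the example above and a hypothetical registry and tag, the command would look like this:

docker buildx build --platform windows/amd64 --output=type=registry -f Dockerfile.windows -t registry.example.com/hpc-hello:v0.1.0 .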

Container Manifests

As mentioned in Benefits above, HostProcess containers can run on any Windows Server version; however, there is currently logic in containerd that only pulls Windows container images if the OSVersion defined in the container manifest matches the OSVersion of the node.

When building container images from this base image, it is recommended not to include this image in a manifest list and not to include any platform information in the manifest for now.

Please see containerd/containerd#7431 for more information.

Licensing

Code in the repository is released under the MIT license.

The container images produced by this repository are distributed under the CC0 license.

  • CC0 license
  • CC0 legalcode

Introduction

Installing and running a standard Windows executable inside a Docker container involves creating an image that includes the necessary Windows operating system components and the executable file.

This can be done using a Windows Server base image and adding the necessary packages and files.

Creating a Docker Image with Windows Server Base Image

  1. Create a Dockerfile: Start by creating a Dockerfile that specifies the steps for creating the Docker image. This file will include instructions for installing the necessary Windows operating system components and the executable file.
  2. Install Windows Operating System Components: Use the DISM command to install the necessary Windows operating system components, such as the .NET framework or any other libraries required by the executable file.
  3. Copy Executable File: Copy the Windows executable file to the Docker image. This can be done using the COPY instruction in the Dockerfile.
  4. Build Docker Image: Build the Docker image using the docker build command. This will create a new image with the installed components and the executable file.
  5. Run Docker Container: Run the Docker image using the docker run command. This will launch a container based on the image and execute the executable file.

Example Dockerfile

FROM mcr.microsoft.com/windows/servercore:ltsc2022

RUN dism.exe /Online /Add-Package /PackagePath:"C:\path\to\package.cab"

COPY executable.exe /app/executable.exe

ENTRYPOINT ["/app/executable.exe"]

In this example, the DISM command is used to install a package called package.cab. The COPY instruction copies the executable file executable.exe to the container. The ENTRYPOINT specifies that the executable file should be executed when the container starts.

Running the Docker Container

Once the Docker image is built, you can run the container using the following command:

docker run -it [image-name]

Replace [image-name] with the name of your Docker image. The container will start, and the executable file will be executed.

Additional Considerations

When running Windows executables inside Docker containers, there are a few additional considerations to keep in mind:

  • Memory Usage: Windows applications may require more memory than Linux applications. Be sure to allocate sufficient memory to your container to avoid running out of memory (see the example after this list).
  • Networking: Windows applications may require different networking configurations than Linux applications. Be sure to configure the networking for your container appropriately.
  • Security: Running Windows applications inside Docker containers can introduce security risks. Be sure to take appropriate security precautions, such as using a secure network and restricting access to the container.
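
A hypothetical run covering the first two points: a 2 GB memory limit and container port 80 published on host port 8080.

docker run -d --memory 2g -p 8080:80 [image-name]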

Conclusion

Overall, running Windows executables inside Docker containers can be a powerful and flexible way to test or run legacy Windows applications that have not been ported to Linux. Following this article’s guidelines, you can ensure that your Windows applications run smoothly and securely inside Docker containers.

It can also provide a really clean way of installing applications on development machines without having to install them on the base operating system. Limiting the exposed systems can help keep your machine clean and reduce potential security issues.

Here we will make a Windows Docker image which uses Windows Nano Server as the base image.

This is great for deploying your .Net apps on Docker

Source code

Prereqs

  • Docker Desktop installed on Windows Enterprise, Pro or Educational machine
    • Windows Home edition will not work since Hyper-V needs to be enabled and Windows Home just doesn’t offer it
  • Hyper-V feature enabled
  • Visual Studio
    • Or the .Net 6.0 SDK
  • A .Net app (.Net versions 3.1, 5.x and 6.x)
    • .Net Framework apps won’t work on Nano Server (.Net versions 4.x)

Create the app

First we need to create a sample .Net app

dotnet new console -o App -n DotNet.Docker

This creates a new folder App with a very simple console app in it using .Net version 6.0

Edit the app

We’re just going to edit it slightly to loop after saying “Hello world!”

Console.WriteLine("Hello, World!");

for (int i = 0; i < 500; i++)
{
    Console.WriteLine("Sleep for 2 seconds.");
    Thread.Sleep(2000);
}

Build the app

We’ll build the app and copy the binaries into the container image

Navigate to the App directory and run:

dotnet publish -c Release

This will create the precious binary files at App/bin/Release/net6.0/publish

Create Dockerfile

Here is the final Dockerfile, though I will explain each part below

FROM mcr.microsoft.com/powershell:lts-nanoserver-1909

# Run as admin
USER ContainerAdministrator

# Make default shell powershell
SHELL ["pwsh", "-command"]

# Dotnet 6.0.101 install
WORKDIR Users\\Example\\dotnetinstall\\6.0.101
RUN Invoke-WebRequest -OutFile dotnet-install.ps1 -URI https://dotnet.microsoft.com/download/dotnet/scripts/v1/dotnet-install.ps1
RUN .\dotnet-install.ps1 -Version 6.0.101 -InstallDir """C:\\Users\\Example\\\\dotnetinstall\\6.0.101"""

# Copy the application binaries into the docker image
COPY App/bin/Release/net6.0/publish/ C:\\Users\\Example\\App

# Run the application binaries as the main container program
WORKDIR C:\\Users\\Example\\App
ENTRYPOINT ["C:\\Users\\Example\\dotnetinstall\\6.0.101\\dotnet.exe", "C:\\Users\\Example\\App\\DotNet.Docker.dll"]

FROM mcr.microsoft.com/powershell:lts-nanoserver-1909 is to specify which Docker image we want to use as the base image. This powershell:lts-nanoserver-1909 image is from Microsoft’s powershell Docker Hub repository. It comes with a really lightweight version of the Windows OS called Nano Server, which is great for containers, since you generally want to keep container images small.
This powershell:lts-nanoserver-1909 image also comes with a newer version of PowerShell, PowerShell 7, invoked as pwsh (read more here).
pwsh is an open-source, cross-platform version of PowerShell. There are a few extra steps to get it to do all of what previous versions of PowerShell could do, which I will cover in a different article.

USER ContainerAdministrator is to make the commands run as admin, since this is not set by default in Windows (on Linux it is the default). Basically, without this command, any subsequent steps can’t access files downloaded or copied in previous steps.

SHELL ["pwsh", "-command"] is to make the commands in the RUN sections execute through pwsh instead of the default cmd.

WORKDIR Users\\Example\\dotnetinstall\\6.0.101 is to make a folder in the container image with the given path to use as a workspace. It also navigates to this directory for any subsequent commands.

RUN Invoke-WebRequest -OutFile dotnet-install.ps1 -URI https://dotnet.microsoft.com/download/dotnet/scripts/v1/dotnet-install.ps1 is to download a dotnet install script and then the following command is to install the dotnet runtime into the C:\\Users\\Example\\\\dotnetinstall\\6.0.101 directory. We later use this runtime package to actually run our .Net app.

Instead of manually installing via this dotnet-install.ps1 script, you could alternatively use mcr.microsoft.com/dotnet/aspnet:6.0 (read more here) as the base image, but I personally find it hard to debug since it doesn’t come with cmd, powershell, pwsh, or any sort of shell.

COPY App/bin/Release/net6.0/publish/ C:\\Users\\Example\\App is to copy the folder with the binaries into the container image.

WORKDIR C:\\Users\\Example\\App is to create a working directory and finally ENTRYPOINT ["C:\\Users\\Example\\dotnetinstall\\6.0.101\\dotnet.exe", "C:\\Users\\Example\\App\\DotNet.Docker.dll"] is to tell Docker to invoke the dotnet runtime which we installed as the main entrypoint of the container, and to pass it the DotNet.Docker.dll file as an argument. This will essentially start our app and print "Hello world!" and then loop.

Build the Docker image

docker build -t dotnetexample:v1.0.0 -f .\Dockerfile .

Run this command from just outside the App directory. Ensure the Dockerfile is also just outside the App directory.

Remember: you have to enable the Hyper-V feature for Windows 10 Enterprise, Pro or Educational. Windows 10 Home will not work.

You must also ensure that Docker Desktop is using Windows Containers:

Ensure Docker Desktop is using Windows Containers
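
If you prefer the command line to the Docker Desktop tray menu, Docker Desktop also ships a CLI that can toggle between the Linux and Windows engines (the path below is the default install location):

& 'C:\Program Files\Docker\Docker\DockerCli.exe' -SwitchDaemon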

If you need quick access to a Windows 10 Enterprise machine, you can create one in Azure Portal

Creating a Windows Enterprise machine in Azure Portal if need be

Run image

docker run dotnetexample:v1.0.0

Run the docker image!

You can also interactively run powershell commands in the container with:

docker run -it --entrypoint pwsh dotnetexample:v1.0.0

Interactively run powershell commands in container image

You’ve done it!
