Windows Server 2016 Containers

Containers, Docker and similar technologies are such a trending topic these days that I thought I would take the time to see what all the fuss is about. As a Windows developer I was never completely happy with having to run VirtualBox just to get a Linux distro working so I could create a container. With Windows Server 2016, containers are built in, so I wanted to explore them and see how they could change my development workflow. These posts are a journey through installing and configuring a set of containers to host an ASP.Net MVC website inside them. I initially tried running the PowerShell scripts linked from Microsoft's 'Windows Containers Quick Start' pages, but I got so many errors with them (understandable, as the feature is still in development) that I started from scratch and built the containers up myself.

Installing Windows Server Core and enabling the Container feature

Firstly, get the Windows Server 2016 ISO. Currently this is Technical Preview 4 and may change before the final release of Windows Server 2016.
I have renamed my ISO WindowsServerTP4 so I can find it on my machine more easily.
Create a VM and mount the ISO as the DVD drive.
Creating a Server 2016 Core Virtual Machine
Make sure the VM has access to an external switch.
I have called mine ContainerHost and it will be a Server Core installation.
Start and connect to the Virtual Machine.
Follow the usual EULA and drive-selection steps.
Installing Server Core
Specify drive properties  
On first boot you need to change the password (use tab to move to the confirm password prompt).
Change password
Enter a PowerShell session.
Enter PowerShell session
Install the Containers Windows feature by using the cmdlet:
Install-WindowsFeature Containers
Install Containers feature
The server will continue to install components during the reboot.
Install and reboot
Log in and re-enter PowerShell.
Now you can list all the commands the Containers feature exposes.
Run Get-Command -Module Containers
Container commands
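Condensed into a script, the steps above look like this (run from the PowerShell session on the VM; this is a sketch of the sequence, not an official script):

```powershell
# Enable the Containers feature; the server finishes installing
# components during the reboot
Install-WindowsFeature Containers
Restart-Computer

# After logging back in and re-entering PowerShell,
# list the cmdlets the feature exposes
Get-Command -Module Containers
```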

Installing a Base OS Image

The main container features are now installed; however, all containers must be based on an OS image, and these come from OneGet. As Server Core Technical Preview 4 is version 10.0.10586.0, the base OS image has to be the same version. These images are WIM files and can be downloaded from the Server Core Virtual Machine and saved to your local network. First install the PackageProvider (OneGet).
Install-PackageProvider ContainerProvider -Force
Then you can use this to find container images from OneGet.
Find base OS images
Create a local share on your host machine and create a mapped drive in the Virtual Machine.
net use z: \\path-to-share\MyShare <password> /user:"<username>"
Save the WIM file to this share.
Save-ContainerImage -Name WindowsServerCore -Version 10.0.10586.0 -Destination "z:\WindowsServerCore.wim"
This will take a long time depending on your internet connection, as the image is gigabytes in size.
Now the WIM file is on your network, you can use Install-ContainerOSImage to import it into your image collection. This way, any time you create another Virtual Machine and run through this process again, you already have the WIM and don't need to download it again. However, the WIM version will change with each new Technical Preview or release.
Install base OS image
Running the command Get-ContainerImage should list this as an OS image.
Display locally installed images
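Pulling the whole image workflow together, the commands look roughly like this (share path, credentials and file names are placeholders; Find-ContainerImage is the provider's search cmdlet):

```powershell
# Install the ContainerProvider package provider (OneGet)
Install-PackageProvider ContainerProvider -Force

# List the base OS images available from OneGet
Find-ContainerImage

# Map the share on the host machine, then download the WIM to it
net use z: \\path-to-share\MyShare <password> /user:"<username>"
Save-ContainerImage -Name WindowsServerCore -Version 10.0.10586.0 -Destination "z:\WindowsServerCore.wim"

# Import the WIM into the local image collection and verify it is listed
Install-ContainerOSImage -WimPath "z:\WindowsServerCore.wim" -Force
Get-ContainerImage
```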
In the next post I will create an image that has IIS set up and then create containers based off this image.

Happy coding

In an older post on website performance using concatenation and minification, I used the Ajax Minifier tool and an MSBuild file to run it. Now I use gulp, which I find easier: it only needs a dependency on Node, so it can be separated from the Visual Studio project itself and run outside the build cycle.


When run in a CI/CD environment, the gulp file is processed before the actual build takes place so that the AssemblyInfo.cs file can be amended with the version information. The gulp file breaks down into three main parts: the version task, the concatenation and minification, and the assembly version writing.

The default task runs first and then runs the version task. The version task reads any parameters passed to the gulp runner, in this case looking for buildVersion, using the yargs library. If it is undefined, the script is being run outside the CI environment and falls back to a default version. Then it performs the concatenation and minification of the styles using the gulp-concat and gulp-cssnano libraries. The resulting file is written out to the public-assets/css directory, which is where it is referenced from in the HTML view. The same process is applied to the JavaScript files, except the gulp-uglify library is used to minify them.
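The version-defaulting logic is easy to isolate. Here is a small sketch of it outside gulp (the function name and the fallback value are my own; yargs would supply the first argument from the command line):

```javascript
// Mirror of the version task's parameter handling: use --buildVersion when
// the CI server supplies it, otherwise fall back to a local default.
function resolveBuildVersion(argvBuildVersion, fallback) {
  return argvBuildVersion === undefined ? fallback : argvBuildVersion;
}

console.log(resolveBuildVersion(undefined, '0.0.0.0'));  // local run
console.log(resolveBuildVersion('1.2.3.4', '0.0.0.0'));  // CI run
```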

Writing them to the public-assets/js and css directories allows the directories to be classed as static in IIS, so we can add additional headers such as expiry dates. Finally, the same version number is passed to the assembly-info task, which uses the gulp-dotnet-assembly-info library. This simply changes the AssemblyInfo.cs file so that when the project is built, the assembly will have this version. Then, when the JavaScript and css files are linked to in the HTML, the assembly version can be injected into the url.
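Under those assumptions, a minimal gulpfile sketch might look like this. File paths, task names and the default version are illustrative, and the gulp-dotnet-assembly-info call in particular is from memory, so check it against the library's documentation:

```javascript
var gulp = require('gulp');
var concat = require('gulp-concat');
var cssnano = require('gulp-cssnano');
var uglify = require('gulp-uglify');
var assemblyInfo = require('gulp-dotnet-assembly-info');
var args = require('yargs').argv;

// Default when run outside the CI environment; TeamCity passes --buildVersion
var buildVersion = args.buildVersion === undefined ? '0.0.0.0' : args.buildVersion;

// Concatenate and minify the styles, naming the file with the build version
gulp.task('styles', function () {
  return gulp.src('Content/*.css')
    .pipe(concat('site-' + buildVersion + '.min.css'))
    .pipe(cssnano())
    .pipe(gulp.dest('public-assets/css'));
});

// Same process for the scripts, but minified with uglify
gulp.task('scripts', function () {
  return gulp.src('Scripts/*.js')
    .pipe(concat('site-' + buildVersion + '.min.js'))
    .pipe(uglify())
    .pipe(gulp.dest('public-assets/js'));
});

// Rewrite AssemblyInfo.cs so the built assembly carries the same version
gulp.task('assembly-info', function () {
  return gulp.src('Properties/AssemblyInfo.cs')
    .pipe(assemblyInfo({ version: buildVersion, fileVersion: buildVersion }))
    .pipe(gulp.dest('Properties'));
});

gulp.task('default', ['styles', 'scripts', 'assembly-info']);
```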


TeamCity Integration

It is really easy to add a gulp task to TeamCity: create a Build Step in your pipeline and choose Command Line as the runner type.


Then select 'Custom script' from the drop-down and enter your normal gulp command in the Custom script text area.

With my gulp script I specify a build version parameter which gets its value from the environment variable BUILD_NUMBER. This is the build counter that TeamCity generates for the run. I inject this value into the gulp script via the version task, which is then used to name the css and JavaScript files. It also changes the AssemblyInfo.cs file to create a unique assembly version.
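In the Custom script box this amounts to a single command. The %env.BUILD_NUMBER% parameter syntax is TeamCity's way of referencing the environment variable, and 'default' is whatever task your gulpfile defines:

```shell
gulp default --buildVersion=%env.BUILD_NUMBER%
```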

Happy coding

Foggy Software Development

Software development is one of those areas where it is important to find the fine line between too little specification and too much. One project I worked on recently for a client had tremendous success, mainly due to good user story development and a good development team that could take those stories and run with them.

This was an agile shop that used the agile template in TFS but also used the vocabulary of scrum interchangeably. Nothing wrong with that; I believe there has to be a balance between adapting your process to your environment rather than changing your environment to fit the process, especially in larger, more traditional businesses. So they had sprints and sprint planning, but also user stories as opposed to product backlog items. Six months into the project all sprints were tracking the line of the burn-down chart, and then things started to go wrong. The burn-down chart was way off with the resources we had at hand. The reason? We started to get user stories like these:-

Story title: We need a different view of the data

Description: To do


Story title: General printing

Description: To do

There isn't much you can do with these, but they were in the sprint. The more time a developer spends asking for clarification about a user story, the less effort they can put into acting on it. The end result, after all the backwards and forwards communication, is an inaccurate burn-down, which can lead to a demoralised team and possible infighting. Why were these in the sprint, you may ask? For some unknown reason, management decided to take a more hands-on approach to the project and forced stories into sprints, instead of the software development team deciding what resources were available to tackle the high-priority stories. The stories had to be done in that sprint even if the resources were not enough to cover them. Add to that, the management only had a vague idea of what those stories were and, possibly due to lack of time, couldn't expand on their details.

So what is just enough information for a user story?

  • The product owners should define the features of the project.
  • The business analyst and business owners should define the user stories that lead to the features and describe the acceptance criteria to fulfil the user story.
  • The development team should create the tasks off the user stories that will fulfil the acceptance criteria and make the user story valid.
So there has to be enough information in the user story to define the acceptance criteria at a minimum; the implementation details should be down to the development team. So what happened to the two user stories mentioned earlier? They were moved from that sprint into the next, and then into the one after that. Finally they were broken down into smaller, better-described stories, which then got implemented into the project.

Happy coding

Part 1 – Securing Your Logins With ASP.Net MVC
Part 2 - Securing Web.API Requests With JSON Web Tokens (This post)

In the last post I went over the techniques you can use to secure your ASP.Net MVC logins using salted hashes in the database. This post covers the web service layer and how to secure requests to service calls that are essentially exposed to the big bad web. To show what sort of layering I am discussing, here is a basic example of the various layers I have been using on a number of projects.

Three tier architecture

Once the user has been validated and allowed into the site, all service requests are done on their behalf. To make sure nobody who has not been validated gets access to the service calls, we implement JSON Web Tokens, or JWT.

JSON Web Tokens

JSON Web Tokens are a standard, URL-safe way of representing claims, created by the Internet Engineering Task Force (IETF). They are in a form like this:-


A JWT is split into 3 sections:
  • JOSE Header - describes the token and the hashing algorithm being used for it.
  • JWS Payload - the main content; can include claim sets, issuer and expiry date, as well as any bespoke data you want to include.
  • Signature hash - base64 encoding the header and payload and creating a message authentication code (MAC) over them produces the signature hash.

Creating JSON Web Tokens in .Net

Going back to the web project: in the constructor of each controller, create a private field that will store our token string. The code to generate the token uses the System.IdentityModel.Tokens.Jwt namespace, which you may need to add extra references for via NuGet. The call to Authorization.GetBytes() is a method from a class in a business object that sits in the Webservice layer; all it does is turn a string into a byte array.

Here we just store the web token in the ViewBag for rendering on each view. The reason we do this is that we don't want to run into any cross-domain issues, as our web and web service layers run on different machines with different urls. Then, in the angular code that calls into the service layer, we extract that token and append it to the call as a parameter.
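A sketch of how such a token might be generated with System.IdentityModel.Tokens.Jwt follows. The issuer, audience and lifetime values are placeholders, and class names varied between versions of the package, so treat this as illustrative rather than the post's exact code:

```csharp
using System;
using System.Collections.Generic;
using System.IdentityModel.Tokens;
using System.Security.Claims;

public static class TokenBuilder
{
    // 'key' would come from something like the post's Authorization.GetBytes(),
    // which turns the shared secret string into a byte array.
    public static string CreateToken(string userName, byte[] key)
    {
        var signingCredentials = new SigningCredentials(
            new InMemorySymmetricSecurityKey(key),
            "http://www.w3.org/2001/04/xmldsig-more#hmac-sha256",
            "http://www.w3.org/2001/04/xmlenc#sha256");

        var token = new JwtSecurityToken(
            issuer: "self",                          // placeholder
            audience: "http://my-web-service",       // placeholder
            claims: new List<Claim> { new Claim(ClaimTypes.Name, userName) },
            notBefore: DateTime.UtcNow,
            expires: DateTime.UtcNow.AddMinutes(20), // illustrative lifetime
            signingCredentials: signingCredentials);

        return new JwtSecurityTokenHandler().WriteToken(token);
    }
}
```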

Consuming JSON Web Tokens

In the web service layer we intercept each call by overriding the OnAuthorization method inside AuthorizeApi.cs within App_Start. If the caller has a correct and valid token they proceed to get the data from the API call; if not, they are sent a 403 Forbidden response.
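The validation side might be sketched like this. The parameter name, issuer, audience and secret are assumptions matching the creation sketch, not the post's actual values:

```csharp
using System;
using System.IdentityModel.Tokens;
using System.Net;
using System.Net.Http;
using System.Web;
using System.Web.Http.Controllers;
using System.Web.Http.Filters;

public class AuthorizeApi : AuthorizationFilterAttribute
{
    public override void OnAuthorization(HttpActionContext actionContext)
    {
        // The angular client appends the token as a query string parameter
        var token = HttpUtility.ParseQueryString(
            actionContext.Request.RequestUri.Query)["token"];

        try
        {
            var parameters = new TokenValidationParameters
            {
                ValidIssuer = "self",                     // must match the issuer
                ValidAudience = "http://my-web-service",  // and the audience
                IssuerSigningKey = new InMemorySymmetricSecurityKey(
                    Authorization.GetBytes("shared-secret"))
            };
            SecurityToken validated;
            new JwtSecurityTokenHandler()
                .ValidateToken(token, parameters, out validated);
        }
        catch (Exception)
        {
            // No token, or an invalid/expired one: 403 Forbidden
            actionContext.Response =
                actionContext.Request.CreateResponse(HttpStatusCode.Forbidden);
        }
    }
}
```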

JSON Web Token (JWT) - OAuth Working Group

Part 1 – Securing Your Logins With ASP.Net MVC (This post)
Part 2 - Securing Web.API Requests With JSON Web Tokens

An architectural pattern that is becoming more popular is ASP.Net MVC with a Web.API layer servicing the web front end via angular.js or a similar technology: a kind of hybrid SPA with all the benefits that ASP.Net brings to the table. This is a two-part primer running through what I do to secure logins to MVC applications. In part two I will expand on this post to cover how to secure the Web.API layer, utilizing the security built into ASP.Net.

If you ever go to a website and cannot remember your password, you will most likely request a password reminder. If you get sent your current password in plain text, that is bad news: it means the website is storing passwords in plain text, and if it gets hacked the attackers will have access to those passwords. Given that people tend to reuse the same password on multiple sites, they could then compromise other sites that you use. It is really important to salt and hash your passwords before storing them in the database. By doing this, you can do a string comparison against the hash and not the actual password. Here I will go through the process in code.

As usual you will have a login screen asking for username (or email address) and password. I won't go into the MVC/Razor side here, just the important code.

Take in the two form values
The LookupUser method on the SecurityService is where the magic happens. This method looks up the User from the database via a UserRepository and appends the salt to the password the user has provided. I explain what salts and hashes are a little later on; for now, know they are just a random string representation of a passkey. This combination of password and salt is then passed into the GetPasswordHashAndSalt method of the PasswordHash class.

The GetPasswordHashAndSalt method reads the string into a byte array, hashes it using SHA256, and returns a string representation back to the calling method. This is the hash of the salted password, which should be equal to the value in the database. On line 19 of the SecurityService class the repository does another database look-up to get the User that matches both the email address and the hash value.

OK, so how do we get those hashes and salts in the database in the first place? When a new user account is set up you need to generate a random salt. You then store the usual user details in the database along with the salt and the hashAndSalt values in place of the password. By generating a new salt each time an account is created, you minimise the risk that a hacker will get the salt and regenerate the passwords from the hashAndSalt value.

Now back to the login POST method on the controller. Once the user has been authenticated, you need to create a cookie for the ASP.Net forms authentication to work. First create a ticket that stores information such as the logged-in user, where LoggedInUser is the valid User object we got from the database earlier. To check for a valid ticket throughout the site, you can decorate each action method with [Authorize] filter attributes, or you could cover the whole site and just add [AllowAnonymous] attributes to the login controller actions.
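The salt generation and hashing described above can be sketched like this. The class and method names follow the post (PasswordHash, GetPasswordHashAndSalt), but the bodies are my own reconstruction of the approach:

```csharp
using System;
using System.Security.Cryptography;
using System.Text;

public static class PasswordHash
{
    // Generate a random salt for a new account (the length is illustrative)
    public static string CreateSalt()
    {
        var bytes = new byte[32];
        using (var rng = new RNGCryptoServiceProvider())
        {
            rng.GetBytes(bytes);
        }
        return Convert.ToBase64String(bytes);
    }

    // Hash the salted password with SHA256 and return a string to store,
    // or to compare against the stored hashAndSalt value at login
    public static string GetPasswordHashAndSalt(string saltedPassword)
    {
        using (var sha = SHA256.Create())
        {
            var hash = sha.ComputeHash(Encoding.UTF8.GetBytes(saltedPassword));
            return Convert.ToBase64String(hash);
        }
    }
}
```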
To do this for the whole site, first add a new AuthorizeAttribute to the FilterConfig.cs file inside App_Start. Then add an Application_AuthenticateRequest method to the global.asax.cs file. This method will check every request coming in to see if it has a valid FormsAuthentication ticket. If it doesn't, it will redirect the user to the default location specified in the web.config file.
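The ticket issue and the per-request check might be sketched as below. The ticket lifetime and the use of the email address as the identity are assumptions for illustration:

```csharp
using System;
using System.Web;
using System.Web.Security;

public partial class Sketch
{
    // In the login POST action, once the user is authenticated:
    // 'email' would come from the validated LoggedInUser object.
    private void IssueTicket(HttpResponse response, string email)
    {
        var ticket = new FormsAuthenticationTicket(
            1, email, DateTime.Now, DateTime.Now.AddMinutes(30),
            false, String.Empty);
        response.Cookies.Add(new HttpCookie(
            FormsAuthentication.FormsCookieName,
            FormsAuthentication.Encrypt(ticket)));
    }

    // In global.asax.cs: check every incoming request for a valid ticket
    protected void Application_AuthenticateRequest(object sender, EventArgs e)
    {
        var cookie = HttpContext.Current.Request.Cookies[
            FormsAuthentication.FormsCookieName];
        if (cookie == null)
            return; // no ticket; forms auth redirects per web.config

        var ticket = FormsAuthentication.Decrypt(cookie.Value);
        if (ticket == null || ticket.Expired)
            FormsAuthentication.RedirectToLoginPage();
    }
}
```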