Entity Framework Code First migrations are really handy, but they can be dangerous in the wrong hands as they make changes to the underlying database of your application.

Before the days when a single check-in would run through the CI/CD build pipeline, it was normal during development to make constant changes to a database structure as and when the requested features needed it. With the new model of all changes running through the CI/CD pipeline, database changes can be pushed to a test or staging server in no time at all. With this way of working, a few precautions have to be taken to make sure EF migrations don’t wreak havoc on the databases behind the staging or test servers.

Migrations in Entity Framework

A migration is a change to the schema of a database based on any model changes that have been created in the data layer of the application. If for example there is a User class and a new property is added for membership level, then with migrations enabled, a new column will be created in the database to match the new property.

But for a database already in existence, and maybe even already in production, you don’t want changes to the schema to go directly through to production. So there are a few steps to take; these are:-

  1. Create a copy of the existing database on the local machine
  2. Reverse engineer the database to classes that can be used for Code First migrations
  3. Enable migrations
  4. Create a manual empty migration to tell EF that both model and database are in sync
  5. Disable migrations for all but local changes
  6. Generate a migration script for the dba to implement

Create a copy of the existing database on the local machine

I will leave this step up to you, as some developers prefer to restore a database backup locally, or script the entire database and data and run the script locally; you may even have a database on another server that you can use.

Reverse engineer the database to classes that can be used for Code First migrations

By using the Entity Framework Reverse POCO Generator Visual Studio plugin, it is possible to point it at the database and generate the DbContext and all the necessary POCO classes.

Right click the project in Visual Studio and choose Add > New Item… from the context menu; you will then get a choice that includes ‘EntityFramework Reverse POCO Code First Generator’ in the template list.


By default this will create a context with the name MyDbContext like this:-


Simply change the ConnectionStringName to match the one in your app.config or web.config and save the file; it will then generate the necessary classes in a .cs file under the T4 template.



As this tool uses a T4 template to generate the classes, it is easily configurable, for example to change the context class name.


Enable migrations

To enable migrations, simply go to the ‘Package Manager Console’ view in Visual Studio. It can be found either by typing in the Quick Launch text box at the top right or via View > Other Windows > Package Manager Console. In the Default project drop down make sure the correct project is selected, then type:-

Enable-Migrations
This will create a ‘Migrations’ directory within the project which will have a Configuration.cs file inside.

This file houses the main configuration class for the migrations, and it is in here that you can change whether automatic migrations are enabled and whether data loss is acceptable. On a development machine that is probably OK, but you don’t want those settings passed on to staging or production; so use a pre-processor directive such as this.
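A sketch of what such a directive might look like in the generated Configuration class (the MyDbContext name assumes the generator's default was kept; adjust to suit):

```csharp
internal sealed class Configuration : DbMigrationsConfiguration<MyDbContext>
{
    public Configuration()
    {
        // Allow automatic migrations (and potential data loss) only in
        // local Debug builds; any other build configuration leaves the
        // schema alone.
#if DEBUG
        AutomaticMigrationsEnabled = true;
        AutomaticMigrationDataLossAllowed = true;
#else
        AutomaticMigrationsEnabled = false;
        AutomaticMigrationDataLossAllowed = false;
#endif
    }
}
```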

In this class, you will notice that it inherits from DbMigrationsConfiguration, typed on the DbContext class we created when we reverse engineered the database. It is this class that is responsible for the model and for keeping track of the migrations.

Going back to the generated context class, we need to add a reference to System.Data.Entity to get access to the functionality to drop and recreate the database and pluralise names. This can be added in the OnModelCreating method like this:-

Setting the initializer for the context to null stops any changes being made to the schema, whereas setting it to DropCreateDatabaseIfModelChanges will do exactly that.
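As a sketch, the two initializer choices described above look like this (MyDbContext is assumed to be the generated context name):

```csharp
// Stop EF making any schema changes itself - the schema is owned elsewhere.
Database.SetInitializer<MyDbContext>(null);

// Or, for a throwaway local development database, drop and recreate the
// database whenever the model changes:
// Database.SetInitializer(new DropCreateDatabaseIfModelChanges<MyDbContext>());
```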

Create a manual empty migration to tell EF that both model and database are in sync

Now you have the model that matches the database schema, you can manually add a migration using this command:-

Add-Migration InitialCreate -IgnoreChanges


Giving it a name makes it easier to identify later on in the migrations collection, and also generates a class with the Up and Down methods that hold the logic to move the schema between versions. As this is the initial synced migration, these should be left empty.


Disable migrations for all but local changes

Now that the switches are in place, when the code goes through the CI/CD pipeline just build it with any configuration other than Debug. For example, in TeamCity I have a build step using MSBuild with the parameter /p:Configuration=Release


Generate a migration script for the dba to implement

Let’s face it, nobody wants to feel the wrath of a DBA if you make changes to the production database without first notifying them. There should even be a proper change management process to follow before any changes are pushed to production. This is where the script generation feature of Entity Framework comes in handy.

By using the command:-

Update-Database -Script

You can generate a T-SQL script that shows all the changes needed to bring the database up to date with the model; this script can then be run by the DBA on the staging or production servers. Here I have added a simple class called TestTable in the generated .cs file.




Happy coding.

Containers on Windows 10

This is part 1 of what will most likely be a series (unless I get totally hacked off with containers like I am right now). The series will start with getting containers up and running on a Windows 10 box to run a Windows based container. Then I will progress onto Windows Server 2016, get the same configured container running on that, and hopefully discover some kind of workflow for developers along the way. The end game is to develop a website using Visual Studio 2015, test on Nano Server running IIS and deploy using Server 2016.


Apart from Windows 10 Professional or Enterprise, you need to be sure you are running at least the Anniversary Update (1607); to check this, go to a command line and type:- winver

I am running 1607 build 14393.187 (22nd September 2016)


Enable Containers

To get containers installed, you need to enable the feature. This can either be done using PowerShell or the UI. Here is the PowerShell:-

Enable-WindowsOptionalFeature -Online -FeatureName containers -All

Then as Windows 10 will only support Hyper-V containers, enable that feature as well:-

Enable-WindowsOptionalFeature -Online -FeatureName Microsoft-Hyper-V -All


Then restart the computer


Installing Docker

As of Docker version 1.12 there is an option to download Docker for Windows; however, to natively run Windows containers instead of using MobyLinux as a middleware virtual machine, opt to install the beta version, which can be found here:- https://docs.docker.com/docker-for-windows/

Switch to Windows Containers

Once installed, right click the whale and switch to using Windows Containers.



Now you can get the version of Docker and it should show that both host and client are running Windows.

docker version


Nano Server

According to this link you can run both Server Core and Nano Server on a Windows 10 Pro machine within a Hyper-V container.

The best way to get Nano Server is from the Docker Hub at:- https://hub.docker.com/r/microsoft/nanoserver/

In your command line or PowerShell screen use:-

docker pull microsoft/nanoserver


It will take a while to download if it has not already been cached on your host machine.

List the images

To list the images you have on your host use the command:-

docker images



Run Nano Server

This can be achieved simply by using the docker run command; however, if you want to run it interactively you can append the -it flag and get into a command line like this:-

docker run -it microsoft/nanoserver cmd

Once it starts up you will be in the command line of the container; you can check this by using the hostname command.


You can get out of this container by typing exit, which takes you back to your host machine. Once there, use the command docker ps and it will list all running containers; you will see there are none running now. You can however use the switch -a to get all containers in both running and stopped states.


As you can see I have a lot of containers, but that’s OK; I can delete them all by iterating through them and removing each one with the rm command like this:-

FOR /f "tokens=*" %i IN ('docker ps -a -q') DO docker rm %i


A nice clean slate to start again.

Happy coding.

I have finally rewritten my MoonPhase app to be a UWP app. This allows me to write the code once and have it on a Windows 10 phone as well as a standard Windows 10 desktop or tablet.

With this app, there are three main interfaces that show you the phases of the moon: a day view, a week view and a month view. As the phone interface is somewhat constrained, I decided to only show the day view on these devices and allow the user to move back and forward in time to get to other dates.

Phone view


Desktop view


So in the code I had to detect the screen size of the current device; this can be done in the app.onactivated event of the WinJS framework.

getCurrentView() will get the current view object, which we can then query for its visibleBounds properties. I am using a device height of 800, which covers phones in the 5-inch range such as the Lumia 650.

Then a conditional is used to render either the day.html view or the week.html view for all other devices.
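As a rough sketch of that conditional in plain JavaScript (the view's visibleBounds is passed in as a plain object here; the 800 threshold and view file names come from the description above, the function name is illustrative):

```javascript
// Pick the start page based on the visible height of the device.
// visibleBounds stands in for the object returned by the WinJS/WinRT
// current-view API; heights of 800 or less are treated as phone-sized.
function chooseStartPage(visibleBounds) {
    var isPhone = visibleBounds.height <= 800;
    return isPhone ? 'day.html' : 'week.html';
}
```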

Also, as I don’t want phone users to see the buttons that navigate to the week or month views, I simply use jQuery to get all the buttons in the toolbar that have a certain class and remove them.

The app can be found in the Windows app store here.

Happy coding.

Require.js is a JavaScript module loader, helping to reduce the complexity of applications by managing dependencies and allowing easier separation and organisation of code into modules.

Asynchronous Module Definition (AMD) modules are the format Require.js understands; it can be used with jQuery, Node and others, but first the JavaScript needs to be refactored appropriately.

Run JSLint over the code

JSLint is a tool that can be run from the command line, through Gulp or Grunt, or even in the browser at http://www.jslint.com/; it will highlight issues in the JavaScript that, once fixed, will produce better quality code.

Structure the JavaScript

After the issues highlighted by JSLint have been fixed, organise the code so that it follows a more structured outline, such as a collection of vars followed by the functions.
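A minimal example of that layout (the variable and function names are purely illustrative):

```javascript
// Variables grouped together at the top of the file.
var greetingPrefix = 'Hello, ';
var lastGreeting = '';

// Functions follow, each doing one small job.
function buildGreeting(name) {
    lastGreeting = greetingPrefix + name;
    return lastGreeting;
}

function getLastGreeting() {
    return lastGreeting;
}
```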


Add the AMD define call

For it to be recognised by module loaders such as require.js, the code needs to be wrapped in a define call, which will turn it into an AMD module.
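A sketch of the wrapped module (the module body and names are illustrative; a plain-script fallback is included here only so the snippet also runs outside an AMD loader - with require.js on the page, the define branch is taken):

```javascript
(function (factory) {
    // Use the AMD loader when require.js is present; otherwise attach
    // the module to the global object so a plain <script> tag still works.
    if (typeof define === 'function' && define.amd) {
        define([], factory);
    } else {
        globalThis.counterModule = factory();
    }
}(function () {
    var count = 0;

    function increment() {
        count += 1;
        return count;
    }

    function reset() {
        count = 0;
    }

    return { increment: increment, reset: reset };
}));
```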


Dependencies such as jQuery

To add any dependencies such as jQuery, simply include them in the define call’s dependency array like this:-


Any dependencies added in this way also need to be AMD compatible modules, or a shim might be needed; documentation for using jQuery with require.js can be found here:- http://requirejs.org/docs/jquery.html

Add a little Encapsulation

One of the advantages of the module pattern is that it can be structured so that variables and functions can be internalised and effectively defined in a 'private' scope. The module's return object can define exactly what should be publicly accessible. In the previous code snippet, the variable foo and both functions were all returned by the module. This means that they would all be publicly accessible by any client code. Instead, we want to ensure that foo and the functions are initially defined as private, and then expose the functions explicitly in the return object.
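A sketch of that encapsulation using the module pattern (foo comes from the discussion above; the function names are illustrative):

```javascript
// Revealing-module sketch: foo and the helper functions are private;
// only what the IIFE returns is reachable from outside.
var myModule = (function () {
    var foo = 0;                  // private state

    function setFoo(value) {      // private, but exposed via the return object
        foo = value;
    }

    function getFoo() {
        return foo;
    }

    // Only what is returned here is public.
    return {
        setFoo: setFoo,
        getFoo: getFoo
    };
}());
```

Client code can call myModule.setFoo and myModule.getFoo, but has no way to reach foo directly.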


Happy coding.

Creating Controllers in Umbraco CMS

On a recent project I needed to implement a secure area on a public facing website that runs on Umbraco CMS.
Umbraco runs on top of ASP.NET MVC, so pretty much anything you can do with ASP.NET MVC you can do with Umbraco. The key requirements for this small project were to create a secure area that allows users to register for an account and log in. The registered users were also to be given different roles: one for uploading documents and the other for downloading those same documents.


As ASP.NET MVC implements the membership provider, it was possible to create a registration form that uses the same provider. In the view, instead of using Html.BeginForm, for Umbraco we use Html.BeginUmbracoForm()

All this method does is acknowledge the registration and send out an email to the admin staff, who then validate the request offline. Once they have done this they will manually create a member in the Umbraco back office. Members in Umbraco can belong to either the Member type or a custom type. For our implementation we created a Portal Member type that has all the same properties as the Member type except for an additional Portal Admin property. Members that are a Portal Admin are allowed to upload documents to the secure area; all other members are read only in essence.


So when the admin staff adds a new Member they choose the Portal Member Type.

Then simply add the necessary details and save.


Now that the user has an account to log in with, they can go back to the same page and enter their credentials.

When posted, this sends the required parameters to the controller.

This controller inherits from the Umbraco SurfaceController, which allows it to do either a standard redirect or a RedirectToCurrentUmbracoPage() for invalid attempts.
This method simply uses the Membership.ValidateUser() method, which does the lookup in the database for the user.
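A sketch of what such a login action might look like (the model class, action name and redirect target are illustrative assumptions, not the project's actual code):

```csharp
// Hypothetical login POST action on an Umbraco SurfaceController.
[HttpPost]
public ActionResult Login(LoginModel model)
{
    // Membership.ValidateUser does the database lookup for the user.
    if (Membership.ValidateUser(model.Username, model.Password))
    {
        FormsAuthentication.SetAuthCookie(model.Username, false);
        return Redirect("/portal");                 // standard redirect
    }

    ModelState.AddModelError("", "Invalid username or password");
    return RedirectToCurrentUmbracoPage();          // invalid attempt
}
```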

Once the redirect has completed, the call into the Index method of the GovernorPortalController occurs, which inherits from the Umbraco RenderMvcController class. This class allows the Index method to be overridden so a model can then be passed to the calling view. In our implementation we simply go to a backend database to get a collection of document details that are stored in a custom table in SQL Server. The controller then passes that collection to a ViewBag that the view then iterates through.

For some reason that I am yet to work out, the recommended way to implement custom controllers in Umbraco is to inherit from RenderMvcController for GETs and from SurfaceController for POSTs.

Anyway, that aside, Portal Admin users need to upload documents to this area, so another form is added to the same view. This again uses the Html.BeginUmbracoForm() helper method, posting to the UploadController's UploadFile method, which takes HttpPostedFileBase and string title parameters.


After checking the file's ContentLength, it is saved to the server file system with a GUID for its filename to avoid conflicts.

Various properties are then saved to the database; in this example I am using a simple SqlCommand instead of Entity Framework, as nowhere else in the site uses EF.

Once a collection of documents is presented to the user, admin users can delete documents via a conditional that checks their role; this is found by querying the Umbraco member properties for the user.

Happy coding.