WebUp is a new website monitoring tool that can also be configured to watch other endpoints such as web API URLs. It is live in the Windows Store as a UWP app, which means it can run on mobile, desktop, tablet and Xbox.


Why UWP?

The Microsoft app development ecosystem, although way behind both iOS and Android on the mobile side, has huge potential because it includes everybody who runs Windows 10, no matter what device. Here are some of the reasons I chose to publish on the Windows app store.

Auto updating

Any change I wish to make to the app, once it has gone through certification, will auto-update on the end user's machine, leaving them with an always up-to-date version with no user input. Gone are the days when, every time a new version of an application came out, you had to go to the site, download it and then install it.

No credit card information collected

WebUp is free for a 7-day trial, but some of the features are locked until upgraded to the full version. This payment process is entirely handled by Microsoft, and although there are other payment services out there such as Stripe, as a developer I don't need to lift a finger beyond specifying the price.

Runs across multiple devices

UWP apps are compiled once, but each architecture is included in the package, which means that once it is uploaded to the Windows Store it is available for any Windows 10 device. WebUp gives a great experience on a tablet-sized machine, but some users are running it on an Xbox with a massive display and using it as an information radiator. The mobile version is great when you need to whip out your phone and just check the status of your websites.

User can multi install

The Windows Store allows a single user to install the app on up to 10 devices, giving them the freedom to run it in multiple locations on different devices. WebUp can be installed on an Xbox as an information radiator, but you can also have it running elsewhere in an organisation showing a different subset of HTTP endpoints. A manager can also have it on their mobile device connected to the local Wi-Fi, giving them the satisfaction that everything is running fine.

Don’t need to worry about account management

With more traditional web-based monitoring solutions, usernames and passwords need to be configured to log into the service. With a UWP Windows app, the installation is linked to the user's Microsoft account, so no more little yellow notes with credentials stuck to the monitor.

Can be installed inside enterprise and behind corporate firewall

As the app is installed on the device and not hosted by a third party such as a SaaS provider, no holes need to be punched through the firewall if all your HTTP endpoints are internal.

So those are some of the reasons I opted for UWP. It is early days at the moment, but I am already getting some traction and have other features in mind for the app.

Catchy title, but it explains exactly what is in this post.

I have just released an update to one of my UWP apps called WebUp. The additional functionality I added was email notifications when an HTTP endpoint goes offline. There are different ways you can send messages from a UWP app, but I wanted to use SendGrid to manage all messaging, so in this post I will cover adding this functionality to a UWP app written using WinJS. So yes, it is possible to consume C# functionality from JavaScript, and it is easier than you think. Why SendGrid, you may ask? Simply that they allow you to send 12,000 emails per month for free; can't grumble with that.

Windows Runtime Component

This is a DLL that produces .winmd metadata, which can then be consumed by a UWP app written in any supported language such as C#, C++ and even JavaScript. Although the functionality can be somewhat limited, these components can be used to create wrappers around libraries that cannot usually be accessed from a UWP app.

What is WinJS?

This is a JavaScript framework from Microsoft that allows the developer to quickly add functionality to UWP apps. Although originally developed just for creating Windows Store apps, the library has more recently been open sourced and can be used as a framework for web applications. It can still be used, however, to create UWP apps using the JavaScript template in Visual Studio.

Creating the app

Firstly create an app using the basic WinJS template in Visual Studio.


Choose the target and minimum versions.


Now a new project needs to be added to the solution, so right-click the solution.


Choose Windows Runtime Component under the C# section.


Right-click the references of your main UWP project and add a reference to this component.


To use SendGrid for messaging, the best component to use is called LightBuzz and can be found here:- https://github.com/LightBuzz/SMTP-WinRT

Although it's easy to add references via NuGet, for some reason with UWP apps created this way the library was not copied locally, so I cloned the repo from GitHub and built the project locally, giving me a nice shiny library I can then reference old school.


Just make sure it is set to copy local in its properties.

Now for some code.

In the class file auto-generated when you created the Runtime Component, make sure you have a using statement for LightBuzz.SMTP and add this code:-
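The original snippet was lost, but here is a minimal sketch of what the component might look like. The `Mailer` class, method names and SendGrid server details are my own assumptions; the `SmtpClient` and `EmailMessage` usage follows the LightBuzz SMTP-WinRT readme:-

```csharp
using System;
using Windows.ApplicationModel.Email;
using Windows.Foundation;
using LightBuzz.SMTP;

namespace MailComponent
{
    public sealed class Mailer
    {
        // Runtime Components cannot expose Task directly,
        // so the async work is wrapped in an IAsyncAction.
        public static IAsyncAction SendAsync(string to, string subject, string body)
        {
            return SendInternalAsync(to, subject, body).AsAsyncAction();
        }

        private static async System.Threading.Tasks.Task SendInternalAsync(
            string to, string subject, string body)
        {
            // SendGrid's SMTP relay; swap in your own SendGrid credentials here.
            using (SmtpClient client = new SmtpClient(
                "smtp.sendgrid.net", 465, true, "your-username", "your-password"))
            {
                EmailMessage message = new EmailMessage();
                message.To.Add(new EmailRecipient(to));
                message.Subject = subject;
                message.Body = body;
                await client.SendMailAsync(message);
            }
        }
    }
}
```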

When using SendGrid, you can add extra credentials to your account to help separate out different usage scenarios. I have credentials set up for my WebUp app and have used them here; just swap in your own credentials.

While in the component code, have a look at the project.json file; it will list all the runtimes available to the component. When running this app, you will need to change the build platform to one of these, as 'Any CPU' does not work.


Now back in the UWP code, add something like this to the app.onactivated function; obviously you will want to put it somewhere else, but for this example it will work.
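The original snippet is missing, so here is a sketch of the call from WinJS. Note that the WinRT projection camel-cases member names, so a C# method called SendAsync is exposed to JavaScript as sendAsync. The MailComponent.Mailer names are the hypothetical ones from the component sketch above; use whatever namespace and class you created:-

```javascript
// Inside app.onactivated, after args.setPromise(WinJS.UI.processAll()).
// MailComponent.Mailer is the Windows Runtime Component projected into JavaScript.
MailComponent.Mailer.sendAsync("you@example.com", "WebUp test", "Hello from WinJS!")
    .done(function () {
        console.log("Email sent");
    }, function (err) {
        console.log("Send failed: " + err);
    });
```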

Enter your email address and run the app. With any luck you should start to receive emails direct from your app via SendGrid.

When doing Continuous Integration and Continuous Deployment, sometimes it's great to get your code into production but hidden from any end-user actions. In this post I will outline what I have been using with great success on my SaaS application ObsPlanner. There are loads of libraries and other tools that will do the same thing, but I opted for the simpler, home-grown approach.

My application is an MVC application that uses AngularJS in the views to give a better end-user experience by acting as a mini SPA, so C#, Razor and Angular all need to know which features are available and which are not.

The database structure is simple: a table with one column for the toggle name and another for whether it is enabled, like this:-
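The original screenshot is missing; roughly, with illustrative column names, the table looks like this:-

```sql
CREATE TABLE FeatureToggle (
    Id INT IDENTITY(1,1) PRIMARY KEY,
    ToggleName NVARCHAR(100) NOT NULL,
    IsEnabled BIT NOT NULL
);
```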

 

I am using Entity Framework for all my database work, so I have a FeatureToggleRepository that retrieves the data like this:-
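The original code block is missing; a sketch of what such a repository might look like follows. The context and entity names are invented for the example:-

```csharp
using System.Collections.Generic;
using System.Linq;

public class FeatureToggleRepository
{
    // Returns the names of all toggles that are currently enabled.
    public List<string> GetEnabledToggles()
    {
        using (var db = new ObsPlannerContext())
        {
            return db.FeatureToggles
                     .Where(t => t.IsEnabled)
                     .Select(t => t.ToggleName)
                     .ToList();
        }
    }
}
```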

Each of my MVC controllers inherits from a BaseController, which has methods used throughout the site, including the getting of toggles:-
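Along these lines (a sketch using the hypothetical repository above):-

```csharp
using System.Collections.Generic;
using System.Web.Mvc;

public abstract class BaseController : Controller
{
    private readonly FeatureToggleRepository _toggles = new FeatureToggleRepository();

    // Available to every controller that inherits from BaseController.
    protected List<string> GetFeatureToggles()
    {
        return _toggles.GetEnabledToggles();
    }
}
```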

Now each of my controllers can easily get the toggle list:-
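For example (the controller and view model names here are illustrative):-

```csharp
public class HomeController : BaseController
{
    public ActionResult Index()
    {
        var model = new HomeViewModel
        {
            // The view model carries the enabled toggle names to the view.
            FeatureToggles = GetFeatureToggles()
        };
        return View(model);
    }
}
```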

The toggle list is then added to a view model, which in this case includes the logged-in user details as well as the toggles. In the view I can use Razor to run a conditional against it like this:-
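Something like this, where the "NewDashboard" toggle name and the element id are invented for the example:-

```cshtml
@* Only render the feature's markup when its toggle is switched on. *@
@if (Model.FeatureToggles.Contains("NewDashboard"))
{
    <div id="new-dashboard">
        <!-- new feature markup -->
    </div>
}
```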

So that works great for the view, but what about the Angular side? Couldn't that break if certain elements are not present in the view? Well yes it could, so in the JavaScript I simply have all my toggle code nicely wrapped in functions that only get called if an element with the correct id is present in the view, like this:-
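A sketch of that guard pattern, matching the hypothetical element id from the Razor example; the document object is passed in as a parameter purely to keep the function easy to test:-

```javascript
// Feature code wrapped in a function that is only ever called when its
// container element was actually rendered by Razor.
function initNewDashboard() {
    // Angular wiring for the new dashboard would go here.
    return "new dashboard initialised";
}

// Run each feature initialiser only if its root element exists in the view.
function initFeatures(doc) {
    var initialised = [];
    if (doc.getElementById("new-dashboard") !== null) {
        initialised.push(initNewDashboard());
    }
    return initialised;
}
```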

The setup for this site is a development, a staging and a live server, all with their own copies of the database. This method allows me to push code through the whole life cycle and simply switch on the feature on the live server when I am happy with it.

Happy coding

Entity Framework code first migrations are really handy, but can be dangerous in the wrong hands as they make changes to the underlying database of your application.

Before the days when a single check-in would run through the whole CI/CD build pipeline, it was normal during development to make constant changes to a database structure as and when new features needed them. With the new model of all changes running through the CI/CD pipeline, database changes can be pushed to a test or staging server in no time at all. With this way of writing code, a few precautions have to be taken to make sure EF migrations don't wreak havoc on the database behind the staging or test servers.

Migrations in Entity Framework

A migration is a change to the schema of a database based on any model changes that have been created in the data layer of the application. If for example there is a User class and a new property is added for membership level, then with migrations enabled, a new column will be created in the database to match the new property.

But for a database already in existence, and maybe even already in production, you don't want schema changes to go directly through to production. So there are a few steps to take; these are:-

  1. Create a copy of the existing database on the local machine
  2. Reverse engineer the database to classes that can be used for Code First migrations
  3. Enable migrations
  4. Create a manual empty migration to tell EF that both model and database are in sync
  5. Disable migrations for all but local changes
  6. Generate a migration script for the dba to implement

Create a copy of the existing database on the local machine

I will leave this step up to you: some developers prefer to restore a database backup locally, others script the entire database and data and run the script locally, or you may even have a database on another server that you can use.

Reverse engineer the database to classes that can be used for Code First migrations

By using the Entity Framework Reverse POCO Generator Visual Studio plugin, it is possible to point it at the database and generate the DbContext and all the necessary POCO classes.

Right-click the project in Visual Studio and choose Add > New Item… from the context menu; you will then get a choice that includes 'EntityFramework Reverse POCO Code First Generator' in the template list.


By default this will create a context with the name MyDbContext like this:-


Simply change the ConnectionStringName to match the one in your app.config or web.config, save the file, and it will generate the necessary classes in a .cs file under the T4 template.


 

As this tool uses a T4 template to generate the classes, it is easily configurable, for example to change the context class name.

 

Enable migrations

To enable migrations, simply go to the 'Package Manager Console' view in Visual Studio. It can be found either by typing in the Quick Launch text box at the top right or via View > Other Windows > Package Manager Console. In the default project drop-down make sure the correct project is selected, then type:-

Enable-Migrations

This will create a ‘Migrations’ directory within the project which will have a Configuration.cs file inside.

This file houses the main configuration class for the migrations, and it is in here that you can change whether automatic migrations are enabled and whether data loss is acceptable. On a development machine that is probably OK, but you don't want those settings passed into staging or production, so use a pre-processor directive such as this:-
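A sketch of the Configuration class with the directive in place, assuming the MyDbContext name generated earlier:-

```csharp
using System.Data.Entity.Migrations;

internal sealed class Configuration : DbMigrationsConfiguration<MyDbContext>
{
    public Configuration()
    {
#if DEBUG
        // Fine on a local dev database, dangerous anywhere else.
        AutomaticMigrationsEnabled = true;
        AutomaticMigrationDataLossAllowed = true;
#else
        AutomaticMigrationsEnabled = false;
        AutomaticMigrationDataLossAllowed = false;
#endif
    }
}
```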

In this class you will notice that DbMigrationsConfiguration is being inherited from, using the DbContext class we created when we reverse engineered the database. It is this class that is responsible for the model and for keeping track of the migrations.

Going back to the generated context class, we need a reference to System.Data.Entity to get access to the functionality for dropping and recreating the database and pluralising names. This can be added to the OnModelCreating method like this:-
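Something along these lines (a sketch; the exact conventions you remove will depend on your schema):-

```csharp
using System.Data.Entity;
using System.Data.Entity.ModelConfiguration.Conventions;

public partial class MyDbContext : DbContext
{
    protected override void OnModelCreating(DbModelBuilder modelBuilder)
    {
#if DEBUG
        // Drop and recreate the local database whenever the model changes.
        Database.SetInitializer(new DropCreateDatabaseIfModelChanges<MyDbContext>());
#else
        // A null initializer means EF never touches the schema.
        Database.SetInitializer<MyDbContext>(null);
#endif
        // Keep table names singular, matching the reverse-engineered schema.
        modelBuilder.Conventions.Remove<PluralizingTableNameConvention>();
    }
}
```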

Setting the initializer to null stops any changes being made to the schema, whereas setting it to DropCreateDatabaseIfModelChanges will do just that.

Create a manual empty migration to tell EF that both model and database are in sync

Now that you have a model that matches the database schema, you can manually add a migration using this command:-

Add-Migration Initial -IgnoreChanges


Giving it a name makes it easier to identify later on in the migrations collection, and also generates a class with the Up and Down methods that hold the logic for changing the schema between versions. As this is the initial synced migration, these should be left empty.


Disable migrations for all but local changes

Now that the switches are in place, when the code goes through the CI/CD pipeline just build it with any configuration other than Debug. For example, in TeamCity I have a build step using MSBuild with the parameter /p:Configuration=Release

 

Generate a migration script for the dba to implement

Let's face it, nobody wants to feel the wrath of a DBA if you make changes to the production database without first notifying them. There should even be a proper change management process to follow before any changes are pushed to production. This is where the script generation feature of Entity Framework comes in handy.

By using the command:-

Update-Database -Script

You can generate a T-SQL script that will show all the changes needed to get the database up to date with the model; this script can then be run by the DBA on the staging or production servers. Here I have added a simple class called TestTable in the generated .cs file.
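The class itself is trivial; a sketch of what it might look like (remember EF also needs a DbSet on the context before it will see the new table):-

```csharp
// Added to the generated .cs file; EF will treat this as a new table.
public class TestTable
{
    public int Id { get; set; }
    public string Name { get; set; }
}

// And on the context:
// public DbSet<TestTable> TestTables { get; set; }
```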

 

 


Happy coding.

Containers on Windows 10

This is part 1 of what will most likely be a series (unless I get totally hacked off with containers like I am right now). Part of the series will be about getting containers up and running on a Windows 10 box to run a Windows-based container. Then I will progress on to Windows Server 2016 and get the same configured container running on that, hopefully discovering some kind of workflow for developers along the way. The end game is to develop a website using Visual Studio 2015, test on Nano Server running IIS and deploy using Server 2016.

Pre-requisites.


Apart from Windows 10 Professional or Enterprise, you need to be sure you are running at least the Anniversary Update (1607); to check this, go to a command line and type:- winver

I am running 1607 build 14393.187 (22nd September 2016)


Enable Containers

To get containers installed, you need to enable the feature. This can be done using either PowerShell or the UI. Here is the PowerShell:-

Enable-WindowsOptionalFeature -Online -FeatureName containers -All

Then as Windows 10 will only support Hyper-V containers, enable that feature as well:-

Enable-WindowsOptionalFeature -Online -FeatureName Microsoft-Hyper-V -All

 

Then restart the computer

 

Installing Docker


As of Docker version 1.12 there is an option to download Docker for Windows; however, to natively run Windows containers instead of using MobyLinux as a middleware virtual machine, opt to install the beta version, which can be found here:- https://docs.docker.com/docker-for-windows/

Switch to Windows Containers

Once installed, right-click the whale icon and switch to using Windows containers.

Now you can get the version of Docker, and it should show that both client and server are running Windows.

docker version


Nano Server

According to this link you can run both Server Core and Nano Server on a Windows 10 Pro machine within a Hyper-V container.

The best way to get Nano Server is from the Docker Hub at:- https://hub.docker.com/r/microsoft/nanoserver/

In your command line or PowerShell screen use:-

docker pull microsoft/nanoserver


It will take a while to download if it has not already been cached on your host machine.

List the images

To list the images you have on your host use the command:-

docker images


 

Run Nano Server

This can be achieved simply by using the docker run command; however, if you want to run it interactively you can append the -it flag and get into a command line like this:-

docker run -it microsoft/nanoserver cmd

Once it starts up you will be in the command line of the container; you can check this by using the hostname command.


You can get out of this container by typing exit, which will get you back to your host machine. Once there, use the command docker ps and it will list all running containers; you will see there are none running now. You can however use the switch -a to get all containers in both running and stopped states.


As you can see I have a lot of containers, but that's OK; I can delete them all by iterating through them and removing each one with the rm command like this:-

FOR /f "tokens=*" %i IN ('docker ps -a -q') DO docker rm %i


A nice clean slate to start again.

Happy coding.