Require.js is a JavaScript module loader, helping to reduce complexity of applications by managing dependencies and allowing easier separation and organisation of code into modules.

Asynchronous Module Definition (AMD) modules are a way for applications to manage dependencies and achieve easier separation by organising code into modules. Require.js is one such AMD module loader and can be used with jQuery, Node and others, but first the JavaScript needs to be refactored appropriately.

Run JSLint over the code

JSLint is a tool that can be run from the command line, through Gulp or Grunt, or even in the browser at http://www.jslint.com/. It highlights issues with the JavaScript which, once fixed, will produce better quality code.

Structure the JavaScript

After the issues highlighted by JSLint have been fixed, organise the code so that it follows a more structured outline such as having a collection of vars and functions.
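For example, the pre-module code might be laid out something like this, with a single variable and a couple of functions that use it (foo here is just a placeholder name):

var foo = 'bar';

function getFoo() {
    return foo;
}

function setFoo(value) {
    foo = value;
}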

 

Add the AMD define call

For it to be recognised by module loaders such as require.js, the code needs to be wrapped in define calls which will turn it into an AMD module.
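A sketch of the earlier code wrapped in a define call, with everything still returned publicly, might look like this:

define(function () {
    var foo = 'bar';

    function getFoo() {
        return foo;
    }

    function setFoo(value) {
        foo = value;
    }

    // Everything is returned here, so it is all publicly accessible
    return {
        foo: foo,
        getFoo: getFoo,
        setFoo: setFoo
    };
});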

 

Dependencies such as jQuery

To add any dependencies such as jQuery, simply include them in the define call array such as this:-
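Something like this, where jQuery is listed in the dependency array and passed into the module function as $:

define(['jquery'], function ($) {
    var foo = 'bar';

    function getFoo() {
        return $.trim(foo);
    }

    function setFoo(value) {
        foo = value;
    }

    return {
        foo: foo,
        getFoo: getFoo,
        setFoo: setFoo
    };
});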

 

Any dependencies added in this way also need to be AMD compatible modules, or a shim might be needed. Documentation for using jQuery with require.js can be found here:- http://requirejs.org/docs/jquery.html

Add a little Encapsulation

One of the advantages of the module pattern is that it can be structured so that variables and functions can be internalised and effectively defined in a 'private' scope. The module's return object can define exactly what should be publicly accessible. In the previous code snippet, the variable foo and both functions were all returned by the module. This means that they would all be publicly accessible by any client code. Instead, we want to ensure that foo and the functions are initially defined as private, and then expose the functions explicitly in the return object.
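Reworking the module that way might look like this:

define(['jquery'], function ($) {
    // foo is now private to the module
    var foo = 'bar';

    function getFoo() {
        return $.trim(foo);
    }

    function setFoo(value) {
        foo = value;
    }

    // Only the functions are exposed publicly
    return {
        getFoo: getFoo,
        setFoo: setFoo
    };
});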

 

Happy coding

Through the life of this blog in its many guises I have used Blogger, Community Server and WordPress. Now comes the time to move again, and I have opted to use MiniBlog, a project created by Mads Kristensen of Visual Studio Web Essentials fame.

About MiniBlog

MiniBlog is written in C# with ASP.Net and uses the website template in Visual Studio. It has a simple architecture and persists the blog posts as physical XML files on the web server drive. There is much more to this platform though, such as:-

  • Windows Live Writer support
  • RSS and ATOM feeds
  • Support for robots.txt and sitemap.xml
  • Much much more…

Why move?

Although I have had a great experience using WordPress over the past few years, I have become more aware of the bloat that is downloaded to the user's browser in the form of JavaScript and CSS. As a developer who strives to optimise web pages to give users a better experience, this didn't sit right with me.


This is the old site's JavaScript, showing the data downloaded to the user's browser.


The CSS is also loaded with the many styles from various plugins used in WordPress.

WordPress is written in PHP and my experience is in .Net; the back end is MySQL, again a technology I am not 100% experienced with. So making the changes to optimise the blog to a level I was happy with would mean either rewriting the entire theme and persistence layer, or moving to a technology stack I have more control over. MiniBlog gives me this. I can create the style and layout I want with Razor and CSS and tweak the caching layer in the C# code. I can also get gulp up and running with tasks to concatenate and minify the CSS and JavaScript files. And like other projects I am working on, I can easily create a workflow in TeamCity to do the build and deploy.

What I had to do

Firstly I had to export all my blog posts from WordPress into a format that MiniBlog can understand. Again thanks to Mads Kristensen, I could use the MiniBlogFormatter tool to get my posts into the correct format. This tool outputs each post to a separate XML file. Then, going through the current blog, I found and separated out the posts I wanted to keep, those covering topics and technologies that are still current. Then I created a temporary website in IIS to test these posts. It soon became apparent that the directory structure was different, as MiniBlog uses a post directory to store files. I didn't want to lose any links to popular posts, especially those that are referred to from other sites, so I created a URL rewrite rule to redirect users coming in on an old URL to the new structure.
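As a rough sketch, assuming the old WordPress date-based permalinks, the rule in web.config could look something like this:

<system.webServer>
  <rewrite>
    <rules>
      <!-- Redirect old WordPress-style URLs to the new MiniBlog /post/ structure -->
      <rule name="OldPostRedirect" stopProcessing="true">
        <match url="^(\d{4})/(\d{2})/(\d{2})/(.+)$" />
        <action type="Redirect" url="/post/{R:4}" redirectType="Permanent" />
      </rule>
    </rules>
  </rewrite>
</system.webServer>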

Optimization

As I have mentioned in a past post, concatenating and minifying JavaScript and CSS can be achieved by using gulp. Once this was done, the number of files downloaded was much smaller than before.
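For reference, a minimal gulpfile sketch for this (the plugin names and paths are my assumption) might be:

var gulp = require('gulp');
var concat = require('gulp-concat');
var uglify = require('gulp-uglify');
var cssmin = require('gulp-cssmin');

// Bundle and minify all the JavaScript into one file
gulp.task('scripts', function () {
    return gulp.src('scripts/*.js')
        .pipe(concat('site.min.js'))
        .pipe(uglify())
        .pipe(gulp.dest('dist'));
});

// Bundle and minify all the CSS into one file
gulp.task('styles', function () {
    return gulp.src('css/*.css')
        .pipe(concat('site.min.css'))
        .pipe(cssmin())
        .pipe(gulp.dest('dist'));
});

gulp.task('default', ['scripts', 'styles']);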


Now only 16 files are downloaded as opposed to 46 in the old site.


Also with the CSS, there are only 2 requests as opposed to 7.

With the images, I used Paint.Net to decrease the resolution and size for better performance.

Then in IIS, I navigated to the HTTP Response Headers section for the directories that contain the JavaScript and CSS files.


Clicked on the ‘Set common headers’ link in the top right hand corner.


Set the expires date to a date far into the future.
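The equivalent can also be set in web.config for those directories; for example, a sketch using a one-year max age instead of a fixed date:

<system.webServer>
  <staticContent>
    <!-- Cache the static JavaScript and CSS for a year -->
    <clientCache cacheControlMode="UseMaxAge" cacheControlMaxAge="365.00:00:00" />
  </staticContent>
</system.webServer>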

Comments

On WordPress I was getting a lot of spam comments, which Akismet took care of, so I had to find an alternative that would also protect me. I enjoy reading the comments and like to respond, so switching comments off was not an option. I eventually went for Disqus, which has a plugin architecture that was really simple to implement.

So there you have it, this is now the new blog site. It will change in style over the next few weeks as I make tweaks here and there, but with the CI/CD workflow I have in place this is very easy to work with.

What next?

I run all my sites off the same box, which also has both MySQL and SQL Server running on it, so soon I can switch off MySQL, uninstall the PHP processor from IIS and hopefully free up some more resources.

Happy coding

Offline Web Applications

The HTML 5 specification, now supported in most evergreen web browsers, gave us the power of offline web applications. It works by downloading a manifest file to the client which lists all the files needed to make that page usable even if there is no network connection. This manifest file can include JavaScript, CSS and HTML files. Of course it is possible to create an ASP.Net MVC application that uses this same technique, as it renders HTML 5 anyway; however there are a few gotchas to look out for, which I will cover in this post. Firstly, the path to the .manifest file needs to be included in the html element at the top of the page.
<html lang="en" manifest="offline.manifest">
The manifest file
For an MVC site, simply add the reference to the manifest in the _Layout file and add each resource you want in the browser application cache.
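A sketch of what offline.manifest might contain for that template (exact file names and versions will vary):

CACHE MANIFEST
# v1.0.0

CACHE:
/
/Content/bootstrap.css
/Content/site.css
/Scripts/jquery-1.10.2.js
/Scripts/bootstrap.js
/Scripts/respond.js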
 
These resources above are used for the standard MVC template you get with Visual Studio 2015.
Now to get IIS to serve up this offline.manifest file, you either have to add a MIME type on the server or add it to the <system.webServer> element in web.config.
It's important that you add the clear element, and you then need to add the usual types your site will serve up, otherwise you will get an HTTP 500 on each of these.
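That section of web.config might look something like this (extend the list with whatever your site actually serves):

<system.webServer>
  <staticContent>
    <clear />
    <mimeMap fileExtension=".manifest" mimeType="text/cache-manifest" />
    <mimeMap fileExtension=".css" mimeType="text/css" />
    <mimeMap fileExtension=".js" mimeType="application/javascript" />
    <mimeMap fileExtension=".png" mimeType="image/png" />
    <mimeMap fileExtension=".jpg" mimeType="image/jpeg" />
    <mimeMap fileExtension=".html" mimeType="text/html" />
  </staticContent>
</system.webServer>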
In Chrome, when browsing to this page, you can examine the Application Cache by opening the developer tools (F12) and going to the Resources tab.
Make sure that the 'Disable cache' check box is cleared; it is checked by default when you open the developer tools, and the browser will then not retrieve the files it needs from the cache.
Now, by switching off your network connection or setting the Network throttling drop down to Offline (I disable my network card temporarily for a full test), your website should still display correctly, and as the JavaScript has also been cached, it should function the same as well.
The main issue with the .manifest file is that the browser does not know when the contents listed in it have changed unless the manifest file itself has changed. One way to handle this is to version the file, and an even better way is to version it using your application build number.
There are many issues with MVC serving up a .manifest file and the easiest solution I have found is to write out a physical file to the location the site is hosted in like this.
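A sketch of that (the file list and the version stamping are assumptions) could be:

public ActionResult Index()
{
    // Use the build number so the manifest changes on every deploy
    var version = System.Reflection.Assembly.GetExecutingAssembly()
        .GetName().Version.ToString();

    var manifest = new System.Text.StringBuilder();
    manifest.AppendLine("CACHE MANIFEST");
    manifest.AppendLine("# Build " + version);
    manifest.AppendLine("CACHE:");
    manifest.AppendLine("/Content/site.css");
    manifest.AppendLine("/Scripts/jquery-1.10.2.js");

    // Write the physical file into the hosted site under /Manifest
    System.IO.File.WriteAllText(
        Server.MapPath("~/Manifest/offline.manifest"),
        manifest.ToString());

    return View();
}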
This can be placed inside the Home/Index controller. Then add the path in the _Layout file like this.


<html lang="en" manifest="/Manifest/offline.manifest">


You will need to add write permissions to this directory for the IIS account so it can delete and write the file. Now if your build number increments each time you do a build and deploy, the manifest file will be regenerated and the client browser will pick up any changes.


Happy coding
 

Creating an IIS Base Image for Containers

Now that we have a base image from the last post, we can use it to create other images and containers. Open PowerShell and call Get-ContainerImage; we should see our WindowsServerCore image.
For the sake of this demonstration we will let our containers get an IP address from DHCP on the network; other situations will call for different DHCP configurations, with containers and their IP addresses treated as completely throwaway.
So again on the host machine create a new VM switch; I have it mapped to my external network, which will handle the DHCP.


New-VMSwitch -Name DHCP -NetAdapterName Ethernet


Now we create a new container which will become our base IIS image, so call

New-Container -Name ServerCoreIIS -ContainerImageName WindowsServerCore -SwitchName "DHCP"


Then start it up with Start-Container -Name ServerCoreIIS
We will need to enable the IIS feature, so enter a PowerShell session.
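That can be done with the same cmdlet used later for the web containers:

Enter-PSSession -ContainerName "ServerCoreIIS" -RunAsAdministrator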
Then invoke the command to install the Web Server feature.


Invoke-Command -ScriptBlock {Install-WindowsFeature Web-Server}


It will run through and should show an Exit Code of Success.
Get the IP address.
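From inside the session, something as simple as this will do:

ipconfig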
We should see the typical IIS welcome page when we browse to it.
Now call Stop-Container ServerCoreIIS
We now need to create an image from that container, so call the Get-Container command and pipe it to New-ContainerImage like this.


Get-Container -Name ServerCoreIIS | New-ContainerImage -Publisher YourCompany -Name ServerCoreIIS -Version 1.0


I have named both the container and image the same so I can easily determine the relationship.
When calling Get-ContainerImage we should see our 2 base images.
Now we can create any number of containers based off the ServerCoreIIS image like this.
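Following the pattern used earlier (the container name here matches the next step):

New-Container -Name IISWeb01 -ContainerImageName ServerCoreIIS -SwitchName "DHCP"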
Start one up, enter a PowerShell session using Enter-PSSession -ContainerName "IISWeb01" -RunAsAdministrator and get the IP address.
We should then see the IIS welcome page again when we browse to that address.
So now we have a static website running from c:\inetpub\wwwroot on the container. In the next post I will configure the base IIS image to run an MVC website and create a container from that.


Happy coding.
 

There has been quite a lot written lately about the MEAN stack, which is MongoDB, Express, Angular and Node, but I am going to describe the architecture I have used to create ObsPlanner.com. It comprises both the MEAN stack and ASP.Net MVC, Web.API and SQL Server, as well as a number of third party components.

About ObsPlanner.com

The ObsPlanner.com website is a web application aimed at the more technical amateur astronomer market. It allows the user to plan in advance what astronomical objects they wish to observe by taking into account obstructions such as buildings and then optimises the plan around the times the user is at the telescope. Plans can then be downloaded and loaded into telescope controlling software which will automate the plan. Future releases will also include a mobile/offline mode and maybe even control the scope itself via Bluetooth. But that is dependent on much more research.

This is a brief overview of the architecture I chose.

ObsPlanner schematic

 

ASP.Net MVC Razor Views

By using Razor views, I could quickly create all the views for the front end based on layouts. Also with Razor and MVC I had a security model that could validate the user and redirect them if they were not authenticated or authorized to go to that view.

Angular js

Each view has an angular controller to manage its data binding and any service calls to either the Node or Web.API layers. By using angular it was possible to create a neatly structured code base as well as give the end user a fast UI experience.
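As a flavour of that, a controller might look something like this (the module name and service URL are illustrative):

angular.module('obsPlannerApp')
    .controller('PlanController', ['$scope', '$http', function ($scope, $http) {
        $scope.objects = [];

        // Call the Web.API layer and bind the results to the view
        $http.get('/api/plan/objects').then(function (response) {
            $scope.objects = response.data;
        });
    }]);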

ASP.Net MVC & Web.API

With ASP.Net MVC and Web.API I was able to utilize the forms based security model, as well as interface with the Entity Framework layer that manages all the relational data from SQL Server. The services also use token based authentication, which was very easy to set up using MVC filters and attributes.

Node js & Express

As the site has a large amount of math based calculations, I wanted to use a language and technology that took this responsibility away from the client machine, as I couldn't guarantee what inconsistencies would be introduced there. So I opted for node.js, meaning previously written JavaScript components could be re-used with little change. It runs on IIS using the IISNode plugin and neatly interfaces with MongoDB via Express.
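As an illustration (the database and collection names are made up), a typical Express route serving data out of MongoDB looks something like this:

var express = require('express');
var MongoClient = require('mongodb').MongoClient;

var app = express();

app.get('/api/objects/:catalogue', function (req, res) {
    MongoClient.connect('mongodb://localhost:27017/obsplanner', function (err, db) {
        if (err) { return res.status(500).send(err.message); }

        // Each celestial catalogue is its own collection
        db.collection(req.params.catalogue).find({}).limit(50).toArray(function (err, docs) {
            db.close();
            if (err) { return res.status(500).send(err.message); }
            res.json(docs); // JSON goes straight back to the angular controllers
        });
    });
});

// IISNode supplies the port (a named pipe) via the environment
app.listen(process.env.PORT || 3000);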

Entity Framework (EF)

I used a code first approach to create the entities that are relational in nature, so the SQL Server database could be dropped and re-generated as the model changed. I also used the repository and Unit of Work patterns, which led to a faster integration with the MVC and Web.API controllers.
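The shapes involved, roughly (the entity names are illustrative):

using System;
using System.Linq;

public class User { public int Id { get; set; } }
public class Location { public int Id { get; set; } }

// A generic repository over any entity type
public interface IRepository<T> where T : class
{
    IQueryable<T> GetAll();
    T GetById(int id);
    void Add(T entity);
    void Remove(T entity);
}

// The Unit of Work groups the repositories and commits their changes together
public interface IUnitOfWork : IDisposable
{
    IRepository<User> Users { get; }
    IRepository<Location> Locations { get; }
    int Save();
}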

SQL Server

This is the data store for the more relational entities such as the user, location and account data. I was happier choosing this instead of having all data in Mongo, as I was familiar with the security side of things and I wanted to use Entity Framework with the repository and Unit of Work patterns.

MongoDB

I chose this as part of the back end for a fast, read-only data store that easily and quickly integrates with Node and Express. All the astronomical data is stored in this database, made up of multiple collections covering thousands of celestial objects. By using a data store that manages JSON natively, it was very easy to retrieve astronomical data and pass it to the front end in the same format the angular controllers work with.

Stripe

As ObsPlanner is a multi-tiered application, I wanted an easily integrated payment system without having to worry about compliance with the banking systems. Stripe has an easy to use API layer that can be used both in the front end layer and in the middle tier written in C#. When a user does opt to move up to the Pro level, none of the card details go through the ObsPlanner system; they go only to Stripe's systems.

SendGrid

There are a number of internal subsystems which need to manage sending messages to registered users. This is done by SendGrid, which has a simple API and allows access to message statistics such as errors, open rates etc.

All this is running on one instance of Windows Server 2008 R2; not ideal I know, but it is all I can afford just now.

So that is a quick rundown of the architecture of ObsPlanner.com as of February 2016. It may change in the coming months as more users register and I detect any pain points.

Happy coding