Simon Online

2015-05-09

Do you really want "bank grade" security in your SSL? Canadian edition

In the past few days I’ve seen a few really interesting posts about having bank grade security. I was interested in them because I frequently tell my clients that the SSL certificates I’ve got them and the security I’ve set up for them are as good as what they use when they log into a bank.

As it turns out I’ve been wrong about that: the security I’ve configured is better than their bank’s. The crux of the matter is that simply picking an SSL cert and applying it is not sufficient to have good security. There are certain configuration steps that must be taken to avoid using old ciphers or weak signatures.

There are some great tools out there to test if your SSL is set up properly. I like SSL Labs’ test suite. Let’s try running those tools against Canada’s five big banks.

| Bank | Grade | SSL 3 | TLS 1.2 | SHA1 | RC4 | Forward Secrecy | POODLE |
|------|-------|-------|---------|------|-----|-----------------|--------|
| Bank of Montreal | B | Pass | Pass | Pass | Fail | Fail | Pass |
| CIBC | B | Pass | Pass | Pass | Fail | Fail | Pass |
| Royal Bank | B | Pass | Pass | Pass | Fail | Fail | Pass |
| Scotia Bank | B | Pass | Pass | Pass | Fail | Fail | Pass |
| Toronto Dominion | B | Pass | Pass | Pass | Fail | Fail | Pass |

So every one of them earns a grade of B, and every one is capped at B because they still accept the RC4 cipher. There are some known attacks on RC4 but none of them currently appear to be practical. That’s not to say that they won’t become practical in short order. The banks should certainly be leading the charge away from RC4, because when a practical exploit is found it may well be found by somebody who isn’t honest enough to report it.

Out of curiosity I tried the same test on some of Canada’s smaller banks, such as ATB Financial (I have a friend who works there and I really wanted to stick it to him for having bad security).

| Bank | Grade | SSL 3 | TLS 1.2 | SHA1 | RC4 | Forward Secrecy | POODLE |
|------|-------|-------|---------|------|-----|-----------------|--------|
| ATB Financial | A | Pass | Pass | Pass | Pass | Pass | Pass |
| Banque Laurentienne | A | Pass | Pass | Pass | Pass | Pass | Pass |
| Canadian Western Bank | A | Pass | Pass | Pass | Pass | Pass | Pass |

So all these little banks are doing a really good job, which is nice to see. It is a shame they can’t get their big banking friends to fix their stuff.

## But Simon, we have to support old browsers

Remember that time that your doctor suggested that your fluids were out of balance and you needed to be bled? No? That’s because we’ve moved on. For most things I recommend looking at your user statistics to see what percentage of your users you’re risking alienating if you use a feature that isn’t in their browser. I cannot recommend the same approach when dealing with security. This is one area where requiring newer browsers is a good call - allowing your users to be under the false impression that their connection is secure is a great disservice.

2015-05-05

A way to customize bootstrap variables

I have been learning a bunch about building responsive websites this last week. I had originally been using a handful of media-queries but I was quickly warned off this. Apparently the “correct” way of building responsive websites is to lean heavily on the pre-defined classes in bootstrap.

This approach worked great right up until I got to the navbar, that thing that sits at the top of the screen on large screens and collapses on small screens. My issue was that my navbar had a hierarchy to it, so it was a little wider than the normal version. As a result the navbar looked cramped on medium screens. I wanted to change the point at which the navbar switches between its collapsed and full versions.

Unfortunately, the suggested approach for this is to customize the bootstrap source code and rebuild it.

Bootstrap documentation

I really didn’t want to do this. The issue is that I was pulling in a nice clean bootstrap from bower. If I started to modify it then anybody who wanted to upgrade in the future would be trapped having to figure out what I did and apply the same changes to the updated bootstrap.

The solution was to go to the web brain trust that is James Chambers and David Paquette. After some discussion we came up with just patching the variables.less file in bootstrap.

## How does that look?

My project already used gulp but was built on top of sass for css, so I had to start by adding a few new packages to my project:

```shell
npm install --save-dev gulp-less
npm install --save-dev gulp-minify-css
npm install --save-dev gulp-concat
```

Then I dropped into my gulpfile. As it turns out I already had a target that moved about some vendor css files. All the settings for this task were defined in my config object. I added 4 lines to that object to list the new bootstrap variables I would need.

```javascript
vendorcss: {
    input: ["bower_components/leaflet/dist/*.css", "bower_components/bootstrap/dist/css/bootstrap.min.css"],
    output: "wwwroot/css",
    bootstrapvariables: "bower_components/bootstrap/less/variables.less",
    bootstrapvariablesoverrides: "style/bootstrap-overrides.less",
    bootstrapinput: "bower_components/bootstrap/less/bootstrap.less",
    bootstrapoutput: "bower_components/bootstrap/dist/css/"
},
```

The ```bootstrapvariables``` entry gives the location of the variables.less file within bootstrap. This is what we'll be patching. The ```bootstrapvariablesoverrides``` gives the file in my scripts directory that houses the overrides. The ```bootstrapinput``` is the name of the master file that is passed to less to do the compilation. Finally the ```bootstrapoutput``` is the place where I'd like my files put.

To the vendorcss target I added

```javascript
gulp.src([config.vendorcss.bootstrapvariables, config.vendorcss.bootstrapvariablesoverrides])
    .pipe(concat(config.vendorcss.bootstrapvariables))
    .pipe(gulp.dest('.'));
```

This takes an override file that I keep in my style directory and appends it to the end of the bootstrap variables. In it I can redefine any of the variables for bootstrap.
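
For example, to fix my cramped navbar the override file could raise the point at which it collapses. This is only a sketch of what style/bootstrap-overrides.less might contain; @grid-float-breakpoint is the Bootstrap 3 variable that controls when the navbar folds up, and the value you pick will depend on your design.

```less
// style/bootstrap-overrides.less
// Appended to bootstrap's variables.less by the gulp task above,
// so anything defined here wins over the stock definitions.

// Collapse the navbar below "large" screens instead of below "small" ones.
@grid-float-breakpoint: @screen-lg-min;
```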

```javascript
gulp.src(config.vendorcss.bootstrapinput)
    .pipe(less())
    .pipe(minifyCSS())
    .pipe(rename(function (path) {
        path.basename = "bootstrap.min";
    }))
    .pipe(gulp.dest(config.vendorcss.bootstrapoutput));
```

This bit simply runs the bootstrap build and produces an output file. Our patched variables.less is included in the newly rebuilt code. The output is passed along to the rest of the task, which I left unmodified.

The result of this is that I now have a modified bootstrap without having to actually change bootstrap. If another developer (or I) comes along to upgrade bootstrap, it should be apparent what was changed as it is all isolated in a single file.

2015-03-25

WebJobs and Deployment Slots (Azure)

I should start this post by apologizing for getting terminology wrong. Microsoft just renamed a bunch of stuff around Azure WebSites/Web Apps so I’ll likely mix up terms from the old ontology with the new ontology (check it, I used “ontology” in a sentence, twice!). I will try to favour the new terminology.

On my Web App I have a WebJob that does some background processing of some random tasks. I also use scheduler to drop messages onto a queue to do periodic tasks such as nightly reporting. Recently I added a deployment slot to the Web App to provide a more seamless experience to my users when I deploy to production, which I do a few times a day. The relationship between WebJobs and deployment slots is super confusing in my mind. I played with it for an hour today and I think I understand how it works. This post is an attempt to explain.

If you have a deployment slot with a webjob and a live site with a webjob, are both running?

Yes, both of these two jobs will be running at the same time. When you deploy to the deployment slot the webjob there is updated and restarted to take advantage of any new functionality that might have been deployed.

My job uses a queue. Does this mean that there are competing consumers any time I have a webjob in a slot?

If you have used the typical way of getting messages from a queue in a webjob, that is to say using the QueueTrigger annotation on a parameter:

```csharp
public static void ProcessQueueMessage([QueueTrigger("somequeue")] string messageText, TextWriter log)
{...}
```

then yes. Both of your webjobs will attempt to read this message. Which one gets it? Who knows!

Doesn’t that kind of break things if you’re deploying different functionality for the same queue, giving you a mix of old and new behaviour?

Yep! Messages might even be processed by both. That can happen in normal operation on multiple nodes anyway, which is why your jobs should be idempotent. You can either turn off the webjob for your slot or use differently named queues for production and your slot. This can then be configured using the new slot app settings. To do this you need to set up a QueueNameResolver; you can read about that here

What about the webjobs dashboard, will that help me distinguish what was run?

Kind of. As far as I can tell the output part of this page shows output from the single instance of the webjob running on the current slot.


However the functions invoked list shows all invocations across any instance. So the log messages might tell you one thing and the function list another. Be warned that when you swap a slot the output flips from one site to another. So if I did a swap on this dashboard and then refreshed, the output would be different but the functions invoked list would be the same.

2015-03-18

How is Azure Support?

From time to time I stumble on an Azure issue I just can’t fix. I don’t like to rely too heavily on people I know in the Azure space because they shouldn’t be punished for knowing me too much (sorry, Tyler). I’ve never opened a support ticket before and I imagine most others haven’t either. This is how the whole thing unfolded:

This time the issue was with database backups. A week or so ago I migrated one of my databases to v12 so I could get some performance improvements. I tested the migration and the performance on a new server so I was confident. I did not, however, think about testing backups. Backups are basic and would have been well tested by Microsoft, right? Turns out that isn’t the case.

The first night my backup failed and, instead of a nice .bacpac file I was left with ten copies of my database.

http://i.imgur.com/qzZReCc.png

Of course each one of these databases is consuming the same S1 sized slot on the server and is being billed to me at S1 levels. Perhaps more damning was that the automatic backup task seemed to have deleted itself from the portal. I put the task back and waited for the next backup window to hit. I also deleted the extra databases and ran a manual backup.

When the next backup window hit the same problem reoccurred. This was an issue too deep inside Azure for me to diagnose myself. I ponied up the $30/month for support and logged an issue. I feel like with my MSDN subscription I probably get some support incidents for free, but it was taking me so long to figure out how to use them that $30 was cheaper.

The timeline of the incident was something like

noon - log incident
3:42 - incident assigned to somebody from Teksystems
3:48 - scope of incident defined
3:52 - incident resolved

This Teksystems dude works fast! I hope all his incidents were as easy to solve as mine. The resolution: “Yeah, automatic backups are broken with v12. We’ll fix it at some point in the future. Do manual backups for now”

I actually think that is a pretty reasonable response. I’m not impressed that backups were broken in this way but things break and get fixed all the time. With point in time restore there was no real risk of losing data but it did throw off my usual workflow (download last night’s backup every day for development today).

What I’m upset about is that this whole 4 hour problem could have been prevented by putting this information on the Azure health page. Back in November there was a big Azure failure and one of the lessons Microsoft took away was to do a better job of updating the health dashboard. At least they claimed to have taken that lesson away. From what I can see we’re not there yet. If we, as an industry, are going to put our trust in Azure and other cloud providers then we desperately need to have transparency into the status of the system.

I was once told, in an exit interview, that I needed to do a better job of not volunteering information to customers. To this day I am totally aghast at the idea that we wouldn’t share technical details with paying customers. Customers might not care but the willingness to be above board should always be there. The CEO of the company I left is being indicted for fraud, which is something you don’t get if everybody is dedicated to the truth.

This post has diverged from the original topic of Azure support. My thoughts there are that it is really good. That $30 saved me hours of messing about with backups for days. If I had a lot of stuff running on Azure I would buy the higher support levels which, I suspect, provide an even better level of service.

2015-02-23

Book Review - Learn D3.js Mapping

I’m reviewing the book “Learn D3.js Mapping” by Thomas Newton and Oscar Villarreal.

Disclaimer: While I didn't receive any compensation for reviewing this book I did get a free digital review copy. So I guess if being paid in books is going to sway my opinion, take that into account.

The book starts with an introduction to running a web server to play host for our visualizations. For this they have chosen to use node, which is an excellent choice for this sort of lightweight usage. The authors do fall into the trap of thinking that npm stands for something; officially it doesn’t, although honestly it should stand for Node Package Manager.

The first chapter also introduces the development tools in the browser.

Chapter 2 is an introduction to the SVG format including the graphical primitives and the coordinate system. The ability to style elements via CSS is also explored. One of the really nice things is that the format for drawing paths, which is always somewhat confusing, is covered. Curved lines are even explored. The complexity of curved lines acts as a great introduction to the mapping functionality in d3, which is an abstraction over top of the complexity of wavy lines.

In chapter 3 we finally run into d3.js. The enter, exit and update functions, which are key to using d3, are introduced. The explanation is great! These are such important things and difficult to explain to first time users of d3. Finally the chapter talks about how to retrieve data for the visualization from a remote data source using ajax.

In chapter 4 we get down to business. The first thing we see is a couple of different projections available within d3. I can’t read about Mercator projections without thinking about the map episode of the West Wing. That it isn’t referenced here is, I think, a serious flaw in the book. Once a basic map has been created we move onto creating bounding boxes, choropleths (that’s a map with colours representing some dimension of data) and adding interaction through click handlers. No D3 visualization is complete without some nifty looking transitions and the penultimate section of this chapter satisfies that need. Finally we learn how to add points of interest.

Chapter 5 continues to highlight transition capabilities of D3. This includes a great introduction to zooming and panning the map through the use of panning and zooming behaviours. The chapter then moves onto changing up projections to actually show a globe instead of a two dimensional map. The map even spins! A great example and nifty to see in action.

The GeoJSON and TopoJSON file formats are explained in chapter 6. In addition the chapter explores how to simplify map data. This is actually very important for getting any sort of reasonably sized map on the internet. At issue is that today’s cartographers are really good and maps tend to have far more detail than we would ever need in a visualization.

The book finishes off with a discussion of how to go about testing visualizations and JavaScript in general.

This is an excellent coverage of a quite complex topic: mapping using D3. I would certainly recommend that if you have some mapping to do using D3 that purchasing this book might save you a whole lot of headaches.

2015-02-23

Using Two Factor Identity in ASP.net 5

First off a disclaimer:

ASP.net 5 is in beta form and I know for a fact that some of the identity related stuff is going to change next release. I know this because the identity code in [git](https://github.com/aspnet/Identity) is different from what's in the latest build of ASP.net that comes with Visual Studio 2015 CTP 5. So this tutorial will stop working pretty quickly.

Update: yep, 3 hours after I posted this the next release came out and broke everything. Check the bottom of the article for an update.

With that disclaimer in place let’s get started. This tutorial supposes you have some knowledge of how multi-factor authentication works. If not then Lifehacker has a decent introduction, or, for a more exhaustive examination, there is the Wikipedia page.

If we start with a new ASP.net 5 project in Visual Studio 2015 and select the starter template then we get some basic authentication functionality built in.

Starter project

Let’s start, appropriately, in the Startup.cs file. Here we’re going to enable two factor at a global level by adding the default token providers to the identity registration:

```csharp
services.AddIdentity<ApplicationUser, IdentityRole>(Configuration)
    .AddEntityFrameworkStores<ApplicationDbContext>()
    .AddDefaultTokenProviders();
```

The default token providers are an SMS token provider to send messages to people’s phones and an E-mail token provider to send messages to people’s e-mail. If you only want one of these two mechanisms then you can register just one with

```csharp
.AddTokenProvider(typeof(PhoneNumberTokenProvider<>).MakeGenericType(UserType))
.AddTokenProvider(typeof(EmailTokenProvider<>).MakeGenericType(UserType))
```

Next we need to enable two factor authentication on individual users. If you want this for all users then it can be enabled by setting User.TwoFactorEnabled during registration in the AccountController.

```csharp
[HttpPost]
[AllowAnonymous]
[ValidateAntiForgeryToken]
public async Task<IActionResult> Register(RegisterViewModel model)
{
    if (ModelState.IsValid)
    {
        var user = new ApplicationUser
        {
            UserName = model.UserName,
            Email = model.Email,
            CompanyName = model.CompanyName,
            TwoFactorEnabled = true,
            EmailConfirmed = true
        };
        var result = await UserManager.CreateAsync(user, model.Password);
        if (result.Succeeded)
        {
            await SignInManager.SignInAsync(user, isPersistent: false);
            return RedirectToAction("Index", "Home");
        }
        else
        {
            AddErrors(result);
        }
    }

    // If we got this far, something failed, redisplay form
    return View(model);
}
```

I also set EmailConfirmed here, although I really should make users confirm it via an e-mail. This is required to allow the EmailTokenProvider to generate tokens for a user. There is a similar field called PhoneNumberConfirmed for sending SMS messages.

Also in the account controller we’ll have to update the Login method to handle situations where the signin response is “RequiresVerification”

```csharp
switch (signInStatus)
{
    case SignInStatus.Success:
        return RedirectToLocal(returnUrl);
    case SignInStatus.RequiresVerification:
        return RedirectToAction("SendCode", returnUrl);
    case SignInStatus.Failure:
    default:
        ModelState.AddModelError("", "Invalid username or password.");
        return View(model);
}
```

This implies that there are going to be a couple of new actions on our controller. We’ll need one to render a form for users to enter the code from their e-mail and another one to accept that back and finish the login process.

We can start with the SendCode action

```csharp
[HttpGet]
[AllowAnonymous]
public async Task<IActionResult> SendCode(string returnUrl = null)
{
    var user = await SignInManager.GetTwoFactorAuthenticationUserAsync();
    if (user == null)
    {
        return RedirectToAction("Login", new { returnUrl = returnUrl });
    }
    var userFactors = await UserManager.GetValidTwoFactorProvidersAsync(user);
    if (userFactors.Contains(TOKEN_PROVIDER_NAME))
    {
        if (await SignInManager.SendTwoFactorCodeAsync(TOKEN_PROVIDER_NAME))
        {
            return RedirectToAction("VerifyCode", new { provider = TOKEN_PROVIDER_NAME, returnUrl = returnUrl });
        }
    }
    return RedirectToAction("Login", new { returnUrl = returnUrl });
}
```

I’ve taken a super rudimentary approach to dealing with errors here, just sending users back to the login page. A real solution would have to be more robust. I’ve also hard coded the name of the token provider (it is “Email”). I’m only allowing one token provider but I thought I would show the code to select one. You can render a view that shows a list from which users can select.

The key observation here is the sending of the two factor code. That is what sends the e-mail to the user.

Next we render the form into which users can enter their code:

```csharp
[HttpGet]
[AllowAnonymous]
public IActionResult VerifyCode(string provider, string returnUrl = null)
{
    return View(new VerifyCodeModel { Provider = provider, ReturnUrl = returnUrl });
}
```

The view here is a simple form with a text box into which users can paste their code

Entering a code

The final action we need to add is the one that receives the post back from this form

```csharp
[HttpPost]
[AllowAnonymous]
public async Task<IActionResult> VerifyCode(VerifyCodeModel model)
{
    if (!ModelState.IsValid)
    {
        return View(model);
    }

    var result = await SignInManager.TwoFactorSignInAsync(model.Provider, model.Code, false, false);
    switch (result)
    {
        case SignInStatus.Success:
            return RedirectToLocal(model.ReturnUrl);
        default:
            ModelState.AddModelError("", "Invalid code");
            return View(model);
    }
}
```

Again, you should handle errors better than I do here, but it gives you an idea.

The final component is to hook up a class to send the e-mail. In my case this was as simple as using SmtpClient.

```csharp
using System;
using System.Net;
using System.Net.Mail;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.AspNet.Identity;
using Microsoft.Framework.ConfigurationModel;

namespace IdentityTest
{
    public class EMailMessageProvider : IIdentityMessageProvider
    {
        private readonly IConfiguration _configuration;

        public EMailMessageProvider(IConfiguration configuration)
        {
            _configuration = configuration;
        }

        public string Name
        {
            get
            {
                return "Email";
            }
        }

        public async Task SendAsync(IdentityMessage identityMessage, CancellationToken cancellationToken = default(CancellationToken))
        {
            var message = new MailMessage
            {
                From = new MailAddress(_configuration.Get("MailSettings:From")),
                Body = identityMessage.Body,
                Subject = "Portal Login Code"
            };
            message.To.Add(identityMessage.Destination);

            var client = new SmtpClient(_configuration.Get("MailSettings:Server"));
            client.Credentials = new NetworkCredential(_configuration.Get("MailSettings:UserName"), _configuration.Get("MailSettings:Password"));
            await client.SendMailAsync(message);
        }
    }
}
```

This provider will need to be registered in the Startup.cs so the full identity registration looks like:

```csharp
services.AddIdentity<ApplicationUser, IdentityRole>(Configuration)
    .AddEntityFrameworkStores<ApplicationDbContext>()
    .AddDefaultTokenProviders()
    .AddMessageProvider<EMailMessageProvider>();
```

You should now be able to log people in using multifactor authentication just like the big companies. If you’re interested in using SMS messages to verify people both Tropo and Twilio provide awesome phone system integration options.

Update

Sure enough, as I predicted in the disclaimer, 3 hours after I posted this my install of VS2015 CTP 6 finished and all my code was broken. The fixes weren’t too bad though:

  • The Authorize attribute moved and is now in Microsoft.AspNet.Security.
  • The return type from TwoFactorSignInAsync and PasswordSignInAsync has changed to a SignInResult. This changes the code for the Login and VerifyCode actions.
```csharp
[HttpPost]
[AllowAnonymous]
[ValidateAntiForgeryToken]
public async Task<IActionResult> Login(LoginViewModel model, string returnUrl = null)
{
    if (ModelState.IsValid)
    {
        var signInResult = await SignInManager.PasswordSignInAsync(model.UserName, model.Password, model.RememberMe, shouldLockout: false);
        if (signInResult.Succeeded)
            return Redirect(returnUrl);
        if (signInResult.RequiresTwoFactor)
            return RedirectToAction("SendCode", returnUrl);
    }
    ModelState.AddModelError("", "Invalid username or password.");
    return View(model);
}
```
```csharp
[HttpPost]
[AllowAnonymous]
public async Task<IActionResult> VerifyCode(VerifyCodeModel model)
{
    if (!ModelState.IsValid)
    {
        return View(model);
    }

    var signInResult = await SignInManager.TwoFactorSignInAsync(model.Provider, model.Code, false, false);
    if (signInResult.Succeeded)
        return RedirectToLocal(model.ReturnUrl);
    ModelState.AddModelError("", "Invalid code");
    return View(model);
}
```
  • EF’s model builder syntax changed to no longer have Int() and String() extension methods. I think that’s a mistake but that’s not the point. It can be fixed by deleting and regenerating the migrations using:

    k ef migration add initial

You may need to specify the connection string in the Startup.cs as is explained here: http://stackoverflow.com/questions/27677834/a-relational-store-has-been-configured-without-specifying-either-the-dbconnectio

2015-02-20

Replace Grunt with Gulp in ASP.net 5

The upcoming version of ASP.net and Visual Studio includes first class support for both Grunt and Gulp. I’ve been using Gulp a fair bit as of late but when I created a new ASP.net 5 project I found that the template came with a gruntfile instead of a gulpfile. My tiny brain can only hold so many different tools so I figured I’d replace the default Grunt with Gulp.

Confused? Read about grunt and gulp. In short they are tools for building and working with websites. They are JavaScript equivalents of ant or gnumake, although obviously with a lot of specific capabilities for JavaScript.

The gruntfile.js is pretty basic

```javascript
// This file is the main entry point for defining grunt tasks and using grunt plugins.
// Click here to learn more. http://go.microsoft.com/fwlink/?LinkID=513275&clcid=0x409

module.exports = function (grunt) {
    grunt.initConfig({
        bower: {
            install: {
                options: {
                    targetDir: "wwwroot/lib",
                    layout: "byComponent",
                    cleanTargetDir: false
                }
            }
        }
    });

    // This command registers the default task which will install bower packages into wwwroot/lib
    grunt.registerTask("default", ["bower:install"]);

    // The following line loads the grunt plugins.
    // This line needs to be at the end of this file.
    grunt.loadNpmTasks("grunt-bower-task");
};
```
It looks like all that is being done is that bower is being run. Bower is a JavaScript package manager, and running it here will simply install the packages listed in the bower.json.

So to start we need to create a new file at the root of our project and call it gulpfile.js

Next we can open up the package.json file that controls the packages installed via npm and add in a couple of new packages for gulp.

```json
{
    "version": "0.0.0",
    "name": "IdentityTest",
    "devDependencies": {
        "grunt": "^0.4.5",
        "grunt-bower-task": "^0.4.0",
        "gulp": "3.8.11",
        "gulp-bower": "0.0.10"
    }
}
```

We have gulp here as well as a plugin that will run bower. The packages basically mirror those found in the file already for grunt. Once we’re satisfied we’ve replicated grunt properly we can come back and take out the two grunt entries. After that’s done you can run

```shell
npm install
```

from the command line to add these two packages.

In the gulp file we’ll pull in the two required packages, gulp and gulp-bower. Then we’ll set up a default task and also one for running bower

```javascript
var gulp = require('gulp');
var bower = require('gulp-bower');

gulp.task('default', ['bower:install'], function () {
    return;
});

gulp.task('bower:install', function () {
    return bower({ directory: "wwwroot/lib" });
});
```

We can test if it works by deleting the contents of wwwroot/lib and running gulp from the command line. (If you don’t already use gulp then you’ll need to install it globally using ```npm install -g gulp```.) The contents of the directory are restored and we can be confident that gulp is working.

We can now set this up as the default by editing the project.json file. Right at the bottom is

```json
"scripts": {
    "postrestore": [ "npm install" ],
    "prepare": [ "grunt bower:install" ]
}
```

We'll change this from grunt to gulp:

```json
"scripts": {
    "postrestore": [ "npm install" ],
    "prepare": [ "gulp bower:install" ]
}
```

As a final step you may want to update the bindings between Visual Studio actions and the gulp build script. This can normally be done through the task runner explorer, however at the time of writing this functionality is broken in the Visual Studio CTP. I’m assured that it will be fixed in the next release. In the meantime you can read more about gulp on David Paquette’s excellent blog.

2015-02-12

Visual Studio 2015 Not Launching

If you’re having a problem with Visual Studio 2015 not launching then, perhaps, you’re in the same boat as me. I just installed VS 2015 CTP and when I went to launch it the splash screen would blink up then disappear at once. Starting in Safe Mode didn’t help and there was nothing in any log I could find to explain it. In the end I found the solution was to open up regedit and delete the 14.0 directories under HKEY_CURRENT_USER\Software\Microsoft\VisualStudio. Any settings you had will disappear but it isn’t like you could get into Visual Studio to use those settings anyway.

Regedit
Hopefully this helps somebody.

2015-02-12

Apple Shouldn't be Asking for Your Password

My MacBook decided that yesterday was a good day to become inhabited by the spirits of trackpads past and just fire random events. When I bought this machine I also bought AppleCare which, as it turns out, was a really, really good idea. It cost me something like $350 and has, so far, saved me:

| Repair | Cost |
|--------|------|
| Power adapter | $100 |
| Trackpad | $459 |
| Screen | $642 |
| Total | $1,201 |

In the process of handing my laptop over for the latest round of repairs the Apple genius asked for my user name and password.

I blinked.

“You want what?”

The genius explained that to check that everything was working properly they would need to log in and check things like sound and network. This is, frankly, nonsense. There is no reason that the tech should need to log into the system to perform tests of this nature. In the unlikely case that the sophisticated diagnostic tools they have at their disposal couldn’t check the system then it should be standard procedure to boot into a temporary, in memory, version of OSX.

When I pushed back they said I could create a guest account on the machine. This is an okay solution but it still presents opportunity to leverage local privilege escalation exploits, should they exist. It is certainly not unusual for computer techs to steal data from the computers they are servicing. Why give Apple that opportunity?

What I find more disturbing is that a large computer company that should know better is teaching people that it is okay to give out passwords. It isn’t. If there were a 10 commandments of computer security then

Thou shalt not give out thy password

Would be very close to the top of the list. At risk is not just the integrity of that computer but also of all the passwords stored on that computer. How many people have chrome save their passwords for them? Or have active sessions that could be taken over by an attacker with their computer password? Or use the same password on many systems or sites? I don’t think a one of us could claim that none of these apply to them.

I don’t know why Apple would want to take on the liability of knowing people’s passwords. When people, even my wife, offer to give me their passwords I run from the room, fingers in ears screaming “LA LA LA LA LA” because I don’t want to know. If something goes wrong I want to be above suspicion. If there is some other way of performing the task without knowing the password then I’ll take it, even if it is slightly harder.

Apple, this is such an easy policy change, please stop telling people it is okay to give out their password. Use a live CD instead or, if it is a software problem, sit with the customer while you fix it. Don’t break the security commandments, nothing good can come of it.

2015-01-30

Sending Messages to Azure Service Bus Using REST

Geeze, that is a long blog title.

In this post I'm going to explore how to send messages to Azure Service Bus using the HTTPS endpoint. HTTP has become pretty much a lingua franca when it comes to sending messages. While it is a good thing that most every platform has a web client, HTTP is not necessarily a great protocol for this: there is quite a bit of overhead, some from HTTP itself and some from TCP. At least we're not sending important messages over UDP.

In my case here I have an application running on a device in the field (that's what we in the oil industry call anything that isn't in our nice offices). The device is running a full version of Windows, but the application from which we want to send messages is written in an odd sort of programming language that doesn't have an Azure client built for it. No worries, it does have an HTTP client.

The first step is to set up a queue to which you want to send your messages. This has to be done from the old portal. I created a namespace and in that I created a single queue.

Portal

Next we add a new queue and in the configuration for this queue we add a couple of access policies:

Access policies

I like to add one for each of the combinations of services. I don’t like a single role that isn’t the manager to be able to send and listen on a queue. It is just the principle of least privilege in action.

Now we would like to send a message to this queue using REST. There are a couple of ways of getting this done. The way I knew about was to generate a WRAP token by talking to the access control server. However, as of August 2014 the ACS namespace is not generated by default for new Service Bus namespaces. I was pointed to an article about it by my good buddy Alexandre Brisebois*. He also recommended using Shared Access Signatures instead.

A shared access signature is a token that is generated from the access key and an expiry date. I've seen these used to grant limited access to a blob in blob storage but didn't realize that there was support for them in Service Bus. These tokens can be set to expire quite quickly, meaning that even if they fall into the hands of an evildoer they're only valid for a short window which has likely already passed.

Generating one of these for Service Bus is really simple. The format looks like

```
SharedAccessSignature sr={0}&sig={1}&se={2}&skn={3}
```

  • 0 is the URL-encoded queue address
  • 1 is the signature: an HMAC-SHA256, using the key, of [queue address] + [new line] + [expiry date in seconds since epoch]
  • 2 is the expiry, again in seconds since epoch
  • 3 is the key name, in our case Send

This will generate something that looks like

```
SharedAccessSignature sr=https%3a%2f%2fultraawesome.servicebus.windows.net%2fawesomequeue%2f&sig=WuIKwkBuB%2fjxMgK6x79o3Xrf4nKZtWX9defu7HLdzWg%3d&se=1422636195&skn=Send
```
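The recipe above can be sketched in a few lines of code. This is Python rather than the field device's language, and the namespace, queue, and key below are made-up stand-ins for the example values:

```python
import base64
import hashlib
import hmac
import time
import urllib.parse

def make_sas_token(resource_uri, key_name, key, ttl_seconds=300):
    """Build a Service Bus SharedAccessSignature for resource_uri."""
    expiry = int(time.time()) + ttl_seconds
    encoded_uri = urllib.parse.quote_plus(resource_uri)
    # The signature is an HMAC-SHA256, keyed with the policy key, over the
    # URL-encoded queue address, a new line, and the expiry in seconds
    string_to_sign = "{}\n{}".format(encoded_uri, expiry)
    signature = base64.b64encode(
        hmac.new(key.encode("utf-8"),
                 string_to_sign.encode("utf-8"),
                 hashlib.sha256).digest()).decode("utf-8")
    return "SharedAccessSignature sr={}&sig={}&se={}&skn={}".format(
        encoded_uri, urllib.parse.quote_plus(signature), expiry, key_name)

# "Send" is the access policy created earlier; the key would be the one
# the portal generated for that policy (this one is invented).
token = make_sas_token(
    "https://ultraawesome.servicebus.windows.net/awesomequeue/",
    "Send", "not-a-real-key")
```

Because the expiry is baked into the signed string, a token built this way is only good until its `se` timestamp passes.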

With that in place I can now drop to curl and attempt to send a message

```
curl -X POST https://ultraawesome.servicebus.windows.net/awesomequeue/messages -H "Authorization: SharedAccessSignature sr=https%3a%2f%2fultraawesome.servicebus.windows.net%2fawesomequeue%2f&sig=WuIKwkBuB%2fjxMgK6x79o3Xrf4nKZtWX9defu7HLdzWg%3d&se=1422636195&skn=Send" -H "Content-Type: application/json" -d "I am a message"
```

This works like a dream and in the portal we can see the message count tick up.

In our application we need only generate the shared access token and we can send the message. If the environment is lacking the ability to do HMAC-SHA256 then we could call out to another application or even pre-share a token with a long expiry, although that would invalidate the advantages of time-locking the keys.
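The same POST the curl command performs can be sketched with nothing but a standard library HTTP client; here is a Python version, reusing the hypothetical endpoint and a token like the one generated above (the `sig` value is elided):

```python
import urllib.request

# Hypothetical queue endpoint and a previously generated SAS token
queue_url = "https://ultraawesome.servicebus.windows.net/awesomequeue/messages"
token = ("SharedAccessSignature sr=https%3a%2f%2fultraawesome.servicebus"
         ".windows.net%2fawesomequeue%2f&sig=...&se=1422636195&skn=Send")

# The token travels in the Authorization header, just as with curl
request = urllib.request.Request(
    queue_url,
    data=b"I am a message",
    headers={"Authorization": token,
             "Content-Type": "application/json"},
    method="POST")

# urllib.request.urlopen(request) would perform the POST; Service Bus
# answers 201 Created once the message has been queued.
```

Any HTTP client that can set a header and POST a body can do the same, which is the whole appeal for platforms without an Azure SDK.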

*We actually only met once at the MVP summit but I feel like we’re brothers in arms.