Simple explanation of bearer authentication for Web API 2

For some reason I had trouble grasping how bearer authentication is supposed to work in Web API. Not only that, our first implementation was weird, even though it worked. But enough of self-shaming; I believe I finally got it, and I want to try to explain it as simply as possible, the way I wish I had read it somewhere.

I will start with our project. We have mobile applications that work with the backend through our own REST API. Our customers have to log in to the application before they can do anything there. We already have all the required functionality to support users in our database, all the business layers, etc. Our API is implemented using Web API 2, which is OWIN based, and we host it all as an Azure web role, so multi-instance support is required. At this point I didn't want to implement an STS (Security Token Service) as a separate application, so we embedded it into our Web API. Let's see how it works, step by step.

  1. When the client wants to access the API for the very first time, it sends the user's credentials, typically a user name and password. To protect them, all calls to our API must be allowed over SSL only.
  2. If the credentials are valid, the Web API returns two tokens: an access token and a refresh token (a sample exchange is sketched after this list). Let me make a few important points here:
    1. The access token is not just some arbitrary id. The token contains claims (our Web API defines which claims are included).
    2. The token is signed, which prevents clients from manipulating its content. This is very important, since one of the claims will most likely be the user id; it can also carry claims such as the user's security role.
    3. In our multi-instance environment, a token encrypted and signed by one host must be decryptable by another. By default this works because all instances share the same machineKey values in the config file.
    4. The expiration of the access token is typically very short. It can be an hour, or even less.
    5. The access token is not stored anywhere on the server. Once it is generated, the server forgets about it.
  3. The client keeps both the access and refresh tokens and uses the access token for all calls to the API.
  4. When the server receives the access token from the client, the token is decrypted, its signature is verified, and the API can use the claims to restore the identity of the user. No additional validation against the password is required at this point. But we do need to verify the token's expiration date against the current time. If the token has expired, we notify the client.
  5. If the client receives a response saying the token has expired, it uses a different call to obtain a new access token using the refresh token.
    1. This means the client has to keep the refresh token in persistent storage. But the client should not keep the password, only the refresh token. The point is that the refresh token will also expire at some point, even if its expiration is counted in days or even weeks.
  6. The refresh token should not contain any information. It is just an id, and the server keeps the expiration date for it. Not only that, if the user becomes inactive for whatever reason, maybe the user was fired, the refresh tokens associated with this user expire immediately. The server validates that the refresh token is still valid and then generates a new access token.
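
To make the flow concrete, here is a minimal client-side sketch of the two calls against the standard OWIN /token endpoint. The host name is a placeholder, and the form fields (grant_type, username, password, refresh_token) are the standard resource owner password and refresh token grants that the OWIN middleware expects; this is just an illustration, not code from our project.

    // A minimal client-side sketch of the token flow; the host name is a placeholder
    using System;
    using System.Collections.Generic;
    using System.Net.Http;
    using System.Threading.Tasks;

    public class TokenClient
    {
        private static readonly HttpClient Http = new HttpClient { BaseAddress = new Uri("https://example.com/") };

        // Step 1: first-time login, credentials are exchanged for access + refresh tokens
        public static Task<HttpResponseMessage> LoginAsync(string userName, string password)
        {
            // Typical response body:
            // { "access_token":"...", "token_type":"bearer", "expires_in":3600, "refresh_token":"..." }
            return Http.PostAsync("token", new FormUrlEncodedContent(new Dictionary<string, string>
            {
                { "grant_type", "password" },
                { "username", userName },
                { "password", password }
            }));
        }

        // Step 5: the access token expired, so trade the refresh token for a new access token
        public static Task<HttpResponseMessage> RefreshAsync(string refreshToken)
        {
            return Http.PostAsync("token", new FormUrlEncodedContent(new Dictionary<string, string>
            {
                { "grant_type", "refresh_token" },
                { "refresh_token", refreshToken }
            }));
        }
    }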

This is the essence of bearer token authentication. Now let’s see what we need to do to support it in Web API 2. In fact, not that much.

		public void ConfigureAuth(IAppBuilder app)
		{

			// Configure the application for OAuth based flow
			var oAuthOptions = new OAuthAuthorizationServerOptions
			{
				TokenEndpointPath = new PathString("/token"), //Path of the authorization server
				Provider = new ApplicationOAuthProvider(),
				AccessTokenExpireTimeSpan = TimeSpan.FromHours(1), //Life time of the access token
				AllowInsecureHttp = false, //HTTPS is allowed only
				RefreshTokenProvider = new RefreshTokenProvider(),
				ApplicationCanDisplayErrors = true,
			};

#if DEBUG
			oAuthOptions.AllowInsecureHttp = true; //Insecure HTTP is allowed in debug mode
#endif

			// Enable the application to use bearer tokens to authenticate users
			app.UseOAuthBearerTokens(oAuthOptions);

		}

}

Let’s see what we just did:

TokenEndpointPath – the URL our clients should use to obtain tokens.

Provider – this is where we set all the required claims for our access token. If you use the "Individual Accounts" template from Visual Studio, it will generate a provider for you that uses ASP.NET Identity. I don't have anything against it, but in our case we already had a business layer that works with users, so we don't really need it. Instead, we created our own simple ApplicationOAuthProvider class, inherited from OAuthAuthorizationServerProvider just like the ASP.NET Identity template would do, and overrode the following methods:

		public override Task ValidateClientAuthentication(OAuthValidateClientAuthenticationContext context)
		{
			// Resource owner password credentials does not provide a client ID.
			if (context.ClientId == null)
			{
				context.Validated();
			}

			//We don't authenticate clients. We authenticate resource owners only

			return Task.FromResult<object>(null);
		}

		public override Task GrantRefreshToken(OAuthGrantRefreshTokenContext context)
		{
			//enforce client binding of refresh token
			if (context.Ticket == null || context.Ticket.Identity == null || !context.Ticket.Identity.IsAuthenticated)
			{
				context.SetError("invalid_grant", "Refresh token is not valid");
			}
			else
			{
				var userIdentity = context.Ticket.Identity;

				var authenticationTicket = CreateAuthenticationTicket(userIdentity);

				//Additional claim is needed to separate access token updating from authentication 
				//requests in RefreshTokenProvider.CreateAsync() method
				authenticationTicket.Identity.AddClaim(new Claim("refreshToken", "refreshToken"));

				context.Validated(authenticationTicket);
			}

			return Task.FromResult<object>(null);
		}

		public override Task TokenEndpoint(OAuthTokenEndpointContext context)
		{
			foreach (KeyValuePair<string, string> property in context.Properties.Dictionary)
			{
				context.AdditionalResponseParameters.Add(property.Key, property.Value);
			}

			return Task.FromResult<object>(null);
		}
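
One piece the snippets above don't show is GrantResourceOwnerCredentials, which runs on the initial login and is where the claims actually get set, together with the CreateAuthenticationTicket helper that GrantRefreshToken calls. Here is a rough sketch of what they might look like; _userService, ValidateCredentialsAsync, and the user.Id/user.Role properties are placeholders for whatever business layer you already have, not the exact code from our project.

    // Hypothetical reference to our existing business layer
    private readonly IUserService _userService;

    public override async Task GrantResourceOwnerCredentials(OAuthGrantResourceOwnerCredentialsContext context)
    {
        var user = await _userService.ValidateCredentialsAsync(context.UserName, context.Password);
        if (user == null)
        {
            context.SetError("invalid_grant", "The user name or password is incorrect.");
            return;
        }

        var identity = new ClaimsIdentity(context.Options.AuthenticationType);
        identity.AddClaim(new Claim(ClaimTypes.NameIdentifier, user.Id.ToString()));
        identity.AddClaim(new Claim(ClaimTypes.Role, user.Role));

        context.Validated(CreateAuthenticationTicket(identity));
    }

    private static AuthenticationTicket CreateAuthenticationTicket(ClaimsIdentity identity)
    {
        // Properties added here are copied into the token response by TokenEndpoint() above
        var properties = new AuthenticationProperties(new Dictionary<string, string>
        {
            { "userId", identity.FindFirst(ClaimTypes.NameIdentifier).Value }
        });

        return new AuthenticationTicket(identity, properties);
    }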

To support refresh tokens we implemented the RefreshTokenProvider class. It has just two methods: one creates a refresh token, and the other restores the identity from an existing refresh token.
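
Here is a rough sketch of what such a provider might look like, derived from the AuthenticationTokenProvider base class in Microsoft.Owin.Security.Infrastructure. The IRefreshTokenStore interface is a placeholder for whatever persistence you use; it has to be storage shared by all instances, since any instance can receive the refresh request.

    public class RefreshTokenProvider : AuthenticationTokenProvider
    {
        // Hypothetical persistence interface; implement it over a database shared by all instances
        private readonly IRefreshTokenStore _store;

        public RefreshTokenProvider(IRefreshTokenStore store)
        {
            _store = store;
        }

        public override async Task CreateAsync(AuthenticationTokenCreateContext context)
        {
            // The "refreshToken" claim added in GrantRefreshToken() tells us this is a refresh call;
            // here we choose not to issue a new refresh token in that case (a design choice, not a rule)
            if (context.Ticket.Identity.HasClaim(c => c.Type == "refreshToken"))
            {
                return;
            }

            // The refresh token itself is just an opaque id
            var refreshTokenId = Guid.NewGuid().ToString("n");

            context.Ticket.Properties.IssuedUtc = DateTimeOffset.UtcNow;
            context.Ticket.Properties.ExpiresUtc = DateTimeOffset.UtcNow.AddDays(14);

            // Persist the id, the protected ticket and the expiration, then hand the id to the client
            await _store.SaveAsync(refreshTokenId, context.SerializeTicket(), context.Ticket.Properties.ExpiresUtc.Value);
            context.SetToken(refreshTokenId);
        }

        public override async Task ReceiveAsync(AuthenticationTokenReceiveContext context)
        {
            // Look the id up; if it is missing, expired or revoked we simply do not set a ticket
            var stored = await _store.FindAsync(context.Token);
            if (stored != null && stored.ExpiresUtc > DateTimeOffset.UtcNow)
            {
                context.DeserializeTicket(stored.ProtectedTicket);
            }
        }
    }

    public interface IRefreshTokenStore
    {
        Task SaveAsync(string id, string protectedTicket, DateTimeOffset expiresUtc);
        Task<StoredRefreshToken> FindAsync(string id);
    }

    public class StoredRefreshToken
    {
        public string ProtectedTicket { get; set; }
        public DateTimeOffset ExpiresUtc { get; set; }
    }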

And that is it. This is all we need to implement bearer token authentication with refresh token support.

Registration for Azure Notification Hub

We have been working on our new Azure-based project for the last few months, and now it is time to work on push notifications. There are some decisions to be made here.

When one of the "pundits" was asked what he thought about a particular subject, his answer was "I don't know, I haven't written about it yet". I have mostly made these decisions already, so let's put them in writing to think them through again.

Your decisions need to be based on what you are doing. This is our setup.

  • All our users need to be authenticated to use our application.
  • Currently we are working on push notifications for messages, but later we may add more, such as some kind of alerts.
  • We use iOS and Android as the platforms for our mobile applications.

There are two ways a mobile application can register itself for push notifications in an Azure notification hub. One is using the notification hub API directly; the other is through an app server API that you develop yourself. The latter approach is described here. I believe the choice between these two approaches is determined by authentication. If all of your users require authentication, you probably need to send targeted notifications, which means one of your tags is going to identify the user. This is where the vulnerability is: if you know (or guess) the user identification, it is very easy to subscribe to other users' notifications.

If registration is done by the app server, it happens when the user is already authenticated, and our tag should not be the user's login. Instead, it should be some id that is used internally and never exposed outside of the backend.

My problem with the second approach, at least the way it is described in http://msdn.microsoft.com/en-us/library/dn743807.aspx , is that the backend now needs platform-specific code, setting different templates depending on the platform of the caller. So we made our own modification to that example: we decided that our API should accept the template from the mobile application itself. Both the mobile app and the backend need some agreement about what to expect in the notification, but this way we keep the backend abstracted from the specifics of the platform, which is the goal of the notification hub in the first place. A sketch of such a registration endpoint follows.
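
Here is roughly what that endpoint could look like with the Notification Hubs .NET SDK. The request model, the connection string placeholders, and the way the internal user id is resolved are assumptions for illustration, not the exact code from our project.

    using System.Collections.Generic;
    using System.Security.Claims;
    using System.Threading.Tasks;
    using System.Web.Http;
    using Microsoft.Azure.NotificationHubs;

    public class PushRegistrationRequest
    {
        public string Platform { get; set; }   // "apns" or "gcm"
        public string Handle { get; set; }     // APNS device token or GCM registration id
        public string Template { get; set; }   // template body supplied by the mobile app
    }

    [Authorize]
    public class PushRegistrationsController : ApiController
    {
        private static readonly NotificationHubClient Hub =
            NotificationHubClient.CreateClientFromConnectionString("<connection string>", "<hub name>");

        public async Task<IHttpActionResult> Post(PushRegistrationRequest request)
        {
            // The tag comes from the authenticated identity, never from anything the client sends
            var internalUserId = ((ClaimsIdentity)User.Identity).FindFirst(ClaimTypes.NameIdentifier).Value;

            RegistrationDescription registration = request.Platform == "apns"
                ? (RegistrationDescription)new AppleTemplateRegistrationDescription(request.Handle, request.Template)
                : new GcmTemplateRegistrationDescription(request.Handle, request.Template);

            registration.Tags = new HashSet<string> { "user:" + internalUserId };
            registration.RegistrationId = await Hub.CreateRegistrationIdAsync();

            await Hub.CreateOrUpdateRegistrationAsync(registration);
            return Ok();
        }
    }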

With this addition we get the best of both approaches: it stays secure and abstracted from the platform.

Application settings in Azure roles

Whether you develop a web application or a Windows service, you probably have some application settings in your config files. For Azure roles we have another option: settings can be stored in .cscfg files and are then even available for modification through the Azure portal.

In our company we have different teams working on different applications, so the same settings ended up in config files in one application and in Azure settings in another. That made me think about some rules, and I would like to share my thoughts.

Let’s break application configuration settings into categories:

Application settings: These are the settings developers create to adjust the behavior of the application. Technically, they could be defined as constants in your code, but to make it clear that those values are actually configurable, you define them as configuration settings. Examples: thresholds for your processing algorithms, colors, authentication methods, etc. When you need to change these settings, you typically do not just change them in the production environment; you actually test them in QA and release a new version. IT doesn't need to be aware of those settings.

Environment settings: Usually you have at least three different environments in your development: your local computer, QA, and then production. Very often only your IT personnel know the settings for your production environment. Examples: connection strings, URLs to other services, SMTP settings, parameters supporting scalability. Typically, these settings do not change often; they are just different between environments.

Debug/troubleshooting settings: These settings need to be temporarily changed when something goes wrong and you are troubleshooting. Ideally, you would love to change them without restarting your service. You may need to troubleshoot your application in the production environment, so IT has to be aware of them. Examples: logging level settings, turning your plugins on and off, etc.

In a typical deployment you would pack all of them into config files. If you wanted to maintain them better, you could break them into multiple files.

In Azure you can actually put your environment and debug settings into Azure settings. They are just ideal for this. Just look at the benefits:

Your IT doesn't need to look at your config files anymore. And it totally makes sense, since after your role is deployed you don't really have an easy way to change them. So you put only your environment and troubleshooting settings into Azure settings, and only those are available to your IT admins. It makes life easier both for you and for them.

On top of that, for debug settings I highly recommend supporting the RoleEnvironment.Changed event. This way you can change them without restarting your roles, even worker roles. Isn't it nice? A minimal sketch is shown below.
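
Here is a minimal sketch of handling the event in a role entry point; the "LogLevel" setting name is just an example, not something from our project.

    using System.Linq;
    using Microsoft.WindowsAzure.ServiceRuntime;

    public class WorkerRole : RoleEntryPoint
    {
        public override bool OnStart()
        {
            // As long as no handler sets e.Cancel = true in RoleEnvironment.Changing,
            // the instance is not recycled and Changed fires once the new values are applied
            RoleEnvironment.Changed += OnEnvironmentChanged;
            return base.OnStart();
        }

        private static void OnEnvironmentChanged(object sender, RoleEnvironmentChangedEventArgs e)
        {
            var logLevelChanged = e.Changes
                .OfType<RoleEnvironmentConfigurationSettingChange>()
                .Any(c => c.ConfigurationSettingName == "LogLevel");

            if (logLevelChanged)
            {
                var newLevel = RoleEnvironment.GetConfigurationSettingValue("LogLevel");
                // Re-apply the logging level here, without restarting the role
                System.Diagnostics.Trace.TraceInformation("Log level changed to {0}", newLevel);
            }
        }
    }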

Bottom line: put all settings that are the same for all environments, and those that IT should not be aware of, into your config files. Put your environment and troubleshooting settings into Azure settings, and support the RoleEnvironment.Changed event for the troubleshooting ones.

Latin locale was not supported by Windows Server 2012

There was an interesting problem discovered today by our QA. We are developing a web site that will be hosted in Azure. Without giving it a second thought, I just created the web roles under Windows Server 2012, which I believe was the default option.

However, a bug was logged: if you set your locale to Mexican and use Chrome (and Chrome only), the server's CultureInfo was set to Spanish "es-es". Our web developer identified that Chrome sends es-419 in the Accept-Language header, and it wasn't converted properly to a CultureInfo.

The solution was simple: change the configuration to use Windows Server 2012 R2.
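
For a cloud service this is done in the service configuration; assuming the standard .cscfg schema, bumping the osFamily attribute is all it takes (the service and role names below are placeholders):

    <ServiceConfiguration serviceName="MyService" osFamily="4" osVersion="*"
        xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration">
      <!-- osFamily="3" is Windows Server 2012; osFamily="4" is Windows Server 2012 R2 -->
      <Role name="MyWebRole">
        <Instances count="2" />
        <ConfigurationSettings />
      </Role>
    </ServiceConfiguration>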

Logging in Azure (Part 2 – Trace in Azure roles)

If you want to use tracing in your web or worker roles, you need to configure it differently compared to a web site.

In fact, the Visual Studio template will do most of the work for you, but it's better if you understand what it does.

First of all, a trace listener is added to the application. You can find it in your web.config file:

<system.diagnostics>
    <trace>
      <listeners>
        <add type="Microsoft.WindowsAzure.Diagnostics.DiagnosticMonitorTraceListener, Microsoft.WindowsAzure.Diagnostics, Version=2.3.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35"
          name="AzureDiagnostics">
          <filter type="" />
        </add>
      </listeners>
    </trace>
  </system.diagnostics>

The next question is where your logs will be stored. Naturally for Azure it is going to be a storage account, which means you have to create the storage account before you continue with the configuration. Once you have it, select your role in Visual Studio, go to the properties, and set everything up there.

What actually happens is that two files are updated: ServiceConfiguration.Cloud.cscfg and diagnostics.wadcfg.

After you have everything set up, you can start looking at it. You can use either Visual Studio or some third-party tools to explore your storage account. In Visual Studio it looks like this:

WADLogsTable is where you will find your trace information.

Updated:

I recently read this post and noticed one important point. By default, only errors are going to be logged. You can change that either by modifying "scheduledTransferLogLevelFilter" in diagnostics.wadcfg, or through the parameters in Visual Studio. Select your role, go to properties, and look at the Configuration tab; the "Diagnostics" settings let you change the log level and a few other things.
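
For reference, here is roughly what the relevant fragment of diagnostics.wadcfg looks like (the quota and transfer values are just examples); scheduledTransferLogLevelFilter on the Logs element is the attribute to change:

  <DiagnosticMonitorConfiguration configurationChangePollInterval="PT1M" overallQuotaInMB="4096"
      xmlns="http://schemas.microsoft.com/ServiceHosting/2010/10/DiagnosticsConfiguration">
    <!-- Change the filter from Error to Warning, Information or Verbose to capture more than errors -->
    <Logs bufferQuotaInMB="1024" scheduledTransferPeriod="PT1M" scheduledTransferLogLevelFilter="Verbose" />
  </DiagnosticMonitorConfiguration>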

FQDN for SmtpClient

I got an interesting problem yesterday. Our IT updated the SMTP server, and the new server requires a FQDN (Fully Qualified Domain Name) in the HELO command. The problem is that the SmtpClient component does not provide a public property to set this.

A solution was found, though. We just need to add this to the config file:

	<system.net>
		<mailSettings>
			<smtp>
				<network
					clientDomain="mail.domain.com"
				/>
			</smtp>
		</mailSettings>
	</system.net>

Logging in Azure (Part 1 – Trace in Web sites)

Recently our team started working on a new project, which will be hosted in Azure. One of the first things you need to figure out when you start a project in a new environment is what logging options are available. I'm going to cover all the options I know about.

Although we host our application as Web Roles, I will start with Web sites today, specifically using Trace for logging. My next topic will be Web/Worker Roles, where I will cover both Trace and Enterprise Library support.

Let’s start.

I created a simple web application with this code in my home controller:

		public ActionResult Index()
		{
			try
			{
				Trace.WriteLine(string.Format("Index at {0}", DateTime.Now));
				return View();
			}
			catch (Exception ex)
			{
				Trace.TraceError(string.Format("Error {0} at {1}", ex.Message, DateTime.Now));
				throw;
			}
		}

To turn on logging for our web site we go to the Config tab in the Windows Azure Portal and look at the "Application diagnostics" section:

Web site Application Diagnostics

For the logging level we have the options Verbose, Information, Warning, and Error. They correspond to the Trace.WriteLine, Trace.TraceInformation, Trace.TraceWarning, and Trace.TraceError methods.

File system
If we select the file system, there are two ways of accessing our logs. One is through Visual Studio. First, you need to open the Server Explorer window and connect to your Azure account. There you can select your web site in the tree view. It looks like this for me:

Web site Logging File System VS

Another option is downloading the files through FTP. You need to set up your FTP credentials and then connect using your favorite FTP client. To get the URL, go to the Dashboard tab:
Web site Logging Dashboard

On the FTP site you will find the log files in the LogFiles/Application folder.

Table storage
The better option is actually using table storage. Before using this option, you have to create an Azure storage account. Then you can press the "manage table storage" button, select your storage account, and type the name of the table. You don't need to create the table; it will be created automatically.

To access the table you can use either Visual Studio, via the same Server Explorer window, or third-party tools.

Blob storage
Blob storage is something in between. You use your storage account, but instead of a table, a .csv file is created in the blob container you specify.

Everything is pretty straightforward; it's going to be more interesting when we get to Enterprise Library, particularly the Semantic Logging Application Block. But I wanted to cover the basics first.

Really?

This is just unbelievable. Look at this http://connect.microsoft.com/SQLServer/feedback/details/243527/

Practically, if you use connection pooling, and you always do, the isolation level is not reset when a connection goes back to the pool. That means you actually have to set the isolation level explicitly instead of relying on the default. Nobody does that. I'm shocked. We found out the hard way, with locks in production.
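
In other words, state the isolation level on every transaction. A minimal ADO.NET sketch (the table and query are made up for illustration):

    using System.Data;
    using System.Data.SqlClient;

    public static class IsolationLevelExample
    {
        public static void UpdateOrderStatus(string connectionString)
        {
            using (var connection = new SqlConnection(connectionString))
            {
                connection.Open();

                // A pooled connection may still carry the isolation level set by its previous user,
                // so state it explicitly instead of assuming READ COMMITTED
                using (var transaction = connection.BeginTransaction(IsolationLevel.ReadCommitted))
                using (var command = new SqlCommand("UPDATE dbo.Orders SET Status = 1 WHERE Id = 42", connection, transaction))
                {
                    command.ExecuteNonQuery();
                    transaction.Commit();
                }
            }
        }
    }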

Custom data sources in SSRS (Part 4)

We have our custom data source ready for deployment. The last question is how we get it into Reporting Services.

As you remember, we had to implement some interfaces to make it work, which means Reporting Services has to somehow find out about our classes. This is done in a simple config file. When Reporting Services starts, it looks for extensions in the config file, then loads the registered dll and creates our classes. Unfortunately, that means we need to stop Reporting Services to register our custom data source, or even to update it with a newer version.

Steps to register a custom data source

  1. Stop the reporting service
  2. Drop your dll(s) into the binary folder. On my server it is "C:\Program Files\Microsoft SQL Server\MSRS11.MSSQLSERVER\Reporting Services\ReportServer\bin"
  3. Open C:\Program Files\Microsoft SQL Server\MSRS11.MSSQLSERVER\Reporting Services\ReportServer\rsreportserver.config
  4. Find the <Extensions> node, then <Data>
  5. Add <Extension Name="<Your Name>" Type="<Your namespace>.RdpConnection, <Your dll name>">

There is a very interesting feature you may find useful. If your extension needs to use some values from config files, like appSettings, you can actually add them here. It will look like this:

      <Extension Name="{name}" Type="{Type}">
        <Configuration>
          <appSettings>
            <clear />
            <add key="{key}" value="{value}" />
          </appSettings>
        </Configuration>
      </Extension>

And that is it. You can now use your custom data source.

Custom data sources in SSRS (Part 3)

If you didn't read the first part, start from here.

By now we have found out how to implement a custom data source, which is in fact a connection. We also know that in the end we need to implement the IDbCommand interface. Let's take a look at it.

public interface IDbCommand : IDisposable
{
    string CommandText { get; set; }
    int CommandTimeout { get; set; }
    CommandType CommandType { get; set; }
    IDataParameterCollection Parameters { get; }
    IDbTransaction Transaction { get; set; }

    void Cancel();
    IDataParameter CreateParameter();
    IDataReader ExecuteReader(CommandBehavior behavior);
}

Do you remember how we put some text into Query Text in Report Builder? CommandText is where we analyze that value. For example, we can define three known values, like Customers, Orders, and Order Items. These are the values our data source recognizes, and it returns data for each of them in ExecuteReader.

As you can see, you don't actually return data here, but rather an implementation of IDataReader. I will not cover the implementation details of IDataReader; it is actually irrelevant as long as it works. What I want to note is that you can in fact read all your data first and then implement a simple reader that just returns the next row each time. It will use more memory, but the way Reporting Services works, all data will be requested immediately anyway, since Reporting Services does grouping and sorting and therefore has to read all the data first. A sketch of ExecuteReader dispatching on the query text is shown below.
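
Here is roughly what that dispatch could look like; RdpDataReader (our IDataReader implementation) and the _dataService calls are placeholders for illustration, not the exact code from our project:

public IDataReader ExecuteReader(CommandBehavior behavior)
{
    // Dispatch on the text the report author typed into Query Text in Report Builder
    switch (CommandText.Trim())
    {
        case "Customers":
            return new RdpDataReader(_dataService.GetCustomers(Parameters));
        case "Orders":
            return new RdpDataReader(_dataService.GetOrders(Parameters));
        case "Order Items":
            return new RdpDataReader(_dataService.GetOrderItems(Parameters));
        default:
            throw new NotSupportedException("Unknown query: " + CommandText);
    }
}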

For our purposes it is enough to support only CommandType.Text, so we have this implementation for CommandType:

public CommandType CommandType
{
    get
    {
        return CommandType.Text;
    }
    set
    {
        if (value != CommandType.Text)
            throw new NotSupportedException();
    }
}

We found that we can get away without implementing the Cancel method; it was never called.

What is important is to implement parameters. Most likely you will need to support some kind of parameters, at least to filter your data. This is not difficult: a parameter is nothing more than a container for IDataParameter. There is literally nothing to it but these two properties.

public class RdpDataParameter : IDataParameter
{
    public RdpDataParameter(string parameterName, object value)
    {
        ParameterName = parameterName;
        Value = value;
    }

    public RdpDataParameter()
    {
    }

    #region Implementation of IDataParameter

    public string ParameterName { get; set; }

    public object Value { get; set; }

    #endregion
}

And the last property is Transaction. Any dummy class will work. Here is what we have:

public IDbTransaction Transaction
{
    get { return _trans; }
    set { _trans = (RdpTransaction)value; }
}
public class RdpTransaction: IDbTransaction
{
    #region Implementation of IDisposable

    public void Dispose()
    {
        throw new NotImplementedException();
    }

    #endregion

    #region Implementation of IDbTransaction

    public void Commit()
    {
        throw new NotImplementedException();
    }

    public void Rollback()
    {
        throw new NotImplementedException();
    }

    #endregion
}

The last thing I need to cover is field names. You remember that we didn't actually implement support for the "Refresh fields" button; instead, we populated the fields manually. What do we put there? Just the property names of the objects we return from the reader. If we return a set of Customer objects with a property named Name, we should add a field named Name.

In the next and last post I will cover how to register our data source for Reporting Services.