Registration for Azure Notification Hub

We have been working on our new Azure-based project for the last few months, and now it is time to work on push notifications. There are some decisions to be made here.

When one of the “pundits” was asked what he thought about a particular subject, his answer was “I don’t know, I haven’t written about it yet”. I have mostly made these decisions already; let me put them in writing to think them through once more.

Your decisions need to be based on what you are building. This is our setup:

  • All our users need to be authenticated to use our application.
  • Currently we are working on push notifications for messages, but later we may add more, like some kind of alerts.
  • We use iOS and Android as the platforms for our mobile applications.

There are two ways a mobile application can register itself for push notifications in Azure Notification Hubs. One is using the direct API of the notification hub; the other is through an App Server API that you develop yourself. The latter approach is described here. I believe the choice between these two approaches is determined by authentication. If all of your users require authentication, you probably need to send targeted notifications, which means one of your tags is going to be a user identifier. This is where the vulnerability is: if you know (or guess) the user identifier, it is very easy to subscribe to other users’ notifications.

If registration is done by the app server, it happens when the user is already authenticated, and our tag should not be the user’s login. Instead, it should be some id that is used internally and never exposed outside of the backend.

My problem with the second approach, at least in the way it is described in the link us/library/dn743807.aspx, is that the backend now needs platform-specific code, setting different templates depending on the platform of the caller. So we made our own modification to that example: we decided that our API should accept the template from the mobile application itself. The mobile app and the backend still need some agreement about what to expect in the notification, but this way we abstract the backend from the specifics of the platform, which is the goal of the notification hub in the first place.
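To give an idea, here is a minimal sketch of such a registration endpoint (the request contract, controller, and lookup helper are all hypothetical names, and the calls assume the Microsoft.ServiceBus.Notifications client library):

using System.Threading.Tasks;
using System.Web.Http;
using Microsoft.ServiceBus.Notifications;

// Hypothetical contract agreed upon between the mobile app and the backend.
public class RegistrationRequest
{
    public string Platform { get; set; } // "apns" or "gcm"
    public string Handle { get; set; }   // device token / GCM registration id
    public string Template { get; set; } // template JSON supplied by the app
}

[Authorize] // registration is only possible for authenticated users
public class RegisterController : ApiController
{
    private static readonly NotificationHubClient Hub =
        NotificationHubClient.CreateClientFromConnectionString(
            "<connection string>", "<hub name>");

    public async Task<IHttpActionResult> Post(RegistrationRequest request)
    {
        // Tag with an internal id, never the user's login.
        string internalId = LookupInternalUserId(User.Identity.Name);
        string[] tags = { "user:" + internalId };

        if (request.Platform == "apns")
            await Hub.CreateAppleTemplateRegistrationAsync(request.Handle, request.Template, tags);
        else
            await Hub.CreateGcmTemplateRegistrationAsync(request.Handle, request.Template, tags);

        return Ok();
    }

    private string LookupInternalUserId(string login)
    {
        // Hypothetical: resolve the login to a backend-only identifier.
        throw new System.NotImplementedException();
    }
}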

With this addition, we get the best of both approaches: it stays secure and abstracted from the platform.

Application settings in Azure roles

Whether you develop a web application or a Windows service, you probably have some application settings in your config files. For Azure roles we have another option: there are settings that can be stored in .cscfg files, which are then even available for modification through the Azure portal.

In our company we have different teams working on different applications, so the same settings ended up in config files in one application and in Azure settings in another. It made me think about some rules, and I would like to share my thoughts.

Let’s break application configuration settings into categories:

Application settings: These are the settings that developers create to adjust the behavior of the application. Technically, they could be defined as constants in your code, but to make it clear that those values are actually configurable, you define them as configuration settings. Examples: thresholds for your processing algorithms, colors, authentication methods, etc. When you need to change these settings, you typically do not just change them in the production environment; you have to actually test them in QA and release a new version. IT doesn’t need to be aware of those settings.

Environment settings: Usually you have at least three different environments in your development: your local computer, QA, and then production. Very often only your IT personnel know the settings for your production environment. Examples: connection strings, URLs of other services, SMTP settings, parameters for supporting scalability. Typically, those settings do not change often; they are just different between environments.

Debug/troubleshooting settings: These settings need to be temporarily changed when something goes wrong and you troubleshoot. Ideally, you would love to change them without restarting your service. You may need to troubleshoot your application in the production environment, so IT has to be aware of them. Examples: all logging level settings, turning your plugins on and off, etc.

In a typical deployment, you would pack all of them into config files. If you wanted to maintain them better, you could break them into multiple files, as shown below.
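For example, .NET config files support the configSource attribute, which lets a whole section live in its own file (the file name here is arbitrary):

<!-- web.config: pull the appSettings section from a separate file -->
<appSettings configSource="appSettings.config" />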

In Azure you can actually put your environment and debug settings into Azure settings, which are ideal for this. Just look at the benefits:

Your IT doesn’t need to look at your config files anymore. And that totally makes sense, since after your role is deployed you don’t really have an easy way to change them. So you put only your environment and troubleshooting settings into Azure settings, and only those are available to your IT admins. It makes life easier both for you and for them.

On top of that, for debug settings, I highly recommend supporting the RoleEnvironment.Changed event. This way you can change them without restarting your roles, even worker roles. Isn’t it nice?
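Here is a minimal sketch of how that could look in a worker role (the “LogLevel” setting name and the ApplyLogLevel helper are hypothetical):

using System.Linq;
using Microsoft.WindowsAzure.ServiceRuntime;

public class WorkerRole : RoleEntryPoint
{
    public override bool OnStart()
    {
        // Leave e.Cancel = false for pure setting changes, so the instance
        // is NOT recycled and the new values are applied in place.
        RoleEnvironment.Changing += (s, e) =>
        {
            e.Cancel = e.Changes.Any(c => !(c is RoleEnvironmentConfigurationSettingChange));
        };

        // React to the new values once they are in effect.
        RoleEnvironment.Changed += (s, e) =>
        {
            foreach (var change in e.Changes.OfType<RoleEnvironmentConfigurationSettingChange>())
            {
                if (change.ConfigurationSettingName == "LogLevel")
                    ApplyLogLevel(RoleEnvironment.GetConfigurationSettingValue("LogLevel"));
            }
        };

        return base.OnStart();
    }

    private void ApplyLogLevel(string level)
    {
        // Hypothetical: reconfigure your logging here.
    }
}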

Bottom line: put all settings that are the same for all environments, and those that IT should not be aware of, into your config files. Put your environment and troubleshooting settings into Azure settings, and support the RoleEnvironment.Changed event for the troubleshooting ones.

Latin American locale was not supported by Windows Server 2012

An interesting problem was discovered today by our QA. We are developing a web site that will be hosted in Azure. Without giving it a second thought, I just created web roles under Windows Server 2012, which I believe was the default option.

However, a bug was logged: if you set your locale to Mexican and use Chrome (and Chrome only), the server’s CultureInfo was set to Spanish “es-es”. Our web developer identified that es-419 was sent in the Accept-Language header by Chrome, and it wasn’t converted properly to a CultureInfo.

The solution was simple: change the configuration to use Windows Server 2012 R2.
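For a cloud service, that is the osFamily attribute in ServiceConfiguration.Cloud.cscfg; a sketch with a placeholder service name (as far as I know, osFamily="4" is Windows Server 2012 R2, while "3" is Windows Server 2012):

<ServiceConfiguration serviceName="MyService"
    xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration"
    osFamily="4" osVersion="*">
  <!-- roles ... -->
</ServiceConfiguration>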

Logging in Azure (Part 2 – Trace in Azure roles)

If you want to use tracing in your web or worker roles, you need to configure it differently compared to a web site.

In fact, the Visual Studio template will do most of the work for you, but it’s better if you understand what it does.

First of all, a trace listener is added to the application. You can find it in your web.config file:

<system.diagnostics>
  <trace>
    <listeners>
      <add type="Microsoft.WindowsAzure.Diagnostics.DiagnosticMonitorTraceListener, Microsoft.WindowsAzure.Diagnostics, Version=2.3.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35"
           name="AzureDiagnostics">
        <filter type="" />
      </add>
    </listeners>
  </trace>
</system.diagnostics>

The next question is where your logs will be stored. Naturally for Azure it is going to be a storage account, which means you have to create this storage account before you continue your configuration. Once you have it, you can select your role in Visual Studio, go to its properties, and set everything up there.

What actually happens is that two files get updated. One is ServiceConfiguration.Cloud.cscfg, the other is diagnostics.wadcfg.
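In the .cscfg, for instance, the diagnostics connection string is set for your role (the role name, account name, and key below are placeholders):

<Role name="MyWebRole">
  <ConfigurationSettings>
    <Setting name="Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString"
        value="DefaultEndpointsProtocol=https;AccountName=myaccount;AccountKey=..." />
  </ConfigurationSettings>
</Role>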

After you have everything set up, you can start looking at the logs. You can use Visual Studio or some third-party tools to explore your storage account. In Visual Studio it looks like this:

WADLogsTable is where you will find your trace information.

Updated:

I recently read this post and noticed one important point. By default, only errors are going to be logged. You can change that either by modifying “scheduledTransferLogLevelFilter” in diagnostics.wadcfg, or through parameters in Visual Studio: select your role, go to properties, and look at the Configuration tab. Change the “Diagnostics” settings there to change your log level, among other settings.
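For reference, this is roughly what the relevant part of diagnostics.wadcfg looks like (the quota and transfer values below are examples):

<DiagnosticMonitorConfiguration
    xmlns="http://schemas.microsoft.com/ServiceHosting/2010/10/DiagnosticsConfiguration"
    configurationChangePollInterval="PT1M" overallQuotaInMB="4096">
  <!-- change Error to Warning/Information/Verbose to capture more -->
  <Logs bufferQuotaInMB="1024"
      scheduledTransferPeriod="PT1M"
      scheduledTransferLogLevelFilter="Error" />
</DiagnosticMonitorConfiguration>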

FQDN for SmtpClient

I ran into an interesting problem yesterday. Our IT updated the SMTP server, and the new server required an FQDN (Fully Qualified Domain Name) server name in the HELO command. The problem is that the SmtpClient component does not provide a public property for this to be set.

A solution was found, though. We needed to add this to the config file:

<system.net>
  <mailSettings>
    <smtp>
      <network clientDomain="mail.domain.com" />
    </smtp>
  </mailSettings>
</system.net>

Logging in Azure (Part 1 – Trace in Web sites)

Recently our team started working on a new project, which will be hosted in Azure. One of the first things you need to figure out when you start a project in a new environment is what logging options are available. I’m going to cover all the options I know about.

While we host our application as Web Roles, I will start with Web sites today, specifically using Trace for logging. My next topic will be Web/Worker Roles, where I will cover both Trace and Enterprise Library support.

Let’s start.

I created a simple web application, with this simple code in my home controller:

public ActionResult Index()
{
    try
    {
        Trace.WriteLine(string.Format("Index at {0}", DateTime.Now));
        return View();
    }
    catch (Exception ex)
    {
        Trace.TraceError(string.Format("Error {0} at {1}", ex.Message, DateTime.Now));
        throw;
    }
}

To turn on logging for our web site, we go to the Config tab in the Windows Azure Portal and look at the “application diagnostics” section:

[Image: Web site Application Diagnostics]

For the logging level we have the options Verbose, Information, Warning, and Error. They directly correspond to the Trace.WriteLine, Trace.TraceInformation, Trace.TraceWarning, and Trace.TraceError methods.
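For example, with the level set to Warning, only the last two calls below would be captured:

Trace.WriteLine("entering Index");       // Verbose
Trace.TraceInformation("view rendered"); // Information
Trace.TraceWarning("slow response");     // Warning (captured)
Trace.TraceError("request failed");      // Error (captured)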

File system
If we select the file system, there are two ways of accessing our logs. One is through Visual Studio. First, you need to open the Server Explorer window and connect to your Azure account. There, you can select your web site in the tree view. It looks like this for me:

[Image: Web site Logging File System VS]

Another option is downloading these files through FTP. You need to set up your FTP credentials and then connect using your favorite FTP client. To get the URL, go to the Dashboard tab:
[Image: Web site Logging Dashboard]

On your FTP site you will find the log files in the LogFiles/Application folder.

Table storage
The better option is actually using table storage. Before using this option, you have to create an Azure storage account. Then you can press the “manage table storage” button, select your storage account, and type the name of the table. You don’t need to create the table; it will be created automatically.

To access the table you can use either Visual Studio (the same Server Explorer window) or third-party tools.

Blob storage
Blob storage is something in between. You use your storage account, but instead of a table, a .csv file is created in the blob container you specify.

Everything here is pretty straightforward; it’s going to be more interesting when we get to Enterprise Library, particularly the Semantic Logging application block. But I wanted to cover the basics first.

Really?

This is just unbelievable. Look at this http://connect.microsoft.com/SQLServer/feedback/details/243527/

Practically, if you use connection pooling, and you always do, the isolation level is not reset when a connection is returned to the pool. That means you actually have to set the isolation level manually instead of relying on the default setting. Nobody does that. I’m shocked. We found it the hard way, with locks in production.
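So now we set it explicitly on every transaction; a minimal sketch (the query and table name are placeholders):

using System.Data;
using System.Data.SqlClient;

static int CountOrders(string connectionString)
{
    using (var conn = new SqlConnection(connectionString))
    {
        conn.Open();
        // A pooled connection keeps whatever isolation level its previous
        // user set, so state the level explicitly instead of assuming
        // the READ COMMITTED default.
        using (var tran = conn.BeginTransaction(IsolationLevel.ReadCommitted))
        using (var cmd = new SqlCommand("SELECT COUNT(*) FROM Orders", conn, tran))
        {
            int count = (int)cmd.ExecuteScalar();
            tran.Commit();
            return count;
        }
    }
}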

Custom data sources in SSRS (Part 4)

We have our custom data source ready for deployment. The last question is how we deploy it to Reporting Services.

As you remember, we had to implement some interfaces to make it work, which means Reporting Services should somehow find out about our classes. It is done in a simple config file. When Reporting Services starts, it looks for extensions in the config file, loads the registered dll, and creates our classes. Unfortunately, that means we need to stop Reporting Services to register our custom data source, or even to update it with a newer version.

Steps to register a custom data source

  1. Stop Reporting Services.
  2. Drop your dll(s) into the binary folder. On my server it is “C:\Program Files\Microsoft SQL Server\MSRS11.MSSQLSERVER\Reporting Services\ReportServer\bin”.
  3. Open C:\Program Files\Microsoft SQL Server\MSRS11.MSSQLSERVER\Reporting Services\ReportServer\rsreportserver.config.
  4. Find the node <Extensions>, then <Data>.
  5. Add <Extension Name="<Your Name>" Type="<Your namespace>.RdpConnection, <Your dll name>">

There is a very interesting feature you may find useful. If your extension needs to use some values from config files, like appSettings, you can actually add them here. It will look like this:

      <Extension Name="{name}" Type="{Type}">
        <Configuration>
          <appSettings>
            <clear />
            <add key="{key}" value="{value}" />
          </appSettings>
        </Configuration>
      </Extension>

And this is it. You can now use your custom data source.

Custom data sources in SSRS (Part 3)

If you didn’t read the first part, start from here.

By now we have found out how to implement a custom data source, which is in fact a connection. We also know that in the end we need to implement the IDbCommand interface. Let’s take a look at it.

public interface IDbCommand : IDisposable
{
    string CommandText { get; set; }
    int CommandTimeout { get; set; }
    CommandType CommandType { get; set; }
    IDataParameterCollection Parameters { get; }
    IDbTransaction Transaction { get; set; }

    void Cancel();
    IDataParameter CreateParameter();
    IDataReader ExecuteReader(CommandBehavior behavior);
}

Do you remember how we put some text into Query Text in Report Builder? CommandText is where we should analyze this value. For example, we can define three known values, like Customers, Orders, and Order Items. These will be the values our data source recognizes, returning data for each of them from ExecuteReader.
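A sketch of what that dispatch could look like (RdpDataReader, the Load helpers, and the row types are hypothetical):

public IDataReader ExecuteReader(CommandBehavior behavior)
{
    switch (CommandText)
    {
        case "Customers":   return new RdpDataReader(LoadCustomers(), typeof(Customer));
        case "Orders":      return new RdpDataReader(LoadOrders(), typeof(Order));
        case "Order Items": return new RdpDataReader(LoadOrderItems(), typeof(OrderItem));
        default:
            throw new NotSupportedException("Unknown query: " + CommandText);
    }
}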

As you can see, you don’t actually return data here, but rather an implementation of IDataReader. I will not cover the implementation details of IDataReader; it is actually irrelevant as long as it works. What I want to note is that you can in fact read all your data first and then implement a simple reader that just returns the next row each time. It will use more memory, but the way Reporting Services works, all data will be requested immediately anyway: Reporting Services does grouping and sorting itself, so it has to read all the data first.
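Here is a sketch of such a buffered reader, assuming the minimal set of reader members Reporting Services expects (FieldCount, GetName, GetFieldType, GetOrdinal, GetValue, Read); it also shows how field names simply mirror property names:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Reflection;

public class RdpDataReader : IDataReader
{
    private readonly List<object> _rows;     // everything is read up front
    private readonly PropertyInfo[] _props;  // e.g. Customer.Name -> field "Name"
    private int _current = -1;

    public RdpDataReader(IEnumerable<object> rows, Type rowType)
    {
        _rows = rows.ToList();
        _props = rowType.GetProperties();
    }

    public int FieldCount { get { return _props.Length; } }
    public string GetName(int i) { return _props[i].Name; }
    public Type GetFieldType(int i) { return _props[i].PropertyType; }
    public int GetOrdinal(string name) { return Array.FindIndex(_props, p => p.Name == name); }
    public object GetValue(int i) { return _props[i].GetValue(_rows[_current], null); }
    public bool Read() { return ++_current < _rows.Count; }
    public void Dispose() { }
}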

For our purpose it is enough to support only CommandType.Text, so we have this implementation for CommandType:

public CommandType CommandType
{
    get
    {
        return CommandType.Text;
    }
    set
    {
        if (value != CommandType.Text)
            throw new NotSupportedException();
    }
}

We found that we can get away without implementing the Cancel method; it was never called.

What is important is to implement parameters. Most likely you will need to support some kind of parameters, at least to filter your data. This is not difficult: a parameter is nothing more than a container implementing IDataParameter, and there is literally nothing to it but these two properties.

public class RdpDataParameter : IDataParameter
{
    public RdpDataParameter(string parameterName, object value)
    {
        ParameterName = parameterName;
        Value = value;
    }

    public RdpDataParameter()
    {
    }

    #region Implementation of IDataParameter

    public string ParameterName { get; set; }

    public object Value { get; set; }

    #endregion
}

And the last property is Transaction. Any dummy class will work. Here is what we have:

public IDbTransaction Transaction
{
    get { return _trans; }
    set { _trans = (RdpTransaction)value; }
}
public class RdpTransaction: IDbTransaction
{
    #region Implementation of IDisposable

    public void Dispose()
    {
        throw new NotImplementedException();
    }

    #endregion

    #region Implementation of IDbTransaction

    public void Commit()
    {
        throw new NotImplementedException();
    }

    public void Rollback()
    {
        throw new NotImplementedException();
    }

    #endregion
}

The last thing I need to cover is field names. You remember that we didn’t actually implement support for the “Refresh fields” button; instead, we populated our fields manually. What do we put there? Just the property names of the objects we return from the reader. If we return a set of Customer objects with a property named Name, we should add a field named Name.
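In other words, for a hypothetical Customer like this, the manually added fields would be “Name” and “Country”:

public class Customer
{
    public string Name { get; set; }    // field "Name" in the report
    public string Country { get; set; } // field "Country" in the report
}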

In the next and last post I will cover how to register our data source for Reporting Services.

Custom data sources in SSRS (Part 2)

If you didn’t read the first part, start from here.

Now that we know how our data source looks in Report Builder, it is clear that we need to do something to register our data source. What is it, and how do we register it?

What we need to register is the class that implements the IDbConnectionExtension interface. The interface itself is defined in the Microsoft.ReportingServices.Interfaces.dll assembly.

public class RdpConnection : IDbConnectionExtension

There are many members to implement, but many of them can have a dummy implementation, like these:

public string Impersonate
{
    set { _impersonate = value; }
}

public string UserName
{
    set { _username = value; }
}

public string Password
{
    set { _password = value; }
}

public bool IntegratedSecurity
{
    get;
    set;
}

But there is one that does real work. It is IDbConnection.CreateCommand(). Let’s look at it:

public IDbCommand CreateCommand()
{
    return new RdpCommand(this);
}

public string ConnectionString
{
    get { return _conn; }
    set { _conn = value; }
}

public int ConnectionTimeout
{
    get { return 0; }
}

As you can see, we have a new class, RdpCommand. So now we have actually delegated all our work to this new class. There is really nothing else important here.