FQDN for SmtpClient

I ran into an interesting problem yesterday. Our IT department updated the SMTP server, and the new server required a fully qualified domain name (FQDN) in the HELO command. The problem is that the SmtpClient component does not expose a public property to set this.

A solution was found, though. We need to add this to the config file:

	<system.net>
		<mailSettings>
			<smtp>
				<network clientDomain="mail.domain.com" />
			</smtp>
		</mailSettings>
	</system.net>
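
With that setting in place, SmtpClient picks up the client domain automatically; nothing changes in code. A minimal sketch, where the server name and addresses are placeholders:

using System.Net.Mail;

class Program
{
    static void Main()
    {
        // clientDomain from <system.net>/<mailSettings> is used in the
        // HELO/EHLO command automatically; there is no property to set in code.
        using (var client = new SmtpClient("smtp.domain.com"))
        {
            client.Send("from@domain.com", "to@domain.com", "Test", "Hello");
        }
    }
}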

Logging in Azure (Part 1 – Trace in Web sites)

Recently our team started working on a new project that will be hosted in Azure. One of the first things you need to figure out when you start a project in a new environment is what logging options are available. I’m going to cover all the options I know about.

Although we host our application as Web Roles, I will start with Web Sites today, specifically using Trace for logging. My next topic will be Web/Worker Roles, where I will cover both Trace and Enterprise Library support.

Let’s start.

I created a simple web application with this code in my home controller:

		// Trace here is System.Diagnostics.Trace
		public ActionResult Index()
		{
			try
			{
				Trace.WriteLine(string.Format("Index at {0}", DateTime.Now));
				return View();
			}
			catch (Exception ex)
			{
				Trace.TraceError(string.Format("Error {0} at {1}", ex.Message, DateTime.Now));
				throw;
			}
		}

To turn on logging for our web site, go to the Configure tab in the Windows Azure Portal and look at the “application diagnostics” section:

[Screenshot: Web site application diagnostics]

For the logging level we have the options Verbose, Information, Warning, and Error. They map directly to the Trace.WriteLine, Trace.TraceInformation, Trace.TraceWarning, and Trace.TraceError methods.

File system
If we select the file system, there are two ways of accessing our logs. One is through Visual Studio: open the Server Explorer window and connect to your Azure account. There, you can select your web site in the tree view. It looks like this for me:

[Screenshot: Web site logging file system in Visual Studio]

Another option is downloading the files through FTP. You need to set up your FTP credentials and then connect using your favorite FTP client. To get the URL, go to the Dashboard tab:

[Screenshot: Web site logging dashboard]

On the FTP site you will find the log files in the LogFiles/Application folder.

Table storage
The better option is actually using table storage. Before using this option, you have to create an Azure storage account. Then you can press the “manage table storage” button, select your storage account, and type the name of the table. You don’t need to create the table; it will be created automatically.

To access the table you can use either Visual Studio, in the same Server Explorer window, or one of the third-party storage tools.
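
You can also query the table in code. A minimal sketch using the Azure Storage client library; “WebSiteLogs” stands for whatever table name you typed in the portal, the connection string is a placeholder, and the “Message” column name is an assumption about the generated schema:

using System;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Table;

class LogDump
{
    static void Main()
    {
        var account = CloudStorageAccount.Parse(
            "DefaultEndpointsProtocol=https;AccountName=...;AccountKey=...");
        var table = account.CreateCloudTableClient().GetTableReference("WebSiteLogs");

        // Each trace call becomes a row; print the message of every entity.
        foreach (DynamicTableEntity entity in table.ExecuteQuery(new TableQuery()))
            Console.WriteLine(entity.Properties["Message"].StringValue);
    }
}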

Blob storage
Blob storage is something in between. You use your storage account, but instead of a table, a .csv file is created in the blob container you specify.

Everything here is pretty straightforward; it will get more interesting when we get to Enterprise Library, particularly the Semantic Logging Application Block. But I wanted to cover the basics first.

Really?

This is just unbelievable. Look at this http://connect.microsoft.com/SQLServer/feedback/details/243527/

In practice, if you use connection pooling, and you always do, the isolation level is not reset when a connection is returned to the pool. That means you actually have to set the isolation level explicitly rather than rely on the default. Nobody does that. I’m shocked. We found out the hard way, with locks in production.
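
The workaround is simple: state the isolation level every time you take a connection from the pool. A minimal sketch, where the connection string and query are placeholders:

using System.Data;
using System.Data.SqlClient;

class Demo
{
    static void Main()
    {
        var connectionString = "..."; // placeholder

        // A pooled connection keeps whatever isolation level its previous
        // user set, so state the level explicitly instead of assuming the default.
        using (var conn = new SqlConnection(connectionString))
        {
            conn.Open();
            using (var tran = conn.BeginTransaction(IsolationLevel.ReadCommitted))
            using (var cmd = new SqlCommand("SELECT 1", conn, tran))
            {
                cmd.ExecuteScalar();
                tran.Commit();
            }
        }
    }
}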

Custom data sources in SSRS (Part 4)

We have our custom data source ready for deployment. The last question is how we get it into Reporting Services.

As you remember, we had to implement some interfaces to make it work, which means Reporting Services must somehow find out about our classes. This is done in a simple config file. When Reporting Services starts, it looks for extensions in the config file, loads the registered DLLs, and creates our classes. Unfortunately, that means we need to stop Reporting Services to register our custom data source, or even to update it with a newer version.

Steps to register a custom data source

  1. Stop Reporting Services.
  2. Drop your DLL(s) in the binary folder. On my server it is “C:\Program Files\Microsoft SQL Server\MSRS11.MSSQLSERVER\Reporting Services\ReportServer\bin”.
  3. Open C:\Program Files\Microsoft SQL Server\MSRS11.MSSQLSERVER\Reporting Services\ReportServer\rsreportserver.config.
  4. Find the <Extensions> node, then <Data>.
  5. Add <Extension Name="<Your Name>" Type="<Your namespace>.RdpConnection, <Your dll name>" />

There is a very interesting feature you may find useful. If your extension needs to use some values from a config file, like appSettings, you can actually add them here. It will look like this:

      <Extension Name="{name}" Type="{Type}">
        <Configuration>
          <appSettings>
            <clear />
            <add key="{key}" value="{value}" />
          </appSettings>
        </Configuration>
      </Extension>
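
Reporting Services hands the contents of the Configuration element to your extension through IExtension.SetConfiguration. A rough sketch of reading the appSettings block, assuming your connection class also implements IExtension and keeps a hypothetical _settings dictionary:

// Sketch: SSRS calls this with the XML from the <Configuration> element.
public void SetConfiguration(string configuration)
{
    var doc = new System.Xml.XmlDocument();
    doc.LoadXml(configuration);

    foreach (System.Xml.XmlNode node in doc.SelectNodes("//appSettings/add"))
        _settings[node.Attributes["key"].Value] = node.Attributes["value"].Value;
}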

And that’s it. You can now use your custom data source.

Custom data sources in SSRS (Part 3)

If you didn’t read the first part, start here.

By now we have found out how to implement a custom data source, which is in fact a connection. We also know that in the end we need to implement the IDbCommand interface. Let’s take a look at it.

public interface IDbCommand : IDisposable
{
    string CommandText { get; set; }
    int CommandTimeout { get; set; }
    CommandType CommandType { get; set; }
    IDataParameterCollection Parameters { get; }
    IDbTransaction Transaction { get; set; }

    void Cancel();
    IDataParameter CreateParameter();
    IDataReader ExecuteReader(CommandBehavior behavior);
}

Do you remember how we put some text into the Query Text in Report Builder? CommandText is where we analyze this value. For example, we can define three known values, like Customers, Orders, and Order Items. These will be the values our data source recognizes, returning data for each of them in ExecuteReader, as the sketch after the next paragraph shows.

As you can see, you don’t actually return data here, but rather an implementation of IDataReader. I will not cover the implementation details of IDataReader; they are irrelevant as long as it works. What I want to note is that you can in fact read all your data first and then implement a simple reader that just returns the next row each time. It will use more memory, but because of the way Reporting Services works, all data will be requested immediately anyway: Reporting Services does the grouping and sorting itself, so it has to read all the data first.
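
For example, the CommandText dispatch could look like this sketch, where RdpDataReader and the Load* helpers are hypothetical names for our reader and data-loading code:

public IDataReader ExecuteReader(CommandBehavior behavior)
{
    // Recognize the known query texts and return a reader over
    // the pre-loaded rows.
    switch (CommandText)
    {
        case "Customers":   return new RdpDataReader(LoadCustomers());
        case "Orders":      return new RdpDataReader(LoadOrders());
        case "Order Items": return new RdpDataReader(LoadOrderItems());
        default:
            throw new NotSupportedException("Unknown query: " + CommandText);
    }
}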

For our purpose it is enough to support only CommandType.Text, so we have this implementation for CommandType:

public CommandType CommandType
{
    get
    {
        return CommandType.Text;
    }
    set
    {
        if (value != CommandType.Text)
            throw new NotSupportedException();
    }
}

We found that we can get away without implementing the Cancel method; it was never called.

What is important is to implement parameters. Most likely you will need to support some kind of parameters, at least to filter your data. This is not difficult: a parameter is nothing more than a container implementing IDataParameter. There is literally nothing to it but these two properties.

public class RdpDataParameter : IDataParameter
{
    public RdpDataParameter(string parameterName, object value)
    {
        ParameterName = parameterName;
        Value = value;
    }

    public RdpDataParameter()
    {
    }

    #region Implementation of IDataParameter

    public string ParameterName { get; set; }

    public object Value { get; set; }

    #endregion
}

And the last property is Transaction. Any dummy class will work. Here is what we have:

private RdpTransaction _trans;

public IDbTransaction Transaction
{
    get { return _trans; }
    set { _trans = (RdpTransaction)value; }
}
public class RdpTransaction: IDbTransaction
{
    #region Implementation of IDisposable

    public void Dispose()
    {
        throw new NotImplementedException();
    }

    #endregion

    #region Implementation of IDbTransaction

    public void Commit()
    {
        throw new NotImplementedException();
    }

    public void Rollback()
    {
        throw new NotImplementedException();
    }

    #endregion
}

The last thing I need to cover is field names. You remember that we didn’t actually implement support for the “Refresh fields” button; instead, we populated our fields manually. What do we put there? Just the property names of the objects we return from the reader. If we return a set of Customer objects with a property named Name, we should add a field named Name.

In the next and last post I will cover how to register our data source for Reporting Services.

Custom data sources in SSRS (Part 2)

If you didn’t read the first part, start here.

Now that we know how our data source looks in Report Builder, it is clear that we need to do something to register our data source. What is it, and how do we register it?

What we need to register is the class that implements the IDbConnectionExtension interface. The interface itself is defined in the Microsoft.ReportingServices.Interfaces.dll assembly.

public class RdpConnection : IDbConnectionExtension

There are many members to implement, but many of them can have a dummy implementation, like these:

private string _impersonate, _username, _password;

public string Impersonate
{
    set { _impersonate = value; }
}

public string UserName
{
    set { _username = value; }
}

public string Password
{
    set { _password = value; }
}

public bool IntegratedSecurity
{
    get;
    set;
}

But there is one that does real work. It is IDbConnection.CreateCommand(). Let’s look at it:

public IDbCommand CreateCommand()
{
    return new RdpCommand(this);
}
public string ConnectionString
{
    get { return _conn; }
    set { _conn = value; }
}

public int ConnectionTimeout
{
    get { return 0; }
}

As you can see, we have a new class, RdpCommand. So now we have actually delegated all our work to this new class. There is really nothing else important here; the remaining members can be no-ops, as the sketch below shows.
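
For completeness, a sketch of those remaining members, under the assumption that a data source like ours holds no real connection (RdpTransaction is the dummy class shown in Part 3):

public void Open() { }

public void Close() { }

public IDbTransaction BeginTransaction()
{
    // No real transactions; reuse the dummy class.
    return new RdpTransaction();
}

public void Dispose() { }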

Custom data sources in SSRS (Part 1)

A few years ago, when we had the task of adding reporting capabilities to our web application, we chose MS-SQL Reporting Services as our platform. One of the first decisions we had to make was what kind of data sources we were going to use. Our data structure is not very complicated; it’s actually quite simple. Our challenge was the amount of data we have.

In other words, we wanted to isolate the people who would design the reports from the data behind them. They should not have to know the data structure or write SQL. There were some other requirements that are not relevant now, like combining data from separate databases.

All these requirements made me think about implementing custom report data sources, and I don’t regret it. Briefly, these are the pros and cons we found with this approach:

Pros:

  1. The data source looks simple in the RDL file: it doesn’t confuse our report designers and prevents them from writing inefficient SQL statements that could “kill” the database.
  2. An additional .NET layer:
    • T-SQL is powerful, but sometimes .NET is better, for example when you need to work with time zones.
    • It may be useful if you need to combine different sources of data.

Cons:

  1. To deploy new binaries with data sources you need to restart Reporting Services. You also need access to the machine, as you have to put the binaries into the binary folder.

So, it very much depends on your requirements, but if you decide that custom data sources may work for you, they are not difficult to implement, and this post will show you how.

Before we start

  1. MS-SQL 2008 R2 was an important release for Microsoft Reporting Services. All examples and explanations will be based on MS-SQL 2008 R2 or 2012.
  2. I will be using Report Builder 3.0 and real working samples from my work, where I have just hidden sensitive information.

Creating new template

I would like to start not with the code, but with the report template. Let’s see the end result first, and then I will show you how to get there.

When we deploy our custom data source to Reporting Services, we will see it available in the list of data sources, with the new data source type that we registered:

[Screenshot: Report extension]

Now let’s take a look at how we use this data source in Report Builder. Create a new report, then on the “Choose a dataset” screen select the “Create a dataset” option. Then on the “Data Source Connections” screen select “Browse…”. You will see the list of data sources, including the one that we registered.

The next step is adding a dataset. We are going to use the “Dataset embedded in the report” option, with query type Text.

Now the question is, what should we put in the query? The answer is: it depends on the data source you created. Whatever you type in this box will be available later to your custom data connection and will define the result set.

Another important note is how you populate the list of fields. The way we implemented it, the “Refresh fields” function does not work with our custom data source. I know it’s lame, but we just stopped there and added the fields manually. It just wasn’t worth it for us, so I cannot provide a validated solution for this. However, if you add the fields manually, it works. So, if you know how to support the “Refresh fields” functionality, please comment here. For us, supporting it was not practical; with the limited number of data sets we have, we just copy/paste them between reports.

The next post will be about actually implementing this data source.

ResourceDictionary and memory leaks

I profiled our application for memory leaks recently and finally got a clear understanding of how ResourceDictionary can result in memory leaks or just excessive memory usage.

I can identify three problems:
  • MergedDictionaries
  • References from Control to ResourceDictionary
  • Using DropShadowEffect
MergedDictionaries:
The typical use of MergedDictionaries looks like this:

<ResourceDictionary>
    <ResourceDictionary.MergedDictionaries>
        <ResourceDictionary Source="/X.Styles;component/TextBoxStyle.xaml"/>
    </ResourceDictionary.MergedDictionaries>
</ResourceDictionary>

This code suggests that there is a ResourceDictionary with Source /X.Styles;component/TextBoxStyle.xaml defined somewhere. The way ResourceDictionary is implemented, two instances of the ResourceDictionary object will be created in memory: the original one, and the one that is added to the MergedDictionaries collection.

Solution:
The solution to this problem is to create your own implementation of ResourceDictionary and use it instead. One of the examples can be found here.
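
The idea, in a minimal sketch (a real implementation needs more care, for example around absolute vs. relative URIs and designer support):

using System;
using System.Collections.Generic;
using System.Windows;

// Sketch: share one loaded instance per Source URI instead of creating
// a new ResourceDictionary for every merge.
public class CachedResourceDictionary : ResourceDictionary
{
    private static readonly Dictionary<Uri, ResourceDictionary> Cache =
        new Dictionary<Uri, ResourceDictionary>();

    private Uri _source;

    public new Uri Source
    {
        get { return _source; }
        set
        {
            _source = value;
            ResourceDictionary dictionary;
            if (!Cache.TryGetValue(value, out dictionary))
            {
                // Load once; every later use shares the same instance.
                dictionary = (ResourceDictionary)Application.LoadComponent(value);
                Cache.Add(value, dictionary);
            }
            MergedDictionaries.Add(dictionary);
        }
    }
}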
Then, our code will look like this:
<ResourceDictionary>
    <ResourceDictionary.MergedDictionaries>
        <a:CachedResourceDictionary Source="/X.Styles;component/TextBoxStyle.xaml"/>
    </ResourceDictionary.MergedDictionaries>
</ResourceDictionary>

References from control to Resource Dictionary

Assuming we implemented this approach, what if we have this code now in our application?
<UserControl x:Class="A" ...>
    <UserControl.Resources>
        <ResourceDictionary>
            <ResourceDictionary.MergedDictionaries>
                <a:CachedResourceDictionary Source="/Actsoft.Styles;component/TextBoxStyle.xaml"/>
            </ResourceDictionary.MergedDictionaries>
            ...
        </ResourceDictionary>
    </UserControl.Resources>
    ...
</UserControl>
We should be fine now, don’t you think? Apparently not. The problem is that every ResourceDictionary keeps a reference to what is called an owner, or even multiple owners. For example, when we add a ResourceDictionary to a UserControl, this user control becomes the owner of the ResourceDictionary. If we load a ResourceDictionary into the application, the application becomes the owner. Not only that: when we add a ResourceDictionary to the MergedDictionaries collection, all owners of the parent dictionary are added to the list of owners of the child dictionary.

In the example above, if we have TextBoxStyle.xaml loaded into the application resources and then into the UserControl resources, we create references from the application to the instance of the UserControl. As a result, the UserControl will remain on the heap indefinitely and will not be collected by the garbage collector. And we created this problem by using CachedResourceDictionary: with a regular ResourceDictionary we would have had a separate copy, which could be collected as part of the UserControl.
Solution:
If you use CachedResourceDictionary, create and remove your user controls dynamically, and want them to be collected, you have to choose how you use resource dictionaries:
  • Either use them in application resources only and don’t use them in user controls,
  • Or use them in user controls only and don’t use them in application resources. Beware if you activate one user control before you deactivate another one.
To put it simply: CachedResourceDictionary is for application resources or static user controls.
Problems with DropShadowEffect
The description of the problem can be found here. The problem appears if you have code like this:

<ResourceDictionary>
    <DropShadowEffect x:Key="key" ... />
</ResourceDictionary>

And then, this resource dictionary is loaded into application resources.

Solution:
Most likely you don’t intend to modify this effect, so instead you should make it look like this:

<ResourceDictionary xmlns:PresentationOptions="http://schemas.microsoft.com/winfx/2006/xaml/presentation/options">
    <DropShadowEffect x:Key="key"
                      PresentationOptions:Freeze="True"
                      ... />
</ResourceDictionary>

This creates a frozen effect; as a result, no change events are hooked up and no references are kept.

Using .NET 2.0 assemblies from .NET 4.0

I was always curious how exactly assemblies compiled for .NET 2.0 are used when you reference them from assemblies compiled for .NET 4.0. Somehow I could not find an “official” answer to this question, and I was too lazy to check myself. Thankfully, I have a young colleague who was as curious as I am, but not as lazy.

So, here is an experiment.

Exe A:

  • target platform 4.0

Assembly B:

  • target platform 2.0
  • referenced by exe A
  • references System.dll from .NET 2.0

The question: when we run A, will System.dll from 2.0 be loaded, or System.dll from 4.0?

The answer: System.dll from 4.0. The 4.0 runtime hosts the whole process, loads the 2.0-targeted assembly, and unifies its framework references to the 4.0 versions.
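
You can verify this yourself from exe A with a small sketch (Uri lives in System.dll, so its assembly shows which version was actually loaded):

using System;

class Check
{
    static void Main()
    {
        // Prints the path and version of the System.dll actually loaded;
        // in this experiment it should be the 4.0 assembly.
        Console.WriteLine(typeof(Uri).Assembly.Location);
        Console.WriteLine(typeof(Uri).Assembly.GetName().Version);
    }
}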

The order in Xaml is important

Today I had one more chance to notice that the order in XAML is important. What I had was a button with a style like this:


<Style TargetType="Button">
    <Setter Property="Command" Value="{Binding Command}" />
    <Setter Property="CommandParameter" Value="{Binding CommandParameter}"/>
</Style>

Notice that I assigned Command before CommandParameter. When I was doing it, I didn’t even think about it; I just added support for CommandParameter at some point, so I naturally added the line at the end. However, the CanExecute method of ICommand was triggered immediately after Command was assigned to the control, even before CommandParameter was assigned. Obviously, it didn’t work.
When I changed the order, it all started to work.
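
In other words, put the parameter first, so it is already in place when the command binding fires CanExecute:

<Style TargetType="Button">
    <!-- CommandParameter first, so CanExecute sees it when Command is set -->
    <Setter Property="CommandParameter" Value="{Binding CommandParameter}"/>
    <Setter Property="Command" Value="{Binding Command}" />
</Style>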