Sunday 23 November 2014

Type registration in Unity - Code or Configuration

Dependency Injection is one of those design aspects that simply makes a system better. It lets you build Dependency Inversion into your system and manage it with ease. Dependency Inversion is the "D" of the SOLID design principles.

There are plenty of tools available that let you implement Dependency Injection in your applications. I have used Unity Application Block for Dependency Injection for a long time. It is as good as any other tool available in the market in terms of features, extensibility and performance.

There are two ways to manage the DI container registrations. Usually I would use a separate file (generally named IOC.config) and list all the desired registrations in that file. Driving registrations through a configuration file lets you inject mock implementations in test projects by just changing the file. This usually worked fine for me until now.

Recently though I moved to a team where the software would be built and evolved over multiple iterations (spanning many months), and registering dependencies through a configuration file seemed OK at first. It didn't work out all that great :). The main difference was that this solution was just too big, with many moving parts that needed to be deployed at different intervals. That forced us to maintain multiple IOC configurations, and every change (e.g. a namespace change, assembly refactoring, internal interfaces etc.) left gaps in one configuration or another, because configuration files do not enforce strong typing. Almost every deployment surfaced some issue or other in an IOC configuration file. So the team decided to give up the benefits of configuration in favour of strong type checking: we got rid of the IOC configuration files and registered dependencies in code from the application startup methods.

Learning: If there are multiple deployment units in your solution, it might be better to do type registration in code instead of driving it through a configuration file. What we realized was that we almost never change the configuration file in a deployed system anyway :).

Option 2: In case registration becomes error prone, the code base can be improved to use a convention-based approach: a default implementation is registered automatically, and developers have to register a type explicitly only if they do not want to use the default implementation. E.g.

public interface ICustomerGateway{}
public class DefaultCustomerGateway: ICustomerGateway {}

The registration code can be changed to reflect over the assembly and its references, find public interfaces and register the implementations that have the word "Default" prefixed to their names.
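A minimal sketch of that convention-based scan (the helper name and shape are my own, not from the original code; Unity 3.x also ships a RegisterTypes/AllClasses convenience API that can achieve the same thing):

```csharp
using System;
using System.Linq;
using System.Reflection;
using Microsoft.Practices.Unity;

public static class ConventionRegistration
{
    // Scans the given assembly for public interfaces and registers any
    // implementation whose name starts with "Default" (e.g. ICustomerGateway
    // -> DefaultCustomerGateway). Explicit registrations made afterwards
    // override these defaults.
    public static void RegisterDefaults(IUnityContainer container, Assembly assembly)
    {
        var contracts = assembly.GetTypes().Where(t => t.IsInterface && t.IsPublic);
        foreach (var contract in contracts)
        {
            var defaultImpl = assembly.GetTypes().FirstOrDefault(t =>
                t.IsClass && !t.IsAbstract &&
                contract.IsAssignableFrom(t) &&
                t.Name.StartsWith("Default", StringComparison.Ordinal));

            if (defaultImpl != null)
            {
                container.RegisterType(contract, defaultImpl);
            }
        }
    }
}
```

With this in place, resolving ICustomerGateway yields DefaultCustomerGateway without any explicit registration.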

Sunday 12 October 2014

Task and synchrony

The Task class is a pretty nifty one: it implements much of the plumbing required to use the CLR ThreadPool and lets application developers focus on making their applications more responsive. However, there can be cases where, after implementing a highly scalable async API, you get a requirement to integrate it with an application that does not use asynchrony at all. A similar requirement is invoking an async method from the constructor of a class.

There is an easy way to do that. Use Task.WaitAll (or Wait) for asynchronous methods that return a plain Task, and read the Result property for asynchronous methods that return a Task&lt;TResult&gt; - both block the caller until the task completes. Simple example:

static void Main(string[] args)
{
    Console.WriteLine("Starting");
    DoSomethingSynchronously(1000);
    Console.WriteLine("Finished");
    Console.WriteLine("Starting");
    DoSomething2Synchronously(1000);
    Console.WriteLine("Finished");
    Console.ReadLine();
}

private static void DoSomething2Synchronously(int x)
{
    FunctionTesting t = new FunctionTesting();
    var y = t.DoSomething2Async(x);
    // Blocks the caller until the task completes.
    Task.WaitAll(y);
}

private static void DoSomethingSynchronously(int x)
{
    FunctionTesting t = new FunctionTesting();
    // Reading Result blocks until the async method has finished.
    var y = t.DoSomethingAsync(x).Result;
    Console.WriteLine(y);
}

class FunctionTesting
{
    // Returns Task<string> so that callers can read a result.
    public async Task<string> DoSomethingAsync(int x)
    {
        await Task.Delay(3000);
        Console.WriteLine(x);
        return "xxxxxxxx";
    }

    public async Task DoSomething2Async(int x)
    {
        await Task.Delay(3000);
        Console.WriteLine(x);
    }
}

Monday 6 October 2014

StackOverflowException - Quite obvious

It isn't all that difficult to hit a StackOverflowException, as opposed to what I would typically like to believe.

Create a console application with default settings. Add a simple class with a single method that calls itself recursively.




Call the function from the Main function. e.g.
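The original screenshots are missing from this post; a minimal reconstruction of the idea (class and method names are illustrative) looks like this:

```csharp
using System;

class Recurser
{
    // Each call consumes a stack frame; a deep enough recursion exhausts
    // the thread's stack and the process dies with a StackOverflowException,
    // which cannot be caught in managed code.
    public int CallSelf(int depth)
    {
        if (depth == 0)
        {
            return 0;
        }
        return 1 + CallSelf(depth - 1);
    }
}

class Program
{
    static void Main()
    {
        var r = new Recurser();
        // On the author's machine a depth of 3371 crashed while 3000
        // succeeded; the exact threshold depends on the method's frame
        // size and the thread's stack size.
        Console.WriteLine(r.CallSelf(3000));
    }
}
```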



This produces the error. If it doesn't, then try changing the value such that it starts to produce the error :).


Lower the value to something like 3000 instead of 3371, and we get a successful response.



Always interesting to break software. At least sometimes.

Thursday 10 July 2014

Hybrid Connection - How does it work?

After using Hybrid Connection, I wanted to figure out how it works. Here are the notes:

When setting up the "on-premise" software, a Windows Service "Azure Hybrid Connection Manager Service" is deployed on the machine which launches a listener application "Microsoft.HybridConnectionManager.Listener.exe" present at "C:\Program Files\Microsoft\HybridConnectionManager".


Process Explorer shows that the executable is launched by the service to open a persistent (?) connection of some sort with the Azure website.


You can verify it by watching the TCP/IP connections used by "Microsoft.HybridConnectionManager.Listener.exe" in Process Explorer.

In my application, I accessed SQL Server hosted on my local machine. As shown in the above screenshot, communication address uses ":ms-sql-s" to indicate that.

Interestingly the connection with remote host is closed after ~60 seconds if there is no more traffic between the two machines. 

Wednesday 9 July 2014

Hybrid Connection - Azure

Microsoft Azure has introduced a new feature called Hybrid Connection. As its name suggests, it allows Azure-hosted websites/mobile services to connect to on-premise resources. Though the term "resources" is quite wide by definition, Hybrid Connection allows Azure-hosted websites and mobile services to connect to services (e.g. websites, web services, SQL Server, Oracle database server etc.) hosted on the ports defined in the Hybrid Connection created on Azure.

So I thought of trying it out. It was a breeze. I used the tutorial available on the Azure site to set up the connection. Then I created a simple website (using the default ASP.NET MVC website template) and added some simple code to read information from a database, like the following:
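The code screenshot is missing here; a sketch of the kind of read described ("DefaultConnection" is from the post, the table name and query are assumptions) could be:

```csharp
using System;
using System.Configuration;
using System.Data.SqlClient;
using System.Transactions;

public class CustomerReader
{
    // Reads a value from the on-premise database over the Hybrid Connection.
    // A single connection inside a TransactionScope works fine because the
    // transaction stays local and never escalates to DTC.
    public static int CountCustomers()
    {
        var connectionString =
            ConfigurationManager.ConnectionStrings["DefaultConnection"].ConnectionString;

        using (var scope = new TransactionScope())
        using (var connection = new SqlConnection(connectionString))
        {
            connection.Open();
            using (var command = new SqlCommand("SELECT COUNT(*) FROM Customers", connection))
            {
                var count = (int)command.ExecuteScalar();
                scope.Complete();
                return count;
            }
        }
    }
}
```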






"DefaultConnection" pointed to my laptop. Needless to say, it worked without any issue even though the code was running inside a transaction scope. That is not to be mistaken for a distributed transaction - that does not work :). In fact, even if you try to run two queries on the same database, things fail, because DTC does not work over Hybrid Connection as of now. The code below fails.
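The failing snippet is also missing; the pattern it describes - two connections opened inside one TransactionScope, which escalates the transaction to DTC - can be sketched as (table names are assumptions):

```csharp
using System.Configuration;
using System.Data.SqlClient;
using System.Transactions;

public class DtcFailureDemo
{
    // Opening a second connection inside the same TransactionScope promotes
    // the transaction to a distributed one. DTC is not supported over a
    // Hybrid Connection, so this fails at the escalation point.
    public static void RunTwoQueries()
    {
        var cs = ConfigurationManager.ConnectionStrings["DefaultConnection"].ConnectionString;

        using (var scope = new TransactionScope())
        {
            using (var first = new SqlConnection(cs))
            {
                first.Open();
                new SqlCommand("SELECT COUNT(*) FROM Customers", first).ExecuteScalar();
            }

            using (var second = new SqlConnection(cs))
            {
                second.Open(); // escalation to DTC happens here and fails
                new SqlCommand("SELECT COUNT(*) FROM Orders", second).ExecuteScalar();
            }

            scope.Complete();
        }
    }
}
```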





In essence, it is quite an interesting feature that can be used by simple websites or mobile services that need to access remote services like an intranet website or an on-premise database server.

I am sure that complex scenarios can be enabled, but they will not be possible in a straightforward manner. For complex scenarios, the recommended approach would be to use site-to-site VPN, point-to-site VPN or something else.

Obvious Limitations of Hybrid Connection:
  1. Only SQL Authentication works for SQL Server related communication. Obvious and sensible.
  2. Distributed Transaction Coordinator is not supported.
  3. Only supported in Websites and Mobile Services. Cannot be used with Cloud Services.
  4. Anything that cannot be exposed over a port cannot be used, e.g. the file system.

Friday 4 July 2014

ASP.NET database role provider for Azure Web Role application

When developing Microsoft Azure based web applications, there can be cases where the application needs information about the user, like role, age or other "claims". The ideal solution is to plug in a Windows Identity Foundation (WIF) module and fetch the information from a trusted source, e.g. your own ADFS installation or a trusted third party, but chances are that you don't get to use it right away - either because the required implementation is not available yet or because using it in the development environment is expensive.

In such cases, the provider model of ASP.NET acts like a boon. You can plug in your custom role provider, implement the functionality based on roles/profile/claims, and plug in the actual provider later. One example is to use AspNetSqlRoleProvider.

Set up the out-of-the-box authentication & authorization database (by default named "aspnetdb") by running the aspnet_regsql.exe present in the .NET Framework installation folder, e.g. "C:\Windows\Microsoft.NET\Framework64\v4.0.30319". This launches a wizard and sets up a default database which can be used by AspNetSqlRoleProvider.

  1. Create a "Cloud Web Role" application using Visual Studio's Cloud template.
  2. Choose Windows Authentication when setting up the Web Application. I chose the ASP.NET MVC application for this sample but it can work with ASP.NET WebForms application too.
  3. Change the Web.config to use AspNetSqlRoleProvider (System.Web.Security.SqlRoleProvider) to associate roles with the user.
  4. Use the out of the box Stored Procedures present in aspnetdb database e.g. "aspnet_Applications_CreateApplication", "aspnet_Roles_CreateRole", "aspnet_Users_CreateUser", "aspnet_UsersInRoles_AddUsersToRoles"  to add application, users, roles etc. 
  5. Change the Global.asax.cs to ensure that the user's identity is set to his/her Windows identity in debug mode - you would want to change this later based on your requirements.
  6. Change the home page to print whether the user belongs to a group.
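A sketch of the Web.config change from step 3 (the connection string name and applicationName value are assumptions):

```xml
<connectionStrings>
  <add name="AspNetDb"
       connectionString="Data Source=.;Initial Catalog=aspnetdb;Integrated Security=True" />
</connectionStrings>
<system.web>
  <roleManager enabled="true" defaultProvider="AspNetSqlRoleProvider">
    <providers>
      <clear />
      <add name="AspNetSqlRoleProvider"
           type="System.Web.Security.SqlRoleProvider"
           connectionStringName="AspNetDb"
           applicationName="/" />
    </providers>
  </roleManager>
</system.web>
```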

Run the application in Azure Emulator and it should show the role information of the user. Quite useful.

Tuesday 10 June 2014

ASP.NET vNext


Details of the new version of ASP.NET have been announced. Additional details can be found over the Internet, e.g. here. Since chances are high that many of the proposed changes will make it into the actual product in their current or a modified form, I thought it would be a good idea to try out the new version.
I was a little surprised when I created a new web project in Visual Studio 2014 CTP - you can try it out too if you have an account on Azure; there is a VM image available in the Image Gallery by the name "Visual Studio Professional 14 CTP". Notable things are:
  1. There is no web.config. Much of the configuration in the web.config file used to target the hosting of the application in IIS, IIS Express or Cassini. Now that ASP.NET vNext aims to break free from IIS (refer to "Helios"), much of it is not needed anyway. The rest of the configuration can be specified in the "config.json" file - yes, it contains a JSON object, and Visual Studio does a neat job of helping us with IntelliSense. By default it has a connection string that wires up the default connection to a database hosted on "(localdb)\mssqllocaldb".
  2. There are not many options available in the dialog box that opens when you click "Properties" in the context menu of the web project. Most of it has moved into the "project.json" file, which stores (again) a JSON object. Apart from project properties, it also keeps a list of NuGet package references.
  3. There is an interesting reference in the project named ".NET Framework 4.5" - you are free to change it to ".NET Core Framework 4.5", which is targeted towards hosting on the cloud (read Azure). So I guess in future we will not be required to install the .NET Framework on the target machine, as the necessary framework assemblies will be available in the application's deployment folder. I wonder how that would impact the GAC though.
  4. You can host an ASP.NET vNext application in IIS, IIS Express, or even self-host it in a console application. Imagine taking your application on a USB stick and hosting it on any available laptop by just plugging in the stick and launching a command file (which can be produced using the Publish feature of ASP.NET vNext). As an experiment, I created an ASP.NET vNext console application (named "ConsoleApp2"), added a class "startup.cs", updated "Program.cs" to start the web server and added a simple controller/view to the application: it works like a charm :)
  5. Default database model for authentication/authorization has been pruned to just 2 tables - AspNetUsers and AspNetUserClaims. AspNetUsers, as the name suggests, is about the users and their credentials. AspNetUserClaims is most probably targeted to store rest of the details like role etc. as a claim. It is much simpler and covers most of the bases.
  6. There is only one base class for controllers, i.e. Controller. No ApiController or MvcController etc. In fact, you don't even need to inherit from it if you don't want to. E.g. you can add a POCO class like NonStandardController and add a public method (action) that returns whatever you want to return.
  7. Everything can be configured through the "Configure" method of the "Startup" class. E.g. directory browsing can be enabled by invoking the UseDirectoryBrowser method on the IBuilder instance. In fact, there is a way to configure the ASP.NET application to use one (or more) middleware classes to return the response. E.g.
     
 And here is how you register your own Middleware.
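The screenshots are missing here; based on the vNext bits of that CTP, an inline middleware and a custom middleware class could be wired up roughly like this (the namespaces, IBuilder shape and UseMiddleware helper follow the early vNext pattern and are assumptions):

```csharp
using System.Threading.Tasks;
using Microsoft.AspNet.Builder;
using Microsoft.AspNet.Http;

// A custom middleware: it holds the next delegate in the pipeline and
// gets a chance to act before and after the rest of the pipeline runs.
public class RequestLoggerMiddleware
{
    private readonly RequestDelegate _next;

    public RequestLoggerMiddleware(RequestDelegate next)
    {
        _next = next;
    }

    public async Task Invoke(HttpContext context)
    {
        // Do something before the rest of the pipeline runs...
        await _next(context);
        // ...and after it has produced a response.
    }
}

public class Startup
{
    public void Configure(IBuilder app)
    {
        // Register the custom middleware class.
        app.UseMiddleware<RequestLoggerMiddleware>();

        // Or configure an inline, terminal middleware.
        app.Use(next => async context =>
        {
            await context.Response.WriteAsync("Hello from inline middleware");
        });
    }
}
```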
 
 
Note: Instead of writing code from scratch, I referred to the code samples available in Entropy. It is a great place to start when experimenting with ASP.NET vNext.
 
 

Monday 9 June 2014

Clone .NET objects

Making clones of objects is quite a regular requirement in .NET projects. Some of the typical scenarios are:

  1. Provide undo functionality in user forms in:
    • Desktop applications e.g. Windows Forms application, WPF application
    • Rich client applications e.g. Silverlight (I know it is most probably not going to have a new version but SL5 will be supported till 2021)
  2. Duplicate a record retrieved from data store

There are a few tried and tested methods which can help in solving this.

Method # 1:
Use AutoMapper. That is probably the easiest way. E.g. suppose we have classes like these:

public class FormA
{
    public string FormName { get; set; }

    // public string TestString { get; set; }

    public FormB FormB { get; set; }
}

public class FormB
{
    public string FormName { get; set; }
}

Application code can be like following:

Mapper.CreateMap<FormA, FormA>();
Mapper.CreateMap<FormB, FormB>();
FormA f = new FormA() { FormName = "Test", FormB = new FormB() { FormName = "test2" } };
FormA f1 = new FormA();
Mapper.Map(f, f1);
Console.WriteLine(f1.FormName);
Console.WriteLine(f1 == f);

Console.WriteLine(f1.FormB == f.FormB);

Method# 2:
Use the serialization process with any of the preferred formatters (XML, binary, JSON etc.). Plenty of examples are available on the Internet for this, e.g. here. If we use the binary formatter, we can preserve the values of private members too, as binary serialization preserves type fidelity.
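A common shape of that helper, sketched here with the binary formatter (it requires the type and its whole object graph to be marked [Serializable]):

```csharp
using System.IO;
using System.Runtime.Serialization.Formatters.Binary;

public static class ObjectCloner
{
    // Deep-clones an object graph by round-tripping it through binary
    // serialization. Private fields are preserved because binary
    // serialization keeps type fidelity.
    public static T DeepClone<T>(T source)
    {
        var formatter = new BinaryFormatter();
        using (var stream = new MemoryStream())
        {
            formatter.Serialize(stream, source);
            stream.Position = 0;
            return (T)formatter.Deserialize(stream);
        }
    }
}
```

Usage would be `var copy = ObjectCloner.DeepClone(original);`, yielding a new graph that shares no references with the source.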

Method#3:
Write a generic copy method that uses Reflection to copy the values of properties, fields etc. It will turn out to be a very involved implementation, as it will need to handle null members and the various member types (arrays, lists, enums etc.), but it also provides the flexibility to handle any case.

Method#4:
Implement ICloneable and write the cloning logic in the Clone method.
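A minimal ICloneable sketch (the class is illustrative):

```csharp
using System;

public class Person : ICloneable
{
    public string Name { get; set; }

    // MemberwiseClone produces a shallow copy; reference-typed members
    // would need to be cloned explicitly for a deep copy.
    public object Clone()
    {
        return MemberwiseClone();
    }
}
```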

Monday 26 May 2014

Visual Studio Upgrade : Private accessor vs Private Object/Type

There are some interesting scenarios that can arise when upgrading a solution built with Visual Studio 2010 to a higher version of Visual Studio (2012 or 2013). If the original solution uses the "Private Accessor" feature - introduced in an earlier version of Visual Studio to let users write test cases that target private methods, fields or properties of a class - then after upgrading the solution there is a possibility that you will get a compilation error similar to the one below (along with a warning that this particular feature is deprecated):

System.IO.FileNotFoundException: Could not load file or assembly 'Fully Qualified Assembly Name' or one of its dependencies. The system cannot find the file specified.
File name: 'Fully Qualified Assembly Name'
   at System.Reflection.RuntimeAssembly._nLoad(AssemblyName fileName, String codeBase, Evidence assemblySecurity, RuntimeAssembly locationHint, StackCrawlMark& stackMark, IntPtr pPrivHostBinder, Boolean throwOnFileNotFound, Boolean forIntrospection, Boolean suppressSecurityChecks)
   at System.Reflection.RuntimeAssembly.nLoad(AssemblyName fileName, String codeBase, Evidence assemblySecurity, RuntimeAssembly locationHint, StackCrawlMark& stackMark, IntPtr pPrivHostBinder, Boolean throwOnFileNotFound, Boolean forIntrospection, Boolean suppressSecurityChecks)
   at System.Reflection.RuntimeAssembly.InternalLoadAssemblyName(AssemblyName assemblyRef, Evidence assemblySecurity, RuntimeAssembly reqAssembly, StackCrawlMark& stackMark, IntPtr pPrivHostBinder, Boolean throwOnFileNotFound, Boolean forIntrospection, Boolean suppressSecurityChecks)
   at System.Reflection.RuntimeAssembly.InternalLoad(String assemblyString, Evidence assemblySecurity, StackCrawlMark& stackMark, IntPtr pPrivHostBinder, Boolean forIntrospection)
   at System.Reflection.RuntimeAssembly.InternalLoad(String assemblyString, Evidence assemblySecurity, StackCrawlMark& stackMark, Boolean forIntrospection)
   at System.Reflection.Assembly.Load(String assemblyString)
   at System.UnitySerializationHolder.GetRealObject(StreamingContext context)

   at Microsoft.VisualStudio.TestTools.UnitTesting.Publicize.Shadower.ShadowAssemblyHelper(ShadowerOptions options)
   at Microsoft.VisualStudio.TestTools.UnitTesting.Publicize.Shadower.ShadowAssembly(AppDomain domain, ShadowerOptions options)
   at Microsoft.VisualStudio.TestTools.UnitTesting.Publicize.Shadower.ShadowAssembly(ShadowerOptions options)
   at Microsoft.VisualStudio.TestTools.BuildShadowReferences.BuildShadowTask.Execute()
   at Microsoft.Build.BackEnd.TaskExecutionHost.Microsoft.Build.BackEnd.ITaskExecutionHost.Execute()
   at Microsoft.Build.BackEnd.TaskBuilder.d__20.MoveNext()


As it happened in my case, chances are that all the necessary references required to load the assembly are already in place (I verified it using FusLogVw) and yet the compilation process causes grief.
 
So I resorted to the Visual Studio unit test upgrade guidance and decided to replace the private accessors with PrivateObject and PrivateType. There are some fine examples (#1, #2, #3 etc.) available over the Internet that explain their usage for instance-level and type-level access.
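A small illustration of the two wrappers (the class under test here is made up):

```csharp
using Microsoft.VisualStudio.TestTools.UnitTesting;

public class Calculator
{
    private int _total;

    private void Add(int value) { _total += value; }

    private static int Square(int value) { return value * value; }
}

[TestClass]
public class CalculatorTests
{
    [TestMethod]
    public void PrivateMembersCanBeExercised()
    {
        // PrivateObject drives private instance members through reflection.
        var accessor = new PrivateObject(new Calculator());
        accessor.Invoke("Add", 5);
        Assert.AreEqual(5, (int)accessor.GetField("_total"));

        // PrivateType does the same for private static members.
        var staticAccessor = new PrivateType(typeof(Calculator));
        Assert.AreEqual(9, (int)staticAccessor.InvokeStatic("Square", 3));
    }
}
```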
 
Though in my personal opinion this approach is only marginally better than using "InternalsVisibleTo". In ideal circumstances, we should not be worried about internal implementation when writing unit test cases: they should cater to the public API only, and the effort spent in designing the application should pay attention to its "testability" as well. If we do that, then we never need to fall back on such hacks (if I may say so). Nevertheless, these are quite interesting wrappers that may be useful beyond unit testing too, for example when you do not want to write elaborate Reflection code.

Monday 19 May 2014

HTML Input control's title and maxlength attribute - interesting behavior

Modern web development tools like ASP.NET, jQuery etc. are far more advanced than the tools and technologies of the past. Add HTML5 to this and the set piece feels complete. However, sometimes you run into weird issues precisely because everything seems a given, especially when most browsers support common standards like CSS3 and HTML5 attributes. Recently I ran into an interesting issue.

We use ASP.NET MVC, jQuery, jQuery.Validate and unobtrusive JavaScript validation in one of our web applications. While working on a requirement, my team mate ended up with a model that had a property like the following:

[StringLength(12)]
[RegularExpression(@"^-?\d{1,5}(\.\d{1,5})?$")]
public string SomeNumber { get; set; }

This property needed to be shown in a text box, and a piece of information had to be shown in a tool tip when the user hovered over it. So the cshtml markup looked like the following:

<div class="col-lg-8 col-md-8 col-sm-8 col-xs-12">
  <input title="this is a test tool tip." class="form-control"
         data-val="true"
         data-val-number="Must be a valid number"
         data-val-regex="Invalid pattern."
         data-val-regex-pattern="^-?\d{1,5}(\.\d{1,5})?$"
         id="inputItem" maxlength="12" name="inputItem" type="number" value="">
  <span class="field-validation-valid" data-valmsg-for="inputItem" data-valmsg-replace="true"></span>
</div>

Nothing particularly wrong with this. However, it leads to interesting behavior across browsers. On Internet Explorer 11 it works alright, i.e. if you enter a value that does not match the pattern, the correct message is shown. Input values like 123456, 123456., 123456.1 and 12345678.1345 produce the correct error message "Invalid pattern.".
 
However, Chrome, Firefox and Opera showed an interesting behavior. For input values like 12345678.134 they produced the correct "Invalid pattern." message. But when the input length exceeded the value set in the maxlength attribute, e.g. 12345678.1346, the error message changed to "this is a test tool tip." :)

 
After some intriguing investigation through hit and trial and some searching over the Internet, it turned out that the issue was with the "maxlength" attribute. Once I changed it to use the default HTML helper, which generates client-side length validation code (and does not use the maxlength attribute), the issue was resolved. It turns out that jQuery Validate treats the "title" attribute as an error message under certain conditions :)

<div class="col-lg-8 col-md-8 col-sm-8 col-xs-12">
  <input title="this is a test tool tip." class="form-control"
         data-val="true"
         data-val-number="Must be a valid number"
         data-val-regex="Invalid pattern."
         data-val-regex-pattern="^-?\d{1,5}(\.\d{1,5})?$"
         data-val-length="The field must be a string with a maximum length of 12."
         data-val-length-max="12"
         id="inputItem" name="inputItem" type="number" value="">
  <span class="field-validation-valid" data-valmsg-for="inputItem" data-valmsg-replace="true"></span>
</div>

Sunday 13 April 2014

ASP.NET MVC - 3rd party integration with HtmlHelper

We had an interesting requirement recently where we wanted to create new extension methods for the HtmlHelper class in a web (ASP.NET MVC) solution. One such requirement was to write an extension method that integrates a 3rd party JavaScript control. Usually, integrating any 3rd party control requires the following steps:

  1. Adding requisite content e.g. JavaScript files, CSS (or LESS) files to solution. (manual step, can be automated if NuGet package is available) 
  2. Include the added content in the views (or the layout page).
  3. Add required HTML to the page and add the necessary JavaScript code to let the 3rd party control take effect.

It was the 3rd step that we wanted to encapsulate in an extension method, to ensure that developers do not need to write boiler plate code (and to avoid mistakes in it). Adding the generated HTML was quite straightforward, but adding the JavaScript took some decision making, mainly because, as a general practice, you want to add JavaScript code just before the closing body tag, and that location is not available in the HtmlHelper context. So we took a conscious decision:
  1. Use ViewContext.ViewBag in the HtmlHelper extension method to add scripts to a list.
  2. Use the layout page to render all the scripts added to the ViewBag. We needed to use ViewContext.ViewData["name of dynamic property"].
It was basically similar to what the ScriptManager of ASP.NET WebForms does :)
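A sketch of that decision (the extension, control and property names are illustrative, not from our code base): the helper emits the control's HTML immediately and queues its script in the ViewBag, and the layout flushes the queue near the closing body tag.

```csharp
using System.Collections.Generic;
using System.Web;
using System.Web.Mvc;

public static class ThirdPartyControlExtensions
{
    // Emits the control's placeholder HTML and queues its initialization
    // script so the layout page can render it at the end of the body.
    public static IHtmlString ThirdPartyGrid(this HtmlHelper html, string id)
    {
        var scripts = html.ViewContext.ViewBag.DeferredScripts as List<string>;
        if (scripts == null)
        {
            scripts = new List<string>();
            html.ViewContext.ViewBag.DeferredScripts = scripts;
        }

        scripts.Add("<script>initThirdPartyGrid('" + id + "');</script>");
        return new HtmlString("<div id=\"" + id + "\" class=\"third-party-grid\"></div>");
    }
}

// In the layout page, just before the closing body tag (ViewBag entries
// surface through ViewData, hence the string key):
//
// @{ var deferred = ViewContext.ViewData["DeferredScripts"] as List<string>; }
// @if (deferred != null)
// {
//     foreach (var script in deferred) { @Html.Raw(script) }
// }
```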