Sunday 29 December 2013

Azure Active Directory and MVC application

The ASP.NET Web Application template in Visual Studio 2013 comes with a new feature that quickly sets up the basic wiring required for the web application's authentication and authorization to work against different authentication options: No Authentication (that's simple!), Individual User Accounts (essentially already present in the ASP.NET MVC4 template of Visual Studio 2012), Windows Authentication (simple, again) and Organizational Accounts (which promises to make the application work against Office 365, Active Directory or Windows Azure Active Directory).

Authentication options

Authentication options 2

The "Organizational Accounts" option is something I had not tried out before, so I thought of giving it a try. It did not turn out all that pleasant (for me, at least).
I have a Windows Azure account, which comes with a default Active Directory. It took me a couple of minutes to figure out the actual domain of the default directory (I did not locate any place that specifies the domain name for the directory). It is actually pretty simple: <yoursigninemailaddresswithoutdotsanddotcom>@onmicrosoft.com. You can also find it in the browser's address bar once you have signed in to https://manage.windowsazure.com.

Azure AD Address

So, with high hopes, I entered the domain address and clicked the OK button. It launched another window which asked me to authenticate. So far so good.

Sign in address

I entered the details, the browser window authenticated me and, as expected, closed by itself once authentication was complete. But then I was presented with an error prompt.

Sign in address error

I checked the event log and there was one interesting error related to querying the Graph API. I figured this was because I had selected the Single Sign On and Read data option. If I chose the Single Sign On option alone, there was no error in the event viewer, but I still got the error prompt.

Application: devenv.exe
Framework Version: v4.0.30319
Description: The process was terminated due to an unhandled exception.
Exception Info: System.Data.Services.Client.DataServiceQueryException
Stack:
   at System.Data.Services.Client.DataServiceRequest.Execute[[System.__Canon, mscorlib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089]](System.Data.Services.Client.DataServiceContext, System.Data.Services.Client.QueryComponents)
   at System.Data.Services.Client.DataServiceQuery`1[[System.__Canon, mscorlib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089]].Execute()
   at System.Data.Services.Client.DataServiceQuery`1[[System.__Canon, mscorlib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089]].GetEnumerator()
   at System.Collections.Generic.List`1[[System.__Canon, mscorlib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089]]..ctor(System.Collections.Generic.IEnumerable`1<System.__Canon>)
   at System.Linq.Enumerable.ToList[[System.__Canon, mscorlib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089]](System.Collections.Generic.IEnumerable`1<System.__Canon>)
   at Microsoft.VisualStudio.Web.AzureAD.ProvisionGraphHelper.GetUser(System.String)
   at Microsoft.VisualStudio.Web.AzureAD.ProvisionGraphHelper.HasProvisionRight(System.String)
   at Microsoft.VisualStudio.Web.AzureAD.ProvisionGraphHelper.AcquireToken(System.String, Boolean ByRef, System.String ByRef)
   at Microsoft.VisualStudio.Web.AzureAD.VsAzureADService.LoginToTenant(System.String, Boolean ByRef, System.String ByRef)
   at Microsoft.VisualStudio.Web.Project.AuthenticationDialogViewModel.BeforeCloseDialogByClickYes(Microsoft.VisualStudio.Web.AzureAD.Contracts.IVsAzureADService, Microsoft.VisualStudio.Web.AzureAD.UrlChecker, System.Action`1<System.String>)
   at Microsoft.VisualStudio.Web.Project.AuthenticationDialogWindow.OkAuthenticationButton_Click(System.Object, System.Windows.RoutedEventArgs)
   at System.Windows.RoutedEventHandlerInfo.InvokeHandler(System.Object, System.Windows.RoutedEventArgs)
   at System.Windows.EventRoute.InvokeHandlersImpl(System.Object, System.Windows.RoutedEventArgs, Boolean)
   at System.Windows.UIElement.RaiseEventImpl(System.Windows.DependencyObject, System.Windows.RoutedEventArgs)
   at System.Windows.UIElement.RaiseEvent(System.Windows.RoutedEventArgs)
   at System.Windows.Controls.Primitives.ButtonBase.OnClick()
   at System.Windows.Controls.Button.OnClick()
   at System.Windows.Controls.Primitives.ButtonBase.OnMouseLeftButtonUp(System.Windows.Input.MouseButtonEventArgs)
   at System.Windows.UIElement.OnMouseLeftButtonUpThunk(System.Object, System.Windows.Input.MouseButtonEventArgs)
   at System.Windows.Input.MouseButtonEventArgs.InvokeEventHandler(System.Delegate, System.Object)
   at System.Windows.RoutedEventArgs.InvokeHandler(System.Delegate, System.Object)
   at System.Windows.RoutedEventHandlerInfo.InvokeHandler(System.Object, System.Windows.RoutedEventArgs)
   at System.Windows.EventRoute.InvokeHandlersImpl(System.Object, System.Windows.RoutedEventArgs, Boolean)
   at System.Windows.UIElement.ReRaiseEventAs(System.Windows.DependencyObject, System.Windows.RoutedEventArgs, System.Windows.RoutedEvent)
   at System.Windows.UIElement.OnMouseUpThunk(System.Object, System.Windows.Input.MouseButtonEventArgs)
   at System.Windows.Input.MouseButtonEventArgs.InvokeEventHandler(System.Delegate, System.Object)
   at System.Windows.RoutedEventArgs.InvokeHandler(System.Delegate, System.Object)
   at System.Windows.RoutedEventHandlerInfo.InvokeHandler(System.Object, System.Windows.RoutedEventArgs)
   at System.Windows.EventRoute.InvokeHandlersImpl(System.Object, System.Windows.RoutedEventArgs, Boolean)
   at System.Windows.UIElement.RaiseEventImpl(System.Windows.DependencyObject, System.Windows.RoutedEventArgs)
   at System.Windows.UIElement.RaiseTrustedEvent(System.Windows.RoutedEventArgs)
   at System.Windows.UIElement.RaiseEvent(System.Windows.RoutedEventArgs, Boolean)
   at System.Windows.Input.InputManager.ProcessStagingArea()
   at System.Windows.Input.InputManager.ProcessInput(System.Windows.Input.InputEventArgs)
   at System.Windows.Input.InputProviderSite.ReportInput(System.Windows.Input.InputReport)
   at System.Windows.Interop.HwndMouseInputProvider.ReportInput(IntPtr, System.Windows.Input.InputMode, Int32, System.Windows.Input.RawMouseActions, Int32, Int32, Int32)
   at System.Windows.Interop.HwndMouseInputProvider.FilterMessage(IntPtr, MS.Internal.Interop.WindowMessage, IntPtr, IntPtr, Boolean ByRef)
   at System.Windows.Interop.HwndSource.InputFilterMessage(IntPtr, Int32, IntPtr, IntPtr, Boolean ByRef)
   at MS.Win32.HwndWrapper.WndProc(IntPtr, Int32, IntPtr, IntPtr, Boolean ByRef)
   at MS.Win32.HwndSubclass.DispatcherCallbackOperation(System.Object)
   at System.Windows.Threading.ExceptionWrapper.InternalRealCall(System.Delegate, System.Object, Int32)
   at MS.Internal.Threading.ExceptionFilterHelper.TryCatchWhen(System.Object, System.Delegate, System.Object, Int32, System.Delegate)
   at System.Windows.Threading.Dispatcher.LegacyInvokeImpl(System.Windows.Threading.DispatcherPriority, System.TimeSpan, System.Delegate, System.Object, Int32)
   at MS.Win32.HwndSubclass.SubclassWndProc(IntPtr, Int32, IntPtr, IntPtr)


And then I found another interesting thing. Once I clicked OK on the error prompt, the Create ASP.NET Web Application wizard became useless: you could choose any other option (I switched my selection to "No Authentication"), but it kept restarting Visual Studio.

Sign in address error 2

And here is the event log entry:

Application: devenv.exe
Framework Version: v4.0.30319
Description: The process was terminated due to an unhandled exception.
Exception Info: System.InvalidOperationException
Stack:
   at System.Windows.Window.set_DialogResult(System.Nullable`1<Boolean>)
   at Microsoft.VisualStudio.Web.Project.AuthenticationDialogWindow.OkAuthenticationButton_Click(System.Object, System.Windows.RoutedEventArgs)
   at System.Windows.RoutedEventHandlerInfo.InvokeHandler(System.Object, System.Windows.RoutedEventArgs)
   at System.Windows.EventRoute.InvokeHandlersImpl(System.Object, System.Windows.RoutedEventArgs, Boolean)
   at System.Windows.UIElement.RaiseEventImpl(System.Windows.DependencyObject, System.Windows.RoutedEventArgs)
   at System.Windows.UIElement.RaiseEvent(System.Windows.RoutedEventArgs)
   at System.Windows.Controls.Primitives.ButtonBase.OnClick()
   at System.Windows.Controls.Button.OnClick()
   at System.Windows.Controls.Primitives.ButtonBase.OnMouseLeftButtonUp(System.Windows.Input.MouseButtonEventArgs)
   at System.Windows.UIElement.OnMouseLeftButtonUpThunk(System.Object, System.Windows.Input.MouseButtonEventArgs)
   at System.Windows.Input.MouseButtonEventArgs.InvokeEventHandler(System.Delegate, System.Object)
   at System.Windows.RoutedEventArgs.InvokeHandler(System.Delegate, System.Object)
   at System.Windows.RoutedEventHandlerInfo.InvokeHandler(System.Object, System.Windows.RoutedEventArgs)
   at System.Windows.EventRoute.InvokeHandlersImpl(System.Object, System.Windows.RoutedEventArgs, Boolean)
   at System.Windows.UIElement.ReRaiseEventAs(System.Windows.DependencyObject, System.Windows.RoutedEventArgs, System.Windows.RoutedEvent)
   at System.Windows.UIElement.OnMouseUpThunk(System.Object, System.Windows.Input.MouseButtonEventArgs)
   at System.Windows.Input.MouseButtonEventArgs.InvokeEventHandler(System.Delegate, System.Object)
   at System.Windows.RoutedEventArgs.InvokeHandler(System.Delegate, System.Object)
   at System.Windows.RoutedEventHandlerInfo.InvokeHandler(System.Object, System.Windows.RoutedEventArgs)
   at System.Windows.EventRoute.InvokeHandlersImpl(System.Object, System.Windows.RoutedEventArgs, Boolean)
   at System.Windows.UIElement.RaiseEventImpl(System.Windows.DependencyObject, System.Windows.RoutedEventArgs)
   at System.Windows.UIElement.RaiseTrustedEvent(System.Windows.RoutedEventArgs)
   at System.Windows.UIElement.RaiseEvent(System.Windows.RoutedEventArgs, Boolean)
   at System.Windows.Input.InputManager.ProcessStagingArea()
   at System.Windows.Input.InputManager.ProcessInput(System.Windows.Input.InputEventArgs)
   at System.Windows.Input.InputProviderSite.ReportInput(System.Windows.Input.InputReport)
   at System.Windows.Interop.HwndMouseInputProvider.ReportInput(IntPtr, System.Windows.Input.InputMode, Int32, System.Windows.Input.RawMouseActions, Int32, Int32, Int32)
   at System.Windows.Interop.HwndMouseInputProvider.FilterMessage(IntPtr, MS.Internal.Interop.WindowMessage, IntPtr, IntPtr, Boolean ByRef)
   at System.Windows.Interop.HwndSource.InputFilterMessage(IntPtr, Int32, IntPtr, IntPtr, Boolean ByRef)
   at MS.Win32.HwndWrapper.WndProc(IntPtr, Int32, IntPtr, IntPtr, Boolean ByRef)
   at MS.Win32.HwndSubclass.DispatcherCallbackOperation(System.Object)
   at System.Windows.Threading.ExceptionWrapper.InternalRealCall(System.Delegate, System.Object, Int32)
   at MS.Internal.Threading.ExceptionFilterHelper.TryCatchWhen(System.Object, System.Delegate, System.Object, Int32, System.Delegate)
   at System.Windows.Threading.Dispatcher.LegacyInvokeImpl(System.Windows.Threading.DispatcherPriority, System.TimeSpan, System.Delegate, System.Object, Int32)
   at MS.Win32.HwndSubclass.SubclassWndProc(IntPtr, Int32, IntPtr, IntPtr)


So I reverted to an older technique that had made my ASP.NET website work with my Azure Active Directory. I used the Azure management portal to add an application to my AD. I called it AzureADTesting and added my application's URI and URL (I kept both the same: https://localhost:44300/).

add applications
I copied the Federation Metadata document URL.

Azure AD Testing

I created a standard website in Visual Studio 2012 and used the "Identity and Access Tool" for Visual Studio 2012 to point it to the Federation Metadata endpoint of my Azure AD. The tool makes the required changes in web.config and adds the necessary assembly references. Now, when you run the web application, it redirects to Azure AD for authentication. Remember, you cannot use your Microsoft Account to log in.
Live account error

Users have to be in the User role in the AD. It is easy, actually: you can create a new user in the AD and use those credentials to log in.

Next, I created an ASP.NET Web application in Visual Studio 2013 with the "No Authentication" option. I used the same application name as I did when creating the ASP.NET MVC4 application.

I copied the web.config from the old application to the new one, copied the "System.IdentityModel.Tokens.ValidatingIssuerNameRegistry" assembly from the packages folder of the application created in step 3 to the new application, and added a reference to it. That's it; you are set now. It should work for you.
In order to verify that the application was indeed signing users in via AD, I modified the Index view of the Home controller so that it showed the email address of the user as the name.

Change to Index view of Home controller

Email address

I will spend some more time finding out the real reason why the same AD does not work with the default template of VS 2013. Maybe it is a known issue, or maybe I am missing something.

Thursday 26 December 2013

Source code structure regeneration from .NET assembly

Have you ever run into a scenario where you are handed an assembly from a Production or UAT environment for analysis, and told that this assembly has been in Production since the time your company's website was launched, and that the vendor who created the application forgot to give you the code for it? I have run into this kind of scenario, and I know for sure that some of my colleagues and friends have run into similar situations too.

Of course, we have the option of going through the ledgers and bills to find the work order details, finding the vendor's current email address and phone number, and contacting them for the code base (hoping they are not out of business by now). Chances are they have shown little interest in retaining the code base of something they did 10 years ago.

Well, if the assemblies in question are .NET assemblies, you have a few more options: use a tool to regenerate the code from the assembly.

Reflector used to be one of the best free .NET tools available, but it is no longer free.

dotPeek is another free tool that has pretty much all the functionality required to regenerate the code. One of its cool features is that it allows you to create a project from an assembly, arranging class files so that the folder structure follows the namespace hierarchy (goodness that we may need!) and each file contains only one class.

For example, I created a simple C# console application, ConsoleApplication1, which has only one file, Program.cs, containing classes in different namespaces.
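A Program.cs along these lines illustrates the setup; the class and namespace names here are made up for illustration, not the exact ones from my sample:

```csharp
using System;

namespace ConsoleApplication1
{
    internal class Program
    {
        private static void Main(string[] args)
        {
            // Uses a class from a nested namespace in the same file.
            Console.WriteLine(new Helpers.Greeter().Greet("world"));
        }
    }
}

namespace ConsoleApplication1.Helpers
{
    public class Greeter
    {
        public string Greet(string name)
        {
            return "Hello, " + name;
        }
    }
}
```

After compilation, dotPeek's "Export to Project" splits these classes into separate files under folders matching the namespaces.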


I compiled the application in release mode, added the assembly to dotPeek and selected the option "Export to Project":


dotPeek creates a project which looks like the following (notice the nice structure).


The only sad part (if I may say so) is that it does not allow generating code for multiple assemblies together.

Sunday 22 December 2013

Testing it all the way

I have always believed that developers are testers as well, and that QA folks are an additional pair of eyes who ensure that nothing is missed. Developers should test their code to the point that they would have a high level of confidence even if it came down to applying their change to a different environment while skipping BVTs.

This brings me to my main point: how do you ensure that your application is well tested before it reaches the treacherous hands of the QA guys, who seem to make everything fail just by touching it? Generally, there are two ways developers test an application:

1. Manual testing: This is pretty useful when a developer is working on user interfaces. There are things that are hard to automate when it comes to user interfaces, be they XAML based (WPF, Silverlight), HTML based (web pages) or simple Windows Forms applications. A few of the things that I prefer to test in the user interface are:

  • Different resolutions: Users may be using the application on high-resolution screens (e.g. 1920x1080) or on projectors and TV screens which may not support anything higher than 1024x768. As a developer, you need to ensure that the application works well in all the supported cases.
  • Different devices: Users may be using a tablet, laptop, desktop computer or phone. Some devices may be running on battery, which may require you to keep heavy calculations on the server.
  • Different browsers
  • Different culture settings: Good for ensuring that data rendered under different cultures still fits on the screen, etc.
  • Different types of data: Generally, the domain model is separate from the model used by the user interface, so a data type mismatch is possible. In such cases you need to ensure that the data is valid, e.g. input boxes should not allow users to enter text if only numerals are allowed.
  • Different amounts of data: For example, if you have a grid, does it start to show a scroll bar when the data does not fit in the available screen area?

2. Automated testing (unit testing): This has a wider impact because it can identify design limitations in the application as well as implementation flaws. The greatest advantage of automated unit testing is the confidence it gives you in identifying the areas of the application that may break if you change a component or behavior (e.g. a database schema). Any well-designed application should allow most aspects of the implementation logic to be tested programmatically. There are many strategies available, but all of this begins with a mindset that targets a high level of test coverage. Just achieving code coverage is not the right target, though, because that can be done in misleading ways and raise false hopes.

Test Driven Development (TDD) has been the preferred way, and I myself have tried to promote it within the different teams I have worked with. Sometimes developers accept it, and sometimes they shrug their shoulders because their timelines are too tight :). The Red -> Green -> Refactor approach is a very helpful one and should be used if timelines and other factors permit.

  • Components should be loosely coupled, preferably interface driven. Use the Inversion of Control (IoC) technique with Dependency Injection (DI) containers (e.g. the Unity container) to ensure that interface implementations are replaceable when running the tests. This can help reduce dependencies on external systems, such as a database hosted on SQL Server or a third-party WCF service which is not available for development testing.
  • Use fakes? There are frameworks like Moq, Microsoft Fakes, etc. which allow you to provide predetermined, simulated implementations for components and methods. My personal preference is to avoid these unless there is no other way.
  • If the application does not allow testing of a certain feature, then there is some issue with the design and it needs to be looked at before it is too late. Unit tests should be the starting point of any implementation and need to be written so that they add value in terms of functionality testing, not merely to meet some coverage criteria. For example, test cases should be written for when a method is invoked with a) null values, b) illegal values, c) application state that may lead to a business rule violation (i.e. the possibility of duplicate data), d) exception scenarios, verifying the error messages returned (i.e. ensuring the application returns a contextual error message), and e) other scenarios.
  • Some level of design is required when writing test cases as well, because in almost all cases we want the unit tests to be re-runnable: tests should pass repeatedly with each execution. This may mean you have to choose the data carefully, e.g. either use a fake data access layer or prepare the database layer so that it works every time (populate it with seed data before test execution and remove it once the test runner has finished).
  • If the aspects of the application are testable, then implementing advanced testing scenarios becomes easy.
  • It makes release management easier as well, because it can help you identify which version of component A works with which version of component B.
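To illustrate the loose-coupling point above, here is a minimal sketch of interface-driven design with a hand-rolled fake. IOrderRepository, OrderService and FakeOrderRepository are hypothetical names for illustration, not from any specific project:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// The abstraction that production code depends on. A DI container
// (e.g. Unity) would normally resolve it to a SQL-backed implementation.
public interface IOrderRepository
{
    IEnumerable<decimal> GetOrderAmounts(int customerId);
}

public class OrderService
{
    private readonly IOrderRepository repository;

    public OrderService(IOrderRepository repository)
    {
        if (repository == null) throw new ArgumentNullException("repository");
        this.repository = repository;
    }

    public decimal GetTotal(int customerId)
    {
        return repository.GetOrderAmounts(customerId).Sum();
    }
}

// In a unit test, a hand-rolled fake removes the database dependency,
// so the test runs anywhere and returns predictable data every time.
public class FakeOrderRepository : IOrderRepository
{
    public IEnumerable<decimal> GetOrderAmounts(int customerId)
    {
        return new[] { 10m, 20m, 12.5m };
    }
}
```

A test would then construct `new OrderService(new FakeOrderRepository())` and assert on `GetTotal` without touching SQL Server.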

Sunday 3 November 2013

Parallel.ForEach is good. List.ForEach is not.

I myself have used List.ForEach(Action<T> action) many times, believing that it was efficient and that it used multiple threads. Apparently it does not :). Now that I know it does not use multiple threads, I can think of one or two good reasons why it may not do so, but none of them is convincing enough.

On the other hand, Parallel.ForEach works as expected and uses the machine's available power to the fullest. To test the difference between the two, I used the following code:

public partial class TestClass
{
    internal void Method1()
    {
        System.Threading.Thread.Sleep(1);
    }
}

In the Main method, I wrote the following:

TestClass[] testClasses = new TestClass[10000];
for (int i = 0; i < testClasses.Length; i++)
{
    testClasses[i] = new TestClass();
}

Stopwatch watch = new Stopwatch();
watch.Start();
Parallel.ForEach(testClasses, (t) => t.Method1());
watch.Stop();
Console.WriteLine(watch.ElapsedMilliseconds);

watch.Reset();
watch.Start();
List<TestClass> list = new List<TestClass>();
list.AddRange(testClasses);
list.ForEach(t => t.Method1());
watch.Stop();

Console.WriteLine(watch.ElapsedMilliseconds);

You can use Process Explorer to witness the spawning of multiple threads when the Parallel version is executed.

Mismatch in Message property between FileNotFoundException and ArgumentNullException

There is an interesting mismatch between the content produced by the Message property of FileNotFoundException and that of ArgumentNullException.

Say your code does something like the following:

throw new FileNotFoundException("Test error", @"\\share\test.txt");

If you print the Message property of the thrown exception, it prints "Test error" and does not provide any details about the value passed in the fileName parameter (i.e. \\share\test.txt).

On the other hand, if your code does something like the following:

throw new ArgumentNullException("param", "message");

printing the Message property of the thrown exception gives details about both arguments: the message and the parameter name.

Somehow this does not seem consistent. We ran into a bit of a problem because of this mismatch; for now, we have resorted to ex.ToString(), as that prints the complete information.
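Putting the two cases side by side, here is a minimal sketch that can be run as a console application:

```csharp
using System;
using System.IO;

class MessageDemo
{
    static void Main()
    {
        try
        {
            throw new FileNotFoundException("Test error", @"\\share\test.txt");
        }
        catch (FileNotFoundException ex)
        {
            // Message carries only "Test error"; the file name is not included.
            Console.WriteLine(ex.Message);
            // The file name is available only via the FileName property
            // (or the full ex.ToString() output).
            Console.WriteLine(ex.FileName);
        }

        try
        {
            throw new ArgumentNullException("param", "message");
        }
        catch (ArgumentNullException ex)
        {
            // Message carries both the text and the parameter name.
            Console.WriteLine(ex.Message);
        }
    }
}
```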

TF80012 - Maybe there is nothing wrong

Last week I encountered a strange error. While trying to publish some items to TFS using Excel's TFS add-in, I had some network connectivity issues, and after that failure, whenever I tried to open TFS query results in Excel using the "Open in Microsoft Excel" option, I saw the following prompt:

TF80012: The document cannot be opened because there is a problem with the installation of the Microsoft Visual Studio Team Foundation Office integration components.  For more information, see the following page on the Microsoft website: http://go.microsoft.com/fwlink/?LinkId=220459.

When I visited the URL mentioned in the prompt, I did not find anything that made sense to me. It was strange, because everything had been working smoothly prior to the publishing failure caused by the network connectivity issues. In the end, the solution to this problem was pretty simple.

Somehow, the COM add-in in Excel for TFS 2012 had been disabled; maybe Excel found the last error caused by the add-in to be severe, or something. When I opened a new Excel workbook, the TEAM option was not showing up. All I needed to do was:

1. Open a new Excel instance and select Options from the File menu.
2. In the popup window, select Add-Ins, select "COM Add-ins" in the "Manage" dropdown at the bottom of the window, and press Go.
3. In the next popup window, select "Team Foundation Add-in" and press OK.

And everything was back to normal.

Sunday 13 October 2013

Cannot resolve the collation conflict in equal to operation

SQL Server collation settings can cause great grief if you do not ensure that the target SQL Server's collation matches that of the database being created or restored.

I recently ran into this issue again. Everything was verified to be working fine in our development environment, and on some machines in the customer's development environment. But when we deployed the application to their test environment, we started to get a strange collation mismatch exception from specific (not all) SQL statements.

Cannot resolve the collation conflict between "XXX" and "YYY" in the equal to operation.

As it turned out, those queries were using temporary tables and performing JOIN operations against permanent tables in our database. Since the SQL Server collation was different from the collation of our database backup, comparison operations in the JOINs started to fail.

The lengthy and permanent solution is to get the collations in sync.

The simpler, quicker solution was to specify the collation in the specific SQL statements. It is required for nvarchar, char and varchar columns; we verified that it was not required where the column type was bit, int, datetime, etc. Something like the following:

SELECT
    t1.col1
FROM
    table_1 t1
INNER JOIN #table_2 t2 
    ON t2.col2 COLLATE DATABASE_DEFAULT = t1.col2

It solved the problem for the moment and bought us some time to find a permanent solution to the problem.

Thursday 4 July 2013

Semantic Logging in WCF service

Implementing the Semantic Logging Application Block (SLAB) in a WCF service is as straightforward as implementing it in a normal .NET application, because it is all .NET :).
 
So I tried to create one simple implementation of semantic logging in a simple WCF service for demo purposes. The idea was not to implement or develop any SLAB pattern for WCF, but to show that a SLAB implementation can be started in as straightforward a manner as for an ASP.NET website or a Windows Forms application. Technically, there is no limitation on exposing your SLAB implementation for capturing business events as a service (WCF, REST, OData or anything else), which means you should be able to capture business events and data in a structured format and process the stored information to make educated decisions (perhaps with a good analysis or visualization engine).
 
I have used the default service that gets created when you choose to create a new WCF Service application (IService1 is the contract and Service1 is the implementation, hosted with the default WCF configuration provided in Web.config). The only change in the service is the usage of the SLAB component.



The implementation of BusinessClassEventListener defaults to logging data to a file, but you can always be more creative about it for your own benefit. Here are the rest of the classes (of course, you need to add the SLAB reference to the project; the recommended way is to use the SLAB NuGet package).
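For readers without the screenshots, here is a minimal sketch of the kind of EventSource-derived class that SLAB builds on. The class name, event source name and event IDs are illustrative, not the exact ones from my demo:

```csharp
using System.Diagnostics.Tracing;

// A strongly typed event source: each business event is a method,
// so log calls are structured rather than free-form strings.
[EventSource(Name = "MyCompany-Demo-Business")]
public sealed class BusinessEventSource : EventSource
{
    public static readonly BusinessEventSource Log = new BusinessEventSource();

    [Event(1, Message = "Service operation started: {0}",
        Level = EventLevel.Informational)]
    public void OperationStarted(string operationName)
    {
        if (IsEnabled()) WriteEvent(1, operationName);
    }

    [Event(2, Message = "Service operation failed: {0}",
        Level = EventLevel.Error)]
    public void OperationFailed(string error)
    {
        if (IsEnabled()) WriteEvent(2, error);
    }
}
```

The service implementation then simply calls BusinessEventSource.Log.OperationStarted(...), and a SLAB listener subscribed to this source (via EnableEvents) routes the events to a file or other sink.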
 
 
 
 
This is just a sample implementation for reference, and definitely not production ready or even "complete" for that matter. I take no responsibility if it fails in your application :). Hope this gives you a good starting point.
 

Sunday 23 June 2013

LightSwitch

If you want to build a quick and simple application to display and edit data, Microsoft has a solution for you: LightSwitch. It has been around for a while, but it now supports two interesting features: an HTML client and SharePoint support. Both are immensely valuable, because now we are talking about quick prototype applications for any platform, not just the desktop client.

LightSwitch applications support four types of data sources: Database, SharePoint List, OData service and WCF RIA service. OData is now treated as a standard way of exposing data operations on the Internet, mostly because it is easy to consume and is HTTP based (which means it can be consumed by most applications running on most devices, such as Windows 8 Store applications, Windows Phone 8 applications, etc.).


If you plan to create tables instead of using existing data sources, you can use the "Create Table" option, which presents a screen to design the table/entity. These entities are created as part of a local database, "ApplicationDatabase".



It comes with built-in implementations of data validation and access control (Forms authentication and Windows authentication, with basic settings to configure permissions), and you are allowed to customize the screens and their navigation. So, while at a high level it is a quick data-editing application maker, it can also be extended to create regular LOB applications. Moreover, the HTML applications can be hosted in SharePoint and distributed via the SharePoint application store.

Compiling a LightSwitch application produces a package (a deployable application) which contains a lot of generated code, including WCF services, JavaScript files, CSS files, XAP files, config files and a client access policy. I created a simple application which had two clients (HTML and desktop), and the package looked like the following:



Since the generated package is a normal .NET application, we should be able to modify the configuration entries to point to different database servers, deploy the ApplicationDatabase on a separate server, host the WCF service on a separate server and point the application to it. A quick search for ApplicationData.svc in the generated package pointed me to the location in the code (a JavaScript file named data.js) where the URL is formed. Ideally I should be able to change this and make it work (I haven't tried it, but it should work if there is no other kind of hard binding in the generated code):



Hoping to use it to create useful applications in double quick time.

Saturday 1 June 2013

SQL Collation mismatch error

I have been put through some grief a few times in the last year by issues caused by SQL collation mismatches. It happened across a few projects that I worked on in that duration.

Each time, it so happened that when we got ready to deploy the code to a different environment, SQL Server had been deployed by a different team in a different geographical location. We provided the deployment script and database backups for the databases present in our development environment. Invariably, this caused a mismatch between the collation setting of the SQL Server instance and that of the database, and caused runtime errors complaining about the mismatch.

The fix was pretty straightforward and in our case, a little less risky because the target environments were development and testing environments. The steps and commands are explained in detail here.

A few things to note are:

  • Do not forget to take a backup of the existing databases: the "Rebuilddatabase" option removes all the databases.
  • The default SQL Server instance name is usually MSSQLSERVER.
  • The user name is case sensitive; we struggled with an administrator vs. Administrator mismatch when running the commands.
  • The command executes in silent mode, so you will not see any progress indicators. If there are any issues when running the commands, you can check the Event Viewer for details.

We can avoid all this pain by being careful when installing SQL Server, though :).

Thursday 30 May 2013

System.Random - not true random

In the .NET Framework, the System.Random class does not offer truly random behavior. If you create two instances of System.Random in quick succession and invoke the Next() method on each of them, they will return the same values.

Random random = new Random();
Random random2 = new Random();

Console.WriteLine(random.Next());
Console.WriteLine(random2.Next());

Console.WriteLine(random.Next(1, 1000000));
Console.WriteLine(random2.Next(1, 1000000));

Console.WriteLine(random.Next(1, 100000));
Console.WriteLine(random2.Next(1, 100000));

Console.WriteLine(random.Next(1, 10000));
Console.WriteLine(random2.Next(1, 10000));


However, if you change the above code to add a small wait (I used Thread.Sleep) between the lines that create the Random instances, the two instances produce different values.

Random random = new Random();
System.Threading.Thread.Sleep(10);
Random random2 = new Random();

Console.WriteLine(random.Next());
Console.WriteLine(random2.Next());

Console.WriteLine(random.Next(1, 1000000));
Console.WriteLine(random2.Next(1, 1000000));

Console.WriteLine(random.Next(1, 100000));
Console.WriteLine(random2.Next(1, 100000));

Console.WriteLine(random.Next(1, 10000));
Console.WriteLine(random2.Next(1, 10000));


This is because Random's parameterless constructor seeds itself from the current time (Environment.TickCount in the .NET Framework), which has limited resolution - instances created within the same timer tick get the same seed and therefore produce the same sequence. This is something you need to keep in mind the next time you are generating random values.
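
The seeding behavior cuts both ways: an explicit seed gives you repeatable sequences (handy for tests), and sharing one instance avoids the identical-sequence trap altogether. A minimal sketch of both points (plain console app, no external dependencies):

```csharp
using System;

class SeedDemo
{
    static void Main()
    {
        // Same explicit seed => identical sequence: Random is deterministic.
        Console.WriteLine(new Random(42).Next() == new Random(42).Next()); // True

        // The usual fix for the trap above: share one instance so successive
        // calls advance a single sequence instead of restarting from one seed.
        Random shared = new Random();
        Console.WriteLine(shared.Next());
        Console.WriteLine(shared.Next());
    }
}
```

If you need cryptographic-quality randomness, System.Random is the wrong tool regardless of seeding; the framework's cryptographic random number generator should be used instead.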

Wednesday 29 May 2013

Procedure or function expects parameter which was not supplied: But it's there.

This is a strange issue but a real one. One would normally assume that when executing a SQL Server stored procedure, passing null as the value for a parameter whose data type is varchar/nvarchar would just work. However, ADO.NET does not send a parameter whose Value is null to the server at all, so you get an exception with a message like the following:
 
"System.Data.SqlClient.SqlException: Procedure or function 'DoSomethingWithString' expects parameter '@inputString', which was not supplied."
 
Fix: It is actually pretty straightforward. Pass DBNull.Value instead of "null".
 
Here is the sample code that you can try.
 
Create a database named "Test" on local database server's default instance. Create a stored procedure in it using following script:
 
Create Procedure dbo.DoSomethingWithString
(
 @inputString NVARCHAR(256)
)
AS
BEGIN
 SET NOCOUNT ON
 SELECT @inputString AS 'Output'
END
 
Create a console application and add the following method:
 
 private static void TestNullStringParameter()
{
            string connectionString = "data source=(local);initial catalog=Test;integrated security=True;";
            try
            {
                using (IDbConnection conn = new SqlConnection(connectionString))
                {
                    using (IDbCommand command = conn.CreateCommand())
                    {
                        command.CommandText = "dbo.DoSomethingWithString";
                        command.CommandType = System.Data.CommandType.StoredProcedure;
                        // The fix - send DBNull.Value instead of null:
                        //command.Parameters.Add(new SqlParameter("inputString", DBNull.Value));
                        // This line reproduces the error: a parameter whose Value is null
                        // is never sent to the server. (The cast avoids the ambiguous
                        // SqlParameter(string, SqlDbType) overload at compile time.)
                        command.Parameters.Add(new SqlParameter("inputString", (object)null));
                        conn.Open();
                        using (IDataReader reader = command.ExecuteReader(System.Data.CommandBehavior.CloseConnection))
                        {
                            if (reader.Read())
                            {
                                Console.WriteLine(reader[0]);
                            }
                        }
                    }
                }
            }
            catch (Exception e)
            {
                Console.WriteLine("Exception!!");
                Console.WriteLine(e);
            }
}
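
In situations like this it can help to wrap the substitution in a tiny helper so callers can keep passing null. A minimal sketch - the names DbParamHelper, ToDbValue and AddParameter are my own inventions, not part of ADO.NET:

```csharp
using System;
using System.Data;

static class DbParamHelper
{
    // Maps a C# null to DBNull.Value so ADO.NET still sends the parameter;
    // non-null values pass through unchanged.
    public static object ToDbValue(object value)
    {
        return value ?? DBNull.Value;
    }

    // Hypothetical convenience wrapper usable with any IDbCommand implementation.
    public static void AddParameter(IDbCommand command, string name, object value)
    {
        IDbDataParameter parameter = command.CreateParameter();
        parameter.ParameterName = name;
        parameter.Value = ToDbValue(value);
        command.Parameters.Add(parameter);
    }
}

class Demo
{
    static void Main()
    {
        Console.WriteLine(DbParamHelper.ToDbValue(null) == DBNull.Value); // True
        Console.WriteLine(DbParamHelper.ToDbValue("hello"));              // hello
    }
}
```

With this in place, the sample above could call DbParamHelper.AddParameter(command, "inputString", null) and never hit the "parameter was not supplied" exception.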
 

Sunday 28 April 2013

Semantic Logging Application Block in Enterprise Library 6.0

Enterprise Library 6.0 has been released. You can find most of the stuff you need to know about it here. One of the most exciting features of the pack is the Semantic Logging Application Block (SLAB). It was available in CTP form until now, but is now an official release as part of EntLib.
 
At its core, it presents a pretty powerful set of classes which are nicely decoupled from each other and allow LOB applications to log information (semantically, no pun intended) in two ways - in-process and out-of-process. In-process logging uses ObservableEventListener to efficiently log information, while out-of-process logging uses a combination of the ETW infrastructure and the Semantic Logging Service (available as a separate download here).
 
So, I decided to create a simple sample to experience its capabilities. Below is the code I used -
 
A simple Business Class -
public class BusinessClass
    {
        public void DoBusinessEvent(int token)
        {   
            if (token % 2 == 0) { 
                BusinessClassEventSource.Instance.EvenTokenEventOccurred(token); 
            }
            else { 
                BusinessClassEventSource.Instance.OddTokenEventOccurred(token); 
            }
            // Do something interesting like counting to 10.
        }
    }
 
A Custom Event Source -
 
public class BusinessClassEventSource : EventSource
    {
        private const int Even_Token_EventId = 1;
        private const int Odd_Token_EventId = 2;
        private static BusinessClassEventSource businessClassEventSource = new BusinessClassEventSource();
 
        private BusinessClassEventSource()
        {
           // Nothing to initialize; the private constructor enforces the singleton.
        }
 
        public static BusinessClassEventSource Instance
        {
            get
            {
                return businessClassEventSource;
            }
        }
 
        public void EvenTokenEventOccurred(int token)
        {
            // WriteEvent's payload must match the event method's parameters,
            // so the int token is passed through as-is (not ToString()-ed).
            this.WriteEvent(Even_Token_EventId, token);
        }
 
        public void OddTokenEventOccurred(int token)
        {
            this.WriteEvent(Odd_Token_EventId, token);
        }
    }
 
A custom EventListener -
 
public class BusinessClassEventListener : EventListener, IObservable<EventEntry>
    {
        public IDisposable Subscribe(IObserver<EventEntry> observer)
        {
            // Subscription is not wired up in this simple sample.
            return null;
        }
 
        protected override void OnEventWritten(EventWrittenEventArgs eventData)
        {
            Console.WriteLine(eventData.ToString());
        }
    }
 
And the main program -
 
class Program
    {
        static void Main(string[] args)
        {
            var listener1 = Microsoft.Practices.EnterpriseLibrary.SemanticLogging.ConsoleLog.CreateListener();
            listener1.EnableEvents(BusinessClassEventSource.Instance, EventLevel.LogAlways);
 
            var listener2 = new BusinessClassEventListener();
            listener2.EnableEvents(BusinessClassEventSource.Instance, EventLevel.LogAlways);
 
            BusinessClass businessClass = new BusinessClass();
            businessClass.DoBusinessEvent(100);
            businessClass.DoBusinessEvent(101);
            
            listener1.DisableEvents(BusinessClassEventSource.Instance);
            listener2.DisableEvents(BusinessClassEventSource.Instance);
 
            Console.ReadLine();
        }
    }
 
The good thing about SLAB is that we can write custom formatters, event sinks etc. if the out-of-the-box capability does not suffice. SLAB provides default wiring to use SQL Server, flat file, console, rolling flat file, Windows Azure Table storage and others as destinations for logged information.

Sunday 7 April 2013

Insufficient resources - MSMQ: Misleading error message

My team was recently performing performance tests on an integration solution which pushed messages to MSMQ queues hosted on a remote server via the MSMQ send adapter of BizTalk Server. The functionality used the most plain-vanilla implementation possible for transactional queues, and it worked without a hitch in the development and test environments.

It was a different story on the load test environment though. We were in for a surprise. After we had completed the first round of load testing of the system, we noticed quite a few warnings and errors in the event viewer and all of those pointed to "insufficient resources". The exact error message was -

There are insufficient resources to complete the send operation.
 For example, this could happen when you are trying to send a large message (message larger than 4095 KB) to a non-transactional queue. Large messages can be sent only to transactional queues.


This is actually a very misleading message, something we established soon after inspecting the messages in the suspended-messages queue of BizTalk Server. Most of the messages were in the 2 KB - 20 KB range, so the size of the message was not an issue at all.

More intrigue awaited us: we could not send even a single message, even after clearing all the suspended messages from BizTalk Server. We checked the size of the message log and the MSMQ directory on the destination server - everything was within the quota limits set for MSMQ and the queue respectively. Changing the destination queue did not help either - we got the same error message, but that did help establish that the issue was not with the destination server. Somehow the servers running the MSMQ send handler were choked as far as the MSMQ transport was concerned.

So we did what every sane person does - turn to the search engines. One of the top search results was "Insufficient Resources? Run away, run away!", which lists 11 possible reasons. In our case, it turned out to be the 3rd reason. When we checked the "%windir%\system32\msmq\" folders on the machines running the host instances of the MSMQ send ports, we found a lot of MSMQ messages that could not be sent (because of our system's issues), and their cumulative size had grown beyond 1.7 GB. Once we cleared these folders on the client machines, the process started working again.
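
If you suspect the same condition, a quick way to check is to total up the size of the local MSMQ storage folder from PowerShell - a sketch, using the path from this post and assuming MSMQ storage has not been relocated:

```
# Total size, in GB, of files under the default MSMQ storage folder.
"{0:N2} GB" -f ((Get-ChildItem "$env:windir\System32\msmq\storage" -Recurse |
    Measure-Object -Property Length -Sum).Sum / 1GB)
```

A figure creeping toward the MSMQ storage quota on a *sending* machine is a strong hint that outgoing messages are piling up locally.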

Learning - beware of misleading error messages.

Monday 1 April 2013

MSMQ vs SSSB vs Azure Service Bus

There are so many queuing solutions available, both commercial and open source, that it is not funny. Some of them come built into Microsoft products: MSMQ, SQL Server Service Broker, Azure Service Bus and Service Bus for Windows.
 
Like every other technology space crowded with similar solutions, each product here has its strengths and weaknesses. Well, that makes things easier for us, the confused souls, in choosing the most appropriate option. An interesting point to note is that all of these solutions provide similar core functionality, e.g. reliable message delivery, transactional sanctity, poison-message handling and journaling. Here is my understanding of three very viable options for a queue-based implementation -
 
MSMQ - Possibly one of the oldest queuing solutions available in the Windows ecosystem. It has evolved over the years and is perfect for cases where you want reliable message delivery across servers that perform different jobs, especially when one (or all) of the systems falls into the "occasionally offline" category. There is even a feature that allows MSMQ to be exposed over HTTP/HTTPS.
 
SQL Server Service Broker - The perfect choice for cases where you need messages to be processed in sequence inside SQL Server itself. Of course, it can be used otherwise too.
 
Azure Service Bus/Service Bus for Windows - Both are great for cases where two systems need to talk despite network challenges, e.g. firewalls across organizations. Both Azure Service Bus and Service Bus for Windows allow communication over HTTP and TCP. They also have the Fabric Controller as their core component and can therefore scale easily with very little downtime. Not to forget that 1) they support the AMQP standard and can therefore easily be consumed from applications running on other platforms, 2) they support Topics (publish/subscribe) in addition to regular queues, allowing multiple subscribers for an item, 3) they generally support many messaging patterns, and 4) they support better security mechanisms via ACS/STS.

Monday 18 March 2013

Populating billions of rows in a SQL Server table

A few learnings from a recent experience when my team was trying to fulfil a benign-sounding requirement - populate a few billion rows in a table. The table in question had 140 columns - 30 mandatory and the rest optional. The optional columns were also marked as SPARSE in order to save space. Since the table was to be used for performance testing, it was not necessary to populate all the columns of each row.
 
We started out by doing some basic math on how much disk space we would need - calculate the space required by the mandatory columns per row and multiply by the number of rows. In our case, it came to 120 GB. Later we used Microsoft's recommended approach to get a better estimate, and the figure jumped to 190+ GB. That settled the questions about space requirements.
 
Now came the most intriguing part: how to generate so many rows with minimum effort. Since the database was to be used for performance testing, we figured that not all of the rows needed to be unique.
 
We used the Data Generation Plan that comes free with the database project template in Visual Studio 2010 Ultimate edition to generate 1 million unique rows. Then we duplicated the rows, with slight variations in some random columns, using INSERT statements to reach 4 million rows, and exported all the rows into a CSV file. All the activities detailed so far were done in the development environment.
 
Then we took the exported CSV file containing 4 million rows to the test environment and used the BULK INSERT command to import the rows in parallel sessions in a loop. Before doing so, we changed the recovery model of the database to Bulk-Logged and changed the file growth settings of the log and data files to grow by 3 GB instead of the default "By 10 percent, Limited to 2097152 MB". The process took a full day to complete.
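
Each parallel session ran a loop around a statement roughly like the following - a sketch with hypothetical table and file names, not our actual script:

```sql
-- Bulk-Logged recovery model assumed (set above). TABLOCK takes a
-- bulk-update lock, which enables minimally logged inserts into a heap;
-- BATCHSIZE commits in chunks so a failure does not roll back the whole file.
BULK INSERT dbo.PerfTestTable
FROM 'D:\Data\rows_4m.csv'
WITH
(
    FIELDTERMINATOR = ',',
    ROWTERMINATOR = '\n',
    TABLOCK,
    BATCHSIZE = 100000
);
```

Bulk-update locks are compatible with each other, which is what lets multiple sessions bulk-load into the same heap in parallel.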
 
One interesting learning was that the BCP utility runs about 40% slower than the BULK INSERT command: importing 4 million rows took 5-6 minutes with BCP against 3-4 minutes with BULK INSERT in the development environment.
 
I am guessing the process would have been much faster had we also introduced a partitioning scheme for the table, as that might have allowed faster parallel data loads for rows with different partition key ranges.