Sunday 22 December 2013

Testing it all the way

I have always believed that developers are testers too and that QA folks are an additional pair of eyes who ensure that nothing is missed. Developers should test their code to the point that they would be highly confident applying their change to a different environment even if it came down to skipping BVTs.

This brings me to my main point: how do you ensure that your application is well tested before it reaches the treacherous hands of the QA folks, who seem to make everything fail just by touching it? Generally, developers test an application in two ways:

1. Manual testing: This is pretty useful when a developer is working on user interfaces. Some things are hard to automate when it comes to user interfaces - be they XAML based (WPF, Silverlight), HTML based (web pages) or simple Windows Forms applications. A few of the things that I prefer to test in the user interface are:

  • Different resolutions: Users may run the application on a high resolution screen (e.g. 1920x1080) or on a projector or TV screen that supports nothing higher than 1024x768. As a developer, you need to ensure that the application works well in all supported cases.
  • Different devices: Users may be on a tablet, laptop, desktop computer or phone. Some devices run on battery, which may require you to keep heavy calculations on the server.
  • Different browsers
  • Different culture settings: Good for ensuring that data formatted for different cultures (dates, numbers, translated strings) still fits on the screen.
  • Different types of data: Generally the domain model is separated from the model used by the user interface, so a data type mismatch is possible. In such cases you need to ensure that the data is valid - e.g. input boxes should not allow users to enter text values where only numerals are allowed.
  • Different amounts of data: One such case is a grid - does it start to show a scroll bar when the data does not fit in the available screen area?
  • <>

2. Automated testing (unit testing): This has wider impact because it can identify design limitations in the application as well as implementation flaws. The greatest advantage of automated unit testing is the confidence it gives you in identifying the areas of the application that may break if you change a component or behaviour, e.g. the database schema. Any well designed application should allow most aspects of its implementation logic to be tested programmatically. There are many strategies available, but all of them begin with a mindset that targets a high level of test coverage - though just achieving code coverage is not the right target, because that can be done in misleading ways and raise false hopes.
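To make the idea concrete, here is a minimal sketch of such a unit test. It is in Python rather than .NET purely for brevity, and the function and test names are invented for illustration - the point is that both the happy path and an illegal input are checked:

```python
import unittest

def calculate_discount(price, percent):
    """Return the discounted price; reject illegal inputs."""
    if price < 0 or not (0 <= percent <= 100):
        raise ValueError("price must be >= 0 and percent in [0, 100]")
    return price * (1 - percent / 100.0)

class CalculateDiscountTests(unittest.TestCase):
    def test_happy_path(self):
        # 10% off 200.0 should be 180.0
        self.assertEqual(calculate_discount(200.0, 10), 180.0)

    def test_illegal_percent_raises(self):
        # An out-of-range percentage must be rejected, not silently applied
        with self.assertRaises(ValueError):
            calculate_discount(200.0, 150)
```

Run with `python -m unittest`; a failing run here immediately tells you which behaviour of the component broke.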

Test Driven Development (TDD) has been my preferred way, and I have tried to promote it within the different teams I worked with - sometimes developers accept it, and sometimes they shrug their shoulders because their timelines are too tight :). The Red -> Green -> Refactor approach is a very helpful one and should be used if timelines and other factors permit.
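As a tiny illustration of that cycle (again in Python, with invented names): write a test that fails because the code does not exist yet (red), write just enough code to make it pass (green), then clean up with the test as a safety net (refactor):

```python
# Red: this test is written before any implementation exists, so the
# very first run fails with a NameError.
def test_slugify_replaces_spaces():
    assert slugify("Testing it all the way") == "testing-it-all-the-way"

# Green: the simplest implementation that makes the test pass.
def slugify(title):
    return title.lower().replace(" ", "-")

# Refactor: behaviour stays the same and the test keeps passing while
# the implementation is improved (e.g. stripping punctuation later).

test_slugify_replaces_spaces()
```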

  • Components should be loosely coupled - preferably interface driven. Use the Inversion of Control (IoC) technique with Dependency Injection (DI) containers (e.g. Unity) to ensure that interface implementations are replaceable when running the tests - this can help in reducing dependency on external systems such as a database hosted on SQL Server, or a third party WCF service which is not available for development testing.
  • Use fakes? There are frameworks like Moq, Microsoft Fakes etc. which allow providing predetermined, simulated implementations for components and methods. My personal preference is to avoid these unless there is no other way.
  • If the application does not allow a certain feature to be tested, then there is an issue with the design and it needs to be looked at before it is too late. Unit tests should be the starting point of any implementation and need to be written so that they add value in terms of functionality testing, not just to meet certain coverage criteria. For example, test cases should cover a method being invoked with a) null values, b) illegal values, c) application state that may lead to a business rule violation (e.g. possibility of duplicate data), d) exception scenarios and the error messages they return (ensure the application returns a contextual error message), and e) other scenarios.
  • There is some level of design required when writing test cases as well, because in almost all cases we want the unit tests to be re-runnable - tests should pass repeatedly with each execution. That may mean choosing your data carefully, e.g. either use a fake data access layer or prepare the database layer so that it works every time (populate it with seed data before test execution and remove the data once the test runner has finished).
  • If the aspects of the application are testable, then implementing advanced testing scenarios becomes easy.
  • It makes release management easier as well, because it can help you identify which version of component A works with which version of component B.
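The loose-coupling point above can be sketched like this - a Python stand-in for the interface-driven approach, with `OrderRepository`, `DiscountService` and friends all made-up names. The service depends only on an abstraction, so the test injects an in-memory fake instead of the SQL-backed implementation:

```python
from abc import ABC, abstractmethod

class OrderRepository(ABC):
    """The abstraction the service depends on (an interface in .NET terms)."""
    @abstractmethod
    def get_total(self, customer_id): ...

class SqlOrderRepository(OrderRepository):
    """Real implementation; would talk to SQL Server in production."""
    def get_total(self, customer_id):
        raise NotImplementedError("requires a database connection")

class FakeOrderRepository(OrderRepository):
    """In-memory replacement injected during tests."""
    def __init__(self, totals):
        self._totals = totals
    def get_total(self, customer_id):
        return self._totals.get(customer_id, 0)

class DiscountService:
    # Constructor injection: a container (e.g. Unity) would normally
    # resolve OrderRepository; the test simply passes a fake directly.
    def __init__(self, repository):
        self._repository = repository
    def is_premium(self, customer_id):
        return self._repository.get_total(customer_id) > 1000

# The test needs no database at all.
service = DiscountService(FakeOrderRepository({"c1": 1500}))
assert service.is_premium("c1") is True
assert service.is_premium("c2") is False
```

Swapping `SqlOrderRepository` for `FakeOrderRepository` is exactly the replaceability that interface-driven design buys you.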
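The re-runnability point maps directly onto test fixtures: populate known seed data before each test and remove it afterwards, so every run starts from the same state. A sketch using an in-memory sqlite database in place of SQL Server (table and names are invented):

```python
import sqlite3
import unittest

class CustomerQueryTests(unittest.TestCase):
    def setUp(self):
        # Seed a throwaway database before every test so each run
        # starts from exactly the same state.
        self.conn = sqlite3.connect(":memory:")
        self.conn.execute("CREATE TABLE customers (id INTEGER, name TEXT)")
        self.conn.execute("INSERT INTO customers VALUES (1, 'Alice')")
        self.conn.commit()

    def tearDown(self):
        # Clean up once the test has finished, leaving nothing behind.
        self.conn.close()

    def test_customer_count(self):
        count = self.conn.execute(
            "SELECT COUNT(*) FROM customers").fetchone()[0]
        self.assertEqual(count, 1)
```

Because the data is recreated in setUp every time, this test passes on the first run and on the hundredth.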
