Monday, November 9, 2009

Impact of ALM Digite Enterprise Tool in Handling Manual and Automation Testing

As a testing manager in Digité, I am perhaps best placed to share Digité Enterprise’s impact on my and my team’s work.

Earlier, we used to manage our test inventory in Excel, spread across multiple sheets named module-wise. Test results were similarly tracked in Excel, and we depended on a combination of filters and pivot tables to identify failures, resubmit them for execution, and so on.

Since we moved all our manual test inventory from Excel to Digité Enterprise, life has become simpler.

Earlier, in any regression test cycle, life was difficult for a team lead trying to consolidate test status when our suite was divided among a team of 20 testers. At the end of the day, she would chase everyone to mail back their testing status, find out how many defects had been identified, calculate the pending test cases, and then redistribute them so as to finish testing on schedule.

With Test Inventory in Digité Enterprise, she executes a report to gather the status. This report gives her the current testing status, number of pending cases, number of failed cases, number of defects identified in the system, status of those defects and when these are expected in the next development cycle.

Further, we have also tracked our automated test case inventory in Digité Enterprise. As the automation team progresses in automating more test cases, those cases get excluded from the manual testing cycle using a simple filter in the product. Earlier, this information was maintained in Excel by my team leads for both automation and manual testing. Keeping it consistent by checking Excel files manually and then generating reports was a time-consuming challenge; worse, the reports would never match, because they were produced at different points in time against dynamic data. Now we can get all sorts of reports: module-wise failures, priority-wise failures, tester-wise defect identification.
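To make the idea concrete, here is a minimal, hypothetical sketch of the kind of filter described above: excluding already-automated cases from the manual regression cycle. The field names ("id", "module", "automated") are illustrative assumptions, not Digité Enterprise's actual schema.

```python
# Hypothetical inventory records; "automated" marks cases the automation
# team has already converted, so they drop out of the manual cycle.
inventory = [
    {"id": "TC-001", "module": "Login",  "automated": True},
    {"id": "TC-002", "module": "Login",  "automated": False},
    {"id": "TC-003", "module": "Search", "automated": False},
]

def manual_cycle(cases):
    """Return only the cases that still need manual execution."""
    return [tc for tc in cases if not tc["automated"]]

pending = manual_cycle(inventory)
```

In the product this is a saved filter rather than code, but the effect is the same: the manual team's worklist shrinks automatically as automation coverage grows.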

Automation asset management would have been very difficult without it being online in Digité Enterprise. With 3000 QTP scripts and around 1500 business functions, we track the 7000 manual test cases (sanity, smoke, and detailed test cases) that get automated. Traceability in Digité Enterprise has helped us build a very good information repository. Product changes and their impact on automation business functions are tracked through CRs in the automation project, and we track the product branches on which each change is applicable. With our Subversion integration, every change is reviewed directly in Digité Enterprise. The same is true for defect tracking.

We plan our Test Automation initiative in monthly sprints. For every sprint, we use the package functionality. The team logs effort on tasks split across automation development phases. With activity codes correctly set and a few reports, we are able to assess every sprint and compare maintenance versus new development effort on the automation framework.

The Automated Regression Inventory runs every night, and results are available for analysis the next day. After completing root cause analysis of all failures, product defects are filed in Digité Enterprise. A report gives an assessment of how well the nightly automation ran. A number of metrics are automatically calculated: nightly test trends across a month, the last run's defects (in the product and in the automation scripts), script failures (false alarms and genuine), the current status of product defects and the number of script failures caused by them, and a module-wise breakdown.
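The classification behind such a report can be sketched in a few lines. This is an illustrative reconstruction, not Digité's actual report logic; the status and cause labels are assumptions.

```python
# Classify nightly-run failures into product defects, script defects,
# and false alarms, then summarize them module-wise.
from collections import Counter

results = [
    {"script": "login_01",  "module": "Login",  "status": "pass", "cause": None},
    {"script": "login_02",  "module": "Login",  "status": "fail", "cause": "product_defect"},
    {"script": "search_01", "module": "Search", "status": "fail", "cause": "script_defect"},
    {"script": "search_02", "module": "Search", "status": "fail", "cause": "false_alarm"},
]

def summarize(run):
    causes = Counter(r["cause"] for r in run if r["status"] == "fail")
    by_module = Counter(r["module"] for r in run if r["status"] == "fail")
    return {
        "failures": sum(causes.values()),
        "causes": dict(causes),              # genuine vs. false alarms
        "module_failures": dict(by_module),  # module-wise breakdown
    }

report = summarize(results)
```

Accumulating one such summary per night is what makes the month-long trend reporting described above possible.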

In the two years since we moved our entire testing process to Digité Enterprise, our manual testing team has shrunk from 10 to 3, while we have kept our Test Automation team stable, and our test repository has grown from 14,000 to 19,000 cases. That is a huge productivity jump.

Saturday, November 7, 2009

What it takes to build Successful Automation Framework


Having handled and reviewed various automation projects over a considerable span of time, I thought of putting down in this blog the important lessons learnt while building a Test Automation framework for our product. The first part covers the problem areas, which is the groundwork that helps define part two. The second part covers the characteristics of a successful automation initiative, and the last covers the steps to build a successful automation framework.

I have split this blog post into the following parts –

1. Problems in Test Automation
1.1 Perspective and Vision Problems
1.2 Technology Problem
1.3 People and Processes Problems

2. Characteristics of Test Automation Framework

3. Important Steps to Build Successful Automation Framework
3.1 Tool Selection
3.2 Automate Failure Analysis (FA)
3.3 Define Maintenance Plan and Process
3.4 Define Infrastructure Requirements
3.5 Project Management

4. Need of an Integrated Automation Asset Management System

5. To Conclude

6. References


Overview

Test Automation is a full-time development project that requires careful planning. It has to follow all the phases of a Software Development Life Cycle (SDLC) for successful implementation. Let us analyze the loopholes and problem areas of an automation project; this analysis helps in deciding what the characteristics of a successful Test Automation implementation should be.

Problems in Test Automation

I have seen unsuccessful Test Automation initiatives in the past, which has helped me ensure that the same mistakes are not repeated. Going through the problem areas has certainly been helpful in developing a successful automation framework, along with evaluating the various tools, technologies, and methodologies.

Perspective and Vision Problems

a. Spare-time Test Automation. People are allowed to work on Test Automation at their own pace, or treat it as a back-burner project to be worked on as and when the test schedule allows. This limits the time and focus the initiative requires.

b. Lack of clear goals. There are many good reasons for taking up Test Automation. It can save time, simplify testing and improve the testing coverage. It can also motivate the testers. But it's not likely to do all these things at the same time. Different parties typically have different expectations. These need to be stated or disappointment is likely.

c. Lack of experience. Junior programmers trying to test their capabilities often tackle Test Automation projects. Since the overall design is not thought through, the automation project becomes difficult to maintain as the AUT evolves.

d. Thinking R&D rather than testing. Many find automating a product the more interesting R&D item, but ultimately automation is supposed to cut down manual testing effort. Some automation projects provide convenient cover stories for why their contributors aren't more involved in the testing; rarely does the outcome contribute much to the test effort.

e. Technology focus. How the software can be automated is a technologically interesting problem, but focusing on it can lose sight of whether the result meets the testing needs.

Problems are usually lurking in the software long before testing begins, but testing brings them to light. Testing is difficult enough in itself; when testing is followed by retesting of the repaired software, people can get worn out. Their question is: will this testing ever end? This desperation can become particularly acute when the schedule dictates that the software should be ready now. If only it weren't for all the testing! In this environment, Test Automation may be a ready answer, but it may not be the best; it can be more of a wish than a realistic proposal.

Technology Problem

a. AUT UI – The most important problem in automation relates to the UI. At times the AUT is not coded according to accessibility standard practices, whereas automation requires every UI object to have unique properties by which it can be identified correctly.

b. Data Input – When data is embedded in the test cases, the code must be updated every time the data changes.

c. Dependency between test scripts – Dependencies between test scripts are sometimes required but can create problems if not dealt with correctly, especially when you want large batches to run to completion even when there are a few failures.

d. Object recognition problem – This problem exists for non-standard controls on web pages, third-party products, etc. As technology evolves, the automation tool itself may not be equipped to handle the ways in which the HTML gets rendered.

e. Test case scheduling and sequencing problem – This problem starts showing up as the inventory grows.

f. Failure Analysis – Failure analysis becomes complex and time consuming as the automation code complexity and the test repository size grow.

g. Total Execution Time – Scripts become time consuming and complex due to heavy code dependency, so the total time for automated test execution keeps increasing.

h. Reporting – Consolidating and summarizing the test results turns laborious if kept manual.

i. Maintaining Automation Inventory – With version changes in the AUT, those changes need to be propagated to the automation inventory, and one needs to plan and estimate for this as AUT enhancements are pipelined. Many times, automation assets are ignored during an impact analysis. The automation inventory is itself a software product; it has its own development and maintenance workload, and management expectations need to be aligned with this fact.

j. Browser Compatibility – Most automation tools promise broad browser support in their sales demo, but in actual use one needs to find various workarounds to resolve the issues that arise.
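The data input problem (b) above is usually mitigated by externalizing data from scripts. A minimal, hypothetical sketch of that separation follows; the login scenario, column names, and `fake_login` stub are all illustrative assumptions standing in for a real UI driver.

```python
# Data-driven sketch: test logic reads its inputs from CSV instead of
# hard-coding them, so a data change needs no code change.
import csv
import io

csv_data = """username,password,expected
alice,secret1,success
bob,wrongpw,failure
"""

def fake_login(username, password):
    # Stand-in for driving the AUT; real code would call the UI driver.
    return "success" if password == "secret1" else "failure"

def run_data_driven(reader):
    outcomes = []
    for row in reader:
        actual = fake_login(row["username"], row["password"])
        outcomes.append((row["username"], actual == row["expected"]))
    return outcomes

results = run_data_driven(csv.DictReader(io.StringIO(csv_data)))
```

Adding a new data combination is then a one-line CSV edit rather than a script change.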


People and Processes Problems

a. Knowledge Transfer - An automation team never remains static; there is bound to be attrition. Without good documentation, new team members struggle to pick up what has already been automated. Hence coding conventions and documentation, traceability to manual test cases, and regular capture of metrics are highly important. Technical and process reviews play an important role in molding all team members to follow the team standards so as to have consistent, quality, maintainable deliverables. Another important practice is regular knowledge transfer and rotation of roles and responsibilities, which lowers dependence on individual expertise.

b. Technical Expertise - Lack of knowledge of the advanced features of the automation tool and the product creates limitations. In one 'Automation Tool Evaluation' exercise, it became evident that we could have used advanced automation tool features that would have made the test suite simpler and more maintainable.

c. Automation Tool Maintenance - After continued usage, the tool can eventually develop memory leaks and UI recognition issues. If covered under an AMC (annual maintenance contract), such issues can be quickly escalated to the right people who can resolve them. The issues we experienced were related to automation tool upgrade patches that were not applied, a maintenance contract that had lapsed, and senior management who were not keen on an AMC.

d. Automation Script Inventory Maintenance - Every new release changes the product, i.e. the Application Under Test (AUT). A major UI revamp of the product directly affects the automated test cases, so the automation suite has to absorb the impact of each and every change. Maintenance of the existing automation inventory is generally overlooked, but it is unavoidable. It has to be accounted for in cost, effort, and time estimation exercises alongside the estimates for the AUT changes themselves; not doing so incurs the same cost, effort, and time as a delay at the end of automation script development. Doing this exercise together with the AUT changes helps, since the impacted scripts can be used for testing the AUT right after the changes and the number of required changes is definite. Otherwise it becomes an endless discovery of product changes whenever the automation suite has to run after the product has changed but before the automation impact has been applied.

e. Business Domain Expertise - How well are our regression tests documented? It is common to use lists of features to be checked, or one-liner test descriptions. Under tight release deadlines, tests are written assuming that the person executing them has enough experience with the AUT. This is not enough for Test Automation: we need to make a dumb machine with no prior knowledge of the AUT understand each and every step. Hence, either there has to be thorough documentation or the person automating the test cases has to know the AUT well. The project manager needs to account for the time spent analyzing test cases; otherwise this activity gets treated as unimportant and badly affects the end deliverable.

With all the above points in mind, the automation team should include a business domain expert, a technical expert, and a process expert along with the team members. The same people may play multiple roles, but business, process, and technical expertise is a must for successful automation projects.
Having understood the problems in a Test Automation framework, let's now define what we need to achieve in setting up a successful Test Automation infrastructure.

Characteristics of Test Automation Framework

1. Simple: It should be easy to learn, debug, and deploy the code, and set the test running.

2. Durable: It should withstand small UI Object inconsistencies such as, position and size changes.

3. Defined: Test scripting standards should be well defined.

4. Versatile: The same set of automated test cases should run on different release branches and different client-browser, server combinations and localized setups.

5. Reusable: It should be easy to append, modify, or reuse existing test cases in the automated scripts. Test data and test scripts should be separate so that the same test can run on multiple data points.

6. Extensible: It should be easy to add new test cases and new components as the application evolves.

7. Integrated: Test results should be integrated with a good reporting engine, providing online test reports with drill-down to failed tests and trend data.

8. Consistent: Test scripting standards should be implemented so that even new recruits will add code in a uniform manner and rework can be avoided.

9. Re-viewable: A process needs to be established to review the test cases.

10. Testable: A process needs to be set to test and certify automation scripts and maintain the test repository.

11. Maintainable: Tests should have low maintenance; it should be easy to maintain the automated test cases across new product releases.

12. Adaptable: The tests should be easily adaptable to modifications in the application.

13. Easy to document: Testing standards should be maintained so that every step taken in setting up the automation infrastructure is documented.

14. Ease of Storage: Test cases should be easily stored, with version control implemented for them.

15. Test Independence: It should be possible to run test cases in any order.

16. Repeatable: Automated test cases should be able to run repeatedly in the nightly build and be put in parallel schedules.

17. Coverage Traceability – It should be possible to define traceability from test cases to automation scripts and capture various traceability metrics. These metrics, along with the script repository, help in scheduling priority-wise or module-wise automation suites depending on whether the release is major, minor, or a patch.
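The coverage-traceability idea can be sketched as a simple mapping from manual test cases to the scripts that automate them, from which a coverage metric falls out. All names here are illustrative assumptions, not actual repository identifiers.

```python
# Manual test case -> automation script (None means not yet automated).
trace = {
    "TC-001": "qtp_login_01",
    "TC-002": "qtp_login_02",
    "TC-003": None,
}

def coverage(mapping):
    """Fraction of manual cases that have an automation script."""
    automated = sum(1 for script in mapping.values() if script)
    return automated / len(mapping)

pct = coverage(trace)
```

Grouping the same mapping by module or priority gives the module-wise and priority-wise scheduling inputs mentioned in item 17.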

Important Steps to Build Successful Automation Framework

Setting Senior Management Expectations

Generally, due to a lack of information and understanding of automation projects, senior management loses patience with Test Automation teams. Hence, it is essential that the project manager shares industry experience with automation projects. The ROI is long term, and it pays off well then, but the initial investment is very high in terms of cost, effort, resources, and timeline.

Following are some important points to be kept in mind when setting senior management expectations:

1. Once the project starts, involve the senior management and keep them posted on the project progress and issues/challenges faced. This gives better buy-ins and also raises understanding from the management side. It also provides a guideline on future targets by extrapolating current numbers. Regular meetings with all stakeholders simplify communication across the automation team and senior management. It also reduces surprises on both the sides—management and the automation team.

2. Capture and share various process, effort/time, technical, and ROI metrics. There is a definite way in which automation ROI is captured; you can check the following link for a better understanding.
http://www.dijohn-ic.com/test_automation_roi.pdf

3. Define the scope of the automation project. Not everything can be automated; the project manager needs to define the scope and requirements and make management aware of them.

In the next part, we will cover tool selection and define the automation test strategy and framework.

Tool Selection

There are enough resources on the Internet on how to evaluate an automation tool. A few important points that I have learnt over a period of time are given below.

1. Study the application under test: its UI and integration points. Study the business and the test cases if a test case inventory exists. Study the UI gadgets being used, such as Flex, GWT, and Web 2.0 components, and whether the AUT is browser based or a desktop application running on various operating systems, and so on.

2. Understand management's investment goals: whether the organization can afford a licensed or open source tool, investment in team resources with adequate skill levels, investment in infrastructure, etc. Get organization buy-in for continued AMC support for a licensed tool, and check whether training can be acquired.

3. Once you know the AUT well and the investment goals of the organization, there are a number of automation tools in the market that can be evaluated. At the core is an exercise to evaluate the UI driver best suited to the AUT. Define the automation architecture requirements and evaluate the tool accordingly.

4. Evaluate the tool from the reporting perspective. This piece will be heavily used once the inventory starts building up; it will be a great impediment if results cannot be drilled down or rolled up at the release, module, and test script level.

5. Every automation tool has UI recognition configuration settings. Evaluate the tool on UI recognition strategies.

6. Check the automation tool's footprint: the OS and memory requirements, etc. Memory leaks become an issue as the inventory grows.

7. Evaluate the tool's support system. This is essential when the team is ramping up and needs hand-holding with various recognition issues, which may turn out to be tool issues needing workarounds.

8. The tool should suit all the characteristics defined in section 2. :-)


Define Automation Strategy and Framework

Start by defining the objective of the Test Automation framework. If the objective is reducing regression effort, then automating the bulk of repetitive test cases makes sense. If the objective is reducing field defects, then automation alone is not the answer: defining good test cases and then automating them will reduce field defects. Test automation only cuts down the time it takes to run the test cases manually.

Here are some important components of the Test Automation framework:

1. Use a keyword-driven approach or a hybrid framework well suited to the AUT and the organization. It can be generic or project specific.

2. Develop scripts independent of the data, and define the data plan; business analysts can do this job best. Scripts then execute over rich data combinations. Data input should be accepted in CSV format, a database, or a spreadsheet such as Microsoft Excel; keeping data in any of these formats helps in querying it when the same data is used across different scripts. Data planning is an important aspect of the automation framework and should be thought through before it is implemented.

3. Separate out application-specific functions, and define good exception handling for the control-specific functions that are bound to the automation tool's API. Application-specific functions should make use of the control-specific functions.

4. Define a scheduler and batch driver framework that scales for parallel runs. This becomes an essential requirement once the inventory grows and quick automation test results are expected for certifying AUT releases. At times only certain scripts need to run for a smaller patch certification, so the batch driver should allow flexible schedule definition.

5. Make use of a version control system to house the scripts. This proves beneficial once the product (AUT) has release branches and the automation inventory needs to run on all of them.

6. Define a reporting framework for quick debugging of failed scripts. It should take care of both summary and detail-level reporting; this can be level driven, with a flag that defines the level of detail required in a result report. Trend data helps in making release decisions, so the reporting framework should also cover trend results.

7. Define how the automation framework integrates with the continuous build system. In such cases CruiseControl is well suited and easily configurable: the automation suite can be launched by just adding another task to the CruiseControl XML.

Do not restrict yourself to one tool or language when defining the automation framework. In the automation projects I have been a part of, we used shell scripting, DOS scripting, VBScript, and QTP scripting, at times programming in Java using a VB-to-Java bridge. Use the tool and language which 'simply' solves the problem at hand. Always follow the 'keep it simple' strategy, or framework maintenance will become an overhead. Finally, the objective of the automation framework is to run test cases and find defects in the AUT, not to invent a new complicated system.
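The keyword-driven approach in component 1 can be sketched minimally: each test step names a business keyword plus its arguments, and a dispatch table maps keywords to functions. In the real framework the keywords would drive the UI through QTP; here they are stubbed so only the control flow is visible, and all names are illustrative.

```python
# Stubbed business keywords; real implementations would drive the AUT.
def login(user):
    return f"logged in as {user}"

def search(term):
    return f"searched {term}"

def logout():
    return "logged out"

# Dispatch table: business keyword -> implementing function.
KEYWORDS = {"Login": login, "Search": search, "Logout": logout}

# A test case is just a sequence of (keyword, arguments) steps.
test_steps = [("Login", ["alice"]), ("Search", ["widgets"]), ("Logout", [])]

def run(steps):
    return [KEYWORDS[kw](*args) for kw, args in steps]

log = run(test_steps)
```

The payoff is that test cases become data (keyword sequences) that analysts can author, while UI changes are absorbed inside the keyword implementations.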

Automate Failure Analysis (FA)

Automation test failure analysis is the most cumbersome problem faced once the automation test repository is successfully built; automation teams can end up just finding and fixing script defects. Good reporting and exception handling are key to breaking the failure analysis problem. If failure analysis is not tackled, it will delay the reporting of genuine defects in the AUT and thus lose the benefit of automation.

Following are some tricks to reduce the failure analysis problem.

1. Failure analysis can best be reduced by making use of the AUT's logging capabilities. Certain errors and exceptions can be caught from the UI or log files and parsed to find defects in the AUT.

2. Most AUTs define common error alerts or screens. Capture these objects in the object repository, then recognize and report these errors upfront in the summary. The automation developer then need not drill down into a detailed report to work out whether a failed script indicates an AUT defect or a script coding defect.

3. The batch runner or framework should define a recovery plan. At times a test suite (batch) should continue testing through certain types of intermediate failures; at other times a batch should halt on certain types of failure. This logic can be built into the batch runner code.

4. For any automation tool, UI recognition issues are a big problem. They arise in the following cases:
i. The AUT is slow/fast/unstable – Waits, timeouts, and visual cues should be set appropriately to deal with AUT timing issues. Consider the case of AJAX, where a page never resubmits and you need a visual cue when a control property changes, or where an object appears on the screen after an asynchronous submit. In such cases there has to be a timed check.
ii. UI recognition properties are fragile or changing – Make use of the HTML id instead of labels or names of controls, and avoid using attributes seen on the screen, since the UI gets refactored easily across AUT releases.
iii. AUT enhancement changes and AUT or scripting defects – These are genuine defects and proper failures.
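The "timed check" in case 4.i can be sketched as a polling wait: check a condition repeatedly until a timeout instead of sleeping for a fixed interval. This is an illustrative helper, not the automation tool's API; `control_ready` stands in for a real object-property check.

```python
import time

def wait_until(condition, timeout=5.0, interval=0.05):
    """Return True as soon as condition() holds, False after timeout."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if condition():
            return True
        time.sleep(interval)
    return False

# Simulated asynchronous update: the "control" becomes ready on the 3rd poll.
state = {"polls": 0}
def control_ready():
    state["polls"] += 1
    return state["polls"] >= 3

ok = wait_until(control_ready, timeout=2.0)
```

Compared with a fixed sleep, the wait returns as soon as the control appears and only fails after a bounded timeout, which keeps both run time and false alarms down.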

Define Maintenance Plan and Process

Consider UI changes due to product (AUT) enhancements as Change Requests (CRs) against the automation scripts. Capture the CR-fixing effort separately; this should be calculated and counted as maintenance effort, added to the (time and effort) cost of developing the product's (AUT) enhancement. Define a proper CR tracking system. Similarly, track scripting defects: define coding standards and a review process to curtail them, and measure teams on scripting defects, which will subsequently reduce them :-)

Define Team Composition and Knowledge Transfer Scheme

Test automation is a dedicated and specialized skill; you need to build knowledge of automation tools and technology. Test automation is not a spare-time activity: manual test execution and automation development cannot be undertaken simultaneously in a haphazard manner.

Following are a few important points on structuring an automation team:
1. Build a Test Automation team that comprises Business Analysts, Technical Specialists, and process owners.

2. Define the documentation standards and make documentation a mandatory milestone. It is sometimes boring but very beneficial in the long run. Better still, set up a practice to document first and implement only once the documentation is reviewed; this helps because a number of bugs get caught in review before any code is written.

3. Define clear roles and responsibilities in the team structure. Define primary and secondary ownership of all the automation assets—scripts, framework components, libraries and functions, machines, schedules, access privileges, etc.

4. Rotate roles and responsibilities for knowledge sharing.

5. Schedule a session once a week where one of the team members shares knowledge of the activities he or she manages.

6. Encourage the team to innovate and to find and solve complicated, pivotal issues in the automation system. Reward the person who takes the initiative or makes the effort to bring valuable changes to the automation system.

7. Keep a good senior-to-junior ratio among team members to deal with attrition.

Define Infrastructure Requirements

As the Test Automation inventory grows, the total time it takes to complete one sequential test run keeps increasing, and infrastructure becomes a major bottleneck. There is a trade-off between time and hardware investment. Study the automation tool's footprint and its advanced parameters for long-running automation test runs, and guard against random updates running on the automation machines. Such a list needs to be built from experience: every time you discover a problem, add it to a checklist and keep refining it. Develop scripts in an independent fashion, which helps in scheduling them in parallel.
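The time-versus-hardware trade-off comes down to farming independent scripts out to a pool of workers. A hedged sketch follows; the scripts are stubs, and a real setup would launch the automation tool on separate machines or agents rather than threads in one process.

```python
# Parallel scheduling sketch: independent scripts run concurrently,
# trading hardware (workers) for wall-clock time.
from concurrent.futures import ThreadPoolExecutor

scripts = [f"script_{i:02d}" for i in range(8)]

def run_script(name):
    # Stand-in for executing one automated test script end to end.
    return (name, "pass")

def run_parallel(batch, workers=4):
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return dict(pool.map(run_script, batch))

results = run_parallel(scripts)
```

This only works if the scripts are genuinely independent (the Test Independence characteristic above); any ordering dependency forces those scripts back into a sequential lane.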

Project Management

My project management experience suggests that the Agile methodology is best suited for an automation project. To manage an agile automation project effectively, define monthly sprints with a definite scope. Do a risk-based analysis to select what should be automated, studying the test cases and field defects for root cause analysis. Define the phases and milestones: studying test cases is requirements analysis; finding or reusing business functions/keywords is akin to code development; and scripts need to be thoroughly tested before moving to production, which is the testing phase. Define a deliverable for each phase so that phase activities become measurable. At the end of a sprint, defer items which cannot be completed and push them to the next sprint; count only what is developed to completion as done.

Track each sprint for various metrics: effort, time, rate of automation, number of test cases automated, number of scripts built, and ROI. Over a period of time, sprint metrics can be studied and the trend verified against management expectations. Sharing this information also builds transparency between operations and management, and the details can be shared with team members to assess their work and the success or failure of the team as a whole.
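Two of those sprint metrics, the rate of automation and the new-development-versus-maintenance split, can be computed from logged effort. This is an illustrative sketch; the activity codes and figures are assumptions, not our actual sprint data.

```python
# Effort logged against assumed activity codes during one sprint.
effort_log = [
    {"activity": "new_dev",     "hours": 60, "cases_automated": 30},
    {"activity": "maintenance", "hours": 20, "cases_automated": 0},
    {"activity": "new_dev",     "hours": 40, "cases_automated": 25},
]

def sprint_metrics(log):
    hours = {"new_dev": 0, "maintenance": 0}
    cases = 0
    for entry in log:
        hours[entry["activity"]] += entry["hours"]
        cases += entry["cases_automated"]
    rate = cases / hours["new_dev"]                    # cases per dev hour
    maint_share = hours["maintenance"] / sum(hours.values())
    return {"rate": rate, "maintenance_share": maint_share}

metrics = sprint_metrics(effort_log)
```

Plotting these two numbers sprint over sprint gives exactly the maintenance-versus-new-development trend discussed earlier.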

Re-structure, re-engineer, and re-define to optimize. Once the automation framework is defined, find all components which are not being used extensively; these are components designed for a specific objective but never used. They should be re-engineered or thrown out. Don't let unused legacy code build up as scrap in the system, and do not hesitate to re-engineer or refine an already built framework.

Need of an Integrated Automation Asset Management System

A Test Automation system thus comprises the framework, scripts, keywords (functions), batches, and schedules, finally linked to defects and CRs in the AUT and to the automation source code itself. All these components are generally tracked in flat files, so there is a need for an integrated system for management reporting, day-to-day tracking, and optimizing the system as a whole.

At Digité Infotech, we have successfully built a Test Automation harness. The automation inventory at the Digité office is implemented using a keyword-driven approach, with QuickTest Professional as the UI driver and the Digité Enterprise internal production system as the platform to manage all automation assets.

Project management, tracking of various automation assets, team effort, and reporting are all tracked in an integrated way using Digite Enterprise.

Let's take a closer look at the system.
1. Automation Framework development is a project using the Digite Enterprise Internal Production system.



2. All the business functions, scripts, and defects are tracked as eForms. With this, we can quickly gauge team ownership of all these assets on a project dashboard. Functions are configured to capture impact on existing control functions, object repositories, and other impact areas; these are reviewed by senior members of the team, and review comments are filed in the system as review eForm instances.



3. Manual test cases are tracked in Digité Enterprise and are in turn traced to QTP scripts. Thus, one can quickly see the manual test case inventory being converted into an automation inventory.

4. Automation Sprint is tracked as a custom Work Package. It has an automation practice which gets instantiated for every new sprint.



5. A set of custom reports is developed to track various other metrics, such as the rate at which tests are automated, effort per manual test case, and sprint-wise effort per automated script. Cumulative trends can be plotted from this data.

6. Similarly, maintenance effort is tracked using the time posted on defects and CRs in Automation project for the monthly sprint. Thus, new development vs. maintenance load can be easily plotted.

7. Another set of custom reports give an online view of runtime automation system.

8. In another blog post, 'Nightly Build Testing @ Digite', I have explained how the Nightly Build System can be integrated for AUT defect generation; you can visit it at http://ashwinilalit.blogspot.com/2010/04/nightly-build-testing-digite.html.

To Conclude

Thus you can understand how the keyword-driven methodology is best suited to building a successful Test Automation infrastructure framework. As the product evolves, it may undergo small UI changes; capturing the business keywords helps absorb small UI inconsistencies without affecting the individual tests.

The business case has to be well understood before we begin Test Automation. Careful planning of the tests, from understanding test cases to building keywords, object recognition, and reporting framework components, and then coding the individual tests, should help build a simple, maintainable automation test repository.

It is essential that we develop a Test Automation project following an iterative strategy and applying all the rules of the agile software development life cycle. It is essential that we track all automation assets in an integrated system and keep all stakeholders informed of the real-time progress of automation sprints. An agile ALM tool like Digité can play an important role in bringing these assets together.

An automation project is a technology project and needs complete focus and dedication; it should not be considered a back-burner activity. With this vision and perspective set for all stakeholders, from management to team members, we should be able to develop a successful Test Automation framework that completely takes care of regression testing for any release activity, thus reducing the load on the manual testing front.

References

[1] Cem Kaner, “High Volume Automation Testing”, http://www.kaner.com/pdfs/HVAT_STAR.pdf
[2] Manoj Narayan, Harish Krishnamurthy, Kaushik Ramkrishnan, “Realizing the Enterprise Test Automation Vision”, http://www.infosys.com/services/enterprise-quality-services/white-papers/enterprise-test-automation-vision.pdf
[3] Bret Pettichord, “Seven Steps to Test Automation Success”
[4] Misha Rybalov, “Design Patterns for Customer Testing”, http://www.autotestguy.com/archives/Design%20Patterns%20for%20Customer%20Testing.pdf
[5] Mike Kelly, “Frameworks for Test Automation”, http://michaeldkelly.com/images/Frameworks_for_Test_Automation.PDF