Saturday, October 25, 2008

Defect Management

Defects determine the effectiveness of the testing we do; if there are no defects, it directly implies that we have no job. There are two points worth considering here: either the developer is so strong that no defects arise, or the test engineer is weak. In many situations the second proves correct, which implies that we lack the knack. In this section, let us understand defects.

What is a Defect?

For a test engineer, a defect is any of the following:
•Any deviation from specification
•Anything that causes user dissatisfaction
•Incorrect output
•Software does not do what it is intended to do.

Bug / Defect / Error:
•Software is said to have a bug if its features deviate from specifications.
•Software is said to have a defect if it has unwanted side effects.
•Software is said to have an error if it gives incorrect output.

But for a test engineer all three are the same; the definitions above are only indicative, for documentation purposes.

Defect Taxonomies

Categories of Defects:

All software defects can be broadly categorized into the below mentioned types:
•Errors of commission: something wrong is done
•Errors of omission: something left out by accident
•Errors of clarity and ambiguity: different interpretations
•Errors of speed and capacity

However, the above is a broad categorization; below we have for you a host of varied types of defects that can be identified in different software applications:
1.Conceptual bugs / Design bugs
2.Coding bugs
3.Integration bugs
4.User Interface Errors
5.Functionality
6.Communication
7.Command Structure
8.Missing Commands
9.Performance
10.Output
11.Error Handling Errors
12.Boundary-Related Errors
13.Calculation Errors
14.Initial and Later States
15.Control Flow Errors
16.Errors in Handling Data
17.Race Conditions Errors
18.Load Conditions Errors
19.Hardware Errors
20.Source and Version Control Errors
21.Documentation Errors
22.Testing Errors

Life Cycle of a Defect

The following self-explanatory figure explains the life cycle of a defect:
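In text form, a typical defect life cycle can also be sketched as a simple state machine. The states and transitions below are common conventions rather than taken from the figure, and they vary between bug-tracking tools.

```python
# One typical defect life cycle as a state machine (assumed, common convention).

TRANSITIONS = {
    "New":       ["Assigned", "Rejected", "Deferred", "Duplicate"],
    "Assigned":  ["Open"],
    "Open":      ["Fixed", "Rejected", "Deferred"],
    "Fixed":     ["Retest"],
    "Retest":    ["Closed", "Reopened"],
    "Reopened":  ["Assigned"],
    "Deferred":  ["Assigned"],
    "Rejected":  [],
    "Duplicate": [],
    "Closed":    [],
}

def move(current, new):
    """Validate a state change for a defect record."""
    if new not in TRANSITIONS[current]:
        raise ValueError(f"Illegal transition: {current} -> {new}")
    return new

# Typical happy path of a defect:
state = "New"
for nxt in ["Assigned", "Open", "Fixed", "Retest", "Closed"]:
    state = move(state, nxt)
    print(state)
```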

Action-Based Testing Framework

Action-Based Testing (ABT) provides a powerful framework for organizing test design, automation and execution around keywords. In ABT keywords are called “actions” to make the concept absolutely clear. Actions are the tasks to be executed in a test. Rather than automating an entire test as one long script, an automation engineer can focus on automating actions as individual building-blocks that can be combined in any order to design a test. Non-technical test engineers and business analysts can then define their tests as a series of these automated keywords, concentrating on the test rather than the scripting language.

Traditional test design begins with a written narrative that must be interpreted by each tester or automation engineer working on the test. ABT test design takes place in a spreadsheet, with actions listed in a clear, well-organized sequence. Actions, test data and any necessary GUI interface information are stored in separate spreadsheets, where they can be referenced by the main test module. Tests are then executed right from within the spreadsheet, using third-party scripting tools or TestArchitect’s own built-in automation.

To realize the full power of Action Based Testing, it is important to use high-level actions whenever possible in test design. High-level actions are understandable by those familiar with the business logic of the test. For example, when the user inputs a number, the system performs a mortgage calculation or connects a telephone call. A good high-level action need not be specific to the system under test: “Enter order” is a good high-level step that can be used generically to refer to the specific low-level steps that take place in many tests of many different applications.

Automation is then completed through the scripting (programming) of low-level actions. TestArchitect provides a comprehensive set of the necessary low-level actions through its built-in automation feature. In that case, creating a high-level action required by the test design involves only dragging and dropping a few low-level actions to compose it. The low-level actions behind “enter order” would be the specific steps needed to complete that action via various interfaces such as HTML, the Windows GUI, etc. An example of a low-level action would be “push button”.
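As a rough illustration (this is not TestArchitect’s actual API; all function, window and control names below are invented), a high-level action can be thought of as a short composition of reusable low-level actions:

```python
# Minimal sketch: a business-level action built from low-level actions.

def push_button(window, button):
    """Low-level action: click a named button in a named window."""
    print(f"clicking '{button}' in '{window}'")

def enter_text(window, field, value):
    """Low-level action: type a value into a named field."""
    print(f"typing '{value}' into '{field}' of '{window}'")

def enter_order(customer, product, quantity):
    """High-level action, readable by a business analyst,
    implemented by reusing low-level actions."""
    enter_text("Order Entry", "customer", customer)
    enter_text("Order Entry", "product", product)
    enter_text("Order Entry", "quantity", str(quantity))
    push_button("Order Entry", "Submit")

# A test designer only writes the business-level step:
enter_order("ACME Corp", "Widget", 10)
```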

Whenever scripting by an automation engineer is required, breaking this work down into reusable low-level actions saves time and money by making future scripting changes unnecessary even when the software under test undergoes major revisions. A reshuffling of actions is usually all that is required. If more scripting is necessary, it involves only the rewriting of individual actions rather than revision of entire automation scripts and the resulting accumulation of a vast library of old automation.

“The organization develops test standards which can be reused in the next test. The test itself, and the various tasks involved, is therefore more clearly defined. The costs of the test are known in advance and it is clear what has been tested to a specific level of detail. In addition, insight into the approach and the status of the test process can be gained at all times, ensuring that the test process can be adjusted in a timely manner if necessary. This method enhances the quality of both the test process and the test products, resulting in higher quality for the tested system.”

Action Based Testing allows testing teams to create a much more effective test automation framework, overcoming the limitations of other methods:

Full Involvement of the Testing Team in Test Automation

Most testing teams consist primarily of people who have strong knowledge of the application under test or the business domain, but only light expertise in programming. The team members fulfilling the role of test automation engineer are often people with a software development or computer science background, but who lack strong expertise in testing fundamentals, the software under test, or the business domain.

Action Based Testing allows both types of team members to contribute to the test automation effort by enabling each person to leverage their unique skills to create effective automated tests. Testers define tests as a series of reusable high-level actions. It is then the task of the automation engineer to determine how to automate the necessary low-level actions and combine them to produce the required high-level actions, both of which can often be reused in many future tests. This approach allows testers to focus on creating good tests, while the automation engineers focus on the technical challenge of implementing actions.

Significant Reduction of Test Automation Maintenance

Many organizations build a significant test automation suite using older automation methods and begin to see some benefits, only to get stuck with a huge maintenance effort when the application changes. Test automation teams end up spending more time maintaining their existing tests than actually creating new tests. This high maintenance burden is due to the fact that automated tests are highly dependent on the UI of the application under test; when the UI changes, so must the test automation. It is usually the case that the core business processes handled by an application will not change, but rather the UI used to enact those business processes changes.

Action Based Testing significantly reduces the maintenance burden by allowing users to define their tests at the business process level. Rather than defining tests as a series of interactions with the UI, test designers can define tests as a series of business actions. For example, a test of a banking application might contain the actions ‘open new account’, ‘deposit’, and ‘withdraw’. Even if the underlying UI changes, these business processes will still remain the same, so the test designer does not need to update the test. It will be the job of the automation engineer to update the actions affected by the UI changes, and this update often needs to be made in only one place.
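A minimal sketch of this idea in Python (action names, account holders and amounts are invented): the test module is just a readable list of business-level rows, while the action implementations can change underneath it when the UI changes.

```python
# The test module is data, mirroring ABT's spreadsheet layout; the action
# implementations are the only thing touched when the UI changes.

ACTIONS = {}

def action(func):
    """Register a function as a named, reusable action."""
    ACTIONS[func.__name__] = func
    return func

@action
def open_new_account(customer):
    print(f"open a new account for {customer}")

@action
def deposit(customer, amount):
    print(f"deposit {amount} for {customer}")

@action
def withdraw(customer, amount):
    print(f"withdraw {amount} for {customer}")

# Business-level rows a tester can read and maintain:
test_module = [
    ("open_new_account", ["Jane Doe"]),
    ("deposit",          ["Jane Doe", 500]),
    ("withdraw",         ["Jane Doe", 200]),
]

for name, args in test_module:
    ACTIONS[name](*args)
```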

Improved Quality of Automated Tests

In Action Based Testing, test designers follow a top-down approach which ensures that there is a clearly stated purpose for every test.
The first step is to determine how the overall test automation effort will be broken down into individual test modules. Some common ways of grouping tests include:
• Different functional areas of the application.
• Different types of tests (positive, negative, requirements-based, end-to-end, scenario-based, etc.).
• Different quality attributes being tested (business processes, UI consistency, performance, etc.).

Once the test modules have been identified, the next step is to define test requirements for each module. Test requirements are critical because they force test developers to consider what is being tested in each module, and to explicitly document it.

Once the test requirements are defined, they serve as both a roadmap for developing the test cases in the module, and documentation for the purpose of the tests. Each test case is associated with one or more test requirements, and each test requirement should be addressed by one or more test cases.

By explicitly stating the test requirements, it is possible to easily determine the purpose of a test, and to identify if a test does not sufficiently meet those test requirements. Test requirements can be quickly checked to determine if the test needs maintenance or even retirement. Test developers can be precise and concise in their test creation, creating enough tests to meet their stated requirements without introducing unwanted redundancy.
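For illustration only (the requirement and test case IDs are invented), the cross-check between test requirements and test cases can be as simple as a set comparison:

```python
# Quick traceability check: every requirement covered, no orphan test cases.

test_requirements = {"TR-01", "TR-02", "TR-03"}

# Each test case lists the requirements it addresses.
test_cases = {
    "TC-001": {"TR-01"},
    "TC-002": {"TR-01", "TR-02"},
}

covered = set().union(*test_cases.values())
print("Requirements with no test case:", test_requirements - covered)  # {'TR-03'}
print("Test cases with no requirement:",
      [tc for tc, reqs in test_cases.items() if not reqs])             # []
```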

After explicitly defining the test requirements, test designers can start implementing the test cases using either predefined actions or by defining new actions. Test designers can define their tests as high-level business processes, which makes the tests more readable than tests defined using low-level interface interactions.

Facilitates Test Automation Strategy

Many testing teams dive into test automation without first considering how they should approach test automation. A very typical approach is to acquire a test automation tool, and then try to start automating as many existing test cases as possible. More often than not, this approach is not effective.

Action Based Testing provides a framework that integrates the entire testing organization in support of effective test automation. Business analysts, testers of all kinds, automation engineers, test leads and QA managers all work within the framework to complete test planning, test design, test automation and test execution.
With the right framework in place, the organization can respond most effectively to everything from marketing requirements to software development changes.

Enables Effective Collaboration by Distributed Teams

With testing teams now often distributed across the country and around the world, the challenge of sharing information, tests and test automation libraries is multiplied many times over. Action Based Testing provides a proven framework for organizing tests and test automation libraries with a clear structure, preventing disruptions that can be caused by distance and time zone differences.

TestArchitect, an ABT-based tool, takes this to the next level by enabling remote sharing of database repositories of test modules, actions and other components, and provides clear control and reporting to managers of access, changes and results.

Why Test Automation Projects Fail to Achieve Their Potential

Despite the clear benefits of test automation, many organizations are not able to build effective test automation programs. Test automation becomes a costly effort that finds few bugs and is of questionable value to the organization.
There are a number of reasons why test automation efforts are unproductive. Some of the most common include:

Poor quality of tests being automated

Experts in the field agree that before approaching problems in test automation, we must be certain those problems are not rooted in fundamental flaws in test planning and design.

“It doesn’t matter how clever you are at automating a test or how well you do it, if the test itself achieves nothing then all you end up with is a test that achieves nothing faster.” Mark Fewster, Software Test Automation, I.1, (Addison Wesley, 1999).

Many organizations simply focus on taking existing test cases and converting them into automated tests. There is a strong belief that if 100% of the manual test cases can be automated, then the test automation effort will be a success.
In trying to achieve this goal, organizations find that they may have automated many of their manual tests, but it has come at a huge investment of time and money, and few bugs are found. This can be due to the fact that a poor test is a poor test, whether it is executed manually or automatically.

Lack of good test automation framework and process

Many teams acquire a test automation tool and begin automating as many test cases as possible, with little consideration of how they can structure their automation in such a way that it is scalable and maintainable. Little consideration is given to managing the test scripts and test results, creating reusable functions, separating data from tests, and other key issues which allow a test automation effort to grow successfully.

After some time, the team realizes that they have hundreds or thousands of test scripts, thousands of separate test result files, and the combined work of maintaining the existing scripts while continuing to automate new ones requires a larger and larger test automation team with higher and higher costs and no additional benefit.

“Anything you automate, you’ll have to maintain or abandon. Uncontrolled maintenance costs are probably the most common problem that automated regression test efforts face.” Kaner, Bach and Pettichord, ibid

Inability to adapt to changes in the system under test

As teams drive towards their goal of automating as many existing test cases as possible, they often don’t consider what will happen to the automated tests when the system under test (SUT) undergoes a significant change.

Lacking a well-conceived test automation framework that considers how to handle changes to the system under test, these teams often find that the majority of their test scripts need maintenance. The outdated scripts will usually result in skyrocketing numbers of false negatives, since the scripts are no longer finding the behavior they are programmed to expect.

As the team hurriedly works to update the test scripts to account for the changes, project stakeholders begin to lose faith in the results of the test automation. Often the lack of perceived value in the test automation will result in a decision to scrap the existing test automation effort and start over, using a more intelligent approach that will produce incrementally better results.

“Test products often receive poor treatment from project stakeholders. In everyday practice, many organizations must set up complete tests from scratch, even for minor adaptations, since existing tests have been lost or can no longer be used. To achieve a short time-to-market, tests need to be both easy to maintain and reusable.” Buwalda, Janssen and Pinkster, Integrated Test Design and Automation: Using the TestFrame Method.

Test Automation Evolution

Software test automation has evolved through several generations of tools and techniques:

Capture/playback tools record the actions of a tester in a manual test execution, and allow tests to be run unattended, greatly increasing test productivity and eliminating the mind-numbing repetition of manual testing. However, even small changes to the software under test require that the test be recorded manually again. Therefore, this first generation of tools is not efficient or scalable.

Scripting, a form of programming in computer languages specifically developed for software test automation, alleviates many issues with capture/playback tools. However, the developers of these scripts must be highly technical and specialized programmers who work in isolation from the testers actually performing the tests. In addition, scripts are best suited for GUI testing but don’t lend themselves to embedded, batch, or other forms of systems. Finally, as changes to the software under test require complex changes to the associated automation scripts, maintenance of ever-larger libraries of automation scripts becomes an overwhelming challenge.

Data-driven testing is often considered separately as an important development in test automation. This approach simply but powerfully separates the automation script from the data to be input and expected back from the software under test. Key benefits to this approach are that the data can be prepared by testers without relying on automation engineers, and the possible variations and amount of data used to test are vastly increased. This breaking down of the problem into two pieces is very powerful. While this approach greatly extends the usefulness of scripted test automation, the huge maintenance chores required of the automation programming staff remain.
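A minimal sketch of the separation, using an invented mortgage example: the data rows would normally live in a file maintained by testers, while the single script that consumes them is owned by the automation engineer. The data is inlined here only so the sketch runs stand-alone.

```python
import csv
import io

# Test data prepared by testers, separate from the automation script.
DATA = """principal,annual_rate,expected_monthly_payment
100000,0.06,500.00
200000,0.06,1000.00
"""

def monthly_interest(principal, annual_rate):
    """Stand-in for the system under test (normally driven via its GUI or API)."""
    return principal * annual_rate / 12

# One script, many data rows: adding variations means adding rows, not code.
for row in csv.DictReader(io.StringIO(DATA)):
    actual = monthly_interest(float(row["principal"]), float(row["annual_rate"]))
    expected = float(row["expected_monthly_payment"])
    assert abs(actual - expected) < 0.01, f"mismatch for {row}"
print("all data rows passed")
```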

Keyword-based test automation breaks work down even further in an advanced, structured and elegant approach. This reduces the cost and time of test design, automation, and execution by allowing all members of a testing team to focus on what they do best. Using this method, non-technical testers and business analysts can develop executable test automation using “keywords” that represent actions recognizable to end-users, such as “login”, while automation engineers devote their energy to coding the low-level steps that make up those actions, such as “click”, “find text box A in window B”, “enter UserName”, etc. Keyword-based test design can actually begin based on documents developed by business analysts or the marketing department, before the final details of the software to be tested are known. As the test automation process proceeds, bottlenecks are removed and the expensive time of highly-trained professionals is used effectively.
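One consequence worth illustrating is that keyword-level test design can start before any automation exists. The sketch below (all keyword, field and value names are hypothetical) declares keywords as placeholders so tests can be drafted and dry-run, then shows one keyword later filled in with low-level steps:

```python
# Keywords declared first as placeholders, automated later.

def not_yet_automated(keyword):
    def placeholder(*args):
        print(f"[pending automation] {keyword} {args}")
    return placeholder

# Business analysts can already write tests against these keywords:
login = not_yet_automated("login")
enter_order = not_yet_automated("enter order")
logout = not_yet_automated("logout")

# A draft test, runnable as a dry run even though nothing is automated yet:
login("jsmith", "s3cret")
enter_order("Widget", 10)
logout()

# Later, an automation engineer replaces a placeholder with real low-level steps:
def login(username, password):
    print(f"find text box 'UserName'; enter '{username}'")
    print(f"find text box 'Password'; enter '{password}'")
    print("click 'OK'")

login("jsmith", "s3cret")
```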

The cost benefits of the keyword method become even more apparent as the testing process continues. When the software under test is changed, revisions to the test and to the automation scripts are necessary. By using a keyword-based framework, organizations can greatly reduce the amount of maintenance needed, and avoid rewriting entire test scripts. Many changes do not require new automation at all, and can be completed by non-technical testers or business analysts. When required, changes to automated keywords can be completed by automation engineers without affecting the rest of the test, and can be swapped into other tests in the library as needed.

The keyword method has become dominant in Europe since its introduction in 1994, where it was incorporated into the TestFrame method and tool, and is now coming into its own in the USA. LogiGear’s Action Based Testing™ represents the continued evolution of this approach under the guidance of the original architect of the keyword method. This method is the foundation of LogiGear’s test automation toolset, TestArchitect™, which not only organizes test design and test automation around keywords, but also offers built-in actions that make it possible to automate many tests without scripting of any kind.

Hybrid testing tools merit a brief discussion due to the intense marketing efforts of several vendors that are helping to bring awareness of keyword-driven technologies to the industry. In these products, a keyword-like user interface is layered atop a traditional automated testing tool. Most of these tools simply provide a GUI window listing the library of low-level functions that automation engineers have produced in scripting language. Many attempt to offer “scriptless” automation through the use of GUI views, templates and business rules-based test design. One example emphasizes a graphical interface that enables non-technical users to create tests by specifying low-level actions. The tool then performs automatic code generation in a target scripting language. These canned keyword scripts must be regenerated each time a change is made to the test, and direct editing is not recommended.

These hybrid solutions, in attempting to oversimplify test automation engineering, are unable to offer the power, flexibility and customization necessary to automate tests for complex systems. In addition, because these tools implement a keyword-based interface without an underlying testing method supported by the full keyword framework, the manual creation and maintenance of tests is rather labor intensive due to the use of only low-level keywords.

Friday, October 24, 2008

Frequently Asked Testing Questions - Part1

Why is it often hard for management to get serious about quality assurance?

Solving problems is a high-visibility process; preventing problems is low-visibility.

Why does software have bugs?

-> Miscommunication or no communication - as to specifics of what an application should or shouldn't do (the application's requirements).

-> Software complexity - the complexity of current software applications can be difficult to comprehend for anyone without experience in modern-day software development. Windows-type interfaces, client-server and distributed applications, data communications, enormous relational databases, and sheer size of applications have all contributed to the exponential growth in software/system complexity.

-> Programming errors - programmers, like anyone else, can make mistakes.

-> Changing requirements - the customer may not understand the effects of changes, or may understand and request them anyway - redesign, rescheduling of engineers, effects on other projects, work already completed that may have to be redone or thrown out, hardware requirements that may be affected, etc. If there are many minor changes or any major changes, known and unknown dependencies among parts of the project are likely to interact and cause problems, and the complexity of keeping track of changes may result in errors. Enthusiasm of engineering staff may be affected. In some fast-changing business environments, continuously modified requirements may be a fact of life. In this case, management must understand the resulting risks, and QA and test engineers must adapt and plan for continuous extensive testing to keep the inevitable bugs from running out of control. (See 'What can be done if requirements are changing continuously?')

-> Time pressures - scheduling of software projects is difficult at best, often requiring a lot of guesswork. When deadlines loom and the crunch comes, mistakes will be made.

-> Poorly documented code - it's tough to maintain and modify code that is badly written or poorly documented; the result is bugs. In many organizations management provides no incentive for programmers to document their code or write clear, understandable code. In fact, it's usually the opposite: they get points mostly for quickly turning out code, and there's job security if nobody else can understand it (“if it was hard to write, it should be hard to read”).

-> Software development tools - visual tools, class libraries, compilers, scripting tools, etc. often introduce their own bugs or are poorly documented, resulting in added bugs.

How can new Software QA processes be introduced in an existing organization?

A lot depends on the size of the organization and the risks involved. For large organizations with high-risk projects (in terms of lives or money), serious management buy-in is required and a formalized QA process is necessary.

Where the risk is lower, management and organizational buy-in and QA implementation may be a slower, step-at-a-time process. QA processes should be balanced with productivity, in order to keep bureaucracy from getting out of hand.

For small groups or projects, a more ad-hoc process may be appropriate, depending on the type of customers and projects. A lot will depend on team leads or managers, feedback to developers, and adequate communications among customers, managers, developers, and testers.

In all cases the most value for effort will be in requirements management processes, with a goal of clear, complete, testable requirement specifications.

What is verification? What is validation?

Verification typically involves reviews and meetings to evaluate documents, plans, code, requirements and specifications. This can be done with checklists, issues lists, walkthroughs and inspection meetings.

Validation typically involves actual testing and takes place after verifications are completed.

What is a 'walkthrough'?

A 'walkthrough' is an informal meeting for evaluation or informational purposes.

What is an 'inspection'?

An inspection is more formalized than a 'walkthrough', and typically consists of 3-8 people, including a moderator, a reader, and a recorder to take notes.

The subject of the inspection is typically a document, such as a requirements specification or a test plan.

The purpose is to find problems and see what is missing, not to fix anything.
Attendees should prepare for this type of meeting by reading through the document; most problems will be found during this preparation. The result of the inspection meeting should be documented in a written report. Preparation for inspections is difficult, but is one of the most cost-effective methods of ensuring quality, since bug prevention is far more cost effective than bug detection.

What are five common problems in the software development process?

1) Poor requirements - if requirements are unclear, incomplete, too general, or not testable, there will be problems.
2) Unrealistic schedule - if too much work is crammed in too little time, problems are inevitable.
3) Inadequate testing - no one will know whether or not the program is any good until the customer complains or systems crash.
4) Featuritis - requests to pile on new features after development is underway; extremely common.
5) Miscommunication - if developers don't know what is needed or customers have erroneous expectations, problems are guaranteed.

What are five common solutions to software development problems?

1) Solid requirements - clear, complete, detailed, cohesive, attainable, testable requirements that are agreed to by all players. Use prototypes to help nail down requirements.
2) Realistic schedules - allow adequate time for planning, design, testing, bug fixing, re-testing, changes, and documentation; personnel should be able to complete the project without burning out.
3) Adequate testing - start testing early on, re-test after fixes or changes, plan for adequate time for testing and bug fixing.
4) Stick to initial requirements as much as possible - be prepared to defend against changes and additions once development has begun, and be prepared to explain consequences. If changes are necessary, they should be adequately reflected in related schedule changes. If possible, use rapid prototyping during the design phase so that customers can see what to expect. This will provide a higher comfort level with their requirement decisions and will minimize changes later on.
5) Communication - require walkthroughs and inspections when appropriate; make extensive use of group communication tools - e-mail, groupware, networked bug-tracking tools and change management tools, intranet capabilities, etc.; ensure documentation is available and up-to-date - preferably electronic, not paper; promote teamwork and cooperation; use prototypes early on so customers' expectations are clarified.

What is software 'quality'?

Quality software is reasonably bug-free, delivered on time and within budget, meets requirements and/or expectations, and is maintainable. However, quality is obviously a subjective term. It will depend on who the “customer” is and their overall influence in the scheme of things. A wide-angle view of the “customers” of a software development project might include end-users, customer acceptance testers, customer contract officers, customer management, the development organization's management/accountants/testers/salespeople, future software maintenance engineers, stockholders, magazine columnists, etc. Each type of “customer” will have their own slant on “quality” - the accounting department might define quality in terms of profits while an end-user might define quality as user-friendly and bug-free.

What is good code?

Good code is code that works, is bug-free, and is readable and maintainable. Some organizations have coding standards that all developers are supposed to adhere to, but everyone has different ideas about what is best, or what is too many or too few rules. There are also various theories and metrics. Keep in mind that excessive use of standards and rules can stifle productivity and creativity. Peer reviews, buddy checks, code analysis tools, etc. can be used to check for problems and enforce standards.

What is 'good design'?

Design could refer to many things, but often refers to functional design or internal design. Good functional design is indicated by an application whose functionality can be traced back to customer and end-user requirements. Good internal design is indicated by software code whose overall structure is clear, understandable, easily modifiable, and maintainable; is robust with sufficient error handling and status logging capability; and works correctly when implemented. (See further discussion of functional and internal design in 'What's the big deal about requirements?')

What is the 'software life cycle'?

The life cycle begins when an application is first conceived and ends when it is no longer in use. It includes aspects such as initial concept, requirements analysis, functional design, internal design, documentation planning, test planning, coding, document preparation, integration, testing, maintenance, updates, retesting, phase-out, and other aspects.

Will automated testing tools make testing easier?

Possibly. For small projects, the time needed to learn and implement them may not be worthwhile. For larger projects, or on-going long-term projects, they can be valuable.

A common type of automated tool is the 'record/playback' type. For example, a tester could click through all combinations of menu choices, dialog box choices, buttons, etc. in an application GUI and have them 'recorded' with the results logged by a tool. The 'recording' is typically in the form of text based on a scripting language that is interpretable by the testing tool. If new buttons are added, or some underlying code in the application is changed, etc. the application can then be retested by just 'playing back' the 'recorded' actions, and comparing the logging results to check effects of the changes.
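A hedged sketch of the idea, with invented menu names and results: the 'recording' is just rows of actions plus the results logged at record time, and playback compares fresh results against that log.

```python
# Illustrative only: playback re-runs recorded actions and diffs the results.

recording = [
    # (action, target, result logged when the test was recorded)
    ("click_menu",   "File > New",   "dialog 'New Document' opened"),
    ("click_button", "OK",           "document created"),
    ("click_menu",   "Help > About", "version 2.1 shown"),
]

def replay(action, target):
    """Stand-in for the tool re-driving the GUI and logging the new result.
    If the application has changed, the observed result will differ."""
    observed = {
        "File > New":   "dialog 'New Document' opened",
        "OK":           "document created",
        "Help > About": "version 2.2 shown",   # pretend the app was updated
    }
    return observed[target]

for action, target, expected in recording:
    observed = replay(action, target)
    status = "PASS" if observed == expected else "FAIL"
    print(f"{action} {target}: expected {expected!r}, got {observed!r} -> {status}")
```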

The problem with such tools is that if there are continual changes to the system being tested, the 'recordings' may have to be changed so much that it becomes very time-consuming to continuously update the scripts. Additionally, interpretation of results (screens, data, logs, etc.) can be a difficult task.

What makes a good test engineer?

Test engineers have a 'test to break' attitude, an ability to take the point of view of the customer, a strong desire for quality, and an attention to detail. Tact and diplomacy are useful in maintaining a cooperative relationship with developers, and an ability to communicate with both technical (developers) and non-technical (customers, management) people is useful.

Previous software development experience is helpful as it provides a deeper understanding of the software development process, gives the tester an appreciation for the developers' point of view, and reduces the learning curve in automated test tool programming.

What makes a good Software QA engineer?

The same qualities a good tester has are useful for a QA engineer. Additionally, a QA engineer must be able to understand the entire software development process and how it can fit into the business approach and goals of the organization. Communication skills and the ability to understand various sides of issues are important.

What makes a good QA or Test manager?

QA or Test or QA/Test Managers are familiar with the software development process; able to maintain enthusiasm of their team and promote a positive atmosphere; able to promote teamwork to increase productivity; able to promote cooperation between Software, Test, and QA engineers; have the diplomatic skills needed to promote improvements in QA processes; have the ability to withstand pressures and say 'no' to other managers when quality is insufficient or QA processes are not being adhered to; able to communicate with technical and non-technical people, engineers, managers, and customers; and able to run meetings and keep them focused.

What is the role of documentation in QA?

Documentation plays a critical role in QA. QA practices should be documented, so that they are repeatable. Specifications, designs, business rules, inspection reports, configurations, code changes, test plans, test cases, bug reports, user manuals, etc. should all be documented. Ideally, there should be a system for easily finding and obtaining documents and determining what documentation will have a particular piece of information. Change management for documentation should be used if possible.

What is the big deal about 'requirements'?

One of the most reliable methods of ensuring problems, or failure, in a complex software project is to have poorly documented requirement specifications. Requirements are the details describing an application's externally perceived functionality and properties. Requirements should be clear, complete, reasonably detailed, cohesive, attainable, and testable. A non-testable requirement would be, for example, 'user-friendly' (too subjective). A testable requirement would be something like 'the product shall allow the user to enter their previously assigned password to access the application'.

Care should be taken to involve ALL of a project's significant 'customers' in the requirements process. 'Customers' could be in-house or out of house, and could include end-users, customer acceptance testers, customer contract officers, customer management, future software maintenance engineers, salespeople, etc. Anyone who could later derail the project if his/her expectations aren't met should be included as a customer if possible. In some organizations, requirements may end up in high-level project plans, functional specification documents, design documents, or other documents at various levels of detail. No matter what they are called, some type of documentation with detailed requirements will be needed by testers in order to properly plan and execute tests. Without such documentation, there will be no clear-cut way to determine if a software application is performing correctly.

What is a 'test plan'?

A software project test plan is a document that describes the objectives, scope, approach, and focus of a software testing effort. The process of preparing a test plan is a useful way to think through the efforts needed to validate the acceptability of a software product. The completed document will help people outside the test group understand the 'why' and 'how' of product validation. It should be thorough enough to be useful, but not so thorough that no one outside the test group will be able to read it.

What is a 'test case'?

A test case is a document that describes an input, action, or event and its expected result, in order to determine if a feature of an application is working correctly. A test case should contain particulars such as test case identifier, test case name, objective, test conditions/setup, input data requirements, steps, and expected results.

Note: The process of developing test cases can help find problems in the requirements or design of an application, since it requires completely thinking through the operation of the application. For this reason, it's useful to prepare test cases early in the development cycle, if possible.
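For illustration, the particulars listed above might be captured in a structured record like the following (the field names follow the text; the values are invented):

```python
from dataclasses import dataclass
from typing import List

@dataclass
class TestCase:
    """One test case with the particulars named in the text."""
    identifier: str
    name: str
    objective: str
    setup: str
    input_data: str
    steps: List[str]
    expected_result: str

tc = TestCase(
    identifier="TC-042",
    name="Login with valid password",
    objective="Verify a registered user can access the application",
    setup="User 'jsmith' exists with a previously assigned password",
    input_data="username=jsmith, password=s3cret",
    steps=["Open login screen", "Enter username and password", "Click OK"],
    expected_result="Main application window is displayed",
)
print(tc.identifier, "-", tc.name)
```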

What should be done after a bug is found?

The bug needs to be communicated and assigned to developers who can fix it. After the problem is resolved, fixes should be re-tested, and determinations made regarding requirements for regression testing to check that the fixes didn't create problems elsewhere. If a problem-tracking system is in place, it should encapsulate these processes. A variety of commercial problem-tracking/management software tools are available. These tools give the team complete information so developers can understand the bug, get an idea of its severity, and reproduce it if necessary.

What is configuration management (CM)?

Configuration management covers the processes used to control, coordinate, and track: code, requirements, documentation, problems, change requests, designs, tools (compilers, libraries, patches), changes made to them, and who makes the changes.

What if the software is so buggy it can't really be tested at all?

The best bet in this situation is for the testers to go through the process of reporting whatever bugs or blocking-type problems initially show up, with the focus being on critical bugs. Since this type of problem can severely affect schedules, and indicates deeper problems in the software development process (such as insufficient unit testing or insufficient integration testing, poor design, improper build or release procedures, etc.), managers should be notified and provided with some documentation as evidence of the problem.

How can it be known when to stop testing?

This can be difficult to determine. Many modern software applications are so complex, and run in such an interdependent environment, that complete testing can never be done.

Common factors in deciding when to stop are:

- Deadlines (release deadlines, testing deadlines, etc.)
- Test cases completed with certain percentage passed
- Test budget depleted
- Coverage of code/functionality/requirements reaches a specified point
- Bug rate falls below a certain level
- Beta or alpha testing period ends

What if there isn't enough time for thorough testing?

Use risk analysis to determine where testing should be focused. Since it's rarely possible to test every possible aspect of an application, every possible combination of events, every dependency, or everything that could go wrong, risk analysis is appropriate to most software development projects. This requires judgment skills, common sense, and experience.

Considerations can include:
- Which functionality is most important to the project's intended purpose?
- Which functionality is most visible to the user?
- Which functionality has the largest safety impact?
- Which functionality has the largest financial impact on users?
- Which aspects of the application are most important to the customer?
- Which aspects of the application can be tested early in the development cycle?
- Which parts of the code are most complex, and thus most subject to errors?
- Which parts of the application were developed in rush or panic mode?
- Which aspects of similar/related previous projects caused problems?
- Which aspects of similar/related previous projects had large maintenance expenses?
- Which parts of the requirements and design are unclear or poorly thought out?
- What do the developers think are the highest-risk aspects of the application?
- What kinds of problems would cause the worst publicity?
- What kinds of problems would cause the most customer service complaints?
- What kinds of tests could easily cover multiple functionalities?
- Which tests will have the best high-risk-coverage to time-required ratio?
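One simple, commonly used way to turn such considerations into a testing order (not prescribed by the text; the areas and 1-5 scores below are invented) is to score each area by impact and likelihood and test the highest scores first:

```python
# Rough risk-based prioritization: risk score = impact x likelihood.

areas = [
    # (functional area, business impact 1-5, likelihood of defects 1-5)
    ("Payment processing", 5, 4),
    ("User registration",  4, 2),
    ("Report formatting",  2, 3),
]

ranked = sorted(areas, key=lambda a: a[1] * a[2], reverse=True)
for name, impact, likelihood in ranked:
    print(f"{name}: risk score {impact * likelihood}")
# Focus testing effort first on the areas with the highest scores.
```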

What if the project isn't big enough to justify extensive testing?

Consider the impact of project errors, not the size of the project. However, if extensive testing is still not justified, risk analysis is again needed and the same considerations as described previously in 'What if there isn't enough time for thorough testing?' apply. The tester might then do ad hoc testing, or write up a limited test plan based on the risk analysis.

Tuesday, October 21, 2008

Test Automation Top 5 Best Practices

Introduction

The top five pitfalls encountered by managers employing software test automation are:

* Uncertainty and lack of control
* Poor scalability and maintainability
* Low test automation coverage
* Poor methods and disappointing quality of tests
* Technology vs. people issues

Following are five "best practice" recommendations to help avoid those pitfalls and successfully integrate test automation into your testing organization.

1. Focus on the Methodology, Not the Tool

A well-designed test automation methodology can help to resolve many of the problems associated with test automation. It is one of the keys to successful test automation. The methodology is the foundation upon which everything else rests. The methodology drives tool selection and the rest of the automation process. The methodology will also help to drive the approach to any offshoring efforts that may be under consideration, helping to determine where to locate the "appropriate" pieces of the testing process, both onshore and offshore.

When applying a methodology, it is important that testers and automation engineers understand and accept the methodology. Also, other stakeholders such as managers, business owners, and auditors should have a clear understanding of the methodology, and the benefits that it brings.

2. Choose Extensible Test Tools

Select a test tool that supports extensibility and a team-based Global Test Automation framework (team members are or may be distributed), and that offers a solid management platform.

Surveying test tools can be time consuming, but it is important to choose the best tool to meet your overall test needs. Before beginning the survey, however, you should have a good idea of what you need in the first place. This is intimately tied to your overall test methodology.

Make sure your chosen test tool has an "appropriate" automation architecture. Whatever tool is used for the automation, attention should be paid to how the various technical requirements of the test case execution are implemented in a manageable and maintainable way. In looking at tools and considering your methodology, you should ask the basic questions of how well these tools address reusability, scalability and team-based automation (a driver for productivity quantitatively), maintainability (a driver for lowering maintenance cost), and visibility (a driver for productivity qualitatively and a vehicle for control, measurability and manageability).

You should strongly consider tools based on Action Based Testing (ABT). Action Based Testing (ABT) creates a hierarchical test development model that allows test engineers (domain experts who may not be skilled in coding) to focus on developing executable tests based on action keywords, while automation engineers (highly skilled technically but who may not be good at developing effective tests) focus on developing the low-level scripts that implement the keyword-based actions used by the test experts.

Care should be taken to avoid simplistic "Record-playback 2.0" tools that claim to do test automation with no coding. There is nothing wrong with being able to automate without having to code - it is in fact a good benefit to have. However, the bottlenecks of "Record-playback 2.0" tools quickly show as you get deep into production.

3. Separate Test Design and Test Automation

Test design should be separated from test automation so that automation does not dominate test design. In test automation, it is preferable to use a keywords approach, in which the automation focuses on supplying elementary functionalities that the tester can tie together into tests. This way, the complexity and multitude of the test cases do not lead to an unmanageable number of test scripts.

The testers (domain experts) should fully focus on the development of test cases. Those test cases in turn are the input for the automation discipline. The automation engineers (highly skilled technically) can give feedback to the testers if certain test cases are hard to automate, suggesting alternative strategies, but mainly the testers should remain in the driver’s seat, not worrying too much about the automation.

In general, no more than 5% of the effort surrounding testing should be expended in automating the tests.

4. Lower Costs

There are three ways that you can look to lower costs:

* You can use labor that costs less than your local team
* You can use a tool that costs less
* You can use training to increase tool productivity

It is important, however, when addressing costs, not to focus on one dimension too closely without keeping in mind the overall methodology and considering the impact of any decision on other parts of the process. For example, lowering one cost such as labor by outsourcing may actually increase total costs if that labor does not have the proper skills.

5. Jumpstart with a Pre-Trained Team

Jumpstart the process with a pre-trained outsourcing partner that knows more about test automation success than you do, and that has a competent, well-trained staff of software testers, automation engineers, test engineers, test leads and project managers.

A pre-trained team can:

* Reduce your overall project timeframe, because you don’t need to include training at the beginning of the project schedule
* Reduce risk, because you don’t need to worry about how well the team members will learn the material and how skilled they will be after the training is complete

Conclusion

To summarize the preceding in a simple list, the five suggested best practices for test automation success are:

1. Focus on the methodology, not the tool
2. Choose extensible test tools
3. Separate test design and test automation
4. Lower costs
5. Jumpstart with a pre-trained team

This article is based on concepts in the book Global Software Test Automation: A Discussion of Software Testing for Executives.