Tuesday, November 11, 2008

Using Fault Tree Analysis to Improve Software Testing

Testing a software product to remove hidden defects is an integral part of the software development life cycle (SDLC). Yet it is well accepted that running a software product through every possible scenario to check for defects is not just difficult but usually impossible; the cost and effort required are simply prohibitive. Thus, more limited testing remains a major part of the software development effort, as do the challenges faced in software testing.

The application of process improvement tools to the software development life cycle is becoming popular in the software community. These techniques have already been successfully leveraged by manufacturers, which has encouraged software professionals to apply such tools to the SDLC. Using fault tree analysis (FTA) is one good way to improve the effectiveness of software testing. It can help identify the potential causes of a problem, suggest suitable corrective action and offer insight into preparing test case scenarios.

Challenges in Software Testing

1. Inherent challenge: It is next to impossible to test a software product of average complexity against all of its specifications and features. The number of test cases required to test every aspect of a software application would be so large that it would be economically impossible to prepare and execute them. For example, a simple program to analyze a string of 10 alphabetic characters has 26^10 possible input combinations; at a rate of one combination per microsecond, exhaustively testing the string would take roughly 4.5 years, an example drawn from Watts S. Humphrey's book Managing the Software Process. (A short calculation after this list makes the scale concrete.)

2. Laborious process of test case preparation and documentation: Test case preparation is labor intensive and has to fit into what is normally an already tight schedule. Often project teams are tempted to pay less attention to this activity. Considering the large number of test cases to be developed, it takes considerable effort to document and maintain the documentation. The project team seldom documents all the test cases and has to conduct testing with additional undocumented test cases.

3. Effectiveness of test cases: Identifying a test case is as important as writing a line of code. Lack of proper methods makes this task more challenging. Software engineering researchers, such as Glenford J. Myers in his book Software Reliability, Principles and Practices, have observed that it is impossible to test one's own programs. Test cases for a module created by the software developer tend to have an ingrained bias toward an application's functionality. Such test cases often are prepared to prove what is being developed, instead of to reveal defects – the proper objective of test cases.



Figure 1

4. Resource crunch: More effort is spent in software testing than in any other phase of software development. Figure 1 shows the distribution of efforts in the software development life cycle. The percent distributions are typical industry averages. While specific amounts of effort are scheduled for testing, projects often end up with less testing time than planned because the design and construction phases consume more effort than estimated. Tools that may increase effectiveness of testing are unavailable or unnoticed. Even if tools are available, a project team faced with a new learning curve may not be inclined to use them.
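To make the scale described in the first challenge concrete, here is a minimal back-of-the-envelope calculation in Python, assuming one combination tested per microsecond as in Humphrey's example:

# Exhaustive testing of a 10-character alphabetic string:
# each position can hold any of 26 letters, so the input space is 26**10.
combinations = 26 ** 10                        # about 1.4e14 distinct strings

MICROSECONDS_PER_YEAR = 1_000_000 * 60 * 60 * 24 * 365

years = combinations / MICROSECONDS_PER_YEAR   # at one combination per microsecond
print(f"{combinations:,} combinations, roughly {years:.1f} years of testing")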

How Can the Challenges Be Met?

An analysis of the testing process reveals that one of the root causes of ineffectiveness is the process of test case creation. A test case is considered effective when it can reveal a defect. With good test cases, most latent defects can be identified and fixed before a product is shipped. Hence improving the test case creation process will help make the software testing process more effective.

The complexity of conventional test case documents often becomes a bottleneck to improving effectiveness. The way out is to deploy the right tools to design useful test cases, ones that can reveal defects. It may not be necessary to test every possible combination, since many of them could be redundant. The focus must be on those tests that can accurately indicate the health of the software.

Fault tree analysis may help simplify designing better test cases to improve effectiveness of the test process. The FTA preparation process brings in a variety of ideas, broadens the scope of thinking and adds creativity to the process.

What Is Fault Tree Analysis?

Fault tree analysis is a top-down approach to identifying all potential causes leading to a defect. Each cause is then broken down into its lowest possible constituent events or faults. The analysis begins with a major defect. All the potential events – individual or in combination – that may cause the defect are identified. These potential events are then traced down in a similar way to the lowest possible level.



Table 1

Two logic symbols – known as logic gates – And and Or are used to represent the combination of events. The And symbol indicates that all preceding events must exist simultaneously for the defect to occur. The Or symbol indicates that any one of the preceding events may lead to the defect. Table 1, known as a truth table, illustrates how the logic gates behave. Consider the And gate: the output exists only when both inputs are present simultaneously. With reference to fault tree analysis, the fault condition exists only if all the preceding events exist simultaneously. In the case of the Or gate, either input alone is sufficient to produce the output condition; that is, either input state may result in a fault condition.
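The gate behavior can also be expressed in a few lines of Python; this minimal sketch simply reproduces the truth-table semantics described above:

# Truth-table behavior of the two FTA logic gates.
def and_gate(a, b):
    # The fault propagates only if both preceding events are present.
    return a and b

def or_gate(a, b):
    # The fault propagates if either preceding event is present.
    return a or b

for a in (False, True):
    for b in (False, True):
        print(f"A={a} B={b} -> AND={and_gate(a, b)} OR={or_gate(a, b)}")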



Figure 2: Demonstration of FTA

Figure 2 illustrates how FTA could be used for a typical troubleshooting situation, e.g., a computer that does not start. The fault tree is shown with Levels 1 through 3 tracing the fault conditions, Level 1 being the highest.

There are at least two situations (faults) that may result in a computer not starting. Since either of the situations – power failure or booting failure – is capable of producing the Level 1 fault, the Or gate is used to represent their combination. The Level 2 fault, power failure, may result if the primary power source fails and at the same time the uninterruptible power supply (UPS) is down. An And gate is used to represent this situation. The event, "UPS down," may be further traced to faults like battery failure, hardware failure and so on.
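As a rough illustration, not tied to any particular FTA tool, the Figure 2 tree can be modeled in Python as nested gates over basic events and evaluated for an assumed set of faults:

# Minimal fault-tree model of Figure 2: "computer not starting".
def AND(*children):
    return lambda faults: all(child(faults) for child in children)

def OR(*children):
    return lambda faults: any(child(faults) for child in children)

def basic(name):
    return lambda faults: name in faults

# Level 2: power failure requires both basic events at the same time.
power_failure = AND(basic("primary power source fails"), basic("UPS down"))
# Level 2: booting failure (could itself be expanded, e.g., motherboard failure).
booting_failure = basic("booting failure")
# Level 1 (top event): either Level 2 fault is enough.
computer_not_starting = OR(power_failure, booting_failure)

print(computer_not_starting({"primary power source fails"}))               # False (UPS still up)
print(computer_not_starting({"primary power source fails", "UPS down"}))   # True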

Deciding the scope of FTA at the beginning is essential to limit the analysis to the required level. For example, if the focus is on problems with a computer, there is no need to analyze the failures of a UPS as it may not be an integral part of a computer. However, the failure of a motherboard associated with a booting problem may be discussed further as it is very much part of a computer system.

Advantages of Applying FTA

FTA can be advantageous to software projects in at least three ways:

Value addition: FTA has the potential to serve as a defect-prevention tool. If FTA is performed before baselining the design, it can provide valuable information on application failures and their mechanisms. This information could be utilized to improve the design by preventing the potential defects or by introducing fault-tolerating abilities. FTA is most effective for more complex functions but may not be adding much value when applied to the simple functions of a software application. FTA utilizes the potential of teamwork to bring in a variety of ideas and broaden thinking.

Simplicity: FTA is very simple and can be prepared by project teams with minimum training. Its graphical presentation improves readability and makes it easy to maintain in the event of changes.

Traceability: Some of the conventional test case tools provide a unique identification to individual test cases. Such traceability could be added to FTA by appropriately identifying the individual scenario.

An FTA Case Study

Here is a common example of improving the security of a software application by using controlled access. A weakness in choosing an appropriate login name or password may result in weaker application security (this example focuses on the user ID and password rather than on other factors, such as network or other interfaces). Figure 3 illustrates how this is represented.

The user ID and the password are considered further to see what could lead to the defect, i.e., poor security. A short length, the absence of digits or special characters, validity not bounded by time, and so on could make a password weak. Similar conditions could be listed for user IDs and the other primary concerns.

Each scenario is identified with a unique number to establish traceability. Such traceability helps test cases to be related to other project artifacts like requirements, design or program specifications. The valid and invalid conditions for respective scenarios also can be noted for quick reference during testing.
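A hedged sketch of how such scenarios might be carried into executable checks follows; the scenario IDs, the password rules and the is_acceptable() helper are invented for illustration and are not from the original case study:

import re

# Hypothetical scenarios derived from the weak-password branch of Figure 3.
# (scenario id, description, example that should be rejected, example that should be accepted)
SCENARIOS = [
    ("PWD-01", "password shorter than minimum length",    "ab1$",      "str0ng!Pass"),
    ("PWD-02", "password contains no digits",             "Password!", "Passw0rd!"),
    ("PWD-03", "password contains no special characters", "Passw0rd",  "Passw0rd!"),
]

def is_acceptable(password):
    # Toy acceptance rule covering only the three scenarios above.
    return (len(password) >= 8
            and re.search(r"\d", password) is not None
            and re.search(r"[^A-Za-z0-9]", password) is not None)

for sid, description, bad, good in SCENARIOS:
    assert not is_acceptable(bad), f"{sid}: weak password was accepted ({description})"
    assert is_acceptable(good), f"{sid}: strong password was rejected"
    print(f"{sid} covered: {description}")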

Well Known Software Failures

Software systems are pervasive in all aspects of society. From electronic voting to online shopping, a significant part of our daily life is mediated by software. On this page, I collect a list of well-known software failures. I will start with a study of the economic cost of software bugs.

Contents

Economic Cost of Software Bugs
Air-Traffic Control System in LA Airport *****
Northeast Blackout **
NASA Mars Climate Orbiter ****
Denver Airport Baggage-handling System *

The number of *s is the ironic factor I assign to each story. The one with the most *s is the most ironic one. Why? You will find out.

Economic Cost of Software Bugs

Report Date: 2/2002 Price Tag: $60 Billion Annually

WASHINGTON (COMPUTERWORLD) - Software bugs are costing the U.S. economy an estimated $59.5 billion each year, with more than half of the cost borne by end users and the remainder by developers and vendors, according to a new federal study.

Improvements in testing could reduce this cost by about a third, or $22.5 billion, but it won't eliminate all software errors, the study said. Of the total $59.5 billion cost, users incurred 64% of the cost and developers 36%.

Out of curiosity about how the study calculated the cost, I skimmed through the report. The following is a summary of its methodology.

It divided the software development process into stages: Requirement Gathering and Analysis, Architectural Design, Coding, Unit Test, Integration and Component, RAISE System Test, Early Customer Feedback, Beta Test Programs, and Post-product Release.

Bugs are generated at each stage of the software development process, and the later in the production process a bug is discovered, the more costly it is to repair. Impact estimates were then developed relative to two counterfactual scenarios. The first scenario investigates the cost reductions if all bugs and errors could be found in the same development stage in which they are introduced. This is referred to as the cost of an inadequate software testing infrastructure. The second scenario investigates the cost reductions associated with finding an increased percentage (but not 100 percent) of bugs and errors closer to the development stages where they are introduced. This is referred to as the cost reduction from feasible infrastructure improvements.

The study examined the impact of buggy software in several major industries -- automotive, aerospace and financial services -- and then extrapolated the results to the U.S. economy. It concluded that software bugs cost (the first scenario) the U.S. economy an estimated $59.5 billion each year, and that improvements in testing (the second scenario) could reduce this cost by about a third, or $22.5 billion.

The report also included interesting tables that show the frequency with which errors are found at each stage, and the relative cost to repair defects when found at different stages.

Air-Traffic Control System in LA Airport

Incident Date: 9/14/2004

(IEEE Spectrum) -- It was an air traffic controller's worst nightmare. Without warning, on Tuesday, 14 September, at about 5 p.m. Pacific daylight time, air traffic controllers lost voice contact with 400 airplanes they were tracking over the southwestern United States. Planes started to head toward one another, something that occurs routinely under careful control of the air traffic controllers, who keep airplanes safely apart. But now the controllers had no way to redirect the planes' courses.

The controllers lost contact with the planes when the main voice communications system shut down unexpectedly. To make matters worse, a backup system that was supposed to take over in such an event crashed within a minute after it was turned on. The outage disrupted about 800 flights across the country.

Inside the control system unit is a countdown timer that ticks off time in milliseconds. The VCSU uses the timer as a pulse to send out periodic queries to the VSCS. It starts out at the highest possible number that the system's server and its software can handle: 2^32, a number just over 4 billion milliseconds. When the counter reaches zero, the system runs out of ticks and can no longer time itself. So it shuts down.

Counting down from 2^32 to zero in milliseconds takes just under 50 days. The FAA procedure of having a technician reboot the VSCS every 30 days resets the timer to 2^32 almost three weeks before it runs out of digits.
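A quick check of those figures, in plain Python arithmetic:

# The VCSU countdown timer starts at 2**32 milliseconds.
start_ticks_ms = 2 ** 32                              # 4,294,967,296 ms, "just over 4 billion"

days_until_zero = start_ticks_ms / (1000 * 60 * 60 * 24)
print(f"{days_until_zero:.1f} days until the counter reaches zero")   # about 49.7 days

margin_days = days_until_zero - 30                    # reboot every 30 days
print(f"{margin_days:.1f} days of margin, almost three weeks")        # about 19.7 days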

Northeast Blackout

Incident Date: 8/14/2003 Price Tag: $6 - $10 Billion

NEW YORK (AP) - A programming error has been identified as the cause of alarm failures that might have contributed to the scope of last summer's Northeast blackout, industry officials said Thursday.

The failures occurred when multiple systems trying to access the same information at once got the equivalent of busy signals, he said. The software should have given one system precedence.

With the software not functioning properly at that point, data that should have been deleted were instead retained, slowing performance, he said. Similar troubles affected the backup systems.

NASA Mars Climate Orbiter

Incident Date: 9/23/1999 Price Tag: $125 million

WASHINGTON (AP) -- For nine months, the Mars Climate Orbiter was speeding through space and speaking to NASA in metric. But the engineers on the ground were replying in non-metric English.

It was a mathematical mismatch that was not caught until after the $125-million spacecraft, a key part of NASA's Mars exploration program, was sent crashing too low and too fast into the Martian atmosphere. The craft has not been heard from since.

Noel Henners of Lockheed Martin Astronautics, the prime contractor for the Mars craft, said at a news conference it was up to his company's engineers to assure the metric systems used in one computer program were compatible with the English system used in another program. The simple conversion check was not done, he said.
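The mismatch is widely reported to have been between pound-force seconds and newton-seconds in thruster impulse data. The sketch below uses invented numbers purely to show how an unconverted value silently skews every downstream calculation by a constant factor:

NEWTON_SECONDS_PER_POUND_FORCE_SECOND = 4.448222

impulse_reported_lbf_s = 100.0                  # value produced in English units (illustrative)
impulse_wrongly_read_as_metric = impulse_reported_lbf_s
impulse_actual_metric = impulse_reported_lbf_s * NEWTON_SECONDS_PER_POUND_FORCE_SECOND

error_factor = impulse_actual_metric / impulse_wrongly_read_as_metric
print(f"Trajectory model underestimates thruster impulse by a factor of {error_factor:.3f}")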

Denver Airport Baggage-handling System

Incident Date: 11/1993 - 6/1994 Price Tag: > $200 million

(Scientific American) -- Scheduled for takeoff by last Halloween (1993), the airport's grand opening was postponed until December to allow BAE Automated Systems time to flush the gremlins out of its $193-million system. December yielded to March. March slipped to May. In June the airport's planners, their bond rating demoted to junk and their budget hemorrhaging red ink at the rate of $1.1 million a day in interest and operating costs, conceded that they could not predict when the baggage system would stabilize enough for the airport to open.

Saturday, October 25, 2008

Defect Management

Defects determine the effectiveness of the testing we do; if there were no defects, we would not have a job. When no defects are found, two explanations are worth considering: either the developer is so strong that no defects were introduced, or the test engineer is weak. In many situations, the second proves to be the case, which implies that we lack the knack. In this section, let us understand defects.

What is a Defect?

For a test engineer, a defect is any of the following:
•Any deviation from specification
•Anything that causes user dissatisfaction
•Incorrect output
•Software that does not do what it is intended to do.

Bug / Defect / Error:
•Software is said to have a bug if its features deviate from specifications.
•Software is said to have a defect if it has unwanted side effects.
•Software is said to have an error if it gives incorrect output.

For a test engineer, however, all three are treated the same; the distinctions above are only indicative and are made for documentation purposes.

Defect Taxonomies

Categories of Defects:

All software defects can be broadly categorized into the following types:
•Errors of commission: something wrong is done
•Errors of omission: something left out by accident
•Errors of clarity and ambiguity: different interpretations
•Errors of speed and capacity

The above is, however, a broad categorization; below is a more detailed list of the types of defects that can be identified in different software applications:
1. Conceptual bugs / Design bugs
2. Coding bugs
3. Integration bugs
4. User Interface Errors
5. Functionality
6. Communication
7. Command Structure
8. Missing Commands
9. Performance
10. Output
11. Error Handling Errors
12. Boundary-Related Errors
13. Calculation Errors
14. Initial and Later States
15. Control Flow Errors
16. Errors in Handling Data
17. Race Conditions Errors
18. Load Conditions Errors
19. Hardware Errors
20. Source and Version Control Errors
21. Documentation Errors
22. Testing Errors

Life Cycle of a Defect

The following self-explanatory figure shows the life cycle of a defect:
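Since the figure is not reproduced here, the sketch below stands in for it with a commonly used set of states and transitions; the exact names and flow in the original figure may differ:

# A typical defect life cycle expressed as allowed state transitions.
DEFECT_LIFE_CYCLE = {
    "New":      ["Assigned", "Rejected", "Deferred"],
    "Assigned": ["Open"],
    "Open":     ["Fixed", "Rejected", "Deferred"],
    "Fixed":    ["Retest"],
    "Retest":   ["Verified", "Reopened"],
    "Reopened": ["Assigned"],
    "Verified": ["Closed"],
    "Deferred": ["Assigned"],
    "Rejected": ["Closed"],
    "Closed":   [],
}

def can_transition(current, target):
    return target in DEFECT_LIFE_CYCLE.get(current, [])

print(can_transition("Fixed", "Retest"))   # True
print(can_transition("New", "Closed"))     # False: a new defect is triaged first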

Action-Based Testing Framework

Action-Based Testing (ABT) provides a powerful framework for organizing test design, automation and execution around keywords. In ABT keywords are called “actions” to make the concept absolutely clear. Actions are the tasks to be executed in a test. Rather than automating an entire test as one long script, an automation engineer can focus on automating actions as individual building-blocks that can be combined in any order to design a test. Non-technical test engineers and business analysts can then define their tests as a series of these automated keywords, concentrating on the test rather than the scripting language.

Traditional test design begins with a written narrative that must be interpreted by each tester or automation engineer working on the test. ABT test design takes place in a spreadsheet, with actions listed in a clear, well-organized sequence. Actions, test data and any necessary GUI interface information are stored in separate spreadsheets, where they can be referenced by the main test module. Tests are then executed from right within the spreadsheet, using third-party scripting tools or TestArchitect’s own built-in automation.

To realize the full power of Action Based Testing, it is important to use high-level actions whenever possible in test design. High-level actions are understandable by those familiar with the business logic of the test. For example, when the user inputs a number, the system makes a mortgage calculation or connects to a telephone. A good high-level action may not be specific to the system under test. “Enter order” is a good high-level step that can be used generically to refer to specific low-level steps that take place in many tests of many different applications.

Automation is then completed through the scripting (programming) of low-level actions. TestArchitect provides a comprehensive set of the low-level actions necessary through its built-in automation feature. In that case creating a high-level action required by the test design would involve only drag-and-drop of a few low-level actions to create that high-level action. The low-level actions behind “enter order” would be the specific steps needed to complete that action via various interfaces such as html, the Windows GUI, etc. An example of a low-level action would be “push button”.
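A minimal sketch of the idea in plain Python follows; the action names and signatures are illustrative and are not TestArchitect's actual API:

# Low-level actions: the scripted building blocks an automation engineer maintains.
def push_button(name):
    print(f"push button '{name}'")

def enter_text(field, value):
    print(f"enter '{value}' into field '{field}'")

# High-level action: meaningful to anyone who knows the business logic.
def enter_order(item, quantity):
    enter_text("item", item)
    enter_text("quantity", quantity)
    push_button("Submit")

# A test expressed as rows of (action keyword, arguments), much like an
# ABT test module spreadsheet.
ACTIONS = {"enter order": enter_order}
test_rows = [
    ("enter order", {"item": "widget", "quantity": 3}),
    ("enter order", {"item": "gadget", "quantity": 1}),
]

for keyword, arguments in test_rows:
    ACTIONS[keyword](**arguments)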

Whenever scripting by an automation engineer is required, breaking this work down into reusable low-level actions saves time and money by making future scripting changes unnecessary even when the software under test undergoes major revisions. A reshuffling of actions is usually all that is required. If more scripting is necessary, it involves only the rewriting of individual actions rather than revision of entire automation scripts and the resulting accumulation of a vast library of old automation.

“The organization develops test standards which can be reused in the next test. The test itself, and the various tasks involved, is therefore more clearly defined. The costs of the test are known in advance and it is clear what has been tested to a specific level of detail. In addition, insight into the approach and the status of the test process can be gained at all times, ensuring that the test process can be adjusted in a timely manner if necessary. This method enhances the quality of both the test process and the test products, resulting in higher quality for the tested system.”

Action Based Testing allows testing teams to create a much more effective test automation framework, overcoming the limitations of other methods:

Full Involvement of the Testing Team in Test Automation

Most testing teams consist primarily of people who have strong knowledge of the application under test or the business domain, but limited expertise in programming. The team members who fill the role of test automation engineer are often people with a software development or computer science background, but they may lack strong expertise in testing fundamentals, the software under test, or the business domain.

Action Based Testing allows both types of team members to contribute to the test automation effort by enabling each person to leverage their unique skills to create effective automated tests. Testers define tests as a series of reusable high-level actions. It is then the task of the automation engineer to determine how to automate the necessary low-level actions and combine them to produce the required high-level actions, both of which can often be reused in many future tests. This approach allows testers to focus on creating good tests, while the automation engineers focus on the technical challenge of implementing actions.

Significant Reduction of Test Automation Maintenance

Many organizations build a significant test automation suite using older automation methods and begin to see some benefits, only to get stuck with a huge maintenance effort when the application changes. Test automation teams end up spending more time maintaining their existing tests than actually creating new tests. This high maintenance burden is due to the fact that automated tests are highly dependent on the UI of the application under test; when the UI changes, so must the test automation. It is usually the case that the core business processes handled by an application will not change, but rather the UI used to enact those business processes changes.

Action Based Testing significantly reduces the maintenance burden by allowing users to define their tests at the business process level. Rather than defining tests as a series of interactions with the UI, test designers can define tests as a series of business actions. For example, a test of a banking application might contain the actions ‘open new account’, ‘deposit’, and ‘withdraw’. Even if the underlying UI changes, these business processes will still remain the same, so the test designer does not need to update the test. It will be the job of the automation engineer to update the actions affected by the UI changes, and this update often needs to be made in only one place.
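To make the maintenance point concrete, here is a small hypothetical sketch: the test is written only against business-level actions, so a UI redesign is absorbed by swapping the implementation behind one action while the test itself stays unchanged. The names are invented for illustration:

# Business-level test: stable even when the UI changes.
def run_banking_test(actions):
    actions["open new account"]("A-1001")
    actions["deposit"]("A-1001", 500)
    actions["withdraw"]("A-1001", 200)

def open_new_account_old_ui(account_id):
    print(f"[old UI] click 'New Account' menu item, type {account_id}, press OK")

def open_new_account_new_ui(account_id):
    # After the redesign, only this implementation changes; the test does not.
    print(f"[new UI] open 'Accounts' tab, click '+', type {account_id}, click Create")

actions = {
    "open new account": open_new_account_new_ui,
    "deposit":  lambda account, amount: print(f"deposit {amount} into {account}"),
    "withdraw": lambda account, amount: print(f"withdraw {amount} from {account}"),
}

run_banking_test(actions)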

Improved Quality of Automated Tests

In Action Based Testing, test designers follow a top-down approach which ensures that there is a clearly stated purpose for every test.
The first step is to determine how the overall test automation effort will be broken down into individual test modules. Some common ways of grouping tests include:
• Different functional areas of the application.
• Different types of tests (positive, negative, requirements-based, end-to-end, scenario-based, etc.).
• Different quality attributes being tested (business processes, UI consistency, performance, etc.).

Once the test modules have been identified, the next step is to define test requirements for each module. Test requirements are critical because they force test developers to consider what is being tested in each module, and to explicitly document it.

Once the test requirements are defined, they serve as both a roadmap for developing the test cases in the module, and documentation for the purpose of the tests. Each test case is associated to one or more test requirements, and each test requirement should be addressed by one or more test cases.

By explicitly stating the test requirements, it is possible to easily determine the purpose of a test, and to identify if a test does not sufficiently meet those test requirements. Test requirements can be quickly checked to determine if the test needs maintenance or even retirement. Test developers can be precise and concise in their test creation, creating enough tests to meet their stated requirements without introducing unwanted redundancy.

After the test requirements have been explicitly defined, test designers can start implementing the test cases, using either predefined actions or newly defined actions. Test designers can define their tests as high-level business processes, which makes the tests more readable than tests defined using low-level interface interactions.

Facilitates Test Automation Strategy

Many testing teams dive into test automation without first considering how they should approach test automation. A very typical approach is to acquire a test automation tool, and then try to start automating as many existing test cases as possible. More often than not, this approach is not effective.

Action Based Testing provides a framework that integrates the entire testing organization in support of effective test automation. Business analysts, testers of all kinds, automation engineers, test leads and QA managers all work within the framework to complete test planning, test design, test automation and test execution.
With the right framework in place, the organization can respond most effectively to everything from marketing requirements to software development changes.

Enables Effective Collaboration by Distributed Teams

With testing teams now often distributed across the country and around the world, the challenge of sharing information, tests and test automation libraries is multiplied many times over. Action Based Testing provides a proven framework for organizing tests and test automation libraries with a clear structure, preventing disruptions that can be caused by distance and time zone differences.

TestArchitect, an ABT-based tool, takes this to the next level by enabling remote sharing of database repositories of test modules, actions and other components, and provides clear control and reporting to managers of access, changes and results.

Why Test Automation Projects Fail to Achieve Their Potential

Despite the clear benefits of test automation, many organizations are not able to build effective test automation programs. Test automation becomes a costly effort that finds fewer bugs and is of questionable value to the organization.
There are a number of reasons why test automation efforts are unproductive. Some of the most common include:

Poor quality of tests being automated

Experts in the field agree that before approaching problems in test automation, we must be certain those problems are not rooted in fundamental flaws in test planning and design.

“It doesn’t matter how clever you are at automating a test or how well you do it, if the test itself achieves nothing then all you end up with is a test that achieves nothing faster.” Mark Fewster, Software Test Automation, I.1, (Addison Wesley, 1999).

Many organizations simply focus on taking existing test cases and converting them into automated tests. There is a strong belief that if 100% of the manual test cases can be automated, then the test automation effort will be a success.
In trying to achieve this goal, organizations find that they may have automated many of their manual tests, but at a huge investment of time and money, and with few bugs found. This is because a poor test is a poor test, whether it is executed manually or automatically.

Lack of good test automation framework and process

Many teams acquire a test automation tool and begin automating as many test cases as possible, with little consideration of how they can structure their automation in such a way that it is scalable and maintainable. Little consideration is given to managing the test scripts and test results, creating reusable functions, separating data from tests, and other key issues which allow a test automation effort to grow successfully.

After some time, the team realizes that they have hundreds or thousands of test scripts, thousands of separate test result files, and the combined work of maintaining the existing scripts while continuing to automate new ones requires a larger and larger test automation team with higher and higher costs and no additional benefit.

“Anything you automate, you’ll have to maintain or abandon. Uncontrolled maintenance costs are probably the most common problem that automated regression test efforts face.” Kaner, Bach and Pettichord, ibid

Inability to adapt to changes in the system under test

As teams drive towards their goal of automating as many existing test cases as possible, they often don’t consider what will happen to the automated tests when the system under test (SUT) undergoes a significant change.

Lacking a well-conceived test automation framework that considers how to handle changes to the system under test, these teams often find that the majority of their test scripts need maintenance. The outdated scripts will usually result in skyrocketing numbers of false negatives, since the scripts are no longer finding the behavior they are programmed to expect.

As the team hurriedly works to update the test scripts to account for the changes, project stakeholders begin to lose faith in the results of the test automation. Often the lack of perceived value in the test automation will result in a decision to scrap the existing test automation effort and start over, using a more intelligent approach that will produce incrementally better results.

“Test products often receive poor treatment from project stakeholders. In everyday practice, many organizations must set up complete tests from scratch, even for minor adaptations, since existing tests have been lost or can no longer be used. To achieve a short time-to-market, tests need to be both easy to maintain and reusable.” Buwalda, Janssen and Pinkster, Integrated Test Design and Automation: Using the TestFrame Method.

Test Automation Evolution

Software test automation has evolved through several generations of tools and techniques:

Capture/playback tools record the actions of a tester in a manual test execution, and allow tests to be run unattended, greatly increasing test productivity and eliminating the mind-numbing repetition of manual testing. However, even small changes to the software under test require that the test be recorded manually again. Therefore, this first generation of tools is not efficient or scalable.

Scripting, a form of programming in computer languages specifically developed for software test automation, alleviates many issues with capture/playback tools. However, the developers of these scripts must be highly technical and specialized programmers who work in isolation from the testers actually performing the tests. In addition, scripts are best suited for GUI testing but don’t lend themselves to embedded, batch, or other forms of systems. Finally, as changes to the software under test require complex changes to the associated automation scripts, maintenance of ever-larger libraries of automation scripts becomes an overwhelming challenge.

Data-driven testing is often considered separately as an important development in test automation. This approach simply but powerfully separates the automation script from the data to be input and expected back from the software under test. Key benefits to this approach are that the data can be prepared by testers without relying on automation engineers, and the possible variations and amount of data used to test are vastly increased. This breaking down of the problem into two pieces is very powerful. While this approach greatly extends the usefulness of scripted test automation, the huge maintenance chores required of the automation programming staff remain.
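A minimal sketch of the separation, with an invented CSV layout and a trivial stand-in for the software under test: the script is written once, and testers extend coverage simply by adding data rows:

import csv, io

# Test data kept separate from the automation script (here an in-memory CSV).
TEST_DATA = io.StringIO("""a,b,expected
1,2,3
10,-4,6
0,0,0
""")

def add(a, b):
    # Stand-in for the software under test.
    return a + b

for row in csv.DictReader(TEST_DATA):
    a, b, expected = int(row["a"]), int(row["b"]), int(row["expected"])
    result = add(a, b)
    status = "PASS" if result == expected else "FAIL"
    print(f"add({a}, {b}) = {result}, expected {expected} [{status}]")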

Keyword-based test automation breaks work down even further in an advanced, structured and elegant approach. This reduces the cost and time of test design, automation, and execution by allowing all members of a testing team to focus on what they do best. Using this method, non-technical testers and business analysts can develop executable test automation using “keywords” that represent actions recognizable to end-users, such as “login”, while automation engineers devote their energy to coding the low-level steps that make up those actions, such as “click”, “find text box A in window B”, “enter UserName”, etc. Keyword-based test design can actually begin based on documents developed by business analysts or the marketing department, before the final details of the software to be tested are known. As the test automation process proceeds, bottlenecks are removed and the expensive time of highly-trained professionals is used effectively.

The cost benefits of the keyword method become even more apparent as the testing process continues. When the software under test is changed, revisions to the test and to the automation scripts are necessary. By using a keyword-based framework, organizations can greatly reduce the amount of maintenance needed, and avoid rewriting entire test scripts. Many changes do not require new automation at all, and can be completed by non-technical testers or business analysts. When required, changes to automated keywords can be completed by automation engineers without affecting the rest of the test, and can be swapped into other tests in the library as needed.

The keyword method has become dominant in Europe since its introduction in 1994, where it was incorporated into the TestFrame method and tool, and is now coming into its own in the USA. LogiGear’s Action Based Testing™ represents the continued evolution of this approach under the guidance of the original architect of the keyword method. This method is the foundation of LogiGear’s test automation toolset, TestArchitect™, which not only organizes test design and test automation around keywords, but also offers built-in actions that make it possible to automate many tests without scripting of any kind.

Hybrid testing tools merit a brief discussion due to the intense marketing efforts of several vendors that are helping to bring awareness of keyword-driven technologies to the industry. In these products, a keyword-like user interface is layered atop a traditional automated testing tool. Most of these tools simply provide a GUI window listing the library of low-level functions that automation engineers have produced in a scripting language. Many attempt to offer “scriptless” automation through the use of GUI views, templates and business rules-based test design. One example emphasizes a graphical interface that enables non-technical users to create tests by specifying low-level actions. The tool then performs automatic code generation in a target scripting language. These canned keyword scripts must be regenerated each time a change is made to the test, and direct editing is not recommended.

These hybrid solutions, in attempting to oversimplify test automation engineering, are unable to offer the power, flexibility and customization necessary to automate tests for complex systems. In addition, because these tools implement a keyword-based interface without an underlying testing method supported by the full keyword framework, the manual creation and maintenance of tests is rather labor intensive due to the use of only low-level keywords.

Friday, October 24, 2008

Frequently Asked Testing Questions - Part1

Why is it often hard for management to get serious about quality assurance?

Solving problems is a high-visibility process; preventing problems is low-visibility.

Why does software have bugs?

-> Miscommunication or no communication - as to specifics of what an application should or shouldn't do (the application's requirements).

-> Software complexity - the complexity of current software applications can be difficult to comprehend for anyone without experience in modern-day software development. Windows-type interfaces, client-server and distributed applications, data communications, enormous relational databases, and sheer size of applications have all contributed to the exponential growth in software/system complexity.

-> Programming errors - programmers, like anyone else, can make mistakes.

-> Changing requirements - the customer may not understand the effects of changes, or may understand and request them anyway - redesign, rescheduling of engineers, effects on other projects, work already completed that may have to be redone or thrown out, hardware requirements that may be affected, etc. If there are many minor changes or any major changes, known and unknown dependencies among parts of the project are likely to interact and cause problems, and the complexity of keeping track of changes may result in errors. Enthusiasm of engineering staff may be affected. In some fast-changing business environments, continuously modified requirements may be a fact of life. In this case, management must understand the resulting risks, and QA and test engineers must adapt and plan for continuous extensive testing to keep the inevitable bugs from running out of control. (See 'What can be done if requirements are changing continuously?')

-> Time pressures - scheduling of software projects is difficult at best, often requiring a lot of guesswork. When deadlines loom and the crunch comes, mistakes will be made.

-> Poorly documented code - it's tough to maintain and modify code that is badly written or poorly documented; the result is bugs. In many organizations management provides no incentive for programmers to document their code or write clear, understandable code. In fact, it's usually the opposite: they get points mostly for quickly turning out code, and there's job security if nobody else can understand it (“if it was hard to write, it should be hard to read”).

-> Software development tools - visual tools, class libraries, compilers, scripting tools, etc. often introduce their own bugs or are poorly documented, resulting in added bugs.

How can new Software QA processes be introduced in an existing organization?

A lot depends on the size of the organization and the risks involved. For large organizations with high-risk projects (in terms of lives or money), serious management buy-in is required and a formalized QA process is necessary.

Where the risk is lower, management and organizational buy-in and QA implementation may be a slower, step-at-a-time process. QA processes should be balanced with productivity in order to keep bureaucracy from getting out of hand.

For small groups or projects, a more ad-hoc process may be appropriate, depending on the type of customers and projects. A lot will depend on team leads or managers, feedback to developers, and adequate communications among customers, managers, developers, and testers.

In all cases the most value for effort will be in requirements management processes, with a goal of clear, complete, testable requirement specifications.

What is verification? What is validation?

Verification typically involves reviews and meetings to evaluate documents, plans, code, requirements and specifications. This can be done with checklists, issues lists, walkthroughs and inspection meetings.

Validation typically involves actual testing and takes place after verifications are completed.

What is a 'walkthrough'?

A 'walkthrough' is an informal meeting for evaluation or informational purposes.

What is an 'inspection'?

An inspection is more formalized than a 'walkthrough', and typically consists of 3-8 people including a moderator, reader (the author of whatever is being reviewed) and a recorder to take notes.

The subject of the inspection is typically a document, such as a requirements specification or a test plan.

The purpose is to find problems and see what is missing, not to fix anything.
Attendees should prepare for this type of meeting by reading through the document; most problems will be found during this preparation. The result of the inspection meeting should be documented in a written report. Preparation for inspections is difficult, but is one of the most cost-effective methods of ensuring quality, since bug prevention is far more cost effective than bug detection.

What are five common problems in the software development process?

1) Poor requirements - if requirements are unclear, incomplete, too general, or not testable, there will be problems.
2) Unrealistic schedule - if too much work is crammed in too little time, problems are inevitable.
3) Inadequate testing - no one will know whether or not the program is any good until the customer complains or systems crash.
4) Featuritis - requests to pile on new features after development is underway; extremely common.
5) Miscommunication - if developers don't know what is needed or customers have erroneous expectations, problems are guaranteed.

What are five common solutions to software development problems?

1) Solid requirements - clear, complete, detailed, cohesive, attainable, testable requirements that are agreed to by all players. Use prototypes to help nail down requirements.
2) Realistic schedules - allow adequate time for planning, design, testing, bug fixing, re-testing, changes, and documentation; personnel should be able to complete the project without burning out.
3) Adequate testing - start testing early on, re-test after fixes or changes, plan for adequate time for testing and bug fixing.
4) Stick to initial requirements as much as possible - be prepared to defend against changes and additions once development has begun, and be prepared to explain consequences. If changes are necessary, they should be adequately reflected in related schedule changes. If possible, use rapid prototyping during the design phase so that customers can see what to expect. This will provide a higher comfort level with their requirement decisions and will minimize changes later on.
5) Communication - require walkthroughs and inspections when appropriate; make extensive use of group communication tools - e-mail, groupware, networked bug-tracking tools and change management tools, intranet capabilities, etc.; ensure documentation is available and up-to-date - preferably electronic, not paper; promote teamwork and cooperation; use prototypes early on so customers' expectations are clarified.

What is software 'quality'?

Quality software is reasonably bug-free, delivered on time and within budget, meets requirements and/or expectations, and is maintainable. However, quality is obviously a subjective term. It will depend on who the “customer” is and their overall influence in the scheme of things. A wide-angle view of the “customers” of a software development project might include end-users, customer acceptance testers, customer contract officers, customer management, the development organization's management/accountants/testers/salespeople, future software maintenance engineers, stockholders, magazine columnists, etc. Each type of “customer” will have their own slant on “quality” - the accounting department might define quality in terms of profits while an end-user might define quality as user-friendly and bug-free.

What is good code?

Good code is code that works, is bug free, and is readable and maintainable. Some organizations have coding standards that all developers are supposed to adhere to, but everyone has different ideas about what is best, or what is too many or too few rules. There are also various theories and metrics. Keep in mind that excessive use of standards and rules can stifle productivity and creativity. Peer reviews, buddy checks, code analysis tools, etc. can be used to check for problems and enforce standards.

What is 'good design'?

Design could refer to many things, but often refers to functional design or internal design. Good functional design is indicated by an application whose functionality can be traced back to customer and end-user requirements. Good internal design is indicated by software code whose overall structure is clear, understandable, easily modifiable, and maintainable; is robust with sufficient error handling and status logging capability; and works correctly when implemented. (See further discussion of functional and internal design in 'What's the big deal about requirements?')

What is the 'software life cycle'?

The life cycle begins when an application is first conceived and ends when it is no longer in use. It includes aspects such as initial concept, requirements analysis, functional design, internal design, documentation planning, test planning, coding, document preparation, integration, testing, maintenance, updates, retesting, phase-out, and other aspects.

Will automated testing tools make testing easier?

Possibly. For small projects, the time needed to learn and implement them may not be worthwhile. For larger projects, or on-going long-term projects, they can be valuable.

A common type of automated tool is the 'record/playback' type. For example, a tester could click through all combinations of menu choices, dialog box choices, buttons, etc. in an application GUI and have them 'recorded' with the results logged by a tool. The 'recording' is typically in the form of text based on a scripting language that is interpretable by the testing tool. If new buttons are added, or some underlying code in the application is changed, etc. the application can then be retested by just 'playing back' the 'recorded' actions, and comparing the logging results to check effects of the changes.

The problem with such tools is that if there are continual changes to the system being tested, the 'recordings' may have to be changed so much that it becomes very time-consuming to continuously update the scripts. Additionally, interpretation of results (screens, data, logs, etc.) can be a difficult task.

What makes a good test engineer?

Test engineers have a 'test to break' attitude, an ability to take the point of view of the customer, a strong desire for quality, and an attention to detail. Tact and diplomacy are useful in maintaining a cooperative relationship with developers, and an ability to communicate with both technical (developers) and non-technical (customers, management) people is useful.

Previous software development experience is helpful as it provides a deeper understanding of the software development process, gives the tester an appreciation for the developers' point of view, and reduces the learning curve in automated test tool programming.

What makes a good Software QA engineer?

The same qualities a good tester has are useful for a QA engineer. Additionally, a QA engineer must be able to understand the entire software development process and how it fits into the business approach and goals of the organization. Communication skills and the ability to understand various sides of issues are important.

What makes a good QA or Test manager?

QA or Test managers are familiar with the software development process; able to maintain the enthusiasm of their team and promote a positive atmosphere; able to promote teamwork to increase productivity; able to promote cooperation between Software, Test, and QA engineers; have the diplomatic skills needed to promote improvements in QA processes; have the ability to withstand pressures and say 'no' to other managers when quality is insufficient or QA processes are not being adhered to; able to communicate with technical and non-technical people, engineers, managers, and customers; and able to run meetings and keep them focused.

What is the role of documentation in QA?

Documentation plays a critical role in QA. QA practices should be documented, so that they are repeatable. Specifications, designs, business rules, inspection reports, configurations, code changes, test plans, test cases, bug reports, user manuals, etc. should all be documented. Ideally, there should be a system for easily finding and obtaining documents and determining what documentation will have a particular piece of information. Change management for documentation should be used if possible.

What is the big deal about 'requirements'?

One of the most reliable methods of ensuring problems, or failure, in a complex software project is to have poorly documented requirement specifications. Requirements are the details describing an application's externally perceived functionality and properties. Requirements should be clear, complete, reasonably detailed, cohesive, attainable, and testable. A non-testable requirement would be, for example, 'user-friendly' (which is too subjective). A testable requirement would be something like 'the product shall allow the user to enter their previously assigned password to access the application'.

Care should be taken to involve ALL of a project's significant 'customers' in the requirements process. 'Customers' could be in-house or out of house, and could include end-users, customer acceptance testers, customer contract officers, customer management, future software maintenance engineers, salespeople, etc. Anyone who could later derail the project if his/her expectations aren't met should be included as a customer if possible. In some organizations, requirements may end up in high-level project plans, functional specification documents, design documents, or other documents at various levels of detail. No matter what they are called, some type of documentation with detailed requirements will be needed by testers in order to properly plan and execute tests. Without such documentation, there will be no clear-cut way to determine if a software application is performing correctly.

What is a 'test plan'?

A software project test plan is a document that describes the objectives, scope, approach, and focus of a software testing effort. The process of preparing a test plan is a useful way to think through the efforts needed to validate the acceptability of a software product. The completed document will help people outside the test group understand the 'why' and 'how' of product validation. It should be thorough enough to be useful, but not so thorough that no one outside the test group will be able to read it.

What is a 'test case'?

A test case is a document that describes an input, action, or event and its expected result, in order to determine if a feature of an application is working correctly. A test case should contain particulars such as test case identifier, test case name, objective, test conditions/setup, input data requirements, steps, and expected results.

Note: The process of developing test cases can help find problems in the requirements or design of an application, since it requires completely thinking through the operation of the application. For this reason, it's useful to prepare test cases early in the development cycle, if possible.
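The particulars listed above map naturally onto a simple record; the field names below follow that list, while the example content is invented:

from dataclasses import dataclass, field
from typing import List

@dataclass
class TestCase:
    identifier: str
    name: str
    objective: str
    conditions_setup: str
    input_data: str
    steps: List[str] = field(default_factory=list)
    expected_result: str = ""

tc = TestCase(
    identifier="TC-LOGIN-001",
    name="Valid login",
    objective="Verify that a registered user can log in",
    conditions_setup="User 'jsmith' exists with a known password",
    input_data="username=jsmith, password=<valid password>",
    steps=["Open the login page", "Enter the credentials", "Click 'Log in'"],
    expected_result="The user's home page is displayed",
)
print(tc.identifier, "-", tc.name)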

What should be done after a bug is found?

The bug needs to be communicated and assigned to developers who can fix it. After the problem is resolved, fixes should be re-tested, and determinations made regarding requirements for regression testing to check that the fixes didn't create problems elsewhere. If a problem-tracking system is in place, it should encapsulate these processes. A variety of commercial problem-tracking/management software tools are available. These tools will give the team complete information so developers can understand the bug, get an idea of its severity, and reproduce it if necessary.

What is configuration management (CM)?

Configuration management covers the processes used to control, coordinate, and track:
code, requirements, documentation, problems, change requests, designs, tools/compilers/libraries/patches, changes made to them, and who makes the changes.

What if the software is so buggy it can't really be tested at all?

The best bet in this situation is for the testers to go through the process of reporting whatever bugs or blocking-type problems initially show up, with the focus being on critical bugs. Since this type of problem can severely affect schedules, and indicates deeper problems in the software development process (such as insufficient unit testing or insufficient integration testing, poor design, improper build or release procedures, etc.), managers should be notified and provided with some documentation as evidence of the problem.

How can it be known when to stop testing?

This can be difficult to determine. Many modern software applications are so complex, and run in such an interdependent environment, that complete testing can never be done.

Common factors in deciding when to stop are:

- Deadlines (release deadlines, testing deadlines, etc.)
- Test cases completed with certain percentage passed
- Test budget depleted
- Coverage of code/functionality/requirements reaches a specified point
- Bug rate falls below a certain level
- Beta or alpha testing period ends

What if there isn't enough time for thorough testing?

Use risk analysis to determine where testing should be focused. Since it's rarely possible to test every possible aspect of an application, every possible combination of events, every dependency, or everything that could go wrong, risk analysis is appropriate to most software development projects. This requires judgment skills, common sense, and experience.

Considerations can include:
- Which functionality is most important to the project's intended purpose?
- Which functionality is most visible to the user?
- Which functionality has the largest safety impact?
- Which functionality has the largest financial impact on users?
- Which aspects of the application are most important to the customer?
- Which aspects of the application can be tested early in the development cycle?
- Which parts of the code are most complex, and thus most subject to errors?
- Which parts of the application were developed in rush or panic mode?
- Which aspects of similar/related previous projects caused problems?
- Which aspects of similar/related previous projects had large maintenance expenses?
- Which parts of the requirements and design are unclear or poorly thought out?
- What do the developers think are the highest-risk aspects of the application?
- What kinds of problems would cause the worst publicity?
- What kinds of problems would cause the most customer service complaints?
- What kinds of tests could easily cover multiple functionalities?
- Which tests will have the best high-risk-coverage to time-required ratio?

What if the project isn't big enough to justify extensive testing?

Consider the impact of project errors, not the size of the project. However, if extensive testing is still not justified, risk analysis is again needed and the same considerations as described previously in 'What if there isn't enough time for thorough testing?' apply. The tester might then do ad hoc testing, or write up a limited test plan based on the risk analysis.

Tuesday, October 21, 2008

Test Automation Top 5 Best Practices

Introduction

The top five pitfalls encountered by managers employing software test automation are:

* Uncertainty and lack of control
* Poor scalability and maintainability
* Low test automation coverage
* Poor methods and disappointing quality of tests
* Technology vs. people issues

Following are five "best practice" recommendations to help avoid those pitfalls and successfully integrate test automation into your testing organization.

1. Focus on the Methodology, Not the Tool

A well-designed test automation methodology can help to resolve many of the problems associated with test automation. It is one of the keys to successful test automation. The methodology is the foundation upon which everything else rests; it drives tool selection and the rest of the automation process. The methodology will also help to drive the approach to any offshoring efforts that may be under consideration, guiding where to locate the "appropriate" pieces of the testing process, both on- and offshore.

When applying a methodology, it is important that testers and automation engineers understand and accept the methodology. Also, other stakeholders such as managers, business owners, and auditors should have a clear understanding of the methodology, and the benefits that it brings.

2. Choose Extensible Test Tools

Select a test tool that supports extensibility and a team-based Global Test Automation framework (team members are or may be distributed), and that offers a solid management platform.

Surveying test tools can be time consuming, but it is important to choose the best tool to meet your overall test needs. Before beginning the survey, however, you should have a good idea of what you need in the first place. This is intimately tied to your overall test methodology.

Make sure your chosen test tool has an "appropriate" automation architecture. Whatever tool is used for the automation, attention should be paid to how the various technical requirements of test case execution are implemented in a manageable and maintainable way. In looking at tools and considering your methodology, you should ask how well they address reusability, scalability and team-based automation (quantitative drivers of productivity), maintainability (a driver for lowering maintenance cost), and visibility (a qualitative driver of productivity and a vehicle for control, measurability and manageability).
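As a rough illustration of weighing those criteria when comparing candidate tools, here is a small Python sketch. The weights, tool names and scores are made-up assumptions, not a recommendation of any particular tool.

# Illustrative sketch: weight the evaluation criteria named above and
# compare candidate tools by weighted score. All numbers are examples.
CRITERIA_WEIGHTS = {
    "reusability": 3,
    "scalability / team-based automation": 3,
    "maintainability": 4,
    "visibility": 2,
}

candidate_tools = {
    "Tool A": {"reusability": 4, "scalability / team-based automation": 3,
               "maintainability": 2, "visibility": 5},
    "Tool B": {"reusability": 3, "scalability / team-based automation": 4,
               "maintainability": 4, "visibility": 3},
}

for tool, scores in candidate_tools.items():
    total = sum(CRITERIA_WEIGHTS[c] * scores[c] for c in CRITERIA_WEIGHTS)
    print(f"{tool}: weighted score {total}")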

You should strongly consider tools based on Action Based Testing (ABT). ABT creates a hierarchical test development model that allows test engineers (domain experts who may not be skilled in coding) to focus on developing executable tests based on action keywords, while automation engineers (highly skilled technically, but who may not be good at developing effective tests) focus on developing the low-level scripts that implement the keyword-based actions used by the test experts.

Care should be taken to avoid simplistic "Record-playback 2.0" tools that claim to do test automation with no coding. There is nothing wrong with being able to automate without having to code - in fact it is a good benefit to have. However, the bottlenecks of "Record-playback 2.0" tools quickly show as you start getting deep into production.

3. Separate Test Design and Test Automation

Test design should be separated from test automation so that automation does not dominate test design. In test automation, it is preferable to use a keywords approach, in which the automation focuses on supplying elementary functionalities that the tester can tie together into tests. This way, the complexity and multitude of the test cases do not lead to an unmanageable amount of test scripts.

The testers (domain experts) should fully focus on the development of test cases. Those test cases in turn are the input for the automation discipline. The automation engineers (highly skilled technically) can give feedback to the testers if certain test cases are hard to automate, suggesting alternative strategies, but mainly the testers should remain in the driver’s seat, not worrying too much about the automation.

In general, no more than 5% of the effort surrounding testing should be expended in automating the tests.

4. Lower Costs

There are three ways that you can look to lower costs:

* You can use labor that costs less than your local team
* You can use a tool that costs less
* You can use training to increase tool productivity

It is important, however, when addressing costs, not to focus on one dimension too closely without keeping in mind the overall methodology and considering the impact of any decision on other parts of the process. For example, lowering one cost such as labor by outsourcing may actually increase total costs if that labor does not have the proper skills.

5. Jumpstart with a Pre-Trained Team

Jumpstart the process with a pre-trained outsourcing partner that knows more about test automation success than you do, and that has a competent, well-trained staff of software testers, automation engineers, test engineers, test leads and project managers.

A pre-trained team can:

* Reduce your overall project timeframe, because you don’t need to include training at the beginning of the project schedule
* Reduce risk, because you don’t need to worry about how well the team members will learn the material and how skilled they will be after the training is complete

Conclusion

To summarize the preceding in a simple list, the five suggested best practices for test automation success are:

1. Focus on the methodology, not the tool
2. Choose extensible test tools
3. Separate test design and test automation
4. Lower costs
5. Jumpstart with a pre-trained team

This article is based on concepts in the book Global Software Test Automation: A Discussion of Software Testing for Executives.

Friday, October 3, 2008

Bug Reporting - Art and Advocacy

This article highlights the essence of finding and reporting bugs. It describes the art a tester should cultivate while finding a bug, enumerates the elements of a good bug report, and discusses advocacy for the bugs that have been reported. A basic duty of a tester is to fight for a bug until it is fixed.

Introduction

As testers, we all agree that the basic aim of a tester is to uncover bugs. Whenever a build arrives for testing, the primary objective is to find as many bugs as possible from every corner of the application. To accomplish this task to perfection, we test from various perspectives. We strain the application through various kinds of strainers: boundary value analysis, validation checks, verification checks, GUI checks, interoperability, integration tests, functional and business-concept checks, backend testing (such as running SQL commands or injections against the database), security tests, and many more. This makes us drill deep into the application as well as the business.

We would also agree that bug awareness is of no use until it is well documented. Here comes the role of BUG REPORTS. Bug reports are our primary work product; they are what people outside the testing group notice. These reports play an important role throughout the Software Development Life Cycle, as they are referenced by testers, developers, managers, senior management and, not least, the clients who these days demand test reports. So the bug reports are what is remembered the most.

Once bugs are reported by the testers and submitted to the developers to work on, we often see confrontations: testers sometimes face humiliation, there are cold wars, and discussions take the shape of mini quarrels. Yet at times the testers and developers are saying the same thing, or both are correct, and only the way they express their understanding differs - and that makes all the difference. In such situations we come to the conclusion that the best tester is not the one who finds the most bugs or the one who embarrasses the most programmers, but the one who gets the most bugs fixed.

Bug Reporting – An Art:

The first aim of a bug report is to let the programmer see the failure. The bug report gives a detailed description so that the programmers can make the bug fail for them. If the bug report does not accomplish this mission, there can be push-back from the development team: not a bug, cannot reproduce, and many other responses.

Hence it is important that the BUG REPORT be prepared by the testers with utmost proficiency and specificity. It should describe the famous three What's:

What we did:

- Module, Page/Window - the names we navigated to
- Test data entered and selected
- Buttons and the order of clicking

What we saw:

- GUI flaws
- Missing or no validations
- Error messages
- Incorrect navigations

What we expected to see:

- GUI flaw: give screenshots with the problem highlighted
- Incorrect message: give the correct language and message
- Validations: give the correct validations
- Error messages: justify with screenshots
- Navigations: mention the actual pages
Pointers to effective reporting can be derived from the above three What's. These are:

1. BUG DESCRIPTION should be clearly identifiable - a bug description is a short statement that briefly describes exactly what the problem is. The problem might need 5-6 steps to reproduce, but this statement should still clearly identify it. The problem might be a server error, but the description should be specific, e.g. "Server error occurs while saving a new record in the Add Contact window."

2. Bugs should be reported after building a proper context - PRE-CONDITIONS for reproducing the bug should be defined so the reader can reach the exact point where the bug can be reproduced. For example, if a server error appears while editing a record in the contacts list, the pre-condition should state: create a new contact and save it successfully; double-click the created contact in the contacts list to open the contact details; make changes and hit the Save button.

3. STEPS should be clear, with short and meaningful sentences - nobody wishes to study an entire paragraph of long, complex words and sentences. Make your report step-wise by numbering the steps 1, 2, 3… Keep each sentence short and clear. Write only those findings or observations that are necessary for this particular bug. Writing facts that are already known, or anything that does not help in reproducing the bug, makes the report unnecessarily complex and lengthy.

4. Cite examples wherever necessary - combinations of values, test data: most of the time a bug can be reproduced only with a specific set of data or values. Hence, instead of writing an ambiguous statement like "enter an invalid phone number and hit save", mention the exact data/value entered, e.g. "enter the phone number as 012aaa@$%.- and save".

5. Give references to specifications - if a bug contradicts the SRS or any functional document of the project, it is always good practice to mention the section and page number for reference. For example: refer to page 14 of the SRS, section 2-14.

6. Report without passing any kind of judgment in the bug description - the bug report should not be judgmental in any case, as this leads to controversy and gives an impression of bossiness. Remember, a tester should always be polite so as to keep his bug alive and meaningful. Being judgmental makes developers feel that testers think they know more than they do, and as a result gives birth to a psychological adversity. To avoid this, we can use the word "suggestion" and discuss the point with the developers or the team lead. We can also refer to another application, or to another module or page in the same application, to strengthen our point.

7. Assign severity and priority - SEVERITY is the state or quality of being severe. Severity tells us HOW BAD the bug is: it defines the importance of the bug from a FUNCTIONALITY point of view and implies adherence to rigorous standards or high principles. Severity levels can be defined as follows:

Urgent/Show-stopper: e.g. a system crash or an error message forcing the window to close; the system stops working totally or partially. A major area of the user's system is affected by the incident and it is significant to business processes.

Medium/Workaround: a behavior required in the specs does not work, but the tester can go on with testing. It affects a more isolated piece of functionality, occurs only at one or two customers, or is intermittent.

Low: failures that are unlikely to occur in normal use; problems that do not impact use of the product in any substantive way and have no or very low impact on business processes. In all cases, state the exact error messages.

PRIORITY means something deserves prior attention. It represents the importance of a bug from the customer's point of view: it voices the precedence established by urgency and is associated with the scheduling of the fix. Priority levels can be defined as follows:

High: This has a major impact on the customer. This must be fixed immediately.

Medium: This has a major impact on the customer. The problem should be fixed before release of the current version in development or a patch must be issued if possible.

Low: This has a minor impact on the customer. The flaw should be fixed if there is time, but it can be deferred until the next release.

8. Provide screenshots - this is the best approach. Any error we can see - object reference errors, server errors, GUI issues, message prompts and so on - should be saved as a screenshot and attached to the bug as proof. It helps the developers understand the issue more precisely.
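Putting the pointers above together, here is a rough Python sketch of a bug report record that captures the three What's plus severity and priority. The field names and the sample values are illustrative assumptions, not a mandated template.

# Sketch of a bug report record that captures the "3 What's" plus
# severity and priority. Field names are illustrative, not a mandated schema.
from dataclasses import dataclass, field
from enum import Enum

class Severity(Enum):
    URGENT = "Urgent/Show-stopper"
    MEDIUM = "Medium/Workaround"
    LOW = "Low"

class Priority(Enum):
    HIGH = "High"
    MEDIUM = "Medium"
    LOW = "Low"

@dataclass
class BugReport:
    summary: str                      # short, clearly identifiable description
    preconditions: list[str]          # context needed to reproduce
    steps: list[str]                  # numbered, short, meaningful steps (what we did)
    actual_result: str                # what we saw
    expected_result: str              # what we expected to see
    severity: Severity
    priority: Priority
    test_data: str = ""               # exact values used, e.g. "012aaa@$%.-"
    spec_reference: str = ""          # e.g. "SRS page 14, section 2-14"
    screenshots: list[str] = field(default_factory=list)

bug = BugReport(
    summary="Server error while saving a new record in the Add Contact window",
    preconditions=["User is logged in", "The Add Contact window is open"],
    steps=["Enter the phone number as 012aaa@$%.-", "Click Save"],
    actual_result="Server error page is displayed",
    expected_result="Validation message for the invalid phone number",
    severity=Severity.MEDIUM,
    priority=Priority.HIGH,
)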

Bug Advocacy:

This is an interesting part, as every now and then we need to fight for each and every one of our bugs to prove that it really is a bug, or that it really needs to be fixed because it is impacting the application. In the course of this, we often hear the following comments from programmers (or make such comments to them):

Scenario 1: "Works for me. So what goes wrong?" - Developers often say that a bug is not reproducible, that it works fine on their system. In such a case a tester needs to patiently hear and see what exactly the developer means. He needs to find out where the difference of opinion and understanding lies. We have not always understood the system correctly; it is quite possible that what we say is wrong and the developer has it right.

Scenario 2: "So then I tried . . ." - It is often seen that, under pressure, in the course of finding and reporting bugs a tester forgets that tests need to be performed with the application in a stable state, where it shows consistent behavior. A tester enters a phone number as special characters - saving this alone may cause a crash or an overflow error - yet without checking this he also enters a 150-character name and saves everything together, and that data can produce another error. Sometimes the system gives an error and we keep working on it until it crashes, and only then report a bug. In such cases the further actions worsen the problem and obscure its cause.

Scenario 3: "That's funny, it did it a moment ago." - There are programs that fail once a week, or once in a blue moon, or never fail when you try them in front of the programmer but always fail when you have a deadline coming up. So it is important to keep snapshots, test data, database traces, XML traces and so on.

Scenario 4: "And then it went wrong" - When the tester himself is not clear about the steps he performed or the data he entered and reports the bug only approximately, the bug may well turn out to be irreproducible. This leads us and the system nowhere, or into a sea of problems: we can neither close the bug nor describe it properly, and worst of all the tester cannot reproduce the bug or fight for its cause.

Conclusion:

It is important that we report everything we observe and do, state what the impact is, and measure that impact in terms of severity and priority. A tester is the catalyst of any team - he builds up the team on one hand and breaks down the application on the other. It is important to note all the issues in the application, big and small, with a proper understanding of the business and the application - and thereby add value through strong bug reports, with the status of bugs kept up to date at all stages of the Software Development Life Cycle. Happy bugging!

This is a very good article by Priyanka Prakash which I came across on the web. I thought I would share it with you all.

Wireless Testing Approach - From functionality to Security

This is a good article that I came across on the AppLabs Technologies website.

The implementation of Wireless LANs (WLAN) has become the cornerstone of many organizations’ mobile computing initiatives. The pervasive WLAN is the primary technology platform for increasing the productivity of your mobile and distributed knowledge workers. An efficient and optimized WLAN implementation improves communication flows, enables rapid access to senior management and enhances collaboration. All of these benefits provide competitive advantages that can positively affect your business.

Although your WLAN architecture may appear sound on paper, testing the actual system across the technology stack and from end-to-end is essential to ensure that your WLAN implementation provides the essential capabilities required to deliver the promised business benefits. Building a WLAN infrastructure from scratch or extending an existing implementation can present issues and risks that need to be addressed through a robust and effective WLAN test strategy.

Despite the existence of the IEEE 802.11 standards-based WLAN market, there is still no guarantee that a WLAN infrastructure constructed from multi-vendor, or even single vendor, hardware and software will provide a seamless and transparent platform for end-to-end business processes.
Some of the issues that need to be addressed are:

- Wireless technology continues to outpace the capacity of industry interoperability consortia to provide comprehensive certification programmes;

- Operational risks can be mitigated by implementing a homogeneous, single-vendor solution, but enhanced business benefits may only be realized from a heterogeneous, multi-vendor solution;

- There is no single approach to building and operating enterprise-scale WLANs, and new architectures continue to be developed;

- Physical implementation needs to consider the impact of RF interference on the operational mode and the performance of the WLAN;

- Latency caused by roaming and re-authentication, especially for real-time applications such as VoIP.

Types of Testing

Functional Testing

Functional testing should be performed at all levels of the technology stack, as failure at any level has the potential to disrupt the availability of applications to their users.

Protocol Level Testing

Protocol level testing generally involves comparing network traffic to a specification or standard. Often such specifications or standards include bit-level protocol descriptions. Wireless client adapters and wireless access points need to be tested at this level to ensure compliance with the protocols that the devices are designed to support.

In the wireless medium, protocol level testing involves the expert use of wireless protocol analyzer(s) that allow the tester to see what is happening at Layers 2-7 of the OSI model. Testing at this level is exacting work that requires the ability to understand and interpret the published specification or standard and compare it to the captured network traffic. The following is typical of the output from a protocol analyzer and shows the low level nature of this type of testing:

==== 802.11 packet (encrypted) ====
08 41 02 01 00 40 96 21 DC 83 00 40 96 28 8D DC FF FF FF FF FF FF A0 38 00 01 15 00 EB B1 C7 6A B1 96 B2 16 58 C4 04 5E 2D 6A F3 4B 92 EB FC FC ED 70 98 D0 64 6C 5E BB 1A DD D4 2A 26 2A 8B EF C2 41 67 75 9D FB FE 5D 4E CA A0 45 6D 7C 36 22 22 7D D0 BD 09 16 1D E6 41 D9 94 BE 9B 53 C5 CB

==== CK (basic CKIP key) ====
19 59 8D F5 EF 19 59 8D F5 EF 19 59 8D F5 EF 19

==== PK (permuted key) ====
00 01 15 E6 8B D6 03 23 0B 6A 60 B9 F4 EB 46 99

==== 802.11 packet decrypted ====
08 41 02 01 00 40 96 21 DC 83 00 40 96 28 8D DC FF FF FF FF FF FF A0 38 00 01 15 00 AA AA 03 00 40 96 00 02 2F F1 C0 A6 00 00 00 C0 08 06 00 01 08 00 06 04 00 01 00 40 96 28 8D DC A1 2C EE 03 00 00 00 00 00 00 A1 2C EE 14 21 BD D8 23 21 BD A8 AC 52 E1 01 00 00 00 28 AC 0F 82 46 86 F9 D9

==== Original MSDU ====
DA: FF FF FF FF FF FF
SA: 00 40 96 28 8D DC

Payload: 08 06 00 01 08 00 06 04 00 01 00 40 96 28 8D DC A1 2C EE 03 00 00 00 00 00
00 A1 2C EE 14 21 BD D8 23 21 BD A8 AC 52 E1 01 00 00 00 28 AC 0F 82
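For illustration, here is a small Python sketch that pulls the MAC header fields out of the decrypted frame shown above. It assumes the standard 802.11 data-frame layout with ToDS=1 and FromDS=0, in which Addr2 carries the source address and Addr3 the destination; a real protocol analyzer obviously does far more.

# Minimal sketch: extract the MAC header fields from the decrypted 802.11
# data frame shown above. Assumes a plain data frame with ToDS=1/FromDS=0,
# so Addr2 is the source address (SA) and Addr3 the destination (DA).

frame_hex = (
    "08 41 02 01 00 40 96 21 DC 83 00 40 96 28 8D DC "
    "FF FF FF FF FF FF A0 38"          # first 24 bytes (MAC header) only
)
frame = bytes.fromhex(frame_hex.replace(" ", ""))

def mac(b: bytes) -> str:
    return ":".join(f"{x:02X}" for x in b)

fc_flags = frame[1]
to_ds   = bool(fc_flags & 0x01)
from_ds = bool(fc_flags & 0x02)

addr1, addr2, addr3 = frame[4:10], frame[10:16], frame[16:22]

print("ToDS =", to_ds, "FromDS =", from_ds)
print("BSSID (Addr1):", mac(addr1))
print("SA    (Addr2):", mac(addr2))   # expected 00:40:96:28:8D:DC
print("DA    (Addr3):", mac(addr3))   # expected FF:FF:FF:FF:FF:FF (broadcast)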

Compatibility Testing

The 802.11 wireless world is governed by standards. However the different wireless components do not always interoperate well. Within a single WLAN infrastructure there may be many combinations of client adapters and wireless access points. Even if the model numbers of the components are the same, there may be different software versions deployed within the devices. Compatibility testing is required to prove that the chosen devices do actually work together as expected.

Security Testing

Wireless networks are becoming more popular in the corporate environment. As such, corporate network administrators rightfully insist on making the network as secure as possible. A secure wireless strategy includes encryption, authentication, and key management. Encryption ranges from static WEP to rotating keys generated by the access point. The wireless network can authenticate the wireless user or client using a variety of authentication protocols and backend systems. Key management refers to the mechanism employed to rotate the keys. Some of the most common systems and mechanisms that are deployed are:

Microsoft Internet Authentication Service (IAS)

Cisco Access Control Server (ACS)

Key Management:

Cisco Centralized Key Management (CCKM)
WPA
WPA2
802.1x Extensible Authentication Protocol (EAP) of all kinds
EAP-TLS (certificate-based authentication)
EAP-GTC (password or token-based authentication)
PEAP
EAP-FAST
LEAP

Although it may seem that these systems and mechanisms should work together and that each one is being used successfully and securely already, there are so many possible permutations that it is entirely possible that many WLAN implementations are effectively uniquely constructed and security testing is required to verify their end-to-end integrity.
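A tiny Python sketch of how quickly that permutation space grows, enumerating combinations drawn from the lists above. Which combinations are actually valid or supported in a given deployment is an assumption that real security testing has to confirm.

# Sketch of the security test matrix: enumerate combinations of backend,
# EAP type and key-management mechanism drawn from the lists above.
from itertools import product

backends = ["Microsoft IAS", "Cisco ACS"]
eap_types = ["EAP-TLS", "EAP-GTC", "PEAP", "EAP-FAST", "LEAP"]
key_mgmt = ["CCKM", "WPA", "WPA2"]

combinations = list(product(backends, eap_types, key_mgmt))
print(len(combinations), "permutations to consider, e.g.:")
for combo in combinations[:5]:
    print("  ", " / ".join(combo))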

Quality of Service Testing

One of the ways wireless networking has evolved is in the use of multimedia applications (voice, video, etc.) over the wireless medium. Such applications require guaranteed access to the network so that the audio/video stream is of acceptable quality. The mechanism employed to ensure the quality of multimedia communications over the network is called "Quality of Service" (QoS) and is implemented on a wireless network using the Wi-Fi Multimedia (WMM) functionality. WMM is based on a subset of the IEEE 802.11e WLAN QoS draft standard. The implementation of WMM is judged by generating known traffic types on the network and validating correct behavior in terms of priority values in the packets and traffic flow through the network.

End-to-End Testing

A comprehensive WLAN test strategy will include full end-to-end business process testing within the test WLAN environment allowing business risk mitigation before WLAN deployment occurs on site. Due to the many configurations that may need to be tested, this is essentially application regression testing. Regression testing is the form of testing most amenable to test automation. Consideration needs to be given to the feasibility of test automation and the potential cost and quality benefits that may be obtained through test automation.

Performance Testing

A common measure of wireless performance is throughput. Regardless of the 802.11 band (a/b/g), wireless client adapter vendors are concerned with throughput as a performance metric and point of comparison. In the wireless world, range is simulated by adding attenuation to the antenna on the wireless access point.

Wireless throughput is a function of multiple factors, most notably:

Distance between the client adapter and the access point (often simulated in the test environment by introducing attenuation to the wireless signal)

Noise in the environment

Relative orientation of the client and access point antennas

The curve of throughput versus distance (attenuation) varies from adapter to adapter. Even a single adapter’s throughput curve varies with the implemented antenna and its orientation.

Poor throughput will manifest itself to the end user as increasing response times in their applications. To determine the overall degradation in response times under normal operating conditions, load testing can be performed to simulate multiple concurrent users.
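A minimal sketch of such a simulation using only the Python standard library: it fires concurrent requests at a page and records response times. The URL and user counts are hypothetical, and a dedicated load-testing tool would normally be preferred for real projects.

# Minimal load-test sketch: simulate N concurrent users hitting a page and
# record the response times. The URL and counts below are hypothetical.
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

URL = "http://example.com/"     # hypothetical application under test
CONCURRENT_USERS = 10
REQUESTS_PER_USER = 5

def one_user(user_id: int) -> list[float]:
    times = []
    for _ in range(REQUESTS_PER_USER):
        start = time.time()
        with urllib.request.urlopen(URL, timeout=30) as resp:
            resp.read()
        times.append(time.time() - start)
    return times

with ThreadPoolExecutor(max_workers=CONCURRENT_USERS) as pool:
    all_times = [t for times in pool.map(one_user, range(CONCURRENT_USERS)) for t in times]

print(f"requests: {len(all_times)}, "
      f"avg: {sum(all_times)/len(all_times):.3f}s, max: {max(all_times):.3f}s")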

How to find a bug in application? Tips and Tricks

A very good and important question, right? If you are a software tester or a QA engineer, then you must be thinking every minute about how to find a bug in an application. And you should be!

Finding a blocker bug like a system crash might seem the most rewarding, but I don't think so. You should try to find the bugs that are most difficult to find and that always mislead users.


Finding such subtle bugs is the most challenging work, and it gives you the satisfaction of a job well done. It should also be rewarded by seniors. I will share my experience of one such subtle bug that was not only difficult to catch but also difficult to reproduce.

I was testing one module of my search engine project. I do most of the testing on this project manually, as it is a bit complex to automate. The module consists of traffic and revenue stats for different affiliates and advertisers, so testing such reports is always a difficult task. When I tested this report it showed the data accurately for some time, but when I tried again later it showed misleading results. It was strange and confusing to see.

There was a cron (a cron is an automated script that runs at a specified time or condition) to process the log files and update the database. Multiple such crons run on the log files and the DB to synchronize the data. Two crons were running on one table at different intervals, and a column in that table was being overwritten by the other cron, causing data inconsistency. It took us a long time to figure out the problem because of the many DB processes and different crons involved.

My point is: try to find the hidden bugs in the system that occur only under special conditions and have a strong impact on the system. You can find such bugs with some tips and tricks.

So what are those tips?

1) Understand the whole application or module in depth before starting the testing.

2) Prepare good test cases before you start testing. In particular, give stress to the functional test cases that cover the major risks of the application.

3) Create sufficient test data before the tests. This data set should include the test case conditions and, if you are going to test a DB-related application, the database records.

4) Perform repeated tests with different test environments.

5) Try to find out the result pattern and then compare your results with those patterns.

6) When you think you have covered most of the test conditions and are getting somewhat tired, do some monkey testing.

7) Use your previous test data pattern to analyse the current set of tests.

8) Try some standard test cases for which you found bugs in a different application. For example, if you are testing an input text box, try inserting some HTML tags as input and see the output on the display page (see the sketch after this list).

9) Last and the best trick: try very hard to find the bug, as if you are testing only to break the application!
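As a companion to tip 8, here is a small, illustrative set of "known troublemaker" inputs for a text field, written in Python. The values and the submit() hook are assumptions you would adapt to your own application.

# Sketch of a reusable list of "known troublemaker" inputs for text fields
# (tip 8 above). The values are illustrative; add whatever has found bugs
# for you in past projects.
TRICKY_INPUTS = [
    "",                                      # empty value
    " " * 10,                                # whitespace only
    "a" * 5000,                              # very long string
    "<b>bold</b><script>alert(1)</script>",  # HTML/script tags
    "'; DROP TABLE contacts; --",            # SQL-injection style text
    "012aaa@$%.-",                           # invalid phone-number characters
    "名前テスト",                             # non-ASCII characters
]

def check_field(submit):
    """Run every tricky input through a hypothetical submit() callable and report failures."""
    for value in TRICKY_INPUTS:
        try:
            submit(value)
        except Exception as exc:             # in a real test, assert on behaviour instead
            print(f"input {value!r} raised {exc!r}")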

Ten Software Testing Myths ...

I was reading Lidor Wyssocky's blog and came across a post on 10 Software Development Myths. I found it informative and thought: why not have Software Testing Myths as well? Hence the 10 Software Testing Myths below.

It is interesting to note that the last 5 myths go unchanged … development and testing share the honor. I even suspect that Lidor might be a software tester, or a developer with a strong tester-like mind.

10. The tester’s task is easy: he should merely write and execute test cases by translating requirements into test cases, and additionally log some bugs.

9. Every test case is documented. Otherwise, how on earth can we expect to do regression testing and in general repeat testing?

8. Test case Reviews are a one-time effort. All you have to do is take an artifact after it is completed, and verify that it is correct. Test case reviews, for example, should merely verify that *all* requirements are covered by test cases and EVERY REQUIREMENT is COVERED by AT LEAST ONE TEST CASE.

7. Software Testing should be like manufacturing. Each of us is a robot in an assembly line. Given a certain input, we should be able to come up automatically with the right output. Execute a set of test cases (should execute 100 test cases a day) and report pass/fail status.

6. Software Testing has nothing to do with creativity. Creativity – what? The only part which requires creativity is designing your assembly line of test case design. From that point on, everyone should just be obedient.

5. Creativity and discipline cannot live together. Creativity equals chaos. [This one remains unchanged from original list of software development myths]

4. The answer to every challenge we face in the software industry lies in defining a process. That process defines the assembly line without which we are doomed to work in a constant state of chaos. [BIG ONE …This one remains unchanged from original list of software development myths]

3. Processes have nothing to do with people. You are merely defining inputs and outputs for different parts of your machine.

2. If a process is not 100% repeatable, it is not a process. Letting people adapt the process and do “whatever they want” is just going back to chaos again.

1. Quality is all about serving the customer. Whatever the customer wants, he should get. Things that don’t concern your customer should not be of interest to you.

Thursday, October 2, 2008

Automation Framework - Keyword Driven

A keyword-based software test automation framework can reduce the cost and time of test design, automation and execution. It allows members of a testing team to focus on what they do best, and it also allows non-technical testers and business analysts to write automated tests.

Keyword-based test design and test automation is founded on the premise that the discrete functional business events that make up any application can be described using a short text description (keyword) and associated parameter value pairs (arguments). For example, most applications require users to log in; the keyword for this business event could be "Logon User" and the parameters could be "User Id" and "Password". By designing keywords to describe discrete functional business events, testers begin to build up a common library of keywords that can be used to create keyword test cases. This is really a process of creating a language (keywords) to describe a sequence of events within the application (test case).

When properly implemented and maintained, keywords present a superior return on investment because each business event is designed, automated and maintained as a discrete entity. These keywords can then be used to design keyword test cases, but the design and automation overhead for the keyword has already been paid.

When a change occurs within any given keyword, the affected test cases can easily be found and updated appropriately. And once again, any design or automation updates to the keyword are performed only once. Compare this to the Record and Playback approach, which captures a particular business event, or part of it, each time a test case traverses it. (If there are 100 test cases that start with logging on, then this event will have been automated 100 times and there will be 100 instances to maintain.)
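To make the keyword idea concrete, here is a minimal Python sketch of a keyword library and a keyword test case. The keyword names come from the examples in this article, while the function bodies and test data are placeholders standing in for real automation.

# Minimal keyword-driven sketch: a test case is a list of (keyword, arguments)
# rows, and a keyword library maps each keyword to the script that automates it.
# The function bodies are placeholders standing in for real UI automation.

def logon_user(user_id, password):
    print(f"logging on as {user_id}")

def enter_customer_name(name):
    print(f"entering customer name {name}")

def save_customer_record():
    print("saving customer record")

KEYWORD_LIBRARY = {
    "Logon User": logon_user,
    "Enter Customer Name": enter_customer_name,
    "Save Customer Record": save_customer_record,
}

# A keyword test case: the test designer writes only keywords and data.
test_case = [
    ("Logon User", {"user_id": "jsmith", "password": "secret"}),
    ("Enter Customer Name", {"name": "Acme Ltd"}),
    ("Save Customer Record", {}),
]

for keyword, args in test_case:
    KEYWORD_LIBRARY[keyword](**args)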

--------------------------------------------------------------------------------

Keyword development

Development of keywords should be approached in the same manner as any formal development effort. Keywords must be designed, coded, implemented and maintained.

Design

The test designer is responsible for keyword design. At a minimum the design of a keyword should include the keyword name, keyword description and keyword parameters.

Keyword name

A standard keyword naming convention should be drafted and followed to allow designers to efficiently share keywords. The keyword name should begin with the action being performed followed by the functional entity followed by descriptive text (if required). Here are several common examples:

Logon + User -- Logon User

Enter + Customer + Name -- Enter Customer Name

Enter + Customer + Address -- Enter Customer Address

Validate + Customer + Name -- Validate Customer Name

Select + Customer + Record -- Select Customer Record

The keyword name should be a shorthand description of what actions the keyword performs.

Keyword description

The keyword description should describe the behavior of the keyword and contain enough information for the test automation engineer to construct the keyword. For designers, the description is the keyword definition; for automation engineers, it's the functional specification. This should be a short but accurate description. Here is an example for the keyword "Logon User":

Logon User Description: On the Logon screen enter specified User ID and Password and then press the OK button.

Keyword parameters

The keyword parameters should capture all the business inputs that could impact the immediate business event being defined by the keyword. The simplest and most reliable method for getting the appropriate list of parameters is to take a "capture what is displayed" approach.

For the keyword "Logon User," the application displays three elements: "User ID", "Password" and OK button. The two parameters required to support this keyword are "User ID" and "Password." The OK button does not require a parameter because the keyword description states that the OK button will always be pressed. If there were multiple buttons, such as OK, CANCEL and EXIT, then a third parameter "Press Button" would be required and the keyword description would have to be modified.
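A small sketch of capturing that design information as a record the automation engineer can code against. The structure is an illustrative assumption, not a prescribed template.

# Sketch of the minimum design information handed to the automation engineer
# for the "Logon User" keyword, captured as a simple record.
from dataclasses import dataclass

@dataclass
class KeywordDesign:
    name: str
    description: str
    parameters: list[str]

logon_user = KeywordDesign(
    name="Logon User",
    description="On the Logon screen enter the specified User ID and "
                "Password and then press the OK button.",
    parameters=["User ID", "Password"],
)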

Code

The test automation engineer takes the keyword name, description, parameters, application under test, and keyword development standards and constructs the keyword. If there are any issues with the design aspects of the keyword, the automation engineer works with the test designer and the design is modified to clarify the intent of the keyword. If there are any automation/engineering challenges, then the automation engineer works with the development team and the tool vendor to find an appropriate automation solution that fits the automation framework.

Implement

Keyword implementation follows the same path of any shareable project resource. At a minimum, the completed keyword should be reviewed by the test designer, unit tested by the automation engineer, function tested, and migrated into the project "testware." This does not need to be a complex or extensive process, but it should ensure that any implemented keyword is published to the testing team and functions as expected.

Maintenance

Keyword maintenance occurs when a keyword defect is detected, when a business event changes, or when keyword standards are modified. Keyword maintenance follows the same deployment path as keyword development: design, code and implement.

Keyword test case

Keyword test cases are a sequence of keywords designed to test or exercise one or more aspects of the application or applications being tested. Keyword test cases must be designed, executed and maintained. Keyword test cases are the responsibility of the test designer/tester. The automation engineer becomes involved only if a defect occurs during keyword test case execution. It should be noted that the keyword design paradigm is often used in the absence of keyword automation. It is an effective standalone test design paradigm.

Design

Keyword test case design involves planning the intent of the test case, building the test case using keywords and testing the design against the application or applications being tested. At first glance this does not appear to be any different from any other method for test case design, but there are significant differences between keyword test case design and any freehand/textual approach to test case design.

Keyword test case designs are as follows:

Consistent -- The same keyword is used to describe the business event every time.

Data driven -- The keyword contains the data required to perform the test step.

Self documenting -- The keyword description contains the details of the designer's intent.

Maintainable -- With consistency comes maintainability, and finally the ability to support automation with no transformation from test design to automated script.

Test designers gain the power of test automation without having to become test automation engineers.

Execution

The tester can perform keyword test case execution manually by performing the keyword steps in sequence; this should be done as part of the keyword verification process. Test cases constructed using automated keywords can be executed using the test automation tool or an integrated test management tool. Test case execution should always be a mechanical exercise, whether automation is in use or not. The test case should contain all the information necessary to execute it and determine its success or failure.

--------------------------------------------------------------------------------

Maintenance

Test case maintenance must occur when changes occur in the application behavior or in the design of a keyword that is being used in one or more test cases. A properly implemented keyword framework will allow the tester to find all instances of a keyword via some query mechanism, reducing the often-painful process of finding the impacted test cases to one simple step. Furthermore, a well-implemented keyword framework should support global changes to keyword instances.

Keyword implementations

GUI (graphical user interface)

Keyword solutions for GUI-based applications are the easiest to understand and implement. Most shareware, freeware and commercial applications of keyword testing deal with this space.

API (application programming interface)

Keyword solutions for API-based applications appear more complex on the surface, but once these applications are broken down into their discrete functional business events, their behavior is much the same as an equivalent GUI application.

If the business event were "Logon User," it doesn't really matter what application mechanism is used to implement the event. The keyword would look and behave the same if the business drivers were the same. There are several keyword solution vendors that deal with the API space, and the same vendor often has a solution for GUI applications.

Telecom (Telecommunication Protocols)

Keyword solutions for the telecom space (example SS7) require an intimate understanding of telecommunication protocols. There are vendors that offer keyword solutions in this space.

Keywords and test phases

Unit test

Keywords can be applied to unit tests, but it is not recommended. The development group, using the tools and techniques available in the development suite, should do the unit tests.

Function (integration test)

Keyword test solutions focused on designing and implementing keywords as discrete functional business events offer one of the most cost-effective and maintainable test frameworks for function tests. In fact, if test automation of a GUI- or API-based application is required or desired, there are few frameworks that can match its short-term or long-term ROI.

System test

A keyword-based testing solution that leverages the keywords from function test to the system test phase will help expedite the testing process. An effective keyword framework will allow the test designer to combine function-level keywords into system-level keywords.

System-level keywords deal with complete business events rather than the discrete functional business events that make up a business thread. For example, a system-level keyword could be "Complete Customer Application." And that could be made up of this chain of function-level keywords: "Enter Customer Name," "Enter Customer Contact Information," "Enter Customer Personal Information" and "Save Customer Record."
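A minimal Python sketch of that composition, using the keyword names given above. Each function-level keyword would be backed by its own automation script; here they are represented only by name.

# Sketch of a system-level keyword composed from function-level keywords.
SYSTEM_KEYWORDS = {
    "Complete Customer Application": [
        "Enter Customer Name",
        "Enter Customer Contact Information",
        "Enter Customer Personal Information",
        "Save Customer Record",
    ],
}

def expand(keyword):
    """Expand a system-level keyword into its function-level steps."""
    return SYSTEM_KEYWORDS.get(keyword, [keyword])

for step in expand("Complete Customer Application"):
    print(step)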

User acceptance tests

Keywords can be applied to user acceptance tests, but this is not recommended unless it is an extensive phase of testing. User acceptance tests are best performed by the end-user community using the tools, techniques and processes available in production.