Friday, October 3, 2008

Bug Reporting - Art and Advocacy

This article looks at what it takes to find bugs and at the art a tester should cultivate while finding them. It walks through the elements of a good bug report, and then turns to bug advocacy: the tester's basic duty to fight for a bug until it is fixed.

Introduction

As testers, we all agree that the basic aim of a tester is to uncover bugs. Whenever a build arrives for testing, the primary objective is to find as many bugs as possible from every corner of the application. To do this well, we test from various perspectives: boundary value analysis, validation checks, verification checks, GUI checks, interoperability, integration tests, functional and business-rule checks, backend testing (for example running SQL against the database, or injection attempts), security tests, and many more. This forces us to drill deep into the application as well as the business.

We would also agree that finding a bug is of little use until it is well documented. This is where BUG REPORTS come in. Bug reports are our primary work product; they are what people outside the testing group notice. They play an important role throughout the Software Development Life Cycle, being referenced by testers, developers, managers, senior management and, not least, the clients who these days ask for test reports. So bug reports are what we are remembered by.

Once bugs are reported by testers and handed to developers to work on, we often see confrontations: testers are sometimes humiliated, cold wars set in, and discussions turn into mini quarrels. Often the tester and the developer are actually saying the same thing, or both are right, but they express their understanding differently, and that makes all the difference. In such situations it helps to remember that the best tester is not the one who finds the most bugs or embarrasses the most programmers, but the one who gets the most bugs fixed.

Bug Reporting – An Art:

The first aim of a bug report is to let the programmer see the failure. The report gives a detailed description so that the programmer can make the bug fail for them. If the report does not accomplish this, it comes back from the development team marked "not a bug", "cannot reproduce" or with many other reasons.

Hence it is important that the BUG REPORT be prepared with the utmost proficiency and specificity. It should cover the famous three What's:

What we did:

Module, Page/Window – names that we navigate to
Test data entered and selected
Buttons and the order of clicking
What we saw:

GUI Flaws

Missing or No Validations
Error messages
Incorrect Navigations
What we expected to see:

GUI Flaw: give screenshots with highlight
Incorrect message – give correct language, message
Validations – give correct validations
Error messages – justify with screenshots
Navigations – mention the actual pages
Pointers to effective reporting can be well derived from above three What's. These are:

1. The BUG DESCRIPTION should clearly identify the problem – a bug description is a short statement that says exactly what the problem is. The problem may need 5-6 steps to reproduce, but this one statement should still pin it down. The problem might be a server error, but the description should say precisely that: Server Error occurs while saving a new record in the Add Contact window.

2. The bug should be reported with a proper context – PRE-CONDITIONS for reproducing the bug should be defined so the reader can reach the exact point where the bug appears. For example, if a server error appears while editing a record in the contacts list, the pre-condition should state: create a new contact and save it successfully, double-click the created contact in the contacts list to open its details, make changes and hit the Save button.

3. STEPS should be clear, short and meaningful – nobody wants to study a long paragraph of complex sentences. Make the report step-wise by numbering the steps 1, 2, 3… and keep each sentence small and clear. Write only the findings or observations that matter for this particular bug; stating facts that are already known, or details that do not help reproduce the bug, only makes the report unnecessarily long and complex.

4. Cite examples wherever necessary – combinations of values, test data: very often a bug can be reproduced only with a specific set of data or values. So instead of an ambiguous statement like "enter an invalid phone number and hit save", state the exact data entered, such as "enter the phone number as 012aaa@$%.- and save".

5. Give references to specifications – if a bug contradicts the SRS or any functional document of the project, it is always helpful to mention the section and page number for reference. For example: refer to page 14 of the SRS, section 2-14.

6. Report without passing judgment in the bug description – a judgmental report leads to controversy and comes across as bossy. Remember, a tester should always be polite so that the bug stays open and meaningful. Being judgmental makes developers feel the tester is claiming to know more than they do, and that breeds a psychological adversity. To avoid this, phrase it as a suggestion and discuss it with the developers or the team lead. We can also point to another application, or another module or page in the same application, to strengthen the point.

7. Assign severity and priority – SEVERITY is the state or quality of being severe; it tells us HOW BAD the bug is and defines the importance of the bug from a FUNCTIONALITY point of view. Severity levels can be defined as follows:

Urgent/Show-stopper: for example a system crash, or an error message that forces the window to close. The system stops working totally or partially, a major area of the user's system is affected, and the incident is significant to business processes.

Medium/Workaround: the behavior required by the specs is broken, but the tester can continue testing. It affects a more isolated piece of functionality, occurs only at one or two customers, or is intermittent.

Low: failures that are unlikely to occur in normal use, do not impact use of the product in any substantive way, and have no or very low impact on business processes.

In every case, state the exact error messages.

PRIORITY means something deserves prior attention. It represents the importance of a bug from the customer's point of view: the precedence established by urgency, used when scheduling the fix. Priority levels can be defined as follows:

High: This has a major impact on the customer. This must be fixed immediately.

Medium: This has a major impact on the customer. The problem should be fixed before release of the current version in development or a patch must be issued if possible.

Low: This has a minor impact on the customer. The flaw should be fixed if there is time, but it can be deferred until the next release.

8. Provide screenshots – this is the best approach. Any visible error – an object reference error, a server error, a GUI issue, a message prompt – should be captured as a screenshot and attached to the bug as proof. It helps developers understand the issue precisely.
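To tie these pointers together, here is a minimal, purely illustrative bug report; the module, build and data are hypothetical and simply reuse the examples above:

Summary: Server Error occurs while saving a new record in the Add Contact window
Severity: Urgent/Show-stopper; Priority: High
Pre-condition: User is logged in and the Contacts module is accessible
Steps:
1. Navigate to Contacts > Add Contact
2. Enter the phone number as 012aaa@$%.- and fill the remaining mandatory fields
3. Click Save
Actual result: A Server Error page is displayed and the record is not saved (screenshot attached)
Expected result: A validation message for the invalid phone number is shown and the record is not saved
Reference: SRS section 2-14, page 14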

Bug Advocacy:

This is the interesting part: every now and then we have to fight for a bug, to prove that it really is a bug or that it really needs to be fixed because of its impact on the application. Along the way we often hear the following comments from programmers, or make them ourselves:

Scenario 1: "Works for me. So what goes wrong?" – Developers often say a bug is not reproducible and that everything works fine on their system. In such a case the tester needs to listen patiently and see exactly what the developer means, and to find where the difference of opinion or understanding lies. We have not always understood the system correctly; it is quite possible that we are wrong and the developer is right.

Scenario 2: "So then I tried…" – Under pressure to find and report bugs, a tester often forgets that tests need to be performed with the application in a stable state, where it shows consistent behavior. A tester enters special characters as a phone number – saving this alone could cause a crash or an overflow error – yet without checking that, he also enters a 150-character name and saves everything together, and that data can cause an error of its own. Sometimes the system throws an error and we keep working on top of it until the application crashes, and only then report a bug. In such cases each further action only muddies the problem.

Scenario 3: "That's funny, it did it a moment ago." – Some failures happen once a week, or once in a blue moon; they never happen when you try them in front of the programmer, but always happen when a deadline is coming up. That is why it is important to keep screenshots, test data, database traces, XML traces and so on.

Scenario 4: "And then it went wrong" – When the tester himself is not clear about the steps he performed or the data he entered, and reports the bug only approximately, the bug may well turn out to be irreproducible. That leads nowhere, or into a sea of problems: we can neither close the bug nor describe it properly, and worst of all the tester himself cannot reproduce it or fight for its cause.

Conclusion:

It is important to report everything we observe and do, what the impact is, and to measure that impact in terms of severity and priority. A tester is the catalyst of any team – he builds the team up on one hand and breaks the application down on the other. Note every issue, big or small, with a proper understanding of both the business and the application, back it up with a strong BUG REPORT as proof, and keep the status of bugs updated at every stage of the Software Development Life Cycle. Happy bugging!

This is a very good article by Priyanka Prakash which I came across on the web. I thought I would share it with you all.

Wireless Testing Approach - From functionality to Security

This is a good article that I came across on the AppLabs Technologies website.

The implementation of Wireless LANs (WLAN) has become the cornerstone of many organizations’ mobile computing initiatives. The pervasive WLAN is the primary technology platform for increasing the productivity of your mobile and distributed knowledge workers. An efficient and optimized WLAN implementation improves communication flows, enables rapid access to senior management and enhances collaboration. All of these benefits provide competitive advantages that can positively affect your business.

Although your WLAN architecture may appear sound on paper, testing the actual system across the technology stack and from end-to-end is essential to ensure that your WLAN implementation provides the essential capabilities required to deliver the promised business benefits. Building a WLAN infrastructure from scratch or extending an existing implementation can present issues and risks that need to be addressed through a robust and effective WLAN test strategy.

Despite the existence of the IEEE 802.11 standards-based WLAN market, there is still no guarantee that a WLAN infrastructure constructed from multi-vendor, or even single vendor, hardware and software will provide a seamless and transparent platform for end-to-end business processes.
Some of the issues that need to be addressed are:

Wireless technology continues to outpace the capacity of industry interoperability consortia to provide comprehensive certification programmes;

Operational risks can be mitigated by implementing a homogeneous, single vendor solution but enhanced business benefits may only be realized from a heterogeneous, multi-vendor solution;

There is no single approach to building and operating enterprise scale WLANs and new architectures continue to be developed;

Physical implementation needs to consider the impact of RF interference on the operational mode and the performance of the WLAN;

Latency caused by roaming and re-authentication, especially for real-time applications such as VoIP.

Types of Testing

Functional Testing

Functional testing should be performed at all levels of the technology stack, as failure at any level has the potential to disrupt the availability of applications to their users.

Protocol Level Testing

Protocol level testing generally involves comparing network traffic to a specification or standard. Often such specifications or standards include bit-level protocol descriptions. Wireless client adapters and wireless access points need to be tested at this level to ensure compliance with the protocols that the devices are designed to support.

In the wireless medium, protocol level testing involves the expert use of wireless protocol analyzer(s) that allow the tester to see what is happening at Layers 2-7 of the OSI model. Testing at this level is exacting work that requires the ability to understand and interpret the published specification or standard and compare it to the captured network traffic. The following is typical of the output from a protocol analyzer and shows the low level nature of this type of testing:

==== 802.11 packet (encrypted) ====
08 41 02 01 00 40 96 21 DC 83 00 40 96 28 8D DC FF FF FF FF FF FF A0 38 00 01 15 00 EB B1 C7 6A B1 96 B2 16 58 C4 04 5E 2D 6A F3 4B 92 EB FC FC ED 70 98 D0 64 6C 5E BB 1A DD D4 2A 26 2A 8B EF C2 41 67 75 9D FB FE 5D 4E CA A0 45 6D 7C 36 22 22 7D D0 BD 09 16 1D E6 41 D9 94 BE 9B 53 C5 CB

==== CK (basic CKIP key) ====
19 59 8D F5 EF 19 59 8D F5 EF 19 59 8D F5 EF 19

==== PK (permuted key) ====
00 01 15 E6 8B D6 03 23 0B 6A 60 B9 F4 EB 46 99

==== 802.11 packet decrypted ====
08 41 02 01 00 40 96 21 DC 83 00 40 96 28 8D DC FF FF FF FF FF FF A0 38 00 01 15 00 AA AA 03 00 40 96 00 02 2F F1 C0 A6 00 00 00 C0 08 06 00 01 08 00 06 04 00 01 00 40 96 28 8D DC A1 2C EE 03 00 00 00 00 00 00 A1 2C EE 14 21 BD D8 23 21 BD A8 AC 52 E1 01 00 00 00 28 AC 0F 82 46 86 F9 D9

==== Original MSDU ====
DA: FF FF FF FF FF FF
SA: 00 40 96 28 8D DC

Payload: 08 06 00 01 08 00 06 04 00 01 00 40 96 28 8D DC A1 2C EE 03 00 00 00 00 00
00 A1 2C EE 14 21 BD D8 23 21 BD A8 AC 52 E1 01 00 00 00 28 AC 0F 82

Compatibility Testing

The 802.11 wireless world is governed by standards. However the different wireless components do not always interoperate well. Within a single WLAN infrastructure there may be many combinations of client adapters and wireless access points. Even if the model numbers of the components are the same, there may be different software versions deployed within the devices. Compatibility testing is required to prove that the chosen devices do actually work together as expected.

Security Testing

Wireless networks are becoming more popular in the corporate environment. As such, corporate network administrators rightfully insist on making the network as secure as possible. A secure wireless strategy includes encryption, authentication, and key management. Encryption ranges from static WEP to rotating keys generated by the access point. The wireless network can authenticate the wireless user or client using a variety of authentication protocols and backend systems. Key management refers to the mechanism employed to rotate the keys. Some of the most common systems and mechanisms deployed are:

Microsoft Internet Authentication Service (IAS)

Cisco Access Control Server (ACS)

Key Management:

Cisco Centralized Key Management (CCKM)
WPA
WPA2
802.1x Extensible Authentication Protocol (EAP) of all kinds
EAP-TLS (certificate-based authentication)
EAP-GTC (password or token-based authentication)
PEAP
EAP-FAST
LEAP

Although it may seem that these systems and mechanisms should work together and that each one is being used successfully and securely already, there are so many possible permutations that it is entirely possible that many WLAN implementations are effectively uniquely constructed and security testing is required to verify their end-to-end integrity.

Quality of Service Testing

One of the ways wireless networking has evolved is the use of multimedia applications (voice, video, etc.) over the wireless medium. Such applications require guaranteed access to the network in order that the audio/video stream is of an acceptable quality. The mechanism employed to ensure the quality of multimedia communications over the network is called "Quality of Service" (QoS) and is implemented on a wireless network using the Wi-Fi Multimedia (WMM) functionality. WMM is based on a subset of the IEEE 802.11e WLAN QoS draft standard. The implementation of WMM is judged by generating known traffic types on the network and validating correct behavior in terms of priority values in the packets and traffic flow through the network.

End-to-End Testing

A comprehensive WLAN test strategy will include full end-to-end business process testing within the test WLAN environment allowing business risk mitigation before WLAN deployment occurs on site. Due to the many configurations that may need to be tested, this is essentially application regression testing. Regression testing is the form of testing most amenable to test automation. Consideration needs to be given to the feasibility of test automation and the potential cost and quality benefits that may be obtained through test automation.

Performance Testing

A common measure of wireless performance is throughput. Regardless of the 802.11 band (a/b/g), wireless client adapter vendors are concerned with throughput as a performance metric and point of comparison. In the wireless world, range is simulated by adding attenuation to the antenna on the wireless access point.

Wireless throughput is a function of multiple factors, most notably:

Distance between the client adapter and the access point (often simulated in the test environment by introducing attenuation to the wireless signal)

Noise in the environment

Relative orientation of the client and access point antennas

The curve of throughput versus distance (attenuation) varies from adapter to adapter. Even a single adapter’s throughput curve varies with the implemented antenna and its orientation.

Poor throughput manifests itself to the end user as increasing response times from their applications. To determine the overall degradation in response times under normal operating conditions, load testing can be performed to simulate multiple concurrent users.

How to find a bug in application? Tips and Tricks

A very good and important point, right? If you are a software tester or a QA engineer, then you must be thinking every minute about how to find a bug in an application. And you should be!

Is finding a blocker bug, like a system crash, the most rewarding find? No, I don't think so. You should try to find the bugs that are most difficult to find and that most mislead users.


Finding such subtle bugs is the most challenging work, and it gives you satisfaction in your work. It should also be rewarded by seniors. I will share my experience of one such subtle bug that was not only difficult to catch but also difficult to reproduce.

I was testing a module from my search engine project. I do most of the testing on this project manually, as it is a bit complex to automate. The module consists of traffic and revenue stats for different affiliates and advertisers, so testing such reports is always difficult. When I first tested the report it showed accurately processed data, but when I tested it again some time later it showed misleading results. It was strange and confusing.

There was a cron job (a cron is an automated script that runs at a specified time or on a specified condition) to process the log files and update the database. Multiple such crons run over the log files and the DB to synchronize all the data. Two crons were running against one table at different intervals, and one column in that table was being overwritten by the other cron, causing data inconsistency. It took us a long time to figure out the problem because of the many DB processes and different crons involved.

My point is: try to find the hidden bugs in the system that occur only under special conditions and have a strong impact on the system. You can find such bugs with some tips and tricks.

So what are those tips:

1) Understand the whole application or module in depth before starting the testing.

2) Prepare good test cases before you start testing. In particular, put stress on the functional test cases that cover the major risks of the application.

3) Create sufficient test data before the tests. This data set should include the test case conditions and also the database records if you are going to test a DB-related application.

4) Perform repeated tests in different test environments.

5) Try to find out the result pattern and then compare your results with those patterns.

6) When you think you have covered most of the test conditions and you are getting somewhat tired, do some monkey testing.

7) Use your previous test data pattern to analyse the current set of tests.

8) Try some standard test cases for which you have found bugs in other applications. For example, if you are testing an input text box, try inserting some HTML tags as input and check the output on the display page (see the sketch after this list).

9) The last and best trick: try very hard to find the bug, as if you are testing only to break the application!
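As a small, hedged sketch of tip 8, a reusable set of "standard" probe inputs can be kept in a Python script and replayed against any text field; the values and the submit_field callable are illustrative, not tied to any particular tool:

PROBE_INPUTS = [
    "<b>bold</b>",                   # simple HTML tag
    "<script>alert('x')</script>",   # script injection attempt
    "' OR '1'='1",                   # SQL-style input
    "A" * 500,                       # very long string
    "012aaa@$%.-",                   # invalid phone-style data
    "",                              # empty input
]

def run_probes(submit_field):
    # submit_field is any callable that sends a value to the field under test
    # and returns the text shown on the resulting page.
    for value in PROBE_INPUTS:
        shown = submit_field(value)
        if value and value in shown:
            print("Input echoed back unescaped:", repr(value))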

Ten Software Testing Myths ...

I was reading Lidor Wyssocky's blog and came across a post on 10 Software Development Myths. I found it informative and thought, why not have Software Testing Myths too – hence the 10 Software Testing Myths below.

It is interesting to note that the last 5 myths go unchanged… development and testing share the honor. I even suspect that Lidor might be a software tester, or a developer with a strong tester-like mind.

10. The tester’s task is easy: he should merely write and execute the test cases by translating requirements to test cases. Additionally log some bugs.

9. Every test case is documented. Otherwise, how on earth can we expect to do regression testing and in general repeat testing?

8. Test case Reviews are a one-time effort. All you have to do is take an artifact after it is completed, and verify that it is correct. Test case reviews, for example, should merely verify that *all* requirements are covered by test cases and EVERY REQUIREMENT is COVERED by AT LEAST ONE TEST CASE.

7. Software Testing should be like manufacturing. Each of us is a robot in an assembly line. Given a certain input, we should be able to come up automatically with the right output. Execute a set of test cases (should execute 100 test cases a day) and report pass/fail status.

6. Software Testing has nothing to do with creativity. Creativity – what? The only part which requires creativity is designing your assembly line of test case design. From that point on, everyone should just be obedient.

5. Creativity and discipline cannot live together. Creativity equals chaos. [This one remains unchanged from original list of software development myths]

4. The answer to every challenge we face in the software industry lies in defining a process. That process defines the assembly line without which we are doomed to work in a constant state of chaos. [BIG ONE …This one remains unchanged from original list of software development myths]

3. Processes have nothing to do with people. You are merely defining inputs and outputs for different parts of your machine.

2. If a process is not 100% repeatable, it is not a process. Letting people adapt the process and do “whatever they want” is just going back to chaos again.

1. Quality is all about serving the customer. Whatever the customer wants, he should get. Things that don’t concern your customer should not be of interest to you.

Thursday, October 2, 2008

Automation Framework - Keyword Driven

A keyword-based software test automation framework can reduce the cost and time of test design, automation and execution. It lets members of the testing team focus on what they do best, while also allowing non-technical testers and business analysts to write automated tests.

Keyword-based test design and test automation is founded on the premise that the discrete functional business events that make up any application can be described using a short text description (keyword) and associated parameter value pairs (arguments). For example, most applications require users to log in; the keyword for this business event could be "Logon User" and the parameters could be "User Id" and "Password". By designing keywords to describe discrete functional business events, testers begin to build up a common library of keywords that can be used to create keyword test cases. This is really a process of creating a language (keywords) to describe a sequence of events within the application (test case).
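As a minimal sketch of that idea, assuming a simple Python representation (the Logon parameters come from the example above; the customer steps and their values are purely illustrative):

# A keyword test case is just an ordered sequence of (keyword, arguments) pairs.
logon_and_select_customer = [
    ("Logon User",             {"User Id": "jsmith", "Password": "secret"}),
    ("Enter Customer Name",    {"First Name": "John", "Last Name": "Smith"}),
    ("Select Customer Record", {"Customer Name": "John Smith"}),
]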

When properly implemented and maintained, keywords present a superior return on investment because each business event is designed, automated and maintained as a discrete entity. These keywords can then be used to design keyword test cases, but the design and automation overhead for the keyword has already been paid.

When a change occurs within any given keyword, the affected test cases can easily be found and updated appropriately. And once again, any design or automation updates to the keyword are performed only once. Compare this to the Record and Playback approach, which captures a particular business event, or part of the business event, each time a test case traverses it. (If there are 100 test cases that start with logging on, then this event will have been automated 100 times and there will be 100 instances to maintain.)

--------------------------------------------------------------------------------

Keyword development

Development of keywords should be approached in the same manner as any formal development effort. Keywords must be designed, coded, implemented and maintained.

Design

The test designer is responsible for keyword design. At a minimum the design of a keyword should include the keyword name, keyword description and keyword parameters.

Keyword name

A standard keyword naming convention should be drafted and followed to allow designers to efficiently share keywords. The keyword name should begin with the action being performed followed by the functional entity followed by descriptive text (if required). Here are several common examples:

Logon User

Enter Customer Name

Enter Customer Address

Validate Customer Name

Select Customer Record

The keyword name should be a shorthand description of what actions the keyword performs.

Keyword description

The keyword description should describe the behavior of the keyword and contain enough information for the test automation engineer to construct the keyword. For designers, the description is the keyword definition; for automation engineers, it's the functional specification. This should be a short but accurate description. Here is an example for the keyword "Logon User":

Logon User Description: On the Logon screen enter specified User ID and Password and then press the OK button.

Keyword parameters

The keyword parameters should capture all the business inputs that could impact the immediate business event being defined by the keyword. The simplest and most reliable method for getting the appropriate list of parameters is to take a "capture what is displayed" approach.

For the keyword "Logon User," the application displays three elements: "User ID", "Password" and OK button. The two parameters required to support this keyword are "User ID" and "Password." The OK button does not require a parameter because the keyword description states that the OK button will always be pressed. If there were multiple buttons, such as OK, CANCEL and EXIT, then a third parameter "Press Button" would be required and the keyword description would have to be modified.
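To make the design concrete, here is a hedged sketch of what the automated "Logon User" keyword could look like in Python; the app object and its set_text and click methods are hypothetical stand-ins for whatever GUI automation tool the framework is built on:

def logon_user(app, user_id, password):
    # Keyword: Logon User.
    # On the Logon screen enter the specified User ID and Password,
    # then press the OK button, as the keyword description states.
    app.set_text("User ID", user_id)
    app.set_text("Password", password)
    app.click("OK")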

Code

The test automation engineer takes the keyword name, description, parameters, application under test, and keyword development standards and constructs the keyword. If there are any issues with the design aspects of the keyword, the automation engineer works with the test designer and the design is modified to clarify the intent of the keyword. If there are any automation/engineering challenges, then the automation engineer works with the development team and the tool vendor to find an appropriate automation solution that fits the automation framework.

Implement

Keyword implementation follows the same path of any shareable project resource. At a minimum, the completed keyword should be reviewed by the test designer, unit tested by the automation engineer, function tested, and migrated into the project "testware." This does not need to be a complex or extensive process, but it should ensure that any implemented keyword is published to the testing team and functions as expected.

Maintenance

Keyword maintenance occurs when a keyword defect is detected, when a business event changes, or when keyword standards are modified. Keyword maintenance follows the same deployment path as keyword development: design, code and implement.

Keyword test case

Keyword test cases are a sequence of keywords designed to test or exercise one or more aspects of the application or applications being tested. Keyword test cases must be designed, executed and maintained. Keyword test cases are the responsibility of the test designer/tester. The automation engineer becomes involved only if a defect occurs during keyword test case execution. It should be noted that the keyword design paradigm is often used in the absence of keyword automation. It is an effective standalone test design paradigm.

Design

Keyword test case design involves planning the intent of the test case, building the test case using keywords and testing the design against the application or applications being tested. At first glance this does not appear to be any different from any other method for test case design, but there are significant differences between keyword test case design and any freehand/textual approach to test case design.

Keyword test cases designed this way are:

Consistent -- the same keyword is used to describe the business event every time.

Data driven -- the keyword carries the data required to perform the test step.

Self documenting -- the keyword description contains the details of the designer's intent.

Maintainable -- with consistency comes maintainability, and finally the ability to support automation with no transformation from test design to automated script.

Test designers gain the power of test automation without having to become test automation engineers.

Execution

The tester can perform keyword test case execution manually by performing the keyword steps in sequence. This should be done as part of the keyword verification process. Test cases constructed from automated keywords can be executed using the test automation tool or by an integrated test management tool. Test case execution should always be a mechanical exercise, whether or not automation is in use. The test case should contain all the information necessary to execute it and determine its success or failure.
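As a rough sketch of how automated execution might work, assuming keywords are registered in a plain Python dictionary (the helper names here are illustrative, not those of any particular vendor tool):

def logon_user(app, user_id, password):
    # Shown again only so this example is self-contained; see the earlier sketch.
    app.set_text("User ID", user_id)
    app.set_text("Password", password)
    app.click("OK")

KEYWORDS = {"Logon User": logon_user}

def run_test_case(app, test_case):
    # test_case is a list of (keyword name, arguments) pairs.
    for name, args in test_case:
        step = KEYWORDS.get(name)
        if step is None:
            print("Unknown keyword:", name)
            return "FAIL"
        try:
            # "User Id" -> user_id, "Password" -> password, and so on.
            step(app, **{k.lower().replace(" ", "_"): v for k, v in args.items()})
        except Exception as exc:
            print("Step failed:", name, "-", exc)
            return "FAIL"
    return "PASS"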

--------------------------------------------------------------------------------

Maintenance

Test case maintenance must occur when changes occur in the application behavior or in the design of a keyword that is being used in one or more test cases. A properly implemented keyword framework will allow the tester to find all instances of a keyword via some query mechanism, reducing the often-painful process of finding the impacted test cases to one simple step. Furthermore, a well-implemented keyword framework should support global changes to keyword instances.

Keyword implementations

GUI (graphical user interface)

Keyword solutions for GUI-based applications are the easiest to understand and implement. Most shareware, freeware and commercial applications of keyword testing deal with this space.

API (application programming interface)

Keyword solutions for API-based applications appear more complex on the surface, but once these applications are broken down into their discrete functional business events, their behavior is much the same as an equivalent GUI application.

If the business event were "Logon User," it doesn't really matter what application mechanism is used to implement the event. The keyword would look and behave the same if the business drivers were the same. There are several keyword solution vendors that deal with the API space, and the same vendor often has a solution for GUI applications.

Telecom (Telecommunication Protocols)

Keyword solutions for the telecom space (example SS7) require an intimate understanding of telecommunication protocols. There are vendors that offer keyword solutions in this space.

Keywords and test phases

Unit test

Keywords can be applied to unit tests, but it is not recommended. The development group, using the tools and techniques available in the development suite, should do the unit tests.

Function (integration test)

Keyword test solutions focused on designing and implementing keywords as discrete functional business events offer one of the most cost-effective and maintainable test frameworks for function tests. In fact, if test automation of a GUI- or API-based application is required or desired, there are few frameworks that can match its short-term or long-term ROI.

System test

A keyword-based testing solution that leverages the keywords from function test to the system test phase will help expedite the testing process. An effective keyword framework will allow the test designer to combine function-level keywords into system-level keywords.

System-level keywords deal with complete business events rather than the discrete functional business events that make up a business thread. For example, a system-level keyword could be "Complete Customer Application." And that could be made up of this chain of function-level keywords: "Enter Customer Name," "Enter Customer Contact Information," "Enter Customer Personal Information" and "Save Customer Record."
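Under the same hedged sketch as before, a system-level keyword can simply be a chain of the function-level keywords named above (the argument values are illustrative):

# "Complete Customer Application" expressed as a chain of function-level keywords.
COMPLETE_CUSTOMER_APPLICATION = [
    ("Enter Customer Name",                 {"First Name": "John", "Last Name": "Smith"}),
    ("Enter Customer Contact Information",  {"Phone": "555-0100", "Email": "john@example.com"}),
    ("Enter Customer Personal Information", {"Date of Birth": "1980-01-01"}),
    ("Save Customer Record",                {}),
]
# The same run_test_case() driver sketched earlier can execute this chain.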

User acceptance tests

Keywords can be applied to user acceptance tests, but this is not recommended unless it is an extensive phase of testing. User acceptance tests are best performed by the end user community, using the tools, techniques and processes available in production.

An Insight on Software Security Testing

Security testers have one of the most exciting and creative jobs in the industry. They are tasked with finding elusive security bugs in complex software systems and convincing the rest of the team of their importance.

They must prioritize their time and efforts to make sure the best (or worst) security vulnerabilities are found and fixed. To do that, the security tester should have the best resources available: access to external classes, conferences, magazines and books. The security tester has to find as many critical security bugs as possible, with limited resources, before the major ship deadline; the attacker has to find only one – and has all the time in the world after ship to do so. Is it a level playing field? No. That is what makes the tester's job so exciting, critical and challenging.

Every great security tester has these three qualities: a great imagination, complete knowledge of the system they are testing and an evil streak so they can think like an attacker -- and beat him at his own game. By mastering those three pillars of expertise a tester will be well on his way to becoming an exceptional security tester.

Imagination -- Many times we don't have all the information we'd like to have as security testers. When exploiting an SQL injection vulnerability, for instance, the security tester has to make certain leaps of faith about the underlying system and make educated guesses about what is really going on to create an effective test.

Complete knowledge of the system -- A great security tester must know about each component of the system he is testing. For Web applications that often means in-depth knowledge of JavaScript, XML, server-side code (ASP, JSP, Ruby, PHP, etc.), databases, Web services and more. The tester must be able to recognize when things are out of place and when components may be used incorrectly. This complete knowledge comes with time and expertise, but it can be aided by intense research of each subject with a security focus in mind.

Evil streak -- The previous two pillars of expertise will take a security tester only so far in his quest for security testing nirvana; the pillar that is a game-changer is the ability to think like an attacker. Being able to anticipate the way an attacker will visualize the system is an integral part of testing the system. Similar to mapping out the many ways a burglar might be able to break into your house, the same thought process is needed for security testing so that you cover all the creative ways an attacker could exploit your application.

A great security tester has a great imagination. A great imagination extends beyond the ability to imagine a system as it could be. It also includes the ability to envision the truly interesting bugs and vulnerabilities in a system. Most security assessments are performed black box -- without source, documentation or access to internal systems.

When a security tester approaches a security assessment with little information he must make certain assumptions and inferences about the system he is testing. Sometimes those can be verified later through focused testing, but often they cannot.

SQL injection is an exceptional example of a vulnerability that requires a creative imagination to be discovered. For these vulnerabilities, a tester must be able to envision how certain features in the Web application would be executed on the database.
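As a hedged illustration of the kind of guess a tester makes, imagine a contact search feature builds its SQL by string concatenation; the table and column names below are hypothetical:

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE contacts (name TEXT)")
conn.execute("INSERT INTO contacts VALUES ('Alice')")

def find_contacts_unsafe(name):
    # Vulnerable: the input becomes part of the SQL text, so a value like
    # x' OR '1'='1  changes the meaning of the query and returns every row.
    return conn.execute("SELECT * FROM contacts WHERE name = '" + name + "'").fetchall()

def find_contacts_safe(name):
    # Parameterized: the input is bound as data, not as SQL.
    return conn.execute("SELECT * FROM contacts WHERE name = ?", (name,)).fetchall()

print(find_contacts_unsafe("x' OR '1'='1"))   # [('Alice',)] - the injection succeeded
print(find_contacts_safe("x' OR '1'='1"))     # [] - the input is treated as a literal name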

Great security testers have complete knowledge of a system. The most common Web application vulnerability by far is cross-site scripting (XSS). At Security Innovation our engineers maintain a knowledgebase of all security vulnerabilities we have found over the years of security testing. More than 85% of the vulnerabilities found in Web applications are due to XSS. Often they are so ubiquitous that after finding dozens of them we actually stop looking, and instead provide guidance to our customer's development team so they can fix them and we can focus our testing efforts on more mission-critical issues. For that reason XSS is a great example for this subject. Ideally the system is protected by defense in depth: any user input is first checked in the Web browser, then validated on the server using a whitelist regular expression, and finally, when that data is displayed back to the user, it is whitelist-encoded to make sure no errant characters slip by and get executed in the client's browser.
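A minimal sketch of the server-side half of that defense in depth, in Python (the whitelist pattern is only an example and would be tailored to the field):

import html
import re

NAME_WHITELIST = re.compile(r"^[A-Za-z0-9 .'-]{1,50}$")   # example whitelist for a name field

def accept_display_name(raw):
    # Validate against the whitelist first, then encode on output.
    if not NAME_WHITELIST.match(raw):
        raise ValueError("invalid display name")
    return html.escape(raw)   # quotes and angle brackets are encoded before display

print(accept_display_name("Mary O'Brien"))   # Mary O&#x27;Brien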

Finding your inner evildoer

The final and, in my opinion, the most important pillar of expertise for a great security tester is being able to understand how the system can fail and to think maliciously once you've got your foot in the door.

The moment a potential vulnerability is discovered it must be assessed for risk. The most common risk rating system is DREAD, which stands for Discoverability, Reproducibility, Exploitability, Affected users and Damage potential. A tester with a healthy understanding of the latest exploits and a bit of an evil streak may be able to persuade developers and managers to escalate the vulnerability to a higher risk rating and increase the likelihood of getting it fixed quickly.
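For example, a quick DREAD calculation for a hypothetical XSS finding might look like this (the 1-10 scale and the individual scores are purely illustrative; teams calibrate their own):

# DREAD: Discoverability, Reproducibility, Exploitability, Affected users, Damage potential.
dread = {
    "Discoverability": 8,    # easy to hit through normal use of the search box
    "Reproducibility": 10,   # fails every time with the same input
    "Exploitability": 6,     # needs a crafted link but no authentication
    "Affected users": 9,     # every visitor to the public site
    "Damage potential": 7,   # session theft, but no direct data destruction
}
score = sum(dread.values()) / len(dread)
print("DREAD risk score:", score)   # 8.0 on a 1-10 scale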

What is Network Vulnerability Testing?

Introduction

Network penetration testing – using tools and processes to scan the network environment for vulnerabilities – helps refine an enterprise's security policy, identify vulnerabilities, and ensure that the security implementation actually provides the protection that the enterprise requires and expects. Regularly performing penetration tests helps enterprises uncover network security weaknesses that can lead to data or equipment being compromised or destroyed by exploits (attacks on a network, usually by "exploiting" a vulnerability of the system), Trojans (viruses), denial of service attacks, and other intrusions.

Testing also exposes vulnerabilities that may be introduced by patches and updates or by misconfigurations on servers, routers, and firewalls.

Penetration Testing Overview

The overall objective of penetration testing is to discover areas of the enterprise network where an intruder can exploit security vulnerabilities. Different types of penetration testing are necessary for different types of network devices. For example, a penetration test of a firewall is different from a penetration test of a typical user's machine. Even a penetration test of devices in the DMZ (demilitarized zone) is different from performing a scan to see if network penetration is possible. The type of penetration test should be weighed against the value of the data on the machine being tested and the need for connectivity to a given service.

The penetration testing process has three primary components:

• Defining the scope

• Performing the penetration test

• Reporting and delivering results

Step 1: Defining the Scope

Before a penetration test can be launched, the enterprise must define the scope of the testing. This step includes determining the extent of testing, what will be tested, from where it will be tested, and by whom.

Full-Scale vs. Targeted Testing

An enterprise must decide whether to conduct a full-scale test of the entire network or to target specific devices, such as the firewall. It is usually best to do both in order to determine the level of exposure to the public infrastructure, as well as the security of individual targets. For example, firewall policies are often written to allow certain services to pass through them. The security for those services is placed on the device performing those services and not at the firewall. Therefore, it is necessary to test the security of those devices as well as the firewall. Some of the specific targets that should be considered for penetration testing are firewalls, routers, Web servers, mail servers, FTP servers, and DNS servers.

Devices, Systems, and Passwords

In defining the scope of the project, the enterprise must also decide on the range of testing. For example, is it looking only for vulnerabilities that could lead to a compromise of a device, or is it also looking for susceptibility to denial of service attacks? In addition, the enterprise must decide whether it will allow its password file to be hacked by the security team to test its users’ choice of passwords, and whether it will subject its devices to password grinding across the network.

Remote vs. Local Testing

Next, the enterprise must decide whether the testing will be performed from a remote location across the Internet or onsite via the local network. This decision is dictated to a large degree by the targets that are selected for testing and by the current security implementations. For example, a remote test of a machine hidden behind a firewall that performs network address translation for Internet access will fail if the firewall appropriately prevents access to the machine. However, testing the same firewall to see if it will protect users' computers from a remote scan will be successful.

In-House vs. Outsourced Testing

After the scope of the testing has been determined, the IT team must decide whether to use in-house resources to perform the testing or to hire outside consultants. In-house testing should be chosen only if an enterprise lacks the funds to hire outside consultants, or if the data is so sensitive that no one outside the company should view it. In all other cases, hiring outside consultants is recommended. Outside security consultants are highly trained and have worked with hundreds of different networks, bringing specific expertise and broad experience to the testing process. In addition, they help ensure an unbiased and complete testing procedure. Security consultants continuously research new vulnerabilities, invest in and understand the latest security testing hardware and software, recommend solutions for resolving problems, and provide additional personnel for the testing process. Enterprises can leverage the experience and resources of outside security consultants to help ensure thorough, properly executed penetration tests.

Step 2: Performing the Penetration Test

Proper methodology is essential to the success of the penetration test. It involves gathering information and then testing the target environment. The testing process begins with gathering as much information as possible about the network architecture, topology, hardware, and software in order to find all security vulnerabilities. Researching public information such as Whois records, SEC filings, business news articles, patents, and trademarks not only provides security engineers with background information, but also gives insight into what information hackers can use to find vulnerabilities. Tools such as ping, traceroute, and nslookup can be used to retrieve information from the target environment and help determine network topology, Internet provider, and architecture. Tools such as port scanners, NMAP, SNMPC, and NAT help determine hardware, operating systems, patch levels, and services running on each target device.
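For illustration only, the core idea behind a port scanner can be sketched in a few lines of Python; real engagements rely on dedicated tools such as NMAP, and the host below is a placeholder (only scan systems you are authorized to test):

import socket

def scan_ports(host, ports, timeout=0.5):
    # Return the subset of ports that accept a TCP connection on the host.
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:   # 0 means the connection succeeded
                open_ports.append(port)
    return open_ports

print(scan_ports("127.0.0.1", [21, 22, 25, 53, 80, 443]))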

Three Levels of Testing Services:

Common Vulnerability Assessment (CVA)

The CVA is a remote security assessment that focuses on the services that are most commonly misconfigured by personnel and are most commonly exploited by intruders. It also focuses on the most probable means of unauthorized access. A professional security engineer not only interprets the scanner output but also creates an executive summary and recommendations report.

Secure Device Assessment (SDA)

The SDA is an on-location device configuration assessment that includes architectural review of device deployment, operating system configuration, and device and policy configuration. This assessment is similar to an audit, except that it includes scanning services, when necessary.

Secure Exploit Assessment (SEA)

This penetration study encompasses all aspects of the CVA and also includes the following features: additional vulnerability research, DNS auditing, full enumeration including NetBios and Windows NT- and Unix-specific issues, penetration attempts with multi-stage attacks, and custom attack methodologies. Additional options include "brute force" password cracking and grinding, blind scanning (attacker perspective), "war dialing," and testing for denial of service attacks and social engineering (manipulating users to obtain confidential information such as passwords).

Load Testing

Introduction

Load testing generally refers to the practice of modeling the expected usage of a software program by simulating multiple users accessing the program's services concurrently. As such, this testing is most relevant for multi-user systems, often one built using a client/server model, such as web servers. However, other types of software systems can be load-tested also.

For example, a word processor or graphics editor can be forced to read an extremely large document; or a financial package can be forced to generate a report based on several years' worth of data. The most accurate load testing occurs with actual, rather than theoretical, results.

When the load placed on the system is raised beyond normal usage patterns, in order to test the system's response at unusually high or peak loads, it is known as stress testing. The load is usually so great that error conditions are the expected result, although no clear boundary exists when an activity ceases to be a load test and becomes a stress test.

There is little agreement on what the specific goals of load testing are. The term is often used synonymously with performance testing, reliability testing, and volume testing.

Testing

Load and performance testing checks software intended for a multi-user audience against its desired performance by subjecting it to an equivalent number of virtual users and then monitoring its behavior under the specified load, usually in a test environment identical to production, before going live.

For example, if a web site with a shopping cart is intended for 100 concurrent users performing the following functions:

* 25 VUsers are browsing through the items and logging off
* 25 VUsers are adding items to the shopping cart, checking out and logging off
* 25 VUsers are returning items previously purchased and logging off
* 25 VUsers are just logged in without any activity

Using one of the various tools available to generate these VUsers, the application is subjected to a 100-VUser load as shown above and its performance is monitored. The pass/fail criteria are different for each organization, and there is no across-the-board standard for what an acceptable criterion should be.

It is a common misconception that these are record-and-playback tools like regression testing tools; the similarity ends there. Load testing tools work at the protocol level, whereas regression testing tools work at the GUI object level. For example, a regression testing tool will simulate a mouse click on an OK button in the browser, but a load testing tool will send out the hypertext (HTTP request) that the browser would send after the user clicks the OK button – and it will send that request for multiple users, each with a unique login ID and password.
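As a rough sketch of that protocol-level idea (not a substitute for a real load testing tool), a handful of virtual users can be simulated with plain HTTP requests and threads in Python; the URL, user count and request count are placeholders:

import threading
import time
import urllib.request

URL = "http://localhost:8000/"   # placeholder for the page under test
response_times = []

def virtual_user(user_id, requests_per_user=5):
    for _ in range(requests_per_user):
        start = time.time()
        try:
            with urllib.request.urlopen(URL, timeout=10) as resp:
                resp.read()
            response_times.append(time.time() - start)
        except Exception as exc:
            print("VUser", user_id, "request failed:", exc)

threads = [threading.Thread(target=virtual_user, args=(i,)) for i in range(25)]
for t in threads:
    t.start()
for t in threads:
    t.join()

if response_times:
    print("requests:", len(response_times),
          "average response time:", sum(response_times) / len(response_times))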

Tools

Various tools are also available to find the causes of slow performance, which could lie in the following areas:

* Application
* Database
* Network
* Client side processing
* Load Balancer


Input

The following are useful inputs for load-testing a Web application:

* Performance-critical usage scenarios
* Workload models
* Performance acceptance criteria
* Performance metrics associated with the acceptance criteria
* Interview feedback from the designer or developer of the Web application
* Interview feedback from end users of the application
* Interview feedback from the operations personnel who will maintain and manage the application

Output

The main outcomes that load testing helps you to accomplish are:

Updated test plans and test designs for load and performance testing
Various performance measures such as throughput, response time, and resource utilization
Potential bottlenecks that need to be analyzed in the white-box testing phase
The behavior of the application at various load levels

Approach for Load Testing

The following steps are involved in load-testing a Web application:

Step 1 - Identify performance acceptance criteria
Step 2 - Identify key scenarios
Step 3 - Create a workload model
Step 4 - Identify the target load levels
Step 5 - Identify metrics
Step 6 - Design specific tests
Step 7 - Run tests
Step 8 - Analyze the results

Summary

Load testing helps to identify the maximum operating capacity of the application and any bottlenecks that might be degrading performance. The basic methodology for performing load testing on a Web application is to identify the performance-critical key scenarios; identify the workload profile for distributing all the load among the key scenarios; identify metrics that you want to collect in order to verify them against your performance objectives; create test cases that will be used to simulate the load test; use tools to simulate the load according to the test cases and capture the metrics; and finally, analyze the metrics data captured during the tests.