Thursday, November 15, 2007

Some Recent Major Computer System Failures Caused by Software Bugs

* Media reports in January of 2005 detailed severe problems with a $170 million high-profile U.S. government IT systems project. Software testing was one of the five major problem areas according to a report of the commission reviewing the project. Studies were under way to determine which, if any, portions of the project could be salvaged.

* In July 2004 newspapers reported that a new government welfare management system in Canada costing several hundred million dollars was unable to handle a simple benefits rate increase after being put into live operation. Reportedly the original contract allowed for only 6 weeks of acceptance testing and the system was never tested for its ability to handle a rate increase.

* Millions of bank accounts were impacted by errors due to installation of inadequately tested software code in the transaction processing system of a major North American bank, according to mid-2004 news reports. Articles about the incident stated that it took two weeks to fix all the resulting errors, that additional problems resulted when the incident drew a large number of e-mail phishing attacks against the bank's customers, and that the total cost of the incident could exceed $100 million.

* A bug in site management software utilized by companies with a significant percentage of worldwide web traffic was reported in May of 2004. The bug resulted in performance problems for many of the sites simultaneously and required disabling of the software until the bug was fixed.

* According to news reports in April of 2004, a software bug was determined to be a major contributor to the 2003 Northeast blackout, the worst power system failure in North American history. The failure involved loss of electrical power to 50 million customers, forced shutdown of 100 power plants, and economic losses estimated at $6 billion. The bug was reportedly in one utility company's vendor-supplied power monitoring and management system, which was unable to correctly handle and report on an unusual confluence of initially localized events. The error was found and corrected after examining millions of lines of code.

* In early 2004, news reports revealed the intentional use of a software bug as a counter-espionage tool. According to the report, in the early 1980s one nation surreptitiously allowed a hostile nation's espionage service to steal a version of sophisticated industrial software that had intentionally-added flaws. This eventually resulted in major industrial disruption in the country that used the stolen flawed software.

* A major U.S. retailer was reportedly hit with a large government fine in October of 2003 due to web site errors that enabled customers to view one another's online orders.

* News stories in the fall of 2003 stated that a manufacturing company recalled all their transportation products in order to fix a software problem causing instability in certain circumstances. The company found and reported the bug itself and initiated the recall procedure in which a software upgrade fixed the problems.

* In January of 2001 newspapers reported that a major European railroad was hit by the aftereffects of the Y2K bug. The company found that many of their newer trains would not run due to their inability to recognize the date '31/12/2000'; the trains were started by altering the control system's date settings.

* News reports in September of 2000 told of a software vendor settling a lawsuit with a large mortgage lender; the vendor had reportedly delivered an online mortgage processing system that did not meet specifications, was delivered late, and didn't work.

* In early 2000, major problems were reported with a new computer system in a large suburban U.S. public school district with 100,000+ students; problems included 10,000 erroneous report cards and students left stranded by failed class registration systems; the district's CIO was fired. The school district decided to reinstate its original 25-year-old system for at least a year until the bugs were worked out of the new system by the software vendors.

* In October of 1999 the $125 million NASA Mars Climate Orbiter spacecraft was believed to be lost in space due to a simple data conversion error. It was determined that spacecraft software used certain data in English units that should have been in metric units. Among other tasks, the orbiter was to serve as a communications relay for the Mars Polar Lander mission, which failed for unknown reasons in December 1999. Several investigating panels were convened to determine the process failures that allowed the error to go undetected.

* Bugs in software supporting a large commercial high-speed data network affected 70,000 business customers over a period of 8 days in August of 1999. Among those affected was the electronic trading system of the largest U.S. futures exchange, which was shut down for most of a week as a result of the outages.

* January 1998 news reports told of software problems at a major U.S. telecommunications company that resulted in no charges for long distance calls for a month for 400,000 customers. The problem went undetected until customers called up with questions about their bills.

Testing Procedure


What steps are needed to develop and run software tests?


The following are some of the steps to consider:

* Obtain requirements, functional design, and internal design specifications and other necessary documents.

* Obtain budget and schedule requirements.

* Determine project-related personnel and their responsibilities, reporting requirements, and required standards and processes (such as release processes, change processes, etc.)

* Identify application's higher-risk aspects, set priorities, and determine scope and limitations of tests.

* Determine test approaches and methods - unit, integration, functional, system, load, usability tests, etc.

* Determine test environment requirements (hardware, software, communications, etc.)

* Determine testware requirements (record/playback tools, coverage analyzers, test tracking, problem/bug tracking, etc.)

* Determine test input data requirements

* Identify tasks, those responsible for tasks, and labor requirements

* Set schedule estimates, timelines, milestones

* Determine input equivalence classes, boundary value analyses, error classes

* Prepare test plan document and have needed reviews/approvals

* Write test cases

* Have needed reviews/inspections/approvals of test cases

* Prepare test environment and testware, obtain needed user manuals/reference documents/configuration guides/installation guides, set up test tracking processes, set up logging and archiving processes, set up or obtain test input data

* Obtain and install software releases

* Perform tests

* Evaluate and report results

* Track problems/bugs and fixes

* Retest as needed

* Maintain and update test plans, test cases, test environment, and testware through life cycle

Bug Tracking

What's a 'test case'?

* A test case is a document that describes an input, action, or event and an expected response, to determine if a feature of an application is working correctly. A test case should contain particulars such as test case identifier, test case name, objective, test conditions/setup, input data requirements, steps, and expected results.

* Note that the process of developing test cases can help find problems in the requirements or design of an application, since it requires completely thinking through the operation of the application. For this reason, it's useful to prepare test cases early in the development cycle if possible.
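
For illustration, a single test case could be captured as a simple record like the hypothetical Python sketch below; the field names and values are examples only, not those of any particular tool or project.

# A minimal, hypothetical test case record; fields mirror the particulars listed above.
test_case = {
    "id": "TC-042",
    "name": "Login with valid credentials",
    "objective": "Verify that a registered user can log in",
    "setup": "User 'demo' exists; the application is showing the login screen",
    "input_data": {"username": "demo", "password": "secret"},
    "steps": ["Enter the username and password", "Click the Login button"],
    "expected_result": "The home page is displayed with the message 'Welcome, demo'",
    "actual_result": None,  # filled in when the test is executed
}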

What should be done after a bug is found?

* The bug needs to be communicated and assigned to developers who can fix it. After the problem is resolved, fixes should be re-tested, and determinations made regarding requirements for regression testing to check that fixes didn't create problems elsewhere. If a problem-tracking system is in place, it should encapsulate these processes. A variety of commercial problem-tracking/management software tools are available (see the 'Tools' section for web resources with listings of such tools). The following are items to consider in the tracking process (a small sketch of such a record follows the list):

* Complete information such that developers can understand the bug, get an idea of its severity, and reproduce it if necessary.

* Bug identifier (number, ID, etc.)

* Current bug status (e.g., 'Released for Retest', 'New', etc.)

* The application name or identifier and version

* The function, module, feature, object, screen, etc. where the bug occurred

* Environment specifics, system, platform, relevant hardware specifics

* Test case name/number/identifier

* One-line bug description

* Full bug description

* Description of steps needed to reproduce the bug if not covered by a test case or if the developer doesn't have easy access to the test case/test script/test tool

* Names and/or descriptions of file/data/messages/etc. used in test

* File excerpts/error messages/log file excerpts/screen shots/test tool logs that would be helpful in finding the cause of the problem

* Severity estimate (a 5-level range such as 1-5 or 'critical'-to-'low' is common)

* Was the bug reproducible?

* Tester name

* Test date

* Bug reporting date

* Name of developer/group/organization the problem is assigned to

* Description of problem cause

* Description of fix

* Code section/file/module/class/method that was fixed

* Date of fix

* Application version that contains the fix

* Tester responsible for retest

* Retest date

* Retest results

* Regression testing requirements

* Tester responsible for regression tests

* Regression testing results

* A reporting or tracking process should enable notification of appropriate personnel at various stages. For instance, testers need to know when retesting is needed, developers need to know when bugs are found and how to get the needed information, and reporting/summary capabilities are needed for managers.
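
As a rough sketch only (Python, not the schema of any real tracking tool), a bug record holding a subset of the items above might look like this:

from dataclasses import dataclass
from typing import Optional

# Hypothetical bug record covering a subset of the items listed above.
@dataclass
class BugReport:
    bug_id: str                      # bug identifier (number, ID, etc.)
    status: str                      # e.g. 'New', 'Released for Retest'
    application: str                 # application name/identifier and version
    location: str                    # function, module, feature, screen, etc.
    environment: str                 # system, platform, relevant hardware
    summary: str                     # one-line bug description
    description: str                 # full description and steps to reproduce
    severity: int                    # e.g. 1 (critical) to 5 (low)
    reproducible: bool
    tester: str
    assigned_to: Optional[str] = None
    fix_description: Optional[str] = None
    retest_result: Optional[str] = None

example = BugReport(
    bug_id="BUG-1031",
    status="New",
    application="OrderEntry 2.4",
    location="Checkout screen, 'Apply discount' button",
    environment="Windows XP SP2, Internet Explorer 6",
    summary="Discount applied twice when the button is double-clicked",
    description="1. Add an item to the cart. 2. Double-click 'Apply discount'. "
                "Expected: discount applied once. Actual: discount applied twice.",
    severity=2,
    reproducible=True,
    tester="A. Tester",
)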

Why does software have bugs?


* Miscommunication or no communication - as to specifics of what an application should or shouldn't do (the application's requirements).

* Software complexity - the complexity of current software applications can be difficult to comprehend for anyone without experience in modern-day software development. Windows-type interfaces, client-server and distributed applications, data communications, enormous relational databases, and sheer size of applications have all contributed to the exponential growth in software/system complexity. And the use of object-oriented techniques can complicate instead of simplify a project unless it is well engineered.

* Programming errors - programmers, like anyone else, can make mistakes.

* Changing requirements - the customer may not understand the effects of changes, or may understand and request them anyway - redesign, rescheduling of engineers, effects on other projects, work already completed that may have to be redone or thrown out, hardware requirements that may be affected, etc. If there are many minor changes or any major changes, known and unknown dependencies among parts of the project are likely to interact and cause problems, and the complexity of keeping track of changes may result in errors. Enthusiasm of engineering staff may be affected. In some fast-changing business environments, continuously modified requirements may be a fact of life. In this case, management must understand the resulting risks, and QA and test engineers must adapt and plan for continuous extensive testing to keep the inevitable bugs from running out of control.

* Time pressures - scheduling of software projects is difficult at best, often requiring a lot of guesswork. When deadlines loom and the crunch comes, mistakes will be made.

* Egos - people prefer to say things like:

* 'no problem'

* 'piece of cake'

* 'I can whip that out in a few hours'

* 'it should be easy to update that old code'

* instead of:

* 'that adds a lot of complexity and we could end up making a lot of mistakes'

* 'we have no idea if we can do that; we'll wing it'

* 'I can't estimate how long it will take, until I take a close look at it'

* 'we can't figure out what that old spaghetti code did in the first place'

* If there are too many unrealistic 'no problem's', the result is bugs.

* Poorly documented code - it's tough to maintain and modify code that is badly written or poorly documented; the result is bugs. In many organizations management provides no incentive for programmers to document their code or write clear, understandable code. In fact, it's usually the opposite: they get points mostly for quickly turning out code, and there's job security if nobody else can understand it ('if it was hard to write, it should be hard to read').

* Software development tools - visual tools, class libraries, compilers, scripting tools, etc. often introduce their own bugs or are poorly documented, resulting in added bugs.

Test Cases, Suites, Scripts and Scenarios

Black box testers usually write test cases for the majority of their testing activities. A test case is usually a single step, and its expected result, along with various additional pieces of information. It can occasionally be a series of steps but with one expected result or expected outcome. The optional fields are a test case ID, test step or order of execution number, related requirement(s), depth, test category, author, and check boxes for whether the test is automatable and has been automated. Larger test cases may also contain prerequisite states or steps, and descriptions. A test case should also contain a place for the actual result. These steps can be stored in a word processor document, spreadsheet, database or other common repository. In a database system, you may also be able to see past test results and who generated the results and the system configuration used to generate those results. These past results would usually be stored in a separate table.

The most common term for a collection of test cases is a test suite. The test suite often also contains more detailed instructions or goals for each collection of test cases. It definitely contains a section where the tester identifies the system configuration used during testing. A group of test cases may also contain prerequisite states or steps, and descriptions of the following tests.

Collections of test cases are sometimes incorrectly termed a test plan. They may also be called a test script, or even a test scenario.

Most white box testers write and use test scripts in unit, system, and regression testing. Test scripts should be written for modules with the highest risk of failure and the highest impact if the risk becomes an issue. Most companies that use automated testing refer to the code they run as their test scripts.

A scenario test is a test based on a hypothetical story used to help a person think through a complex problem or system. They can be as simple as a diagram for a testing environment or they could be a description written in prose. The ideal scenario test has five key characteristics. It is (a) a story that is (b) motivating, (c) credible, (d) complex, and (e) easy to evaluate. They are usually different from test cases in that test cases are single steps and scenarios cover a number of steps. Test suites and scenarios can be used in concert for complete system tests.

Scenario testing is similar to, but not the same as session-based testing, which is more closely related to exploratory testing, but the two concepts can be used in conjunction.

Manual Testing Basics

* In India itself, software industry growth has been phenomenal.
* The IT field has grown enormously in the past 50 years.
* The IT industry in India is expected to touch 10,000 crores, of which the software share is increasing dramatically.

Software Crisis
* Software costs/schedules are grossly inaccurate. Cost overruns of several times and schedule slippages of months, or even years, are common.
* Productivity of people has not kept pace with demand. Added to this is the shortage of skilled people.

Software Myths
Management Myths
* Software Management is different.
* Why change our approach to development?
* We have provided the state-of-the-art hardware.
* Problems are technical
* If project is late, add more engineers.
* We need better people.

Developers Myths
* We must start with firm requirements
* Why bother about software engineering techniques? I will just go to the terminal and code it.
* Once coding is complete, my job is done.
* How can you measure quality? It is so intangible.

Customer's Myths
* A general statement of objective is good enough to produce software.
* Anyway software is “Flexware”, it can accommodate my changing needs.

What do we do?
* Use Software Engineering techniques/processes.
* Institutionalize them and make them as part of your development culture.
* Adopt Quality Assurance frameworks: ISO, CMM.
* Choose the one that meets your requirements and adopt where necessary.

Software Quality Assurance:
* The purpose of Software Quality Assurance is to provide management with appropriate visibility into the process being used by the software project and of the products being built.
* Software Quality Assurance involves reviewing and auditing the software products and activities to verify that they comply with the applicable procedures and standards and providing the software project and other appropriate managers with the results of these reviews and audits.

Verification:
* Verification typically involves reviews and meetings to evaluate documents, plans, code, requirements, and specifications.
* The determination of consistency, correctness & completeness of a program at each stage.

Validation:
* Validation typically involves actual testing and takes place after verifications are completed
* The determination of correctness of a final program with respect to its requirements.
Software Life Cycle Models:
* Prototyping Model
* Waterfall Model – Sequential
* Spiral Model
* V Model - Sequential

What makes a good Software QA engineer?
* The same qualities a good tester has are useful for a QA engineer. Additionally, they must be able to understand the entire software development process and how it can fit into the business approach and goals of the organization. Communication skills and the ability to understand various sides of issues are important. In organizations in the early stages of implementing QA processes, patience and diplomacy are especially needed. An ability to find problems as well as to see 'what's missing' is important for inspections and reviews.

Testing:
* An examination of the behavior of a program by executing it on sample data sets.
* Testing comprises a set of activities to detect defects in a produced work product.

Why Testing?
* To unearth and correct defects.
* To detect defects early and to reduce cost of defect fixing.
* To ensure that product works as user expected it to.
* To avoid user detecting problems.

Test Life Cycle
* Identify Test Candidates
* Test Plan
* Design Test Cases
* Execute Tests
* Evaluate Results
* Document Test Results
* Causal Analysis / Preparation of Validation Reports
* Regression Testing / Follow up on reported bugs.

Testing Techniques
* Black Box Testing
* White Box Testing
* Regression Testing
* These principles & techniques can be applied to any type of testing.

Black Box Testing
* Testing of a function without knowing the internal structure of the program.

White Box Testing
* Testing of a function with knowledge of the internal structure of the program.

Regression Testing
* To ensure that code changes have not had an adverse effect on other modules or on existing functions.

Functional Testing
* Study SRS
* Identify Unit Functions
* For each unit function
* - Take each input function
* - Identify Equivalence class
* - Form Test cases
* - Form Test cases for boundary values
* - Form Test cases for Error Guessing
* Form a Unit Function vs. Test Cases Cross-Reference Matrix
* Find the coverage
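
The Python sketch below, with invented unit functions and test case IDs, illustrates the last two steps: a unit function vs. test cases cross-reference matrix and a simple coverage figure computed from it.

# Hypothetical cross-reference matrix: which test cases exercise which unit functions.
cross_reference = {
    "validate_login":     ["TC-01", "TC-02", "TC-03"],
    "calculate_discount": ["TC-04", "TC-05"],
    "print_invoice":      [],  # no test cases yet, so this is a coverage gap
}

covered = sum(1 for cases in cross_reference.values() if cases)
coverage = 100.0 * covered / len(cross_reference)
print(f"Unit functions covered: {covered}/{len(cross_reference)} ({coverage:.0f}%)")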

Unit Testing:
* The most 'micro' scale of testing, used to test particular functions or code modules. Typically done by the programmer and not by testers.
* Unit - smallest testable piece of software.
* A unit can be compiled/assembled/linked/loaded and put under a test harness.
* Unit testing is done to show that the unit does not satisfy its functional specification and/or that its implemented structure does not match the intended design structure.
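
A minimal sketch of a unit test, using Python's standard unittest harness and an invented function under test:

import unittest

def classify_triangle(a, b, c):
    """Invented unit under test: classify a triangle by its side lengths."""
    if a <= 0 or b <= 0 or c <= 0 or a + b <= c or a + c <= b or b + c <= a:
        return "not a triangle"
    if a == b == c:
        return "equilateral"
    if a == b or b == c or a == c:
        return "isosceles"
    return "scalene"

class ClassifyTriangleTest(unittest.TestCase):
    def test_equilateral(self):
        self.assertEqual(classify_triangle(3, 3, 3), "equilateral")

    def test_invalid_sides(self):
        self.assertEqual(classify_triangle(1, 2, 10), "not a triangle")

if __name__ == "__main__":
    unittest.main()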

Integration Testing:
* Integration is a systematic approach to build the complete software structure specified in the design from unit-tested modules. Integration is performed in two ways, called Pre-test and Pro-test.
* Pre-test: the testing performed in the module development area is called Pre-test. Pre-test is required only if development is done in a module development area.

Alpha testing:
* Testing of an application when development is nearing completion; minor design changes may still be made as a result of such testing. Typically done by end-users or others, not by programmers or testers.

Beta testing:
* Testing when development and testing are essentially completed and final bugs and problems need to be found before final release. Typically done by end-users or others, not by programmers.

System Testing:
* A system is the complete, integrated application - the largest component under test.
* System testing is aimed at revealing bugs that cannot be attributed to individual components as such, but rather to inconsistencies between components or to the planned interactions between components.
* Concern: issues, behaviors that can only be exposed by testing the entire integrated system (e.g., performance, security, recovery).

Volume Testing:
* The purpose of Volume Testing is to find weaknesses in the system with respect to its handling of large amounts of data during short time periods. For example, this kind of testing ensures that the system will process data across physical and logical boundaries such as across servers and across disk partitions on one server.

Stress testing:
* This refers to testing system functionality while the system is under unusually heavy or peak load; it's similar to the validation testing mentioned previously but is carried out in a "high-stress" environment. This requires that you make some predictions about expected load levels of your Web site.
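
As a very small illustration (Python, with a hypothetical URL standing in for the page under test), a stress test might fire many concurrent requests and report how many succeed and how long the slowest response takes:

import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

URL = "http://localhost:8080/search?q=test"  # hypothetical page under test

def one_request(_):
    start = time.time()
    try:
        with urllib.request.urlopen(URL, timeout=10) as resp:
            ok = (resp.status == 200)
    except Exception:
        ok = False
    return ok, time.time() - start

# Simulate 50 concurrent users issuing 500 requests in total.
with ThreadPoolExecutor(max_workers=50) as pool:
    results = list(pool.map(one_request, range(500)))

successes = sum(ok for ok, _ in results)
slowest = max(t for _, t in results)
print(f"{successes}/500 requests succeeded; slowest response took {slowest:.2f}s")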

Usability testing:
* Usability means that systems are easy and fast to learn, efficient to use, easy to remember, cause no operating errors and offer a high degree of satisfaction for the user. Usability means bringing the usage perspective into focus, the side towards the user.

Security testing:
* If your site requires firewalls, encryption, user authentication, financial transactions, or access to databases with sensitive data, you may need to test these and also test your site's overall protection against unauthorized internal or external access.

Test Plan:
* A Test Plan is a detailed project plan for testing, covering the scope of testing, the methodology to be used, the tasks to be performed, resources, schedules, risks, and dependencies. A Test Plan is developed prior to the implementation of a project to provide a well defined and understood project roadmap.

Test Specification:
* A Test Specification defines exactly what tests will be performed and what their scope and objectives will be. A Test Specification is produced as the first step in implementing a Test Plan, prior to the onset of manual testing and/or automated test suite development. It provides a repeatable, comprehensive definition of a testing campaign.

Fuzz Testing

Fuzz testing is a software testing technique. The basic idea is to attach the inputs of a program to a source of random data. If the program fails (for example, by crashing, or by failing in-built code assertions), then there are defects to correct.
The great advantage of fuzz testing is that the test design is extremely simple, and free of preconceptions about system behavior.
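
A minimal sketch of the idea in Python, fuzzing an invented parsing function with random byte strings; the random seed and the failing input are recorded so that any crash can be reproduced:

import random

def parse_config(data: bytes):
    """Invented function under test: naive 'key=value' per-line parser."""
    result = {}
    for line in data.decode("utf-8").splitlines():
        key, value = line.split("=", 1)
        result[key.strip()] = value.strip()
    return result

random.seed(1234)  # record the seed so failures can be reproduced
for i in range(1000):
    blob = bytes(random.randrange(256) for _ in range(random.randrange(1, 100)))
    try:
        parse_config(blob)
    except Exception as exc:
        # Preserve the failing input so the defect can be reproduced and fixed.
        print(f"iteration {i}: {type(exc).__name__} on input {blob!r}")
        break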

Uses

Fuzz testing is often used in large software development projects that perform black box testing. These usually have a budget to develop test tools, and fuzz testing is one of the techniques which offers a high benefit:cost ratio.

Fuzz testing is also used as a gross measurement of a large software system's quality. The advantage here is that the cost of generating the tests is relatively low. For example, third party testers have used fuzz testing to evaluate the relative merits of different operating systems and application programs.

Fuzz testing is thought to enhance software security and software safety because it often finds odd oversights and defects which human testers would fail to find, and even careful human test designers would fail to create tests for.

However, fuzz testing is not a substitute for exhaustive testing or formal methods: it can only provide a random sample of the system's behavior, and in many cases passing a fuzz test may only demonstrate that a piece of software handles exceptions without crashing, rather than behaving correctly. Thus, fuzz testing can only be regarded as a proxy for program correctness, rather than a direct measure, with fuzz test failures actually being more useful as a bug-finding tool than fuzz test passes as an assurance of quality.

Fuzz testing methods

As a practical matter, developers need to reproduce errors in order to fix them. For this reason, almost all fuzz testing makes a record of the data it manufactures, usually before applying it to the software, so that if the computer fails dramatically, the test data is preserved.
Modern software has several different types of inputs:

* Event driven inputs are usually from a graphical user interface, or possibly from a mechanism in an embedded system.

* Character driven inputs are from files, or data streams.

* Database inputs are from tabular data, such as relational databases.

There are at least two different forms of fuzz testing:

* Valid fuzz attempts to assure that the random input is reasonable, or conforms to actual production data.

* Simple fuzz usually uses a pseudo random number generator to provide input.

* A combined approach uses valid test data with some proportion of totally random input injected.

By using all of these techniques in combination, fuzz-generated randomness can test the un-designed behavior surrounding a wider range of designed system states.

Fuzz testing may use tools to simulate all of these domains.

Event-driven fuzz

Normally this is provided as a queue of data structures. The queue is filled with data structures that have random values.

The most common problem with an event-driven program is that it will often simply use the data in the queue, without even crude validation. To succeed in a fuzz-tested environment, software must validate all fields of every queue entry, decode every possible binary value, and then ignore impossible requests.

One of the more interesting issues with real-time event handling is that if error reporting is too verbose, simply providing error status can cause resource problems or a crash. Robust error detection systems will report only the most significant, or most recent error over a period of time.

Character-driven fuzz

Normally this is provided as a stream of random data. The classic source in UNIX is the random data generator.

One common problem with a character driven program is a buffer overrun, when the character data exceeds the available buffer space. This problem tends to recur in every instance in which a string or number is parsed from the data stream and placed in a limited-size area.

Another is that decode tables or logic may be incomplete, not handling every possible binary value.

Database fuzz

The standard database scheme is usually filled with fuzz that is random data of random sizes. Some IT shops use software tools to migrate and manipulate such databases. Often the same schema descriptions can be used to automatically generate fuzz databases.

Database fuzz is controversial, because input and comparison constraints reduce the invalid data in a database. However, often the database is more tolerant of odd data than its client software, and a general-purpose interface is available to users. Since major customer and enterprise management software is starting to be open-source, database-based security attacks are becoming more credible.

A common problem with fuzz databases is buffer overrun. A common data dictionary, with some form of automated enforcement, is quite helpful and entirely possible. To enforce this, normally all the database clients need to be recompiled and retested at the same time. Another common problem is that database clients may not understand the binary possibilities of the database field type, or legacy software might have been ported to a new database system with different possible binary values. A normal, inexpensive solution is to have each program validate database inputs in the same fashion as user inputs. The normal way to achieve this is to periodically "clean" production databases with automated verifiers.
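
As a toy illustration in Python using the built-in sqlite3 module (the table and columns are invented), the sketch below fills a table with random data of random sizes and then checks that the client-side read path validates what comes back, just as it would validate user input:

import random
import sqlite3
import string

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (name TEXT, age INTEGER)")

random.seed(42)  # keep the fuzz run reproducible

def random_text(max_len=200):
    return "".join(random.choice(string.printable) for _ in range(random.randrange(max_len)))

# Fuzz the table: random sizes, random contents, occasionally the "wrong" type.
for _ in range(500):
    conn.execute("INSERT INTO customers VALUES (?, ?)",
                 (random_text(), random.choice([random.randrange(-10**6, 10**6), random_text(5)])))

# The database happily stores odd values; the client must validate every field it reads.
bad_rows = 0
for _name, age in conn.execute("SELECT name, age FROM customers"):
    if not isinstance(age, int) or not (0 <= age <= 150):
        bad_rows += 1
print(f"{bad_rows} rows contained values the client considers invalid")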

Tuesday, November 6, 2007

What is the role of Documentation in QA?

Documentation plays a critical role in QA. QA practices should be documented so that they are repeatable. Specifications, designs, business rules, inspection reports, configurations, code changes, test plans, test cases, bug reports, and user manuals should all be documented. Ideally, there should be a system for easily finding and obtaining documents and determining which document will have a particular piece of information. Use documentation change management, if possible.

5 Common Solutions for the problems in Software Testing

Solid requirements, realistic schedules, adequate testing, firm requirements, and good communication.

Ensure the requirements are solid, clear, complete, detailed, cohesive, attainable and testable. All players should agree to requirements. Use prototypes to help nail down requirements.

Have schedules that are realistic. Allow adequate time for planning, design, testing, bug fixing, re-testing, changes and documentation. Personnel should be able to complete the project without burning out.

Do testing that is adequate. Start testing early on, re-test after fixes or changes, and plan for sufficient time for both testing and bug fixing.

Avoid new features. Stick to initial requirements as much as possible. Be prepared to defend the design against changes and additions once development has begun, and be prepared to explain the consequences.

If changes are necessary, ensure they're adequately reflected in related schedule changes. Use prototypes early on so customers' expectations are clarified and customers can see what to expect; this will minimize changes later on.

Communicate. Require walkthroughs and inspections when appropriate; make extensive use of e-mail, networked bug-tracking tools, tools of change management. Ensure documentation is available and up-to-date. Use documentation that is electronic, not paper. Promote teamwork and cooperation.

5 Common Problems in Software Testing

Poorly written requirements, unrealistic schedules, inadequate testing, adding new features after development is underway and poor communication.

Requirements are poorly written when they're unclear, incomplete, too general, or not testable; therefore there will be problems.

The schedule is unrealistic if too much work is crammed in too little time.

Software testing is inadequate if no one knows whether or not the software is any good until customers complain or the system crashes.

It's extremely common that new features are added after development is underway.

Miscommunication either means the developers don't know what is needed, or customers have unrealistic expectations and therefore problems are guaranteed.

Monday, November 5, 2007

Winrunner - Working with TestSuite

Working with TestSuite

WinRunner works with TestDirector, Mercury Interactive’s integrated management tool for organizing and managing the testing process. By combining test planning, test development, test execution, and defect tracking in a central repository, TestDirector helps you to consolidate and manage the testing process to determine application readiness.

Winrunner - Debugging Tools

Debugging Tools

If a test stops running because it encountered an error in syntax or logic, several tools can help you to identify and isolate the problem.
Step commands run a single line or a selected section of a test.
Breakpoints pause a test run at pre-determined points, enabling you to identify flaws in your test.
The Watch List monitors variables, expressions and array elements in your test. During a test run, you can view the values at each break in the test run such as after a Step command, at a breakpoint, or at the end of a test.

Winrunner - Function Generator

Function Generator

WinRunner includes the Function Generator, a visual tool that presents a quick and error-free way to program your tests.

You can add TSL statements to your tests using the Function Generator in two ways: by pointing to a GUI object, or by choosing a function from a list. Once you assign argument values to a function, you can execute it from the Function Generator or paste it into your test.

Winrunner - GUI Map Editor

GUI Map Editor

When you record a test, WinRunner creates a GUI map. A GUI map lists all the objects in your application that were learned by WinRunner. The objects in the GUI map are organized according to the window in which they appear. The GUI map lists the minimum set of properties that uniquely identify an object.

As your application changes during the development process, you do not need to modify many tests. Instead, you can open your GUI map in the GUI Map Editor to add, delete, and modify object definitions.

Winrunner - Running tests and Analyzing Test Results

Running Tests
When you run a test, WinRunner interprets your test, line by line. As the test runs, WinRunner operates your application as though a person were at the controls. WinRunner provides three run modes:
Verify mode, to check your application
Debug mode, to debug your test
Update mode, to update the expected results
WinRunner runs your tests and saves the test results in a repository for you to analyze.
If you are also working with TestDirector, you can schedule and run tests on multiple remote machines.

Analyzing Test Results


After you run your tests, you can analyze the results. WinRunner's interactive reporting tools help you interpret test results by providing detailed reports which list the events that occurred during the test run, including errors and checkpoints.
Test results are color coded and marked as passed or failed.

Winrunner - Data Driven Test

Data-Driven Tests
When you test your application, you may want to check how it performs the same operations on different sets of data. By replacing the fixed values in your test with values stored in a data table (an external file), you can generate multiple test scenarios using the same test.

For example, suppose your application analyzes your company’s allocation of funds. You may want to check how your application responds to several separate sets of budget data. Instead of creating and running several different tests with different sets of data, you can create a data-driven test. When you run your test, WinRunner reads a different set of data from the data table for each iteration of the test.

You can create a data table by inserting variable values in a table or by importing data from an external file.
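
The same idea can be sketched outside WinRunner; the Python fragment below reads each row of a hypothetical external data table (budgets.csv, with columns total, percent, and expected) and runs the same check once per row. The function and file are invented for illustration:

import csv

def allocate_budget(total, percent):
    """Invented function under test: amount allocated to one department."""
    return round(total * percent / 100.0, 2)

with open("budgets.csv", newline="") as table:
    for row_number, row in enumerate(csv.DictReader(table), start=1):
        actual = allocate_budget(float(row["total"]), float(row["percent"]))
        expected = float(row["expected"])
        status = "PASS" if actual == expected else "FAIL"
        print(f"row {row_number}: {status} (expected {expected}, got {actual})")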

Winrunner -Programming Techniques and Checkpoints

Programming Techniques
You can use programming to create an entire test, or to add logic to your recorded test. Adding elements such as conditional statements, loops, and arithmetic operators enables you to create a more powerful and complex test.

Checkpoints

Checkpoints enable you to compare the current behavior of your application to its expected behavior.
You can add four types of checkpoints to your tests:
GUI checkpoints check information about GUI objects. For example, you can check that a button is enabled or see which item is selected in a list.
Database checkpoints check the data content in a database.
Text checkpoints read text in GUI objects and in bitmaps, and enable you to check their contents.
Bitmap checkpoints compare a "snapshot" of a window or an area in your application to an image captured in an earlier version.
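
Outside WinRunner, the underlying idea of a checkpoint can be sketched as a simple comparison of captured values against expected values. Everything below is illustrative Python, not WinRunner's actual API:

# Illustrative only: a "GUI checkpoint" reduced to comparing captured properties
# of an object against the expected properties recorded earlier.
expected = {"enabled": True, "label": "OK", "focused": False}
captured = {"enabled": False, "label": "OK", "focused": False}  # read from the running application

mismatches = {key: (expected[key], captured.get(key)) for key in expected if captured.get(key) != expected[key]}
if mismatches:
    print("Checkpoint FAILED:", mismatches)
else:
    print("Checkpoint passed")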

Winrunner - Context Sensitive Mode

Context Sensitive Recording Mode

As you record, WinRunner identifies each GUI object you select (such as a window, a button, or an edit field), and the type of operation performed (such as type, click, or select).
For example, in a dialog box, if you type "AUTOTEST" in the Name edit field and click the OK button, WinRunner records the following:
edit_set ("Name:", "AUTOTEST");
button_press ("OK");
When you run the test, WinRunner looks in the dialog box for the edit field and the OK button represented in the test.

WinRunner -Creating Tests

Creating Tests

WinRunner provides two methods for creating a test:
You can record the operations you perform on GUI objects in your application, using the Context Sensitive Recording mode. WinRunner generates a test in a C-like Test Script Language (TSL).
You can use programming to create an entire test, or to add logic to your recorded test. In addition, you can add checkpoints to compare the current response of your application to its expected response. You can create data-driven tests to check how your application performs the same operations with different sets of data.

WinRunner -- Mercury's Functional Testing Tool

WinRunner automates testing to ensure that applications work as expected. It records your business processes and generates a test. Afterwards, you can run your test and analyze the results.
WinRunner enables you to adapt and reuse your tests, thereby protecting your investment in test creation.

Testing Process

Testing with WinRunner includes three main stages:

Creating Tests: You can create tests using both recording and programming. While recording tests, you insert checkpoints where you want to check the behavior of the application being tested.


Running Tests: When you run a test, WinRunner emulates a user by entering mouse and keyboard input into your application. Each time WinRunner encounters a checkpoint in the test, it compares the current response of your application to its expected response.


Analyzing Test Results: When a test run ends, you examine the test results. WinRunner lists all the major events that occurred during the run, such as checkpoints, errors, or messages.

Saturday, November 3, 2007

Capability Maturity Model (CMM)

CMM describes software process management maturity relative to five levels

i.e., Initial, Repeatable, Defined, Managed, Optimizing.

In the Initial level there is a lack of planning and the development of a clear-cut guide that software development teams can follow. Few details of a software process have been defined at this level. Good results are considered miraculous.

KPA ---- Key Process Areas

In the Second level, i.e., the CMM Repeatable Process, the process is characterized by a commitment to discipline in carrying out a software development project. It is achieved by: requirements management, software project planning, software project tracking and oversight, software subcontract management, software quality assurance, and software configuration management.

In the Third level, i.e., the CMM Defined Process, the aim is to guide the structuring and evaluation of a software project. It is achieved by: organisational process focus and definition, training program, software product engineering, intergroup coordination, and peer reviews.

In the Fourth level, i.e., the CMM Managed Process, the focus is on data gathering and analysis and on managing software quality. It is achieved by: quantitative process management and software quality management.

In the Fifth level, i.e., the CMM Optimizing Process, the focus is on defect prevention, automation of the software process wherever possible, and methods for improving software quality and team productivity and shortening development time.

SEI Maturity Model

First step in improving the existing situation is to get management buy-in and management action to clean up the software management processes.

Second step (Integration) is to get everyone working together as a team.

Third step (Measurements) is to establish objective ways of understanding status and predict where things are going in your process.

Continuous improvement: Understand that this is building a foundation for continually getting better

Configuration Management

Configuration management: helps teams control their day-to-day management of software development activities as software is created, modified, built and delivered. Comprehensive software configuration management includes version control, workspace management, build management, and process control to provide better project control and predictability

ISO 9001 standards

The ISO 9001 standard
ISO 9001 is the quality assurance standard that applies to software engineering. The standard contains 20 requirements that must be present for an effective quality assurance system. Because the ISO 9001 standard is applicable in all engineering disciplines, a special set of ISO guidelines has been developed to help interpret the standard for use in the software process.

The 20 requirements delineated by ISO 9001 address the following topics:
1. Management responsibility
2. Quality system
3. Contract review
4. Design control
5. Document and data control
6. Purchasing
7. Control of customer supplied product
8. Product identification and traceability
9. Process control
10. Inspection and testing
11. Control of inspection, measuring, and test equipment
12. Inspection and test status
13. Control of nonconforming product
14. Corrective and preventive action
15. Handling, storage, packing, preservation, and delivery
16. Control of quality records
17. Internal quality audits
18. Training
19. Servicing
20. Statistical techniques
In order for a software organization to become registered to ISO 9001, it must establish policies and procedures to address each of the requirements noted above and then be able to demonstrate that these policies and procedures are being followed.

Thursday, November 1, 2007

Shortcuts in QTP

Mercury QuickTest Professional 8.2 Shortcut Key Reference Card

File Menu

New > Test CTRL + N
New > Business Component CTRL + SHIFT + N
New > Scripted Component ALT + SHIFT + N
New > Application Area CTRL +Alt + N
Open > Test CTRL + O
Open > Business Component CTRL + SHIFT + O
Open > Application Area CTRL + ALT + O
Save CTRL + S
Export Test to Zip File CTRL + ALT + S
Import Test from Zip File CTRL + ALT + I
Print CTRL + P

Edit Menu

Cut CTRL + X (EV only)
Copy CTRL + C
Paste CTRL + V
Delete DEL
Undo CTRL + Z (EV only)
Redo CTRL + Y (EV only)
Rename Action F2
Find CTRL + F (EV only)
Replace CTRL + H (EV only)
Go To CTRL + G (EV only)
Bookmarks CTRL + B (EV only)
Complete Word CTRL + Space (EV only)
Argument Info CTRL + SHIFT + SPACE (EV only)
Apply “With” To Script CTRL + W (EV only)
Remove “With” Statements CTRL + SHIFT + W (EV only)

Insert Menu

Checkpoint > StandardCheckpoint F12
Output Value > Standard Output Value CTRL + F12
Step > Step Generator F7
New Step F8 OR INS (KV only)
New Step After Block SHIFT + F8 (KV only)
Key: KV = Keyword View
EV = Expert View


Test/Component/Application Area Menu

Record F3
Run F5
Stop F4
Analog Recording CTRL + SHIFT + F4
Low Level Recording CTRL + SHIFT + F3
Step Menu
Object Properties CTRL + ENTER
Value Configuration Options CTRL + F11 on an input value
(KV only)
Output Options CTRL + F11 on an output value
(KV only)

Debug Menu

Pause PAUSE
Step Into F11
Step Over F10
Step Out SHIFT + F11
Insert/Remove Breakpoint F9
Clear All Breakpoints CTRL + SHIFT + F9

Data Table Options

Edit > Cut CTRL + X
Edit > Copy CTRL + C
Edit > Paste CTRL + V
Edit > Clear > Contents CTRL + DEL
Edit > Insert CTRL + I
Edit > Delete CTRL + K
Edit > Fill Right CTRL + R
Edit > Fill Down CTRL + D
Edit > Find CTRL + F
Edit > Replace CTRL + H
Data > Recalc F9
Insert Multi-line Value CTRL + F2 while editing cell
Activate next/previous sheet CTRL + PAGEUP/CTRL + PAGEDOWN

General Options

View Keyword View/Expert View CTRL + TAB
Open context menu for step or Data Table cell SHIFT + F10
or Application key ( )
Expand all branches * [on numeric keypad] (KV only)
Expand branch + [on numeric keypad] (KV only)
Collapse branch - [on numeric keypad] (KV only)

Monday, October 29, 2007


QA, QC FAQs

1. What is the difference between Quality Assurance, Quality Control and Testing?

Quality Assurance: A set of activities designed to ensure that the development and/or maintenance process is adequate to ensure a system will meet its objectives.
Quality Control: A set of activities designed to evaluate a developed work product.
Testing: The process of executing a system with the intent of finding defects.

2. Does every project need a tester?

Which projects may not need independent test staff? The answer depends on the size and context of the project, the risks, the development methodology, the skill and experience of the developers, and other factors.

3. What is 'Software Quality Assurance'?

Software QA involves the entire software development PROCESS - monitoring and improving the process, making sure that any agreed-upon standards and procedures are followed, and ensuring that problems are found and dealt with. It is oriented to 'prevention'.

4. What is Verification and Validation?

Verification: "Are we building the product right?", i.e., does the product conform to the specifications.
Validation: "Are we building the right product?", i.e., does the product do what the user really requires

5. What are Test Deliverables?

Test Plan, Test Case, Defect-Fault, and Status Report are the sets of test deliverables in any testing phase.

6. What is a 'walkthrough'?

A 'walkthrough' is an informal meeting for evaluation or informational purposes. Little or no preparation is usually required.

7. What's an 'inspection'?

An inspection is more formalized than a 'walkthrough', typically with 3-8 people including a moderator, reader, and a recorder to take notes. The subject of the inspection is typically a document such as a requirements spec or a test plan, and the purpose is to find problems and see what's missing, not to fix anything.

8. What makes a good Software Test engineer?

A good test engineer has a 'test to break' attitude, an ability to take the point of view of the customer, a strong desire for quality, and an attention to detail. Tact and diplomacy are useful in maintaining a cooperative relationship with developers, and an ability to communicate with both technical (developers) and non-technical (customers, management) people is useful.

9.How do you perform integration testing?

To perform integration testing, first, all unit testing has to be completed. Upon completion of unit testing, integration testing begins. Integration testing is black box testing. The purpose of integration testing is to ensure that distinct components of the application still work in accordance with customer requirements.

10.What's a 'test case'?

A test case is a document that describes an input, action, or event and an expected response, to determine if a feature of an application is working correctly. A test case should contain particulars such as test case identifier, test case name, objective, test conditions/setup, input data requirements, steps, and expected results.

Risk Analysis

Risks associated with a project are analyzed and their mitigations are documented in this document. The types of risk that are associated are:
1. Schedule Risk: Factors that may affect the schedule of testing are discussed.
2. Technology Risk: Risks on the hardware and software of the application are discussed here
3. Resource Risk: Test team availability in case of slippage of the project schedule is discussed.
4. Support Risk: Clarifications required on the specification and the availability of personnel for the same are discussed.

Plan-Do-Check-Act (PDCA)


The software development process comprises the following four components:
Plan: Devise a plan. Define your objective and determine the strategy and support methods required to achieve that objective.
Do: Execute the plan. Create the conditions and perform the necessary training to execute the plan. Make sure everyone thoroughly understands the objectives and the plan.
Check: Check the results. Check to determine whether work is progressing according to the plan and whether the expected results are obtained. Check for performance of the set procedures, changes in conditions, or abnormalities that may appear.
Act: Take necessary action. If your checkup reveals that the work is not being performed according to plan or that results are not what was anticipated, devise measures for appropriate action. Testing involves only the Check component of the Plan-Do-Check-Act (PDCA) cycle.

SDLC Model

Every system or project has a "life cycle": a birth, life, and death. Here's a simple example of how it works.


Each stage is listed below in simple English, followed by its techno-babble name, who does what, and a real-life example.

* Birth - "Vision Statement": Management or Marketing or Customer says "Wouldn't it be great if ..." Real-life example: I'm the Chief Executive Officer of this company and I need a nice suit for shareholder meetings.

* Basic Needs - "Business Requirements": Business Analyst talks with stakeholders and documents the requirements. Real-life example: I want to hide my big gut and make me look presentable and sophisticated and successful. Jeans and a T-shirt just won't do.

* Specific Needs - "Functional Requirements": Business Analyst and/or Systems Analyst investigates business requirements and available technology, develops a detailed plan and specifications. Real-life example: Tailor proposes a 3-piece suit, wool, dark colour, with pin-stripes, expansion waist, etc.

* Detailed Design - "System Design Document": Systems Analyst produces a detailed description of all processes, transaction files, etc. Real-life example: Tailor shows CEO a drawing of the proposed suit and how sophisticated he will appear.

* Making It - "Development": Programmer uses the Functional Requirements document and System Design document to write code. Real-life example: Tailor cuts the cloth.

* Basic Testing - "Unit Testing": Programmer tests the code he has written, on his own machine. Real-life example: Tailor fits the bits of cloth together on a flat table, making sure everything will connect, and he hasn't reversed the sleeves.

* Testing Related Things - "System Testing": Programmers test related modules in a separate system test environment. Links or interfaces to other systems are "dummied out" or faked, as they are only interested in that one system at this time. Real-life example: Tailor works on only the suit jacket, hanging it on a clothes dummy or mannequin. He does not work on the vest or trousers, just concentrates on the jacket. When done with the jacket, he works on the vest, and later the trousers.

* Bringing the Parts Together - "Integration Testing": Programmers and other testers test related systems in a separate Integration Test environment, similar to the Production environment. They test the flow of data from one system to another, ensuring everything hangs together properly. Real-life example: Tailor puts trousers, vest, and jacket on a mannequin and ensures everything hangs together properly. Note that a mannequin is an approximation of the real thing.

At this point, the system is theoretically finished, and ready for production.
But the most important part has not yet been done: Acceptance.

* Is It What We Wanted? - "Acceptance Testing": Users test the software, with the original Business Requirements as a reference, ensuring the software does what the business wants, and poses minimal risk to the company. Real-life example: CEO tries on the suit, and if acceptable, pays for it. If not acceptable, changes are made.

* Use It - "Production": Software is in regular use for 17.5 years. Real-life example: CEO wears the suit at each shareholders meeting for 17.5 years.

* Death - "De-commissioned": Software is replaced by something better. Real-life example: CEO donates suit to Salvation Army.

Thursday, September 20, 2007

Some FAQs with Answers

What is 'Software Testing'?
  • Software Testing involves operation of a system or application under controlled conditions and evaluating the results; the controlled conditions should include both normal and abnormal conditions.

What is 'Software Quality Assurance'?

  • Software Quality Assurance involves the entire software development PROCESS - monitoring and improving the process, making sure that any agreed-upon standards and procedures are followed, and ensuring that problems are found and dealt with.

What is the 'Software Quality Gap'?

  • The difference in the software, between the state of the project as planned and the actual state that has been verified as operating correctly, is called the software quality gap.

What is Equivalence Partitioning?

  • In Equivalence Partitioning, a test case is designed so as to uncover a group or class of errors. This limits the number of test cases that might otherwise need to be developed. Here the input domain is divided into classes or groups of data. These classes are known as equivalence classes, and the process of making equivalence classes is called equivalence partitioning. Equivalence classes represent a set of valid or invalid states for input conditions.

What is Boundary Value Analysis?

  • It has been observed that programs that work correctly for a set of values in an equivalence class fail on some special values. These values often lie on the boundary of the equivalence class. Boundary values for each equivalence class, including the equivalence class of the output, should be covered. Boundary value test cases are also called extreme cases. Hence, a boundary value test case is a set of input data that lies on the edge or boundary of a class of input data, or that generates output that lies at the boundary of a class of output data.
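
For example, with a hypothetical requirement that a field accept ages from 18 to 60, the equivalence classes and boundary values could be enumerated like this in Python:

# Hypothetical requirement: valid ages are 18..60 inclusive.
LOW, HIGH = 18, 60

equivalence_classes = {
    "invalid_below": range(0, LOW),         # pick one representative, e.g. 10
    "valid":         range(LOW, HIGH + 1),  # pick one representative, e.g. 35
    "invalid_above": range(HIGH + 1, 130),  # pick one representative, e.g. 75
}

# Boundary value analysis: test on and immediately around each boundary.
boundary_values = [LOW - 1, LOW, LOW + 1, HIGH - 1, HIGH, HIGH + 1]

def is_valid_age(age):
    return LOW <= age <= HIGH

for age in boundary_values:
    print(age, "->", "accept" if is_valid_age(age) else "reject")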

Why does software have bugs?

  • Miscommunication or no communication - understand the application's requirements. Software complexity - the complexity of current software applications can be difficult to comprehend for anyone without experience in modern-day software development. Programming errors - programmers "can" make mistakes. Changing requirements - A redesign, rescheduling of engineers, effects on other projects, etc. If there are many minor changes or any major changes, known and unknown dependencies among parts of the project are likely to interact and cause problems, and the complexity of keeping track of changes may result in errors. Time pressures - scheduling of software projects is difficult at best, often requiring a lot of guesswork. When deadlines loom and the crunch comes, mistakes will be made. Poorly documented code - it's tough to maintain and modify code that is badly written or poorly documented; the result is bugs. Software development tools - various tools often introduce their own bugs or are poorly documented, resulting in added bugs.

What does "finding a bug" consist of?

  • Finding a bug consists of a number of steps that are performed: searching for and locating a bug; analyzing the exact circumstances under which the bug occurs; documenting the bug found; reporting the bug to you and, if necessary, helping you to reproduce the error; and testing the fixed code to verify that it really is fixed.

What will happen about bugs that are already known?

  • When a program is sent for testing (or a website given), then a list of any known bugs should accompany the program. If a bug is found, then the list will be checked to ensure that it is not a duplicate. Any bugs not found on the list will be assumed to be new.

What's the big deal about 'requirements'?

  • Requirements are the details describing an application's externally perceived functionality and properties. Requirements should be clear & documented, complete, reasonably detailed, cohesive, attainable, and testable. A non-testable requirement would be, for example, 'user-friendly' (too subjective). Without such documentation, there will be no clear-cut way to determine if a software application is performing correctly.

What can be done if requirements are changing continuously?

  • A common problem and a major headache. It's helpful if the application's initial design allows for some adaptability, so that later changes do not require redoing the application from scratch. If the code is well commented and well documented, changes are easier for the developers. Use rapid prototyping whenever possible to help customers feel sure of their requirements and minimize changes. Be sure that customers and management understand the scheduling impacts, inherent risks, and costs of significant requirements changes. Design some flexibility into test cases; this is not easily done, and the best bet might be to minimize the detail in the test cases or set up only higher-level, generic test plans.

How can it be known when to stop testing?

  • This can be difficult to determine. Many modern software applications are so complex, and run in such an interdependent environment, that complete testing can never be performed. Common factors in deciding when to stop are:
  1. Deadlines achieved (release deadlines, testing deadlines, etc.)
  2. Test cases completed with a certain percentage passed
  3. Test budget depleted
  4. Coverage of code/functionality/requirements reaches a specified point
  5. Defect rate falls below a certain level
  6. Beta or alpha testing period ends

What if there isn't enough time for thorough testing?

  • Use risk analysis to determine where testing should be focused. Figure out which functionality is most important to the project's intended purpose. Which functionality is most visible to the user? Which functionality has the largest safety impact? Which functionality has the largest financial impact on users? Which aspects of the application are most important to the customer? Which aspects of the application can be tested early in the development cycle? Which parts of the code are most complex, and thus most subject to errors? What do the developers think are the highest-risk aspects of the application? Which tests will have the best high-risk-coverage to time-required ratio?

What if the software has so many bugs that it can't really be tested at all?

  • Since this type of problem can severely affect schedules and indicates deeper problems in the software development process (such as insufficient unit testing, insufficient integration testing, poor design, improper build or release procedures, etc.), managers should be notified and provided with documentation as evidence of the problem.

How does a client/server environment affect testing?

  • Client/server applications can be quite complex due to the multiple dependencies among clients, data communications, hardware, and servers. Thus testing requirements can be extensive. When time is limited (as it usually is) the focus should be on integration and system testing. Additionally, load/stress/performance testing may be useful in determining client/server application limitations and capabilities.

Does it matter how much the software has been tested already?

  • No. It is up to the tester to assess how much further testing is needed, regardless of what has been done before. An initial assessment of the software is made, and it is classified into one of three possible stability levels:
  1. Low stability (bugs are expected to be easy to find, indicating that the program has not been tested or has only been very lightly tested)
  2. Normal stability (normal level of bugs, indicating a normal amount of programmer testing)
  3. High stability (bugs are expected to be difficult to find, indicating already well tested)

How is testing affected by object-oriented designs?

  • A well-engineered object-oriented design can make it easier to trace from code to internal design to functional design to requirements. While there will be little effect on black box testing (where an understanding of the internal design of the application is unnecessary), white-box testing can be oriented to the application's objects. If the application was well designed, this can simplify test design.

Will automated testing tools make testing easier?

  • A tool set that allows controlled access to all test assets promotes better communication among all the team members and will ultimately break down the walls that have traditionally existed between various groups. Automated testing tools are only one part of the solution to achieving customer success; the complete solution is based on providing the user with the principles, tools, and services needed to develop software efficiently.

Why outsource testing?

  • Skill and expertise - Developing and maintaining a team that has the expertise to thoroughly test complex and large applications is expensive and effort intensive; testing a software application now involves a variety of skills.
  • Focus - Using a dedicated and expert test team frees the development team to focus on sharpening their core skills in design and development, in their domain areas.
  • Independent assessment - An independent test team looks afresh at each test project while bringing the experience of earlier test assignments, for different clients, on multiple platforms and across different domain areas.
  • Save time - Testing can go on in parallel with the software development life cycle to minimize the time needed to develop the software.
  • Reduce cost - Outsourcing testing offers the flexibility of having a large test team only when needed. This reduces carrying costs and at the same time reduces the ramp-up time and costs associated with hiring and training temporary personnel.

What steps are needed to develop and run software tests?

The following are some of the steps needed to develop and run software tests:

  1. Obtain requirements, functional design, and internal design specifications and other necessary documents
  2. Obtain budget and schedule requirements
  3. Determine project-related personnel and their responsibilities, reporting requirements, required standards and processes (such as release processes, change processes, etc.)
  4. Identify application's higher-risk aspects, set priorities, and determine scope and limitations of tests
  5. Determine test approaches and methods - unit, integration, functional, system, load, usability tests, etc.
  6. Determine test environment requirements (hardware, software, communications, etc.)
  7. Determine test-ware requirements (record/playback tools, coverage analyzers, test tracking, problem/bug tracking, etc.)
  8. Determine test input data requirements; identify tasks, those responsible for tasks, and labor requirements
  9. Set schedule estimates, timelines, milestones
  10. Determine input equivalence classes, boundary value analyses, and error classes; prepare the test plan document and have needed reviews/approvals
  11. Write test cases
  12. Have needed reviews/inspections/approvals of test cases
  13. Prepare test environment and test ware, obtain needed user manuals/reference documents/configuration guides/installation guides, set up test tracking processes, set up logging and archiving processes, set up or obtain test input data
  14. Obtain and install software releases
  15. Perform tests
  16. Evaluate and report results
  17. Track problems/bugs and fixes
  18. Retest as needed; maintain and update test plans, test cases, test environment, and test ware through the life cycle

Tuesday, September 18, 2007

Important Terminologies in Software Testing


Traceability Matrix

The traceability matrix, or requirement traceability matrix (RTM), is used to keep track of the requirements. It is a mapping between requirements and test cases, prepared in order to identify missing test cases. It is prepared either by the test lead or by a test engineer together with the test lead. The exact requirements from the requirement document given by the client are copied into the matrix. Each requirement is assigned a unique number and a remark as to whether it is testable or not. Against each testable requirement, a test objective and test case are identified; it is quite possible that one requirement has multiple test objectives and test cases. Each test objective and test case is also given a unique number, and the numbering usually flows Requirement ID >> Test Objective ID >> Test Case ID.
Advantages:
a. Missing test cases can be traced.
b. Whenever requirements change, the matrix makes it easy to find the affected use case, go to the corresponding test cases, and change them.
c. Any functionality is easy to test: the matrix document leads directly to the related test cases.
d. The impact of functionalities on one another can be traced, because different functionalities can share the same test cases.
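As an illustration only (the requirement and test case IDs below are invented), a traceability matrix can be kept as simple structured data and queried for coverage gaps:

# A minimal requirement traceability matrix (RTM) sketch in Python.
rtm = [
    {"req_id": "REQ-001", "testable": True,  "test_cases": ["TC-001", "TC-002"]},
    {"req_id": "REQ-002", "testable": True,  "test_cases": []},   # missing coverage
    {"req_id": "REQ-003", "testable": False, "test_cases": []},   # marked not testable
]

# Trace missing test cases: every testable requirement needs at least one.
missing = [row["req_id"] for row in rtm if row["testable"] and not row["test_cases"]]
print("requirements with no test cases:", missing)   # -> ['REQ-002']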

Bug Density

1. Bug density: the number of bugs found per 1000 lines of code. Each organization sets this base (1000, 100, or any other number) according to its needs and the scale of the project.
2. What is defect density? Defect density is a metric equal to the ratio of the number of defects to the size of the code:
Defect Density = Defects / unit size
DD = Total Defects / KLOC (kilo lines of code)
Example: if 10 bugs are found in 1 KLOC, the defect density is 10 defects/KLOC.
3. What is a defect matrix? The time at which defects were discovered relative to when they were inserted into the software.
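A worked calculation of the DD = total defects / KLOC formula above, as a small Python sketch (the 45-defect example is invented):

def defect_density(total_defects, lines_of_code):
    """Defects per thousand lines of code (KLOC)."""
    return total_defects / (lines_of_code / 1000)

print(defect_density(10, 1000))    # 10 bugs in 1 KLOC  -> 10.0 defects/KLOC
print(defect_density(45, 15000))   # 45 bugs in 15 KLOC -> 3.0 defects/KLOC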

System Testing!!

What is system testing? Testing conducted on a complete, integrated system to evaluate the system's compliance with its specified requirements [IEEE 90]. System testing is done on the entire system against the Functional Requirement Specification (FRS) and/or the System Requirement Specification (SRS).
Types of system testing:
a. User interface testing
b. Usability testing
c. Performance testing
d. Compatibility testing
e. Error handling testing
f. Load testing
g. Volume testing
h. Stress testing
i. User help testing
j. Security testing
k. Scalability testing
l. Capacity testing
m. Sanity testing
n. Smoke testing
o. Exploratory testing
p. Ad hoc testing
q. Regression testing
r. Reliability testing
s. Recovery testing
t. Installation testing
u. Idempotency testing
v. Maintenance testing

Severity and Priority

Severity: Severity describes the defect's effect on the application. Severity is assigned by testers.
Priority: Priority describes the urgency of repairing the defect. Priority is assigned by the test lead or project manager.
1. High Severity & Low Priority : For example an application which generates some banking related reports weekly, monthly, quarterly & yearly by doing some calculations. If there is a fault while calculating yearly report. This is a high severity fault but low priority because this fault can be fixed in the next release as a change request.
2. High Severity & High Priority : In the above example if there is a fault while calculating weekly report. This is a high severity and high priority fault because this fault will block the functionality of the application immediately within a week. It should be fixed urgently.
3. Low Severity & High Priority : If there is a spelling mistake or content issue on the homepage of a website which has daily hits of lakhs. In this case, though this fault is not affecting the website or other functionalities but considering the status and popularity of the website in the competitive market it is a high priority fault.
4. Low Severity & Low Priority : If there is a spelling mistake on the pages which has very less hits throughout the month on any website. This fault can be considered as low severity and low priority.Priority is used to organize the work. The field only takes meaning when owner of the bugP1 Fix in next buildP2 Fix as soon as possibleP3 Fix before next releaseP4 Fix it time allowP5 Unlikely to be fixedDefault priority for new defects is set at P3

Bug, Error, Defect and Issue

a. Bug:A software bug is an error, flaw, mistake, failure, or fault in a program that prevents it from behaving as intended (e.g., producing an incorrect result). Most bugs arise from mistakes and errors made by people in either a program's source code or its design, and a few are caused by compilers producing incorrect code.
b. Error: A mistake made by the developer in coding.
c. Defect: Something that is specified in the requirement document but is either not implemented or implemented in the wrong way.
d. Issue: Something that does not fall into any of the categories above, for example the site being slow, session-related problems, or security problems.
Waterfall Model

This is the most common and classic of life cycle models, also referred to as a linear-sequential life cycle model. It is very simple to understand and use. In a waterfall model, each phase must be completed in its entirety before the next phase can begin. At the end of each phase, a review takes place to determine if the project is on the right path and whether or not to continue or discard the project.
a. Requirements
b. Design
c. Implementation & Unit Testing
d. Integration & System Testing
e. Operation
Advantages
a. Simple and easy to use.
b. Easy to manage due to the rigidity of the model – each phase has specific deliverables and a review process.
c. Phases are processed and completed one at a time.
d. Works well for smaller projects where requirements are very well understood.
Disadvantages
a. Adjusting scope during the life cycle can kill a project
b. Poor model for complex and object-oriented projects.
c. Poor model for long and ongoing projects.
d. Poor model where requirements are at a moderate to high risk of changing.
Spiral Model

The spiral model places more emphasis on risk analysis. It has four phases: Planning, Risk Analysis, Engineering, and Evaluation. A software project repeatedly passes through these phases in iterations (called spirals in this model). In the baseline spiral, starting in the planning phase, requirements are gathered and risk is assessed; each subsequent spiral builds on the baseline spiral.
Requirements are gathered during the planning phase. In the risk analysis phase, a process is undertaken to identify risk and alternate solutions. A prototype is produced at the end of the risk analysis phase.
Software is produced in the engineering phase, along with testing at the end of the phase. The evaluation phase allows the customer to evaluate the output of the project to date before the project continues to the next spiral.
In the spiral model, the angular component represents progress, and the radius of the spiral represents cost.
Advantages
a. High amount of risk analysis.
b. Good for large and mission-critical projects.
c. Software is produced early in the software life cycle.
Disadvantages
a. Can be a costly model to use.
b. Risk analysis requires highly specific expertise.
c. Project’s success is highly dependent on the risk analysis phase.
d. Doesn’t work well for smaller projects.

Software Testing

Black Box Testing
Black Box testing refers to the technique of testing a system with no knowledge of the internals of the system. Black Box testers do not have access to the source code and are oblivious of the system architecture. A Black Box tester typically interacts with a system through a user interface by providing inputs and examining outputs without knowing where and how the inputs were operated upon. In Black Box testing, target software is exercised over a range of inputs and the outputs are observed for correctness.
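A minimal sketch of the black box approach in Python: the tester drives inputs and checks outputs against the specification only. The discount() function and its 10%-off-orders-of-100-or-more rule are invented stand-ins for a system under test.

def discount(order_total):
    # Stand-in implementation so the sketch runs; a black box tester never
    # reads this body, only the spec: 10% off orders of 100 or more.
    return round(order_total * 0.9, 2) if order_total >= 100 else order_total

# Input/expected-output pairs derived purely from the specification.
spec_cases = [
    (50.0, 50.0),     # below the threshold: no discount expected
    (100.0, 90.0),    # at the threshold: 10% discount expected
    (200.0, 180.0),   # above the threshold: 10% discount expected
]

for order_total, expected in spec_cases:
    assert discount(order_total) == expected
print("black box cases pass against the specification")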
Advantages
a. Efficient Testing — Well suited and efficient for large code segments or units.
b. Unbiased Testing — clearly separates user's perspective from developer's perspective through separation of QA and Development responsibilities.
c. Non intrusive — code access not required.
d. Easy to execute — can be scaled to a large number of moderately skilled testers with no knowledge of implementation, programming language, operating systems, or networks.
Disadvantages
a. Localized Testing — Limited code path coverage since only a limited number of test inputs are actually tested.
b. Inefficient Test Authoring — without implementation information, exhaustive input coverage would take forever and would require tremendous resources.
c. Blind Coverage — testers cannot target specific code segments or paths that may be more error prone than others.

White Box Testing
White Box testing refers to the technique of testing a system with knowledge of the internals of the system. White Box testers have access to the source code and are aware of the system architecture. A White Box tester typically analyzes source code, derives test cases from knowledge about the source code, and finally targets specific code paths to achieve a certain level of code coverage. A White Box tester with access to details about both operations can readily craft efficient test cases that exercise boundary conditions.
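A minimal white box sketch in Python: knowing the branch structure of the code, the tester writes one test per path, including the error-handling path. The safe_divide() function is invented for illustration.

def safe_divide(a, b):
    if b == 0:            # branch 1: error-handling path
        return None
    return a / b          # branch 2: normal path

# One test per code path gives full branch coverage of this small function.
assert safe_divide(10, 2) == 5.0    # exercises the normal path
assert safe_divide(10, 0) is None   # exercises the error-handling path
print("both code paths exercised")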
Advantages
a. Increased Effectiveness — crosschecking design decisions and assumptions against the source code; a design document may outline a robust design, but the implementation may not align with that intent.
b. Full Code Pathway Capable — all the possible code pathways can be tested including error handling, resource dependencies, and additional internal code logic/flow.
c. Early Defect Identification — Analyzing source code and developing tests based on the implementation details enables testers to find programming errors quickly.
d. Reveal Hidden Code Flaws — access to source code improves understanding and uncovering unintended hidden behavior of program modules.
Disadvantages
a. Difficult To Scale — requires intimate knowledge of the target system, testing tools, coding languages, and modeling; it suffers from the limited availability of skilled and expert testers.
b. Difficult to Maintain — requires specialized tools such as source code analyzers, debuggers, and fault injectors.
c. Cultural Stress — the demarcation between developer and testers starts to blur which may become a cultural stress.
d. Highly Intrusive — requires that code be modified, either through interactive debuggers or by actually changing the source code. This may be adequate for small programs, but it does not scale well to larger applications and is not useful for networked or distributed systems.

Gray Box Testing
Gray Box testing refers to the technique of testing a system with limited knowledge of the internals of the system. Gray Box testers have access to detailed design documents with information beyond requirement documents. Gray Box tests are generated based on information such as state-based models or architecture diagrams of the target system.
Advantages
a. Offers Combined Benefits — Leverage strengths of both Black Box and White Box testing wherever possible.
b. Non Intrusive — Gray Box testing does not rely on access to source code or binaries; instead, it is based on interface definitions, functional specifications, and application architecture.
c. Intelligent Test Authoring — Based on the limited information available, a Gray Box tester can author intelligent test scenarios, especially around data type handling, communication protocols and exception handling.
d. Unbiased Testing — The demarcation between testers and developer is still maintained. The handoff is only around interface definitions and documentation without access to source code or binaries.
Disadvantages
a. Partial Code Coverage — Since the source code or binaries are not available, the ability to traverse code paths is still limited by the tests deduced through available information. The coverage depends on the tester authoring skills.
b. Defect Identification — defect identification is inherently difficult in distributed applications. Gray Box testing is still at the mercy of how well systems throw exceptions and how well those exceptions are propagated within a distributed Web Services environment.

Difference between Black Box and White Box Testing

1. Synonyms for black-box include: behavioral, functional, opaque-box, and closed-box.
2. Synonyms for white-box include: structural, glass-box and clear-box.
3. Generally, black box testing can begin early in software development, in the requirement gathering phase itself, whereas white box testing has to wait until the design is complete.
4. Black box testing can be used at almost any scale, small or large, while white box testing is effective only for small bodies of code.
5. Performance of the application cannot be tested with white box testing, but it can be tested with black box testing.

QA Role

Anatomy of a Software Development Role: Quality Assurance

The Quality Assurance (QA) role is responsible for guaranteeing a level of quality for the end client and for helping the software development team identify problems early in the process. It is not surprising that people in this role are often known as "testers", but the role is more than just testing: it is about contributing to the quality of the final product. (If you've not been following the series, you should read Cracking the Code: Breaking Down the Software Development Roles.)

What's the Quality Assurance role?

The QA role is focused on creating a quality deliverable. In other words, it is the responsibility of the QA role to make sure that the software development process doesn't sacrifice quality in the name of completed objectives. The QA role works with the Functional Analyst (FA) and the Solutions Architect (SA) to convert the requirements and design documents into a set of test cases and scripts, which can be used to verify that the system meets the client's needs. This collection of test cases and scripts is collectively referred to as a test plan. The test plan document itself is often simple, providing an overview of each of the test cases. The test cases and scripts are also used to validate that there are no unexplained errors in the system.

The test plan is approved by the Subject Matter Experts (SMEs) and represents the criteria for reaching project closure. If the test cases and scripts in the test plan are the agreed-upon acceptance criteria for a project, then all that is necessary for project closure is to demonstrate that all of the test cases and scripts have been executed successfully with passing results.

A test case is a general-purpose statement that maps to one or more requirements and design points. It is the overall item being tested. It may be a specific usability feature, or a technical feature that was supposed to be implemented as part of the project. Test scripts fit into the test cases by validating that case: they are step-by-step instructions on what to do, what to look for, and what should happen. While the test cases can be created with nearly no input from the architecture or design, the test scripts are specific to how the problem was solved by the software development team, and therefore they require an understanding not only of the requirements but also of the architecture, design, and detailed design.

The quality assurance role is split into three parts. First, the role creates test cases and scripts. Second, the role executes or supervises the execution of those test cases and scripts. Third, the role facilitates or performs random testing of all components to ensure that there is not a random bug haunting the system.

In some organizations the quality assurance role has two specializations. The first is the classic functional testing and quality assurance described above. The second is a performance quality assurance role, in which the performance of the completed solution is measured and quantified; this is an important part of the quality assurance process for large systems.

The quality assurance role also spans a wide range of titles and specific responsibilities, from the entry-level quality assurance professional who executes and documents tests to the QA lead who works with the FA and SA to create the testing plan, cases, and scripts. The role also extends to the QA manager position, which may take responsibility for the quality of a solution. At this level, the QA manager and solutions architect work as peers to ensure the final solution has the highest quality.

Why Software Testing is so hard?



What Is Software Testing? And Why Is It So Hard?

Software testing is arguably the least understood part of the development process. Through a four-phase approach, the author shows why eliminating bugs is tricky and why testing is a constant trade-off.
Virtually all developers know the frustration of having software bugs reported by users. When this happens, developers inevitably ask: How did those bugs escape testing? Countless hours doubtless went into the careful testing of hundreds or thousands of variables and code statements, so how could a bug have eluded such vigilance? The answer requires, first, a closer look at software testing within the context of development. Second, it requires an understanding of the role software testers and developers—two very different functions—play. Assuming that the bugs users report occur in a software product that really is in error, the answer could be any of these:
The user executed untested code. Because of time constraints, it’s not uncommon for developers to release untested code—code in which users can stumble across bugs.
The order in which statements were executed in actual use differed from that during testing. This order can determine whether software works or fails.
The user applied a combination of untested input values. The possible input combinations that thousands of users can make across a given software interface are simply too numerous for testers to apply them all. Testers must make tough decisions about which inputs to test, and sometimes we make the wrong decisions.
The user’s operating environment was never tested. We might have known about the environment but had no time to test it. Perhaps we did not (or could not) replicate the user’s combination of hardware, peripherals, operating system, and applications in our testing lab. For example, although companies that write networking software are unlikely to create a thousand-node network in their testing lab, users can—and do— create such networks.
Through an overview of the software testing problem and process, this article investigates the problems that testers face and identifies the technical issues that any solution must address. I also survey existing classes of solutions used in practice. Readers interested in further study will find the sidebar “Testing Resources” helpful.
Testers and the Testing Process
To plan and execute tests, software testers must consider the software and the function it computes, the inputs and how they can be combined, and the environment in which the software will eventually operate. This difficult, time-consuming process requires technical sophistication and proper planning. Testers must not only have good development skills—testing often requires a great deal of coding—but also be knowledgeable in formal languages, graph theory, and algorithms. Indeed, creative testers have brought many related computing disciplines to bear on testing problems, often with impressive results. Even simple software presents testers with obstacles, as the sidebar “A Sample Software Testing Problem” shows. To get a clearer view of some of software testing’s inherent difficulties, we can approach testing in four phases:
Modeling the software’s environment
Selecting test scenarios
Running and evaluating test scenarios
Measuring testing progress
These phases offer testers a structure in which to group related problems that they must solve before moving on to the next phase.
Phase 1: Modeling the Software’s Environment
A tester’s task is to simulate interaction between software and its environment. Testers must identify and simulate the interfaces that a software system uses and enumerate the inputs that can cross each interface. This might be the most fundamental issue that testers face, and it can be difficult, considering the various file formats, communication protocols, and third-party APIs (application programming interfaces) available. Four common interfaces are as follows:
Human interfaces include all common methods for people to communicate with software. Most prominent is the GUI but older designs like the command line interface and the menu-driven interface are still in use. Possible input mechanisms to consider are mouse clicks, keyboard events, and input from other devices. Testers then decide how to organize this data to understand how to assemble it into an effective test.
Software interfaces, called APIs, are how software uses an operating system, database, or runtime library. The services these applications provide are modeled as test inputs. The challenge for testers is to check not only the expected but also the unexpected services. For example, all developers expect the operating system to save files for them. The service that they neglect is the operating system’s informing them that the storage medium is full. Even error messages must be tested.
File system interfaces exist whenever software reads or writes data to external files. Developers must write lots of error-checking code to determine if the file contains appropriate data and formatting. Thus, testers must build or generate files with content that is both legal and illegal, and files that contain a variety of text and formatting.
Communication interfaces allow direct access to physical devices (such as device drivers, controllers, and other embedded systems) and require a communication protocol. To test such software, testers must be able to generate both valid and invalid protocol streams. Testers must assemble—and submit to the software under test—many different combinations of commands and data, in the proper packet format.
Next, testers must understand the user interaction that falls outside the control of the software under test, since the consequences can be serious if the software is not prepared. Examples of situations testers should address are as follows:
Using the operating system, one user deletes a file that another user has open. What will happen the next time the software tries to access that file?
A device gets rebooted in the middle of a stream of communication. Will the software realize this and react properly or just hang?
Two software systems compete for duplicate services from an API. Will the API correctly service both?
Each application’s unique environment can result in a significant number of user interactions to test.
Considerations
When an interface presents problems of infinite size or complexity, testers face two difficulties: they must carefully select values for any variable input, and they must decide how to sequence inputs. In selecting values, testers determine the values of individual variables and assign interesting value combinations when a program accepts multiple variables as input. Testers most often use the boundary value partitioning technique1 for selecting single values for variables at or around boundaries. For example, testing the minimum, maximum, and zero values for a signed integer is a commonly accepted idea, as is testing the values surrounding each of these partitions—for example, 1 and –1 (which surround the zero boundary). The values between boundaries are treated as equivalent; whether we use 16 or 16,000 makes no difference to the software under test.
For multiple variables processed simultaneously that could potentially affect each other, testers must consider the entire cross product of value combinations. For two integers, we consider both positive, both negative, one positive and one zero, and so forth.2
In deciding how to sequence inputs, testers have a sequence generation problem. Testers treat each physical input and abstract event as symbols in the alphabet of a formal language and define a model of that language. A model lets testers visualize the set of possible tests to see how each test fits the big picture. The most common model is a graph or state diagram, although many variations exist. Other popular models include regular expressions and grammars, tools from language theory. Less-used models are stochastic processes and genetic algorithms. The model is a representation that describes how input and event symbols are combined to make syntactically valid words and sentences. These sentences are sequences of inputs that can be applied to the software under test.
For example, consider the input Filemenu.Open, which invokes a file selection dialog box; filename, which represents the selection (with mouse clicks, perhaps) of an existing file; and ClickOpen and ClickCancel, which represent button presses. The sequence Filemenu.Open filename ClickOpen is legal, as are many others. The sequence ClickCancel Filemenu.Open is impossible because the cancel button cannot be pressed until the dialog box has been invoked. The model of the formal language can make such a distinction between sequences.
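A minimal sketch, in Python, of the cross product idea for two signed-integer inputs; the particular boundary values chosen are only an example.

from itertools import product

# Interesting single-variable values: minimum, around zero, maximum (32-bit).
interesting_values = [-2**31, -1, 0, 1, 2**31 - 1]

# Cross product of value combinations for two variables processed together.
test_inputs = list(product(interesting_values, repeat=2))
print(len(test_inputs), "value combinations, e.g.", test_inputs[:3])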
Text editor example
We can represent legal uses of the file selection dialog in, for example, a text editor with the regular expression: Filemenu.Open filename* (ClickOpen | ClickCancel)
in which the asterisk represents the Kleene closure operator indicating that the filename action can occur zero or more times. This expression indicates that the first input received is Filemenu.Open followed by zero or more selections of a filename (with a combination of mouse clicks and keyboard entries), then either the Open or Cancel button is pressed. This simple model represents every combination of inputs that can happen, whether they make sense or not. To fully model the software environment for the entire text editor, we would need to represent sequences for the user interface and the operating system interface. Furthermore, we would need a description of legal and corrupt files to fully investigate file system interaction. Such a formidable task would require the liberal use of decomposition and abstraction.
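One way to make such a model executable is sketched below in Python: the input symbols become tokens and the model above becomes a regular expression over them, so candidate sequences can be checked for legality. The token spelling follows the text; everything else is illustrative.

import re

# The file-dialog model: Filemenu.Open, zero or more filename selections,
# then either ClickOpen or ClickCancel.
MODEL = re.compile(r"Filemenu\.Open( filename)* (ClickOpen|ClickCancel)")

sequences = [
    "Filemenu.Open filename ClickOpen",   # legal
    "Filemenu.Open ClickCancel",          # legal: zero filename selections
    "ClickCancel Filemenu.Open",          # illegal: dialog not yet invoked
]

for seq in sequences:
    verdict = "legal" if MODEL.fullmatch(seq) else "illegal"
    print(f"{verdict}: {seq}")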
Phase 2: Selecting Test Scenarios
Many domain models and variable partitions represent an infinite number of test scenarios, each of which costs time and money. Only a subset can be applied in any realistic software development schedule, so how does a smart tester choose? Is 17 a better integer than 34? How many times should a filename be selected before pressing the Open button? These questions, which have many answers, are being actively researched. Testers, however, prefer an answer that relates to coverage of source code or its input domain. Testers strive for coverage: covering code statements (executing each source line at least once) and covering inputs (applying each externally generated event). These are the minimum criteria that testers use to judge the completeness of their work; therefore, the test set that many testers choose is the one that meets their coverage goals. But if code and input coverage were sufficient, released products would have very few bugs. Concerning the code, it isn’t individual code statements that interest testers but execution paths: sequences of code statements representing an execution of the software. Unfortunately, there are an infinite number of paths. Concerning the input domain, it isn’t the individual inputs that interest testers but input sequences that, taken as a whole, represent scenarios to which the software must respond. There are an infinite number of these, too. Testers sort through these infinite sets to arrive at the best possible test data adequacy criteria, which are meant to adequately and economically represent any of the infinite sets. “Best” and “adequately” are subjective; testers typically seek the set that will find the most bugs. (High and low bug counts, and their interpretation, are discussed later). Many users and quality assurance professionals are interested in having testers evaluate typical use scenarios— things that will occur most often in the field. Such testing ensures that the software works as specified and that the most frequently occurring bugs will have been detected. For example, consider the text editor example again. To test typical use, we would focus on editing and formatting since that is what real users do most. However, to find bugs, a more likely place to look is in the harder-to-code features like figure drawing and table editing.
Execution path test criteria
Test data adequacy criteria concentrate on either execution path coverage or input sequence coverage but rarely both. The most common execution path selection criteria focus on paths that cover control structures. For example,
Select a set of tests that cause each source statement to be executed at least once.
Select a set of tests that cause each branching structure (If, Case, While, and so on) to be evaluated with each of its possible values.
However, control flow is only one aspect of the source code. What software actually does is move data from one location to another. The dataflow family of test data adequacy criteria3 describe coverage of this data. For example,
Select a set of tests that cause each data structure to be initialized and then subsequently used.
Finally, fault seeding, which claims more attention from researchers than practitioners, is interesting.1 In this method, errors are intentionally inserted (seeded) into the source code. Test scenarios are then designed to find those errors. Ideally, by finding seeded errors, the tester will also find real errors. Thus, a criterion like the following is possible:
Select a set of tests that expose each of the seeded faults.
Input domain test criteria
Criteria for input domain coverage range from simple coverage of an interface to more complex statistical measurement.
Select a set of tests that contain each physical input.
Select a set of tests that cause each interface control (window, menu, button, and so on) to be stimulated.
The discrimination criterion4 requires random selection of input sequences until they statistically represent the entire infinite input domain:
Select a set of tests that have the same statistical properties as the entire input domain.
Select a set of paths that are likely to be executed by a typical user.
Summary
Testing researchers are actively studying algorithms to select minimal test sets that satisfy criteria for execution paths and input domains. Most researchers would agree that it is prudent to use multiple criteria when making important release decisions. Experiments comparing test data adequacy criteria are needed, as are new criteria. However, for the present, testers should be aware which criteria are built into their methodology and understand the inherent limitations of these criteria when they report results. We'll revisit test data adequacy criteria in the fourth phase, test measurement, because the criteria also serve as measures of test completeness.
Phase 3: Running and Evaluating Test Scenarios
Having identified suitable tests, testers convert them to executable form, often as code, so that the resulting test scenarios simulate typical user action. Because manually applying test scenarios is labor-intensive and error-prone, testers try to automate the test scenarios as much as possible. In many environments, automated application of inputs through code that simulates users is possible, and tools are available to help. Complete automation requires simulation of each input source and output destination of the entire operational environment. Testers often include data-gathering code in the simulated environment as test hooks or asserts. This code provides information about internal variables, object properties, and so forth. These hooks are removed when the software is released, but during test scenario execution they provide valuable information that helps testers identify failures and isolate faults.
Scenario evaluation, the second part of this phase, is easily stated but difficult to do (much less automate). Evaluation involves the comparison of the software's actual output, resulting from test scenario execution, to its expected output as documented by a specification. The specification is assumed correct; deviations are failures. In practice, this comparison is difficult to achieve. Theoretically, comparison (to determine equivalence) of two arbitrary, Turing-computable functions is unsolvable. Returning to the text editor example, if the output is supposed to be “highlight a misspelled word,” how can we determine that each instance of misspelling has been detected? Such difficulty is the reason why the actual-versus-expected output comparison is usually performed by a human oracle: a tester who visually monitors screen output and painstakingly analyzes output data. (See the “Testing Terminology” sidebar for an explanation of other common testing terms.)
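A minimal sketch of the test hook idea in Python: extra state reporting and assertions are active only while testing and silent in the released build. The Counter class and the TESTING flag are invented for illustration.

TESTING = True   # would be False (or the hooks removed entirely) in the released build

class Counter:
    def __init__(self):
        self._value = 0

    def increment(self):
        self._value += 1
        if TESTING:
            # Test hook: expose internal state and check an invariant that the
            # external interface alone cannot verify.
            assert self._value >= 0, "internal invariant violated"
            print("hook: internal value is now", self._value)

c = Counter()
c.increment()
c.increment()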
Two approaches to evaluating your test
In dealing with the problems of test evaluation, researchers are pursuing two approaches: formalism and embedded test code. Formalism chiefly involves the hard work of formalizing the way specifications are written and the way that designs and code are derived from them.5 Both object-oriented and structured development contain mechanisms for formally expressing specifications to simplify the task of comparing expected and actual behavior. Industry has typically shied away from formal methods; nonetheless, a good specification, even an informal one, is still extremely helpful. Without a specification, testers are likely to find only the most obvious bugs. Furthermore, the absence of a specification wastes significant time when testers report unspecified features as bugs.
There are essentially two types of embedded test code. The simplest type is test code that exposes certain internal data objects or states that make it easier for an external oracle to judge correctness. As implemented, such functionality is invisible to users. Testers can access test code results through, for example, a test API or a debugger. A more complex type of embedded code features self-testing programs.6 Sometimes this involves coding multiple solutions to the problem and having one solution check the other, or writing inverse routines that undo each operation. If an operation is performed and then undone, the resulting software state should be equivalent to its preoperational state. In this situation, the oracle is not perfect; there could be a bug in both operations where each bug masks the other.
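A minimal sketch of the inverse-routine idea in Python, using an invented encode/decode pair: perform an operation and its inverse and check that the data round-trips. As the text notes, this oracle is imperfect, since matching bugs in both routines could mask each other.

import base64

def encode(data: bytes) -> bytes:
    return base64.b64encode(data)

def decode(data: bytes) -> bytes:
    return base64.b64decode(data)

original = b"testing is a constant trade-off"
# Round-trip oracle: the operation followed by its inverse must restore the state.
assert decode(encode(original)) == original
print("operation and inverse agree on this input")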
Regression testing
After testers submit successfully reproduced failures to development, developers generally create a new version of the software (in which the bug has been supposedly removed). Testing progresses through subsequent software versions until one is determined to be fit for release. The question is, how much retesting (called regression testing) of version n is necessary using the tests that were run against version n – 1? Any specific fix can (a) fix only the problem that was reported, (b) fail to fix the problem, (c) fix the problem but break something that was previously working, or (d) fail to fix the problem and break something else. Given these possibilities, it would seem prudent to rerun every test from version n – 1 on version n before testing anything new, although such a practice is generally cost-prohibitive.7 Moreover, new software versions often feature extensive new functionality, in addition to the bug fixes, so the regression tests would take time away from testing new code. To save resources, then, testers work closely with developers to prioritize and minimize regression tests. Another drawback to regression testing is that these tests can (temporarily) alter the purpose of the test data adequacy criteria selected in the earlier test selection phase. When performing regression tests, testers seek only to show the absence of a fault and to force the application to exhibit specific behavior. The outcome is that the test data adequacy criteria, which until now guided test selection, are ignored. Instead, testers must ensure that a reliable fix to the code has been made.
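A minimal sketch of a regression test in pytest style (assuming pytest as the runner; the bug number and parse_amount() function are invented): each reported failure gets a test pinned to it so the fix keeps being re-verified in every later version.

def parse_amount(text):
    # Code under test after the fix: tolerate surrounding whitespace,
    # which was the originally reported bug.
    return float(text.strip())

def test_regression_bug_1234_whitespace_amount():
    # Reproduces the originally reported failure; must keep passing in
    # every subsequent build so the fix is not silently undone.
    assert parse_amount("  42.50 ") == 42.50

if __name__ == "__main__":
    test_regression_bug_1234_whitespace_amount()
    print("regression test passed")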
Related concerns
Ideally, developers will write code with testing in mind. If the code will be hard to test and verify, then it should be rewritten to make it more testable. Likewise, a testing methodology should be judged by its contribution to solving automation and oracle problems. Too many methodologies provide little guidance in either area. Another concern for testers while running and verifying tests is the coordination of debugging activity with developers. As failures are identified by testers and diagnosed by developers, two issues arise: failure reproduction and test scenario re-execution. Failure reproduction is not the no-brainer it might seem. The obvious answer is, of course, to simply rerun the offending test and observe the errant behavior again, although rerunning a test does not guarantee that the exact same conditions will be created. Scenario re-execution requires that we know the exact state of the operating system and any companion software—for example, client–server applications would require reproduction of the conditions surrounding both the client and the server. Additionally, we must know the state of test automation, peripheral devices, and any other background application running locally or over the network that could affect the application being tested. It is no wonder that one of the most commonly heard phrases in a testing lab is, “Well, it was behaving differently before….”
Phase 4: Measuring Testing Progress
Suppose I am a tester and one day my manager comes to me and asks, “What’s the status of your testing?” Testers are often asked this question but are not well equipped to answer it. The reason is that the state of the practice in test measurement is to count things. We count the number of inputs we’ve applied, the percentage of code we’ve covered, and the number of times we’ve invoked the application. We count the number of times we’ve terminated the application successfully, the number of failures we found, and so on. Interpreting such counts is difficult—is finding lots of failures good news or bad? The answer could be either. A high bug count could mean that testing was thorough and very few bugs remain. Or, it could mean that the software simply has lots of bugs and, even though many have been exposed, lots of them remain. Since counting measures yield very little insight about the progress of testing, many testers augment this data by answering questions designed to ascertain structural and functional testing completeness. For example, to check for structural completeness, testers might ask these questions:
Have I tested for common programming errors?8
Have I exercised all of the source code?1
Have I forced all the internal data to be initialized and used?3
Have I found all seeded errors?1
To check for functional completeness, testers might ask these questions:
Have I thought through the ways in which the software can fail and selected tests that show it doesn’t?9
Have I applied all the inputs?1
Have I completely explored the state space of the software?4
Have I run all the scenarios that I expect a user to execute?10
These questions—essentially, test data adequacy criteria—are helpful to testers; however, determining when to stop testing, determining when a product is ready to release, is more complex. Testers want quantitative measures of the number of bugs left in the software and of the probability that any of these bugs will be discovered in the field. If testers can achieve such a measure, they know to stop testing. We can approach the quantitative problem structurally and functionally.
Testability
From a structural standpoint, Jeffrey Voas has proposed testability11 as a way to determine an application's testing complexity. The idea that the number of lines of code determines the software's testing difficulty is obsolete; the issue is much murkier. This is where testability comes into play. If a product has high testability, it is easy to test and, consequently, easier to find bugs in. We can then monitor testing and observe that because bugs are fewer, it is unlikely that many undiscovered ones exist. Low testability would require many more tests to draw the same conclusions; we would expect that bugs are harder to find. Testability is a compelling concept but in its infancy; no data on its predictive ability has yet been published.
Reliability models
How long will the software run before it fails? How expensive will the software be to maintain? It is certainly better to find this out while you still have the software in your testing lab. From a functional standpoint, reliability models10—mathematical models of test scenarios and failure data that attempt to predict future failure patterns based on past data—are well established. These models thus attempt to predict how software will behave in the field based on how it behaved during testing. To accomplish this, most reliability models require the specification of an operational profile, a description of how users are expected to apply inputs. To compute the probability of failure, these models make some assumptions about the underlying probability distribution that governs failure occurrences. Researchers and practitioners alike have expressed skepticism that such profiles can be accurately assembled. Furthermore, the assumptions made by common reliability models have not been theoretically or experimentally verified except in specific application domains. Nevertheless, successful case studies have shown these models to be credible.
Software companies face serious challenges in testing their products, and these challenges are growing bigger as software grows more complex. The first and most important thing to be done is to recognize the complex nature of testing and take it seriously. My advice: Hire the smartest people you can find, help them get the tools and training they need to learn their craft, and listen to them when they tell you about the quality of your software. Ignoring them might be the most expensive mistake you ever make. Testing researchers likewise face challenges. Software companies are anxious to fund good research ideas, but the demand for more practical, less academic work is strong. The time to tie academic research to real industry products is now. We’ll all come out winners.