Thursday, November 15, 2007

Testing Procedure


What steps are needed to develop and run software tests?


The following are some of the steps to consider:

* Obtain requirements, functional design, and internal design specifications and other necessary documents.

* Obtain budget and schedule requirements.

* Determine project-related personnel and their responsibilities, reporting requirements, and required standards and processes (such as release processes, change processes, etc.)

* Identify application's higher-risk aspects, set priorities, and determine scope and limitations of tests.

* Determine test approaches and methods - unit, integration, functional, system, load, usability tests, etc.

* Determine test environment requirements (hardware, software, communications, etc.)

* Determine testware requirements (record/playback tools, coverage analyzers, test tracking, problem/bug tracking, etc.)

* Determine test input data requirements

* Identify tasks, those responsible for tasks, and labor requirements

* Set schedule estimates, timelines, milestones

* Determine input equivalence classes, boundary value analyses, error classes (a small unit-test sketch of this step follows the list)

* Prepare test plan document and have needed reviews/approvals

* Write test cases

* Have needed reviews/inspections/approvals of test cases

* Prepare test environment and testware, obtain needed user manuals/reference documents/configuration guides/installation guides, set up test tracking processes, set up logging and archiving processes, set up or obtain test input data

* Obtain and install software releases

* Perform tests

* Evaluate and report results

* Track problems/bugs and fixes

* Retest as needed

* Maintain and update test plans, test cases, test environment, and testware through life cycle
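
The equivalence class and boundary value step above is the most mechanical one, so a small illustration may help. The sketch below assumes a hypothetical validate_age function that accepts ages 18 through 65 inclusive; the function, the range, and the test names are made up for illustration rather than taken from any particular project.

```python
import unittest

# Hypothetical function under test: accepts ages 18 through 65 inclusive.
def validate_age(age):
    return 18 <= age <= 65

class AgeBoundaryTests(unittest.TestCase):
    def test_valid_equivalence_class(self):
        # One representative value from the valid class stands in for the rest.
        self.assertTrue(validate_age(30))

    def test_invalid_equivalence_classes(self):
        self.assertFalse(validate_age(10))   # below the valid range
        self.assertFalse(validate_age(80))   # above the valid range

    def test_boundary_values(self):
        self.assertFalse(validate_age(17))   # just below the lower boundary
        self.assertTrue(validate_age(18))    # lower boundary
        self.assertTrue(validate_age(65))    # upper boundary
        self.assertFalse(validate_age(66))   # just above the upper boundary

if __name__ == '__main__':
    unittest.main()
```

Each equivalence class gets one representative value, and each boundary gets the values on both sides of it; that keeps the number of test cases small without leaving obvious gaps.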

Bug Tracking

What's a 'test case'?

* A test case is a document that describes an input, action, or event and an expected response, to determine if a feature of an application is working correctly. A test case should contain particulars such as test case identifier, test case name, objective, test conditions/setup, input data requirements, steps, and expected results (a minimal sketch of such a record follows below).

* Note that the process of developing test cases can help find problems in the requirements or design of an application, since it requires completely thinking through the operation of the application. For this reason, it's useful to prepare test cases early in the development cycle if possible.
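
To make those particulars concrete, here is a minimal sketch of a test case captured as a structured record. The field names simply mirror the list above; the login-lockout scenario, its identifier, and its data are hypothetical.

```python
from dataclasses import dataclass

# A test case record carrying the particulars listed above.
@dataclass
class TestCase:
    identifier: str
    name: str
    objective: str
    setup: str            # test conditions/setup
    input_data: dict      # input data requirements
    steps: list           # actions to perform
    expected_result: str

login_lockout = TestCase(
    identifier='TC-042',
    name='Login lockout after failed attempts',
    objective='Verify the account locks after three bad passwords',
    setup='Test user exists and is not locked',
    input_data={'username': 'testuser', 'password': 'wrong-password'},
    steps=[
        'Open the login screen',
        'Submit invalid credentials three times',
        'Attempt a fourth login with the correct password',
    ],
    expected_result="Fourth attempt is rejected with an 'account locked' message",
)
```

Whether the same structure lives in a spreadsheet, a test-management tool, or code, the point is that every case states its setup, inputs, steps, and expected result explicitly.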

What should be done after a bug is found?

* The bug needs to be communicated and assigned to developers who can fix it. After the problem is resolved, fixes should be re-tested, and a determination made about whether regression testing is needed to check that the fixes didn't create problems elsewhere. If a problem-tracking system is in place, it should encapsulate these processes. A variety of commercial problem-tracking/management software tools are available (see the 'Tools' section for web resources with listings of such tools). The following are items to consider in the tracking process (a minimal record sketch follows this list):

* Complete information such that developers can understand the bug, get an idea of its severity, and reproduce it if necessary.

* Bug identifier (number, ID, etc.)

* Current bug status (e.g., 'Released for Retest', 'New', etc.)

* The application name or identifier and version

* The function, module, feature, object, screen, etc. where the bug occurred

* Environment specifics, system, platform, relevant hardware specifics

* Test case name/number/identifier

* One-line bug description

* Full bug description

* Description of steps needed to reproduce the bug if not covered by a test case or if the developer doesn't have easy access to the test case/test script/test tool

* Names and/or descriptions of file/data/messages/etc. used in test

* File excerpts/error messages/log file excerpts/screen shots/test tool logs that would be helpful in finding the cause of the problem

* Severity estimate (a 5-level range such as 1-5 or 'critical'-to-'low' is common)

* Was the bug reproducible?

* Tester name

* Test date

* Bug reporting date

* Name of developer/group/organization the problem is assigned to

* Description of problem cause

* Description of fix

* Code section/file/module/class/method that was fixed

* Date of fix

* Application version that contains the fix

* Tester responsible for retest

* Retest date

* Retest results

* Regression testing requirements

* Tester responsible for regression tests

* Regression testing results

* A reporting or tracking process should enable notification of appropriate personnel at various stages. For instance, testers need to know when retesting is needed, developers need to know when bugs are found and how to get the needed information, and reporting/summary capabilities are needed for managers.
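
As a rough illustration of the items above, the sketch below models a bug record with a small subset of those fields plus two status transitions. The field names, status values, and the sample bug are assumptions made for illustration; a real problem-tracking tool defines its own schema and workflow.

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class BugReport:
    bug_id: str
    status: str               # e.g. 'New', 'Assigned', 'Released for Retest', 'Closed'
    application: str
    version: str
    summary: str              # one-line bug description
    description: str          # full bug description and steps to reproduce
    severity: int             # 1 (critical) to 5 (low)
    reproducible: bool
    reported_by: str
    reported_on: date
    assigned_to: Optional[str] = None
    fix_description: Optional[str] = None
    retest_result: Optional[str] = None

    def assign(self, developer):
        # Hand the bug to a developer and move it along the workflow.
        self.assigned_to = developer
        self.status = 'Assigned'

    def mark_fixed(self, fix_description):
        # Record the fix and flag the bug for retest by the tester.
        self.fix_description = fix_description
        self.status = 'Released for Retest'

bug = BugReport(
    bug_id='BUG-1017',
    status='New',
    application='OrderEntry',
    version='2.3.1',
    summary='Total price ignores quantity discount',
    description='Orders of 10 or more units are billed at the full unit price.',
    severity=2,
    reproducible=True,
    reported_by='tester-a',
    reported_on=date(2007, 11, 15),
)
bug.assign('developer-b')
bug.mark_fixed('Discount tier lookup now applied before totaling')
```

Status changes like these are the natural hooks for the notifications mentioned above: the developer is told when a bug is assigned, and the tester is told when it is released for retest.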

Why does software have bugs?


* Miscommunication or no communication - as to specifics of what an application should or shouldn't do (the application's requirements).

* Software complexity - the complexity of current software applications can be difficult to comprehend for anyone without experience in modern-day software development. Windows-type interfaces, client-server and distributed applications, data communications, enormous relational databases, and sheer size of applications have all contributed to the exponential growth in software/system complexity. And the use of object-oriented techniques can complicate instead of simplify a project unless it is well engineered.

* Programming errors - programmers, like anyone else, can make mistakes.

* Changing requirements - the customer may not understand the effects of changes, or may understand and request them anyway - redesign, rescheduling of engineers, effects on other projects, work already completed that may have to be redone or thrown out, hardware requirements that may be affected, etc. If there are many minor changes or any major changes, known and unknown dependencies among parts of the project are likely to interact and cause problems, and the complexity of keeping track of changes may result in errors. Enthusiasm of engineering staff may be affected. In some fast-changing business environments, continuously modified requirements may be a fact of life. In this case, management must understand the resulting risks, and QA and test engineers must adapt and plan for continuous extensive testing to keep the inevitable bugs from running out of control.

* Time pressures - scheduling of software projects is difficult at best, often requiring a lot of guesswork. When deadlines loom and the crunch comes, mistakes will be made.

* Egos - people prefer to say things like:

* 'no problem'

* 'piece of cake'

* 'I can whip that out in a few hours'

* 'it should be easy to update that old code'

* instead of:

* 'that adds a lot of complexity and we could end up making a lot of mistakes'

* 'we have no idea if we can do that; we'll wing it'

* 'I can't estimate how long it will take until I take a close look at it'

* 'we can't figure out what that old spaghetti code did in the first place'

* If there are too many unrealistic 'no problem's', the result is bugs.

* Poorly documented code - it's tough to maintain and modify code that is badly written or poorly documented; the result is bugs. In many organizations management provides no incentive for programmers to document their code or write clear, understandable code. In fact, it's usually the opposite: they get points mostly for quickly turning out code, and there's job security if nobody else can understand it ('if it was hard to write, it should be hard to read').

* Software development tools - visual tools, class libraries, compilers, scripting tools, etc. often introduce their own bugs or are poorly documented, resulting in added bugs.
