Thursday, September 20, 2007

Some FAQs with Answers

What is 'Software Testing'?
  • Software Testing involves operating a system or application under controlled conditions and evaluating the results. The controlled conditions should include both normal and abnormal conditions.

What is 'Software Quality Assurance'?

  • Software Quality Assurance involves the entire software development PROCESS - monitoring and improving the process, making sure that any agreed-upon standards and procedures are followed, and ensuring that problems are found and dealt with.

What is the 'Software Quality Gap'?

  • The difference between the state of the software as planned and the actual state that has been verified as operating correctly is called the software quality gap.

What is Equivalence Partitioning?

  • In Equivalence Partitioning, a test case is designed to uncover a whole group or class of errors, which limits the number of test cases that would otherwise need to be developed. The input domain is divided into classes or groups of data; these classes are known as equivalence classes, and the process of defining them is called equivalence partitioning. Equivalence classes represent sets of valid or invalid states for input conditions.
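To make the idea concrete, here is a minimal sketch (not from the original FAQ) in Python, assuming a hypothetical input field that accepts ages from 18 to 60; one representative value is tested from each equivalence class:

# Illustrative sketch: equivalence classes for a hypothetical "age" input
# that is valid between 18 and 60 inclusive.

def accept_age(age):
    """Toy function under test: returns True only for ages 18-60."""
    return 18 <= age <= 60

# One representative value is picked from each class; if the program fails
# for one member of a class, it is assumed to fail for the whole class.
equivalence_classes = {
    "valid: 18-60":      (35, True),
    "invalid: below 18": (5, False),
    "invalid: above 60": (75, False),
}

for name, (value, expected) in equivalence_classes.items():
    result = accept_age(value)
    status = "PASS" if result == expected else "FAIL"
    print(f"{status}: class '{name}' with representative {value} -> {result}")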

What is Boundary Value Analysis?

  • It has been observed that programs that work correctly for a set of values in an equivalence class can fail on some special values. These values often lie on the boundary of the equivalence class. Boundary values for each equivalence class, including the equivalence classes of the output, should be covered. Boundary value test cases are also called extreme cases. Hence, a boundary value test case is a set of input data that lies on the edge or boundary of a class of input data, or that generates output lying at the boundary of a class of output data.
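A small companion sketch (again an invented example, using the same hypothetical 18-60 age field) generates the usual boundary values around the edges of the range:

# Illustrative sketch: boundary value test data for a hypothetical
# "age" field that is valid from 18 to 60 inclusive.

def boundary_values(low, high):
    """Return typical boundary test values for an inclusive [low, high] range."""
    return [low - 1, low, low + 1, high - 1, high, high + 1]

def accept_age(age):
    return 18 <= age <= 60

for value in boundary_values(18, 60):
    expected = 18 <= value <= 60
    result = accept_age(value)
    print(f"age={value}: expected {expected}, got {result}, "
          f"{'PASS' if result == expected else 'FAIL'}")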

Why does software have bugs?

  • Miscommunication or no communication - failure to understand the application's requirements.
  • Software complexity - the complexity of current software applications can be difficult to comprehend for anyone without experience in modern-day software development.
  • Programming errors - programmers "can" make mistakes.
  • Changing requirements - a redesign, rescheduling of engineers, effects on other projects, etc. If there are many minor changes or any major changes, known and unknown dependencies among parts of the project are likely to interact and cause problems, and the complexity of keeping track of changes may result in errors.
  • Time pressures - scheduling of software projects is difficult at best, often requiring a lot of guesswork. When deadlines loom and the crunch comes, mistakes will be made.
  • Poorly documented code - it's tough to maintain and modify code that is badly written or poorly documented; the result is bugs.
  • Software development tools - various tools often introduce their own bugs or are poorly documented, resulting in added bugs.

What does "finding a bug" consist of?

  • Finding a bug consists of a number of steps that are performed:
  1. Searching for and locating a bug
  2. Analyzing the exact circumstances under which the bug occurs
  3. Documenting the bug found
  4. Reporting the bug to you and, if necessary, helping you to reproduce the error
  5. Testing the fixed code to verify that it really is fixed

What will happen about bugs that are already known?

  • When a program is sent for testing (or a website is given), a list of any known bugs should accompany the program. If a bug is found, the list will be checked to ensure that it is not a duplicate. Any bugs not found on the list will be assumed to be new.

What's the big deal about 'requirements'?

  • Requirements are the details describing an application's externally perceived functionality and properties. Requirements should be clear & documented, complete, reasonably detailed, cohesive, attainable, and testable. A non-testable requirement would be, for example, 'user-friendly' (too subjective). Without such documentation, there will be no clear-cut way to determine if a software application is performing correctly.

What can be done if requirements are changing continuously?

  • A common problem and a major headache. It's helpful if the application's initial design allows for some adaptability so that later changes do not require redoing the application from scratch. If the code is well commented and well documented this makes changes easier for the developers. Use rapid prototyping whenever possible to help customers feel sure of their requirements and minimize changes. Be sure that customers and management understand the scheduling impacts, inherent risks, and costs of significant requirements changes. Design some flexibility into test cases (this is not easily done; the best bet might be to minimize the detail in the test cases, or set up only higher-level generic-type test plans)

How can it be known when to stop testing?

  • This can be difficult to determine. Many modern software applications are so complex, and run in such an interdependent environment, that complete testing can never be performed. Common factors in deciding when to stop are listed below (a small illustrative check of such exit criteria follows the list):
  1. Deadlines achieved (release deadlines, testing deadlines, etc.)
  2. Test cases completed with a certain percentage passed
  3. Test budget depleted
  4. Coverage of code/functionality/requirements reaches a specified point
  5. Defect rate falls below a certain level
  6. Beta or alpha testing period ends
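As a purely illustrative sketch, the check below evaluates a few such exit criteria; the metric names and thresholds are invented and would differ per project:

# Illustrative sketch: evaluating a few stop-testing factors.
def ready_to_stop(metrics, *,
                  min_pass_rate=0.95,
                  min_coverage=0.80,
                  max_defect_rate=0.02):
    """Return (decision, reasons) given a dict of simple test metrics."""
    reasons = []
    if metrics["passed"] / metrics["executed"] < min_pass_rate:
        reasons.append("pass rate below target")
    if metrics["coverage"] < min_coverage:
        reasons.append("coverage below target")
    if metrics["new_defects_per_test"] > max_defect_rate:
        reasons.append("defect rate still too high")
    return (len(reasons) == 0, reasons)

ok, why = ready_to_stop({"executed": 400, "passed": 390,
                         "coverage": 0.85, "new_defects_per_test": 0.01})
print("Stop testing?", ok, why)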

What if there isn't enough time for thorough testing?

  • Use risk analysis to determine where testing should be focused. Figure out: Which functionality is most important to the project's intended purpose? Which functionality is most visible to the user? Which functionality has the largest safety impact? Which functionality has the largest financial impact on users? Which aspects of the application are most important to the customer? Which aspects of the application can be tested early in the development cycle? Which parts of the code are most complex, and thus most subject to errors? What do the developers think are the highest-risk aspects of the application? Which tests will have the best high-risk-coverage to time-required ratio?

What if the software has so many bugs that it can't really be tested at all?

  • Since this type of problem can severely affect schedules, and indicates deeper problems in the software development process (such as insufficient unit testing or insufficient integration testing, poor design, improper build or release procedures, etc.), managers should be notified and provided with some documentation as evidence of the problem.

How does a client/server environment affect testing?

  • Client/server applications can be quite complex due to the multiple dependencies among clients, data communications, hardware, and servers. Thus testing requirements can be extensive. When time is limited (as it usually is) the focus should be on integration and system testing. Additionally, load/stress/performance testing may be useful in determining client/server application limitations and capabilities.
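Purely as an illustration of the load/performance angle, the sketch below fires a batch of concurrent requests at a hypothetical endpoint and reports failures and average response time (the URL, request count, concurrency, and timeout are all assumptions):

# Illustrative sketch: a very small load test against a hypothetical server.
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

URL = "http://localhost:8080/health"   # hypothetical endpoint
REQUESTS = 50
CONCURRENCY = 10

def timed_request(_):
    start = time.perf_counter()
    try:
        with urllib.request.urlopen(URL, timeout=5) as resp:
            resp.read()
        ok = True
    except Exception:
        ok = False
    return ok, time.perf_counter() - start

with ThreadPoolExecutor(max_workers=CONCURRENCY) as pool:
    results = list(pool.map(timed_request, range(REQUESTS)))

failures = sum(1 for ok, _ in results if not ok)
avg = sum(t for _, t in results) / len(results)
print(f"{REQUESTS} requests, {failures} failures, avg {avg:.3f}s")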

Does it matter how much the software has been tested already?

  • No. It is up to the tester to decide how much testing is needed, regardless of how much testing has already been done. An initial assessment of the software is made, and it is classified into one of three possible stability levels:
  1. Low stability (bugs are expected to be easy to find, indicating that the program has not been tested or has only been very lightly tested)
  2. Normal stability (normal level of bugs, indicating a normal amount of programmer testing)
  3. High stability (bugs are expected to be difficult to find, indicating already well tested)

How is testing affected by object-oriented designs?

  • Well-engineered object-oriented design can make it easier to trace from code to internal design to functional design to requirements. While there will be little effect on black box testing (where an understanding of the internal design of the application is unnecessary), white-box testing can be oriented to the application's objects. If the application was well designed, this can simplify test design.

Will automated testing tools make testing easier?

  • A tool set that allows controlled access to all test assets promotes better communication between all team members, and will ultimately break down the walls that have traditionally existed between various groups. Automated testing tools are only one part of the solution to achieving customer success. The complete solution is based on providing the user with the principles, tools, and services needed to efficiently develop software.

Why outsource testing?

  • Skill and expertise - Developing and maintaining a team that has the expertise to thoroughly test complex and large applications is expensive and effort intensive. Testing a software application now involves a variety of skills.
  • Focus - Using a dedicated and expert test team frees the development team to focus on sharpening their core skills in design and development, in their domain areas.
  • Independent assessment - An independent test team looks afresh at each test project while bringing with them the experience of earlier test assignments, for different clients, on multiple platforms and across different domain areas.
  • Save time - Testing can go on in parallel with the software development life cycle to minimize the time needed to develop the software.
  • Reduce cost - Outsourcing testing offers the flexibility of having a large test team only when needed. This reduces carrying costs and at the same time reduces the ramp-up time and costs associated with hiring and training temporary personnel.

What steps are needed to develop and run software tests?

The following are some of the steps needed to develop and run software tests:

  1. Obtain requirements, functional design, and internal design specifications and other necessary documents
  2. Obtain budget and schedule requirements
  3. Determine project-related personnel and their responsibilities, reporting requirements, required standards and processes (such as release processes, change processes, etc.)
  4. Identify application's higher-risk aspects, set priorities, and determine scope and limitations of tests
  5. Determine test approaches and methods - unit, integration, functional, system, load, usability tests, etc.
  6. Determine test environment requirements (hardware, software, communications, etc.)
  7. Determine test-ware requirements (record/playback tools, coverage analyzers, test tracking, problem/bug tracking, etc.)
  8. Determine test input data requirements
  9. Identify tasks, those responsible for tasks, and labor requirements
  10. Set schedule estimates, timelines, milestones
  11. Determine input equivalence classes, boundary value analyses, error classes
  12. Prepare test plan document and have needed reviews/approvals
  13. Write test cases
  14. Have needed reviews/inspections/approvals of test cases
  15. Prepare test environment and test ware, obtain needed user manuals/reference documents/configuration guides/installation guides, set up test tracking processes, set up logging and archiving processes, set up or obtain test input data
  16. Obtain and install software releases
  17. Perform tests
  18. Evaluate and report results
  19. Track problems/bugs and fixes
  20. Retest as needed
  21. Maintain and update test plans, test cases, test environment, and test ware through the life cycle

Tuesday, September 18, 2007

Important Terminologies in Software Testing


Traceability Matrix

A traceability matrix, or Requirement Traceability Matrix (RTM), is a matrix used to keep track of the requirements. It is a mapping between the requirements and test cases, maintained in order to identify missing test cases. It is prepared by either the test lead or a test engineer along with the test lead. Exact requirements from the requirement document given by the client are copied into this matrix. Each requirement is assigned a unique number and marked as testable or not. Against each testable requirement, a test objective and test case are identified. It is quite possible that one requirement has multiple test objectives and test cases. Each test objective and test case is assigned a unique number; the numbering usually flows Requirement Id >> Test Objective Id >> Test Case Id.
Advantages:
a. We can trace missing test cases.
b. Whenever requirements change, we can easily refer to the matrix document, change the use case, go to the corresponding test cases, and change them.
c. It is easy to test any functionality; we only need to refer to the matrix document to reach the related test cases.
d. We can trace the impact of functionalities on one another, because different functionalities can share the same test cases.
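A minimal sketch of the idea (IDs and structure invented for illustration): a requirement-to-test-case mapping that can be queried for requirements with no test cases, and reversed to see what a given test case covers:

# Illustrative sketch: a tiny requirement traceability matrix.
rtm = {
    "REQ-001": ["TC-001", "TC-002"],   # one requirement may map to many test cases
    "REQ-002": ["TC-003"],
    "REQ-003": [],                     # no test case yet -> missing coverage
}

missing = [req for req, cases in rtm.items() if not cases]
print("Requirements without test cases:", missing)

# Reverse mapping: which requirements does a given test case cover?
covers = {}
for req, cases in rtm.items():
    for tc in cases:
        covers.setdefault(tc, []).append(req)
print("TC-001 covers:", covers.get("TC-001", []))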

Bug Density

1. Bug Density: Bug density is the number of bugs found per 1,000 lines of code. Every organization sets this baseline to its own requirements and needs; it can be 100 lines or any other number, depending on the scale of the project.
2. What is defect density? Defect density is the number of defects per unit size of code. It is a metric equal to the ratio of the number of defects to the number of lines of code:
Defect Density = Defects / unit size
DD = Total Defects / KLOC (kilo lines of code)
Example: Suppose 10 bugs are found in 1 KLOC; the defect density is therefore 10 defects/KLOC.
3. What is a defect matrix? The time at which defects were discovered relative to when they were inserted into the software.
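A tiny illustrative calculation of defect density per KLOC, matching the 10-bugs-in-1-KLOC example above:

# Illustrative sketch: defect density per KLOC.
def defect_density(defects, lines_of_code, unit=1000):
    """Defects per `unit` lines of code (default: per KLOC)."""
    return defects / (lines_of_code / unit)

print(defect_density(10, 1000))    # 10.0 defects per KLOC
print(defect_density(45, 15000))   # 3.0 defects per KLOC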

System Testing!!

What is System testing? Testing conducted on a complete, integrated system to evaluate the system's compliance with its specified requirements [IEEE 90]. System testing is done on the entire system against the Functional Requirement Specification (FRS) and/or the System Requirement Specification (SRS).
Types of system testing:
a. User interface testing
b. Usability testing
c. Performance testing
d. Compatibility testing
e. Error handling testing
f. Load testing
g. Volume testing
h. Stress testing
i. User help testing
j. Security testing
k. Scalability testing
l. Capacity testing
m. Sanity testing
n. Smoke testing
o. Exploratory testing
p. Adhoc testing
q. Regression testing
r. Reliability testing
s. Recovery testing
t. Installation testing
u. Idempotency testing
v. Maintenance testing

Severity and Priority

Severity: Severity determines the defect's effect on the application. Severity is assigned by testers.
Priority: Priority determines the urgency with which the defect needs to be repaired. Priority is assigned by the test lead or project manager.
1. High Severity & Low Priority : For example an application which generates some banking related reports weekly, monthly, quarterly & yearly by doing some calculations. If there is a fault while calculating yearly report. This is a high severity fault but low priority because this fault can be fixed in the next release as a change request.
2. High Severity & High Priority : In the above example if there is a fault while calculating weekly report. This is a high severity and high priority fault because this fault will block the functionality of the application immediately within a week. It should be fixed urgently.
3. Low Severity & High Priority : If there is a spelling mistake or content issue on the homepage of a website which has daily hits of lakhs. In this case, though this fault is not affecting the website or other functionalities but considering the status and popularity of the website in the competitive market it is a high priority fault.
4. Low Severity & Low Priority : If there is a spelling mistake on the pages which has very less hits throughout the month on any website. This fault can be considered as low severity and low priority.Priority is used to organize the work. The field only takes meaning when owner of the bugP1 Fix in next buildP2 Fix as soon as possibleP3 Fix before next releaseP4 Fix it time allowP5 Unlikely to be fixedDefault priority for new defects is set at P3

Bug, Error, Defect and Issue

a. Bug:A software bug is an error, flaw, mistake, failure, or fault in a program that prevents it from behaving as intended (e.g., producing an incorrect result). Most bugs arise from mistakes and errors made by people in either a program's source code or its design, and a few are caused by compilers producing incorrect code.
b. Error: An error is a mistake made by the developer in coding.
c. Defect: A defect is something that is in the requirement document but is not implemented, or is implemented in a wrong way.
d. Issue: An issue is something that is none of the above; for example, the site is slow, session-related problems, security problems, etc.
Waterfall Model

This is the most common and classic of life cycle models, also referred to as a linear-sequential life cycle model. It is very simple to understand and use. In a waterfall model, each phase must be completed in its entirety before the next phase can begin. At the end of each phase, a review takes place to determine if the project is on the right path and whether or not to continue or discard the project.
a. Requirements
b. Design
c. Implementation & Unit Testing
d. Integration & System Testing
e. Operation
Advantages
a. Simple and easy to use.
b. Easy to manage due to the rigidity of the model – each phase has specific deliverables and a review process.
c. Phases are processed and completed one at a time.
d. Works well for smaller projects where requirements are very well understood.
Disadvantages
a. Adjusting scope during the life cycle can kill a project.
b. Poor model for complex and object-oriented projects.
c. Poor model for long and ongoing projects.
d. Poor model where requirements are at a moderate to high risk of changing.
Spiral Model

The spiral model places more emphasis on risk analysis. It has four phases: Planning, Risk Analysis, Engineering and Evaluation. A software project repeatedly passes through these phases in iterations (called spirals in this model). In the baseline spiral, starting in the planning phase, requirements are gathered and risk is assessed. Each subsequent spiral builds on the baseline spiral.
Requirements are gathered during the planning phase. In the risk analysis phase, a process is undertaken to identify risk and alternate solutions. A prototype is produced at the end of the risk analysis phase.
Software is produced in the engineering phase, along with testing at the end of the phase. The evaluation phase allows the customer to evaluate the output of the project to date before the project continues to the next spiral.
In the spiral model, the angular component represents progress, and the radius of the spiral represents cost.
Advantages
a. High amount of risk analysis.
b. Good for large and mission-critical projects.
c. Software is produced early in the software life cycle.
Disadvantages
a. Can be a costly model to use.
b. Risk analysis requires highly specific expertise.
c. Project’s success is highly dependent on the risk analysis phase.
d. Doesn’t work well for smaller projects.

Software Testing

Black Box Testing
Black Box testing refers to the technique of testing a system with no knowledge of the internals of the system. Black Box testers do not have access to the source code and are oblivious of the system architecture. A Black Box tester typically interacts with a system through a user interface by providing inputs and examining outputs without knowing where and how the inputs were operated upon. In Black Box testing, target software is exercised over a range of inputs and the outputs are observed for correctness.
Advantages
a. Efficient Testing — Well suited and efficient for large code segments or units.
b. Unbiased Testing — clearly separates user's perspective from developer's perspective through separation of QA and Development responsibilities.
c. Non intrusive — code access not required.
d. Easy to execute — can be scaled to a large number of moderately skilled testers with no knowledge of implementation, programming language, operating systems or networks.
Disadvantages
a. Localized Testing — Limited code path coverage since only a limited number of test inputs are actually tested.
b. Inefficient Test Authoring — without implementation information, exhaustive input coverage would take forever and would require tremendous resources.
c. Blind Coverage — cannot control targeting code segments or paths which may be more error prone than others.

White Box Testing
White Box testing refers to the technique of testing a system with knowledge of the internals of the system. White Box testers have access to the source code and are aware of the system architecture. A White Box tester typically analyzes source code, derives test cases from knowledge about the source code, and finally targets specific code paths to achieve a certain level of code coverage. A White Box tester with access to details about both operations can readily craft efficient test cases that exercise boundary conditions.
Advantages
a. Increased Effectiveness — Crosschecking design decisions and assumptions against source code may outline a robust design, but the implementation may not align with the design intent.
b. Full Code Pathway Capable — all the possible code pathways can be tested including error handling, resource dependencies, and additional internal code logic/flow.
c. Early Defect Identification — Analyzing source code and developing tests based on the implementation details enables testers to find programming errors quickly.
d. Reveal Hidden Code Flaws — access to source code improves understanding and uncovering unintended hidden behavior of program modules.
Disadvantages
a. Difficult To Scale — requires intimate knowledge of the target system, testing tools, coding languages, and modeling. It suffers in scalability because skilled and expert testers are scarce.
b. Difficult to Maintain — requires specialized tools such as source code analyzers, debuggers, and fault injectors.
c. Cultural Stress — the demarcation between developer and testers starts to blur which may become a cultural stress.
d. Highly Intrusive — requires code modification, done using interactive debuggers or by actually changing the source code. This may be adequate for small programs; however, it does not scale well to larger applications. Not useful for networked or distributed systems.

Gray Box Testing
Gray Box testing refers to the technique of testing a system with limited knowledge of the internals of the system. Gray Box testers have access to detailed design documents with information beyond requirement documents. Gray Box tests are generated based on information such as state-based models or architecture diagrams of the target system.
Advantages
a. Offers Combined Benefits — Leverage strengths of both Black Box and White Box testing wherever possible.
b. Non Intrusive — Gray Box testing does not rely on access to source code or binaries; instead, it is based on interface definitions, functional specifications, and application architecture.
c. Intelligent Test Authoring — Based on the limited information available, a Gray Box tester can author intelligent test scenarios, especially around data type handling, communication protocols and exception handling.
d. Unbiased Testing — The demarcation between testers and developer is still maintained. The handoff is only around interface definitions and documentation without access to source code or binaries.
Disadvantages
a. Partial Code Coverage — Since the source code or binaries are not available, the ability to traverse code paths is still limited by the tests deduced from available information. The coverage depends on the tester's authoring skill.
b. Defect Identification — Inherent to distributed applications is the difficulty of defect identification. Gray Box testing is still at the mercy of how well systems throw exceptions and how well these exceptions are propagated within a distributed Web Services environment.

Difference between Black Box and White Box Testing

1. Synonyms for black-box include: behavioral, functional, opaque-box, and closed-box.
2. Synonyms for white-box include: structural, glass-box and clear-box.
3. Generally, black box testing begins early in software development, in the requirement gathering phase itself. For the white box testing approach, one has to wait for the design to be complete.
4. We can use a black box testing strategy for systems of almost any size, small or large. White box testing is effective only for small pieces of code.
5. In white box testing we cannot test the performance of the application, but in black box testing we can.

QA Role

Anatomy of a Software Development Role: Quality Assurance

The Quality Assurance (QA) role is responsible for guaranteeing a level of quality for the end client, and for helping the software development team to identify problems early in the process. It is not surprising that people in this role are often known as "testers". Of course, the role is more than just testing. It's about contributing to the quality of the final product. (If you've not been following the series, you should read Cracking the Code: Breaking Down the Software Development Roles.)

What's the Quality Assurance role? The quality assurance (QA) role is one that is focused on creating a quality deliverable. In other words, it is the responsibility of the QA role to make sure that the software development process doesn't sacrifice quality in the name of completed objectives.

The QA role works with the Functional Analyst (FA) and the Solutions Architect (SA) to convert the requirements and design documents into a set of test cases and scripts, which can be used to verify that the system meets the client's needs. This collection of test cases and scripts is collectively referred to as a test plan. The test plan document itself is often simple, providing an overview of each of the test cases. The test cases and scripts are also used to validate that there are no unexplained errors in the system.

The test plan is approved by the Subject Matter Experts (SMEs) and represents the criteria to reach project closing. If the test cases and scripts in the test plan are the agreed-upon acceptance criteria for a project, then all that is necessary for project closure is to demonstrate that all of the test cases and scripts have been executed successfully with passing results.

A test case is a general-purpose statement that maps to one or more requirements and design points. It is the overall item being tested. It may be a specific usability feature, or a technical feature that was supposed to be implemented as a part of the project. Test scripts fit into the test cases by validating that case. Test scripts are step-by-step instructions on what to do, what to look for, and what should happen. While the test cases can be created with nearly no input from the architecture or design, the test scripts are specific to how the problem was solved by the software development team and therefore require an understanding of not only the requirements, but also the architecture, design, and detailed design.

The quality assurance role is split into three parts. First, the role creates test cases and scripts. Second, the role executes or supervises the execution of those test cases and scripts. Third, the role facilitates or performs random testing of all components to ensure that there's not a random bug haunting the system.

In some organizations, the quality assurance role has two specializations. The first is the classic functional testing and quality assurance described above. The second is a performance quality assurance role, where the performance of the completed solution is measured and quantified. The performance QA role is an important part of the quality assurance process for large system development.

The quality assurance role also covers a wide range of potential titles and specific responsibilities, from the entry-level quality assurance professional who executes and documents tests to the QA lead who works with the FA and SA to create the testing plan, cases, and scripts.
The role also extends to the QA manager position, which may take responsibility for the quality of a solution. At this level the QA manager and solutions architect work as peers to ensure the final solution has the highest quality.

Why Software Testing is so hard?



What Is Software Testing? And Why Is It So Hard?

Software testing is arguably the least understood part of the development process. Through a four-phase approach, the author shows why eliminating bugs is tricky and why testing is a constant trade-off.
Virtually all developers know the frustration of having software bugs reported by users. When this happens, developers inevitably ask: How did those bugs escape testing? Countless hours doubtless went into the careful testing of hundreds or thousands of variables and code statements, so how could a bug have eluded such vigilance? The answer requires, first, a closer look at software testing within the context of development. Second, it requires an understanding of the role
software testers and developers—two very different functions—play. Assuming that the bugs users report occur in a software product that really is in error, the answer could be any of these:
The user executed untested code. Because of time constraints, it’s not uncommon for developers to release untested code—code in which users can stumble across bugs.
The order in which statements were executed in actual use differed from that during testing. This order can determine whether software works or fails.
The user applied a combination of untested input values. The possible input combinations that thousands of users can make across a given software interface are simply too numerous for testers to apply them all. Testers must make tough decisions about which inputs to test, and sometimes we make the wrong decisions.
The user’s operating environment was never tested. We might have known about the environment but had no time to test it. Perhaps we did not (or could not) replicate the user’s combination of hardware, peripherals, operating system, and applications in our testing lab. For example, although companies that write networking software are unlikely to create a thousand-node network in their testing lab, users can—and do— create such networks.
Through an overview of the software testing problem and process, this article investigates the problems that testers face and identifies the technical issues that any solution must address. I also survey existing classes of solutions used in practice. Readers interested in further study will find the sidebar “Testing Resources” helpful.
Testers and the Testing Process
To plan and execute tests, software testers must consider the software and the function it computes, the inputs and how they can be combined, and the environment in which the software will eventually operate. This difficult, time-consuming process requires technical sophistication and proper planning. Testers must not only have good development skills—testing often requires a great deal of coding—but also be knowledgeable in formal languages, graph theory, and algorithms. Indeed, creative testers have brought many related computing disciplines to bear on testing problems, often with impressive results. Even simple software presents testers with obstacles, as the sidebar “A Sample Software Testing Problem” shows. To get a clearer view of some of software testing’s inherent difficulties, we can approach testing in four phases:
Modeling the software’s environment
Selecting test scenarios
Running and evaluating test scenarios
Measuring testing progress
These phases offer testers a structure in which to group related problems that they must solve before moving on to the next phase.
Phase 1: Modeling the Software’s Environment
A tester's task is to simulate interaction between software and its environment. Testers must identify and simulate the interfaces that a software system uses and enumerate the inputs that can cross each interface. This might be the most fundamental issue that testers face, and it can be difficult, considering the various file formats, communication protocols, and third-party APIs (application programming interfaces) available. Four common interfaces are as follows:
Human interfaces include all common methods for people to communicate with software. Most prominent is the GUI but older designs like the command line interface and the menu-driven interface are still in use. Possible input mechanisms to consider are mouse clicks, keyboard events, and input from other devices. Testers then decide how to organize this data to understand how to assemble it into an effective test.
Software interfaces, called APIs, are how software uses an operating system, database, or runtime library. The services these applications provide are modeled as test inputs. The challenge for testers is to check not only the expected but also the unexpected services. For example, all developers expect the operating system to save files for them. The service that they neglect is the operating system’s informing them that the storage medium is full. Even error messages must be tested.
File system interfaces exist whenever software reads or writes data to external files. Developers must write lots of error-checking code to determine if the file contains appropriate data and formatting. Thus, testers must build or generate files with content that is both legal and illegal, and files that contain a variety of text and formatting.
Communication interfaces allow direct access to physical devices (such as device drivers, controllers, and other embedded systems) and require a communication protocol. To test such software, testers must be able to generate both valid and invalid protocol streams. Testers must assemble—and submit to the software under test—many different combinations of commands and data, in the proper packet format. Next, testers must understand the user interaction that falls outside the control of the software under test, since the consequences can be serious if the software is not prepared. Examples of situations testers should address are as follows:
Using the operating system, one user deletes a file that another user has open. What will happen the next time the software tries to access that file?
A device gets rebooted in the middle of a stream of communication. Will the software realize this and react properly or just hang?
Two software systems compete for duplicate services from an API. Will the API correctly service both?
Each application’s unique environment can result in a significant number of user interactions to test.
Considerations
When an interface presents problems of infinite size or complexity, testers face two difficulties: they must carefully select values for any variable input, and they must decide how to sequence inputs.

In selecting values, testers determine the values of individual variables and assign interesting value combinations when a program accepts multiple variables as input. Testers most often use the boundary value partitioning technique1 for selecting single values for variables at or around boundaries. For example, testing the minimum, maximum, and zero values for a signed integer is a commonly accepted idea, as are the values surrounding each of these partitions—for example, 1 and –1 (which surround the zero boundary). The values between boundaries are treated as the same number; whether we use 16 or 16,000 makes no difference to the software under test. For multiple variables processed simultaneously that could potentially affect each other, testers must consider the entire cross product of value combinations: for two integers, both positive, both negative, one positive and one zero, and so forth.2

In deciding how to sequence inputs, testers have a sequence generation problem. Testers treat each physical input and abstract event as a symbol in the alphabet of a formal language and define a model of that language. A model lets testers visualize the set of possible tests to see how each test fits the big picture. The most common model is a graph or state diagram, although many variations exist. Other popular models include regular expressions and grammars, tools from language theory. Less-used models are stochastic processes and genetic algorithms. The model is a representation that describes how input and event symbols are combined to make syntactically valid words and sentences. These sentences are sequences of inputs that can be applied to the software under test. For example, consider the input Filemenu.Open, which invokes a file selection dialog box; filename, which represents the selection (with mouse clicks, perhaps) of an existing file; and ClickOpen and ClickCancel, which represent button presses. The sequence Filemenu.Open filename ClickOpen is legal, as are many others. The sequence ClickCancel Filemenu.Open is impossible because the cancel button cannot be pressed until the dialog box has been invoked. The model of the formal language can make such a distinction between sequences.
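As a small illustration of the value-selection problem (the 16-bit range is an assumption made only for this example), the sketch below lists boundary values for a signed integer and takes the cross product for two integer inputs processed together:

# Illustrative sketch: boundary values plus cross-product combinations.
from itertools import product

INT_MIN, INT_MAX = -32768, 32767        # assumed 16-bit signed integer
single_values = [INT_MIN, -1, 0, 1, INT_MAX]

# Cross product of value combinations for two integer inputs processed together.
pairs = list(product(single_values, repeat=2))
print(len(pairs), "combinations, e.g.", pairs[:3])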
Text editor example
We can represent legal uses of the file selection dialog in, for example, a text editor with the regular expression: Filemenu.Open filename* (ClickOpen | ClickCancel)
in which the asterisk represents the Kleene closure operator indicating that the filename action can occur zero or more times. This expression indicates that the first input received is Filemenu.Open followed by zero or more selections of a filename (with a combination of mouse clicks and keyboard entries), then either the Open or Cancel button is pressed. This simple model represents every combination of inputs that can happen, whether they make sense or not. To fully model the software environment for the entire text editor, we would need to represent sequences for the user interface and the operating system interface. Furthermore, we would need a description of legal and corrupt files to fully investigate file system interaction. Such a formidable task would require the liberal use of decomposition and abstraction.
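A minimal sketch of how such a model can drive test generation: it enumerates short legal sentences of the Filemenu.Open filename* (ClickOpen | ClickCancel) language, capping the number of filename repetitions (the cap is an arbitrary choice for the example):

# Illustrative sketch: enumerating legal input sequences from the dialog model.
def legal_sequences(max_filenames=2):
    for n in range(max_filenames + 1):
        for terminator in ("ClickOpen", "ClickCancel"):
            yield ["Filemenu.Open"] + ["filename"] * n + [terminator]

for seq in legal_sequences():
    print(" ".join(seq))

# An illegal sequence such as "ClickCancel Filemenu.Open" is never generated,
# because the model only produces sentences the dialog can actually accept.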
Phase 2: Selecting Test Scenarios
Many domain models and variable partitions represent an infinite number of test scenarios, each of which costs time and money. Only a subset can be applied in any realistic software development schedule, so how does a smart tester choose? Is 17 a better integer than 34? How many times should a filename be selected before pressing the Open button? These questions, which have many answers, are being actively researched. Testers, however, prefer an answer that relates to coverage of source code or its input domain. Testers strive for coverage: covering code statements (executing each source line at least once) and covering inputs (applying each externally generated event). These are the minimum criteria that testers use to judge the completeness of their work; therefore, the test set that many testers choose is the one that meets their coverage goals. But if code and input coverage were sufficient, released products would have very few bugs. Concerning the code, it isn’t individual code statements that interest testers but execution paths: sequences of code statements representing an execution of the software. Unfortunately, there are an infinite number of paths. Concerning the input domain, it isn’t the individual inputs that interest testers but input sequences that, taken as a whole, represent scenarios to which the software must respond. There are an infinite number of these, too. Testers sort through these infinite sets to arrive at the best possible test data adequacy criteria, which are meant to adequately and economically represent any of the infinite sets. “Best” and “adequately” are subjective; testers typically seek the set that will find the most bugs. (High and low bug counts, and their interpretation, are discussed later). Many users and quality assurance professionals are interested in having testers evaluate typical use scenarios— things that will occur most often in the field. Such testing ensures that the software works as specified and that the most frequently occurring bugs will have been detected. For example, consider the text editor example again. To test typical use, we would focus on editing and formatting since that is what real users do most. However, to find bugs, a more likely place to look is in the harder-to-code features like figure drawing and table editing.
Execution path test criteria
Test data adequacy criteria concentrate on either execution path coverage or input sequence coverage but rarely both. The most common execution path selection criteria focus on paths that cover control structures. For example,
Select a set of tests that cause each source statement to be executed at least once.
Select a set of tests that cause each branching structure (If, Case, While, and so on) to be evaluated with each of its possible values.
However, control flow is only one aspect of the source code. What software actually does is move data from one location to another. The dataflow family of test data adequacy criteria3 describes coverage of this data. For example,
Select a set of tests that cause each data structure to be initialized and then subsequently used.
Finally, fault seeding, which claims more attention from researchers than practitioners, is interesting.1 In this method, errors are intentionally inserted (seeded) into the source code. Test scenarios are then designed to find those errors. Ideally, by finding seeded errors, the tester will also find real errors. Thus, a criterion like the following is possible:
Select a set of tests that expose each of the seeded faults.
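To make the first two criteria concrete, here is a tiny invented example in Python: a single test can execute every statement of a function yet still leave one outcome of a branching structure unevaluated:

# Illustrative sketch: statement coverage versus branch coverage.
def classify(x):
    label = "non-negative"
    if x < 0:
        label = "negative"
    return label

# With x = -1 alone, every statement in classify() executes (statement
# coverage is met), but the False outcome of the `if` is never evaluated.
# Adding x = 5 exercises the other branch, satisfying branch coverage.
assert classify(-1) == "negative"      # takes the True branch
assert classify(5) == "non-negative"   # takes the False branch
print("statement and branch coverage both achieved by the two tests")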
Input domain test criteria
Criteria for input domain coverage range from simple coverage of an interface to more complex statistical measurement.
Select a set of tests that contain each physical input.
Select a set of tests that cause each interface control (window, menu, button, and so on) to be stimulated.
The discrimination criterion4 requires random selection of input sequences until they statistically represent the entire infinite input domain.
Select a set of tests that have the same statistical properties as the entire input domain.
Select a set of paths that are likely to be executed by a typical user.
Summary
Testing researchers are actively studying algorithms to select minimal test sets that satisfy criteria for execution paths and input domains. Most researchers would agree that it is prudent to use multiple criteria when making important release decisions. Experiments comparing test data adequacy criteria are needed, as are new criteria. However, for the present, testers should be aware which criteria are built into their methodology and understand the inherent limitations of these criteria when they report results.We’ll revisit test data adequacy criteria in the fourth phase, test measurement, because the criteria also serve as measures of test completeness.
Phase 3: Running and Evaluating Test Scenarios
Having identified suitable tests, testers convert them to executable form, often as code, so that the resulting test scenarios simulate typical user action. Because manually applying test scenarios is labor-intensive and error-prone, testers try to automate the test scenarios as much as possible. In many environments, automated application of inputs through code that simulates users is possible, and tools are available to help. Complete automation requires simulation of each input source and output destination of the entire operational environment. Testers often include data-gathering code in the simulated environment as test hooks or asserts. This code provides information about internal variables, object properties, and so forth. These hooks are removed when the software is released, but during test scenario execution they provide valuable information that helps testers identify failures and isolate faults.

Scenario evaluation, the second part of this phase, is easily stated but difficult to do (much less automate). Evaluation involves the comparison of the software's actual output, resulting from test scenario execution, to its expected output as documented by a specification. The specification is assumed correct; deviations are failures. In practice, this comparison is difficult to achieve. Theoretically, comparison (to determine equivalence) of two arbitrary, Turing-computable functions is unsolvable. Returning to the text editor example, if the output is supposed to be "highlight a misspelled word," how can we determine that each instance of misspelling has been detected? Such difficulty is the reason why the actual-versus-expected output comparison is usually performed by a human oracle: a tester who visually monitors screen output and painstakingly analyzes output data. (See the "Testing Terminology" sidebar for an explanation of other common testing terms.)
Two approaches to evaluating your test
In dealing with the problems of test evaluation, researchers are pursuing two approaches: formalism, and embedded test code. Formalism chiefly involves the hard work of formalizing the way specifications are written and the way that designs and code are derived from them.5 Both object-oriented and structured development contain mechanisms for formally expressing specifications to simplify the task of comparing expected and actual behavior. Industry has typically shied away from formal methods; nonetheless, a good specification, even an informal one, is still extremely helpful. Without a specification, testers are likely to find only the most obvious bugs. Furthermore, the absence of a specification wastes significant time when testers report unspecified features as bugs.

There are essentially two types of embedded test code. The simplest type is test code that exposes certain internal data objects or states that make it easier for an external oracle to judge correctness. As implemented, such functionality is invisible to users. Testers can access test code results through, for example, a test API or a debugger. A more complex type of embedded code features self-testing programs.6 Sometimes this involves coding multiple solutions to the problem and having one solution check the other, or writing inverse routines that undo each operation. If an operation is performed and then undone, the resulting software state should be equivalent to its preoperational state. In this situation, the oracle is not perfect; there could be a bug in both operations where each bug masks the other.
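As a toy illustration of the inverse-routine idea, the sketch below uses a decode operation to undo an encode operation and treats a failed round trip as evidence that at least one of the two is wrong (the base64 pairing is just an example, and as noted above, matching bugs in both routines could mask each other):

# Illustrative sketch: an inverse routine used as a partial oracle.
import base64

def encode(text: str) -> bytes:
    return base64.b64encode(text.encode("utf-8"))

def decode(blob: bytes) -> str:
    return base64.b64decode(blob).decode("utf-8")

for sample in ["hello", "", "naïve café", "line1\nline2"]:
    assert decode(encode(sample)) == sample, f"round trip failed for {sample!r}"
print("round-trip oracle passed for all samples")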
Regression testing
After testers submit successfully reproduced failures to development, developers generally create a new version of the software (in which the bug has been supposedly removed). Testing progresses through subsequent software versions until one is determined to be fit for release. The question is, how much retesting (called regression testing) of version n is necessary using the tests that were run against version n – 1? Any specific fix can (a) fix only the problem that was reported, (b) fail to fix the problem, (c) fix the problem but break something that was previously working, or (d) fail to fix the problem and break something else. Given these possibilities, it would seem prudent to rerun every test from version n – 1 on version n before testing anything new, although such a practice is generally cost-prohibitive.7 Moreover, new software versions often feature extensive new functionality, in addition to the bug fixes, so the regression tests would take time away from testing new code. To save resources, then, testers work closely with developers to prioritize and minimize regression tests. Another drawback to regression testing is that these tests can (temporarily) alter the purpose of the test data adequacy criteria selected in the earlier test selection phase. When performing regression tests, testers seek only to show the absence of a fault and to force the application to exhibit specific behavior. The outcome is that the test data adequacy criteria, which until now guided test selection, are ignored. Instead, testers must ensure that a reliable fix to the code has been made.
Related concerns
Ideally, developers will write code with testing in mind. If the code will be hard to test and verify, then it should be rewritten to make it more testable. Likewise, a testing methodology should be judged by its contribution to solving automation and oracle problems. Too many methodologies provide little guidance in either area. Another concern for testers while running and verifying tests is the coordination of debugging activity with developers. As failures are identified by testers and diagnosed by developers, two issues arise: failure reproduction and test scenario re-execution. Failure reproduction is not the no-brainer it might seem. The obvious answer is, of course, to simply rerun the offending test and observe the errant behavior again, although rerunning a test does not guarantee that the exact same conditions will be created. Scenario re-execution requires that we know the exact state of the operating system and any companion software—for example, client–server applications would require reproduction of the conditions surrounding both the client and the server. Additionally, we must know the state of test automation, peripheral devices, and any other background application running locally or over the network that could affect the application being tested. It is no wonder that one of the most commonly heard phrases in a testing lab is, “Well, it was behaving differently before….”
Phase 4: Measuring Testing Progress
Suppose I am a tester and one day my manager comes to me and asks, “What’s the status of your testing?” Testers are often asked this question but are not well equipped to answer it. The reason is that the state of the practice in test measurement is to count things. We count the number of inputs we’ve applied, the percentage of code we’ve covered, and the number of times we’ve invoked the application. We count the number of times we’ve terminated the application successfully, the number of failures we found, and so on. Interpreting such counts is difficult—is finding lots of failures good news or bad? The answer could be either. A high bug count could mean that testing was thorough and very few bugs remain. Or, it could mean that the software simply has lots of bugs and, even though many have been exposed, lots of them remain. Since counting measures yield very little insight about the progress of testing, many testers augment this data by answering questions designed to ascertain structural and functional testing completeness. For example, to check for structural completeness, testers might ask these questions:
Have I tested for common programming errors?8
Have I exercised all of the source code?1
Have I forced all the internal data to be initialized and used?3
Have I found all seeded errors?1
To check for functional completeness, testers might ask these questions:
Have I thought through the ways in which the software can fail and selected tests that show it doesn’t?9
Have I applied all the inputs?1
Have I completely explored the state space of the software?4
Have I run all the scenarios that I expect a user to execute?10
These questions—essentially, test data adequacy criteria—are helpful to testers; however, determining when to stop testing, determining when a product is ready to release, is more complex. Testers want quantitative measures of the number of bugs left in the software and of the probability that any of these bugs will be discovered in the field. If testers can achieve such a measure, they know to stop testing. We can approach the quantitative problem structurally and functionally.
Testability
From a structural standpoint, Jeffrey Voas has proposed testability11 as a way to determine an application's testing complexity. The idea that the number of lines of code determines the software's testing difficulty is obsolete; the issue is much murkier. This is where testability comes into play. If a product has high testability, it is easy to test and, consequently, easier to find bugs in. We can then monitor testing and observe that because bugs are fewer, it is unlikely that many undiscovered ones exist. Low testability would require many more tests to draw the same conclusions; we would expect that bugs are harder to find. Testability is a compelling concept but in its infancy; no data on its predictive ability has yet been published.
Reliability models
How long will the software run before it fails? How expensive will the software be to maintain? It is certainly better to find this out while you still have the software in your testing lab. From a functional standpoint, reliability models10—mathematical models of test scenarios and failure data that attempt to predict future failure patterns based on past data—are well established. These models thus attempt to predict how software will behave in the field based on how it behaved during testing. To accomplish this, most reliability models require the specification of an operational profile, a description of how users are expected to apply inputs. To compute the probability of failure, these models make some assumptions about the underlying probability distribution that governs failure occurrences. Researchers and practitioners alike have expressed skepticism that such profiles can be accurately assembled. Furthermore, the assumptions made by common reliability models have not been theoretically or experimentally verified except in specific application domains. Nevertheless, successful case studies have shown these models to be credible.
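Purely as a toy illustration (the failure data and the constant-failure-rate assumption are invented, and real reliability models are considerably richer), an exponential model estimates a failure rate from test data and predicts the probability of failure-free operation:

# Illustrative toy sketch: a simple exponential reliability estimate.
import math

test_hours = 500.0
failures_observed = 4
failure_rate = failures_observed / test_hours      # lambda, failures per hour

def reliability(hours):
    """Probability of running `hours` without failure under the toy model."""
    return math.exp(-failure_rate * hours)

print(f"lambda = {failure_rate:.4f} failures/hour")
print(f"P(no failure in 24h)  = {reliability(24):.3f}")
print(f"P(no failure in 168h) = {reliability(168):.3f}")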
Software companies face serious challenges in testing their products, and these challenges are growing bigger as software grows more complex. The first and most important thing to be done is to recognize the complex nature of testing and take it seriously. My advice: Hire the smartest people you can find, help them get the tools and training they need to learn their craft, and listen to them when they tell you about the quality of your software. Ignoring them might be the most expensive mistake you ever make. Testing researchers likewise face challenges. Software companies are anxious to fund good research ideas, but the demand for more practical, less academic work is strong. The time to tie academic research to real industry products is now. We’ll all come out winners.

WHY NOT EXPLORATORY TESTING


Why not Exploratory Testing?

Most Test Managers say, "Testing without Test Plans is a crime". Testers should know what is being built and should analyse how to proceed; they have to prepare Test Plans based on that in order to proceed with testing the application. Good knowledge of Exploratory Testing is necessary for reading this article. For those who don't have much idea about Exploratory Testing, a small intro is given here.

"The plainest definition of exploratory testing is test design and test execution at the same time. Exploratory software testing is a powerful and fun approach to testing. In some situations, it can be orders of magnitude more productive than scripted testing. I haven’t found a tester yet who didn’t, at least unconsciously, perform exploratory testing at one time or another. Yet few of us study this approach, and it doesn’t get much respect in our field. It’s high time we stop the denial, and publicly recognize the exploratory approach for what it is: scientific thinking in real time. Friends, that’s a good thing".
–James Bach

There are lots of myths behind Exploratory Testing, and the objective of this writing is to expose them. There are lots of proofs which tell that Exploratory Testing is the best among the other testing techniques. The reader of this article is recommended to have some idea about the advantages of Exploratory Testing. If the reader is a beginner, he can gather information about the circumstances under which one can go for Exploratory Testing, without knowing the reason behind it.

"A test that reveals a problem is a success. A test that did not reveal a problem was a waste of time"
- Cem Kaner

In this Internet arena, in an organization with no Software Configuration Management, projects are built without any kind of documentation. Even if documents are available, they are not updated as and when modifications from the client are received. The Project Manager conveys the modifications to his/her team by word of mouth and gets them done without getting them updated in the documentation. He ought to get the modifications updated in the documentation, but when he prioritizes getting the changes implemented in the code, the documentation work keeps receiving low priority and goes out of his mind once the code is done. This has a direct impact on the testing process. If you are deriving the test cases from the documentation, your test cases are going to get aged soon. It is not worth looking back at them; we cannot use them, as they have no relevance to the code. The time invested in preparing the test cases is wasted here.

Now consider a requirement that changes frequently, say 4-5 modifications per day. People tend to forget about the documentation and concentrate on the implementation, because of the time and cost involved. Though keeping the documents current may be unavoidable from a contractual point of view, the pressure to deliver and satisfy the customer on time pushes the Project Manager to forego the activity as overhead, often with the approval of top management. If the project lives its whole life in a planned but undocumented way, how can we expect the testing to happen in a planned, documented, properly scripted way?


"If you think you can fully test a program without testing its response to every possible input, fine. Give us a list of your test cases. We can write a program that will pass all your tests but still fail spectacularly on an input you missed. If we can do this deliberately, our contention is that we or other programmers can do it accidentally"
- Cem Kaner

Well-documented Test Scripts can miss bugs, and unplanned ad hoc testing can also miss bugs. For the former, time and money are invested in finding the bugs; like every process it carries its own entropy and leaves some bugs undiscovered, and the same applies to the latter. In an organization with no Software Configuration Management, experience shows that well-planned Test Scripts find fewer bugs than unplanned ad hoc testing does, and a cost-benefit analysis then pushes the Test Manager towards the ad hoc approach. Instead of settling for unplanned ad hoc testing, why not Exploratory Testing?

"Myers described a much simpler program in 1979. It was just a loop and a few IF statements. In most languages, you could write it in 20 lines of code. This program has 100 trillion paths; a fast tester could test them all in a billion years"
- Cem Kaner

Testing can never be claimed to be complete, and testers cannot claim that a program is 100% error-free. If 20 lines of code have 100 trillion paths, and we normally deal in KLOCs, extrapolation gives a staggering figure. To uncover most of the bugs we would have to select test cases from among the test cases for those trillions of paths, an enormous number. Planning tests of that kind takes more time than anyone working on Web time can afford. When no time is available, the tests become unplanned and ad hoc: what started with a good plan ends up unplanned and ad hoc. Instead of welcoming that, why not Exploratory Testing?
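To see where Myers' figure comes from, here is a rough back-of-the-envelope sketch in Python, assuming a loop that may run up to 20 times with 5 distinct paths through its body on each iteration (the exact shape of Myers' program may differ slightly; the point is the arithmetic of path explosion).

# Number of distinct execution paths through a loop of up to 20 iterations,
# each iteration offering 5 possible paths through the body.
paths_per_iteration = 5
max_iterations = 20
total_paths = sum(paths_per_iteration ** k for k in range(1, max_iterations + 1))
print("Total distinct paths: %d" % total_paths)   # about 1.2e14, i.e. roughly 100 trillion

# Even a very fast tester cannot exhaust them:
years = total_paths / (60.0 * 60 * 24 * 365)      # at one test per second
print("Years needed at one test per second: %.0f" % years)

Even at one test per second, around the clock, exhausting every path would take millions of years, which is why exhaustive path testing is not a realistic goal.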

"If you want and expect a program to work, you will be more likely to see a working program – you will miss failures. If you expect it to fail, you'll more likely to see the problems. If you are punished for reporting failures, you will miss failures. You won't only fail to report them – you will not notice them"
- Cem Kaner

Testing relies on the mindset of the testers; it is an art. Most people say that tests have to be well planned and then executed. Suppose we carefully plan 100 tests for a program, and those tests still leave, say, 20 bugs undiscovered. Testing is creative work: if the testers have to follow those 100 planned tests to the letter, they will not be exploratory while executing them, yet it is exactly during execution that they need to be exploratory in order to find the remaining 20 bugs. If those 20 bugs turn out to be major ones and the 100 planned tests yield only a handful of minor bugs, what is the advantage of investing a large amount of money in designing those 100 tests? Instead of that strategy, why not exploratory testing?

Testing is creative work. Agreed, the tester's exploratory nature is very much needed while designing the tests; at the same time, it is needed even more while executing them. We cannot claim that exploration is unnecessary during test execution, and we should not dismiss such exploration as unplanned ad hoc testing. The world got its electric light because of the exploration of Thomas Alva Edison, and its air traffic because of the exploration of the Wright brothers; exploration is what produces good and remarkable results in the end. Exploration at test-design time is essential for a complex application, to plan for the areas that are new to the tester; the areas already familiar to the tester need not be planned in the same detail, because he or she can find bugs there through Exploratory Testing. Even if tests for the complex parts of the application are not planned, they can be handled at execution time, provided the tester has real expertise in testing and, in particular, in Exploratory Testing.

Designing tests is also an art, and it too involves creativity. A Test Plan is simply the individual tester's ways of thinking put down on paper. Testing is a mindset, and the interpretation of information differs from person to person; if that is the case, then test coverage can be improved by having more than one person test the same piece of code. Thus, through different ways of exploring, we can achieve coverage.

Summary:

If the exploration is done on the application under test while testing it, that is termed Exploratory Testing. If the exploration is done on the application while designing the tests, that is termed Test Planning. Whatever term is used, exploration in testing is very much needed.


V-Model description


The V-shaped life cycle is a sequential path of execution of processes: each phase must be completed before the next phase begins. Testing, however, is emphasized in this model more than in the waterfall model. The testing procedures are developed early in the life cycle, before any coding is done, during each of the phases preceding implementation.
Requirements begin the life cycle, just as in the waterfall model. Before development is started, a system test plan is created; it focuses on verifying the functionality specified during requirements gathering.
The high-level design phase focuses on system architecture and design. An integration test plan is also created in this phase, to test the ability of the pieces of the software system to work together.
The low-level design phase is where the actual software components are designed, and unit tests are created in this phase as well.
The implementation phase is, again, where all coding takes place. Once coding is complete, the path of execution continues up the right side of the V where the test plans developed earlier are now put to use.
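As a small illustration of that mapping, here is a sketch of a unit test that could be drafted during the low-level design phase, against the designed interface of a component, and then executed once the implementation phase is complete. The module, function, and test names are hypothetical, not taken from any real project.

import unittest

def parse_amount(text):
    # Implementation written during the coding phase, to the low-level design.
    value = float(text.strip())
    if value < 0:
        raise ValueError("amount must be non-negative")
    return round(value, 2)

class ParseAmountTests(unittest.TestCase):
    # Unit tests drafted from the low-level design, before the code existed.
    def test_valid_amount_is_rounded_to_cents(self):
        self.assertEqual(parse_amount(" 19.999 "), 20.0)

    def test_negative_amount_is_rejected(self):
        self.assertRaises(ValueError, parse_amount, "-1")

if __name__ == "__main__":
    unittest.main()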
Advantages
a. Simple and easy to use.
b. Each phase has specific deliverables.
c. Higher chance of success than the waterfall model, due to the development of test plans early in the life cycle.
d. Works well for small projects where requirements are easily understood.
Disadvantages
a. Very rigid, like the waterfall model.
b. Software is developed during the implementation phase, so no early prototypes of the software are produced.
c. Model doesn’t provide a clear path for problems found during testing phases.