Web Test Plan Development
The objective of a test plan is to provide a roadmap so that the Web site can be evaluated against requirements or design statements. A test plan is a document that describes the objectives and scope of a Web site project. When you prepare a test plan, you should think through the process of the Web site test. The plan should be written so that it successfully gives the reader a complete picture of the Web site project, and it should be thorough enough to be useful. Following are some of the items that might be included in a test plan. Keep in mind that the items may vary depending on the Web site project.
The Web Testing Process

[Diagram: the Web testing process spans the Internet, the Web browser, and the Web server.]
PROJECT

  • Title of the project:
  • Date:
  • Prepared by:
PURPOSE OF DOCUMENT

  • Objective of testing: Why are you testing the application? Who, what, when, where, why, and how should be some of the questions you ask in this section of the test plan.
  • Overview of the application: What is the purpose of the application? What are the specifications of the project?
TEST TEAM

  • Responsible parties: Who is responsible and in charge of the testing?
  • List of test team: What are the names and titles of the people on the test team?
RISK ASSUMPTIONS

  • Anticipated risks: What types of risks are involved that could cause the test to fail?
  • Similar risks from previous releases: Have there been documented risks from previous tests that may be helpful in setting up the current test?
SCOPE OF TESTING

  • Possible limitations of testing: Are there any factors that may inhibit the test, such as resources and budget?
  • Impossible testing: What considerations could prevent the planned tests from being carried out?
  • Anticipated output: What are the anticipated outcomes of the test, and have they been documented for comparison?
  • Anticipated input: What inputs are needed to produce the anticipated outcomes documented for comparison?
TEST ENVIRONMENT

Hardware:

  • What are the operating systems that will be used?
  • What is the compatibility of all the hardware being used?

Software:

  • What data configurations are needed to run the software?
  • Have all the required interfaces to other systems been considered?
  • Are the software and hardware compatible?
TEST DATA

  • Database setup requirements: Does test data need to be generated, or will specific data from production be captured and used for testing?
  • Setup requirements: Who will be responsible for setting up the environment and maintaining it throughout the testing process?
TEST TOOLS

  • Automated: Will automated tools be used?
  • Manual: Will manual testing be done?
DOCUMENTATION

  • Test cases: Are there test cases already prepared or will they need to be prepared?
  • Test scripts: Are there test scripts already prepared or will they need to be prepared?

PROBLEM TRACKING

  • Tools: What type of tools will be selected?
  • Processes: Who will be involved in the problem tracking process?
REPORTING REQUIREMENTS

  • Testing deliverables: What are the deliverables for the test?
  • Retests: How will the retesting reporting be documented?
PERSONNEL RESOURCES

  • Training: Will training be provided?
  • Implementation: How will training be implemented?
ADDITIONAL DOCUMENTATION

  • Appendix: Will samples be included?
  • Reference materials: Will there be a glossary, acronyms, and/or a data dictionary?
Once you have written your test plan, you should address some of the following issues and questions:

  • Verify plan. Make sure the plan is workable, the dates are realistic, and that the plan is published. How will the test plan be implemented and what are the deliverables provided to verify the test?
  • Validate changes. Changes should be recorded by a problem tracking system and assigned to a developer to make revisions, retest, and sign off on changes that have been made.
  • Acceptance testing. Acceptance testing allows the end users to verify that the system works according to their expectation and the documentation. Certification of the Web site should be recorded and signed off by the end users, testers, and management.

  • Test reports. Reports should be generated and the data should be checked and validated by the test team and users.

Software Testing FAQ Part 1

Q: What is software quality assurance?

Software Quality Assurance (SWQA) is oriented toward *prevention*. It involves the entire software development process. Prevention means monitoring and improving the process, making sure any agreed-upon standards and procedures are followed, and ensuring problems are found and dealt with. Software testing, when performed by Rob Davis, is oriented toward *detection*. Testing involves the operation of a system or application under controlled conditions and evaluating the results. Organizations vary considerably in how they assign responsibility for QA and testing. Sometimes they are the combined responsibility of one group or individual. Also common are project teams that include a mix of test engineers, testers and developers who work closely together, with overall QA processes monitored by project managers. It depends on what best fits your organization’s size and business structure. Rob Davis can provide QA and/or SWQA. This document details some aspects of how he can provide software testing/QA services.

Q: What is quality assurance?

Quality Assurance ensures all parties concerned with the project adhere to the process and procedures, standards and templates, and test readiness reviews. Rob Davis’ QA service depends on the customers and projects. A lot will depend on team leads or managers, feedback to developers, and communications among customers, managers, developers, test engineers and testers.

Q: Processes and procedures – why follow them?

Detailed and well-written processes and procedures ensure the correct steps are being executed to facilitate a successful completion of a task. They also ensure a process is repeatable. Once Rob Davis has learned and reviewed customer’s business processes and procedures, he will follow them. He will also recommend improvements and/or additions.

Q: Standards and templates – what is supposed to be in a document?

All documents should be written to a certain standard and template. Standards and templates maintain document uniformity. They also help readers learn where information is located, making it easier for a user to find what they want. Lastly, with standards and templates, information will not be accidentally omitted from a document. Once Rob Davis has learned and reviewed your standards and templates, he will use them. He will also recommend improvements and/or additions.

Q: What are the different levels of testing?

Rob Davis has expertise in testing at all of the testing levels listed in these FAQs. At each test level, he documents the results. Each level of testing is considered either black box or white box testing.

Q: What is black box testing?

Black box testing is functional testing, not based on any knowledge of internal software design or code. Black box testing is based on requirements and functionality.

Q: What is white box testing?

White box testing is based on knowledge of the internal logic of an application’s code. Tests are based on coverage of code statements, branches, paths and conditions.

Q: What is unit testing?

Unit testing is the first level of dynamic testing and is first the responsibility of developers and then that of the test engineers. Unit testing is considered complete when the expected test results are met or differences are explainable/acceptable.
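
To make this concrete, here is a minimal unit-test sketch in Python's built-in unittest module (my own illustration, not part of the original FAQ); the add function is a hypothetical unit under test.

import unittest

def add(a, b):
    # Hypothetical unit under test: returns the sum of two numbers.
    return a + b

class TestAdd(unittest.TestCase):
    def test_positive_numbers(self):
        # The expected result is defined up front and compared to the actual result.
        self.assertEqual(add(2, 3), 5)

    def test_negative_numbers(self):
        self.assertEqual(add(-1, -1), -2)

if __name__ == "__main__":
    unittest.main()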

Q: What is parallel/audit testing?

Parallel/audit testing is testing where the user reconciles the output of the new system to the output of the current system to verify the new system performs the operations correctly.

Q: What is functional testing?

Functional testing is a black-box type of testing geared to the functional requirements of an application. Test engineers should perform functional testing.

Q: What is usability testing?

Usability testing is testing for ‘user-friendliness’. Clearly this is subjective and depends on the targeted end-user or customer. User interviews, surveys, video recording of user sessions and other techniques can be used. Test engineers are needed, because programmers and developers are usually not appropriate as usability testers.

Q: What is incremental integration testing?

Incremental integration testing is continuous testing of an application as new functionality is added. This may require that various aspects of an application’s functionality be independent enough to work separately before all parts of the program are completed, or that test drivers be developed as needed. This type of testing may be performed by programmers, software engineers, or test engineers.

Q: What is integration testing?

Upon completion of unit testing, integration testing begins. Integration testing is black box testing. The purpose of integration testing is to ensure distinct components of the application still work in accordance with customer requirements. Test cases are developed with the express purpose of exercising the interfaces between the components. This activity is carried out by the test team. Integration testing is considered complete when actual results and expected results are either in line or differences are explainable/acceptable based on client input.

Q: What is system testing?

Upon completion of integration testing, system testing begins. System testing is black box testing performed by the test team; at the start of system testing, the complete system is configured in a controlled environment. The purpose of system testing is to validate an application’s accuracy and completeness in performing the functions as designed. System testing simulates real-life scenarios in a “simulated real life” test environment and tests all functions of the system that are required in real life. System testing is deemed complete when actual results and expected results are either in line or differences are explainable or acceptable, based on client input. Before system testing, all unit and integration test results are reviewed by SWQA to ensure all problems have been resolved. For a higher level of testing it is important to understand unresolved problems that originate at the unit and integration test levels.

Q: What is end-to-end testing?

End-to-end testing is similar to system testing, at the *macro* end of the test scale; it is testing a complete application in a situation that mimics real-life use, such as interacting with a database, using network communications, or interacting with other hardware, applications, or systems.

Q: What is regression testing?

The objective of regression testing is to ensure the software remains intact. A baseline set of data and scripts is maintained and executed to verify that changes introduced during the release have not “undone” any previous code. Expected results from the baseline are compared to results of the software under test. All discrepancies are highlighted and accounted for, before testing proceeds to the next level.
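
As a rough sketch of that baseline idea in Python (my own illustration; the baseline file name and the run_system stand-in are hypothetical):

import json

def run_system(case):
    # Placeholder for the system under test; doubling the input stands in
    # for whatever the real system computes.
    return case * 2

def regression_check(cases, baseline_path="baseline.json"):
    # Baseline results were captured and saved during an earlier release.
    with open(baseline_path) as f:
        baseline = json.load(f)
    discrepancies = []
    for case in cases:
        actual = run_system(case)
        expected = baseline.get(str(case))
        if actual != expected:
            discrepancies.append((case, expected, actual))
    # Testing proceeds to the next level only once this list is empty
    # or every entry has been accounted for.
    return discrepancies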

Q: What is sanity testing?

Sanity testing is cursory testing; it is performed whenever cursory testing is sufficient to prove that the application is functioning according to specifications. This level of testing is a subset of regression testing. It normally includes a set of core tests of basic GUI functionality to demonstrate connectivity to the database, application servers, printers, etc.

Q: What is performance testing?

Performance testing verifies loads, volumes and response times, as defined by requirements. Although performance testing is a part of system testing, it can be regarded as a distinct level of testing.

Q: What is load testing?

Load testing is testing an application under heavy loads, such as the testing of a web site under a range of loads to determine at what point the system response time will degrade or fail.
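
A bare-bones Python sketch of that idea (the URL and user counts are placeholders; a real load test would normally use a dedicated tool):

import time
from concurrent.futures import ThreadPoolExecutor
from urllib.request import urlopen

def timed_request(url):
    # Time a single request to the site under test.
    start = time.perf_counter()
    urlopen(url).read()
    return time.perf_counter() - start

def load_test(url, user_counts=(1, 10, 50, 100)):
    # Increase the number of simultaneous users and watch response times degrade.
    for users in user_counts:
        with ThreadPoolExecutor(max_workers=users) as pool:
            times = list(pool.map(timed_request, [url] * users))
        print(f"{users:4d} users: worst response {max(times):.3f}s")

# load_test("http://example.com/")  # hypothetical site under test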

Q: What is installation testing?

Installation testing is the testing of a full, partial, or upgrade install/uninstall process. The installation test is conducted with the objective of demonstrating production readiness. This test includes the inventory of configuration items, performed by the application’s System Administrator, the evaluation of data readiness, and dynamic tests focused on basic system functionality. Following installation testing, a sanity test is performed when necessary.

Q: What is security/penetration testing?

Security/penetration testing is testing how well the system is protected against unauthorized internal or external access, or willful damage. This type of testing usually requires sophisticated testing techniques.

Q: What is recovery/error testing?

Recovery/error testing is testing how well a system recovers from crashes, hardware failures, or other catastrophic problems.

Q: What is compatibility testing?

Compatibility testing is testing how well software performs in a particular hardware, software, operating system, or network environment.

Q: What is comparison testing?

Comparison testing is testing that compares software weaknesses and strengths to those of competitors’ products.

Q: What is acceptance testing?

Acceptance testing is black box testing that gives the client/customer/project manager the opportunity to verify the system functionality and usability prior to the system being released to production. The acceptance test is the responsibility of the client/customer or project manager, however, it is conducted with the full support of the project team. The test team also works with the client/customer/project manager to develop the acceptance criteria.

Q: What is alpha testing?

Alpha testing is testing of an application when development is nearing completion. Minor design changes can still be made as a result of alpha testing. Alpha testing is typically performed by end-users or others, not programmers, software engineers, or test engineers.

Q: What is beta testing?

Beta testing is testing an application when development and testing are essentially completed and final bugs and problems need to be found before the final release. Beta testing is typically performed by end-users or others, not programmers, software engineers, or test engineers.

Q: What testing roles are standard on most testing projects?

Depending on the organization, the following roles are more or less standard on most testing projects: Testers, Test Engineers, Test/QA Team Lead, Test/QA Manager, System Administrator, Database Administrator, Technical Analyst, Test Build Manager and Test Configuration Manager. Depending on the project, one person may wear more than one hat. For instance, Test Engineers may also wear the hat of Technical Analyst, Test Build Manager and Test Configuration Manager.

Q: What is a Test/QA Team Lead?

The Test/QA Team Lead coordinates the testing activity, communicates testing status to management and manages the test team.

Q: What is a Test Engineer?

A Test Engineer is an engineer who specializes in testing. Test engineers create test cases, procedures and scripts, and generate data. They execute test procedures and scripts, analyze standards of measurement, and evaluate results of system/integration/regression testing. They also:

• Speed up the work of your development staff;
• Reduce your risk of legal liability;
• Give you the evidence that your software is correct and operates properly;
• Improve problem tracking and reporting;
• Maximize the value of your software;
• Maximize the value of the devices that use it;
• Assure the successful launch of your product by discovering bugs and design flaws before users get discouraged, before shareholders lose their cool and before employees get bogged down;
• Help the work of your development staff, so the development team can devote its time to building up your product;
• Promote continual improvement;
• Provide documentation required by the FDA, FAA, other regulatory agencies and your customers;
• Save money by discovering defects early in the design process, before failures occur in production or in the field;
• Save the reputation of your company by discovering bugs and design flaws before they damage that reputation.

Q: What is a Test Build Manager?

Test Build Managers deliver current software versions to the test environment, install the application’s software and apply software patches, to both the application and the operating system, set-up, maintain and back up test environment hardware. Depending on the project, one person may wear more than one hat. For instance, a Test Engineer may also wear the hat of a Test Build Manager.

Q: What is a System Administrator?

Test Build Managers, System Administrators, Database Administrators deliver current software versions to the test environment, install the application’s software and apply software patches, to both the application and the operating system, set-up, maintain and back up test environment hardware. Depending on the project, one person may wear more than one hat. For instance, a Test Engineer may also wear the hat of a System Administrator.

Q: What is a Database Administrator?

Database Administrators, Test Build Managers, and System Administrators deliver current software versions to the test environment, install the application’s software and apply software patches, to both the application and the operating system, set-up, maintain and back up test environment hardware. Depending on the project, one person may wear more than one hat. For instance, a Test Engineer may also wear the hat of a Database Administrator.

Q: What is a Technical Analyst?

Technical Analysts perform test assessments and validate system/functional test requirements. Depending on the project, one person may wear more than one hat. For instance, Test Engineers may also wear the hat of a Technical Analyst.

Q: What is a Test Configuration Manager?

Test Configuration Managers maintain test environments, scripts, software and test data. Depending on the project, one person may wear more than one hat. For instance, Test Engineers may also wear the hat of a Test Configuration Manager.

Q: What is a test schedule?

The test schedule identifies all tasks required for a successful testing effort: a schedule of all test activities and resource requirements.

Q: What is software testing methodology?

One software testing methodology is a three-step process of:
1. Creating a test strategy;
2. Creating a test plan/design; and
3. Executing tests.
This methodology can be used and molded to your organization’s needs. Rob Davis believes that using this methodology is important in the development and ongoing maintenance of his customers’ applications.

Q: What is the general testing process?

The general testing process is the creation of a test strategy (which sometimes includes the creation of test cases), creation of a test plan/design (which usually includes test cases and test procedures) and the execution of tests.

Q: How do you create a test strategy?

The test strategy is a formal description of how a software product will be tested. A test strategy is developed for all levels of testing, as required. The test team analyzes the requirements, writes the test strategy and reviews the plan with the project team. The test plan may include test cases, conditions, the test environment, a list of related tasks, pass/fail criteria and risk assessment.

Inputs for this process:
• A description of the required hardware and software components, including test tools. This information comes from the test environment, including test tool data.
• A description of the roles and responsibilities of the resources required for the test, and schedule constraints. This information comes from man-hours and schedules.
• Testing methodology. This is based on known standards.
• Functional and technical requirements of the application. This information comes from requirements, change requests, and technical and functional design documents.
• Requirements that the system cannot provide, e.g. system limitations.

Outputs for this process:
• An approved and signed-off test strategy document and test plan, including test cases.
• Testing issues requiring resolution. Usually this requires additional negotiation at the project management level.

Q: How do you create a test plan/design?

Test scenarios and/or cases are prepared by reviewing functional requirements of the release and preparing logical groups of functions that can be further broken into test procedures. Test procedures define test conditions, data to be used for testing, and expected results, including database updates, file outputs and report results. Generally speaking:

• Test cases and scenarios are designed to represent both typical and unusual situations that may occur in the application.
• Test engineers define unit test requirements and unit test cases. Test engineers also execute unit test cases.
• It is the test team that, with the assistance of developers and clients, develops test cases and scenarios for integration and system testing.
• Test scenarios are executed through the use of test procedures or scripts. Test procedures or scripts define a series of steps necessary to perform one or more test scenarios, include the specific data that will be used for testing the process or transaction, and may cover multiple test scenarios.
• Test scripts are mapped back to the requirements, and traceability matrices are used to ensure each test is within scope.
• Test data is captured and baselined prior to testing. This data serves as the foundation for unit and system testing and is used to exercise system functionality in a controlled environment. Some output data is also baselined for future comparison. Baselined data is used to support future application maintenance via regression testing.
• A pre-test meeting is held to assess the readiness of the application and of the environment and data to be tested. A test readiness document is created to indicate the status of the entrance criteria of the release.

Inputs for this process:
• Approved test strategy document.
• Test tools, or automated test tools, if applicable.
• Previously developed scripts, if applicable.
• Test documentation problems uncovered as a result of testing.
• A good understanding of software complexity and module path coverage, derived from general and detailed design documents, e.g. the software design document, source code and software complexity data.

Outputs for this process:
• Approved documents of test scenarios, test cases, test conditions and test data.
• Reports of software design issues, given to software developers for correction.

Q: How do you execute tests?

Execution of tests is completed by following the test documents in a methodical manner. As each test procedure is performed, an entry is recorded in a test execution log to note the execution of the procedure and whether or not the test procedure uncovered any defects. Checkpoint meetings are held throughout the execution phase, daily if required, to address and discuss testing issues, status and activities.

• The output from the execution of test procedures is known as test results. Test results are evaluated by test engineers to determine whether the expected results have been obtained. All discrepancies/anomalies are logged, discussed with the software team lead, hardware test lead, programmers and software engineers, and documented for further investigation and resolution. Every company has a different process for logging and reporting bugs/defects uncovered during testing.
• Pass/fail criteria are used to determine the severity of a problem, and results are recorded in a test summary report. The severity of a problem found during system testing is defined in accordance with the customer’s risk assessment and recorded in their selected tracking tool.
• Proposed fixes are delivered to the testing environment based on the severity of the problem. Fixes are regression tested, and flawless fixes are migrated to a new baseline.
• Following completion of the test, members of the test team prepare a summary report. The summary report is reviewed by the Project Manager, Software QA (SWQA) Manager and/or Test Team Lead.
• After a particular level of testing has been certified, it is the responsibility of the Configuration Manager to coordinate the migration of the release software components to the next test level, as documented in the Configuration Management Plan. The software is only migrated to the production environment after the Project Manager’s formal acceptance.
• The test team reviews test document problems identified during testing and updates documents where appropriate.

Inputs for this process:
• Approved test documents, e.g. test plan, test cases, test procedures.
• Test tools, including automated test tools, if applicable.
• Developed scripts.
• Changes to the design, i.e. change request documents.
• Test data.
• Availability of the test team and project team.
• General and detailed design documents, i.e. the requirements document and software design document.
• Software that has been migrated to the test environment, i.e. unit-tested code, via the Configuration/Build Manager.
• Test readiness document.
• Document updates.

Outputs for this process:
• A log and summary of the test results. Usually this is part of the test report. This needs to be approved and signed off with revised testing deliverables.
• Changes to the code, also known as test fixes.
• Test document problems uncovered as a result of testing. Examples are requirements document and design document problems.
• Reports on software design issues, given to software developers for correction. Examples are bug reports on code issues.
• A formal record of test incidents, usually part of problem tracking.
• The baselined package, also known as tested source and object code, ready for migration to the next level.

Software Testing Techniques Part 1

Because of the fallibility of its human designers and its own abstract, complex nature, software development must be accompanied by quality assurance activities. It is not unusual for developers to spend 40% of the total project time on testing. For life-critical software (e.g. flight control, reactor monitoring), testing can cost 3 to 5 times as much as all other activities combined. The destructive nature of testing requires that the developer discard preconceived notions of the correctness of his/her developed software.

Software Testing Fundamentals

Testing objectives include:
1. Testing is a process of executing a program with the intent of finding an error.
2. A good test case is one that has a high probability of finding an as yet undiscovered error.
3. A successful test is one that uncovers an as yet undiscovered error.

Testing should systematically uncover different classes of errors in a minimum amount of time and with a minimum amount of effort. A secondary benefit of testing is that it demonstrates that the software appears to be working as stated in the specifications. The data collected through testing can also provide an indication of the software’s reliability and quality. But testing cannot show the absence of defects; it can only show that software defects are present.

White Box Testing

White box testing is a test case design method that uses the control structure of the procedural design to derive test cases. Test cases can be derived that
1. guarantee that all independent paths within a module have been exercised at least once,
2. exercise all logical decisions on their true and false sides,
3. execute all loops at their boundaries and within their operational bounds, and
4. exercise internal data structures to ensure their validity.

The Nature of Software Defects

Logic errors and incorrect assumptions are inversely proportional to the probability that a program path will be executed. General processing tends to be well understood while special case processing tends to be prone to errors.

We often believe that a logical path is not likely to be executed when it may be executed on a regular basis. Our unconscious assumptions about control flow and data lead to design errors that can only be detected by path testing.

Typographical errors are random.

Basis Path Testing

This method enables the designer to derive a logical complexity measure of a procedural design and use it as a guide for defining a basis set of execution paths. Test cases that exercise the basis set are guaranteed to execute every statement in the program at least once during testing.

Flow Graphs

Flow graphs can be used to represent control flow in a program and can help in the derivation of the basis set. Each flow graph node represents one or more procedural statements. The edges between nodes represent flow of control. An edge must terminate at a node, even if the node does not represent any useful procedural statements. A region in a flow graph is an area bounded by edges and nodes. Each node that contains a condition is called a predicate node. Cyclomatic complexity is a metric that provides a quantitative measure of the logical complexity of a program. It defines the number of independent paths in the basis set and thus provides an upper bound for the number of tests that must be performed.

The Basis Set

An independent path is any path through a program that introduces at least one new set of processing statements (must move along at least one new edge in the path). The basis set is not unique. Any number of different basis sets can be derived for a given procedural design. Cyclomatic complexity, V(G), for a flow graph G is equal to
1. The number of regions in the flow graph.
2. V(G) = E – N + 2 where E is the number of edges and N is the number of nodes.
3. V(G) = P + 1 where P is the number of predicate nodes.

Deriving Test Cases

1. From the design or source code, derive a flow graph.
2. Determine the cyclomatic complexity of this flow graph.
o Even without a flow graph, V(G) can be determined by counting the number of conditional statements in the code.
3. Determine a basis set of linearly independent paths.
o Predicate nodes are useful for determining the necessary paths.
4. Prepare test cases that will force execution of each path in the basis set.
o Each test case is executed and compared to the expected results.
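
A small worked example of these steps (mine, not from the text): the function below has two predicate nodes, so V(G) = P + 1 = 3, and a basis set needs three linearly independent paths.

def classify(x):
    label = "zero or negative"
    if x > 0:            # predicate node 1
        label = "positive"
    if x % 2 == 0:       # predicate node 2
        label += ", even"
    return label

# One possible basis set, with a test case forcing each path:
#   path 1 (both false):  classify(-3) -> "zero or negative"
#   path 2 (first true):  classify(3)  -> "positive"
#   path 3 (second true): classify(-2) -> "zero or negative, even"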

Automating Basis Set Derivation

The derivation of the flow graph and the set of basis paths is amenable to automation. A software tool to do this can be developed using a data structure called a graph matrix. A graph matrix is a square matrix whose size is equivalent to the number of nodes in the flow graph. Each row and column correspond to a particular node and the matrix corresponds to the connections (edges) between nodes. By adding a link weight to each matrix entry, more information about the control flow can be captured. In its simplest form, the link weight is 1 if an edge exists and 0 if it does not. But other types of link weights can be represented:
• the probability that an edge will be executed,
• the processing time expended during link traversal,
• the memory required during link traversal, or
• the resources required during link traversal.

Graph theory algorithms can be applied to these graph matrices to help in the analysis necessary to produce the basis set.
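
A minimal Python sketch of a graph matrix (the flow-graph edges are hypothetical), with V(G) = E - N + 2 computed directly from it:

NODES = 4
matrix = [[0] * NODES for _ in range(NODES)]

# Hypothetical flow-graph edges; link weight 1 simply records that an edge exists.
for src, dst in [(0, 1), (1, 2), (1, 3), (2, 3)]:
    matrix[src][dst] = 1

E = sum(sum(row) for row in matrix)   # number of edges
print("V(G) =", E - NODES + 2)        # 4 - 4 + 2 = 2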

Loop Testing

This white box technique focuses exclusively on the validity of loop constructs. Four different classes of loops can be defined:
1. simple loops,
2. nested loops,
3. concatenated loops, and
4. unstructured loops.

Simple Loops

The following tests should be applied to simple loops where n is the maximum number of allowable passes through the loop:
1. skip the loop entirely,
2. only pass once through the loop,
3. m passes through the loop where m < n,
4. n – 1, n, n + 1 passes through the loop.
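
A small helper sketch that generates these pass counts (assuming n and m are known from the design):

def simple_loop_test_values(n, m=None):
    values = [0, 1]                # skip the loop entirely; exactly one pass
    if m is not None and m < n:
        values.append(m)           # m passes through the loop, m < n
    values += [n - 1, n, n + 1]    # passes at and around the maximum
    return values

print(simple_loop_test_values(n=10, m=4))   # [0, 1, 4, 9, 10, 11]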

Nested Loops

The testing of nested loops cannot simply extend the technique of simple loops since this would result in a geometrically increasing number of test cases. One approach for nested loops:
1. Start at the innermost loop. Set all other loops to minimum values.
2. Conduct simple loop tests for the innermost loop while holding the outer loops at their minimums. Add tests for out-of-range or excluded values.
3. Work outward, conducting tests for the next loop while keeping all other outer loops at minimums and other nested loops to typical values.
4. Continue until all loops have been tested.

Concatenated Loops

Concatenated loops can be tested as simple loops if each loop is independent of the others. If they are not independent (e.g. the loop counter for one is the loop counter for the other), then the nested approach can be used.

Unstructured Loops

This type of loop should be redesigned, not tested!

Other White Box Techniques

Other white box testing techniques include:
1. Condition testing
o exercises the logical conditions in a program.
2. Data flow testing
o selects test paths according to the locations of definitions and uses of variables in the program.

Black Box Testing

Introduction

Black box testing attempts to derive sets of inputs that will fully exercise all the functional requirements of a system. It is not an alternative to white box testing. This type of testing attempts to find errors in the following categories:
1. incorrect or missing functions,
2. interface errors,
3. errors in data structures or external database access,
4. performance errors, and
5. initialization and termination errors.
Tests are designed to answer the following questions:
1. How is the function’s validity tested?
2. What classes of input will make good test cases?
3. Is the system particularly sensitive to certain input values?
4. How are the boundaries of a data class isolated?
5. What data rates and data volume can the system tolerate?
6. What effect will specific combinations of data have on system operation?
White box testing should be performed early in the testing process, while black box testing tends to be applied during later stages. Test cases should be derived which
1. reduce the number of additional test cases that must be designed to achieve reasonable testing, and
2. tell us something about the presence or absence of classes of errors, rather than an error associated only with the specific test at hand.

Equivalence Partitioning

This method divides the input domain of a program into classes of data from which test cases can be derived. Equivalence partitioning strives to define a test case that uncovers classes of errors and thereby reduces the number of test cases needed. It is based on an evaluation of equivalence classes for an input condition. An equivalence class represents a set of valid or invalid states for input conditions.
Equivalence classes may be defined according to the following guidelines:
1. If an input condition specifies a range, one valid and two invalid equivalence classes are defined.
2. If an input condition requires a specific value, then one valid and two invalid equivalence classes are defined.
3. If an input condition specifies a member of a set, then one valid and one invalid equivalence class are defined.
4. If an input condition is boolean, then one valid and one invalid equivalence class are defined.
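
A worked illustration of guideline 1 (the input field and its 1-99 range are hypothetical): one valid and two invalid classes, each represented by a single test value.

partitions = {
    "valid:   1 <= x <= 99": 50,   # one representative from inside the range
    "invalid: x < 1":        0,    # below the range
    "invalid: x > 99":       150,  # above the range
}
for description, value in partitions.items():
    print(f"{description}  -> test with {value}")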

Boundary Value Analysis

This method leads to a selection of test cases that exercise boundary values. It complements equivalence partitioning since it selects test cases at the edges of a class. Rather than focusing on input conditions solely, BVA derives test cases from the output domain also. BVA guidelines include:
1. For input ranges bounded by a and b, test cases should include values a and b and just above and just below a and b respectively.
2. If an input condition specifies a number of values, test cases should be developed to exercise the minimum and maximum numbers and values just above and below these limits.
3. Apply guidelines 1 and 2 to the output.
4. If internal data structures have prescribed boundaries, a test case should be designed to exercise the data structure at its boundary.
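
A minimal sketch of guideline 1 for an input range bounded by a and b:

def boundary_values(a, b, step=1):
    # Values at, just below, and just above each boundary of the range [a, b].
    return [a - step, a, a + step, b - step, b, b + step]

print(boundary_values(1, 99))   # [0, 1, 2, 98, 99, 100]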

Cause-Effect Graphing Techniques

Cause-effect graphing is a technique that provides a concise representation of logical conditions and corresponding actions. There are four steps:
1. Causes (input conditions) and effects (actions) are listed for a module and an identifier is assigned to each.
2. A cause-effect graph is developed.
3. The graph is converted to a decision table.
4. Decision table rules are converted to test cases.
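
A toy illustration of steps 3 and 4 (the causes and the effect rule are hypothetical): each decision-table rule becomes one test case.

# Causes: c1 = "order total > 1000", c2 = "account in good standing".
# Effect: approve only when both causes hold.
decision_table = [
    # (c1,    c2,    expected effect)
    (True,  True,  "approve"),
    (True,  False, "reject"),
    (False, True,  "reject"),
    (False, False, "reject"),
]
for i, (c1, c2, expected) in enumerate(decision_table, 1):
    print(f"test case {i}: c1={c1}, c2={c2} -> expect {expected}")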

Software Testing Interview Questions Part 2

1. What is the difference between CMM and CMMI levels?
A: — CMM: applicable only to the software industry. It has 18 KPAs.
CMMI: applicable to software, outsourcing and all other industries. It has 25 process areas.

2. What is scalability testing?
1. Scalability is nothing but how many users the application can handle.

2. Scalability is nothing but the maximum number of users the system can handle.

3. Scalability testing is a subtype of performance testing, where performance requirements for response time, throughput, and/or utilization are tested as load on the SUT is increased over time.

4. As a part of scalability testing we test the expandability of the application. In scalability we test: 1. application scalability, 2. performance scalability.

Application scalability: to test the possibility of implementing new features in the system or updating the existing features of the system. We do this testing with the help of the design document.

Performance scalability: to test how the software performs when it is subjected to varying loads, to measure and evaluate the performance behavior and the ability of the software to continue to function properly under different workloads.

–> To check the comfort level of an application in terms of user load, user experience and system tolerance levels.
–> The point within an application at which, when subjected to increasing workload, it begins to degrade in terms of end-user experience and system tolerance.
–> Measures include response time, execution time, system resource utilization and network delays.

Related: stress testing.

3. What is status of defect when you are performing regression testing?
A: — Fixed status.

4. What is the first test in the software testing process?
A) Monkey testing
B) Unit testing
C) Static analysis
D) None of the above

A: — Unit testing is the first test in the testing process; though it is done by developers after the completion of coding, it is the correct answer.

4. When does testing start? a) Once the requirements are complete b) In the requirements phase?

A: – Once the requirements are complete.

This is static testing. Here, you are supposed to read the documents (requirements), and it is quite a common issue in the software industry that many requirements contradict other requirements. These can also be reported as bugs; however, they will be reviewed before being reported as bugs (defects).

5. What are the parts played by QA and QC in the refinement V model?
A: — The V model is a kind of SDLC. The QC (Quality Control) team tests the developed product for quality; it deals only with the product, in both static and dynamic testing. The QA (Quality Assurance) team works on the process and manages for better quality in the process; it deals with (reviews) everything, right from collecting requirements to delivery.

6. What bugs can we not find in black box testing?
A: — Bugs in the security settings of pages, or any other internal mistakes made in the code, cannot be found in black box testing.

7. What are the Microsoft 6 rules?
A: — As far as I know, these rules are used in user interface testing. They are also called Microsoft Windows standards. They are:

• GUI objects are aligned in windows
• All defined text is visible on a GUI object
• Labels on GUI objects are capitalized
• Each label includes an underlined letter (mnemonics)
• Each window includes an OK button, a Cancel button, and a System menu

8. What are the steps to test any software through automation tools?
A: — First, you need to segregate the test cases that can be automated. Then prepare test data as per the requirements of those test cases. Write reusable functions that are used frequently in those test cases. Then prepare the test scripts using those reusable functions, applying loops and conditions wherever necessary. However, the automation framework followed in the organization should be strictly adhered to throughout the process.

9. What is defect removal efficiency?
A: — The DRE is the percentage of defects that have been removed during an activity, computed with the equation below. The DRE can also be computed for each software development activity and plotted on a bar graph to show the relative defect removal efficiencies for each activity. Or the DRE may be computed for a specific task or technique (e.g. design inspection, code walkthrough, unit test, 6 months of operation, etc.).

DRE = (Number of Defects Removed / Number of Defects at Start of Process) * 100

As a ratio, DRE = A / (A + B) = 0.8, where

A = defects found by the testing team
B = defects found by the customer

If DRE >= 0.8, the product is good; otherwise it is not.
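
A quick sketch of the arithmetic, using the A and B defined above:

def dre(a, b):
    # a = defects removed by the testing team, b = defects found later (customer).
    return a / (a + b)

print(dre(a=80, b=20))   # 0.8, right at the threshold for a good product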

10. Give an example of a bug that is not reproducible.
A: — A difference in environments.

11. Why are customer people invited during alpha testing?
A: — Because alpha testing is related to acceptance testing, and acceptance testing is done in front of the client or customer for their acceptance.

12. What is the difference between ad hoc testing and error guessing?
A: — Ad hoc testing: performing testing without test data or any documents.

Error Guessing: This is a Test data selection technique. The selection criterion is to pick values that seem likely to cause errors.

13. What is the difference between a test plan and a test strategy?
A: — Test plan: after completion of SRS learning and business requirement gathering, test management concentrates on test planning; this is done by the test lead or project lead.

Test strategy: based on the corresponding testing policy, the quality analyst finalizes the test responsibility matrix. This is done by QA. Both are documents.

14. What is the “V&V” model? Why is it called “V” and not “U”? Also, at what stage is it best to start testing?
A: — It is called V because it looks like a V. The detailed V model is shown below.

SRS ------------------------ Acceptance testing
  HLD (High Level Design) -- System testing
    LLD (Low Level Design) - Integration testing
      Coding -------------- Unit testing

There is no stage you wait for to start testing. Testing starts as soon as the SRS document is ready: you can raise defects that are present in the document. This is called verification.

15. What is the difference between the Windows 2000 and Windows XP operating systems?
A: — Windows 2000 and Windows XP are essentially the same operating system (known internally as Windows NT 5.0 and Windows NT 5.1, respectively). Here are some considerations if you’re trying to decide which version to use:

Windows 2000 benefits:

1) Windows 2000 has lower system requirements, and has a simpler interface (no “Styles” to mess with).
2) Windows 2000 is slightly less expensive, and has no product activation.
3) Windows 2000 has been out for a while, and most of the common problems and security holes have been uncovered and fixed.
4) Third-party software and hardware products that aren’t yet XP-compatible may be compatible with Windows 2000; check the manufacturers of your devices and applications for XP support before you upgrade.

Windows XP benefits:

1) Windows XP is somewhat faster than Windows 2000, assuming you have a fast processor and tons of memory (although it will run fine with a 300 MHz Pentium II and 128MB of RAM).
2) The new Windows XP interface is more cheerful and colorful than earlier versions, although the less-cartoonish “Classic” interface can still be used if desired.
3) Windows XP has more bells and whistles, such as Windows Movie Maker, built-in CD writer support, the Internet Connection Firewall, and Remote Desktop Connection.
4) Windows XP has better support for games and comes with more games than Windows 2000.
5) Manufacturers of existing hardware and software products are more likely to add Windows XP compatibility now than Windows 2000 compatibility.

Software Testing Interview Questions Part 3

16. What is the bug life cycle?
A: — New: when the tester reports a defect.
Open: when the developer accepts that it is a bug; if the developer rejects the defect, the status is changed to “Rejected”.
Fixed: when the developer makes changes to the code to rectify the bug.
Closed/Reopen: when the tester tests it again. If the expected result shows up, the status is changed to “Closed”; if the problem persists, it becomes “Reopen”.
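
The statuses above form a small state machine; here is a sketch of the transitions as described in the answer (the table itself is illustrative):

TRANSITIONS = {
    "New":    ["Open", "Rejected"],  # developer accepts or rejects the defect
    "Open":   ["Fixed"],             # developer changes the code
    "Fixed":  ["Closed", "Reopen"],  # tester retests the fix
    "Reopen": ["Fixed"],             # cycle repeats until fixed properly
}

def can_move(current, target):
    return target in TRANSITIONS.get(current, [])

print(can_move("Fixed", "Reopen"))  # True
print(can_move("New", "Closed"))    # False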

17. What is deferred status in the defect life cycle?
A: — Deferred status means the developer accepted the bug, but it is scheduled to be rectified in the next build.

18. What is a smoke test?
A: — Testing whether the application performs its basic functionality properly, so that the test team can go ahead with testing the application.

19. Do you use any automation tool for smoke testing?
A: — Yes, automation tools can definitely be used for smoke testing.

20. What is Verification and validation?
A: — Verification is static. No code is executed. Say, analysis of requirements etc. Validation is dynamic. Code is executed with scenarios present in test cases.

21. What is a test plan? Explain its contents.
A: — A test plan is a document which contains the scope for testing the application: what is to be tested, when it is to be tested and who will test it.

22. What are the advantages of automation over manual testing?
A: — Time, resources and money.

23. What is ad hoc testing?
A: — Ad hoc means doing something which is not planned.

24. What is meant by release notes?
A: — A document released along with the product which explains the product. It also contains information about the bugs that are in deferred status.

25. Scalability testing comes under which type of testing?
A: — Scalability testing comes under performance testing. Load testing and scalability testing are the same.

26. What is the difference between a bug and a defect?
A: — Bug: a deviation from the expected result. Defect: a problem in an algorithm that leads to failure.

A mistake in code is called an error.

Due to an error in coding, the mismatches test engineers find in the application are called defects.

A defect accepted by the development team to be solved is called a bug.

27. What is a hot fix?
A: — A hot fix is a single, cumulative package that includes one or more files used to address a problem in a software product. Typically, hot fixes are made to address a specific customer situation and may not be distributed outside the customer organization.

It is a bug found at the customer site which has high priority.

28. What is the difference between functional test cases and compatibility test cases?
A: — There are no test cases for compatibility testing; in compatibility testing we test an application on different hardware and software. If this is wrong, please let me know.

29. What is ACID testing?
A: — ACID testing is related to testing a transaction. ACID means:
A - Atomicity
C - Consistency
I - Isolation
D - Durability

Mostly this is done in database testing.
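
A tiny atomicity check, sketched with sqlite3 from Python's standard library (the table and the simulated failure are hypothetical): a failed transaction must leave no partial rows behind.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE transfers (id INTEGER, amount REAL)")
conn.commit()

try:
    with conn:  # one transaction: committed on success, rolled back on error
        conn.execute("INSERT INTO transfers VALUES (1, 100.0)")
        raise RuntimeError("simulated failure mid-transaction")
except RuntimeError:
    pass

count = conn.execute("SELECT COUNT(*) FROM transfers").fetchone()[0]
assert count == 0, "atomicity violated: partial transaction persisted"
print("atomicity check passed")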

30. What is the main use of preparing a traceability matrix?
A: — To cross-verify the prepared test cases and test scripts against the user requirements.

To monitor the changes and enhancements that occur during the development of the project.

A traceability matrix is prepared in order to cross-check the test cases designed against each requirement, giving an opportunity to verify that all the requirements are covered in testing the application.

Software Testing Interview Questions Part 4

31. If we have no SRS or BRS but we have test cases, do you execute the test cases blindly, or do you follow any other process?
A: — A test case has detailed steps describing what the application is supposed to do. So:
1) The functionality of the application is known.

2) In addition, you can refer to the back end, i.e. look into the database, to gain more knowledge of the application.

32. How do you execute a test case?
A: — There are two ways:
1. A manual runner tool for manual execution and updating of test status.
2. Automated test case execution by specifying the host name and other automation details.

33. What is the difference between retesting and regression testing?

A: — Retesting: re-execution of test cases on the same application build with different input values.

Regression testing: re-execution of test cases on a modified form of the build.

34. What is the difference between a bug log and defect tracking?
A: — A bug log is a document which maintains the information of the bug, whereas bug tracking is the process.

35. Who changes the bug status to Deferred?
A: — A bug is in Open status while the developer is working on it, and Fixed after the developer completes the work. If it is not fixed properly, the tester puts it in Reopen; after the bug is fixed properly, it is in the Closed state.

The developer changes the status to Deferred.

36. What are smoke testing and user interface testing?

A: — Smoke testing:
Smoke testing is non-exhaustive software testing which ascertains that the most crucial functions of a program work, without bothering with finer details. The term comes to software testing from a similarly basic type of hardware testing.

User interface testing:
I did a bit of R&D on this; some say it is nothing but usability testing: testing to determine the ease with which a user can learn to operate, provide input to, and interpret the output of a system or component.

Smoke testing checks whether the basic functionality of the build is stable; i.e., if it possesses 70% of the functionality, we say the build is stable.
In user interface testing we check that all the fields exist as per the format, and we check spelling, graphics and font sizes: everything present in the window.

37. What are a bug, a defect, an issue and an error?

A: — Bug: a bug is identified by the tester.
Defect: a defect often comes with the project itself, when some requirements are missed or misunderstood during the analysis phase.
Issue: most of the time, an error found at the client site.
Error: when anything goes wrong in the project on the development side it is called an error; most of the time this is known by the developer.

More general definitions:

Bug: a fault or defect in a system or machine.

Defect: an imperfection in a device or machine.

Issue: a major problem that will impede the progress of the project and cannot be resolved by the project manager and project team without outside help.

Error: the deviation of a measurement, observation, or calculation from the truth.

38. What is the difference between functional testing and integration testing?
A: — Functional testing is testing the whole functionality of the system or the application, checking whether it meets the functional specifications.

Integration testing means testing the functionality of an integrated module when two individual modules are integrated. For this we use the top-down approach and the bottom-up approach.

39. What types of testing do you perform in your organization while doing system testing?

A: — Functional testing
User interface testing
Usability testing
Compatibility testing
Model based testing
Error exit testing
User help testing
Security testing
Capacity testing
Performance testing
Sanity testing
Regression testing
Reliability testing
Recovery testing
Installation testing
Maintenance testing
Accessibility testing, including compliance with:
Americans with Disabilities Act of 1990
Section 508 Amendment to the Rehabilitation Act of 1973
Web Accessibility Initiative (WAI) of the World Wide Web Consortium (W3C)

40. What is the main use of preparing a traceability matrix, and what is its real-time usage?

A: — A traceability matrix is created by associating requirements with the work products that satisfy them. Tests are associated with the requirements on which they are based and the product tested to meet the requirement.

A traceability matrix is a report from the requirements database or repository.
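
A minimal sketch of the idea (all requirement and test-case IDs are hypothetical): mapping test cases to requirements makes uncovered requirements fall out automatically.

requirements = {"REQ-1": "login", "REQ-2": "logout", "REQ-3": "password reset"}
coverage = {
    "TC-01": ["REQ-1"],            # each test case lists the requirements it covers
    "TC-02": ["REQ-1", "REQ-2"],
}

covered = {req for reqs in coverage.values() for req in reqs}
print("requirements with no test case:", sorted(set(requirements) - covered))
# -> ['REQ-3']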

41. How can you do the following: 1) usability testing, 2) scalability testing?

A: —
Usability testing: testing the ease with which users can learn and use a product.

Scalability testing: a Web testing term; it allows measurement and improvement of a Web site’s capacity.

Portability testing: testing to determine whether the system/software meets the specified portability requirements.

42. What do you mean by positive and negative testing, and what is the difference between them? Can anyone explain with an example?

A: — Positive testing: testing the application functionality with valid inputs and verifying that the output is correct.
Negative testing: testing the application functionality with invalid inputs and verifying the output.

The difference is in how the application behaves when we enter invalid inputs: if it accepts invalid input, the application functionality is wrong.

Positive testing: testing aimed at showing the software works, i.e. with valid inputs. This is also called “test to pass”.
Negative testing: testing aimed at showing the software doesn’t work, also known as “test to fail”. BVA is the best example of negative testing.
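
For example, a sketch (the age field and its 18-60 rule are hypothetical):

def accepts_age(value):
    # Hypothetical validation rule: integer ages from 18 to 60 are accepted.
    return isinstance(value, int) and 18 <= value <= 60

positive_cases = [18, 35, 60]          # valid inputs: "test to pass"
negative_cases = [17, 61, -1, "abc"]   # invalid inputs: "test to fail"

assert all(accepts_age(v) for v in positive_cases)
assert not any(accepts_age(v) for v in negative_cases)
print("positive and negative checks passed")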

43. What is a change request, and how do you use it?

A: — A change request is an attribute or part of the defect life cycle.

When you as a tester find a defect and report it to your team lead, he in turn informs the development team. The development team may say it is not a defect but an extra implementation, or not part of the requirement, so the customer has to pay for it. In that case the status in your defect report would be Change Request.

Change requests are controlled by the change control board (CCB). If any changes are required by the client after we start the project, they have to come through the CCB, which has to approve them. The CCB has full rights to accept or reject, based on the project schedule and cost.

44. What is risk analysis, and what type of risk analysis did you do in your project?

A: — Risk analysis:
A systematic use of available information to determine how often specified and unspecified events may occur and the magnitude of their likely consequences.

OR

A procedure to identify threats and vulnerabilities, analyze them to ascertain the exposures, and highlight how the impact can be eliminated or reduced.

Types:

1. Quantitative risk analysis
2. Qualitative risk analysis

45. What is an API?

A: — Application Program Interface.

Software Testing Interview Questions Part 5


46. What is a high-severity, low-priority bug?

A: — A page is rarely accessed, or some activity is performed rarely, but that activity outputs some important data incorrectly or corrupts the data; this would be a bug of high severity and low priority.

47. If the project is to be released in 3 months, what type of risk analysis do you do in the test plan?

A:– Use risk analysis to determine where testing should be focused. Since it’s rarely possible to test every possible aspect of an application, every possible combination of events, every dependency, or everything that could go wrong, risk analysis is appropriate to most software development projects. This requires judgment skills, common sense, and experience. (If warranted, formal methods are also available.) Considerations can include:

• Which functionality is most important to the project’s intended purpose?
• Which functionality is most visible to the user?
• Which functionality has the largest safety impact?
• Which functionality has the largest financial impact on users?
• Which aspects of the application are most important to the customer?
• Which aspects of the application can be tested early in the development cycle?
• Which parts of the code are most complex, and thus most subject to errors?
• Which parts of the application were developed in rush or panic mode?
• Which aspects of similar/related previous projects caused problems?
• Which aspects of similar/related previous projects had large maintenance expenses?
• Which parts of the requirements and design are unclear or poorly thought out?
• What do the developers think are the highest-risk aspects of the application?
• What kinds of problems would cause the worst publicity?
• What kinds of problems would cause the most customer service complaints?
• What kinds of tests could easily cover multiple functionalities?
• Which tests will have the best high-risk-coverage to time-required ratio?

48. Test cases for IE 6.0?

A: — Test cases for IE 6.0, i.e. Internet Explorer 6.0:
1) First I go for the installation side: is it working with all versions of Windows, Netscape and other software? In other words, IE must be checked with all hardware and software parts.
2) Second, go for the text part: all the text should appear in a frequent and smooth manner.
3) Third, go for the images part: all the images should appear in a frequent and smooth manner.
4) URLs must run in a proper way.
5) Suppose some other language is used on a page; then the URL should accept characters other than the normal characters.
6) Is it working with cookies frequently or not?
7) Does it work with different scripts like JScript and VBScript?
8) Does HTML code work on it or not?
9) Does troubleshooting work or not?
10) Do all the toolbars work with it or not?
11) If a page has some links, what are the maximum and minimum limits for them?
12) Test installing Internet Explorer 6 with the Norton Protected Recycle Bin enabled.
13) Is it working with the uninstallation process?
14) Last but not least, test the security system for IE 6.0.

49. Where are you involved in the testing life cycle, and what types of tests do you perform?

A: — Generally, test engineers are involved in the entire test life cycle, i.e., test planning, test case preparation, execution, and reporting. Typical test types are system testing, regression testing, ad hoc testing, etc.

50. What is the testing environment in your company, i.e., how does the testing process start?

A: — The testing process flows through the following hierarchy:
Quality assurance unit
Quality assurance manager
Test lead
Test engineer

51. Who prepares the use cases?

A: — In most companies, except small ones, the business analyst prepares the use cases. In a small company, the business analyst prepares them along with the team lead.

52. What methodologies have you used to develop test cases?

A: — Generally, test engineers use four types of methodologies (a sketch of the second follows this list):
1. Boundary value analysis
2. Equivalence partitioning
3. Error guessing
4. Cause-effect graphing
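
As a sketch of the second methodology, here is equivalence partitioning applied to the same kind of hypothetical age field (valid range 18-60): one representative value is tested from each partition instead of every possible input.

# Sketch of equivalence partitioning for a hypothetical age field (valid: 18-60).
def is_valid_age(age):
    return 18 <= age <= 60

# One representative value stands in for every member of its partition.
partitions = {
    "below range (invalid)": 10,
    "within range (valid)": 35,
    "above range (invalid)": 70,
}

for name, representative in partitions.items():
    print(f"{name}: input={representative}, accepted={is_valid_age(representative)}")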

53. Why do we call it a regression test and not a retest?

A: — If we only verify whether a defect has been closed, that is retesting. In regression testing we also check the impact of the fix on the surrounding functionality; "regression" implies testing repeated across builds.

54. Is automated testing better than manual testing? If so, why?

A: — Automated testing and manual testing each have advantages as well as disadvantages.
Advantages of automation:
It increases the efficiency and speed of the testing process
It is reliable
It is flexible
Disadvantages of automation:
The tools must be compatible with our development or deployment tools
It needs a lot of time initially
If the requirements are changing continuously, automation is not suitable
Manual testing: if the requirements are changing continuously, manual testing is suitable. Only once the build is stable with manual testing do we go for automation.
Disadvantages of manual testing:
It needs a lot of time
We cannot do some types of testing manually, e.g., performance testing
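
To make the trade-off concrete, here is a minimal automated check written with Python's built-in unittest module. The apply_discount() function is hypothetical, standing in for the real code under test; once written, such a suite reruns unchanged on every build, which is where the initial automation effort pays off.

import unittest

def apply_discount(price, percent):
    """Hypothetical code under test: return price reduced by the given percentage."""
    return round(price * (1 - percent / 100), 2)

class DiscountTests(unittest.TestCase):
    def test_ten_percent_discount(self):
        self.assertEqual(apply_discount(100.0, 10), 90.0)

    def test_zero_discount_leaves_price_unchanged(self):
        self.assertEqual(apply_discount(50.0, 0), 50.0)

if __name__ == "__main__":
    unittest.main()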

55. What is the exact difference between a product and a project? Give an example.

A: — A project is developed for a particular client, and the requirements are defined by that client. A product is developed for the market, and the requirements are defined by the company itself by conducting market surveys.
Example:
Project: a shirt that we have stitched by a tailor to our own specifications is a project.
Product: a ready-made shirt, for which the company decides on particular measurements and then makes the product. A mainframe is a product.
A product has many more versions, but a project has fewer versions, i.e., it depends on change requests and enhancements.

56. Define brainstorming and cause-effect graphing, with examples.

A: — Brainstorming:
A learning technique involving open group discussion intended to expand the range of available ideas.
OR
A meeting to generate creative ideas. At PEPSI Advertising, daily, weekly and bi-monthly brainstorming sessions are held by various work groups within the firm. Our monthly I-Power brainstorming meeting is attended by the entire agency staff.
OR
Brainstorming is a highly structured process to help generate ideas. It is based on the principle that you cannot generate and evaluate ideas at the same time. To use brainstorming, you must first gain agreement from the group to try brainstorming for a fixed interval (e.g., six minutes).

Cause-effect graphing (CEG):
A testing technique that aids in selecting, in a systematic way, a high-yield set of test cases by logically relating causes to effects. It has a beneficial side effect of pointing out incompleteness and ambiguities in specifications.
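
In practice, cause-effect graphing is usually realized as a decision table. The sketch below assumes a hypothetical login rule, access granted only when both the username and the password are valid, and turns each combination of causes into one high-yield test case.

from itertools import product

# Causes: C1 = username is valid, C2 = password is valid (hypothetical rule).
# Effect: access is granted only when both causes are true.
def login_effect(valid_user, valid_password):
    return "access granted" if (valid_user and valid_password) else "access denied"

# Each combination of causes becomes one test case.
for c1, c2 in product([True, False], repeat=2):
    print(f"valid_user={c1!s:5} valid_password={c2!s:5} -> {login_effect(c1, c2)}")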

57. Since severity already tells you which bug needs to be solved, what is the need for priority?

A: — Severity reflects the seriousness of the bug, whereas priority refers to which bug should be rectified first. Normally, if the severity is high, the priority is high as well.

Severity is decided by the tester, whereas priority is decided by the developers. Which bug needs to be solved first is known through priority, not severity; how serious the bug is, is known through severity.

Severity is the impact of the bug on the application; priority is the importance of resolving it. Of course, by looking at severity we can often judge priority, but sometimes a high-severity bug does not have high priority, and likewise a high-priority bug may not have high severity. So we need both severity and priority.

58. What do you do if a bug you found is not accepted by the developer, who says it is not reproducible? Note: the developer is at the onsite location.

A: — First we recheck the condition, verifying all the reasons; then we attach screenshots with strong supporting evidence. Then we explain it to the project manager, and also explain it to the client if they contact us.

Sometimes a bug is not reproducible because of a different environment: the development team may be using one environment while you are using another, and in that situation the bug may not reproduce. In that case, check the environment specified in the baseline documents, i.e., the functional documents. If the environment we are using is the correct one, we raise it as a defect, take screenshots, and send them along with the test procedure.

59. What is the difference between a three-tier and a two-tier application?

A: — A client/server application is a 2-tier application. In this architecture, the front end or client is connected to the database server through a Data Source Name; the front end is the monitoring level.

A web-based architecture is a 3-tier application. In this architecture, the browser is connected to the web server through TCP/IP, and the web server is connected to the database server; the browser is the monitoring level. In general, black-box testers concentrate on the monitoring level of any type of application.

All client/server applications are 2-tier architectures. In this architecture, all the business logic is stored in the clients and the data is stored in the servers. If the user requests anything, the business logic is performed at the client, and the data is retrieved from the server (DB server). The problem is that if any business logic changes, we need to change the logic at each and every client. The best example is a supermarket with branches across the city. Each branch has clients, so the business logic is stored in the clients, but the actual data is stored in the servers. Suppose I want to give a discount on some items: I need to change the business logic, and to do that I need to go to each branch and change the logic at each client. This is the disadvantage of the client/server architecture.

So the 3-tier architecture came into the picture:

Here the business logic is stored on one server, and all the clients are dumb terminals. If the user requests anything, the request is first sent to the server; the server brings the data from the DB server and sends it to the clients. This is the flow of a 3-tier architecture.

For the example above, if I want to give a discount, all my business logic is on the server, so I need to change it in one place, not at each client. This is the main advantage of the 3-tier architecture.
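
To mirror the supermarket example, here is an illustrative Python sketch (hypothetical code, not any real framework) contrasting the two architectures: in the 2-tier version the discount rule is duplicated in every client, while in the 3-tier version it lives on one application server.

# 2-tier: each client holds its own copy of the pricing rule, so a discount
# change must be redeployed to every branch.
class TwoTierClient:
    DISCOUNT = 0.0  # changing this means updating every installed client

    def price(self, base):
        return base * (1 - self.DISCOUNT)

# 3-tier: clients are thin; the pricing rule lives on one application server.
class AppServer:
    def __init__(self):
        self.discount = 0.0  # change the rule once, here

    def price(self, base):
        return base * (1 - self.discount)

class ThinClient:
    def __init__(self, server):
        self.server = server

    def price(self, base):
        return self.server.price(base)  # delegate all business logic

server = AppServer()
branches = [ThinClient(server) for _ in range(3)]
server.discount = 0.10  # one change reaches all branches immediately
print([branch.price(100.0) for branch in branches])  # [90.0, 90.0, 90.0]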