Archive for July, 2008

Web Test Plan Development

The objective of a test plan is to provide a roadmap so that the Web site can be evaluated against its requirements and design statements. A test plan is a document that describes the objectives and scope of a Web site project. When you prepare a test plan, you should think through the entire process of testing the Web site. The plan should be written so that it gives the reader a complete picture of the Web site project, and it should be thorough enough to be useful. Following are some of the items that might be included in a test plan; keep in mind that the items may vary depending on the Web site project.
The Web Testing Process

A Web test exercises three main components:

  • The Internet
  • The Web browser
  • The Web server
PROJECT

  • Title of the project:
  • Date:
  • Prepared by:
PURPOSE OF DOCUMENT

  • Objective of testing: Why are you testing the application? Who, what, when, where, why, and how should be some of the questions you ask in this section of the test plan.
  • Overview of the application: What is the purpose of the application? What are the specifications of the project?
TEST TEAM

  • Responsible parties: Who is responsible and in charge of the testing?
  • List of test team: What are the names and titles of the people on the test team?
RISK ASSUMPTIONS

  • Anticipated risks: What types of risks are involved that could cause the test to fail?
  • Similar risks from previous releases: Have there been documented risks from previous tests that may be helpful in setting up the current test?
SCOPE OF TESTING

  • Possible limitations of testing: Are there any factors that may inhibit the test, such as resources and budget?
  • Impossible testing: What factors could prevent the planned tests from being carried out?
  • Anticipated output: What are the anticipated outcomes of the test, and have they been documented for comparison?
  • Anticipated input: What inputs are required for the test, and have they been documented so that results can be compared against them?
TEST ENVIRONMENT

Hardware:

  • What are the operating systems that will be used?
  • What is the compatibility of all the hardware being used?

Software:

  • What data configurations are needed to run the software?
  • Have all the required interfaces to other systems been considered?
  • Are the software and hardware compatible?
TEST DATA

  • Database setup requirements: Does test data need to be generated, or will specific data from production be captured and used for testing?
  • Setup requirements: Who will be responsible for setting up the environment and maintaining it throughout the testing process?
TEST TOOLS

  • Automated: Will automated tools be used?
  • Manual: Will manual testing be done?
DOCUMENTATION

  • Test cases: Are there test cases already prepared or will they need to be prepared?
  • Test scripts: Are there test scripts already prepared or will they need to be prepared?

PROBLEM TRACKING

  • Tools: What type of tools will be selected?
  • Processes: Who will be involved in the problem tracking process?
REPORTING REQUIREMENTS

  • Testing deliverables: What are the deliverables for the test?
  • Retests: How will the retesting reporting be documented?
PERSONNEL RESOURCES

  • Training: Will training be provided?
  • Implementation: How will training be implemented?
ADDITIONAL DOCUMENTATION

  • Appendix: Will samples be included?
  • Reference materials: Will there be a glossary, acronyms, and/or a data dictionary?
Once you have written your test plan, you should address some of the following issues and questions:

  • Verify plan. Make sure the plan is workable, the dates are realistic, and that the plan is published. How will the test plan be implemented and what are the deliverables provided to verify the test?
  • Validate changes. Changes should be recorded by a problem tracking system and assigned to a developer to make revisions, retest, and sign off on changes that have been made.
  • Acceptance testing. Acceptance testing allows the end users to verify that the system works according to their expectation and the documentation. Certification of the Web site should be recorded and signed off by the end users, testers, and management.

  • Test reports. Reports should be generated, and the data should be checked and validated by the test team and users.

Understanding Software Defects

The following are 13 major categories of software defects:
  • User interface errors – the system presents something different from what the interface specification describes.
  • Error handling – the way errors are recognized and treated may itself be in error.
  • Boundary-related errors – the treatment of values at the edges of their ranges may be incorrect (a short sketch follows this list).
  • Calculation errors – arithmetic and logic calculations may be incorrect.
  • Initial and later states – the function fails the first time it is used but not later, or vice versa.
  • Control flow errors – the choice of what is done next is not appropriate for the current state.
  • Errors in handling or interpreting data – passing and converting data between systems (and even between separate components of the same system) may introduce errors.
  • Race conditions – when two events could be processed, one is normally accepted before the other and things work fine; eventually, however, the other event may be processed first and unexpected or incorrect results are produced.
  • Load conditions – as the system is pushed to its maximum limits, problems start to occur, e.g. arrays overflow or disks fill up.
  • Hardware – interfacing with devices may not operate correctly under certain conditions, e.g. a device is unavailable.
  • Source and version control – out-of-date programs may be used where correct revisions are available.
  • Documentation – the behaviour the user observes does not match what the manuals describe.
  • Testing errors – the tester makes mistakes during testing and thinks the system is behaving incorrectly.
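
To make the boundary-related category concrete, here is a minimal sketch in Python; the discount rule, the function name, and the test values are hypothetical, and the deliberate defect is the use of > where the assumed requirement (a discount at 100 units or more) calls for >=.

    # Hypothetical requirement: orders of 100 units or more get a 10% discount.
    def discount_rate(quantity):
        # Defect: '>' should be '>=', so the boundary value 100 is handled incorrectly.
        return 0.10 if quantity > 100 else 0.0

    # Boundary value analysis checks the values at and around the edge of the range.
    for quantity, expected in [(99, 0.0), (100, 0.10), (101, 0.10)]:
        actual = discount_rate(quantity)
        status = "PASS" if actual == expected else "FAIL"
        print(f"quantity={quantity}: expected={expected}, actual={actual} -> {status}")

Testing only typical values (say, 50 and 200) would miss this defect; only the boundary value 100 exposes it.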

Web Testing Challenges

Understanding the Web test process is essential for deciding how to proceed with the selection of a Web test process, automated test tools, and methodologies.

Following are several challenges that need to be considered when deciding on the Web process that is most applicable for your business:

The Web is in a state of constant change. The developer and tester need to understand how changes will affect their development and the Web site test process. As technology changes, testers will need to understand how this will affect them and how they will handle their testing responsibilities.

When setting up the test scenarios, the tester needs to understand how to implement different scenarios that will meet different types of business requirements. For example, is the tester testing a site with graphical user interface (GUI) buttons and text boxes, or testing HyperText Markup Language (HTML) code? Simulating user actions such as pressing buttons and entering different values verifies whether the resulting calculations are correct and how quickly the site responds.
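
As an illustration, a minimal sketch of such a GUI scenario using Selenium WebDriver is shown below (assuming Selenium and a Chrome driver are installed); the URL and the element IDs "price", "quantity", "calculate", and "total" are hypothetical placeholders for the page under test.

    import time
    from selenium import webdriver
    from selenium.webdriver.common.by import By

    driver = webdriver.Chrome()
    try:
        driver.get("https://example.com/order")          # hypothetical page under test
        driver.find_element(By.ID, "price").send_keys("19.99")
        driver.find_element(By.ID, "quantity").send_keys("3")

        start = time.time()
        driver.find_element(By.ID, "calculate").click()  # simulate pressing the button
        elapsed = time.time() - start                    # rough response-time measurement

        total = driver.find_element(By.ID, "total").text
        assert total == "59.97", f"calculation error: expected 59.97, got {total}"
        print(f"Total verified in {elapsed:.2f} seconds")
    finally:
        driver.quit()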

The test environment can be a difficult part of the setup for the tester.

You need to be aware of all of the different components that make up the environment; the networking piece can be especially difficult to simulate.

The following considerations need to be addressed:

  • Multiple server tiers
  • Firewalls
  • Databases
  • Database servers

In the test environment, it is important to know how the different components will interact with each other.

When setting up the Web testing environment, special consideration should be given to how credit card transactions are handled, carried out, and verified. Because testers are responsible for setting up the test scenarios, they will need to be able to simulate the quantity of transactions that are going to be processed on the Web site.
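
A minimal sketch of simulating a volume of transactions is shown below, assuming the requests library is installed; the endpoint URL and payload are hypothetical, and the card number is a standard test number, since real card data should never be used in a test environment.

    import concurrent.futures
    import requests

    CHECKOUT_URL = "https://test.example.com/api/checkout"   # hypothetical test endpoint

    def submit_transaction(order_id):
        payload = {"orderId": order_id, "card": "4111111111111111", "amount": 25.00}
        response = requests.post(CHECKOUT_URL, json=payload, timeout=10)
        return order_id, response.status_code

    # Fire 50 transactions with 10 concurrent workers and summarise the outcomes.
    with concurrent.futures.ThreadPoolExecutor(max_workers=10) as pool:
        results = list(pool.map(submit_transaction, range(50)))

    failures = [(oid, code) for oid, code in results if code != 200]
    print(f"{len(results) - len(failures)} succeeded, {len(failures)} failed: {failures}")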

Security is a constant concern for businesses on the Internet, as well as for developers and testers. There are hackers who enjoy breaking the security of a Web site.

Web-based applications present new challenges for both developers and testers. These challenges include:

  1. Short release cycles
  2. Constantly changing technology
  3. A possibly huge number of users during the initial Web site launch
  4. Inability to control the user's running environment
  5. Twenty-four-hour availability of the Web site

Skills of a Tester’s Skull

Understanding Skills

The first and foremost activity of software testing is to understand the requirements and functionalities of the software to be tested. Formal documents such as Software Requirement Specifications, Software Functional Specifications, and Use Case Specifications, along with other documents such as minutes of meetings, serve as the key references for planning and executing the testing tasks. The testers must have very good understanding skills to read and understand the information written in these documents. Many times, the information presented in the documents can be interpreted in more than one way. The testers must be able to identify duplicate and ambiguous requirements. If the requirements are not clear or are ambiguous, the testers must identify the sources of the requirements and get them clarified. In most projects, the sources of the requirements are business analysts, business users, or any other competent authority identified by the project management team. The testers shall analyze and correlate the information gathered within the perspective of the project.

Listening Skills

Documents are not the only source of reference for the testing activities. The information required for testing may also be acquired through meetings, seminars, conferences, etc. The minutes of these meetings, conferences, and seminars may or may not be recorded in a formal document. The testers must have very good active listening skills in order to collate and correlate all of that information and refer to it during the testing activities. While the requirements or functionalities of the software are discussed in a meeting, some part of the requirements is often missed. The testers should be able to identify these gaps and get them clarified before heading into the subsequent testing phases.

Test Planning Skills

All software requirements shall be testable, and the software shall be designed in such a way that this is possible. The test plan shall be formulated in a way that paves the way for validating all the software requirements. In real-world scenarios, there could be many requirements that are not testable. The tester, with his or her test planning skills, should be able to find a workaround to test those non-testable requirements. If there is no way to test them, that shall be communicated clearly to the appropriate authority. There could also be many requirements that are very complex to test, and the tester should be able to identify the best approach to test them.

Test Design Skills

The science of software testing offers many techniques such as Equivalence Class Partitioning, Boundary Value Analysis, Orthogonal Arrays, and many more for effective test design. The testers shall be aware of all those techniques and apply them in their test design practice. The tester shall be aware of the various formats and templates used to author and present test cases or test procedures in a neat fashion, shall be aware of the best practices and accepted standards for designing test cases, and shall know how to write test cases that are unambiguous, simple, and straight to the point.
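
As an example of one of these techniques, here is a minimal sketch of equivalence class partitioning in Python; the eligibility rule (ages 18 to 65) and the representative values are hypothetical.

    def is_eligible(age):
        # Hypothetical rule under test: applicants aged 18 to 65 inclusive are eligible.
        return 18 <= age <= 65

    # One representative value is chosen from each equivalence class.
    equivalence_classes = {
        "invalid: below minimum": (10, False),
        "valid: within range":    (40, True),
        "invalid: above maximum": (70, False),
    }

    for name, (representative, expected) in equivalence_classes.items():
        actual = is_eligible(representative)
        print(f"{name}: age={representative}, expected={expected}, actual={actual}")

Boundary value analysis would then add the edge values 17, 18, 65, and 66 to the same set of tests.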

Each test case needs to contain a test case description, test steps, and the corresponding expected results. The tester shall know how to present the content of these three parts effectively, in such a way that they can be read without any ambiguity by all the project stakeholders.
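
One possible way to capture these three parts as structured data is sketched below; the field names, the test case identifier, and the assumed eight-character password minimum are illustrative only.

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class TestCase:
        test_id: str
        description: str
        steps: List[str] = field(default_factory=list)
        expected_results: List[str] = field(default_factory=list)

    login_boundary = TestCase(
        test_id="TC-042",
        description="Verify login rejects a password one character shorter than the minimum",
        steps=[
            "Navigate to the login page",
            "Enter a valid user name",
            "Enter a 7-character password (the minimum is assumed to be 8)",
            "Click the Login button",
        ],
        expected_results=[
            "The user is not logged in",
            "An error message states that the password is too short",
        ],
    )
    print(login_boundary.description)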

Test Execution Skills

Test execution is nothing but executing the steps specified in the test design documents. During execution, the testers shall capture the actual results and compare them against the expected results specified in the test design documents. If there are any deviations between the expected and actual results, the testers shall treat that as a defect. The tester shall analyze the cause of the defect; if it is found and confirmed in the application under test, it shall be communicated to the developers and fixed. If the cause of the defect lies in the test case, it shall be communicated to the test designers and the test case shall be modified or amended accordingly. If the testers are not confident about the application functionalities and the test design documents, they may not be able to come to a confident conclusion about a defect when a discrepancy occurs. This leads to defects leaking into the next phase, and testers need to avoid this scenario. The testers shall be confident about the application functionalities, and in case of any ambiguity they need to get clarification before executing the tests, or at least during test execution.
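
A minimal sketch of this execute-compare-record loop is shown below; the shipping cost function and the expected values are hypothetical stand-ins for the application under test and its test design document.

    def shipping_cost(weight_kg):
        # Hypothetical application logic: 5.00 base fee plus 2.00 per kg above 1 kg.
        return 5.0 if weight_kg <= 1 else 5.0 + (weight_kg - 1) * 2.0

    test_steps = [
        # (step description, input, expected result) taken from the test design document
        ("Cost for a 1 kg parcel", 1, 5.0),
        ("Cost for a 2 kg parcel", 2, 7.0),
        ("Cost for a 5 kg parcel", 5, 13.0),
    ]

    candidate_defects = []
    for description, weight, expected in test_steps:
        actual = shipping_cost(weight)
        if actual != expected:
            # A deviation: analyse whether the application or the test case is at fault.
            candidate_defects.append((description, expected, actual))

    print(f"{len(test_steps) - len(candidate_defects)} passed, "
          f"{len(candidate_defects)} deviations to analyse")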

Defect Reporting Skills

Defect reports are one of the critical deliverables from a tester. They are viewed by the development team, business analysts, project managers, technical managers, and quality assurance engineers, along with the testers. Hence, a defect report shall carry enough information about the defect. Steps to reproduce the defect and its expected and actual results, along with other information such as severity, priority, assigned to (developer), and test environment details, are critical for a defect report, without which the report is considered incomplete. The tester shall be aware of the importance of the defect report and shall write it in such a way that it is unambiguous. During the course of fixing the defect, the developers may come back to the testing team for more information regarding the defect, and the tester shall provide it without fail.
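
A minimal sketch of the fields such a defect report might carry, following the list above, is shown below; the identifier, the severity and priority scales, and the example data are illustrative assumptions.

    from dataclasses import dataclass
    from typing import List

    @dataclass
    class DefectReport:
        defect_id: str
        summary: str
        steps_to_reproduce: List[str]
        expected_result: str
        actual_result: str
        severity: str            # e.g. Critical / Major / Minor
        priority: str            # e.g. High / Medium / Low
        assigned_to: str         # developer responsible for the fix
        environment: str         # test environment details (OS, browser, build)

    report = DefectReport(
        defect_id="DEF-1023",
        summary="Order total not recalculated after quantity change",
        steps_to_reproduce=[
            "Add an item to the cart",
            "Change the quantity from 1 to 3",
            "Observe the order total",
        ],
        expected_result="Total shows the unit price multiplied by 3",
        actual_result="Total still shows the price for a single unit",
        severity="Major",
        priority="High",
        assigned_to="dev-team",
        environment="Windows 10, Chrome 126, build 2.4.1",
    )
    print(report.summary)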

Test Automation Skills

Test automation is a powerful means of drastically reducing testing cost. Manual test cases, once automated, can be executed by running the automated test scripts. This means that the manual effort to run those test cases is no longer necessary, and hence the total test effort is reduced drastically. The testers shall be aware of the techniques for adopting test automation into the current testing process. Identifying the test automation candidates is critical for the success of the automation project. Automation candidates shall be identified in such a way that the cost of manual test execution is reduced significantly; this also requires careful thought from a financial perspective. The testers shall understand the do's and don'ts of automation to make the automation project successful.
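
A minimal sketch of turning a manual test case into an automated script with pytest (assumed to be installed) is shown below; discount_rate is a hypothetical function standing in for the application under test.

    import pytest

    def discount_rate(quantity):
        # Hypothetical application logic: 10% discount at 100 units or more.
        return 0.10 if quantity >= 100 else 0.0

    # Once automated, the same checks run unattended on every build, which is where
    # the reduction in manual test effort comes from.
    @pytest.mark.parametrize("quantity, expected", [(99, 0.0), (100, 0.10), (101, 0.10)])
    def test_discount_boundaries(quantity, expected):
        assert discount_rate(quantity) == expected

Saved as, say, test_discount.py, the checks run on every build with the pytest command instead of requiring a manual pass through the test steps.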

Conclusion

The testers shall understand and learn the application functionalities and be confident about them. Test planning, designing, execution, and defect reporting are the basic and essential skills that a tester shall possess and develop in his or her day-to-day career. Professionals who have perfected these skills are called "testing professionals", "testers", or "testing engineers". Hope you are now a tester…

Good Test Case

What is a Test Case? How do you write a good test case?

Test cases are the implementation of a test case design that helps the software tester detect defects in the application or system being tested. This should be the primary goal of any test case or set of test cases. When I write a test case, I think of both types of test cases: positive test cases and negative test cases. Positive test cases are those which execute the happy path in the application and make sure that the happy path is working fine. Negative test cases, as the name suggests, are destructive test cases, documented with some out-of-the-box thinking to break the system.
A test case should be documented in a manner that is useful for the current test cycle and any future test cycles. At a bare minimum, each test case should contain: Sr No, Summary or Title, Description, Steps to Reproduce, Expected Results, Actual Results, and the Status of the test case or Remarks.
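
A minimal sketch contrasting a positive and a negative test case is shown below; validate_email is a hypothetical function standing in for the system under test, and the validation pattern is deliberately simple.

    import re

    def validate_email(address):
        # Hypothetical validation logic under test.
        return bool(re.match(r"^[^@\s]+@[^@\s]+\.[^@\s]+$", address))

    def test_positive_valid_email_is_accepted():
        # Happy path: a well-formed address must be accepted.
        assert validate_email("user@example.com") is True

    def test_negative_malformed_email_is_rejected():
        # Destructive case: deliberately malformed input must be rejected.
        assert validate_email("user@@example..com") is False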

Test Case Summary or Title

The summary or title should contain the essence of the test case, including the functional area and purpose of the test. Using a common naming convention that groups test cases encourages reuse and helps prevent duplicate test cases.

Test Case Description

The description should clearly state the sequence of events to be executed by the test case. The test case description can apply to one or more test cases; it will often take more than one test case to fully test an area of the application.

Test Case Steps to reproduce

Each test case step should clearly state the navigation, test data, and events required to accomplish the step. Using a common descriptive approach encourages reuse and conformity.

Expected Results of Test Case

The expected behavior of the system after any test case step that requires verification or validation. This could include screen pop-ups, data updates, display changes, or any other discernible event or transaction on the system that is expected to occur when the test case step is executed.

Status or Remarks

The operational status of the test case (e.g. executed or not executed), along with any other remarks.
