WinRunner Functions

Database Functions

db_check

This function captures and compares data from a database.
Note that the checklist file (arg1) can be created only during recording.
arg1 – checklist file.

db_connect

This function creates a new connection session with a database.
arg1 – the session name (string)
arg2 – a connection string
for example “DSN=SQLServer_Source;UID=SA;PWD=abc123”

db_disconnect

This function disconnects from the database and deletes the session.
arg1 – the session name (string)

db_dj_convert

This function executes a Data Junction conversion export file (djs).
arg1 – the export file name (*.djs)
arg2 – an optional parameter to override the output file name
arg3 – an optional boolean parameter specifying whether to include the headers (the default is TRUE)
arg4 – an optional parameter limiting the number of records (-1, the default, means no limit)

db_execute_query

This function executes an SQL statement.
Note that db_connect should be called for the session (arg1) before this function.
arg1 – the session name (string)
arg2 – an SQL statement
arg3 – an out parameter returning the number of records.

db_get_field_value

This function returns the value of a single item of an executed query.
Note that db_execute_query should be called for the session (arg1) before this function.
arg1 – the session name (string)
arg2 – the row index number (zero based)
arg3 – the column index number (zero based) or the column name.
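Taken together, db_connect, db_execute_query, db_get_field_value and db_disconnect form the basic query workflow. The following TSL fragment is a minimal sketch based on the parameter descriptions above; the session name, DSN and query are hypothetical:

# connect, run a query, read one cell, then clean up
rc = db_connect("query1", "DSN=SQLServer_Source;UID=SA;PWD=abc123");
if (rc != E_OK)
    report_msg("db_connect failed with code " & rc);
db_execute_query("query1", "SELECT id, name FROM customers", rec_count);
val = db_get_field_value("query1", 0, "name"); # row 0, column "name"
report_msg("first name: " & val & " (" & rec_count & " records)");
db_disconnect("query1");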

db_get_headers

This function returns the field headers and the number of fields of an executed query.
Note that db_execute_query should be called for the session (arg1) before this function.
arg1 – the session name (string)
arg2 – an out parameter returning the number of fields
arg3 – an out parameter returning the field headers concatenated and delimited by TABs.

db_get_last_error

This function returns the last error message of the last ODBC operation.
arg1 – the session name (string)
arg2 – an out parameter to return the last error.

db_get_row

This function returns a whole row of an executed query.
Note that db_execute_query should be called for the session (arg1) before this function.
arg1 – the session name (string)
arg2 – the row number (zero based)
arg3 – an out parameter returning the field values concatenated and delimited by TABs.

db_record_check

This function checks that the specified record exists in the database.
Note that the checklist file (arg1) can be created only using the Database Record Verification Wizard.
arg1 – checklist file.
arg2 – success criteria.
arg3 – number of records found.

db_write_records

This function writes the records of an executed query into a file.
Note that db_execute_query should be called for the session (arg1) before this function.
arg1 – the session name (string)
arg2 – the output file name
arg3 – an optional boolean parameter specifying whether to include the headers (the default is TRUE)
arg4 – an optional parameter limiting the number of records (-1, the default, means no limit).

ddt_update_from_db

This function updates the data table with data from the database.
arg1 – table name.
arg2 – query or conversion file (*.sql, *.djs).
arg3 (out) – the number of rows actually retrieved.
arg4 (optional) – the maximum number of rows to retrieve (default – no limit).
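A hedged TSL sketch of typical usage: the data table is opened with ddt_open before the update and saved afterwards. The table and query file names are hypothetical:

table = "default.xls";
rc = ddt_open(table, DDT_MODE_READWRITE);
if (rc != E_OK && rc != E_FILE_OPEN)
    pause("cannot open data table");
ddt_update_from_db(table, "customers.sql", rows);
ddt_save(table);
ddt_close(table);
report_msg(rows & " rows retrieved into " & table);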

GUI Functions

GUI_add

This function adds an object to a buffer
arg1 is the buffer in which the object will be entered
arg2 is the name of the window containing the object
arg3 is the name of the object
arg4 is the description of the object

GUI_buf_get_desc

This function returns the description of an object
arg1 is the buffer in which the object exists
arg2 is the name of the window containing the object
arg3 is the name of the object
arg4 is the returned description

GUI_buf_get_desc_attr

This function returns the value of an object property
arg1 is the buffer in which the object exists
arg2 is the name of the window
arg3 is the name of the object
arg4 is the property
arg5 is the returned value

GUI_buf_get_logical_name

This function returns the logical name of an object
arg1 is the buffer in which the object exists
arg2 is the description of the object
arg3 is the name of the window containing the object
arg4 is the returned name

GUI_buf_new

This function creates a new GUI buffer
arg1 is the buffer name
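For example, the buffer functions above can be combined as follows (a TSL sketch; the window name, object name and physical description are hypothetical):

# create a buffer, add an object, then read its description back
GUI_buf_new("my_buf.gui");
GUI_add("my_buf.gui", "Login", "OK_button", "{class: push_button, label: \"OK\"}");
GUI_buf_get_desc("my_buf.gui", "Login", "OK_button", desc);
report_msg("OK_button description: " & desc);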

GUI_buf_set_desc_attr

This function sets the value of an object property
arg1 is the buffer in which the object exists
arg2 is the name of the window
arg3 is the name of the object
arg4 is the property
arg5 is the value

GUI_close

This function closes a GUI buffer
arg1 is the file name.

GUI_close_all

This function closes all the open GUI buffers.

GUI_delete

This function deletes an object from a buffer
arg1 is the buffer in which the object exists
arg2 is the name of the window containing the object
arg3 is the name of the object (if empty, the window will be deleted)

GUI_desc_compare

This function compares two physical descriptions (returns 0 if they are the same)
arg1 is the first description
arg2 is the second description

Software Testing Techniques Part 1

Because of the fallibility of its human designers and its own abstract, complex nature, software development must be accompanied by quality assurance activities. It is not unusual for developers to spend 40% of the total project time on testing. For life-critical software (e.g. flight control, reactor monitoring), testing can cost 3 to 5 times as much as all other activities combined. The destructive nature of testing requires that the developer discard preconceived notions of the correctness of his/her developed software.

Software Testing Fundamentals

Testing objectives include
1. Testing is a process of executing a program with the intent of finding an error.
2. A good test case is one that has a high probability of finding an as yet undiscovered error.
3. A successful test is one that uncovers an as yet undiscovered error.

Testing should systematically uncover different classes of errors in a minimum amount of time and with a minimum amount of effort. A secondary benefit of testing is that it demonstrates that the software appears to be working as stated in the specifications. The data collected through testing can also provide an indication of the software’s reliability and quality. But testing cannot show the absence of defects; it can only show that software defects are present.

White Box Testing

White box testing is a test case design method that uses the control structure of the procedural design to derive test cases. Test cases can be derived that
1. guarantee that all independent paths within a module have been exercised at least once,
2. exercise all logical decisions on their true and false sides,
3. execute all loops at their boundaries and within their operational bounds, and
4. exercise internal data structures to ensure their validity.

The Nature of Software Defects

Logic errors and incorrect assumptions are inversely proportional to the probability that a program path will be executed. General processing tends to be well understood while special case processing tends to be prone to errors.

We often believe that a logical path is not likely to be executed when it may be executed on a regular basis. Our unconscious assumptions about control flow and data lead to design errors that can only be detected by path testing.

Typographical errors are random.

Basis Path Testing

This method enables the designer to derive a logical complexity measure of a procedural design and use it as a guide for defining a basis set of execution paths. Test cases that exercise the basis set are guaranteed to execute every statement in the program at least once during testing.

Flow Graphs

Flow graphs can be used to represent control flow in a program and can help in the derivation of the basis set. Each flow graph node represents one or more procedural statements. The edges between nodes represent flow of control. An edge must terminate at a node, even if the node does not represent any useful procedural statements. A region in a flow graph is an area bounded by edges and nodes. Each node that contains a condition is called a predicate node. Cyclomatic complexity is a metric that provides a quantitative measure of the logical complexity of a program. It defines the number of independent paths in the basis set and thus provides an upper bound for the number of tests that must be performed.

The Basis Set

An independent path is any path through a program that introduces at least one new set of processing statements (must move along at least one new edge in the path). The basis set is not unique. Any number of different basis sets can be derived for a given procedural design. Cyclomatic complexity, V(G), for a flow graph G is equal to
1. The number of regions in the flow graph.
2. V(G) = E – N + 2 where E is the number of edges and N is the number of nodes.
3. V(G) = P + 1 where P is the number of predicate nodes.
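As a worked example (a small hypothetical flow graph, not one taken from a particular program): a fragment with two IF statements yields a flow graph with N = 6 nodes and E = 7 edges, of which P = 2 are predicate nodes. All three formulas agree:
V(G) = E – N + 2 = 7 – 6 + 2 = 3
V(G) = P + 1 = 2 + 1 = 3
and the flow graph has 3 regions (two bounded regions plus the outer region). So a basis set for this graph contains at most 3 independent paths.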

Deriving Test Cases

1. From the design or source code, derive a flow graph.
2. Determine the cyclomatic complexity of this flow graph.
o Even without a flow graph, V(G) can be determined by counting the number of conditional statements in the code.
3. Determine a basis set of linearly independent paths.
o Predicate nodes are useful for determining the necessary paths.
4. Prepare test cases that will force execution of each path in the basis set.
o Each test case is executed and compared to the expected results.

Automating Basis Set Derivation

The derivation of the flow graph and the set of basis paths is amenable to automation. A software tool to do this can be developed using a data structure called a graph matrix. A graph matrix is a square matrix whose size is equivalent to the number of nodes in the flow graph. Each row and column correspond to a particular node and the matrix corresponds to the connections (edges) between nodes. By adding a link weight to each matrix entry, more information about the control flow can be captured. In its simplest form, the link weight is 1 if an edge exists and 0 if it does not. But other types of link weights can be represented:
• the probability that an edge will be executed,
• the processing time expended during link traversal,
• the memory required during link traversal, or
• the resources required during link traversal.

Graph theory algorithms can be applied to these graph matrices to help in the analysis necessary to produce the basis set.
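The following TSL sketch illustrates the idea on the same hypothetical six-node graph used in the worked example above; it shows only the simplest link weight (1 for an edge), not a full analysis tool:

# graph matrix entries hold link weight 1 for each edge
n = 6;
m[1,2] = 1; m[1,3] = 1; m[2,4] = 1; m[3,4] = 1;
m[4,5] = 1; m[4,6] = 1; m[5,6] = 1;
edges = 0;
for (i = 1; i <= n; i++)
    for (j = 1; j <= n; j++)
        if (m[i,j] == 1)
            edges++;
report_msg("V(G) = " & (edges - n + 2)); # prints V(G) = 3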

Loop Testing

This white box technique focuses exclusively on the validity of loop constructs. Four different classes of loops can be defined:
1. simple loops,
2. nested loops,
3. concatenated loops, and
4. unstructured loops.

Simple Loops

The following tests should be applied to simple loops where n is the maximum number of allowable passes through the loop:
1. skip the loop entirely,
2. only pass once through the loop,
3. m passes through the loop where m < n,
4. n – 1, n, n + 1 passes through the loop.
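For instance, with a hypothetical maximum of n = 10 passes and a typical value m = 5, the pass counts to exercise would be 0 (skip the loop), 1, 2, 5 (m < n), 9 (n – 1), 10 (n), and 11 (n + 1).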

Nested Loops

The testing of nested loops cannot simply extend the technique of simple loops since this would result in a geometrically increasing number of test cases. One approach for nested loops:
1. Start at the innermost loop. Set all other loops to minimum values.
2. Conduct simple loop tests for the innermost loop while holding the outer loops at their minimums. Add tests for out-of-range or excluded values.
3. Work outward, conducting tests for the next loop while keeping all other outer loops at minimums and other nested loops to typical values.
4. Continue until all loops have been tested.

Concatenated Loops

Concatenated loops can be tested as simple loops if each loop is independent of the others. If they are not independent (e.g. the loop counter for one is the loop counter for the other), then the nested approach can be used.

Unstructured Loops

This type of loop should be redesigned, not tested!

Other White Box Techniques

Other white box testing techniques include:
1. Condition testing
o exercises the logical conditions in a program.
2. Data flow testing
o selects test paths according to the locations of definitions and uses of variables in the program.

Black Box Testing

Introduction

Black box testing attempts to derive sets of inputs that will fully exercise all the functional requirements of a system. It is not an alternative to white box testing. This type of testing attempts to find errors in the following categories:
1. incorrect or missing functions,
2. interface errors,
3. errors in data structures or external database access,
4. performance errors, and
5. initialization and termination errors.
Tests are designed to answer the following questions:
1. How is the function’s validity tested?
2. What classes of input will make good test cases?
3. Is the system particularly sensitive to certain input values?
4. How are the boundaries of a data class isolated?
5. What data rates and data volume can the system tolerate?
6. What effect will specific combinations of data have on system operation?
White box testing should be performed early in the testing process, while black box testing tends to be applied during later stages. Test cases should be derived which
1. reduce the number of additional test cases that must be designed to achieve reasonable testing, and
2. tell us something about the presence or absence of classes of errors, rather than an error associated only with the specific test at hand.

Equivalence Partitioning

This method divides the input domain of a program into classes of data from which test cases can be derived. Equivalence partitioning strives to define a test case that uncovers classes of errors and thereby reduces the number of test cases needed. It is based on an evaluation of equivalence classes for an input condition. An equivalence class represents a set of valid or invalid states for input conditions.
Equivalence classes may be defined according to the following guidelines:
1. If an input condition specifies a range, one valid and two invalid equivalence classes are defined.
2. If an input condition requires a specific value, then one valid and two invalid equivalence classes are defined.
3. If an input condition specifies a member of a set, then one valid and one invalid equivalence class are defined.
4. If an input condition is boolean, then one valid and one invalid equivalence class are defined.
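For instance, suppose an input field accepts whole numbers in the range 1 to 99 (guideline 1). A minimal TSL sketch, assuming hypothetical GUI-map names "Order Entry", "Quantity" and "OK":

# one valid class and two invalid classes for the range 1..99
vals[1] = 50;   # valid class: inside the range
vals[2] = 0;    # invalid class: below the range
vals[3] = 100;  # invalid class: above the range
for (i = 1; i <= 3; i++)
{
    set_window("Order Entry", 5);
    edit_set("Quantity", vals[i]);
    button_press("OK");
    # verify here that vals[1] is accepted and the other two are rejected
}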

Boundary Value Analysis

This method leads to a selection of test cases that exercise boundary values. It complements equivalence partitioning since it selects test cases at the edges of a class. Rather than focusing on input conditions solely, BVA derives test cases from the output domain also. BVA guidelines include:
1. For input ranges bounded by a and b, test cases should include values a and b and just above and just below a and b respectively.
2. If an input condition specifies a number of values, test cases should be developed to exercise the minimum and maximum numbers and values just above and below these limits.
3. Apply guidelines 1 and 2 to the output.
4. If internal data structures have prescribed boundaries, a test case should be designed to exercise the data structure at its boundary.
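For example, for an input range bounded by a = 1 and b = 99 (guideline 1), the values to exercise are 0, 1 and 2 (just below, at, and just above a) and 98, 99 and 100 (just below, at, and just above b). The same TSL loop sketched under equivalence partitioning above can drive these six values.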

Cause-Effect Graphing Techniques

Cause-effect graphing is a technique that provides a concise representation of logical conditions and corresponding actions. There are four steps:
1. Causes (input conditions) and effects (actions) are listed for a module and an identifier is assigned to each.
2. A cause-effect graph is developed.
3. The graph is converted to a decision table.
4. Decision table rules are converted to test cases.

Software Testing Interview Questions Part 2

1. What is the difference between CMM and CMMI levels?
A: — CMM applies only to the software industry; it has 18 KPAs (Key Process Areas).
CMMI applies to software, outsourcing, and all other industries; it has 25 process areas.

2. What is scalability testing?
1. Scalability is nothing but how many users the application can handle.

2. Scalability is nothing but the maximum number of users the system can handle.

3. Scalability testing is a subtype of performance testing in which performance requirements for response time, throughput, and/or utilization are tested as load on the SUT is increased over time.

4. As part of scalability testing we test the expandability of the application. In scalability we test 1. application scalability and 2. performance scalability.

Application scalability: testing the possibility of implementing new features in the system or updating existing features of the system. We do this testing with the help of the design document.

Performance scalability: testing how the software performs when subjected to varying loads, to measure and evaluate its performance behavior and its ability to continue functioning properly under different workloads.

–> To check the comfort level of an application in terms of user load, user experience and system tolerance levels
–> To find the point within an application at which, when subjected to increasing workload, it begins to degrade in terms of end-user experience and system tolerance
–> Typical measures: response time, execution time, system resource utilization, and network delays

3. What is the status of a defect when you are performing regression testing?
A: — Fixed status.

4. What is the first test in software testing process?
A) Monkey testing
B) Unit Testing
c) Static analysis
d) None of the above

A: — Unit testing is the first test in the testing process; though it is done by developers after the completion of coding, it is the correct answer.

4. When does testing start? a) Once the requirements are complete b) In the requirements phase?

A: – Once the requirements are complete.

This is static testing. Here, you are supposed to read the documents (requirements), and it is quite a common issue in the software industry that many requirements contradict other requirements. These can also be reported as bugs. However, they will be reviewed before being reported as bugs (defects).

5. What are the parts of QA and QC in the refinement V model?
A: — The V model is a kind of SDLC. The QC (Quality Control) team tests the developed product for quality; it deals only with the product, in both static and dynamic testing. The QA (Quality Assurance) team works on the process and manages for better quality in the process; it deals with (reviews) everything from collecting requirements to delivery.

6. What are the bugs we cannot find in black box testing?
A: — Bugs in the security settings of the pages, or any other internal mistakes made in the code, cannot be found in black box testing.

7. What are Microsoft’s 6 rules?
A: — As far as I know, these rules are used in user interface testing.
They are also called the Microsoft Windows standards. They are:

• GUI objects are aligned in windows
• All defined text is visible on a GUI object
• Labels on GUI objects are capitalized
• Each label includes an underlined letter (mnemonics)
• Each window includes an OK button, a Cancel button, and a System menu

8. What are the steps to test any software through automation tools?
A: — First, segregate the test cases that can be automated. Then prepare test data as per the requirements of those test cases. Write reusable functions for operations used frequently in those test cases. Then prepare the test scripts using those reusable functions, applying loops and conditions wherever necessary. The automation framework followed in the organization should be strictly adhered to throughout the process.

9. What is defect removal efficiency?
A: — The DRE (Defect Removal Efficiency) is the percentage of defects that have been removed during an activity, computed with the equation below. The DRE can also be computed for each software development activity and plotted on a bar graph to show the relative defect removal efficiencies for each activity. Or, the DRE may be computed for a specific task or technique (e.g. design inspection, code walkthrough, unit test, 6 months of operation, etc.).

DRE = (Number of defects removed / Number of defects at start of process) * 100

In terms of defect counts this is often written as

DRE = A / (A + B)

where A = defects found by the testing team and B = defects found by the customer.

If DRE >= 0.8, it is a good product; otherwise not.
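As a worked instance of the A / (A + B) form (with hypothetical counts): if the testing team finds A = 80 defects and the customer later finds B = 20, then DRE = 80 / (80 + 20) = 0.8, which just meets the 0.8 threshold for a good product.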

10. Give an example of a bug that is not reproducible?
A: — A difference in environment.
11. During alpha testing, why are customer people invited?
A: — Because alpha testing is related to acceptance testing; acceptance testing is done in front of the client or customer to obtain their acceptance.

12. What is the difference between adhoc testing and error guessing?
A: — Adhoc testing: performing testing without test data or any documents.

Error guessing: this is a test data selection technique. The selection criterion is to pick values that seem likely to cause errors.

13. What is the difference between a test plan and a test strategy?
A: — Test plan: after completion of SRS study and business requirement gathering, test management concentrates on test planning; this is done by the test lead or project lead.

Test strategy: based on the corresponding testing policy, the quality analyst finalizes the test responsibility matrix. This is done by QA. Both are documents.

14. What is the “V&V” model? Why is it called “V” and not “U”? Also, at what stage is it best to start testing?
A: — It is called V because it looks like a V. The detailed V model is shown below.

SRS ------------------------- Acceptance testing
  HLD (High Level Design) --- System testing
    LLD (Low Level Design) -- Integration testing
      Coding ---------------- Unit testing

There is no such stage for which you wait to start testing.
Testing starts as soon as the SRS document is ready. You can raise defects that are present in the document; this is called verification.

15. What is the difference between the Windows 2000 and Windows XP operating systems?
A: — Windows 2000 and Windows XP are essentially the same operating system (known internally as Windows NT 5.0 and Windows NT 5.1, respectively). Here are some considerations if you’re trying to decide which version to use:

Windows 2000 benefits:

1) Windows 2000 has lower system requirements, and has a simpler interface (no “Styles” to mess with).
2) Windows 2000 is slightly less expensive, and has no product activation.
3) Windows 2000 has been out for a while, and most of the common problems and security holes have been uncovered and fixed.
4) Third-party software and hardware products that aren’t yet XP-compatible may be compatible with Windows 2000; check the manufacturers of your devices and applications for XP support before you upgrade.

Windows XP benefits:

1) Windows XP is somewhat faster than Windows 2000, assuming you have a fast processor and tons of memory (although it will run fine with a 300 MHz Pentium II and 128MB of RAM).
2) The new Windows XP interface is more cheerful and colorful than earlier versions, although the less-cartoonish “Classic” interface can still be used if desired.
3) Windows XP has more bells and whistles, such as Windows Movie Maker, built-in CD writer support, the Internet Connection Firewall, and Remote Desktop Connection.
4) Windows XP has better support for games and comes with more games than Windows 2000.
5) Manufacturers of existing hardware and software products are more likely to add Windows XP compatibility now than Windows 2000 compatibility.

Software Testing Interview Questions Part 3

16. What is bug life cycle?
A: — New: when the tester reports a defect.
Open: when the developer accepts that it is a bug; if the developer rejects the defect, the status is changed to “Rejected”.
Fixed: when the developer makes changes to the code to rectify the bug.
Closed/Reopen: when the tester tests it again. If the expected result shows up, the status is changed to “Closed”; if the problem persists, it is “Reopen”.

17. What is deferred status in defect life cycle?
A: — Deferred status means the developer accepted the bug, but it is scheduled to be rectified in the next build.

18. What is a smoke test?
A: — Testing whether the application performs its basic functionality properly, so that the test team can go ahead with the application.

19. Do you use any automation tool for smoke testing?
A: — Definitely, one can be used.

20. What is Verification and validation?
A: — Verification is static; no code is executed: say, analysis of requirements, etc. Validation is dynamic; code is executed with the scenarios present in the test cases.

21. What is a test plan? Explain its contents.
A: — A test plan is a document which contains the scope for testing the application: what is to be tested, when it is to be tested, and who is to test it.

22. What are the advantages of automation over manual testing?
A: — It saves time, resources, and money.

23. What is adhoc testing?
A: — Adhoc means doing something that is not planned.

24. What is meant by release notes?
A: — It’s a document released along with the product which describes the product. It also lists the bugs that are in deferred status.

25. Scalability testing comes under which type of testing?
A: — Scalability testing comes under performance testing. Load testing and scalability testing are the same.

26. What is the difference between a bug and a defect?
A: — Bug: a deviation from the expected result. Defect: a problem in an algorithm that leads to failure.

A mistake in code is called an error.

Mismatches in the application found by test engineers due to errors in coding are called defects.

A defect accepted by the development team to be solved is called a bug.

27. What is hot fix?
A: — A hot fix is a single, cumulative package that includes one or more files that are used to address a problem in a software product. Typically, hot fixes are made to address a specific customer situation and may not be distributed outside the customer organization.

A bug found at the customer’s site which has high priority.

28. What is the difference between functional test cases and compatibility test cases?
A: — There are no test cases for compatibility testing as such; in compatibility testing we test an application on different hardware and software. If this is wrong, please let me know.

29. What is ACID testing?
A: — ACID testing is related to testing a transaction:
A – Atomicity
C – Consistency
I – Isolation
D – Durability

Mostly this will be done in database testing.

30. What is the main use of preparing a traceability matrix?
A: — To cross-verify the prepared test cases and test scripts against the user requirements, and to monitor the changes and enhancements that occur during the development of the project.

The traceability matrix is prepared in order to cross-check the test cases designed against each requirement, hence giving an opportunity to verify that all the requirements are covered in testing the application.

Software Testing Interview Questions Part 4

31. If we have no SRS or BRS but we have test cases, do you execute the test cases blindly or do you follow any other process?
A: — The test cases will have detailed steps of what the application is supposed to do, so:
1) The functionality of the application is known.

2) In addition, you can refer to the back end, I mean look into the database, to gain more knowledge of the application.

32. How do you execute test cases?
A: — There are two ways:
1. A manual runner tool for manual execution and updating of test status.
2. Automated test case execution by specifying the host name and other automation-pertaining details.

33. What is the difference between retesting and regression testing?

A: — Retesting:

Re-execution of test cases on the same application build with different input values is retesting.

Regression testing:

Re-execution of test cases on a modified build is called regression testing.

34. What is the difference between a bug log and defect tracking?
A: — A bug log is a document which maintains the information of the bug, whereas bug tracking is the process.

35. Who will change the bug status to Deferred?
A: — A bug will be in Open status while the developer is working on it, and Fixed after the developer completes his work; if it is not fixed properly the tester puts it in Reopen, and after the bug is fixed properly it is in Closed state.

The developer changes the status to Deferred.

36. What are smoke testing and user interface testing?

A: — ST:
Smoke testing is non-exhaustive software testing, ascertaining that the most crucial functions of a program work, but not bothering with finer details. The term comes to software testing from a similarly basic type of hardware testing.

UIT:
I did a bit of R&D on this; some say it’s nothing but usability testing: testing to determine the ease with which a user can learn to operate, input, and interpret the outputs of a system or component.

Smoke testing is nothing but checking whether the basic functionality of the build is stable or not;
i.e. if it possesses 70% of the functionality, we say the build is stable.
User interface testing: we check all the fields, whether they exist as per the format; we check spelling, graphics, font sizes, everything present in the window.

37. What are a bug, a defect, an issue, and an error?

A: — Bug: a bug is identified by the tester.
Defect: when the project is received for the analysis phase, some requirements may be missed or misunderstood; most of the time the defect comes with the project itself.
Issue: a client-site error, most of the time.
Error: when anything goes wrong in the project from the development side it is called an error; most of the time this is known by the developer.

Bug: a fault or defect in a system or machine

Defect: an imperfection in a device or machine.

Issue: An issue is a major problem that will impede the progress of the project and cannot be resolved by the project manager and project team without outside help

Error: the deviation of a measurement, observation, or calculation from the truth.

38. What is the difference between functional testing and integration testing?
A: — Functional testing is testing the whole functionality of the system or the application, checking whether it meets the functional specifications.

Integration testing means testing the functionality of integrated modules when two individual modules are integrated; for this we use the top-down approach and the bottom-up approach.

39. What types of testing do you perform in your organization while doing system testing? Give them clearly.

A: — Functional testing
User interface testing
Usability testing
Compatibility testing
Model based testing
Error exit testing
User help testing
Security testing
Capacity testing
Performance testing
Sanity testing
Regression testing
Reliability testing
Recovery testing
Installation testing
Maintenance testing
Accessibility testing, including compliance with:
Americans with Disabilities Act of 1990
Section 508 Amendment to the Rehabilitation Act of 1973
Web Accessibility Initiative (WAI) of the World Wide Web Consortium (W3C)

40. What is the main use of preparing Traceability matrix and explain the real time usage?

A: — A traceability matrix is created by associating requirements with the work products that satisfy them. Tests are associated with the requirements on which they are based and the product tested to meet the requirement.

A traceability matrix is a report from the requirements database or repository.

41. How can you do the following: 1) usability testing, 2) scalability testing?

A: —
UT:
Testing the ease with which users can learn and use a product.

ST:
In a web-testing context, it checks how far the web site’s capability can be improved (scaled up).

PT:
Testing to determine whether the system/software meets the specified portability requirements.

42. What do you mean by positive and negative testing, and what is the difference between them? Can anyone explain with an example?

A: — Positive testing: testing the application functionality with valid inputs and verifying that the output is correct.
Negative testing: testing the application functionality with invalid inputs and verifying the output.

The difference is nothing but how the application behaves when we enter invalid inputs; if it accepts an invalid input, the application functionality is wrong.

Positive test: testing aimed at showing the software works, i.e. with valid inputs. This is also called “test to pass”.
Negative testing: testing aimed at showing the software doesn’t work, also known as “test to fail”. BVA is the best example of negative testing.

43. What is a change request? How do you use it?

A: — A change request is an attribute or part of the defect life cycle.

When you as a tester find a defect and report it to your test lead, he in turn informs the development team. The development team says it is not a defect but an extra implementation, or says it is not part of the requirement; the customer has to pay for it.

In this case the status in your defect report would be Change Request.

I think change requests are controlled by the change control board (CCB). If any changes are required by the client after we start the project, they have to come through that CCB, and the CCB has to approve them. The CCB has full rights to accept or reject based on the project schedule and cost.

44. What is risk analysis? What type of risk analysis did you do in your project?

A: — Risk analysis:
a systematic use of available information to determine how often specified and unspecified events may occur and the magnitude of their likely consequences,

OR

a procedure to identify threats and vulnerabilities, analyze them to ascertain the exposures, and highlight how the impact can be eliminated or reduced.

Types:

1. Quantitative risk analysis
2. Qualitative risk analysis

45. What is an API?

A: — Application Programming Interface.

Software Testing Interview Questions Part 5


46. What is a high severity, low priority bug?

A: — A page that is rarely accessed, or some activity that is performed rarely, but that outputs some important data incorrectly or corrupts the data: this will be a bug of high severity and low priority.

47. If the project is to be released in 3 months, what type of risk analysis do you do in the test plan?

A:– Use risk analysis to determine where testing should be focused. Since it’s rarely possible to test every possible aspect of an application, every possible combination of events, every dependency, or everything that could go wrong, risk analysis is appropriate to most software development projects. This requires judgment skills, common sense, and experience. (If warranted, formal methods are also available.) Considerations can include:

• Which functionality is most important to the project’s intended purpose?
• Which functionality is most visible to the user?
• Which functionality has the largest safety impact?
• Which functionality has the largest financial impact on users?
• Which aspects of the application are most important to the customer?
• Which aspects of the application can be tested early in the development cycle?
• Which parts of the code are most complex, and thus most subject to errors?
• Which parts of the application were developed in rush or panic mode?
• Which aspects of similar/related previous projects caused problems?
• Which aspects of similar/related previous projects had large maintenance expenses?
• Which parts of the requirements and design are unclear or poorly thought out?
• What do the developers think are the highest-risk aspects of the application?
• What kinds of problems would cause the worst publicity?
• What kinds of problems would cause the most customer service complaints?
• What kinds of tests could easily cover multiple functionalities?
• Which tests will have the best high-risk-coverage to time-required ratio?

48. Test cases for IE 6.0?

A: — Test cases for IE 6.0, i.e. Internet Explorer 6.0:
1) First I go for the installation side, meaning: does it work with all versions of Windows, Netscape, or other software? In other words, IE must be checked against all hardware and software parts.
2) Second, go for the text part: all the text parts appear in a frequent and smooth manner.
3) Third, go for the images part: all the images appear in a frequent and smooth manner.
4) URLs must run in a proper way.
5) Suppose some other language is used on a page; then the URL takes the other characters, other than the normal characters.
6) Does it work with cookies frequently or not?
7) Does it work with different scripts like JScript and VBScript?
8) Does HTML code work on it or not?
9) Does troubleshooting work or not?
10) Do all the toolbars work with it or not?
11) If a page has some links, what are the maximum and minimum limits for them?
12) Test installing Internet Explorer 6 with the Norton Protected Recycle Bin enabled.
13) Does it work with the uninstallation process?
14) Last but not least, test the security system of IE 6.0.

49. Where are you involved in the testing life cycle? What types of tests do you perform?

A: — Generally test engineers are involved in the entire test life cycle, i.e. test plan, test case preparation, execution, and reporting. Generally system testing, regression testing, adhoc testing, etc.

50. What is the testing environment in your company? I mean, how does the testing process start?

A: — The testing process goes as follows:
quality assurance unit
quality assurance manager
test lead
test engineer

51. Who prepares the use cases?

A: — In any company except a small one, the business analyst prepares the use cases.
In a small company, the business analyst prepares them along with the team lead.

52. What methodologies have you used to develop test cases?

A: — Generally test engineers use 4 types of methodologies:
1. Boundary value analysis
2. Equivalence partitioning
3. Error guessing
4. Cause-effect graphing

53. Why do we call it a regression test and not a retest?

A: — If we test whether a defect is closed or not, that is retesting. Here we are also checking the impact of the fix; regression means repeated times.

54. Is automated testing better than manual testing? If so, why?

A: — Automated testing and manual testing have advantages as well as disadvantages.
Advantages of automation: it increases the efficiency and speed of the testing process; it is
reliable
and flexible.
Disadvantages:
the tools must be compatible with our development or deployment tools; a lot of time is needed initially; if the requirements are changing continuously, automation is not suitable.
Manual: if the requirements are changing continuously, manual testing is suitable. Once the build is stable with manual testing, only then do we go for automation.
Disadvantages of manual testing:
it needs a lot of time,
and we cannot do some types of testing manually,
e.g. performance testing.

55. What is the exact difference between a product and a project? Give an example.

A: — A project is developed for a particular client, and the requirements are defined by the client. A product is developed for the market, and the requirements are defined by the company itself by conducting a market survey.
Example:
Project: a shirt which we are interested in stitching with a tailor as per our specifications.
Product: a ready-made shirt, where the company decides on particular measurements and makes the product.
Mainframes is a product.
A product has many more versions,
but a project has fewer versions, i.e. depending on change requests and enhancements.

56. Define brainstorming and cause-effect graphing, with examples.

A: — Brainstorming:
A learning technique involving open group discussion intended to expand the range of available ideas.
OR
A meeting to generate creative ideas. At PEPSI Advertising, daily, weekly and bi-monthly brainstorming sessions are held by various work groups within the firm. Our monthly I-Power brainstorming meeting is attended by the entire agency staff.
OR
Brainstorming is a highly structured process to help generate ideas. It is based on the principle that you cannot generate and evaluate ideas at the same time. To use brainstorming, you must first gain agreement from the group to try brainstorming for a fixed interval (e.g. six minutes).

CEG:
A testing technique that aids in selecting, in a systematic way, a high-yield set of test cases that logically relates causes to effects. It has a beneficial side effect in pointing out incompleteness and ambiguities in specifications.

57. Since by using severity you know which bug to solve first, what is the need for priority?

A: — I guess severity reflects the seriousness of the bug, whereas priority refers to which bug should be rectified first. Of course, if the severity is high, the priority is normally the same.

Severity is decided by the tester, whereas priority is decided by developers. Which one needs to be solved first is known through priority, not severity; how serious the bug is, is known through severity.

Severity is nothing but the impact of the bug on the application. Priority is nothing but the importance of resolving the bug. Of course by looking at severity we can judge, but sometimes a high severity bug does not have high priority, and at the same time a high priority bug does not have high severity.
So we need both severity and priority.

58. What do you do if a bug you found is not accepted by the developer and he says it is not reproducible? Note: the developer is at the onsite location.

A: — Once again we check that condition with all reasons, then we attach screenshots with strong reasons, then we explain to the project manager and also explain to the client when they contact us.

Sometimes a bug is not reproducible because of a difference in environment: suppose the development team is using one environment and you are using a different one; in this situation there is a chance of the bug not reproducing. In this situation, check the environment in the baseline documents, that is, the functional documents; if the environment we are using is correct, we raise it as a defect. We also take screenshots and send them along with the test procedure.

59. What is the difference between a three-tier and a two-tier application?

A: — Client/server is a 2-tier application. In this, the front end or client is connected to the
database server through a Data Source Name; the front end is the monitoring level.

Web-based architecture is a 3-tier application. In this, the browser is connected to the web server through TCP/IP, and the web server is connected to the database server; the browser is the monitoring level. In general, black box testers concentrate on the monitoring level of any type of application.

All client/server applications are 2-tier architectures.
In these architectures, all the business logic is stored in the clients and the data is stored in the servers. So if the user requests anything, the business logic is performed at the client, and the data is retrieved from the server (DB server). The problem is that if any business logic changes, then we need to change the logic at each and every client. The best example is a supermarket chain with branches in the city. At each branch there are clients, so the business logic is stored in the clients, but the actual data is stored in the servers. If I want to give a discount on some items, I need to change the business logic, and for this I need to go to each branch and change the business logic at each client. This is the disadvantage of client/server architecture.

So 3-tier architecture came into the picture:

Here the business logic is stored on one server, and all the clients are dumb terminals. If a user requests anything, the request is first sent to the server; the server brings the data from the DB server and sends it to the clients. This is the flow of 3-tier architecture.

For the same example, if I want to give a discount, all my business logic is on the server, so I need to change it in one place, not at each client. This is the main advantage of 3-tier architecture.

Software Testing Interview Questions Part 6

60. What is impact analysis? How do you do impact analysis in your project?

A: — Impact analysis means that when we are doing regression testing, we check that the bug fixes are working properly, and that by fixing these bugs the other components have not been disturbed and still work as per their requirements.

61. How do you test a website by manual testing?

A: — Web Testing
During testing the websites the following scenarios should be considered.
Functionality
Performance
Usability
Server side interface
Client side compatibility
Security

Functionality:
In testing the functionality of the web sites the following should be tested.
Links
Internal links
External links
Mail links
Broken links
Forms
Field validation
Functional chart
Error message for wrong input
Optional and mandatory fields
Database
Testing will be done on the database integrity.
Cookies
Testing will be done on the client system side, on the temporary internet files.

Performance:
Performance testing can be applied to understand the web site’s scalability, or to benchmark the performance in the environment of third party products such as servers and middleware for potential purchase.

Connection speed:
Tested over various Networks like Dial up, ISDN etc

Load
What is the no. of users per time?
Check for peak loads & how system behaves.
Large amount of data accessed by user.

Stress
Continuous load
Performance of memory, cpu, file handling etc.

Usability:
Usability testing is the process by which the human-computer interaction characteristics of a system are measured, and weaknesses are identified for correction. Usability can be defined as the degree to which a given piece of software assists the person sitting at the keyboard to accomplish a task, as opposed to becoming an additional impediment to such accomplishment. The broad goal of usable systems is often assessed using several criteria:
Ease of learning
Navigation
Subjective user satisfaction
General appearance

Server side interface:
In web testing the server side interface should be tested.
This is done by Verify that communication is done properly.
Compatibility of server with software, hardware, network and database should be tested.
The client side compatibility is also tested in various platforms, using various browsers etc.

Security:
The primary reason for testing the security of a web site is to identify potential vulnerabilities and subsequently repair them.
The following types of testing are described in this section:
Network Scanning
Vulnerability Scanning
Password Cracking
Log Review
Integrity Checkers
Virus Detection

Performance Testing
Performance testing is a rigorous usability evaluation of a working system under realistic conditions to identify usability problems and to compare measures such as success
rate, task time and user satisfaction with requirements. The goal of performance testing is not to find bugs, but to eliminate bottlenecks and establish a baseline for future regression testing.

To conduct performance testing is to engage in a carefully controlled process of measurement and analysis. Ideally, the software under test is already stable enough so that this process can proceed smoothly. A clearly defined set of expectations is essential for meaningful performance testing.
For example, for a Web application, you need to know at least two things:
expected load in terms of concurrent users or HTTP connections
acceptable response time

Load testing:
Load testing is usually defined as the process of exercising the system under test by feeding it the largest tasks it can operate with. Load testing is sometimes called volume testing, or longevity/endurance testing
Examples of volume testing:
testing a word processor by editing a very large document
testing a printer by sending it a very large job
testing a mail server with thousands of users’ mailboxes
Examples of longevity/endurance testing:
testing a client-server application by running the client in a loop against the server over an extended period of time

Goals of load testing:
Expose bugs that do not surface in cursory testing, such as memory management bugs, memory leaks, buffer overflows, etc. Ensure that the application meets the performance baseline established during Performance testing. This is done by running regression tests against the application at a specified maximum load.
Although performance testing and load testing can seem similar, their goals are different. On one hand, performance testing uses load testing techniques and tools for measurement and benchmarking purposes and uses various load levels, whereas load testing operates at a predefined load level, the highest load that the system can accept while still functioning properly.

Stress testing:
Stress testing is a form of testing that is used to determine the stability of a given system or entity. This is designed to test the software with abnormal situations. Stress testing attempts to find the limits at which the system will fail through abnormal quantity or frequency of inputs.
Stress testing tries to break the system under test by overwhelming its resources or by taking resources away from it (in which case it is sometimes called negative testing).
The main purpose behind this madness is to make sure that the system fails and recovers gracefully — this quality is known as recoverability.
The point is not simply to break the system, but to observe how the system reacts to failure. Stress testing observes the following.
Does it save its state or does it crash suddenly?
Does it just hang and freeze or does it fail gracefully?
Is it able to recover from the last good state on restart?

Etc.

Compatibility Testing
A test to ensure compatibility of an application or web site with different browsers, operating systems and hardware platforms. Different versions, configurations, display resolutions, and Internet connection speeds can all impact the behavior of the product and introduce costly and embarrassing bugs. We test for compatibility using real test environments; that is, testing how the system performs in a particular software, hardware or network environment. Compatibility testing can be performed manually or can be driven by an automated functional or regression test suite. The purpose of compatibility testing is to reveal issues related to the product’s interaction with other software as well as hardware. The product’s compatibility is evaluated by first identifying the hardware/software/browser components that the product is designed to support. Then a hardware/software/browser matrix is designed that indicates the configurations on which the product will be tested. Then, with input from the client, a testing script is designed that will be sufficient to evaluate compatibility between the product and the matrix. Finally, the script is executed against the matrix, and any anomalies are investigated to determine exactly where the incompatibility lies.
Some typical compatibility tests include testing your application:
On various client hardware configurations
Using different memory sizes and hard drive space
On various Operating Systems
In different network environments
With different printers and peripherals (i.e. zip drives, USBs, etc.)

62. Which comes first, test strategy or test plan?

A: — The test strategy comes first, as this is the high-level document; the approach for the testing starts from the test strategy, and then, based on it, the test lead prepares the test plan.

63. What is the difference between a web-based application and a client/server application from a tester's point of view?

A: — From the tester's point of view:
1) A web-based application (WBA) is a 3-tier application: browser, back end and server.
A client/server application (CSA) is a 2-tier application: front end, back end.
2) In a WBA the tester tests for script errors, like JavaScript or VBScript errors, shown on the page. In a CSA the tester does not test for any script errors.
3) In a WBA, once a change is made it is reflected on every machine, so the tester has less work to do. In a CSA the application needs to be installed every time, hence it is possible that some machine has a problem, for which hardware testing as well as software testing is needed.

63. What is the significance of doing regression testing?

A: — To check the bug fixes, and that a fix does not disturb other functionality.

To ensure that newly added functionality, existing modified functionality, or a developer-fixed bug does not give rise to any new bug or other side effect. This is called a regression test, and it ensures that already PASSED test cases do not give rise to any new bug.

64. What are the different ways to check a date field on a website?

A: — There are different ways:
1) You can check the field width for minimum and maximum.
2) If the field only takes numeric values, check that it accepts only numeric values and no other type.
3) If it takes the date or time, check for other formats.
4) In the same way as numeric, you can check it for character, alphanumeric, and so on.
5) Most importantly, if you click and hit the Enter key, sometimes the page may give a JavaScript error; that is a big fault on the page.
6) Check the field for a null value.
Etc.

The date field can be checked in different ways. Positive testing: first we enter the date in the given format.

Negative testing: we enter the date in an invalid format; suppose we enter a date like 30/02/2006, it should display an error message. We also check numeric versus text input.
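A short TSL sketch of the positive and negative checks described above (the window and field names are hypothetical, and the field is assumed to expect DD/MM/YYYY):

# drive a date field with one valid and two invalid values
dates[1] = "15/08/2006"; # valid date: should be accepted
dates[2] = "30/02/2006"; # invalid day for February: expect an error message
dates[3] = "ab/cd/efgh"; # non-numeric: expect an error message
for (i = 1; i <= 3; i++)
{
    set_window("Booking", 5);
    edit_set("Travel_Date", dates[i]);
    button_press("Submit");
    # verify acceptance of dates[1] and rejection of the others
}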

Software Testing Types

WHAT KINDS OF TESTING SHOULD BE CONSIDERED?

1. Black box testing: not based on any knowledge of internal design or code. Tests are based on requirements and functionality.
2. White box testing: based on knowledge of the internal logic of an application’s code. Tests are based on coverage of code statements, branches, paths, and conditions.
3. Unit testing: the most ‘micro’ scale of testing; to test particular functions or code modules. Typically done by the programmer and not by testers, as it requires detailed knowledge of the internal program design and code. Not always easily done unless the application has a well-designed architecture with tight code; may require developing test driver modules or test harnesses.
4. Incremental integration testing: continuous testing of an application as new functionality is added; requires that various aspects of an applications functionality be independent enough to work separately before all parts of the program are completed, or that test drivers be developed as needed; done by programmers or by testers.
6. Integration testing: testing of combined parts of an application to determine if they function together correctly; the ‘parts’ can be code modules, individual applications, or client and server applications on a network. This type of testing is especially relevant to client/server and distributed systems.
7. Functional testing: black-box type testing geared to the functional requirements of an application; testers should do this type of testing. This does not mean that programmers should not check that their code works before releasing it (which of course applies to any stage of testing).
8. System testing: black-box type testing that is based on the overall requirements specifications; covers all combined parts of the system.
9. End to end testing: similar to system testing; the ‘macro’ end of the test scale; involves testing of a complete application environment in a situation that mimics real-world use, such as interacting with database, using network communications, or interacting with other hardware, applications, or systems if appropriate.
10. Sanity testing: typically an initial testing effort to determine if a new software version is performing well enough to accept it for a major testing effort. For example, if the new software is crashing systems every 5 minutes, it may not be in a sound enough condition to warrant further testing in its current state.
11. Regression testing: re-testing after fixes or modifications of the software or its environment. It can be difficult to determine how much re-testing is needed, especially near the end of the development cycle. Automated testing tools can be especially useful for this type of testing.
12. Acceptance testing: final testing based on specifications of the end-user or customer, or based on use by end users/customers over some limited period of time.
13. Load testing: testing an application under heavy loads, such as testing of a web site under a range of loads to determine at what point the system’s response time degrades or fails.
14. Stress testing: term often used interchangeably with ‘load’ and ‘performance’ testing. Also used to describe such tests as system functional testing while under unusually heavy loads, heavy repetition of certain actions or inputs, input of large numerical values, large complex queries to a database system, etc.
15. Performance testing: term often used interchangeably with ‘stress’ and ‘load’ testing. Ideally ‘performance’ testing (and any other ‘type’ of testing) is defined in requirements documentation or QA or test plans.
16. Usability testing: testing for ‘user-friendliness’. Clearly this is subjective, and will depend on the targeted end-user or customer. User interviews, surveys, video recording of user sessions, and other techniques can be used. Programmers and testers are usually not appropriate as usability testers.
17. Install/uninstall testing: testing of full, partial, or upgrade install/uninstall processes.
18. Recovery testing: testing how well a system recovers from crashes, hardware failures or other catastrophic problems.
19. Security testing: testing how well system protects against unauthorized internal or external access, damage, etc, any require sophisticated testing techniques.
20. Compatibility testing: testing how well software performs in a particular hardware/software/operating/system/network/etc environment.
21. Exploratory testing: often taken to mean a creative, informal software test that is not based on formal test plans of test cases; testers may be learning the software as they test it.
22. Ad-hoc testing: similar to exploratory testing, but often taken to mean that the testers have significant understanding of the software testing it.
23. User acceptance testing: determining if software is satisfactory to an end-user or customer.
24. Comparison testing: comparing software weakness and strengths to competing products.
25. Alpha testing: testing of an application when development is nearing completion; minor design changes may still be made as a result of such testing. Typically done by end-users or others, not by programmers or testers.
26. Beta testing: testing when development and testing are essentially completed and final bugs and problems need to be found before final release. Typically done by end-users or others, not by programmers or testers.
27. Mutation testing: method for determining if a set of test data or test cases is useful, by deliberately introducing various code changes (‘bugs’) and retesting with the original test data/cases to determine if the ‘bugs’ are detected proper implementation requires large computational resources.
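
To make the mutation idea concrete, here is a minimal, tool-agnostic sketch in Python; the function, the seeded ‘mutant’, and the test data are all hypothetical:

    # Does our test data detect a deliberately seeded bug?
    def original(a, b):
        return a + b            # correct implementation

    def mutant(a, b):
        return a - b            # seeded 'bug': a mutated operator

    test_data = [(1, 1, 2), (2, 3, 5), (0, 0, 0)]

    def suite_detects(fn):
        # The suite 'detects' a mutant if at least one case fails against it.
        return any(fn(a, b) != expected for a, b, expected in test_data)

    print(suite_detects(original))  # False: all cases pass on the correct code
    print(suite_detects(mutant))    # True: the seeded bug is caught

A real mutation-testing run automates this over many generated mutants, which is where the large computational cost comes from.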

Difference between client server testing and web server testing.
Web systems are one type of client/server. The client is the browser; the server is whatever is on the back end (database, proxy, mirror, etc.). This differs from so-called “traditional” client/server in a few ways, but both systems are a type of client/server. There is a certain client that connects via some protocol with a server (or set of servers).

Also understand that, strictly speaking and based on how the question is worded, “testing a Web server” is simply testing the functionality and performance of the Web server itself. (For example, I might test if HTTP Keep-Alives are enabled and if that works. Or I might test if the logging feature is working. Or I might test certain filters, like ISAPI. Or I might test some general characteristics such as the load the server can take.) In the case of “client server testing”, as you have worded it, you might be doing the same general things to some other type of server, such as a database server. Also note that in some cases you can test the server directly, and other times you can test it via the interaction of a client.

You can also test connectivity in both. (Any time you have a client and a server there has to be connectivity between them, or the system would be less than useful so far as I can see.) On the Web you are looking at HTTP protocols and perhaps FTP, depending upon your site and whether your server is configured for FTP connections, as well as general TCP/IP concerns. In a “traditional” client/server you may be looking at sockets, Telnet, NNTP, etc.

What’s the difference between load and stress testing?

One of the most common but unfortunate misuses of terminology is treating “load testing” and “stress testing” as synonymous. The consequence of this ignorant semantic abuse is usually that the system is neither properly “load tested” nor subjected to a meaningful stress test.

1. Stress testing is subjecting a system to an unreasonable load while denying it the resources (e.g., RAM, disc, MIPS, interrupts, etc.) needed to process that load. The idea is to stress a system to the breaking point in order to find bugs that will make that break potentially harmful. The system is not expected to process the overload without adequate resources, but to behave (e.g., fail) in a decent manner (e.g., not corrupting or losing data). Bugs and failure modes discovered under stress testing may or may not be repaired depending on the application, the failure mode, consequences, etc. The load (incoming transaction stream) in stress testing is often deliberately distorted so as to force the system into resource depletion.

2. Load testing is subjecting a system to a statistically representative (usually) load. The two main reasons for using such loads are in support of software reliability testing and in performance testing. The term “load testing” by itself is too vague and imprecise to warrant use. For example, do you mean “representative load,” “overload,” “high load,” etc.? In performance testing, load is varied from a minimum (zero) to the maximum level the system can sustain without running out of resources or having transactions suffer (application-specific) excessive delay.

3. A third use of the term is as a test whose objective is to determine the maximum sustainable load the system can handle. In this usage, “load testing” is merely testing at the highest transaction arrival rate in performance testing.
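
To illustrate the distinction, the sketch below drives a stand-in process() function at fixed arrival rates; process(), the rates, and the durations are hypothetical stand-ins for a real system and a real workload:

    import time

    def process(txn):
        time.sleep(0.001)       # stand-in for the system under test

    def run_at_rate(rate_per_sec, seconds):
        # Drive process() at a fixed arrival rate and report mean latency.
        latencies, interval = [], 1.0 / rate_per_sec
        end = time.time() + seconds
        while time.time() < end:
            start = time.time()
            process("txn")
            latencies.append(time.time() - start)
            time.sleep(max(0.0, interval - (time.time() - start)))
        return sum(latencies) / len(latencies)

    # Load test: representative rates, watching how latency degrades.
    for rate in (10, 50, 100):
        print(rate, run_at_rate(rate, 2))

    # A stress test would instead push far beyond the expected maximum rate
    # (or starve the process of resources) and check that the system fails
    # decently: clean errors, no corrupted or lost data.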

Q. What is the difference between client server and Web Testing?
Vijay: Well Srividya, I would like to add one more testing type, i.e. Desktop Testing, to this discussion. So now we have three testing types: Desktop application testing, Client server application testing and Web application testing.

Each one differs in the environment in which it is tested, and you progressively lose control over that environment as you move from desktop to web applications.

Desktop applications run on personal computers and workstations, so when you test a desktop application you are focusing on a specific environment. You will test the complete application broadly in categories like GUI, functionality, load, and backend (i.e. DB).

In a client-server application you have two different components to test. The application is loaded on the server machine, while the application exe is installed on every client machine. You will test broadly in categories like GUI on both sides, functionality, load, client-server interaction, and backend. This environment is mostly used in intranet networks. You are aware of the number of clients and servers and their locations in the test scenario.

Web applications are a bit different and more complex to test, as the tester doesn’t have that much control over the application. The application is loaded on a server whose location may or may not be known, and no exe is installed on the client machine; you have to test it on different web browsers. Web applications are supposed to be tested on different browsers and OS platforms, so broadly a web application is tested mainly for browser compatibility and operating system compatibility, error handling, static pages, backend testing and load testing.

I think this gives an idea of all three testing environments. Keep in mind that even though differences exist among these three environments, the basic quality assurance and testing principles remain the same and apply to all.

Categories: Kinds of Testing

Very Basics Of Testing

February 26, 2008 1 comment

TESTING

Software testing is a critical element of software quality assurance and represents the ultimate review of specification, design and coding. Testing is the exposure of the system to trial input to see whether it produces correct output.

Testing Phases:

Software testing phases include the following:

Test activities are determined and test data selected.

The test is conducted and test results are compared with the expected results.

There are various types of testing:

Unit Testing:

            Unit testing is essentially for the verification of the code produced during the coding phase, and the goal is to test the internal logic of the module/program. In the Generic code project, unit testing is done during the coding phase of data entry forms to check whether the functions are working properly or not. In this phase all the drivers are tested to verify whether they are rightly connected or not.

Integration Testing:

            All the tested modules are combined into subsystems, which are then tested. The goal is to see if the modules are properly integrated, with the emphasis on testing the interfaces between the modules. In the Generic code project, integration testing is done mainly on the table creation module and the insertion module.

 

System Testing: 

                 It is mainly used to verify whether the software meets its requirements. The reference document for this process is the requirements document.

Acceptance Testing:

          It is performed with realistic data of the client to demonstrate that the software is working satisfactorily. In the Generic code project, testing is done to check whether the creation of tables and the respective data entry work successfully or not.

Testing Methods: 

            Testing is a process of executing a program to find errors. If testing is conducted successfully, it will uncover errors in the software. Testing can be done in two ways:

 White Box Testing: 

     It is a test case design method that uses the control structures of the procedural design to derive test cases. Using this method a software engineer can derive the following test cases:

     Exercise all the logical decisions on both their true and false sides. Execute all loops at their boundaries and within their operational bounds. Exercise the internal data structures to assure their validity.
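
As a small illustration of deriving cases from code structure, consider this hypothetical Python unit; the function and the chosen cases are invented for the example:

    # Hypothetical unit with one decision and one loop.
    def classify(values, limit):
        total = 0
        for v in values:          # loop: exercise 0, 1, and many iterations
            total += v
        if total > limit:         # decision: exercise both true and false sides
            return "over"
        return "within"

    # White-box cases chosen from the code structure, not from the spec:
    assert classify([], 10) == "within"       # loop executed zero times
    assert classify([5], 10) == "within"      # loop executed once, branch false
    assert classify([5, 6], 10) == "over"     # several iterations, branch true
    assert classify([10], 10) == "within"     # boundary: total equals limit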

 Black Box Testing: 

            It is a test case design method based on the functional requirements of the software. It will help a software engineer to derive sets of input conditions that will exercise all the functional requirements of the program. Black box testing attempts to find errors in the following categories:

Incorrect or missing functions

Interface errors

Errors in data structures

Performance errors

Initialization and termination errors

By black box testing we derive a set of test cases that satisfy the following criteria:

            Test cases that reduce, by a count that is greater than one, the number of additional test cases that must be designed to achieve reasonable testing.

         Test cases that tell us something about the presence or absence of classes of errors, rather than errors associated only with the specific test at hand.
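
A minimal sketch of deriving black-box cases from a requirement alone; the age-range requirement and the accept_age function are hypothetical:

    # Hypothetical requirement: the system accepts ages from 18 to 60 inclusive.
    def accept_age(age):
        return 18 <= age <= 60

    # Black-box cases derived from the requirement alone, with no knowledge
    # of the code: boundaries plus one representative per equivalence class.
    cases = [
        (17, False),  # just below the valid range
        (18, True),   # lower boundary
        (60, True),   # upper boundary
        (61, False),  # just above the valid range
        (35, True),   # representative of the valid class
        (-5, False),  # representative of the invalid class
    ]
    for age, expected in cases:
        assert accept_age(age) == expected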

TEST APPROACH:

  Testing can be done in two ways:

        Bottom up approach

        Top down approach

 Bottom up approach:        

 Testing can be performed starting from the smallest and lowest-level modules and proceeding one at a time. For each module in bottom-up testing, a short program (a driver) executes the module and provides the data it needs, so that the module is asked to perform the way it will when embedded within the larger system. When the bottom-level modules are tested, attention turns to those on the next level that use the lower-level ones; they are tested individually and then linked with the previously examined lower-level modules.

 Top down approach:     

          This type of testing starts from the upper-level modules. Since the detailed activities usually performed in the lower-level routines are not provided, stubs are written. A stub is a module shell called by an upper-level module that, when reached properly, returns a message to the calling module indicating that proper interaction occurred. No attempt is made to verify the correctness of the lower-level module.
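
The following Python sketch shows both shells side by side: a short ‘driver’ program exercising a low-level module (bottom-up), and a ‘stub’ standing in for an unwritten lower-level routine (top-down). All names and values are hypothetical:

    # Bottom-up: a short 'driver' program exercises a low-level module directly.
    def compute_tax(amount):                  # low-level module under test
        return round(amount * 0.08, 2)

    def driver():                             # feeds the module the data it needs
        for amount in (0, 10.0, 99.99):
            print("compute_tax(%s) -> %s" % (amount, compute_tax(amount)))

    # Top-down: a 'stub' stands in for an unwritten lower-level routine and
    # merely reports that the interaction occurred.
    def fetch_rate_stub(region):
        print("stub reached with region=%r" % region)
        return 0.08                           # canned value; logic not verified

    def price_with_tax(amount, region):       # upper-level module under test
        return amount * (1 + fetch_rate_stub(region))

    driver()
    print(price_with_tax(100, "EU"))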

Testing Overview

Purpose of Testing

Testing accomplishes a variety of things, but most importantly it measures the quality of the software you are developing. This view presupposes there are defects in your software waiting to be discovered, and this view is rarely disproved or even disputed. Several factors contribute to the importance of making testing a high priority of any software development effort. These include:

  • Reducing the cost of developing the program. Minimal savings that might occur in the early stages of the development cycle by delaying testing efforts are almost certainly bound to increase development costs later. Common estimates indicate that a problem that goes undetected and unfixed until a program is actually in operation can be 40 to 100 times more expensive to resolve than resolving the problem early in the development cycle.
  • Ensuring that your application behaves exactly as you explain to the user. For the vast majority of programs, unpredictability is the least desirable consequence of using an application.
  • Reducing the total cost of ownership. By providing software that looks and behaves as shown in your documentation, your customers require fewer hours of training and less support from product experts.
  • Developing customer loyalty and word-of-mouth market share. Finding success with a program that offers the kind of quality that only thorough testing can provide is much easier than trying to build a customer base on buggy and defect-riddled code.

Organize the Testing Effort

The earlier in the development cycle that testing becomes part of the effort the better. Planning is crucial to a successful testing effort, in part because it has a great deal to do with setting expectations. Considering budget, schedule, and performance in test plans increases the likelihood that testing does take place and is effective and efficient. Planning also ensures tests are not forgotten or repeated unless necessary for regression testing.

Requirements-Based Testing

The requirements section of the software specification does more than set benchmarks and list features. It also provides the basis for all testing on the product. After all, testing generally identifies defects that create, cause, or allow behavior not expected in the software based on descriptions in the specification; thus, the test team should be involved in the specification-writing process. Specification writers should maintain the following standards when presenting requirements:

  • All requirements should be unambiguous and interpretable only one way.
  • All requirements must be testable in a way that ensures the program complies.
  • All requirements should be binding because customers demand them.

You should begin designing test cases as the specification is being written. Analyze each specification from the viewpoint of how well it supports the development of test cases. The actual exercise of developing a test case forces you to think more critically about your specifications.

Develop a Test Plan

The test plan outlines the entire testing process and includes the individual test cases. To develop a solid test plan, you must systematically explore the program to ensure coverage is thorough, but not unnecessarily repetitive. A formal test plan establishes a testing process that does not depend upon accidental, random testing. Testing, like development, can easily become a task that perpetuates itself. As such, the application specifications, and subsequently the test plan, should define the minimum acceptable quality to ship the application.

Test Plan Approaches: Waterfall versus Evolutionary

Two common approaches to testing are the waterfall approach and the evolutionary approach.

The waterfall approach is a traditional approach to testing that descends directly from the development team, in which each person works in phases, from requirements analysis to various types of design and specification, to coding, final testing, and release. For the test team, this means waiting for a final spec and then following the pattern set by development. A significant disadvantage of this approach is that it eliminates the opportunity for testing to identify problems early in the process; therefore, it is best used only on small projects of limited complexity.

An alternative is the evolutionary approach, in which you develop a modular piece (or unit) of an application, test it, fix it, feel somewhat satisfied with it, and then add another small piece that adds functionality. You then test the two units as an integrated component, increasing the complexity as you proceed. Some of the advantages of this approach are as follows:

  • You have low-cost opportunities to reappraise requirements and refine the design, as you understand the application better.
  • You are constantly delivering a working, useful product. If you are adding functionality in priority order, you could stop development at any time and know that the most important work is completed.
  • Rather than trying to develop one huge test plan, you can start with small, modular pieces of what will become part of the large, final test plan. In the interim, you can use the smaller pieces to find bugs.
  • You can add new sections to the test plan or go into depth in new areas, and use each part.

The range of the arguments associated with different approaches to testing is very large and well beyond the scope of this documentation. If the suggestions here do not seem to fit your project, you may want to do further research.

Optimization

A process closely related to testing is optimization. Optimization is the process by which bottlenecks are identified and removed by tuning the software, the hardware, or both. The optimization process consists of four key phases: collection, analysis, configuration, and testing. In the first phase of optimizing an application, you need to collect data to determine the baseline performance. Then by analyzing this data you can develop theories that identify potential bottlenecks. After making and documenting adjustments in configuration or code, you must repeat the initial testing and determine if your theories proved true. Without baseline performance data, it is impossible to determine if your modifications helped or hindered your application.

Unit Testing

The primary goal of unit testing is to take the smallest piece of testable software in the application, isolate it from the remainder of the code, and determine whether it behaves exactly as you expect. Each unit is tested separately before integrating them into modules to test the interfaces between modules. Unit testing has proven its value in that a large percentage of defects are identified during its use.

The most common approach to unit testing requires drivers and stubs to be written. The driver simulates a calling unit and the stub simulates a called unit. The investment of developer time in this activity sometimes results in demoting unit testing to a lower level of priority, and that is almost always a mistake. Even though the drivers and stubs cost time and money, unit testing provides some undeniable advantages. It allows for automation of the testing process, reduces difficulties of discovering errors contained in more complex pieces of the application, and test coverage is often enhanced because attention is given to each unit.

For example, if you have two units and decide it would be more cost effective to glue them together and initially test them as an integrated unit, an error could occur in a variety of places:

  • Is the error due to a defect in unit 1?
  • Is the error due to a defect in unit 2?
  • Is the error due to defects in both units?
  • Is the error due to a defect in the interface between the units?
  • Is the error due to a defect in the test?

Finding the error (or errors) in the integrated module is much more complicated than first isolating the units, testing each, then integrating them and testing the whole. Drivers and stubs can be reused, so the constant changes that occur during the development cycle can be retested frequently without writing large amounts of additional test code. In effect, this reduces the cost of writing the drivers and stubs on a per-use basis, and the cost of retesting is better controlled.
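
As a minimal sketch of testing one unit in isolation, here is a hypothetical function exercised with Python's standard unittest module:

    import unittest

    def parse_amount(text):
        # Smallest testable unit: convert a string like "12.50" to cents.
        return int(round(float(text) * 100))

    class ParseAmountTest(unittest.TestCase):
        # The unit is tested in isolation: any failure here is in
        # parse_amount itself, not in an interface or neighbouring unit.
        def test_whole_number(self):
            self.assertEqual(parse_amount("12"), 1200)

        def test_fraction(self):
            self.assertEqual(parse_amount("12.50"), 1250)

        def test_rejects_garbage(self):
            with self.assertRaises(ValueError):
                parse_amount("not a number")

    if __name__ == "__main__":
        unittest.main()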

Integration Testing

Integration testing is a logical extension of unit testing. In its simplest form, two units that have already been tested are combined into a component and the interface between them is tested. A component, in this sense, refers to an integrated aggregate of more than one unit. In a realistic scenario, many units are combined into components, which are in turn aggregated into even larger parts of the program. The idea is to test combinations of pieces and eventually expand the process to test your modules with those of other groups. Eventually all the modules making up a process are tested together. Beyond that, if the program is composed of more than one process, they should be tested in pairs rather than all at once.

Integration testing identifies problems that occur when units are combined. By using a test plan that requires you to test each unit and ensure the viability of each before combining units, you know that any errors discovered when combining units are likely related to the interface between units. This method reduces the number of possibilities to a far simpler level of analysis.

You can do integration testing in a variety of ways, but the following are three common strategies:

  • The top-down approach to integration testing requires the highest-level modules be tested and integrated first. This allows high-level logic and data flow to be tested early in the process and it tends to minimize the need for drivers. However, the need for stubs complicates test management, and low-level utilities are tested relatively late in the development cycle. Another disadvantage of top-down integration testing is its poor support for early release of limited functionality.
  • The bottom-up approach requires the lowest-level units be tested and integrated first. These units are frequently referred to as utility modules. By using this approach, utility modules are tested early in the development process and the need for stubs is minimized. The downside, however, is that the need for drivers complicates test management and high-level logic and data flow are tested late. Like the top-down approach, the bottom-up approach also provides poor support for early release of limited functionality.
  • The third approach, sometimes referred to as the umbrella approach, requires testing along functional data and control-flow paths. First, the inputs for functions are integrated in the bottom-up pattern discussed above. The outputs for each function are then integrated in the top-down manner. The primary advantage of this approach is the degree of support for early release of limited functionality. It also helps minimize the need for stubs and drivers. The potential weaknesses of this approach are significant, however, in that it can be less systematic than the other two approaches, leading to the need for more regression testing.

Regression Testing

Any time you modify an implementation within a program, you should also do regression testing. You can do so by rerunning existing tests against the modified code to determine whether the changes break anything that worked prior to the change, and by writing new tests where necessary. Adequate coverage without wasting time should be a primary consideration when conducting regression tests. Try to spend as little time as possible doing regression testing without reducing the probability that you will detect new failures in old, already tested code. Some strategies and factors to consider during this process include the following:

  • Test fixed bugs promptly. The programmer might have handled the symptoms but not have gotten to the underlying cause.
  • Watch for side effects of fixes. The bug itself might be fixed but the fix might create other bugs.
  • Write a regression test for each bug fixed (a minimal sketch follows this list).
  • If two or more tests are similar, determine which is less effective and get rid of it.
  • Identify tests that the program consistently passes and archive them.
  • Focus on functional issues, not those related to design.
  • Make changes (small and large) to data and find any resulting corruption.
  • Trace the effects of the changes on program memory.
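
Here is the kind of regression test the third point suggests, sketched with Python's unittest; the bug ID and the fixed function are hypothetical:

    import unittest

    def normalize_name(name):
        # Fixed code: hypothetical bug #1042 was a crash on empty input.
        if not name:
            return ""
        return name.strip().title()

    class RegressionTests(unittest.TestCase):
        def test_bug_1042_empty_input(self):
            # Regression test for the fixed bug: empty input must not crash.
            self.assertEqual(normalize_name(""), "")

        def test_bug_1042_no_side_effects(self):
            # Watch for side effects of the fix: normal inputs still work.
            self.assertEqual(normalize_name("  ada lovelace "), "Ada Lovelace")

    if __name__ == "__main__":
        unittest.main()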

Building a Library

The most effective approach to regression testing is based on developing a library of tests made up of a standard battery of test cases that can be run every time you build a new version of the program. The most difficult aspect involved in building a library of test cases is determining which test cases to include. The most common suggestion from authorities in the field of software testing is to avoid spending excessive amounts of time trying to decide, and err on the side of caution. Automated tests, as well as test cases involving boundary conditions and timing, almost definitely belong in your library. Some software development companies include only tests that have actually found bugs. The problem with that rationale is that the particular bug may have been found and fixed in the distant past.

Periodically review the regression test library to eliminate redundant or unnecessary tests; do this about every third testing cycle. Duplication is quite common when more than one person is writing test code. An example that causes this problem is the concentration of tests that often develops when a bug, or variants of it, is particularly persistent and present across many cycles of testing. Numerous tests might be written and added to the regression test library. These multiple tests are useful for fixing the bug, but when all traces of the bug and its variants are eliminated from the program, select the best of the tests associated with the bug and remove the rest from the library.

Test Management

  • Test Plans
  • Test Script Planner
  • Test Scripts
  • Test Execution Results
  • Defect Reports
Categories: Test Management Tags:

Software Testing Life Cycle

Software Testing Life Cycle:

The test development life cycle contains the following components:

Requirements
Use Case Document
Test Plan
Test Case
Test Case execution
Report Analysis
Bug Analysis
Bug Reporting

A use case is a typical interaction scenario from a user’s perspective for system requirements studies or testing; in other words, “an actual or realistic example scenario”. A use case describes the use of a system from start to finish. Use cases focus attention on aspects of a system useful to people outside of the system itself.

  • Users of a program are called users or clients.
  • Users of an enterprise are called customers, suppliers, etc.

Use Case:

A collection of possible scenarios between the system under discussion and external actors, characterized by the goal the primary actor has toward the system’s declared responsibilities, showing how the primary actor’s goal might be delivered or might fail.

Use cases are goals (use cases and goals are used interchangeably) that are made up of scenarios. Scenarios consist of a sequence of steps to achieve the goal; each step in a scenario is a sub (or mini) goal of the use case. As such, each sub-goal represents either another use case (a subordinate use case) or an autonomous action that is at the lowest level desired by our use case decomposition.

This hierarchical relationship is needed to properly model the requirements of a system being developed. A complete use case analysis requires several levels. In addition to the level at which a use case operates, it is important to understand the scope it is addressing. The level and scope are important to ensure that the language and granularity of scenario steps remain consistent within the use case.

There are two scopes that use cases are written from: Strategic and System. There are also three levels: Summary, User and Sub-function.

Scopes: Strategic and System

Strategic Scope:

The goal (Use Case) is a strategic goal with respect to the system. These goals are goals of value to the organization. The use case shows how the system is used to benefit the organization. These strategic use cases will eventually use some of the same lower-level (subordinate) use cases.

System Scope:

Use cases at system scope are bounded by the system under development. The goals represent specific functionality required of the system. The majority of the use cases are at system scope. These use cases are often steps in strategic-level use cases.

Levels: Summary Goal, User Goal and Sub-function.

Sub-function Level Use Case:

A sub goal or step is below the main level of interest to the user. Examples are “logging in” and “locate a device in a DB”. Always at System Scope.

User Level Use Case:

This is the level of greatest interest. It represents a user task or elementary business process. A user level goal addresses the question “Does your job performance depend on how many of these you do in a day?” For example, “Create Site View” or “Create New Device” would be user level goals, but “Log In to System” would not. Always at System Scope.

Summary Level Use Case:

Written for either strategic or system scope. They represent collections of User Level Goals. For example, the summary goal “Configure Data Base” might include as a step the user level goal “Add Device to database”. Either at System or Strategic Scope.

Test Documentation

Test documentation is a required tool for managing and maintaining the testing process. Documents produced by testers should answer the following questions:

  • What to test? Test Plan
  • How to test? Test Specification
  • What are the results? Test Results Analysis Report

Risk Analysis

February 13, 2008 1 comment

Risk Analysis:

A risk is a potential for loss or damage to an organization from materialized threats. Risk Analysis attempts to identify all the risks and then quantify the severity of the risks. A threat, as we have seen, is a possible damaging event. If it occurs, it exploits vulnerability in the security of a computer based system.

Risk Identification:

1. Software Risks: Knowledge of the most common risks associated with Software development, and the platform you are working on.

2. Business Risks: Most common risks associated with the business using the Software

3. Testing Risks: Knowledge of the most common risks associated with Software Testing for the platform you are working on, tools being used, and test methods being applied.

4. Premature Release Risk: Ability to determine the risk associated with releasing unsatisfactory or untested Software Products.

5. Risk Methods: Strategies and approaches for identifying risks or problems associated with implementing and operating information technology, products and processes; assessing their likelihood, and initiating strategies to test those risks (a minimal scoring sketch follows).
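
One simple, commonly used way to quantify severity is likelihood times impact; the sketch below ranks a hypothetical risk register that way (all names and numbers invented):

    # Hypothetical risk register: severity quantified as likelihood x impact.
    risks = [
        ("Unstable test environment", 0.6, 4),  # (risk, likelihood 0-1, impact 1-5)
        ("Premature release pressure", 0.3, 5),
        ("Unfamiliar test tooling",    0.5, 2),
    ]

    # Rank the risks so mitigation effort goes to the highest exposure first.
    for name, likelihood, impact in sorted(
            risks, key=lambda r: r[1] * r[2], reverse=True):
        print("%-28s exposure=%.1f" % (name, likelihood * impact))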

Traceability means that you would like to be able to trace back and forth how and where any work product fulfills the directions of the preceding (source) product. The matrix deals with the where; the how you have to do yourself, once you know the where.

Take, e.g., the Requirement of UserFriendliness (UF). Since UF is a complex concept, it is not solved by just one design solution and it is not solved by one line of code. Many partial design solutions may contribute to this Requirement, and many groups of lines of code may contribute to it.

A Requirements-Design Traceability Matrix puts on one side (e.g. left) the sub-requirements that together are supposed to solve the UF requirement, along with other (sub-)requirements. On the other side (e.g. top) you specify all design solutions. Now you can mark, at the cross-points of the matrix, which design solutions solve (more, or less) each requirement. If a design solution does not solve any requirement, it should be deleted, as it is of no value.

Having this matrix, you can check whether any requirement has at least one design solution and by checking the solution(s) you may see whether the requirement is sufficiently solved by this (or the set of) connected design(s).

If you have to change any requirement, you can see which designs are affected. And if you change any design, you can check which requirements may be affected and see what the impact is.

In a Design-Code Traceability Matrix you can do the same to keep track of how and which code solves a particular design, and how changes in design or code affect each other.
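
A traceability matrix can be sketched as a simple mapping; the following Python example flags requirements with no design solution and design solutions of no value, as described above (all identifiers hypothetical):

    # Hypothetical requirements-design traceability matrix as a mapping.
    matrix = {
        "UF-1 consistent menus":  ["D-menu-layout"],
        "UF-2 undo everywhere":   ["D-command-stack", "D-menu-layout"],
        "UF-3 2-second response": [],            # not yet covered by any design
    }
    all_designs = {"D-menu-layout", "D-command-stack", "D-splash-screen"}

    uncovered = [req for req, designs in matrix.items() if not designs]
    used = {d for designs in matrix.values() for d in designs}
    orphans = all_designs - used                 # designs solving no requirement

    print("Requirements without a design:", uncovered)
    print("Designs of no value (candidates for deletion):", orphans)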

A traceability matrix:
  • Demonstrates that the implemented system meets the user requirements.
  • Serves as a single source for tracking purposes.
  • Identifies gaps in the design and testing.
  • Prevents delays in the project timeline, which can be brought about by having to backtrack to fill the gaps.

Categories: Risk Analysis Tags:

Test Plan

February 13, 2008 1 comment

Test Plan

The test strategy identifies the multiple test levels that are going to be performed for the project. Activities at each level must be planned well in advance and formally documented. The individual test levels are carried out based on these individual plans.

Entry means the entry criteria for that phase; for example, for unit testing, the coding must be complete before unit testing can start. Task is the activity that is performed. Validation is the way in which the progress, correctness and compliance are verified for that phase. Exit tells the completion criteria of that phase, after the validation is done; for example, the exit criterion for unit testing is that all unit test cases must pass. (Together these make up the ETVX criteria: Entry, Task, Validation, Exit.)

Unit Test Plan {UTP}

The unit test plan is the overall plan to carry out the unit test activities. The lead tester prepares it, and it is distributed to the individual testers. It contains the following sections.

What is to be tested?

The unit test plan must clearly specify the scope of unit testing. In this, normally the basic input/output of the units along with their basic functionality will be tested. In this case mostly the input units will be tested for format, alignment, accuracy and totals. The UTP will clearly give the rules of what data types are present in the system, their format and their boundary conditions. This list may not be exhaustive, but it is better to have a complete list of these details.

Sequence of Testing

The sequence of test activities that are to be carried out in this phase is to be listed in this section. This includes whether to execute positive test cases first or negative test cases first, whether to execute test cases based on priority, whether to execute test cases based on test groups, etc. Positive test cases prove that the system performs what it is supposed to do; negative test cases prove that the system does not perform what it is not supposed to do (a minimal example follows). Testing the screens, files, database etc. is to be given in proper sequence.
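
A minimal example of the positive/negative distinction, using a hypothetical withdraw unit:

    def withdraw(balance, amount):
        # Hypothetical unit: withdraw from an account, rejecting overdrafts.
        if amount <= 0 or amount > balance:
            raise ValueError("invalid withdrawal")
        return balance - amount

    # Positive case: the system performs what it is supposed to do.
    assert withdraw(100, 40) == 60

    # Negative cases: the system refuses what it is not supposed to do.
    for bad in (-5, 0, 500):
        try:
            withdraw(100, bad)
        except ValueError:
            pass
        else:
            raise AssertionError("negative case %r was wrongly accepted" % bad)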

Basic Functionality of Units

This section describes how the independent functionality of each unit is tested, excluding any communication between the unit and other units. The interface part is out of the scope of this test level. Apart from the above sections, the following sections are addressed, very specific to unit testing.

  • Unit Testing Tools
  • Priority of Program units
  • Naming convention for test cases
  • Status reporting mechanism
  • Regression test approach
  • ETVX criteria

Integration Test Plan

The integration test plan is the overall plan for carrying out the activities in the integration test level, which contains the following sections.

What is to be tested?

This section clearly specifies the kinds of interfaces that fall under the scope of testing: internal and external interfaces, with their requests and responses, are to be explained. This need not go deep into technical details, but the general approach to how the interfaces are triggered is explained.

Sequence of Integration

When there are multiple modules present in an application, the sequence in which they are to be integrated will be specified in this section. In this, the dependencies between the modules play a vital role. If a unit B has to be executed, it may need data fed by unit A and unit X. In this case, units A and X have to be integrated first, and then, using that data, unit B has to be tested. This has to be stated for the whole set of units in the program. Given this correctly, the testing activities will lead to the product, slowly building it, unit by unit, and then integrating them (a minimal ordering sketch follows).
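
The ordering described above can be derived mechanically from the dependencies; the sketch below does so for a hypothetical set of units (A and X feed B, B feeds C):

    # Each unit maps to the units whose output it needs (hypothetical graph).
    needs = {"A": [], "X": [], "B": ["A", "X"], "C": ["B"]}

    def integration_order(needs):
        # Repeatedly pick units whose dependencies are already integrated
        # (a simple topological sort; assumes the graph has no cycles).
        order, done = [], set()
        while len(order) < len(needs):
            for unit, deps in needs.items():
                if unit not in done and all(d in done for d in deps):
                    order.append(unit)
                    done.add(unit)
        return order

    print(integration_order(needs))   # e.g. ['A', 'X', 'B', 'C']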

System Test Plan {STP}

The system test plan is the overall plan for carrying out the system test level activities. In the system test, apart from testing the functional aspects of the system, there are some special testing activities carried out, such as stress testing etc. The following are the sections normally present in a system test plan.

What is to be tested?

This section defines the scope of system testing, very specific to the project. Normally, the system testing is based on the requirements. All requirements are to be verified in the scope of system testing. This covers the functionality of the product. Apart from this, any special testing performed is also stated here.

Functional Groups and the Sequence

The requirements can be grouped in terms of the functionality. Based on this, there may be priorities also among the functional groups. For example, in a banking application, anything related to customer accounts can be grouped into one area, anything related to inter-branch transactions may be grouped into one area etc. Same way for the product being tested, these areas are to be mentioned here and the suggested sequences of testing of these areas, based on the priorities are to be described.

Acceptance Test Plan {ATP}

The client performs the acceptance testing at their place. It will be very similar to the system test performed by the Software Development Unit. Since the client is the one who decides the format and testing methods as part of acceptance testing, there is no specific clue on the way they will carry out the testing, but it will not differ much from the system testing. Assume that all the rules which are applicable to system test can be implemented in acceptance testing also.

Since this is just one level of testing done by the client for the overall product, it may include test cases covering the unit and integration test level details.

A sample Test Plan Outline along with their description is as shown below:

Test Plan Outline

1. BACKGROUND – This item summarizes the functions of the application system and the tests to be performed.
2. INTRODUCTION
3. ASSUMPTIONS – Indicates any anticipated assumptions which will be made while testing the application.
4. TEST ITEMS – List each of the items (programs) to be tested.
5. FEATURES TO BE TESTED – List each of the features (functions or requirements) which will be tested or demonstrated by the test.
6. FEATURES NOT TO BE TESTED – Explicitly lists each feature, function, or requirement which won’t be tested and why not.
7. APPROACH – Describes the data flows and test philosophy (simulation or live execution, etc.). This section also mentions all the approaches which will be followed at the various stages of the test execution.
8. ITEM PASS/FAIL CRITERIA – Blanket statement and itemized list of expected output and tolerances.
9. SUSPENSION/RESUMPTION CRITERIA – Must the test run from start to completion?
Under what circumstances may it be resumed in the middle?
Establish check-points in long tests.
10. TEST DELIVERABLES – What, besides software, will be delivered?
Test report
Test software
11. TESTING TASKS – Functional tasks (e.g., equipment set up)
Administrative tasks
12. ENVIRONMENTAL NEEDS
Security clearance
Office space & equipment
Hardware/software requirements
13. RESPONSIBILITIES
Who does the tasks in Section 11?
What does the user do?
14. STAFFING & TRAINING
15. SCHEDULE
16. RESOURCES
17. RISKS & CONTINGENCIES
18. APPROVALS

The schedule details of the various test passes, such as unit tests, integration tests and system tests, should be clearly mentioned along with the estimated efforts.

Categories: Test Plan Tags:

Test Strategy

Test Strategy: How we plan to cover the product so as to develop an adequate assessment of quality.

A good test strategy is:

Specific
Practical
Justified

The purpose of a test strategy is to clarify the major tasks and challenges of the test project.

Test Approach and Test Architecture are other terms commonly used to describe what I’m calling test strategy.

Example of a poorly stated (and probably poorly conceived) test strategy:

“We will use black box testing, cause-effect graphing, boundary testing, and white box testing to test this product against its specification.”

Test Strategy: Type of Project, Type of Software, when Testing will occur, Critical Success factors, Tradeoffs

Test Plan – Why

  • Identify Risks and Assumptions up front to reduce surprises later.
  • Communicate objectives to all team members.
  • Foundation for Test Spec, Test Cases, and ultimately the Bugs we find.
  • Failing to plan = planning to fail.

Test Plan – What

  • Derived from Test Approach, Requirements, Project Plan, Functional Spec., and Design Spec.
  • Details out project-specific Test Approach.
  • Lists general (high level) Test Case areas.
  • Include testing Risk Assessment.
  • Include preliminary Test Schedule
  • Lists Resource requirements.
Categories: Test Strategy Tags:

Testing Stop Process

February 13, 2008 1 comment

When to Stop Testing

This can be difficult to determine. Many modern software applications are so complex, and run in such an interdependent environment, that complete testing can never be done. “When to stop testing” is one of the most difficult questions for a test engineer. Common factors in deciding when to stop are:

  • Deadlines (release deadlines, testing deadlines, etc.)
  • Test cases completed with certain percentages passed
  • Test budget depleted
  • Coverage of code/functionality/requirements reaches a specified point
  • The rate at which Bugs can be found is too small
  • Beta or Alpha Testing period ends
  • The risk in the project is under acceptable limit.

Practically, we feel that the decision to stop testing is based on the level of risk acceptable to the management. As testing is a never-ending process, we can never assume that 100% testing has been done; we can only minimize the risk of shipping the product to the client with X amount of testing done. The risk can be measured by risk analysis, but for a small-duration / low-budget / low-resources project, the risk can be deduced simply by measuring the following (a minimal check is sketched after this list):

  • Test coverage.
  • Number of test cycles.
  • Number of high-priority bugs.
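
For such a project, the stop decision can be reduced to a few measurable checks; the thresholds in this sketch are purely illustrative:

    # Illustrative stop-testing check over the measures listed above;
    # the thresholds are hypothetical and project-specific.
    def ok_to_stop(coverage, cycles_run, open_high_priority_bugs):
        return (coverage >= 0.90                  # measured test coverage
                and cycles_run >= 3               # test cycles completed
                and open_high_priority_bugs == 0) # no open high-priority bugs

    print(ok_to_stop(coverage=0.93, cycles_run=4, open_high_priority_bugs=0))
    print(ok_to_stop(coverage=0.95, cycles_run=2, open_high_priority_bugs=1))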

Testing Start Process

When Testing should start:

Testing early in the life cycle reduces the errors. Test deliverables are associated with every phase of development. The goal of a software tester is to find bugs, find them as early as possible, and make sure they are fixed.

The number one cause of Software bugs is the Specification. There are several reasons specifications are the largest bug producer.

In many instances a spec simply isn’t written. Other reasons may be that the spec isn’t thorough enough, it’s constantly changing, or it’s not communicated well to the entire team. Planning software is vitally important; if it’s not done correctly, bugs will be created.

The next largest source of bugs is the design. That’s where the programmers lay the plan for their software. Compare it to an architect creating the blueprint for a building. Bugs occur here for the same reason they occur in the specification: it’s rushed, changed, or not well communicated.

Coding errors may be more familiar to you if you are a programmer. Typically these can be traced to software complexity, poor documentation, schedule pressure or just plain dumb mistakes. It’s important to note that many bugs that appear on the surface to be programming errors can really be traced to the specification. It’s quite common to hear a programmer say, “Oh, so that’s what it’s supposed to do. If someone had told me that, I wouldn’t have written the code that way.”

The other category is the catch-all for what is left. Some bugs can be blamed on false positives, conditions that were thought to be bugs but really weren’t. There may be duplicate bugs, multiple ones that resulted from the same root cause. Some bugs can also be traced to testing errors.

Costs: The costs are logarithmic; that is, they increase tenfold as time increases. A bug found and fixed during the early stages when the specification is being written might cost next to nothing, or 10 cents in our example. The same bug, if not found until the software is coded and tested, might cost $1 to $10. If a customer finds it, the cost would easily top $100.

 

Manual Testing Process :
A process is a roadmap to develop the project; it consists of a number of sequential steps.
Software Testing Life Cycle:
• Test Plan
• Test Development
• Test Execution
• Analyse Results
• Defect Tracking
• Summarise Report
Test Plan:
It is a document which describes the testing environment, purpose, scope, objectives, test strategy, schedules, milestones, testing tool, roles and responsibilities, risks, training, staffing and who […]

Categories: Testing Start Process

Introduction to Software Testing

Testing is a process used to help identify the correctness, completeness and quality of developed computer software. With that in mind, testing can never completely establish the correctness of computer software.

There are many approaches to software testing, but effective testing of complex products is essentially a process of investigation, not merely a matter of creating and following rote procedure. One definition of testing is “the process of questioning a product in order to evaluate it”, where the “questions” are things the tester tries to do with the product, and the product answers with its behavior in reaction to the probing of the tester. Although most of the intellectual processes of testing are nearly identical to those of review or inspection, the word testing connotes the dynamic analysis of the product: putting the product through its paces.

The quality of the application can and normally does vary widely from system to system but some of the common quality attributes include reliability, stability, portability, maintainability and usability. Refer to the ISO standard ISO 9126 for a more complete list of attributes and criteria.

Testing helps in verifying and validating that the software is working as it is intended to work. This involves using static and dynamic methodologies to test the application.

Because of the fallibility of its human designers and its own abstract, complex nature, software development must be accompanied by quality assurance activities. It is not unusual for developers to spend 40% of the total project time on testing. For life-critical software (e.g. flight control, reactor monitoring), testing can cost 3 to 5 times as much as all other activities combined. The destructive nature of testing requires that the developer discard preconceived notions of the correctness of his/her developed software.

Software Testing Fundamentals

Testing objectives include

1. Testing is a process of executing a program with the intent of finding an error.
2. A good test case is one that has a high probability of finding an as yet undiscovered error.
3. A successful test is one that uncovers an as yet undiscovered error.

Testing should systematically uncover different classes of errors in a minimum amount of time and with a minimum amount of effort. A secondary benefit of testing is that it demonstrates that the software appears to be working as stated in the specifications. The data collected through testing can also provide an indication of the software’s reliability and quality. But testing cannot show the absence of defects; it can only show that software defects are present.

Software testing Questions Part 1

1.Why did you ever become involved in QA/testing?

2.What is the testing lifecycle and explain each of its phases?

3.What is the difference between testing and Quality Assurance?

4.What is Negative testing?

5.What was a problem you had in your previous assignment (testing if possible)? How did you resolve it?

6.What are two of your strengths that you will bring to our QA/testing team?

7.How would you define Quality Assurance?

8.What do you like most about Quality Assurance/Testing?

9.What do you like least about Quality Assurance/Testing?

10.What is the Waterfall Development Method and do you agree with all the steps?

11.What is the V-Model Development Method and do you agree with this model?

12.What is the Capability Maturity Model (CMM)? At what CMM level were the last few companies you worked?

13.What is a “Good Tester”?

14.Could you tell me two things you did in your previous assignment (QA/Testing related hopefully) that you are proud of?

15.List 5 words that best describe your strengths.

16.What are two of your weaknesses?

17.What methodologies have you used to develop test cases?

18.In an application currently in production, one module of code is being modified. Is it necessary to re-test the whole application or is it enough to just test functionality associated with that module?

19.Define each of the following and explain how each relates to the other: Unit, System, and Integration testing.

20.Define Verification and Validation. Explain the differences between the two.

21.Explain the differences between White-box, Gray-box, and Black-box testing.

22.How do you go about going into a new organization? How do you assimilate?

23.Define the following and explain their usefulness: Change Management, Configuration Management, Version Control, and Defect Tracking.

24.What is ISO 9000? Have you ever been in an ISO shop?

25.When are you done testing?

26.What is the difference between a test strategy and a test plan?

27.What is ISO 9003? Why is it important?

28.What are ISO standards? Why are they important?

29.What is IEEE 829? (This standard is important for Software Test Documentation-Why?)

30.What is IEEE? Why is it important?

31.Do you support automated testing? Why?

32.We have a testing assignment that is time-driven. Do you think automated tests are the best solution?

33.What is your experience with change control? Our development team has only 10 members. Do you think managing change is such a big deal for us?

34.Are reusable test cases a big plus of automated testing? Explain why.

35.Can you build a good audit trail using Compuware’s QACenter products? Explain why.

36.How important is Change Management in today’s computing environments?

37.Do you think tools are required for managing change? Explain, and please list some tools/practices which can help you manage change.

38.We believe in ad-hoc software processes for projects. Do you agree with this? Please explain your answer.

39.When is a good time for system testing?

40.Are regression tests required or do you feel there is a better use for resources?

41.Our software designers use UML for modeling applications. Based on their use cases, we would like to plan a test strategy. Do you agree with this approach, or would this mean more effort for the testers?

42.Tell me about a difficult time you had at work and how you worked through it.

43.Give me an example of something you tried at work but did not work out so you had to go at things another way.

44.How can one file-compare future-dated output files from a program which has changed against the baseline run, which used the current date for input? The client does not want to mask dates on the output files to allow compares. – Answer: Rerun the baseline and future-date its input files by the same number of days as the future-dated run of the changed program. Now run a file compare between the baseline’s future-dated output and the changed program’s future-dated output.
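
One way to script the compare itself is with Python's standard filecmp module; the file names below are hypothetical placeholders for the baseline and changed-program outputs:

    import filecmp

    # Hypothetical file names: the baseline rerun with future-dated input,
    # and the changed program run with input future-dated by the same
    # number of days.
    baseline_out = "baseline_futuredated.out"
    changed_out = "changed_futuredated.out"

    # shallow=False compares file contents, not just os.stat() signatures.
    if filecmp.cmp(baseline_out, changed_out, shallow=False):
        print("Outputs match: the change did not alter future-dated behaviour.")
    else:
        print("Outputs differ: investigate the change.")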