Archive for April, 2008

Win Runner Navigation
Using the Rapid Test Script Wizard

  • Start->Programs->WinRunner->WinRunner
  • Select the Rapid Test Script Wizard (or Create->Rapid Test Script Wizard)
  • Click the Next button on the wizard's welcome screen
  • Select the hand icon, click on the application window, and click the Next button
  • Select the tests and click the Next button
  • Select the navigation controls and click the Next button
  • Set the learning flow (Express or Comprehensive) and click the Learn button
  • Select Yes or No for starting the application, then click the Next button
  • Save the startup script and GUI map files, then click the Next button
  • Save the selected tests and click the Next button
  • Click the OK button
  • The script is generated; then run the scripts
  • Run->Run from Top. Find the results of each script and select Tools->Text Report in the WinRunner Test Results window

Using the GUI Map Configuration Tool:

  • Open an application.
  • Select Tools->GUI Map Configuration; a window pops up.
  • Click the Add button, then click on the hand icon.
  • Click on the object to be configured. A user-defined class for that object is added to the list.
  • Select the user-defined class you added and press the Configure button.
  • Mapped to Class: (select a corresponding standard class from the combo box).
  • You can move properties from Available Properties to Learned Properties by selecting the Insert button.
  • Select the selector and recording methods.
  • Click the OK button.
  • Now you will observe WinRunner identifying the configured objects.

Using Record-Context Sensitive mode:

  • Create->Record Context Sensitive
  • Select Start->Programs->Accessories->Calculator
  • Perform some actions on the application.
  • Stop recording
  • Run->Run from Top; press 'OK'.

Using Record-Analog mode:

  • Create->Insert Function->From Function Generator
  • Function name: (select 'invoke_application' from the combo box).
  • Click the Args button; File: mspaint.
  • Click on the 'Paste' button; click on the 'Execute' button to open the application; finally click on 'Close'.
  • Create->Record-Analog.
  • Draw some picture in the Paint file.
  • Stop recording
  • Run->Run from Top; press 'OK'.
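The Function Generator pastes an invoke_application call into the test script; a minimal sketch (the executable name and the SW_SHOW flag are assumptions for a default Windows setup):

```
# launch Paint before recording in Analog mode
invoke_application("mspaint.exe", "", "", SW_SHOW);
```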

GUI CHECK POINTS-Single Property Check:

  • Create->Insert Function->Function Generator (Function name: invoke_application; File: Flight 1a)
  • Click on 'Paste', click on 'Execute', and close the window.
  • Create->Record Context Sensitive.
  • Do some operations and stop recording.
  • Create->GUI Check Point->For Single Property.
  • Click on the button whose property is to be checked.
  • Click on Paste.
  • Now close the Flight 1a application; Run->Run from Top.
  • Press 'OK'; it displays the results window.
  • Double-click on the result statement. It shows the expected value and actual value window.
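Pasting a single-property check inserts one statement into the script; a sketch, assuming the checked object is an 'Insert Order' button whose expected enabled value is 1:

```
# verify a single property: the button should be enabled
button_check_info("Insert Order", "enabled", 1);
```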

GUI CHECK POINTS-For Object/Window Property:

  • Create->Insert Function->Function Generator (Function name: invoke_application; File: Flight 1a)
  • Click on 'Paste', click on 'Execute', and close the window.
  • Create->Record Context Sensitive.
  • Do some operations and stop recording.
  • Create->GUI Check Point->For Object/Window Property.
  • Click on the button whose property is to be checked.
  • Click on Paste.
  • Now close the Flight 1a application; Run->Run from Top.
  • Press 'OK'; it displays the results window.
  • Double-click on the result statement. It shows the expected value and actual value window.

GUI CHECK POINTS-For Multiple Objects:

  • Create->Insert Function->Function Generator (Function name: invoke_application; File: Flight 1a)
  • Click on 'Paste', click on 'Execute', and close the window.
  • Create->Record Context Sensitive.
  • Do some operations and stop recording.
  • Create->GUI Check Point->For Multiple Objects.
  • Click on the button whose property is to be checked.
  • Click on the Add button.
  • Click on a few objects, then right-click to quit.
  • Select each object and select the corresponding properties to be checked for that object; click 'OK'.
  • Run->Run from Top. It displays the results.
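A multiple-object checkpoint is saved in a checklist file, and WinRunner verifies it with a single call; a sketch (the window, checklist, and expected-results names are assumptions, since WinRunner generates them):

```
# compare the selected objects' properties against the saved checklist
win_check_gui("Flight Reservation", "list1.ckl", "gui1", 1);
```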

BITMAP CHECK POINT-For Object/Window:

  • Create->Insert Function->Function Generator (Function name: invoke_application; File: Flight 1a)
  • Click on 'Paste', click on 'Execute', and close the window.
  • Create->Record Context Sensitive.
  • Enter the username and password, and click the 'OK' button
  • Open an order in the Flight Reservation application
  • Select File->Fax Order and enter the fax number and signature
  • Press the 'Cancel' button.
  • Create->Stop Recording.
  • Then open Fax Order in the Flight Reservation application
  • Create->Bitmap Checkpoint->For Obj/Window
  • Run->Run from Top.
  • The test fails and you can see the difference.
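The bitmap checkpoint appears in the script as a statement along these lines (a sketch; the object name, image name, and timeout are assumptions, since WinRunner generates them when the checkpoint is captured):

```
# compare the object's current bitmap against the captured image
obj_check_bitmap("Fax Order No. 4", "Img1", 6);
```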

For Screen Area:

  • Open a new Paint file.
  • Create->Bitmap Checkpoint->From Screen Area.
  • The Paint file pops up; select an image area with the crosshair pointer.
  • Make a slight modification in the Paint file (you can also run on the same Paint file).
  • Run->Run from Top.
  • The test fails and you can see the difference between the images.

DATABASE CHECK POINTS-Using Default Check (for MS-Access only):

  • Create->Database Checkpoint->Default Check
  • Select the Specify SQL Statement check box
  • Click the Next button
  • Click the Create button
  • Type a new DSN name and click the New button
  • Then select a driver for which you want to set up a database and double-click that driver
  • Then select the Browse button, retype the same DSN name, and click the Save button
  • Click the Next button and click the Finish button
  • Select the Database button and set the path of your database
  • Click the 'OK' button and then click the 'OK' button of your DSN window
  • Type the SQL query in the SQL box
  • Then click the Finish button. Note: the same process applies to a Custom Check Point
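The wizard stores the query and expected result set in a checklist file and inserts a call like this (a sketch; both file names are assumptions, since WinRunner generates them):

```
# verify the current result set against the saved expected results
db_check("list1.cdl", "dbvf1");
```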

Runtime Record Check Point:

  • Repeat the above 10 steps.
  • Type a query over two related tables in the SQL box. Ex: select Orders.Order_Number, Flights.Flight_Number from Orders, Flights where Flights.Flight_Number = Orders.Flight_Number
  • Select the Finish button
  • Select the hand icon button and select the Order No in your application
  • Click the Next button
  • Select the hand icon button and select the Flight No in your application
  • Click the Next button
  • Select any one of the following check boxes: 1. One match record 2. One or more match records 3. No match record
  • Select the Finish button; the script will be generated
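The generated runtime record checkpoint looks roughly like this (a sketch; the checklist file name is an assumption, and DVR_ONE_MATCH corresponds to the 'One match record' option):

```
# expect exactly one database record to match the captured GUI values;
# record_num returns the number of matching records
db_record_check("list1.cvr", DVR_ONE_MATCH, record_num);
```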

Synchronization Point-For Obj/Win Properties:

  • Open Start->Programs->WinRunner->Sample Applications->Flight 1A.
  • Open the WinRunner window
  • Create->Record Context Sensitive
  • Insert the information for a new order and click on the "Insert Order" button
  • After inserting, click on the "Delete" button
  • Stop recording and save the file.
  • Run->Run from Top: gives your results.

Without Synchronization:

  • Settings->General Options->click on the "Run" tab. Set the "Timeout for Checkpoints and CS Statements" value to 10000. Follow steps 1 to 7 above; the test displays an error message that the "Delete" button is disabled.

With Synchronization:

  • Keep the timeout value at 1000 only
  • Go to the test script file and place the insertion point after the "Insert Order" button-press statement.
  • Create->Synchronization->For Obj/Window Property
  • Click on the "Delete Order" button, select the enabled property, and click on "Paste".
  • It inserts the synchronization statement.
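The pasted synchronization statement waits for the property to reach the expected value before the run continues; a sketch (the logical name and the 10-second limit are assumptions):

```
# pause the run until the Delete Order button becomes enabled, up to 10 seconds
obj_wait_info("Delete Order", "enabled", 1, 10);
```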

For Obj/Win Bitmap:

  • Create->Record Context Sensitive.
  • Insert the information for a new order and click on the "Insert Order" button
  • Stop recording and save the file.
  • Go to the TSL script and place the insertion point just before the data is inserted into "Date of Flight".
  • Create->Synchronization->For Obj/Win Bitmap.
  • (Make sure the Flight Reservation form is empty.) Click on the "Date of Flight" text box
  • Run->Run from Top; the results are displayed. Note: keep the timeout value at 1000.

Get Text: From Screen Area: (Note: checking whether the order number increases whenever an order is created)

  • Open Flight 1A; Analysis->Graphs (keep it open)
  • Create->Get Text->From Screen Area
  • Capture the number of tickets sold; right-click and close the graph
  • Now insert a new order and open the graph (Analysis->Graphs)
  • Go to the WinRunner window; Create->Get Text->From Screen Area
  • Capture the number of tickets sold and right-click; close the graph
  • Save the script file
  • Add the following script (if the count is unchanged, the update failed): if (text2 == text1) tl_step("text comparison", 1, "not updated"); else tl_step("text comparison", 0, "updated");
  • Run->Run from Top to see the results.

Get Text: For Object/Window:

  • Open a "Calc" application in two windows (assuming the two are two versions)
  • Create->Get Text->For Obj/Window
  • Click on some button in one window
  • Stop recording
  • Repeat steps 1 to 4 to capture the text of the same object from the other "Calc" application.
  • Add the following TSL (note: change "text" to text1 and text2 for each statement): if (text1 == text2) report_msg("correct " & text1); else report_msg("incorrect " & text2);
  • Run and see the results

Using GUI Spy:

Using the GUI Spy, you can view and verify the properties of any GUI object in the selected application.

  • Tools->GUI Spy…
  • Select Spy On (select Object or Window)
  • Select the hand icon button
  • Point at the object or window and press Ctrl_L + F3.
  • You can view and verify the properties.

Using the Virtual Object Wizard:

Using the Virtual Object Wizard, you can assign a bitmap to a standard object class, define the coordinates of that object, and assign it a logical name.

  • Tools->Virtual Object Wizard.
  • Click the Next button
  • Select a standard class for the virtual object. Ex: class: push_button
  • Click the Next button
  • Click the Mark Object button
  • Drag the cursor to mark the area of the virtual object.
  • Click the Next button
  • Assign the logical name; this name will appear in the test script when you record on the object.
  • Select the Yes or No check box
  • Click the Finish button
  • Go to the WinRunner window and Create->Start Recording.
  • Do some operations
  • Stop recording

Using the GUI Map Editor:

Using the GUI Map Editor, you can view and modify the properties of any GUI object in the selected application. To modify an object's logical name in a GUI map file:

  • Tools->GUI Map Editor
  • Select the Learn button
  • Select the application. A WinRunner message box asks "Do you want to learn all objects within the window?"; select the 'Yes' button.
  • Select a particular object and select the Modify button
  • Change the logical name and click the 'OK' button
  • Save the file

To find an object in a GUI map file:

  • Choose Tools > GUI Map Editor.
  • Choose View > GUI Files.
  • Choose File > Open to load the GUI map file.
  • Click Find. The mouse pointer turns into a pointing hand.
  • Click the object in the application being tested. The object is highlighted in the GUI map file.

To highlight an object in the Application:

  • Choose Tools > GUI Map Editor.
  • Choose View > GUI Files.
  • Choose File > Open to load the GUI map file.
  • Select the object in the GUI map file
  • Click Show. The object is highlighted in the Application.

Data Driver Wizard

  • Start->Programs->WinRunner->Sample Applications->Flight 1A
  • Open the Flight Reservation application
  • Go to the WinRunner window
  • Create->Start Recording
  • Select File->New Order, insert the fields, and click Insert Order
  • Tools->Data Table; enter different customer names in one column and tickets in another column.
  • By default, the two column names are Noname1 and Noname2.
  • Tools->Data Driver Wizard
  • Click the Next button and select the data table
  • Select Parameterize the test; select the Line by Line check box
  • Click the Next button
  • Parameterize each specific value with the column names of the table; repeat for all
  • Finally, click the Finish button.
  • Run->Run from Top
  • View the results.
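The wizard wraps the recorded steps in a data-table loop roughly like the following (a sketch; the table path, window name, and field names are assumptions based on the Flight 1A sample):

```
# drive one Insert Order per data-table row
table = "default.xls";
rc = ddt_open(table, DDT_MODE_READ);
if (rc != E_OK && rc != E_FILE_OPEN)
    pause("Cannot open the data table.");
ddt_get_row_count(table, table_RowCount);
for (table_Row = 1; table_Row <= table_RowCount; table_Row++)
{
    ddt_set_row(table, table_Row);
    set_window("Flight Reservation", 10);
    edit_set("Name:", ddt_val(table, "Noname1"));
    edit_set("Tickets:", ddt_val(table, "Noname2"));
    button_press("Insert Order");
}
ddt_close(table);
```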

Merge the GUI Files:

Manual Merge

  • Tools->Merge GUI Map Files. A WinRunner message box informs you that all open GUI maps will be closed and all unsaved changes will be discarded; click the 'OK' button.
  • Select Manual Merge. Manual Merge enables you to manually add GUI objects from the source files to the target file.
  • To specify the target GUI map file, click the Browse button and select the GUI map file
  • To specify the source GUI map file, click the Add button and select the source GUI map file.
  • Click the 'OK' button
  • The GUI Map File Manual Merge Tool opens; select objects and move them from the source file to the target file
  • Close the GUI Map File Manual Merge Tool

Auto Merge

  • Tools->Merge GUI Map Files. A WinRunner message box informs you that all open GUI maps will be closed and all unsaved changes will be discarded; click the 'OK' button.
  • Select Auto Merge as the merge type. If you choose Auto Merge, the source GUI map files are merged automatically when there are no conflicts.
  • To specify the target GUI map file, click the Browse button and select the GUI map file
  • To specify the source GUI map file, click the Add button and select the source GUI map file.
  • Click the 'OK' button. A message confirms the merge.

Manually Retrieve Records from the Database

  • db_connect("query1", "DSN=Flight32");                  # open a database session
  • db_execute_query("query1", "select * from Orders", rec);  # run the query; rec returns the record count
  • db_get_field_value("query1", "#0", "#0");              # value of the first field in the first row
  • db_get_headers("query1", field_num, headers);          # number and names of the result columns
  • db_get_row("query1", 5, row_con);                      # contents of row 5
  • db_write_records("query1", "c:\\str.txt", TRUE, 10);   # write up to 10 records, with headers, to a file

Software Testing Interview Questions Part 5

46. High severity, low priority bug?

A:– A page is rarely accessed, or some activity is performed rarely, but that activity outputs some important data incorrectly or corrupts the data. This will be a bug of high severity and low priority.

47. If project wants to release in 3 months what type of Risk analysis u do in Test plan?

A:– Use risk analysis to determine where testing should be focused. Since it’s rarely possible to test every possible aspect of an application, every possible combination of events, every dependency, or everything that could go wrong, risk analysis is appropriate to most software development projects. This requires judgment skills, common sense, and experience. (If warranted, formal methods are also available.) Considerations can include:

• Which functionality is most important to the project’s intended purpose?
• Which functionality is most visible to the user?
• Which functionality has the largest safety impact?
• Which functionality has the largest financial impact on users?
• Which aspects of the application are most important to the customer?
• Which aspects of the application can be tested early in the development cycle?
• Which parts of the code are most complex, and thus most subject to errors?
• Which parts of the application were developed in rush or panic mode?
• Which aspects of similar/related previous projects caused problems?
• Which aspects of similar/related previous projects had large maintenance expenses?
• Which parts of the requirements and design are unclear or poorly thought out?
• What do the developers think are the highest-risk aspects of the application?
• What kinds of problems would cause the worst publicity?
• What kinds of problems would cause the most customer service complaints?
• What kinds of tests could easily cover multiple functionalities?
• Which tests will have the best high-risk-coverage to time-required ratio

48. Test cases for IE 6.0 ?

A:– Test cases for IE 6.0, i.e., Internet Explorer 6.0:
1) First go for the installation side: is it working with all versions of Windows, Netscape, and other software? In other words, IE must be checked against all hardware and software parts.
2) Secondly, go for the text part: all the text should appear in a frequent and smooth manner.
3) Thirdly, go for the images part: all the images should appear in a frequent and smooth manner.
4) URLs must run in a proper way.
5) Suppose some other language is used on it; then the URL should accept those other characters, beyond the normal characters.
6) Is it working with cookies frequently or not?
7) Does it work with different scripts like JScript and VBScript?
8) Does HTML code work on it or not?
9) Does troubleshooting work or not?
10) Do all the toolbars work with it or not?
11) If a page has some links, what are the maximum and minimum limits for them?
12) Test installing Internet Explorer 6 with the Norton Protected Recycle Bin enabled.
13) Is it working with the uninstallation process?
14) Last but not least, test the security system of IE 6.0.

49. Where you involve in testing life cycle ,what type of test you perform ?

A:– Generally, test engineers are involved in the entire test life cycle, i.e., test planning, test case preparation, execution, and reporting. They generally perform system testing, regression testing, ad hoc testing, etc.

50. What is the testing environment in your company, i.e., how does the testing process start?

A:– The testing process flows as follows:
quality assurance unit
quality assurance manager
test lead
test engineer

51. Who prepares the use cases?

A:– In any company except a small company, the business analyst prepares the use cases. But in a small company, the business analyst prepares them along with the team lead.

52. What methodologies have you used to develop test cases?

A:– Generally, test engineers use 4 types of methodologies:
1. Boundary value analysis
2. Equivalence partitioning
3. Error guessing
4. Cause-effect graphing
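As an example of boundary value analysis, the values just below, on, and just above each boundary can be driven through a field in TSL (the Tickets field and its assumed valid range of 1 to 10 are hypothetical):

```
# boundary values for an assumed valid range of 1..10
n = split("0 1 2 9 10 11", vals, " ");
for (i = 1; i <= n; i++)
{
    set_window("Flight Reservation", 10);
    edit_set("Tickets:", vals[i]);
    button_press("Insert Order");
}
```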

53. Why do we call it a regression test and not a retest?

A:– If we only test whether a defect is closed or not, that is retesting. But here we also check the impact on the surrounding functionality; regression means testing repeated times.

54. Is automated testing better than manual testing. If so, why?

A:– Automated testing and manual testing each have advantages as well as disadvantages.
Advantages of automation: it increases the efficiency and speed of the testing process; it is reliable; it is flexible.
Disadvantages: the tools must be compatible with our development or deployment tools; it needs a lot of time initially; if the requirements are changing continuously, automation is not suitable.
Manual: if the requirements are changing continuously, manual testing is suitable. Only once the build is stable with manual testing do we go for automation.
Disadvantages:
It needs a lot of time.
We cannot do some types of testing manually, e.g., performance testing.

55. what is the exact difference between a product and a project.give an example ?

A:– A project is developed for a particular client, and the requirements are defined by the client. A product is developed for the market, and the requirements are defined by the company itself by conducting a market survey.
Example
Project: the shirt which we get stitched by a tailor as per our specifications is a project.
Product: an example is a "ready-made shirt", where the company imagines particular measurements and makes the product.
Mainframes is a product.
A product has many more versions, but a project has fewer versions, i.e., depending on change requests and enhancements.

56. Define Brainstorming and Cause-Effect Graphing? With Eg?

A:– Brainstorming:
A learning technique involving open group discussion intended to expand the range of available ideas.
OR
A meeting to generate creative ideas. At PEPSI Advertising, daily, weekly and bi-monthly brainstorming sessions are held by various work groups within the firm. Our monthly I-Power brainstorming meeting is attended by the entire agency staff.
OR
Brainstorming is a highly structured process to help generate ideas. It is based on the principle that you cannot generate and evaluate ideas at the same time. To use brainstorming, you must first gain agreement from the group to try brainstorming for a fixed interval (e.g., six minutes).

Cause-Effect Graphing (CEG):
A testing technique that aids in selecting, in a systematic way, a high-yield set of test cases that logically relates causes to effects to produce test cases. It has a beneficial side effect in pointing out incompleteness and ambiguities in specifications.

57. If severity already tells you which bug you need to solve, what is the need for priority?

A:– Severity reflects the seriousness of the bug, whereas priority refers to which bug should be rectified first. Of course, if the severity is high, the same normally goes for priority.

Severity is decided by the tester, whereas priority is decided by the developers. Which bug needs to be solved first is known through priority, not through severity; how serious the bug is, is known through severity.

Severity is nothing but the impact of that bug on the application. Priority is nothing but the importance of resolving the bug. Of course, by looking at severity we can judge, but sometimes a high-severity bug doesn't have high priority, and at the same time a high-priority bug doesn't have high severity.
So we need both severity and priority.

58. What do u do if the bug that u found is not accepted by the developer and he is saying its not reproducible. Note:The developer is in the on site location ?

A:– Once again we will check that condition with all the reasons. Then we will attach screenshots with strong reasons. Then we will explain to the project manager, and also explain to the client when they contact us.

Sometimes a bug is not reproducible because of a different environment: suppose the development team is using one environment and you are using a different one; in this situation there is a chance of the bug not reproducing. In this situation, check the environment in the baseline documents, that is, the functional documents. If the environment we are using is correct, we will raise it as a defect. We will take screenshots and send them along with the test procedure.

59. what is the difference between three tier and two tier application?

A:– Client/server is a 2-tier application. In this, the front end or client is connected to the database server through a Data Source Name; the front end is the monitoring level.

A web-based architecture is a 3-tier application. In this, the browser is connected to the web server through TCP/IP, and the web server is connected to the database server; the browser is the monitoring level. In general, black-box testers concentrate on the monitoring level of any type of application.

All client/server applications are 2-tier architectures. In these architectures, all the business logic is stored in the clients and the data is stored in the servers. So if a user requests anything, the business logic is performed at the client, and the data is retrieved from the server (DB server). The problem is, if any business logic changes, we need to change the logic at each and every client. The best example is a supermarket with branches across the city: at each branch there are clients, so the business logic is stored in the clients, but the actual data is stored in the servers. Assume I want to give a discount on some items; I need to change the business logic. For this I need to go to each branch and change the business logic at each client. This is the disadvantage of client/server architecture.

So 3-tier architecture came into picture:

Here the business logic is stored in one server, and all the clients are dumb terminals. If a user requests anything, the request is first sent to the server; the server brings the data from the DB server and sends it to the clients. This is the flow of a 3-tier architecture.

Assume, for the example above, that I want to give some discount: all my business logic is in the server, so I need to change it in one place, not at each client. This is the main advantage of 3-tier architecture.

Levels of testing

We divide testing into four levels: unit testing, integration testing, system testing, and acceptance testing.
Unit testing:-
Generally, the code that is generated is compiled first. The unit test is white-box oriented, and the steps can be conducted in parallel for multiple components.
1. The module interface is tested to ensure that information properly flows into and out of the program unit under test.
2. The local data structure is examined to ensure that data stored temporarily maintains its integrity during all steps in an algorithm's execution.
3. Boundary conditions are tested to ensure that the module operates properly at the boundaries established to limit and restrict processing.
4. All the statements are executed at least once, and error-handling paths are tested.
Integration testing:-
Integration testing is a systematic technique for constructing the program structure. After performing unit testing, perform integration testing. The strategies are:
Top down:- Top-down integration testing begins with the main routine and one or two immediate subordinate routines in the system structure. It is good to have modules integrated as they are developed; top-level interfaces are tested first.
Bottom up:- Bottom-up integration testing is the traditional strategy used to integrate the components of a software system into a functioning whole.
Regression testing:- Retesting the already tested modules after adding new modules. Regression testing is an important strategy for reducing side effects.
System Level Testing:
System testing is the third level of testing. At this level we check the functionality of the application.
Performance testing:- Performance testing is designed to test the run-time performance of software or hardware.
Recovery testing:- A system test that forces the software to fail in a variety of ways and verifies that recovery is properly performed. If recovery is automatic, re-initialization, checkpointing, data recovery, and restart are evaluated for correctness.
Security testing:- Security testing attempts to verify that protection mechanisms built into a system will, in fact, protect it from improper penetration.
Acceptance testing:- When custom software is built for one customer, a series of acceptance tests is conducted to enable the customer to validate all requirements. Conducted by the end user rather than software engineers, an acceptance test can range from an informal test drive to a planned and systematically executed series of tests. If software is developed as a product to be used by many customers, it is impractical to perform acceptance tests with each one. Most software product builders use a process called alpha and beta testing to uncover errors that only the end user seems able to find.

Automation Testing - Win Runner

Testing automation: software testing can be very costly, and automation is a good way to cut down time and cost. Software testing tools and techniques usually suffer from a lack of generic applicability and scalability. The reason is straightforward: in order to automate the process, we have to have some way to generate oracles from the specification, and to generate test cases to test the target software against the oracles to decide their correctness. Today we still don't have a full-scale system that has achieved this goal. In general, a significant amount of human intervention is still needed in testing; the degree of automation remains at the automated test script level. There are many automation testing tools for functional and regression testing, performance testing, bug tracking, and test management.

What are the Advantages of Automation in testing?
  • Fast
  • Reliable
  • Repeatable
  • Programmable
  • Comprehensive
  • Reusable
WinRunner: WinRunner is a functional and regression testing tool from Mercury.

Win Runner:

  • Need For Automation
  • WinRunner Introduction
  • WinRunner Basic Session / Examples
  • WinRunner Advanced Session / Examples

Few Reasons

  • Running tests manually is boring and frustrating
  • Eliminates human error
  • Write once, run as many times as needed
  • Provides increased testing coverage
  • Allows testers to focus on verifying new rather than existing functionality
  • Creates tests that can be maintained and reused throughout the application life cycle

WinRunner is a functional testing tool

  • Specifically a regression test tool
  • Helps in creating reusable and adaptable scripts
  • Used for automating the testing process
  • Requires writing scripts in TSL for the same
  • Helps in detecting defects early, before regression testing

Test Plan Documenting System

  • Test Plan Design
  • Test Case Design
  • Test Script Creation – Manual & Automated

Test Execution Management

  • Scenario Creation
  • Test Runs

Analysis of Results

  • Reports & Graphs

Defect Tracking System

WinRunner Testing Process

  • Create GUI map
  • Create tests
  • Debug tests
  • Run tests
  • Examine results
  • Report defects

Testing Process of WinRunner in Detail: the WinRunner testing process involves six main stages.

  • Create GUI Map File: so that WinRunner can recognize the GUI objects in the application being tested
  • Create test scripts: by recording, programming, or a combination of both. While recording tests, insert checkpoints where you want to check the response of the application being tested.
  • Debug tests: run tests in Debug mode to make sure they run smoothly
  • Run tests: run tests in Verify mode to test your application.
  • View results: determine the success or failure of the tests.
  • Report defects: if a test run fails due to a defect in the application being tested, you can report information about the defect directly from the Test Results window.

WinRunner Testing Modes

Context Sensitive

  • Records the actions on the AUT in terms of GUI objects.
  • Ignores the physical location of the object on the screen

Analog

  • Records mouse clicks, keyboard input, and the exact x- and y-coordinates traveled by the mouse

Types of GUI Map files

GUI Map File per Test mode

  • Separate GUI Map File for each test

Global GUI Map File mode

  • Single GUI Map File for a group of tests

Different modes for running the tests

  • Verify
  • Debug
  • Update

Checkpoints

  • GUI Checkpoint
  • Bitmap Checkpoint
  • Database Checkpoint
  • Synchronization point

GUI Checkpoint

  • A GUI checkpoint examines the behavior of an object’s properties
  • During execution, the current state of the GUI objects is compared to the expected results

Bitmap Checkpoint

  • Compares captured bitmap images pixel by pixel
  • When running a test that includes bitmap checkpoints, make sure that the screen display settings are the same as when the test script was created. If the screen settings are different, WinRunner will report a bitmap mismatch .

Database Checkpoint

  • A query is defined on the database and the database checkpoint checks the values contained in the result set
  • Result set is a set of values retrieved from the results of the query
  • Ways to define the query
  • (a) Microsoft query
  • (b) ODBC query
  • (c) Data junction

Synchronization point

  • When you run tests, your application may not always respond to input with the same speed
  • Insert a synchronization point into the test script at the exact point where the problem occurs
  • A synchronization point tells WinRunner to pause the test run in order to wait for a specified response in the application

Using Regular Expressions

  • Enable WinRunner to identify objects with varying names or titles
  • Can be used in:
  • An object's physical description in the GUI map
  • GUI checkpoints
  • Text checkpoints

Virtual Objects:

  • Can teach WinRunner to recognize any bitmap in a window as a GUI object
  • Make test scripts easier to read and understand

Creating Data-Driven Tests:

  • To test how the AUT performs with multiple sets of data
  • Can be done using the..
  • Data Driver Wizard
  • Add command manually in script

Advantage of Data-Driven Tests:

  • Run the same test with different data
  • Test the AUT for both, positive and negative results
  • Expandable
  • Easy to maintain

Bug Life Cycle

The main purpose behind any software development process is to provide the client (the final/end user of the software product) with a complete solution (software product) which will help him in managing his business/work in a cost-effective and efficient way. A software product is considered successful if it satisfies all the requirements stated by the end user.

Any software development process is incomplete if the most important phase, testing of the developed product, is excluded. Software testing is a process carried out in order to find and fix previously undetected bugs/errors in the software product. It helps in improving the quality of the software product and makes it secure for the client to use.

What is a bug/error?
A bug or error in software product is any exception that can hinder the functionality of either the whole software or part of it.

How do I find out a BUG/ERROR?
Basically, test cases/scripts are run in order to find out any unexpected behavior of the software product under test. If any such unexpected behavior or exception occurs, it is called a bug.

What is a Test Case?
A test case is a documented set of steps/activities that are carried out or executed on the software in order to confirm its functionality/behavior against a certain set of inputs.

What do I do if I find a bug/error?
In normal terms, if a bug or error is detected in a system, it needs to be communicated to the developer in order to get it fixed.

Right from the first time any bug is detected until it is finally fixed and closed, it is assigned various statuses: New, Assigned, Open, Fixed, Pending Retest, Retest, Closed, Reopen, Pending Reject, Rejected, Postponed, and Deferred.

(Please note that there are various ways to communicate the bug to the developer and track the bug status)

Statuses associated with a bug:
New:
When a bug is found/revealed for the first time, the software tester communicates it to his/her team leader (Test Leader) in order to confirm if that is a valid bug. After getting confirmation from the Test Lead, the software tester logs the bug and the status of ‘New’ is assigned to the bug.

Assigned:
After the bug is reported as ‘New’, it comes to the development team, which verifies whether the bug is valid. If it is, the development leader assigns it to a developer to fix, and the bug is given the status of ‘Assigned’.

Open:
Once the developer starts working on the bug, he/she changes the status of the bug to ‘Open’ to indicate that he/she is working on it to find a solution.

Fixed:
Once the developer makes necessary changes in the code and verifies the code, he/she marks the bug as ‘Fixed’ and passes it over to the Development Lead in order to pass it to the Testing team.

Pending Retest:
After the bug is fixed, it is passed back to the testing team to get retested and the status of ‘Pending Retest’ is assigned to it.

Retest:
The testing team leader changes the status of the bug, which is previously marked with ‘Pending Retest’ to ‘Retest’ and assigns it to a tester for retesting.

Closed:
After the bug is assigned a status of ‘Retest’, it is again tested. If the problem is solved, the tester closes it and marks it with ‘Closed’ status.

Reopen:
If, on retesting, the system behaves in the same way and the same bug arises again, the tester reopens the bug and sends it back to the developer, marking its status as ‘Reopen’.

Pending Reject:
If the developers consider the behavior the tester reported as a bug to be the intended behavior, and therefore the bug invalid, the bug is rejected and marked as ‘Pending Reject’.

Rejected:
If the Testing Leader finds that the system is working according to the specifications, or that the bug is invalid as per the explanation from the development team, he/she rejects the bug and marks its status as ‘Rejected’.

Postponed:
Sometimes, testing of a particular bug has to be postponed for an indefinite period. This may occur for many reasons, such as unavailability of test data or of a particular functionality. In that case, the bug is marked with ‘Postponed’ status.

Deferred:
In some cases a particular bug is of little importance and can be left unfixed for the time being; such a bug is marked with ‘Deferred’ status.
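
The statuses above can be modeled as a small state machine. The transition set below is one plausible arrangement based on the descriptions in this article; real bug trackers differ in exactly which transitions they allow:

```python
# One plausible set of bug status transitions (organizations vary).
TRANSITIONS = {
    "New":            {"Assigned"},
    "Assigned":       {"Open", "Pending Reject", "Postponed", "Deferred"},
    "Open":           {"Fixed"},
    "Fixed":          {"Pending Retest"},
    "Pending Retest": {"Retest"},
    "Retest":         {"Closed", "Reopen"},
    "Reopen":         {"Assigned"},
    "Pending Reject": {"Rejected", "Open"},
    "Postponed":      {"Open"},
    "Deferred":       {"Open"},
    "Rejected":       set(),
    "Closed":         set(),
}

def move(status, new_status):
    """Advance a bug to new_status, rejecting illegal transitions."""
    if new_status not in TRANSITIONS[status]:
        raise ValueError(f"illegal transition: {status} -> {new_status}")
    return new_status

# The happy path: found, fixed, retested, closed.
status = "New"
for step in ["Assigned", "Open", "Fixed", "Pending Retest", "Retest", "Closed"]:
    status = move(status, step)
print(status)  # Closed
```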

Very Basics Of Testing

TESTING

Software testing is a critical element of software quality assurance and represents the ultimate review of specification, design and coding. Testing is the exposure of the system to trial input to see whether it produces correct output.

Testing Phases:

Software testing phases include the following:

Test activities are determined and test data selected.

The test is conducted and test results are compared with the expected results.

There are various types of Testing:

Unit Testing:

Unit testing is essentially for the verification of the code produced during the coding phase; the goal is to test the internal logic of the module/program. In the Generic code project, unit testing is done during the coding phase of the data entry forms to check whether the functions work properly. In this phase all the drivers are also tested to verify that they are correctly connected.
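
A minimal unit test using Python's unittest framework; the validation function is an invented stand-in for the kind of data entry logic described above:

```python
import unittest

def validate_age(value):
    """Hypothetical unit under test: accept integers from 0 to 120."""
    return isinstance(value, int) and 0 <= value <= 120

class TestValidateAge(unittest.TestCase):
    def test_typical_value(self):
        self.assertTrue(validate_age(35))

    def test_boundaries(self):
        self.assertTrue(validate_age(0))
        self.assertTrue(validate_age(120))

    def test_out_of_range(self):
        self.assertFalse(validate_age(-1))
        self.assertFalse(validate_age(121))

suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestValidateAge)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print(result.wasSuccessful())  # True
```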

Integration Testing:

All the tested modules are combined into subsystems, which are then tested. The goal is to see whether the modules are properly integrated, with the emphasis on testing the interfaces between modules. In the Generic code project, integration testing is done mainly on the table creation module and the insertion module.

System Testing:

System testing checks whether the software as a whole meets its requirements. The reference document for this process is the requirements document.

Acceptance Testing:

It is performed with realistic data of the client to demonstrate that the software is working satisfactorily. In the Generic code project, testing is done to check whether the creation of tables and the corresponding data entry work successfully.

Testing Methods:

Testing is a process of executing a program to find errors. If testing is conducted successfully, it will uncover errors in the software. Testing can be approached in two ways:

White Box Testing:

It is a test case design method that uses the control structures of the procedural design to derive test cases. Using this method, a software engineer can derive test cases that:

Exercise all logical decisions on their true and false sides. Execute all loops at their boundaries and within their operational bounds. Exercise internal data structures to ensure their validity.
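
A toy illustration of the first two of those goals (the function is invented for the example):

```python
def classify(n):
    """Toy function with one loop and one decision, for coverage analysis."""
    total = 0
    for i in range(n):  # loop: exercise at its boundary (n = 0) and within it
        total += i
    if total > 10:      # decision: exercise both the true and false sides
        return "large"
    return "small"

print(classify(6))  # "large": loop runs 6 times, 0+1+2+3+4+5 = 15 > 10
print(classify(0))  # "small": loop boundary case of zero iterations
```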

Black Box Testing:

It is a test case design method used on the functional requirements of the software. It will help a software engineer to derive sets of input conditions that will exercise all the functional requirements of the program. Black Box testing attempts to find errors in the following categories:

Incorrect or missing functions

Interface errors

Errors in data structures

Performance errors

Initialization and termination errors

By black box testing we derive a set of test cases that satisfy the following criteria:

Test cases that reduce, by a count greater than one, the number of additional test cases that must be designed to achieve reasonable testing.

Test cases that tell us something about the presence or absence of classes of errors rather than errors associated only with a specific test at hand.

TEST APPROACH:
Testing can be done in two ways:

Bottom up approach

Top down approach

Bottom up approach:

Testing can be performed starting from the smallest, lowest-level modules and proceeding one at a time. For each module in bottom-up testing, a short driver program executes the module and provides the data it needs, so that the module is exercised the way it will be when embedded within the larger system. When the bottom-level modules have been tested, attention turns to the modules on the next level that use the lower-level ones; these are tested individually and then linked with the previously examined lower-level modules.

Top down approach:

This type of testing starts from the upper-level modules. Since the detailed activities usually performed in the lower-level routines are not yet available, stubs are written. A stub is a module shell called by an upper-level module; when reached, it returns a message to the calling module indicating that proper interaction occurred. No attempt is made to verify the correctness of the lower-level module.
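
The stub idea can be sketched like this (module and function names invented):

```python
# Top-down testing sketch: the upper-level module is exercised first,
# with a stub standing in for a lower-level routine not yet written.
def process_record_stub(record):
    """Stub: no real processing, just confirm the interaction occurred."""
    return f"stub reached for {record}"

def upper_level_module(records, process=process_record_stub):
    """Upper-level logic under test; delegates detail work to `process`."""
    return [process(r) for r in records]

print(upper_level_module(["r1", "r2"]))
# ['stub reached for r1', 'stub reached for r2']
```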

Phone Interview Tips

Nowadays, employers use telephone interviews as a way of identifying and recruiting candidates for employment. Phone interviews are often used to screen candidates in order to narrow the pool of applicants who will be invited for in-person interviews.

While you are actively job searching, it is important to be prepared for a phone interview at a moment’s notice. You never know when a recruiter will call and ask if you have a few minutes to talk.

When a company calls you, clear your head and shift your focus from family to career. When you pick up the phone, ask the recruiter to repeat his or her name; verify the spelling and write it down, then use the recruiter’s name in your responses. You are now ready to make a good impression during your first five minutes.

There are three basic types of telephone interviews:

You initiate a call to the Hiring Manager and he or she is interested in your background. The call from that point forward is an interview.
A company calls you based upon a previous contact. You will likely be unprepared for the call, but it is still an interview.
You have a pre-set time with a company representative to speak further on the phone.
Here are some phone interview tips to help you:

1. Be Prepared There are several things you can do to prepare for a phone interview. Consider the following points:

You can keep all of your employer research materials within easy reach of the phone.
You can tape your resume to a wall near the phone. It will help a lot during the call and will be a constant reminder for your job search.
Have a notepad handy to take notes.
If the phone interview will occur at a set time, consider these additional points:

Turn off call waiting on your phone.
Place a “Do Not Disturb” note on your door.
Warm up your voice while waiting for the call.
Have a glass of water handy, since you will not get a chance to take a break during the call.
Turn off your stereo, TV, and any other potential distraction.

2. Do not be afraid to pick up the phone The telephone interview is often the first step in the hiring process, and a call may come from any company at any time. When it does, ask the recruiter to repeat his or her name; verify the spelling, write it down, and use the recruiter’s name in your responses.
If it is genuinely a bad time to talk, ask for a telephone number and a convenient time to call back. You are now ready to make a good impression during your first five minutes.

These phone interview tips will help you master the phone interview and get you to the next step – the face-to-face interview. So do not be afraid to pick up the phone.

3. Be a good listener During a telephone interview, keep in mind that you must be a good listener.
Avoid interrupting and let the recruiter complete his or her thought or question before you respond. Ask for clarification where needed and use open-ended questions: the more information you can gather, the better you can respond. Being a good listener is one of the best qualities an interviewee can show.

4. During the phone interview Here are some points for successful phone interviewing. Follow these simple rules and you should achieve success in this important phase of job-hunting.
Here are some do’s for phone Interviews:

  1. A smile always helps. Smiling will project a positive image to the listener and will change the tone of your voice.
  2. Do keep a glass of water handy, in case you need to wet your mouth.
  3. Do know what job you are interviewing for.
  4. Speak slowly and enunciate clearly.
  5. Take your time; it is perfectly acceptable to take a moment to collect your thoughts.
  6. Remember your goal is to set up a face to face interview. After you thank the interviewer ask if it would be possible to meet in person.
  7. Do give accurate and detailed contact information in your cover letter so your interviewers can easily connect with you.
  8. Household members must understand the importance of phone messages in your job search.
  9. Use the person’s title (Mr. or Ms. and their last name.) Only use a first name if they ask you to.
  10. When being interviewed by phone, do make sure you are in a place where you can read notes, take notes, and concentrate.
  11. If you cannot devote enough time to a phone interview, do suggest a specific alternate time to the recruiter.
  12. Give short answers.
  13. Do ensure that you can hear and are being clearly heard.
  14. Do create a strong finish to your phone interview with thoughtful questions.

Following are some Don’ts for phone Interviews:

  1. Do not smoke, chew gum, eat, or drink.
  2. Do not interrupt the interviewer.
  3. Do not cough. If you cannot avoid these behaviors, say, “excuse me.”
  4. Do not feel you have to fill in the silences. If you have completed a response, but the interviewer has not asked his or her next question, do not start anything new; ask a question of your own related to your last response.

5.The Open and Available Technique You have a major advantage in a phone interview which does not exist in a face-to-face interview. You cannot be seen. Use this to your advantage.
Have all of your materials about yourself and the employer open and available on your desk as you speak on the phone. This includes not only your resume, but also a “cheat sheet” of compelling story subjects you would like to introduce, and a “cheat sheet” about the employer, including specific critical points describing the employer and their products.

As the interviewer speaks with you on the other end of the phone, he or she has no idea that you are being prompted from a document. All that person can hear is a well-informed, well-prepared interviewee. Keep in mind that this preparation is not “cheating” at all. It is preparation, pure and simple.

So have your materials open and available when you are preparing for a phone interview. They are there to support you and enhance your value to the employer, who will greatly respect your ability to answer questions with focus and meaningful content.

6. Focus on what you offer and can do A phone interview can come as a surprise, so be prepared for it at all times. The recruiter’s mission is to screen candidates and recommend those who will best meet the employer’s needs.
When describing your background, avoid the negative points. You will only get one chance to make a positive first impression. Stay focused by reviewing and use the key points you wrote down about your strengths.

7. Sound positive, self-confident and focused The fact that the recruiter has called you indicates that your resume or a member of your network has given him or her a favorable impression of you. You need to confirm this impression. Put a smile on your face and into your voice.
You need to demonstrate your enthusiasm and interest through your voice and telephone manner. Check your voice by recording it; listen carefully and make any necessary changes.

8. Write out your responses and practice reading them aloud This will help you remember your responses. By knowing what to say, you will sound more confident, a quality that recruiters seek in candidates. Most candidates are asked about their salary expectations during screening interviews. Recruiters and employers usually have a salary range in mind, and while often unwilling to share it at this stage, they expect you to answer.
Your objective at this point is to win acceptance and be recommended for further consideration. Accordingly, you may want to avoid giving a direct answer to this question, noting that other issues matter as well: non-cash benefits and compensation, scope of responsibilities, work environment, job location, career advancement and others.

9.Ask about the next step At the end of the interview, tell the recruiter you are interested. Ask about the next step in the interview process as well as the hiring timetable. If you do not receive a positive response and you are sincerely interested, ask the recruiter if he or she has any areas of concern.
If there is a misunderstanding about you or the recruiter does not seem certain that you are suitable, try to clarify the problem, then ask again about the next step and timetable.

10. After the Interview Here are some points to consider after the phone interview:

Take notes about how you answered and what you were asked.
Remember to say “thank you” at the end of the conversation.

Software Testing FAQ Part2

  1. What is Test Bed?
    An execution environment configured for testing. May consist of specific hardware, OS, network topology, configuration of the product under test, other application or system software, etc. The test plan for a project should enumerate the test bed(s) to be used.
  2. What is Software Requirements Specification?
    A deliverable that describes all data, functional and behavioral requirements, all constraints, and all validation requirements for software.
  3. What is Soak Testing?
    Running a system at high load for a prolonged period of time. For example, running several times more transactions in an entire day (or night) than would be expected in a busy day, to identify any performance problems that appear after a large number of transactions have been executed.
  4. What is Smoke Testing?
    A quick-and-dirty test that the major functions of a piece of software work. Originated in the hardware testing practice of turning on a new piece of hardware for the first time and considering it a success if it does not catch on fire.
  5. What is Scalability Testing?
    Performance testing focused on ensuring the application under test gracefully handles increases in work load.
  6. What is Release Candidate?
    A pre-release version, which contains the desired functionality of the final version, but which needs to be tested for bugs (which ideally should be removed before the final version is released).
  7. What is Ramp Testing?
    Continuously raising an input signal until the system breaks down.
  8. What is Race Condition?
    A cause of concurrency problems. Multiple accesses to a shared resource, at least one of which is a write, with no mechanism used by either to moderate simultaneous access.
  9. What is Quality System?
    The organizational structure, responsibilities, procedures, processes, and resources for implementing quality management.
  10. What is Quality Policy?
    The overall intentions and direction of an organization as regards quality as formally expressed by top management.
  11. What is Quality Management?
    That aspect of the overall management function that determines and implements the quality policy.
  12. What is Quality Control?
    The operational techniques and the activities used to fulfill and verify requirements of quality.
  13. What is Quality Circle?
    A group of individuals with related interests that meet at regular intervals to consider problems or other matters related to the quality of outputs of a process and to the correction of problems or to the improvement of quality.
  14. What is Quality Audit?
    A systematic and independent examination to determine whether quality activities and related results comply with planned arrangements and whether these arrangements are implemented effectively and are suitable to achieve objectives.
  15. What is Quality Assurance?
    All those planned or systematic actions necessary to provide adequate confidence that a product or service is of the type and quality needed and expected by the customer.
  16. What is Monkey Testing?
    Testing a system or an application on the fly, i.e. just a few tests here and there to ensure the system or application does not crash.
  17. What is Metric?
    A standard of measurement. Software metrics are the statistics describing the structure or content of a program. A metric should be a real objective measurement of something such as number of bugs per lines of code.
  18. What is Localization Testing?
    This term refers to adapting software specifically for a particular locality.
  19. What is Independent Test Group (ITG)?
    A group of people whose primary responsibility is software testing.
  20. What is Gorilla Testing?
    Testing one particular module, functionality heavily.
  21. What is Gray Box Testing?
    A combination of Black Box and White Box testing methodologies, testing a piece of software against its specification but using some knowledge of its internal workings.
  22. What is Functional Specification?
    A document that describes in detail the characteristics of the product with regard to its intended features.
  23. What is Functional Decomposition?
    A technique used during planning, analysis and design; creates a functional hierarchy for the software.
  24. What is Exhaustive Testing?
    Testing which covers all combinations of input values and preconditions for an element of the software under test.
  25. What is Equivalence Partitioning?
    A test case design technique for a component in which test cases are designed to execute representatives from equivalence classes.
  26. What is Equivalence Class?
    A portion of a component’s input or output domains for which the component’s behaviour is assumed, from the component’s specification, to be the same for every value in the class.
  27. What is Endurance Testing?
    Checks for memory leaks or other problems that may occur with prolonged execution.
  28. What is Emulator?
    A device, computer program, or system that accepts the same inputs and produces the same outputs as a given system.
  29. What is Depth Testing?
    A test that exercises a feature of a product in full detail.
  30. What is Dependency Testing?
    Examines an application’s requirements for pre-existing software, initial states and configuration in order to maintain proper functionality.
  31. What is Defect?
    Non conformance to requirements or functional / program specification
  32. What is Debugging?
    The process of finding and removing the causes of software failures.
  33. What is Data Driven Testing?
    Testing in which the action of a test case is parameterized by externally defined data values, maintained as a file or spreadsheet. A common technique in Automated Testing.
  34. What is Data Flow Diagram?
    A modeling notation that represents a functional decomposition of a system.
  35. What is Cyclomatic Complexity?
    A measure of the logical complexity of an algorithm, used in white-box testing.
  36. What is Conversion Testing?
    Testing of programs or procedures used to convert data from existing systems for use in replacement systems.
  37. What is Context Driven Testing?
    The context-driven school of software testing is a flavor of Agile Testing that advocates continuous and creative evaluation of testing opportunities in light of the potential information revealed and the value of that information to the organization right now.
  38. What is Conformance Testing?
    The process of testing that an implementation conforms to the specification on which it is based. Usually applied to testing conformance to a formal standard.
  39. What is Concurrency Testing?
    Multi-user testing geared towards determining the effects of accessing the same application code, module or database records. Identifies and measures the level of locking, deadlocking and use of single-threaded code and locking semaphores.
  40. What is Component?
    A minimal software item for which a separate specification is available.
  41. What is Code Coverage?
    An analysis method that determines which parts of the software have been executed (covered) by the test case suite and which parts have not been executed and therefore may require additional attention.
  42. What is Code Complete?
    Phase of development where functionality is implemented in entirety; bug fixes are all that are left. All functions found in the Functional Specifications have been implemented.
  43. What is Cause Effect Graph?
    A graphical representation of inputs and the associated outputs effects which can be used to design test cases.
  44. What is Capture/Replay Tool?
    A test tool that records test input as it is sent to the software under test. The input cases stored can then be used to reproduce the test at a later time. Most commonly applied to GUI test tools.
  45. What is Breadth Testing?
    A test suite that exercises the full functionality of a product but does not test features in detail.
  46. What is Branch Testing?
    Testing in which all branches in the program source code are tested at least once.
  47. What is Boundary Value Analysis?
    BVA is similar to Equivalence Partitioning but focuses on “corner cases”, values at or just outside the limits defined by the specification. This means that if a function expects all values in the range of negative 100 to positive 1000, test inputs would include negative 101 and positive 1001.
  48. What is Boundary Testing?
    Test which focus on the boundary or limit conditions of the software being tested. (Some of these tests are stress tests).
  49. What is Binary Portability Testing?
    Testing an executable application for portability across system platforms and environments, usually for conformation to an ABI specification.
  50. What is Baseline?
    The point at which some deliverable produced during the software engineering process is put under formal change control.
  51. What is Basis Set?
    The set of tests derived using basis path testing.
  52. What is Basis Path Testing?
    A white box test case design technique that uses the algorithmic flow of the program to design tests.
  53. What is Basic Block?
    A sequence of one or more consecutive, executable statements containing no branches.
  54. What is Backus-Naur Form?
    A metalanguage used to formally describe the syntax of a language.
  55. What is Automated Testing?
    Testing employing software tools which execute tests without manual intervention. Can be applied in GUI, performance, API, etc. testing.
    The use of software to control the execution of tests, the comparison of actual outcomes to predicted outcomes, the setting up of test preconditions, and other test control and test reporting functions.
  56. What is Automated Software Quality (ASQ)?
    The use of software tools, such as automated testing tools, to improve software quality.
  57. What is Application Programming Interface (API)?
    A formalized set of software calls and routines that can be referenced by an application program in order to access supporting system or network services.
  58. What is Application Binary Interface (ABI)?
    A specification defining requirements for portability of applications in binary form across different system platforms and environments.
  59. What is Agile Testing?
    Testing practice for projects using agile methodologies, treating development as the customer of testing and emphasizing a test-first design paradigm.
  60. What is Ad Hoc Testing?
    A testing phase where the tester tries to ‘break’ the system by randomly trying the system’s functionality. Can include negative testing as well.
  61.  How do you debug an ASP.NET Web application?
    Attach the aspnet_wp.exe process to the DbgClr debugger.
  62. Why are there five tracing levels in System.Diagnostics.TraceSwitch?
    The tracing dumps can be quite verbose, and for applications that are constantly running you run the risk of overloading the machine and the hard drive. Five levels, ranging from None to Verbose, allow you to fine-tune the tracing activities.
  63. What’s the difference between the Debug class and Trace class?
    Documentation looks the same. Use Debug class for debug builds, use Trace class for both debug and release builds.
  64. Difference between Smoke Testing and Sanity Testing?
    Smoke Testing is non-exhaustive software testing, ascertaining that the most crucial functions of a program work, but not bothering with finer details. Sanity Testing is cursory testing, performed whenever a quick check is sufficient to prove the application is functioning according to specifications. This level of testing is a subset of regression testing. It normally includes a set of core tests of basic GUI functionality to demonstrate connectivity to the database, application servers, printers, etc.
  65. What is the difference between QC and QA?
    Quality assurance is the process whereby the documents for the product to be tested are verified against the actual requirements of the customers. It includes inspection, auditing, code review, meetings, etc. Quality control is the process whereby the product is actually executed and the expected behavior is verified by comparing it with the actual behavior of the software under test. All the testing types, such as black box and white box testing, come under quality control. Quality assurance is done before quality control.
  66. What is a scenario?
    A scenario defines the events that occur during each testing session. For example, a scenario defines and controls the number of users to emulate, the actions to be performed, and the machines on which the virtual users run their emulations.
  67. What is Difference betweem Manual and Automation Testing?
    This answer is quite simple. Manual testing is when the user carries out the steps specified in the test case by hand, say clicking a tab to check that it works, or clicking a particular URL to check that the specified web site opens. The same steps can be automated through tools like WinRunner or Silk Test: the user just triggers the tool and the tool takes care of the execution. The only thing needed is to know how to automate the test cases with that tool, which is not very difficult.
  68. What is L10N Testing?
    L10N Testing is Localization Testing; it verifies whether your products are ready for local markets.
  69. What is I18N Testing?
    I18N Testing is “Internationalization Testing”.
    It determines whether your developed product’s support for international character encoding methods is sufficient and whether your product development methodologies take into account international coding standards.
  70. What is SEI? CMM? CMMI? ISO? IEEE? ANSI?
  71. SEI = ‘Software Engineering Institute’ at Carnegie-Mellon University; initiated by the U.S. Defense Department to help improve software development processes.
    CMM = ‘Capability Maturity Model’, now called the CMMI (’Capability Maturity Model Integration’), developed by the SEI. It’s a model of 5 levels of process ‘maturity’ that determine effectiveness in delivering quality software. It is geared to large organizations such as large U.S. Defense Department contractors. However, many of the QA processes involved are appropriate to any organization, and if reasonably applied can be helpful. Organizations can receive CMMI ratings by undergoing assessments by qualified auditors.
    Level 1 – characterized by chaos, periodic panics, and heroic efforts required by individuals to successfully complete projects. Few if any processes in place; successes may not be repeatable.
    Level 2 – software project tracking, requirements management, realistic planning, and configuration management processes are in place; successful practices can be repeated.
    Level 3 – standard software development and maintenance processes are integrated throughout an organization; a Software Engineering Process Group is in place to oversee software processes, and training programs are used to ensure understanding and compliance.
    Level 4 – metrics are used to track productivity, processes, and products. Project performance is predictable, and quality is consistently high.
    Level 5 – the focus is on continuous process improvement. The impact of new processes and technologies can be predicted and effectively implemented when required.

    ISO = ‘International Organisation for Standardization’ – The ISO 9001:2000 standard (which replaces the previous standard of 1994) concerns quality systems that are assessed by outside auditors, and it applies to many kinds of production and manufacturing organizations, not just software. It covers documentation, design, development, production, testing, installation, servicing, and other processes. The full set of standards consists of: (a)Q9001-2000 – Quality Management Systems: Requirements; (b)Q9000-2000 – Quality Management Systems: Fundamentals and Vocabulary; (c)Q9004-2000 – Quality Management Systems: Guidelines for Performance Improvements.

    To be ISO 9001 certified, a third-party auditor assesses an organization, and certification is typically good for about 3 years, after which a complete reassessment is required. Note that ISO certification does not necessarily indicate quality products – it indicates only that documented processes are followed.

    IEEE = ‘Institute of Electrical and Electronics Engineers’ – among other things, creates standards such as ‘IEEE Standard for Software Test Documentation’ (IEEE/ANSI Standard 829), ‘IEEE Standard for Software Unit Testing’ (IEEE/ANSI Standard 1008), and ‘IEEE Standard for Software Quality Assurance Plans’ (IEEE/ANSI Standard 730).

    ANSI = ‘American National Standards Institute’, the primary industrial standards body in the U.S.; publishes some software-related standards in conjunction with the IEEE and ASQ (American Society for Quality).

    Other software development/IT management process assessment methods besides CMMI and ISO 9000 include SPICE, Trillium, TickIT, Bootstrap, ITIL, MOF, and CobiT.

  72. What makes a good Software Test engineer?
    A good test engineer has a ‘test to break’ attitude, an ability to take the point of view of the customer, a strong desire for quality, and an attention to detail. Tact and diplomacy are useful in maintaining a cooperative relationship with developers, and an ability to communicate with both technical (developers) and non-technical (customers, management) people is useful. Previous software development experience can be helpful as it provides a deeper understanding of the software development process, gives the tester an appreciation for the developers’ point of view, and reduces the learning curve in automated test tool programming. Judgement skills are needed to assess high-risk areas of an application on which to focus testing efforts when time is limited.
  73. What is documentation change management?
    Documentation change management is part of configuration management (CM). CM covers the tools and processes used to control, coordinate and track code, requirements, documentation, problems, change requests, designs, tools, compilers, libraries, patches, changes made to them and who makes the changes.
  74. What is the role of documentation in QA?
    Documentation plays a critical role in QA. QA practices should be documented so that they are repeatable. Specifications, designs, business rules, inspection reports, configurations, code changes, test plans, test cases, bug reports and user manuals should all be documented. Ideally, there should be a system for easily finding and obtaining documents and for determining which document will contain a particular piece of information. Use documentation change management, if possible.
  75. How do you perform integration testing?
    First, unit testing has to be completed. Upon completion of unit testing, integration testing begins. Integration testing is black box testing. The purpose of integration testing is to ensure distinct components of the application still work in accordance to customer requirements. Test cases are developed with the express purpose of exercising the interfaces between the components. This activity is carried out by the test team.
    Integration testing is considered complete, when actual results and expected results are either in line or differences are explainable/acceptable based on client input.
  76. What is clear box testing?
    Clear box testing is the same as white box testing. It is a testing approach that examines the application’s program structure, and derives test cases from the application’s program logic.
  77. What is closed box testing?
    Closed box testing is the same as black box testing, a type of testing that considers only externally visible behavior. Black box testing considers neither the code itself, nor the “inner workings” of the software.
  78. What is open box testing?
    Open box testing is the same as white box testing. It is a testing approach that examines the application’s program structure, and derives test cases from the application’s program logic.
  79. What is ‘Software Quality Assurance’?
    Software QA involves the entire software development PROCESS – monitoring and improving the process, making sure that any agreed-upon standards and procedures are followed, and ensuring that problems are found and dealt with. It is oriented to ‘prevention’.
  80. What is Security Testing?
    Application vulnerabilities leave your system open to attacks, downtime, data theft, data corruption and application defacement. Security within an application or web service is crucial to avoid such vulnerabilities and new threats. While automated tools can help to eliminate many generic security issues, the detection of application vulnerabilities requires independent evaluation of your specific application’s features and functions by experts. An external security vulnerability review by Third Eye Testing will give you the best possible confidence that your application is as secure as possible.
  81. What is Stress Testing?
    Stress Testing is a form of testing that is used to determine the stability of a given system or entity. It involves testing beyond normal operational capacity, often to a breaking point, in order to observe the results. For example, a web server may be stress tested using scripts, bots, and various denial of service tools to observe the performance of a web site during peak loads. Stress testing is a subset of load testing. Also see testing, software testing, performance testing.
  82. What is Acceptance Testing?
    User acceptance testing (UAT) is one of the final stages of a software project and will often occur before the customer accepts a new system. Users of the system perform these tests which, ideally, developers have derived from the User Requirements Specification, to which the system should conform. Test designers draw up a formal test plan and devise a range of severity levels. The focus in this type of testing is less on simple problems (spelling mistakes, cosmetic problems) and show stoppers (major problems like the software crashing, or the software not running at all). Developers should have worked out these issues during unit testing and integration testing. Rather, the focus is on a final verification of the required business function and flow of the system. The test scripts emulate real-world usage of the system. The idea is that if the software works as intended and without issues during a simulation of normal use, it will work just the same in production. Results of these tests allow both the customers and the developers to be confident that the system will work as intended.
  83. What is Compatibility Testing?
    One of the challenges of software development is ensuring that the application works properly on the different platforms and operating systems on the market, and also with the applications and devices in its environment. A compatibility testing service aims at locating application problems by running them in real environments, thus ensuring that your application is compatible with various hardware, operating system and browser versions.
  84. What is Fuzz Testing?
    Fuzz testing is a software testing technique. The basic idea is to attach the inputs of a program to a source of random data. If the program fails (for example, by crashing, or by failing in-built code assertions), then there are defects to correct. The great advantage of fuzz testing is that the test design is extremely simple, and free of preconceptions about system behavior.
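    As a minimal sketch in Python (parse_age is a hypothetical program under test, invented for the example), a fuzzer can be as simple as feeding random printable strings to the target and recording any unhandled exception, including failed assertions:

```python
import random
import string

def parse_age(text):
    """Hypothetical function under test: parse an age string."""
    value = int(text)                     # may raise ValueError on junk input
    assert 0 <= value <= 150, "age out of range"
    return value

def fuzz(target, trials=1000, seed=42):
    """Feed random strings to `target`; collect inputs that crash it."""
    rng = random.Random(seed)
    failures = []
    for _ in range(trials):
        length = rng.randint(0, 6)
        data = "".join(rng.choice(string.printable) for _ in range(length))
        try:
            target(data)
        except Exception as exc:          # any unhandled exception is a finding
            failures.append((data, type(exc).__name__))
    return failures

failures = fuzz(parse_age)
print(f"{len(failures)} of 1000 random inputs raised an exception")
```

    Note how little the fuzzer knows about the target: it needs only a callable, which is exactly the "free of preconceptions" property described above.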
  85. What is Walkthrough?
    A review of requirements, designs or code characterized by the author of the object under review guiding the progression of the review.
  86. What is Top-Down Testing?
    An approach to integration testing where the component at the top of the component hierarchy is tested first, with lower level components being simulated by stubs. Tested components are then used to test lower level components. The process is repeated until the lowest level components have been tested.
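    The approach can be sketched in Python: a hypothetical top-level process_order component is tested first, with its lower-level collaborators (pricing and stock checking, both invented for illustration) simulated by stubs that return canned answers:

```python
# Stubs simulating lower-level components that are not yet tested (or built).
def price_lookup_stub(item):
    return {"widget": 2.50, "gadget": 4.00}[item]   # canned prices

def stock_check_stub(item, qty):
    return True                                     # always "in stock"

def process_order(item, qty, price_lookup, stock_check):
    """Top-level component under test; collaborators are injected."""
    if not stock_check(item, qty):
        return None
    return round(price_lookup(item) * qty, 2)

total = process_order("widget", 4, price_lookup_stub, stock_check_stub)
print(total)  # 10.0
```

    Once the lower-level components are tested in their own right, they replace the stubs and the cycle repeats down the hierarchy.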
  87. What is Oracle?
    A mechanism to produce the predicted outcomes to compare with the actual outcomes of the software under test.
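    One common form of oracle is a slow but obviously correct reference implementation whose outputs are compared with those of the implementation under test. A small illustrative sketch (both functions are invented for the example):

```python
def fast_sum_of_squares(n):
    """Implementation under test: closed-form formula."""
    return n * (n + 1) * (2 * n + 1) // 6

def oracle_sum_of_squares(n):
    """Oracle: slow but obviously correct brute-force computation."""
    return sum(i * i for i in range(1, n + 1))

# The oracle supplies the predicted outcome for every test input.
for n in range(0, 100):
    assert fast_sum_of_squares(n) == oracle_sum_of_squares(n), n
print("all outputs matched the oracle")
```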
  88. What is Mutation Analysis?
    A method to determine test case suite thoroughness by measuring the extent to which a test case suite can discriminate the program from slight variants (mutants) of the program.
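    A hand-built illustration in Python (the function, the mutant and the test inputs are all invented for the example): a mutant is "killed" if at least one test case distinguishes it from the original program, and the fraction of mutants killed measures the thoroughness of the suite:

```python
def max_of(a, b):            # original program
    return a if a > b else b

def max_of_mutant(a, b):     # mutant: '>' changed to '<'
    return a if a < b else b

test_inputs = [(1, 2), (5, 3), (0, 0)]   # the test case suite under evaluation

def suite_kills(mutant):
    """The suite kills a mutant if some test distinguishes it from the original."""
    return any(mutant(a, b) != max_of(a, b) for a, b in test_inputs)

print("mutant killed:", suite_kills(max_of_mutant))
```

    In practice a mutation tool generates many such slight variants automatically; a mutant that survives suggests a gap in the test cases.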
  89. Do you know about LCSAJ?
    LCSAJ: A Linear Code Sequence And Jump, consisting of the following three items (conventionally identified by line numbers in a source code listing): the start of the linear sequence of executable statements, the end of the linear sequence, and the target line to which control flow is transferred at the end of the linear sequence.
  90. What is Inspection?
    A group review quality improvement process for written material. It consists of two aspects; product (document itself) improvement and process improvement (of both document production and inspection).
  91. What is Executable Statement?
    A statement which, when compiled, is translated into object code, which will be executed procedurally when the program is running and may perform an action on program data.
  92. Do you know about Cause- Effect graph?
    A graphical representation of inputs or stimuli (causes) with their associated outputs (effects), which can be used to design test cases.
  93. What is Capture/Playback Tool?
    A test tool that records test input as it is sent to the software under test. The input cases stored can then be used to reproduce the test at a later time.
  94. What is Bottom Up Testing?
    An approach to integration testing where the lowest level components are tested first, then used to facilitate the testing of higher level components. The process is repeated until the component at the top of the hierarchy is tested.
  95. What is Big Bang Testing?
    Integration testing where no incremental testing takes place prior to all the system’s components being combined to form the system.
  96. What is Productivity Metrics?
    Productivity metrics = Output / Input, or Value of Material / Cost of Production.
    E.g.: Non-Commented Source Statements (NCSS) per engineer per month
    NCSS per Person Month
    NCSS per Function Point
    NCSS can also be replaced by KLOC (Kilo Lines of Code).
  97. What is Effort Variance?
    Effort variance = (Actual effort – Planned Effort)/Planned effort * 100
  98. How do we calculate Schedule Variance?
    Schedule variance = (Actual time taken – Planned time) / Planned time * 100
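    Both variance formulas are straightforward to compute; a small sketch with invented figures:

```python
def effort_variance(actual_effort, planned_effort):
    """Effort variance, in percent: (actual - planned) / planned * 100."""
    return (actual_effort - planned_effort) / planned_effort * 100

def schedule_variance(actual_time, planned_time):
    """Schedule variance, in percent: (actual - planned) / planned * 100."""
    return (actual_time - planned_time) / planned_time * 100

# Invented figures: 110 person-days spent against 100 planned,
# and 45 calendar days taken against 50 planned.
print(effort_variance(110, 100))    # 10.0  (10% over planned effort)
print(schedule_variance(45, 50))    # -10.0 (finished 10% ahead of schedule)
```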
  99. How do you create a test plan/design?
    Test scenarios and/or cases are prepared by reviewing functional requirements of the release and preparing logical groups of functions that can be further broken into test procedures. Test procedures define test conditions, data to be used for testing and expected results, including database updates, file outputs, report results. Generally speaking…
    •Test cases and scenarios are designed to represent both typical and unusual situations that may occur in the application.
    •Test engineers define unit test requirements and unit test cases. Test engineers also execute unit test cases.
    •It is the test team who, with assistance of developers and clients, develops test cases and scenarios for integration and system testing.
    •Test scenarios are executed through the use of test procedures or scripts.
    •Test procedures or scripts define a series of steps necessary to perform one or more test scenarios.
    •Test procedures or scripts include the specific data that will be used for testing the process or transaction.
    •Test procedures or scripts may cover multiple test scenarios.
    •Test scripts are mapped back to the requirements and traceability matrices are used to ensure each test is within scope.
    •Test data is captured and base lined, prior to testing. This data serves as the foundation for unit and system testing and used to exercise system functionality in a controlled environment.
    • Some output data is also base-lined for future comparison. Base-lined data is used to support future application maintenance via regression testing.
    •A pre-test meeting is held to assess the readiness of the application and the environment and data to be tested. A test readiness document is created to indicate the status of the entrance criteria of the release.
    Inputs for this process:
    •Approved Test Strategy Document.
    •Test tools, or automated test tools, if applicable.
    • Previously developed scripts, if applicable.
    • Test documentation problems uncovered as a result of testing.
    • A good understanding of software complexity and module path coverage, derived from general and detailed design documents, e.g. software design document, source code and software complexity data.
    Outputs for this process:
    • Approved documents of test scenarios, test cases, test conditions and test data.
    •Reports of software design issues, given to software developers for correction.
  100. How do you create a test strategy?
    The test strategy is a formal description of how a software product will be tested. A test strategy is developed for all levels of testing, as required. The test team analyzes the requirements, writes the test strategy and reviews the plan with the project team. The test plan may include test cases, conditions, the test environment, a list of related tasks, pass/fail criteria and risk assessment.
    Inputs for this process:
    •A description of the required hardware and software components, including test tools. This information comes from the test environment, including test tool data.
    •A description of roles and responsibilities of the resources required for the test and schedule constraints. This information comes from man-hours and schedules.
    •Testing methodology. This is based on known standards.
    •Functional and technical requirements of the application. This information comes from requirements, change request, technical and functional design documents.
    •Requirements that the system cannot provide, e.g. system limitations.
    Outputs for this process:
    •An approved and signed off test strategy document, test plan, including test cases.
    •Testing issues requiring resolution. Usually this requires additional negotiation at the project management level.
  101. What is the general testing process?
    The general testing process is the creation of a test strategy (which sometimes includes the creation of test cases), creation of a test plan/design (which usually includes test cases and test procedures) and the execution of tests.
  102. What is software testing methodology?
    Software testing methodology is a three-step process of…
    Creating a test strategy;
    Creating a test plan/design; and
    Executing tests.
    This methodology can be used and molded to your organization’s needs.
  103. What is a test schedule?
    The test schedule is a schedule that identifies all tasks required for a successful testing effort, a schedule of all test activities and resource requirements.
  104. What is a Test Configuration Manager?
    Test Configuration Managers maintain test environments, scripts, software and test data. Depending on the project, one person may wear more than one hat. For instance, Test Engineers may also wear the hat of a Test Configuration Manager.
  105. What is a Technical Analyst?
    Technical Analysts perform test assessments and validate system/functional test requirements. Depending on the project, one person may wear more than one hat. For instance, Test Engineers may also wear the hat of a Technical Analyst.
  106. What is a Database Administrator?
    Database Administrators, Test Build Managers, and System Administrators deliver current software versions to the test environment, install the application’s software and apply software patches, to both the application and the operating system, set-up, maintain and back up test environment hardware. Depending on the project, one person may wear more than one hat. For instance, a Test Engineer may also wear the hat of a Database Administrator.
  107. What is a System Administrator?
    Test Build Managers, System Administrators, Database Administrators deliver current software versions to the test environment, install the application’s software and apply software patches, to both the application and the operating system, set-up, maintain and back up test environment hardware. Depending on the project, one person may wear more than one hat. For instance, a Test Engineer may also wear the hat of a System Administrator.
  108. What is a Test Build Manager?
    Test Build Managers deliver current software versions to the test environment, install the application’s software and apply software patches, to both the application and the operating system, set-up, maintain and back up test environment hardware. Depending on the project, one person may wear more than one hat. For instance, a Test Engineer may also wear the hat of a Test Build Manager.
  109. What is a Test Engineer?
    A Test Engineer is an engineer who specializes in testing. Test engineers create test cases, procedures, scripts and generate data. They execute test procedures and scripts, analyze standards of measurements, evaluate results of system/integration/regression testing. They also…
    •Speed up the work of your development staff;
    •Reduce your risk of legal liability;
    •Give you the evidence that your software is correct and operates properly;
    •Improve problem tracking and reporting;
    •Maximize the value of your software;
    •Maximize the value of the devices that use it;
    •Assure the successful launch of your product by discovering bugs and design flaws, before users get discouraged, before shareholders lose their cool and before employees get bogged down;
    •Help the work of your development staff, so the development team can devote its time to build up your product;
    •Promote continual improvement;
    •Provide documentation required by FDA, FAA, other regulatory agencies and your customers;
    •Save money by discovering defects ‘early’ in the design process, before failures occur in production, or in the field;
    •Save the reputation of your company by discovering bugs and design flaws; before bugs and design flaws damage the reputation of your company.
  110. What is a Test/QA Team Lead?
    The Test/QA Team Lead coordinates the testing activity, communicates testing status to management and manages the test team.
  111. What testing roles are standard on most testing projects?
    Depending on the organization, the following roles are more or less standard on most testing projects: Testers, Test Engineers, Test/QA Team Lead, Test/QA Manager, System Administrator, Database Administrator, Technical Analyst, Test Build Manager and Test Configuration Manager. Depending on the project, one person may wear more than one hat. For instance, Test Engineers may also wear the hat of Technical Analyst, Test Build Manager and Test Configuration Manager.
  112. What is alpha testing?
    Alpha testing is testing of an application when development is nearing completion. Minor design changes can still be made as a result of alpha testing. Alpha testing is typically performed by end-users or others, not programmers, software engineers, or test engineers.
  113. What is acceptance testing?
    Acceptance testing is black box testing that gives the client/customer/project manager the opportunity to verify the system functionality and usability prior to the system being released to production. The acceptance test is the responsibility of the client/customer or project manager, however, it is conducted with the full support of the project team. The test team also works with the client/customer/project manager to develop the acceptance criteria.
  114. What is comparison testing?
    Comparison testing is testing that compares software weaknesses and strengths to those of competitors’ products.
  115. What is beta testing?
    Beta testing is testing an application when development and testing are essentially completed and final bugs and problems need to be found before the final release. Beta testing is typically performed by end-users or others, not programmers, software engineers, or test engineers.
  116. What is compatibility testing?
    Compatibility testing is testing how well software performs in a particular hardware, software, operating system, or network environment.
  117. What is recovery/error testing?
    Recovery/error testing is testing how well a system recovers from crashes, hardware failures, or other catastrophic problems.
  118. What is security/penetration testing?
    Security/penetration testing is testing how well the system is protected against unauthorized internal or external access, or willful damage. This type of testing usually requires sophisticated testing techniques.
  119. What is installation testing?
    Installation testing is the testing of a full, partial, or upgrade install/uninstall process. The installation test is conducted with the objective of demonstrating production readiness. This test includes the inventory of configuration items, performed by the application’s System Administration, the evaluation of data readiness, and dynamic tests focused on basic system functionality. Following installation testing, a sanity test is performed when necessary.
  120. What is load testing?
    Load testing is testing an application under heavy loads, such as the testing of a web site under a range of loads to determine at what point the system response time will degrade or fail.
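    A minimal sketch of the idea in Python, using a thread pool to simulate concurrent users; fake_request is a stand-in for a real call (e.g. an HTTP request) to the system under test:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def fake_request():
    """Stand-in for a real call; a real load test would hit the system under test."""
    time.sleep(0.01)      # simulate server work
    return 200

def run_load(concurrent_users, requests_per_user=5):
    """Fire requests from N simulated users and measure total wall-clock time."""
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=concurrent_users) as pool:
        futures = [pool.submit(fake_request)
                   for _ in range(concurrent_users * requests_per_user)]
        statuses = [f.result() for f in futures]
    elapsed = time.perf_counter() - start
    return elapsed, statuses

# Step the load up and watch how response time scales.
for users in (1, 5, 10):
    elapsed, statuses = run_load(users)
    print(f"{users:2d} users: {len(statuses)} requests in {elapsed:.2f}s")
```

    Real load tools additionally record per-request latency percentiles and error rates, which is how the degradation or failure point described above is located.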
  121. What is performance testing?
    Performance testing verifies loads, volumes and response times, as defined by requirements. Although performance testing is a part of system testing, it can be regarded as a distinct level of testing.
  122. What is sanity testing?
    Sanity testing is cursory testing; it is performed whenever cursory testing is sufficient to prove the application is functioning according to specifications. This level of testing is a subset of regression testing. It normally includes a set of core tests of basic GUI functionality to demonstrate connectivity to the database, application servers, printers, etc.
  123. What is end-to-end testing?
    End-to-end testing is similar to system testing, the *macro* end of the test scale; it is the testing a complete application in a situation that mimics real life use, such as interacting with a database, using network communication, or interacting with other hardware, application, or system.
  124. What is regression testing?
    The objective of regression testing is to ensure the software remains intact. A baseline set of data and scripts is maintained and executed to verify that changes introduced during the release have not “undone” any previous code. Expected results from the baseline are compared to results of the software under test. All discrepancies are highlighted and accounted for, before testing proceeds to the next level.
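    The baseline-comparison step can be sketched as follows (the feature names, expected outputs and the simulated regression are all invented for illustration):

```python
# Baseline of expected outputs recorded from a known-good build.
baseline = {"login": "OK", "search": "3 results", "checkout": "order placed"}

def run_feature(name):
    """Hypothetical hook that exercises a feature of the current build."""
    current_build_outputs = {"login": "OK", "search": "3 results",
                             "checkout": "payment error"}   # a regression!
    return current_build_outputs[name]

# Highlight every discrepancy between baseline and current results.
discrepancies = {name: (expected, run_feature(name))
                 for name, expected in baseline.items()
                 if run_feature(name) != expected}
print(discrepancies)   # {'checkout': ('order placed', 'payment error')}
```

    Each discrepancy must then be accounted for (a genuine defect, or an intentional behavior change that updates the baseline) before testing proceeds.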
  125. What is system testing?
    System testing is black box testing, performed by the Test Team, and at the start of the system testing the complete system is configured in a controlled environment. The purpose of system testing is to validate an application’s accuracy and completeness in performing the functions as designed. System testing simulates real life scenarios that occur in a “simulated real life” test environment and test all functions of the system that are required in real life. System testing is deemed complete when actual results and expected results are either in line or differences are explainable or acceptable, based on client input.
    Upon completion of integration testing, system testing is started. Before system testing, all unit and integration test results are reviewed by SWQA to ensure all problems have been resolved. For a higher level of testing it is important to understand unresolved problems that originate at unit and integration test levels.
  126. What is integration testing?
    Upon completion of unit testing, integration testing begins. Integration testing is black box testing. The purpose of integration testing is to ensure distinct components of the application still work in accordance to customer requirements. Test cases are developed with the express purpose of exercising the interfaces between the components. This activity is carried out by the test team. Integration testing is considered complete, when actual results and expected results are either in line or differences are explainable/acceptable based on client input.
  127. What is usability testing?
    Usability testing is testing for ‘user-friendliness’. Clearly this is subjective and depends on the targeted end-user or customer. User interviews, surveys, video recording of user sessions and other techniques can be used. Test engineers are needed, because programmers and developers are usually not appropriate as usability testers.
  128. What is functional testing?
    Functional testing is black-box type of testing geared to functional requirements of an application. Test engineers should perform functional testing.
  129. What is parallel/audit testing?
    Parallel/audit testing is testing where the user reconciles the output of the new system to the output of the current system to verify the new system performs the operations correctly.
  130. What is incremental integration testing?
    Incremental integration testing is continuous testing of an application as new functionality is recommended. This may require that various aspects of an application’s functionality are independent enough to work separately, before all parts of the program are completed, or that test drivers are developed as needed. This type of testing may be performed by programmers, software engineers, or test engineers.
  131. What is unit testing?
    Unit testing is the first level of dynamic testing and is first the responsibility of developers and then that of the test engineers. Unit testing is considered complete when the expected test results are met or differences are explainable/acceptable.
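    As a minimal illustration using Python's standard unittest module (word_count is a hypothetical unit under test, invented for the example):

```python
import unittest

def word_count(text):
    """Unit under test: count whitespace-separated words."""
    return len(text.split())

class WordCountTest(unittest.TestCase):
    def test_typical_sentence(self):
        self.assertEqual(word_count("quick brown fox"), 3)

    def test_empty_string(self):
        self.assertEqual(word_count(""), 0)

    def test_extra_whitespace(self):
        self.assertEqual(word_count("  spaced   out  "), 2)

# Run the unit tests and report expected vs. actual outcomes.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(WordCountTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print("tests run:", result.testsRun, "failures:", len(result.failures))
```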
  132. What is white box testing?
    White box testing is based on knowledge of the internal logic of an application’s code. Tests are based on coverage of code statements, branches, paths and conditions.
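    A small sketch of branch coverage: the classify_triangle function below (invented for the example) has four branches, and one white-box test case is chosen to exercise each of them:

```python
def classify_triangle(a, b, c):
    """Unit under test, with four distinct branches."""
    if a + b <= c or b + c <= a or a + c <= b:
        return "not a triangle"
    if a == b == c:
        return "equilateral"
    if a == b or b == c or a == c:
        return "isosceles"
    return "scalene"

# White-box test cases: one input per branch of the code above.
branch_cases = {
    (1, 2, 10): "not a triangle",   # degenerate-sides branch
    (3, 3, 3):  "equilateral",      # all-equal branch
    (3, 3, 5):  "isosceles",        # two-equal branch
    (3, 4, 5):  "scalene",          # fall-through branch
}
for sides, expected in branch_cases.items():
    assert classify_triangle(*sides) == expected
print("every branch exercised")
```

    Black box testing, by contrast, would derive its cases from the triangle requirements alone, without reading the code to find these branches.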
  133. What is black box testing?
    Black box testing is functional testing, not based on any knowledge of internal software design or code. Black box testing is based on requirements and functionality.
  134. What is a good code?
    A good code is code that works, is free of bugs and is readable and maintainable. Organizations usually have coding standards all developers should adhere to, but every programmer and software engineer has different ideas about what is best and what are too many or too few rules. We need to keep in mind that excessive use of rules can stifle both productivity and creativity. Peer reviews and code analysis tools can be used to check for problems and enforce standards.
  135. What is an inspection?
    An inspection is a formal meeting, more formalized than a walk-through and typically consists of 3-10 people including a moderator, reader (the author of whatever is being reviewed) and a recorder (to make notes in the document). The subject of the inspection is typically a document, such as a requirements document or a test plan. The purpose of an inspection is to find problems and see what is missing, not to fix anything. The result of the meeting should be documented in a written report. Attendees should prepare for this type of meeting by reading through the document, before the meeting starts; most problems are found during this preparation. Preparation for inspections is difficult, but is one of the most cost-effective methods of ensuring quality, since bug prevention is more cost effective than bug detection.
  136. What is validation?
    Validation ensures that functionality, as defined in requirements, is the intended behavior of the product; validation typically involves actual testing and takes place after verifications are completed.
  137. What is verification?
    Verification ensures the product is designed to deliver all functionality to the customer; it typically involves reviews and meetings to evaluate documents, plans, code, requirements and specifications; this can be done with checklists, issues lists, walkthroughs and inspection meetings.
  138. How is testing affected by object-oriented designs?
    Well-engineered object-oriented design can make it easier to trace from code to internal design to functional design to requirements.
    While there will be little effect on black box testing (where an understanding of the internal design of the application is unnecessary), white-box testing can be oriented to the application’s objects. If the application was well designed, this can simplify test design.
  139. How can it be known when to stop testing?
    This can be difficult to determine. Many modern software applications are so complex, and run in such an interdependent environment, that complete testing can never be done. Common factors in deciding when to stop are:
    • Deadlines (release deadlines, testing deadlines, etc.)
    • Test cases completed with certain percentage passed
    • Test budget depleted
    • Coverage of code/functionality/requirements reaches a specified point
    • Bug rate falls below a certain level
    • Beta or alpha testing period ends
  140. What is ‘configuration management’?
    Configuration management covers the processes used to control, coordinate, and track: code, requirements, documentation, problems, change requests, designs, tools/compilers/libraries/patches, changes made to them, and who makes the changes.
  141. What’s a ‘test case’?
    • A test case is a document that describes an input, action, or event and an expected response, to determine if a feature of an application is working correctly.
    • A test case should contain particulars such as test case identifier, test case name, objective, test conditions/setup, input data requirements, steps, and expected results.
    • Note that the process of developing test cases can help find problems in the requirements or design of an application, since it requires completely thinking through the operation of the application. For this reason, it’s useful to prepare test cases early in the development cycle if possible.
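    The particulars listed above can be captured in a simple record; a sketch in Python (all field names and values are hypothetical, for illustration only):

```python
from dataclasses import dataclass

@dataclass
class TestCase:
    """Minimal record of the particulars a test case should carry."""
    identifier: str
    name: str
    objective: str
    setup: str
    input_data: dict
    steps: list
    expected_result: str

tc = TestCase(
    identifier="TC-042",
    name="Login with valid credentials",
    objective="Verify a registered user can sign in",
    setup="User 'alice' exists with password 'secret'",   # invented test data
    input_data={"username": "alice", "password": "secret"},
    steps=["Open login page", "Enter credentials", "Click Sign In"],
    expected_result="User lands on the dashboard",
)
print(tc.identifier, "-", tc.name)
```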
  142. What’s the role of documentation in QA?
    • Critical. (Note that documentation can be electronic, not necessarily paper.)
    • QA practices should be documented such that they are repeatable.
    • Specifications, designs, business rules, inspection reports, configurations, code changes, test plans, test cases, bug reports, user manuals, etc. should all be documented.
  143. What makes a good software QA engineer?
    • The same qualities a good tester has are useful for a QA engineer.
    • Additionally, they must be able to understand the entire software development process and how it can fit into the business approach and goals of the organization.
    • Communication skills and the ability to understand various sides of issues are important.
    • In organizations in the early stages of implementing QA processes, patience and diplomacy are especially needed.
    • An ability to find problems as well as to see ‘what’s missing’ is important for inspections and reviews.
    • There should ideally be a system for easily finding and obtaining documents and determining what documentation will have a particular piece of information.
    • Change management for documentation should be used if possible.
  144. What makes a good test engineer?
    • A good test engineer has a ‘test to break’ attitude, an ability to take the point of view of the customer, a strong desire for quality, and an attention to detail.
    • Tact and diplomacy are useful in maintaining a cooperative relationship with developers, and an ability to communicate with both technical (developers) and non-technical (customers, management) people is useful.
    • Previous software development experience can be helpful as it provides a deeper understanding of the software development process, gives the tester an appreciation for the developers’ point of view, and reduces the learning curve in automated test tool programming.
    • Judgement skills are needed to assess high-risk areas of an application on which to focus testing efforts when time is limited.
  145. How do you view the contents of the GUI map?
    The GUI Map Editor displays the content of a GUI map. We can invoke the GUI Map Editor from the Tools menu in WinRunner. The GUI Map Editor displays the various GUI map files created, and the windows and objects learned into them, with their logical names and physical descriptions.
  146. What is the difference between the GUI map and GUI map files?
    The GUI map is actually the sum of one or more GUI map files. There are two modes for organizing GUI map files. Global GUI Map file: a single GUI Map file for the entire application. GUI Map File per Test: WinRunner automatically creates a GUI Map file for each test created.
    A GUI map file is a file that contains the windows and objects learned by WinRunner, with their logical names and physical descriptions.
  147. If the object does not have a name then what will be the logical name?
    If the object does not have a name then the logical name could be the attached text.
  148. What are the reasons that WinRunner fails to identify an object on the GUI?
    WinRunner may fail to identify an object in a GUI for various reasons: the object may not be a standard Windows object, or the browser used may not be compatible with the WinRunner version, in which case the GUI Map Editor will not be able to learn any of the objects displayed in the browser window.
  149. What is the use of Test Director software?
    TestDirector is Mercury Interactive’s software test management tool. It helps quality assurance personnel plan and organize the testing process. With TestDirector you can create a database of manual and automated tests, build test cycles, run tests, and report and track defects. You can also create reports and graphs to help review the progress of planning tests, running tests, and tracking defects before a software release.
  150. Explain the WinRunner testing process?
    The WinRunner testing process involves six main stages:
    1. Create GUI Map File: so that WinRunner can recognize the GUI objects in the application being tested.
    2. Create Test Scripts: by recording, programming, or a combination of both. While recording tests, insert checkpoints where you want to check the response of the application being tested.
    3. Debug Tests: run tests in Debug mode to make sure they run smoothly.
    4. Run Tests: run tests in Verify mode to test your application.
    5. View Results: determine the success or failure of the tests.
    6. Report Defects: if a test run fails due to a defect in the application being tested, you can report information about the defect directly from the Test Results window.

Software testing interview guidelines

Software Testing interview Questions
Software testing is a high-tech and high-profile job. Testers are recruited to evaluate the quality of specific computer software. Typically, quality refers to many standards, including accuracy, comprehensiveness, security, capability, trustworthiness, effectiveness, portability, maintainability, compatibility and usability. A software testing team may comprise different professionals, such as testers, developers, business analysts, customers, information service management, test managers, senior organization management, and a quality team.

Questions on Test Automation

In this section, the interviewer may ask questions about automation, such as your familiarity with specific automation tools, how you would apply automation tools in a software testing job, problems encountered while using them, whether test automation can improve test effectiveness, whether it can substitute for manual testing, how to select tools during testing, how to evaluate tools, etc.

Questions on Load Testing

This section primarily focuses on your in-depth knowledge of the software testing job. The questions may cover the criteria used to select web transactions for load testing, the purpose of creating virtual users, why verification checks are recommended in all scenarios, when to parameterize a text verification check, the necessity of parameterizing fields in a virtual user script, etc.

General questions

This is indeed a vast area. While assessing an applicant’s ability, the interviewer may inquire about past experience, clients, projects encountered, major challenges, etc. The questions may touch on various areas, including the STLC, the bug life cycle, the documentation required for testing, major defects in the Bugzilla defect tracking tool, positive and negative testing, standard, non-standard and customized keys, the need for synchronization in QuickTest, the Step Generator in QTP, the reason for using Object Spy in QTP, the difference between test effort and test procedure, etc. The interviewer may also pose critical-thinking questions; for example, the candidate may be asked to explain what could happen if a project is released without testing.

Software Requirements Management

Software Requirements Management

Between 40% and 60% of software failures and defects are the result of poor software management and requirements definition. In plain English, this means that about half of the problems encountered could have been avoided by making it clear, from the very beginning, what the customer expected from the project. Often the programming was fine and the developers did their job well – only it was a different job from the one they were supposed to do.

The definition of a successful program is that it is 100% compliant with its initial requirements. But if those requirements contain mistakes, or are unclear or poorly defined, then there is very little one can do to correct the problem later in the process. So, a bit of advance planning simply doubles the success chances of any software project.

The persons in charge of writing the requirements should be the project managers and the software engineering team, along with all the stakeholders, the clients and the end-users. Writing good requirements takes time and practice, and, even with all the new tools designed to help you, it will not happen overnight. You need a good, clear, organized mind, good programming knowledge (because you’ll need to know exactly what your team of developers can do, and you need to make sure that you speak the same language with them) and, to a certain extent, good people skills. You will need to get in touch with the clients during this period, and find out exactly what they want and how they want it. Some of them are not capable of explaining what they need, others don’t have the time to meet you and look over the drafts, others think they know better and give you all the wrong ideas, and others will simply be very happy to approve your specifications without a second look. You need to persuade all of them of the importance of this step, hold long, boring meetings, and then “translate” their needs for the programmers and developers. If customers or end-users are not available, despite your best efforts, you can use “surrogates” to simulate the actual users.

Make sure you remain in close contact with the client for the entire duration of the process. Their needs may change, or they may find out about something they forgot to tell you in the beginning – so inform them that you will always be available to meet with them and look at all the options again.

Also, the quality testing department needs to be informed about the requirements from the very beginning, because they will design their tests accordingly, and also they may have some details about what can go wrong in some cases.

One of the biggest issues is the time you have available for writing the requirements. Sometimes, when the deadline is very tight, the developers may start working before you completed the requirements, and this can cause a lot of problems later on.

The process of requirement management ends when the final product is shipped, and the customer is fully satisfied by it. However, the fewer modifications your requirements will suffer, the better for everybody. You should be able to trace your requirements all the time, and we’ll have a look, later on, at the tools that enable you to do this.

In some cases, when a client comes up with an additional issue, it may be too late to change a requirement or add a new one – the workload and the costs are simply too big to make it worth it. This remains subject to negotiation between you and the client – but your task is to know exactly what the effect of implementing new requirements would be, and to translate it into the language of the client (the client may not be receptive when he sees how many lines of code need to be changed, but he may understand when you tell him how much it will cost).

Tracing requirements also involves additional tests, performed from time to time, to ensure that the process runs smoothly and errors are identified and corrected early on. When faced with a big project, you may have different sets of requirements, some that apply to the entire project and some for parts of it. When a certain design is implemented for a certain requirement, make a note about the effects and the alternatives – it may be useful for future projects (or even for the same project, if the client is not satisfied with the result).

So far, we’ve seen what software requirements are. In the following sections, we’ll offer some tips and tools for writing good software requirements. If this section is your responsibility, the wisest thing you can do is to get the IEEE Software Engineering Standards Collection. At 400 dollars, it may be somewhat expensive, but it will give you a lot of useful details about terminology, processes, quality assurance and the user documentation needed. Also, the standards are conveniently given for each separate unit of the process – the specific part about software requirements specification is IEEE Std 830-1998, which describes the content of good requirements specifications and provides some useful samples. The guide is designed for in-house software, but it can be used for commercial software as well, with minor changes. Another useful reference is “Standards, Guidelines and Examples of System and Software Requirements Engineering” by M. Dorfman and R. Thayer, a compilation covering all the major standards (IEEE, ANSI, NASA and US Military). These are all flexible instruments, and should be used as such.

Software Testing FAQ Part1

Q: What is software quality assurance?

Software Quality Assurance (SWQA) is oriented to *prevention*. It involves the entire software development process. Prevention means monitoring and improving the process, making sure any agreed-upon standards and procedures are followed, and ensuring problems are found and dealt with. Software testing, when performed by Rob Davis, is oriented to *detection*. Testing involves the operation of a system or application under controlled conditions and evaluating the results. Organizations vary considerably in how they assign responsibility for QA and testing. Sometimes they are the combined responsibility of one group or individual. Also common are project teams, which include a mix of test engineers, testers and developers who work closely together, with overall QA processes monitored by project managers. It depends on what best fits your organization’s size and business structure. Rob Davis can provide QA and/or SWQA. This document details some aspects of how he can provide software testing/QA services.

Q: What is quality assurance?

Quality Assurance ensures all parties concerned with the project adhere to the process and procedures, standards and templates and test readiness reviews. Rob Davis’ QA service depends on the customers and projects. A lot will depend on team leads or managers, feedback to developers and communications among customers, managers, developers’ test engineers and testers.

Q: Processes and procedures – why follow them?

Detailed and well-written processes and procedures ensure the correct steps are being executed to facilitate a successful completion of a task. They also ensure a process is repeatable. Once Rob Davis has learned and reviewed customer’s business processes and procedures, he will follow them. He will also recommend improvements and/or additions.

Q: Standards and templates – what is supposed to be in a document?

All documents should be written to a certain standard and template. Standards and templates maintain document uniformity. It also helps in learning where information is located, making it easier for a user to find what they want. Lastly, with standards and templates, information will not be accidentally omitted from a document. Once Rob Davis has learned and reviewed your standards and templates, he will use them. He will also recommend improvements and/or additions.

Q: What are the different levels of testing?

Rob Davis has expertise in testing at all the testing levels listed in these FAQs. At each test level, he documents the results. Each level of testing is considered either black box or white box testing.

Q: What is black box testing?

Black box testing is functional testing, not based on any knowledge of internal software design or code. Black box testing is based on requirements and functionality.
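A minimal sketch of the idea: the function below is exercised only through its inputs and expected outputs, with no reference to how it is implemented. The function and the cases are hypothetical examples, not from any particular project.

```python
# Black box testing: the "system under test" is exercised only
# through its public interface; its internals are irrelevant.
def is_leap_year(year):
    """System under test (its implementation is a black box to the tester)."""
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

# Requirement-driven (input, expected output) pairs.
black_box_cases = [
    (2000, True),   # divisible by 400 -> leap year
    (1900, False),  # divisible by 100 but not 400 -> not a leap year
    (2024, True),   # divisible by 4 -> leap year
    (2023, False),  # not divisible by 4 -> not a leap year
]

def run_black_box():
    """Return True if every case produces its expected output."""
    return all(is_leap_year(y) == expected for y, expected in black_box_cases)
```

Note that the cases come straight from the requirement ("a leap year is divisible by 4, except century years not divisible by 400"), not from reading the code.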

Q: What is white box testing?

White box testing is based on knowledge of the internal logic of an application’s code. Tests are based on coverage of code statements, branches, paths and conditions.
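By contrast, a white box test suite is derived from the code's structure. In this hypothetical sketch, one input is chosen per branch so that every branch of the function executes:

```python
# White box testing: test cases are chosen from knowledge of the
# internal logic, here to cover every branch of the function.
def classify(n):
    if n < 0:
        return "negative"   # branch 1
    elif n == 0:
        return "zero"       # branch 2
    else:
        return "positive"   # branch 3

# One input per branch achieves full branch coverage of classify().
branch_cases = [(-5, "negative"), (0, "zero"), (7, "positive")]
branch_coverage_ok = all(classify(n) == out for n, out in branch_cases)
```

Coverage tools automate this bookkeeping on real code, but the principle is the same: the tests are shaped by the statements, branches and paths inside the implementation.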

Q: What is unit testing?

Unit testing is the first level of dynamic testing and is first the responsibility of developers and then that of the test engineers. Unit testing is considered complete when the expected test results are met or differences are explainable/acceptable.
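A minimal unit test, sketched here with Python's standard `unittest` framework (the `add` function is a hypothetical unit under test):

```python
import unittest

def add(a, b):
    """A trivial unit under test."""
    return a + b

class TestAdd(unittest.TestCase):
    def test_positive(self):
        self.assertEqual(add(2, 3), 5)

    def test_negative(self):
        self.assertEqual(add(-2, -3), -5)

# Run the suite programmatically; in practice you would usually
# invoke it from the command line or a CI pipeline instead.
result = unittest.TextTestRunner(verbosity=0).run(
    unittest.defaultTestLoader.loadTestsFromTestCase(TestAdd)
)
```

The same pattern scales up: each unit (function, class, module) gets its own small, fast, isolated tests written alongside the code.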

Q: What is parallel/audit testing?

Parallel/audit testing is testing where the user reconciles the output of the new system to the output of the current system to verify the new system performs the operations correctly.
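The reconciliation can be sketched as running the same inputs through both systems and listing any disagreements. Both "systems" here are hypothetical stand-ins for a legacy and a new implementation:

```python
# Parallel/audit testing sketch: feed the same inputs to the current
# ("legacy") system and the new system, then reconcile the outputs.
def legacy_tax(amount):
    return amount * 0.2          # stand-in for the current system

def new_tax(amount):
    return amount * 20 / 100     # stand-in for the rewritten system

def reconcile(inputs):
    """Return the inputs on which the two systems disagree."""
    tolerance = 1e-9             # allow for floating-point noise
    return [x for x in inputs
            if abs(legacy_tax(x) - new_tax(x)) > tolerance]

mismatches = reconcile([0, 10, 99.5, 1234])
```

An empty mismatch list is the evidence the user is looking for: the new system reproduces the old system's behavior on the audited inputs.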

Q: What is functional testing?

Functional testing is a black-box type of testing geared to the functional requirements of an application. Test engineers should perform functional testing.

Q: What is usability testing?

Usability testing is testing for ‘user-friendliness’. Clearly this is subjective and depends on the targeted end-user or customer. User interviews, surveys, video recording of user sessions and other techniques can be used. Test engineers are needed, because programmers and developers are usually not appropriate as usability testers.

Q: What is incremental integration testing?

Incremental integration testing is continuous testing of an application as new functionality is added. This may require that various aspects of an application’s functionality are independent enough to work separately before all parts of the program are completed, or that test drivers are developed as needed. This type of testing may be performed by programmers, software engineers, or test engineers.

Q: What is integration testing?

Upon completion of unit testing, integration testing begins. Integration testing is black box testing. The purpose of integration testing is to ensure distinct components of the application still work in accordance with customer requirements. Test cases are developed with the express purpose of exercising the interfaces between the components. This activity is carried out by the test team. Integration testing is considered complete when actual results and expected results are either in line or differences are explainable/acceptable based on client input.

Q: What is system testing?

System testing is black box testing performed by the Test Team; at the start of system testing the complete system is configured in a controlled environment. The purpose of system testing is to validate an application’s accuracy and completeness in performing the functions as designed. System testing simulates real-life scenarios in a “simulated real life” test environment and tests all functions of the system that are required in real life. System testing is deemed complete when actual results and expected results are either in line or differences are explainable or acceptable, based on client input. System testing starts upon completion of integration testing. Before system testing, all unit and integration test results are reviewed by SWQA to ensure all problems have been resolved. For a higher level of testing it is important to understand any unresolved problems that originate at the unit and integration test levels.

Q: What is end-to-end testing?

End-to-end testing is similar to system testing: the *macro* end of the test scale. It is the testing of a complete application in a situation that mimics real-life use, such as interacting with a database, using network communications, or interacting with other hardware, applications, or systems.

Q: What is regression testing?

The objective of regression testing is to ensure the software remains intact. A baseline set of data and scripts is maintained and executed to verify that changes introduced during the release have not “undone” any previous code. Expected results from the baseline are compared to results of the software under test. All discrepancies are highlighted and accounted for, before testing proceeds to the next level.
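The baseline-comparison idea can be sketched as follows; the function and the baseline data are hypothetical, standing in for a real application and its previously accepted outputs:

```python
# Regression testing sketch: compare current outputs against a stored
# baseline to confirm a change has not "undone" previous behavior.
def discount(price):
    """Function under test: apply a 10% discount."""
    return round(price * 0.9, 2)

# Baseline outputs captured from a previous, accepted release
# (hypothetical data; in practice this lives in files or a database).
baseline = {100: 90.0, 19.99: 17.99, 0: 0.0}

def regression_discrepancies():
    """Return the inputs whose current output no longer matches the baseline."""
    return [p for p, expected in baseline.items() if discount(p) != expected]
```

Any non-empty result is a discrepancy to be highlighted and accounted for before testing proceeds, exactly as described above.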

Q: What is sanity testing?

Sanity testing is cursory testing; it is performed whenever cursory testing is sufficient to prove the application is functioning according to specifications. This level of testing is a subset of regression testing. It normally includes a set of core tests of basic GUI functionality to demonstrate connectivity to the database, application servers, printers, etc.

Q: What is performance testing?

Performance testing verifies loads, volumes and response times, as defined by requirements. Although performance testing is a part of system testing, it can be regarded as a distinct level of testing.

Q: What is load testing?

Load testing is testing an application under heavy loads, such as the testing of a web site under a range of loads to determine at what point the system response time will degrade or fail.
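A toy sketch of the measurement loop: call an operation at increasing load levels and record the mean response time. Real load tests use tools such as LoadRunner or JMeter driving many concurrent virtual users; this serial loop (with a made-up `handle_request` standing in for the system) only illustrates the degradation curve being measured.

```python
import time

def handle_request(payload_size):
    """Hypothetical stand-in for the operation under load."""
    return sum(range(payload_size))

def measure(load_levels):
    """Return mean response time per request at each load level."""
    timings = {}
    for load in load_levels:
        start = time.perf_counter()
        for _ in range(load):
            handle_request(10_000)
        elapsed = time.perf_counter() - start
        timings[load] = elapsed / load
    return timings

timings = measure([10, 100])
```

Plotting mean response time against load level reveals the point at which the system starts to degrade or fail, which is the question load testing sets out to answer.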

Q: What is installation testing?

Installation testing is the testing of a full, partial, or upgrade install/uninstall process. The installation test is conducted with the objective of demonstrating production readiness. This test includes the inventory of configuration items, performed by the application’s System Administration, the evaluation of data readiness, and dynamic tests focused on basic system functionality. Following installation testing, a sanity test is performed when necessary.

Q: What is security/penetration testing?

Security/penetration testing is testing how well the system is protected against unauthorized internal or external access, or willful damage. This type of testing usually requires sophisticated testing techniques.

Q: What is recovery/error testing?

Recovery/error testing is testing how well a system recovers from crashes, hardware failures, or other catastrophic problems.

Q: What is compatibility testing?

Compatibility testing is testing how well software performs in a particular hardware, software, operating system, or network environment.

Q: What is comparison testing?

Comparison testing is testing that compares software weaknesses and strengths to those of competitors’ products.

Q: What is acceptance testing?

Acceptance testing is black box testing that gives the client/customer/project manager the opportunity to verify the system functionality and usability prior to the system being released to production. The acceptance test is the responsibility of the client/customer or project manager, however, it is conducted with the full support of the project team. The test team also works with the client/customer/project manager to develop the acceptance criteria.

Q: What is alpha testing?

Alpha testing is testing of an application when development is nearing completion. Minor design changes can still be made as a result of alpha testing. Alpha testing is typically performed by end-users or others, not programmers, software engineers, or test engineers.

Q: What is beta testing?

Beta testing is testing an application when development and testing are essentially completed and final bugs and problems need to be found before the final release. Beta testing is typically performed by end-users or others, not programmers, software engineers, or test engineers.

Q: What testing roles are standard on most testing projects?

Depending on the organization, the following roles are more or less standard on most testing projects: Testers, Test Engineers, Test/QA Team Lead, Test/QA Manager, System Administrator, Database Administrator, Technical Analyst, Test Build Manager and Test Configuration Manager. Depending on the project, one person may wear more than one hat. For instance, Test Engineers may also wear the hat of Technical Analyst, Test Build Manager and Test Configuration Manager.

Q: What is a Test/QA Team Lead?

The Test/QA Team Lead coordinates the testing activity, communicates testing status to management and manages the test team.

Q: What is a Test Engineer?

A Test Engineer is an engineer who specializes in testing. Test engineers create test cases, procedures and scripts, and generate data. They execute test procedures and scripts, analyze standards of measurement, and evaluate results of system/integration/regression testing. They also:
  • Speed up the work of your development staff;
  • Reduce your risk of legal liability;
  • Give you the evidence that your software is correct and operates properly;
  • Improve problem tracking and reporting;
  • Maximize the value of your software and of the devices that use it;
  • Assure the successful launch of your product by discovering bugs and design flaws before users get discouraged, before shareholders lose their cool and before employees get bogged down;
  • Help the work of your development staff, so the development team can devote its time to building up your product;
  • Promote continual improvement;
  • Provide documentation required by the FDA, the FAA, other regulatory agencies and your customers;
  • Save money by discovering defects early in the design process, before failures occur in production or in the field;
  • Save the reputation of your company by discovering bugs and design flaws before they damage that reputation.

Q: What is a Test Build Manager?

Test Build Managers deliver current software versions to the test environment, install the application’s software and apply software patches, to both the application and the operating system, set-up, maintain and back up test environment hardware. Depending on the project, one person may wear more than one hat. For instance, a Test Engineer may also wear the hat of a Test Build Manager.

Q: What is a System Administrator?

Test Build Managers, System Administrators, Database Administrators deliver current software versions to the test environment, install the application’s software and apply software patches, to both the application and the operating system, set-up, maintain and back up test environment hardware. Depending on the project, one person may wear more than one hat. For instance, a Test Engineer may also wear the hat of a System Administrator.

Q: What is a Database Administrator?

Database Administrators, Test Build Managers, and System Administrators deliver current software versions to the test environment, install the application’s software and apply software patches, to both the application and the operating system, set-up, maintain and back up test environment hardware. Depending on the project, one person may wear more than one hat. For instance, a Test Engineer may also wear the hat of a Database Administrator.

Q: What is a Technical Analyst?

Technical Analysts perform test assessments and validate system/functional test requirements. Depending on the project, one person may wear more than one hat. For instance, Test Engineers may also wear the hat of a Technical Analyst.

Q: What is a Test Configuration Manager?

Test Configuration Managers maintain test environments, scripts, software and test data. Depending on the project, one person may wear more than one hat. For instance, Test Engineers may also wear the hat of a Test Configuration Manager.

Q: What is a test schedule?

The test schedule identifies all tasks required for a successful testing effort: a schedule of all test activities and resource requirements.

Q: What is software testing methodology?

One software testing methodology is a three-step process:
  1. Creating a test strategy;
  2. Creating a test plan/design; and
  3. Executing tests.
This methodology can be used and molded to your organization’s needs. Rob Davis believes that using this methodology is important in the development and ongoing maintenance of his customers’ applications.

Q: What is the general testing process?

The general testing process is the creation of a test strategy (which sometimes includes the creation of test cases), creation of a test plan/design (which usually includes test cases and test procedures) and the execution of tests.

Q: How do you create a test strategy?

The test strategy is a formal description of how a software product will be tested. A test strategy is developed for all levels of testing, as required. The test team analyzes the requirements, writes the test strategy and reviews the plan with the project team. The test plan may include test cases, conditions, the test environment, a list of related tasks, pass/fail criteria and risk assessment.
Inputs for this process:
  • A description of the required hardware and software components, including test tools. This information comes from the test environment, including test tool data.
  • A description of the roles and responsibilities of the resources required for the test, and schedule constraints. This information comes from man-hours and schedules.
  • Testing methodology. This is based on known standards.
  • Functional and technical requirements of the application. This information comes from requirements, change requests, and technical and functional design documents.
  • Requirements that the system cannot provide, e.g. system limitations.
Outputs for this process:
  • An approved and signed-off test strategy document and test plan, including test cases.
  • Testing issues requiring resolution. Usually this requires additional negotiation at the project management level.

Q: How do you create a test plan/design?

Test scenarios and/or cases are prepared by reviewing functional requirements of the release and preparing logical groups of functions that can be further broken into test procedures. Test procedures define test conditions, data to be used for testing and expected results, including database updates, file outputs and report results. Generally speaking:
  • Test cases and scenarios are designed to represent both typical and unusual situations that may occur in the application.
  • Test engineers define unit test requirements and unit test cases. Test engineers also execute unit test cases.
  • It is the test team who, with the assistance of developers and clients, develops test cases and scenarios for integration and system testing.
  • Test scenarios are executed through the use of test procedures or scripts. Test procedures or scripts define a series of steps necessary to perform one or more test scenarios. They include the specific data that will be used for testing the process or transaction, and may cover multiple test scenarios.
  • Test scripts are mapped back to the requirements, and traceability matrices are used to ensure each test is within scope.
  • Test data is captured and baselined prior to testing. This data serves as the foundation for unit and system testing and is used to exercise system functionality in a controlled environment. Some output data is also baselined for future comparison. Baselined data is used to support future application maintenance via regression testing.
  • A pre-test meeting is held to assess the readiness of the application, the environment and the data to be tested. A test readiness document is created to indicate the status of the entrance criteria of the release.
Inputs for this process:
  • Approved test strategy document.
  • Test tools, or automated test tools, if applicable.
  • Previously developed scripts, if applicable.
  • Test documentation problems uncovered as a result of testing.
  • A good understanding of software complexity and module path coverage, derived from general and detailed design documents, e.g. the software design document, source code and software complexity data.
Outputs for this process:
  • Approved documents of test scenarios, test cases, test conditions and test data.
  • Reports of software design issues, given to software developers for correction.

Q: How do you execute tests?

Execution of tests is completed by following the test documents in a methodical manner. As each test procedure is performed, an entry is recorded in a test execution log to note the execution of the procedure and whether or not the test procedure uncovered any defects. Checkpoint meetings are held throughout the execution phase – daily, if required – to address and discuss testing issues, status and activities.
The output from the execution of test procedures is known as test results. Test results are evaluated by test engineers to determine whether the expected results have been obtained. All discrepancies/anomalies are logged, discussed with the software team lead, hardware test lead, programmers and software engineers, and documented for further investigation and resolution. Every company has a different process for logging and reporting bugs/defects uncovered during testing.
A pass/fail criterion is used to determine the severity of a problem, and results are recorded in a test summary report. The severity of a problem found during system testing is defined in accordance with the customer’s risk assessment and recorded in their selected tracking tool. Proposed fixes are delivered to the testing environment based on the severity of the problem. Fixes are regression tested, and flawless fixes are migrated to a new baseline. Following completion of the test, members of the test team prepare a summary report. The summary report is reviewed by the Project Manager, Software QA (SWQA) Manager and/or Test Team Lead.
After a particular level of testing has been certified, it is the responsibility of the Configuration Manager to coordinate the migration of the release software components to the next test level, as documented in the Configuration Management Plan. The software is only migrated to the production environment after the Project Manager’s formal acceptance. The test team reviews test document problems identified during testing and updates documents where appropriate.
Inputs for this process:
  • Approved test documents, e.g. test plan, test cases, test procedures.
  • Test tools, including automated test tools, if applicable.
  • Developed scripts.
  • Changes to the design, i.e. change request documents.
  • Test data.
  • Availability of the test team and project team.
  • General and detailed design documents, i.e. requirements document, software design document.
  • Software that has been migrated to the test environment, i.e. unit-tested code, via the Configuration/Build Manager.
  • Test readiness document.
  • Document updates.
Outputs for this process:
  • Log and summary of the test results. Usually this is part of the test report. This needs to be approved and signed off, with revised testing deliverables.
  • Changes to the code, also known as test fixes.
  • Test document problems uncovered as a result of testing, e.g. requirements document and design document problems.
  • Reports on software design issues, given to software developers for correction, e.g. bug reports on code issues.
  • Formal record of test incidents, usually part of problem tracking.
  • Baselined package, also known as tested source and object code, ready for migration to the next level.