Software Testing FAQ

What are 5 common problems in the software development process?

  • poor requirements – if requirements are unclear, incomplete, too general, or not testable, there will be problems.
  • unrealistic schedule – if too much work is crammed in too little time, problems are inevitable.
  • inadequate testing – no one will know whether or not the program is any good until the customer complains or systems crash.
  • featuritis – requests to pile on new features after development is underway; extremely common.
  • miscommunication – if developers don’t know what’s needed or customers have erroneous expectations, problems can be expected.

What are 5 common solutions to software development problems?

  • solid requirements – clear, complete, detailed, cohesive, attainable, testable requirements that are agreed to by all players. In ‘agile’-type environments, continuous close coordination with customers/end-users is necessary to ensure that changing/emerging requirements are understood.
  • realistic schedules – allow adequate time for planning, design, testing, bug fixing, re-testing, changes, and documentation; personnel should be able to complete the project without burning out.
  • adequate testing – start testing early on, re-test after fixes or changes, plan for adequate time for testing and bug-fixing. ‘Early’ testing could include static code analysis/testing, test-first development, unit testing by developers, built-in testing and diagnostic capabilities, automated post-build testing, etc.
  • stick to initial requirements where feasible – be prepared to defend against excessive changes and additions once development has begun, and be prepared to explain consequences. If changes are necessary, they should be adequately reflected in related schedule changes. If possible, work closely with customers/end-users to manage expectations. In ‘agile’-type environments, initial requirements may be expected to change significantly, requiring that true agile processes be in place.
  • communication – require walkthroughs and inspections when appropriate; make extensive use of group communication tools – groupware, wikis, bug-tracking tools and change management tools, intranet capabilities, etc.; ensure that information/documentation is available and up-to-date – preferably electronic, not paper; promote teamwork and cooperation; use prototypes and/or continuous communication with end-users if possible to clarify expectations.

What are some recent major computer system failures caused by software bugs?

  • News reports in December of 2007 indicated that significant software problems were continuing to occur in a new ERP payroll system for a large urban school system. It was believed that more than one third of employees had received incorrect paychecks at various times since the new system went live in January of that year, resulting in overpayments of $53 million, as well as underpayments. An employees’ union brought a lawsuit against the school system, the cost of the ERP system was expected to rise by 40%, and the non-payroll part of the ERP system was delayed. Inadequate testing reportedly contributed to the problems.
  • In November of 2007 a regional government reportedly brought a multi-million dollar lawsuit against a software services vendor, claiming that the vendor ‘minimized quality’ in delivering software for a large criminal justice information system and the system did not meet requirements. The vendor also sued its subcontractor on the project.
  • In June of 2007 news reports claimed that software flaws in a popular online stock-picking contest could be used to gain an unfair advantage in pursuit of the game’s large cash prizes. Outside investigators were called in and in July the contest winner was announced. Reportedly the winner had previously been in 6th place, indicating that the top 5 contestants may have been disqualified.
  • A software problem contributed to a rail car fire in a major underground metro system in April of 2007 according to newspaper accounts. The software reportedly failed to perform as expected in detecting and preventing excess power usage in equipment on a new passenger rail car, resulting in overheating and fire in the rail car, and evacuation and shutdown of part of the system.
  • Tens of thousands of medical devices were recalled in March of 2007 to correct a software bug. According to news reports, the software would not reliably indicate when available power to the device was too low.
  • A September 2006 news report indicated problems with software utilized in a state government’s primary election, resulting in periodic unexpected rebooting of voter checkin machines, which were separate from the electronic voting machines, and resulted in confusion and delays at voting sites. The problem was reportedly due to insufficient testing.
  • In August of 2006 a U.S. government student loan service erroneously made public the personal data of as many as 21,000 borrowers on its web site, due to a software error. The bug was fixed and the government department subsequently offered to arrange for free credit monitoring services for those affected.
  • A software error reportedly resulted in overbilling of up to several thousand dollars to each of 11,000 customers of a major telecommunications company in June of 2006. It was reported that the software bug was fixed within days, but that correcting the billing errors would take much longer.
  • News reports in May of 2006 described a multi-million dollar lawsuit settlement paid by a healthcare software vendor to one of its customers. It was reported that the customer claimed there were problems with the software they had contracted for, including poor integration of software modules, and problems that resulted in missing or incorrect data used by medical personnel.
  • In early 2006 problems in a government’s financial monitoring software resulted in incorrect election candidate financial reports being made available to the public. The government’s election finance reporting web site had to be shut down until the software was repaired.
  • Trading on a major Asian stock exchange was brought to a halt in November of 2005, reportedly due to an error in a system software upgrade. The problem was rectified and trading resumed later the same day.
  • A May 2005 newspaper article reported that a major hybrid car manufacturer had to install a software fix on 20,000 vehicles due to problems with invalid engine warning lights and occasional stalling. In the article, an automotive software specialist indicated that the automobile industry spends $2 billion to $3 billion per year fixing software problems.
  • Media reports in January of 2005 detailed severe problems with a $170 million high-profile U.S. government IT systems project. Software testing was one of the five major problem areas according to a report of the commission reviewing the project. In March of 2005 it was decided to scrap the entire project.
  • In July 2004 newspapers reported that a new government welfare management system in Canada costing several hundred million dollars was unable to handle a simple benefits rate increase after being put into live operation. Reportedly the original contract allowed for only 6 weeks of acceptance testing and the system was never tested for its ability to handle a rate increase.
  • Millions of bank accounts were impacted by errors due to installation of inadequately tested software code in the transaction processing system of a major North American bank, according to mid-2004 news reports. Articles about the incident stated that it took two weeks to fix all the resulting errors, that additional problems resulted when the incident drew a large number of e-mail phishing attacks against the bank’s customers, and that the total cost of the incident could exceed $100 million.
  • A bug in site management software utilized by companies with a significant percentage of worldwide web traffic was reported in May of 2004. The bug resulted in performance problems for many of the sites simultaneously and required disabling of the software until the bug was fixed.
  • According to news reports in April of 2004, a software bug was determined to be a major contributor to the 2003 Northeast blackout, the worst power system failure in North American history. The failure involved loss of electrical power to 50 million customers, forced shutdown of 100 power plants, and economic losses estimated at $6 billion. The bug was reportedly in one utility company’s vendor-supplied power monitoring and management system, which was unable to correctly handle and report on an unusual confluence of initially localized events. The error was found and corrected after examining millions of lines of code.
  • In early 2004, news reports revealed the intentional use of a software bug as a counter-espionage tool. According to the report, in the early 1980’s one nation surreptitiously allowed a hostile nation’s espionage service to steal a version of sophisticated industrial software that had intentionally-added flaws. This eventually resulted in major industrial disruption in the country that used the stolen flawed software.
  • A major U.S. retailer was reportedly hit with a large government fine in October of 2003 due to web site errors that enabled customers to view one another’s online orders.
  • News stories in the fall of 2003 stated that a manufacturing company recalled all their transportation products in order to fix a software problem causing instability in certain circumstances. The company found and reported the bug itself and initiated the recall procedure in which a software upgrade fixed the problems.
  • In August of 2003 a U.S. court ruled that a lawsuit against a large online brokerage company could proceed; the lawsuit reportedly involved claims that the company was not fixing system problems that sometimes resulted in failed stock trades, based on the experiences of 4 plaintiffs during an 8-month period. A previous lower court’s ruling that “…six miscues out of more than 400 trades does not indicate negligence.” was invalidated.
  • In April of 2003 it was announced that a large student loan company in the U.S. made a software error in calculating the monthly payments on 800,000 loans. Although borrowers were to be notified of an increase in their required payments, the company would still reportedly lose $8 million in interest. The error was uncovered when borrowers began reporting inconsistencies in their bills.
  • News reports in February of 2003 revealed that the U.S. Treasury Department mailed 50,000 Social Security checks without any beneficiary names. A spokesperson indicated that the missing names were due to an error in a software change. Replacement checks were subsequently mailed out with the problem corrected, and recipients were then able to cash their Social Security checks.
  • In March of 2002 it was reported that software bugs in Britain’s national tax system resulted in more than 100,000 erroneous tax overcharges. The problem was partly attributed to the difficulty of testing the integration of multiple systems.
  • A newspaper columnist reported in July 2001 that a serious flaw was found in off-the-shelf software that had long been used in systems for tracking certain U.S. nuclear materials. The same software had been recently donated to another country to be used in tracking their own nuclear materials, and it was not until scientists in that country discovered the problem, and shared the information, that U.S. officials became aware of the problems.
  • According to newspaper stories in mid-2001, a major systems development contractor was fired and sued over problems with a large retirement plan management system. According to the reports, the client claimed that system deliveries were late, the software had excessive defects, and it caused other systems to crash.
  • In January of 2001 newspapers reported that a major European railroad was hit by the aftereffects of the Y2K bug. The company found that many of their newer trains would not run due to their inability to recognize the date ‘31/12/2000’; the trains were started by altering the control system’s date settings.
  • News reports in September of 2000 told of a software vendor settling a lawsuit with a large mortgage lender; the vendor had reportedly delivered an online mortgage processing system that did not meet specifications, was delivered late, and didn’t work.
  • In early 2000, major problems were reported with a new computer system in a large suburban U.S. public school district with 100,000+ students; problems included 10,000 erroneous report cards and students left stranded by failed class registration systems; the district’s CIO was fired. The school district decided to reinstate its original 25-year-old system for at least a year until the bugs were worked out of the new system by the software vendors.
  • A review board concluded that the NASA Mars Polar Lander failed in December 1999 due to software problems that caused improper functioning of retro rockets utilized by the Lander as it entered the Martian atmosphere.
  • In October of 1999 the $125 million NASA Mars Climate Orbiter spacecraft was believed to be lost in space due to a simple data conversion error. It was determined that spacecraft software used certain data in English units that should have been in metric units. Among other tasks, the orbiter was to serve as a communications relay for the Mars Polar Lander mission, which failed for unknown reasons in December 1999. Several investigating panels were convened to determine the process failures that allowed the error to go undetected.
  • Bugs in software supporting a large commercial high-speed data network affected 70,000 business customers over a period of 8 days in August of 1999. Among those affected was the electronic trading system of the largest U.S. futures exchange, which was shut down for most of a week as a result of the outages.
  • In April of 1999 a software bug caused the failure of a $1.2 billion U.S. military satellite launch, the costliest unmanned accident in the history of Cape Canaveral launches. The failure was the latest in a string of launch failures, triggering a complete military and industry review of U.S. space launch programs, including software integration and testing processes. Congressional oversight hearings were requested.
  • A small town in Illinois in the U.S. received an unusually large monthly electric bill of $7 million in March of 1999. This was about 700 times larger than its normal bill. It turned out to be due to bugs in new software that had been purchased by the local power company to deal with Y2K software issues.
  • In early 1999 a major computer game company recalled all copies of a popular new product due to software problems. The company made a public apology for releasing a product before it was ready.
  • The computer system of a major online U.S. stock trading service failed during trading hours several times over a period of days in February of 1999 according to nationwide news reports. The problem was reportedly due to bugs in a software upgrade intended to speed online trade confirmations.
  • In April of 1998 a major U.S. data communications network failed for 24 hours, crippling a large part of some U.S. credit card transaction authorization systems as well as other large U.S. bank, retail, and government data systems. The cause was eventually traced to a software bug.
  • January 1998 news reports told of software problems at a major U.S. telecommunications company that resulted in no charges for long distance calls for a month for 400,000 customers. The problem went undetected until customers called up with questions about their bills.
  • In November of 1997 the stock of a major health industry company dropped 60% due to reports of failures in computer billing systems, problems with a large database conversion, and inadequate software testing. It was reported that more than $100,000,000 in receivables had to be written off and that multi-million dollar fines were levied on the company by government agencies.
  • A retail store chain filed suit in August of 1997 against a transaction processing system vendor (not a credit card company) due to the software’s inability to handle credit cards with year 2000 expiration dates.
  • In August of 1997 one of the leading consumer credit reporting companies reportedly shut down their new public web site after less than two days of operation due to software problems. The new site allowed web site visitors instant access, for a small fee, to their personal credit reports. However, a number of initial users ended up viewing each others’ reports instead of their own, resulting in irate customers and nationwide publicity. The problem was attributed to “…unexpectedly high demand from consumers and faulty software that routed the files to the wrong computers.”
  • In November of 1996, newspapers reported that software bugs caused the 411 telephone information system of one of the U.S. RBOCs to fail for most of a day. Most of the 2000 operators had to search through phone books instead of using their 13,000,000-listing database. The bugs were introduced by new software modifications and the problem software had been installed on both the production and backup systems. A spokesman for the software vendor reportedly stated that ‘It had nothing to do with the integrity of the software. It was human error.’
  • On June 4 1996 the first flight of the European Space Agency’s new Ariane 5 rocket failed shortly after launching, resulting in an estimated uninsured loss of a half billion dollars. The failure was reportedly due to an unhandled exception raised when a 64-bit floating-point value was converted to a 16-bit signed integer; a code sketch of this class of failure appears after this list.
  • Software bugs caused the bank accounts of 823 customers of a major U.S. bank to be credited with $924,844,208.32 each in May of 1996, according to newspaper reports. The American Bankers Association claimed it was the largest such error in banking history. A bank spokesman said the programming errors were corrected and all funds were recovered.
  • On January 1 1984 all computers produced by one of the leading minicomputer makers of the time reportedly failed worldwide. The cause was claimed to be a leap year bug in a date handling function utilized in deletion of temporary operating system files. Technicians throughout the world worked for several days to clear up the problem. It was also reported that the same bug affected many of the same computers four years later.
  • Software bugs in a Soviet early-warning monitoring system nearly brought on nuclear war in 1983, according to news reports in early 1999. The software was supposed to filter out false missile detections caused by Soviet satellites picking up sunlight reflections off cloud-tops, but failed to do so. Disaster was averted when a Soviet commander, based on what he said was a ‘…funny feeling in my gut’, decided the apparent missile attack was a false alarm. The filtering software code was rewritten.
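
The Ariane 5 incident above is a narrowing-conversion failure. The following minimal C++ sketch (not the actual flight code; the value and names are invented for illustration) contrasts an unchecked conversion with one guarded by a range check and exception handling:

#include <cstdint>
#include <iostream>
#include <limits>
#include <stdexcept>

// Unchecked narrowing: converting an out-of-range floating-point value to a
// 16-bit signed integer has no defined result in C++, and no exception is raised.
int16_t unsafe_narrow(double value) {
    return static_cast<int16_t>(value);  // no range check, no exception handling
}

// Checked narrowing: reject values that cannot be represented in 16 bits.
int16_t safe_narrow(double value) {
    if (value < std::numeric_limits<int16_t>::min() ||
        value > std::numeric_limits<int16_t>::max()) {
        throw std::out_of_range("value does not fit in a 16-bit signed integer");
    }
    return static_cast<int16_t>(value);
}

int main() {
    const double horizontal_bias = 40000.0;  // hypothetical value exceeding the 16-bit range
    try {
        std::cout << "checked conversion: " << safe_narrow(horizontal_bias) << "\n";
    } catch (const std::exception& error) {
        std::cout << "conversion rejected: " << error.what() << "\n";
    }
    // unsafe_narrow(horizontal_bias) would compile, but its result is unpredictable.
    return 0;
}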

What kinds of testing should be considered?

  • Black box testing – not based on any knowledge of internal design or code. Tests are based on requirements and functionality.
  • White box testing – based on knowledge of the internal logic of an application’s code. Tests are based on coverage of code statements, branches, paths, conditions.
  • unit testing – the most ‘micro’ scale of testing; to test particular functions or code modules. Typically done by the programmer and not by testers, as it requires detailed knowledge of the internal program design and code. Not always easily done unless the application has a well-designed architecture with tight code; may require developing test driver modules or test harnesses. A minimal example appears after this list.
  • incremental integration testing – continuous testing of an application as new functionality is added; requires that various aspects of an application’s functionality be independent enough to work separately before all parts of the program are completed, or that test drivers be developed as needed; done by programmers or by testers.
  • integration testing – testing of combined parts of an application to determine if they function together correctly. The ‘parts’ can be code modules, individual applications, client and server applications on a network, etc. This type of testing is especially relevant to client/server and distributed systems.
  • functional testing – black-box type testing geared to functional requirements of an application; this type of testing should be done by testers. This doesn’t mean that the programmers shouldn’t check that their code works before releasing it (which of course applies to any stage of testing.)
  • system testing – black-box type testing that is based on overall requirements specifications; covers all combined parts of a system.
  • end-to-end testing – similar to system testing; the ‘macro’ end of the test scale; involves testing of a complete application environment in a situation that mimics real-world use, such as interacting with a database, using network communications, or interacting with other hardware, applications, or systems if appropriate.
  • sanity testing or smoke testing – typically an initial testing effort to determine if a new software version is performing well enough to accept it for a major testing effort. For example, if the new software is crashing systems every 5 minutes, bogging down systems to a crawl, or corrupting databases, the software may not be in a ‘sane’ enough condition to warrant further testing in its current state.
  • regression testing – re-testing after fixes or modifications of the software or its environment. It can be difficult to determine how much re-testing is needed, especially near the end of the development cycle. Automated testing tools can be especially useful for this type of testing.
  • acceptance testing – final testing based on specifications of the end-user or customer, or based on use by end-users/customers over some limited period of time.
  • load testing – testing an application under heavy loads, such as testing of a web site under a range of loads to determine at what point the system’s response time degrades or fails.
  • stress testing – term often used interchangeably with ‘load’ and ‘performance’ testing. Also used to describe such tests as system functional testing while under unusually heavy loads, heavy repetition of certain actions or inputs, input of large numerical values, large complex queries to a database system, etc.
  • performance testing – term often used interchangeably with ‘stress’ and ‘load’ testing. Ideally ‘performance’ testing (and any other ‘type’ of testing) is defined in requirements documentation or QA or Test Plans.
  • usability testing – testing for ‘user-friendliness’. Clearly this is subjective, and will depend on the targeted end-user or customer. User interviews, surveys, video recording of user sessions, and other techniques can be used. Programmers and testers are usually not appropriate as usability testers.
  • install/uninstall testing – testing of full, partial, or upgrade install/uninstall processes.
  • recovery testing – testing how well a system recovers from crashes, hardware failures, or other catastrophic problems.
  • failover testing – typically used interchangeably with ‘recovery testing’.
  • security testing – testing how well the system protects against unauthorized internal or external access, willful damage, etc; may require sophisticated testing techniques.
  • compatibility testing – testing how well software performs in a particular hardware/software/operating system/network/etc. environment.
  • exploratory testing – often taken to mean a creative, informal software test that is not based on formal test plans or test cases; testers may be learning the software as they test it.
  • ad-hoc testing – similar to exploratory testing, but often taken to mean that the testers have significant understanding of the software before testing it.
  • context-driven testing – testing driven by an understanding of the environment, culture, and intended use of software. For example, the testing approach for life-critical medical equipment software would be completely different than that for a low-cost computer game.
  • user acceptance testing – determining if software is satisfactory to an end-user or customer.
  • comparison testing – comparing software weaknesses and strengths to competing products.
  • alpha testing – testing of an application when development is nearing completion; minor design changes may still be made as a result of such testing. Typically done by end-users or others, not by programmers or testers.
  • beta testing – testing when development and testing are essentially completed and final bugs and problems need to be found before final release. Typically done by end-users or others, not by programmers or testers.
  • mutation testing – a method for determining if a set of test data or test cases is useful, by deliberately introducing various code changes (‘bugs’) and retesting with the original test data/cases to determine if the ‘bugs’ are detected. Proper implementation requires large computational resources.
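
To make the ‘unit testing’ entry above concrete, here is a minimal, assert-based C++ unit test; the trim() function and its test cases are invented for illustration, not taken from any particular project:

#include <cassert>
#include <string>

// Hypothetical function under test: removes leading and trailing spaces.
std::string trim(const std::string& s) {
    const auto first = s.find_first_not_of(' ');
    if (first == std::string::npos) return "";
    const auto last = s.find_last_not_of(' ');
    return s.substr(first, last - first + 1);
}

// Unit tests: small, fast checks typically written and run by the developer.
int main() {
    assert(trim("  hello ") == "hello");   // typical case
    assert(trim("hello") == "hello");      // nothing to trim
    assert(trim("   ") == "");             // boundary: all spaces
    assert(trim("") == "");                // boundary: empty input
    return 0;
}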

Will automated testing tools make testing easier?

  • Possibly. For small projects, the time needed to learn and implement them may not be worth it unless personnel are already familiar with the tools. For larger projects, or ongoing long-term projects, they can be valuable.
  • A common type of automated tool is the ‘record/playback’ type. For example, a tester could click through all combinations of menu choices, dialog box choices, buttons, etc. in an application GUI and have them ‘recorded’ and the results logged by a tool. The ‘recording’ is typically in the form of text based on a scripting language that is interpretable by the testing tool. Often the recorded script is manually modified and enhanced. If new buttons are added, or some underlying code in the application is changed, etc. the application might then be retested by just ‘playing back’ the ‘recorded’ actions, and comparing the logging results to check effects of the changes. The problem with such tools is that if there are continual changes to the system being tested, the ‘recordings’ may have to be changed so much that it becomes very time-consuming to continuously update the scripts. Additionally, interpretation and analysis of results (screens, data, logs, etc.) can be a difficult task. Note that there are record/playback tools for text-based interfaces also, and for all types of platforms.
  • Another common type of approach for automation of functional testing is ‘data-driven’ or ‘keyword-driven’ automated testing, in which the test drivers are separated from the data and/or actions utilized in testing (an ‘action’ would be something like ‘enter a value in a text box’). Test drivers can be in the form of automated test tools or custom-written testing software. The data and actions can be more easily maintained – such as via a spreadsheet – since they are separate from the test drivers. The test drivers ‘read’ the data/action information to perform specified tests. This approach can enable more efficient control, development, documentation, and maintenance of automated tests/test cases. A minimal sketch of this approach appears at the end of this answer.
  • Other automated tools can include:

code analyzers – monitor code complexity, adherence to standards, etc.

coverage analyzers – these tools check which parts of the code have been exercised by a test, and may be oriented to code statement coverage, condition coverage, path coverage, etc.

memory analyzers – such as bounds-checkers and leak detectors.

load/performance test tools – for testing client/server and web applications under various load levels.

web test tools – to check that links are valid, HTML code usage is correct, client-side and server-side programs work, and a web site’s interactions are secure.

other tools – for test case management, documentation management, bug reporting, configuration management, file and database comparisons, screen captures, security testing, macro recorders, etc.

Test automation is, of course, possible without COTS tools. Many successful automation efforts utilize custom automation software that is targeted for specific projects, specific software applications, or a specific organization’s software development environment. In test-driven agile software development environments, automated tests are often built into the software during (or preceding) coding of the application.
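
As a rough illustration of the ‘data-driven’ approach described earlier in this answer, here is a minimal C++ sketch; the add() function and the hard-coded test rows are invented stand-ins for data that a real project would more likely load from a spreadsheet or CSV file:

#include <iostream>
#include <string>
#include <vector>

// Hypothetical function under test.
int add(int a, int b) { return a + b; }

// One row of test data: name, inputs, and expected output.
struct TestRow {
    std::string name;
    int a;
    int b;
    int expected;
};

int main() {
    // In a real data-driven setup these rows would come from an external file,
    // so testers can add cases without touching the driver code.
    std::vector<TestRow> rows = {
        {"small values", 2, 3, 5},
        {"with zero", 0, 7, 7},
        {"negative operand", -4, 4, 0},
    };

    int failures = 0;
    for (const auto& row : rows) {  // generic driver: the data changes, the driver does not
        const int actual = add(row.a, row.b);
        if (actual != row.expected) {
            std::cout << "FAIL " << row.name << ": expected " << row.expected
                      << ", got " << actual << "\n";
            ++failures;
        } else {
            std::cout << "PASS " << row.name << "\n";
        }
    }
    return failures == 0 ? 0 : 1;
}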

What steps are needed to develop and run software tests?
The following are some of the steps to consider:

  • Obtain requirements, functional design, and internal design specifications and other necessary documents
  • Obtain budget and schedule requirements
  • Determine project-related personnel and their responsibilities, reporting requirements, required standards and processes (such as release processes, change processes, etc.)
  • Determine project context, relative to the existing quality culture of the organization and business, and how it might impact testing scope, approaches, and methods.
  • Identify application’s higher-risk aspects, set priorities, and determine scope and limitations of tests
  • Determine test approaches and methods – unit, integration, functional, system, load, usability tests, etc.
  • Determine test environment requirements (hardware, software, communications, etc.)
  • Determine testware requirements (record/playback tools, coverage analyzers, test tracking, problem/bug tracking, etc.)
  • Determine test input data requirements
  • Identify tasks, those responsible for tasks, and labor requirements
  • Set schedule estimates, timelines, milestones
  • Determine input equivalence classes, boundary value analyses, error classes (a small example follows this list)
  • Prepare test plan document and have needed reviews/approvals
  • Write test cases
  • Have needed reviews/inspections/approvals of test cases
  • Prepare test environment and testware, obtain needed user manuals/reference documents/configuration guides/installation guides, set up test tracking processes, set up logging and archiving processes, set up or obtain test input data
  • Obtain and install software releases
  • Perform tests
  • Evaluate and report results
  • Track problems/bugs and fixes
  • Retest as needed
  • Maintain and update test plans, test cases, test environment, and testware through life cycle
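
As a concrete illustration of the equivalence class and boundary value step above, suppose a hypothetical input field accepts ages 18 through 65 inclusive (the rule and cases are invented for the example). A minimal C++ sketch of boundary-value test cases might be:

#include <iostream>
#include <vector>

// Hypothetical validation rule under test: accept ages 18 through 65 inclusive.
bool is_valid_age(int age) { return age >= 18 && age <= 65; }

int main() {
    // Equivalence classes: below range, in range, above range.
    // Boundary values: just outside and just inside each edge of the valid class.
    struct Case { int age; bool expected; };
    const std::vector<Case> cases = {
        {17, false},  // just below lower boundary
        {18, true},   // lower boundary
        {19, true},   // just above lower boundary
        {40, true},   // representative mid-range value
        {64, true},   // just below upper boundary
        {65, true},   // upper boundary
        {66, false},  // just above upper boundary
    };

    int failures = 0;
    for (const auto& c : cases) {
        if (is_valid_age(c.age) != c.expected) {
            std::cout << "FAIL for age " << c.age << "\n";
            ++failures;
        }
    }
    std::cout << (failures == 0 ? "All boundary cases passed\n" : "Some cases failed\n");
    return failures;
}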

What is ‘good code’?
‘Good code’ is code that works, is reasonably bug free, and is readable and maintainable. Some organizations have coding ‘standards’ that all developers are supposed to adhere to, but everyone has different ideas about what’s best, or what is too many or too few rules. There are also various theories and metrics, such as McCabe Complexity metrics. It should be kept in mind that excessive use of standards and rules can stifle productivity and creativity. ‘Peer reviews’, ‘buddy checks’, pair programming, code analysis tools, etc. can be used to check for problems and enforce standards.
For C and C++ coding, here are some typical ideas to consider in setting rules/standards; these may or may not apply to a particular situation (a short example follows the list):

  • minimize or eliminate use of global variables.
  • use descriptive function and method names – use both upper and lower case, avoid abbreviations, use as many characters as necessary to be adequately descriptive (use of more than 20 characters is not out of line); be consistent in naming conventions.
  • use descriptive variable names – use both upper and lower case, avoid abbreviations, use as many characters as necessary to be adequately descriptive (use of more than 20 characters is not out of line); be consistent in naming conventions.
  • function and method sizes should be minimized; less than 100 lines of code is good, less than 50 lines is preferable.
  • function descriptions should be clearly spelled out in comments preceding a function’s code.
  • organize code for readability.
  • use whitespace generously – vertically and horizontally.
  • each line of code should contain 70 characters max.
  • one code statement per line.
  • coding style should be consistent throughout a program (e.g., use of brackets, indentation, naming conventions, etc.).
  • in adding comments, err on the side of too many rather than too few comments; a common rule of thumb is that there should be at least as many lines of comments (including header blocks) as lines of code.
  • no matter how small, an application should include documentation of the overall program function and flow (even a few paragraphs are better than nothing); or if possible a separate flow chart and detailed program documentation.
  • make extensive use of error handling procedures and status and error logging.
  • for C++, to minimize complexity and increase maintainability, avoid too many levels of inheritance in class hierarchies (relative to the size and complexity of the application). Minimize use of multiple inheritance, and minimize use of operator overloading (note that the Java programming language eliminates multiple inheritance and operator overloading.)
  • for C++, keep class methods small, less than 50 lines of code per method is preferable.
  • for C++, make liberal use of exception handlers.
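
The following short C++ fragment illustrates several of these points (descriptive names, a header comment describing the function, error handling, and status/error logging); the configuration-file reader and the file name app.cfg are invented for the example:

#include <fstream>
#include <iostream>
#include <stdexcept>
#include <string>

// ReadConfigurationValue
// Purpose:  Return the value associated with keyName in a simple
//           "key=value" configuration file.
// Errors:   Throws std::runtime_error if the file cannot be opened or the
//           key is not present; the caller decides how to recover.
std::string ReadConfigurationValue(const std::string& configFilePath,
                                   const std::string& keyName)
{
    std::ifstream configFile(configFilePath);
    if (!configFile) {
        throw std::runtime_error("Cannot open configuration file: " + configFilePath);
    }

    std::string line;
    const std::string prefix = keyName + "=";
    while (std::getline(configFile, line)) {
        if (line.compare(0, prefix.size(), prefix) == 0) {
            return line.substr(prefix.size());
        }
    }
    throw std::runtime_error("Key not found in configuration file: " + keyName);
}

int main()
{
    try {
        const std::string serverName = ReadConfigurationValue("app.cfg", "ServerName");
        std::cout << "ServerName = " << serverName << "\n";
    }
    catch (const std::exception& error) {
        // Status and error logging: report the problem instead of failing silently.
        std::cerr << "Configuration error: " << error.what() << "\n";
        return 1;
    }
    return 0;
}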

What is a ‘walkthrough’?
A ‘walkthrough’ is an informal meeting for evaluation or informational purposes. Little or no preparation is usually required.

What is verification? validation?
Verification typically involves reviews and meetings to evaluate documents, plans, code, requirements, and specifications. This can be done with checklists, issues lists, walkthroughs, and inspection meetings. Validation typically involves actual testing and takes place after verifications are completed. The term ‘IV & V’ refers to Independent Verification and Validation.

What makes a good Software Test engineer?
A good test engineer has a ‘test to break’ attitude, an ability to take the point of view of the customer, a strong desire for quality, and an attention to detail. Tact and diplomacy are useful in maintaining a cooperative relationship with developers, and an ability to communicate with both technical (developers) and non-technical (customers, management) people is useful. Previous software development experience can be helpful as it provides a deeper understanding of the software development process, gives the tester an appreciation for the developers’ point of view, and reduces the learning curve in automated test tool programming. Judgement skills are needed to assess high-risk areas of an application on which to focus testing efforts when time is limited.

Visa Interview Questions

Here is a set of U.S. visa interview questions that you may be asked as part of the H-1B visa application process.

  • What is the purpose of your trip to the United States?
  • Why do you want to work in the US?
  • Where are you working currently?
  • Which company are you going to work for in the USA?
  • What will your role be in the U.S. company?
  • What is your current salary?
  • What salary will you get in the United States?
  • Have you been to any other country before? If yes, how long was your stay there?
  • Where will you be staying in the U.S.?
  • How long do you plan to stay in the U.S.?
  • Will you return to your home country?
  • When will you return to your home country?

Points to Remember:

  • Applicants should practice for the interview with friends.
  • Applicants must arrive at the exact time mentioned in the appointment letter.
  • Applicants must answer all questions clearly during the interview.
  • Applicants must be honest with consular officials at all times.
  • Applicants must ensure that they have all the forms and personal documents that they need to submit at the time of the interview.

Some more questions

  1. What is the purpose of your trip to the United States?
  2. Do you have any family in the United States?
  3. Why are you changing your job?
  4. Why do you want to work in the U.S.?
  5. Have you applied for a visa for any other country?
  6. Do you know the cost of living in the U.S., specifically in the place where you are going?
  7. When are you planning to travel?
  8. How will you survive for the first month?
  9. Have you been to any other country before?
    If yes, how long was your stay there?
  10. Will you come back to India?
  11. When will you return to India?
  12. Why would you want to return to India?
  13. Is it your first H-1B or a visa revalidation?
  14. What will you do after your visa expires?

Questions About Your Education/Experience

  1. Are you a student?
  2. Which university is your degree from?
  3. What was your thesis about?
  4. What is the difference between PL/SQL and SQL?
  5. What software packages do you know?
    Do you have work experience with them?
  6. What courses did you complete here [Home Country]?
  7. Show me your certificates.
  8. Can I see your educational certificates and experience letters?
  9. Tell me in detail about all your jobs, work experience, and profile.
  10. What’s your highest educational qualification?

Questions About Your Current Company

  1. How long have you been working?
  2. Where are you working currently?
  3. What is your current salary?
  4. What is your role in your current company?
  5. Is it an Indian company you currently work for?

Questions About Sponsoring Company

  1. Which company are you going to work for in the USA?
  2. Where are you going to work in the U.S.?
  3. Why are you joining [New Company]?
  4. How do you know this is a real company?
  5. When did you receive your offer letter?
  6. What will you be working on there? Is it an internal project?
  7. I need a client letter describing your work project.
  8. Tell me what you know about [New Company].
  9. When was the U.S. company founded?
  10. Tell me about the project and the company (client) you will be working for.
  11. How did you find out about this company?
  12. How did you contact [New Company]?
  13. What is the current project you will be working on?
  14. What are your responsibilities, and which client are you going to be working for? Please explain in detail.
  15. Do you have any proof from your new employer regarding your responsibilities?
  16. Do you have any company photographs?
  17. How long has the company been in its current location?
  18. How many rounds of interviews did the U.S. company conduct?
    What were they?
  19. What is the name of your interviewer?
  20. Can you give me the dates of your interview?
  21. Who are the clients of your U.S. company?
  22. What are the technologies you are working on?
  23. Who is the President/CEO of the U.S. company?
  24. What kind of projects is the U.S. company working on?
  25. What is the annual turnover of the company?
  26. How many employees does the U.S. company have?
  27. What was your designation at [Previous Company], and what will your designation be at [New Company]?
  28. Will you be working from the [New Company] office or the client’s site?
  29. Can I see the employee petition to USCIS and the company’s tax returns?
  30. What salary will you get in the USA?
  31. How many rounds of interviews did the U.S. company conduct?
    What were they? (for example: 4 rounds – 2 technical, 1 HR, 1 manager interview)
  32. Can I see your client-end letter and itinerary of services?

A few more here

  1. How did you come to know about this company?
  2. Where are you working currently?
  3. Which company are you going to work for in the USA?
  4. What is your current salary?
  5. What salary will you get in the USA?
  6. How many rounds of interviews did the U.S. company conduct? What were they?
  7. Who interviewed you?
  8. Can you give me the dates of your interview?
  9. Who are the clients of your U.S. company?
  10. What are the technologies you are working on?
  11. Who is the President/CEO of the U.S. company?
  12. What kind of projects is the U.S. company working on?
  13. What is the annual turnover of the company?
  14. How many employees does the U.S. company have?
  15. Why are you changing your job?
  16. Why do you want to work in the U.S.?
  17. Have you applied for a visa for any other country?
  18. Do you know the cost of living in the U.S., specifically in the place where you are going?
  19. When did you receive your offer letter?
  20. What is the current project you are going to work on?
  21. What is your current role?
  22. What will your role be in the U.S. company?
  23. Where are you going to work in the U.S.?
  24. What is your designation in the U.S. company?
  25. When was the U.S. company founded?
  26. What is your current pay?
  27. What will your pay be in the U.S. company?
  28. When are you planning to travel?
  29. How will you survive for the first month?
  30. Have you been to any other country before?
  31. Will you come back to India?
  32. When will you return to India?
  33. Why do you want to return?

Quick Test Professional Q & A

1. What is QuickTest Professional (QTP)?

It is Mercury Interactive’s keyword-driven testing tool.

2. What kinds of applications can we test using QTP?

Using QTP we can test standard Windows applications, Web objects, ActiveX controls, and Visual Basic applications.

3. What is a test?

A test is a collection of steps organized into one or more actions, which are used to verify that your application performs as expected.

4. What is a business component?

It is a collection of steps representing a single task in your application. Business components are combined into specific scenarios to build business process tests in Mercury Quality Center with Business Process Testing.

5. How is a test created in QTP?

As we navigate through our application, QTP records each step we perform and generates a test or component that graphically displays these steps in a table-based keyword view.

6. What are the main tasks QTP can perform after a test is created?

After we have finished recording, we can instruct QTP to check the properties of specific objects in our application by means of the enhancement features available in QTP. When we perform a run session, QTP performs each step in our test or component. After the run session ends, we can view a report detailing which steps were performed and which ones succeeded or failed.

7. What are actions?

A test is composed of actions. The steps we add to a test are included within the test’s actions. Each test begins with a single action; we can divide our test into multiple actions to organize it.

8. What are the main stages involved in testing with QTP?

  • Creating tests or business components
  • Running tests or business components
  • Analyzing results

9. How can a test be created in QTP?

We can create a test or component either by recording a session on our application or web site, or by building an object repository and adding steps manually to the keyword view using keyword-driven functionality. We can then modify the test with programming statements.

10. What is the purpose of the Documentation column in the keyword view?

The Documentation column of the keyword view displays a description of each step in easy-to-understand sentences.

11. What is the keyword view in QTP also termed?

The icon-based view.

12. What is the Data Table used for in QTP?

Parameterizing tests.

13. What is the use of working with actions?

To design modular and efficient tests.

14. What is the file extension of the code file and object repository file in QTP?

The extension for a code file is .vbs, and the extension for an object repository file is .tsr.

15. What are the properties we can use for identifying a browser and page when using descriptive programming?

The name property is used to identify the browser, and the title property is used to identify the page.

16. What are the different scripting languages we can use when working with QTP?

VBScript.

17. Give an example of where we can use a COM interface in a QTP project.

A COM interface can come into play, for example, in scenarios involving both the front end and the back end of the application.

18. Explain the CreateObject keyword with an example.

CreateObject is used to create and return a reference to an automation object.

For example:

Dim ExcelSheet

Set ExcelSheet = CreateObject("Excel.Sheet")

19. How do you open an Excel sheet using a QTP script?

You can open Excel from QTP by using the following statement:

SystemUtil.Run "<path of the file>"

20. Is it necessary to learn VBScript to work with QTP?

It is not mandatory to master VBScript in order to work with QTP. The tool is largely user friendly, and a basic knowledge of VBScript concepts is sufficient for good results.

21. If WinRunner and QTP are both functional testing tools from the same company, why was a separate tool, QTP, introduced?

QTP has some additional functionality that is not present in WinRunner. For example, with QTP you can perform functional and regression testing of an application developed in .NET technology, which is not possible in WinRunner.

22. Explain briefly the QTP test object model.

The test object model is a large set of object types or classes that QTP uses to represent the objects in our application. Each test object class has a list of properties that can uniquely identify objects of that class.

23. What is the run-time Data Table?

The test results tree also includes a table-shaped icon that displays the run-time Data Table – a table that shows the values used to run a test containing Data Table parameters, or the Data Table output values retrieved from the application under test.

24. What are the components of a QTP test script?

A QTP test script is a combination of VBScript statements and statements that use QuickTest test objects, methods, and properties.

25. What is a test object?

It’s an object that QTP uses to represent an object in our application. Each test object has one or more methods and properties that we can use to perform operations and retrieve values for that object. Each object also has a number of identification properties that can describe the object.

26. What rules and guidelines should be followed when working in the Expert View?

Case-sensitivity

VBScript is not case sensitive and does not differentiate between uppercase and lowercase spellings of words.

Text strings

When we enter a value as a string, we must add quotation marks before and after the string.

Variables

We can use variables to store strings, integers, arrays and objects. Using variables helps to make our script more readable and flexible.

Parentheses

To achieve the desired result and to avoid errors, it is important to use parentheses () correctly in our statements.

Comments

We can add comments to our statements using an apostrophe (‘), either at the beginning of a separate line or at the end of a statement.

Spaces

We can add extra blank spaces to our script to improve clarity. These spaces are ignored by VBScript.

Additional Interview Questions

Q1. What is verification?

A: Verification ensures the product is designed to deliver all functionality to the customer; it typically involves reviews and meetings to evaluate documents, plans, code, requirements, and specifications; this can be done with checklists, issues lists, walkthroughs, and inspection meetings.

Q2. What is validation?

 A: Validation ensures that functionality, as defined in requirements, is the intended behavior of the product; validation typically involves actual testing and takes place after verifications are completed.

Q3. What is a walkthrough?

A: A walkthrough is an informal meeting for evaluation or informational purposes. A walkthrough is also a process at an abstract level. It’s the process of inspecting software code by following paths through the code (as determined by input conditions and choices made along the way). The purpose of code walkthroughs is to ensure the code fits the purpose. Walkthroughs also offer opportunities to assess an individual’s or team’s competency.

Q4. What is an inspection?

A: An inspection is a formal meeting, more formalized than a walkthrough, and typically consists of 3-10 people, including a moderator, a reader, and a recorder to take notes. The subject of the inspection is typically a document, such as a requirements document or a test plan. The purpose of an inspection is to find problems and see what is missing, not to fix anything. The result of the meeting should be documented in a written report. Attendees should prepare for this type of meeting by reading through the document before the meeting starts; most problems are found during this preparation. Preparation for inspections is difficult, but it is one of the most cost-effective methods of ensuring quality, since bug prevention is more cost-effective than bug detection.

Q5. What is quality?

 A: Quality software is software that is reasonably bug-free, delivered on time and within budget, meets requirements and expectations and is maintainable. However, quality is a subjective term. Quality depends on who the customer is and their overall influence in the scheme of things. Customers of a software development project include end-users, customer acceptance test engineers, testers, customer contract officers, customer management, the development organization’s management, test engineers, testers, salespeople, software engineers, stockholders and accountants. Each type of customer will have his or her own slant on quality. The accounting department might define quality in terms of profits, while an end-user might define quality as user friendly and bug free.

Q6. What is good code?

A: Good code is code that works, is free of bugs, and is readable and maintainable. Organizations usually have coding standards all developers should adhere to, but every programmer and software engineer has different ideas about what is best and what are too many or too few rules. We need to keep in mind that excessive use of rules can stifle both productivity and creativity. Peer reviews and code analysis tools can be used to check for problems and enforce standards.

Q7. What is good design?

A: Design could mean many things, but often refers to functional design or internal design. Good functional design is indicated by software whose functionality can be traced back to customer and end-user requirements. Good internal design is indicated by software code whose overall structure is clear, understandable, easily modifiable, and maintainable; is robust with sufficient error handling and status logging capability; and works correctly when implemented.

Q8. What is software life cycle?

A: The software life cycle begins when a software product is first conceived and ends when it is no longer in use. It includes phases such as initial concept, requirements analysis, functional design, internal design, documentation planning, test planning, coding, document preparation, integration, testing, maintenance, updates, re-testing, and phase-out.

Q9. Why are there so many software bugs?

 A: Generally speaking, there are bugs in software because of unclear requirements, software complexity, programming errors, changes in requirements, errors made in bug tracking, time pressure, poorly documented code and/or bugs in tools used in software development.

There are unclear software requirements because there is miscommunication as to what the software should or shouldn’t do.

Software complexity. All of the following contribute to the exponential growth in software and system complexity: Windows interfaces, client-server and distributed applications, data communications, enormous relational databases, and the sheer size of applications.

Programming errors occur because programmers and software engineers, like everyone else, can make mistakes.

As to changing requirements, in some fast-changing business environments, continuously modified requirements are a fact of life. Sometimes customers do not understand the effects of changes, or understand them but request them anyway. The changes require redesign of the software and rescheduling of resources, some of the work already completed may have to be redone or discarded, and hardware requirements can be affected, too.

Bug tracking can itself introduce errors, because keeping track of a large number of changes is complex.

Time pressure can cause problems, because scheduling software projects is not easy and often requires a lot of guesswork; when deadlines loom and the crunch comes, mistakes will be made.

Code documentation is tough to maintain, and it is also tough to modify code that is poorly documented. The result is bugs. Sometimes there is no incentive for programmers and software engineers to document their code and write clearly documented, understandable code. Sometimes developers get kudos for quickly turning out code, sometimes they feel they cannot have job security if everyone can understand the code they write, and sometimes they believe that if the code was hard to write, it should be hard to read.

Software development tools, including visual tools, class libraries, compilers, and scripting tools, can introduce their own bugs. At other times the tools are poorly documented, which can create additional bugs.

Q10. How do you introduce a new software QA process?

A: It depends on the size of the organization and the risks involved. For large organizations with high-risk projects, serious management buy-in is required and a formalized QA process is necessary. For medium-sized organizations with lower-risk projects, management and organizational buy-in and a slower, step-by-step process are required. Generally speaking, QA processes should be balanced with productivity, in order to keep any bureaucracy from getting out of hand. For smaller groups or projects, an ad-hoc process is more appropriate. A lot depends on team leads and managers; feedback to developers and good communication are essential among customers, managers, developers, test engineers, and testers. Regardless of the size of the company, the greatest value for effort is in managing requirement processes, where the goal is requirements that are clear, complete, and testable.

Q11. Give me five common problems that occur during software development.

A: Poorly written requirements, unrealistic schedules, inadequate testing, adding new features after development is underway, and poor communication. Requirements are poorly written when they are unclear, incomplete, too general, or not testable; such requirements will cause problems.

The schedule is unrealistic if too much work is crammed in too little time.

Software testing is inadequate if no one knows whether or not the software is any good until customers complain or the system crashes.

It’s extremely common that new features are added after development is underway.

Miscommunication means either that the developers don’t know what is needed or that customers have unrealistic expectations; either way, problems are guaranteed.

Q12. Do automated testing tools make testing easier?

A: Yes and no. For larger projects, or ongoing long-term projects, they can be valuable. But for small projects, the time needed to learn and implement them is usually not worthwhile. A common type of automated tool is the record/playback type. For example, a test engineer clicks through all combinations of menu choices, dialog box choices, buttons, etc. in a GUI and has an automated testing tool record and log the results. The recording is typically in the form of text, based on a scripting language that the testing tool can interpret. If a change is made (e.g. new buttons are added, or some underlying code in the application is changed), the application is then re-tested by just playing back the recorded actions and comparing the logged results in order to check the effects of the change. One problem with such tools is that if there are continual changes to the product being tested, the recordings have to be changed so often that it becomes a very time-consuming task to continuously update the scripts. Another problem with such tools is the interpretation of the results (screens, data, logs, etc.), which can be a time-consuming task.

Q13. Give me five solutions to problems that occur during software development.

A: Solid requirements, realistic schedules, adequate testing, sticking to the initial requirements where feasible, and good communication.

Ensure the requirements are solid, clear, complete, detailed, cohesive, attainable and testable. All players should agree to requirements. Use prototypes to help nail down requirements.

Have schedules that are realistic. Allow adequate time for planning, design, testing, bug fixing, re-testing, changes and documentation. Personnel should be able to complete the project without burning out.

Do testing that is adequate. Start testing early on, re-test after fixes or changes, and plan for sufficient time for both testing and bug fixing.

Avoid new features. Stick to initial requirements as much as possible. Be prepared to defend the design against changes and additions once development has begun, and be prepared to explain the consequences. If changes are necessary, ensure they’re adequately reflected in related schedule changes. Use prototypes early on so customers’ expectations are clarified and customers can see what to expect; this will minimize changes later on.

Communicate. Require walkthroughs and inspections when appropriate; make extensive use of e-mail, networked bug-tracking tools and change management tools. Ensure documentation is available and up-to-date, preferably electronic rather than paper. Promote teamwork and cooperation.

Q14. What makes a good test engineer?

A: Rob Davis is a good test engineer because he

Has a “test to break” attitude,

Takes the point of view of the customer,

Has a strong desire for quality,

Has an attention to detail,

Is tactful and diplomatic,

Has good communication skills, both oral and written, and

Has previous software development experience.

Good test engineers have a “test to break” attitude. We, good test engineers, take the point of view of the customer, have a strong desire for quality and pay attention to detail. Tact and diplomacy are useful in maintaining a cooperative relationship with developers, as is the ability to communicate with both technical and non-technical people. Previous software development experience is also helpful: it provides a deeper understanding of the software development process, gives the test engineer an appreciation for the developers’ point of view and reduces the learning curve in automated test tool programming.

Q15. What makes a good QA engineer?

A: The same qualities a good test engineer has are useful for a QA engineer. Additionally, Rob Davis, like any good QA engineer, understands the entire software development process and how it fits into the business approach and the goals of the organization. Communication skills and the ability to understand various sides of issues are also important.

Q16. What makes a good resume?

A: On the subject of resumes, there seems to be an unending discussion of whether you should or shouldn’t have a one-page resume. The following are some of the comments I have personally heard: “Well, Joe Blow (car salesman) said I should have a one-page resume.” “Well, I read a book and it said you should have a one-page resume.” “I can’t really go into what I really did because if I did, it’d take more than one page on my resume.” “Gosh, I wish I could put my job at IBM on my resume but if I did it’d make my resume more than one page, and I was told to never make the resume more than one page long.” “I’m confused, should my resume be more than one page? I feel like it should, but I don’t want to break the rules.” Or, here’s another comment: “People just don’t read resumes that are longer than one page.” I have heard some more, but we can start with these.

So what’s the answer? There is no scientific answer about whether a one-page resume is right or wrong. It all depends on who you are and how much experience you have. The first thing to look at is the purpose of a resume, which is to get you an interview. If the resume is getting you interviews, then it is a good resume. If the resume isn’t getting you interviews, then you should change it.

The biggest mistake you can make on your resume is to make it hard to read. Why? One, scanners don’t like odd resumes. Small fonts make your resume harder to read; some candidates use a 7-point font so they can get the resume onto one page, which is a big mistake. Two, resume readers do not like eye strain either; if the resume is mechanically challenging, they just throw it aside for one that is easier on the eyes. Three, there are lots of resumes out there these days, and that is also part of the problem. Four, in light of the current scanning scenario, more than one page is not a deterrent, because many readers will scan your resume into their database; once the resume is in there and searchable, you have accomplished one of the goals of resume distribution. Five, resume readers don’t like to guess, and most won’t call you to clarify what is on your resume.

Generally speaking, your resume should tell your story. If you’re a college graduate looking for your first job, a one-page resume is just fine. If you have a longer story, the resume needs to be longer. Put your experience on the resume so resume readers can tell when and for whom you did what. Short resumes, for people long on experience, are not appropriate; the real audience for these short resumes is people with short attention spans and low IQ. I assure you that when your resume gets into the right hands, it will be read thoroughly.

Q17. What makes a good QA/Test Manager?

A: QA/Test Managers are familiar with the software development process; able to maintain the enthusiasm of their team and promote a positive atmosphere; able to promote teamwork to increase productivity; able to promote cooperation between Software and Test/QA Engineers; have the people skills needed to promote improvements in QA processes; have the ability to withstand pressure and say *no* to other managers when quality is insufficient or QA processes are not being adhered to; are able to communicate with technical and non-technical people; and are able to run meetings and keep them focused.

Q18. What is the role of documentation in QA?

A: Documentation plays a critical role in QA. QA practices should be documented so that they are repeatable. Specifications, designs, business rules, inspection reports, configurations, code changes, test plans, test cases, bug reports and user manuals should all be documented. Ideally, there should be a system for easily finding and obtaining documents and for determining which document will have a particular piece of information. Use documentation change management, if possible.

Q19. What about requirements?

A: Requirement specifications are important, and one of the most reliable methods of ensuring problems in a complex software project is to have poorly documented requirement specifications. Requirements are the details describing an application’s externally perceived functionality and properties. Requirements should be clear, complete, reasonably detailed, cohesive, attainable and testable. A non-testable requirement would be, for example, “user-friendly”, which is too subjective. A testable requirement would be something such as, “the product shall allow the user to enter their previously-assigned password to access the application”. Care should be taken to involve all of a project’s significant customers in the requirements process. Customers could be in-house or external and could include end-users, customer acceptance test engineers, testers, customer contract officers, customer management, future software maintenance engineers, salespeople, and anyone who could later derail the project if his/her expectations aren’t met; all of them should be included as customers, if possible. In some organizations, requirements may end up in high-level project plans, functional specification documents, design documents, or other documents at various levels of detail. No matter what they are called, some type of documentation with detailed requirements will be needed by test engineers in order to properly plan and execute tests. Without such documentation there will be no clear-cut way to determine whether a software application is performing correctly.
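As an illustration of how a testable requirement translates directly into a test, here is a small Python sketch for the password requirement above. The login() function is a stand-in for the real application call; the names and data are assumptions made only for this example.

def login(username, password):
    # Stand-in for the application under test.
    assigned_passwords = {"alice": "s3cret"}
    return assigned_passwords.get(username) == password

def test_user_can_enter_previously_assigned_password():
    assert login("alice", "s3cret") is True

def test_user_cannot_log_in_with_a_wrong_password():
    assert login("alice", "wrong") is False

The subjective requirement “user-friendly”, by contrast, gives a test engineer nothing concrete to assert against.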

Q20. What is a test plan?

A: A software project test plan is a document that describes the objectives, scope, approach and focus of a software testing effort. The process of preparing a test plan is a useful way to think through the efforts needed to validate the acceptability of a software product. The completed document will help people outside the test group understand the why and how of product validation. It should be thorough enough to be useful, but not so thorough that no one outside the test group will be able to read it.

Q21. What is a test case?

A: A test case is a document that describes an input, action, or event and its expected result, in order to determine if a feature of an application is working correctly. A test case should contain particulars such as a…

Test case identifier;

Test case name;

Objective;

Test conditions/setup;

Input data requirements/steps, and

Expected results.

Please note, the process of developing test cases can help find problems in the requirements or design of an application, since it requires you to completely think through the operation of the application. For this reason, it is useful to prepare test cases early in the development cycle, if possible.
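One possible way to capture those particulars in a structured form is sketched below in Python, using a plain dataclass; this is an illustration only, not the schema of any specific test-management tool.

from dataclasses import dataclass, field
from typing import List

@dataclass
class TestCase:
    identifier: str                  # test case identifier
    name: str                        # test case name
    objective: str
    setup: str                       # test conditions/setup
    steps: List[str] = field(default_factory=list)   # input data requirements/steps
    expected_results: str = ""

example = TestCase(
    identifier="TC-001",
    name="Login with assigned password",
    objective="Verify a user can access the application with a valid password",
    setup="User 'alice' exists and has a previously-assigned password",
    steps=["Open the login screen", "Enter the user name and password", "Press OK"],
    expected_results="The main application window is displayed",
)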

Q22. What should be done after a bug is found?

A: When a bug is found, it needs to be communicated and assigned to developers who can fix it. After the problem is resolved, fixes should be re-tested. Additionally, determinations should be made regarding requirements, software, hardware, safety impact, etc., for regression testing to check that the fixes didn’t create other problems elsewhere. If a problem-tracking system is in place, it should encapsulate these determinations. A variety of commercial problem-tracking/management software tools are available. These tools, with the detailed input of software test engineers, will give the team complete information so developers can understand the bug, get an idea of its severity, reproduce it and fix it.
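As an illustration of the kind of information a problem-tracking system captures for each bug, here is a minimal sketch; the field names and values are assumptions for this example, not any particular tool’s schema.

bug_report = {
    "id": "BUG-1042",
    "summary": "Application crashes when saving a file with an empty name",
    "severity": "critical",              # drives fix priority and regression-test scope
    "steps_to_reproduce": [
        "Open the editor",
        "Choose File > Save",
        "Leave the file name blank and press OK",
    ],
    "expected": "A validation message is shown",
    "actual": "The application terminates with an unhandled exception",
    "status": "assigned",                # new -> assigned -> fixed -> re-tested -> closed
}

Detailed, reproducible records like this are what let developers understand the bug, judge its severity, reproduce it and fix it.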

Q23. What is configuration management?

A: Configuration management (CM) covers the tools and processes used to control, coordinate and track code, requirements, documentation, problems, change requests, designs, tools, compilers, libraries, patches, changes made to them and who makes the changes. Rob Davis has had experience with a full range of CM tools and concepts. Rob Davis can easily adapt to your software tool and process needs.

Q24. What if the software is so buggy it can’t be tested at all?

A: In this situation the best bet is to have test engineers go through the process of reporting whatever bugs or problems initially show up, with the focus being on critical bugs. Since this type of problem can severely affect schedules and indicates deeper problems in the software development process, such as insufficient unit testing, insufficient integration testing, poor design, improper build or release procedures, managers should be notified and provided with some documentation as evidence of the problem.

Q25. How do you know when to stop testing?

A: This can be difficult to determine. Many modern software applications are so complex and run in such an interdependent environment, that complete testing can never be done. Common factors in deciding when to stop are…

Deadlines, e.g. release deadlines, testing deadlines;

Test cases completed with a certain percentage passed;

Test budget has been depleted;

Coverage of code, functionality, or requirements reaches a specified point;

Bug rate falls below a certain level; or

Beta or alpha testing period ends.

Q26. What if there isn’t enough time for thorough testing?

A: Since it’s rarely possible to test every possible aspect of an application, every possible combination of events, every dependency, or everything that could go wrong, risk analysis is appropriate to most software development projects. Use risk analysis to determine where testing should be focused. This requires judgment skills, common sense and experience. A risk-analysis checklist should include answers to the following questions:

·         Which functionality is most important to the project’s intended purpose?

·         Which functionality is most visible to the user?

·         Which functionality has the largest safety impact?

·         Which functionality has the largest financial impact on users?

·         Which aspects of the application are most important to the customer?

·         Which aspects of the application can be tested early in the development cycle?

·         Which parts of the code are most complex and thus most subject to errors?

·         Which parts of the application were developed in rush or panic mode?

·         Which aspects of similar/related previous projects caused problems?

·         Which aspects of similar/related previous projects had large maintenance expenses?

·         Which parts of the requirements and design are unclear or poorly thought out?

·         What do the developers think are the highest-risk aspects of the application?

·         What kinds of problems would cause the worst publicity?

·         What kinds of problems would cause the most customer service complaints?

·         What kinds of tests could easily cover multiple functionalities?

·         Which tests will have the best high-risk-coverage to time-required ratio?

Q27. What if the project isn’t big enough to justify extensive testing?

A: Consider the impact of project errors, not the size of the project. However, if extensive testing is still not justified, risk analysis is again needed and the considerations listed under “What if there isn’t enough time for thorough testing?” do apply. The test engineer then should do “ad hoc” testing, or write up a limited test plan based on the risk analysis.

Q28. What can be done if requirements are changing continuously?

A: Work with management early on to understand how requirements might change, so that alternate test plans and strategies can be worked out in advance. It is helpful if the application’s initial design allows for some adaptability, so that later changes do not require redoing the application from scratch. Additionally, try to…

·         Ensure the code is well commented and well documented; this makes changes easier for the developers.

·         Use rapid prototyping whenever possible; this will help customers feel sure of their requirements and minimize changes.

·         In the project’s initial schedule, allow for some extra time commensurate with probable changes.

·         Move new requirements to a ‘Phase 2’ version of an application and use the original requirements for the ‘Phase 1’ version.

·         Negotiate to allow only easily implemented new requirements into the project; move more difficult, new requirements into future versions of the application.

·         Ensure customers and management understand the scheduling impacts, inherent risks and costs of significant requirements changes. Then let management or the customers decide if the changes are warranted; after all, that’s their job.

·         Balance the effort put into setting up automated testing with the expected effort required to redo the tests to deal with changes.

·         Design some flexibility into automated test scripts;

·         Focus initial automated testing on application aspects that are most likely to remain unchanged;

·         Devote appropriate effort to risk analysis of changes, in order to minimize regression-testing needs;

·         Design some flexibility into test cases; this is not easily done; the best bet is to minimize the detail in the test cases, or set up only higher-level generic-type test plans;

·         Focus less on detailed test plans and test cases and more on ad-hoc testing with an understanding of the added risk this entails.

Q29. What if the application has functionality that wasn’t in the requirements?

A: It may take serious effort to determine if an application has significant unexpected or hidden functionality, which would indicate deeper problems in the software development process. If the functionality isn’t necessary to the purpose of the application, it should be removed, as it may have unknown impacts or dependencies that were not taken into account by the designer or the customer.

If it is not removed, design information will be needed to determine added testing needs or regression testing needs. Management should be made aware of any significant added risks as a result of the unexpected functionality. If the functionality only affects minor areas, such as small improvements in the user interface, it may not be a significant risk.

Q30. How can software QA processes be implemented without stifling productivity?

A: Implement QA processes slowly over time. Use consensus to reach agreement on processes, and adjust and experiment as the organization grows and matures. Productivity will be improved instead of stifled. Problem prevention will lessen the need for problem detection. Panics and burnout will decrease, and there will be improved focus and less wasted effort. At the same time, attempts should be made to keep processes simple and efficient, minimize paperwork, promote computer-based processes and automated tracking and reporting, minimize time required in meetings and promote training as part of the QA process. However, no one, especially talented technical types, likes bureaucracy, and in the short run things may slow down a bit. A typical scenario would be that more days of planning and development will be needed, but less time will be required for late-night bug fixing and calming of irate customers.

 

Q34. What is software quality assurance?

A: Software Quality Assurance, when Rob Davis does it, is oriented to *prevention*. It involves the entire software development process. Prevention is monitoring and improving the process, making sure any agreed-upon standards and procedures are followed and ensuring problems are found and dealt with. Software Testing, when performed by Rob Davis, is oriented to *detection*. Testing involves the operation of a system or application under controlled conditions and evaluating the results. Organizations vary considerably in how they assign responsibility for QA and testing. Sometimes they’re the combined responsibility of one group or individual. Also common are project teams, which include a mix of test engineers, testers and developers who work closely together, with overall QA processes monitored by project managers. It depends on what best fits your organization’s size and business structure. Rob Davis can provide QA and/or software testing. This document details some aspects of how he can provide software testing/QA service.

Q35. What is quality assurance?

A: Quality Assurance ensures all parties concerned with the project adhere to the process and procedures, standards and templates and test readiness reviews.

Rob Davis’ QA service depends on the customers and projects. A lot will depend on team leads or managers, feedback to developers and communications among customers, managers, developers, test engineers and testers.

Q36. Process and procedures – why follow them?

A: Detailed and well-written processes and procedures ensure the correct steps are being executed to facilitate the successful completion of a task. They also ensure a process is repeatable. Once Rob Davis has learned and reviewed a customer’s business processes and procedures, he will follow them. He will also recommend improvements and/or additions.

Q37. Standards and templates – what is supposed to be in a document?

A: All documents should be written to a certain standard and template. Standards and templates maintain document uniformity. They also help in learning where information is located, making it easier for a user to find what they want. Lastly, with standards and templates, information will not be accidentally omitted from a document. Once Rob Davis has learned and reviewed your standards and templates, he will use them. He will also recommend improvements and/or additions.

Q38. What are the different levels of testing?

A: Rob Davis has expertise in testing at all the testing levels listed below. At each test level, he documents the results. Each level of testing is considered either black box or white box testing.

Q39. What is black box testing?

A: Black box testing is functional testing, not based on any knowledge of internal software design or code. Black box tests are based on requirements and functionality.

Q40. What is white box testing?

A: White box testing is based on knowledge of the internal logic of an application’s code. Tests are based on coverage of code statements, branches, paths and conditions.

Q41. What is unit testing?

A: Unit testing is the first level of dynamic testing and is first the responsibility of developers and then that of the test engineers. Unit testing is considered complete when the expected test results are met or differences are explainable/acceptable.

Q42. What is parallel/audit testing?

A: Parallel/audit testing is testing where the user reconciles the output of the new system to the output of the current system to verify the new system performs the operations correctly.

Q43. What is functional testing?

A: Functional testing is a black-box type of testing geared to the functional requirements of an application. Test engineers *should* perform functional testing.

Q44. What is usability testing?

A: Usability testing is testing for ‘user-friendliness’. Clearly this is subjective and depends on the targeted end-user or customer. User interviews, surveys, video recording of user sessions and other techniques can be used. Programmers and developers are usually not appropriate as usability testers.

Q45. What is incremental integration testing?

A: Incremental integration testing is continuous testing of an application as new functionality is added. This may require that various aspects of an application’s functionality be independent enough to work separately before all parts of the program are completed, or that test drivers be developed as needed. This type of testing may be performed by programmers, software engineers, or test engineers.

Q46. What is integration testing?

A: Upon completion of unit testing, integration testing begins. Integration testing is black box testing. The purpose of integration testing is to ensure that distinct components of the application still work in accordance with customer requirements. Test cases are developed with the express purpose of exercising the interfaces between the components. This activity is carried out by the test team.

Integration testing is considered complete, when actual results and expected results are either in line or differences are explainable/acceptable based on client input.
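A tiny Python sketch of the idea: two components (an order component and a pricing component, both invented purely for this example) are exercised together through their interface rather than in isolation.

def price_of(item: str) -> float:
    # Pricing component, assumed to be already unit tested.
    catalogue = {"widget": 2.50, "gadget": 4.00}
    return catalogue[item]

def order_total(items: list) -> float:
    # Order component; its interface to the pricing component is what the integration test exercises.
    return round(sum(price_of(item) for item in items), 2)

def test_order_total_uses_pricing_component():
    # Exercises the interface between the two components end to end.
    assert order_total(["widget", "gadget", "widget"]) == 9.00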

Q47. What is system testing?

A: System testing is black box testing performed by the Test Team; at the start of system testing the complete system is configured in a controlled environment. The purpose of system testing is to validate an application’s accuracy and completeness in performing the functions as designed. System testing simulates real-life scenarios in a “simulated real life” test environment and tests all functions of the system that are required in real life. System testing is deemed complete when actual results and expected results are either in line or differences are explainable or acceptable, based on client input.

Upon completion of integration testing, system testing is started. Before system testing, all unit and integration test results are reviewed by Software QA to ensure all problems have been resolved. For a higher level of testing it is important to understand unresolved problems that originate at unit and integration test levels.

Q48. What is end-to-end testing?

A: Similar to system testing, the *macro* end of the test scale is testing a complete application in a situation that mimics real world use, such as interacting with a database, using network communication, or interacting with other hardware, application, or system.

Q49. What is regression testing?

A: The objective of regression testing is to ensure the software remains intact. A baseline set of data and scripts is maintained and executed to verify changes introduced during the release have not “undone” any previous code. Expected results from the baseline are compared to results of the software under test. All discrepancies are highlighted and accounted for, before testing proceeds to the next level.
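The baseline-comparison idea can be sketched in a few lines of Python; the test names and values below are invented purely for illustration.

def compare_to_baseline(baseline: dict, current: dict) -> list:
    # Return (test_name, expected, actual) for every result that differs from the baseline.
    discrepancies = []
    for test_name, expected in baseline.items():
        actual = current.get(test_name, "<missing result>")
        if actual != expected:
            discrepancies.append((test_name, expected, actual))
    return discrepancies

baseline = {"total_for_two_items": 30, "total_for_empty_cart": 0}
current = {"total_for_two_items": 30, "total_for_empty_cart": -1}
print(compare_to_baseline(baseline, current))
# [('total_for_empty_cart', 0, -1)]  -- a change has "undone" previously working behavior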

Q50. What is sanity testing?

A: Sanity testing is performed whenever cursory testing is sufficient to prove the application is functioning according to specifications. This level of testing is a subset of regression testing. It normally includes a set of core tests of basic GUI functionality to demonstrate connectivity to the database, application servers, printers, etc.

Q51. What is performance testing?

A: Although performance testing is described as a part of system testing, it can be regarded as a distinct level of testing. Performance testing verifies loads, volumes and response times, as defined by requirements.

Q52. What is load testing?

A: Load testing is testing an application under heavy loads, such as the testing of a web site under a range of loads to determine at what point the system response time will degrade or fail.

Q53. What is installation testing?

A: Installation testing is testing full, partial, upgrade, or install/uninstall processes. The installation test for a release is conducted with the objective of demonstrating production readiness. This test includes the inventory of configuration items, performed by the application’s System Administrator, the evaluation of data readiness, and dynamic tests focused on basic system functionality. When necessary, a sanity test is performed following installation testing.

Q54. What is security/penetration testing?

A: Security/penetration testing is testing how well the system is protected against unauthorized internal or external access, or willful damage. This type of testing usually requires sophisticated testing techniques.

Q55. What is recovery/error testing?

A: Recovery/error testing is testing how well a system recovers from crashes, hardware failures, or other catastrophic problems.

Q56. What is compatibility testing?

A: Compatibility testing is testing how well software performs in a particular hardware, software, operating system, or network environment.

Q57. What is comparison testing?

A: Comparison testing is testing that compares software weaknesses and strengths to those of competitors’ products.

Q58. What is acceptance testing?

A: Acceptance testing is black box testing that gives the client/customer/project manager the opportunity to verify the system functionality and usability prior to the system being released to production. The acceptance test is the responsibility of the client/customer or project manager; however, it is conducted with the full support of the project team. The test team also works with the client/customer/project manager to develop the acceptance criteria.

Q59. What is alpha testing?

A: Alpha testing is testing of an application when development is nearing completion. Minor design changes can still be made as a result of alpha testing. Alpha testing is typically performed by a group that is independent of the design team, but still within the company, e.g. in-house software test engineers, or software QA engineers.

Q60. What is beta testing?

A: Beta testing is testing an application when development and testing are essentially completed and final bugs and problems need to be found before the final release. Beta testing is typically performed by end-users or others, not programmers, software engineers, or test engineers.

Q61. What testing roles are standard on most testing projects?

A: Depending on the organization, the following roles are more or less standard on most testing projects: Testers, Test Engineers, Test/QA Team Lead, Test/QA Manager, System Administrator, Database Administrator, Technical Analyst, Test Build Manager and Test Configuration Manager. Depending on the project, one person may wear more than one hat. For instance, Test Engineers may also wear the hat of Technical Analyst, Test Build Manager and Test Configuration Manager.

Q62. What is a Test/QA Team Lead?

A: The Test/QA Team Lead coordinates the testing activity, communicates testing status to management and manages the test team.

Q63. What is a Test Engineer?

A: Test Engineers are engineers who specialize in testing. We, test engineers, create test cases, procedures, scripts and generate data. We execute test procedures and scripts, analyze standards of measurements, evaluate results of system/integration/regression testing. We also…

·         Speed up the work of the development staff;

·         Reduce your organization’s risk of legal liability;

·         Give you the evidence that your software is correct and operates properly;

·         Improve problem tracking and reporting;

·         Maximize the value of your software;

·         Maximize the value of the devices that use it;

·         Assure the successful launch of your product by discovering bugs and design flaws, before users get discouraged, before shareholders lose their cool and before employees get bogged down;

·         Help the work of your development staff, so the development team can devote its time to building up your product;

·         Promote continual improvement;

·         Provide documentation required by FDA, FAA, other regulatory agencies and your customers;

·         Save money by discovering defects ‘early’ in the design process, before failures occur in production, or in the field;

·         Save the reputation of your company by discovering bugs and design flaws before they damage that reputation.

Q64. What is a Test Build Manager?

A: Test Build Managers deliver current software versions to the test environment, install the application’s software and apply software patches to both the application and the operating system, and set up, maintain and back up test environment hardware. Depending on the project, one person may wear more than one hat. For instance, a Test Engineer may also wear the hat of a Test Build Manager.

Q65. What is a System Administrator?

A: Test Build Managers, System Administrators and Database Administrators deliver current software versions to the test environment, install the application’s software and apply software patches to both the application and the operating system, and set up, maintain and back up test environment hardware. Depending on the project, one person may wear more than one hat. For instance, a Test Engineer may also wear the hat of a System Administrator.

Q66. What is a Database Administrator?

A: Test Build Managers, System Administrators and Database Administrators deliver current software versions to the test environment, install the application’s software and apply software patches to both the application and the operating system, and set up, maintain and back up test environment hardware. Depending on the project, one person may wear more than one hat. For instance, a Test Engineer may also wear the hat of a Database Administrator.

Q67. What is a Technical Analyst?

A: Technical Analysts perform test assessments and validate system/functional test requirements. Depending on the project, one person may wear more than one hat. For instance, Test Engineers may also wear the hat of a Technical Analyst.

Q68. What is a Test Configuration Manager?

A: Test Configuration Managers maintain test environments, scripts, software and test data. Depending on the project, one person may wear more than one hat. For instance, Test Engineers may also wear the hat of a Test Configuration Manager.

Q69. What is a test schedule?

A: The test schedule identifies all tasks required for a successful testing effort, the schedule of all test activities, and the resource requirements.

Q70. What is software testing methodology?

A: One software testing methodology is the use of a three-step process of…

1.      Creating a test strategy;

2.      Creating a test plan/design; and

3.      Executing tests.

This methodology can be used and molded to your organization’s needs. Rob Davis believes that using this methodology is important in the development and in ongoing maintenance of his customers’ applications.

Q71. What is the general testing process?

A: The general testing process is the creation of a test strategy (which sometimes includes the creation of test cases), creation of a test plan/design (which usually includes test cases and test procedures) and the execution of tests.

Q72. How do you create a test strategy?

A: The test strategy is a formal description of how a software product will be tested. A test strategy is developed for all levels of testing, as required. The test team analyzes the requirements, writes the test strategy and reviews the plan with the project team. The test plan may include test cases, conditions, and the test environment, a list of related tasks, pass/fail criteria and risk assessment.

Inputs for this process:

·         A description of the required hardware and software components, including test tools. This information comes from the test environment, including test tool data.

·         A description of roles and responsibilities of the resources required for the test and schedule constraints. This information comes from man-hours and schedules.

·         Testing methodology. This is based on known standards.

·         Functional and technical requirements of the application. This information comes from requirements, change request, technical and functional design documents.

·         Requirements that the system cannot provide, e.g. system limitations.

Outputs for this process:

·         An approved and signed-off test strategy document and test plan, including test cases.

·         Testing issues requiring resolution. Usually this requires additional negotiation at the project management level.

Q73. How do you create a test plan/design?

A: Test scenarios and/or cases are prepared by reviewing functional requirements of the release and preparing logical groups of functions that can be further broken into test procedures. Test procedures define test conditions, data to be used for testing and expected results, including database updates, file outputs, report results. Generally speaking…

Test cases and scenarios are designed to represent both typical and unusual situations that may occur in the application.

Test engineers define unit test requirements and unit test cases. Test engineers also execute unit test cases.

It is the test team that, with assistance of developers and clients, develops test cases and scenarios for integration and system testing.

Test scenarios are executed through the use of test procedures or scripts.

Test procedures or scripts define a series of steps necessary to perform one or more test scenarios.

Test procedures or scripts include the specific data that will be used for testing the process or transaction.

Test procedures or scripts may cover multiple test scenarios.

Test scripts are mapped back to the requirements and traceability matrices are used to ensure each test is within scope.

Test data is captured and baselined prior to testing. This data serves as the foundation for unit and system testing and is used to exercise system functionality in a controlled environment.

Some output data is also base-lined for future comparison. Base-lined data is used to support future application maintenance via regression testing.

A pretest meeting is held to assess the readiness of the application and the environment and data to be tested. A test readiness document is created to indicate the status of the entrance criteria of the release.

Inputs for this process:

Approved Test Strategy Document.

Test tools, or automated test tools, if applicable.

Previously developed scripts, if applicable.

Test documentation problems uncovered as a result of testing.

A good understanding of software complexity and module path coverage, derived from general and detailed design documents, e.g. software design document, source code and software complexity data.

Outputs for this process:

Approved documents of test scenarios, test cases, test conditions and test data.

Reports of software design issues, given to software developers for correction.

Q74. How do you execute tests?

A: Execution of tests is completed by following the test documents in a methodical manner. As each test procedure is performed, an entry is recorded in a test execution log to note the execution of the procedure and whether or not the test procedure uncovered any defects. Checkpoint meetings are held throughout the execution phase. Checkpoint meetings are held daily, if required, to address and discuss testing issues, status and activities.
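A minimal sketch of what one entry in such a test execution log might look like, written to a CSV file from Python; the field names and values are assumptions for this example, not any particular tool’s format.

import csv
import datetime

log_entry = {
    "procedure_id": "TP-014",
    "executed_by": "test engineer",
    "executed_on": datetime.date.today().isoformat(),
    "result": "fail",                 # pass / fail
    "defect_id": "BUG-1042",          # left blank when no defect was uncovered
    "notes": "Step 3 showed an error dialog instead of the summary screen",
}

with open("test_execution_log.csv", "a", newline="") as log_file:
    writer = csv.DictWriter(log_file, fieldnames=log_entry.keys())
    if log_file.tell() == 0:          # write the header only when the file is new
        writer.writeheader()
    writer.writerow(log_entry)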

The output from the execution of test procedures is known as test results. Test results are evaluated by test engineers to determine whether the expected results have been obtained. All discrepancies/anomalies are logged and discussed with the software team lead, hardware test lead, programmers and software engineers, and documented for further investigation and resolution. Every company has a different process for logging and reporting bugs/defects uncovered during testing.

Pass/fail criteria are used to determine the severity of a problem, and results are recorded in a test summary report. The severity of a problem found during system testing is defined in accordance with the customer’s risk assessment and recorded in their selected tracking tool.

Proposed fixes are delivered to the testing environment, based on the severity of the problem. Fixes are regression tested, and fixes that pass are migrated to a new baseline. Following completion of the test, members of the test team prepare a summary report. The summary report is reviewed by the Project Manager, Software QA Manager and/or Test Team Lead.

After a particular level of testing has been certified, it is the responsibility of the Configuration Manager to coordinate the migration of the release software components to the next test level, as documented in the Configuration Management Plan. The software is only migrated to the production environment after the Project Manager’s formal acceptance.

The test team reviews test document problems identified during testing and updates documents where appropriate.

Inputs for this process:

Approved test documents, e.g. Test Plan, Test Cases, Test Procedures.

Test tools, including automated test tools, if applicable.

Developed scripts.

Changes to the design, i.e. Change Request Documents.

Test data.

Availability of the test team and project team.

General and Detailed Design Documents, i.e. Requirements Document, Software Design Document.

Software that has been migrated to the test environment, i.e. unit tested code, via the Configuration/Build Manager.

Test Readiness Document.

Document Updates.

Outputs for this process:

Log and summary of the test results. Usually this is part of the Test Report. This needs to be approved and signed-off with revised testing deliverables.

Changes to the code, also known as test fixes.

Test document problems uncovered as a result of testing. Examples are Requirements document and Design Document problems.

Reports on software design issues, given to software developers for correction. Examples are bug reports on code issues.

Formal record of test incidents, usually part of problem tracking.

Base-lined package, also known as tested source and object code, ready for migration to the next level.

Q75. What testing approaches can you tell me about?

A: Each of the following represents a different testing approach:

Black box testing,

White box testing,

Unit testing,

Incremental testing,

Integration testing,

Functional testing,

System testing,

End-to-end testing,

Sanity testing,

Regression testing,

Acceptance testing,

Load testing,

Performance testing,

Usability testing,

Install/uninstall testing,

Recovery testing,

Security testing,

Compatibility testing,

Exploratory testing, ad-hoc testing,

User acceptance testing,

Comparison testing,

Alpha testing,

Beta testing, and

Mutation testing.

Q76. What is stress testing?

A: Stress testing is testing that investigates the behavior of software (and hardware) under extraordinary operating conditions. For example, when a web server is stress tested, testing aims to find out how many users can be on-line at the same time without crashing the server. Stress testing tests the stability of a given system or entity. It tests something beyond its normal operational capacity, in order to observe any negative results. For example, a web server may be stress tested using scripts, bots, and various denial-of-service tools.

Q77. What is load testing?

A: Load testing simulates the expected usage of a software program, by simulating multiple users that access the program’s services concurrently. Load testing is most useful and most relevant for multi-user systems, client/server models, including web servers. For example, the load placed on the system is increased above normal usage patterns, in order to test the system’s response at peak loads.
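A very small Python sketch of the “simulate many concurrent users” idea follows. The URL is a placeholder for the system under test; real load tests normally use dedicated tools, but the principle is the same: fire concurrent requests and measure response times.

import time
from concurrent.futures import ThreadPoolExecutor
from urllib.request import urlopen

URL = "http://localhost:8080/"        # placeholder for the system under test

def one_request(_):
    start = time.time()
    with urlopen(URL, timeout=10) as response:
        response.read()
    return time.time() - start

def run_load(concurrent_users=50):
    with ThreadPoolExecutor(max_workers=concurrent_users) as pool:
        timings = list(pool.map(one_request, range(concurrent_users)))
    print(f"{concurrent_users} users: average {sum(timings) / len(timings):.2f}s, "
          f"worst {max(timings):.2f}s")

if __name__ == "__main__":
    run_load(50)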

Q79. What is the difference between performance testing and load testing?

A: Load testing is a blanket term that is used in many different ways across the professional software testing community. The term, load testing, is often used synonymously with stress testing, performance testing, reliability testing, and volume testing. Load testing generally stops short of stress testing. During stress testing, the load is so great that errors are the expected results, though there is gray area in between stress testing and load testing.

Q80. What is the difference between reliability testing and load testing?

A: Load testing is a blanket term that is used in many different ways across the professional software testing community. The term, load testing, is often used synonymously with stress testing, performance testing, reliability testing, and volume testing. Load testing generally stops short of stress testing. During stress testing, the load is so great that errors are the expected results, though there is gray area in between stress testing and load testing.

Q81. What is the difference between volume testing and load testing?

A: Load testing is a blanket term that is used in many different ways across the professional software testing community. The term, load testing, is often used synonymously with stress testing, performance testing, reliability testing, and volume testing. Load testing generally stops short of stress testing. During stress testing, the load is so great that errors are the expected results, though there is gray area in between stress testing and load testing.

Q82. What is incremental testing?

A: Incremental testing is partial testing of an incomplete product. The goal of incremental testing is to provide an early feedback to software developers.

Q83. What is software testing?

A: Software testing is a process that identifies the correctness, completeness, and quality of software. Strictly speaking, though, testing cannot establish the correctness of software; it can find defects, but it cannot prove there are no defects.

Q84. What is automated testing?

A: Automated testing is a formally specified and controlled method of testing, in which tests are executed by tools rather than manually.

Q85. What is alpha testing?

A: Alpha testing is the final testing before the software is released to the general public. First (and this is called the first phase of alpha testing), the software is tested by in-house developers. They use either debugger software or hardware-assisted debuggers. The goal is to catch bugs quickly. Then (and this is called the second stage of alpha testing), the software is handed over to us, the software QA staff, for additional testing in an environment that is similar to the intended use.

Q86. What is beta testing?

A: Following alpha testing, “beta versions” of the software are released to a group of people, and limited public tests are performed, so that further testing can ensure the product has few bugs. Other times, beta versions are made available to the general public, in order to receive as much feedback as possible. The goal is to benefit the maximum number of future users.

Q87. What is the difference between alpha and beta testing?

A: Alpha testing is performed by in-house developers and software QA personnel. Beta testing is performed by a few select prospective customers or the general public.

Q88. What is clear box testing?

A: Clear box testing is the same as white box testing. It is a testing approach that examines the application’s program structure, and derives test cases from the application’s program logic.

Q89. What is boundary value analysis?

A: Boundary value analysis is a technique for test data selection. A test engineer chooses values that lie along data extremes. Boundary values include maximum, minimum, just inside boundaries, just outside boundaries, typical values, and error values. The expectation is that, if a system works correctly for these extreme or special values, then it will work correctly for all values in between. An effective way to test code is to exercise it at its natural boundaries.
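For example, for a field that must accept integers from 1 to 100 (a requirement assumed purely for illustration), boundary value analysis selects values at and around the two boundaries, as in this Python sketch.

def accepts_quantity(value: int) -> bool:
    # The rule under test: quantities from 1 to 100 inclusive are valid.
    return 1 <= value <= 100

boundary_values = {
    0: False,     # just outside the lower boundary
    1: True,      # minimum
    2: True,      # just inside the lower boundary
    50: True,     # typical value
    99: True,     # just inside the upper boundary
    100: True,    # maximum
    101: False,   # just outside the upper boundary
}

for value, expected in boundary_values.items():
    assert accepts_quantity(value) == expected, f"boundary check failed at {value}"
print("all boundary checks passed")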

Q90. What is ad hoc testing?

A: Ad hoc testing is a testing approach; it is the least formal testing approach.

Q91. What is gamma testing?

A: Gamma testing is testing of software that has all the required features but did not go through all the in-house quality checks. Cynics tend to refer to such software releases as “gamma testing”.

Q92. What is glass box testing?

A: Glass box testing is the same as white box testing. It is a testing approach that examines the application’s program structure, and derives test cases from the application’s program logic.

Q93. What is open box testing?

A: Open box testing is the same as white box testing. It is a testing approach that examines the application’s program structure, and derives test cases from the application’s program logic.

Q94. What is black box testing?

A: Black box testing is a type of testing that considers only externally visible behavior. Black box testing considers neither the code itself nor the “inner workings” of the software.

Q95. What is functional testing?

A: Functional testing is the same as black box testing, a type of testing that considers only externally visible behavior and considers neither the code itself nor the “inner workings” of the software.

Q96. What is closed box testing?

A: Closed box testing is the same as black box testing, a type of testing that considers only externally visible behavior and considers neither the code itself nor the “inner workings” of the software.

Q97. What is bottom-up testing?

A: Bottom-up testing is a technique for integration testing. Because low-level components are tested first, a test engineer creates and uses test drivers that stand in for the higher-level components which have not yet been developed. The objective of bottom-up testing is to call low-level components first, for testing purposes.
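Here is a small Python sketch of a test driver: the low-level component (a tax calculation routine, invented for this example) is exercised by a driver that stands in for the higher-level code that has not been written yet.

def calculate_tax(amount: float, rate: float = 0.08) -> float:
    # Low-level component under test.
    return round(amount * rate, 2)

def driver():
    # Test driver: plays the role of the missing higher-level caller.
    cases = [(100.00, 8.00), (0.00, 0.00), (19.99, 1.60)]
    for amount, expected in cases:
        actual = calculate_tax(amount)
        status = "PASS" if actual == expected else "FAIL"
        print(f"{status}: calculate_tax({amount}) = {actual}, expected {expected}")

if __name__ == "__main__":
    driver()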

Q98. What is software quality?

A: The quality of the software does vary widely from system to system. Some common quality attributes are stability, usability, reliability, portability, and maintainability. See quality standard ISO 9126 for more information on this subject.

Q99. What do test case templates look like?

A: Software test cases are documented in a template that describes inputs, actions, or events, and their expected results, in order to determine if all features of an application are working correctly. Test case templates contain all the particulars of every test case. Often these templates take the form of a table. One example is a 6-column table, where column 1 is the “Test Case ID Number”, column 2 is the “Test Case Name”, column 3 is the “Test Objective”, column 4 is the “Test Conditions/Setup”, column 5 is the “Input Data Requirements/Steps”, and column 6 is the “Expected Results”. All documents should be written to a certain standard and template. Standards and templates maintain document uniformity. They also help in learning where information is located, making it easier for users to find what they want. Lastly, with standards and templates, information will not be accidentally omitted from a document.

Q100. What is a software fault?

A: Software faults are hidden programming errors. Software faults are errors in the correctness of the semantics of computer programs.

Q101. What is software failure?

A: Software failure occurs when the software does not do what the user expects to see.

Q102. What is the difference between a software fault and a software failure?

A: Software failure occurs when the software does not do what the user expects to see. A software fault, on the other hand, is a hidden programming error. A software fault becomes a software failure only when the exact computation conditions are met and the faulty portion of the code is executed on the CPU. This can occur during normal usage, when the software is ported to a different hardware platform, when the software is ported to a different compiler, or when the software gets extended.
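The distinction can be seen in a tiny Python example: the fault (no guard against an empty list) is always present in the code, but a failure only occurs when the input conditions expose it.

def average(values):
    return sum(values) / len(values)   # fault: nothing guards against an empty list

print(average([2, 4, 6]))   # normal usage: the fault stays hidden, no failure
print(average([]))          # this input triggers the failure: ZeroDivisionError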

WinRunner Testing Process

 

WinRunner Interview Questions And Answers

1) Explain WinRunner testing process?

a) WinRunner testing process involves six main stages

i.Create GUI Map File so that WinRunner can recognize the GUI objects in the application being tested

ii.Create test scripts by recording, programming, or a combination of both. While recording tests, insert checkpoints where you want to check the response of the application being tested.

iii.Debug Test: run tests in Debug mode to make sure they run smoothly

iv.Run Tests: run tests in Verify mode to test your application.

v.View Results: determines the success or failure of the tests.

vi.Report Defects: If a test run fails due to a defect in the application being tested, you can report information about the defect directly from the Test Results window.

2) What is contained in the GUI map?

a) WinRunner stores information it learns about a window or object in a GUI Map. When WinRunner runs a test, it uses the GUI map to locate objects. It reads an object’s description in the GUI map and then looks for an object with the same properties in the application being tested. Each of these objects in the GUI Map file will be having a logical name and a physical description.

b) There are 2 types of GUI Map files.

i.Global GUI Map file: a single GUI Map file for the entire application

ii.GUI Map File per Test: WinRunner automatically creates a GUI Map file for each test created.

3) How does WinRunner recognize objects on the application?

a) WinRunner uses the GUI Map file to recognize objects on the application. When WinRunner runs a test, it uses the GUI map to locate objects. It reads an object’s description in the GUI map and then looks for an object with the same properties in the application being tested.

4) Have you created test scripts and what is contained in the test scripts?

a) Yes I have created test scripts. It contains the statement in Mercury Interactive’s Test Script Language (TSL). These statements appear as a test script in a test window. You can then enhance your recorded test script, either by typing in additional TSL functions and programming elements or by using WinRunner’s visual programming tool, the Function Generator.

5) How does WinRunner evaluate test results?

a) Following each test run, WinRunner displays the results in a report. The report details all the major events that occurred during the run, such as checkpoints, error messages, system messages, or user messages. If mismatches are detected at checkpoints during the test run, you can view the expected results and the actual results from the Test Results window.


6) Have you performed debugging of the scripts?

a) Yes, I have performed debugging of scripts. We can debug the script by executing the script in the debug mode. We can also debug script using the Step, Step Into, Step out functionalities provided by the WinRunner.

7) How do you run your test scripts?

a) We run tests in Verify mode to test your application. Each time WinRunner encounters a checkpoint in the test script, it compares the current data of the application being tested to the expected data captured earlier. If any mismatches are found, WinRunner captures them as actual results.

8) How do you analyze results and report the defects?

a) Following each test run, WinRunner displays the results in a report. The report details all the major events that occurred during the run, such as checkpoints, error messages, system messages, or user messages. If mismatches are detected at checkpoints during the test run, you can view the expected results and the actual results from the Test Results window. If a test run fails due to a defect in the application being tested, you can report information about the defect directly from the Test Results window. This information is sent via e-mail to the quality assurance manager, who tracks the defect until it is fixed.

9) What are the different modes of recording?

a) There are two type of recording in WinRunner.

i.Context Sensitive recording records the operations you perform on your application by identifying Graphical User Interface (GUI) objects.

ii.Analog recording records keyboard input, mouse clicks, and the precise x- and y-coordinates traveled by the mouse pointer across the screen.

10) What is the purpose of loading WinRunner Add-Ins?

a) Add-Ins are used in WinRunner to load functions specific to the particular add-in into memory. While creating a script, only those functions in the selected add-in will be listed in the function generator, and while executing the script only those functions in the loaded add-in will be executed; otherwise WinRunner will give an error message saying it does not recognize the function.

 

11.How to modify the logical name or physical description of the object?

Using GUI editor

12. How does WinRunner handle varying window labels?

Using regular expressions

13.How do you write messages to the report?

report_msg

14. How you used WinRunner in your project?

Yes, I have been using WinRunner for creating automated scripts for GUI, functional and regression testing of the AUT.

15. Explain WinRunner testing process?

WinRunner testing process involves six main stages

i.Create GUI Map File so that WinRunner can recognize the GUI objects in the application being tested

ii.Create test scripts by recording, programming, or a combination of both. While recording tests, insert checkpoints where you want to check the response of the application being tested.

iiiDebug Test: run tests in Debug mode to make sure they run smoothly

iv.Run Tests: run tests in Verify mode to test your application.

v.View Results: determines the success or failure of the tests.

vi.Report Defects: If a test run fails due to a defect in the application being tested, you can report information about the defect directly from the Test Results window.

16. What is contained in the GUI map?

a) WinRunner stores information it learns about a window or object in a GUI Map. When WinRunner runs a test, it uses the GUI map to locate objects. It reads an object's description in the GUI map and then looks for an object with the same properties in the application being tested. Each of these objects in the GUI Map file has a logical name and a physical description (see the illustration after this list).

b) There are 2 types of GUI Map files.

i.Global GUI Map file: a single GUI Map file for the entire application

ii.GUI Map File per Test: WinRunner automatically creates a GUI Map file for each test created.
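
As a rough, hedged illustration of what such an entry holds, a logical name paired with a physical description might look something like the following in the GUI Map Editor (schematic only, not the exact .gui file syntax):

OK:
{
   class: push_button,
   label: "OK"
}

Here "OK" is the logical name used in test scripts, and the property list in braces is the physical description WinRunner matches against the application.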

17. How does WinRunner recognize objects on the application?

a) WinRunner uses the GUI Map file to recognize objects on the application. When WinRunner runs a test, it uses the GUI map to locate objects. It reads an object's description in the GUI map and then looks for an object with the same properties in the application being tested.

18. Have you created test scripts and what is contained in the test scripts?

a) Yes, I have created test scripts. They contain statements in Mercury Interactive's Test Script Language (TSL). These statements appear as a test script in a test window. You can then enhance your recorded test script, either by typing in additional TSL functions and programming elements or by using WinRunner's visual programming tool, the Function Generator.

19. How does WinRunner evaluate test results?

a) Following each test run, WinRunner displays the results in a report. The report details all the major events that occurred during the run, such as checkpoints, error messages, system messages, or user messages. If mismatches are detected at checkpoints during the test run, you can view the expected results and the actual results from the Test Results window.

20. Have you performed debugging of the scripts?

a) Yes, I have performed debugging of scripts. We can debug a script by executing it in Debug mode. We can also debug a script using the Step, Step Into, and Step Out functionality provided by WinRunner.

31. How do you view the contents of the GUI map?

a) The GUI Map Editor displays the contents of a GUI Map. We can invoke the GUI Map Editor from the Tools menu in WinRunner. It displays the various GUI Map files created and the windows and objects learned into them, with their logical names and physical descriptions.

32. When you create a GUI map, do you record all the objects or only specific objects?

a) If we learn a whole window, WinRunner automatically learns all the objects in that window. Otherwise, we identify only those objects in the window that need to be learned, since we will be working with only those objects while creating scripts.

33. What is the purpose of set_window command?

a) The set_window command sets the focus to the specified window. We use this command to set the focus to the required window before executing tests on that window.

Syntax: set_window(window, time);

Here window is the logical name of the window, and time is the number of seconds the execution waits for the given window to come into focus.
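
For example, a script typically sets the focus to a window before operating on its objects; the sketch below uses object names from the sample Flight Reservation application that ships with WinRunner:

set_window("Flight Reservation", 10);   # wait up to 10 seconds for the window to be in focus
button_press("Insert Order");           # Context Sensitive operation on an object in that window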

34. How do you load GUI map?

a) We can load a GUI Map by using the GUI_load command.

Syntax: GUI_load(file_name);

35. What is the disadvantage of loading the GUI maps through start up scripts?

a) If we are using a single GUI Map file for the entire AUT, the memory used by the GUI Map may be quite high.

b) If there is any change in an object that has been learned, WinRunner will not be able to recognize it, as the change is not in the GUI Map file loaded in memory. So we will have to learn the object again, update the GUI Map file, and reload it.

36. How do you unload the GUI map?

a) We can use GUI_close to unload a specific GUI Map file, or we can use the GUI_close_all command to unload all the GUI Map files loaded in memory.

Syntax: GUI_close(file_name); or GUI_close_all();
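
A small sketch of loading a shared GUI map at the start of a test and unloading it at the end; the file path is hypothetical:

GUI_load("c:\\guimaps\\flight.gui");     # load the shared GUI map into memory
# ... statements that act on objects defined in flight.gui ...
GUI_close("c:\\guimaps\\flight.gui");    # or GUI_close_all(); to unload every loaded map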

37. What actually happens when you load GUI map?

a) When we load a GUI Map file, the information about the windows and the objects, with their logical names and physical descriptions, is loaded into memory. So when WinRunner executes a script on a particular window, it can identify the objects using this information loaded in memory.

38. What is the purpose of the temp GUI map file?

a) While recording a script, WinRunner learns objects and windows by itself. These are stored in the temporary GUI Map file. We can specify in the General Options whether this temporary GUI Map file should be loaded each time.

39. What is the extension of gui map file?

a) The extension for a GUI Map file is “.gui”.

40. How do you find an object in a GUI map?

a) The GUI Map Editor provides Find and Show buttons.

i. To locate in the application a particular object listed in the GUI Map file, select the object in the GUI Map Editor and click the Show button. The selected object blinks in the application.

ii. To find a particular object in a GUI Map file, click the Find button, which gives you a pointer to select the object in the application. Once the object is selected, if it has been learned into the GUI Map file, it is highlighted in the GUI Map file.

41. What different actions are performed by find and show button?

a) Show: to locate in the application a particular object listed in the GUI Map file, select the object in the GUI Map Editor and click the Show button. The selected object blinks in the application.

b) Find: to find a particular object in a GUI Map file, click the Find button, which gives you a pointer to select the object in the application. Once the object is selected, if it has been learned into the GUI Map file, it is highlighted in the GUI Map file.

42. How do you identify which files are loaded in the GUI map?

a) The GUI Map Editor has a drop down “GUI File” displaying all the GUI Map files loaded into the memory.

43. How do you modify the logical name or the physical description of the objects in GUI map?

a) You can modify the logical name or the physical description of an object in a GUI map file using the GUI Map Editor.


44. When do you feel you need to modify the logical name?

a) Changing the logical name of an object is useful when the assigned logical name is not sufficiently descriptive or is too long.

45. When it is appropriate to change physical description?

a) Changing the physical description is necessary when the property value of an object changes.

46. How WinRunner handles varying window labels?

a) We can handle varying window labels using regular expressions. WinRunner uses two “hidden” properties in order to use regular expression in an object’s physical description. These properties are regexp_label and regexp_MSW_class.

i.The regexp_label property is used for windows only. It operates “behind the scenes” to insert a regular expression into a window’s label description.

ii.The regexp_MSW_class property inserts a regular expression into an object’s MSW_class. It is obligatory for all types of windows and for the object class object.


47. What is the purpose of regexp_label property and regexp_MSW_class property?

a) The regexp_label property is used for windows only. It operates “behind the scenes” to insert a regular expression into a window’s label description.

b) The regexp_MSW_class property inserts a regular expression into an object’s MSW_class. It is obligatory for all types of windows and for the object class object.

48. How do you suppress a regular expression?

a) We can suppress the regular expression of a window by replacing the regexp_label property with label property.
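
As a hedged illustration, a window whose label varies (for example, "Flight Reservation" followed by an order number) might carry a physical description along these lines; the notation is schematic rather than the literal .gui file format:

{
   class: window,
   regexp_label: "Flight Reservation.*"
}

Replacing regexp_label with label (and a literal value) restores an exact-match comparison, which is how the regular expression is suppressed.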

49. How do you copy and move objects between different GUI map files?

a) We can copy and move objects between different GUI Map files using the GUI Map Editor. The steps to be followed are:

i. Choose Tools > GUI Map Editor to open the GUI Map Editor.

ii.Choose View > GUI Files.

iii.Click Expand in the GUI Map Editor. The dialog box expands to display two GUI map files simultaneously.

iv. View a different GUI map file on each side of the dialog box by clicking the file names in the GUI File lists.

v.In one file, select the objects you want to copy or move. Use the Shift key and/or Control key to select multiple objects. To select all objects in a GUI map file, choose Edit > Select All.

vi.Click Copy or Move.

vii.To restore the GUI Map Editor to its original size, click Collapse.

50. How do you select multiple objects during merging the files?

a) Use the Shift key and/or Control key to select multiple objects. To select all objects in a GUI map file, choose Edit > Select All.

51. How do you clear a GUI map file?

a) We can clear a GUI Map file using the “Clear All” option in the GUI Map Editor.

52. How do you filter the objects in the GUI map?

a) The GUI Map Editor has a Filter option, which provides three different filtering criteria:

i. Logical name displays only objects with the specified logical name.

ii. Physical description displays only objects matching the specified physical description. Use any substring belonging to the physical description.

iii. Class displays only objects of the specified class, such as all the push buttons.

53. How do you configure the GUI map?

a) When WinRunner learns the description of a GUI object, it does not learn all its properties. Instead, it learns the minimum number of properties to provide a unique identification of the object.

b) Many applications also contain custom GUI objects. A custom object is any object not belonging to one of the standard classes used by WinRunner. These objects are therefore assigned to the generic “object” class. When WinRunner records an operation on a custom object, it generates obj_mouse_ statements in the test script.

c) If a custom object is similar to a standard object, you can map it to one of the standard classes. You can also configure the properties WinRunner uses to identify a custom object during Context Sensitive testing. The mapping and the configuration you set are valid only for the current WinRunner session. To make the mapping and the configuration permanent, you must add configuration statements to your startup test script.

54. What is the purpose of GUI map configuration?

a) GUI Map configuration is used to map a custom object to a standard object.

55. How do you make the configuration and mappings permanent?

a) The mapping and the configuration you set are valid only for the current WinRunner session. To make the mapping and the configuration permanent, you must add configuration statements to your startup test script.

56. What is the purpose of GUI spy?

a) Using the GUI Spy, you can view the properties of any GUI object on your desktop. You use the Spy pointer to point to an object, and the GUI Spy displays the properties and their values in the GUI Spy dialog box. You can choose to view all the properties of an object, or only the selected set of properties that WinRunner learns.

57. What is the purpose of obligatory and optional properties of the objects?

a) For each class, WinRunner learns a set of default properties. Each default property is classified “obligatory” or “optional”.

i. An obligatory property is always learned (if it exists).

ii.An optional property is used only if the obligatory properties do not provide unique identification of an object. These optional properties are stored in a list. WinRunner selects the minimum number of properties from this list that are necessary to identify the object. It begins with the first property in the list, and continues, if necessary, to add properties to the description until it obtains unique identification for the object.

58. When the optional properties are learned?

a) An optional property is used only if the obligatory properties do not provide unique identification of an object.

59. What is the purpose of location indicator and index indicator in GUI map configuration?

a) In cases where the obligatory and optional properties do not uniquely identify an object, WinRunner uses a selector to differentiate between them. Two types of selectors are available:

i. A location selector uses the spatial position of objects.

1. The location selector uses the spatial order of objects within the window, from the top left to the bottom right corners, to differentiate among objects with the same description.

ii. An index selector uses a unique number to identify the object in a window.

1. The index selector uses numbers assigned at the time of creation of objects to identify the object in a window. Use this selector if the location of objects with the same description may change within a window.
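
Schematically, two otherwise identical buttons might end up with physical descriptions distinguished only by the selector; the values below are illustrative, not taken from a real GUI map:

{ class: push_button, label: "Update", location: 0 }
{ class: push_button, label: "Update", location: 1 }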

60. How do you handle custom objects?

a) A custom object is any GUI object not belonging to one of the standard classes used by WinRunner. WinRunner learns such objects under the generic “object” class. WinRunner records operations on custom objects using obj_mouse_ statements.

b) If a custom object is similar to a standard object, you can map it to one of the standard classes. You can also configure the properties WinRunner uses to identify a custom object during Context Sensitive testing.

61. What is the name of custom class in WinRunner and what methods it applies on the custom objects?

a) WinRunner learns custom class objects under the generic “object” class. WinRunner records operations on custom objects using obj_ statements.

62. In a situation when obligatory and optional both the properties cannot uniquely identify an object what method WinRunner applies?

a) In cases where the obligatory and optional properties do not uniquely identify an object, WinRunner uses a selector to differentiate between them. Two types of selectors are available:

i. A location selector uses the spatial position of objects.

ii. An index selector uses a unique number to identify the object in a window.

63. What is the purpose of the different record methods: 1) Record, 2) Pass Up, 3) As Object, and 4) Ignore?

a) Record instructs WinRunner to record all operations performed on a GUI object. This is the default record method for all classes. (The only exception is the static class (static text), for which the default is Pass Up.)

b) Pass Up instructs WinRunner to record an operation performed on this class as an operation performed on the element containing the object. Usually this element is a window, and the operation is recorded as win_mouse_click.

c) As Object instructs WinRunner to record all operations performed on a GUI object as though its class were “object” class.

d) Ignore instructs WinRunner to disregard all operations performed on the class.

64. How do you find out which is the start up file in WinRunner?

a) The test script name in the Startup Test box in the Environment tab in the General Options dialog box is the start up file in WinRunner.

65. What are the virtual objects and how do you learn them?

a) Applications may contain bitmaps that look and behave like GUI objects. WinRunner records operations on these bitmaps using win_mouse_click statements. By defining a bitmap as a virtual object, you can instruct WinRunner to treat it like a GUI object such as a push button, when you record and run tests.

b) Using the Virtual Object wizard, you can assign a bitmap to a standard object class, define the coordinates of that object, and assign it a logical name.

To define a virtual object using the Virtual Object wizard:

i. Choose Tools > Virtual Object Wizard. The Virtual Object wizard opens. Click Next.

ii. In the Class list, select a class for the new virtual object. If you select the list class, specify the number of visible rows displayed in the window; for a table class, select the number of visible rows and columns. Click Next.

iii. Click Mark Object. Use the crosshairs pointer to select the area of the virtual object. You can use the arrow keys to make precise adjustments to the area you define with the crosshairs. Press Enter or click the right mouse button to display the virtual object’s coordinates in the wizard. If the object marked is visible on the screen, you can click the Highlight button to view it. Click Next.

iv.Assign a logical name to the virtual object. This is the name that appears in the test script when you record on the virtual object. If the object contains text that WinRunner can read, the wizard suggests using this text for the logical name. Otherwise, WinRunner suggests virtual_object, virtual_push_button, virtual_list, etc.

v. You can accept the wizard’s suggestion or type in a different name. WinRunner checks that there are no other objects in the GUI map with the same name before confirming your choice. Click Next.

66. How did you create your test scripts: 1) by recording or 2) by programming?

a) Programming. I have done complete programming only, absolutely no recording.

67. What are the two modes of recording?

a) There are 2 modes of recording in WinRunner

i.Context Sensitive recording records the operations you perform on your application by identifying Graphical User Interface (GUI) objects.

ii.Analog recording records keyboard input, mouse clicks, and the precise x- and y-coordinates traveled by the mouse pointer across the screen.

68. What is a checkpoint and what are different types of checkpoints?

a) Checkpoints allow you to compare the current behavior of the application being tested to its behavior in an earlier version.

You can add four types of checkpoints to your test scripts:

i. GUI checkpoints verify information about GUI objects. For example, you can check that a button is enabled or see which item is selected in a list.

ii. Bitmap checkpoints take a “snapshot” of a window or area of your application and compare this to an image captured in an earlier version.

iii. Text checkpoints read text in GUI objects and in bitmaps and enable you to verify their contents.

iv. Database checkpoints check the contents and the number of rows and columns of a result set, which is based on a query you create on your database.


71. What is parameterizing?

a) In order for WinRunner to use data to drive the test, you must link the data to the test script which it drives. This is called parameterizing your test. The data is stored in a data table.

72. How do you maintain the document information of the test scripts?

a) Before creating a test, you can document information about the test in the General and Description tabs of the Test Properties dialog box. You can enter the name of the test author, the type of functionality tested, a detailed description of the test, and a reference to the relevant functional specifications document.

73. What do you verify with the GUI checkpoint for single property and what command it generates, explain syntax?

a) You can check a single property of a GUI object. For example, you can check whether a button is enabled or disabled or whether an item in a list is selected. To create a GUI checkpoint for a property value, use the Check Property dialog box to add one of the following functions to the test script:

i. button_check_info

ii. scroll_check_info

iii. edit_check_info

iv. static_check_info

v. list_check_info

vi. win_check_info

vii. obj_check_info

Syntax: button_check_info (button, property, property_value );

edit_check_info ( edit, property, property_value );
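
For example, a recorded single-property check might look like the following; the object names and expected values are illustrative:

button_check_info("Insert Order", "enabled", 1);   # verify the button is enabled
edit_check_info("Order No:", "focused", 0);        # verify the edit field does not have focus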

74. What do you verify with the GUI checkpoint for object/window and what command it generates, explain syntax?

a) You can create a GUI checkpoint to check a single object in the application being tested. You can either check the object with its default properties or you can specify which properties to check.

b) Creating a GUI Checkpoint using the Default Checks

i. You can create a GUI checkpoint that performs a default check on the property recommended by WinRunner. For example, if you create a GUI checkpoint that checks a push button, the default check verifies that the push button is enabled.

ii. To create a GUI checkpoint using default checks:

1. Choose Create > GUI Checkpoint > For Object/Window, or click the GUI Checkpoint for Object/Window button on the User toolbar. If you are recording in Analog mode, press the CHECK GUI FOR OBJECT/WINDOW softkey in order to avoid extraneous mouse movements. Note that you can press the CHECK GUI FOR OBJECT/WINDOW softkey in Context Sensitive mode as well. The WinRunner window is minimized, the mouse pointer becomes a pointing hand, and a help window opens on the screen.

2. Click an object.

3. WinRunner captures the current value of the property of the GUI object being checked and stores it in the test’s expected results folder. The WinRunner window is restored and a GUI checkpoint is inserted in the test script as an obj_check_gui statement

Syntax: win_check_gui ( window, checklist, expected_results_file, time );

c) Creating a GUI Checkpoint by Specifying which Properties to Check

d) You can specify which properties to check for an object. For example, if you create a checkpoint that checks a push button, you can choose to verify that it is in focus, instead of enabled.

e) To create a GUI checkpoint by specifying which properties to check:

i. Choose Create > GUI Checkpoint > For Object/Window, or click the GUI Checkpoint for Object/Window button on the User toolbar. If you are recording in Analog mode, press the CHECK GUI FOR OBJECT/WINDOW softkey in order to avoid extraneous mouse movements. Note that you can press the CHECK GUI FOR OBJECT/WINDOW softkey in Context Sensitive mode as well. The WinRunner window is minimized, the mouse pointer becomes a pointing hand, and a help window opens on the screen.

ii. Double-click the object or window. The Check GUI dialog box opens.

iii. Click an object name in the Objects pane. The Properties pane lists all the properties for the selected object.

iv. Select the properties you want to check.

1. To edit the expected value of a property, first select it. Next, either click the Edit Expected Value button, or double-click the value in the Expected Value column to edit it.

2. To add a check in which you specify arguments, first select the property for which you want to specify arguments. Next, either click the Specify Arguments button, or double-click in the Arguments column. Note that if an ellipsis (three dots) appears in the Arguments column, then you must specify arguments for a check on this property. (You do not need to specify arguments if a default argument is specified.) When checking standard objects, you only specify arguments for certain properties of edit and static text objects. You also specify arguments for checks on certain properties of nonstandard objects.

3. To change the viewing options for the properties of an object, use the Show Properties buttons.

4. Click OK to close the Check GUI dialog box. WinRunner captures the GUI information and stores it in the test’s expected results folder. The WinRunner window is restored and a GUI checkpoint is inserted in the test script as an obj_check_gui or a win_check_gui statement.

Syntax: win_check_gui ( window, checklist, expected_results_file, time );

obj_check_gui ( object, checklist, expected_results_file, time );

75. What do you verify with the GUI checkpoint for multiple objects and what command it generates, explain syntax?

a) To create a GUI checkpoint for two or more objects:

i. Choose Create > GUI Checkpoint > For Multiple Objects or click the GUI Checkpoint for Multiple Objects button on the User toolbar. If you are recording in Analog mode, press the CHECK GUI FOR MULTIPLE OBJECTS softkey in order to avoid extraneous mouse movements. The Create GUI Checkpoint dialog box opens.

ii. Click the Add button. The mouse pointer becomes a pointing hand and a help window opens.

iii. To add an object, click it once. If you click a window title bar or menu bar, a help window prompts you to check all the objects in the window.

iv. The pointing hand remains active. You can continue to choose objects by repeating step 3 above for each object you want to check.

v. Click the right mouse button to stop the selection process and to restore the mouse pointer to its original shape. The Create GUI Checkpoint dialog box reopens.

vi. The Objects pane contains the name of the window and objects included in the GUI checkpoint. To specify which objects to check, click an object name in the Objects pane. The Properties pane lists all the properties of the object. The default properties are selected.

1. To edit the expected value of a property, first select it. Next, either click the Edit Expected Value button, or double-click the value in the Expected Value column to edit it.

2. To add a check in which you specify arguments, first select the property for which you want to specify arguments. Next, either click the Specify Arguments button, or double-click in the Arguments column. Note that if an ellipsis appears in the Arguments column, then you must specify arguments for a check on this property. (You do not need to specify arguments if a default argument is specified.) When checking standard objects, you only specify arguments for certain properties of edit and static text objects. You also specify arguments for checks on certain properties of nonstandard objects.

3. To change the viewing options for the properties of an object, use the Show Properties buttons.

vii. To save the checklist and close the Create GUI Checkpoint dialog box, click OK. WinRunner captures the current property values of the selected GUI objects and stores it in the expected results folder. A win_check_gui statement is inserted in the test script.

Syntax: win_check_gui ( window, checklist, expected_results_file, time );

obj_check_gui ( object, checklist, expected_results_file, time );

76. What information is contained in the checklist file and in which file expected results are stored?

a) The checklist file contains information about the objects and the properties of the object we are verifying.

b) The expected results are stored in the gui*.chk file, which is kept in the test's exp folder.

77. What do you verify with the bitmap check point for object/window and what command it generates, explain syntax?

a) You can check an object, a window, or an area of a screen in your application as a bitmap. While creating a test, you indicate what you want to check. WinRunner captures the specified bitmap, stores it in the expected results folder (exp) of the test, and inserts a checkpoint in the test script. When you run the test, WinRunner compares the bitmap currently displayed in the application being tested with the expected bitmap stored earlier. In the event of a mismatch, WinRunner captures the current actual bitmap and generates a difference bitmap. By comparing the three bitmaps (expected, actual, and difference), you can identify the nature of the discrepancy.

b) When working in Context Sensitive mode, you can capture a bitmap of a window, object, or of a specified area of a screen. WinRunner inserts a checkpoint in the test script in the form of either a win_check_bitmap or obj_check_bitmap statement.

c) Note that when you record a test in Analog mode, you should press the CHECK BITMAP OF WINDOW softkey or the CHECK BITMAP OF SCREEN AREA softkey to create a bitmap checkpoint. This prevents WinRunner from recording extraneous mouse movements. If you are programming a test, you can also use the Analog function check_window to check a bitmap.

d) To capture a window or object as a bitmap:

i. Choose Create > Bitmap Checkpoint > For Object/Window or click the Bitmap Checkpoint for Object/Window button on the User toolbar. Alternatively, if you are recording in Analog mode, press the CHECK BITMAP OF OBJECT/WINDOW softkey. The WinRunner window is minimized, the mouse pointer becomes a pointing hand, and a help window opens.

ii. Point to the object or window and click it. WinRunner captures the bitmap and generates a win_check_bitmap or obj_check_bitmap statement in the script. The TSL statement generated for a window bitmap has the following syntax:

win_check_bitmap ( object, bitmap, time );

iii. For an object bitmap, the syntax is:

obj_check_bitmap ( object, bitmap, time );

iv. For example, when you click the title bar of the main window of the Flight Reservation application, the resulting statement might be:

win_check_bitmap (“Flight Reservation”, “Img2”, 1);

v. However, if you click the Date of Flight box in the same window, the statement might be:

obj_check_bitmap (“Date of Flight:”, “Img1”, 1);

Syntax: obj_check_bitmap ( object, bitmap, time [, x, y, width, height] );

78. What do you verify with the bitmap checkpoint for screen area and what command it generates, explain syntax?

a) You can define any rectangular area of the screen and capture it as a bitmap for comparison. The area can be any size: it can be part of a single window, or it can intersect several windows. The rectangle is identified by the coordinates of its upper left and lower right corners, relative to the upper left corner of the window in which the area is located. If the area intersects several windows or is part of a window with no title (for example, a popup window), its coordinates are relative to the entire screen (the root window).

b) To capture an area of the screen as a bitmap:

i. Choose Create > Bitmap Checkpoint > For Screen Area or click the Bitmap Checkpoint for Screen Area button. Alternatively, if you are recording in Analog mode, press the CHECK BITMAP OF SCREEN AREA softkey. The WinRunner window is minimized, the mouse pointer becomes a crosshairs pointer, and a help window opens.

ii. Mark the area to be captured: press the left mouse button and drag the mouse pointer until a rectangle encloses the area; then release the mouse button.

iii. Press the right mouse button to complete the operation. WinRunner captures the area and generates a win_check_bitmap statement in your script.

iv. The win_check_bitmap statement for an area of the screen has the following syntax:

win_check_bitmap ( window, bitmap, time, x, y, width, height );

79. What do you verify with the database checkpoint default and what command it generates, explain syntax?

a) By adding runtime database record checkpoints you can compare the information in your application during a test run with the corresponding record in your database. By adding standard database checkpoints to your test scripts, you can check the contents of databases in different versions of your application.

b) When you create database checkpoints, you define a query on your database, and your database checkpoint checks the values contained in the result set. The result set is set of values retrieved from the results of the query.

c) You can create runtime database record checkpoints in order to compare the values displayed in your application during the test run with the corresponding values in the database.

d) If the comparison does not meet the success criteria you specify for the checkpoint, the checkpoint fails. You can define a successful runtime database record checkpoint as one where one or more matching records were found, exactly one matching record was found, or no matching records were found.

e) You can create standard database checkpoints to compare the current values of the properties of the result set during the test run to the expected values captured during recording or otherwise set before the test run. If the expected results and the current results do not match, the database checkpoint fails. Standard database checkpoints are useful when the expected results can be established before the test run.

Syntax: db_check(checklist_file, expected_results_file);

f) You can add a runtime database record checkpoint to your test in order to compare information that appears in your application during a test run with the current value(s) in the corresponding record(s) in your database. You add runtime database record checkpoints by running the Runtime Record Checkpoint wizard. When you are finished, the wizard inserts the appropriate db_record_check statement into your script.

Syntax:

db_record_check(ChecklistFileName,SuccessConditions,RecordNumber );

ChecklistFileName A file created by WinRunner and saved in the test’s checklist folder. The file contains information about the data to be captured during the test run and its corresponding field in the database. The file is created based on the information entered in the Runtime Record Verification wizard.

SuccessConditions Contains one of the following values:

1. DVR_ONE_OR_MORE_MATCH – The checkpoint passes if one or more matching database records are found.

2. DVR_ONE_MATCH – The checkpoint passes if exactly one matching database record is found.

3. DVR_NO_MATCH – The checkpoint passes if no matching database records are found.

RecordNumber An out parameter returning the number of records in the database.
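
A hedged example of the statement the wizard might insert; the checklist file name is hypothetical:

db_record_check("dbrecord1.cvr", DVR_ONE_MATCH, record_num);   # pass only if exactly one matching record is found; record_num receives the count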

80. How do you handle dynamically changing area of the window in the bitmap checkpoints?

a) The "difference between bitmaps" setting in the Run tab of the General Options dialog defines the minimum number of pixels that constitute a bitmap mismatch.

81. What do you verify with the database check point custom and what command it generates, explain syntax?

a) When you create a custom check on a database, you create a standard database checkpoint in which you can specify which properties to check on a result set.

b) You can create a custom check on a database in order to:

i. check the contents of part or the entire result set

ii. edit the expected results of the contents of the result set

iii. count the rows in the result set

iv. count the columns in the result set

c) You can create a custom check on a database using ODBC, Microsoft Query or Data Junction.

82. What do you verify with the sync point for object/window property and what command it generates, explain syntax?

a) Synchronization compensates for inconsistencies in the performance of your application during a test run. By inserting a synchronization point in your test script, you can instruct WinRunner to suspend the test run and wait for a cue before continuing the test.

b) You can create a synchronization point that instructs WinRunner to wait for a specified object or window to appear. For example, you can tell WinRunner to wait for a window to open before performing an operation within that window, or you may want WinRunner to wait for an object to appear in order to perform an operation on that object.

c) You use the obj_exists function to create an object synchronization point, and you use the win_exists function to create a window synchronization point. These functions have the following syntax:

Syntax:

obj_exists ( object [, time ] );

win_exists ( window [, time ] );
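
A minimal synchronization sketch, assuming the sample Flight Reservation application and assuming win_exists returns E_OK when the window is found within the timeout:

if (win_exists("Flight Reservation", 30) == E_OK)   # wait up to 30 seconds
{
    set_window("Flight Reservation", 1);
    button_press("Insert Order");
}
else
    report_msg("Window did not appear within 30 seconds");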

83. What do you verify with the sync point for object/window bitmap and what command it generates, explain syntax?

a) You can create a bitmap synchronization point that waits for the bitmap of an object or a window to appear in the application being tested.

b) During a test run, WinRunner suspends test execution until the specified bitmap is redrawn, and then compares the current bitmap with the expected one captured earlier. If the bitmaps match, then WinRunner continues the test.

Syntax:

obj_wait_bitmap ( object, image, time );

win_wait_bitmap ( window, image, time );

84. What do you verify with the sync point for screen area and what command it generates, explain syntax?

a) For screen area verification we actually capture the screen area into a bitmap and verify the application screen area with the bitmap file during execution

Syntax: obj_wait_bitmap(object, image, time, x, y, width, height);

85. How do you edit checklist file and when do you need to edit the checklist file?

a) WinRunner has an Edit Checklist option under the Create menu. Select "Edit GUI Checklist" to modify a GUI checklist file and "Edit Database Checklist" to edit a database checklist file. This brings up a dialog box that gives you the option to select the checklist file to modify. There is also an option to select the scope of the checklist file, whether it is test-specific or shared. Select the checklist file and click OK, which opens a window where you can edit the properties of the objects.

86. How do you edit the expected value of an object?

a) We can modify the expected value of an object by executing the script in Update mode. We can also manually edit the gui*.chk file in the exp folder, which contains the expected values.

87. How do you modify the expected results of a GUI checkpoint?

a) We can modify the expected results of a GUI checkpoint by running the script containing the checkpoint in Update mode.

88. How do you handle ActiveX and Visual basic objects?

a) WinRunner provides add-ins for ActiveX and Visual Basic objects. When loading WinRunner, select those add-ins; they provide a set of functions to work on ActiveX and VB objects.

89. How do you create ODBC query?

a) We can create an ODBC query using the database checkpoint wizard. It provides an option to create an SQL file that uses an ODBC DSN to connect to the database. The SQL file will contain the connection string and the SQL statement.

90. How do you record a data driven test?

a) We can create data-driven tests using data from a flat file, a data table, or a database.

i. Using a flat file: we store the data in the required format in the file, access the file using the file-manipulation commands, read data from the file, and assign the data to variables (see the sketch after this list).

ii. Data table: an Excel file. We can store test data in these files and manipulate it using the 'ddt_*' functions.

iii.Database: we store test data in a database and access it using the 'db_*' functions.
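
A minimal flat-file sketch; the data file path, its comma-separated layout, and the field used are all assumptions for illustration:

file = "c:\\testdata\\orders.txt";            # hypothetical comma-separated data file
file_open(file, FO_MODE_READ);                # open the flat file for reading
while (file_getline(file, line) == E_OK)      # E_OK (0) is assumed to indicate a line was read
{
    split(line, fields, ",");                 # split the line into fields (1-based array)
    set_window("Flight Reservation", 10);
    edit_set("Name:", fields[1]);             # drive the AUT with data from the file
}
file_close(file);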

91. How do you convert a database file to a text file?

a) You can use Data Junction to create a conversion file which converts a database to a target text file.

92. How do you parameterize database check points?

a) When you create a standard database checkpoint using ODBC (Microsoft Query), you can add parameters to an SQL statement to parameterize the checkpoint. This is useful if you want to create a database checkpoint with a query in which the SQL statement defining your query changes.

93. How do you create parameterize SQL commands?

a) A parameterized query is a query in which at least one of the fields of the WHERE clause is parameterized, i.e., the value of the field is specified by a question mark symbol ( ? ). For example, the following SQL statement is based on a query on the database in the sample Flight Reservation application:


i. SELECT Flights.Departure, Flights.Flight_Number, Flights.Day_Of_Week FROM Flights Flights WHERE (Flights.Departure=?) AND (Flights.Day_Of_Week=?)

SELECT defines the columns to include in the query.

FROM specifies the path of the database.

WHERE (optional) specifies the conditions, or filters to use in the query.

Departure is the parameter that represents the departure point of a flight.

Day_Of_Week is the parameter that represents the day of the week of a flight.

b) When creating a database checkpoint, you insert a db_check statement into your test script. When you parameterize the SQL statement in your checkpoint, the db_check function has a fourth, optional, argument: the parameter_array argument. A statement similar to the following is inserted into your test script:

db_check(“list1.cdl”, “dbvf1”, NO_LIMIT, dbvf1_params);

The parameter_array argument will contain the values to substitute for the parameters in the parameterized checkpoint.

94. Explain the following commands:

a) db_connect

to connect to a database

db_connect(session_name, connection_string);

b) db_execute_query

to execute a query

db_execute_query ( session_name, SQL, record_number );

record_number is the out value.

c) db_get_field_value

returns the value of a single field in the specified row_index and column in the session_name database session.

db_get_field_value ( session_name, row_index, column );

d) db_get_headers

returns the number of column headers in a query and the content of the column headers, concatenated and delimited by tabs.

db_get_headers ( session_name, header_count, header_content );

e) db_get_row

returns the content of the row, concatenated and delimited by tabs.

db_get_row ( session_name, row_index, row_content );

f) db_write_records

writes the record set into a text file delimited by tabs.

db_write_records ( session_name, output_file [ , headers [ , record_limit ] ] );

g) db_get_last_error

returns the last error message of the last ODBC or Data Junction operation in the session_name database session.

db_get_last_error ( session_name, error );

h) db_disconnect

disconnects from the database and ends the database session.

db_disconnect ( session_name );

i) db_dj_convert

runs the djs_file Data Junction export file. When you run this file, the Data Junction Engine converts data from one spoke (source) to another (target). The optional parameters enable you to override the settings in the Data Junction export file.

db_dj_convert ( djs_file [ , output_file [ , headers [ , record_limit ] ] ] );

95. What check points you will use to read and check text on the GUI and explain its syntax?

a) You can use text checkpoints in your test scripts to read and check text in GUI objects and in areas of the screen. While creating a test you point to an object or a window containing text. WinRunner reads the text and writes a TSL statement to the test script. You may then add simple programming elements to your test scripts to verify the contents of the text.

b) You can use a text checkpoint to:

i.Read text from a GUI object or window in your application, using obj_get_text and win_get_text

ii.Search for text in an object or window, using win_find_text and obj_find_text

iii. Move the mouse pointer to text in an object or window, using obj_move_locator_text and win_move_locator_text

iv. Click on text in an object or window, using obj_click_on_text and win_click_on_text

96. Explain Get Text checkpoint from object/window with syntax?

a) We use the obj_get_text (object, out_text) function to get the text from an object.

b) We use win_get_text (window, out_text [, x1, y1, x2, y2]) function to get the text from a window.
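
For example (the window name and the text being checked are illustrative):

set_window("Flight Reservation", 10);
win_get_text("Flight Reservation", text);    # read all visible text in the window
if (index(text, "Tickets") > 0)              # index() returns the substring position, or 0
    report_msg("Expected text was found");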

97. Explain Get Text checkpoint from screen area with syntax?

a) We use win_get_text (window, out_text [, x1, y1, x2, y2]) function to get the text from a window.

98. Explain Get Text checkpoint from selection (web only) with syntax?

a) Returns a text string from an object.

web_obj_get_text (object, table_row, table_column, out_text [, text_before, text_after, index]);

i. object The logical name of the object.

ii. table_row If the object is a table, it specifies the location of the row within a table. The string is preceded by the # character.

iii. table_column If the object is a table, it specifies the location of the column within a table. The string is preceded by the # character.

iv. out_text The output variable that stores the text string.

v. text_before Defines the start of the search area for a particular text string.

vi. text_after Defines the end of the search area for a particular text string.

vii. index The occurrence number to locate. (The default is 1.)

99. Explain Get Text checkpoint web text checkpoint with syntax?

a) We use web_obj_text_exists function for web text checkpoints.

web_obj_text_exists ( object, table_row, table_column, text_to_find [, text_before, text_after] );

a. object The logical name of the object to search.

b. table_row If the object is a table, it specifies the location of the row within a table. The string is preceded by the character #.

c. table_column If the object is a table, it specifies the location of the column within a table. The string is preceded by the character #.

d. text_to_find The string that is searched for.

e. text_before Defines the start of the search area for a particular text string.

f. text_after Defines the end of the search area for a particular text string.

100. Which TSL functions you will use for

a) Searching text on the window

i. find_text ( string, out_coord_array, search_area [, string_def ] );

string The string that is searched for. The string must be complete, contain no spaces, and it must be preceded and followed by a space outside the quotation marks. To specify a literal, case-sensitive string, enclose the string in quotation marks. Alternatively, you can specify the name of a string variable. In this case, the string variable can include a regular expression.

out_coord_array The name of the array that stores the screen coordinates of the text (see explanation below).

search_area The area to search, specified as coordinates x1,y1,x2,y2. These define any two diagonal corners of a rectangle. The interpreter searches for the text in the area defined by the rectangle.

string_def Defines the type of search to perform. If no value is specified, (0 or FALSE, the default), the search is for a single complete word only. When 1, or TRUE, is specified, the search is not restricted to a single, complete word.

b) getting the location of the text string

i. win_find_text ( window, string, result_array [, search_area [, string_def ] ] );

window The logical name of the window to search.

string The text to locate. To specify a literal, case sensitive string, enclose the string in quotation marks. Alternatively, you can specify the name of a string variable. The value of the string variable can include a regular expression. The regular expression should not include an exclamation mark (!), however, which is treated as a literal character. For more information regarding Regular Expressions, refer to the “Using Regular Expressions” chapter in your User’s Guide.

result_array The name of the output variable that stores the location of the string as a four-element array.

search_area The region of the object to search, relative to the window. This area is defined as a pair of coordinates, with x1,y1,x2,y2 specifying any two diagonally opposite corners of the rectangular search region. If this parameter is not defined, then the entire window is considered the search area.

string_def Defines how the text search is performed. If no string_def is specified, (0 or FALSE, the default parameter), the interpreter searches for a complete word only. If 1, or TRUE, is specified, the search is not restricted to a single, complete word.

c) Moving the pointer to that text string

i. win_move_locator_text (window, string [ ,search_area [ ,string_def ] ] );

window The logical name of the window.

string The text to locate. To specify a literal, case sensitive string, enclose the string in quotation marks. Alternatively, you can specify the name of a string variable. The value of the string variable can include a regular expression (the regular expression need not begin with an exclamation mark).

search_area The region of the object to search, relative to the window. This area is defined as a pair of coordinates, with x1, y1, x2, y2 specifying any two diagonally opposite corners of the rectangular search region. If this parameter is not defined, then the entire window specified is considered the search area.

string_def Defines how the text search is performed. If no string_def is specified, (0 or FALSE, the default parameter), the interpreter searches for a complete word only. If 1, or TRUE, is specified, the search is not restricted to a single, complete word.

d) Comparing the text

i. compare_text (str1, str2 [, chars1, chars2]);

str1, str2 The two strings to be compared.

chars1 One or more characters in the first string.

chars2 One or more characters in the second string. These characters are substituted for those in chars1.
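
A short hedged sketch combining a text search with a comparison; the window name and strings are illustrative, and compare_text is assumed to return a true value when the strings are considered identical:

if (win_find_text("Flight Reservation", "Total:", pos) == E_OK)   # pos receives the location
    report_msg("'Total:' was located in the window");

if (compare_text("F1ight", "Flight", "1", "l"))                    # treat '1' as 'l' during the comparison
    report_msg("Strings match once 1 is substituted for l");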

101. What are the steps of creating a data driven test?

a) The steps involved in data driven testing are:

i. Creating a test

ii. Converting to a data-driven test and preparing a database

iii. Running the test

iv. Analyzing the test results.
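
A minimal data-driven loop sketch using the ddt_* functions documented under question 104 below; the table name and the 'Name' column are assumptions:

table = "default.xls";                         # data table created for this test
rc = ddt_open(table, DDT_MODE_READ);           # open the data table for reading
if (rc != E_OK && rc != E_FILE_OPEN)
    pause("Cannot open the data table.");
ddt_get_row_count(table, row_count);           # total number of data rows
for (i = 1; i <= row_count; i++)
{
    ddt_set_row(table, i);                     # make row i the active row
    set_window("Flight Reservation", 10);
    edit_set("Name:", ddt_val(table, "Name")); # read the 'Name' column of the active row
}
ddt_close(table);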

102. Record a data driven test script using data driver wizard?

a) You can use the DataDriver Wizard to convert your entire script or a part of your script into a data-driven test. For example, your test script may include recorded operations, checkpoints, and other statements that do not need to be repeated for multiple sets of data. You need to parameterize only the portion of your test script that you want to run in a loop with multiple sets of data.

To create a data-driven test:

i. If you want to turn only part of your test script into a data-driven test, first select those lines in the test script.

ii. Choose Tools > DataDriver Wizard.

iii. If you want to turn only part of the test into a data-driven test, click Cancel. Select those lines in the test script and reopen the DataDriver Wizard. If you want to turn the entire test into a data-driven test, click Next.

iv. The Use a new or existing Excel table box displays the name of the Excel file that WinRunner creates, which stores the data for the data-driven test. Accept the default data table for this test, enter a different name for the data table, or use the browse button to locate the path of an existing data table.

v. By default, the data table is stored in the test folder.

vi. In the Assign a name to the variable box, enter a variable name with which to refer to the data table, or accept the default name, “table.”

vii. At the beginning of a data-driven test, the Excel data table you selected is assigned as the value of the table variable. Throughout the script, only the table variable name is used.

viii. This makes it easy for you to assign a different data table to the script at a later time without making changes throughout the script.

ix. Choose from among the following options:

1. Add statements to create a data-driven test: Automatically adds statements to run your test in a loop: sets a variable name by which to refer to the data table; adds braces ({ and }), a for statement, and a ddt_get_row_count statement to your test script selection to run it in a loop while it reads from the data table; and adds ddt_open and ddt_close statements to your test script to open and close the data table, which are necessary in order to iterate over the rows of the table.

2. Note that you can also add these statements to your test script manually.

3. If you do not choose this option, you will receive a warning that your data-driven test must contain a loop and statements to open and close your data table.

4. Import data from a database: Imports data from a database. This option adds ddt_update_from_db and ddt_save statements to your test script after the ddt_open statement.

5. Note that in order to import data from a database, either Microsoft Query or Data Junction must be installed on your machine. You can install Microsoft Query from the custom installation of Microsoft Office. Note that Data Junction is not automatically included in your WinRunner package. To purchase Data Junction, contact your Mercury Interactive representative. For detailed information on working with Data Junction, refer to the documentation in the Data Junction package.

6. Parameterize the test: Replaces fixed values in selected checkpoints and in recorded statements with parameters, using the ddt_val function, and in the data table, adds columns with variable values for the parameters. Line by line: Opens a wizard screen for each line of the selected test script, which enables you to decide whether to parameterize a particular line, and if so, whether to add a new column to the data table or use an existing column when parameterizing data.

7. Automatically: Replaces all data with ddt_val statements and adds new columns to the data table. The first argument of the function is the name of the column in the data table. The replaced data is inserted into the table.

x. The Test script line to parameterize box displays the line of the test script to parameterize. The highlighted value can be replaced by a parameter. The Argument to be replaced box displays the argument (value) that you can replace with a parameter. You can use the arrows to select a different argument to replace.

Choose whether and how to replace the selected data:

1. Do not replace this data: Does not parameterize this data.

2. An existing column: If parameters already exist in the data table for this test, select an existing parameter from the list.

3. A new column: Creates a new column for this parameter in the data table for this test. Adds the selected data to this column of the data table. The default name for the new parameter is the logical name of the object in the selected TSL statement above. Accept this name or assign a new name.

xi. The final screen of the wizard opens.

1. If you want the data table to open after you close the wizard, select Show data table now.

2. To perform the tasks specified in previous screens and close the wizard, click Finish.

3. To close the wizard without making any changes to the test script, click Cancel.


103. What are the three modes of running the scripts?

a) WinRunner provides three modes in which to run tests—Verify, Debug, and Update. You use each mode during a different phase of the testing process.

i. Verify

1. Use the Verify mode to check your application.

ii. Debug

1. Use the Debug mode to help you identify bugs in a test script.

iii. Update

1. Use the Update mode to update the expected results of a test or to create a new expected results folder.

104. Explain the following TSL functions:

a) Ddt_open

Creates or opens a data table file so that WinRunner can access it.

Syntax: ddt_open ( data_table_name, mode );

data_table_name The name of the data table. The name may be the table variable name, the Microsoft Excel file or a tabbed text file name, or the full path and file name of the table. The first row in the file contains the names of the parameters. This row is labeled row 0.

mode The mode for opening the data table: DDT_MODE_READ (read-only) or DDT_MODE_READWRITE (read or write).

b) Ddt_save

Saves the information into a data file.

Syntax: ddt_save (data_table_name);

data_table_name The name of the data table. The name may be the table variable name, the Microsoft Excel file or a tabbed text file name, or the full path and file name of the table.

c) Ddt_close

Closes a data table file

Syntax: ddt_close ( data_table_name );

data_table_name The name of the data table. The data table is a Microsoft Excel file or a tabbed text file. The first row in the file contains the names of the parameters.

d) Ddt_export

Exports the information of one data table file into a different data table file.

Syntax: ddt_export (data_table_namename1, data_table_namename2);

data_table_namename1 The source data table filename.

data_table_namename2 The destination data table filename.

e) Ddt_show

Shows or hides the table editor of a specified data table.

Syntax: ddt_show (data_table_name [, show_flag]);

data_table_name The name of the data table. The name may be the table variable name, the Microsoft Excel file or a tabbed text file name, or the full path and file name of the table.

show_flag The value indicating whether the editor should be shown (default=1) or hidden (0).

f) Ddt_get_row_count

Retrieves the number of rows in a data table.

Syntax: ddt_get_row_count (data_table_name, out_rows_count);

data_table_name The name of the data table. The name may be the table variable name, the Microsoft Excel file or a tabbed text file name, or the full path and file name of the table. The first row in the file contains the names of the parameters.

out_rows_count The output variable that stores the total number of rows in the data table.

g) ddt_next_row

Changes the active row in a data table to the next row.

Syntax: ddt_next_row (data_table_name);

data_table_name The name of the data table. The name may be the table variable name, the Microsoft Excel file or a tabbed text file name, or the full path and file name of the table. The first row in the file contains the names of the parameters.

h) ddt_set_row

Sets the active row in a data table.

Syntax: ddt_set_row (data_table_name, row);

data_table_name The name of the data table. The name may be the table variable name, the Microsoft Excel file or a tabbed text file name, or the full path and file name of the table. The first row in the file contains the names of the parameters. This row is labeled row 0.

row The new active row in the data table.

i) ddt_set_val

Sets a value in the current row of the data table

Syntax: ddt_set_val (data_table_name, parameter, value);

data_table_name The name of the data table. The name may be the table variable name, the Microsoft Excel file or a tabbed text file name, or the full path and file name of the table. The first row in the file contains the names of the parameters. This row is labeled row 0.

parameter The name of the column into which the value will be inserted.

value The value to be written into the table.

j) ddt_set_val_by_row

Sets a value in a specified row of the data table.

Syntax: ddt_set_val_by_row (data_table_name, row, parameter, value);

data_table_name The name of the data table. The name may be the table variable name, the Microsoft Excel file or a tabbed text file name, or the full path and file name of the table. The first row in the file contains the names of the parameters. This row is labeled row 0.

row The row number in the table. It can be any existing row or the current row number plus 1, which will add a new row to the data table.

parameter The name of the column into which the value will be inserted.

value The value to be written into the table.

k) ddt_get_current_row

Retrieves the active row of a data table.

Syntax: ddt_get_current_row ( data_table_name, out_row );

data_table_name The name of the data table. The name may be the table variable name, the Microsoft Excel file or a tabbed text file name, or the full path and file name of the table. The first row in the file contains the names of the parameters. This row is labeled row 0.

out_row The output variable that stores the active row in the data table.

l) ddt_is_parameter

Returns whether a parameter in a data table is valid.

Syntax: ddt_is_parameter (data_table_name, parameter);

data_table_name The name of the data table. The name may be the table variable name, the Microsoft Excel file or a tabbed text file name, or the full path and file name of the table. The first row in the file contains the names of the parameters.

parameter The parameter name to check in the data table.

m) ddt_get_parameters

Returns a list of all parameters in a data table.

Syntax: ddt_get_parameters ( table, params_list, params_num );

table The pathname of the data table.

params_list This out parameter returns the list of all parameters in the data table, separated by tabs.

params_num This out parameter returns the number of parameters in params_list.

n) ddt_val

Returns the value of a parameter in the active row in a data table.

Syntax: ddt_val (data_table_name, parameter);

data_table_name The name of the data table. The name may be the table variable name, the Microsoft Excel file or a tabbed text file name, or the full path and file name of the table. The first row in the file contains the names of the parameters.

parameter The name of the parameter in the data table.

o) ddt_val_by_row

Returns the value of a parameter in the specified row in a data table.

Syntax: ddt_val_by_row ( data_table_name, row_number, parameter );

data_table_name The name of the data table. The name may be the table variable name, the Microsoft Excel file or a tabbed text file name, or the full path and file name of the table. The first row in the file contains the names of the parameters. This row is labeled row 0.

row_number The number of the row in the data table.

parameter The name of the parameter in the data table.

p) ddt_report_row

Reports the active row in a data table to the test results

Syntax: ddt_report_row (data_table_name);

data_table_name The name of the data table. The name may be the table variable name, the Microsoft Excel file or a tabbed text file name, or the full path and file name of the table. The first row in the file contains the names of the parameters. This row is labeled row 0.

q) ddt_update_from_db

Imports data from a database into a data table. It is inserted into your test script when you select the Import data from a database option in the DataDriver Wizard. When you run your test, this function updates the data table with data from the database.
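
As a quick illustration of how these functions work together, the sketch below (not part of the original answer; the table name and the "FlightNo" column are assumptions) opens a data table, walks through every data row and reports one value per row:

table = "default.xls"; # illustrative data table name
ddt_open (table, DDT_MODE_READ);
ddt_get_row_count (table, row_count);
for (i = 1; i <= row_count; i++)
{
    ddt_set_row (table, i); # make row i the active row
    report_msg ("Row " & i & ": " & ddt_val (table, "FlightNo")); # "FlightNo" is an assumed column name
}
ddt_close (table);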

105. How do you handle unexpected events and errors?

a) WinRunner uses exception handling to detect an unexpected event when it occurs and to recover the test run. Setting it up involves three steps:

i. Define the exception.

ii. Define a handler function.

iii. Enable exception handling.

WinRunner enables you to handle the following types of exceptions:

Pop-up exceptions: Instruct WinRunner to detect and handle the appearance of a specific window.

TSL exceptions: Instruct WinRunner to detect and handle TSL functions that return a specific error code.

Object exceptions: Instruct WinRunner to detect and handle a change in a property for a specific GUI object.

Web exceptions: When the WebTest add-in is loaded, you can instruct WinRunner to handle unexpected events and errors that occur in your Web site during a test run.

106. How do you handle pop-up exceptions?

a) A pop-up exception handler handles the pop-up messages that come up during execution of the script in the AUT. To handle this type of exception we make WinRunner learn the window and also specify a handler for the exception. The handler can be:

i. Default actions: WinRunner clicks the OK or Cancel button in the pop-up window, or presses Enter on the keyboard. To select a default handler, click the appropriate button in the dialog box.

ii. User-defined handler: If you prefer, specify the name of your own handler. Click User Defined Function Name and type in a name in the User Defined Function Name box.

107. How do you handle TSL exceptions?

a) A TSL exception enables you to detect and respond to a specific error code returned during test execution.

b) Suppose you are running a batch test on an unstable version of your application. If your application crashes, you want WinRunner to recover test execution. A TSL exception can instruct WinRunner to recover test execution by exiting the current test, restarting the application, and continuing with the next test in the batch.

c) The handler function is responsible for recovering test execution. When WinRunner detects a specific error code, it calls the handler function. You implement this function to respond to the unexpected error in the way that meets your specific testing needs.

d) Once you have defined the exception, WinRunner activates handling and adds the exception to the list of default TSL exceptions in the Exceptions dialog box. Default TSL exceptions are defined by the XR_EXCP_TSL configuration parameter in the wrun.ini configuration file.
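
As a rough sketch (not from the original FAQ): a handler function for such an exception might restart the application and log what happened. The handler name, parameter list and application path below are assumptions; check the WinRunner documentation for the exact signature a TSL exception handler must have, and register the handler through the Exceptions dialog box described above.

# Hypothetical TSL exception handler; names and paths are illustrative.
public function recover_from_tsl_error (in rc, in func_name)
{
    report_msg ("TSL error " & rc & " returned by " & func_name & "; restarting the application.");
    invoke_application ("C:\\MyApp\\myapp.exe", "", "C:\\MyApp", SW_SHOW); # SW_SHOW assumed as the show mode
    return (0);
}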

108. How do you handle object exceptions?

a) During testing, unexpected changes can occur to GUI objects in the application you are testing. These changes are often subtle but they can disrupt the test run and distort results.

b) You can use exception handling to detect a change in a property of a GUI object during the test run, and to recover test execution by calling a handler function and continuing with the test.

109. How do you comment your script?

a) We comment a script or line of script by inserting a ‘#’ at the beginning of the line.

110. What is a compile module?

a) A compiled module is a script containing a library of user-defined functions that you want to call frequently from other tests. When you load a compiled module, its functions are automatically compiled and remain in memory. You can call them directly from within any test.

b) Compiled modules can improve the organization and performance of your tests. Since you debug compiled modules before using them, your tests will require less error-checking. In addition, calling a function that is already compiled is significantly faster than interpreting a function in a test script.
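
For example, a compiled module could be little more than a script saved with the "Compiled Module" property whose body defines reusable functions. The module and function names below are made up; only functions shown elsewhere in this FAQ are used:

# flt_lib - illustrative compiled module holding a reusable function.
public function cancel_open_order_dialog ()
{
    set_window ("Flight Reservation", 10);
    menu_select_item ("File;Open Order...");
    set_window ("Open Order");
    button_press ("Cancel");
    return (0);
}

After loading the module (see question 130), any test can simply call cancel_open_order_dialog();.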

111. What is the difference between script and compile module?

a) A test script contains the executable test code in WinRunner, while a compiled module is used to store reusable functions. Compiled modules are not executed directly.

b) WinRunner performs a pre-compilation automatically when it saves a module assigned a property value of “Compiled Module”.

c) By default, modules containing TSL code have a property value of “main”. Main modules are called for execution from within other modules. Main modules are dynamically compiled into machine code only when WinRunner recognizes a “call” statement. Example of a call for the “app_init” script:

call cso_init();

call( “C:\\MyAppFolder\\” & “app_init” );

d) Compiled modules are loaded into memory to be referenced from TSL code in any module. Example of a load statement:

reload (“C:\\MyAppFolder\\” & “flt_lib”);

or

load (“C:\\MyAppFolder\\” & “flt_lib”);

112. Write and explain the various loop commands.

a) A for loop instructs WinRunner to execute one or more statements a specified number of times.

It has the following syntax:

for ( [ expression1 ]; [ expression2 ]; [ expression3 ] ) statement

i. First, expression1 is executed. Next, expression2 is evaluated. If expression2 is true, statement is executed and expression3 is executed. The cycle is repeated as long as expression2 remains true. If expression2 is false, the for statement terminates and execution passes to the first statement immediately following.

ii. For example, the for loop below selects the file UI_TEST from the File Name list in the Open window. It selects this file five times and then stops.

set_window ("Open");

for (i = 0; i < 5; i++)

list_select_item ("File Name:_1", "UI_TEST"); # Item Number 2

b) A while loop executes a block of statements for as long as a specified condition is true.

It has the following syntax:

while ( expression )

statement ;

i. While expression is true, the statement is executed. The loop ends when the expression is false. For example, the while statement below performs the same function as the for loop above.

set_window (“Open”);

i=0;

while (i<5){

i++;

list_select_item (“File Name:_1”, “UI_TEST”); # Item Number 2

}

c) A do/while loop executes a block of statements for as long as a specified condition is true. Unlike the for loop and while loop, a do/while loop tests the conditions at the end of the loop, not at the beginning.

A do/while loop has the following syntax:

do

statement

while (expression);

i. The statement is executed and then the expression is evaluated. If the expression is true, then the cycle is repeated. If the expression is false, the cycle is not repeated.

ii. For example, the do/while statement below opens and closes the Order dialog box of Flight Reservation five times.

set_window (“Flight Reservation”);

i=0;

do

{

menu_select_item (“File;Open Order…”);

set_window (“Open Order”);

button_press (“Cancel”);

i++;

}

while (i<5);

113. Write and explain the decision-making commands.

a) You can incorporate decision-making into your test scripts using if/else or switch statements.

i. An if/else statement executes a statement if a condition is true; otherwise, it executes another statement.

It has the following syntax:

if ( expression )

statement1;

[ else

statement2; ]


expression is evaluated. If expression is true, statement1 is executed. If expression is false, statement2 is executed.
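
A small concrete sketch, built only from functions described in this FAQ (the table name and step name are illustrative; it assumes a non-zero tl_step status marks the step as failed):

ddt_open ("default.xls", DDT_MODE_READ);
ddt_get_row_count ("default.xls", row_count);
if (row_count > 0)
    report_msg ("Data table contains " & row_count & " rows.");
else
    tl_step ("check_table", 1, "Data table is empty."); # non-zero status = fail (assumption)
ddt_close ("default.xls");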

b) A switch statement enables WinRunner to make a decision based on an expression that can have more than two values.

It has the following syntax:

switch (expression )

{

case case_1: statements

case case_2: statements

case case_n: statements

default: statement(s)

}

The switch statement consecutively evaluates each case expression until one is found that equals the initial expression. If no case is equal to the expression, then the default statements are executed. The default statements are optional.

114. Write and explain the switch command.

a) A switch statement enables WinRunner to make a decision based on an expression that can have more than two values.

It has the following syntax:

switch (expression )

{

case case_1: statements

case case_2: statements

case case_n: statements

default: statement(s)

}

b) The switch statement consecutively evaluates each case expression until one is found that equals the initial expression. If no case is equal to the expression, then the default statements are executed. The default statements are optional.
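
As an illustrative sketch (assuming C-style fall-through, so each case ends with a break; the row_count variable is made up):

switch (row_count)
{
    case 0:
        report_msg ("No data rows; nothing to run.");
        break;
    case 1:
        report_msg ("Exactly one data row.");
        break;
    default:
        report_msg ("Multiple data rows.");
}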

115. How do you write messages to the report?

a) To write a message to the test report we use the report_msg statement.

Syntax: report_msg (message);

116. What is the command to invoke an application?

a) invoke_application is the function used to invoke an application.

Syntax: invoke_application(file, command_option, working_dir, SHOW);

117. What is the purpose of tl_step command?

a) Used to determine whether sections of a test pass or fail.

Syntax: tl_step(step_name, status, description);

118. Which TSL function will you use to compare two files?

a) We can compare 2 files in WinRunner using the file_compare function.

Syntax: file_compare ( file1, file2 [, save_file] );
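
A small sketch tying the last few functions together. All paths and names are assumptions, as is the meaning of file_compare's return code; a status of 0 is treated as a pass for tl_step:

invoke_application ("C:\\MyApp\\myapp.exe", "", "C:\\MyApp", SW_SHOW); # SW_SHOW assumed
report_msg ("Application started; comparing the new report with the baseline.");
rc = file_compare ("C:\\results\\actual.txt", "C:\\expected\\baseline.txt");
if (rc == E_OK) # assumption: E_OK means the files match
    tl_step ("compare_report", 0, "Files match.");
else
    tl_step ("compare_report", 1, "Files differ (return code " & rc & ").");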

119. What is the use of function generator?

a) The Function Generator provides a quick, error-free way to program scripts. You can:

i. Add Context Sensitive functions that perform operations on a GUI object or get information from the application being tested.

ii. Add Standard and Analog functions that perform non-Context Sensitive tasks such as synchronizing test execution or sending user-defined messages to a report.

iii. Add Customization functions that enable you to modify WinRunner to suit your testing environment.

120. What is the use of putting call and call_close statements in the test script?

a) You can use two types of call statements to invoke one test from another:

i. A call statement invokes a test from within another test.

ii. A call_close statement invokes a test from within a script and closes the test when the test is completed.

iii. The call statement has the following syntax:

1. call test_name ( [ parameter1, parameter2, …parametern ] );

iv. The call_close statement has the following syntax:

1. call_close test_name ( [ parameter1, parameter2, … parametern ] );

v. The test_name is the name of the test to invoke. The parameters are the parameters defined for the called test.

vi. The parameters are optional. However, when one test calls another, the call statement should designate a value for each parameter defined for the called test. If no parameters are defined for the called test, the call statement must contain an empty set of parentheses.

121. What is the use of treturn and texit statements in the test script?

a) The treturn and texit statements are used to stop execution of called tests.

i. The treturn statement stops the current test and returns control to the calling test.

ii. The texit statement stops test execution entirely, unless tests are being called from a batch test. In this case, control is returned to the main batch test.

b) Both functions provide a return value for the called test. If treturn or texit is not used, or if no value is specified, then the return value of the call statement is 0.

treturn

c) The treturn statement terminates execution of the called test and returns control to the calling test.

The syntax is:

treturn [( expression )];

d) The optional expression is the value returned to the call statement used to invoke the test.

texit

e) When tests are run interactively, the texit statement discontinues test execution. However, when tests are called from a batch test, texit ends execution of the current test only; control is then returned to the calling batch test.

The syntax is:

texit [( expression )];
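
For example, a called test might hand control back to its caller early when it has nothing to do. This is only a sketch; the table name and return codes are illustrative:

ddt_open ("default.xls", DDT_MODE_READ);
ddt_get_row_count ("default.xls", rows);
if (rows == 0)
    treturn (-1); # nothing to drive the test with; report -1 to the caller
report_msg ("Driving the test with " & rows & " data rows.");
treturn (0); # normal completion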

122. Where do you set up the search path for a called test?

a) The search path determines the directories that WinRunner will search for a called test.

b) To set the search path, choose Settings > General Options. The General Options dialog box opens. Click the Folders tab and choose a search path in the Search Path for Called Tests box. WinRunner searches the directories in the order in which they are listed in the box. Note that the search paths you define remain active in future testing sessions.

123. How do you create user-defined functions? Explain the syntax.

a) A user-defined function has the following structure:

[class] function name ([mode] parameter…)

{

declarations;

statements;

}

b) The class of a function can be either static or public. A static function is available only to the test or module within which the function was defined.

c) Parameters need not be explicitly declared. They can be of mode in, out, or inout. For all non-array parameters, the default mode is in. For array parameters, the default is inout. The significance of each of these parameter types is as follows:

in: A parameter that is assigned a value from outside the function.

out: A parameter that is assigned a value from inside the function.

inout: A parameter that can be assigned a value from outside or inside the function.
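
A minimal sketch of such a function (the name and parameters are illustrative):

public function add_numbers (in first, in second, out total)
{
    auto msg; # local variable, released when the function returns
    total = first + second;
    msg = "Sum is " & total;
    report_msg (msg);
    return (0);
}

It would be called as add_numbers (2, 3, result);, after which result holds 5.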

124. What do the static and public classes of a function mean?

a) The class of a function can be either static or public.

b) A static function is available only to the test or module within which the function was defined.

c) Once you execute a public function, it is available to all tests, for as long as the test containing the function remains open. This is convenient when you want the function to be accessible from called tests. However, if you want to create a function that will be available to many tests, you should place it in a compiled module. The functions in a compiled module are available for the duration of the testing session.

d) If no class is explicitly declared, the function is assigned the default class, public.

125. What do the in, out and inout parameter modes mean?

a) in: A parameter that is assigned a value from outside the function.

b) out: A parameter that is assigned a value from inside the function.

c) inout: A parameter that can be assigned a value from outside or inside the function.

126. What is the purpose of return statement?

a) This statement passes control back to the calling function or test. It also returns the value of the evaluated expression to the calling function or test. If no expression is assigned to the return statement, an empty string is returned.

Syntax: return [( expression )];


127. What do the auto, static, public and extern variable classes mean?

a) auto: An auto variable can be declared only within a function and is local to that function. It exists only for as long as the function is running. A new copy of the variable is created each time the function is called.

b) static: A static variable is local to the function, test, or compiled module in which it is declared. The variable retains its value until the test is terminated by an Abort command. This variable is initialized each time the definition of the function is executed.

c) public: A public variable can be declared only within a test or module, and is available for all functions, tests, and compiled modules.

d) extern: An extern declaration indicates a reference to a public variable declared outside of the current test or module.

128. How do you declare constants?

a) The const specifier indicates that the declared value cannot be modified. The class of a constant may be either public or static. If no class is explicitly declared, the constant is assigned the default class public. Once a constant is defined, it remains in existence until you exit WinRunner.

b) The syntax of this declaration is:

[class] const name [= expression];
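
For example (the name and value are illustrative):

public const DEFAULT_TIMEOUT = 10; # cannot be modified after this declaration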

129. How do you declare arrays?

a) The following syntax is used to define the class and the initial expression of an array. Array size need not be defined in TSL.

b) class array_name [ ] [=init_expression]

c) The array class may be any of the classes used for variable declarations (auto, static, public, extern).
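
A short sketch of declaring and using an array (names are illustrative; no size is given because TSL arrays grow as elements are assigned):

public fruit[];
fruit[1] = "apple";
fruit[2] = "orange";
report_msg ("First element: " & fruit[1]);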

130. How do you load and unload a compile module?

a) In order to access the functions in a compiled module you need to load the module. You can load it from within any test script using the load command; all tests will then be able to access the function until you quit WinRunner or unload the compiled module.

b) You can load a module either as a system module or as a user module. A system module is generally a closed module that is “invisible” to the tester. It is not displayed when it is loaded, cannot be stepped into, and is not stopped by a pause command. A system module is not unloaded when you execute an unload statement with no parameters (global unload).

load (module_name [,1|0] [,1|0] );

The module_name is the name of an existing compiled module.


Two additional, optional parameters indicate the type of module. The first parameter indicates whether the function module is a system module or a user module: 1 indicates a system module; 0 indicates a user module.

(Default = 0)

The second optional parameter indicates whether a user module will remain open in the WinRunner window or will close automatically after it is loaded: 1 indicates that the module will close automatically; 0 indicates that the module will remain open.

(Default = 0)

c) The unload function removes a loaded module or selected functions from memory.

d) It has the following syntax:

unload ( [ module_name | test_name [ , “function_name” ] ] );
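
For example, loading the flt_lib module mentioned earlier as a user module that stays open, and unloading it afterwards (the path is illustrative):

load ("C:\\MyAppFolder\\flt_lib", 0, 0); # 0, 0 = user module, keep it open
# ...call functions defined in flt_lib here...
unload ("C:\\MyAppFolder\\flt_lib");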

141. How do you update your expected results?

142. How do you run your script with multiple sets of expected results?

143. How do you view and evaluate test results for various check points?

144. How do you view the results of file comparison?

145. What is the purpose of Wdiff utility?

146. What are batch tests and how do you create and run batch tests?

147. How do you store and view batch test results?

148. How do you execute your tests from windows run command?

149. Explain different command line options?

150. What TSL function you will use to pause your script?

151. What is the purpose of setting a break point?

152. What is a watch list?

153. During debugging how do you monitor the value of the variables?

154. What are the reasons that WinRunner fails to identify an object on the GUI?

a) WinRunner fails to identify an object in a GUI due to various reasons.

i. The object is not a standard windows object.

ii. If the browser used is not compatible with the WinRunner version, GUI Map Editor will not be able to learn any of the objects displayed in the browser window.

155. What do you mean by the logical name of an object?

a) An object’s logical name is determined by its class. In most cases, the logical name is the label that appears on an object.

156. If the object does not have a name, then what will be the logical name?

If the object does not have a name, then the logical name could be the attached text.

157. What is the difference between the GUI map and GUI map files?

a) The GUI map is actually the sum of one or more GUI map files. There are two modes for organizing GUI map files.

i. Global GUI Map file: a single GUI Map file for the entire application

ii. GUI Map File per Test: WinRunner automatically creates a GUI Map file for each test created.

b) A GUI Map file is a file which contains the windows and objects learned by WinRunner, with their logical names and physical descriptions.

158. How do you view the contents of the GUI map?

a) The GUI Map Editor displays the contents of a GUI Map. We can invoke the GUI Map Editor from the Tools menu in WinRunner. It displays the various GUI Map files created and the windows and objects learned into them, with their logical names and physical descriptions.

160. What is a startup script in WinRunner?

A startup script is a script that WinRunner runs automatically when it starts. For example, if the startup script invokes an application, that application is opened for testing as soon as WinRunner launches.

161. What is the purpose of loading WinRunner add-ins?

Add-ins are used in WinRunner to load functions specific to a particular add-in into memory. While creating a script, only the functions in the selected add-ins are listed in the Function Generator, and while executing a script only the functions in the loaded add-ins can be executed; otherwise WinRunner gives an error message saying it does not recognize the function.

162. What is the purpose of the GUI Spy?

Using the GUI Spy you can view the properties of any GUI object on your desktop. You use the Spy pointer to point to an object, and the GUI Spy displays the properties and their values in the GUI Spy dialog box. You can choose to view all properties of an object, or only the selected set of properties that WinRunner learns.

163. When you create a GUI map, do you record all the objects or only specific objects?

a) If we learn an entire window, WinRunner automatically learns all the objects in that window; otherwise we identify only those objects in the window that need to be learned, since those are the objects we will work with while creating scripts.

164. What is the purpose of the set_window command?

b) The set_window command sets the focus to the specified window. We use this command to set the focus to the required window before executing tests on it.

Syntax: set_window ( window [, time ] );

window is the logical name of the window, and time is the number of seconds the execution waits for the given window to come into focus.

165. How do you load a GUI map?

c) We can load a GUI Map file by using the GUI_load command.

Syntax: GUI_load ( file_name );

166. What is the disadvantage of loading GUI maps through startup scripts?

d) If we use a single GUI Map file for the entire AUT, the memory used by the GUI Map may be quite high.

e) If there is any change in an object that was learned, WinRunner will not be able to recognize the object, as it is not in the GUI Map file loaded in memory. So we have to learn the object again, update the GUI Map file and reload it.

167. How do you unload the GUI map?

f) We can use GUI_close to unload a specific GUI Map file, or we can use the GUI_close_all command to unload all the GUI Map files loaded in memory.

Syntax: GUI_close ( file_name ); or GUI_close_all();
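
Putting the last few answers together, here is a sketch of a test that loads a GUI map file, works with a window described in it and then unloads the map (the file name and window names are assumptions):

GUI_load ("C:\\MyAppFolder\\flight.gui");
set_window ("Flight Reservation", 10); # wait up to 10 seconds for the window
menu_select_item ("File;Open Order...");
set_window ("Open Order");
button_press ("Cancel");
GUI_close ("C:\\MyAppFolder\\flight.gui");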

 

Test Director Interview Questions

  1. How many tabs are there in TestDirector?
    There are 4 tabs available in TestDirector:
    1. Requirements
    2. Test Plan
    3. Test Lab
    4. Defects
    We can rename the tabs as we like; TestDirector enables us to rename them, but only these 4 tabs are available.
  2. What is meant by Test Lab in TestDirector?
    Test Lab is the part of TestDirector where we can execute our tests in different cycles, creating a test tree for each of them. We need to add tests to these test trees from the tests placed under Test Plan in the project.
  3. What is TestDirector?
    It is Mercury Interactive's test management tool. It includes all the features we need to organize and manage the testing process.
  4. What are the main features of TestDirector?
    It enables us to create a database of tests, execute tests, and report and track defects detected in the software.
  5. How does the assessment of the application take place in TestDirector?
    As we test, TestDirector allows us to continuously assess the status of our application by generating sophisticated reports and graphs. By integrating all tasks involved in software testing, TestDirector helps us ensure that our software is ready for deployment.
  6. What does planning tests do in TestDirector?
    It is used to develop a test plan and create tests. This includes defining goals and strategy, designing tests, automating tests where beneficial, and analyzing the plan.
  7. What does running tests do in TestDirector?
    It executes the tests created in the planning phase and analyzes the test results.
  8. What does tracking defects do in TestDirector?
    It is used to monitor software quality. This includes reporting defects, determining repair priorities, assigning tasks, and tracking repair progress.
  9. What are the three main views available in TestDirector?

    - Plan tests

    - Run tests

    - Track defects

    Each view includes all the tools we need to complete each phase of the testing process

  10. What is test plan tree?

    A test plan tree enables you to organize and display your tests hierarchically, according to your testing requirements.

  11. What are all the contents of test plan tree?

    A test plan tree can include several types of tests:

    - Manual test scripts

    - WinRunner test scripts

    - Batches of WinRunner test scripts

    - Visual API test scripts

    - LoadRunner scenario scripts and Vuser scripts

  12. What is test step?

    A test step includes the action to perform in our application, the input to enter, and its expected output.

  13. TestDirector is a Test management tool

  14. What are the components of TestDirector 5.0?

    Plan Tests, Run Tests, Track Defects

  15. Who has full privileges in a TestDirector project?

    The TD Admin

  16. What is a test set?

    A test set is a subset of tests in our database designed to achieve a specified testing objective.

  17. How is the analysis of test results accomplished in TestDirector?

    Analysis of results is accomplished by viewing run data and by generating TestDirector reports and graphs.

  18. What is the Test Set Builder?

    The Test Set Builder enables us to create, edit, and delete test sets. Our test sets can include both manual and automated tests. We can include the same test in different test sets.

  19. How is the running of a manual test accomplished in TestDirector?

    When running a manual test, we perform the actions detailed in the test steps and compare the actual results to the expected results. The mini step window allows us to conveniently perform manual tests and record results with TestDirector minimized.

  20. How is the running of an automated test accomplished in TestDirector?

    We can execute automated tests on our own computer or on multiple remote hosts. The test execution window enables us to control test execution and manage hosts. We can execute a single test, or launch an entire batch of tests.

  21. How do you execute a test plan in TestDirector?

    Write test cases in Test Plan (description and expected results).

    To execute them, go to Test Lab. Give coverage from Test Plan to Test Lab by selecting "Select Tests".

    Then click Run or Run Test Set to execute the test cases.

  22. What is the use of Test Director software?

    TestDirector is Mercury Interactive’s software test management tool. It helps quality assurance personnel plan and organize the testing process. With TestDirector you can create database of manual and automated tests, build test cycles, run tests, and report and track defects.

  23. How do you integrate your automated scripts with TestDirector?

    When you work with WinRunner, you can choose to save your tests directly to your TestDirector database; alternatively, while creating a test case in TestDirector, we can specify whether the script is automated or manual.

  24. Who has the rights to detect and report defects?

    - Software developers

    - Testers

    - End users

  25. What are the main things to be considered while reporting a bug?

    - A description of the bug

    - The software version

    - Additional information necessary to reproduce and repair the defect

  26. Describe about mailing defect data

    Connecting TestDirector to your e-mail system lets you routinely inform development and quality assurance personnel about defect repair activity.

  27. What is report designer?

    It is a powerful and flexible tool that allows you to design your own unique reports.

  28. What is the use of graphs in TestDirector?

    TestDirector graphs help us visualize the progress of test planning, test execution, and defect tracking for our application, so that we can detect bottlenecks in the testing process.

  29. What is the difference between a master test plan and a test plan?
    The master test plan is a document covering the planning for the whole project, i.e. all phases of the project, whereas a test plan is the document needed only by the testing team.
  30. How do you generate graphs in TestDirector?
    To generate a summary graph in the Test Lab module:
    1. Open the Analysis menu.
    2. Choose Graphs.
    3. Select Graph Wizard.
    4. Select the graph type as Summary and click the Next button.
    5. Select Show current tests and click the Next button.
    6. Select Define a new filter and click the Filter button.
    7. Select the test set and click the OK button.
    8. Select Plan: Subject and click the OK button.
    9. Select Plan: Status.
    10. Select Test Set as the X axis.
    11. Click the Finish button.
  31. How can we export multiple test cases from TD in a single go?
    If we want to export them to an Office tool:
    1. Select the multiple steps/cases you need.
    2. Right-click > Save As and proceed.
  32. How do you upload test cases from Excel into TestDirector?
    First we have to activate Excel in TestDirector with the help of the Add-ins page. After activation we can see the option 'Export to TestDirector' in Excel under the Tools menu.
    When you select 'Export to TestDirector', a wizard dialog box opens and the following steps should be followed:
    1. Enter the URL of TestDirector.
    2. Enter the domain name and project name.
    3. Enter the user name and password.
    4. Select any one of these 3 options: requirements, test cases, or defects.
    5. Select a map option: a) select a map, b) select a new map name, c) create a temporary map.
    6. Map the TestDirector fields to the corresponding Excel columns. These are the steps required to export the Excel sheet into TD.
  33. Does anyone know of any integration between TestDirector and Rational Robot?
    Any idea on the technical feasibility of such an integration and the level of effort would also be interesting.
    TestDirector is a test management tool from Mercury Interactive.
    Rational Robot is a Rational product. It comes with a 'Test Manager' module for test management.
    Integrating TestDirector and Rational Robot is not feasible.
  34. Explain the project tree in TestDirector.
    Project tree in TestDirector: planning tests, creating tests, executing tests, tracking defects.
  35.  What is coverage status, what does it do?
    Coverage status is percentage of testing covered at a given time. If you have 100 test cases in a project and you have finished 40 of test cases out of total test cases then your coverage status of project is 40%. Coverage Status is used to keep track of project accomplishment to meet the final deadline of the deliverables.
  36. Difference between data validation and data integrity.
    Data validation: We check whether the input/output data is valid in terms of its length, data type, etc.
    Data integrity: We check that our database contains only accurate data. We implement data integrity with the help of integrity constraints, so in data integrity we check whether the integrity constraints are implemented correctly or not.
  37.  What are the uses of filters in test director?
    Limits the data displayed according to the criteria you specified.
    For Ex: You could create a filter that displays only those tests associated with the specified tester and subject.
    Each filter contains multiple filter conditions. These are expressions that are applied to a test plan tree or to the field in a grid. When the filter is applied, only those records that meet the filter conditions are displayed.
    You create and edit filters with the Filter dialog box. This opens when you select the filter command. Once you have created a filter, you can save it for future use.
  38. How do we generate test cases through TestDirector?
    We add design steps to each test; in the design steps grid we create a parent-child tree.
    Ex: Database operation (test name)
    1. Delete an order
    Description: Click the Delete button to delete the order.
    Expected results: The order is deleted.
    Pass/Fail:
    Update an order
    Create an order
  39. What are the data types available in PL/SQL?
  40. Difference between WebInspect/QAInspect and WinRunner/TestDirector?
  41. How will you generate the defect ID in TestDirector? Is it generated automatically or not?
  42. Difference between TD 8.0 (TestDirector) and QC 8.0 (Quality Center).
  43. How do you ensure that there is no duplication of bugs in TestDirector?
  44. Difference between WinRunner and TestDirector?
  45. How will you integrate your automated scripts with TestDirector?
  46. What is the use of the TestDirector software?
  47. What is the Extra tab in QC? (TCS)
  48. What is Test Builder? (TCS)
  49. I am using Quality Center 9.2. While raising a defect I can see that there is only an option to send a mail to the assignee, and it is also sent to me. Is there any way I can send the mail to multiple users? I know I can save the defect and later send it to multiple users, but is there any way to send it at the time of raising the defect itself? Is there any configuration that can be done by an administrator?
  50. What is the connection between the Test Plan tab and the Test Lab tab? How can you access your test cases in the Test Lab tab?
  51. How do you create a test script from Quality Center using QTP?
  52. What is the difference between Quality Center and TestDirector?
  53. What is the last version of TD to support QTP?
  54. What are the mandatory fields in TD?
  55. Is TD web-based or client/server?
  56. What is the latest version of TD?
  57. What are the views in TestDirector?
  58. How do you map test cases with requirements in Test Lab?
  59. How can we see all fields in TestDirector?
  60. How do you prepare bug reports? What all do you include in a bug report?
  61. How can we connect a defect report from the Test Lab and Test Plan tabs?
  62. I would like to upload test cases from an Excel sheet to the Quality Center Test Plan section. Please tell me how to upload data (test cases written in Excel) to Quality Center.
  63. What is RTM in TestDirector?
  64. What is the difference between TD and QC?
  65. Can anyone tell how to install TD on our system?
  66. How many ways are there to copy test cases in TestDirector?
  67. How do you import Excel/Word data into the Test Plan module of Quality Center 9.0?
  68. What would be the purpose of the Dashboard in Quality Center? What are its advantages?
  69. What are the advantages of Quality Center? What functions are performed using Quality Center?
  70. Can test cases be written directly in TD? What else can we do in TD?
  71. What do you mean by requirement coverage?
  72. How will you do execution workflow, and create and modify test sets?

A – Z Testing

To get to know the basic definitions of software testing and quality assurance, this is the best glossary, compiled by Erik van Veenendaal. For each definition there is a reference to IEEE or ISO mentioned in brackets.

 

A

acceptance criteria: The exit criteria that a component or system must satisfy in order to be accepted by a user, customer, or other authorized entity. [IEEE 610]

acceptance testing: Formal testing with respect to user needs, requirements, and business processes conducted to determine whether or not a system satisfies the acceptance criteria and to enable the user, customers or other authorized entity to determine whether or not to accept the system. [After IEEE 610]

accessibility testing: Testing to determine the ease by which users with disabilities can use a component or system. [Gerrard]

accuracy: The capability of the software product to provide the right or agreed results or effects with the needed degree of precision. [ISO 9126] See also functionality testing.

actual result: The behavior produced/observed when a component or system is tested.

 

ad hoc testing: Testing carried out informally; no formal test preparation takes place, no recognized test design technique is used, there are no expectations for results and randomness guides the test execution activity.

adaptability: The capability of the software product to be adapted for different specified environments without applying actions or means other than those provided for this purpose for the software considered. [ISO 9126] See also portability testing.

 

agile testing: Testing practice for a project using agile methodologies, such as extreme programming (XP), treating development as the customer of testing and emphasizing the test-first design paradigm.

alpha testing: Simulated or actual operational testing by potential users/customers or an independent test team at the developers’ site, but outside the development organization. Alpha testing is often employed as a form of internal acceptance testing.

analyzability: The capability of the software product to be diagnosed for deficiencies or causes of failures in the software, or for the parts to be modified to be identified. [ISO 9126] See also maintainability testing.

anomaly: Any condition that deviates from expectation based on requirements specifications, design documents, user documents, standards, etc. or from someone’s perception or experience. Anomalies may be found during, but not limited to, reviewing, testing, analysis, compilation, or use of software products or applicable documentation. [IEEE 1044] See also defect, deviation, error, fault, failure, incident, problem.

attractiveness: The capability of the software product to be attractive to the user. [ISO 9126]

audit: An independent evaluation of software products or processes to ascertain compliance to standards, guidelines, specifications, and/or procedures based on objective criteria, including documents that specify:

(1) the form or content of the products to be produced

(2) the process by which the products shall be produced

(3) how compliance to standards or guidelines shall be measured. [IEEE 1028]

audit trail: A path by which the original input to a process (e.g. data) can be traced back through the process, taking the process output as a starting point. This facilitates defect analysis and allows a process audit to be carried out. [After TMap]

automated testware: Testware used in automated testing, such as tool scripts.

availability: The degree to which a component or system is operational and accessible when required for use. Often expressed as a percentage. [IEEE 610]

 

B

back-to-back testing: Testing in which two or more variants of a component or system are executed with the same inputs, the outputs compared, and analyzed in cases of discrepancies. [IEEE 610]

baseline: A specification or software product that has been formally reviewed or agreed upon, that thereafter serves as the basis for further development, and that can be changed only through a formal change control process. [After IEEE 610]

basic block: A sequence of one or more consecutive executable statements containing no branches.

basis test set: A set of test cases derived from the internal structure or specification to ensure that 100% of a specified coverage criterion is achieved.

behavior: The response of a component or system to a set of input values and preconditions.

benchmark test: (1) A standard against which measurements or comparisons can be made. (2) A test that is used to compare components or systems to each other or to a standard as in (1). [After IEEE 610]

bespoke software: Software developed specifically for a set of users or customers. The opposite is off-the-shelf software.

best practice: A superior method or innovative practice that contributes to the improved performance of an organization under given context, usually recognized as ‘best’ by other peer organizations.

beta testing: Operational testing by potential and/or existing users/customers at an external site not otherwise involved with the developers, to determine whether or not a component or system satisfies the user/customer needs and fits within the business processes. Beta testing is often employed as a form of external acceptance testing in order to acquire feedback from the market.

big-bang testing: A type of integration testing in which software elements, hardware elements, or both are combined all at once into a component or an overall system, rather than in stages. [After IEEE 610] See also integration testing.

black box testing: Testing, either functional or non-functional, without reference to the internal structure of the component or system.

black box test design technique: Documented procedure to derive and select test cases based on an analysis of the specification, either functional or non-functional, of a component or system without reference to its internal structure.

blocked test case: A test case that cannot be executed because the preconditions for its execution are not fulfilled.

bottom-up testing: An incremental approach to integration testing where the lowest level components are tested first, and then used to facilitate the testing of higher level components. This process is repeated until the component at the top of the hierarchy is tested. See also integration testing.

boundary value: An input value or output value which is on the edge of an equivalence partition or at the smallest incremental distance on either side of an edge, for example the minimum or maximum value of a range.

boundary value analysis: A black box test design technique in which test cases are designed based on boundary values.

boundary value coverage: The percentage of boundary values that have been exercised by a test suite.

branch: A basic block that can be selected for execution based on a program construct in which one of two or more alternative program paths is available, e.g. case, jump, go to, if-then-else.

branch coverage: The percentage of branches that have been exercised by a test suite. 100% branch coverage implies both 100% decision coverage and 100% statement coverage.

branch testing: A white box test design technique in which test cases are designed to execute branches.

business process-based testing: An approach to testing in which test cases are designed based on descriptions and/or knowledge of business processes.

 

C

Capability Maturity Model (CMM): A five level staged framework that describes the key elements of an effective software process. The Capability Maturity Model covers practices for planning, engineering and managing software development and maintenance. [CMM]

Capability Maturity Model Integration (CMMI): A framework that describes the key elements of an effective product development and maintenance process. The Capability Maturity Model Integration covers practices for planning, engineering and managing product development and maintenance. CMMI is the designated successor of the CMM. [CMMI]

capture/playback tool: A type of test execution tool where inputs are recorded during manual testing in order to generate automated test scripts that can be executed later (i.e. replayed). These tools are often used to support automated regression testing.

CASE: Acronym for Computer Aided Software Engineering.

CAST: Acronym for Computer Aided Software Testing. See also test automation.

cause-effect graph: A graphical representation of inputs and/or stimuli (causes) with their associated outputs (effects), which can be used to design test cases.

cause-effect graphing: A black box test design technique in which test cases are designed from cause-effect graphs. [BS 7925/2]

 

certification: The process of confirming that a component, system or person complies with its specified requirements, e.g. by passing an exam.

changeability: The capability of the software product to enable specified modifications to be implemented. [ISO 9126] See also maintainability.

classification tree method: A black box test design technique in which test cases, described by means of a classification tree, are designed to execute combinations of representatives of input and/or output domains. [Grochtmann]

code coverage: An analysis method that determines which parts of the software have been executed (covered) by the test suite and which parts have not been executed, e.g. statement coverage, decision coverage or condition coverage.

co-existence: The capability of the software product to co-exist with other independent software in a common environment sharing common resources. [ISO 9126] See portability testing.

complexity: The degree to which a component or system has a design and/or internal structure that is difficult to understand, maintain and verify. See also cyclomatic complexity.

compliance: The capability of the software product to adhere to standards, conventions or regulations in laws and similar prescriptions. [ISO 9126]

compliance testing: The process of testing to determine the compliance of a component or system.

component: A minimal software item that can be tested in isolation.

component integration testing: Testing performed to expose defects in the interfaces and interaction between integrated components.

component specification: A description of a component’s function in terms of its output values for specified input values under specified conditions, and required non-functional behavior (e.g. resource-utilization).

component testing: The testing of individual software components. [After IEEE 610]

compound condition: Two or more single conditions joined by means of a logical operator (AND, OR or XOR), e.g. ‘A>B AND C>1000’.

concurrency testing: Testing to determine how the occurrence of two or more activities within the same interval of time, achieved either by interleaving the activities or by simultaneous execution, is handled by the component or system. [After IEEE 610]

condition: A logical expression that can be evaluated as True or False, e.g. A>B. See also test condition.

condition coverage: The percentage of condition outcomes that have been exercised by a test suite. 100% condition coverage requires each single condition in every decision statement to be tested as True and False.

condition determination coverage: The percentage of all single condition outcomes that independently affect a decision outcome that have been exercised by a test case suite. 100% condition determination coverage implies 100% decision condition coverage.

condition determination testing: A white box test design technique in which test cases are designed to execute single condition outcomes that independently affect a decision outcome.

condition testing: A white box test design technique in which test cases are designed to execute condition outcomes.

condition outcome: The evaluation of a condition to True or False.

configuration: The composition of a component or system as defined by the number, nature, and interconnections of its constituent parts.

configuration auditing: The function to check on the contents of libraries of configuration items, e.g. for standards compliance. [IEEE 610]

configuration control: An element of configuration management, consisting of the evaluation, co-ordination, approval or disapproval, and implementation of changes to configuration items after formal establishment of their configuration identification. [IEEE 610]

configuration identification: An element of configuration management, consisting of selecting the configuration items for a system and recording their functional and physical characteristics in technical documentation. [IEEE 610]

configuration item: An aggregation of hardware, software or both, that is designated for configuration management and treated as a single entity in the configuration management process. [IEEE 610]

configuration management: A discipline applying technical and administrative direction and surveillance to: identify and document the functional and physical characteristics of a configuration item, control changes to those characteristics, record and report change processing and implementation status, and verify compliance with specified requirements. [IEEE 610]

consistency: The degree of uniformity, standardization, and freedom from contradiction among the documents or parts of a component or system. [IEEE 610]

control flow: An abstract representation of all possible sequences of events (paths) in the execution through a component or system.

conversion testing: Testing of software used to convert data from existing systems for use in replacement systems.

COTS: Acronym for Commercial Off-The-Shelf software.

coverage: The degree, expressed as a percentage, to which a specified coverage item has been exercised by a test suite.

coverage analysis: Measurement of achieved coverage to a specified coverage item during test execution referring to predetermined criteria to determine whether additional testing is required and if so, which test cases are needed.

coverage item: An entity or property used as a basis for test coverage, e.g. equivalence partitions or code statements.

coverage tool: A tool that provides objective measures of what structural elements, e.g. statements, branches have been exercised by the test suite.

cyclomatic complexity: The number of independent paths through a program. Cyclomatic complexity is defined as L - N + 2P, where L = the number of edges/links in a graph, N = the number of nodes in a graph, and P = the number of disconnected parts of the graph (e.g. a calling graph and a subroutine). [After McCabe]
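
For example, a control flow graph with 9 edges, 7 nodes and a single connected component (P = 1) has a cyclomatic complexity of 9 - 7 + 2(1) = 4, i.e. four independent paths.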

D

data definition: An executable statement where a variable is assigned a value.

data driven testing: A scripting technique that stores test input and expected results in a table or spreadsheet, so that a single control script can execute all of the tests in the table. Data driven testing is often used to support the application of test execution tools such as capture/playback tools. [Fewster and Graham] See also keyword driven testing.

data flow: An abstract representation of the sequence and possible changes of the state of data objects, where the state of an object is any of: creation, usage, or destruction. [Beizer]

data flow analysis: A form of static analysis based on the definition and usage of variables.

data flow coverage: The percentage of definition-use pairs that have been exercised by a test case suite.

data flow test: A white box test design technique in which test cases are designed to execute definition and use pairs of variables.

debugging: The process of finding, analyzing and removing the causes of failures in software.

debugging tool: A tool used by programmers to reproduce failures, investigate the state of programs and find the corresponding defect. Debuggers enable programmers to execute programs step by step, to halt a program at any program statement and to set and examine program variables.

decision: A program point at which the control flow has two or more alternative routes. A node with two or more links to separate branches.

decision condition coverage: The percentage of all condition outcomes and decision outcomes that have been exercised by a test suite. 100% decision condition coverage implies both 100% condition coverage and 100% decision coverage.

decision condition testing: A white box test design technique in which test cases are designed to execute condition outcomes and decision outcomes.

decision coverage: The percentage of decision outcomes that have been exercised by a test suite. 100% decision coverage implies both 100% branch coverage and 100% statement coverage.

decision table: A table showing combinations of inputs and/or stimuli (causes) with their associated outputs and/or actions (effects), which can be used to design test cases.

decision table testing: A black box test design technique in which test cases are designed to execute the combinations of inputs and/or stimuli (causes) shown in a decision table. [Veenendaal]

decision testing: A white box test design technique in which test cases are designed to execute decision outcomes.

decision outcome: The result of a decision (which therefore determines the branches to be taken).

defect: A flaw in a component or system that can cause the component or system to fail to perform its required function, e.g. an incorrect statement or data definition. A defect, if encountered during execution, may cause a failure of the component or system.

defect density: The number of defects identified in a component or system divided by the size of the component or system (expressed in standard measurement terms, e.g. lines-of-code, number of classes or function points).

Defect Detection Percentage (DDP): The number of defects found by a test phase, divided by the number found by that test phase and by any other means afterwards.
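
For example, if system testing finds 90 defects and a further 10 are found afterwards, the DDP of system testing is 90 / (90 + 10) = 90%.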

defect report: A document reporting on any flaw in a component or system that can cause the component or system to fail to perform its required function. [After IEEE 829]

defect management: The process of recognizing, investigating, taking action and disposing of defects. It involves recording defects, classifying them and identifying the impact. [After IEEE 1044]

defect masking: An occurrence in which one defect prevents the detection of another. [After IEEE 610]

definition-use pair: The association of the definition of a variable with the use of that variable. Variable uses include computational uses (e.g. multiplication) and predicate uses (directing the execution of a path).

deliverable: Any (work) product that must be delivered to someone other than the (work) product’s author.

design-based testing: An approach to testing in which test cases are designed based on the architecture and/or detailed design of a component or system (e.g. tests of interfaces between components or systems).

desk checking: Testing of software or specification by manual simulation of its execution.

development testing: Formal or informal testing conducted during the implementation of a component or system, usually in the development environment by developers. [After IEEE 610]

documentation testing: Testing the quality of the documentation, e.g. user guide or installation guide.

domain: The set from which valid input and/or output values can be selected.

driver: A software component or test tool that replaces a component that takes care of the control and/or the calling of a component or system. [After TMap]

dynamic analysis: The process of evaluating behavior, e.g. memory performance, CPU usage, of a system or component during execution. [After IEEE 610]

dynamic comparison: Comparison of actual and expected results, performed while the software is being executed, for example by a test execution tool.

dynamic testing: Testing that involves the execution of the software of a component or system.

E

efficiency: The capability of the software product to provide appropriate performance, relative to the amount of resources used under stated conditions. [ISO 9126]

efficiency testing: The process of testing to determine the efficiency of a software product.

elementary comparison testing: A black box test design technique in which test cases are designed to execute combinations of inputs using the concept of condition determination coverage. [TMap]

emulator: A device, computer program, or system that accepts the same inputs and produces the same outputs as a given system. [IEEE 610] See also simulator.

entry criteria: The set of generic and specific conditions for permitting a process to go forward with a defined task, e.g. test phase. The purpose of entry criteria is to prevent a task from starting which would entail more (wasted) effort compared to the effort needed to remove the failed entry criteria. [Gilb and Graham]

entry point: The first executable statement within a component.

equivalence partition: A portion of an input or output domain for which the behavior of a component or system is assumed to be the same, based on the specification.

equivalence partition coverage: The percentage of equivalence partitions that have been exercised by a test suite.

equivalence partitioning: A black box test design technique in which test cases are designed to execute representatives from equivalence partitions. In principle test cases are designed to cover each partition at least once.
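A minimal sketch, assuming a hypothetical specification in which an age field accepts values from 18 to 65 inclusive; that gives three partitions, and one representative value per partition covers each partition once.

    def is_valid_age(age):                 # hypothetical implementation
        return 18 <= age <= 65

    representatives = [
        (10, False),   # partition: age < 18
        (40, True),    # partition: 18 <= age <= 65
        (70, False),   # partition: age > 65
    ]

    for age, expected in representatives:
        assert is_valid_age(age) == expected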

error: A human action that produces an incorrect result. [After IEEE 610]

error guessing: A test design technique where the experience of the tester is used to anticipate what defects might be present in the component or system under test as a result of errors made, and to design tests specifically to expose them.

error seeding: The process of intentionally adding known defects to those already in the component or system for the purpose of monitoring the rate of detection and removal, and estimating the number of remaining defects. [IEEE 610]

error tolerance: The ability of a system or component to continue normal operation despite the presence of erroneous inputs. [After IEEE 610].

exception handling: Behavior of a component or system in response to erroneous input, from either a human user or from another component or system, or to an internal failure.

executable statement: A statement which, when compiled, is translated into object code, and which will be executed procedurally when the program is running and may perform an action on data.

exercised: A program element is said to be exercised by a test case when the input value causes the execution of that element, such as a statement, decision, or other structural element.

exhaustive testing: A test approach in which the test suite comprises all combinations of input values and preconditions.

exit criteria: The set of generic and specific conditions, agreed upon with the stakeholders, for permitting a process to be officially completed. The purpose of exit criteria is to prevent a task from being considered completed when there are still outstanding parts of the task which have not been finished. Exit criteria are used by testing to report against and to plan when to stop testing. [After Gilb and Graham]

exit point: The last executable statement within a component.

expected result: The behavior predicted by the specification, or another source, of the component or system under specified conditions.

exploratory testing: Testing where the tester actively controls the design of the tests as those tests are performed and uses information gained while testing to design new and better tests. [Bach]

F

fail: A test is deemed to fail if its actual result does not match its expected result.

failure: Actual deviation of the component or system from its expected delivery, service or result. [After Fenton]

failure mode: The physical or functional manifestation of a failure. For example, a system in failure mode may be characterized by slow operation, incorrect outputs, or complete termination of execution.

Failure Mode and Effect Analysis (FMEA): A systematic approach to risk identification and analysis of identifying possible modes of failure and attempting to prevent their occurrence.

failure rate: The ratio of the number of failures of a given category to a given unit of measure, e.g. failures per unit of time, failures per number of transactions, failures per number of computer runs. [IEEE 610]

fault tolerance: The capability of the software product to maintain a specified level of performance in cases of software faults (defects) or of infringement of its specified interface. [ISO 9126] See also reliability.

fault tree analysis: A method used to analyze the causes of faults (defects).

feasible path: A path for which a set of input values and preconditions exists which causes it to be executed.

feature: An attribute of a component or system specified or implied by requirements documentation (for example reliability, usability or design constraints). [After IEEE 1008]

finite state machine: A computational model consisting of a finite number of states and transitions between those states, possibly with accompanying actions. [IEEE 610]

formal review: A review characterized by documented procedures and requirements, e.g. inspection.

frozen test basis: A test basis document that can only be amended by a formal change control process. See also baseline.

Function Point Analysis (FPA): Method aiming to measure the size of the functionality of an information system. The measurement is independent of the technology. This measurement may be used as a basis for the measurement of productivity, the estimation of the needed resources, and project control.

functional integration: An integration approach that combines the components or systems for the purpose of getting a basic functionality working early. See also integration testing.

functional requirement: A requirement that specifies a function that a component or system must perform. [IEEE 610]

functional test design technique: Documented procedure to derive and select test cases based on an analysis of the specification of the functionality of a component or system without reference to its internal structure. See also black box test design technique.

functional testing: Testing based on an analysis of the specification of the functionality of a component or system. See also black box testing.

functionality: The capability of the software product to provide functions which meet stated and implied needs when the software is used under specified conditions. [ISO 9126]

functionality testing: The process of testing to determine the functionality of a software product.

G

glass box testing: See white box testing.

H

heuristic evaluation: A static usability test technique to determine the compliance of a user interface with recognized usability principles (the so-called “heuristics”).

high level test case: A test case without concrete (implementation level) values for input data and expected results.

horizontal traceability: The tracing of requirements for a test level through the layers of test documentation (e.g. test plan, test design specification, test case specification and test procedure specification).

 

I

impact analysis: The assessment of change to the layers of development documentation, test documentation and components, in order to implement a given change to specified requirements.

incremental development model: A development life cycle where a project is broken into a series of increments, each of which delivers a portion of the functionality in the overall project requirements. The requirements are prioritized and delivered in priority order in the appropriate increment. In some (but not all) versions of this life cycle model, each subproject follows a ‘mini V-model’ with its own design, coding and testing phases.

incremental testing: Testing where components or systems are integrated and tested one or some at a time, until all the components or systems are integrated and tested.

incident: Any event occurring during testing that requires investigation. [After IEEE 1008]

incident management: The process of recognizing, investigating, taking action and disposing of incidents. It involves recording incidents, classifying them and identifying the impact. [After IEEE 1044]

incident management tool: A tool that facilitates the recording and status tracking of incidents found during testing. They often have workflow-oriented facilities to track and control the allocation, correction and re-testing of incidents and provide reporting facilities.

incident report: A document reporting on any event that occurs during the testing which requires investigation. [After IEEE 829]

independence: Separation of responsibilities, which encourages the accomplishment of objective testing. [After DO-178b]

infeasible path: A path that cannot be exercised by any set of possible input values.

informal review: A review not based on a formal (documented) procedure.

input: A variable (whether stored within a component or outside) that is read by a component.

input domain: The set from which valid input values can be selected. See also domain.

input value: An instance of an input. See also input.

inspection: A type of review that relies on visual examination of documents to detect defects, e.g. violations of development standards and non-conformance to higher level documentation. The most formal review technique and therefore always based on a documented procedure. [After IEEE 610, IEEE 1028]

installability: The capability of the software product to be installed in a specified environment [ISO 9126]. See also portability.

installability testing: The process of testing the installability of a software product. See also portability testing.

installation guide: Supplied instructions on any suitable media, which guides the installer through the installation process. This may be a manual guide, step-by-step procedure, installation wizard, or any other similar process description.

installation wizard: Supplied software on any suitable media, which leads the installer through the installation process. It normally runs the installation process, provides feedback on installation results, and prompts for options.

instrumentation: The insertion of additional code into the program in order to collect information about program behavior during execution.
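A hand-instrumented sketch (the function and the probes are illustrative only): the dictionary and the two increment lines are the inserted code, recording which branches execute without changing the program's result.

    coverage = {"true_branch": 0, "false_branch": 0}

    def absolute_value(x):
        if x < 0:
            coverage["true_branch"] += 1     # inserted probe
            return -x
        coverage["false_branch"] += 1        # inserted probe
        return x

    absolute_value(-5)
    absolute_value(3)
    print(coverage)    # {'true_branch': 1, 'false_branch': 1}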

instrumenter: A software tool used to carry out instrumentation.

intake test: A special instance of a smoke test to decide if the component or system is ready for detailed and further testing. An intake test is typically carried out at the start of the test execution phase.

integration: The process of combining components or systems into larger assemblies.

integration testing: Testing performed to expose defects in the interfaces and in the interactions between integrated components or systems. See also component integration testing, system integration testing.

interface testing: An integration test type that is concerned with testing the interfaces between components or systems.

interoperability: The capability of the software product to interact with one or more specified components or systems. [After ISO 9126] See also functionality.

interoperability testing: The process of testing to determine the interoperability of a software product. See also functionality testing.

invalid testing: Testing using input values that should be rejected by the component or system. See also error tolerance.

isolation testing: Testing of individual components in isolation from surrounding components, with surrounding components being simulated by stubs and drivers, if needed.

K

keyword driven testing: A scripting technique that uses data files to contain not only test data and expected results, but also keywords related to the application being tested. The keywords are interpreted by special supporting scripts that are called by the control script for the test. See also data driven testing.
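A rough sketch, with hypothetical keywords and support functions: each data row names a keyword plus its arguments, and the control script dispatches the row to the matching supporting function.

    def login(user, password):             # supporting script for the 'login' keyword
        return user == "alice" and password == "secret"

    def logout(user):                      # supporting script for the 'logout' keyword
        return True

    keyword_handlers = {"login": login, "logout": logout}

    test_rows = [
        # (keyword, arguments, expected result)
        ("login",  ("alice", "secret"), True),
        ("login",  ("alice", "wrong"),  False),
        ("logout", ("alice",),          True),
    ]

    for keyword, args, expected in test_rows:
        assert keyword_handlers[keyword](*args) == expected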

L

LCSAJ: A Linear Code Sequence And Jump, consisting of the following three items (conventionally identified by line numbers in a source code listing): the start of the linear sequence of executable statements, the end of the linear sequence, and the target line to which control flow is transferred at the end of the linear sequence.

LCSAJ coverage: The percentage of LCSAJs of a component that have been exercised by a test suite. 100% LCSAJ coverage implies 100% decision coverage.

LCSAJ testing: A white box test design technique in which test cases are designed to execute LCSAJs.

learnability: The capability of the software product to enable the user to learn its application. [ISO 9126] See also usability.

load test: A test type concerned with measuring the behavior of a component or system with increasing load, e.g. number of parallel users and/or numbers of transactions to determine what load can be handled by the component or system.

low level test case: A test case with concrete (implementation level) values for input data and expected results.

 

M

maintenance: Modification of a software product after delivery to correct defects, to improve performance or other attributes, or to adapt the product to a modified environment. [IEEE 1219]

maintenance testing: Testing the changes to an operational system or the impact of a changed environment to an operational system.

maintainability: The ease with which a software product can be modified to correct defects, modified to meet new requirements, modified to make future maintenance easier, or adapted to a changed environment. [ISO 9126]

maintainability testing: The process of testing to determine the maintainability of a software product.

management review: A systematic evaluation of software acquisition, supply, development, operation, or maintenance process, performed by or on behalf of management that monitors progress, determines the status of plans and schedules, confirms requirements and their system allocation, or evaluates the effectiveness of management approaches to achieve fitness for purpose. [After IEEE 610, IEEE 1028]

maturity: (1) The capability of an organization with respect to the effectiveness and efficiency of its processes and work practices. See also Capability Maturity Model, Test Maturity Model. (2) The capability of the software product to avoid failure as a result of defects in the software. [ISO 9126] See also reliability.

measure: The number or category assigned to an attribute of an entity by making a measurement [ISO 14598].

measurement: The process of assigning a number or category to an entity to describe an attribute of that entity. [ISO 14598]

measurement scale: A scale that constrains the type of data analysis that can be performed on it. [ISO 14598]

memory leak: A defect in a program’s dynamic store allocation logic that causes it to fail to reclaim memory after it has finished using it, eventually causing the program to fail due to lack of memory.
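In a garbage-collected language the analogous problem is a reference that is kept forever; a common (hypothetical) pattern is a cache that is never pruned, so memory use grows for as long as the program runs.

    _cache = {}

    def render_report(report_id):
        if report_id not in _cache:
            _cache[report_id] = "x" * 1_000_000   # stands in for an expensive result
        return _cache[report_id]                  # entries are never evicted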

metric: A measurement scale and the method used for measurement. [ISO 14598]

milestone: A point in time in a project at which defined (intermediate) deliverables and results should be ready.

moderator: The leader and main person responsible for an inspection or other review process.

monitor: A software tool or hardware device that runs concurrently with the component or system under test and supervises, records and/or analyses the behavior of the component or system. [After IEEE 610]

multiple condition coverage: The percentage of combinations of all single condition outcomes within one statement that have been exercised by a test suite. 100% multiple condition coverage implies 100% condition determination coverage.

multiple condition testing: A white box test design technique in which test cases are designed to execute combinations of single condition outcomes (within one statement).

mutation analysis: A method to determine test suite thoroughness by measuring the extent to which a test suite can discriminate the program from slight variants (mutants) of the program.
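A toy example (both functions are made up): the mutant differs from the original in a single operator, and a test suite that includes the boundary value 18 distinguishes the two, i.e. "kills" the mutant.

    def is_adult(age):
        return age >= 18         # original program

    def is_adult_mutant(age):
        return age > 18          # mutant: '>=' changed to '>'

    tests = [(17, False), (18, True), (30, True)]

    killed = any(is_adult(age) != is_adult_mutant(age) for age, _ in tests)
    assert killed    # a suite without the boundary value 18 would not kill this mutant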

N

N-switch coverage: The percentage of sequences of N+1 transitions that have been exercised by a test suite. [Chow]

N-switch testing: A form of state transition testing in which test cases are designed to execute all valid sequences of N+1 transitions. [Chow] See also state transition testing.

negative testing: Tests aimed at showing that a component or system does not work. Negative testing is related to the testers’ attitude rather than a specific test approach or test design technique. [After Beizer].

non-conformity: Non fulfillment of a specified requirement. [ISO 9000]

non-functional requirement: A requirement that does not relate to functionality, but to attributes such as reliability, efficiency, usability, maintainability and portability.

non-functional testing: Testing the attributes of a component or system that do not relate to functionality, e.g. reliability, efficiency, usability, maintainability and portability.

non-functional test design techniques: Methods used to design or select tests for non-functional testing.

O

off-the-shelf software: A software product that is developed for the general market, i.e. for a large number of customers, and that is delivered to many customers in identical format.

operability: The capability of the software product to enable the user to operate and control it. [ISO 9126] See also usability.

operational environment: Hardware and software products installed at users’ or customers’ sites where the component or system under test will be used. The software may include operating systems, database management systems, and other applications.

operational profile testing: Statistical testing using a model of system operations (short duration tasks) and their probability of typical use. [Musa]

operational testing: Testing conducted to evaluate a component or system in its operational environment. [IEEE 610]

output: A variable (whether stored within a component or outside) that is written by a component.

output domain: The set from which valid output values can be selected. See also domain.

output value: An instance of an output. See also output.

P

pair programming: A software development approach whereby lines of code (production and/or test) of a component are written by two programmers sitting at a single computer. This implicitly means ongoing real-time code reviews are performed.

pair testing: Two testers work together to find defects. Typically, they share one computer and trade control of it while testing.

 

pass: A test is deemed to pass if its actual result matches its expected result.

pass/fail criteria: Decision rules used to determine whether a test item (function) or feature has passed or failed a test. [IEEE 829]

path: A sequence of events, e.g. executable statements, of a component or system from an entry point to an exit point.

path coverage: The percentage of paths that have been exercised by a test suite. 100% path coverage implies 100% LCSAJ coverage.

path sensitizing: Choosing a set of input values to force the execution of a given path.
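A small sketch with an invented function: to sensitize the path that takes both True branches, the input must satisfy x > 0 and x % 2 == 0, so x = 4 is one suitable choice.

    def classify(x):
        label = ""
        if x > 0:
            label += "positive "
        if x % 2 == 0:
            label += "even"
        return label.strip()

    assert classify(4) == "positive even"   # forces the True/True path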

path testing: A white box test design technique in which test cases are designed to execute paths.

performance: The degree to which a system or component accomplishes its designated functions within given constraints regarding processing time and throughput rate. [After IEEE 610] See efficiency.

performance indicator: A high level metric of effectiveness and/or efficiency used to guide and control progressive development, e.g. Defect Detection Percentage (DDP) for testing. [CMMI]

performance testing: The process of testing to determine the performance of a software product. See efficiency testing.

performance testing tool: A tool to support performance testing and that usually has two main facilities: load generation and test transaction measurement. Load generation can simulate either multiple users or high volumes of input data. During execution, response time measurements are taken from selected transactions and these are logged. Performance testing tools normally provide reports based on test logs and graphs of load against response times.

phase test plan: A test plan that typically addresses one test level.

portability: The ease with which the software product can be transferred from one hardware or software environment to another. [ISO 9126]

portability testing: The process of testing to determine the portability of a software product.

postcondition: Environmental and state conditions that must be fulfilled after the execution of a test or test procedure.

post-execution comparison: Comparison of actual and expected results, performed after the software has finished running.

precondition: Environmental and state conditions that must be fulfilled before the component or system can be executed with a particular test or test procedure.

priority: The level of (business) importance assigned to an item, e.g. defect.

process cycle test: A black box test design technique in which test cases are designed to execute business procedures and processes. [TMap]

 process: A set of interrelated activities, which transform inputs into outputs. [ISO 12207]

project: A project is a unique set of coordinated and controlled activities with start and finish dates undertaken to achieve an objective conforming to specific requirements, including the constraints of time, cost and resources. [ISO 9000]

 project test plan: A test plan that typically addresses multiple test levels.

pseudo-random: A series which appears to be random but is in fact generated according to some prearranged sequence.
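In test automation this matters because seeding the generator makes "random" input reproducible; the Python sketch below (standard library only) shows that the same seed yields the same sequence, so a failing run can be repeated exactly.

    import random

    random.seed(42)
    first_run = [random.randint(0, 100) for _ in range(5)]

    random.seed(42)
    second_run = [random.randint(0, 100) for _ in range(5)]

    assert first_run == second_run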

Q

quality: The degree to which a component, system or process meets specified requirements and/or user/customer needs and expectations. [After IEEE 610]

quality assurance: Part of quality management focused on providing confidence that quality requirements will be fulfilled. [ISO 9000]

quality attribute: A feature or characteristic that affects an item’s quality. [IEEE 610]

quality management: Coordinated activities to direct and control an organization with regard to quality. Direction and control with regard to quality generally includes the establishment of the quality policy and quality objectives, quality planning, quality control, quality assurance and quality improvement. [ISO 9000]

R

random testing: A black box test design technique where test cases are selected, possibly using a pseudo-random generation algorithm, to match an operational profile. This technique can be used for testing non-functional attributes such as reliability and performance.

recoverability: The capability of the software product to re-establish a specified level of performance and recover the data directly affected in case of failure. [ISO 9126] See also reliability.

recoverability testing: The process of testing to determine the recoverability of a software product. See also reliability testing.

regression testing: Testing of a previously tested program following modification to ensure that defects have not been introduced or uncovered in unchanged areas of the software, as a result of the changes made. It is performed when the software or its environment is changed.

release note: A document identifying test items, their configuration, current status and other delivery information delivered by development to testing, and possibly other stakeholders, at the start of a test execution phase. [After IEEE 829]

reliability: The ability of the software product to perform its required functions under stated conditions for a specified period of time, or for a specified number of operations. [ISO 9126]

reliability testing: The process of testing to determine the reliability of a software product.

replaceability: The capability of the software product to be used in place of another specified software product for the same purpose in the same environment. [ISO 9126] See also portability.

requirement: A condition or capability needed by a user to solve a problem or achieve an objective that must be met or possessed by a system or system component to satisfy a contract, standard, specification, or other formally imposed document. [After IEEE 610]

requirements-based testing: An approach to testing in which test cases are designed based on test objectives and test conditions derived from requirements, e.g. tests that exercise specific functions or probe non-functional attributes such as reliability or usability.

requirements management tool: A tool that supports the recording of requirements, requirements attributes (e.g. priority, knowledge responsible) and annotation, and facilitates traceability through layers of requirements and requirements change management. Some requirements management tools also provide facilities for static analysis, such as consistency checking and violations to pre-defined requirements rules.

requirements phase: The period of time in the software life cycle during which the requirements for a software product are defined and documented. [IEEE 610]

resource utilization: The capability of the software product to use appropriate amounts and types of resources, for example the amounts of main and secondary memory used by the program and the sizes of required temporary or overflow files, when the software performs its function under stated conditions. [After ISO 9126] See also efficiency.

resource utilization testing: The process of testing to determine the resource-utilization of a software product.

result: The consequence/outcome of the execution of a test. It includes outputs to screens, changes to data, reports, and communication messages sent out. See also actual result, expected result.

resumption criteria: The testing activities that must be repeated when testing is re-started after a suspension. [After IEEE 829]

re-testing: Testing that runs test cases that failed the last time they were run, in order to verify the success of corrective actions.

review: An evaluation of a product or project status to ascertain discrepancies from planned results and to recommend improvements. Examples include management review, informal review, technical review, inspection, and walkthrough. [After IEEE 1028]

reviewer: The person involved in the review who shall identify and describe anomalies in the product or project under review. Reviewers can be chosen to represent different viewpoints and roles in the review process.

risk: A factor that could result in future negative consequences; usually expressed as impact and likelihood.

risk analysis: The process of assessing identified risks to estimate their impact and probability of occurrence (likelihood).

risk-based testing: Testing oriented towards exploring and providing information about product risks. [After Gerrard]

risk control: The process through which decisions are reached and protective measures are implemented for reducing risks to, or maintaining risks within, specified levels.

risk identification: The process of identifying risks using techniques such as brainstorming, checklists and failure history.

risk management: Systematic application of procedures and practices to the tasks of identifying, analyzing, prioritizing, and controlling risk.

robustness: The degree to which a component or system can function correctly in the presence of invalid inputs or stressful environmental conditions. [IEEE 610] See also error tolerance, fault tolerance.

root cause: An underlying factor that caused a non-conformance and possibly should be permanently eliminated through process improvement.

 

S

safety: The capability of the software product to achieve acceptable levels of risk of harm to people, business, software, property or the environment in a specified context of use. [ISO 9126]

safety testing: The process of testing to determine the safety of a software product.

scalability: The capability of the software product to be upgraded to accommodate increased loads. [After Gerrard]

scalability testing: Testing to determine the scalability of the software product.

scribe: The person who has to record each defect mentioned and any suggestions for improvement during a review meeting, on a logging form. The scribe has to ensure that the logging form is readable and understandable.

scripting language: A programming language in which executable test scripts are written, used by a test execution tool (e.g. a capture/replay tool).

security: Attributes of software products that bear on its ability to prevent unauthorized access, whether accidental or deliberate, to programs and data. [ISO 9126]

security testing: Testing to determine the security of the software product.

severity: The degree of impact that a defect has on the development or operation of a component or system. [After IEEE 610]

simulation: The representation of selected behavioral characteristics of one physical or abstract system by another system. [ISO 2382/1]

simulator: A device, computer program or system used during testing, which behaves or operates like a given system when provided with a set of controlled inputs. [After IEEE 610, DO178b] See also emulator.

smoke test: A subset of all defined/planned test cases that cover the main functionality of a component or system, to ascertain that the most crucial functions of a program work, but not bothering with finer details. A daily build and smoke test is among industry best practices. See also intake test.

software quality: The totality of functionality and features of a software product that bear on its ability to satisfy stated or implied needs. [After ISO 9126]

specification: A document that specifies, ideally in a complete, precise and verifiable manner, the requirements, design, behavior, or other characteristics of a component or system, and, often, the procedures for determining whether these provisions have been satisfied. [After IEEE 610]

specification-based test design technique: See black box test design technique.

specified input: An input for which the specification predicts a result.

stability: The capability of the software product to avoid unexpected effects from modifications in the software. [ISO 9126] See also maintainability.

 

state diagram: A diagram that depicts the states that a component or system can assume, and shows the events or circumstances that cause and/or result from a change from one state to another. [IEEE 610]

state table: A grid showing the resulting transitions for each state combined with each possible event, showing both valid and invalid transitions.

state transition: A transition between two states of a component or system.

state transition testing: A black box test design technique in which test cases are designed to execute valid and invalid state transitions. See also N-switch testing.
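A minimal sketch using an invented document workflow: the table defines the valid transitions, and the tests exercise both a valid sequence and an invalid event that must be rejected.

    valid_transitions = {
        ("draft",     "submit"):  "in_review",
        ("in_review", "approve"): "published",
        ("in_review", "reject"):  "draft",
    }

    def next_state(state, event):
        key = (state, event)
        if key not in valid_transitions:
            raise ValueError(f"invalid transition: {key}")
        return valid_transitions[key]

    # Valid transitions.
    assert next_state("draft", "submit") == "in_review"
    assert next_state("in_review", "approve") == "published"

    # Invalid transition: approving a draft must be rejected.
    try:
        next_state("draft", "approve")
        assert False, "expected the invalid transition to be rejected"
    except ValueError:
        pass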

statement: An entity in a programming language, which is typically the smallest indivisible unit of execution.

statement coverage: The percentage of executable statements that have been exercised by a test suite.

statement testing: A white box test design technique in which test cases are designed to execute statements.

static analysis: Analysis of software artifacts, e.g. requirements or code, carried out without execution of these software artifacts.

static analyzer: A tool that carries out static analysis.

static code analysis: Analysis of program source code carried out without execution of that software.

static code analyzer: A tool that carries out static code analysis. The tool checks source code for certain properties such as conformance to coding standards, quality metrics or data flow anomalies.

static testing: Testing of a component or system at specification or implementation level without execution of that software, e.g. reviews or static code analysis.

statistical testing: A test design technique in which a model of the statistical distribution of the input is used to construct representative test cases. See also operational profile testing.

status accounting: An element of configuration management, consisting of the recording and reporting of information needed to manage a configuration effectively. This information includes a listing of the approved configuration identification, the status of proposed changes to the configuration, and the implementation status of the approved changes. [IEEE 610]

 

stress testing: Testing conducted to evaluate a system or component at or beyond the limits of its specified requirements. [IEEE 610]

structural coverage: Coverage measures based on the internal structure of the component.

structural test design technique: See white box test design technique.

stub: A skeletal or special-purpose implementation of a software component, used to develop or test a component that calls or is otherwise dependent on it. It replaces a called component. [After IEEE 610]
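A hedged sketch (the order component and the gateway are invented): the stub stands in for a real payment gateway and returns canned answers, so the calling component can be tested in isolation.

    def place_order(amount, gateway):        # component under test
        if gateway.charge(amount):
            return "confirmed"
        return "declined"

    class PaymentGatewayStub:                # special-purpose replacement
        def __init__(self, accept):
            self.accept = accept

        def charge(self, amount):
            return self.accept               # canned answer, no real payment call

    assert place_order(25, PaymentGatewayStub(accept=True)) == "confirmed"
    assert place_order(25, PaymentGatewayStub(accept=False)) == "declined"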

subpath: A sequence of executable statements within a component.

suspension criteria: The criteria used to (temporarily) stop all or a portion of the testing activities on the test items. [After IEEE 829]

suitability: The capability of the software product to provide an appropriate set of functions for specified tasks and user objectives. [ISO 9126] See also functionality.

Software Usability Measurement Inventory (SUMI): A questionnaire based usability test technique to evaluate the usability, e.g. user-satisfaction, of a component or system. [Veenendaal]

syntax testing: A black box test design technique in which test cases are designed based upon the definition of the input domain and/or output domain.

system: A collection of components organized to accomplish a specific function or set of functions. [IEEE 610]

system integration testing: Testing the integration of systems and packages; testing interfaces to external organizations (e.g. Electronic Data Interchange, Internet).

system testing: The process of testing an integrated system to verify that it meets specified requirements. [Hetzel]

 

T

technical review: A peer group discussion activity that focuses on achieving consensus on the technical approach to be taken. A technical review is also known as a peer review. [Gilb and Graham, IEEE 1028]

test approach: The implementation of the test strategy for a specific project. It typically includes the decisions made that follow based on the (test) project’s goal and the risk assessment carried out, starting points regarding the test process, the test design techniques to be applied, exit criteria and test types to be performed.

test automation: The use of software to perform or support test activities, e.g. test management, test design, test execution and results checking.

test basis: All documents from which the requirements of a component or system can be inferred. The documentation on which the test cases are based. If a document can be amended only by way of formal amendment procedure, then the test basis is called a frozen test basis. [After TMap]

test case: A set of input values, execution preconditions, expected results and execution postconditions, developed for a particular objective or test condition, such as to exercise a particular program path or to verify compliance with a specific requirement. [After IEEE 610]
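For illustration, one test case written out with its constituent parts (all values are invented):

    test_case = {
        "id": "TC-001",
        "objective": "verify that a valid discount code reduces the order total",
        "preconditions": ["customer is logged in", "cart contains one item at 100.00"],
        "inputs": {"discount_code": "SAVE10"},
        "expected_result": {"order_total": 90.00},
        "postconditions": ["discount code is marked as used"],
    }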

test case specification: A document specifying a set of test cases (objective, inputs, test actions, expected results, and execution preconditions) for a test item. [After IEEE 829]

test charter: A statement of test objectives, and possibly test ideas. Test charters are used, amongst others, in exploratory testing. See also exploratory testing.

test comparator: A test tool to perform automated test comparison.

test comparison: The process of identifying differences between the actual results produced by the component or system under test and the expected results for a test. Test comparison can be performed during test execution (dynamic comparison) or after test execution.

test condition: An item or event of a component or system that could be verified by one or more test cases, e.g. a function, transaction, quality attribute, or structural element.

test data: Data that exists (for example, in a database) before a test is executed, and that affects or is affected by the component or system under test.

test data preparation tool: A type of test tool that enables data to be selected from existing databases or created, generated, manipulated and edited for use in testing.

 

test design specification: A document specifying the test conditions (coverage items) for a test item, the detailed test approach and identifying the associated high level test cases. [After IEEE 829]

test design tool: A tool that supports the test design activity by generating test inputs from a specification that may be held in a CASE tool repository, e.g. requirements management tool, or from specified test conditions held in the tool itself.

test design technique: A method used to derive or select test cases.

test environment: An environment containing hardware, instrumentation, simulators, software tools, and other support elements needed to conduct a test. [After IEEE 610]

test evaluation report: A document produced at the end of the test process summarizing all testing activities and results. It also contains an evaluation of the test process and lessons learned.

test execution: The process of running a test on the component or system under test, producing actual result(s).

test execution automation: The use of software, e.g. capture/playback tools, to control the execution of tests, the comparison of actual results to expected results, the setting up of test preconditions, and other test control and reporting functions.

test execution phase: The period of time in a software development life cycle during which the components of a software product are executed, and the software product is evaluated to determine whether or not requirements have been satisfied. [IEEE 610]

test execution schedule: A scheme for the execution of test procedures. The test procedures are included in the test execution schedule in their context and in the order in which they are to be executed.

test execution technique: The method used to perform the actual test execution, either manually or automated.

test execution tool: A type of test tool that is able to execute other software using an automated test script, e.g. capture/playback. [Fewster and Graham]

test harness: A test environment comprised of stubs and drivers needed to conduct a test.

test infrastructure: The organizational artifacts needed to perform testing, consisting of test environments, test tools, office environment and procedures.

test item: The individual element to be tested. There usually is one test object and many test items. See also test object.

test level: A group of test activities that are organized and managed together. A test level is linked to the responsibilities in a project. Examples of test levels are component test, integration test, system test and acceptance test. [After TMap]

test log: A chronological record of relevant details about the execution of tests. [IEEE 829]

test logging: The process of recording information about tests executed into a test log.

test manager: The person responsible for testing and evaluating a test object. The individual who directs, controls, administers, plans and regulates the evaluation of a test object.

test management: The planning, estimating, monitoring and control of test activities, typically carried out by a test manager.

Test Maturity Model (TMM): A five level staged framework for test process improvement, related to the Capability Maturity Model (CMM) that describes the key elements of an effective test process.

Test Process Improvement (TPI): A continuous framework for test process improvement that describes the key elements of an effective test process, especially targeted at system testing and acceptance testing.

test object: The component or system to be tested. See also test item.

test objective: A reason or purpose for designing and executing a test.

test oracle: A source to determine expected results to compare with the actual result of the software under test. An oracle may be the existing system (for a benchmark), a user manual, or an individual’s specialized knowledge, but should not be the code. [After Adrion]

test performance indicator: A metric, in general high level, indicating to what extent a certain target value or criterion is met. Often related to test process improvement objectives, e.g. Defect Detection Percentage (DDP).

test phase: A distinct set of test activities collected into a manageable phase of a project, e.g. the execution activities of a test level. [After Gerrard]

test plan: A document describing the scope, approach, resources and schedule of intended test activities. It identifies amongst others test items, the features to be tested, the testing tasks, who will do each task, degree of tester independence, the test environment, the test design techniques and test measurement techniques to be used, and the rationale for their choice, and any risks requiring contingency planning. It is a record of the test planning process. [After IEEE 829]

test planning: The activity of establishing or updating a test plan.

test policy: A high level document describing the principles, approach and major objectives of the organization regarding testing.

test point analysis (TPA): A formula based test estimation method based on function point analysis. [TMap]

test procedure: See test procedure specification.

test procedure specification: A document specifying a sequence of actions for the execution of a test. Also known as test script or manual test script. [After IEEE 829]

test process: The fundamental test process comprises planning, specification, execution, recording and checking for completion. [BS 7925/2]

test repeatability: An attribute of a test indicating whether the same results are produced each time the test is executed.

test run: Execution of a test on a specific version of the test object.

test script: Commonly used to refer to a test procedure specification, especially an automated one.

test specification: A document that consists of a test design specification, test case specification and/or test procedure specification.

test strategy: A high-level document defining the test levels to be performed and the testing within those levels for a programme (one or more projects).

test suite: A set of several test cases for a component or system under test, where the post condition of one test is often used as the precondition for the next one.

test summary report: A document summarizing testing activities and results. It also contains an evaluation of the corresponding test items against exit criteria. [After IEEE 829]

test target: A set of exit criteria.

test tool: A software product that supports one or more test activities, such as planning and control, specification, building initial files and data, test execution and test analysis. [TMap] See also CAST.

test type: A group of test activities aimed at testing a component or system regarding one or more interrelated quality attributes. A test type is focused on a specific test objective, i.e. reliability test, usability test, regression test etc., and may take place on one or more test levels or test phases. [After TMap]

testability: The capability of the software product to enable modified software to be tested. [ISO 9126] See also maintainability.

testability review: A detailed check of the test basis to determine whether the test basis is at an adequate quality level to act as an input document for the test process. [After TMap]

testable requirements: The degree to which a requirement is stated in terms that permit establishment of test designs (and subsequently test cases) and execution of tests to determine whether the requirements have been met. [After IEEE 610]

tester: A technically skilled professional who is involved in the testing of a component or system.

testing: The process consisting of all life cycle activities, both static and dynamic, concerned with planning, preparation and evaluation of software products and related work products to determine that they satisfy specified requirements, to demonstrate that they are fit for purpose and to detect defects.

testware: Artifacts produced during the test process required to plan, design, and execute tests, such as documentation, scripts, inputs, expected results, set-up and clear-up procedures, files, databases, environment, and any additional software or utilities used in testing. [After Fewster and Graham]

thread testing: A version of component integration testing where the progressive integration of components follows the implementation of subsets of the requirements, as opposed to the integration of components by levels of a hierarchy.

traceability: The ability to identify related items in documentation and software, such as requirements with associated tests. See also horizontal traceability, vertical traceability.

top-down testing: An incremental approach to integration testing where the component at the top of the component hierarchy is tested first, with lower level components being simulated by stubs. Tested components are then used to test lower level components. The process is repeated until the lowest level components have been tested.

 

U

understandability: The capability of the software product to enable the user to understand whether the software is suitable, and how it can be used for particular tasks and conditions of use. [ISO 9126] See also usability.

unreachable code: Code that cannot be reached and therefore is impossible to execute.

usability: The capability of the software to be understood, learned, used and attractive to the user when used under specified conditions. [ISO 9126]

usability testing: Testing to determine the extent to which the software product is understood, easy to learn, easy to operate and attractive to the users under specified conditions. [After ISO 9126]

use case testing: A black box test design technique in which test cases are designed to execute user scenarios.

user test: A test whereby real-life users are involved to evaluate the usability of a component or system.

 

V

V-model: A framework to describe the software development life cycle activities from requirements specification to maintenance. The V-model illustrates how testing activities can be integrated into each phase of the software development life cycle.

validation: Confirmation by examination and through provision of objective evidence that the requirements for a specific intended use or application have been fulfilled. [ISO 9000]

variable: An element of storage in a computer that is accessible by a software program by referring to it by a name.

verification: Confirmation by examination and through the provision of objective evidence that specified requirements have been fulfilled. [ISO 9000]

vertical traceability: The tracing of requirements through the layers of development documentation to components.

volume testing: Testing where the system is subjected to large volumes of data. See also resource-utilization testing.

 

W

walkthrough: A step-by-step presentation by the author of a document in order to gather information and to establish a common understanding of its content. [Freedman and Weinberg, IEEE 1028]

white box test design technique: Documented procedure to derive and select test cases based on an analysis of the internal structure of a component or system.

white box testing: Testing based on an analysis of the internal structure of the component or system.

Wide Band Delphi: An expert based test estimation technique that aims at making an accurate estimation using the collective wisdom of the team members.