
Archive for March, 2008

Software Testing FAQ

What are 5 common problems in the software development process?

  • poor requirements – if requirements are unclear, incomplete, too general, or not testable, there will be problems.
  • unrealistic schedule – if too much work is crammed in too little time, problems are inevitable.
  • inadequate testing – no one will know whether or not the program is any good until the customer complains or systems crash.
  • featuritis – requests to pile on new features after development is underway; extremely common.
  • miscommunication – if developers don’t know what’s needed or customers have erroneous expectations, problems can be expected.

What are 5 common solutions to software development problems?

  • solid requirements – clear, complete, detailed, cohesive, attainable, testable requirements that are agreed to by all players. In ‘agile’-type environments, continuous close coordination with customers/end-users is necessary to ensure that changing/emerging requirements are understood.
  • realistic schedules – allow adequate time for planning, design, testing, bug fixing, re-testing, changes, and documentation; personnel should be able to complete the project without burning out.
  • adequate testing – start testing early on, re-test after fixes or changes, plan for adequate time for testing and bug-fixing. ‘Early’ testing could include static code analysis/testing, test-first development, unit testing by developers, built-in testing and diagnostic capabilities, automated post-build testing, etc.
  • stick to initial requirements where feasible – be prepared to defend against excessive changes and additions once development has begun, and be prepared to explain consequences. If changes are necessary, they should be adequately reflected in related schedule changes. If possible, work closely with customers/end-users to manage expectations. In ‘agile’-type environments, initial requirements may be expected to change significantly, requiring that true agile processes be in place.
  • communication – require walkthroughs and inspections when appropriate; make extensive use of group communication tools – groupware, wikis, bug-tracking tools and change management tools, intranet capabilities, etc.; ensure that information/documentation is available and up-to-date – preferably electronic, not paper; promote teamwork and cooperation; use prototypes and/or continuous communication with end-users if possible to clarify expectations.

What are some recent major computer system failures caused by software bugs?

  • News reports in December of 2007 indicated that significant software problems were continuing to occur in a new ERP payroll system for a large urban school system. It was believed that more than one third of employees had received incorrect paychecks at various times since the new system went live in January of that year, resulting in overpayments of $53 million, as well as underpayments. An employees’ union brought a lawsuit against the school system, the cost of the ERP system was expected to rise by 40%, and the non-payroll part of the ERP system was delayed. Inadequate testing reportedly contributed to the problems.
  • In November of 2007 a regional government reportedly brought a multi-million dollar lawsuit against a software services vendor, claiming that the vendor ‘minimized quality’ in delivering software for a large criminal justice information system and the system did not meet requirements. The vendor also sued its subcontractor on the project.
  • In June of 2007 news reports claimed that software flaws in a popular online stock-picking contest could be used to gain an unfair advantage in pursuit of the game’s large cash prizes. Outside investigators were called in and in July the contest winner was announced. Reportedly the winner had previously been in 6th place, indicating that the top 5 contestants may have been disqualified.
  • A software problem contributed to a rail car fire in a major underground metro system in April of 2007 according to newspaper accounts. The software reportedly failed to perform as expected in detecting and preventing excess power usage in equipment on a new passenger rail car, resulting in overheating and fire in the rail car, and evacuation and shutdown of part of the system.
  • Tens of thousands of medical devices were recalled in March of 2007 to correct a software bug. According to news reports, the software would not reliably indicate when available power to the device was too low.
  • A September 2006 news report indicated problems with software utilized in a state government’s primary election, resulting in periodic unexpected rebooting of voter checkin machines, which were separate from the electronic voting machines, and resulted in confusion and delays at voting sites. The problem was reportedly due to insufficient testing.
  • In August of 2006 a U.S. government student loan service erroneously made public the personal data of as many as 21,000 borrowers on its web site, due to a software error. The bug was fixed and the government department subsequently offered to arrange for free credit monitoring services for those affected.
  • A software error reportedly resulted in overbilling of up to several thousand dollars to each of 11,000 customers of a major telecommunications company in June of 2006. It was reported that the software bug was fixed within days, but that correcting the billing errors would take much longer.
  • News reports in May of 2006 described a multi-million dollar lawsuit settlement paid by a healthcare software vendor to one of its customers. It was reported that the customer claimed there were problems with the software they had contracted for, including poor integration of software modules, and problems that resulted in missing or incorrect data used by medical personnel.
  • In early 2006 problems in a government’s financial monitoring software resulted in incorrect election candidate financial reports being made available to the public. The government’s election finance reporting web site had to be shut down until the software was repaired.
  • Trading on a major Asian stock exchange was brought to a halt in November of 2005, reportedly due to an error in a system software upgrade. The problem was rectified and trading resumed later the same day.
  • A May 2005 newspaper article reported that a major hybrid car manufacturer had to install a software fix on 20,000 vehicles due to problems with invalid engine warning lights and occasional stalling. In the article, an automotive software specialist indicated that the automobile industry spends $2 billion to $3 billion per year fixing software problems.
  • Media reports in January of 2005 detailed severe problems with a $170 million high-profile U.S. government IT systems project. Software testing was one of the five major problem areas according to a report of the commission reviewing the project. In March of 2005 it was decided to scrap the entire project.
  • In July 2004 newspapers reported that a new government welfare management system in Canada costing several hundred million dollars was unable to handle a simple benefits rate increase after being put into live operation. Reportedly the original contract allowed for only 6 weeks of acceptance testing and the system was never tested for its ability to handle a rate increase.
  • Millions of bank accounts were impacted by errors due to installation of inadequately tested software code in the transaction processing system of a major North American bank, according to mid-2004 news reports. Articles about the incident stated that it took two weeks to fix all the resulting errors, that additional problems resulted when the incident drew a large number of e-mail phishing attacks against the bank’s customers, and that the total cost of the incident could exceed $100 million.
  • A bug in site management software utilized by companies with a significant percentage of worldwide web traffic was reported in May of 2004. The bug resulted in performance problems for many of the sites simultaneously and required disabling of the software until the bug was fixed.
  • According to news reports in April of 2004, a software bug was determined to be a major contributor to the 2003 Northeast blackout, the worst power system failure in North American history. The failure involved loss of electrical power to 50 million customers, forced shutdown of 100 power plants, and economic losses estimated at $6 billion. The bug was reportedly in one utility company’s vendor-supplied power monitoring and management system, which was unable to correctly handle and report on an unusual confluence of initially localized events. The error was found and corrected after examining millions of lines of code.
  • In early 2004, news reports revealed the intentional use of a software bug as a counter-espionage tool. According to the report, in the early 1980’s one nation surreptitiously allowed a hostile nation’s espionage service to steal a version of sophisticated industrial software that had intentionally-added flaws. This eventually resulted in major industrial disruption in the country that used the stolen flawed software.
  • A major U.S. retailer was reportedly hit with a large government fine in October of 2003 due to web site errors that enabled customers to view one another’s online orders.
  • News stories in the fall of 2003 stated that a manufacturing company recalled all their transportation products in order to fix a software problem causing instability in certain circumstances. The company found and reported the bug itself and initiated the recall procedure in which a software upgrade fixed the problems.
  • In August of 2003 a U.S. court ruled that a lawsuit against a large online brokerage company could proceed; the lawsuit reportedly involved claims that the company was not fixing system problems that sometimes resulted in failed stock trades, based on the experiences of 4 plaintiffs during an 8-month period. A previous lower court’s ruling that “…six miscues out of more than 400 trades does not indicate negligence.” was invalidated.
  • In April of 2003 it was announced that a large student loan company in the U.S. made a software error in calculating the monthly payments on 800,000 loans. Although borrowers were to be notified of an increase in their required payments, the company will still reportedly lose $8 million in interest. The error was uncovered when borrowers began reporting inconsistencies in their bills.
  • News reports in February of 2003 revealed that the U.S. Treasury Department mailed 50,000 Social Security checks without any beneficiary names. A spokesperson indicated that the missing names were due to an error in a software change. Replacement checks were subsequently mailed out with the problem corrected, and recipients were then able to cash their Social Security checks.
  • In March of 2002 it was reported that software bugs in Britain’s national tax system resulted in more than 100,000 erroneous tax overcharges. The problem was partly attributed to the difficulty of testing the integration of multiple systems.
  • A newspaper columnist reported in July 2001 that a serious flaw was found in off-the-shelf software that had long been used in systems for tracking certain U.S. nuclear materials. The same software had been recently donated to another country to be used in tracking their own nuclear materials, and it was not until scientists in that country discovered the problem, and shared the information, that U.S. officials became aware of the problems.
  • According to newspaper stories in mid-2001, a major systems development contractor was fired and sued over problems with a large retirement plan management system. According to the reports, the client claimed that system deliveries were late, the software had excessive defects, and it caused other systems to crash.
  • In January of 2001 newspapers reported that a major European railroad was hit by the aftereffects of the Y2K bug. The company found that many of their newer trains would not run due to their inability to recognize the date ‘31/12/2000’; the trains were started by altering the control system’s date settings.
  • News reports in September of 2000 told of a software vendor settling a lawsuit with a large mortgage lender; the vendor had reportedly delivered an online mortgage processing system that did not meet specifications, was delivered late, and didn’t work.
  • In early 2000, major problems were reported with a new computer system in a large suburban U.S. public school district with 100,000+ students; problems included 10,000 erroneous report cards and students left stranded by failed class registration systems; the district’s CIO was fired. The school district decided to reinstate its original 25-year-old system for at least a year until the bugs were worked out of the new system by the software vendors.
  • A review board concluded that the NASA Mars Polar Lander failed in December 1999 due to software problems that caused improper functioning of retro rockets utilized by the Lander as it entered the Martian atmosphere.
  • In October of 1999 the $125 million NASA Mars Climate Orbiter spacecraft was believed to be lost in space due to a simple data conversion error. It was determined that spacecraft software used certain data in English units that should have been in metric units. Among other tasks, the orbiter was to serve as a communications relay for the Mars Polar Lander mission, which failed for unknown reasons in December 1999. Several investigating panels were convened to determine the process failures that allowed the error to go undetected.
  • Bugs in software supporting a large commercial high-speed data network affected 70,000 business customers over a period of 8 days in August of 1999. Among those affected was the electronic trading system of the largest U.S. futures exchange, which was shut down for most of a week as a result of the outages.
  • In April of 1999 a software bug caused the failure of a $1.2 billion U.S. military satellite launch, the costliest unmanned accident in the history of Cape Canaveral launches. The failure was the latest in a string of launch failures, triggering a complete military and industry review of U.S. space launch programs, including software integration and testing processes. Congressional oversight hearings were requested.
  • A small town in Illinois in the U.S. received an unusually large monthly electric bill of $7 million in March of 1999. This was about 700 times larger than its normal bill. It turned out to be due to bugs in new software that had been purchased by the local power company to deal with Y2K software issues.
  • In early 1999 a major computer game company recalled all copies of a popular new product due to software problems. The company made a public apology for releasing a product before it was ready.
  • The computer system of a major online U.S. stock trading service failed during trading hours several times over a period of days in February of 1999 according to nationwide news reports. The problem was reportedly due to bugs in a software upgrade intended to speed online trade confirmations.
  • In April of 1998 a major U.S. data communications network failed for 24 hours, crippling a large part of some U.S. credit card transaction authorization systems as well as other large U.S. bank, retail, and government data systems. The cause was eventually traced to a software bug.
  • January 1998 news reports told of software problems at a major U.S. telecommunications company that resulted in no charges for long distance calls for a month for 400,000 customers. The problem went undetected until customers called up with questions about their bills.
  • In November of 1997 the stock of a major health industry company dropped 60% due to reports of failures in computer billing systems, problems with a large database conversion, and inadequate software testing. It was reported that more than $100,000,000 in receivables had to be written off and that multi-million dollar fines were levied on the company by government agencies.
  • A retail store chain filed suit in August of 1997 against a transaction processing system vendor (not a credit card company) due to the software’s inability to handle credit cards with year 2000 expiration dates.
  • In August of 1997 one of the leading consumer credit reporting companies reportedly shut down their new public web site after less than two days of operation due to software problems. The new site allowed web site visitors instant access, for a small fee, to their personal credit reports. However, a number of initial users ended up viewing each others’ reports instead of their own, resulting in irate customers and nationwide publicity. The problem was attributed to “…unexpectedly high demand from consumers and faulty software that routed the files to the wrong computers.”
  • In November of 1996, newspapers reported that software bugs caused the 411 telephone information system of one of the U.S. RBOCs to fail for most of a day. Most of the 2000 operators had to search through phone books instead of using their 13,000,000-listing database. The bugs were introduced by new software modifications and the problem software had been installed on both the production and backup systems. A spokesman for the software vendor reportedly stated that ‘It had nothing to do with the integrity of the software. It was human error.’
  • On June 4 1996 the first flight of the European Space Agency’s new Ariane 5 rocket failed shortly after launch, resulting in an estimated uninsured loss of a half billion dollars. The failure was reportedly due to the lack of exception handling for a floating-point error in a conversion from a 64-bit floating-point number to a 16-bit signed integer.
  • Software bugs caused the bank accounts of 823 customers of a major U.S. bank to be credited with $924,844,208.32 each in May of 1996, according to newspaper reports. The American Bankers Association claimed it was the largest such error in banking history. A bank spokesman said the programming errors were corrected and all funds were recovered.
  • On January 1 1984 all computers produced by one of the leading minicomputer makers of the time reportedly failed worldwide. The cause was claimed to be a leap year bug in a date handling function utilized in deletion of temporary operating system files. Technicians throughout the world worked for several days to clear up the problem. It was also reported that the same bug affected many of the same computers four years later.
  • Software bugs in a Soviet early-warning monitoring system nearly brought on nuclear war in 1983, according to news reports in early 1999. The software was supposed to filter out false missile detections caused by Soviet satellites picking up sunlight reflections off cloud-tops, but failed to do so. Disaster was averted when a Soviet commander, based on what he said was a ‘…funny feeling in my gut’, decided the apparent missile attack was a false alarm. The filtering software code was rewritten.
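
The Ariane 5 failure mode above – an unhandled overflow when converting a large 64-bit floating-point value into a 16-bit signed integer – can be sketched in a few lines. This is only an illustration: the value used here is hypothetical, and Python's `struct` module stands in for the conversion.

```python
# Illustration of the Ariane 5 failure mode: packing a 64-bit float into a
# 16-bit signed integer overflows once the value exceeds 32767, and without
# an exception handler the fault is fatal to the program.
import struct

def to_int16(value):
    # ">h" = big-endian 16-bit signed integer
    return struct.pack(">h", int(value))

to_int16(1234.5)  # in range: converts without trouble

try:
    to_int16(40000.0)          # hypothetical out-of-range value
    handled = False
except struct.error:
    handled = True             # the kind of handler the flight code lacked
print("overflow caught:", handled)
```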

What kinds of testing should be considered?

  • Black box testing – not based on any knowledge of internal design or code. Tests are based on requirements and functionality.
  • White box testing – based on knowledge of the internal logic of an application’s code. Tests are based on coverage of code statements, branches, paths, conditions.
  • unit testing – the most ‘micro’ scale of testing; to test particular functions or code modules. Typically done by the programmer and not by testers, as it requires detailed knowledge of the internal program design and code. Not always easily done unless the application has a well-designed architecture with tight code; may require developing test driver modules or test harnesses.
  • incremental integration testing – continuous testing of an application as new functionality is added; requires that various aspects of an application’s functionality be independent enough to work separately before all parts of the program are completed, or that test drivers be developed as needed; done by programmers or by testers.
  • integration testing – testing of combined parts of an application to determine if they function together correctly. The ‘parts’ can be code modules, individual applications, client and server applications on a network, etc. This type of testing is especially relevant to client/server and distributed systems.
  • functional testing – black-box type testing geared to functional requirements of an application; this type of testing should be done by testers. This doesn’t mean that the programmers shouldn’t check that their code works before releasing it (which of course applies to any stage of testing).
  • system testing – black-box type testing that is based on overall requirements specifications; covers all combined parts of a system.
  • end-to-end testing – similar to system testing; the ‘macro’ end of the test scale; involves testing of a complete application environment in a situation that mimics real-world use, such as interacting with a database, using network communications, or interacting with other hardware, applications, or systems if appropriate.
  • sanity testing or smoke testing – typically an initial testing effort to determine if a new software version is performing well enough to accept it for a major testing effort. For example, if the new software is crashing systems every 5 minutes, bogging down systems to a crawl, or corrupting databases, the software may not be in a ‘sane’ enough condition to warrant further testing in its current state.
  • regression testing – re-testing after fixes or modifications of the software or its environment. It can be difficult to determine how much re-testing is needed, especially near the end of the development cycle. Automated testing tools can be especially useful for this type of testing.
  • acceptance testing – final testing based on specifications of the end-user or customer, or based on use by end-users/customers over some limited period of time.
  • load testing – testing an application under heavy loads, such as testing of a web site under a range of loads to determine at what point the system’s response time degrades or fails.
  • stress testing – term often used interchangeably with ‘load’ and ‘performance’ testing. Also used to describe such tests as system functional testing while under unusually heavy loads, heavy repetition of certain actions or inputs, input of large numerical values, large complex queries to a database system, etc.
  • performance testing – term often used interchangeably with ‘stress’ and ‘load’ testing. Ideally ‘performance’ testing (and any other ‘type’ of testing) is defined in requirements documentation or QA or Test Plans.
  • usability testing – testing for ‘user-friendliness’. Clearly this is subjective, and will depend on the targeted end-user or customer. User interviews, surveys, video recording of user sessions, and other techniques can be used. Programmers and testers are usually not appropriate as usability testers.
  • install/uninstall testing – testing of full, partial, or upgrade install/uninstall processes.
  • recovery testing – testing how well a system recovers from crashes, hardware failures, or other catastrophic problems.
  • failover testing – typically used interchangeably with ‘recovery testing’.
  • security testing – testing how well the system protects against unauthorized internal or external access, willful damage, etc; may require sophisticated testing techniques.
  • compatibility testing – testing how well software performs in a particular hardware/software/operating system/network/etc. environment.
  • exploratory testing – often taken to mean a creative, informal software test that is not based on formal test plans or test cases; testers may be learning the software as they test it.
  • ad-hoc testing – similar to exploratory testing, but often taken to mean that the testers have significant understanding of the software before testing it.
  • context-driven testing – testing driven by an understanding of the environment, culture, and intended use of software. For example, the testing approach for life-critical medical equipment software would be completely different from that for a low-cost computer game.
  • user acceptance testing – determining if software is satisfactory to an end-user or customer.
  • comparison testing – comparing software weaknesses and strengths to competing products.
  • alpha testing – testing of an application when development is nearing completion; minor design changes may still be made as a result of such testing. Typically done by end-users or others, not by programmers or testers.
  • beta testing – testing when development and testing are essentially completed and final bugs and problems need to be found before final release. Typically done by end-users or others, not by programmers or testers.
  • mutation testing – a method for determining if a set of test data or test cases is useful, by deliberately introducing various code changes (‘bugs’) and retesting with the original test data/cases to determine if the ‘bugs’ are detected. Proper implementation requires large computational resources.

Will automated testing tools make testing easier?

  • Possibly. For small projects, the time needed to learn and implement them may not be worth it unless personnel are already familiar with the tools. For larger projects or ongoing, long-term projects, they can be valuable.
  • A common type of automated tool is the ‘record/playback’ type. For example, a tester could click through all combinations of menu choices, dialog box choices, buttons, etc. in an application GUI and have them ‘recorded’ and the results logged by a tool. The ‘recording’ is typically in the form of text based on a scripting language that is interpretable by the testing tool. Often the recorded script is manually modified and enhanced. If new buttons are added, or some underlying code in the application is changed, etc. the application might then be retested by just ‘playing back’ the ‘recorded’ actions, and comparing the logging results to check effects of the changes. The problem with such tools is that if there are continual changes to the system being tested, the ‘recordings’ may have to be changed so much that it becomes very time-consuming to continuously update the scripts. Additionally, interpretation and analysis of results (screens, data, logs, etc.) can be a difficult task. Note that there are record/playback tools for text-based interfaces also, and for all types of platforms.
  • Another common type of approach for automation of functional testing is ‘data-driven’ or ‘keyword-driven’ automated testing, in which the test drivers are separated from the data and/or actions utilized in testing (an ‘action’ would be something like ‘enter a value in a text box’). Test drivers can be in the form of automated test tools or custom-written testing software. The data and actions can be more easily maintained – such as via a spreadsheet – since they are separate from the test drivers. The test drivers ‘read’ the data/action information to perform specified tests. This approach can enable more efficient control, development, documentation, and maintenance of automated tests/test cases.
  • Other automated tools can include:

code analyzers – monitor code complexity, adherence to standards, etc.

coverage analyzers – these tools check which parts of the code have been exercised by a test, and may be oriented to code statement coverage, condition coverage, path coverage, etc.

memory analyzers – such as bounds-checkers and leak detectors.

load/performance test tools – for testing client/server and web applications under various load levels.

web test tools – to check that links are valid, HTML code usage is correct, client-side and server-side programs work, and a web site’s interactions are secure.

other tools – for test case management, documentation management, bug reporting, and configuration management, file and database comparisons, screen captures, security testing, macro recorders, etc.

Test automation is, of course, possible without COTS tools. Many successful automation efforts utilize custom automation software that is targeted for specific projects, specific software applications, or a specific organization’s software development environment. In test-driven agile software development environments, automated tests are often built into the software during (or preceding) coding of the application.
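
The ‘data-driven’/‘keyword-driven’ approach described above can be sketched in a few lines of Python. The form-like application and the two keywords here are hypothetical stand-ins; real drivers would interact with an actual application or GUI.

```python
# Minimal keyword-driven test driver: the test 'data' -- rows of keyword
# plus arguments, as might be maintained in a spreadsheet -- is kept
# separate from the driver code that interprets it.

fields = {}  # stands in for the application under test (a simple form)

def enter_value(field, value):      # keyword: fill in a text box
    fields[field] = value

def check_value(field, expected):   # keyword: verify a field's contents
    assert fields.get(field) == expected, f"{field}: {fields.get(field)!r}"

KEYWORDS = {"enter_value": enter_value, "check_value": check_value}

# The test case as data: each row is (keyword, *arguments).
test_rows = [
    ("enter_value", "username", "alice"),
    ("enter_value", "quantity", "3"),
    ("check_value", "username", "alice"),
    ("check_value", "quantity", "3"),
]

def run(rows):
    for keyword, *args in rows:
        KEYWORDS[keyword](*args)    # driver 'reads' the data and acts on it
    return True

print("passed" if run(test_rows) else "failed")
```

Because the rows are plain data, non-programmers can add or maintain test cases without touching the driver.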

What steps are needed to develop and run software tests?
The following are some of the steps to consider:

  • Obtain requirements, functional design, and internal design specifications and other necessary documents
  • Obtain budget and schedule requirements
  • Determine project-related personnel and their responsibilities, reporting requirements, required standards and processes (such as release processes, change processes, etc.)
  • Determine project context, relative to the existing quality culture of the organization and business, and how it might impact testing scope, approaches, and methods.
  • Identify application’s higher-risk aspects, set priorities, and determine scope and limitations of tests
  • Determine test approaches and methods – unit, integration, functional, system, load, usability tests, etc.
  • Determine test environment requirements (hardware, software, communications, etc.)
  • Determine testware requirements (record/playback tools, coverage analyzers, test tracking, problem/bug tracking, etc.)
  • Determine test input data requirements
  • Identify tasks, those responsible for tasks, and labor requirements
  • Set schedule estimates, timelines, milestones
  • Determine input equivalence classes, boundary value analyses, error classes
  • Prepare test plan document and have needed reviews/approvals
  • Write test cases
  • Have needed reviews/inspections/approvals of test cases
  • Prepare test environment and testware, obtain needed user manuals/reference documents/configuration guides/installation guides, set up test tracking processes, set up logging and archiving processes, set up or obtain test input data
  • Obtain and install software releases
  • Perform tests
  • Evaluate and report results
  • Track problems/bugs and fixes
  • Retest as needed
  • Maintain and update test plans, test cases, test environment, and testware through life cycle
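
The ‘equivalence classes / boundary value’ step above can be made concrete with a small example. The input field here is hypothetical – an age field that accepts values 18 through 65 inclusive.

```python
# Boundary-value and equivalence-class selection for a hypothetical
# age field with valid range 18..65 inclusive.
LOW, HIGH = 18, 65

def accepts(age):
    """The (assumed) validation rule under test."""
    return LOW <= age <= HIGH

# Three equivalence classes -- below range, in range, above range -- with
# a representative from each, plus values at and adjacent to each boundary.
test_values = {
    "below":       [LOW - 10, LOW - 1],    # invalid class + just below boundary
    "lower_bound": [LOW],                  # first valid value
    "in_range":    [40],                   # representative valid value
    "upper_bound": [HIGH],                 # last valid value
    "above":       [HIGH + 1, HIGH + 10],  # just above boundary + invalid class
}

expected = {"below": False, "lower_bound": True, "in_range": True,
            "upper_bound": True, "above": False}

for cls, values in test_values.items():
    for v in values:
        assert accepts(v) == expected[cls], f"{cls}: {v}"
print("all boundary/equivalence cases pass")
```

The point is economy: rather than testing every possible age, one representative per class plus the boundary neighbors catches the most common off-by-one defects.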

What is ‘good code’?
‘Good code’ is code that works, is reasonably bug free, and is readable and maintainable. Some organizations have coding ‘standards’ that all developers are supposed to adhere to, but everyone has different ideas about what’s best, or what is too many or too few rules. There are also various theories and metrics, such as McCabe Complexity metrics. It should be kept in mind that excessive use of standards and rules can stifle productivity and creativity. ‘Peer reviews’, ‘buddy checks’, pair programming, code analysis tools, etc. can be used to check for problems and enforce standards.
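
As a rough illustration of a complexity metric in the McCabe spirit, the sketch below counts decision points in Python source. It is deliberately simplified – real analyzers handle many more constructs and count boolean operators per operand.

```python
# Rough cyclomatic-complexity estimate: complexity = decision points + 1.
# (Simplified sketch; real tools count many more node types.)
import ast

DECISION_NODES = (ast.If, ast.For, ast.While, ast.BoolOp, ast.ExceptHandler)

def complexity(source):
    tree = ast.parse(source)
    decisions = sum(isinstance(node, DECISION_NODES)
                    for node in ast.walk(tree))
    return decisions + 1

simple = "def f(x):\n    return x + 1\n"
branchy = (
    "def g(x):\n"
    "    if x > 0 and x < 10:\n"
    "        for i in range(x):\n"
    "            x -= 1\n"
    "    return x\n"
)

print(complexity(simple), complexity(branchy))  # straight-line vs. branchy
```

Higher numbers suggest more independent paths through a function, and hence more test cases needed to cover it.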
For C and C++ coding, here are some typical ideas to consider in setting rules/standards; these may or may not apply to a particular situation:

  • minimize or eliminate use of global variables.
  • use descriptive function and method names – use both upper and lower case, avoid abbreviations, use as many characters as necessary to be adequately descriptive (use of more than 20 characters is not out of line); be consistent in naming conventions.
  • use descriptive variable names – use both upper and lower case, avoid abbreviations, use as many characters as necessary to be adequately descriptive (use of more than 20 characters is not out of line); be consistent in naming conventions.
  • function and method sizes should be minimized; less than 100 lines of code is good, less than 50 lines is preferable.
  • function descriptions should be clearly spelled out in comments preceding a function’s code.
  • organize code for readability.
  • use whitespace generously – vertically and horizontally
  • each line of code should contain 70 characters max.
  • one code statement per line.
  • coding style should be consistent throughout a program (e.g., use of brackets, indentations, naming conventions, etc.)
  • in adding comments, err on the side of too many rather than too few comments; a common rule of thumb is that there should be at least as many lines of comments (including header blocks) as lines of code.
  • no matter how small, an application should include documentation of the overall program function and flow (even a few paragraphs is better than nothing); or, if possible, a separate flow chart and detailed program documentation.
  • make extensive use of error handling procedures and status and error logging.
  • for C++, to minimize complexity and increase maintainability, avoid too many levels of inheritance in class hierarchies (relative to the size and complexity of the application). Minimize use of multiple inheritance, and minimize use of operator overloading (note that the Java programming language eliminates multiple inheritance and operator overloading).
  • for C++, keep class methods small, less than 50 lines of code per method is preferable.
  • for C++, make liberal use of exception handlers
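As a small illustration of several of these rules (a descriptive function name, a comment block preceding the function, a short function body, and explicit error handling), consider the following sketch; the function name and the configuration-file scenario are invented for the example:

```cpp
#include <fstream>
#include <stdexcept>
#include <string>

// ReadFirstConfigLine: returns the first line of the given configuration
// file, or throws std::runtime_error if the file cannot be opened.
std::string ReadFirstConfigLine(const std::string& configFilePath) {
    std::ifstream configFile(configFilePath.c_str());
    if (!configFile.is_open()) {
        throw std::runtime_error("Cannot open configuration file: " + configFilePath);
    }
    std::string firstLine;
    std::getline(configFile, firstLine);
    return firstLine;
}
```

Note how the error path is handled explicitly with an exception rather than a silent failure, in line with the "liberal use of exception handlers" rule above.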

 What is a ‘walkthrough’?
A ‘walkthrough’ is an informal meeting for evaluation or informational purposes. Little or no preparation is usually required.

 What is verification? validation?
Verification typically involves reviews and meetings to evaluate documents, plans, code, requirements, and specifications. This can be done with checklists, issues lists, walkthroughs, and inspection meetings. Validation typically involves actual testing and takes place after verifications are completed. The term ‘IV & V’ refers to Independent Verification and Validation.

 What makes a good Software Test engineer?
A good test engineer has a ‘test to break’ attitude, an ability to take the point of view of the customer, a strong desire for quality, and an attention to detail. Tact and diplomacy are useful in maintaining a cooperative relationship with developers, and an ability to communicate with both technical (developers) and non-technical (customers, management) people is useful. Previous software development experience can be helpful as it provides a deeper understanding of the software development process, gives the tester an appreciation for the developers’ point of view, and reduces the learning curve in automated test tool programming. Judgement skills are needed to assess high-risk areas of an application on which to focus testing efforts when time is limited.

Visa Interview Questions

Here is a set of U.S. visa interview questions that you may be asked as part of the H-1B visa application process.

  • What is the purpose of your trip to the United States?
  • Why do you want to work in the US?
  • Where are you working currently?
  • Which company are you going to work for in the USA?
  • What will your role be in the US company?
  • What is your current salary?
  • What is the salary you will get in United States?
  • Have you been to any other country before? If yes, how long was your stay there?
  • Where will you be staying in the U.S.?
  • How long do you plan to stay in the U.S.?
  • Will you return to your home country?
  • When will you return to your home country?

Points to Remember:

  • Applicants should practice for the interview with friends.
  • Applicant must arrive at the exact time mentioned in the appointment letter.
  • Applicant must answer all questions clearly during the interview.
  • Applicant must be honest with consular officials at all times.
  • Applicants must ensure that they have all the forms and personal documents that they need to submit at the time of interview.

Some more questions

  1. What is the purpose of your trip to the United States?
  2. Do you have any family in the United States?
  3. Why are you changing your Job?
  4. Why do you want to work in the US?
  5. Have you applied for visa for any other country?
  6. Do you know what is the living cost in the U.S. specific to the place where you are going?
  7. When are you planning to travel?
  8. How will you survive for the first month?
  9. Have you been to any other country before?
    If yes, how long was your stay there?
  10. Will you come back to India?
  11. When will you return to India?
  12. Why would you want to return to India?
  13. Is it your first H1B or visa revalidation?
  14. After the conclusion of your visa, what will you do?

Questions About Your Education/Experience

  1. Are you a student?
  2. Which university is your degree from?
  3. What was your thesis about?
  4. What is the difference between PL/SQL and SQL?
  5. What software do you know?
    Do you have work experience with it?
  6. What courses did you complete here [Home Country]?
  7. Show me your certificates.
  8. Can I see your educational certificates and experience letters?
  9. Tell me in detail about all your jobs, work experience, and profile.
  10. What’s your highest educational qualification?

Questions About Your Current Company

  1. How long have you been working?
  2. Where are you working currently?
  3. What is your current salary?
  4. What is your current role in the current company?
  5. Is it an Indian company you currently work for?

Questions About Sponsoring Company

  1. What is the company you are going to work for in USA?
  2. Where are you going to work in US?
  3. Why are you joining [New Company]?
  4. How do you know this is a real company?
  5. When did you receive your offer letter?
  6. What will you be working on there? Is it an internal project?
  7. I need a client letter describing your work project.
  8. Tell me what do you know about [New Company]?
  9. When was the US company founded?
  10. Tell me about the project and the company (client) you will be working for?
  11. How did you find out about this company?
  12. How did you contact the [New Company]?
  13. What is the current project you will be working on?
  14. What are your responsibilities, and for which client are you going to be working? Please explain in detail.
  15. Do you have any proof from your new employer regarding your responsibilities?
  16. Do you have any company photographs?
  17. How long has the company been in the current location?
  18. How many rounds of interviews has the USA company conducted?
    What are they?
  19. What is the name of your interviewer?
  20. Can you give me the dates of your interview?
  21. Who are the clients for your U.S. company?
  22. What are the technologies you are working on?
  23. Who is the President/CEO of the U.S. company?
  24. What kind of projects is the U.S. company working on?
  25. What is the annual turnover of the company?
  26. How many employees does the U.S. Company have?
  27. What’s your designation in [Previous Company] and what’s your designation in the [New Company]?
  28. Will you be working from [New Company] office or client’s place?
  29. Can I see the Employee petition to USCIS and the Tax Returns of the Company?
  30. What is the salary you will get in USA?
  31. How many rounds of interviews did the U.S. company conduct?
    What are they? 4 rounds (2 technical, 1 HR, 1 manager interview)
  32. Can I see your client end letter and itinerary of services?

A few more here

  1. How did you come to know about this company?
  2. Where are you working currently?
  3. Which company are you going to work for in the USA?
  4. What is your current salary?
  5. What salary will you get in the USA?
  6. How many rounds of interviews did the USA company conduct? What were they?
  7. Who interviewed you?
  8. Can you give me the dates of your interview?
  9. Who are the clients of your USA company?
  10. What technologies are you working on?
  11. Who is the President/CEO of the US company?
  12. What kind of projects is the US company working on?
  13. What is the annual turnover of the company?
  14. How many employees does the US company have?
  15. Why are you changing your job?
  16. Why do you want to work in the US?
  17. Have you applied for a visa for any other country?
  18. Do you know the living cost in the US, specific to the place where you are going?
  19. When did you receive your offer letter?
  20. What is the current project you are going to work on?
  21. What is your current role?
  22. What is your role in the US company?
  23. Where are you going to work in the US?
  24. What is your designation in the US company?
  25. When was the US company founded?
  26. What is your current pay?
  27. What is your pay in the US company?
  28. When are you planning to travel?
  29. How will you survive for the first month?
  30. Have you been to any other country before?
  31. Will you come back to India?
  32. When will you be back in India?
  33. Why do you want to return?

Quick Test Professional Q & A

1. What is QuickTest Pro?

It’s Mercury Interactive’s keyword-driven testing tool.

2. By using QTP what kind of applications we can test?

By using QTP we can test standard Windows applications, web objects, ActiveX controls, and Visual Basic applications.

3. What is called a test?

A test is a collection of steps organized into one or more actions, which are used to verify that your application performs as expected.

4. What is the meaning of business component?

It’s a collection of steps representing a single task in your application. Business components are combined into specific scenarios to build business process tests in Mercury Quality Center with Business Process Testing.

5. How the test will be created in QTP?

As we navigate through our application, QTP records each step we perform and generates a test or component that graphically displays these steps in a table-based keyword view.

6. What are the main tasks accomplished by QTP after creating a test?

After we have finished recording, we can instruct QTP to check the properties of specific objects in our application by means of the enhancement features available in QTP. When we perform a run session, QTP performs each step in our test or component. After the run session ends, we can view a report detailing which steps were performed, and which ones succeeded or failed.

7. What are actions?

A test is composed of actions. The steps we add to a test are included within the test’s actions. By default, each test begins with a single action. We can divide our test into multiple actions to organize it.

8. What are the main stages involved in testing with QTP?

  • Creating tests or business components
  • Running tests or business components
  • Analyzing results

9. How is a test created in QTP?

We can create a test or component either by recording a session on our application or web site, or by building an object repository and adding steps manually to the keyword view using keyword-driven functionality. We can then modify our test with programming statements.

10. What is the purpose of the Documentation column in the keyword view?

The Documentation column of the keyword view displays a description of each step in easy-to-understand sentences.

11. The keyword view in QTP is also known as

Icon-based view

12. What is the use of data table in QTP?

Parameterizing the test

13. What is the use of working with actions?

To design modular and efficient tests

14. What is the file extension of the code file and object repository file in QTP?

The extension for code file is .vbs and the extension for object repository is .tsr

15. What are the properties we can use for identifying a browser and page when using descriptive programming?

The name property is used to identify the browser and the title property is used to identify the page

16. What are the different scripting languages we can use when working with QTP?

VB script

17. Give the example where we can use a COM interface in our QTP project?

A COM interface appears in scenarios involving the front end and the back end.

18. Explain the keyword CreateObject with an example

CreateObject is used to create and return a reference to an automation object.

For example:

Dim ExcelSheet

Set ExcelSheet = CreateObject(“Excel.Sheet”)

19. How to open excel sheet using QTP script?

You can open Excel in QTP by using the following command:

SystemUtil.Run “path of the file”

20. Is it necessary to learn VB script to work with QTP?

It’s not mandatory to have mastered VB script to work with QTP. QTP is mostly user-friendly, and a basic knowledge of VB concepts will suffice for good results.

21. If WinRunner and QTP are both functional testing tools from the same company, why did a separate tool, QTP, come into the picture?

QTP has some additional functionality that is not present in WinRunner. For example, you can test (functional and regression testing) an application developed in .NET technology with QTP, which is not possible in WinRunner.

22. Explain in brief about the QTP automation object model

The test object model is a large set of object types or classes that QTP uses to represent the objects in our application. Each test object has a list of properties that can uniquely identify objects of that class

23. What is a Run-Time data table?

The test results tree also includes a table-shaped icon that displays the run-time data table: a table that shows the values used to run a test containing data table parameters, or the data table output values retrieved from an application under test.

24. What are all the components of QTP test script?

QTP test script is a combination of VB script statements and statements that use QuickTest test objects, methods and properties

25. What is test object?

It’s an object that QTP uses to represent an object in our application. Each test object has one or more methods and properties that we can use to perform operations and retrieve values for that object. Each object also has a number of identification properties that can describe the object.

26. What rules and guidelines should be followed while working in the expert view?

Case-sensitivity

VB script is not case sensitive and does not differentiate between upper case and lower case spelling of words.

Text strings

When we enter a value as a string, we must add quotation marks before and after the string.

Variables

We can use variables to store strings, integers, arrays and objects. Using variables helps to make our script more readable and flexible.

Parentheses

To achieve the desired result and to avoid errors, it is important that we use parentheses () correctly in our statements.

Comments

We can add comments to our statements using an apostrophe (‘), either at the beginning of a separate line or at the end of a statement.

Spaces

We can add extra blank spaces to our script to improve clarity. These spaces are ignored by VB script.

Additional Interview Questions

Q1. What is verification?

A: Verification ensures the product is designed to deliver all functionality to the customer; it typically involves reviews and meetings to evaluate documents, plans, code, requirements and specifications; this can be done with checklists, issues lists, and walkthroughs and inspection meetings.

Q2. What is validation?

 A: Validation ensures that functionality, as defined in requirements, is the intended behavior of the product; validation typically involves actual testing and takes place after verifications are completed.

Q3. What is a walkthrough?

A: A walkthrough is an informal meeting for evaluation or informational purposes. A walkthrough is also a process at an abstract level. It’s the process of inspecting software code by following paths through the code (as determined by input conditions and choices made along the way). The purpose of code walkthroughs is to ensure the code fits the purpose. Walkthroughs also offer opportunities to assess an individual’s or team’s competency.

Q4. What is an inspection?

 A: An inspection is a formal meeting, more formalized than a walkthrough and typically consists of 3-10 people including a moderator, reader (the author of whatever is being reviewed) and a recorder (to make notes in the document). The subject of the inspection is typically a document, such as a requirements document or a test plan. The purpose of an inspection is to find problems and see what is missing, not to fix anything. The result of the meeting should be documented in a written report. Attendees should prepare for this type of meeting by reading through the document, before the meeting starts; most problems are found during this preparation. Preparation for inspections is difficult, but is one of the most cost-effective methods of ensuring quality, since bug prevention is more cost effective than bug detection.

Q5. What is quality?

 A: Quality software is software that is reasonably bug-free, delivered on time and within budget, meets requirements and expectations and is maintainable. However, quality is a subjective term. Quality depends on who the customer is and their overall influence in the scheme of things. Customers of a software development project include end-users, customer acceptance test engineers, testers, customer contract officers, customer management, the development organization’s management, test engineers, testers, salespeople, software engineers, stockholders and accountants. Each type of customer will have his or her own slant on quality. The accounting department might define quality in terms of profits, while an end-user might define quality as user friendly and bug free.

Q6. What is good code?

A: Good code is code that works, is free of bugs and is readable and maintainable. Organizations usually have coding standards all developers should adhere to, but every programmer and software engineer has different ideas about what is best and what are too many or too few rules. We need to keep in mind that excessive use of rules can stifle both productivity and creativity. Peer reviews and code analysis tools can be used to check for problems and enforce standards.

Q7. What is good design?

A: Design could mean many things, but often refers to functional design or internal design. Good functional design is indicated by software functionality that can be traced back to customer and end-user requirements. Good internal design is indicated by software code whose overall structure is clear, understandable, easily modifiable and maintainable; is robust with sufficient error handling and status logging capability; and works correctly when implemented.

Q8. What is software life cycle?

A: The software life cycle begins when a software product is first conceived and ends when it is no longer in use. It includes phases like initial concept, requirements analysis, functional design, internal design, documentation planning, test planning, coding, document preparation, integration, testing, maintenance, updates, re-testing and phase-out.

Q9. Why are there so many software bugs?

 A: Generally speaking, there are bugs in software because of unclear requirements, software complexity, programming errors, changes in requirements, errors made in bug tracking, time pressure, poorly documented code and/or bugs in tools used in software development.

There are unclear software requirements because there is miscommunication as to what the software should or shouldn’t do.

Software complexity. All of the following contribute to the exponential growth in software and system complexity: Windows interfaces, client-server and distributed applications, data communications, enormous relational databases and the sheer size of applications.

Programming errors occur because programmers and software engineers, like everyone else, can make mistakes.

As to changing requirements, in some fast-changing business environments, continuously modified requirements are a fact of life. Sometimes customers do not understand the effects of changes, or understand them but request them anyway. The changes require redesign of the software and rescheduling of resources; some of the work already completed may have to be redone or discarded, and hardware requirements can be affected, too.

Bug tracking can result in errors because the complexity of keeping track of changes can itself introduce errors.

Time pressures can cause problems because scheduling of software projects is not easy; it often requires a lot of guesswork, and when deadlines loom and the crunch comes, mistakes will be made.

Code documentation is tough to maintain and it is also tough to modify code that is poorly documented. The result is bugs. Sometimes there is no incentive for programmers and software engineers to document their code and write clearly documented, understandable code. Sometimes developers get kudos for quickly turning out code or programmers and software engineers feel they cannot have job security if everyone can understand the code they write, or they believe if the code was hard to write, it should be hard to read.

Software development tools, including visual tools, class libraries, compilers and scripting tools, can introduce their own bugs. Other times the tools are poorly documented, which can create additional bugs.

Q10. How do you introduce a new software QA process?

A: It depends on the size of the organization and the risks involved. For large organizations with high-risk projects, serious management buy-in is required and a formalized QA process is necessary. For medium-size organizations with lower-risk projects, management and organizational buy-in and a slower, step-by-step process are required. Generally speaking, QA processes should be balanced with productivity, in order to keep any bureaucracy from getting out of hand. For smaller groups or projects, an ad-hoc process is more appropriate. A lot depends on team leads and managers. Feedback to developers and good communication are essential among customers, managers, developers, test engineers and testers. Regardless of the size of the company, the greatest value for effort is in managing requirement processes, where the goal is requirements that are clear, complete and testable.

Q11. Give me five common problems that occur during software development.

A: Poorly written requirements, unrealistic schedules, inadequate testing, adding new features after development is underway, and poor communication. Requirements are poorly written when they are unclear, incomplete, too general, or not testable; such requirements inevitably cause problems.

The schedule is unrealistic if too much work is crammed in too little time.

Software testing is inadequate if no one knows whether or not the software is any good until customers complain or the system crashes.

It’s extremely common that new features are added after development is underway.

Miscommunication either means the developers don’t know what is needed, or customers have unrealistic expectations and therefore problems are guaranteed.

Q12. Do automated testing tools make testing easier?

A: Yes and no. For larger projects, or ongoing long-term projects, they can be valuable. But for small projects, the time needed to learn and implement them is usually not worthwhile. A common type of automated tool is the record/playback type. For example, a test engineer clicks through all combinations of menu choices, dialog box choices, buttons, etc. in a GUI and has an automated testing tool record and log the results. The recording is typically in the form of text, based on a scripting language that the testing tool can interpret. If a change is made (e.g. new buttons are added, or some underlying code in the application is changed), the application is then re-tested by just playing back the recorded actions and compared to the logged results in order to check effects of the change. One problem with such tools is that if there are continual changes to the product being tested, the recordings have to be changed so often that it becomes a very time-consuming task to continuously update the scripts. Another problem with such tools is the interpretation of the results (screens, data, logs, etc.) that can be a time-consuming task.

Q13. Give me five solutions to problems that occur during software development.

A: Solid requirements, realistic schedules, adequate testing, firm requirements and good communication.

Ensure the requirements are solid, clear, complete, detailed, cohesive, attainable and testable. All players should agree to requirements. Use prototypes to help nail down requirements.

Have schedules that are realistic. Allow adequate time for planning, design, testing, bug fixing, re-testing, changes and documentation. Personnel should be able to complete the project without burning out.

Do testing that is adequate. Start testing early on, re-test after fixes or changes, and plan for sufficient time for both testing and bug fixing.

Avoid new features. Stick to initial requirements as much as possible. Be prepared to defend design against changes and additions, once development has begun and be prepared to explain consequences. If changes are necessary, ensure they’re adequately reflected in related schedule changes. Use prototypes early on so customers’ expectations are clarified and customers can see what to expect; this will minimize changes later on.

Communicate. Require walkthroughs and inspections when appropriate; make extensive use of e-mail, networked bug-tracking tools, and tools of change management. Ensure documentation is available and up-to-date. Use documentation that is electronic, not paper. Promote teamwork and cooperation.

Q14. What makes a good test engineer?

A: Rob Davis is a good test engineer because he

  • has a “test to break” attitude,
  • takes the point of view of the customer,
  • has a strong desire for quality,
  • has an attention to detail,
  • is tactful and diplomatic,
  • has good communication skills, both oral and written, and
  • has previous software development experience.

Good test engineers have a “test to break” attitude. We, good test engineers, take the point of view of the customer, have a strong desire for quality and an attention to detail. Tact and diplomacy are useful in maintaining a cooperative relationship with developers, as is an ability to communicate with both technical and non-technical people. Previous software development experience is also helpful: it provides a deeper understanding of the software development process, gives the test engineer an appreciation for the developers’ point of view and reduces the learning curve in automated test tool programming.

Q15. What makes a good QA engineer?

A: The same qualities a good test engineer has are useful for a QA engineer. Additionally, Rob Davis understands the entire software development process and how it fits into the business approach and the goals of the organization. His communication skills and ability to understand various sides of issues are important.

Q16. What makes a good resume?

A: On the subject of resumes, there seems to be an unending discussion of whether you should or shouldn’t have a one-page resume. The following are some of the comments I have personally heard: “Well, Joe Blow (car salesman) said I should have a one-page resume.” “Well, I read a book and it said you should have a one-page resume.” “I can’t really go into what I really did because if I did, it’d take more than one page on my resume.” “Gosh, I wish I could put my job at IBM on my resume, but if I did it’d make my resume more than one page, and I was told to never make the resume more than one page long.” “I’m confused, should my resume be more than one page? I feel like it should, but I don’t want to break the rules.” Or, here’s another comment: “People just don’t read resumes that are longer than one page.” I have heard more, but we can start with these.

So what’s the answer? There is no scientific answer about whether a one-page resume is right or wrong. It all depends on who you are and how much experience you have. The first thing to look at is the purpose of a resume, which is to get you an interview. If the resume is getting you interviews, then it is a good resume. If it isn’t getting you interviews, then you should change it.

The biggest mistake you can make on your resume is to make it hard to read. Why? For one, scanners don’t like odd resumes. Small fonts make your resume harder to read; some candidates use a 7-point font just to fit the resume onto one page. Big mistake. Two, resume readers do not like eye strain either. If the resume is mechanically challenging, they just throw it aside for one that is easier on the eyes. Three, there are lots of resumes out there these days, and that is also part of the problem. Four, in light of the current scanning scenario, more than one page is not a deterrent, because many readers will scan your resume into their database. Once the resume is in there and searchable, you have accomplished one of the goals of resume distribution. Five, resume readers don’t like to guess, and most won’t call you to clarify what is on your resume.

Generally speaking, your resume should tell your story. If you’re a college graduate looking for your first job, a one-page resume is just fine. If you have a longer story, the resume needs to be longer. Put your experience on the resume so resume readers can tell when and for whom you did what. Short resumes, for people long on experience, are not appropriate; the real audience for these short resumes is people with short attention spans and low IQ. I assure you that when your resume gets into the right hands, it will be read thoroughly.

Q17. What makes a good QA/Test Manager?

A: A good QA/Test Manager is familiar with the software development process; able to maintain the enthusiasm of the team and promote a positive atmosphere; able to promote teamwork to increase productivity; able to promote cooperation between software and test/QA engineers; has the people skills needed to promote improvements in QA processes; has the ability to withstand pressure and say *no* to other managers when quality is insufficient or QA processes are not being adhered to; is able to communicate with technical and non-technical people; and is able to run meetings and keep them focused.

Q18. What is the role of documentation in QA?

A: Documentation plays a critical role in QA. QA practices should be documented so that they are repeatable. Specifications, designs, business rules, inspection reports, configurations, code changes, test plans, test cases, bug reports and user manuals should all be documented. Ideally, there should be a system for easily finding and obtaining documents and determining which document will have a particular piece of information. Use documentation change management, if possible.

Q19. What about requirements?

A: Requirement specifications are important; one of the most reliable methods of ensuring problems in a complex software project is to have poorly documented requirement specifications. Requirements are the details describing an application’s externally perceived functionality and properties. Requirements should be clear, complete, reasonably detailed, cohesive, attainable and testable. A non-testable requirement would be, for example, “user-friendly”, which is too subjective. A testable requirement would be something such as, “the product shall allow the user to enter their previously-assigned password to access the application”. Care should be taken to involve all of a project’s significant customers in the requirements process. Customers could be in-house or external and could include end-users, customer acceptance test engineers, testers, customer contract officers, customer management, future software maintenance engineers, salespeople and anyone who could later derail the project if his or her expectations aren’t met; all of them should be included as customers, if possible. In some organizations, requirements may end up in high-level project plans, functional specification documents, design documents, or other documents at various levels of detail. No matter what they are called, some type of documentation with detailed requirements will be needed by test engineers in order to properly plan and execute tests. Without such documentation there will be no clear-cut way to determine if a software application is performing correctly.

Q20. What is a test plan?

A: A software project test plan is a document that describes the objectives, scope, approach and focus of a software testing effort. The process of preparing a test plan is a useful way to think through the efforts needed to validate the acceptability of a software product. The completed document will help people outside the test group understand the why and how of product validation. It should be thorough enough to be useful, but not so thorough that no one outside the test group will be able to read it.

Q21. What is a test case?

A: A test case is a document that describes an input, action, or event and its expected result, in order to determine if a feature of an application is working correctly. A test case should contain particulars such as a…

Test case identifier;

Test case name;

Objective;

Test conditions/setup;

Input data requirements/steps, and

Expected results.

Please note, the process of developing test cases can help find problems in the requirements or design of an application, since it requires you to completely think through the operation of the application. For this reason, it is useful to prepare test cases early in the development cycle, if possible.
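The particulars listed above can be captured in a simple data structure. The following is a minimal sketch, not a standard; the field names and the `TC-LOGIN-001` example are illustrative, based on the password requirement quoted earlier.

```python
from dataclasses import dataclass

@dataclass
class TestCase:
    """One test case, holding the particulars a test case document should contain."""
    identifier: str        # test case identifier
    name: str              # test case name
    objective: str         # what the case is meant to verify
    setup: str             # test conditions/setup
    steps: list            # input data requirements/steps
    expected_result: str   # expected results

# Hypothetical example for the testable password requirement:
tc = TestCase(
    identifier="TC-LOGIN-001",
    name="Valid password grants access",
    objective="Verify a previously-assigned password opens the application",
    setup="User account exists with a known password",
    steps=["Open login screen", "Enter user name", "Enter assigned password"],
    expected_result="Application main screen is displayed",
)
print(tc.identifier, "-", tc.name)
```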

Q22. What should be done after a bug is found?

A: When a bug is found, it needs to be communicated and assigned to developers that can fix it. After the problem is resolved, fixes should be re-tested. Additionally, determinations should be made regarding requirements, software, hardware, safety impact, etc., for regression testing, to check that the fixes didn’t create other problems elsewhere. If a problem-tracking system is in place, it should encapsulate these determinations. A variety of commercial problem-tracking/management software tools are available. These tools, with the detailed input of software test engineers, will give the team complete information so developers can understand the bug, get an idea of its severity, reproduce it and fix it.
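The fix/re-test cycle above can be sketched as a small state machine. This is an illustrative sketch only, not any particular tracking tool’s workflow; the field names and status values are assumptions.

```python
from dataclasses import dataclass

@dataclass
class BugReport:
    bug_id: str
    summary: str
    steps_to_reproduce: list
    severity: str         # e.g. "critical", "major", "minor"
    status: str = "open"  # open -> fixed -> closed (or reopened)

def retest(bug: BugReport, fix_verified: bool) -> BugReport:
    """After a developer marks a bug fixed, re-test it; only a verified fix may be closed."""
    if bug.status == "fixed":
        bug.status = "closed" if fix_verified else "reopened"
    return bug

bug = BugReport("BUG-42", "Crash on empty password",
                ["Leave password blank", "Click OK"], "critical")
bug.status = "fixed"                     # developer delivers a fix
print(retest(bug, fix_verified=True).status)
```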

Q23. What is configuration management?

A: Configuration management (CM) covers the tools and processes used to control, coordinate and track code, requirements, documentation, problems, change requests, designs, tools, compilers, libraries, patches, changes made to them and who makes the changes. Rob Davis has had experience with a full range of CM tools and concepts. Rob Davis can easily adapt to your software tool and process needs.

Q24. What if the software is so buggy it can’t be tested at all?

A: In this situation the best bet is to have test engineers go through the process of reporting whatever bugs or problems initially show up, with the focus being on critical bugs. Since this type of problem can severely affect schedules and indicates deeper problems in the software development process, such as insufficient unit testing, insufficient integration testing, poor design, improper build or release procedures, managers should be notified and provided with some documentation as evidence of the problem.

Q25. How do you know when to stop testing?

A: This can be difficult to determine. Many modern software applications are so complex and run in such an interdependent environment, that complete testing can never be done. Common factors in deciding when to stop are…

Deadlines, e.g. release deadlines, testing deadlines;

Test cases completed with certain percentage passed;

Test budget has been depleted;

Coverage of code, functionality, or requirements reaches a specified point;

Bug rate falls below a certain level; or

Beta or alpha testing period ends.
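The common stop factors above can be combined into a simple decision rule. The thresholds below are illustrative assumptions, not industry standards; each project would set its own.

```python
def should_stop_testing(deadline_reached: bool,
                        pass_rate: float,
                        budget_left: float,
                        coverage: float,
                        bugs_found_per_week: int) -> bool:
    """Combine the common stop factors into one decision (illustrative thresholds)."""
    if deadline_reached or budget_left <= 0:
        return True                        # hard limits: deadline or budget depleted
    return (pass_rate >= 0.95              # enough test cases passed
            and coverage >= 0.80           # enough code/requirements covered
            and bugs_found_per_week <= 2)  # bug discovery rate has tailed off

print(should_stop_testing(False, 0.97, 10_000, 0.85, 1))   # stable release candidate
print(should_stop_testing(False, 0.97, 10_000, 0.85, 9))   # bug rate still too high
```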

Q26. What if there isn’t enough time for thorough testing?

A: Since it’s rarely possible to test every possible aspect of an application, every possible combination of events, every dependency, or everything that could go wrong, risk analysis is appropriate to most software development projects. Use risk analysis to determine where testing should be focused. This requires judgment skills, common sense and experience. The checklist should include answers to the following questions:

·         Which functionality is most important to the project’s intended purpose?

·         Which functionality is most visible to the user?

·         Which functionality has the largest safety impact?

·         Which functionality has the largest financial impact on users?

·         Which aspects of the application are most important to the customer?

·         Which aspects of the application can be tested early in the development cycle?

·         Which parts of the code are most complex and thus most subject to errors?

·         Which parts of the application were developed in rush or panic mode?

·         Which aspects of similar/related previous projects caused problems?

·         Which aspects of similar/related previous projects had large maintenance expenses?

·         Which parts of the requirements and design are unclear or poorly thought out?

·         What do the developers think are the highest-risk aspects of the application?

·         What kinds of problems would cause the worst publicity?

·         What kinds of problems would cause the most customer service complaints?

·         What kinds of tests could easily cover multiple functionalities?

·         Which tests will have the best high-risk-coverage to time-required ratio?
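One common way to act on such a checklist is to score each application area on impact and likelihood of failure, then test the riskiest areas first. The areas and 1–5 scores below are purely illustrative.

```python
# Score each area (1-5) on impact and likelihood of failure; values are made up.
areas = [
    {"area": "payment processing", "impact": 5, "likelihood": 4},
    {"area": "report formatting",  "impact": 2, "likelihood": 3},
    {"area": "login/security",     "impact": 5, "likelihood": 3},
    {"area": "help screens",       "impact": 1, "likelihood": 2},
]

# A simple risk score: impact x likelihood.
for a in areas:
    a["risk"] = a["impact"] * a["likelihood"]

# Focus testing on the highest-risk areas first.
for a in sorted(areas, key=lambda a: a["risk"], reverse=True):
    print(f'{a["area"]:20} risk={a["risk"]}')
```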

Q27. What if the project isn’t big enough to justify extensive testing?

A: Consider the impact of project errors, not the size of the project. However, if extensive testing is still not justified, risk analysis is again needed and the considerations listed under “What if there isn’t enough time for thorough testing?” do apply. The test engineer then should do “ad hoc” testing, or write up a limited test plan based on the risk analysis.

Q28. What can be done if requirements are changing continuously?

A: Work with management early on to understand how requirements might change, so that alternate test plans and strategies can be worked out in advance. It is helpful if the application’s initial design allows for some adaptability, so that later changes do not require redoing the application from scratch. Additionally, try to…

·         Ensure the code is well commented and well documented; this makes changes easier for the developers.

·         Use rapid prototyping whenever possible; this will help customers feel sure of their requirements and minimize changes.

·         In the project’s initial schedule, allow some extra time commensurate with probable changes.

·         Move new requirements to a ‘Phase 2’ version of an application and use the original requirements for the ‘Phase 1’ version.

·         Negotiate to allow only easily implemented new requirements into the project; move more difficult, new requirements into future versions of the application.

·         Ensure customers and management understand the scheduling impacts, inherent risks and costs of significant requirements changes. Then let management or the customers decide if the changes are warranted; after all, that’s their job.

·         Balance the effort put into setting up automated testing with the expected effort required to redo them to deal with changes.

·         Design some flexibility into automated test scripts;

·         Focus initial automated testing on application aspects that are most likely to remain unchanged;

·         Devote appropriate effort to risk analysis of changes, in order to minimize regression-testing needs;

·         Design some flexibility into test cases; this is not easily done; the best bet is to minimize the detail in the test cases, or set up only higher-level generic-type test plans;

·         Focus less on detailed test plans and test cases and more on ad-hoc testing with an understanding of the added risk this entails.

Q29. What if the application has functionality that wasn’t in the requirements?

A: It may take serious effort to determine if an application has significant unexpected or hidden functionality, and such functionality would indicate deeper problems in the software development process. If the functionality isn’t necessary to the purpose of the application, it should be removed, as it may have unknown impacts or dependencies that were not taken into account by the designer or the customer.

If not removed, design information will be needed to determine added testing needs or regression testing needs. Management should be made aware of any significant added risks as a result of the unexpected functionality. If the functionality only affects minor areas, such as improvements in the user interface, it may not be a significant risk.

Q30. How can software QA processes be implemented without stifling productivity?

A: Implement QA processes slowly over time. Use consensus to reach agreement on processes and adjust and experiment as an organization grows and matures. Productivity will be improved instead of stifled. Problem prevention will lessen the need for problem detection. Panics and burnout will decrease and there will be improved focus and less wasted effort. At the same time, attempts should be made to keep processes simple and efficient, minimize paperwork, promote computer-based processes and automated tracking and reporting, minimize time required in meetings and promote training as part of the QA process. However, no one, especially talented technical types, likes bureaucracy, and in the short run things may slow down a bit. A typical scenario would be that more days of planning and development will be needed, but less time will be required for late-night bug fixing and calming of irate customers.

 

Q34. What is software quality assurance?

A: Software Quality Assurance, when Rob Davis does it, is oriented to *prevention*. It involves the entire software development process. Prevention is monitoring and improving the process, making sure any agreed-upon standards and procedures are followed and ensuring problems are found and dealt with. Software Testing, when performed by Rob Davis, is oriented to *detection*. Testing involves the operation of a system or application under controlled conditions and evaluating the results. Organizations vary considerably in how they assign responsibility for QA and testing. Sometimes they’re the combined responsibility of one group or individual. Also common are project teams, which include a mix of test engineers, testers and developers who work closely together, with overall QA processes monitored by project managers. It depends on what best fits your organization’s size and business structure. Rob Davis can provide QA and/or Software QA. This document details some aspects of how he can provide software testing/QA service.

Q35. What is quality assurance?

A: Quality Assurance ensures all parties concerned with the project adhere to the process and procedures, standards and templates and test readiness reviews.

Rob Davis’ QA service depends on the customers and projects. A lot will depend on team leads or managers, feedback to developers and communications among customers, managers, developers’ test engineers and testers.

Q36. Process and procedures – why follow them?

A: Detailed and well-written processes and procedures ensure the correct steps are being executed to facilitate a successful completion of a task. They also ensure a process is repeatable. Once Rob Davis has learned and reviewed customer’s business processes and procedures, he will follow them. He will also recommend improvements and/or additions.

Q37. Standards and templates – what is supposed to be in a document?

A: All documents should be written to a certain standard and template. Standards and templates maintain document uniformity. It also helps in learning where information is located, making it easier for a user to find what they want. Lastly, with standards and templates, information will not be accidentally omitted from a document. Once Rob Davis has learned and reviewed your standards and templates, he will use them. He will also recommend improvements and/or additions.

Q38. What are the different levels of testing?

A: Rob Davis has expertise in testing at all the testing levels listed below. At each test level, he documents the results. Each level of testing is considered either black box or white box testing.

Q39. What is black box testing?

A: Black box testing is functional testing, not based on any knowledge of internal software design or code. Black box tests are based on requirements and functionality.

Q40. What is white box testing?

A: White box testing is based on knowledge of the internal logic of an application’s code. Tests are based on coverage of code statements, branches, paths and conditions.
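Branch coverage, one of the white box criteria mentioned above, can be illustrated with a small example. The `classify_discount` function and its thresholds are hypothetical; the point is that each test targets one branch of the code.

```python
import unittest

def classify_discount(total: float) -> str:
    # Two decision points -> the branches a white box tester wants to cover.
    if total < 0:
        raise ValueError("total cannot be negative")
    if total >= 100:
        return "bulk"
    return "standard"

class WhiteBoxTests(unittest.TestCase):
    # One test per branch, plus the boundary condition.
    def test_negative_branch(self):
        with self.assertRaises(ValueError):
            classify_discount(-1)
    def test_bulk_branch(self):
        self.assertEqual(classify_discount(150), "bulk")
    def test_boundary(self):
        self.assertEqual(classify_discount(100), "bulk")
    def test_standard_branch(self):
        self.assertEqual(classify_discount(10), "standard")

suite = unittest.defaultTestLoader.loadTestsFromTestCase(WhiteBoxTests)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print("all branches covered and passing:", result.wasSuccessful())
```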

Q41. What is unit testing?

A: Unit testing is the first level of dynamic testing and is first the responsibility of developers and then that of the test engineers. Unit testing is considered complete when the expected test results are met or differences are explainable/acceptable.

Q42. What is parallel/audit testing?

A: Parallel/audit testing is testing where the user reconciles the output of the new system to the output of the current system to verify the new system performs the operations correctly.

Q43. What is functional testing?

A: Functional testing is black-box type of testing geared to functional requirements of an application. Test engineers *should* perform functional testing.

Q44. What is usability testing?

A: Usability testing is testing for ‘user-friendliness’. Clearly this is subjective and depends on the targeted end-user or customer. User interviews, surveys, video recording of user sessions and other techniques can be used. Programmers and developers are usually not appropriate as usability testers.

Q45. What is incremental integration testing?

A: Incremental integration testing is continuous testing of an application as new functionality is added. This may require that various aspects of an application’s functionality are independent enough to work separately, before all parts of the program are completed, or that test drivers are developed as needed. This type of testing may be performed by programmers, software engineers, or test engineers.

Q46. What is integration testing?

A: Upon completion of unit testing, integration testing begins. Integration testing is black box testing. The purpose of integration testing is to ensure distinct components of the application still work in accordance with customer requirements. Test cases are developed with the express purpose of exercising the interfaces between the components. This activity is carried out by the test team.

Integration testing is considered complete, when actual results and expected results are either in line or differences are explainable/acceptable based on client input.

Q47. What is system testing?

A: System testing is black box testing, performed by the Test Team; at the start of system testing the complete system is configured in a controlled environment. The purpose of system testing is to validate an application’s accuracy and completeness in performing the functions as designed. System testing simulates real-life scenarios in a controlled test environment and tests all functions of the system that are required in real life. System testing is deemed complete when actual results and expected results are either in line or differences are explainable or acceptable, based on client input.

Upon completion of integration testing, system testing is started. Before system testing, all unit and integration test results are reviewed by Software QA to ensure all problems have been resolved. For a higher level of testing it is important to understand unresolved problems that originate at unit and integration test levels.

Q48. What is end-to-end testing?

A: End-to-end testing is similar to system testing; at the *macro* end of the test scale, it is testing a complete application in a situation that mimics real-world use, such as interacting with a database, using network communications, or interacting with other hardware, applications, or systems.

Q49. What is regression testing?

A: The objective of regression testing is to ensure the software remains intact. A baseline set of data and scripts is maintained and executed to verify changes introduced during the release have not “undone” any previous code. Expected results from the baseline are compared to results of the software under test. All discrepancies are highlighted and accounted for, before testing proceeds to the next level.
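The baseline comparison described above can be sketched in a few lines. The test ids and outputs below are made up; real baselines would be captured from a known-good release.

```python
# Baseline of expected outputs captured from the last known-good release (made-up data).
baseline = {
    "TC-001": "login ok",
    "TC-002": "report generated",
    "TC-003": "logout ok",
}

def run_regression(actual_results: dict) -> list:
    """Return the ids of baseline tests whose output no longer matches, i.e. 'undone' behavior."""
    return [tc for tc, expected in baseline.items()
            if actual_results.get(tc) != expected]

# A code change has "undone" TC-002; the discrepancy is highlighted:
current = {"TC-001": "login ok", "TC-002": "error 500", "TC-003": "logout ok"}
print(run_regression(current))
```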

Q50. What is sanity testing?

A: Sanity testing is performed whenever cursory testing is sufficient to prove the application is functioning according to specifications. This level of testing is a subset of regression testing. It normally includes a set of core tests of basic GUI functionality to demonstrate connectivity to the database, application servers, printers, etc.

Q51. What is performance testing?

A: Although performance testing is described as a part of system testing, it can be regarded as a distinct level of testing. Performance testing verifies loads, volumes and response times, as defined by requirements.

Q52. What is load testing?

A: Load testing is testing an application under heavy loads, such as the testing of a web site under a range of loads to determine at what point the system response time will degrade or fail.
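A minimal sketch of the idea: simulate a rising number of concurrent users against the system under test and watch where response time degrades. Here `handle_request` is a hypothetical stand-in for the real system; a real load test would drive a server with a dedicated tool.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def handle_request(payload: str) -> str:
    # Hypothetical stand-in for the system under test.
    time.sleep(0.01)
    return payload.upper()

def measure(users: int, requests_per_user: int) -> float:
    """Run simulated concurrent users and return average response time in seconds."""
    timings = []
    def one_user():
        for i in range(requests_per_user):
            start = time.perf_counter()
            handle_request(f"req-{i}")
            timings.append(time.perf_counter() - start)
    with ThreadPoolExecutor(max_workers=users) as pool:
        for _ in range(users):
            pool.submit(one_user)
    return sum(timings) / len(timings)

# Step the load up and look for the point where response time degrades.
for users in (1, 5, 25):
    print(f"{users:3} users: avg response {measure(users, 5):.4f}s")
```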

Q53. What is installation testing?

A: Installation testing is testing full, partial, upgrade, or install/uninstall processes. The installation test for a release is conducted with the objective of demonstrating production readiness. This test includes the inventory of configuration items, performed by the application’s System Administrator, the evaluation of data readiness, and dynamic tests focused on basic system functionality. When necessary, a sanity test is performed, following installation testing.

Q54. What is security/penetration testing?

A: Security/penetration testing is testing how well the system is protected against unauthorized internal or external access, or willful damage. This type of testing usually requires sophisticated testing techniques.

Q55. What is recovery/error testing?

A: Recovery/error testing is testing how well a system recovers from crashes, hardware failures, or other catastrophic problems.

Q56. What is compatibility testing?

A: Compatibility testing is testing how well software performs in a particular hardware, software, operating system, or network environment.

Q57. What is comparison testing?

A: Comparison testing is testing that compares software weaknesses and strengths to those of competitors’ products.

Q58. What is acceptance testing?

A: Acceptance testing is black box testing that gives the client/customer/project manager the opportunity to verify the system functionality and usability prior to the system being released to production. The acceptance test is the responsibility of the client/customer or project manager; however, it is conducted with the full support of the project team. The test team also works with the client/customer/project manager to develop the acceptance criteria.

Q59. What is alpha testing?

A: Alpha testing is testing of an application when development is nearing completion. Minor design changes can still be made as a result of alpha testing. Alpha testing is typically performed by a group that is independent of the design team, but still within the company, e.g. in-house software test engineers, or software QA engineers.

Q60. What is beta testing?

A: Beta testing is testing an application when development and testing are essentially completed and final bugs and problems need to be found before the final release. Beta testing is typically performed by end-users or others, not programmers, software engineers, or test engineers.

Q61. What testing roles are standard on most testing projects?

A: Depending on the organization, the following roles are more or less standard on most testing projects: Testers, Test Engineers, Test/QA Team Lead, Test/QA Manager, System Administrator, Database Administrator, Technical Analyst, Test Build Manager and Test Configuration Manager. Depending on the project, one person may wear more than one hat. For instance, Test Engineers may also wear the hat of Technical Analyst, Test Build Manager and Test Configuration Manager.

Q62. What is a Test/QA Team Lead?

A: The Test/QA Team Lead coordinates the testing activity, communicates testing status to management and manages the test team.

Q63. What is a Test Engineer?

A: Test Engineers are engineers who specialize in testing. We, test engineers, create test cases, procedures, scripts and generate data. We execute test procedures and scripts, analyze standards of measurements, evaluate results of system/integration/regression testing. We also…

·         Speed up the work of the development staff;

·         Reduce your organization’s risk of legal liability;

·         Give you the evidence that your software is correct and operates properly;

·         Improve problem tracking and reporting;

·         Maximize the value of your software;

·         Maximize the value of the devices that use it;

·         Assure the successful launch of your product by discovering bugs and design flaws, before users get discouraged, before shareholders lose their cool and before employees get bogged down;

·         Help the work of your development staff, so the development team can devote its time to building up your product;

·         Promote continual improvement;

·         Provide documentation required by FDA, FAA, other regulatory agencies and your customers;

·         Save money by discovering defects ‘early’ in the design process, before failures occur in production, or in the field;

·         Save the reputation of your company by discovering bugs and design flaws before they damage your company’s reputation.

Q64. What is a Test Build Manager?

A: Test Build Managers deliver current software versions to the test environment, install the application’s software and apply software patches to both the application and the operating system, and set up, maintain and back up test environment hardware. Depending on the project, one person may wear more than one hat. For instance, a Test Engineer may also wear the hat of a Test Build Manager.

Q65. What is a System Administrator?

A: Test Build Managers, System Administrators and Database Administrators deliver current software versions to the test environment, install the application’s software and apply software patches to both the application and the operating system, and set up, maintain and back up test environment hardware. Depending on the project, one person may wear more than one hat. For instance, a Test Engineer may also wear the hat of a System Administrator.

Q66. What is a Database Administrator?

A: Test Build Managers, System Administrators and Database Administrators deliver current software versions to the test environment, install the application’s software and apply software patches to both the application and the operating system, and set up, maintain and back up test environment hardware. Depending on the project, one person may wear more than one hat. For instance, a Test Engineer may also wear the hat of a Database Administrator.

Q67. What is a Technical Analyst?

A: Technical Analysts perform test assessments and validate system/functional test requirements. Depending on the project, one person may wear more than one hat. For instance, Test Engineers may also wear the hat of a Technical Analyst.

Q68. What is a Test Configuration Manager?

A: Test Configuration Managers maintain test environments, scripts, software and test data. Depending on the project, one person may wear more than one hat. For instance, Test Engineers may also wear the hat of a Test Configuration Manager.

Q69. What is a test schedule?

A: The test schedule identifies all tasks required for a successful testing effort: a schedule of all test activities and resource requirements.

Q70. What is software testing methodology?

A: One software testing methodology is the use of a three-step process of…

1.      Creating a test strategy;

2.      Creating a test plan/design; and

3.      Executing tests.

This methodology can be used and molded to your organization’s needs. Rob Davis believes that using this methodology is important in the development and in ongoing maintenance of his customers’ applications.

Q71. What is the general testing process?

A: The general testing process is the creation of a test strategy (which sometimes includes the creation of test cases), creation of a test plan/design (which usually includes test cases and test procedures) and the execution of tests.

Q72. How do you create a test strategy?

A: The test strategy is a formal description of how a software product will be tested. A test strategy is developed for all levels of testing, as required. The test team analyzes the requirements, writes the test strategy and reviews the plan with the project team. The test plan may include test cases, conditions, the test environment, a list of related tasks, pass/fail criteria and risk assessment.

Inputs for this process:

·         A description of the required hardware and software components, including test tools. This information comes from the test environment, including test tool data.

·         A description of roles and responsibilities of the resources required for the test and schedule constraints. This information comes from man-hours and schedules.

·         Testing methodology. This is based on known standards.

·         Functional and technical requirements of the application. This information comes from requirements, change requests, and technical and functional design documents.

·         Requirements that the system cannot provide, e.g. system limitations.

Outputs for this process:

·         An approved and signed off test strategy document, test plan, including test cases.

·         Testing issues requiring resolution. Usually this requires additional negotiation at the project management level.

Q73. How do you create a test plan/design?

A: Test scenarios and/or cases are prepared by reviewing functional requirements of the release and preparing logical groups of functions that can be further broken into test procedures. Test procedures define test conditions, data to be used for testing and expected results, including database updates, file outputs, report results. Generally speaking…

Test cases and scenarios are designed to represent both typical and unusual situations that may occur in the application.

Test engineers define unit test requirements and unit test cases. Test engineers also execute unit test cases.

It is the test team that, with assistance of developers and clients, develops test cases and scenarios for integration and system testing.

Test scenarios are executed through the use of test procedures or scripts.

Test procedures or scripts define a series of steps necessary to perform one or more test scenarios.

Test procedures or scripts include the specific data that will be used for testing the process or transaction.

Test procedures or scripts may cover multiple test scenarios.

Test scripts are mapped back to the requirements and traceability matrices are used to ensure each test is within scope.
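A traceability matrix can be as simple as a mapping from requirements to the scripts that cover them; checking it then reveals uncovered requirements and out-of-scope scripts. The ids below are illustrative only.

```python
# Requirements mapped to the test scripts claimed to cover them (made-up ids).
traceability = {
    "REQ-1 password login": ["TS-01", "TS-02"],
    "REQ-2 report export":  ["TS-03"],
    "REQ-3 audit logging":  [],      # coverage gap: no script covers this yet
}
all_scripts = {"TS-01", "TS-02", "TS-03", "TS-99"}

# Requirements with no covering script:
uncovered = [req for req, scripts in traceability.items() if not scripts]

# Scripts not traceable to any requirement, i.e. outside the test scope:
mapped = {s for scripts in traceability.values() for s in scripts}
out_of_scope = all_scripts - mapped

print("uncovered requirements:", uncovered)
print("out-of-scope scripts:", sorted(out_of_scope))
```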

Test data is captured and baselined prior to testing. This data serves as the foundation for unit and system testing and is used to exercise system functionality in a controlled environment.

Some output data is also base-lined for future comparison. Base-lined data is used to support future application maintenance via regression testing.

A pretest meeting is held to assess the readiness of the application and the environment and data to be tested. A test readiness document is created to indicate the status of the entrance criteria of the release.

Inputs for this process:

Approved Test Strategy Document.

Test tools, or automated test tools, if applicable.

Previously developed scripts, if applicable.

Test documentation problems uncovered as a result of testing.

A good understanding of software complexity and module path coverage, derived from general and detailed design documents, e.g. software design document, source code and software complexity data.

Outputs for this process:

Approved documents of test scenarios, test cases, test conditions and test data.

Reports of software design issues, given to software developers for correction.

Q74. How do you execute tests?

A: Execution of tests is completed by following the test documents in a methodical manner. As each test procedure is performed, an entry is recorded in a test execution log to note the execution of the procedure and whether or not the test procedure uncovered any defects. Checkpoint meetings are held throughout the execution phase. Checkpoint meetings are held daily, if required, to address and discuss testing issues, status and activities.

The output from the execution of test procedures is known as test results. Test results are evaluated by test engineers to determine whether the expected results have been obtained. All discrepancies/anomalies are logged, discussed with the software team lead, hardware test lead, programmers and software engineers, and documented for further investigation and resolution. Every company has a different process for logging and reporting bugs/defects uncovered during testing.
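A test execution log entry can be sketched as follows. The fields and procedure ids are illustrative assumptions; real projects record whatever their tracking process requires.

```python
import datetime

execution_log = []

def record_execution(procedure_id: str, expected: str, actual: str) -> dict:
    """Log one test procedure run and note whether it uncovered a defect."""
    entry = {
        "procedure": procedure_id,
        "when": datetime.datetime.now().isoformat(timespec="seconds"),
        "result": "pass" if actual == expected else "fail",
        "defect_logged": actual != expected,   # discrepancies go to problem tracking
    }
    execution_log.append(entry)
    return entry

record_execution("TP-07", expected="balance = 100", actual="balance = 100")
record_execution("TP-08", expected="logout ok", actual="session error")
print([(e["procedure"], e["result"]) for e in execution_log])
```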

A pass/fail criteria is used to determine the severity of a problem, and results are recorded in a test summary report. The severity of a problem, found during system testing, is defined in accordance to the customer’s risk assessment and recorded in their selected tracking tool.

Proposed fixes are delivered to the testing environment, based on the severity of the problem. Fixes are regression tested and flawless fixes are migrated to a new baseline. Following completion of the test, members of the test team prepare a summary report. The summary report is reviewed by the Project Manager, Software QA Manager and/or Test Team Lead.

After a particular level of testing has been certified, it is the responsibility of the Configuration Manager to coordinate the migration of the release software components to the next test level, as documented in the Configuration Management Plan. The software is only migrated to the production environment after the Project Manager’s formal acceptance.

The test team reviews test document problems identified during testing, and updates documents where appropriate.

Inputs for this process:

Approved test documents, e.g. Test Plan, Test Cases, Test Procedures.

Test tools, including automated test tools, if applicable.

Developed scripts.

Changes to the design, i.e. Change Request Documents.

Test data.

Availability of the test team and project team.

General and Detailed Design Documents, i.e. Requirements Document, Software Design Document.

Software that has been migrated to the test environment, i.e. unit tested code, via the Configuration/Build Manager.

Test Readiness Document.

Document Updates.

Outputs for this process:

Log and summary of the test results. Usually this is part of the Test Report. This needs to be approved and signed-off with revised testing deliverables.

Changes to the code, also known as test fixes.

Test document problems uncovered as a result of testing. Examples are Requirements document and Design Document problems.

Reports on software design issues, given to software developers for correction. Examples are bug reports on code issues.

Formal record of test incidents, usually part of problem tracking.

Base-lined package, also known as tested source and object code, ready for migration to the next level.

Q75. What testing approaches can you tell me about?

A: Each of the following represents a different testing approach:

Black box testing,

White box testing,

Unit testing,

Incremental testing,

Integration testing,

Functional testing,

System testing,

End-to-end testing,

Sanity testing,

Regression testing,

Acceptance testing,

Load testing,

Performance testing,

Usability testing,

Install/uninstall testing,

Recovery testing,

Security testing,

Compatibility testing,

Exploratory testing, ad-hoc testing,

User acceptance testing,

Comparison testing,

Alpha testing,

Beta testing, and

Mutation testing.

Q76. What is stress testing?

A: Stress testing investigates the behavior of software (and hardware) under extraordinary operating conditions. It tests a system or entity beyond its normal operational capacity, in order to observe any negative results, and so tests the stability of that system. For example, when a web server is stress tested, testing aims to find out how many users can be on-line, at the same time, without crashing the server; the load may be generated using scripts, bots, and various denial-of-service tools.

Q77. What is load testing?

A: Load testing simulates the expected usage of a software program, by simulating multiple users that access the program’s services concurrently. Load testing is most useful and most relevant for multi-user systems and client/server models, including web servers. For example, the load placed on the system is increased above normal usage patterns, in order to test the system’s response at peak loads.
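As a rough sketch (in Python, with a hypothetical `simulate_user` function standing in for a real request to the server), a load test drives many concurrent users and records their response times:

```python
import threading
import time

def simulate_user(latencies, lock):
    """Hypothetical stand-in for one user's request; records its latency."""
    start = time.perf_counter()
    time.sleep(0.01)  # placeholder for a real request to the server
    elapsed = time.perf_counter() - start
    with lock:
        latencies.append(elapsed)

def run_load_test(concurrent_users):
    """Drive `concurrent_users` simulated users at the same time."""
    latencies, lock = [], threading.Lock()
    threads = [threading.Thread(target=simulate_user, args=(latencies, lock))
               for _ in range(concurrent_users)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return len(latencies), max(latencies)

count, worst = run_load_test(20)
print(count)  # 20 simulated users completed
```

In a real load test, the sleep would be replaced by an actual request, and the user count would be raised toward (and past) expected peak usage while latencies are charted.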

Q79. What is the difference between performance testing and load testing?

A: Load testing is a blanket term that is used in many different ways across the professional software testing community; it is often used synonymously with stress testing, performance testing, reliability testing, and volume testing. Performance testing focuses on measuring attributes such as response time and throughput under a given workload, whereas load testing focuses on how the system behaves as the level of demand rises toward expected peak usage. Load testing generally stops short of stress testing; during stress testing, the load is so great that errors are the expected results, though there is a gray area in between stress testing and load testing.

Q80. What is the difference between reliability testing and load testing?

A: Load testing is a blanket term that is used in many different ways across the professional software testing community; it is often used synonymously with stress testing, performance testing, reliability testing, and volume testing. Reliability testing checks whether the system can keep operating correctly over an extended period of time, whereas load testing examines behavior at a particular level of demand. Load testing generally stops short of stress testing; during stress testing, the load is so great that errors are the expected results, though there is a gray area in between stress testing and load testing.

Q81. What is the difference between volume testing and load testing?

A: Load testing is a blanket term that is used in many different ways across the professional software testing community; it is often used synonymously with stress testing, performance testing, reliability testing, and volume testing. Volume testing subjects the system to large volumes of data (for example, very large files or database tables), whereas load testing concentrates on many concurrent users or transactions. Load testing generally stops short of stress testing; during stress testing, the load is so great that errors are the expected results, though there is a gray area in between stress testing and load testing.

Q82. What is incremental testing?

A: Incremental testing is partial testing of an incomplete product. The goal of incremental testing is to provide early feedback to software developers.

Q83. What is software testing?

A: Software testing is a process that evaluates the correctness, completeness, and quality of software. Strictly speaking, testing cannot establish the correctness of software: it can find defects, but it cannot prove there are no defects.

Q84. What is automated testing?

A: Automated testing is the use of software tools to execute tests and compare actual outcomes with expected outcomes, following a formally specified and controlled testing approach.

Q85. What is alpha testing?

A: Alpha testing is final testing before the software is released to the general public. First, (and this is called the first phase of alpha testing), the software is tested by in-house developers. They use either debugger software, or hardware-assisted debuggers. The goal is to catch bugs quickly. Then, (and this is called second stage of alpha testing), the software is handed over to us, the software QA staff, for additional testing in an environment that is similar to the intended use.

Q86. What is beta testing?

A: Following alpha testing, “beta versions” of the software are released to a group of people, and limited public tests are performed, so that further testing can ensure the product has few bugs. Other times, beta versions are made available to the general public, in order to receive as much feedback as possible. The goal is to benefit the maximum number of future users.

Q87. What is the difference between alpha and beta testing?

A: Alpha testing is performed by in-house developers and software QA personnel. Beta testing is performed by the public, a few select prospective customers, or the general public.

Q88. What is clear box testing?

A: Clear box testing is the same as white box testing. It is a testing approach that examines the application’s program structure, and derives test cases from the application’s program logic.

Q89. What is boundary value analysis?

A: Boundary value analysis is a technique for test data selection. A test engineer chooses values that lie along data extremes. Boundary values include maximum, minimum, just inside boundaries, just outside boundaries, typical values, and error values. The expectation is that, if a system works correctly for these extreme or special values, then it will work correctly for all values in between. An effective way to test code is to exercise it at its natural boundaries.
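For example, if an order quantity is valid from 1 to 100, boundary value analysis selects the minimum, the maximum, values just inside and just outside those boundaries, a typical value, and error values. A Python sketch (the `accept_quantity` function is a hypothetical system under test):

```python
def accept_quantity(qty):
    """Hypothetical system under test: valid order quantities are 1 through 100."""
    return 1 <= qty <= 100

# Boundary value analysis: minimum, maximum, just inside, just outside, typical.
boundary_cases = {
    0: False,    # just below the minimum (error value)
    1: True,     # minimum
    2: True,     # just inside the minimum
    50: True,    # typical value
    99: True,    # just inside the maximum
    100: True,   # maximum
    101: False,  # just above the maximum (error value)
}

for value, expected in boundary_cases.items():
    assert accept_quantity(value) == expected, value
print("all boundary cases passed")
```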

Q90. What is ad hoc testing?

A: Ad hoc testing is the least formal testing approach; it is performed without formal test plans or documented test cases.

Q91. What is gamma testing?

A: Gamma testing is testing of software that has all the required features but did not go through all the in-house quality checks. Cynics tend to refer to such software releases as “gamma testing”.

Q92. What is glass box testing?

A: Glass box testing is the same as white box testing. It is a testing approach that examines the application’s program structure, and derives test cases from the application’s program logic.

Q93. What is open box testing?

A: Open box testing is the same as white box testing. It is a testing approach that examines the application’s program structure, and derives test cases from the application’s program logic.

Q94. What is black box testing?

A: Black box testing is a type of testing that considers only externally visible behavior. Black box testing considers neither the code itself nor the “inner workings” of the software.

Q95. What is functional testing?

A: Functional testing is the same as black box testing. Black box testing is a type of testing that considers only externally visible behavior; it considers neither the code itself nor the “inner workings” of the software.

Q96. What is closed box testing?

A: Closed box testing is the same as black box testing. Black box testing is a type of testing that considers only externally visible behavior; it considers neither the code itself nor the “inner workings” of the software.

Q97. What is bottom-up testing?

A: Bottom-up testing is a technique for integration testing in which low-level components are tested first. Because the higher-level components that will call them have not yet been developed, a test engineer creates and uses test drivers to stand in for them.
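A minimal sketch of this idea in Python: a hand-written test driver exercises a completed low-level component before the higher-level module that will eventually call it exists (`parse_amount` is a hypothetical example component):

```python
# Low-level component, developed and tested first.
def parse_amount(text):
    """Convert a currency string like '1,250.75' to a float."""
    return float(text.replace(",", ""))

# Test driver: stands in for the not-yet-developed high-level billing module
# and exercises the low-level component directly.
def driver_for_parse_amount():
    cases = [("0", 0.0), ("1,250.75", 1250.75), ("99.99", 99.99)]
    for text, expected in cases:
        result = parse_amount(text)
        assert result == expected, (text, result)
    return "parse_amount: all driver cases passed"

print(driver_for_parse_amount())
```

Once the higher-level module exists, the driver is discarded and real integration tests take its place.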

Q98. What is software quality?

A: The quality of the software does vary widely from system to system. Some common quality attributes are stability, usability, reliability, portability, and maintainability. See quality standard ISO 9126 for more information on this subject.

Q99. What do test case templates look like?

A: Software test cases are in a document that describes inputs, actions, or events, and their expected results, in order to determine if all features of an application are working correctly. Test case templates contain all particulars of every test case. Often these templates are in the form of a table. One example is a 6-column table, where column 1 is the “Test Case ID Number”, column 2 is the “Test Case Name”, column 3 is the “Test Objective”, column 4 is the “Test Conditions/Setup”, column 5 is the “Input Data Requirements/Steps”, and column 6 is the “Expected Results”. All documents should be written to a certain standard and template. Standards and templates maintain document uniformity, help users learn where information is located, and ensure that information is not accidentally omitted from a document.
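One way to sketch such a 6-column template in code (Python; the field values are illustrative only):

```python
from dataclasses import dataclass, asdict

@dataclass
class TestCase:
    """One row of the 6-column test case template described above."""
    test_case_id: str
    test_case_name: str
    test_objective: str
    conditions_setup: str
    input_steps: str
    expected_results: str

tc = TestCase(
    test_case_id="TC-001",
    test_case_name="Login with valid credentials",
    test_objective="Verify a registered user can log in",
    conditions_setup="User 'demo' exists; application is running",
    input_steps="1. Open login page 2. Enter credentials 3. Click Login",
    expected_results="Home page is displayed with the user's name",
)
print(list(asdict(tc).keys()))
```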

Q100. What is a software fault?

A: Software faults are hidden programming errors. Software faults are errors in the correctness of the semantics of computer programs.

Q101. What is software failure?

A: Software failure occurs when the software does not do what the user expects to see.

Q102. What is the difference between a software fault and a software failure?

A: Software failure occurs when the software does not do what the user expects to see. A software fault, on the other hand, is a hidden programming error. A software fault becomes a software failure only when the exact computation conditions are met and the faulty portion of the code is executed on the CPU. This can occur during normal usage, when the software is ported to a different hardware platform or a different compiler, or when the software gets extended.
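A small Python illustration of the distinction: the fault is present in the code from the start, but a failure is observed only when the faulty path is actually executed:

```python
def average(values):
    # Hidden fault: dividing by len(values) breaks when the list is empty.
    return sum(values) / len(values)

# The fault stays hidden under normal usage...
print(average([2, 4, 6]))  # 4.0

# ...and becomes a failure only when the faulty condition is met.
try:
    average([])
except ZeroDivisionError:
    print("failure observed: ZeroDivisionError")
```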

WinRunner Testing Process


WinRunner Interview Questions And Answers

1) Explain WinRunner testing process?

a) WinRunner testing process involves six main stages

i.Create GUI Map File so that WinRunner can recognize the GUI objects in the application being tested

ii.Create test scripts by recording, programming, or a combination of both. While recording tests, insert checkpoints where you want to check the response of the application being tested.

iii.Debug Test: run tests in Debug mode to make sure they run smoothly

iv.Run Tests: run tests in Verify mode to test your application.

v.View Results: determine the success or failure of the tests.

vi.Report Defects: If a test run fails due to a defect in the application being tested, you can report information about the defect directly from the Test Results window.

2) What is contained in the GUI map?

a) WinRunner stores information it learns about a window or object in a GUI Map. When WinRunner runs a test, it uses the GUI map to locate objects. It reads an object’s description in the GUI map and then looks for an object with the same properties in the application being tested. Each of these objects in the GUI Map file will be having a logical name and a physical description.

b) There are 2 types of GUI Map files.

i.Global GUI Map file: a single GUI Map file for the entire application

ii.GUI Map File per Test: WinRunner automatically creates a GUI Map file for each test created.

3) How does WinRunner recognize objects on the application?

a) WinRunner uses the GUI Map file to recognize objects on the application. When WinRunner runs a test, it uses the GUI map to locate objects. It reads an object’s description in the GUI map and then looks for an object with the same properties in the application being tested.

4) Have you created test scripts and what is contained in the test scripts?

a) Yes, I have created test scripts. They contain statements in Mercury Interactive’s Test Script Language (TSL). These statements appear as a test script in a test window. You can then enhance your recorded test script, either by typing in additional TSL functions and programming elements or by using WinRunner’s visual programming tool, the Function Generator.

5) How does WinRunner evaluate test results?

a) Following each test run, WinRunner displays the results in a report. The report details all the major events that occurred during the run, such as checkpoints, error messages, system messages, or user messages. If mismatches are detected at checkpoints during the test run, you can view the expected results and the actual results from the Test Results window.


6) Have you performed debugging of the scripts?

a) Yes, I have performed debugging of scripts. We can debug a script by executing it in Debug mode. We can also debug a script using the Step, Step Into, and Step Out functionalities provided by WinRunner.

7) How do you run your test scripts?

a) We run tests in Verify mode to test the application. Each time WinRunner encounters a checkpoint in the test script, it compares the current data of the application being tested to the expected data captured earlier. If any mismatches are found, WinRunner captures them as actual results.

8) How do you analyze results and report the defects?

a) Following each test run, WinRunner displays the results in a report. The report details all the major events that occurred during the run, such as checkpoints, error messages, system messages, or user messages. If mismatches are detected at checkpoints during the test run, you can view the expected results and the actual results from the Test Results window. If a test run fails due to a defect in the application being tested, you can report information about the defect directly from the Test Results window. This information is sent via e-mail to the quality assurance manager, who tracks the defect until it is fixed.

9) What are the different modes of recording?

a) There are two type of recording in WinRunner.

i.Context Sensitive recording records the operations you perform on your application by identifying Graphical User Interface (GUI) objects.

ii.Analog recording records keyboard input, mouse clicks, and the precise x- and y-coordinates traveled by the mouse pointer across the screen.

10) What is the purpose of loading WinRunner Add-Ins?

a) Add-Ins are used in WinRunner to load functions specific to the particular add-in to the memory. While creating a script only those functions in the add-in selected will be listed in the function generator and while executing the script only those functions in the loaded add-in will be executed else WinRunner will give an error message saying it does not recognize the function.


11.How to modify the logical name or physical description of the object?

Using GUI editor

12.How Winrunner handles varying window labels?

Using regular expressions

13.How do you write messages to the report?

report_msg

14. How you used WinRunner in your project?

Yes, I have been using WinRunner to create automated scripts for GUI, functional, and regression testing of the AUT.

15. Explain WinRunner testing process?

WinRunner testing process involves six main stages

i.Create GUI Map File so that WinRunner can recognize the GUI objects in the application being tested

ii.Create test scripts by recording, programming, or a combination of both. While recording tests, insert checkpoints where you want to check the response of the application being tested.

iii.Debug Test: run tests in Debug mode to make sure they run smoothly

iv.Run Tests: run tests in Verify mode to test your application.

v.View Results: determine the success or failure of the tests.

vi.Report Defects: If a test run fails due to a defect in the application being tested, you can report information about the defect directly from the Test Results window.

16. What is contained in the GUI map?

a) WinRunner stores information it learns about a window or object in a GUI Map. When WinRunner runs a test, it uses the GUI map to locate objects. It reads an object’s description in the GUI map and then looks for an object with the same properties in the application being tested. Each of these objects in the GUI Map file will be having a logical name and a physical description.

b) There are 2 types of GUI Map files.

i.Global GUI Map file: a single GUI Map file for the entire application

ii.GUI Map File per Test: WinRunner automatically creates a GUI Map file for each test created.

17. How does WinRunner recognize objects on the application?

a) WinRunner uses the GUI Map file to recognize objects on the application. When WinRunner runs a test, it uses the GUI map to locate objects. It reads an object’s description in the GUI map and then looks for an object with the same properties in the application being tested.

18. Have you created test scripts and what is contained in the test scripts?

a) Yes, I have created test scripts. They contain statements in Mercury Interactive’s Test Script Language (TSL). These statements appear as a test script in a test window. You can then enhance your recorded test script, either by typing in additional TSL functions and programming elements or by using WinRunner’s visual programming tool, the Function Generator.

19. How does WinRunner evaluate test results?

a) Following each test run, WinRunner displays the results in a report. The report details all the major events that occurred during the run, such as checkpoints, error messages, system messages, or user messages. If mismatches are detected at checkpoints during the test run, you can view the expected results and the actual results from the Test Results window.

20. Have you performed debugging of the scripts?

a) Yes, I have performed debugging of scripts. We can debug a script by executing it in Debug mode. We can also debug a script using the Step, Step Into, and Step Out functionalities provided by WinRunner.

31. How do you view the contents of the GUI map?

a) GUI Map editor displays the content of a GUI Map. We can invoke GUI Map Editor from the Tools Menu in WinRunner. The GUI Map Editor displays the various GUI Map files created and the windows and objects learned in to them with their logical name and physical description.

32. When you create GUI map do you record all the objects of specific objects?

a) If we are learning a window, then WinRunner automatically learns all the objects in the window. Otherwise, we identify only those objects in the window that need to be learned, since we will be working with only those objects while creating scripts.

33. What is the purpose of set_window command?

a) Set_Window command sets the focus to the specified window. We use this command to set the focus to the required window before executing tests on a particular window.

Syntax: set_window(&lt;logical name&gt;, time);

The logical name is the logical name of the window, and time is how long execution waits for the given window to come into focus.

34. How do you load GUI map?

a) We can load a GUI Map by using the GUI_load command.

Syntax: GUI_load(&lt;file_name&gt;);

35. What is the disadvantage of loading the GUI maps through start up scripts?

a) If we are using a single GUI Map file for the entire AUT, the memory used by the GUI Map may be quite high.

b) If there is any change in the object being learned then WinRunner will not be able to recognize the object, as it is not in the GUI Map file loaded in the memory. So we will have to learn the object again and update the GUI File and reload it.

36. How do you unload the GUI map?

a) We can use GUI_close to unload a specific GUI Map file, or else we can use the GUI_close_all command to unload all the GUI Map files loaded in memory.

Syntax: GUI_close(&lt;file_name&gt;); or GUI_close_all;

37. What actually happens when you load GUI map?

a) When we load a GUI Map file, the information about the windows and the objects with their logical names and physical description are loaded into memory. So when the WinRunner executes a script on a particular window, it can identify the objects using this information loaded in the memory.

38. What is the purpose of the temp GUI map file?

a) While recording a script, WinRunner learns objects and windows by itself. These are stored in the temporary GUI Map file. We can specify in the General Options whether this temporary GUI Map file should be loaded each time.

39. What is the extension of gui map file?

a) The extension for a GUI Map file is “.gui”.

40. How do you find an object in a GUI map?

a) The GUI Map Editor provides Find and Show buttons.

i.To find a particular object from the GUI Map file in the application, select the object and click the Show button. This blinks the selected object in the application.

ii.To find a particular object in a GUI Map file, click the Find button, which gives the option to select the object. When the object is selected, if the object has been learned into the GUI Map file, it will be highlighted in the GUI Map file.

41. What different actions are performed by find and show button?

a) To find a particular object from the GUI Map file in the application, select the object and click the Show button. This blinks the selected object in the application.

b) To find a particular object in a GUI Map file, click the Find button, which gives the option to select the object. When the object is selected, if the object has been learned into the GUI Map file, it will be highlighted in the GUI Map file.

42. How do you identify which files are loaded in the GUI map?

a) The GUI Map Editor has a drop down “GUI File” displaying all the GUI Map files loaded into the memory.

43. How do you modify the logical name or the physical description of the objects in GUI map?

a) You can modify the logical name or the physical description of an object in a GUI map file using the GUI Map Editor.


44. When do you feel you need to modify the logical name?

a) Changing the logical name of an object is useful when the assigned logical name is not sufficiently descriptive or is too long.

45. When it is appropriate to change physical description?

a) Changing the physical description is necessary when the property value of an object changes.

46. How WinRunner handles varying window labels?

a) We can handle varying window labels using regular expressions. WinRunner uses two “hidden” properties in order to use regular expression in an object’s physical description. These properties are regexp_label and regexp_MSW_class.

i.The regexp_label property is used for windows only. It operates “behind the scenes” to insert a regular expression into a window’s label description.

ii.The regexp_MSW_class property inserts a regular expression into an object’s MSW_class. It is obligatory for all types of windows and for the object class object.
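The underlying idea can be illustrated outside WinRunner with an ordinary regular expression (a Python sketch; the pattern and window labels are made up):

```python
import re

# A window whose label embeds a changing document name, e.g. "Untitled - Notepad"
# or "report.txt - Notepad". A regular expression in the label description lets
# one entry match them all; WinRunner's regexp_label property plays this role.
label_pattern = re.compile(r".* - Notepad$")

labels = ["Untitled - Notepad", "report.txt - Notepad", "Calculator"]
matches = [bool(label_pattern.match(s)) for s in labels]
print(matches)  # [True, True, False]
```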


47. What is the purpose of regexp_label property and regexp_MSW_class property?

a) The regexp_label property is used for windows only. It operates “behind the scenes” to insert a regular expression into a window’s label description.

b) The regexp_MSW_class property inserts a regular expression into an object’s MSW_class. It is obligatory for all types of windows and for the object class object.

48. How do you suppress a regular expression?

a) We can suppress the regular expression of a window by replacing the regexp_label property with label property.

49. How do you copy and move objects between different GUI map files?

a) We can copy and move objects between different GUI Map files using the GUI Map Editor. The steps to be followed are:

i.Choose Tools > GUI Map Editor to open the GUI Map Editor.

ii.Choose View > GUI Files.

iii.Click Expand in the GUI Map Editor. The dialog box expands to display two GUI map files simultaneously.

iv. View a different GUI map file on each side of the dialog box by clicking the file names in the GUI File lists.

v.In one file, select the objects you want to copy or move. Use the Shift key and/or Control key to select multiple objects. To select all objects in a GUI map file, choose Edit > Select All.

vi.Click Copy or Move.

vii.To restore the GUI Map Editor to its original size, click Collapse.

50. How do you select multiple objects during merging the files?

a) Use the Shift key and/or Control key to select multiple objects. To select all objects in a GUI map file, choose Edit > Select All.

51. How do you clear a GUI map file?

a) We can clear a GUI Map file using the “Clear All” option in the GUI Map Editor.

52. How do you filter the objects in the GUI map?

a) GUI Map Editor has a Filter option. This provides for filtering with 3 different types of options.

i. Logical name displays only objects with the specified logical name.

ii. Physical description displays only objects matching the specified physical description. Use any substring belonging to the physical description.

iii. Class displays only objects of the specified class, such as all the push buttons.

53. How do you configure the GUI map?

a) When WinRunner learns the description of a GUI object, it does not learn all its properties. Instead, it learns the minimum number of properties to provide a unique identification of the object.

b) Many applications also contain custom GUI objects. A custom object is any object not belonging to one of the standard classes used by WinRunner. These objects are therefore assigned to the generic “object” class. When WinRunner records an operation on a custom object, it generates obj_mouse_ statements in the test script.

c) If a custom object is similar to a standard object, you can map it to one of the standard classes. You can also configure the properties WinRunner uses to identify a custom object during Context Sensitive testing. The mapping and the configuration you set are valid only for the current WinRunner session. To make the mapping and the configuration permanent, you must add configuration statements to your startup test script.

54. What is the purpose of GUI map configuration?

a) GUI Map configuration is used to map a custom object to a standard object.

55. How do you make the configuration and mappings permanent?

a) The mapping and the configuration you set are valid only for the current WinRunner session. To make the mapping and the configuration permanent, you must add configuration statements to your startup test script.

56. What is the purpose of GUI spy?

a) Using the GUI Spy, you can view the properties of any GUI object on your desktop. You use the Spy pointer to point to an object, and the GUI Spy displays the properties and their values in the GUI Spy dialog box. You can choose to view all the properties of an object, or only the selected set of properties that WinRunner learns.

57. What is the purpose of obligatory and optional properties of the objects?

a) For each class, WinRunner learns a set of default properties. Each default property is classified “obligatory” or “optional”.

i. An obligatory property is always learned (if it exists).

ii.An optional property is used only if the obligatory properties do not provide unique identification of an object. These optional properties are stored in a list. WinRunner selects the minimum number of properties from this list that are necessary to identify the object. It begins with the first property in the list, and continues, if necessary, to add properties to the description until it obtains unique identification for the object.
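The property-selection process described above can be sketched as a small algorithm (Python; the objects and property names are illustrative, not WinRunner internals):

```python
def identifying_properties(target, others, obligatory, optional):
    """Return the smallest property set (obligatory properties first, then
    optional properties in list order) that distinguishes `target` from
    every object in `others`."""
    def matches(obj, props):
        return all(obj.get(p) == target.get(p) for p in props)

    chosen = list(obligatory)
    for prop in [None] + optional:
        if prop is not None:
            chosen.append(prop)
        # The description is unique if no other object matches it.
        if not any(matches(obj, chosen) for obj in others):
            return chosen
    return chosen  # still ambiguous; a selector would then be needed

target = {"class": "push_button", "label": "OK", "MSW_id": 17}
others = [{"class": "push_button", "label": "OK", "MSW_id": 23},
          {"class": "push_button", "label": "Cancel", "MSW_id": 5}]
print(identifying_properties(target, others, ["class"], ["label", "MSW_id"]))
```

Here "class" alone is ambiguous, and so is "class" plus "label" (another OK button exists), so the algorithm keeps adding optional properties until the description is unique.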

58. When the optional properties are learned?

a) An optional property is used only if the obligatory properties do not provide unique identification of an object.

59. What is the purpose of location indicator and index indicator in GUI map configuration?

a) In cases where the obligatory and optional properties do not uniquely identify an object, WinRunner uses a selector to differentiate between them. Two types of selectors are available:

i. A location selector uses the spatial position of objects.

1. The location selector uses the spatial order of objects within the window, from the top left to the bottom right corners, to differentiate among objects with the same description.

ii. An index selector uses a unique number to identify the object in a window.

1. The index selector uses numbers assigned at the time of creation of objects to identify the object in a window. Use this selector if the location of objects with the same description may change within a window.
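A sketch of how a location selector might order identical objects (Python; the coordinates are made up):

```python
# Location selector sketch: order objects with identical descriptions by their
# spatial position in the window, top-left to bottom-right.
buttons = [
    {"label": "OK", "x": 200, "y": 50},
    {"label": "OK", "x": 20,  "y": 50},
    {"label": "OK", "x": 20,  "y": 120},
]

# Sort by y (top to bottom), then x (left to right); an object's index in
# this ordering is its location selector value.
ordered = sorted(buttons, key=lambda b: (b["y"], b["x"]))
print([(b["x"], b["y"]) for b in ordered])  # [(20, 50), (200, 50), (20, 120)]
```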

60. How do you handle custom objects?

a) A custom object is any GUI object not belonging to one of the standard classes used by WinRunner. WinRunner learns such objects under the generic “object” class. WinRunner records operations on custom objects using obj_mouse_ statements.

b) If a custom object is similar to a standard object, you can map it to one of the standard classes. You can also configure the properties WinRunner uses to identify a custom object during Context Sensitive testing.

61. What is the name of custom class in WinRunner and what methods it applies on the custom objects?

a) WinRunner learns custom class objects under the generic “object” class. WinRunner records operations on custom objects using obj_ statements.

62. In a situation when obligatory and optional both the properties cannot uniquely identify an object what method WinRunner applies?

a) In cases where the obligatory and optional properties do not uniquely identify an object, WinRunner uses a selector to differentiate between them. Two types of selectors are available:

i. A location selector uses the spatial position of objects.

ii. An index selector uses a unique number to identify the object in a window.

63. What is the purpose of the different record methods: 1) Record, 2) Pass Up, 3) As Object, 4) Ignore?

a) Record instructs WinRunner to record all operations performed on a GUI object. This is the default record method for all classes. (The only exception is the static class (static text), for which the default is Pass Up.)

b) Pass Up instructs WinRunner to record an operation performed on this class as an operation performed on the element containing the object. Usually this element is a window, and the operation is recorded as win_mouse_click.

c) As Object instructs WinRunner to record all operations performed on a GUI object as though its class were “object” class.

d) Ignore instructs WinRunner to disregard all operations performed on the class.

64. How do you find out which is the start up file in WinRunner?

a) The test script name in the Startup Test box in the Environment tab in the General Options dialog box is the start up file in WinRunner.

65. What are the virtual objects and how do you learn them?

a) Applications may contain bitmaps that look and behave like GUI objects. WinRunner records operations on these bitmaps using win_mouse_click statements. By defining a bitmap as a virtual object, you can instruct WinRunner to treat it like a GUI object such as a push button, when you record and run tests.

b) Using the Virtual Object wizard, you can assign a bitmap to a standard object class, define the coordinates of that object, and assign it a logical name.

To define a virtual object using the Virtual Object wizard:

i. Choose Tools > Virtual Object Wizard. The Virtual Object wizard opens. Click Next.

ii. In the Class list, select a class for the new virtual object. For a list class, select the number of visible rows that are displayed in the window; for a table class, select the number of visible rows and columns. Click Next.

iii. Click Mark Object. Use the crosshairs pointer to select the area of the virtual object. You can use the arrow keys to make precise adjustments to the area you define with the crosshairs. Press Enter or click the right mouse button to display the virtual object’s coordinates in the wizard. If the object marked is visible on the screen, you can click the Highlight button to view it. Click Next.

iv. Assign a logical name to the virtual object. This is the name that appears in the test script when you record on the virtual object. If the object contains text that WinRunner can read, the wizard suggests using this text for the logical name. Otherwise, WinRunner suggests virtual_object, virtual_push_button, virtual_list, etc.

v. You can accept the wizard’s suggestion or type in a different name. WinRunner checks that there are no other objects in the GUI map with the same name before confirming your choice. Click Next.
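Once a virtual object is defined, recording on it produces context-sensitive statements instead of raw win_mouse_click calls. A minimal sketch, with hypothetical window and logical names:

```
# Recording a click on a bitmap defined as a virtual push button
# with the (hypothetical) logical name "virtual_OK":
set_window ("Main Window", 5);   # hypothetical window name
button_press ("virtual_OK");     # recorded as a standard push button
```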

66. How did you create your test scripts: 1) by recording or 2) by programming?

a) Programming. I have done complete programming only, absolutely no recording.

67. What are the two modes of recording?

a) There are 2 modes of recording in WinRunner

i. Context Sensitive recording records the operations you perform on your application by identifying Graphical User Interface (GUI) objects.

ii. Analog recording records keyboard input, mouse clicks, and the precise x- and y-coordinates traveled by the mouse pointer across the screen.
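The two modes record the same click very differently. A sketch of the contrast, with illustrative object names, track number, and event encoding:

```
# Context Sensitive mode identifies the GUI object:
set_window ("Flight Reservation", 5);
button_press ("Insert Order");

# Analog mode records raw device input instead
# (track number and event encoding are illustrative):
move_locator_track (1);          # replay the recorded mouse movement
mtype ("<T110><kLeft>-");        # mouse-button event stream
```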

68. What is a checkpoint and what are different types of checkpoints?

a) Checkpoints allow you to compare the current behavior of the application being tested to its behavior in an earlier version.

You can add four types of checkpoints to your test scripts:

i. GUI checkpoints verify information about GUI objects. For example, you can check that a button is enabled or see which item is selected in a list.

ii. Bitmap checkpoints take a “snapshot” of a window or area of your application and compare this to an image captured in an earlier version.

iii. Text checkpoints read text in GUI objects and in bitmaps and enable you to verify their contents.

iv. Database checkpoints check the contents and the number of rows and columns of a result set, which is based on a query you create on your database.

71. What is parameterizing?

a) In order for WinRunner to use data to drive the test, you must link the data to the test script which it drives. This is called parameterizing your test. The data is stored in a data table.

72. How do you maintain the document information of the test scripts?

a) Before creating a test, you can document information about the test in the General and Description tabs of the Test Properties dialog box. You can enter the name of the test author, the type of functionality tested, a detailed description of the test, and a reference to the relevant functional specifications document.

73. What do you verify with the GUI checkpoint for single property and what command it generates, explain syntax?

a) You can check a single property of a GUI object. For example, you can check whether a button is enabled or disabled or whether an item in a list is selected. To create a GUI checkpoint for a property value, use the Check Property dialog box to add one of the following functions to the test script:

i. button_check_info

ii. scroll_check_info

iii. edit_check_info

iv. static_check_info

v. list_check_info

vi. win_check_info

vii. obj_check_info

Syntax: button_check_info (button, property, property_value );

edit_check_info ( edit, property, property_value );
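Illustrative single-property checks against hypothetical Flight Reservation objects:

```
# Check one property of an object (object names are assumptions):
set_window ("Flight Reservation", 5);
button_check_info ("Insert Order", "enabled", 1);   # button is enabled
edit_check_info ("Name:", "value", "John Doe");     # edit-box content
```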

74. What do you verify with the GUI checkpoint for object/window and what command it generates, explain syntax?

a) You can create a GUI checkpoint to check a single object in the application being tested. You can either check the object with its default properties or you can specify which properties to check.

b) Creating a GUI Checkpoint using the Default Checks

i. You can create a GUI checkpoint that performs a default check on the property recommended by WinRunner. For example, if you create a GUI checkpoint that checks a push button, the default check verifies that the push button is enabled.

ii. To create a GUI checkpoint using default checks:

1. Choose Create > GUI Checkpoint > For Object/Window, or click the GUI Checkpoint for Object/Window button on the User toolbar. If you are recording in Analog mode, press the CHECK GUI FOR OBJECT/WINDOW softkey in order to avoid extraneous mouse movements. Note that you can press the CHECK GUI FOR OBJECT/WINDOW softkey in Context Sensitive mode as well. The WinRunner window is minimized, the mouse pointer becomes a pointing hand, and a help window opens on the screen.

2. Click an object.

3. WinRunner captures the current value of the property of the GUI object being checked and stores it in the test’s expected results folder. The WinRunner window is restored and a GUI checkpoint is inserted in the test script as an obj_check_gui statement.

Syntax: win_check_gui ( window, checklist, expected_results_file, time );

c) Creating a GUI Checkpoint by Specifying which Properties to Check

d) You can specify which properties to check for an object. For example, if you create a checkpoint that checks a push button, you can choose to verify that it is in focus, instead of enabled.

e) To create a GUI checkpoint by specifying which properties to check:

i. Choose Create > GUI Checkpoint > For Object/Window, or click the GUI Checkpoint for Object/Window button on the User toolbar. If you are recording in Analog mode, press the CHECK GUI FOR OBJECT/WINDOW softkey in order to avoid extraneous mouse movements. Note that you can press the CHECK GUI FOR OBJECT/WINDOW softkey in Context Sensitive mode as well. The WinRunner window is minimized, the mouse pointer becomes a pointing hand, and a help window opens on the screen.

ii. Double-click the object or window. The Check GUI dialog box opens.

iii. Click an object name in the Objects pane. The Properties pane lists all the properties for the selected object.

iv. Select the properties you want to check.

1. To edit the expected value of a property, first select it. Next, either click the Edit Expected Value button, or double-click the value in the Expected Value column to edit it.

2. To add a check in which you specify arguments, first select the property for which you want to specify arguments. Next, either click the Specify Arguments button, or double-click in the Arguments column. Note that if an ellipsis (three dots) appears in the Arguments column, then you must specify arguments for a check on this property. (You do not need to specify arguments if a default argument is specified.) When checking standard objects, you only specify arguments for certain properties of edit and static text objects. You also specify arguments for checks on certain properties of nonstandard objects.

3. To change the viewing options for the properties of an object, use the Show Properties buttons.

4. Click OK to close the Check GUI dialog box. WinRunner captures the GUI information and stores it in the test’s expected results folder. The WinRunner window is restored and a GUI checkpoint is inserted in the test script as an obj_check_gui or a win_check_gui statement.

Syntax: win_check_gui ( window, checklist, expected_results_file, time );

obj_check_gui ( object, checklist, expected_results_file, time );
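A recorded checkpoint of this kind might look as follows; the checklist ("list1.ckl") and expected-results ("gui1") names are the kind of defaults WinRunner assigns, shown here as assumptions:

```
# Check the selected properties of a (hypothetical) push button:
set_window ("Flight Reservation", 5);
obj_check_gui ("Insert Order", "list1.ckl", "gui1", 1);
```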

75. What do you verify with the GUI checkpoint for multiple objects and what command it generates, explain syntax?

a) To create a GUI checkpoint for two or more objects:

i. Choose Create > GUI Checkpoint > For Multiple Objects or click the GUI Checkpoint for Multiple Objects button on the User toolbar. If you are recording in Analog mode, press the CHECK GUI FOR MULTIPLE OBJECTS softkey in order to avoid extraneous mouse movements. The Create GUI Checkpoint dialog box opens.

ii. Click the Add button. The mouse pointer becomes a pointing hand and a help window opens.

iii. To add an object, click it once. If you click a window title bar or menu bar, a help window prompts you to check all the objects in the window.

iv. The pointing hand remains active. You can continue to choose objects by repeating step 3 above for each object you want to check.

v. Click the right mouse button to stop the selection process and to restore the mouse pointer to its original shape. The Create GUI Checkpoint dialog box reopens.

vi. The Objects pane contains the name of the window and objects included in the GUI checkpoint. To specify which objects to check, click an object name in the Objects pane. The Properties pane lists all the properties of the object. The default properties are selected.

1. To edit the expected value of a property, first select it. Next, either click the Edit Expected Value button, or double-click the value in the Expected Value column to edit it.

2. To add a check in which you specify arguments, first select the property for which you want to specify arguments. Next, either click the Specify Arguments button, or double-click in the Arguments column. Note that if an ellipsis appears in the Arguments column, then you must specify arguments for a check on this property. (You do not need to specify arguments if a default argument is specified.) When checking standard objects, you only specify arguments for certain properties of edit and static text objects. You also specify arguments for checks on certain properties of nonstandard objects.

3. To change the viewing options for the properties of an object, use the Show Properties buttons.

vii. To save the checklist and close the Create GUI Checkpoint dialog box, click OK. WinRunner captures the current property values of the selected GUI objects and stores them in the expected results folder. A win_check_gui statement is inserted in the test script.

Syntax: win_check_gui ( window, checklist, expected_results_file, time );

obj_check_gui ( object, checklist, expected_results_file, time );

76. What information is contained in the checklist file and in which file expected results are stored?

a) The checklist file contains information about the objects and the properties of the object we are verifying.

b) The gui*.chk file, stored in the exp folder, contains the expected results.

77. What do you verify with the bitmap check point for object/window and what command it generates, explain syntax?

a) You can check an object, a window, or an area of a screen in your application as a bitmap. While creating a test, you indicate what you want to check. WinRunner captures the specified bitmap, stores it in the expected results folder (exp) of the test, and inserts a checkpoint in the test script. When you run the test, WinRunner compares the bitmap currently displayed in the application being tested with the expected bitmap stored earlier. In the event of a mismatch, WinRunner captures the current actual bitmap and generates a difference bitmap. By comparing the three bitmaps (expected, actual, and difference), you can identify the nature of the discrepancy.

b) When working in Context Sensitive mode, you can capture a bitmap of a window, object, or of a specified area of a screen. WinRunner inserts a checkpoint in the test script in the form of either a win_check_bitmap or obj_check_bitmap statement.

c) Note that when you record a test in Analog mode, you should press the CHECK BITMAP OF WINDOW softkey or the CHECK BITMAP OF SCREEN AREA softkey to create a bitmap checkpoint. This prevents WinRunner from recording extraneous mouse movements. If you are programming a test, you can also use the Analog function check_window to check a bitmap.

d) To capture a window or object as a bitmap:

i. Choose Create > Bitmap Checkpoint > For Object/Window or click the Bitmap Checkpoint for Object/Window button on the User toolbar. Alternatively, if you are recording in Analog mode, press the CHECK BITMAP OF OBJECT/WINDOW softkey. The WinRunner window is minimized, the mouse pointer becomes a pointing hand, and a help window opens.

ii. Point to the object or window and click it. WinRunner captures the bitmap and generates a win_check_bitmap or obj_check_bitmap statement in the script. The TSL statement generated for a window bitmap has the following syntax:

win_check_bitmap ( object, bitmap, time );

iii. For an object bitmap, the syntax is:

obj_check_bitmap ( object, bitmap, time );

iv. For example, when you click the title bar of the main window of the Flight Reservation application, the resulting statement might be:

win_check_bitmap (“Flight Reservation”, “Img2”, 1);

v. However, if you click the Date of Flight box in the same window, the statement might be:

obj_check_bitmap (“Date of Flight:”, “Img1”, 1);

Syntax: obj_check_bitmap ( object, bitmap, time [, x, y, width, height] );

78. What do you verify with the bitmap checkpoint for screen area and what command it generates, explain syntax?

a) You can define any rectangular area of the screen and capture it as a bitmap for comparison. The area can be any size: it can be part of a single window, or it can intersect several windows. The rectangle is identified by the coordinates of its upper left and lower right corners, relative to the upper left corner of the window in which the area is located. If the area intersects several windows or is part of a window with no title (for example, a popup window), its coordinates are relative to the entire screen (the root window).

b) To capture an area of the screen as a bitmap:

i. Choose Create > Bitmap Checkpoint > For Screen Area or click the Bitmap Checkpoint for Screen Area button. Alternatively, if you are recording in Analog mode, press the CHECK BITMAP OF SCREEN AREA softkey. The WinRunner window is minimized, the mouse pointer becomes a crosshairs pointer, and a help window opens.

ii. Mark the area to be captured: press the left mouse button and drag the mouse pointer until a rectangle encloses the area; then release the mouse button.

iii. Press the right mouse button to complete the operation. WinRunner captures the area and generates a win_check_bitmap statement in your script.

iv. The win_check_bitmap statement for an area of the screen has the following syntax:

win_check_bitmap ( window, bitmap, time, x, y, width, height );
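For example, a checkpoint on a 100x50-pixel area whose upper-left corner is at (10, 10), relative to a hypothetical window, might read:

```
# "Img3" is the name WinRunner assigns to the stored bitmap (assumed):
win_check_bitmap ("Flight Reservation", "Img3", 1, 10, 10, 100, 50);
```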

79. What do you verify with the database checkpoint default and what command it generates, explain syntax?

a) By adding runtime database record checkpoints you can compare the information in your application during a test run with the corresponding record in your database. By adding standard database checkpoints to your test scripts, you can check the contents of databases in different versions of your application.

b) When you create database checkpoints, you define a query on your database, and your database checkpoint checks the values contained in the result set. The result set is set of values retrieved from the results of the query.

c) You can create runtime database record checkpoints in order to compare the values displayed in your application during the test run with the corresponding values in the database.

d) If the comparison does not meet the success criteria you specify for the checkpoint, the checkpoint fails. You can define a successful runtime database record checkpoint as one where one or more matching records were found, exactly one matching record was found, or no matching records were found.

e) You can create standard database checkpoints to compare the current values of the properties of the result set during the test run to the expected values captured during recording or otherwise set before the test run. If the expected results and the current results do not match, the database checkpoint fails. Standard database checkpoints are useful when the expected results can be established before the test run.

Syntax: db_check ( checklist_file, expected_results_file [, max_rows [, parameter_array]] );

f) You can add a runtime database record checkpoint to your test in order to compare information that appears in your application during a test run with the current value(s) in the corresponding record(s) in your database. You add runtime database record checkpoints by running the Runtime Record Checkpoint wizard. When you are finished, the wizard inserts the appropriate db_record_check statement into your script.

Syntax:

db_record_check(ChecklistFileName,SuccessConditions,RecordNumber );

ChecklistFileName A file created by WinRunner and saved in the test’s checklist folder. The file contains information about the data to be captured during the test run and its corresponding field in the database. The file is created based on the information entered in the Runtime Record Verification wizard.

SuccessConditions Contains one of the following values:

1. DVR_ONE_OR_MORE_MATCH – The checkpoint passes if one or more matching database records are found.

2. DVR_ONE_MATCH – The checkpoint passes if exactly one matching database record is found.

3. DVR_NO_MATCH – The checkpoint passes if no matching database records are found.

RecordNumber An out parameter returning the number of records in the database.
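A wizard-generated statement might therefore look like this; the checklist file name is an assumption, and record_num receives the record count:

```
# Pass if at least one database record matches the application data:
db_record_check ("list1.cvr", DVR_ONE_OR_MORE_MATCH, record_num);
report_msg ("matching records found: " & record_num);
```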

80. How do you handle dynamically changing area of the window in the bitmap checkpoints?

a) The “Difference between bitmaps” option in the Run tab of the General Options dialog box defines the minimum number of pixels that constitutes a bitmap mismatch.

81. What do you verify with the database check point custom and what command it generates, explain syntax?

a) When you create a custom check on a database, you create a standard database checkpoint in which you can specify which properties to check on a result set.

b) You can create a custom check on a database in order to:

i. check the contents of part or all of the result set

ii. edit the expected results of the contents of the result set

iii. count the rows in the result set

iv. count the columns in the result set

c) You can create a custom check on a database using ODBC, Microsoft Query or Data Junction.

82. What do you verify with the sync point for object/window property and what command it generates, explain syntax?

a) Synchronization compensates for inconsistencies in the performance of your application during a test run. By inserting a synchronization point in your test script, you can instruct WinRunner to suspend the test run and wait for a cue before continuing the test.

b) You can create a synchronization point that instructs WinRunner to wait for a specified object or window to appear. For example, you can tell WinRunner to wait for a window to open before performing an operation within that window, or you may want WinRunner to wait for an object to appear in order to perform an operation on that object.

c) You use the obj_exists function to create an object synchronization point, and you use the win_exists function to create a window synchronization point. These functions have the following syntax:

Syntax:

obj_exists ( object [, time ] );

win_exists ( window [, time ] );
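A typical use, with a hypothetical window name; win_exists returns E_OK when the window is found within the timeout:

```
# Wait up to 10 seconds for the window before working in it:
if (win_exists ("Print Preview", 10) == E_OK)
    set_window ("Print Preview", 1);
else
    report_msg ("Print Preview window did not appear in time");
```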

83. What do you verify with the sync point for object/window bitmap and what command it generates, explain syntax?

a) You can create a bitmap synchronization point that waits for the bitmap of an object or a window to appear in the application being tested.

b) During a test run, WinRunner suspends test execution until the specified bitmap is redrawn, and then compares the current bitmap with the expected one captured earlier. If the bitmaps match, then WinRunner continues the test.

Syntax:

obj_wait_bitmap ( object, image, time );

win_wait_bitmap ( window, image, time );

84. What do you verify with the sync point for screen area and what command it generates, explain syntax?

a) For screen area verification we actually capture the screen area into a bitmap and verify the application screen area against the bitmap file during execution.

Syntax: obj_wait_bitmap(object, image, time, x, y, width, height);

85. How do you edit checklist file and when do you need to edit the checklist file?

a) WinRunner has an edit checklist file option under the Create menu. Select “Edit GUI Checklist” to modify a GUI checklist file and “Edit Database Checklist” to edit a database checklist file. This brings up a dialog box that gives you the option to select the checklist file to modify. There is also an option to select the scope of the checklist file: whether it is test-specific or shared. Select the checklist file and click OK, which opens the window to edit the properties of the objects.

86. How do you edit the expected value of an object?

a) We can modify the expected value of the object by executing the script in Update mode. We can also manually edit the gui*.chk file in the exp folder, which contains the expected values.

87. How do you modify the expected results of a GUI checkpoint?

a) We can modify the expected results of a GUI checkpoint by running the script containing the checkpoint in Update mode.

88. How do you handle ActiveX and Visual basic objects?

a) WinRunner provides add-ins for ActiveX and Visual Basic objects. When loading WinRunner, select those add-ins; they provide a set of functions to work on ActiveX and VB objects.

89. How do you create ODBC query?

a) We can create an ODBC query using the database checkpoint wizard. It provides an option to create an SQL file that uses an ODBC DSN to connect to the database. The SQL file contains the connection string and the SQL statement.

90. How do you record a data driven test?

a) We can create a data-driven test using data from a flat file, a data table, or a database.

i. Flat file: we store the data in the required format in the file, open the file using the file manipulation commands, read data from it, and assign the data to variables.

ii. Data table: an Excel file in which we store test data and manipulate it using the ‘ddt_*’ functions.

iii. Database: we store test data in a database and access it using the ‘db_*’ functions.
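A flat-file sketch of the first approach, with an assumed path, window, and field; each line of the file supplies one value:

```
file = "c:\\testdata\\names.txt";          # assumed data file
if (file_open (file, FO_MODE_READ) != E_OK)
    pause ("cannot open data file");
while (file_getline (file, line) == E_OK)  # read until end of file
{
    set_window ("Login", 5);               # hypothetical window
    edit_set ("Name:", line);              # drive the field with file data
}
file_close (file);
```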

91. How do you convert a database file to a text file?

a) You can use Data Junction to create a conversion file which converts a database to a target text file.

92. How do you parameterize database check points?

a) When you create a standard database checkpoint using ODBC (Microsoft Query), you can add parameters to an SQL statement to parameterize the checkpoint. This is useful if you want to create a database checkpoint with a query in which the SQL statement defining your query changes.

93. How do you create parameterize SQL commands?

a) A parameterized query is a query in which at least one of the fields of the WHERE clause is parameterized, i.e., the value of the field is specified by a question mark symbol ( ? ). For example, the following SQL statement is based on a query on the database in the sample Flight Reservation application:


i. SELECT Flights.Departure, Flights.Flight_Number, Flights.Day_Of_Week FROM Flights Flights WHERE (Flights.Departure=?) AND (Flights.Day_Of_Week=?)

SELECT defines the columns to include in the query.

FROM specifies the path of the database.

WHERE (optional) specifies the conditions, or filters to use in the query.

Departure is the parameter that represents the departure point of a flight.

Day_Of_Week is the parameter that represents the day of the week of a flight.

b) When creating a database checkpoint, you insert a db_check statement into your test script. When you parameterize the SQL statement in your checkpoint, the db_check function has a fourth, optional, argument: the parameter_array argument. A statement similar to the following is inserted into your test script:

db_check(“list1.cdl”, “dbvf1”, NO_LIMIT, dbvf1_params);

The parameter_array argument will contain the values to substitute for the parameters in the parameterized checkpoint.
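TSL arrays are associative, so the parameter array can be filled by index before the checkpoint runs; the values below are illustrative substitutes for the two “?” placeholders in the query above:

```
dbvf1_params[1] = "Denver";   # value for Flights.Departure = ?
dbvf1_params[2] = "Monday";   # value for Flights.Day_Of_Week = ?
db_check ("list1.cdl", "dbvf1", NO_LIMIT, dbvf1_params);
```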

94. Explain the following commands:

a) db_connect

to connect to a database

db_connect ( session_name, connection_string );

b) db_execute_query

to execute a query

db_execute_query ( session_name, SQL, record_number );

record_number is an output parameter returning the number of records in the result set.

c) db_get_field_value

returns the value of a single field in the specified row_index and column in the session_name database session.

db_get_field_value ( session_name, row_index, column );

d) db_get_headers

returns the number of column headers in a query and the content of the column headers, concatenated and delimited by tabs.

db_get_headers ( session_name, header_count, header_content );

e) db_get_row

returns the content of the row, concatenated and delimited by tabs.

db_get_row ( session_name, row_index, row_content );

f) db_write_records

writes the record set into a text file delimited by tabs.

db_write_records ( session_name, output_file [ , headers [ , record_limit ] ] );

g) db_get_last_error

returns the last error message of the last ODBC or Data Junction operation in the session_name database session.

db_get_last_error ( session_name, error );

h) db_disconnect

disconnects from the database and ends the database session.

db_disconnect ( session_name );

i) db_dj_convert

runs the djs_file Data Junction export file. When you run this file, the Data Junction Engine converts data from one spoke (source) to another (target). The optional parameters enable you to override the settings in the Data Junction export file.

db_dj_convert ( djs_file [ , output_file [ , headers [ , record_limit ] ] ] );
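A sketch of a full session tying these functions together; the session name, DSN, table, and the "#0" first-row index format are assumptions:

```
db_connect ("query1", "DSN=Flight32");
db_execute_query ("query1", "SELECT * FROM Orders", rec_count);
if (rec_count > 0)
    num = db_get_field_value ("query1", "#0", "Order_Number");  # first row
db_disconnect ("query1");
```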

95. What check points you will use to read and check text on the GUI and explain its syntax?

a) You can use text checkpoints in your test scripts to read and check text in GUI objects and in areas of the screen. While creating a test you point to an object or a window containing text. WinRunner reads the text and writes a TSL statement to the test script. You may then add simple programming elements to your test scripts to verify the contents of the text.

b) You can use a text checkpoint to:

i. Read text from a GUI object or window in your application, using obj_get_text and win_get_text

ii. Search for text in an object or window, using win_find_text and obj_find_text

iii. Move the mouse pointer to text in an object or window, using obj_move_locator_text and win_move_locator_text

iv. Click on text in an object or window, using obj_click_on_text and win_click_on_text

96. Explain Get Text checkpoint from object/window with syntax?

a) We use the obj_get_text ( object, out_text ) function to get the text from an object.

b) We use win_get_text (window, out_text [, x1, y1, x2, y2]) function to get the text from a window.

97. Explain Get Text checkpoint from screen area with syntax?

a) We use win_get_text (window, out_text [, x1, y1, x2, y2]) function to get the text from a window.
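For example, reading a hypothetical screen region and verifying what it contains:

```
# Coordinates and expected text are illustrative:
win_get_text ("Flight Reservation", total, 50, 200, 150, 220);
if (total != "Total: 100")
    report_msg ("unexpected text: " & total);
```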

98. Explain Get Text checkpoint from selection (web only) with syntax?

a) Returns a text string from an object.

web_obj_get_text (object, table_row, table_column, out_text [, text_before, text_after, index]);

i. object The logical name of the object.

ii. table_row If the object is a table, it specifies the location of the row within a table. The string is preceded by the # character.

iii. table_column If the object is a table, it specifies the location of the column within a table. The string is preceded by the # character.

iv. out_text The output variable that stores the text string.

v. text_before Defines the start of the search area for a particular text string.

vi. text_after Defines the end of the search area for a particular text string.

vii. index The occurrence number to locate. (The default is 1.)

99. Explain Get Text checkpoint web text checkpoint with syntax?

a) We use web_obj_text_exists function for web text checkpoints.

web_obj_text_exists ( object, table_row, table_column, text_to_find [, text_before, text_after] );

a. object The logical name of the object to search.

b. table_row If the object is a table, it specifies the location of the row within a table. The string is preceded by the character #.

c. table_column If the object is a table, it specifies the location of the column within a table. The string is preceded by the character #.

d. text_to_find The string that is searched for.

e. text_before Defines the start of the search area for a particular text string.

f. text_after Defines the end of the search area for a particular text string.

100. Which TSL functions you will use for

a) Searching text on the window

i. find_text ( string, out_coord_array, search_area [, string_def ] );

string The string that is searched for. The string must be complete, contain no spaces, and it must be preceded and followed by a space outside the quotation marks. To specify a literal, case-sensitive string, enclose the string in quotation marks. Alternatively, you can specify the name of a string variable. In this case, the string variable can include a regular expression.

out_coord_array The name of the array that stores the screen coordinates of the text (see explanation below).

search_area The area to search, specified as coordinates x1,y1,x2,y2. These define any two diagonal corners of a rectangle. The interpreter searches for the text in the area defined by the rectangle.

string_def Defines the type of search to perform. If no value is specified, (0 or FALSE, the default), the search is for a single complete word only. When 1, or TRUE, is specified, the search is not restricted to a single, complete word.

b) getting the location of the text string

i. win_find_text ( window, string, result_array [, search_area [, string_def ] ] );

window The logical name of the window to search.

string The text to locate. To specify a literal, case sensitive string, enclose the string in quotation marks. Alternatively, you can specify the name of a string variable. The value of the string variable can include a regular expression. The regular expression should not include an exclamation mark (!), however, which is treated as a literal character. For more information regarding Regular Expressions, refer to the “Using Regular Expressions” chapter in your User’s Guide.

result_array The name of the output variable that stores the location of the string as a four-element array.

search_area The region of the object to search, relative to the window. This area is defined as a pair of coordinates, with x1,y1,x2,y2 specifying any two diagonally opposite corners of the rectangular search region. If this parameter is not defined, then the entire window is considered the search area.

string_def Defines how the text search is performed. If no string_def is specified, (0 or FALSE, the default parameter), the interpreter searches for a complete word only. If 1, or TRUE, is specified, the search is not restricted to a single, complete word.

c) Moving the pointer to that text string

i. win_move_locator_text (window, string [ ,search_area [ ,string_def ] ] );

window The logical name of the window.

string The text to locate. To specify a literal, case sensitive string, enclose the string in quotation marks. Alternatively, you can specify the name of a string variable. The value of the string variable can include a regular expression (the regular expression need not begin with an exclamation mark).

search_area The region of the object to search, relative to the window. This area is defined as a pair of coordinates, with x1, y1, x2, y2 specifying any two diagonally opposite corners of the rectangular search region. If this parameter is not defined, then the entire window specified is considered the search area.

string_def Defines how the text search is performed. If no string_def is specified, (0 or FALSE, the default parameter), the interpreter searches for a complete word only. If 1, or TRUE, is specified, the search is not restricted to a single, complete word.

d) Comparing the text

i. compare_text (str1, str2 [, chars1, chars2]);

str1, str2 The two strings to be compared.

chars1 One or more characters in the first string.

chars2 One or more characters in the second string. These characters are substituted for those in chars1.
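A sketch assuming compare_text returns a true value when the strings match under the substitution:

```
# Treat "B" in the first string as equivalent to "b" in the second:
if (compare_text ("Buy", "buy", "B", "b"))
    report_msg ("strings match");
```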

101. What are the steps of creating a data driven test?

a) The steps involved in data driven testing are:

i. Creating a test

ii. Converting to a data-driven test and preparing a database

iii. Running the test

iv. Analyzing the test results.
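The steps above typically reduce to the loop the DataDriver Wizard generates; the table path, window, and column name are assumptions:

```
table = "default.xls";                     # data table created by the wizard
rc = ddt_open (table, DDT_MODE_READ);
if (rc != E_OK && rc != E_FILE_OPEN)       # E_FILE_OPEN: table already open
    pause ("cannot open data table");
ddt_get_row_count (table, table_RowCount);
for (table_Row = 1; table_Row <= table_RowCount; table_Row++)
{
    ddt_set_row (table, table_Row);        # make this row current
    set_window ("Login", 5);
    edit_set ("Name:", ddt_val (table, "Name"));
}
ddt_close (table);
```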

102. How do you record a data-driven test script using the DataDriver Wizard?

a) You can use the DataDriver Wizard to convert your entire script or a part of your script into a data-driven test. For example, your test script may include recorded operations, checkpoints, and other statements that do not need to be repeated for multiple sets of data. You need to parameterize only the portion of your test script that you want to run in a loop with multiple sets of data.

To create a data-driven test:

i. If you want to turn only part of your test script into a data-driven test, first select those lines in the test script.

ii. Choose Tools > DataDriver Wizard.

iii. If you want to turn only part of the test into a data-driven test, click Cancel. Select those lines in the test script and reopen the DataDriver Wizard. If you want to turn the entire test into a data-driven test, click Next.

iv. The Use a new or existing Excel table box displays the name of the Excel file that WinRunner creates to store the data for the data-driven test. Accept the default data table for this test, enter a different name for the data table, or use the browse button to locate the path of an existing data table.

v. By default, the data table is stored in the test folder.

vi. In the Assign a name to the variable box, enter a variable name with which to refer to the data table, or accept the default name, “table.”

vii. At the beginning of a data-driven test, the Excel data table you selected is assigned as the value of the table variable. Throughout the script, only the table variable name is used.

viii. This makes it easy for you to assign a different data table to the script at a later time without making changes throughout the script.

ix. Choose from among the following options:

1. Add statements to create a data-driven test: Automatically adds statements to run your test in a loop: sets a variable name by which to refer to the data table; adds braces ({ and }), a for statement, and a ddt_get_row_count statement to your test script selection to run it in a loop while it reads from the data table.

2. This option also adds ddt_open and ddt_close statements to your test script to open and close the data table, which are necessary in order to iterate over the rows in the table. Note that you can also add these statements to your test script manually.

3. If you do not choose this option, you will receive a warning that your data-driven test must contain a loop and statements to open and close your data table.

4. Import data from a database: Imports data from a database. This option adds ddt_update_from_db, and ddt_save statements to your test script after the ddt_open statement.

5. Note that in order to import data from a database, either Microsoft Query or Data Junction must be installed on your machine. You can install Microsoft Query from the custom installation of Microsoft Office. Note that Data Junction is not automatically included in your WinRunner package. To purchase Data Junction, contact your Mercury Interactive representative. For detailed information on working with Data Junction, refer to the documentation in the Data Junction package.

6. Parameterize the test: Replaces fixed values in selected checkpoints and in recorded statements with parameters, using the ddt_val function, and in the data table, adds columns with variable values for the parameters.

Line by line: Opens a wizard screen for each line of the selected test script, which enables you to decide whether to parameterize a particular line, and if so, whether to add a new column to the data table or use an existing column when parameterizing data.

7. Automatically: Replaces all data with ddt_val statements and adds new columns to the data table. The first argument of the function is the name of the column in the data table. The replaced data is inserted into the table.

x. The Test script line to parameterize box displays the line of the test script to parameterize. The highlighted value can be replaced by a parameter. The Argument to be replaced box displays the argument (value) that you can replace with a parameter. You can use the arrows to select a different argument to replace.

Choose whether and how to replace the selected data:

1. Do not replace this data: Does not parameterize this data.

2. An existing column: If parameters already exist in the data table for this test, select an existing parameter from the list.

3. A new column: Creates a new column for this parameter in the data table for this test and adds the selected data to this column of the data table. The default name for the new parameter is the logical name of the object in the selected TSL statement above. Accept this name or assign a new name.

xi. The final screen of the wizard opens.

1. If you want the data table to open after you close the wizard, select Show data table now.

2. To perform the tasks specified in previous screens and close the wizard, click Finish.

3. To close the wizard without making any changes to the test script, click Cancel.


103. What are the three modes of running the scripts?

a) WinRunner provides three modes in which to run tests—Verify, Debug, and Update. You use each mode during a different phase of the testing process.

i. Verify

1. Use the Verify mode to check your application.

ii. Debug

1. Use the Debug mode to help you identify bugs in a test script.

iii. Update

1. Use the Update mode to update the expected results of a test or to create a new expected results folder.

104. Explain the following TSL functions:

a) Ddt_open

Creates or opens a datatable file so that WinRunner can access it.

Syntax: ddt_open ( data_table_name, mode );

data_table_name The name of the data table. The name may be the table variable name, the Microsoft Excel file or a tabbed text file name, or the full path and file name of the table. The first row in the file contains the names of the parameters. This row is labeled row 0.

mode The mode for opening the data table: DDT_MODE_READ (read-only) or DDT_MODE_READWRITE (read or write).

b) Ddt_save

Saves the information into a data file.

Syntax: ddt_save (data_table_name);

data_table_name The name of the data table. The name may be the table variable name, the Microsoft Excel file or a tabbed text file name, or the full path and file name of the table.

c) Ddt_close

Closes a data table file

Syntax: ddt_close ( data_table_name );

data_table_name The name of the data table. The data table is a Microsoft Excel file or a tabbed text file. The first row in the file contains the names of the parameters.

d) Ddt_export

Exports the information of one data table file into a different data table file.

Syntax: ddt_export (data_table_name1, data_table_name2);

data_table_name1 The source data table filename.

data_table_name2 The destination data table filename.

e) Ddt_show

Shows or hides the table editor of a specified data table.

Syntax: ddt_show (data_table_name [, show_flag]);

data_table_name The name of the data table. The name may be the table variable name, the Microsoft Excel file or a tabbed text file name, or the full path and file name of the table.

show_flag The value indicating whether the editor should be shown (default=1) or hidden (0).

f) Ddt_get_row_count

Retrieves the number of rows in a data table.

Syntax: ddt_get_row_count (data_table_name, out_rows_count);

data_table_name The name of the data table. The name may be the table variable name, the Microsoft Excel file or a tabbed text file name, or the full path and file name of the table. The first row in the file contains the names of the parameters.

out_rows_count The output variable that stores the total number of rows in the data table.

g) ddt_next_row

Changes the active row in a data table to the next row.

Syntax: ddt_next_row (data_table_name);

data_table_name The name of the data table. The name may be the table variable name, the Microsoft Excel file or a tabbed text file name, or the full path and file name of the table. The first row in the file contains the names of the parameters.

h) ddt_set_row

Sets the active row in a data table.

Syntax: ddt_set_row (data_table_name, row);

data_table_name The name of the data table. The name may be the table variable name, the Microsoft Excel file or a tabbed text file name, or the full path and file name of the table. The first row in the file contains the names of the parameters. This row is labeled row 0.

row The new active row in the data table.

i) ddt_set_val

Sets a value in the current row of the data table

Syntax: ddt_set_val (data_table_name, parameter, value);

data_table_name The name of the data table. The name may be the table variable name, the Microsoft Excel file or a tabbed text file name, or the full path and file name of the table. The first row in the file contains the names of the parameters. This row is labeled row 0.

parameter The name of the column into which the value will be inserted.

value The value to be written into the table.

j) ddt_set_val_by_row

Sets a value in a specified row of the data table.

Syntax: ddt_set_val_by_row (data_table_name, row, parameter, value);

data_table_name The name of the data table. The name may be the table variable name, the Microsoft Excel file or a tabbed text file name, or the full path and file name of the table. The first row in the file contains the names of the parameters. This row is labeled row 0.

row The row number in the table. It can be any existing row or the current row number plus 1, which will add a new row to the data table.

parameter The name of the column into which the value will be inserted.

value The value to be written into the table.

k) ddt_get_current_row

Retrieves the active row of a data table.

Syntax: ddt_get_current_row ( data_table_name, out_row );

data_table_name The name of the data table. The name may be the table variable name, the Microsoft Excel file or a tabbed text file name, or the full path and file name of the table. The first row in the file contains the names of the parameters. This row is labeled row 0.

out_row The output variable that stores the active row in the data table.

l) ddt_is_parameter

Checks whether a parameter in a data table is valid.

Syntax: ddt_is_parameter (data_table_name, parameter);

data_table_name The name of the data table. The name may be the table variable name, the Microsoft Excel file or a tabbed text file name, or the full path and file name of the table. The first row in the file contains the names of the parameters.

parameter The parameter name to check in the data table.

m) ddt_get_parameters

Returns a list of all parameters in a data table.

Syntax: ddt_get_parameters ( table, params_list, params_num );

table The pathname of the data table.

params_list This out parameter returns the list of all parameters in the data table, separated by tabs.

params_num This out parameter returns the number of parameters in params_list.

n) ddt_val

Returns the value of a parameter in the active row in a data table.

Syntax: ddt_val (data_table_name, parameter);

data_table_name The name of the data table. The name may be the table variable name, the Microsoft Excel file or a tabbed text file name, or the full path and file name of the table. The first row in the file contains the names of the parameters.

parameter The name of the parameter in the data table.

o) ddt_val_by_row

Returns the value of a parameter in the specified row in a data table.

Syntax: ddt_val_by_row ( data_table_name, row_number, parameter );

data_table_name The name of the data table. The name may be the table variable name, the Microsoft Excel file or a tabbed text file name, or the full path and file name of the table. The first row in the file contains the names of the parameters. This row is labeled row 0.

row_number The number of the row in the data table.

parameter The name of the parameter in the data table.

p) ddt_report_row

Reports the active row in a data table to the test results

Syntax: ddt_report_row (data_table_name);

data_table_name The name of the data table. The name may be the table variable name, the Microsoft Excel file or a tabbed text file name, or the full path and file name of the table. The first row in the file contains the names of the parameters. This row is labeled row 0.

q) ddt_update_from_db

Imports data from a database into a data table. This function is inserted into your test script when you select the Import data from a database option in the DataDriver Wizard. When you run your test, it updates the data table with data from the database.

105. How do you handle unexpected events and errors?

a) WinRunner uses exception handling to detect an unexpected event when it occurs and act to recover the test run.

Define Exception Handling

Define Exception

Define Handler Function

WinRunner enables you to handle the following types of exceptions:

Pop-up exceptions: Instruct WinRunner to detect and handle the appearance of a specific window.

TSL exceptions: Instruct WinRunner to detect and handle TSL functions that return a specific error code.

Object exceptions: Instruct WinRunner to detect and handle a change in a property for a specific GUI object.

Web exceptions: When the WebTest add-in is loaded, you can instruct WinRunner to handle unexpected events and errors that occur in your Web site during a test run.

106. How do you handle pop-up exceptions?

a) A pop-up exception handler handles the pop-up messages that come up during the execution of the script in the AUT. To handle this type of exception, we make WinRunner learn the window and also specify a handler for the exception. The handler can be:

i. Default actions: WinRunner clicks the OK or Cancel button in the pop-up window, or presses Enter on the keyboard. To select a default handler, click the appropriate button in the dialog box.

ii. User-defined handler: If you prefer, specify the name of your own handler. Click User Defined Function Name and type in a name in the User Defined Function Name box.
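As a sketch, a user-defined pop-up handler is an ordinary TSL function that receives the window and dismisses it; the function name and the OK button are hypothetical, not from any particular application:

```tsl
# Hypothetical handler for an unexpected pop-up window: bring the window
# into focus and dismiss it so the test run can continue.
function dismiss_popup (win)
{
    if (win_exists (win, 1) == E_OK)
    {
        set_window (win, 1);
        button_press ("OK");
    }
    return (E_OK);
}
```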

107. How do you handle TSL exceptions?

a) A TSL exception enables you to detect and respond to a specific error code returned during test execution.

b) Suppose you are running a batch test on an unstable version of your application. If your application crashes, you want WinRunner to recover test execution. A TSL exception can instruct WinRunner to recover test execution by exiting the current test, restarting the application, and continuing with the next test in the batch.

c) The handler function is responsible for recovering test execution. When WinRunner detects a specific error code, it calls the handler function. You implement this function to respond to the unexpected error in the way that meets your specific testing needs.

d) Once you have defined the exception, WinRunner activates handling and adds the exception to the list of default TSL exceptions in the Exceptions dialog box. Default TSL exceptions are defined by the XR_EXCP_TSL configuration parameter in the wrun.ini configuration file.
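A minimal sketch of such a handler, with an assumed registration call (verify the exact define_tsl_exception signature against your WinRunner version; the application name and the trapped error code are illustrative):

```tsl
# Recovery handler called when a TSL function returns the trapped error code.
function recover_app (func, rc)
{
    # Restart the crashed application; "flight1a.exe" is illustrative.
    invoke_application ("flight1a.exe", "", "", SW_SHOW);
}

# Assumed registration: exception name, handler name, error code to trap.
define_tsl_exception ("recover1", "recover_app", E_NOT_FOUND);
```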

108. How do you handle object exceptions?

a) During testing, unexpected changes can occur to GUI objects in the application you are testing. These changes are often subtle but they can disrupt the test run and distort results.

b) You could use exception handling to detect a change in property of the GUI object during the test run, and to recover test execution by calling a handler function and continue with the test execution

109. How do you comment your script?

a) We comment a script or line of script by inserting a ‘#’ at the beginning of the line.

110. What is a compile module?

a) A compiled module is a script containing a library of user-defined functions that you want to call frequently from other tests. When you load a compiled module, its functions are automatically compiled and remain in memory. You can call them directly from within any test.

b) Compiled modules can improve the organization and performance of your tests. Since you debug compiled modules before using them, your tests will require less error-checking. In addition, calling a function that is already compiled is significantly faster than interpreting a function in a test script.
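For example, a compiled module might hold a small utility function such as the hypothetical safe_set_window below; once the module is loaded, any test can call it directly:

```tsl
# Contents of a compiled module (a script saved with the "Compiled Module"
# property); the function name is illustrative.
public function safe_set_window (win, timeout)
{
    if (win_exists (win, timeout) == E_OK)
        return (set_window (win, timeout));
    return (-1);  # window never appeared
}
```

A test would first load the module, e.g. load ("C:\\MyAppFolder\\my_utils");, and could then call safe_set_window ("Open", 5); directly.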

111. What is the difference between script and compile module?

a) A test script contains executable TSL code in WinRunner, while a compiled module is used to store reusable functions. Compiled modules are not executable on their own.

b) WinRunner performs a pre-compilation automatically when it saves a module assigned a property value of “Compiled Module”.

c) By default, modules containing TSL code have a property value of “main”. Main modules are called for execution from within other modules. Main modules are dynamically compiled into machine code only when WinRunner recognizes a “call” statement. Example of a call for the “app_init” script:

call cso_init();

call( “C:\\MyAppFolder\\” & “app_init” );

d) Compiled modules are loaded into memory to be referenced from TSL code in any module. Example of a load statement:

reload (“C:\\MyAppFolder\\” & “flt_lib”);

or

load (“C:\\MyAppFolder\\” & “flt_lib”);

112. Write and explain various loop command?

a) A for loop instructs WinRunner to execute one or more statements a specified number of times.

It has the following syntax:

for ( [ expression1 ]; [ expression2 ]; [ expression3 ] ) statement

i. First, expression1 is executed. Next, expression2 is evaluated. If expression2 is true, statement is executed and expression3 is executed. The cycle is repeated as long as expression2 remains true. If expression2 is false, the for statement terminates and execution passes to the first statement immediately following.

ii. For example, the for loop below selects the file UI_TEST from the File Name list

iii. in the Open window. It selects this file five times and then stops.

set_window ("Open");

for (i = 0; i < 5; i++)

list_select_item ("File Name:_1", "UI_TEST"); # Item Number 2

b) A while loop executes a block of statements for as long as a specified condition is true.

It has the following syntax:

while ( expression )

statement ;

i. While expression is true, the statement is executed. The loop ends when the expression is false. For example, the while statement below performs the same function as the for loop above.

set_window (“Open”);

i=0;

while (i<5){

i++;

list_select_item (“File Name:_1”, “UI_TEST”); # Item Number 2

}

c) A do/while loop executes a block of statements for as long as a specified condition is true. Unlike the for loop and while loop, a do/while loop tests the conditions at the end of the loop, not at the beginning.

A do/while loop has the following syntax:

do

statement

while (expression);

i. The statement is executed and then the expression is evaluated. If the expression is true, then the cycle is repeated. If the expression is false, the cycle is not repeated.

ii. For example, the do/while statement below opens and closes the Order dialog box of Flight Reservation five times.

set_window (“Flight Reservation”);

i=0;

do

{

menu_select_item (“File;Open Order…”);

set_window (“Open Order”);

button_press (“Cancel”);

i++;

}

while (i<5);

113. Write and explain decision making command?

a) You can incorporate decision-making into your test scripts using if/else or switch statements.

i. An if/else statement executes a statement if a condition is true; otherwise, it executes another statement.

It has the following syntax:

if ( expression )

statement1;

[ else

statement2; ]


expression is evaluated. If expression is true, statement1 is executed. If expression is false, statement2 is executed.
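For example, a sketch that branches on whether a window exists (the window name is illustrative):

```tsl
# Branch on the existence of a window.
if (win_exists ("Login", 1) == E_OK)
    report_msg ("Login window found.");
else
    report_msg ("Login window is missing.");
```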

b) A switch statement enables WinRunner to make a decision based on an expression that can have more than two values.

It has the following syntax:

switch (expression )

{

case case_1: statements

case case_2: statements

case case_n: statements

default: statement(s)

}

The switch statement consecutively evaluates each case expression until one is found that equals the initial expression. If no case is equal to the expression, then the default statements are executed. The default statements are optional.

114. Write and explain switch command?

a) A switch statement enables WinRunner to make a decision based on an expression that can have more than two values.

It has the following syntax:

switch (expression )

{

case case_1: statements

case case_2: statements

case case_n: statements

default: statement(s)

}

b) The switch statement consecutively evaluates each case expression until one is found that equals the initial expression. If no case is equal to the expression, then the default statements are executed. The default statements are optional.
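As an illustration, a switch on a function's return code (the codes and messages are examples only; as in C, each case should end with break to prevent falling through to the next case):

```tsl
# Illustrative switch on a return code stored in rc.
switch (rc)
{
    case E_OK:
        report_msg ("Operation succeeded.");
        break;
    case E_NOT_FOUND:
        report_msg ("Object not found.");
        break;
    default:
        report_msg ("Unexpected return code: " & rc);
}
```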

115. How do you write messages to the report?

a) To write a message to the test results report, we use the report_msg statement.

Syntax: report_msg (message);

116. What is a command to invoke application?

a) invoke_application is the function used to invoke an application.

Syntax: invoke_application(file, command_option, working_dir, SHOW);

117. What is the purpose of tl_step command?

a) Used to determine whether sections of a test pass or fail.

Syntax: tl_step(step_name, status, description);
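For example (the step name and window are illustrative; a status of 0 reports a pass, any other value a fail):

```tsl
# Report a pass or fail step to the test results.
if (win_exists ("Flight Reservation", 5) == E_OK)
    tl_step ("check_main_window", 0, "Main window opened.");
else
    tl_step ("check_main_window", 1, "Main window did not open.");
```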

118. Which TSL function you will use to compare two files?

a) We can compare 2 files in WinRunner using the file_compare function.

Syntax: file_compare (file1, file2 [, save file]);

119. What is the use of function generator?

a) The Function Generator provides a quick, error-free way to program scripts. You can:

i. Add Context Sensitive functions that perform operations on a GUI object or get information from the application being tested.

ii. Add Standard and Analog functions that perform non-Context Sensitive tasks such as synchronizing test execution or sending user-defined messages to a report.

iii. Add Customization functions that enable you to modify WinRunner to suit your testing environment.

120. What is the use of putting call and call_close statements in the test script?

a) You can use two types of call statements to invoke one test from another:

i. A call statement invokes a test from within another test.

ii. A call_close statement invokes a test from within a script and closes the test when the test is completed.

iii. The call statement has the following syntax:

1. call test_name ( [ parameter1, parameter2, …parametern ] );

iv. The call_close statement has the following syntax:

1. call_close test_name ( [ parameter1, parameter2, … parametern ] );

v. The test_name is the name of the test to invoke. The parameters are the parameters defined for the called test.

vi. The parameters are optional. However, when one test calls another, the call statement should designate a value for each parameter defined for the called test. If no parameters are defined for the called test, the call statement must contain an empty set of parentheses.
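For example, with hypothetical test names and a single parameter:

```tsl
# Invoke the test "open_order" with one parameter, leaving it open,
# then invoke "print_order" and close it when it completes.
call open_order (17);
call_close print_order (17);
```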

121. What is the use of treturn and texit statements in the test script?

a) The treturn and texit statements are used to stop execution of called tests.

i. The treturn statement stops the current test and returns control to the calling test.

ii. The texit statement stops test execution entirely, unless tests are being called from a batch test. In this case, control is returned to the main batch test.

b) Both functions provide a return value for the called test. If treturn or texit is not used, or if no value is specified, then the return value of the call statement is 0.

treturn

c) The treturn statement terminates execution of the called test and returns control to the calling test.

The syntax is:

treturn [( expression )];

d) The optional expression is the value returned to the call statement used to invoke the test.

texit

e) When tests are run interactively, the texit statement discontinues test execution. However, when tests are called from a batch test, texit ends execution of the current test only; control is then returned to the calling batch test.

The syntax is:

texit [( expression )];
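For example, a called test might end with (the window name is illustrative):

```tsl
# Return 1 to the calling test on success, 0 otherwise.
if (win_exists ("Confirmation", 2) == E_OK)
    treturn (1);
treturn (0);
```

In the calling test, this value is available as the return value of the call statement.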

122. Where do you set up the search path for a called test?

a) The search path determines the directories that WinRunner will search for a called test.

b) To set the search path, choose Settings > General Options. The General Options dialog box opens. Click the Folders tab and choose a search path in the Search Path for Called Tests box. WinRunner searches the directories in the order in which they are listed in the box. Note that the search paths you define remain active in future testing sessions.

123. How you create user-defined functions and explain the syntax?

a) A user-defined function has the following structure:

[class] function name ([mode] parameter…)

{

declarations;

statements;

}

b) The class of a function can be either static or public. A static function is available only to the test or module within which the function was defined.

c) Parameters need not be explicitly declared. They can be of mode in, out, or inout. For all non-array parameters, the default mode is in. For array parameters, the default is inout. The significance of each of these parameter types is as follows:

in: A parameter that is assigned a value from outside the function.

out: A parameter that is assigned a value from inside the function.

inout: A parameter that can be assigned a value from outside or inside the function.
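A small sketch combining the in and out modes (the function name is illustrative):

```tsl
# Return the number of items in a list object through an out parameter.
public function get_item_count (in list_name, out count)
{
    return (list_get_info (list_name, "count", count));
}
```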

124. What does static and public class of a function means?

a) The class of a function can be either static or public.

b) A static function is available only to the test or module within which the function was defined.

c) Once you execute a public function, it is available to all tests, for as long as the test containing the function remains open. This is convenient when you want the function to be accessible from called tests. However, if you want to create a function that will be available to many tests, you should place it in a compiled module. The functions in a compiled module are available for the duration of the testing session.

d) If no class is explicitly declared, the function is assigned the default class, public.

125. What do the in, out and inout parameter modes mean?

a) in: A parameter that is assigned a value from outside the function.

b) out: A parameter that is assigned a value from inside the function.

c) inout: A parameter that can be assigned a value from outside or inside the function.

126. What is the purpose of return statement?

a) This statement passes control back to the calling function or test. It also returns the value of the evaluated expression to the calling function or test. If no expression is assigned to the return statement, an empty string is returned.

Syntax: return [( expression )];


127. What do the auto, static, public and extern variable classes mean?

a) auto: An auto variable can be declared only within a function and is local to that function. It exists only for as long as the function is running. A new copy of the variable is created each time the function is called.

b) static: A static variable is local to the function, test, or compiled module in which it is declared. The variable retains its value until the test is terminated by an Abort command. This variable is initialized each time the definition of the function is executed.

c) public: A public variable can be declared only within a test or module, and is available for all functions, tests, and compiled modules.

d) extern: An extern declaration indicates a reference to a public variable declared outside of the current test or module.

128. How do you declare constants?

a) The const specifier indicates that the declared value cannot be modified. The class of a constant may be either public or static. If no class is explicitly declared, the constant is assigned the default class public. Once a constant is defined, it remains in existence until you exit WinRunner.

b) The syntax of this declaration is:

[class] const name [= expression];

129. How do you declare arrays?

a) The following syntax is used to define the class and the initial expression of an array. Array size need not be defined in TSL.

b) class array_name [ ] [=init_expression]

c) The array class may be any of the classes used for variable declarations (auto, static, public, extern).
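For example (the element values are illustrative; TSL arrays are associative, so no size is declared):

```tsl
# Declare a static array and iterate over its elements.
static colors[];
colors[0] = "red";
colors[1] = "green";
colors[2] = "blue";
for (i in colors)
    report_msg ("colors[" & i & "] = " & colors[i]);
```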

130. How do you load and unload a compile module?

a) In order to access the functions in a compiled module you need to load the module. You can load it from within any test script using the load command; all tests will then be able to access the function until you quit WinRunner or unload the compiled module.

b) You can load a module either as a system module or as a user module. A system module is generally a closed module that is “invisible” to the tester. It is not displayed when it is loaded, cannot be stepped into, and is not stopped by a pause command. A system module is not unloaded when you execute an unload statement with no parameters (global unload).

load (module_name [,1|0] [,1|0] );

The module_name is the name of an existing compiled module.


Two additional, optional parameters indicate the type of module. The first parameter indicates whether the function module is a system module or a user module: 1 indicates a system module; 0 indicates a user module.

(Default = 0)

The second optional parameter indicates whether a user module will remain open in the WinRunner window or will close automatically after it is loaded: 1 indicates that the module will close automatically; 0 indicates that the module will remain open.

(Default = 0)

c) The unload function removes a loaded module or selected functions from memory.

d) It has the following syntax:

unload ( [ module_name | test_name [ , “function_name” ] ] );
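For example, with a hypothetical module path and function:

```tsl
# Load a user module (0) that closes its window after loading (1),
# call a function defined inside it, then unload it.
load ("C:\\MyAppFolder\\my_utils", 0, 1);
my_util_func ();   # hypothetical function defined in the module
unload ("C:\\MyAppFolder\\my_utils");
```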

141. How do you update your expected results?

142. How do you run your script with multiple sets of expected results?

143. How do you view and evaluate test results for various check points?

144. How do you view the results of file comparison?

145. What is the purpose of Wdiff utility?

146. What are batch tests and how do you create and run batch tests ?

147. How do you store and view batch test results?

148. How do you execute your tests from windows run command?

149. Explain different command line options?

150. What TSL function you will use to pause your script?

151. What is the purpose of setting a break point?

152. What is a watch list?

153. During debugging how do you monitor the value of the variables?

154. What are the reasons that WinRunner fails to identify an object on the GUI?

a) WinRunner fails to identify an object in a GUI due to various reasons.

i. The object is not a standard windows object.

ii. If the browser used is not compatible with the WinRunner version, GUI Map Editor will not be able to learn any of the objects displayed in the browser window.

155. What do you mean by the logical name of the object?

a) An object’s logical name is determined by its class. In most cases, the logical name is the label that appears on an object.

156. If the object does not have a name, then what will be the logical name?

a) If the object does not have a name, then the logical name could be the attached text.

157. What is the difference between the GUI map and GUI map files?

a) The GUI map is actually the sum of one or more GUI map files. There are two modes for organizing GUI map files.

i. Global GUI Map file: a single GUI Map file for the entire application

ii. GUI Map File per Test: WinRunner automatically creates a GUI Map file for each test created.

b) GUI Map file is a file which contains the windows and the objects learned by the WinRunner with its logical name and their physical description.

158. How do you view the contents of the GUI map?

a) The GUI Map Editor displays the contents of a GUI map. We can invoke the GUI Map Editor from the Tools menu in WinRunner. The GUI Map Editor displays the various GUI map files created and the windows and objects learned into them, with their logical names and physical descriptions.

160. What is a startup script in WinRunner?

A startup script is a script that WinRunner runs automatically every time it starts. For example, if the startup script invokes an application, that application will be launched for testing as soon as WinRunner starts.

161. What is the purpose of loading WinRunner add-ins?

Add-ins are used in WinRunner to load the functions specific to a particular add-in into memory. While creating a script, only the functions in the selected add-ins will be listed in the function generator, and while executing the script only the functions in the loaded add-ins will be executed; otherwise WinRunner will give an error message saying it does not recognize the function.

162. What is the purpose of the GUI Spy?

Using the GUI Spy you can view the properties of any GUI object on your desktop. You use the Spy pointer to point to an object, and the GUI Spy displays the properties and their values in the GUI Spy dialog box. You can choose to view all properties of an object, or only the selected set of properties that WinRunner learns.

163. When you create a GUI map, do you record all the objects or only specific objects?

a) If we are learning a window, WinRunner automatically learns all the objects in the window; otherwise we identify only those objects in the window that need to be learned, since we will be working with only those objects while creating scripts.

164. What is the purpose of the set_window command?

a) The set_window command sets the focus to the specified window. We use this command to set the focus to the required window before executing tests on it.

Syntax: set_window ( window [, time ] );

window is the logical name of the window, and time is the time the execution waits for the given window to come into focus.

165. How do you load a GUI map?

a) We can load a GUI map file by using the GUI_load command.

Syntax: GUI_load ( file_name );

166. What is the disadvantage of loading GUI maps through startup scripts?

a) If we are using a single GUI map file for the entire AUT, the memory used by the GUI map may be quite high.

b) If there is any change in an object that has been learned, WinRunner will not be able to recognize it, as it is not in the GUI map file loaded in memory. So we will have to learn the object again, update the GUI map file and reload it.

167. How do you unload the GUI map?

a) We can use GUI_close to unload a specific GUI map file, or GUI_close_all to unload all the GUI map files loaded in memory.

Syntax: GUI_close ( file_name ); or GUI_close_all ( );

 

Test Director Interview Questions

  1. How many tabs are there in Test Director?
    There are 4 tabs available in Test Director:
    1. Requirements
    2. Test Plan
    3. Test Lab
    4. Defects
    Test Director enables us to rename the tabs as we like, but only these 4 tabs are available.
  2. What is meant by test lab in Test Director?
    The test lab is the part of Test Director where we execute our tests in different cycles, creating a test tree for each of them. We need to add tests to these test trees from the tests placed under the test plan in the project.
  3. What is Test Director?
    It is Mercury Interactive's test management tool. It includes all the features we need to organize and manage the testing process.
  4. What are the main features of Test Director?
    It enables us to create a database of tests, execute tests, and report and track defects detected in the software.
  5. How does the assessment of the application take place in Test Director?
    As we test, Test Director allows us to continuously assess the status of our application by generating sophisticated reports and graphs. By integrating all tasks involved in software testing Test Director helps us to ensure that our software is ready for deployment.
  6. What does the planning tests phase do in Test Director?
    It is used to develop a test plan and create tests. This includes defining goals and strategy, designing tests, automating tests where beneficial, and analyzing the plan.
  7. What does the running tests phase do in Test Director?
    It executes the tests created in the planning tests phase and analyzes the test results.
  8. What does the tracking defects phase do in Test Director?
    It is used to monitor the software quality. This includes reporting defects, determining repair priorities, assigning tasks, and tracking repair progress.
  9. What are the three main views available in Test Director?

    - Plan tests

    - Run tests

    - Track defects

    Each view includes all the tools we need to complete each phase of the testing process.

  10. What is test plan tree?

    A test plan tree enables you to organize and display your tests hierarchically, according to your testing requirements.

  11. What are the contents of a test plan tree?

    A test plan tree can include several types of tests:

    - Manual test scripts

    - WinRunner test scripts

    - Batches of WinRunner test scripts

    - Visual API test scripts

    - LoadRunner scenario scripts and Vuser scripts

  12. What is test step?

    A test step includes the action to perform in our application, the input to enter, and its expected output.

  13. TestDirector is a Test management tool

  14. What are the components of TestDirector 5.0?

    Plan tests, Run tests, Track defects

  15. Who is having full privileges in TestDirector project?

    TD Admin

  16. What is test set?

    A test set is a subset of tests in our database designed to achieve a specified testing objective

  17. How is the analysis of test results accomplished in TestDirector?

    Analyzing results will be accomplished by viewing run data and by generating TestDirector reports and graphs

  18. What is Test set builder?

    The test set builder enables us to create, edit, and delete test sets. Our test sets can include both manual and automated tests. We can include the same test in different test sets.

  19. How is the running of a manual test accomplished in TestDirector?

    When running a manual test, we perform the actions detailed in the test steps and compare the actual results to the expected results. The mini step window allows us to conveniently perform manual tests and record results with TestDirector minimized.

  20. How is the running of an automated test accomplished in TestDirector?

    We can execute automated tests on our own computer or on multiple remote hosts. The test execution window enables us to control test execution and manage hosts. We can execute a single test, or launch an entire batch of tests.

  21. How to execute a test plan in TestDirector?

    Write test cases in the test plan (description and expected results).

    To execute them, go to TEST LAB. Give coverage from test plan to test lab by selecting “Select Tests”.

    Then click on RUN of RUN TEST SET to execute the test cases.

  22. What is the use of Test Director software?

    TestDirector is Mercury Interactive’s software test management tool. It helps quality assurance personnel plan and organize the testing process. With TestDirector you can create a database of manual and automated tests, build test cycles, run tests, and report and track defects.

  23. How do you integrate your automated scripts from TestDirector?

    When you work with WinRunner, you can choose to save your tests directly to your TestDirector database. Alternatively, while creating a test case in TestDirector, you can specify whether the script is automated or manual.

  24. Who has the rights to detect and report defects?

    - Software developers

    - Testers

    - End users

  25. What are the main things to be considered while reporting a bug?

    - A description of the bug

    - The software version

    - Additional information necessary to reproduce and repair the defect

  26. Describe about mailing defect data

    Connecting TestDirector to your e-mail system lets you routinely inform development and quality assurance personnel about defect repair activity.

  27. What is report designer?

    It is a powerful and flexible tool that allows you to design your own unique reports.

  28. What is the use of graphs in TestDirector?

    TestDirector graphs help us visualize the progress of test planning, test execution, and defect tracking for our application, so that we can detect bottlenecks in the testing process.

  29. What is the difference between a master test plan and a test plan?
    The master plan is a document showing the planning for the whole project, i.e. all the phases of the project, whereas the test plan is the document required only by the testing team.
  30. How to generate graphs in Test Director?
    To generate graphs in the Test Lab module of Test Director:
    1. Analysis
    2. Graph
    3. Graph Wizard
    4. Select the graph type as Summary and click the Next button.
    5. Select Show Current Tests and click the Next button.
    6. Select Define a New Filter and click the Filter button.
    7. Select the test set and click the OK button.
    8. Select Plan: Subject and click the OK button.
    9. Select Plan: Status.
    10. Select the test set as the X axis.
    11. Click the Finish button.
  31. How can we export multiple test cases from TD in a single go?
    If we want to export to any Office tool:
    1. Select the multiple steps/cases you need.
    2. Right-click > Save As and proceed.
  32. How to upload test cases from Excel into Test Director?
    First we have to activate the Excel add-in in Test Director from the add-ins page. After activation we can view the option ‘Export to Test Director’ in Excel under the Tools menu.
    Selecting ‘Export to Test Director’ opens a pop-up dialog box, and the following process should be followed:
    Enter
    1. The URL of Test Director
    2. The domain name and project name
    3. The user name and password
    4. Select any one of these 3 options: requirements, test cases or defects
    5. Select a map option: a) select a map, b) select a new map name, c) create a temporary map.
    6. Map the Test Director fields to the corresponding Excel columns; map every field you mention in Excel. These are the required steps to export from Excel into TD.
  33. Does anyone know of any integration between Test Director and Rational Robot?
    Any idea on the technical feasibility of such integration and the level of effort would also be interesting.
    Test Director is a test management tool. It is a Mercury Interactive product.
    Rational Robot is a Rational product. It comes with a ‘Test Manager’ module for test management.
    Integrating Test Director and Rational Robot is not feasible.
  34. Explain the project tree in Test Director?
    Project tree in Test Director: planning tests – creating tests – executing tests – tracking defects.
  35.  What is coverage status, what does it do?
    Coverage status is the percentage of testing covered at a given time. If you have 100 test cases in a project and you have finished 40 of them, then the coverage status of the project is 40%. Coverage status is used to keep track of project accomplishment, to meet the final deadline of the deliverables.
  36.  Difference between data validation and data integrity.
    Data validation: We check whether the input/output data is valid in terms of its length, data type, etc.
    Data integrity: We check that our database contains only accurate data. We implement data integrity with the help of integrity constraints, so in data integrity we check whether the integrity constraints are implemented correctly or not.
  37.  What are the uses of filters in Test Director?
    Limits the data displayed according to the criteria you specified.
    For Ex: You could create a filter that displays only those tests associated with the specified tester and subject.
    Each filter contains multiple filter conditions. These are expressions that are applied to a test plan tree or to the fields in a grid. When the filter is applied, only those records that meet the filter conditions are displayed.
    You create and edit filters with the filter dialog box. This opens when you select the filter command. Once you have created a filter, you can save it for future use.
  38. How do we generate test cases through Test Director?
    In the design steps grid we create a parent-child tree.
    Ex: Database operations (test name)
    1. Delete an order
    Description: Click the Delete button to delete the order.
    Expected results: The order is deleted.
    Pass/Fail:
    2. Update an order
    3. Create an order
  39. What are the data types available in PL/SQL?
  40. Difference between WebInspect/QAInspect and WinRunner/Test Director?
  41. How will you generate the defect ID in Test Director? Is it generated automatically or not?
  42. Difference between TD 8.0 (Test Director) and QC 8.0 (Quality Center).
  43. How do you ensure that there is no duplication of bugs in Test Director?
  44. Difference between WinRunner and Test Director?
  45. How will you integrate your automated scripts from TestDirector?
  46. What is the use of the Test Director software?
  47. What is the Extra tab in QC? (TCS)
  48. What is Test Builder? (TCS)
  49. I am using Quality Center 9.2. While raising a defect I can see that there is only an option to send a mail to the assignee, and it is also sent to me. Is there any way I can send the mail to multiple users? I know I can save the defect and later send it to multiple users, but is there any way I can send it at the time of raising the defect itself? Is there any configuration that can be done by an administrator?
  50. What is the connection between the Test Plan tab and the Test Lab tab? How can you access your test cases in the Test Lab tab?
  51. How do you create a test script from Quality Center using QTP?
  52. What is the difference between Quality Center and Test Director?
  53. What is the last version of TD to support QTP?
  54. What are the mandatory fields in TD?
  55. Is TD web based or client/server?
  56. What is the latest version of TD?
  57. What are the views in TestDirector?
  58. How do you map test cases with requirements in the Test Lab?
  59. How can we see all fields in Test Director?
  60. How do you prepare bug reports? What all do you include in a bug report?
  61. How can we connect a defect report from the Test Lab and Test Plan tabs?
  62. I would like to upload test cases from an Excel sheet to the Quality Center Test Plan section. Please tell me how to upload data (test cases written in Excel) to Quality Center.
  63. What is RTM in Test Director?
  64. What is the difference between TD and QC?
  65. Can anyone tell how to install TD on our system?
  66. How many ways are there to copy test cases in Test Director?
  67. How do you import Excel/Word data into the Test Plan module of Quality Center 9.0?
  68. What would be the purpose of the Dashboard in Quality Center? What is the advantage of the Dashboard?
  69. What is the advantage of Quality Center? What functions are performed using Quality Center?
  70. Can test cases be written directly in TD? What else can we do in TD?
  71. What do you mean by requirement coverage?
  72. How will you do the execution workflow, and create and modify test sets?

A – Z Testing

To get to know the basic definitions of software testing and quality assurance, this is the best glossary, compiled by Erik van Veenendaal. For each definition there is also a reference to IEEE or ISO mentioned in brackets.

 

A

acceptance criteria: The exit criteria that a component or system must satisfy in order to be accepted by a user, customer, or other authorized entity. [IEEE 610]

acceptance testing: Formal testing with respect to user needs, requirements, and business processes conducted to determine whether or not a system satisfies the acceptance criteria and to enable the user, customers or other authorized entity to determine whether or not to accept the system. [After IEEE 610]

accessibility testing: Testing to determine the ease by which users with disabilities can use a component or system. [Gerrard]

accuracy: The capability of the software product to provide the right or agreed results or effects with the needed degree of precision. [ISO 9126] See also functionality testing.

actual result: The behavior produced/observed when a component or system is tested.

 

ad hoc testing: Testing carried out informally; no formal test preparation takes place, no recognized test design technique is used, there are no expectations for results and randomness guides the test execution activity.

adaptability: The capability of the software product to be adapted for different specified environments without applying actions or means other than those provided for this purpose for the software considered. [ISO 9126] See also portability testing.

 

agile testing: Testing practice for a project using agile methodologies, such as extreme programming (XP), treating development as the customer of testing and emphasizing the test-first design paradigm.

alpha testing: Simulated or actual operational testing by potential users/customers or an independent test team at the developers’ site, but outside the development organization. Alpha testing is often employed as a form of internal acceptance testing.

analyzability: The capability of the software product to be diagnosed for deficiencies or causes of failures in the software, or for the parts to be modified to be identified. [ISO 9126] See also maintainability testing.

anomaly: Any condition that deviates from expectation based on requirements specifications, design documents, user documents, standards, etc. or from someone’s perception or experience. Anomalies may be found during, but not limited to, reviewing, testing, analysis, compilation, or use of software products or applicable documentation. [IEEE 1044] See also defect, deviation, error, fault, failure, incident, problem.

attractiveness: The capability of the software product to be attractive to the user. [ISO 9126]

audit: An independent evaluation of software products or processes to ascertain compliance to standards, guidelines, specifications, and/or procedures based on objective criteria, including documents that specify:

(1) the form or content of the products to be produced

(2) the process by which the products shall be produced

(3) how compliance to standards or guidelines shall be measured. [IEEE 1028]

audit trail: A path by which the original input to a process (e.g. data) can be traced back through the process, taking the process output as a starting point. This facilitates defect analysis and allows a process audit to be carried out. [After TMap]

automated testware: Testware used in automated testing, such as tool scripts.

availability: The degree to which a component or system is operational and accessible when required for use. Often expressed as a percentage. [IEEE 610]

 

B

back-to-back testing: Testing in which two or more variants of a component or system are executed with the same inputs, the outputs compared, and analyzed in cases of discrepancies. [IEEE 610]

baseline: A specification or software product that has been formally reviewed or agreed upon, that thereafter serves as the basis for further development, and that can be changed only through a formal change control process. [After IEEE 610]

basic block: A sequence of one or more consecutive executable statements containing no branches.

basis test set: A set of test cases derived from the internal structure or specification to ensure that 100% of a specified coverage criterion is achieved.

behavior: The response of a component or system to a set of input values and preconditions.

benchmark test: (1) A standard against which measurements or comparisons can be made. (2) A test that is used to compare components or systems to each other or to a standard as in (1). [After IEEE 610]

bespoke software: Software developed specifically for a set of users or customers. The opposite is off-the-shelf software.

best practice: A superior method or innovative practice that contributes to the improved performance of an organization under given context, usually recognized as ‘best’ by other peer organizations.

beta testing: Operational testing by potential and/or existing users/customers at an external site not otherwise involved with the developers, to determine whether or not a component or system satisfies the user/customer needs and fits within the business processes. Beta testing is often employed as a form of external acceptance testing in order to acquire feedback from the market.

big-bang testing: A type of integration testing in which software elements, hardware elements, or both are combined all at once into a component or an overall system, rather than in stages. [After IEEE 610] See also integration testing.

black box testing: Testing, either functional or non-functional, without reference to the internal structure of the component or system.

black box test design techniques: Documented procedure to derive and select test cases based on an analysis of the specification, either functional or non-functional, of a component or system without reference to its internal structure.

blocked test case: A test case that cannot be executed because the preconditions for its execution are not fulfilled.

bottom-up testing: An incremental approach to integration testing where the lowest level components are tested first, and then used to facilitate the testing of higher level components. This process is repeated until the component at the top of the hierarchy is tested. See also integration testing.

boundary value: An input value or output value which is on the edge of an equivalence partition or at the smallest incremental distance on either side of an edge, for example the minimum or maximum value of a range.

boundary value analysis: A black box test design technique in which test cases are designed based on boundary values.

boundary value coverage: The percentage of boundary values that have been exercised by a test suite.
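
As a concrete sketch of the two definitions above, the boundary values of an inclusive numeric range can be enumerated in a few lines of Python (the 1–100 input range is an invented example):

```python
def boundary_values(low, high):
    """Boundary values for an inclusive range [low, high]:
    just outside, on, and just inside each edge."""
    return [low - 1, low, low + 1, high - 1, high, high + 1]

# For an input field that accepts values 1..100:
print(boundary_values(1, 100))  # [0, 1, 2, 99, 100, 101]
```

Boundary value coverage would then be the percentage of these six values exercised by the test suite.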

branch: A basic block that can be selected for execution based on a program construct in which one of two or more alternative program paths are available, e.g. case, jump, go to, if-then-else.

branch coverage: The percentage of branches that have been exercised by a test suite. 100% branch coverage implies both 100% decision coverage and 100% statement coverage.

branch testing: A white box test design technique in which test cases are designed to execute branches.

business process-based testing: An approach to testing in which test cases are designed based on descriptions and/or knowledge of business processes.

 

C

Capability Maturity Model (CMM): A five level staged framework that describes the key elements of an effective software process. The Capability Maturity Model covers practices for planning, engineering and managing software development and maintenance. [CMM]

Capability Maturity Model Integration (CMMI): A framework that describes the key elements of an effective product development and maintenance process. The Capability Maturity Model Integration covers practices for planning, engineering and managing product development and maintenance. CMMI is the designated successor of the CMM. [CMMI]

capture/playback tool: A type of test execution tool where inputs are recorded during manual testing in order to generate automated test scripts that can be executed later (i.e. replayed). These tools are often used to support automated regression testing.

CASE: Acronym for Computer Aided Software Engineering.

CAST: Acronym for Computer Aided Software Testing. See also test automation.

cause-effect graph: A graphical representation of inputs and/or stimuli (causes) with their associated outputs (effects), which can be used to design test cases.

cause-effect graphing: A black box test design technique in which test cases are designed from cause-effect graphs. [BS 7925/2]

 

certification: The process of confirming that a component, system or person complies with its specified requirements, e.g. by passing an exam.

changeability: The capability of the software product to enable specified modifications to be implemented. [ISO 9126] See also maintainability.

classification tree method: A black box test design technique in which test cases, described by means of a classification tree, are designed to execute combinations of representatives of input and/or output domains. [Grochtmann]

code coverage: An analysis method that determines which parts of the software have been executed (covered) by the test suite and which parts have not been executed, e.g. statement coverage, decision coverage or condition coverage.

co-existence: The capability of the software product to co-exist with other independent software in a common environment sharing common resources. [ISO 9126] See portability testing.

complexity: The degree to which a component or system has a design and/or internal structure that is difficult to understand, maintain and verify. See also cyclomatic complexity.

compliance: The capability of the software product to adhere to standards, conventions or regulations in laws and similar prescriptions. [ISO 9126]

compliance testing: The process of testing to determine the compliance of a component or system.

component: A minimal software item that can be tested in isolation.

component integration testing: Testing performed to expose defects in the interfaces and interaction between integrated components.

component specification: A description of a component’s function in terms of its output values for specified input values under specified conditions, and required non-functional behavior (e.g. resource-utilization).

component testing: The testing of individual software components. [After IEEE 610]

compound condition: Two or more single conditions joined by means of a logical operator (AND, OR or XOR), e.g. ‘A>B AND C>1000’.

concurrency testing: Testing to determine how the occurrence of two or more activities within the same interval of time, achieved either by interleaving the activities or by simultaneous execution, is handled by the component or system. [After IEEE 610]

condition: A logical expression that can be evaluated as True or False, e.g. A>B. See also test condition.

condition coverage: The percentage of condition outcomes that have been exercised by a test suite. 100% condition coverage requires each single condition in every decision statement to be tested as True and False.

condition determination coverage: The percentage of all single condition outcomes that independently affect a decision outcome that have been exercised by a test case suite. 100% condition determination coverage implies 100% decision condition coverage.

condition determination testing: A white box test design technique in which test cases are designed to execute single condition outcomes that independently affect a decision outcome.

condition testing: A white box test design technique in which test cases are designed to execute condition outcomes.

condition outcome: The evaluation of a condition to True or False.
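
The difference between condition coverage and decision coverage can be sketched in Python, reusing the compound condition ‘A>B AND C>1000’ from the compound condition entry above (the test values are invented): two test cases achieve 100% decision coverage while one of the single conditions is only ever evaluated to True.

```python
def decision(a, b, c):
    # The compound condition "A > B AND C > 1000".
    return a > b and c > 1000

tests = [(5, 1, 2000), (0, 1, 2000)]  # two invented test cases

decision_outcomes = {decision(a, b, c) for a, b, c in tests}
cond1_outcomes = {(a > b) for a, b, c in tests}
cond2_outcomes = {(c > 1000) for a, b, c in tests}

print(sorted(decision_outcomes))  # [False, True]: 100% decision coverage
print(sorted(cond1_outcomes))     # [False, True]: both outcomes of A > B
print(sorted(cond2_outcomes))     # [True]: C > 1000 never evaluated to False
```

So these two test cases would need to be supplemented (e.g. with a case where C <= 1000) before 100% condition coverage is reached.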

configuration: The composition of a component or system as defined by the number, nature, and interconnections of its constituent parts.

configuration auditing: The function to check on the contents of libraries of configuration items, e.g. for standards compliance. [IEEE 610]

configuration control: An element of configuration management, consisting of the evaluation, co-ordination, approval or disapproval, and implementation of changes to configuration items after formal establishment of their configuration identification. [IEEE 610]

configuration identification: An element of configuration management, consisting of selecting the configuration items for a system and recording their functional and physical characteristics in technical documentation. [IEEE 610]

configuration item: An aggregation of hardware, software or both, that is designated for configuration management and treated as a single entity in the configuration management process. [IEEE 610]

configuration management: A discipline applying technical and administrative direction and surveillance to: identify and document the functional and physical characteristics of a configuration item, control changes to those characteristics, record and report change processing and implementation status, and verify compliance with specified requirements. [IEEE 610]

consistency: The degree of uniformity, standardization, and freedom from contradiction among the documents or parts of a component or system. [IEEE 610]

control flow: An abstract representation of all possible sequences of events (paths) in the execution through a component or system.

conversion testing: Testing of software used to convert data from existing systems for use in replacement systems.

COTS: Acronym for Commercial Off-The-Shelf software.

coverage: The degree, expressed as a percentage, to which a specified coverage item has been exercised by a test suite.

coverage analysis: Measurement of achieved coverage to a specified coverage item during test execution referring to predetermined criteria to determine whether additional testing is required and if so, which test cases are needed.

coverage item: An entity or property used as a basis for test coverage, e.g. equivalence partitions or code statements.

coverage tool: A tool that provides objective measures of what structural elements, e.g. statements, branches have been exercised by the test suite.

cyclomatic complexity: The number of independent paths through a program. Cyclomatic complexity is defined as L – N + 2P, where:
L = the number of edges/links in a graph
N = the number of nodes in a graph
P = the number of disconnected parts of the graph (e.g. a calling graph and a subroutine). [After McCabe]
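
The formula can be checked directly; the graph figures below describe a single if-then-else construct (an invented example) with 4 nodes, 4 edges and one connected part:

```python
def cyclomatic_complexity(edges, nodes, parts=1):
    """McCabe's cyclomatic complexity: L - N + 2P."""
    return edges - nodes + 2 * parts

# A single if-then-else: entry, two branch blocks, exit (4 nodes),
# joined by 4 edges, in one connected graph.
print(cyclomatic_complexity(edges=4, nodes=4, parts=1))  # 2
```

The result, 2, matches the two independent paths through an if-then-else.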

D

data definition: An executable statement where a variable is assigned a value.

data driven testing: A scripting technique that stores test input and expected results in a table or spreadsheet, so that a single control script can execute all of the tests in the table. Data driven testing is often used to support the application of test execution tools such as capture/playback tools. [Fewster and Graham] See also keyword driven testing.
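
A minimal Python sketch of the technique, with an invented function under test and an invented data table; one control loop executes every row:

```python
def add(x, y):
    # Invented function under test.
    return x + y

# Test inputs and expected results live in the data table,
# not in the control script.
test_table = [
    (1, 2, 3),
    (0, 0, 0),
    (-1, 1, 0),
]

results = [add(x, y) == expected for x, y, expected in test_table]
print(all(results))  # True: every row of the table passed
```

Adding a new test case is then just a matter of adding a row to the table.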

data flow: An abstract representation of the sequence and possible changes of the state of data objects, where the state of an object is any of: creation, usage, or destruction. [Beizer]

data flow analysis: A form of static analysis based on the definition and usage of variables.

data flow coverage: The percentage of definition-use pairs that have been exercised by a test case suite.

data flow test: A white box test design technique in which test cases are designed to execute definition and use pairs of variables.

debugging: The process of finding, analyzing and removing the causes of failures in software.

debugging tool: A tool used by programmers to reproduce failures, investigate the state of programs and find the corresponding defect. Debuggers enable programmers to execute programs step by step, to halt a program at any program statement and to set and examine program variables.

decision: A program point at which the control flow has two or more alternative routes. A node with two or more links to separate branches.

decision condition coverage: The percentage of all condition outcomes and decision outcomes that have been exercised by a test suite. 100% decision condition coverage implies both 100% condition coverage and 100% decision coverage.

decision condition testing: A white box test design technique in which test cases are designed to execute condition outcomes and decision outcomes.

decision coverage: The percentage of decision outcomes that have been exercised by a test suite. 100% decision coverage implies both 100% branch coverage and 100% statement coverage.
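
As a sketch (the grading function is invented), two test cases suffice for 100% decision coverage of a function with a single decision, because they exercise both the True and False outcomes:

```python
# Decision coverage sketch: one decision point, two test cases that
# between them exercise both decision outcomes.
def grade(score):
    if score >= 60:      # the decision point
        return "pass"    # True outcome
    return "fail"        # False outcome

outcomes = {grade(75), grade(40)}   # exercises both branches
print(outcomes)
```

Covering both outcomes here also executes every statement, illustrating why 100% decision coverage implies 100% statement coverage.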

decision table: A table showing combinations of inputs and/or stimuli (causes) with their associated outputs and/or actions (effects), which can be used to design test cases.

decision table testing: A black box test design technique in which test cases are designed to execute the combinations of inputs and/or stimuli (causes) shown in a decision table. [Veenendaal]
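
A decision table can be sketched directly as data; the discount rules below are invented purely to show the shape of the technique, one test case per table entry:

```python
# Decision table sketch: each entry pairs a combination of conditions
# (causes) with an expected action (effect). Rules are invented.
#   (is_member, order_over_100) -> discount action
decision_table = {
    (True,  True):  "20% discount",
    (True,  False): "10% discount",
    (False, True):  "5% discount",
    (False, False): "no discount",
}

def discount(is_member, order_over_100):
    return decision_table[(is_member, order_over_100)]

# Decision table testing: design one test case per table entry.
results = [discount(m, o) for (m, o) in decision_table]
print(results)
```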

decision testing: A white box test design technique in which test cases are designed to execute decision outcomes.

decision outcome: The result of a decision (which therefore determines the branches to be taken).

defect: A flaw in a component or system that can cause the component or system to fail to perform its required function, e.g. an incorrect statement or data definition. A defect, if encountered during execution, may cause a failure of the component or system.

defect density: The number of defects identified in a component or system divided by the size of the component or system (expressed in standard measurement terms, e.g. lines-of-code, number of classes or function points).

Defect Detection Percentage (DDP): The number of defects found by a test phase, divided by the number found by that test phase and by any other means afterwards.

defect report: A document reporting on any flaw in a component or system that can cause the component or system to fail to perform its required function. [After IEEE 829]

defect management: The process of recognizing, investigating, taking action and disposing of defects. It involves recording defects, classifying them and identifying the impact. [After IEEE 1044]

defect masking: An occurrence in which one defect prevents the detection of another. [After IEEE 610]

definition-use pair: The association of the definition of a variable with the use of that variable. Variable uses include computational use (e.g. multiplication) and predicate use (directing the execution of a path).
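
An annotated sketch (the checkout function is invented) showing a definition of a variable, a computational use, and a predicate use:

```python
# Definition-use pairs for the variable `total` (invented example).
def checkout(prices):
    total = 0                 # definition of `total`
    for p in prices:
        total = total + p     # computational use (and redefinition)
    if total > 100:           # predicate use: directs the path taken
        return total - 10     # computational use
    return total

print(checkout([60, 50]))     # predicate use takes the True branch
```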

deliverable: Any (work) product that must be delivered to someone other than the (work) product’s author.

design-based testing: An approach to testing in which test cases are designed based on the architecture and/or detailed design of a component or system (e.g. tests of interfaces between components or systems).

desk checking: Testing of software or specification by manual simulation of its execution.

development testing: Formal or informal testing conducted during the implementation of a component or system, usually in the development environment by developers. [After IEEE 610]

documentation testing: Testing the quality of the documentation, e.g. user guide or installation guide.

domain: The set from which valid input and/or output values can be selected.

driver: A software component or test tool that replaces a component that takes care of the control and/or the calling of a component or system. [After TMap]

dynamic analysis: The process of evaluating behavior, e.g. memory performance, CPU usage, of a system or component during execution. [After IEEE 610]

dynamic comparison: Comparison of actual and expected results, performed while the software is being executed, for example by a test execution tool.

dynamic testing: Testing that involves the execution of the software of a component or system.

E

efficiency: The capability of the software product to provide appropriate performance, relative to the amount of resources used under stated conditions. [ISO 9126]

efficiency testing: The process of testing to determine the efficiency of a software product.

elementary comparison testing: A black box test design technique in which test cases are designed to execute combinations of inputs using the concept of condition determination coverage. [TMap]

emulator: A device, computer program, or system that accepts the same inputs and produces the same outputs as a given system. [IEEE 610] See also simulator.

entry criteria: The set of generic and specific conditions for permitting a process to go forward with a defined task, e.g. test phase. The purpose of entry criteria is to prevent a task from starting which would entail more (wasted) effort compared to the effort needed to remove the failed entry criteria. [Gilb and Graham]

entry point: The first executable statement within a component.

equivalence partition: A portion of an input or output domain for which the behavior of a component or system is assumed to be the same, based on the specification.

equivalence partition coverage: The percentage of equivalence partitions that have been exercised by a test suite.

equivalence partitioning: A black box test design technique in which test cases are designed to execute representatives from equivalence partitions. In principle test cases are designed to cover each partition at least once.
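
As a sketch, consider an invented eligibility check that accepts ages 18–65. The input domain splits into three partitions, and the technique picks one representative per partition:

```python
# Equivalence partitioning sketch for an invented age check (valid: 18-65).
def is_eligible(age):
    return 18 <= age <= 65

# One representative test value per partition.
representatives = {
    "below range": 10,    # partition: age < 18
    "in range": 30,       # partition: 18 <= age <= 65
    "above range": 70,    # partition: age > 65
}

results = {name: is_eligible(age) for name, age in representatives.items()}
print(results)
```

Three test cases stand in for every possible age, on the assumption that all values within a partition behave the same.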

error: A human action that produces an incorrect result. [After IEEE 610]

error guessing: A test design technique where the experience of the tester is used to anticipate what defects might be present in the component or system under test as a result of errors made, and to design tests specifically to expose them.

error seeding: The process of intentionally adding known defects to those already in the component or system for the purpose of monitoring the rate of detection and removal, and estimating the number of remaining defects. [IEEE 610]
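
The "estimating the number of remaining defects" part is often done with a simple proportion (a back-of-the-envelope sketch, not a prescribed formula): assume testing finds seeded and real defects at roughly the same rate.

```python
# Error seeding estimate sketch: if found_seeded / total_seeded is assumed
# to approximate found_real / total_real, the total real defect count can
# be estimated from the seeded-defect detection rate. Numbers are invented.
total_seeded = 20
found_seeded = 15
found_real = 30

estimated_total_real = found_real * total_seeded / found_seeded
estimated_remaining = estimated_total_real - found_real
print(estimated_total_real, estimated_remaining)   # 40.0 10.0
```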

error tolerance: The ability of a system or component to continue normal operation despite the presence of erroneous inputs. [After IEEE 610].

exception handling: Behavior of a component or system in response to erroneous input, from either a human user or from another component or system, or to an internal failure.

executable statement: A statement which, when compiled, is translated into object code, and which will be executed procedurally when the program is running and may perform an action on data.

exercised: A program element is said to be exercised by a test case when the input value causes the execution of that element, such as a statement, decision, or other structural element.

exhaustive testing: A test approach in which the test suite comprises all combinations of input values and preconditions.
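
A quick sketch of why exhaustive testing is rarely practical: the suite size is the product of the sizes of all input domains, which explodes for anything beyond a few tiny inputs.

```python
# Exhaustive testing sketch: all combinations of input values.
from itertools import product

# Exhaustive suite for three boolean flags: all 2**3 = 8 combinations.
all_cases = list(product([False, True], repeat=3))
print(len(all_cases))   # 8

# Ten 32-bit integer inputs would already need (2**32)**10 combinations.
print((2**32) ** 10 > 10**90)   # True
```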

exit criteria: The set of generic and specific conditions, agreed upon with the stakeholders, for permitting a process to be officially completed. The purpose of exit criteria is to prevent a task from being considered completed when there are still outstanding parts of the task which have not been finished. Exit criteria are used by testing to report against and to plan when to stop testing. [After Gilb and Graham]

exit point: The last executable statement within a component.

expected result: The behavior predicted by the specification, or another source, of the component or system under specified conditions.

exploratory testing: Testing where the tester actively controls the design of the tests as those tests are performed and uses information gained while testing to design new and better tests. [Bach]

F

fail: A test is deemed to fail if its actual result does not match its expected result.

failure: Actual deviation of the component or system from its expected delivery, service or result. [After Fenton]

failure mode: The physical or functional manifestation of a failure. For example, a system in failure mode may be characterized by slow operation, incorrect outputs, or complete termination of execution.

Failure Mode and Effect Analysis (FMEA): A systematic approach to risk identification and analysis of identifying possible modes of failure and attempting to prevent their occurrence.

failure rate: The ratio of the number of failures of a given category to a given unit of measure, e.g. failures per unit of time, failures per number of transactions, failures per number of computer runs. [IEEE 610]

fault tolerance: The capability of the software product to maintain a specified level of performance in cases of software faults (defects) or of infringement of its specified interface. [ISO 9126] See also reliability.

fault tree analysis: A method used to analyze the causes of faults (defects).

feasible path: A path for which a set of input values and preconditions exists which causes it to be executed.

feature: An attribute of a component or system specified or implied by requirements documentation (for example reliability, usability or design constraints). [After IEEE 1008]

finite state machine: A computational model consisting of a finite number of states and transitions between those states, possibly with accompanying actions. [IEEE 610]
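
A finite state machine can be sketched as a transition table; the toy door model below is invented for illustration:

```python
# Finite state machine sketch: states and events map to next states.
# Toy door model (invented): closed / opened / locked.
transitions = {
    ("closed", "open"):   "opened",
    ("opened", "close"):  "closed",
    ("closed", "lock"):   "locked",
    ("locked", "unlock"): "closed",
}

def step(state, event):
    # (state, event) pairs not in the table are invalid transitions;
    # here they simply leave the state unchanged.
    return transitions.get((state, event), state)

state = "closed"
for event in ["open", "close", "lock", "unlock"]:
    state = step(state, event)
print(state)   # "closed"
```

This same table is what state transition testing (and the state table entry below, with invalid transitions made explicit) works from.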

formal review: A review characterized by documented procedures and requirements, e.g. inspection.

frozen test basis: A test basis document that can only be amended by a formal change control process. See also baseline.

Function Point Analysis (FPA): Method aiming to measure the size of the functionality of an information system. The measurement is independent of the technology. This measurement may be used as a basis for the measurement of productivity, the estimation of the needed resources, and project control.

functional integration: An integration approach that combines the components or systems for the purpose of getting a basic functionality working early. See also integration testing.

functional requirement: A requirement that specifies a function that a component or system must perform. [IEEE 610]

functional test design technique: Documented procedure to derive and select test cases based on an analysis of the specification of the functionality of a component or system without reference to its internal structure. See also black box test design technique.

functional testing: Testing based on an analysis of the specification of the functionality of a component or system. See also black box testing.

functionality: The capability of the software product to provide functions which meet stated and implied needs when the software is used under specified conditions. [ISO 9126]

functionality testing: The process of testing to determine the functionality of a software product.

G

glass box testing: See white box testing.

H

heuristic evaluation: A static usability test technique to determine the compliance of a user interface with recognized usability principles (the so-called “heuristics”).

high level test case: A test case without concrete (implementation level) values for input data and expected results.

horizontal traceability: The tracing of requirements for a test level through the layers of test documentation (e.g. test plan, test design specification, test case specification and test procedure specification).

 

I

impact analysis: The assessment of change to the layers of development documentation, test documentation and components, in order to implement a given change to specified requirements.

incremental development model: A development life cycle where a project is broken into a series of increments, each of which delivers a portion of the functionality in the overall project requirements. The requirements are prioritized and delivered in priority order in the appropriate increment. In some (but not all) versions of this life cycle model, each subproject follows a ‘mini V-model’ with its own design, coding and testing phases.

incremental testing: Testing where components or systems are integrated and tested one or some at a time, until all the components or systems are integrated and tested.

incident: Any event occurring during testing that requires investigation. [After IEEE 1008]

incident management: The process of recognizing, investigating, taking action and disposing of incidents. It involves recording incidents, classifying them and identifying the impact. [After IEEE 1044]

incident management tool: A tool that facilitates the recording and status tracking of incidents found during testing. They often have workflow-oriented facilities to track and control the allocation, correction and re-testing of incidents and provide reporting facilities.

incident report: A document reporting on any event that occurs during the testing which requires investigation. [After IEEE 829]

independence: Separation of responsibilities, which encourages the accomplishment of objective testing. [After DO-178b]

infeasible path: A path that cannot be exercised by any set of possible input values.

informal review: A review not based on a formal (documented) procedure.

input: A variable (whether stored within a component or outside) that is read by a component.

input domain: The set from which valid input values can be selected. See also domain.

input value: An instance of an input. See also input.

inspection: A type of review that relies on visual examination of documents to detect defects, e.g. violations of development standards and non-conformance to higher level documentation. The most formal review technique and therefore always based on a documented procedure. [After IEEE 610, IEEE 1028]

installability: The capability of the software product to be installed in a specified environment [ISO 9126]. See also portability.

installability testing: The process of testing the installability of a software product. See also portability testing.

installation guide: Supplied instructions on any suitable media, which guides the installer through the installation process. This may be a manual guide, step-by-step procedure, installation wizard, or any other similar process description.

installation wizard: Supplied software on any suitable media, which leads the installer through the installation process. It normally runs the installation process, provides feedback on installation results, and prompts for options.

instrumentation: The insertion of additional code into the program in order to collect information about program behavior during execution.

instrumenter: A software tool used to carry out instrumentation.

intake test: A special instance of a smoke test to decide if the component or system is ready for detailed and further testing. An intake test is typically carried out at the start of the test execution phase.

integration: The process of combining components or systems into larger assemblies.

integration testing: Testing performed to expose defects in the interfaces and in the interactions between integrated components or systems. See also component integration testing, system integration testing.

interface testing: An integration test type that is concerned with testing the interfaces between components or systems.

interoperability: The capability of the software product to interact with one or more specified components or systems. [After ISO 9126] See also functionality.

interoperability testing: The process of testing to determine the interoperability of a software product. See also functionality testing.

invalid testing: Testing using input values that should be rejected by the component or system. See also error tolerance.

isolation testing: Testing of individual components in isolation from surrounding components, with surrounding components being simulated by stubs and drivers, if needed.

K

keyword driven testing: A scripting technique that uses data files to contain not only test data and expected results, but also keywords related to the application being tested. The keywords are interpreted by special supporting scripts that are called by the control script for the test. See also data driven testing.
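
A sketch of the mechanism: rows hold a keyword plus arguments, and a small library of supporting scripts maps each keyword to an action. The keywords and the toy "application" (a dict) are invented for illustration.

```python
# Keyword driven testing sketch. The "application" under test is a dict.
app = {}

def do_enter(field, value):
    app[field] = value

def do_check(field, expected):
    return app.get(field) == expected

# Supporting scripts, looked up by keyword.
keyword_library = {"enter": do_enter, "check": do_check}

# Rows as they might come from a data file: keyword, arg1, arg2.
test_rows = [
    ("enter", "username", "alice"),
    ("check", "username", "alice"),
]

# The control script interprets each row via the keyword library.
results = [keyword_library[kw](a, b) for kw, a, b in test_rows]
print(results)   # [None, True]
```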

L

LCSAJ: A Linear Code Sequence And Jump, consisting of the following three items (conventionally identified by line numbers in a source code listing): the start of the linear sequence of executable statements, the end of the linear sequence, and the target line to which control flow is transferred at the end of the linear sequence.

LCSAJ coverage: The percentage of LCSAJs of a component that have been exercised by a test suite. 100% LCSAJ coverage implies 100% decision coverage.

LCSAJ testing: A white box test design technique in which test cases are designed to execute LCSAJs.

learnability: The capability of the software product to enable the user to learn its application. [ISO 9126] See also usability.

load test: A test type concerned with measuring the behavior of a component or system with increasing load, e.g. number of parallel users and/or numbers of transactions, to determine what load can be handled by the component or system.

low level test case: A test case with concrete (implementation level) values for input data and expected results.

 

M

maintenance: Modification of a software product after delivery to correct defects, to improve performance or other attributes, or to adapt the product to a modified environment. [IEEE 1219]

maintenance testing: Testing the changes to an operational system or the impact of a changed environment to an operational system.

maintainability: The ease with which a software product can be modified to correct defects, modified to meet new requirements, modified to make future maintenance easier, or adapted to a changed environment. [ISO 9126]

maintainability testing: The process of testing to determine the maintainability of a software product.

management review: A systematic evaluation of software acquisition, supply, development, operation, or maintenance process, performed by or on behalf of management, that monitors progress, determines the status of plans and schedules, confirms requirements and their system allocation, or evaluates the effectiveness of management approaches to achieve fitness for purpose. [After IEEE 610, IEEE 1028]

maturity: (1) The capability of an organization with respect to the effectiveness and efficiency of its processes and work practices. See also Capability Maturity Model, Test Maturity Model. (2) The capability of the software product to avoid failure as a result of defects in the software. [ISO 9126] See also reliability.

measure: The number or category assigned to an attribute of an entity by making a measurement [ISO 14598].

measurement: The process of assigning a number or category to an entity to describe an attribute of that entity. [ISO 14598]

measurement scale: A scale that constrains the type of data analysis that can be performed on it. [ISO 14598]

memory leak: A defect in a program’s dynamic store allocation logic that causes it to fail to reclaim memory after it has finished using it, eventually causing the program to fail due to lack of memory.

metric: A measurement scale and the method used for measurement. [ISO 14598]

milestone: A point in time in a project at which defined (intermediate) deliverables and results should be ready.

moderator: The leader and main person responsible for an inspection or other review process.

monitor: A software tool or hardware device that runs concurrently with the component or system under test and supervises, records and/or analyzes the behavior of the component or system. [After IEEE 610]

multiple condition coverage: The percentage of combinations of all single condition outcomes within one statement that have been exercised by a test suite. 100% multiple condition coverage implies 100% condition determination coverage.

multiple condition testing: A white box test design technique in which test cases are designed to execute combinations of single condition outcomes (within one statement).

mutation analysis: A method to determine test suite thoroughness by measuring the extent to which a test suite can discriminate the program from slight variants (mutants) of the program.
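
The core idea can be sketched by hand: a mutant differs from the original by one small change, and a test suite "kills" the mutant if some test gets different results from the two versions. The functions and operator change below are invented for illustration.

```python
# Mutation analysis sketch: original vs. a mutant with one operator changed.
def original(x):
    return x >= 10

def mutant(x):
    return x > 10     # mutated: >= became >

def kills(suite):
    # A suite kills the mutant if any input distinguishes the two versions.
    return any(original(x) != mutant(x) for x in suite)

weak_suite = [5, 20]            # both versions agree on every input
strong_suite = [5, 10, 20]      # 10 is the discriminating boundary input

print(kills(weak_suite), kills(strong_suite))   # False True
```

The surviving mutant exposes the weak suite's missing boundary test, which is exactly the thoroughness signal mutation analysis is after.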

N

N-switch coverage: The percentage of sequences of N+1 transitions that have been exercised by a test suite. [Chow]

N-switch testing: A form of state transition testing in which test cases are designed to execute all valid sequences of N+1 transitions. [Chow] See also state transition testing.

negative testing: Tests aimed at showing that a component or system does not work. Negative testing is related to the testers’ attitude rather than a specific test approach or test design technique. [After Beizer].

non-conformity: Non fulfillment of a specified requirement. [ISO 9000]

non-functional requirement: A requirement that does not relate to functionality, but to attributes such as reliability, efficiency, usability, maintainability and portability.

non-functional testing: Testing the attributes of a component or system that do not relate to functionality, e.g. reliability, efficiency, usability, maintainability and portability.

non-functional test design techniques: Methods used to design or select tests for non-functional testing.

O

off-the-shelf software: A software product that is developed for the general market, i.e. for a large number of customers, and that is delivered to many customers in identical format.

operability: The capability of the software product to enable the user to operate and control it. [ISO 9126] See also usability.

operational environment: Hardware and software products installed at users’ or customers’ sites where the component or system under test will be used. The software may include operating systems, database management systems, and other applications.

operational profile testing: Statistical testing using a model of system operations (short duration tasks) and their probability of typical use. [Musa]

operational testing: Testing conducted to evaluate a component or system in its operational environment. [IEEE 610]

output: A variable (whether stored within a component or outside) that is written by a component.

output domain: The set from which valid output values can be selected. See also domain.

output value: An instance of an output. See also output.

P

pair programming: A software development approach whereby lines of code (production and/or test) of a component are written by two programmers sitting at a single computer. This implicitly means ongoing real-time code reviews are performed.

pair testing: Two testers work together to find defects. Typically, they share one computer and trade control of it while testing.

 

pass: A test is deemed to pass if its actual result matches its expected result.

pass/fail criteria: Decision rules used to determine whether a test item (function) or feature has passed or failed a test. [IEEE 829]

path: A sequence of events, e.g. executable statements, of a component or system from an entry point to an exit point.

path coverage: The percentage of paths that have been exercised by a test suite. 100% path coverage implies 100% LCSAJ coverage.

path sensitizing: Choosing a set of input values to force the execution of a given path.

path testing: A white box test design technique in which test cases are designed to execute paths.

performance: The degree to which a system or component accomplishes its designated functions within given constraints regarding processing time and throughput rate. [After IEEE 610] See efficiency.

performance indicator: A high level metric of effectiveness and/or efficiency used to guide and control progressive development, e.g. Defect Detection Percentage (DDP) for testing. [CMMI]

performance testing: The process of testing to determine the performance of a software product. See efficiency testing.

performance testing tool: A tool to support performance testing and that usually has two main facilities: load generation and test transaction measurement. Load generation can simulate either multiple users or high volumes of input data. During execution, response time measurements are taken from selected transactions and these are logged. Performance testing tools normally provide reports based on test logs and graphs of load against response times.

phase test plan: A test plan that typically addresses one test level.

portability: The ease with which the software product can be transferred from one hardware or software environment to another. [ISO 9126]

portability testing: The process of testing to determine the portability of a software product.

postcondition: Environmental and state conditions that must be fulfilled after the execution of a test or test procedure.

post-execution comparison: Comparison of actual and expected results, performed after the software has finished running.

precondition: Environmental and state conditions that must be fulfilled before the component or system can be executed with a particular test or test procedure.

priority: The level of (business) importance assigned to an item, e.g. defect.

process cycle test: A black box test design technique in which test cases are designed to execute business procedures and processes. [TMap]

process: A set of interrelated activities, which transform inputs into outputs. [ISO 12207]

project: A project is a unique set of coordinated and controlled activities with start and finish dates, undertaken to achieve an objective conforming to specific requirements, including the constraints of time, cost and resources. [ISO 9000]

project test plan: A test plan that typically addresses multiple test levels.

pseudo-random: A series which appears to be random but is in fact generated according to some prearranged sequence.
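
This is easy to demonstrate with a seeded generator: the output looks random, yet the same seed replays the same "prearranged" sequence, which is what makes random testing reproducible.

```python
# Pseudo-random sketch: a fixed seed yields a fixed sequence.
import random

def sample(seed):
    rng = random.Random(seed)            # seed selects the sequence
    return [rng.randint(1, 100) for _ in range(5)]

print(sample(42) == sample(42))   # True: same seed, same "random" values
```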

Q

quality: The degree to which a component, system or process meets specified requirements and/or user/customer needs and expectations. [After IEEE 610]

quality assurance: Part of quality management focused on providing confidence that quality requirements will be fulfilled. [ISO 9000]

quality attribute: A feature or characteristic that affects an item’s quality. [IEEE 610]

quality management: Coordinated activities to direct and control an organization with regard to quality. Direction and control with regard to quality generally includes the establishment of the quality policy and quality objectives, quality planning, quality control, quality assurance and quality improvement. [ISO 9000]

R

random testing: A black box test design technique where test cases are selected, possibly using a pseudo-random generation algorithm, to match an operational profile. This technique can be used for testing non-functional attributes such as reliability and performance.

recoverability: The capability of the software product to re-establish a specified level of performance and recover the data directly affected in case of failure. [ISO 9126] See also reliability.

recoverability testing: The process of testing to determine the recoverability of a software product. See also reliability testing.

regression testing: Testing of a previously tested program following modification to ensure that defects have not been introduced or uncovered in unchanged areas of the software, as a result of the changes made. It is performed when the software or its environment is changed.

release note: A document identifying test items, their configuration, current status and other delivery information delivered by development to testing, and possibly other stakeholders, at the start of a test execution phase. [After IEEE 829]

reliability: The ability of the software product to perform its required functions under stated conditions for a specified period of time, or for a specified number of operations. [ISO 9126]

reliability testing: The process of testing to determine the reliability of a software product.

replaceability: The capability of the software product to be used in place of another specified software product for the same purpose in the same environment. [ISO 9126] See also portability.

requirement: A condition or capability needed by a user to solve a problem or achieve an objective that must be met or possessed by a system or system component to satisfy a contract, standard, specification, or other formally imposed document. [After IEEE 610]

requirements-based testing: An approach to testing in which test cases are designed based on test objectives and test conditions derived from requirements, e.g. tests that exercise specific functions or probe non-functional attributes such as reliability or usability.

requirements management tool: A tool that supports the recording of requirements, requirements attributes (e.g. priority, knowledge responsible) and annotation, and facilitates traceability through layers of requirements and requirements change management. Some requirements management tools also provide facilities for static analysis, such as consistency checking and violations to pre-defined requirements rules.

requirements phase: The period of time in the software life cycle during which the requirements for a software product are defined and documented. [IEEE 610]

resource utilization: The capability of the software product to use appropriate amounts and types of resources, for example the amounts of main and secondary memory used by the program and the sizes of required temporary or overflow files, when the software performs its function under stated conditions. [After ISO 9126] See also efficiency.

resource utilization testing: The process of testing to determine the resource-utilization of a software product.

result: The consequence/outcome of the execution of a test. It includes outputs to screens, changes to data, reports, and communication messages sent out. See also actual result, expected result.

resumption criteria: The testing activities that must be repeated when testing is re-started after a suspension. [After IEEE 829]

re-testing: Testing that runs test cases that failed the last time they were run, in order to verify the success of corrective actions.

review: An evaluation of a product or project status to ascertain discrepancies from planned results and to recommend improvements. Examples include management review, informal review, technical review, inspection, and walkthrough. [After IEEE 1028]

reviewer: The person involved in the review who shall identify and describe anomalies in the product or project under review. Reviewers can be chosen to represent different viewpoints and roles in the review process.

risk: A factor that could result in future negative consequences; usually expressed as impact and likelihood.

risk analysis: The process of assessing identified risks to estimate their impact and probability of occurrence (likelihood).

risk-based testing: Testing oriented towards exploring and providing information about product risks. [After Gerrard]

risk control: The process through which decisions are reached and protective measures are implemented for reducing risks to, or maintaining risks within, specified levels.

risk identification: The process of identifying risks using techniques such as brainstorming, checklists and failure history.

risk management: Systematic application of procedures and practices to the tasks of identifying, analyzing, prioritizing, and controlling risk.

robustness: The degree to which a component or system can function correctly in the presence of invalid inputs or stressful environmental conditions. [IEEE 610] See also error tolerance, fault tolerance.

root cause: An underlying factor that caused a non-conformance and possibly should be permanently eliminated through process improvement.

 

S

safety: The capability of the software product to achieve acceptable levels of risk of harm to people, business, software, property or the environment in a specified context of use. [ISO 9126]

safety testing: The process of testing to determine the safety of a software product.

scalability: The capability of the software product to be upgraded to accommodate increased loads. [After Gerrard]

scalability testing: Testing to determine the scalability of the software product.

scribe: The person who has to record each defect mentioned and any suggestions for improvement during a review meeting, on a logging form. The scribe has to ensure that the logging form is readable and understandable.

scripting language: A programming language in which executable test scripts are written, used by a test execution tool (e.g. a capture/replay tool).

security: Attributes of software products that bear on its ability to prevent unauthorized access, whether accidental or deliberate, to programs and data. [ISO 9126]

security testing: Testing to determine the security of the software product.

severity: The degree of impact that a defect has on the development or operation of a component or system. [After IEEE 610]

simulation: The representation of selected behavioral characteristics of one physical or abstract system by another system. [ISO 2382/1]

simulator: A device, computer program or system used during testing, which behaves or operates like a given system when provided with a set of controlled inputs. [After IEEE 610, DO178b] See also emulator.

smoke test: A subset of all defined/planned test cases that covers the main functionality of a component or system, to ascertain that the most crucial functions of a program work, without bothering with finer details. A daily build and smoke test is among industry best practices. See also intake test.
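As a concrete illustration, a smoke test exercises only the crucial path of a component and fails fast if it is broken. This is a minimal sketch; the `Cart` class is a hypothetical component standing in for the system under test.

```python
# A minimal smoke-test sketch: exercise only the most crucial function
# of a (hypothetical) shopping-cart component, skipping finer details.

class Cart:
    """Stand-in for the component under test."""
    def __init__(self):
        self.items = {}

    def add(self, sku, qty=1):
        self.items[sku] = self.items.get(sku, 0) + qty

    def total_quantity(self):
        return sum(self.items.values())

def smoke_test():
    """Fail fast if core functionality is broken; return True otherwise."""
    cart = Cart()
    cart.add("SKU-1")
    cart.add("SKU-1", 2)
    assert cart.total_quantity() == 3, "core add/total path is broken"
    return True
```

In a daily-build setup, a script like this would run immediately after each build, before any deeper testing is attempted.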

software quality: The totality of functionality and features of a software product that bear on its ability to satisfy stated or implied needs. [After ISO 9126]

specification: A document that specifies, ideally in a complete, precise and verifiable manner, the requirements, design, behavior, or other characteristics of a component or system, and, often, the procedures for determining whether these provisions have been satisfied. [After IEEE 610]

specification-based test design technique: See black box test design technique.

specified input: An input for which the specification predicts a result.

stability: The capability of the software product to avoid unexpected effects from modifications in the software. [ISO 9126] See also maintainability.

 

state diagram: A diagram that depicts the states that a component or system can assume, and shows the events or circumstances that cause and/or result from a change from one state to another. [IEEE 610]

state table: A grid showing the resulting transitions for each state combined with each possible event, showing both valid and invalid transitions.

state transition: A transition between two states of a component or system.

state transition testing: A black box test design technique in which test cases are designed to execute valid and invalid state transitions. See also N-switch testing.
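To make the valid/invalid distinction concrete, here is a sketch of state transition testing against a hypothetical door controller (states and events are invented for illustration): valid transitions must produce the expected next state, and invalid ones must be rejected.

```python
# Valid transitions of a hypothetical door controller:
# (current state, event) -> next state
VALID = {
    ("closed", "open"): "opened",
    ("opened", "close"): "closed",
    ("closed", "lock"): "locked",
    ("locked", "unlock"): "closed",
}

def transition(state, event):
    """Return the next state, or raise ValueError on an invalid transition."""
    try:
        return VALID[(state, event)]
    except KeyError:
        raise ValueError(f"invalid transition: {event} in state {state}")
```

Test cases would then cover each valid pair once, plus invalid pairs such as trying to lock an opened door, which must raise an error rather than silently change state.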

statement: An entity in a programming language, which is typically the smallest indivisible unit of execution.

statement coverage: The percentage of executable statements that have been exercised by a test suite.

statement testing: A white box test design technique in which test cases are designed to execute statements.
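A small sketch of how statement coverage grows as test cases are added. The `executed` set is manual instrumentation purely for illustration; a real project would use a coverage tool rather than hand-tagging statements.

```python
# Manual instrumentation to illustrate statement coverage; the three
# tags s1..s3 mark the three executable statements of grade().
executed = set()

def grade(score):
    executed.add("s1")
    if score >= 50:
        executed.add("s2")
        return "pass"
    executed.add("s3")
    return "fail"

def statement_coverage():
    """Percentage of the 3 tagged statements executed so far."""
    return len(executed) / 3 * 100
```

A single test case such as `grade(80)` executes only s1 and s2; a second case like `grade(10)` is needed to reach s3 and achieve 100% statement coverage.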

static analysis: Analysis of software artifacts, e.g. requirements or code, carried out without execution of these software artifacts.

static analyzer: A tool that carries out static analysis.

static code analysis: Analysis of program source code carried out without execution of that software.

static code analyzer: A tool that carries out static code analysis. The tool checks source code, for certain properties such as conformance to coding standards, quality metrics or data flow anomalies.

static testing: Testing of a component or system at specification or implementation level without execution of that software, e.g. reviews or static code analysis.

statistical testing: A test design technique in which a model of the statistical distribution of the input is used to construct representative test cases. See also operational profile testing.

status accounting: An element of configuration management, consisting of the recording and reporting of information needed to manage a configuration effectively. This information includes a listing of the approved configuration identification, the status of proposed changes to the configuration, and the implementation status of the approved changes. [IEEE 610]

 

stress testing: Testing conducted to evaluate a system or component at or beyond the limits of its specified requirements. [IEEE 610]

structural coverage: Coverage measures based on the internal structure of the component.

structural test design technique: See white box test design technique.

stub: A skeletal or special-purpose implementation of a software component, used to develop or test a component that calls or is otherwise dependent on it. It replaces a called component. [After IEEE 610]
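A sketch of a stub in code, assuming a hypothetical pricing backend as the called component: the stub returns canned values so the calling component can be developed and tested in isolation.

```python
class PriceServiceStub:
    """Skeletal stand-in for a real pricing backend (hypothetical).
    Returns canned prices instead of querying any external system."""
    def price_of(self, sku):
        return {"SKU-1": 100, "SKU-2": 250}.get(sku, 0)

def order_total(skus, price_service):
    """Component under test: depends on a price service it calls."""
    return sum(price_service.price_of(s) for s in skus)
```

The test passes the stub in place of the real dependency, e.g. `order_total(["SKU-1", "SKU-2"], PriceServiceStub())`, so the calculation can be verified without the backend existing.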

subpath: A sequence of executable statements within a component.

suspension criteria: The criteria used to (temporarily) stop all or a portion of the testing activities on the test items. [After IEEE 829]

suitability: The capability of the software product to provide an appropriate set of functions for specified tasks and user objectives. [ISO 9126] See also functionality.

Software Usability Measurement Inventory (SUMI): A questionnaire based usability test technique to evaluate the usability, e.g. user-satisfaction, of a component or system. [Veenendaal]

syntax testing: A black box test design technique in which test cases are designed based upon the definition of the input domain and/or output domain.

system: A collection of components organized to accomplish a specific function or set of functions. [IEEE 610]

system integration testing: Testing the integration of systems and packages; testing interfaces to external organizations (e.g. Electronic Data Interchange, Internet).

system testing: The process of testing an integrated system to verify that it meets specified requirements. [Hetzel]

 

T

technical review: A peer group discussion activity that focuses on achieving consensus on the technical approach to be taken. A technical review is also known as a peer review. [Gilb and Graham, IEEE 1028]

test approach: The implementation of the test strategy for a specific project. It typically includes the decisions made that follow based on the (test) project’s goal and the risk assessment carried out, starting points regarding the test process, the test design techniques to be applied, exit criteria and test types to be performed.

test automation: The use of software to perform or support test activities, e.g. test management, test design, test execution and results checking.

test basis: All documents from which the requirements of a component or system can be inferred. The documentation on which the test cases are based. If a document can be amended only by way of formal amendment procedure, then the test basis is called a frozen test basis. [After TMap]

test case: A set of input values, execution preconditions, expected results and execution postconditions, developed for a particular objective or test condition, such as to exercise a particular program path or to verify compliance with a specific requirement. [After IEEE 610]

test case specification: A document specifying a set of test cases (objective, inputs, test actions, expected results, and execution preconditions) for a test item. [After IEEE 829]

test charter: A statement of test objectives, and possibly test ideas. Test charters are used, among other things, in exploratory testing. See also exploratory testing.

test comparator: A test tool to perform automated test comparison.

test comparison: The process of identifying differences between the actual results produced by the component or system under test and the expected results for a test. Test comparison can be performed during test execution (dynamic comparison) or after test execution.

test condition: An item or event of a component or system that could be verified by one or more test cases, e.g. a function, transaction, quality attribute, or structural element.

test data: Data that exists (for example, in a database) before a test is executed, and that affects or is affected by the component or system under test.

test data preparation tool: A type of test tool that enables data to be selected from existing databases or created, generated, manipulated and edited for use in testing.

 

test design specification: A document specifying the test conditions (coverage items) for a test item, the detailed test approach and identifying the associated high level test cases. [After IEEE 829]

test design tool: A tool that supports the test design activity by generating test inputs from a specification that may be held in a CASE tool repository, e.g. a requirements management tool, or from specified test conditions held in the tool itself.

test design technique: A method used to derive or select test cases.

test environment: An environment containing hardware, instrumentation, simulators, software tools, and other support elements needed to conduct a test. [After IEEE 610]

test evaluation report: A document produced at the end of the test process summarizing all testing activities and results. It also contains an evaluation of the test process and lessons learned.

test execution: The process of running a test on the component or system under test, producing actual result(s).

test execution automation: The use of software, e.g. capture/playback tools, to control the execution of tests, the comparison of actual results to expected results, the setting up of test preconditions, and other test control and reporting functions.

test execution phase: The period of time in a software development life cycle during which the components of a software product are executed, and the software product is evaluated to determine whether or not requirements have been satisfied. [IEEE 610]

test execution schedule: A scheme for the execution of test procedures. The test procedures are included in the test execution schedule in their context and in the order in which they are to be executed.

test execution technique: The method used to perform the actual test execution, either manually or automated.

test execution tool: A type of test tool that is able to execute other software using an automated test script, e.g. capture/playback. [Fewster and Graham]

test harness: A test environment comprised of stubs and drivers needed to conduct a test.

test infrastructure: The organizational artifacts needed to perform testing, consisting of test environments, test tools, office environment and procedures.

test item: The individual element to be tested. There usually is one test object and many test items. See also test object.

test level: A group of test activities that are organized and managed together. A test level is linked to the responsibilities in a project. Examples of test levels are component test, integration test, system test and acceptance test. [After TMap]

test log: A chronological record of relevant details about the execution of tests. [IEEE 829]

test logging: The process of recording information about tests executed into a test log.

test manager: The person responsible for testing and evaluating a test object. The individual who directs, controls, administers, plans and regulates the evaluation of a test object.

test management: The planning, estimating, monitoring and control of test activities, typically carried out by a test manager.

Test Maturity Model (TMM): A five level staged framework for test process improvement, related to the Capability Maturity Model (CMM) that describes the key elements of an effective test process.

Test Process Improvement (TPI): A continuous framework for test process improvement that describes the key elements of an effective test process, especially targeted at system testing and acceptance testing.

test object: The component or system to be tested. See also test item.

test objective: A reason or purpose for designing and executing a test.

test oracle: A source to determine expected results to compare with the actual result of the software under test. An oracle may be the existing system (for a benchmark), a user manual, or an individual’s specialized knowledge, but should not be the code. [After Adrion]
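The "existing system as a benchmark" case can be sketched in code: here the oracle for a hand-rolled sort is Python's built-in `sorted()`, a trusted reference implementation rather than the code under test itself.

```python
def my_sort(xs):
    """Implementation under test: a simple insertion sort."""
    result = []
    for x in xs:
        i = 0
        while i < len(result) and result[i] <= x:
            i += 1
        result.insert(i, x)
    return result

def check_against_oracle(xs):
    """Compare the implementation's output with the oracle's expected result."""
    return my_sort(xs) == sorted(xs)
```

Note the oracle is deliberately independent of the implementation; comparing `my_sort` against itself would detect nothing.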

test performance indicator: A metric, in general high level, indicating to what extent a certain target value or criterion is met. Often related to test process improvement objectives, e.g. Defect Detection Percentage (DDP).

test phase: A distinct set of test activities collected into a manageable phase of a project, e.g. the execution activities of a test level. [After Gerrard]

test plan: A document describing the scope, approach, resources and schedule of intended test activities. It identifies, amongst others, test items, the features to be tested, the testing tasks, who will do each task, the degree of tester independence, the test environment, the test design techniques and test measurement techniques to be used, and the rationale for their choice, and any risks requiring contingency planning. It is a record of the test planning process. [After IEEE 829]

test planning: The activity of establishing or updating a test plan.

test policy: A high level document describing the principles, approach and major objectives of the organization regarding testing.

test point analysis (TPA): A formula based test estimation method based on function point analysis. [TMap]

test procedure: See test procedure specification.

test procedure specification: A document specifying a sequence of actions for the execution of a test. Also known as test script or manual test script. [After IEEE 829]

test process: The fundamental test process comprises planning, specification, execution, recording and checking for completion. [BS 7925/2]

test repeatability: An attribute of a test indicating whether the same results are produced each time the test is executed.

test run: Execution of a test on a specific version of the test object.

test script: Commonly used to refer to a test procedure specification, especially an automated one.

test specification: A document that consists of a test design specification, test case specification and/or test procedure specification.

test strategy: A high-level document defining the test levels to be performed and the testing within those levels for a programme (one or more projects).

test suite: A set of several test cases for a component or system under test, where the post condition of one test is often used as the precondition for the next one.
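A sketch of such a suite using Python's `unittest`, where the post condition of the first test (a created account) is the precondition of the second (a deposit). The account/deposit scenario is illustrative, not from any particular system.

```python
import unittest

class Account:
    """Shared state linking the two tests (illustrative)."""
    registry = {}

class CreateAccountTest(unittest.TestCase):
    def runTest(self):
        Account.registry["alice"] = 0
        self.assertIn("alice", Account.registry)

class DepositTest(unittest.TestCase):
    def runTest(self):
        # Relies on the account created by the previous test.
        Account.registry["alice"] += 50
        self.assertEqual(Account.registry["alice"], 50)

def build_suite():
    suite = unittest.TestSuite()
    suite.addTest(CreateAccountTest())  # order matters in this suite
    suite.addTest(DepositTest())
    return suite
```

Suites with such ordering dependencies are fragile if run partially or reordered, which is why many teams prefer independent test cases where feasible.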

test summary report: A document summarizing testing activities and results. It also contains an evaluation of the corresponding test items against exit criteria. [After IEEE 829]

test target: A set of exit criteria.

test tool: A software product that supports one or more test activities, such as planning and control, specification, building initial files and data, test execution and test analysis. [TMap] See also CAST.

test type: A group of test activities aimed at testing a component or system regarding one or more interrelated quality attributes. A test type is focused on a specific test objective, i.e. reliability test, usability test, regression test etc., and may take place on one or more test levels or test phases. [After TMap]

testability: The capability of the software product to enable modified software to be tested. [ISO 9126] See also maintainability.

testability review: A detailed check of the test basis to determine whether the test basis is at an adequate quality level to act as an input document for the test process. [After TMap]

testable requirements: The degree to which a requirement is stated in terms that permit establishment of test designs (and subsequently test cases) and execution of tests to determine whether the requirements have been met. [After IEEE 610]

tester: A technically skilled professional who is involved in the testing of a component or system.

testing: The process consisting of all life cycle activities, both static and dynamic, concerned with planning, preparation and evaluation of software products and related work products to determine that they satisfy specified requirements, to demonstrate that they are fit for purpose and to detect defects.

testware: Artifacts produced during the test process required to plan, design, and execute tests, such as documentation, scripts, inputs, expected results, set-up and clear-up procedures, files, databases, environment, and any additional software or utilities used in testing. [After Fewster and Graham]

thread testing: A version of component integration testing where the progressive integration of components follows the implementation of subsets of the requirements, as opposed to the integration of components by levels of a hierarchy.

traceability: The ability to identify related items in documentation and software, such as requirements with associated tests. See also horizontal traceability, vertical traceability.

top-down testing: An incremental approach to integration testing where the component at the top of the component hierarchy is tested first, with lower level components being simulated by stubs. Tested components are then used to test lower level components. The process is repeated until the lowest level components have been tested.

 

U

understandability: The capability of the software product to enable the user to understand whether the software is suitable, and how it can be used for particular tasks and conditions of use. [ISO 9126] See also usability.

unreachable code: Code that cannot be reached and therefore is impossible to execute.

usability: The capability of the software to be understood, learned, used and attractive to the user when used under specified conditions. [ISO 9126]

usability testing: Testing to determine the extent to which the software product is understood, easy to learn, easy to operate and attractive to the users under specified conditions. [After ISO 9126]

use case testing: A black box test design technique in which test cases are designed to execute user scenarios.

user test: A test whereby real-life users are involved to evaluate the usability of a component or system.

 

V

V-model: A framework to describe the software development life cycle activities from requirements specification to maintenance. The V-model illustrates how testing activities can be integrated into each phase of the software development life cycle.

validation: Confirmation by examination and through provision of objective evidence that the requirements for a specific intended use or application have been fulfilled. [ISO 9000]

variable: An element of storage in a computer that is accessible by a software program by referring to it by a name.

verification: Confirmation by examination and through the provision of objective evidence that specified requirements have been fulfilled. [ISO 9000]

vertical traceability: The tracing of requirements through the layers of development documentation to components.

volume testing: Testing where the system is subjected to large volumes of data. See also resource-utilization testing.

 

W

walkthrough: A step-by-step presentation by the author of a document in order to gather information and to establish a common understanding of its content. [Freedman and Weinberg, IEEE 1028]

white box test design technique: Documented procedure to derive and select test cases based on an analysis of the internal structure of a component or system.

white box testing: Testing based on an analysis of the internal structure of the component or system.

Wide Band Delphi: An expert based test estimation technique that aims at making an accurate estimation using the collective wisdom of the team members.

Testing Life Cycle

Life Cycle of Testing Process. This article explains the different steps in the life cycle of the testing process. Each phase of the development process has a specific input and a specific output. Once the project is confirmed to start, the development of the project can be divided into the following phases:

  • Software requirements phase.
  • Software Design
  • Implementation
  • Testing
  • Maintenance

In the whole development process, testing consumes the highest amount of time. But most developers overlook this, and the testing phase is generally neglected. As a consequence, erroneous software is released. The testing team should be involved right from the requirements stage itself.

The various phases involved in testing, with regard to the software development life cycle are:

  1. Requirements stage
  2. Test plan
  3. Test design
  4. Design reviews
  5. Code reviews
  6. Test cases preparation
  7. Test execution
  8. Test reports
  9. Bugs reporting
  10. Reworking on patches
  11. Release to production

Requirements Stage

Normally, in many companies, developers themselves take part in the requirements stage. Especially in product-based companies, a tester should also be involved at this stage, since a tester thinks from the user's side in a way a developer may not. A separate panel should be formed for each module, comprising a developer, a tester and a user. Panel meetings should be scheduled to gather everyone's views. All the requirements should be documented properly for further use; this document is called the "Software Requirements Specification".

Test Plan

Without a good plan, no work succeeds, and the testing process is no exception. The test plan is the most important document for bringing in a process-oriented approach, and it should be prepared after the requirements of the project are confirmed. The test plan document must contain the following information:

• Total number of features to be tested.
• Testing approaches to be followed.
• The testing methodologies
• Number of man-hours required.
• Resources required for the whole testing process.
• The testing tools that are to be used.
• The test cases, etc.

Test Design

Test design is done based on the requirements of the project, and depends on whether manual or automated testing will be done. For automation testing, the different paths for testing are identified first. An end-to-end checklist has to be prepared covering all the features of the project. The test design is represented pictographically and involves various stages, which can be summarized as follows:

• The different modules of the software are identified first.
• Next, the paths connecting all the modules are identified.

Then the design is drawn. The test design is the most critical stage, as it decides the test case preparation; the test design therefore determines the quality of the testing process.

Test Cases Preparation

Test cases should be prepared based on the following scenarios:

• Positive scenarios
• Negative scenarios
• Boundary conditions and
• Real World scenarios
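The four scenario types above can be sketched against a hypothetical age-validation function (the function and its 0–120 range are assumptions for illustration):

```python
def is_valid_age(age):
    """Hypothetical function under test: accepts integers 0..120."""
    return isinstance(age, int) and 0 <= age <= 120

def scenario_tests():
    assert is_valid_age(30)                       # positive scenario
    assert not is_valid_age(-5)                   # negative scenario
    assert not is_valid_age("thirty")             # negative: wrong type
    assert is_valid_age(0) and is_valid_age(120)  # boundary conditions
    assert not is_valid_age(121)                  # just outside the boundary
    assert is_valid_age(65)                       # real-world scenario
    return True
```

Boundary cases (0, 120, 121) tend to catch off-by-one errors that positive scenarios alone would miss.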

Design Reviews

The software design is done in a systematic manner, for example using UML. The tester can review the design and suggest ideas and any modifications needed.

Code Reviews

Code reviews are similar to unit testing. Once the code is ready for release, the tester should be ready to do unit testing on it, with his own unit test cases prepared. Though a developer does the unit testing, a tester must also do it: developers may overlook some of the minute mistakes in the code that a tester can find.

Test Execution and Bugs Reporting

Once the unit testing is completed and the code is released to QA, functional testing is done. Top-level testing is done at the beginning to find top-level failures; any such failures should be reported to the developer immediately to get the required workaround. The test reports should be documented properly, and the bugs have to be reported to the developer after the testing is completed.

Release to Production

Once the bugs are fixed, another release is given to QA with the modified changes, and regression testing is executed. Once QA assures the software, it is released to production; before release, another round of top-level testing is done. The testing process is iterative: once bugs are fixed, testing has to be done again. Thus the testing process is, in effect, unending.

Types of software Testing

March 14, 2008

Software Testing Types:

Black box testing – Internal system design is not considered in this type of testing. Tests are based on requirements and functionality.

White box testing – This testing is based on knowledge of the internal logic of an application’s code. Also known as Glass box Testing. Internal software and code working should be known for this type of testing. Tests are based on coverage of code statements, branches, paths, conditions.

 

Unit testing – Testing of individual software components or modules. Typically done by the programmer and not by testers, as it requires detailed knowledge of the internal program design and code. May require developing test driver modules or test harnesses.

Incremental integration testing – Bottom-up approach to testing, i.e. continuous testing of an application as new functionality is added. Application functionality and modules should be independent enough to test separately. Done by programmers or by testers.

Integration testing – Testing of integrated modules to verify combined functionality after integration. Modules are typically code modules, individual applications, client and server applications on a network, etc. This type of testing is especially relevant to client/server and distributed systems.

Functional testing – This type of testing ignores the internal parts and focuses on whether the output is as per requirements. Black-box type testing geared to the functional requirements of an application.

System testing – Entire system is tested as per the requirements. Black-box type testing that is based on overall requirements specifications, covers all combined parts of a system.

End-to-end testing – Similar to system testing, involves testing of a complete application environment in a situation that mimics real-world use, such as interacting with a database, using network communications, or interacting with other hardware, applications, or systems if appropriate.

Sanity testing – Testing to determine whether a new software version is performing well enough to accept it for a major testing effort. If the application crashes in initial use, the system is not stable enough for further testing, and the build or application is sent back to be fixed.

Regression testing – Testing the application as a whole after modification of any module or functionality. Since it is difficult to cover the entire system in regression testing, automation tools are typically used for this type of testing.

Acceptance testing – Normally this type of testing is done to verify that the system meets the customer-specified requirements. The user or customer does this testing to determine whether to accept the application.

Load testing – Performance testing to check system behavior under load: testing an application under heavy loads, such as testing a web site under a range of loads to determine at what point the system’s response time degrades or fails.

Stress testing – The system is stressed beyond its specifications to check how and when it fails. Performed under heavy load, such as input beyond storage capacity, complex database queries, or continuous input to the system or database.

Performance testing – Term often used interchangeably with ‘stress’ and ‘load’ testing: checking whether the system meets performance requirements, using various performance and load tools.

Usability testing – User-friendliness check. The application flow is tested: can a new user understand the application easily, and is proper help documented for whenever a user gets stuck? Basically, system navigation is checked in this testing.

Install/uninstall testing – Testing of full, partial, or upgrade install/uninstall processes on different operating systems under different hardware and software environments.

Recovery testing – Testing how well a system recovers from crashes, hardware failures, or other catastrophic problems.

Security testing – Testing how well the system protects against unauthorized internal or external access, and whether it can be penetrated by any hacking method. Checks that the system and database are safe from external attacks.

Compatibility testing – Testing how well software performs in a particular hardware/software/operating system/network environment, and in different combinations of the above.

Comparison testing – Comparison of product strengths and weaknesses with previous versions or other similar products.

Alpha testing – An in-house virtual user environment can be created for this type of testing. Testing is done at the end of development; minor design changes may still be made as a result of such testing.

Beta testing – Testing typically done by end-users or others. Final testing before releasing the application for commercial purposes.

Web Testing, Example Test cases

WEB TESTING
While testing a web application you need to consider following Cases:

• Functionality Testing
• Performance Testing
• Usability Testing
• Server Side Interface
• Client Side Compatibility
• Security

Functionality:
In testing the functionality of the web sites the following should be tested:
• Links
i. Internal Links
ii. External Links
iii. Mail Links
iv. Broken Links

 

• Forms
i. Field validation
ii. Error message for wrong input
iii. Optional and Mandatory fields

• Database
* Testing will be done on the database integrity.

• Cookies
* Testing will be done on the client system side, on the temporary Internet files.

Performance :
Performance testing can be applied to understand the web site’s scalability, or to benchmark the performance in the environment of third party products such as servers and middleware for potential purchase.

• Connection Speed:
Tested over various networks like Dial-up, ISDN, etc.
• Load:
i. What is the number of concurrent users over time?
ii. Check for peak loads and how system behaves
iii. Large amount of data accessed by user
• Stress:
i. Continuous Load
ii. Performance of memory, CPU, file handling etc..

Usability:
Usability testing is the process by which the human-computer interaction characteristics of a system are measured, and weaknesses are identified for correction.
• Ease of learning
• Navigation
• Subjective user satisfaction
• General appearance

Server Side Interface:
In web testing the server side interface should be tested. This is done by verifying that communication happens properly. Compatibility of the server with software, hardware, network and database should be tested.

Client Side Compatibility:
Client side compatibility is also tested on various platforms, using various browsers, etc.

Security:
The primary reason for testing the security of a web site is to identify potential vulnerabilities and subsequently repair them.
• Network Scanning
• Vulnerability Scanning
• Password Cracking
• Log Review
• Integrity Checkers
• Virus Detection

Web Testing: Complete guide on testing web applications

In my previous post I outlined the points to be considered while testing web applications. Here we will see some more details on web application testing, with web testing test cases. I always like to share practical knowledge that can be useful to readers in their careers. This is quite a long article, so sit back and relax to get the most out of it.

Let’s have first web testing checklist.
1) Functionality Testing
2) Usability testing
3) Interface testing
4) Compatibility testing
5) Performance testing
6) Security testing

1) Functionality Testing:

Test all the links in web pages, the database connection, the forms used in the web pages for submitting or getting information from the user, and cookies.

Check all the links:

  • Test the outgoing links from all the pages of the specific domain under test.
  • Test all internal links.
  • Test links that jump within the same page.
  • Test links used to send email to the admin or other users from web pages.
  • Test to check whether there are any orphan pages.
  • Lastly, check for broken links in all of the above-mentioned links.
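The link checks above can be sketched with only the Python standard library. This is a minimal illustration, not a full crawler: the function names (`extract_links`, `check_link`) and the sample HTML are my own, and a real tool would also handle redirects, mailto: links and orphan-page detection.

```python
from html.parser import HTMLParser
from urllib.error import HTTPError, URLError
from urllib.parse import urljoin
from urllib.request import urlopen

class LinkExtractor(HTMLParser):
    """Collect href targets from <a> tags on one page."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def extract_links(page_html, base_url):
    """Return absolute URLs for every <a href> on the page."""
    parser = LinkExtractor()
    parser.feed(page_html)
    return [urljoin(base_url, link) for link in parser.links]

def check_link(url, timeout=10):
    """Return True if the URL responds without an HTTP error (broken-link check)."""
    try:
        with urlopen(url, timeout=timeout) as response:
            return response.status < 400
    except (HTTPError, URLError):
        return False

# Internal and external links are resolved against the page's base URL.
page = '<a href="/about">About</a><a href="http://example.com/x">X</a>'
assert extract_links(page, "http://example.com/") == [
    "http://example.com/about", "http://example.com/x"]
```

Running `check_link` over the extracted list covers the broken-link step; the extraction itself can be tested offline, as shown.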

Test forms in all pages:
Forms are an integral part of any web site. Forms are used to get information from users and to interact with them. So what should be checked on these forms?

  • First check all the validations on each field.
  • Check the default values of the fields.
  • Check wrong inputs to the fields in the forms.
  • Check the options to create, delete, view or modify forms, if any.
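As a sketch of the field checks above (validations, defaults, wrong inputs), here is a small data-driven validator. The form fields, rules and the email pattern are illustrative assumptions, not taken from a real project.

```python
import re

# A deliberately simple email pattern, for illustration only.
EMAIL_RE = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")

def validate_signup(form):
    """Return a list of validation errors for a signup form dict."""
    errors = []
    if not form.get("username"):
        errors.append("username is required")
    if not EMAIL_RE.match(form.get("email", "")):
        errors.append("email is invalid")
    if len(form.get("password", "")) < 8:
        errors.append("password must be at least 8 characters")
    return errors

# Valid input passes; wrong inputs are reported, as the checklist asks.
assert validate_signup({"username": "tester", "email": "qa@example.com",
                        "password": "s3cret-pw"}) == []
assert validate_signup({"username": "", "email": "not-an-email",
                        "password": "short"}) == [
    "username is required",
    "email is invalid",
    "password must be at least 8 characters",
]
```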

Let’s take example of the search engine project currently I am working on, In this project we have advertiser and affiliate signup steps. Each sign up step is different but dependent on other steps. So sign up flow should get executed correctly. There are different field validations like email Ids, User financial info validations. All these validations should get checked in manual or automated web testing.

Cookies testing:
Cookies are small files stored on the user's machine. They are basically used to maintain sessions, mainly login sessions. Test the application by enabling or disabling cookies in your browser options. Test whether the cookies are encrypted before being written to the user's machine. If you are testing session cookies (i.e. cookies that expire after the session ends), check login sessions and user stats after the session ends. Check the effect on application security of deleting the cookies. (I will soon write a separate article on cookie testing.)
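Some of these cookie checks can be automated. The sketch below assumes you have captured the server's Set-Cookie header as a string (the header value shown is made up); it uses the standard `http.cookies` module to inspect the Secure and HttpOnly attributes of the session cookie.

```python
from http.cookies import SimpleCookie

def audit_session_cookie(set_cookie_header, name="sessionid"):
    """Return a list of findings about a session cookie's attributes."""
    jar = SimpleCookie()
    jar.load(set_cookie_header)
    morsel = jar.get(name)
    if morsel is None:
        return ["cookie '%s' was not set" % name]
    findings = []
    if not morsel["secure"]:
        findings.append("missing Secure flag (sent over plain HTTP)")
    if not morsel["httponly"]:
        findings.append("missing HttpOnly flag (readable by scripts)")
    return findings

# A cookie with both flags passes; one without is flagged.
assert audit_session_cookie("sessionid=abc123; Secure; HttpOnly") == []
assert audit_session_cookie("sessionid=abc123") == [
    "missing Secure flag (sent over plain HTTP)",
    "missing HttpOnly flag (readable by scripts)",
]
```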

Validate your HTML/CSS:
If you are optimizing your site for search engines then HTML/CSS validation is very important. Mainly validate the site for HTML syntax errors. Check whether the site is crawlable by different search engines.

Database testing:
Data consistency is very important in a web application. Check for data integrity and errors while you edit, delete or modify forms, or do any DB-related functionality.
Check whether all database queries execute correctly, and whether data is retrieved and updated correctly. Database testing can also cover load on the DB; we will address this under web load and performance testing below.

2) Usability Testing:

Test for navigation:
Navigation means how the user surfs the web pages: the different controls like buttons and boxes, and how the user uses the links on the pages to surf to different pages.
Usability testing includes:
The web site should be easy to use. Instructions should be provided clearly. Check whether the provided instructions are correct, i.e. whether they satisfy their purpose.
The main menu should be provided on each page, and it should be consistent.

Content checking:
Content should be logical and easy to understand. Check for spelling errors. Dark colors annoy users and should not be used in the site theme. You can follow some of the commonly accepted standards used for web page and content building, like the ones I mentioned above about annoying colors, fonts, frames etc.
Content should be meaningful. All anchor text links should work properly. Images should be placed properly, with proper sizes.
These are some basic standards that should be followed in web development. Your task is to validate all of them during UI testing.

Other user information for user help:
This covers things like a search option, sitemap, help files etc. The sitemap should list all the links in the web site, with a proper tree view of the navigation. Check all the links on the sitemap.
A 'search in the site' option will help users find the content pages they are looking for easily and quickly. These are all optional items, and if present they should be validated.

3) Interface Testing:
The main interfaces are:
Web server and application server interface
Application server and Database server interface.

Check whether all interactions between these servers execute properly and errors are handled properly. If the database or web server returns an error message for a query from the application server, then the application server should catch it and display the error message appropriately to users. Check what happens if the user interrupts a transaction in between. Check what happens if the connection to the web server is reset in between.

4) Compatibility Testing:
Compatibility of your web site is a very important testing aspect. These are the compatibility tests to be executed:

  • Browser compatibility
  • Operating system compatibility
  • Mobile browsing
  • Printing options

Browser compatibility:
In my web-testing career I have found this to be the most influential part of web site testing.
Some applications are very dependent on browsers. Different browsers have different configurations and settings that your web page should be compatible with. Your web site code should be cross-browser compatible. If you are using JavaScript or AJAX calls for UI functionality, security checks or validations, then put more stress on browser compatibility testing of your web application.
Test the web application on different browsers like Internet Explorer, Firefox, Netscape Navigator, AOL, Safari and Opera, with different versions.

OS compatibility:
Some functionality in your web application may not be compatible with all operating systems. New technologies used in web development, like graphics designs and interface calls such as different APIs, may not be available on all operating systems.
Test your web application on different operating systems like Windows, Unix, Mac, Linux and Solaris, with different OS flavors.

Mobile browsing:
This is a new technology age, so in the future mobile browsing will rock. Test your web pages on mobile browsers. Compatibility issues may exist on mobile.

Printing options:
If you are providing page-printing options then make sure fonts, page alignment and page graphics get printed properly. Pages should fit the paper size, or the size mentioned in the printing options.

5) Performance testing:
A web application should sustain heavy load. Web performance testing should include:
Web Load Testing
Web Stress Testing

Test application performance on different internet connection speeds.
In web load testing, test what happens when many users access or request the same page. Can the system sustain peak load times? The site should handle many simultaneous user requests, large input data from users, simultaneous connections to the DB, heavy load on specific pages, etc.
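A minimal sketch of the simultaneous-requests idea, using only the standard library. The `fetch` callable here is a stand-in for a real page request (e.g. `urllib.request.urlopen` on the page under test), and the user counts are arbitrary examples; real load tools do far more.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def run_load(fetch, users=50, requests_per_user=10):
    """Fire many concurrent requests and report counts and average time."""
    timings, failures = [], []   # list.append is thread-safe in CPython
    def one_user(_):
        for _ in range(requests_per_user):
            start = time.perf_counter()
            try:
                fetch()
            except Exception:
                failures.append(1)
            finally:
                timings.append(time.perf_counter() - start)
    with ThreadPoolExecutor(max_workers=users) as pool:
        list(pool.map(one_user, range(users)))
    return {"requests": len(timings),
            "failures": len(failures),
            "avg_seconds": sum(timings) / max(len(timings), 1)}

# Exercised here with a trivial stand-in instead of a real HTTP call.
stats = run_load(lambda: None, users=5, requests_per_user=4)
assert stats["requests"] == 20 and stats["failures"] == 0
```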

Stress testing: Generally, stress means stretching the system beyond its specified limits. Web stress testing is performed to try to break the site by applying stress, and to check how the system reacts to the stress and how it recovers from crashes.
Stress is generally applied to input fields, and to login and signup areas.

In web performance testing, web site functionality on different operating systems and different hardware platforms is also checked for software and hardware memory leakage errors.

6) Security Testing:

Following are some test cases for web security testing:

  • Test by pasting an internal URL directly into the browser address bar without logging in. Internal pages should not open.
  • If you are logged in with a username and password and browsing internal pages, then try changing URL options directly. E.g. if you are checking some publisher site statistics with publisher site ID=123, try directly changing the URL's site ID parameter to a different site ID which is not related to the logged-in user. Access should be denied; this user should not be able to view others' stats.
  • Try some invalid inputs in input fields like the login username, password and input text boxes. Check the system's reaction to all invalid inputs.
  • Web directories and files should not be accessible directly unless a download option is given.
  • Test the CAPTCHA against automated script logins.
  • Test whether SSL is used for security measures. If it is, a proper message should be displayed when the user switches from non-secure http:// pages to secure https:// pages, and vice versa.
  • All transactions, error messages and security breach attempts should get logged in log files somewhere on the web server.
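The second check above (changing the site ID in the URL) can be partly automated. In this sketch the URL, the siteID parameter and the status codes are illustrative, and `fetch_status` stands in for a real authenticated HTTP request.

```python
from urllib.parse import parse_qsl, urlencode, urlsplit, urlunsplit

def tamper_query_param(url, param, new_value):
    """Return the URL with one query-string parameter replaced."""
    parts = urlsplit(url)
    query = dict(parse_qsl(parts.query))
    query[param] = new_value
    return urlunsplit(parts._replace(query=urlencode(query)))

def check_access_denied(fetch_status, url, param, other_id):
    """Accessing another user's ID must be denied, not served with 200."""
    status = fetch_status(tamper_query_param(url, param, other_id))
    return status in (401, 403, 404)

# Example: stats page for siteID=123 tampered to an unrelated siteID=999.
tampered = tamper_query_param("http://example.com/stats.php?siteID=123",
                              "siteID", "999")
assert tampered == "http://example.com/stats.php?siteID=999"
```

Note the simplification: `dict(parse_qsl(...))` drops duplicate query parameters, which is fine for a sketch but not for a general tool.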

I think I have addressed all the major web testing methods. I have worked for around 2 years of my testing career on web testing. There are some experts who have spent their whole careers on web testing. If I missed addressing some important web testing aspect then let me know in the comments below. I will keep updating the article with the latest testing information.

How can a Web site be tested?

Points to be considered while testing a Web site:

Web sites are essentially client/server applications
with web servers and ‘browser’ clients.

Consideration should be given to the interactions between html pages, TCP/IP communications, Internet connections, firewalls, applications that run in web pages (such as applets, javascript, plug-in applications), and applications that run on the server side (such as cgi scripts, database interfaces, logging applications, dynamic page generators, asp, etc.).

 

Additionally, there are a wide variety of servers and browsers, various versions of each, small but sometimes significant differences between them, variations in connection speeds, rapidly changing technologies, and multiple standards and protocols. The end result is that testing for web sites can become a major ongoing effort.

Other considerations might include:

What are the expected loads on the server (e.g., number of hits per unit time?), and what kind of performance is required under such loads (such as web server response time, database query response times). What kinds of tools will be needed for performance testing (such as web load testing tools, other tools already in house that can be adapted, web robot downloading tools, etc.)?

Who is the target audience? What kind of browsers will they be using? What kind of connection speeds will they be using? Are they intra-organization (thus with likely high connection speeds and similar browsers) or Internet-wide (thus with a wide variety of connection speeds and browser types)?

What kind of performance is expected on the client side (e.g., how fast should pages appear, how fast should animations, applets, etc. load and run)?

Will down time for server and content maintenance/upgrades be allowed? how much?

What kinds of security (firewalls, encryptions, passwords, etc.) will be required and what is it expected to do? How can it be tested?

How reliable are the site’s Internet connections required to be? And how does that affect backup system or redundant connection requirements and testing?

What processes will be required to manage updates to the web site’s content, and what are the requirements for maintaining, tracking, and controlling page content, graphics, links, etc.?

Which HTML specification will be adhered to? How strictly? What variations will be allowed for targeted browsers?

Will there be any standards or requirements for page appearance and/or graphics throughout a site or parts of a site?

How will internal and external links be validated and updated? how often?
Can testing be done on the production system, or will a separate test system be required? How are browser caching, variations in browser option settings, dial-up connection variabilities, and real-world internet ‘traffic congestion’ problems to be accounted for in testing?

How extensive or customized are the server logging and reporting requirements; are they considered an integral part of the system and do they require testing?

How are cgi programs, applets, javascripts, ActiveX components, etc. to be maintained, tracked, controlled, and tested?
Pages should be 3-5 screens max unless content is tightly focused on a single topic. If larger, provide internal links within the page.

The page layouts and design elements should be consistent throughout a site, so that it’s clear to the user that they’re still within a site.

Pages should be as browser-independent as possible, or pages should be provided or generated based on the browser-type.

All pages should have links external to the page; there should be no dead-end pages.
The page owner, revision date, and a link to a contact person or organization should be included on each page.

I am working on a search engine website. Testing a search engine site is a little different from testing a regular website. In my next posts I will explain how to test the WWW in detail.

Software Automation Testing, Should Automate project Testing?

Should we go for automation testing of a project?
This is really a difficult question, as I understand it.

If you are doing manual testing of your project and decide to go for automation, I think you will have to ask yourself some basic questions like:

Which is the best testing tool?
To which level should I automate project Testing?
Is it possible to Automate my complete application?

By asking these questions you can get closer to the automation testing decision, i.e. to be or not to be?

Let’s have answers for these basic questions:

Which is the best testing tool?
A basic question asked often. Different people will give you different answers for this question.

It is important to evaluate the tool based on your project environment, your budget, and the skill levels of the individuals involved. Some tools work better in specific environments, while in other environments they can create compatibility issues.
Demonstrate the tool using your application under test.
You can go for some of the open source tools available in the market, but only after studying the individual tools and finalizing which suits you best.

The second question is:
To which level should I automate project testing?
Or: can I automate the project 100 percent?
– Automation testing can provide several benefits if implemented successfully.
High expectations come along with automation, but you may not get the results as quickly as you expect.
The test engineer must evaluate the real potential and should be able to bring these benefits to your application.

Keep in mind that 100 percent Automation of any project is difficult to achieve given the time and budget constraints.

So try to automate as much as you can, at least 80 percent, which will greatly benefit your organisation.

Feel free to ask me queries on software testing automation. I will try my best to answer them!

How to find a bug in application? Tips and Tricks

March 14, 2008

A very good and important point, right? If you are a software tester or a QA engineer then you must be thinking every minute about finding a bug in the application. And you should be! You may think that finding a blocker bug like a system crash is the most rewarding find. No, I don't think like that. You should try to find the bugs that are most difficult to find, the ones that always mislead users.

 

Finding such subtle bugs is the most challenging work, and it gives you satisfaction in your work. It should also be rewarded by seniors. I will share my experience of one such subtle bug that was not only difficult to catch but also difficult to reproduce.
I was testing one module of my search engine project. I do most of the activities of this project manually, as it is a bit complex to automate. That module consists of traffic and revenue stats for different affiliates and advertisers. Testing such reports is always a difficult task. When I tested this report it showed the data accurately for some time, but when I tried to test it again later it showed misleading results. It was strange and confusing to see the results.

There was a cron (a cron is an automated script that runs at a specified time or on a specified condition) to process the log files and update the database. Multiple such crons run on the log files and DB to synchronize the total data. Two crons were running on one table at certain intervals, and a column in the table was getting overwritten by the other cron, causing data inconsistency. It took us a long time to figure out the problem, due to the many DB processes and the different crons involved.

My point is: try to find the hidden bugs in the system that might occur only under special conditions and that cause a strong impact on the system. You can find such bugs with some tips and tricks.

So what are those tips:

1) Understand the whole application or module in depth before starting the testing.

2) Prepare good test cases before starting to test. I mean, give stress to the functional test cases which cover the major risks of the application.

3) Create sufficient test data before the tests. This data set should include the test case conditions, and also the database records if you are going to test a DB-related application.

4) Perform repeated tests in different test environments.

5) Try to find out the result pattern and then compare your results with those patterns.

6) When you think that you have completed most of the test conditions, and when you are getting somewhat tired, do some monkey testing.

7) Use your previous test data patterns to analyse the current set of tests.

8) Try some standard test cases for which you found bugs in some different application. E.g. if you are testing an input text box, try inserting some HTML tags as inputs and see the output on the display page.

9) Last and the best trick: try very hard to find the bug, as if you are testing only to break the application!
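Tip 8 above can be turned into a concrete check: insert an HTML tag as input and verify the display page escapes it. Here `render_comment` is a hypothetical page function used only for illustration; `html.escape` is the standard library's escaping helper.

```python
import html

def render_comment(user_input):
    """Hypothetical display-page rendering that escapes user input."""
    return "<p>%s</p>" % html.escape(user_input)

payload = "<script>alert('xss')</script>"
rendered = render_comment(payload)
# The raw tag must not survive into the page source.
assert "<script>" not in rendered
assert "&lt;script&gt;" in rendered
```

If the raw `<script>` tag does survive into the page, you have found exactly the kind of hidden, high-impact bug this post is about.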

I will include more tips in some coming posts.

Meanwhile, you can share more tips in the comments here.

How to write software Testing Weekly Status Report

March 14, 2008

Writing an effective status report is as important as the actual work you did! So how do you write an effective status report on your weekly work at the end of each week?

Here I am going to give some tips. The weekly report is important for tracking important project issues, project accomplishments, pending work and milestone analysis. Using these reports you can even track team performance to some extent. From this report, prepare future actionable items according to their priorities, and make the list of next week's actionables.

So how to write weekly status report?

Follow the below template:
Prepared By:
Project:
Date of preparation:
Status:
A) Issues:
Issues holding the QA team from delivering on schedule:
Project:
Issue description:
Possible solution:
Issue resolution date:

You can mark these issues in red colour. These are the issues that require management's help in resolving them.

Issues that management should be aware:

These are the issues that do not hold the QA team back from delivering on time, but management should be aware of them. Mark these issues in yellow colour. You can use the same template as above to report them.

Project accomplishments:
Mark them in Green colour. Use below template.
Project:
Accomplishment:
Accomplishment date:

B) Next week Priorities:
Actionable items next week list them in two categories:

1) Pending deliverables: mark them in blue colour. These are the previous week's deliverables, which should be released as soon as possible this week.
Project:
Work update:
Scheduled date:
Reason for extending:

2) New tasks:
List all of next week's new tasks here. You can use black colour for this.
Project:
Scheduled Task:
Date of release:

C) Defect status:

Active defects:
List all active defects here with Reporter, Module, Severity, priority, assigned to.

Closed Defects:
List all closed defects with Reporter, Module, Severity, priority, assigned to.

Test cases:
List the total number of test cases written, test cases passed, test cases failed, and test cases still to be executed.

This template should give you an overall idea of the status report. Don't ignore the status report. Even if your managers are not forcing you to write these reports, they are most important for your future work assessment.

Try to follow a report-writing routine. Use this template, or at least report the overall work in your own words, in a form you can keep track of.

Do you have any better idea for this routine work? Comment it out!

How to write effective Test cases, procedures and definitions

Writing effective test cases is a skill, and it can be achieved through experience and in-depth study of the application for which the test cases are being written.

Here I will share some tips on how to write test cases, test case procedures and some basic test case definitions.

What is a test case?
“A test case has components that describes an input, action or event and an expected response, to determine if a feature of an application is working correctly.” Definition by Glossary

There are levels into which each test case falls, in order to avoid duplication of effort.
Level 1: In this level you write the basic test cases from the available specification and user documentation.
Level 2: This is the practical stage, in which writing test cases depends on the actual functional and system flow of the application.
Level 3: This is the stage in which you group some test cases and write a test procedure. A test procedure is nothing but a group of small test cases, a maximum of 10.
Level 4: Automation of the project. This minimizes human interaction with the system, so QA can focus on testing currently updated functionality rather than staying busy with regression testing.

So you can observe a systematic growth from no testable items to an automation suite.

Why do we write test cases?
The basic objective of writing test cases is to validate the testing coverage of the application. If you are working in a CMMi company then you will strictly follow test case standards. So writing test cases brings some sort of standardization and minimizes the ad-hoc approach to testing.

How to write test cases?
Here is a simple test case format

Fields in test cases:

Test case id:
Unit to test:
What to be verified?
Assumptions:
Test data:
Variables and their values
Steps to be executed:
Expected result:
Actual result:
Pass/Fail:
Comments:

So here is a basic format of test case statement:

Verify
Using
[tool name, tag name, dialog, etc]
With [conditions]
To [what is returned, shown, demonstrated]

Verify: Used as the first word of the test case statement.
Using: Identifies what is being tested. You can use 'entering' or 'selecting' here instead of 'using', depending on the situation.

For any application, you will basically cover all types of test cases, including functional, negative and boundary value test cases.

Keep in mind while writing test cases that all your test cases should be simple and easy to understand. Don’t write explanations like essays. Be to the point.

Try writing simple test cases as per the above format. Generally I use Excel sheets to write the basic test cases. Use a tool like Test Director when you are going to automate those test cases.

Feel free to comment below with any query regarding test case writing or execution.

Living life as a Software Tester!

March 14, 2008

Recently I read a very interesting article, "All I Ever Need to Know About Testing", by Lee Copeland.
I was so impressed with the concept of comparing our day-to-day life with software testing. I will extract only the points related to software testing. As a software tester, keep in mind these simple points:

Share everything:
If you are an experienced tester on a project then help the new developers on your project. Some testers have a habit of keeping known bugs hidden until they are implemented in code, and then writing a big defect report on them. Don't try to just pump up your bug count; share everything with the developers.

Build trust:
Let the developers know of any bug you find in the design phase. Do not log the same bug repeatedly with small variations just to pump up the bug count. Build trust in the developer-tester relationship.

Don’t blame others:
As a tester you should not always blame developers for bugs. Concentrate on the bug, not on pointing it out in front of everyone. Attack the bug and its cause, not the developer!

Clean up your own mess:
When you finish any test scenario, reconfigure the machine to its original configuration. The same applies to bug reports: write a clean, effective bug report. Make it easy for the developer to reproduce and fix the bug.

Give credit to others for their work:
Do not take others' credit. If you have referred to anyone else's work, immediately give credit to that person. Do not get frustrated if you missed a bug that was later reported by a client. Keep working hard and using your skills.

Remember to flush:
Like the toilet, flush all the software at some point. While doing performance testing, remember to flush the system cache.

Take a nap everyday:
We need time to think, get refreshed and regenerate our energy.
Sometimes it's important to take one step back in order to get fresh insight and find a different working approach.

Always work in teams; team scores are always better and more powerful than individual ones.

How to get job in Software Testing quickly?

March 14, 2008

In recent days this is the question readers ask me most: How do I get a software testing job? How do I get into the software testing field? Or: can I get a job in testing?

All these questions are similar, and I want to give a similar answer to all of them. I have written a post on choosing software testing as your career, where you can analyze your abilities and learn which skills are most important for software testing.

I will keep repeating: "know your interest before going into any career field". Simply jumping into a software testing career, or any other hot career, is wrong, and it may result in the loss of your interest in the job, as well as the job itself.

Now you know your abilities, skills and interests, right? And you have decided to go for a software testing career, as it is your favorite career and the one that suits you best. So here is a guideline for getting a good job in the software testing field.

If you are a fresher who has just passed out of college, or will pass out in the coming months, then you need to prepare well on software testing methodologies. Prepare all the manual testing concepts. If possible, get some hands-on experience with automation and bug tracking tools like WinRunner and Test Director. It is always a good idea to join a software testing institute or class, which will give you a good start and a direction for preparation. You can join a 4-month software testing course, or do a diploma in software testing, which typically lasts 6 months to 1 year. Keep the preparation going throughout your course. This will help you start giving interviews right after finishing the course.

If you have some previous IT experience and want to switch to software testing, then it's somewhat simpler for you. Show your previous IT experience in your resume while applying for software testing jobs. If possible, do a crash course to get an idea of software testing concepts, as I mentioned for freshers above. Keep in mind that since you have some IT experience, you should be prepared for some tough interview questions.

As companies always prefer some kind of relevant experience for any software job, it's better if you have relevant experience in software testing and QA. This may be hands-on experience with software testing tools, or a testing course from a reputed institute.

Please always keep in mind: do not add fake experience of any kind. This can ruin your career forever. Wait and try for some more days to get the job on your own abilities, instead of falling into the trap of fake experience.

One last important word: software testing is not an 'anyone can do it' career! Remove this attitude from your mind if someone has told you such a foolish thing. Testing requires in-depth knowledge of the SDLC, out-of-the-box thinking, analytical skills and some programming language skills, apart from the software testing basics.

So best of luck, and start preparing for your rocking career! I will continue this career series with what you actually need to prepare for a software testing interview.

White box testing: Need, Skill required and Limitations

March 14, 2008

What is White Box Testing?
White box testing (WBT) is also called Structural or Glass box testing.

White box testing involves looking at the structure of the code. When you know the internal structure of a product, tests can be conducted to ensure that the internal operations are performed according to the specification, and that all internal components have been adequately exercised.

 

White Box Testing is coverage of the specification in the code.

Code coverage:

Segment coverage:
Ensure that each code statement is executed at least once.

Branch Coverage or Node Testing:
Coverage of each code branch, taken in all possible ways.

Compound Condition Coverage:
For multiple conditions, test each condition with multiple paths, and test combinations of the different paths that reach the condition.
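As an illustration of compound condition coverage, consider an invented rule with two sub-conditions. Instead of just covering the true/false branch outcome, every combination of the sub-conditions is exercised.

```python
def free_shipping(total, is_member):
    # Compound condition: either sub-condition can decide the result.
    return total >= 50 or is_member

# All four combinations of the two sub-conditions:
assert free_shipping(60, True) is True    # T, T
assert free_shipping(60, False) is True   # T, F
assert free_shipping(40, True) is True    # F, T
assert free_shipping(40, False) is False  # F, F
```

Note that branch coverage alone would be satisfied by just two of these cases; compound condition coverage demands all four.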

Basis Path Testing:
Each independent path in the code is taken for testing.

Data Flow Testing (DFT):
In this approach you track specific variables through each possible calculation, thus defining the set of intermediate paths through the code. DFT tends to reflect dependencies, mainly through sequences of data manipulation. In short, each data variable is tracked and its use is verified.
This approach tends to uncover bugs like variables used but not initialized, or declared but not used, and so on.

Path Testing:
Path testing is where all possible paths through the code are defined and covered. It is a time-consuming task.

Loop Testing:
These strategies relate to testing single loops, concatenated loops, and nested loops. Independent and dependent code loops and values are tested by this approach.

Why we do White Box Testing?
To ensure:

  • That all independent paths within a module have been exercised at least once.
  • That all logical decisions are verified for their true and false values.
  • That all loops are executed at their boundaries and within their operational bounds, and that internal data structures are valid.

Need for White Box Testing?
To discover the following types of bugs:

  • Logical errors, which tend to creep into our work when we design and implement functions, conditions or controls that are outside the mainstream of the program
  • Design errors due to the difference between the logical flow of the program and its actual implementation
  • Typographical errors and syntax errors

Skills Required:
We need to write test cases that ensure complete coverage of the program logic.
For this we need to know the program well, i.e. we should know the specification and the code to be tested. This requires knowledge of programming languages and logic.

Limitations of WBT:
It is not possible to test each and every path of the loops in a program, which means exhaustive testing is impossible for large systems.
This does not mean that WBT is not effective: selecting important logical paths and data structures for testing is practically possible and effective.
Reference- http://www.softrel.org/stgb.html

Comment out your queries on white box testing below. Meantime I will cover Black box testing in detail.

Black Box Testing: Types and techniques of BBT

March 14, 2008

I covered what white box testing is in the previous article. Here I will concentrate on black box testing: BBT advantages, disadvantages, and how black box testing is performed, i.e. the black box testing techniques.

Black box testing treats the system as a "black box", so it doesn't explicitly use knowledge of the internal structure or code. In other words, the test engineer need not know the internal workings of the "black box", i.e. the application.

 

The main focus in black box testing is on the functionality of the system as a whole. The term 'behavioral testing' is also used for black box testing, and white box testing is also sometimes called 'structural testing'. Behavioral test design is slightly different from black-box test design because the use of internal knowledge isn't strictly forbidden, but it is still discouraged.

Each testing method has its own advantages and disadvantages. There are some bugs that cannot be found using only black box or only white box testing. The majority of applications are tested by the black box method, so we need to cover the majority of test cases to ensure that most of the bugs are discovered by black box testing.

Black box testing occurs throughout the software development and testing life cycle, i.e., in the unit, integration, system, acceptance, and regression testing stages.

Tools used for black box testing:
Black box testing tools are mainly record and playback tools. These tools are used for regression testing, to check whether a new build has introduced any bugs into previously working application functionality. These record and playback tools record test cases in the form of scripts such as TSL, VBScript, JavaScript, or Perl.

Advantages of Black Box Testing
- The tester can be non-technical.
- Helps expose contradictions between the actual system and the specifications.
- Test cases can be designed as soon as the functional specifications are complete.

Disadvantages of Black Box Testing
- Test inputs need to be drawn from a large sample space.
- It is difficult to identify all possible inputs in limited testing time, so writing test cases is slow and difficult.
- There is a chance of leaving some paths unexercised during this testing.

Methods of Black box Testing:

Graph-Based Testing Methods:
Every application is built up of some objects. All such objects are identified and a graph is prepared. From this object graph, each object relationship is identified and test cases are written accordingly to discover the errors.

Error Guessing:
This is purely based on the previous experience and judgment of the tester. Error guessing is the art of guessing where errors may be hidden. There are no specific tools for this technique; the tester writes test cases covering the application paths where errors are likely to hide.

Boundary Value Analysis:
Many systems have a tendency to fail at boundaries, so testing the boundary values of an application is important. Boundary Value Analysis (BVA) is a functional testing technique in which the extreme boundary values are chosen. Boundary values include maximum, minimum, just inside/outside boundaries, typical values, and error values.

BVA:
  • Extends equivalence partitioning
  • Tests both sides of each boundary
  • Looks at output boundaries for test cases too
  • Tests min, min-1, max, max+1, and typical values

BVA techniques:
1. Number of variables
For n variables, BVA yields 4n + 1 test cases.
2. Kinds of ranges
Generalizing ranges depends on the nature or type of the variables.

Advantages of Boundary Value Analysis
1. Robustness testing – boundary value analysis plus values that go beyond the limits
2. Tests min-1, min, min+1, nominal, max-1, max, max+1
3. Forces attention to exception handling

Limitations of Boundary Value Analysis
Boundary value testing is efficient only for variables with fixed boundary values.
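As a quick illustration of the 4n + 1 rule, here is a small Python sketch that generates boundary test cases for a set of input ranges. The function name and the example ranges (day/month) are hypothetical, chosen only for illustration:

```python
# Sketch of the 4n + 1 rule: hold all but one variable at a nominal value,
# push the remaining variable to min, min+1, max-1, and max, then add one
# all-nominal case. Variable names and ranges are illustrative.

def bva_cases(ranges):
    """ranges: dict of name -> (min, max). Returns a list of test-case dicts."""
    nominal = {name: (lo + hi) // 2 for name, (lo, hi) in ranges.items()}
    cases = [dict(nominal)]                      # the single all-nominal case
    for name, (lo, hi) in ranges.items():
        for probe in (lo, lo + 1, hi - 1, hi):   # four boundary probes per variable
            case = dict(nominal)
            case[name] = probe
            cases.append(case)
    return cases

cases = bva_cases({"day": (1, 31), "month": (1, 12)})
print(len(cases))  # 4 * 2 + 1 = 9
```

For robustness testing you would extend the probe tuple with `lo - 1` and `hi + 1`, giving the min-1/max+1 values mentioned above.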

Equivalence Partitioning:
Equivalence partitioning is a black box testing method that divides the input domain of a program into classes of data from which test cases can be derived.

How is this partitioning performed while testing:
1. If an input condition specifies a range, one valid and two invalid classes are defined.
2. If an input condition requires a specific value, one valid and two invalid equivalence classes are defined.
3. If an input condition specifies a member of a set, one valid and one invalid equivalence class is defined.
4. If an input condition is Boolean, one valid and one invalid class is defined.
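Rule 1 above can be illustrated with a short, hypothetical Python sketch: an input field accepting an integer in the range 1..100 yields one valid and two invalid equivalence classes, and one representative value per class is enough. The range and names are purely illustrative:

```python
# Illustrative sketch of equivalence partitioning for a hypothetical input
# field accepting an integer in 1..100: a range gives one valid class and
# two invalid classes (rule 1 above).

LO, HI = 1, 100

def partition(value):
    """Map an input value to its equivalence class."""
    if value < LO:
        return "invalid-below"
    if value > HI:
        return "invalid-above"
    return "valid"

# One representative test value per class covers the whole partition.
representatives = {-5: "invalid-below", 50: "valid", 150: "invalid-above"}
for value, expected in representatives.items():
    assert partition(value) == expected
```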

Comparison Testing:
In this method, different independently developed versions of the same software are compared against each other for testing.
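A minimal sketch of comparison (back-to-back) testing, assuming two hypothetical, independently written versions of the same routine: feed both the same inputs and flag any disagreement:

```python
# Comparison testing sketch: run the same inputs through two independent
# implementations and report any outputs that disagree. Both versions here
# are invented stand-ins for illustration.

def sort_v1(xs):
    return sorted(xs)              # version built on the library sort

def sort_v2(xs):                   # independent insertion-sort version
    out = []
    for x in xs:
        i = 0
        while i < len(out) and out[i] < x:
            i += 1
        out.insert(i, x)
    return out

inputs = [[3, 1, 2], [], [5, 5, 1], [-1, 0]]
mismatches = [xs for xs in inputs if sort_v1(xs) != sort_v2(xs)]
print("disagreements:", mismatches)  # an empty list means the versions agree
```

Any non-empty `mismatches` list points at an input worth investigating in at least one of the two versions.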

Reference- http://www.softrel.org/stgb.html

How to get your all bugs resolved without any ‘Invalid bug’ label?

March 14, 2008 Comments off

I hate the “invalid bug” label from developers for the bugs I report – do you? I think every tester should try to get 100% of his/her bugs resolved. This requires bug reporting skill. See my previous post, “How to write a good bug report? Tips and Tricks”, to report bugs professionally and without any ambiguity.

The main reason a bug gets marked as invalid is insufficient troubleshooting by the tester before reporting it. In this post I will focus only on troubleshooting to find the main cause of the bug. Troubleshooting will help you decide whether the anomaly you found in your application under test is really a bug or just a test setup mistake.

Yes, 50% of bugs get marked as “invalid” purely due to the tester’s incomplete testing setup. Let’s say you found an anomaly in the application under test. You are now preparing the steps to report this anomaly as a bug. But wait! Have you done enough troubleshooting before reporting this bug? Have you confirmed that it really is a bug?

What troubleshooting you need to perform before reporting any bug?

Troubleshooting of:

  • What is not working?
  • Why is it not working?
  • How can you make it work?
  • What are the possible reasons for the failure?

The answer to the first question, “what is not working?”, is sufficient for you to report the bug steps in the bug tracking system. Then why answer the remaining three questions? Think beyond your responsibilities. Act smart: don’t just follow routine steps without ever thinking outside of them. You should be able to suggest all possible solutions to resolve the bug, along with the efficiency and drawbacks of each solution. This will increase your standing in your team and will also reduce the chance of your bugs getting rejected – not because of that standing, but because of your troubleshooting skill.

Before reporting any bug, make sure it isn’t your own mistake while testing: perhaps you missed setting an important flag, or you didn’t configure your test setup properly.

Troubleshoot the reasons for the failure in the application, and report the bug only after proper troubleshooting. I have compiled a troubleshooting list. Check it out to see what the different reasons for failure can be.

Reasons of failure:
1) If you are using any configuration file for testing your application, make sure this file is up to date as per the application requirements: Many times a global configuration file is used to pick or set application flags. Failure to maintain this file as per your software requirements will lead to malfunctioning of the application under test. You can’t report that as a bug.

2) Check that your database is proper: A missing table is a common reason an application will not work properly.
I have a classic example of this: One of my projects queried many monthly user database tables to show user reports. The table’s existence was first checked in a master table (which maintained only the monthly table names), and then data was queried from the individual monthly tables. Many testers selected a big date range to see the user reports, but this often crashed the application because those tables were not present in the test machine server’s database, giving a SQL query error. They reported it as a bug, and it was subsequently marked as invalid by the developers.

3) If you are working on an automation testing project, debug your script twice before concluding that the application failure is a bug.

4) Check that you are not using invalid access credentials for authentication.

5) Check if software versions are compatible.

6) Check if there is any other hardware issue that is not related to your application.

7) Make sure your application hardware and software prerequisites are met.

8) Check that all software components are installed properly on your test machine, and that the registry entries are valid.

9) For any failure, look into the ‘system event viewer’ for details. You can trace many failure reasons from the system event log file.

10) Before you start testing, make sure you have uploaded all the latest version files to your test environment.

These are all small and common mistakes, but they can significantly impact your relationships and credibility within your team. When you find your bug marked as invalid for a reason from the list above, it is a silly mistake and it will definitely hurt. (At least it hurts me!)

Share the mistakes you have made while reporting bugs. This will help other readers learn from your experience!

How to Improve Tester Performance?

March 14, 2008 Comments off

Many companies don’t have the resources, or can’t afford, to hire the required number of testers for a project. So what could be the solution in this case?

The answer is simple: companies should prefer skilled testers over an army of testers!

 

So how can we build skilled testers on any project?
You can improve a tester’s performance by assigning him/her to a single project.
Due to this, the tester will gain detailed knowledge of the project domain, can concentrate well on that project, and can do R&D work during the early development phase of the project.

This builds not only his/her functional testing knowledge but also the project domain knowledge.

A company can use the following methods to improve its testers’ performance:
1) Assign one tester to one project for a long duration, or for the entire project. Doing this builds the tester’s domain knowledge; he/she can write better test cases, cover most of the test cases, and eventually find problems faster.

2) Most testers can do functional testing and BV analysis, but they may not know how to measure test coverage, how to test a complete application, or how to perform load testing. The company can provide training to its employees in those areas.

3) Involve them in all the project meetings, discussions, and project design, so that they can understand the project well and write good test cases.

4) Encourage them to take on activities beyond the regular testing work. Such activities can include inter-team talks on their project experience and exploratory talks on project topics.

Most important is to give them the freedom to think outside the box, so that they can make better decisions on testing activities like the test plan, test execution, and test coverage.

If you have a better idea to boost tester performance, don’t forget to comment below!

Priority and Severity

Q. What is a test strategy?

Answer:
A test strategy must address the risks and present a process that can reduce those risks.
The two components of a test strategy are:
a) Test factor: the risk or issue that needs to be addressed as part of the test strategy. The factors to be addressed in testing a specific application system form the test factors.
b) Test phase: the phase of the systems development life cycle in which testing will occur.

Q. When to stop testing?

 

Answer:
a) When all the requirements are adequately and successfully exercised through test cases
b) The bug reporting rate falls below a particular limit
c) The test environment no longer exists for conducting testing
d) The scheduled time for testing is over
e) The budget allocated for testing is exhausted

Q. Your company is about to roll out an E-Commerce application. It is not
possible to test the application on all types of browsers on all platforms and
operating systems. What steps would you take in the testing environment to
reduce the business risks and commercial risks?


Answer:
Compatibility testing should be done on all browsers (IE, Netscape, Mozilla, etc.) across all the operating systems (Windows 98/2000/NT/XP/ME/Unix, etc.).

Q. What’s the difference between priority and severity?

Answer:
“Priority” is associated with scheduling, and “severity” is associated with standards. “Priority” means something is afforded or deserves prior attention; a precedence established by order of importance (or urgency). “Severity” is the state or quality of being severe; severe implies adherence to rigorous standards or high principles and often suggests harshness, e.g. a severe code of behavior. Both words come up in bug tracking. A variety of commercial problem-tracking/management software tools are available. These tools, with detailed input from software test engineers, give the team complete information so developers can understand the bug, get an idea of its ‘severity’, reproduce it, and fix it. The fixes are based on project ‘priorities’ and the ‘severity’ of bugs. The ‘severity’ of a problem is defined in accordance with the customer’s risk assessment and recorded in their selected tracking tool. Buggy software can ‘severely’ affect schedules, which in turn can lead to a reassessment and renegotiation of ‘priorities’.

Some tricky question answers

March 14, 2008 Comments off

1. Define the following along with examples

a. Boundary Value testing
b. Equivalence testing
c. Error Guessing
d. Desk checking
e. Control Flow analysis


Answer:
1-a) Boundary value analysis:
A process of selecting test cases/data by identifying the boundaries that separate valid and invalid conditions. Tests are constructed to exercise the inside and outside edges of these boundaries, in addition to the actual boundary points. In other words, a selection technique in which test data are chosen to lie along the “boundaries” of the input domain [or output range] classes, data structures, procedure parameters, etc. Choices often include maximum, minimum, and trivial values or parameters.
E.g., for an input range of 1 to 10, the boundary test inputs are 0, 1, 2, 9, 10, 11.

1-b) Equivalence testing:
The input domain of the system is partitioned into classes of representative values, so that the number of test cases can be limited to one per class, which represents the minimum number of test cases that must be executed.
E.g., for a valid data range of 1–10, a test set could be: -2; 5; 14.

1-c) Error guessing:
A test data selection technique. The selection criterion is to pick values that seem likely to cause errors. Error guessing is based mostly upon experience, with some assistance from other techniques such as boundary value analysis. Based on experience, the test designer guesses the types of errors that could occur in a particular type of software and designs test cases to uncover them.
E.g., if any type of resource is allocated dynamically, a good place to look for errors is in the de-allocation of those resources. Are all resources correctly de-allocated, or are some lost as the software executes?

1-d) Desk checking:
Desk checking is conducted by the developer of the system or program. The process involves reviewing the complete product to ensure that it is structurally sound and that the standards and requirements have been met. This is the most traditional means of analyzing a system or program.

1-e) Control flow analysis:
This is based upon a graphical representation of the program process. In control flow analysis, the program graph has nodes which represent a statement or segment, possibly ending in an unresolved branch. The graph illustrates the flow of program control from one segment to another, as illustrated through branches. The objective of control flow analysis is to determine the potential problems in logic branches that might result in a loop condition or improper processing.

Software Installation/Uninstallation Testing

Have you performed software installation testing? How was the experience? Installation testing (implementation testing) is quite an interesting part of the software testing life cycle.

Installation testing is like introducing a guest into your home. The new guest should be properly introduced to all the family members so that he feels comfortable. Installing new software is quite like this example.

If your installation succeeds on the new system, the customer will definitely be happy. But what if things go completely the opposite way? If installation fails, our program will not work on that system; not only that, it can leave the user’s system badly damaged. The user might be required to reinstall the full operating system.

In the above case, will you make a good impression on the user? Definitely not! Your first chance to make a loyal customer is ruined by incomplete installation testing. What do you need to do to make a good first impression? Test the installer thoroughly, with a combination of manual and automated processes, on different machines with different configurations. The major concern in installation testing is time! It takes a lot of time to execute even a single test case. If you are going to test a big application installer, think about the time required to perform so many test cases on different configurations.

We will see different methods to perform manual installer testing and some basic guideline for automating the installation process.

To start installation testing, first decide how many different system configurations you want to test the installation on. Prepare one basic hard disk drive: format this HDD with the most common or default file system, install the most common operating system (Windows) on it, and install the basic required components. Each time, create an image of this base HDD, from which you can build the other configurations. Make one set of each configuration, like operating system and file format, to be used for further testing.

How can we use automation in this process? Dedicate some systems to creating the base images (use software like Norton Ghost to create exact images of an operating system quickly). This will save you tremendous time on each test case. For example, if the time to install one OS with the basic configuration is, say, 1 hour, then each test case on a fresh OS will require 1+ hours. But restoring an image of the OS will hardly require 5 to 10 minutes, saving you approximately 40 to 50 minutes!

You can use one operating system for multiple installation attempts, each time uninstalling the application and restoring the base state for the next test case. Be careful here: your uninstallation program should already have been tested and be working fine.

Installation testing tips with some broad test cases:

1) Use flow diagrams to perform installation testing. Flow diagrams simplify our task. See the example flow diagram for a basic installation testing test case.

Add some more test cases to this basic flow chart, such as: if our application is not the first release, try adding different logical installation paths.

2) If you have previously installed a compact (basic) version of the application, then in the next test case install the full application version on the same path used for the compact version.

3) If you are using a flow diagram to test the different files written to disk during installation, use the same flow diagram in reverse order to test the uninstallation of all those installed files.

4) Use flow diagrams to automate the testing efforts. It will be very easy to convert diagrams into automated scripts.

5) Test the installer scripts used for checking the required disk space. If the installer reports a required disk space of 1 MB, make sure exactly 1 MB is used, or check whether more disk space is utilized during installation. If so, flag this as an error.

6) Test the disk space requirement on different file system formats. For example, FAT16 will require more space than the more efficient NTFS or FAT32 file systems.

7) If possible, set up a dedicated system only for creating disk images. As said above, this will save you testing time.

8) Use a distributed testing environment to carry out installation testing. A distributed environment saves time, and you can effectively manage all the different test cases from a single machine. A good approach is to create one master machine which drives different slave machines on the network. You can start installation simultaneously on different machines from the master system.

9) Try to automate the routine that checks the files written to disk. You can maintain this list of files in an Excel sheet and give it as an input to an automated script that verifies each and every path to confirm correct installation.

10) Use freely available software to verify registry changes after a successful installation. Compare the registry changes with your expected change list.

11) Forcefully break the installation process midway. Observe the system’s behavior and check whether the system recovers to its original state without any issues. You can test this “break of installation” at every installation step.

12) Disk space checking: This is a crucial check in the installation testing scenario. You can choose different manual and automated methods to do this checking. In the manual methods, check the free disk space available on the drive before installation and the disk space reported by the installer script, to verify that the installer is calculating and reporting disk space accurately. Check the disk space after installation to verify accurate use of the installation disk space. Run various combinations of disk space availability, using tools to automatically fill the disk during installation, and check the system’s behavior in low disk space conditions while installing.

13) As you test installation, test uninstallation too. Before each new iteration of installation, make sure that all the files written to disk are removed after uninstallation. Sometimes the uninstallation routine removes files only from the last upgraded installation, keeping the old version files untouched. Also check the reboot option after uninstallation, both rebooting manually and forcing no reboot.
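The file-list check described in tip 9 can be sketched in a few lines of Python. The function name and the example paths below are hypothetical; in practice the expected list would come from your Excel sheet export:

```python
# Sketch of tip 9: verify an installation by checking every expected file
# against the disk. The install layout shown is purely illustrative.
import os

def verify_install(expected_paths):
    """Return the subset of expected paths that are missing from disk."""
    return [p for p in expected_paths if not os.path.exists(p)]

expected = [
    r"C:\Program Files\MyApp\myapp.exe",   # hypothetical install layout
    r"C:\Program Files\MyApp\config.ini",
]
missing = verify_install(expected)
if missing:
    print("installation incomplete, missing:", missing)
else:
    print("all expected files present")
```

Running the same list through the check after uninstallation (expecting every path to be missing) covers the reverse direction from tip 3 and tip 13.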

I have addressed many areas of the manual as well as automated installation testing procedure. There are still many areas you need to focus on, depending on the complexity of the software under installation. These unaddressed important tasks include installation over the network, online installation, patch installation, database checking on installation, shared DLL installation and uninstallation, etc.

Hope this article will be a basic guideline for those having trouble getting started with software installation testing, whether manually or with automation.

How Domain knowledge is Important for testers?

March 14, 2008 Comments off

“Looking at the current scenario in the industry, it is seen that testers are expected to have both technical testing skills and either to come from a domain background or to have gathered domain knowledge, mainly for BFSI. I would like to know why and when this domain knowledge is imparted to the tester during the testing cycle?”

First of all, I would like to introduce the three-dimensional testing career described by Danny R. Faught. There are three categories of skill that need to be judged before hiring any software tester. What are those three skill categories?
1) Testing skill
2) Domain knowledge
3) Technical expertise.

 

No doubt, any tester should have basic testing skills like manual testing and automation testing. A tester with common sense can find most of the obvious bugs in the software. But would you say that this much testing is sufficient? Would you release the product on the basis of this much testing? Certainly not. You would certainly have the product reviewed by a domain expert before it goes to market.

While testing any application you should think like an end user. But every human being has limitations, and one can’t be an expert in all three of the dimensions mentioned above. (If you are an expert in all of the above skills, please let me know ;-)) So you can’t be sure that you can think 100% like the end user who is going to use your application. That user may have a good understanding of the domain he is working in. You need to balance all these skill activities so that all product aspects get addressed.

Nowadays you can see that the professionals being hired by different companies are more often domain experts than people with purely technical skills. The current software industry is also seeing a good trend: many professional developers and domain experts are moving into software testing.

We can observe one more reason why domain experts are in demand. When you hire fresh engineers who are just out of college, you cannot expect them to compete with experienced professionals. Why? Because an experienced professional has the advantage of domain and testing experience, has a better understanding of different issues, and can deliver the application better and faster.

Here are some of the examples where you can see the distinct edge of domain knowledge:
1) Mobile application testing.
2) Wireless application testing
3) VoIP applications
4) Protocol testing
5) Banking applications
6) Network testing

How will you test such applications without knowledge of the specific domain? Are you going to test BFSI applications (Banking, Financial Services and Insurance) just for UI or functionality or security or load or stress? You should know the user requirements in banking, the working procedures, the commerce background, exposure to brokerage, etc., and test the application accordingly; only then can you say that your testing is enough. Here comes the need for subject-matter experts.

Let’s take the example of my current project: I am currently working on a search engine application, where I need to know the basics of search engine terminology and concepts. Many times testers from other teams ask me questions like: what are ‘publishers’ and ‘advertisers’, what is the difference, and what do they do? Do you think they can test the application based on current online advertising and SEO? Certainly not – unless and until they get familiar with these terminologies and functionalities.

When I know the functional domain better, I can write and execute more test cases and can effectively simulate end user actions, which is distinctly a big advantage.

Here is the big list of the required testing knowledge:

  • Testing skill
  • Bug hunting skill
  • Technical skill
  • Domain knowledge
  • Communication skill
  • Automation skill
  • Some programming skill
  • Quick grasping
  • Ability to Work under pressure …

That is a huge list. So you will certainly ask: do I need to have all these skills? It depends on you. You can stick to one skill, or be an expert in one skill with a good understanding of the others, or take a balanced approach to all the skills. This is a competitive market and you should definitely take advantage of it. Make sure to be an expert in at least one domain before making any move.

What if you don’t have enough domain knowledge?
You may be assigned to any project, and the company can give you any work. So what if you don’t have enough domain knowledge for that project? You need to quickly grasp as many concepts as you can. Try to understand the product as if you were the customer, and think about what the customer will do with the application. Visit the customer site, if possible, to learn how they work with the product; read online resources about the domain whose application you want to test; participate in events addressing that domain; and meet the domain experts. Alternatively, the company will provide all this as in-house training before assigning any domain-specific task to testers.

There is no specific stage where you need this domain knowledge; you need to apply it throughout the software testing life cycle.

If you have read this article up to this point, I would like to hear which domain you are working in, so that our readers can get a better idea of different domains and projects. Comment your domain below.

Types of Risks in Software Projects

March 14, 2008 Comments off

Are you developing any Test plan or test strategy for your project? Have you addressed all risks properly in your test plan or test strategy?

As testing is the last part of the project, it’s always under pressure and time constraints. To save time and money you should be able to prioritize your testing work. How will you prioritize testing work? For this you should be able to judge more important and less important testing work. How will you decide which work is more or less important? Here comes the need for risk-based testing.

What is Risk?
“Risks are future, uncertain events with a probability of occurrence and a potential for loss.”

Risk identification and management are the main concerns in every software project. Effective analysis of software risks helps in effective planning and assignment of work.

In this article I will cover the types of risks. In the next articles I will focus on risk identification, risk management, and mitigation.

Risks are identified, classified, and managed before actual execution of the program. These risks are classified into different categories.

Categories of risks:

Schedule Risk:
The project schedule slips when project tasks and schedule release risks are not addressed properly.
Schedule risks mainly affect the project and, ultimately, the company’s economy, and may lead to project failure.
Schedules often slip due to following reasons:

  • Wrong time estimation
  • Resources are not tracked properly. All resources like staff, systems, skills of individuals etc.
  • Failure to identify complex functionalities and time required to develop those functionalities.
  • Unexpected project scope expansions.

Budget Risk:

  • Wrong budget estimation.
  • Cost overruns
  • Project scope expansion

Operational Risks:
Risks of loss due to improper process implementation, failed systems, or some external events.
Causes of Operational risks:

  • Failure to address priority conflicts
  • Failure to resolve the responsibilities
  • Insufficient resources
  • No proper subject training
  • No resource planning
  • No communication in team.

Technical risks:
Technical risks generally lead to failure of functionality and performance.
Causes of technical risks are:

  • Continuous changing requirements
  • No advanced technology available or the existing technology is in initial stages.
  • Product is complex to implement.
  • Difficult project modules integration.

Programmatic Risks:
These are external risks beyond the operational limits. These uncertain risks are outside the control of the program.
These external events can be:

  • Running out of funds.
  • Market development
  • Changing customer product strategy and priority
  • Government rule changes.

These are all common categories in which software project risks can be classified. I will cover in detail “How to identify and manage risks” in next article.

Test plan Templates

The test plan is in high demand, and it should be! The test plan reflects your entire project testing schedule and approach. This article is in response to those who have requested a sample test plan.

In my previous article I outlined a test plan index. In this article I will elaborate on what each point of that index means. This test plan will include the purpose of a test plan, i.e., to prescribe the scope, approach, resources, and schedule of the testing activities; to identify the items being tested, the features to be tested, the testing tasks to be performed, the personnel responsible for each task, and the risks associated with the plan.

Below you will find what you actually need to include under each index point.

I have included link to download PDF format of this test plan template at the end of this post.

Test Plan Template:

(Name of the Product)

Prepared by:

(Names of Preparers)

(Date)

TABLE OF CONTENTS

1.0 INTRODUCTION

2.0 OBJECTIVES AND TASKS
2.1 Objectives
2.2 Tasks

3.0 SCOPE

4.0 Testing Strategy
4.1 Alpha Testing (Unit Testing)
4.2 System and Integration Testing
4.3 Performance and Stress Testing
4.4 User Acceptance Testing
4.5 Batch Testing
4.6 Automated Regression Testing
4.7 Beta Testing

5.0 Hardware Requirements

6.0 Environment Requirements
6.1 Main Frame
6.2 Workstation

7.0 Test Schedule

8.0 Control Procedures

9.0 Features to Be Tested

10.0 Features Not to Be Tested

11.0 Resources/Roles & Responsibilities

12.0 Schedules

13.0 Significantly Impacted Departments (SIDs)

14.0 Dependencies

15.0 Risks/Assumptions

16.0 Tools

17.0 Approvals

1.0 INTRODUCTION

A brief summary of the product being tested. Outline all the functions at a high level.

2.0 OBJECTIVES AND TASKS

2.1 Objectives
Describe the objectives supported by the Master Test Plan, e.g., defining tasks and responsibilities, serving as a vehicle for communication, use as a service level agreement, etc.

2.2 Tasks
List all tasks identified by this Test Plan, e.g., testing, post-testing, problem reporting, etc.

3.0 SCOPE

General
This section describes what is being tested, such as all the functions of a specific product, its existing interfaces, integration of all functions.

Tactics
List here how you will accomplish the items that you have listed in the “Scope” section. For example, if you have mentioned that you will be testing the existing interfaces, what would be the procedures you would follow to notify the key people to represent their respective areas, as well as allotting time in their schedule for assisting you in accomplishing your activity?

4.0 TESTING STRATEGY

Describe the overall approach to testing. For each major group of features or feature combinations, specify the approach which will ensure that these feature groups are adequately tested. Specify the major activities, techniques, and tools which are used to test the designated groups of features.

The approach should be described in sufficient detail to permit identification of the major testing tasks and estimation of the time required to do each one.

4.1 Unit Testing

Definition:
Specify the minimum degree of comprehensiveness desired. Identify the techniques which will be used to judge the comprehensiveness of the testing effort (for example, determining which statements have been executed at least once). Specify any additional completion criteria (for example, error frequency). The techniques to be used to trace requirements should be specified.

Participants:
List the names of individuals/departments who would be responsible for Unit Testing.

Methodology:
Describe how unit testing will be conducted. Who will write the test scripts for unit testing, what will be the sequence of events of Unit Testing, and how will the testing activity take place?

4.2 System and Integration Testing

Definition:
Describe your understanding of System and Integration Testing for your project.

Participants:
Who will be conducting System and Integration Testing on your project? List the individuals that will be responsible for this activity.

Methodology:
Describe how System & Integration testing will be conducted. Who will write the test scripts for it, what will be the sequence of events of System & Integration Testing, and how will the testing activity take place?

4.3 Performance and Stress Testing

Definition:
Describe your understanding of Stress Testing for your project.

Participants:
Who will be conducting Stress Testing on your project? List the individuals that will be responsible for this activity.

Methodology:
Describe how Performance & Stress testing will be conducted. Who will write the test scripts, what will be the sequence of events of Performance & Stress Testing, and how will the testing activity take place?

4.4 User Acceptance Testing

Definition:
The purpose of acceptance test is to confirm that the system is ready for operational use. During acceptance test, end-users (customers) of the system compare the system to its initial requirements.

Participants:
Who will be responsible for User Acceptance Testing? List the individuals’ names and responsibility.

Methodology:
Describe how User Acceptance testing will be conducted. Who will write the test scripts, what will be the sequence of events of User Acceptance Testing, and how will the testing activity take place?

4.5 Batch Testing

4.6 Automated Regression Testing

Definition:
Regression testing is the selective retesting of a system or component to verify that modifications have not caused unintended effects and that the system or component still works as specified in the requirements.

Participants:
Methodology:

4.7 Beta Testing
Participants:

Methodology:

5.0 HARDWARE REQUIREMENTS
Computers
Modems

6.0 ENVIRONMENT REQUIREMENTS

6.1 Main Frame
Specify both the necessary and desired properties of the test environment. The specification should contain the physical characteristics of the facilities, including the hardware, the communications and system software, the mode of usage (for example, stand-alone), and any other software or supplies needed to support the test. Also specify the level of security which must be provided for the test facility, system software, and proprietary components such as software, data, and hardware.

Identify special test tools needed. Identify any other testing needs (for example, publications or office space). Identify the source of all needs which are not currently available to your group.

6.2 Workstation

7.0 TEST SCHEDULE

Include test milestones identified in the Software Project Schedule as well as all item transmittal events.

Define any additional test milestones needed. Estimate the time required to do each testing task. Specify the schedule for each testing task and test milestone. For each testing resource (that is, facilities, tools, and staff), specify its periods of use.

8.0 CONTROL PROCEDURES

Problem Reporting
Document the procedures to follow when an incident is encountered during the testing process. If a standard form is going to be used, attach a blank copy as an “Appendix” to the Test Plan. In the event you are using an automated incident logging system, write those procedures in this section.

Change Requests
Document the process of modifications to the software. Identify who will sign off on the changes and what would be the criteria for including the changes to the current product. If the changes will affect existing programs, these modules need to be identified.

9.0 FEATURES TO BE TESTED

Identify all software features and combinations of software features that will be tested.

10.0 FEATURES NOT TO BE TESTED

Identify all features and significant combinations of features which will not be tested and the reasons.

11.0 RESOURCES/ROLES & RESPONSIBILITIES

Specify the staff members who are involved in the test project and what their roles are going to be (for example, Mary Brown (User) compile Test Cases for Acceptance Testing). Identify groups responsible for managing, designing, preparing, executing, and resolving the test activities as well as related issues. Also identify groups responsible for providing the test environment. These groups may include developers, testers, operations staff, testing services, etc.

12.0 SCHEDULES

Major Deliverables
Identify the deliverable documents. You can list the following documents:
– Test Plan
– Test Cases
– Test Incident Reports
– Test Summary Reports

13.0 SIGNIFICANTLY IMPACTED DEPARTMENTS (SIDs)

Department/Business Area | Business Manager | Tester(s)

14.0 DEPENDENCIES

Identify significant constraints on testing, such as test-item availability, testing-resource availability, and deadlines.

15.0 RISKS/ASSUMPTIONS

Identify the high-risk assumptions of the test plan. Specify contingency plans for each (for example, delay in delivery of test items might require increased night shift scheduling to meet the delivery date).

16.0 TOOLS
List the automation tools you are going to use. Also list the bug tracking tool here.

17.0 APPROVALS

Specify the names and titles of all persons who must approve this plan. Provide space for the signatures and dates.

Name (In Capital Letters) Signature Date

1.

2.

3.

4.

Bug life cycle

March 14, 2008

What is Bug/Defect?

The simple Wikipedia definition of a bug is: “A computer bug is an error, flaw, mistake, failure, or fault in a computer program that prevents it from working correctly or produces an incorrect result. Bugs arise from mistakes and errors, made by people, in either a program’s source code or its design.”

Other definitions can be:
An unwanted and unintended property of a program or piece of hardware, especially one that causes it to malfunction.

or
A fault in a program, which causes the program to perform in an unintended or unanticipated manner.

Lastly, the most general definition of a bug is: “failure to conform to specifications”.

If you want to detect and resolve defects in the early development stages, defect tracking should start simultaneously with the software development phases.

We will discuss writing an effective bug report in more detail in another article. Let’s concentrate here on the bug/defect life cycle.

Life cycle of Bug:

1) Log new defect
When a tester logs a new bug, the mandatory fields are:
Build version, Submitted on, Product, Module, Severity, Synopsis, and Description (steps to reproduce).

You can add some optional fields to the list above if you are using a manual bug submission template.
These optional fields are: Customer name, Browser, Operating system, and File attachments or screenshots.

The following fields remain either specified or blank:
If you have the authority to set the bug Status, Priority, and ‘Assigned to’ fields, then you can specify them. Otherwise the test manager will set the status and priority and assign the bug to the respective module owner.
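As a rough illustration, the mandatory and optional fields above could be modeled as a simple record. The field names here are hypothetical, since real bug tracking systems define their own schemas:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class BugReport:
    # Mandatory fields supplied by the tester at logging time
    build_version: str
    submitted_on: str
    product: str
    module: str
    severity: str
    synopsis: str
    steps_to_reproduce: str
    # Optional fields (manual bug submission template)
    customer_name: Optional[str] = None
    browser: Optional[str] = None
    operating_system: Optional[str] = None
    attachments: List[str] = field(default_factory=list)
    # Fields usually set later by the test manager
    status: str = "New"
    priority: Optional[str] = None
    assigned_to: Optional[str] = None

bug = BugReport(
    build_version="1.2.0",
    submitted_on="2008-03-14",
    product="ExampleApp",
    module="Login",
    severity="Major",
    synopsis="Login fails with a 200-character password",
    steps_to_reproduce="1. Open login page. 2. Enter a 200-char password. 3. Submit.",
)
```

Note that status, priority, and assignee default to unset, matching the "specified or blank" rule above.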

Look at the following Bug life cycle:

[Figure: bug life cycle diagram (image not available)]

The figure looks complicated, but once you consider the significant steps in the bug life cycle you will quickly get an idea of a bug’s life.

Once a bug is successfully logged, it is reviewed by the development or test manager. The test manager can set the bug status to Open, assign the bug to a developer, or defer the bug until the next release.

When the bug is assigned, the developer can start working on it. The developer can set the bug status to ‘Won’t fix’, ‘Couldn’t reproduce’, ‘Need more information’, or ‘Fixed’.

If the status set by the developer is ‘Need more info’ or ‘Fixed’, QA responds with a specific action. If the bug is fixed, QA verifies the fix and can set the bug status to ‘Verified closed’ or ‘Reopen’.

Bug status description:
These are the various stages of a bug’s life cycle. The status captions may vary depending on the bug tracking system you are using.

1) New: When QA files a new bug.

2) Deferred: If the bug does not relate to the current build, cannot be fixed in this release, or is not important enough to fix immediately, the project manager can set the bug status to deferred.

3) Assigned: The project lead or manager sets the ‘Assigned to’ field, assigning the bug to a developer.

4) Resolved/Fixed: When the developer makes the necessary code changes and verifies them, he/she can mark the bug status as ‘Fixed’, and the bug is passed to the testing team.

5) Could not reproduce: If the developer is not able to reproduce the bug from the steps given in the bug report, he/she can mark the bug as ‘CNR’. QA then needs to check whether the bug is still reproducible and, if so, assign it back to the developer with detailed reproduction steps.

6) Need more information: If the developer is not clear about the reproduction steps provided by QA, he/she can mark the bug as ‘Need more information’. In this case QA needs to add detailed reproduction steps and assign the bug back to the developer for a fix.

7) Reopen: If QA is not satisfied with the fix and the bug is still reproducible, QA can mark it as ‘Reopen’ so that the developer can take appropriate action.

8) Closed: If the bug is verified by the QA team, the fix is OK, and the problem is solved, QA can mark the bug as ‘Closed’.

9) Rejected/Invalid: Sometimes the developer or team lead can mark a bug as Rejected or Invalid if the system is working according to the specifications and the bug is due simply to some misinterpretation.
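The life cycle described above can be sketched as a small state machine. The status names and allowed transitions below are an assumption modeled on this article; real bug trackers define their own:

```python
# Which status changes the life cycle above permits, keyed by current status.
ALLOWED_TRANSITIONS = {
    "New": {"Open", "Assigned", "Deferred", "Rejected"},
    "Open": {"Assigned", "Deferred", "Rejected"},
    "Assigned": {"Fixed", "Could not reproduce", "Need more information", "Won't fix"},
    "Fixed": {"Closed", "Reopen"},                 # QA verifies the fix
    "Could not reproduce": {"Assigned"},           # QA re-checks, reassigns with steps
    "Need more information": {"Assigned"},         # QA adds detailed steps
    "Reopen": {"Assigned"},
}

def change_status(current: str, new: str) -> str:
    """Apply a status change, rejecting transitions the life cycle doesn't allow."""
    if new not in ALLOWED_TRANSITIONS.get(current, set()):
        raise ValueError(f"Illegal transition: {current} -> {new}")
    return new
```

Encoding the transitions this way makes illegal moves (e.g. closing a bug that was never assigned) fail loudly instead of silently corrupting the tracker's state.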


Tips to design test data before executing your test cases

March 14, 2008

I have mentioned the importance of proper test data in many of my previous articles. The tester should check and update the test data before executing any test case. In this article I will provide tips on how to prepare the test environment so that no important test case is missed due to improper test data or an incomplete test environment setup.

What do I mean by test data?

If you are writing a test case then you need input data for any kind of test. The tester may provide this input data at the time of executing the test cases, or the application may pick the required input data from predefined data locations. The test data may be any kind of input to the application: any kind of file loaded by the application, or entries read from database tables. It may be in any format, such as XML test data, system test data, SQL test data, or stress test data.

Preparing proper test data is part of the test setup. Generally testers call this testbed preparation. In the testbed, all software and hardware requirements are set up using predefined data values.

If you don’t have a systematic approach for building test data while writing and executing test cases, there is a chance of missing some important test cases. A tester can’t justify a missed bug by saying that test data was not available or was incomplete. It’s every tester’s responsibility to create his/her own test data according to testing needs. Don’t rely on test data created by another tester, or on standard production test data that might not have been updated for months! Always create a fresh set of your own test data according to your test needs.

Sometimes it’s not possible to create a completely new set of test data for each and every build. In such cases you can use standard production data, but remember to add/insert your own data sets into the available database. One good way to design test data is to use the existing sample test data or testbed and append your new test case data each time you get the same module for testing. This way you can build a comprehensive data set over time.

How to keep your data intact for any test environment?

Many times more than one tester is responsible for testing a build. In this case more than one tester will have access to the common test data, and each tester will try to manipulate that common data according to his/her own needs. The best way to keep your valuable input data collection intact is to keep a personal copy of the same data. The data may be in any format: inputs to be provided to the application, or input files such as Word files, Excel files, or image files.

Check that your data is not corrupted:
Filing a bug without proper troubleshooting is a bad practice. Before executing any test case on existing data, make sure that the data is not corrupted and that the application can read the data source.

How to prepare data considering performance test cases?

Performance tests require a very large data set. Particularly if the application fetches or updates data from database tables, data volume plays an important role while testing such an application for performance. Sometimes creating data manually will not surface subtle bugs that may only be caught by actual data created by the application under test. If you want real-world data, which is impossible to create manually, ask your manager to make it available from the live environment.

I generally ask my manager whether he can make live environment data available for testing. This data is useful to ensure the smooth functioning of the application for all valid inputs.

Take the example of my search engine project’s ‘statistics testing’. To check the history of user searches and clicks on advertiser campaigns, data covering several years had to be processed, which was practically impossible to create manually for dates spread over many years. So there was no option other than using a live server data backup for testing. (But first make sure your client allows you to use this data.)
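When live data cannot be shared, one common fallback is generating large synthetic data sets. A minimal sketch follows; the file name and column names are purely illustrative, not from any real schema:

```python
import csv
import random
import string

def generate_search_log(path: str, rows: int) -> None:
    """Write `rows` synthetic search-log records to a CSV file."""
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["user_id", "query", "campaign_id", "clicks"])
        for user_id in range(rows):
            # Random 8-letter query string and plausible-looking counters
            query = "".join(random.choices(string.ascii_lowercase, k=8))
            writer.writerow([user_id, query,
                             random.randint(1, 500), random.randint(0, 20)])

generate_search_log("search_log.csv", 100_000)  # scale `rows` up for load tests
```

Synthetic data of this kind exercises volume and performance, but as noted above it will not reproduce every quirk of real production data.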

What is the ideal test data?

Test data can be said to be ideal if, with the minimum size of data set, all the application errors get identified. Try to prepare test data that exercises all application functionality without exceeding the cost and time constraints for preparing the data and running the tests.

How to prepare test data that will ensure complete test coverage?

Design your test data considering the following categories of test data sets:
1) No data: Run your test cases on blank or default data. See if proper error messages are generated.

2) Valid data set: Create it to check whether the application functions as per requirements and whether valid input data is properly saved in the database or files.

3) Invalid data set: Prepare an invalid data set to check application behavior for negative values, alphanumeric string inputs, etc.

4) Illegal data format: Make one data set in an illegal data format. The system should not accept data in an invalid or illegal format. Also check that proper error messages are generated.

5) Boundary condition data set: A data set containing out-of-range data. Identify the application’s boundary cases and prepare a data set that covers both the lower and upper boundary conditions.

6) Data set for performance, load and stress testing: This data set should be large in volume.

Creating separate data sets for each test condition in this way will ensure complete test coverage.
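The categories above can be sketched as a small data-driven check. Here `parse_age` is a hypothetical function under test, invented purely for illustration:

```python
def parse_age(raw):
    """Hypothetical function under test: parse an age string in [0, 130]."""
    if raw is None or raw == "":
        raise ValueError("no data")
    value = int(raw)                      # ValueError on an illegal format
    if not 0 <= value <= 130:
        raise ValueError("out of range")
    return value

def run(raw):
    """Return ('ok', value) or ('error', message) for one input."""
    try:
        return ("ok", parse_age(raw))
    except (ValueError, TypeError) as exc:
        return ("error", str(exc))

# One small data set per category from the list above
DATA_SETS = {
    "no data":        [None, ""],
    "valid":          ["25", "42"],
    "invalid":        ["-5", "abc"],
    "illegal format": ["12.5", "1e3"],
    "boundary":       ["0", "130", "-1", "131"],
}

for category, inputs in DATA_SETS.items():
    print(category, [run(raw) for raw in inputs])
```

Keeping the data sets in a named mapping like this makes it obvious at a glance which category a new input belongs to, and which categories are still empty.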

Conclusion:

Preparing proper test data is a core part of “project test environment setup”. The tester cannot pass on responsibility for a missed bug by saying that complete data was not available for testing. Testers should create their own test data in addition to the existing standard production data. Your test data set should be ideal in terms of cost and time. Use the tips provided in this article to categorize test data and ensure complete functional test case coverage.

Be creative; use your own skill and judgment to create different data sets instead of relying on standard production data while testing.

What is your experience?

Have you faced the problem of incomplete data for testing? How did you manage to create your own data? Share your tips and tricks for creating or using test data.

How to build a successful QA team?

What do we mean by a great software testing team?

“A team with a star player is a good team, but a team without one is a great team.” – Author unknown.

The above quote leads us to a discussion of great teams and their characteristics. This article stems from experience gained while working on different teams and observing team members’ behavior under time pressure, coupled with the complex nature of projects. It holds good for a software testing team, which occupies a prominent place in project activities and requires the right mix of people to perform them.

Why do some software testing teams fail while others succeed? Is there any solution to this problem? The answer is “Yes”/“No”: it depends on how each team member aligns himself towards the common goal of the team, not at the cost of suppressing his teammates’ interests, but by working together with a common understanding of the problem at hand.
Success also depends on the leadership attributes of the test lead, the “captain of the ship”.

The objective of this article is to help software test engineers, or anyone who believes in teamwork, to understand the characteristics of a high-performance team and how to cultivate them in their own teams.

The long-run success of a team doesn’t depend on the individual who is considered the “STAR”, but on all those who form the cluster of stars that makes a great team.

Characteristics of Great Software Testing Team

Initial stage – Ask yourself following question:

Does your new team member know the reason he has been selected for the team?

New members of a team are often puzzled about their presence in it. You may argue that they need not know the purpose and should just work on the tasks assigned to them; this is an assumption made by many higher-management people. Clearly defining roles and responsibilities helps individuals understand the project in a bigger context: the relevance of their job, the skills they can contribute to the project, and the common team goal defined earlier. This brings great commitment towards the work and hence contributes to its quality.

Ownership:

When project complexity increases in terms of tasks and team size, it is not possible for a single leader to keep track of individual tasks. The solution is to assign ownership to individuals. However, this virtual leadership often acts as an impediment rather than a solution if not handled appropriately. Merely appointing an individual as an owner, without seriously considering whether he/she can manage the team, will not bring the desired result.

Individuals acting as owners should have a mindset that matches the leader’s, and the pride to act as future leaders. These are people who can make a difference by carrying their team members along with them; the same people, by showing an indifferent attitude towards their team, can disintegrate it. The task of owners is not merely restricted to assigning tasks to team members, but includes understanding the task at hand and the situation from a much broader perspective, and bringing a common level of understanding among their team members. Supporting a team member in times of difficulty, a word of encouragement, correcting mistakes by acting not as a lead but as a peer, and acting upon ideas or advice from experienced members in the appropriate situations will certainly contribute to the shared goal. Collaboration and a solid sense of interdependency in a team will defuse blaming behavior and stimulate opportunities for learning and improvement.

Knowledge of seasoned players in the team

The term “seasoned players” indicates people who have spent a considerable amount of time on the same project or similar kinds of work. They are resources with vast knowledge of the project, and by channeling their knowledge in the proper way, the entire team can benefit. These individuals should show diligence towards others’ work rather than arrogance; it is commonly said that “past success breeds arrogance”. They are high performers whose absence would be felt in the team, but this should not be the sole criterion, as others of similar caliber have an equal chance to act in this position.

Motivation – Key Factor

Motivation is not all about giving a speech when the team is assembled; rather, every effort should be made to tailor the message to each individual, because each team member has unique qualities and a unique working style. This task is harder than it sounds for a test lead, since it requires effort on the leader’s part to sense each member’s feelings, not only about the tasks assigned to them but about the project as a whole. A positive attitude from the lead will energize the team; this is quoted from experience working in one great test team. If the leader complains about long working hours or insists that team members work to a schedule that is impossible to meet, the team will reflect that attitude. A true leader is one who, in spite of an unreasonable schedule, instills in team members the confidence to believe in their abilities, while at the same time working in the background to justify the team’s effort and win an extension to that schedule to make the team members’ job simpler.

Recognition

Everyone likes to be recognized for his/her work. When an individual is awarded for their work, it is the team lead’s responsibility to present the reason for the recognition in front of others, and the lead’s decisions on such matters should be impartial. This brings great respect for the awarded individual from other members of the team; they will act on similar grounds, and ultimately the team benefits from their collective response. Very often, members working for a virtual leader are not recognized because of their zero visibility to the leader of the team. It is the virtual leader who has to bring to the table the accomplishments and contributions made by team members. A virtual leader who takes care of the members of his team in this way is a future leader, well received by members who will always want to be associated with him.

One-One basis Meeting

It is often seen that roles and responsibilities for members are defined and assessment is done at the end of the project. Agreed, that is the formal process, but informal one-on-one talks add to this formal process as well. These informal meetings should address current issues that members won’t feel like conveying during group meetings, future opportunities for members, and identifying future leaders/owners of the team, with the lead acting equally on issues raised in the feedback. Timely and appropriately delivered feedback can make the difference between a team that hides mistakes and a team that sees mistakes as opportunities. The responsibility for poor performance is usually a function of the team structure rather than individual incompetence; yet it is individuals who are sent to training programs for fixing. If team members feel they are pitted against one another to compete for rewards and recognition, they will withhold information that might be useful to the greater team. When a team has problems, the effective team leader will focus on the team’s structure before focusing on individuals.

“Don’t tell people how to do things, tell them what to do and let them surprise you with their results.” – George Patton

Conclusion

There are plenty of things to consider while building a successful team. The key words, unity, trust, respect for others’ opinions, and acting without fear, are the ingredients of a great test team, and in general of any successful team. After reading this article, look at your team and ask yourself, “Am I working in a great test team?” and “Will I make every effort to build a great test team?” Then don’t wait; start now to build a “Great Software Testing Team”.

“Coming together is a beginning, Keeping together is progress, Working together is success”. – Henry Ford

Over To You!

What do you think from your experience? What are your criteria for building a successful QA team?

About the author: Sharath R. Bhat is a Software Test Engineer at Torry Harris Business Solutions, Bangalore, with more than three years of experience in software testing. He is an ISEB/ISTQB Certified Test Engineer and has worked in the Telecom, Finance, and Healthcare domains. His areas of technical expertise include testing web applications, client-server, data warehousing, and middleware applications built using “Kabira”.

Testing News

March 14, 2008

Some days ago I wrote a post on the “pay per bug” approach of charging clients on the basis of the number of bugs found. There are some distinct obstacles in this approach: how to validate bugs? How much should be paid per bug? Will core functionality and business logic bugs get caught in this model? There was a good discussion on this topic in the comment section, and many readers suggested key approaches to handle these obstacles.

In spite of these obstacles, uTest is now offering decent cash to testers to find flaws in customer applications. They are going to pay testers for every approved bug.

 

How will it work?
uTest is going to tie up with companies that don’t have dedicated QA teams and want to outsource their testing work. Such companies can use uTest’s services and tester community to get their applications tested. In return, the companies will pay uTest an amount based on the severity and priority of each bug, and this amount will ultimately be awarded to the testers who found the bugs.

Anyone can sign up with uTest to test software and make cash for finding bugs. Testers can earn from a few hundred to a few thousand dollars per month based on experience and performance. The bug pricing will be decided by the bug type, the type of application, and the severity of the bug.

What are the benefits?
Good cash benefits for your skills,
Work from anywhere,
Flexible working time (work whenever you want),
And last important benefit: Be your own boss ;-)

As of now, I think this is a win-win situation for customers who don’t have separate QA departments and for testers who want to earn some decent cash from their skills and experience.

Share your opinion on this in the comment section below.

Update: This is a pilot program from uTest, and only selected testers will get a response when this pay-per-bug system goes live. It is a beta program and right now I can’t predict its success, so wait and watch for the next update from them.


Career in software Testing

March 14, 2008

The questions most frequently asked of me to date are: “What is the future of the software testing business?” and “Should I consider software testing as my career option?” Now you don’t need to ask me these questions any more. See the good news below.

Infosys Chief Executive and Managing Director Mr. Kris Gopalakrishnan estimated that the global software testing business will reach $13 billion by 2010. Approximately half of this testing work will be outsourced to India, according to Mr. Gopalakrishnan. That’s great news for Indian software testers.

Mr. Gopalakrishnan was speaking at the inauguration of an international software testing conference in Bangalore organized by STeP-IN. The conference aimed to discuss various software testing opportunities and the future of the Indian test community.

Currently the Indian software testing community is the largest in the industry, and hence there is tremendous business competition among Indian corporations. This competition ultimately leads to quality work, which is helping India satisfy global customers. India has immense talent, and customers are coming to India for superior work quality.

Software testers and analysts are now a key part of any product team. Indian IT giants like Infosys derive up to 10 per cent of their revenue from software testing services, and that share is growing significantly each year.

Irrespective of the global or Indian software testing business, I have always suggested that candidates choose a career according to their interests. You can make a good career in any field if you have the interest and the goal to pioneer in that field. Without interest, no career option will work for you.

So tighten your belt: learn new software testing technologies, continuously update your knowledge, and don’t even worry about the future of the software testing market!

Website Cookie Testing

March 14, 2008

We will first focus on what exactly cookies are and how they work. It will be easier for you to understand the test cases for testing cookies once you have a clear understanding of how cookies work, how cookies are stored on the hard drive, and how cookie settings can be edited.

What is a Cookie?
A cookie is a small piece of information stored in a text file on the user’s hard drive by the web server. This information is later used by the web browser to retrieve information from that machine. Generally a cookie contains personalized user data or information that is used to communicate between different web pages.

 

Why are cookies used?
Cookies carry the user’s identity and are used to track where the user navigated throughout the web site’s pages. The communication between the web browser and the web server is stateless.

For example, if you access http://www.example.com/1.html, the web browser simply queries the example.com web server for the page 1.html. The next time you request http://www.example.com/2.html, a new request is sent to the example.com web server for the 2.html page, and the web server knows nothing about who was served the previous page, 1.html.

What if you want the previous history of the user’s communication with the web server? You need to maintain the user state and the interaction between the web browser and the web server somewhere. This is where the cookie comes into the picture. Cookies serve the purpose of maintaining user interactions with the web server.

How do cookies work?
Cookies are exchanged using HTTP, the protocol used to transfer files on the web. HTTP itself is stateless: it keeps no record of previously accessed pages. Cookies are the mechanism layered on top of HTTP that lets the server keep some history of previous browser-server interactions and thus maintain user state.

Whenever a user visits a site or page that uses cookies, a small piece of code behind that page (generally a call to a scripting language that writes the cookie, e.g. JavaScript, PHP, or Perl) writes a text file called a cookie onto the user's machine.
Here is an example of the Set-Cookie response header that instructs the browser to write a cookie:

Set-Cookie: NAME=VALUE; expires=DATE; path=PATH; domain=DOMAIN_NAME;

When the user visits the same page or domain at a later time, this cookie is read from disk and used to identify the repeat visit of the same user to that domain. The expiration time is set when the cookie is written and is decided by the application that uses the cookie.
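Python's standard library can build and parse headers of exactly this shape; the sketch below (domain and values are illustrative) shows both directions using `http.cookies.SimpleCookie`:

```python
from http.cookies import SimpleCookie

# Build a Set-Cookie style header like the one shown above.
cookie = SimpleCookie()
cookie["RMID"] = "1d11c8ec44bf49e0"
cookie["RMID"]["path"] = "/"
cookie["RMID"]["domain"] = ".example.com"
cookie["RMID"]["expires"] = "Thu, 31 Dec 2020 23:59:59 GMT"
header = cookie["RMID"].OutputString()
print(header)

# Parse a raw header back into structured fields.
parsed = SimpleCookie("NAME=VALUE; path=/; domain=.example.com")
print(parsed["NAME"].value)    # VALUE
print(parsed["NAME"]["path"])  # /
```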

Generally two types of cookies are written on the user's machine.

1) Session cookies: A session cookie is active as long as the browser that created it is open; when the browser is closed, the session cookie is deleted. Sometimes a session timeout of, say, 20 minutes is set to expire the cookie.
2) Persistent cookies: These cookies are written permanently on the user's machine and last for months or years.

Where are cookies stored?
When a web application writes a cookie, it is saved in a text file on the user's hard drive. The path where cookies are stored depends on the browser; different browsers use different paths. For example, Internet Explorer stores cookies at "C:\Documents and Settings\Default User\Cookies".
Here "Default User" is replaced by the currently logged-in user name, such as "Administrator" or "Vijay".
The cookie path can easily be found by navigating through the browser options. In Mozilla Firefox you can even view the cookies in the browser options themselves: open the browser, click Tools -> Options -> Privacy, then the "Show Cookies" button.

How are cookies stored?
Let's take the example of a cookie written by rediff.com in Mozilla Firefox:
When you open rediff.com or log in to your Rediffmail account in Firefox, a cookie is written to your hard disk. To view it, click the "Show Cookies" button mentioned above and select the rediff.com site in the cookie list. You can see the different cookies written by the rediff domain under different names.

Site: Rediff.com Cookie name: RMID
Name: RMID (Name of the cookie)
Content: 1d11c8ec44bf49e0… (Encrypted content)
Domain: .rediff.com
Path: / (Any path after the domain name)
Send For: Any type of connection
Expires: Thursday, December 31, 2020 11:59:59 PM

Applications where cookies can be used:

1) To implement a shopping cart:
Cookies are used to maintain online ordering systems: they remember what the user wants to buy. What if the user adds some products to the shopping cart but, for some reason, decides not to buy them this time and closes the browser window? On the next visit to the purchase page, the user can see all the products added to the cart during the last visit.
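As a sketch of the idea (the cookie name, expiry date, and cart items are invented for illustration), a cart can survive the browser being closed by serializing it into a persistent cookie:

```python
import json
from http.cookies import SimpleCookie

def save_cart(items):
    """Serialize the cart into a persistent cookie header."""
    c = SimpleCookie()
    c["cart"] = json.dumps(items)
    c["cart"]["expires"] = "Thu, 31 Dec 2026 23:59:59 GMT"  # far-future expiry
    return c["cart"].OutputString()

def load_cart(header):
    """Read the cart back on the user's next visit."""
    c = SimpleCookie(header)
    return json.loads(c["cart"].value)

header = save_cart(["book", "pen"])
print(load_cart(header))  # ['book', 'pen']
```

A real site would usually store only a cart ID in the cookie and keep the items server-side, but the round trip shown here is the mechanism being tested.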

2) Personalized sites:
When users visit certain pages, they are asked which pages they do or don't want to see. These preferences are stored in a cookie, and for as long as the user is online, the unwanted pages are not shown.

3) User tracking:
To track the number of unique visitors online at a particular time.

4) Marketing:
Some companies use cookies to target the advertisements displayed to users. Cookies control these advertisements: when and which advertisement should be shown, what the user's interests are, and which keywords the user searches on the site can all be maintained using cookies.

5) User sessions:
Cookies can track user sessions for a particular domain using a user ID and password.

Drawbacks of cookies:

1) Although writing cookies is a great way to maintain user interaction, if the user has set the browser to warn before writing any cookie, or has disabled cookies completely, a site that depends on cookies will be largely disabled and unable to perform its operations, resulting in lost traffic.

2) Too many cookies:
If you write too many cookies on every page navigation and the user has turned on the warn-before-writing option, the constant prompts could drive the user away from your site.

3) Security issues:
Sometimes a user's personal information is stored in cookies, and anyone who steals the cookie can gain access to that information. Corrupted cookies may even be readable by other domains, leading to security issues.

4) Sensitive information:
Some sites may write and store your sensitive information in cookies where it should not be stored, raising privacy concerns.

This should be enough to understand what cookies are. If you want more information, see the Cookie Central page.

Some Major Test cases for web application cookie testing:

The first obvious test case is to verify that your application writes cookies to disk properly. If you don't have a web application to test but want to understand cookie testing, you can also use a cookie tester application.

Test cases:

1) As a cookie privacy policy, verify from your design documents that no personal or sensitive data is stored in the cookie.

2) If you have no option but to save sensitive data in a cookie, make sure the data is stored in encrypted format.

3) Make sure there is no overuse of cookies on the site under test. Overuse will annoy users if the browser prompts for cookies frequently, which could mean lost traffic and eventually lost business.

4) Disable cookies from your browser settings: if your site uses cookies, its major functionality will not work with cookies disabled. Try to access the web site under test and navigate through it. Check that appropriate messages are displayed to the user, such as "For smooth functioning of this site, make sure cookies are enabled in your browser". No page should crash because cookies are disabled. (Make sure you close all browsers and delete all previously written cookies before performing this test.)

5) Accept/reject some cookies: the best way to check site functionality is not to accept all cookies. If your application writes 10 cookies, randomly accept some and reject others, say accept 5 and reject 5. To execute this test, set the browser to prompt whenever a cookie is about to be written, and accept or reject it at the prompt. Then exercise the site's major functionality and check whether pages crash or data gets corrupted.

6) Delete cookies: allow the site to write its cookies, close all browsers, and manually delete all cookies for the web site under test. Then access the pages and check how they behave.

7) Corrupt the cookies: corrupting a cookie is easy, since you know where cookies are stored. Manually edit the cookie in Notepad and change its parameters to vague values: alter the cookie content, name, or expiry date, and observe the site's behavior. In some cases a corrupted cookie lets other domains read the data inside it; this should not happen with your site's cookies. Note that cookies written by one domain, say rediff.com, cannot be accessed by another domain, say yahoo.com, unless the cookies are corrupted and someone is trying to hack the cookie data.

8) Check the deletion of cookies from your application pages: sometimes a cookie written by a domain, say rediff.com, is deleted by a different page under that same domain. This is the typical case when testing an 'action tracking' web portal: an action or purchase tracking pixel is placed on the action page, and when the user completes the action or purchase, the cookie on disk is deleted to avoid logging multiple actions from the same cookie. Check that reaching your action or purchase page deletes the cookie properly and that no further invalid actions or purchases are logged for the same user.

9) Cookie testing on multiple browsers: it is important to check that your application writes cookies properly on different browsers, as intended, and that the site works correctly with those cookies. Test your application on the major browsers, such as Internet Explorer (various versions), Mozilla Firefox, Netscape, and Opera.

10) If your application uses cookies to maintain a user's logged-in state, log in with a username and password. In many cases you can see the logged-in user ID as a parameter directly in the browser address bar. Change this parameter to a different value, say from 100 to 101, and press Enter. A proper access-denied message should be displayed, and the user should not be able to see another user's account.
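The server-side check this test case probes can be sketched as follows (the session store and function name are hypothetical): the account ID taken from the URL must be compared against the ID bound to the session cookie, never trusted on its own.

```python
# cookie value -> authenticated user id (hypothetical session store)
SESSIONS = {"abc123": 100}

def view_account(session_cookie, requested_user_id):
    """Serve an account page only to its authenticated owner."""
    user_id = SESSIONS.get(session_cookie)
    if user_id is None or user_id != requested_user_id:
        return "403 Forbidden"   # tampered URL parameter: deny with a message
    return "200 account %d" % user_id

print(view_account("abc123", 100))  # the legitimate owner
print(view_account("abc123", 101))  # URL changed from 100 to 101 -> denied
```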

These are some major test cases to consider when testing website cookies. You can derive multiple test cases from them by trying various combinations. If you have a different application scenario, mention your test cases in the comments below.

How to write a good bug report?

March 14, 2008 Comments off

Why a good bug report?
If your bug report is effective, the chances that it will get fixed are higher; fixing a bug depends on how effectively you report it. Reporting a bug is a skill, and I will tell you how to master it.

"The point of writing problem report (bug report) is to get bugs fixed" – Cem Kaner. If a tester does not report a bug correctly, the programmer will most likely reject it as irreproducible. This can hurt the tester's morale, and sometimes ego as well. (I suggest leaving ego aside: thoughts like "I reported the bug correctly", "I can reproduce it", "Why did he/she reject the bug?", "It's not my fault", and so on.)

 

What are the qualities of a good software bug report?
Anyone can write a bug report, but not everyone can write an effective one. You should be able to distinguish an average bug report from a good one. How? It's simple: apply the following characteristics and techniques when reporting a bug.

1) A clearly specified bug number:
Always assign a unique number to each bug report; this helps identify the bug record. If you use an automated bug-reporting tool, this unique number is generated automatically each time you report a bug. Note the number and a brief description of each bug you report.

2) Reproducible:
If your bug is not reproducible, it will never get fixed. Clearly mention the steps to reproduce it; do not assume or skip any step. A bug described step by step is easy to reproduce and fix.

3) Be Specific:
Do not write an essay about the problem. Be specific and to the point; summarize the problem in a minimum of words, yet effectively. Do not combine multiple problems even if they seem similar: write a separate report for each problem.

How to Report a Bug?

Use the following simple bug report template:
This is a simple bug report format. It may vary with the bug-reporting tool you are using. If you are writing the bug report manually, some fields, such as the bug number, need to be assigned manually.

Reporter: Your name and email address.

Product: The product in which you found the bug.

Version: The product version if any.

Component: These are the major sub modules of the product.

Platform: The hardware platform on which you found the bug, e.g. 'PC', 'Mac', 'HP', 'Sun'.

Operating system: All operating systems on which you found the bug, such as Windows, Linux, Unix, SunOS, or Mac OS. Mention specific versions where applicable, e.g. Windows NT, Windows 2000, Windows XP.

Priority:
When should the bug be fixed? Priority is generally set from P1 to P5, with P1 meaning "fix the bug with highest priority" and P5 meaning "fix when time permits".

Severity:
This describes the impact of the bug.
Types of Severity:

  • Blocker: No further testing work can be done.
  • Critical: Application crash, Loss of data.
  • Major: Major loss of function.
  • Minor: minor loss of function.
  • Trivial: Some UI enhancements.
  • Enhancement: Request for new feature or some enhancement in existing one.

Status:
When you log a bug in a bug-tracking system, its status is 'New' by default.
Later the bug goes through various stages such as Fixed, Verified, Reopened, and Won't Fix.

Assign To:
If you know which developer is responsible for the module in which the bug occurred, you can specify that developer's email address; otherwise leave it blank, and the bug will be assigned to the module owner, or the manager will assign it to a developer. Possibly add the manager's email address to the CC list.

URL:
The page URL on which the bug occurred.

Summary:
A brief summary of the bug, mostly in 60 words or fewer. Make sure your summary reflects what the problem is and where it is.

Description:
A detailed description of the bug. Use the following fields in the description:

  • Steps to reproduce: Clearly mention the steps to reproduce the bug.
  • Expected result: How the application should behave for the above steps.
  • Actual result: What actually happens when the steps are run, i.e. the bug behavior.

These are the important parts of a bug report. You can also add a "Report type" field to describe the type of bug.

The report types are typically:
1) Coding error
2) Design error
3) New suggestion
4) Documentation issue
5) Hardware problem
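Pulled together, the template above can be modeled as a simple record. This is just an illustrative sketch: the field names and defaults mirror the article, not any particular bug-tracking tool.

```python
from dataclasses import dataclass

@dataclass
class BugReport:
    """One bug per report, as the article recommends."""
    bug_id: int
    reporter: str
    product: str
    component: str
    summary: str                # ~60 words or fewer
    steps_to_reproduce: list
    expected_result: str
    actual_result: str
    priority: str = "P3"        # P1 (highest) .. P5 (when time permits)
    severity: str = "Major"     # Blocker/Critical/Major/Minor/Trivial/Enhancement
    status: str = "New"         # default status when the bug is logged

bug = BugReport(101, "tester@example.com", "OrderApp", "Checkout",
                "Cart empties after login on checkout page",
                ["Add item to cart", "Click checkout", "Log in"],
                "Cart retains item", "Cart is empty")
print(bug.status)  # New
```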

Some Bonus tips to write a good bug report:

1) Report the problem immediately: if you find a bug while testing, do not wait to write a detailed report later; write it immediately. This ensures a good, reproducible bug report. If you decide to write the report later, the chances of missing important steps are high.

2) Reproduce the bug three times before writing the report: your bug should be reproducible, and your steps should be robust enough to reproduce it without ambiguity. If the bug is not reproducible every time, you can still file it, mentioning its intermittent nature.

3) Test for the same bug in other similar modules:
Developers sometimes reuse code across similar modules, so a bug in one module is likely to occur in similar modules as well. You can even try to find a more severe version of the bug you found.

4) Write a good bug summary:
The summary helps developers quickly analyze the nature of the bug. A poor-quality report unnecessarily increases development and testing time. Communicate well through your summary, and keep in mind that the summary is used as a reference when searching the bug inventory.

5) Read bug report before hitting Submit button:
Read every sentence, word choice, and step in the report. Check whether any sentence creates ambiguity that could lead to misinterpretation; misleading words or sentences should be avoided for a clear bug report.

6) Do not use Abusive language:
It's nice that you did good work and found a bug, but do not use this credit to criticize the developer or attack any individual.

Conclusion:
No doubt your bug report should be a high-quality document. Focus on writing good bug reports and spend some time on this task, because the bug report is the main communication point between tester, developer, and manager. Managers should make their teams aware that writing a good bug report is a primary responsibility of any tester. Your efforts toward writing good bug reports will not only save company resources but also build a good relationship between you and the developers.

QA, Test engineer’s Payscale

March 14, 2008 Comments off

If you are working as a test engineer or a QA engineer, this might be shocking for you, especially my Indian friends.
See the salary chart below: QA/Test engineers' pay scale.

 

This is the salary for test/QA engineers by experience. It applies to those whose job is to design, implement, execute, and debug test cases and scripts, automate test cases, find bugs, defects, and regressions, verify fixes, and validate and document the completion of testing and development.

Desktop application testing, Client server application testing and Web application testing

March 14, 2008 Comments off

Each differs in the environment in which it is tested, and you progressively lose control over that environment as you move from desktop to web applications.

 

A desktop application runs on personal computers and workstations, so when you test it you are focusing on a specific environment. You test the complete application broadly in categories like GUI, functionality, load, and backend, i.e. the database.

In a client-server application you have two different components to test. The application is loaded on the server machine, while an application exe is installed on every client machine. You test broadly in categories like GUI on both sides, functionality, load, client-server interaction, and backend. This environment is mostly used on intranet networks, where you know the number of clients and servers and their locations in the test scenario.

A web application is a bit different and more complex to test, as the tester has much less control over the application. The application is loaded on a server whose location may or may not be known, no exe is installed on the client machine, and it has to be tested on different web browsers. Since web applications are supposed to be tested across browsers and OS platforms, a web application is tested mainly for browser compatibility, operating system compatibility, error handling, static pages, backend, and load.

Features of QTP

Features of QTP:

  • Ease of use.
  • Simple interface.
  • Presents the test case as a business workflow to the tester (simpler to understand).
  • Uses a real programming language (Microsoft’s VBScript) with numerous resources available.
  • QuickTest Pro is significantly easier for a non-technical person to adapt to and create working test cases, compared to WinRunner.
  • Data table integration is better and easier to use than WinRunner's.
  • Test run iterations/data-driving a test is easier and better implemented in QuickTest.
  • Parameterization is easier than in WinRunner.
  • Can enhance existing QuickTest scripts even without the "Application Under Test" being available, by using the ActiveScreen.
  • Can create and implement the Microsoft Object Model (Outlook objects, ADO objects, FileSystem objects, supports DOM, WSH, etc.).
  • Better object identification mechanism.
  • Numerous existing functions available for implementation – both from within QuickTest Pro and VBScript.
  • QTP supports .NET development environment
  • XML support
  • The Test Report is more robust in QuickTest compared to WinRunner.
  • Integrates with TestDirector and WinRunner (can kick off WinRunner scripts from QuickTest).
Categories: Features of QTP Tags:

Soft Skills for Testers

Have you been facing problems in interviews? Are you afraid of delivering a speech? Do you hesitate to speak in your company meetings? Do you have problems explaining your views to others? Do others disagree with you even when you are right?

If the answers to these questions are 'yes', then it's time to improve your communication skills. You should be proficient in all forms of communication: verbal, presentation, and written.

 

Poor communication generally leads to disagreements and misunderstandings. Even in a romantic relationship, if you are poor at communication, chances are high that you will break up with your boyfriend or girlfriend.

Good communication skills are a must for software testers. You might have seen this line in every job posting, especially for openings in the QA and testing field. Since testers must communicate with different project team members, including clients, communication skills play an important role. If you want to win arguments (I mean arguments that are right) and find common solutions to problems with your colleagues, you should be able to express your views effectively.

As part of the 'soft skills for testers' article series, I am sharing a detailed PowerPoint presentation on "How to improve communication skills".

Keep in mind these simple rules for effective communication:

  • Listen carefully when others are clarifying their thoughts. Don’t interrupt others in-between.
  • Do not speak too fast. Slow down while speaking.
  • Speak clearly. Your pronunciation should be loud and clear.
  • Make eye contact with whom you are speaking. This increases chances of mutual agreement.
  • Read, read and read. For better communication and effective words in your speech your vocabulary should be very strong. Reading more and more will increase your vocabulary.

Besides these 5 golden rules for effective communication, here is a PPT presentation on improving your communication skills.

Main topics covered in this PPT:
1) What makes a good communicator?
2) Process of communication
3) Active listening
4) Using non-verbal communication effectively
5) Presentation skill while appearing for an interview.

What are Passwords?

Passwords are strings of characters used to authenticate computer system users.

Computer users are normally asked to enter their username (or login name) and their password (or passphrase) before they are given access to a system.

If the person knows the username and the password, the computer system trusts that they are the account owner and grants them access to their data.

Selecting a good password

Choosing a good password is critical for personal security, requiring password crackers to take additional time and resources to get access to your personal information and computer credentials. A poor password creates a false sense of security, and may endanger your personal information, access to computer resources, or even allow another individual to spawn attacks and viruses using your personal credentials.

Password Construction

Password crackers have many tools at their disposal to cut down the amount of time it takes to crack your password. Selecting a secure password will help to ensure that the password cracker must take as much time as possible to guess or otherwise identify your password. No password is ultimately secure, but if it takes the password cracker longer to crack the password than it takes for the password to become useless, you will have succeeded in thwarting the cracker’s attack.

Insecure methods

  • Passwords should not be created using personal information about yourself or your family. A password cracker with incentive to break your personal password will use this information first, making these passwords the least secure passwords. Examples of bad passwords of this type are: your name, birthplace, nickname, family name, names of pets, street address, parents names, names of siblings and the like.
  • Passwords should not be formed of words out of any dictionary or book. Longer words do not generally add much protection. Using known words in any language allows the password cracker to take shortcuts in his password cracking schemes, allowing him to guess your password in a very small fraction of the time it would take otherwise. Examples of bad passwords of this type are: dragon, secret, cheese, god, love, sex, life and similar words.
  • Passwords should not be composed of proper nouns of places, ideas, or people. These words are commonly found in password cracker databases. Examples are: Jehovah, Tylenol, edutainment, Coolio, beesknees, transformers.
  • Passwords should not be simple variations of words. Although these passwords don’t appear in a book or dictionary, it is a simple matter to generate a replacement word list automatically. These passwords are more secure than the above two examples, but not significantly more secure. Examples of passwords of this type are drowssap, l0ve, s3cr3t, dr@gon, and similar word-like terms.
  • Passwords should not be a concatenation of two words commonly following each other in a sentence. These passwords are more secure than the above password concepts, but still fall far short for password security. Examples of these kinds of passwords are: whatfor, divineright, bigpig, ilove, farfetched, catspajamas.
  • Do not reuse recently employed passwords. If you find it difficult to pick a new password, wait until you have changed your password at least 5 times before reusing an old one, or 12 months if password changes are frequent.

Secure methods

  • Always change your password immediately if you feel that your password has been compromised. Always do this directly. Never follow links sent to you in email, through an instant messenger client, or from a phone call you received. Ask for administrative assistance if you have trouble changing your password.
  • Do not write your password down where others may find it. If you must write it down, ensure it is in a locked location that is only accessible to you. Hiding your password in places you feel it is unlikely to be found is not helpful. Password crackers have a criminal mind, and generally know where to look.
  • It is important that you change your password on a regular schedule, at least every six months. This assists you by throwing off any cracking efforts that might be in progress, but have not yet been completed. It also helps you if somehow you have compromised your password in some other way without knowing it.
  • Select passwords that use a mixture of capital letters, numbers, and special characters. Take heed however, some systems do not allow you to use some or any special characters. Make sure you check the password criteria for the system you are using ahead of time, if possible.
  • Use substitution of numbers for letters and letters for numbers in your passwords. Although this is not a primary method of securing your password, it will add another layer of security on top of a good password, and will prevent the accidental guess of your password due to circumstances.
  • Where it is not possible to use many characters in your password (less than 14), it is advisable to create a password by creating a passphrase, and selecting letters in a specific position in each word. An example of this is “jJjshnImn2”. As you notice, it’s unlikely that any cracker would guess this password; however, it is easy to remember when you note the passphrase “John Jacob Jingleheimer Schmidt, his name is my name too”. Notice the use of number substitution and capitalization in the password.
  • The best passwords are complete phrases if the system will allow them. They are sometimes called “passphrases” in reflection of this. For example, a good passphrase might be “I clean my Glock in the dishwasher.” You can also use number and letter substitution on passphrases as well. Longer passphrases generally mean better password security.
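The passphrase technique in the last two points can be sketched in a few lines of Python. The substitution table and word handling here are one illustrative choice among many; a real scheme would also vary capitalization and letter positions, as the hand-built "jJjshnImn2" example does.

```python
# Words that get replaced by look-alike digits; illustrative, not exhaustive.
SUBS = {"too": "2", "to": "2", "for": "4", "one": "1"}

def passphrase_to_password(phrase):
    """First letter of each word, with digit substitution for some words."""
    words = phrase.replace(",", "").split()
    return "".join(SUBS.get(w.lower(), w[0]) for w in words)

pw = passphrase_to_password(
    "John Jacob Jingleheimer Schmidt, his name is my name too")
print(pw)  # JJJShnimn2
```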

Password Secrecy

Passwords are useless if they are distributed to anyone other than their intended users. Below is a list of methods to keep your passwords private.

  • If you have a large number of passwords to remember, or you don’t feel you can remember important ones, you can use your computer to assist you in the storage of passwords. You can encrypt your password list with an acceptable master password using reliable encryption software. Many password managers are available for this purpose. For experienced users Gnu Privacy Guard and Pretty Good Privacy are free for individual use. Ensure you know how to use encryption properly; improper use of encryption technologies may defeat the whole purpose of using encryption in the first place. Seek help from an encryption expert, or purchase commercial encryption software if understanding is not forthcoming. Do not store your encrypted passwords, or your encryption keys, somewhere that another person may gain access to them.
  • Refrain from using the same password on multiple systems, especially systems that do not serve the same function. Never reuse a password from Internet forums, games, or other websites for any important account: it is trivial for the owners of those systems to extract your passwords if they are willing.
  • Never tell another a password through e-mail, instant messenger clients, chat rooms, forums or other shared environments. These conversations are almost never entirely private. Do not tell someone your passwords over a cell phone or cordless telephone, as these are insecure mediums for conversation, and may easily be monitored. If you must tell someone a password over a telephone land line, make sure the party you are speaking with is the only listener. You may want to validate that additional parties are not listening in by calling the original party on a number you know is owned by them.
  • Do not use shared passwords unless it is entirely unavoidable. Passwords shared between multiple users prevent the determination of which user performed which actions.
  • Of course, never tell your passwords to anyone. Once you tell someone else your password, you no longer have control over the scope of password knowledge. If you absolutely must share your account access to a computer system, change the password to a new one before sharing it, and change it back to its original form once the other users are done.

Two-Factor Authentication

The original password concept has been proven insecure. There have been cases where passwords were compromised without the user's knowledge, through coercion, or because the user was conned into revealing them. The core problem with legacy passwords is that it is very difficult or impossible for an administrator or a computer system to differentiate between a legitimate user and an illegitimate user gaining access with the same password. Because of this inherent flaw in the original password system, two-factor authentication was invented.

A password is "something you know", information understood to be known by a single individual. Two-factor authentication systems add another factor, "something you have": an electronic card key, electronic token, dongle, fob, or some other physical item you keep in a secure place when not in use. A common stand-in for this second factor when higher levels of security are needed is "something you are": a fingerprint, retina pattern, a person's weight, specific vital signs, or a combination of these is used in place of the electronic device. The biological factor has been found to be unreliable, not because it admits those who should be denied when used properly, but because it tends to deny legitimate users access due to sickness, physical body changes, or other physical impairments.

There are two common authentication methods when users employ electronic components for two-factor authentication: response-only and challenge-response systems.

Response-only systems require the user to present an electronic device to an electronic reading system, or to enter data displayed on the device without further input into the device. The user must provide a username or PIN that is not known to outsiders, and then enter the specific credential data generated by the electronic device when prompted. In many cases this mechanism degrades to single-factor authentication, where the user does not need to know anything but simply possesses the item in question. An example is the standard electronic card key used to enter a facility or building perimeter: the user need not provide any other factor to prove their identity.
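A widely deployed response-only scheme is the HOTP one-time password of RFC 4226, which the sketch below implements from the standard library. The device and the server share a secret and a counter, and the device displays a short code derived from both; the shared secret here is the RFC's published test value, not a real credential.

```python
import hmac
import hashlib
import struct

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """RFC 4226 HOTP: HMAC-SHA1 over the counter, dynamically truncated."""
    msg = struct.pack(">Q", counter)                    # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                          # low nibble picks the offset
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

# The server verifies by computing the same value for the expected counter.
secret = b"12345678901234567890"   # RFC 4226 test secret
print(hotp(secret, 0))  # 755224 (RFC 4226 test vector)
print(hotp(secret, 1))  # 287082
```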

Challenge-response systems require the user to enter a specific passphrase or PIN into the electronic device first, before the device responds with the proper credential data. This variant is always considered two-factor authentication, since the user must provide both “something they know” (the PIN) and use “something they have” (the electronic device).
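The challenge-response flow can be sketched as follows. This is an illustrative model only (the function names and the PIN check are invented here, not any specific product's design): the device releases a response only after the correct PIN is entered, and the response itself proves possession of the device's secret key.

```python
import hashlib
import hmac
import os

def device_respond(device_key: bytes, pin_entered: str, device_pin: str,
                   challenge: bytes) -> bytes:
    """Simulate the token: unlock with the PIN, then answer the challenge."""
    # Factor 1, "something you know": the device refuses without the right PIN.
    if not hmac.compare_digest(pin_entered, device_pin):
        raise PermissionError("wrong PIN")
    # Factor 2, "something you have": only the real device holds device_key.
    return hmac.new(device_key, challenge, hashlib.sha256).digest()

# Server side: issue a fresh random challenge and verify the response.
device_key = os.urandom(32)    # shared with the device at enrollment time
challenge = os.urandom(16)     # fresh nonce per login attempt
response = device_respond(device_key, "4921", "4921", challenge)
expected = hmac.new(device_key, challenge, hashlib.sha256).digest()
assert hmac.compare_digest(response, expected)
```

A fresh challenge per attempt means a captured response cannot be replayed, which is the practical advantage of this variant over response-only schemes.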

Both response-only and challenge-response systems can be defeated if the user reveals the private information they keep secret, such as their username or PIN code, and the attacker also takes possession of the electronic device. Due to this weakness, the biological factor was introduced.

Biological factors have been in use for several decades and have proven to be reliable and secure ways to prevent unauthorized users from gaining access to secure systems or environments, regardless of the privacy of the passwords used. Systems monitor fingerprints, retina patterns, weight, ambient temperature, and other biological signs to determine the authenticity of the user requesting access. Movies tout methods of defeating these systems by cutting off body parts, using retinal masks, or forcing legitimate users to bypass the authentication mechanisms for the attacker. These are largely Hollywood schemes and rarely work in the real world. In most cases where this level of security is required, local or remote monitoring of entry points by cameras and security personnel is common; deadlock portals, remotely activated magnetically controlled entranceways, and visual identification are the norm.

Many simple methods have been devised to defeat weakly designed biological-factor systems, so be sure to thoroughly test the security measures you plan to put in place before implementation.

Categories: Passwords

Developing a Test Strategy

Overview
This is the first of two documents on testing. The first was published in February 2006 and the second will be published in March. The purpose of these documents is to provide an outline of how to take the concept of User Acceptance Testing and turn it into a tested product ready to go live. There are a number of steps to be undertaken along the way, and this white paper provides a guide as to how and why they should happen in a certain sequence.
As with most undertakings, there is no single right way. We do not promote this as the only way to do the work; we promote it, however, as the work you should do to improve your chances of getting it right.
Unfortunately, many organisations want to make commercial gain from a relatively straightforward process. To do this they create a world of jargon that you need to attend one of their training programs to understand. They create tools that support only their way of doing business. They lock you into a path that requires consulting support. By following the process outlined here, training, consulting and tools become options rather than mandatory.

Testing Steps
Looking at UAT from a high level, there are a few basic steps that need to be undertaken:

 

  • Test Strategy – Decide how we are going to approach the testing in terms of people, tools, procedures and support.

  • Test Scenarios – Identify the situations we want to test.

  • Test Scripts – Define the actual inputs we will use and the expected results.
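To make the Test Scripts step concrete: a test script pairs specific inputs with expected results. A minimal sketch in Python, using a hypothetical accounts-receivable function `apply_payment` (invented here purely for illustration):

```python
# Hypothetical function under test: applies a payment to an invoice balance.
def apply_payment(balance: float, payment: float) -> float:
    if payment < 0:
        raise ValueError("payment must be non-negative")
    return round(balance - payment, 2)

# Each test script row pairs concrete inputs with an expected result.
test_script = [
    # (description,                balance, payment, expected)
    ("partial payment",            100.00,  40.00,   60.00),
    ("exact payment clears debt",  100.00,  100.00,  0.00),
    ("overpayment creates credit", 100.00,  120.00,  -20.00),
]

for description, balance, payment, expected in test_script:
    actual = apply_payment(balance, payment)
    status = "PASS" if actual == expected else "FAIL"
    print(f"{status}: {description} (expected {expected}, got {actual})")
```

Writing the expected result down before running the test is the point of the script: the tester compares actual against expected rather than judging the output after the fact.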

Test Strategy
Why do a Test Strategy? The Test Strategy is the plan for how you are going to approach testing. It is like a project charter that tells the world how you will approach the project. You might have it all in your head, and if you are the only person doing the work, that might be OK. If, however, you do not have it all in your head, or if others will be involved, you need to map out the ground rules.
Here are some of the things that need to be covered in a Test Strategy. You could use this as a template for your own strategy.

Project Name

Overview
Testing stage
Instructions: Identify the type of testing to be undertaken.
Example: User Acceptance Testing

Scheduled for
Example: 01.04.06 to 15.04.06

Location
Example: Testing will be carried out in the Test Center on Level X

Participants
Instructions: Identify who will be involved in the testing. If resources have not been nominated, outline the skills required.
Example:
Testing Manager – J. Smith
2 Testers – To be nominated. The skills required are:
• Broad understanding of all the processes carried out by the accounts receivable area.
• Familiarity with the manual processes currently undertaken for reversing payments.
• Preferably, time spent dealing with inquiries from customers over the phone.
• Etc.

Categories: Test Strategy

Negotiation Skills

Complex sales situations should be handled by sales professionals who know how to navigate the challenge. Negotiate Success provides proven methods for overcoming objections without relying on price as the solution. This program is a non-manipulative, customer-focused process for ensuring both sides win, which leaves the company in a stronger position for future opportunities.

Negotiation is a natural process in business; both sides should be fully prepared and, if possible, should enjoy the process. Through good negotiation it is possible for both sides to come out of a deal happy.

The Successful Negotiators Plan

Each negotiation, if done properly, is concerned with trading concessions against each other. There are usually more issues and variables available for such trading than is immediately obvious. A good negotiator considers all the possible variables before the meeting, calculates or estimates what each will cost, then decides which ones he or she prefers to use and which others he or she would be prepared to use if it came to the crunch.

It cannot be emphasized too strongly that the essence of good negotiating lies in obtaining concessions from the other party that totally or largely compensate for the ones you have extended.

What variables can I use?

  • price

  • discount or rebate

  • bonuses

  • delivery times

  • financing arrangements

  • training

  • packaging

  • spare parts

  • deposit arrangements

  • balance arrangements

  • credit terms

  • guarantees

There are many other variables, and you will undoubtedly be able to produce a core list of variables for most of your common negotiations.

  • What would each one cost us at different levels of business? What will it cost them?

  • What would each one be worth to us? What would it be worth to them?

These are the two crucial questions in effective negotiation, because they start the thought process: what is cheap for me to give but valuable to the one who gains it? And what do I value that is cheap for them to agree to? Once you can answer these questions quickly, you will realize that negotiating is as much an art as a science.

Different Styles of Negotiation

There are a variety of negotiation styles, depending on the circumstances. Where you do not expect to deal with people again, and you do not need their goodwill, it may be appropriate to play hardball. Here you seek to win the negotiation while the other person loses out. Many people go through this when they buy or sell a house, which is why buying a house can be such a confrontational and unpleasant experience.

Similarly, where a great deal is at stake in a negotiation, for example in large sales negotiations, it may be appropriate to prepare in detail and use gamesmanship to gain advantage.

These approaches are usually wrong for resolving disputes within a team. If one person plays hardball, it puts the other person at a disadvantage. Similarly, using tricks and manipulation during a negotiation can severely undermine trust, damaging subsequent teamwork. While a manipulative person might not be caught out if negotiation is infrequent, this is not the case when people work together day by day. Honesty and openness are the best policies in team-based negotiation.

Preparing for the Successful Negotiation

Depending on the scale of the disagreement, some level of preparation may be appropriate for conducting a successful negotiation. For small disagreements, excessive preparation can be counter-productive because it takes time that is better spent on reaching team goals. It can also be seen as manipulative because, just as it strengthens your position, it weakens the other person's.

If a major disagreement needs to be resolved, preparing thoroughly for it is required and worthwhile. Think through the following points before you start negotiating:

  • Goals:
    What do you want to get out of the negotiation? What do you expect the other person to want?

  • Trading:
    What do you and the other person have that you can trade? What does each of you have that the other wants? What might you each be prepared to give away?

  • Alternatives:
    If you do not reach agreement, what alternatives do you have? Are they good or bad? How much does it matter if you do not reach agreement? Will failure to reach agreement cut out future opportunities? What alternatives might the other person have?

  • The relationship:
    What is the history of the relationship? Could or should this history affect the negotiation? Will there be any hidden issues that might influence the negotiation? How will you handle them?

  • Expected outcomes:
    What outcome will people be expecting from this negotiation? What has the outcome been in the past, and what precedents have been set?

  • The consequences:
    What are the consequences for you of winning or losing this negotiation? What are the consequences for the other person?

  • Power:
    Who holds the power in the relationship? Who controls the resources? Who stands to lose the most if agreement is not reached? What power does the other person have to deliver what you hope for?

  • Possible solutions:
    Based on all these considerations, what possible compromises might there be?

How to Negotiate

When preparing for an important negotiation, be sure to invest the time it takes to answer the following questions:

  • What are the other team’s “hot buttons”? What kinds of facts, tactics, or evidence would they perceive as convincing, meaningful, or powerful?

  • What do they hope to achieve from the negotiation at the company level, the group level, and the personal level?

  • What can you learn from the other team’s previous negotiations?

  • What are the other team’s needs, and how can you gather information about those needs?

  • Who are all the interested parties in the negotiation?

  • Are there any penalties associated with the negotiation, such as a penalty for bluffing?

  • What time limits, disclosed and undisclosed, are associated with the negotiation?

  • Who wants change, and who wants to keep things the way they are?

  • What are the best means of communication between the two teams?

  • What is the cost of stalemate for their team and for yours?

  • What options do you have if you fail to reach a negotiated agreement?

  • How well have you thought through those options?

Negotiation is not a pure art form: the more you practice, the better you become at it. Do your homework and feel confident. Know your worth. Doing it right requires preparation, studying comparable situations, role playing, and getting the other side’s perspective.

How to Negotiate

Before reaching the negotiation stage of selling a business, a lot of hard work must have been carried out on both sides. The vendor should ensure the company is totally ready for sale, and any potential purchaser must have carried out due diligence.

Negotiations can be complex and time-consuming, and more often than not they break down, sometimes at a very late stage; this can be very stressful for both parties. That said, if the right approach is taken by both parties from the outset, there is a better chance of striking a deal that both parties are happy with.

Prior research and due diligence are always invaluable during any negotiation process. They show the vendor that you have a true picture of the company, which can be used to strengthen your bargaining position. A good purchaser will attempt to discover any weaknesses in the company so they can be exploited; on the other side, a good vendor will attempt to highlight the company's strengths.

Basic psychology is often used in the negotiation process. A common tactic is for the purchaser to try to understand the aspirations of the vendor. Most people become emotionally attached to their business and may have personal friends among the staff; understanding this is an important tool for the successful negotiator. A good tactic for the vendor is to highlight how well the particular sector is growing year by year, how well the economy is doing, and how well the purchaser will fare in the future.

Negotiating Your Value

Each new job offer or performance review is an opportunity to negotiate base salary, bonuses, benefits, stock options, and various other incentives that add to job satisfaction and, ultimately, provide more financial security. You need to take control of your job search before the new job offer, and plan ahead of time for the annual performance review, to reach the ultimate goals of financial security and happiness. So are you prepared to negotiate?

The first step in negotiating is making the decision to reach your goal. Once you have made that decision, you need to plan your approach, gather supporting information, consider alternatives and viewpoints, communicate specifically, and understand your strengths and weaknesses. You should be able to respond effectively to the negotiating party, and knowing your competition will enable you to bargain for your position more efficiently and accurately.

Tips to Focus on During Research and Negotiation:

  • Be persuasive
    It is hard to force your boss to increase your compensation, and trying to do so can damage your working relationship. Think of the process instead as trying to convince him that it would benefit the organization to pay you more.

  • Aim high, and be realistic
    Researchers have found a strong correlation between people’s aspirations and the results they achieve in negotiation. At the same time, you want to suggest ideas to which your boss can realistically say yes.

  • Start with the right tone
    Let your boss know that you will listen to and try to understand his views. At the same time, you expect him to do the same for you, so that you can work together to address the issue. Avoid ultimatums, threats, and other coercive behavior.

  • Clarify your interests
    Your compensation should satisfy a range of needs, not just salary. Make sure you have thought about other points of value to you as well, such as profit sharing, stock options that vest immediately, a bonus, greater work responsibilities, a quicker promotion schedule, increased vacation, or flexible hours.

  • Anticipate your boss’s interests
    Just like you, your boss has needs and concerns. To get him to say yes, your ideas will have to address the things that are important to him.

  • Create several options
    Joint brainstorming is the most effective way to find ideas that satisfy everyone’s interests. It works best when you separate it from commitment: first create possible solutions, and only later decide among them.

  • Focus on objective criteria
    It is far easier to persuade someone to agree with your proposal if he sees that the proposal is firmly grounded in objective criteria, such as what similar firms pay people of comparable experience, or what others in the company make.

  • Think through your alternatives
    In case you cannot persuade your boss to say yes, you need a backup plan. Part of preparation is creating a specific action plan so you know what you will do if you have to walk away from the table.

  • Prepare thoughtfully to achieve your goals
    This is the only aspect of your negotiations that is completely within your control. To take advantage of all the above advice, you must invest a significant amount of time and energy.

  • Review to learn
    The only way to really improve your ability to negotiate is to learn explicitly from your experiences. After you finish negotiating, reflect on what you did that worked well and what you might want to do differently. Ultimately you will succeed in achieving the goals of financial security and happiness!

Negotiation Strategies to Maximize Your Salary Offer

The best approach for negotiation within a team is the win-win approach, i.e. one in which both parties feel positive about the situation when the negotiation is concluded. This helps to maintain a positive working relationship afterwards.

This governs the style of the negotiation. Histrionics and displays of emotion are clearly inappropriate, because they undermine the rational basis of the negotiation and bring a manipulative aspect to it.

Despite this, emotion can be an important subject of discussion. For a team to function effectively, the emotional needs of team members must be fairly met. If emotion is not discussed where needed, the agreement reached can be unsatisfactory and temporary. Be as detached as possible when discussing your own emotions. Perhaps it would be best to discuss your emotions as if they belonged to someone else.

Always take some time to consider a salary offer. Ask for at least 24 to 48 hours. Silence is golden, or can become so, when you just let it hang for a while following the initial offer. Do not rush to fill a quiet void.

Weigh any offer against the company's expectations of you in the position rather than your own needs. The company has put itself on the line with its offer. Rest assured they have a cap, but you may have some wiggle room based on how much value the company perceives you could bring them.

Prior to any job interview, compare salaries for similar positions. Websites such as http://www.salary.com offer tools for this research. Knowing your own worth, and why the company would want to hire you, gives you bargaining power. The salary offer itself is testament to the fact that the company perceives your value.

Write a “counter-offer” letter thanking the company for its offer, recapping why they say they want you, and enthusiastically proclaiming your desire to join their team provided they reconsider the amount of the offer. Accept the risk involved with this approach, and be prepared to walk away if it does not work.

Know when it is no longer in your best interest to keep negotiating, and move on to the next opportunity. Usually, if a situation does not feel quite right, it is not. You will not be happy working anywhere you feel you are the proverbial square peg in a round hole, especially if you feel you were taken advantage of.

Corporate Communication Skills

Corporate communication involves much more than motivating employees and dispensing good PR. It represents a tool to be leveraged and a process to be mastered. The Power of Corporate Communication shows managers and executives how to communicate effectively with fellow employees, from the mailroom to the boardroom, between organizations, and across industries. Fully accessible and refreshingly nonacademic, it creates an easy-to-follow map of the world of corporate communication, with workplace-tested approaches for addressing common challenges. Written by two leaders in today’s corporate communication field (Paul Argenti is the author of 1994’s Corporate Communication), The Power of Corporate Communication is replete with careful analyses, real-world examples, and case studies from leading organizations including Sony, Coca-Cola, and GE.

Effective corporate communication requires a carefully formulated and implemented program, one that both crafts the corporation's image and protects that image when problems arise. The Power of Corporate Communication is today's most straight-talking guide to mastering the art and leveraging the power of corporate communication.

It covers the key components of a corporate communication function; methods for managing multiple constituencies and delivering consistent, relevant messages; crisis communication tactics; and the dangers of creating “spin” rather than facing problems head-on.

A successful communication program is central to everything an organization accomplishes, or hopes to accomplish. Let The Power of Corporate Communication provide you with the tools you need to establish and maintain a corporate communication program that gives you a strategic advantage.

Left unaddressed, issues of corporate communication can come back to haunt a company; when addressed, they can extend its success.

Organization Communication

Communication is widely accepted as a strategic business tool. Just as an accountant knows how to apply math skills to financial management, and general counsel applies knowledge of the law to protecting the business's interests, today's communicator should know how to use communication skills to solve business problems.

How do you know whether your organization uses communication strategically? Ask yourself the questions below:

Does the organization have a strategic communication plan that explains how communication activities will help it succeed? If not, you are likely to get a minimal return on investments in things like the company intranet, newsletter, and even e-mail. Corporate communication should be funded for a reason: to help the company achieve its goals. If you see no evidence that communication has a positive impact on the business, you are likely wasting time and money.

If you have a communication department or staff person, are their goals aligned with the company's goals? Communication goals should focus on business outcomes, not tactical outputs. An outcome-oriented goal is to educate employees about the company's top three objectives for the year. An output-oriented goal is to produce 12 issues of the company newsletter this year.

Does the communication staff consider its “customer” to be the leaders of the business? If the communication function exists solely to satisfy employees, customers, reporters, or any other audience, the staff's energies are misdirected. Business leaders set the goals and priorities; achieving them requires the efforts of employees, the loyalty of customers, and sometimes the cooperation of other groups. Influencing those groups in ways that help the business is the job of the strategic communicator.

Is communication among the people and groups in the organization managed well? If systems do not exist to enable communication, and processes do not work, then communication is not a strategic tool. A business would not tolerate a shoddy inventory control system or a broken manufacturing process. Why should it tolerate communication processes that do not work?

Is communication a two-way activity in the organization? To be used most effectively, communication cannot be top-down. Business leaders should engage employees, customers, suppliers, shareholders, and other groups in dialogue. In addition, people in these key groups should be able to initiate a dialogue, not just respond when prompted.

Notice that none of the questions above asks about grammar or spelling. Those are just tools of the trade; unless the business applies them to help it succeed, buying them is foolish.

Corporate Benefits

In today's global economy, English is the international language of business. Some 750 million people speak English in professional settings, and roughly one billion more are in the process of learning it as a second language. Clear, confident English communication skills are now, more than ever, critical for success in global organizations.

CommuniCorp International understands this vital need and has designed its services to provide results and long-term benefits for corporate clients.

Boost Employee Confidence

Can you afford to have your staff suffer from a lack of confidence?
By giving non-native English speakers the ability to speak clearly, articulate their thoughts, and express their ideas precisely, we increase each client's self-confidence. The client's sponsoring organization, in turn, benefits from increased productivity and a greater passion for success.

Increase Customer Satisfaction

You spend a lot of time and resources acquiring and retaining customers. Can you afford to upset even one of them with a disastrous misunderstanding?
Exemplary customer service is a foremost goal of the modern global corporation. If your company has multinational employees or non-native English-speaking executives in client-facing roles, clarity of communication is vital.

Avoid The Worst Nightmare

Have you ever imagined that a single miscommunication could damage your company beyond repair?
A single incorrectly spoken or mispronounced word can cost a company valuable time and money. By training employees and executives to communicate clearly, your firm benefits by avoiding the kinds of huge losses experienced, at worst, by the aviation industry.

Creative Cooperative Teams

Two things define a successful work team: good problem-solving skills and strong collaborative ability. What holds so many teams back from achieving these qualities? Often it is the quite natural human tendency to feel that “my way is the only way.” Flexibility is the first requirement for a good team player. With this quality firmly in place, team members can listen better, build on each other's ideas better, and ultimately make better decisions. A great team is more than the sum of its parts, as long as the parts know it.

Attendees are then led through a series of exercises that let them experience the basic principles of creativity and cooperation:

Communicate well. Good communication involves both sending and receiving information clearly. We are good “receivers”, and get more mistake-free messages, if we keep our attention focused outward. We are good “senders” if we make a habit of checking in to ensure that our message was clear.

Share even the bad ideas. The second inviolable rule, and a must for brainstorming sessions. Since all creative people have many lousy ideas before they get to a good one, it stands to reason that the faster you dig, the sooner you hit pay dirt. Your team does not have to act on the lousy ideas, but each member does have to share them.

Accept and handle conflict. Like stress, conflict is inevitable. Like stress, it can be both positive and negative. The positive dynamics of conflict include increased energy and attention; the negative ones include resistance and defensiveness. Remaining flexible during times of conflict is key. Although this does not come naturally to most people, simple secrets exist to help us do it.

Focus on your partner, not yourself. The indispensable trait of good teams is cooperation, and the indispensable trait of cooperation is outward focus: the willingness to be as interested in other people's thoughts and ideas as in your own. The whole team may ultimately decide to go with an idea you brought up in the first place, but during a brainstorming session that is not the point. Listening to your team members is the point. Put another way: having ideas makes you a good team player; clutching them fiercely to your chest does not.

Corporate Communication Encodes and Promotes:

  • A strong corporate culture

  • A coherent corporate identity

  • A reasonable corporate philosophy

  • A genuine sense of corporate citizenship

  • An appropriate and professional relationship with the press

  • Quick, responsible ways of communicating in a crisis

  • An understanding of communication tools and technologies

  • Sophisticated approaches to global communications

 

Leadership Skills

Leadership incorporates many diverse skills and qualities, and for many people it does not come naturally. A good leader is one who can find the balance between managing, disciplining, delegating, instructing, encouraging, and sympathizing. A good leader strives to accomplish the goal common among most leaders: to make people believe in your vision, make them follow you, and do whatever it takes to get the job done.

As a manager/leader, all eyes are on you. You represent the company: choose your words carefully, be a confident leader, and make your presence known without being haughty or condescending. No one wants to feel as if they are at the bottom of the totem pole, so as the leader, relate to your employees, try to be on their level, and set an example. People like to emulate leaders, and if you are conscious of your actions, people will follow your lead and strive to reach the top. All parties profit.

Since leadership is not inherent in most people, a wise leader works with others to help them develop their own leadership skills and style; education is a key factor in successful leadership. It is no easy task to get everyone working together amicably toward a common goal, but it can be done. A good leader sets the tone for each project and helps motivate the team to do the best it can. A good leader also knows to give credit to the people who have done a good job.

Leadership Qualities

A leader should have:

  • A vision of where he or she is going.

  • The ability to communicate well with his or her followers.

  • A shepherd’s heart for the people who work under him or her.

  • An understanding of the role of management as it relates to leadership.

  • A deep desire to keep learning new things.

  • The resolve to handle the pain that comes with leadership.

  • The anointing of the Spirit of God in his or her life.

Leadership Development Model

We live in a paradoxical world. On one level, we are expected to learn our entire lives, with many of us spending 16 or more years getting a formal education. But nowhere in this process does anyone teach us how to learn and recall things, so even when we are exposed to sound ideas, most of them are forgotten.

To make matters worse, very few of us know how to turn useful theory into real-world skills. The traditional training model still in common use assumes that if you practice once, you know how to do the thing. The real world does not work like that: you have to practice many times before skills are mastered.

A model stresses that the leadership skill development is learning how to master the process of skills development. To do that, we need a proper mixture of the feedback, the motivation, the practice and the theory.

We will provide the Skills Development Model so we can more quickly develop the expertise. It will minimizes the frustration by using the combination of sound theory combined with the repeated practice to have the individuals perfect skills and the behaviors.

Leadership Development Model Description

Theory

Skill development is only as good as the theory behind it. In other words, before you can practice, you have to know and understand intellectually what has to be done.

Say you have a new fax machine. You have explained a little about how the machine works: bits, bytes, Group 2 and Group 3, dedicated lines and digital. In fact, you have only explained a process to your trainee. Do you say, "Here, you know, do it," and simply walk away, or do you actually show them? It is often better to show people what to do with a demonstration before the dreaded words, "You try."

Practice

Some leadership skill development efforts take a few minutes; others take hundreds of hours. In the hundreds-of-hours category is becoming a persuasive speaker. Even those with great aptitude, blessed with a large dose of talent, must practice endlessly to get really good.

On the other hand, learning how to make a positive first impression takes less than 30 minutes, since the theory is not that complicated. However, one should still practice a number of times to get good at it.

Motivation

Standard assessments are very good at measuring some things, but they cannot measure motivation. In our development model, a person who is serious about leadership skill development must be internally motivated to perfect the component skills required to be a transformational leader. It is unrealistic to expect any teacher or coach to motivate an apathetic or lazy person to excel at leadership development.

Feedback

A management professor once remarked that "development of leadership skills requires feedback." Unfortunately, behavioral feedback is rarely provided in most training programs.

There are two ways to get feedback: do it yourself, or let others do it for you. We recommend skilled coaches who can provide both positive and negative feedback.

Self-Mastery

This is the term for the perfection of "becoming all that you can be." It is a special state of mind in which the developed skill runs largely in the unconscious.

Leadership Passion Propulsion

Passion is a great motivator. It gives ultimate meaning to all your actions. Being fiercely passionate about your goals and targets gives you an edge and moves you an inch closer to a leadership position.

Step 1 – Define your passion:

What fires you up? For some people, the answer to this question is obvious. For others, it is a little more difficult.

If you find it difficult to give a definite answer, set aside 30 minutes to answer the following three questions:

  • What do I want my life to be like when I am 60?

  • What should I have accomplished 5 years from now?

  • What three things would I want to do if I had only 6 months to live?

Each question may have several answers. Choose the top three answers for each of the questions above.

Out of the nine goals you have identified, pick the three that look most important to you. Obviously, these three goals matter deeply to you, and you should be naturally passionate about achieving them. If not, you need to set goals on a grander or more beneficial scale!

Step 2 – Harness the Passion Energy

Once you have set out your inspirational goals, work out what you need to do to achieve them.

Identify the key information and training you need to achieve them effectively, and think through the tools you may need and the people whose support you will need along the way.

Make a professional, rational, well-thought-through plan. Then use this plan to turn the goals into reality.

Ten Skills of Leadership

Getting and giving information is probably the most important competency required of leaders. If you cannot communicate effectively, no other leadership skill will compensate for the lack. First and foremost, you must be able to exchange information effectively and accurately.

Understanding the Group Needs and Characteristics

It is essential that we first understand ourselves and our own needs and characteristics. Only then can we know and understand other people's needs and characters. This understanding will hopefully come naturally as we mature, creeping over us like ivy winding about a tree. By directly exploring and encouraging the discovery of these personality traits, we can accelerate the maturing of the leader, adding fertilizer to the ivy and the tree.

Knowing and Understanding the Group Resources

  • Recognize knowledge and use of group resources as a major technique for bringing the group together and creating commitment to common goals.

  • Recognize that resources are theoretically limitless, and that the leader's and the group's ability to recognize and utilize diverse resources tremendously affects what the group can accomplish.

  • Involve more people in active leadership by giving each of them a part according to his or her resources.

  • Evaluate the impact that the availability of resources has on doing the job and maintaining the group.

Controlling a Group

  • Recognize how one's own behavior influences and can control others.

  • Distinguish between controlling group performance and setting an example.

  • Identify control as a function of the group, or of the facilitator, and the advantages and disadvantages of each having this responsibility.

  • Identify different techniques for controlling group performance and their suitability in different situations.

  • Deploy group resources in the best interests of the group while encouraging personal growth.

  • Evaluate leadership performance in terms of group performance.

Counseling

Counseling is a private talk with someone that helps the individual solve a personal problem. As a leader, people will come to you with problems. Because you are the leader, you will also spot people with problems. You cannot turn them away or just let them suffer, because an ignored problem, if serious, almost inevitably becomes a group problem. Counseling is considered quite difficult. Professional counselors, such as vocational counselors, clergy, lawyers, bankers, teachers, and psychiatrists, sometimes spend years learning how to counsel in their fields. People often pay large amounts of money to be counseled.

Setting an Example

Setting an example is personal behavior independent of any external influence. While a very simple competency on its face, none is more important. Fail to demonstrate this competency to the members of your group, and you are doomed to negative results. No matter how good a line you talk, if you do not match it with your behavior, you will not get respect and will find it increasingly difficult to get the group to work with you.

It may be more difficult under some circumstances to set a positive example, but do not let that stop you! Setting an example is where your backbone shows. If you have character, and if your character has integrity (that is, if who you are on the outside lines up with who you are on the inside), you will accomplish far more than you may imagine possible. A leader of this kind, as long as he takes care of his followers' needs, enjoys respect, loyalty, and love.

Representing a Group

Representing the group means accurately communicating to non-group members the sum of the group members' ideas, feelings, and so on, and vice versa. A leader must represent his team on a great variety of issues. Some of these issues, and the need for a decision representing the group's interests, will be known well in advance; others will not.

Under any circumstances, to faithfully represent the group, you should:

  • Fully understand the nature of the problem.

  • Know how the decision was reached and be able to communicate this to others.

  • Accurately and responsibly communicate from, and back to, the original group.

  • Realize that other groups may derive their entire picture of your group through you, the representative. Be consistent, possess integrity, and be fair to all parties.

Problem-Solving

Effective problem-solving will do more than any other competency to advance both getting the job done and keeping the group together. It is the "umbrella" competency, in its effect on a variety of issues. Problem-solving is useful both in group situations and one-on-one.

Evaluation

An evaluation attitude is the predisposition to continually examine and analyze all our efforts. Evaluation is a critical component of the cyclical learning process. It occurs not just formally at the conclusion of activities, but also informally, by everyone involved, throughout a project or task. An evaluation attitude is one of the principles that form the basis of the White Stag Leadership Development Program.

One who applies this attitude or technique will be continually aware of the objectives of his learning and will attempt to measure his growth toward them.

Sharing the Leadership

Sharing leadership translates, on one level, into "styles" of leadership. Depending on the job and the group, certain ways for a leader to work with a group will be more appropriate than others. It also identifies some generic roles in groups that can be distributed among all the members.

Sharing leadership is a key function of the leader. The ability to extend oneself, to accomplish jobs greater than one person alone can handle, is one of the key elements of society's success today. Never has society been so productive.

Team Work

The Organization Charts

Traditionally, authority is delegated down from the top officer of the organization to the executive vice presidents, who in turn delegate responsibilities through levels of directors until responsibility is accepted by the front-line managers who are ultimately responsible for the workers' performance in each department. Within this hierarchical structure, the individual manager, the "boss," personally decides who is in the department and what they do. The manager provides his or her workers with the information and resources supplied from the level above.

As work becomes more complex, managers often find that more work can be achieved more quickly and with less effort if the workers communicate and coordinate with each other as well as with their boss. This group of workers under the manager forms a team.


A Team
A team is a small number of people with complementary skills who are committed to a common purpose, performance goals, and approach, and who hold themselves mutually accountable.

Team Charter and Mission

A team charter is a written document that formally defines the scope, or limits, of the team and confers authority on it: what the team is officially allowed to do. Some charters also define the outcomes expected from the group, such as a greater volume of work produced, improvements in efficiency, or better customer-satisfaction ratings.

A team's mission is its own sense of purpose: why it exists. It may be a written document or simply informal. If the group lacks a mission, the team's approach tends to become fragmented, as each member tries to fill the void of meaning with a personal agenda, thereby diffusing the team's momentum toward specific achievements.

A clear and commonly accepted reason for why the team is together provides the "touchstone" for the way the team identifies issues to work on, establishes priorities, handles conflicts, and makes decisions.

Team Size:
Conventional wisdom holds that teams are most effective when they are small, usually no more than 15 members and often between 5 and 10. The most effective teams strike a balance between the diversity of a larger group and the dedicated focus more easily achieved in a small one. Conventional thinking is that the larger and more diverse the group, the more difficult it is (and the longer it takes) for it to come to a common agreement.

Self-directed Work Teams and Flat Organizations:
Self-managed teams often use different words to reflect changes in the scope of authority. Rather than a traditional manager who directs people, self-managed teams have a coordinator who facilitates work performance. The coordinator is elected from within the group to represent it.

The fewer the managers, the flatter the organizational structure, with fewer layers between the top officer and the front-line workers.

A necessary foundation for self-managed teams is widespread individual adoption of a continuous process improvement approach to work. Ultimately, a self-directed work team has to thoroughly monitor and correct its own work in a timely manner.

Team Interpersonal Skills

Engage With Understanding

  • Recognizing the foundations of teamwork

  • Understanding team members' communication styles

  • Creating strategies for building team communication

Share Constructive Feedback

  • Tuning into communication cues

  • Planning and delivering feedback

  • Handling feedback effectively

Participate Actively in Meetings

  • Planning productive meetings

  • Conducting meetings effectively

  • Evaluating meeting success

Resolve Toward Consensus

  • Choosing conflict-resolution strategies

  • Accepting team diversity

  • Applying team decision-making techniques

Solve Team Problems

  • Implementing a six-step problem-solving model

  • Applying problem-solving tools and methods

Improving Teamwork

The most reliable way to improve teamwork is to apply the principles of performance management to the group's behaviors. This involves three basic steps:

  • Identifying which teamwork behaviors lead to better performance (called TARGET behaviors)

  • Finding which teamwork behaviors are currently being used (called CURRENT behaviors)

  • Undertaking a gap analysis between the target and current teamwork behaviors, and taking action to bring the current behaviors closer to the target
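As a rough illustration of the gap analysis step, the sketch below scores each teamwork behavior on the same scale for the target and current profiles and prioritizes the largest differences. The behavior names and ratings are invented for illustration, not taken from the ITPQ or MTR-i instruments.

```python
# Hypothetical target vs. current behavior ratings on a 1-5 scale.
target = {"shares information": 5, "gives feedback": 4, "resolves conflict": 4}
current = {"shares information": 3, "gives feedback": 4, "resolves conflict": 2}

# The gap for each behavior is simply target minus current.
gaps = {behavior: target[behavior] - current[behavior] for behavior in target}

# Address the behaviors with the largest gap first.
priorities = sorted(gaps, key=gaps.get, reverse=True)
print(gaps, priorities)
```

The output ranks "shares information" and "resolves conflict" (gap of 2 each) ahead of "gives feedback" (gap of 0), which is exactly the action-plan ordering the gap analysis is meant to produce.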

The Target Behaviors

One way to recognize target teamwork behaviors is to complete the ITPQ (Ideal Team Profile Questionnaire) instrument. It can be completed by teams, peer groups, customers, staff, senior management and others, providing a wide range of views on what would make the team successful. This information enables the team to:

  • Identify and manage conflicting expectations between, for example, management and customers.

  • Take a wide perspective when setting behavioral goals for themselves, which should improve the quality of those goals.

  • Facilitate dialogue within the team, and with others outside it, on how to improve performance.

The Gap Analysis

Once the target and current behaviors have been identified, the team needs to work out how to change its current behaviors to bring them more in line with the target. This involves assessing the behavioral gap and producing an action plan for the team to implement.

The Current Behaviors

Current behaviors may be influenced by factors such as:

  • Organizational culture

  • Team members' preferences

  • Current circumstances

  • Feedback from people outside the team

  • Many other factors

One way to identify current behaviors is to complete the MTR-i (Management Team Roles – indicator) instrument. It is completed by individuals within the team and identifies the roles they are currently performing; these can be aggregated to show the collective team behaviors.

 

Categories: Team Work Tags:

Time Management

Our lives revolve around the passing of time. To waste time is to waste part of life. Time cannot be paused, bought, or changed, but we can learn to use it better. Time is one of the scarcest resources and, unlike money or energy, it is irreplaceable. By learning effective time management, you learn to take control of your life.

Taking control of your life means taking control of your time by planning. Planning takes time itself, but the initial investment frees up much more time later on. Like most other things, effective planning is a skill that starts off being difficult but soon becomes a habit.

Balancing the Work and Family

Successful people are very clear about what is important to them. They know how to set priorities and concentrate on the things that give them the greatest satisfaction and happiness in life.

  • Why relationships are so important.

  • Practice moderation in all things.

  • Balance work and family life.

  • Recognize when your life comes out of balance.

Time Management Principles

Time management covers the attributes needed to manage time effectively, and the benefits of beginning with a limited range of tactics before extending these into an overall time management strategy.

Identifying Time Loss
This explains the importance of carrying out an objective review of how you currently spend your time, and of identifying what proportion of that time goes to areas that are not essential to achieving your goals.

Urgency and Importance
This describes how to use an urgency/importance grid to classify the tasks you currently perform, and how to optimize the amount of time you spend on each type of task.
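The grid can be sketched as a simple two-by-two classification. The quadrant actions below follow the common "do / schedule / delegate / drop" framing; that mapping is our assumption, not something prescribed by the text.

```python
# Classify a task into one of the four urgency/importance quadrants.
def classify(urgent, important):
    if urgent and important:
        return "do it now"        # urgent and important: act immediately
    if important:
        return "schedule it"      # important but not urgent: plan time for it
    if urgent:
        return "delegate it"      # urgent but not important: hand it off
    return "drop it"              # neither: eliminate it

print(classify(urgent=True, important=False))  # delegate it
```

In practice you would run every task on your list through a classification like this and then optimize how much time each quadrant receives.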

Effective Decision Making
This describes various techniques that can contribute to more effective decision making.

Setting Goals
This discusses adopting a proactive approach in order to anticipate events and be in a position to identify and define your goals clearly.

Defining Objectives
This explains how to analyze goals to define a series of objectives, and the need to rank the objectives in order to identify the means and actions needed to achieve them.

Time Saving Techniques

Dealing With Interruptions
This explains how to evaluate what an interruption represents as a demand on your time, and how to deal with non-urgent interruptions in a polite but effective way.

Knowing When to Delegate
This explains how to overcome a reluctance to delegate, and how to decide which tasks are most suitable for delegation.

Delegating Effectively
This details the practical aspects of delegating work, the importance of providing ongoing support and feedback, and the need to evaluate the outcome and apply the lessons learned when delegating work in the future.

Managing Incoming Calls
This describes how to screen incoming calls when you do not want to be disturbed, and various tactics for keeping incoming telephone calls as short as possible.

Managing Outbound Calls
This explains the use of an outgoing-call log to help plan and structure calls, and how to maintain an efficient time management approach when making outgoing calls.

Organizing the Workspace
This explains how to deal with incoming paperwork efficiently, and how to identify manual and electronic filing systems that suit the way you work.

Communicating Effectively
This discusses various time-saving techniques you can use to improve your efficiency with written communications, including speed-reading, business letters, and email.

Practical Time Planning

Understanding Overload
Describes the most common sources of work-related stress, and provides an objective assessment of the extent to which you may be suffering from overload at work.

Negotiating the Workload
An inability to say 'no' to requests can be a significant contributor to stress and overload. This section explains how to decline requests when it is appropriate to do so.

Planning the Day
Schedule or plan all your tasks according to your workload and performance cycle. This saves the time otherwise spent deciding what to do next.

Using Activity Networks
Activity networks have become established as one of the most popular resource planning techniques available.

Critical Path Analysis
This shows how to identify the critical path within a network of activities, and how to calculate the total float and free float available.
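As a sketch of those calculations, the small activity network below is invented for illustration. A forward pass gives each activity's earliest start and finish, a backward pass gives the latest ones, and the total and free floats fall out as differences; activities with zero total float form the critical path.

```python
# Each activity maps to (duration, list of predecessors); this hypothetical
# network is assumed to be listed in topological order.
activities = {
    "A": (3, []),
    "B": (2, ["A"]),
    "C": (4, ["A"]),
    "D": (1, ["B", "C"]),
}

# Successor lists, derived from the predecessor lists above.
succs = {a: [s for s, (_, ps) in activities.items() if a in ps]
         for a in activities}

# Forward pass: earliest start (es) and earliest finish (ef).
es, ef = {}, {}
for a, (dur, preds) in activities.items():
    es[a] = max((ef[p] for p in preds), default=0)
    ef[a] = es[a] + dur

project_end = max(ef.values())  # earliest possible project completion

# Backward pass: latest finish (lf) and latest start (ls).
lf, ls = {}, {}
for a in reversed(list(activities)):
    lf[a] = min((ls[s] for s in succs[a]), default=project_end)
    ls[a] = lf[a] - activities[a][0]

# Total float: slack without delaying the project.
# Free float: slack without delaying any successor's earliest start.
total_float = {a: ls[a] - es[a] for a in activities}
free_float = {a: min((es[s] for s in succs[a]), default=project_end) - ef[a]
              for a in activities}
critical_path = [a for a in activities if total_float[a] == 0]

print(project_end, critical_path)  # 8 ['A', 'C', 'D']
```

Here B has two units of float (it can slip without delaying D), while A, C, and D have none and so form the critical path.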

Effective Resource Planning
This describes resource planning, which is concerned with the effective scheduling of all available resources in order to deliver the required outputs.

Preparing Planning Diagrams
This discusses various ways in which resource planning information can be summarized for presentation to senior management.

Eliminating the Time Wasters

Time-wasters surround you on all sides, tearing away at your minutes and hours and holding you back from producing the critical results that are vital to success in your career.

  • Eliminate the time-wasters in your life;

  • The Law of the Excluded Alternative;

  • Identify the major time-wasters;

  • Practical ways to overcome them, and avoid them where possible.

Maximizing the Productivity

It is what you produce relative to what you put in that determines success. Over time, a results orientation goes hand-in-hand with big payoffs in life.

  • Work at full capacity;

  • Develop the ability to get results;

  • Concentrate on high-value tasks;

  • Increase the quality and quantity of your results.

 

Categories: Time Management Tags:

Communication Skills

The Crash Course in Communication

Talking is easy, but communication, meaning an exchange or communion with another person, requires greater skill. An exchange that is a communion demands that we listen and speak skillfully, not just talk mindlessly. Interacting with fearful, angry, or frustrated people is even more difficult, because we are less skillful when caught up in such emotions. Do not despair or resign yourself to a lifetime of miscommunication at work or at home! Good communicators can be made as well as born. Here are a few tips to get you started.

This reminds us how difficult it is to communicate effectively in any organization. The problem is not that we have bad people; the problem is that we have poor systems. This guide teaches how to overcome communication barriers and hone your communication skills.

Communication is a skill, and like any other skill it requires practice. It is improvement through practice that differentiates a skill from other forms of knowledge. Understanding the theory of communication and effective presentation will not by itself make you a brilliant communicator or presenter, but it should make you aware of how to maximize the impact of your presentations.

The most important thing to remember is that the message you intend to communicate is likely to be misunderstood by your listeners. Therefore, in addition to carefully preparing and presenting the message, stay alert for any signs that your audience is misinterpreting it. It is up to you, the presenter, to continually check that your message has been received, understood, interpreted correctly, and filed in the receiver's mind.

Effective Communication Fundamentals

Communication is a complex two-way process, involving the encoding, translation, and decoding of messages. Effective communication requires the communicator to translate the message in a way specifically designed for the intended audience.

Creating and delivering an effective presentation requires a basic understanding of the communication process. Most business presentations require clear, unambiguous communication of a message in a way that can be clearly understood by the recipient.

Tips for Effective Communication

  • Be honest when communicating. Dishonesty will show up somewhere along the line.

  • Take an interest in the people you are communicating with. Remember that people are more attracted to those who show interest in them, and pay more attention to what they say.

  • Think before you speak or put pen to paper: what message are you trying to convey? What outcome do you want to elicit?

  • Be direct, not aggressive. A lot of flannelling around can make people lose interest and miss the vital point.

  • Don't use jargon, acronyms, or technical expressions unless you are sure your listeners understand them.

  • Write the way you speak. Do not fall into the trap of using long words just because they are written down.

  • Take your time. Whether in speech or on paper, rushing makes you seem nervous, unconfident, and downright scared.

The Interpersonal Communication Skills

The ability to ask questions and to listen is vital to good interpersonal skills. In fact, empathetic listening is the number-one skill for building relationships.

Ten Tips for Good Interpersonal Skills

  • Listen to the other person first. Communication is a two-way process; getting your message across depends on understanding the other person.

  • Be interested in the people you are communicating with. Remember that people are more attracted to those who show interest in them, and will pay more attention to what they say.

  • Relax. Bad body language such as hunched shoulders, fidgeting, toe-tapping, or hair-twiddling gives the game away.

  • Smile and use eye contact. It is the most positive signal you can give.

  • Ask questions. It is a great way to show people you are really interested in them.

  • If the other person has a different point of view, find out why. The more you understand the reasons behind their thinking, the better you will understand their point of view, and the better you can help them understand yours.

  • Be assertive: value their input as much as your own. Do not be pushy and do not be a pushover. Aim for the right balance.

  • When speaking, be enthusiastic where the context is appropriate. Use your voice and body language to show it.

  • Don't immediately latch onto something someone has just said ("oh yes, that happened to me too") and launch into your own story. Ask questions about them first, and then be careful when telling your story, so it does not sound like a competition.

  • Learn from your interactions. If you had a good conversation with someone, think about why it went well and remember the key points for next time. If it did not go well, again, try to learn something from it.

Body Language

We all communicate with one another through how we look, as well as through what we say and how we sound. In fact, what our bodies do while we are talking (our body language) often plays a much greater part in communication than we think.

The most obvious form of paralanguage is body language, or kinesics: the language of gestures, expressions, and postures. In North America, for instance, we commonly use our arms and hands to say good-bye, point, count, express excitement, beckon, warn away, or threaten. We learn many subtle variations of each of these gestures and use them situationally. We use the head to say yes or no, to smile, frown, and wink acknowledgement or flirtation. The head and shoulders in combination may shrug to indicate that we do not know something.

Eye contact
Eye contact helps create better interaction and rapport with listeners. Try to look at a listener at the end of each sentence to reinforce the message in that sentence.

Gesture
Gestures can help give your voice extra energy and confidence. Try to gesture on some of the key words; this gives those words greater emphasis.

Presence
Adopt an "anchor position" whenever you want to keep your body language calm and controlled. When sitting, keep the small of your back against the back of the chair. This helps support your posture and maintains an energetic, confident style. Aim to keep your body language open and relaxed at all times. Physical attitude can affect psychological attitude.

Movement and Space
Be sensitive to people's space and try not to intrude on it. To achieve rapport when speaking to others, try to match levels: either both sitting or both standing, with your body angled in toward the other person.

Presentation Skills

Remember, nobody is born a natural speaker. Of course, we could bawl our heads off and make a heck of a noise when we were born, but that is not quite the same!

The greatest speakers today did not become great overnight! They have spent a lot of time practicing, reviewing, and reading about ways to improve, getting specific one-to-one feedback, and having plenty of specialized training and coaching.

It takes time and effort to read, absorb, and apply. It also takes time and effort to attend training courses or seminars and get good professional training. If you want to differentiate yourself at work by becoming a great presenter, however, it is certainly worth investing the time.

There is a simple structure into which nearly all presentations fit. It comprises three clearly identifiable parts: an introduction, followed by the main body, and finally the conclusion.

Often this is expressed as:

  • Tell them what you are going to tell them

  • Tell them

  • Tell them what you have told them.

A good guide for the breakdown of a presentation is the 10/80/10 rule, where the introduction and the conclusion are each allotted 10% of the presentation time, with the main body comprising about 80%. For example, a 30-minute presentation would have 3 minutes each for the introduction and conclusion, and a main body lasting 24 minutes. This formula can be applied to a presentation of any length, as it reflects a good breakdown from the audience's perspective.
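The arithmetic is easy to check; the helper function below is ours, written just to make the rule concrete:

```python
# Split a presentation of any length using the 10/80/10 rule:
# 10% introduction, 80% main body, 10% conclusion.
def split_presentation(total_minutes):
    return (total_minutes * 0.10,   # introduction
            total_minutes * 0.80,   # main body
            total_minutes * 0.10)   # conclusion

# The 30-minute example from the text: 3 + 24 + 3 minutes.
print(split_presentation(30))  # (3.0, 24.0, 3.0)
```

The same call works for an hour-long talk (6, 48, and 6 minutes) or a five-minute lightning talk (30 seconds, 4 minutes, 30 seconds).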

What to Ask After the Offer

All job hunters are waiting for that call — the one that says they’ve landed the job. But as eager as you may be to escape either your current job or the unemployment ranks, don’t abdicate your power position once the offer comes in. Now it’s your turn to sit in the interviewer’s seat and ask the company and yourself some tough questions — the answers to which could mean the difference between career bliss and disaster.

Will the actual work and job responsibilities provide gratification, fulfillment and challenge?
This question is often overlooked, because applicants get hung up on job titles, salary and benefits. Try to get a clear sense of what an actual day would be like. What will you spend the majority of your time doing? Is the work in line with your values? Are you likely to learn this job quickly and then become bored and unchallenged?

What are the boss’s strengths and weaknesses?
This question can be tough to answer, and it’s best saved for after the job offer has been extended. You’ll want to get a good idea of your potential boss’s management style. Speak to your potential boss as much as possible to get a feel for his personality and what you can live with. Does he micromanage? Will you get consistent feedback and reviews? Does he make small talk, or is every conversation strictly business?

How much change is in the works at your prospective company, and what kind?
Constant change at work can mean constant stress. Find out if there are any big changes coming, such as new processing systems or management, impending retirements or adoption of new procedures that still need to be ironed out. At the same time, remember that some of these transitions will have less effect on your position than others.

How many of my skills and experiences will I be able to use and learn?
Make sure your unique skills and talents will be used and that training and promotion are open in the future. When you decide to move on, you’ll want to have a new crop of experiences to sell to your next employer. Your goal is to perform well at work while constantly growing and learning.

How many people have held the position in the past several years?
Knowing how many people have been in your job and why they left can offer you great insights. You’ll want to know if they were promoted or quit altogether. A steady stream of resignations may be a sign you could be reentering the job market soon.

While many of the reasons positions eventually become unfulfilling are unavoidable, such as hitting a plateau after repeatedly performing the same duties, job seekers should consider the ways a new position will advance them.

Questions to Ask HR

What kinds of assignments might I expect in the first six months on the job?
How often are performance reviews given?
Please describe the duties of the job for me.
What products (or services) are in the development stage now?
Do you have plans for expansion?
What are your growth projections for next year?
Have you cut your staff in the last three years?
Are salary adjustments geared to the cost of living or job performance?
Does your company encourage further education?
How do you feel about creativity and individuality?
Do you offer flextime?
What is the usual promotional time frame?
Does your company offer either single or dual career-track programs?
What do you like best about your job/company?
Once the probation period is completed, how much authority will I have over decisions?
Has there been much turnover in this job area?
Do you fill positions from the outside or promote from within first?
Is your company environmentally conscious? In what ways?
In what ways is a career with your company better than one with your competitors?
Is this a new position or am I replacing someone?
What is the largest single problem facing your staff (department) now?
May I talk with the last person who held this position?
What qualities are you looking for in the candidate who fills this position?
What skills are especially important for someone in this position?
What characteristics do the achievers in this company seem to share?
Who was the last person that filled this position, what made them successful at it, where are they today, and how may I contact them?
Is there a lot of team/project work?
Will I have the opportunity to work on special projects?
Where does this position fit into the organizational structure?
How much travel, if any, is involved in this position?
What is the next course of action? When should I expect to hear from you or should I contact you?

Tips for the HR Interview

Entering the room
Before you reach the door, adjust your attire so that it falls well.
Before entering, enquire by asking, “May I come in, sir/madam?”
If the door was closed before you entered, make sure you shut the door behind you softly.
Face the panel and confidently say ‘Good day sir/madam’.
If the members of the interview board want to shake hands, offer a firm grip first, maintaining eye contact and a smile.
Seek permission to sit down. If the interviewers are standing, wait for them to sit down first before sitting.
An alert interviewee will defuse a tense situation with light-hearted humor and immediately establish rapport with the interviewers.

Enthusiasm
The interviewer normally pays more attention if you display enthusiasm in whatever you say.
This enthusiasm comes across in the energetic way you put forward your ideas.
Maintain a cheerful disposition throughout the interview; a pleasant countenance holds the interviewers’ interest.

Humor
A little humor or wit thrown into the discussion occasionally lets the interviewers see the pleasant side of your personality. If it does not come naturally, do not contrive it.
Injecting humor into the situation does not mean you should keep telling jokes. It means making a passing comment that, perhaps, makes the interviewer smile.

Eye contact
Maintain eye contact with the panel right through the interview. This shows self-confidence and honesty.
Many interviewees tend to look away while answering. This conveys that you are concealing anxiety, fear, or a lack of confidence.
Maintaining eye contact can be difficult, but because the circumstances of an interview are so different from everyday conversation, its value in making a personal impact is tremendous.

Be natural
Many interviewees adopt a stance which is not their natural self.
It is amusing for interviewers when a candidate launches into an accent that he or she cannot sustain consistently through the interview, or adopts mannerisms that are inconsistent with his or her personality.
Interviewers appreciate a natural person rather than an actor.
It is best to talk in a natural manner, because then you appear genuine.