
Software Testing FAQ Part2

  1. What is Test Bed?
    An execution environment configured for testing. May consist of specific hardware, OS, network topology, configuration of the product under test, other application or system software, etc. The Test Plan for a project should enumerate the test bed(s) to be used.
  2. What is Software Requirements Specification?
    A deliverable that describes all data, functional and behavioral requirements, all constraints, and all validation requirements for software.
  3. What is Soak Testing?
    Running a system at high load for a prolonged period of time. For example, running several times more transactions in an entire day (or night) than would be expected in a busy day, to identify performance problems that appear after a large number of transactions have been executed.
  4. What is Smoke Testing?
    A quick-and-dirty test that the major functions of a piece of software work. Originated in the hardware testing practice of turning on a new piece of hardware for the first time and considering it a success if it does not catch on fire.
  5. What is Scalability Testing?
    Performance testing focused on ensuring the application under test gracefully handles increases in work load.
  6. What is Release Candidate?
    A pre-release version, which contains the desired functionality of the final version, but which needs to be tested for bugs (which ideally should be removed before the final version is released).
  7. What is Ramp Testing?
    Continuously raising an input signal until the system breaks down.
  8. What is Race Condition?
    A cause of concurrency problems. Multiple accesses to a shared resource, at least one of which is a write, with no mechanism used to moderate simultaneous access.
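    A minimal sketch of the classic lost-update race and its fix with a lock (the counter, thread counts and iteration counts are illustrative):

```python
import threading

counter = 0

def increment(n):
    # Unsynchronized read-modify-write: another thread may update the
    # counter between the read and the write, losing that update.
    global counter
    for _ in range(n):
        tmp = counter      # read
        counter = tmp + 1  # write

threads = [threading.Thread(target=increment, args=(100_000,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # often falls short of the expected 200000

# The fix: a lock moderates simultaneous access to the shared resource.
counter = 0
lock = threading.Lock()

def safe_increment(n):
    global counter
    for _ in range(n):
        with lock:
            counter += 1

threads = [threading.Thread(target=safe_increment, args=(100_000,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # always 200000
```

Concurrency testing (question 39 below) aims to expose exactly this kind of defect.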
  9. What is Quality System?
    The organizational structure, responsibilities, procedures, processes, and resources for implementing quality management.
  10. What is Quality Policy?
    The overall intentions and direction of an organization as regards quality as formally expressed by top management.
  11. What is Quality Management?
    That aspect of the overall management function that determines and implements the quality policy.
  12. What is Quality Control?
    The operational techniques and the activities used to fulfill and verify requirements of quality.
  13. What is Quality Circle?
    A group of individuals with related interests that meet at regular intervals to consider problems or other matters related to the quality of outputs of a process and to the correction of problems or to the improvement of quality.
  14. What is Quality Audit?
    A systematic and independent examination to determine whether quality activities and related results comply with planned arrangements and whether these arrangements are implemented effectively and are suitable to achieve objectives.
  15. What is Quality Assurance?
    All those planned or systematic actions necessary to provide adequate confidence that a product or service is of the type and quality needed and expected by the customer.
  16. What is Monkey Testing?
    Testing a system or application on the fly, i.e. running just a few tests here and there to ensure the system or application does not crash.
  17. What is Metric?
    A standard of measurement. Software metrics are the statistics describing the structure or content of a program. A metric should be a real objective measurement of something such as number of bugs per lines of code.
  18. What is Localization Testing?
    This term refers to adapting software to a specific locality or market.
  19. What is Independent Test Group (ITG)?
    A group of people whose primary responsibility is software testing.
  20. What is Gorilla Testing?
    Testing one particular module or piece of functionality heavily.
  21. What is Gray Box Testing?
    A combination of Black Box and White Box testing methodologies, testing a piece of software against its specification but using some knowledge of its internal workings.
  22. What is Functional Specification?
    A document that describes in detail the characteristics of the product with regard to its intended features.
  23. What is Functional Decomposition?
    A technique used during planning, analysis and design; creates a functional hierarchy for the software.
  24. What is Exhaustive Testing?
    Testing which covers all combinations of input values and preconditions for an element of the software under test.
  25. What is Equivalence Partitioning?
    A test case design technique for a component in which test cases are designed to execute representatives from equivalence classes.
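    A minimal sketch of the technique, assuming a hypothetical `accept_age` function specified to accept ages 18 to 65; one representative is tested per equivalence class:

```python
def accept_age(age):
    # Hypothetical function under test (assumed spec: 18-65 is valid).
    return 18 <= age <= 65

# One representative value per equivalence class:
partitions = {
    "below range (invalid)": (10, False),
    "in range (valid)":      (40, True),
    "above range (invalid)": (80, False),
}

for name, (value, expected) in partitions.items():
    assert accept_age(value) == expected, name
print("all equivalence classes pass")
```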
  26. What is Equivalence Class?
    A portion of a component’s input or output domains for which the component’s behaviour is assumed to be the same, based on the component’s specification.
  27. What is Endurance Testing?
    Checks for memory leaks or other problems that may occur with prolonged execution.
  28. What is Emulator?
    A device, computer program, or system that accepts the same inputs and produces the same outputs as a given system.
  29. What is Depth Testing?
    A test that exercises a feature of a product in full detail.
  30. What is Dependency Testing?
    Examines an application’s requirements for pre-existing software, initial states and configuration in order to maintain proper functionality.
  31. What is Defect?
    Non-conformance to requirements or the functional/program specification.
  32. What is Debugging?
    The process of finding and removing the causes of software failures.
  33. What is Data Driven Testing?
    Testing in which the action of a test case is parameterized by externally defined data values, maintained as a file or spreadsheet. A common technique in Automated Testing.
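    A minimal data-driven sketch, where a hypothetical `discount` function is exercised against rows of externally defined data (an inline CSV standing in for the file or spreadsheet):

```python
import csv
import io

def discount(total):
    # Hypothetical function under test: 10% off orders of 100 or more.
    return total * 0.9 if total >= 100 else total

# Inline CSV stands in for an external data file; the test action is fixed,
# only the data varies.
test_data = io.StringIO(
    "total,expected\n"
    "50,50\n"
    "100,90\n"
    "200,180\n"
)

for row in csv.DictReader(test_data):
    actual = discount(float(row["total"]))
    assert actual == float(row["expected"]), row
print("all data rows pass")
```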
  34. What is Data Flow Diagram?
    A modeling notation that represents a functional decomposition of a system.
  35. What is Cyclomatic Complexity?
    A measure of the logical complexity of an algorithm, used in white-box testing.
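    A rough sketch using the common shortcut that, for structured code, cyclomatic complexity equals the number of decision points plus one (real tools build the full control-flow graph; the `classify` function is illustrative):

```python
import ast

SOURCE = """
def classify(n):
    if n < 0:
        return "negative"
    elif n == 0:
        return "zero"
    for _ in range(n):
        pass
    return "positive"
"""

def cyclomatic(source):
    # Count decision points in the AST; complexity = decisions + 1.
    tree = ast.parse(source)
    decisions = sum(isinstance(node, (ast.If, ast.For, ast.While, ast.BoolOp))
                    for node in ast.walk(tree))
    return decisions + 1

print(cyclomatic(SOURCE))  # if + elif + for = 3 decisions, so 4
```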
  36. What is Conversion Testing?
    Testing of programs or procedures used to convert data from existing systems for use in replacement systems.
  37. What is Context Driven Testing?
    The context-driven school of software testing is a flavor of Agile Testing that advocates continuous and creative evaluation of testing opportunities in light of the potential information revealed and the value of that information to the organization right now.
  38. What is Conformance Testing?
    The process of testing that an implementation conforms to the specification on which it is based. Usually applied to testing conformance to a formal standard.
  39. What is Concurrency Testing?
    Multi-user testing geared towards determining the effects of accessing the same application code, module or database records. Identifies and measures the level of locking, deadlocking and use of single-threaded code and locking semaphores.
  40. What is Component?
    A minimal software item for which a separate specification is available.
  41. What is Code Coverage?
    An analysis method that determines which parts of the software have been executed (covered) by the test case suite and which parts have not been executed and therefore may require additional attention.
  42. What is Code Complete?
    Phase of development where functionality is implemented in entirety; bug fixes are all that are left. All functions found in the Functional Specifications have been implemented.
  43. What is Cause Effect Graph?
    A graphical representation of inputs and the associated outputs effects which can be used to design test cases.
  44. What is Capture/Replay Tool?
    A test tool that records test input as it is sent to the software under test. The input cases stored can then be used to reproduce the test at a later time. Most commonly applied to GUI test tools.
  45. What is Breadth Testing?
    A test suite that exercises the full functionality of a product but does not test features in detail.
  46. What is Branch Testing?
    Testing in which all branches in the program source code are tested at least once.
  47. What is Boundary Value Analysis?
    BVA is similar to Equivalence Partitioning but focuses on “corner cases”, or values that lie just outside the range defined by the specification. This means that if a function expects all values in the range of negative 100 to positive 1000, test inputs would include negative 101 and positive 1001.
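    The range in the example above (negative 100 to positive 1000) can be sketched as a boundary-value test, assuming a hypothetical `in_range` function under test:

```python
def in_range(value):
    # Hypothetical function under test (assumed spec: -100..1000 is valid).
    return -100 <= value <= 1000

# Boundary values and the values just outside them, where off-by-one
# defects cluster:
boundary_cases = {
    -101: False,  # just below the lower boundary
    -100: True,   # lower boundary
     -99: True,   # just inside the lower boundary
     999: True,   # just inside the upper boundary
    1000: True,   # upper boundary
    1001: False,  # just above the upper boundary
}

for value, expected in boundary_cases.items():
    assert in_range(value) == expected, value
print("all boundary cases pass")
```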
  48. What is Boundary Testing?
    Test which focus on the boundary or limit conditions of the software being tested. (Some of these tests are stress tests).
  49. What is Binary Portability Testing?
    Testing an executable application for portability across system platforms and environments, usually for conformance to an ABI specification.
  50. What is Baseline?
    The point at which some deliverable produced during the software engineering process is put under formal change control.
  51. What is Basis Set?
    The set of tests derived using basis path testing.
  52. What is Basis Path Testing?
    A white box test case design technique that uses the algorithmic flow of the program to design tests.
  53. What is Basic Block?
    A sequence of one or more consecutive, executable statements containing no branches.
  54. What is Backus-Naur Form?
    A metalanguage used to formally describe the syntax of a language.
  55. What is Automated Testing?
    Testing employing software tools which execute tests without manual intervention. Can be applied in GUI, performance, API, etc. testing.
    The use of software to control the execution of tests, the comparison of actual outcomes to predicted outcomes, the setting up of test preconditions, and other test control and test reporting functions.
  56. What is Automated Software Quality (ASQ)?
    The use of software tools, such as automated testing tools, to improve software quality.
  57. What is Application Programming Interface (API)?
    A formalized set of software calls and routines that can be referenced by an application program in order to access supporting system or network services.
  58. What is Application Binary Interface (ABI)?
    A specification defining requirements for portability of applications in binary form across different system platforms and environments.
  59. What is Agile Testing?
    Testing practice for projects using agile methodologies, treating development as the customer of testing and emphasizing a test-first design paradigm.
  60. What is Ad Hoc Testing?
    A testing phase where the tester tries to ‘break’ the system by randomly trying the system’s functionality. Can include negative testing as well.
  61.  How do you debug an ASP.NET Web application?
    Attach the aspnet_wp.exe process to the DbgClr debugger.
  62. Why are there five tracing levels in System.Diagnostics.TraceSwitch?
    The tracing dumps can be quite verbose.
    • For applications that are constantly running, you run the risk of overloading the machine and the hard drive.
    • The five levels range from Off to Verbose, allowing you to fine-tune the tracing activities.
  63. What’s the difference between the Debug class and Trace class?
    Documentation looks the same. Use Debug class for debug builds, use Trace class for both debug and release builds.
  64. Difference between Smoke Testing and Sanity Testing?
    Smoke Testing is non-exhaustive software testing, ascertaining that the most crucial functions of a program work, but not bothering with finer details. Sanity Testing is cursory testing, performed whenever a cursory pass is sufficient to prove that the application is functioning according to specifications. This level of testing is a subset of regression testing. It normally includes a set of core tests of basic GUI functionality to demonstrate connectivity to the database, application servers, printers, etc.
  65. What is the difference between QC and QA?
    Quality assurance is the process in which the documents for the product to be tested are verified against the actual requirements of the customers. It includes inspection, auditing, code review, meetings, etc. Quality control is the process in which the product is actually executed and the expected behavior is verified by comparing it with the actual behavior of the software under test. All testing types, such as black box testing and white box testing, come under quality control. Quality assurance is done before quality control.
  66. What is a scenario?
    A scenario defines the events that occur during each testing session. For example, a scenario defines and controls the number of users to emulate, the actions to be performed, and the machines on which the virtual users run their emulations.
  67. What is the difference between Manual and Automation Testing?
    The answer is quite simple. Manual testing is when the user performs the steps of a test case by hand, for example clicking a tab to check that it works, or opening a particular URL to check that the specified web site loads. The same steps can be performed automatically through tools such as WinRunner or SilkTest: the user just triggers the tool and the tool takes care of the execution. The only additional requirement is knowing how to automate the test cases with that tool, which is not very difficult.
  68. What is L10N Testing?
    L10N Testing is Localization Testing; it verifies whether your products are ready for local markets or not.
  69. What is I18N Testing?
    I18N Testing is “Internationalization Testing”.
    It determines whether your product’s support for international character encoding methods is sufficient and whether your product development methodologies take into account international coding standards.
  70. What is SEI? CMM? CMMI? ISO? IEEE? ANSI?
    SEI = ‘Software Engineering Institute’ at Carnegie Mellon University; initiated by the U.S. Defense Department to help improve software development processes.
    CMM = ‘Capability Maturity Model’, now called the CMMI (’Capability Maturity Model Integration’), developed by the SEI. It’s a model of 5 levels of process ‘maturity’ that determine effectiveness in delivering quality software. It is geared to large organizations such as large U.S. Defense Department contractors. However, many of the QA processes involved are appropriate to any organization, and if reasonably applied can be helpful. Organizations can receive CMMI ratings by undergoing assessments by qualified auditors.
    Level 1 – characterized by chaos, periodic panics, and heroic efforts required by individuals to successfully complete projects. Few if any processes are in place; successes may not be repeatable.
    Level 2 – software project tracking, requirements management, realistic planning, and configuration management processes are in place; successful practices can be repeated.
    Level 3 – standard software development and maintenance processes are integrated throughout an organization; a Software Engineering Process Group is in place to oversee software processes, and training programs are used to ensure understanding and compliance.
    Level 4 – metrics are used to track productivity, processes, and products. Project performance is predictable, and quality is consistently high.
    Level 5 – the focus is on continuous process improvement. The impact of new processes and technologies can be predicted and effectively implemented when required.

    ISO = ‘International Organisation for Standardization’ – The ISO 9001:2000 standard (which replaces the previous standard of 1994) concerns quality systems that are assessed by outside auditors, and it applies to many kinds of production and manufacturing organizations, not just software. It covers documentation, design, development, production, testing, installation, servicing, and other processes. The full set of standards consists of: (a)Q9001-2000 – Quality Management Systems: Requirements; (b)Q9000-2000 – Quality Management Systems: Fundamentals and Vocabulary; (c)Q9004-2000 – Quality Management Systems: Guidelines for Performance Improvements.

    To be ISO 9001 certified, a third-party auditor assesses an organization, and certification is typically good for about 3 years, after which a complete reassessment is required. Note that ISO certification does not necessarily indicate quality products – it indicates only that documented processes are followed.

    IEEE = ‘Institute of Electrical and Electronics Engineers’ – among other things, creates standards such as the ‘IEEE Standard for Software Test Documentation’ (IEEE/ANSI Standard 829), the ‘IEEE Standard for Software Unit Testing’ (IEEE/ANSI Standard 1008), and the ‘IEEE Standard for Software Quality Assurance Plans’ (IEEE/ANSI Standard 730).

    ANSI = ‘American National Standards Institute’, the primary industrial standards body in the U.S.; publishes some software-related standards in conjunction with the IEEE and ASQ (American Society for Quality).

    Other software development/IT management process assessment methods besides CMMI and ISO 9000 include SPICE, Trillium, TickIT, Bootstrap, ITIL, MOF, and CobiT.

  72. What makes a good Software Test engineer?
    A good test engineer has a ‘test to break’ attitude, an ability to take the point of view of the customer, a strong desire for quality, and an attention to detail. Tact and diplomacy are useful in maintaining a cooperative relationship with developers, and an ability to communicate with both technical (developers) and non-technical (customers, management) people is useful. Previous software development experience can be helpful as it provides a deeper understanding of the software development process, gives the tester an appreciation for the developers’ point of view, and reduces the learning curve in automated test tool programming. Judgement skills are needed to assess high-risk areas of an application on which to focus testing efforts when time is limited.
  73. What is documentation change management?
    Documentation change management is part of configuration management (CM). CM covers the tools and processes used to control, coordinate and track code, requirements, documentation, problems, change requests, designs, tools, compilers, libraries, patches, changes made to them and who makes the changes.
  74. What is the role of documentation in QA?
    Documentation plays a critical role in QA. QA practices should be documented, so that they are repeatable. Specifications, designs, business rules, inspection reports, configurations, code changes, test plans, test cases, bug reports, user manuals should all be documented. Ideally, there should be a system for easily finding and obtaining of documents and determining what document will have a particular piece of information. Use documentation change management, if possible.
  75. How do you perform integration testing?
    First, unit testing has to be completed. Upon completion of unit testing, integration testing begins. Integration testing is black box testing. The purpose of integration testing is to ensure distinct components of the application still work in accordance with customer requirements. Test cases are developed with the express purpose of exercising the interfaces between the components. This activity is carried out by the test team.
    Integration testing is considered complete, when actual results and expected results are either in line or differences are explainable/acceptable based on client input.
  76. What is clear box testing?
    Clear box testing is the same as white box testing. It is a testing approach that examines the application’s program structure, and derives test cases from the application’s program logic.
  77. What is closed box testing?
    Closed box testing is the same as black box testing. Black box testing is a type of testing that considers only externally visible behavior. Black box testing considers neither the code itself, nor the “inner workings” of the software.
  78. What is open box testing?
    Open box testing is the same as white box testing. It is a testing approach that examines the application’s program structure, and derives test cases from the application’s program logic.
  79. What is ‘Software Quality Assurance’?
    Software QA involves the entire software development PROCESS – monitoring and improving the process, making sure that any agreed-upon standards and procedures are followed, and ensuring that problems are found and dealt with. It is oriented to ‘prevention’.
  80. What is Security Testing?
    Application vulnerabilities leave your system open to attacks, downtime, data theft, data corruption and application defacement. Security within an application or web service is crucial to avoid such vulnerabilities and new threats. While automated tools can help to eliminate many generic security issues, the detection of application vulnerabilities requires independent evaluation of your specific application’s features and functions by experts. An external security vulnerability review by an independent specialist gives you the best possible confidence that your application is as secure as possible.
  81. What is Stress Testing?
    Stress Testing is a form of testing that is used to determine the stability of a given system or entity. It involves testing beyond normal operational capacity, often to a breaking point, in order to observe the results. For example, a web server may be stress tested using scripts, bots, and various denial of service tools to observe the performance of a web site during peak loads. Stress testing is a subset of load testing.
  82. What is Acceptance Testing?
    User acceptance testing (UAT) is one of the final stages of a software project and will often occur before the customer accepts a new system. Users of the system perform these tests which, ideally, developers have derived from the User Requirements Specification, to which the system should conform. Test designers draw up a formal test plan and devise a range of severity levels. The focus in this type of testing is less on simple problems (spelling mistakes, cosmetic problems) and show stoppers (major problems like the software crashing, or the software not running at all). Developers should have worked out these issues during unit testing and integration testing. Rather, the focus is on a final verification of the required business function and flow of the system. The test scripts emulate real-world usage of the system. The idea is that if the software works as intended and without issues during a simulation of normal use, it will work just the same in production. Results of these tests give both the customers and the developers confidence that the system will work as intended.
  83. What is Compatibility Testing?
    One of the challenges of software development is ensuring that the application works properly on the different platforms and operating systems on the market, and also with the applications and devices in its environment. Compatibility testing aims at locating application problems by running the application in real environments, thus ensuring that it is compatible with various hardware, operating system and browser versions.
  84. What is Fuzz Testing?
    Fuzz testing is a software testing technique. The basic idea is to attach the inputs of a program to a source of random data. If the program fails (for example, by crashing, or by failing in-built code assertions), then there are defects to correct. The great advantage of fuzz testing is that the test design is extremely simple, and free of preconceptions about system behavior.
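    A minimal fuzzing sketch, assuming a hypothetical length-prefixed parser as the program under test; rejecting bad input cleanly is fine, any other exception counts as a defect:

```python
import random

def parse_length_prefixed(data: bytes) -> bytes:
    # Hypothetical parser under test: expects a 1-byte length N followed
    # by exactly N payload bytes, and should raise only ValueError.
    if len(data) < 1:
        raise ValueError("empty input")
    n = data[0]
    if len(data) - 1 != n:
        raise ValueError("length mismatch")
    return data[1:]

random.seed(0)  # reproducible fuzz run
defects = 0
for _ in range(10_000):
    blob = bytes(random.randrange(256) for _ in range(random.randrange(0, 20)))
    try:
        parse_length_prefixed(blob)
    except ValueError:
        pass            # rejecting bad input is correct behavior
    except Exception:   # any other failure is a defect to record
        defects += 1
print("defects found:", defects)
```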
  85. What is Walkthrough?
    A review of requirements, designs or code characterized by the author of the object under review guiding the progression of the review.
  86. What is Top-Down Testing?
    An approach to integration testing where the component at the top of the component hierarchy is tested first, with lower level components being simulated by stubs. Tested components are then used to test lower level components. The process is repeated until the lowest level components have been tested.
  87. What is Oracle?
    A mechanism to produce the predicted outcomes to compare with the actual outcomes of the software under test.
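    One common form of oracle is a slow but obviously correct reference implementation whose predicted outcomes are compared against the implementation under test; a sketch on random inputs (function names are illustrative):

```python
import random

def max_subarray_fast(xs):
    # Implementation under test (Kadane's algorithm).
    best = cur = xs[0]
    for x in xs[1:]:
        cur = max(x, cur + x)
        best = max(best, cur)
    return best

def max_subarray_oracle(xs):
    # Brute-force oracle: obviously correct, O(n^2).
    return max(sum(xs[i:j]) for i in range(len(xs))
                            for j in range(i + 1, len(xs) + 1))

random.seed(1)
for _ in range(500):
    xs = [random.randint(-10, 10) for _ in range(random.randint(1, 15))]
    assert max_subarray_fast(xs) == max_subarray_oracle(xs), xs
print("implementation agrees with oracle")
```

This pairs naturally with fuzz testing: random inputs plus an oracle give automated pass/fail verdicts.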
  88. What is Mutation Analysis?
    A method to determine test case suite thoroughness by measuring the extent to which a test case suite can discriminate the program from slight variants (mutants) of the program.
  89. Do you know about LCSAJ?
    LCSAJ: A Linear Code Sequence And Jump, consisting of the following three items (conventionally identified by line numbers in a source code listing): the start of the linear sequence of executable statements, the end of the linear sequence, and the target line to which control flow is transferred at the end of the linear sequence.
  90. What is Inspection?
    A group review quality improvement process for written material. It consists of two aspects: product (document itself) improvement and process improvement (of both document production and inspection).
  91. What is Executable Statement?
    A statement which, when compiled, is translated into object code, which will be executed procedurally when the program is running and may perform an action on program data.
  92. Do you know about Cause- Effect graph?
    A graphical representation of inputs or stimuli (causes) with their associated outputs (effects), which can be used to design test cases.
  93. What is Capture/Playback Tool?
    A test tool that records test input as it is sent to the software under test. The input cases stored can then be used to reproduce the test at a later time.
  94. What is Bottom Up Testing?
    An approach to integration testing where the lowest level components are tested first, then used to facilitate the testing of higher level components. The process is repeated until the component at the top of the hierarchy is tested.
  95. What is Big Bang Testing?
    Integration testing where no incremental testing takes place prior to all the system’s components being combined to form the system.
  96. What is Productivity Metrics?
    Productivity = Output / Input, or Value of Material / Cost of Production.
    E.g.: Non-Commented Software Source (NCSS) per engineer per month,
    NCSS per person-month,
    NCSS per function point.
    NCSS can also be replaced by KLOC (Kilo Lines of Code).
  97. What is Effort Variance?
    Effort variance = (Actual effort – Planned Effort)/Planned effort * 100
  98. How do we calculate Schedule Variance?
    Schedule variance = (Actual time taken – Planned time) / Planned time * 100
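    The two variance formulas can be written directly as code (the figures are illustrative):

```python
def effort_variance(actual_effort, planned_effort):
    # Effort variance = (Actual effort - Planned effort) / Planned effort * 100
    return (actual_effort - planned_effort) / planned_effort * 100

def schedule_variance(actual_time, planned_time):
    # Schedule variance = (Actual time - Planned time) / Planned time * 100
    return (actual_time - planned_time) / planned_time * 100

# E.g. 120 person-hours spent against 100 planned: +20% effort variance;
# 45 days taken against 50 planned: -10% schedule variance.
print(effort_variance(120, 100))   # 20.0
print(schedule_variance(45, 50))   # -10.0
```

A positive value means overrun; a negative value means the work finished under plan.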
  99. How do you create a test plan/design?
    Test scenarios and/or cases are prepared by reviewing functional requirements of the release and preparing logical groups of functions that can be further broken into test procedures. Test procedures define test conditions, data to be used for testing and expected results, including database updates, file outputs, report results. Generally speaking…
    • Test cases and scenarios are designed to represent both typical and unusual situations that may occur in the application.
    • Test engineers define unit test requirements and unit test cases. Test engineers also execute unit test cases.
    • It is the test team who, with assistance of developers and clients, develops test cases and scenarios for integration and system testing.
    • Test scenarios are executed through the use of test procedures or scripts.
    • Test procedures or scripts define a series of steps necessary to perform one or more test scenarios.
    • Test procedures or scripts include the specific data that will be used for testing the process or transaction.
    • Test procedures or scripts may cover multiple test scenarios.
    • Test scripts are mapped back to the requirements and traceability matrices are used to ensure each test is within scope.
    • Test data is captured and baselined prior to testing. This data serves as the foundation for unit and system testing and is used to exercise system functionality in a controlled environment.
    • Some output data is also baselined for future comparison. Baselined data is used to support future application maintenance via regression testing.
    • A pre-test meeting is held to assess the readiness of the application and the environment and data to be tested. A test readiness document is created to indicate the status of the entrance criteria of the release.
    Inputs for this process:
    • Approved Test Strategy Document.
    • Test tools, or automated test tools, if applicable.
    • Previously developed scripts, if applicable.
    • Test documentation problems uncovered as a result of testing.
    • A good understanding of software complexity and module path coverage, derived from general and detailed design documents, e.g. software design document, source code and software complexity data.
    Outputs for this process:
    • Approved documents of test scenarios, test cases, test conditions and test data.
    • Reports of software design issues, given to software developers for correction.
  100. How do you create a test strategy?
    The test strategy is a formal description of how a software product will be tested. A test strategy is developed for all levels of testing, as required. The test team analyzes the requirements, writes the test strategy and reviews the plan with the project team. The test plan may include test cases, conditions, the test environment, a list of related tasks, pass/fail criteria and risk assessment.
    Inputs for this process:
    • A description of the required hardware and software components, including test tools. This information comes from the test environment, including test tool data.
    • A description of roles and responsibilities of the resources required for the test and schedule constraints. This information comes from man-hours and schedules.
    • Testing methodology. This is based on known standards.
    • Functional and technical requirements of the application. This information comes from requirements, change request, technical and functional design documents.
    • Requirements that the system cannot provide, e.g. system limitations.
    Outputs for this process:
    • An approved and signed-off test strategy document and test plan, including test cases.
    • Testing issues requiring resolution. Usually this requires additional negotiation at the project management level.
  101. What is the general testing process?
    The general testing process is the creation of a test strategy (which sometimes includes the creation of test cases), creation of a test plan/design (which usually includes test cases and test procedures) and the execution of tests.
  102. What is software testing methodology?
    Software testing methodology is a three-step process of:
    Creating a test strategy;
    Creating a test plan/design; and
    Executing tests.
    This methodology can be used and molded to your organization’s needs.
  103. What is a test schedule?
    The test schedule is a schedule that identifies all tasks required for a successful testing effort, a schedule of all test activities and resource requirements.
  104. What is a Test Configuration Manager?
    Test Configuration Managers maintain test environments, scripts, software and test data. Depending on the project, one person may wear more than one hat. For instance, Test Engineers may also wear the hat of a Test Configuration Manager.
  105. What is a Technical Analyst?
    Technical Analysts perform test assessments and validate system/functional test requirements. Depending on the project, one person may wear more than one hat. For instance, Test Engineers may also wear the hat of a Technical Analyst.
  106. What is a Database Administrator?
    Database Administrators, Test Build Managers, and System Administrators deliver current software versions to the test environment, install the application’s software and apply software patches, to both the application and the operating system, set-up, maintain and back up test environment hardware. Depending on the project, one person may wear more than one hat. For instance, a Test Engineer may also wear the hat of a Database Administrator.
  107. What is a System Administrator?
    Test Build Managers, System Administrators, and Database Administrators deliver current software versions to the test environment; install the application’s software; apply software patches to both the application and the operating system; and set up, maintain, and back up test environment hardware. Depending on the project, one person may wear more than one hat. For instance, a Test Engineer may also wear the hat of a System Administrator.
  108. What is a Test Build Manager?
    Test Build Managers deliver current software versions to the test environment; install the application’s software; apply software patches to both the application and the operating system; and set up, maintain, and back up test environment hardware. Depending on the project, one person may wear more than one hat. For instance, a Test Engineer may also wear the hat of a Test Build Manager.
  109. What is a Test Engineer?
    A Test Engineer is an engineer who specializes in testing. Test engineers create test cases, procedures, scripts and generate data. They execute test procedures and scripts, analyze standards of measurements, evaluate results of system/integration/regression testing. They also…
    •Speed up the work of your development staff;
    •Reduce your risk of legal liability;
    •Give you the evidence that your software is correct and operates properly;
    •Improve problem tracking and reporting;
    •Maximize the value of your software;
    •Maximize the value of the devices that use it;
    •Assure the successful launch of your product by discovering bugs and design flaws before users get discouraged, before shareholders lose their cool, and before employees get bogged down;
    •Help the work of your development staff, so the development team can devote its time to building your product;
    •Promote continual improvement;
    •Provide documentation required by FDA, FAA, other regulatory agencies and your customers;
    •Save money by discovering defects ‘early’ in the design process, before failures occur in production, or in the field;
    •Save the reputation of your company by discovering bugs and design flaws; before bugs and design flaws damage the reputation of your company.
  110. What is a Test/QA Team Lead?
    The Test/QA Team Lead coordinates the testing activity, communicates testing status to management, and manages the test team.
  111. What testing roles are standard on most testing projects?
    Depending on the organization, the following roles are more or less standard on most testing projects: Testers, Test Engineers, Test/QA Team Lead, Test/QA Manager, System Administrator, Database Administrator, Technical Analyst, Test Build Manager and Test Configuration Manager. Depending on the project, one person may wear more than one hat. For instance, Test Engineers may also wear the hat of Technical Analyst, Test Build Manager and Test Configuration Manager.
  112. What is alpha testing?
    Alpha testing is testing of an application when development is nearing completion. Minor design changes can still be made as a result of alpha testing. Alpha testing is typically performed by end-users or others, not programmers, software engineers, or test engineers.
  113. What is acceptance testing?
    Acceptance testing is black box testing that gives the client/customer/project manager the opportunity to verify the system functionality and usability prior to the system being released to production. The acceptance test is the responsibility of the client/customer or project manager; however, it is conducted with the full support of the project team. The test team also works with the client/customer/project manager to develop the acceptance criteria.
  114. What is comparison testing?
    Comparison testing is testing that compares software weaknesses and strengths to those of competitors’ products.
  115. What is beta testing?
    Beta testing is testing an application when development and testing are essentially completed and final bugs and problems need to be found before the final release. Beta testing is typically performed by end-users or others, not programmers, software engineers, or test engineers.
  116. What is compatibility testing?
    Compatibility testing is testing how well software performs in a particular hardware, software, operating system, or network environment.
  117. What is recovery/error testing?
    Recovery/error testing is testing how well a system recovers from crashes, hardware failures, or other catastrophic problems.
  118. What is security/penetration testing?
    Security/penetration testing is testing how well the system is protected against unauthorized internal or external access, or willful damage. This type of testing usually requires sophisticated testing techniques.
  119. What is installation testing?
    Installation testing is the testing of a full, partial, or upgrade install/uninstall process. The installation test is conducted with the objective of demonstrating production readiness. This test includes the inventory of configuration items, performed by the application’s System Administrator, the evaluation of data readiness, and dynamic tests focused on basic system functionality. Following installation testing, a sanity test is performed when necessary.
  120. What is load testing?
    Load testing is testing an application under heavy loads, such as the testing of a web site under a range of loads to determine at what point the system response time will degrade or fail.
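    A heavy-load scenario like the one above can be sketched in a few lines. This is a minimal illustration, not a real load-testing harness: `handle_request` is a hypothetical stand-in for the system under test, and in practice you would call a live service and ramp the worker count to find the degradation point.

    ```python
    import time
    from concurrent.futures import ThreadPoolExecutor

    def handle_request(payload):
        """Hypothetical stand-in for the system under test."""
        time.sleep(0.001)  # simulated processing time
        return f"ok:{payload}"

    def run_load_test(num_requests, num_workers):
        """Fire num_requests concurrently and record each response time."""
        latencies = []

        def timed_call(i):
            start = time.perf_counter()
            handle_request(i)
            latencies.append(time.perf_counter() - start)

        with ThreadPoolExecutor(max_workers=num_workers) as pool:
            pool.map(timed_call, range(num_requests))
        return latencies

    latencies = run_load_test(num_requests=50, num_workers=10)
    print(f"worst-case latency: {max(latencies):.4f}s")
    ```

    Repeating the run with progressively larger `num_workers` and `num_requests` values shows where response times start to degrade.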
  121. What is performance testing?
    Performance testing verifies loads, volumes and response times, as defined by requirements. Although performance testing is a part of system testing, it can be regarded as a distinct level of testing.
  122. What is sanity testing?
    Sanity testing is cursory testing; it is performed whenever cursory testing is sufficient to prove the application is functioning according to specifications. This level of testing is a subset of regression testing. It normally includes a set of core tests of basic GUI functionality to demonstrate connectivity to the database, application servers, printers, etc.
  123. What is end-to-end testing?
    End-to-end testing is similar to system testing, the *macro* end of the test scale; it is the testing of a complete application in a situation that mimics real-life use, such as interacting with a database, using network communications, or interacting with other hardware, applications, or systems.
  124. What is regression testing?
    The objective of regression testing is to ensure the software remains intact. A baseline set of data and scripts is maintained and executed to verify that changes introduced during the release have not broken previously working functionality. Expected results from the baseline are compared to results of the software under test. All discrepancies are highlighted and accounted for before testing proceeds to the next level.
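    The baseline comparison described above can be sketched as follows. This is an illustrative reduction of the idea: `price_with_tax` and the `baseline` values are hypothetical, and a real regression suite would load its baseline from versioned test assets rather than an inline dict.

    ```python
    def price_with_tax(amount):
        """Hypothetical function under regression test: apply 8% tax."""
        return round(amount * 1.08, 2)

    # Baseline: inputs and their previously verified outputs,
    # normally stored alongside the test scripts.
    baseline = {"10.00": 10.8, "99.99": 107.99}

    def run_regression(func, baseline):
        """Re-run every baseline case; collect discrepancies for review."""
        discrepancies = {}
        for raw_input, expected in baseline.items():
            actual = func(float(raw_input))
            if actual != expected:
                discrepancies[raw_input] = (expected, actual)
        return discrepancies

    print(run_regression(price_with_tax, baseline))  # empty dict = baseline intact
    ```

    An empty result means the release has not "undone" any baseline behavior; any entry in the returned dict is a discrepancy to account for before testing proceeds.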
  125. What is system testing?
    System testing is black box testing performed by the Test Team; at the start of system testing, the complete system is configured in a controlled environment. The purpose of system testing is to validate an application’s accuracy and completeness in performing the functions as designed. System testing simulates real-life scenarios in a “simulated real life” test environment and tests all functions of the system that are required in real life. System testing is deemed complete when actual and expected results are either in line or the differences are explainable or acceptable, based on client input.
    System testing starts upon completion of integration testing. Before system testing, all unit and integration test results are reviewed by SWQA to ensure all problems have been resolved. For a higher level of testing it is important to understand unresolved problems that originate at the unit and integration test levels.
  126. What is integration testing?
    Upon completion of unit testing, integration testing begins. Integration testing is black box testing. The purpose of integration testing is to ensure distinct components of the application still work in accordance with customer requirements. Test cases are developed with the express purpose of exercising the interfaces between the components. This activity is carried out by the test team. Integration testing is considered complete when actual and expected results are either in line or differences are explainable/acceptable based on client input.
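    A test that exercises the interface between two components, as described above, might look like the following sketch. Both classes here are hypothetical; the point is that the test wires real components together and checks the effect of the call that crosses the component boundary.

    ```python
    class InventoryService:
        """Hypothetical component managing stock levels."""
        def __init__(self):
            self.stock = {"sku-1": 3}

        def reserve(self, sku):
            if self.stock.get(sku, 0) <= 0:
                raise ValueError("out of stock")
            self.stock[sku] -= 1

    class OrderProcessor:
        """Hypothetical component whose call into InventoryService
        is the interface this integration test exercises."""
        def __init__(self, inventory):
            self.inventory = inventory

        def place_order(self, sku):
            self.inventory.reserve(sku)
            return {"sku": sku, "status": "confirmed"}

    def test_order_reserves_stock():
        inventory = InventoryService()
        order = OrderProcessor(inventory).place_order("sku-1")
        # Verify both the returned result and the side effect across the interface.
        assert order["status"] == "confirmed"
        assert inventory.stock["sku-1"] == 2

    test_order_reserves_stock()
    ```

    Unlike a unit test, neither component is replaced with a stub: the assertion targets the behavior that emerges from the interaction.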
  127. What is usability testing?
    Usability testing is testing for ‘user-friendliness’. Clearly this is subjective and depends on the targeted end-user or customer. User interviews, surveys, video recording of user sessions, and other techniques can be used. Test engineers are needed, because programmers and developers are usually not appropriate as usability testers.
  128. What is functional testing?
    Functional testing is a black-box type of testing geared to the functional requirements of an application. Test engineers should perform functional testing.
  129. What is parallel/audit testing?
    Parallel/audit testing is testing where the user reconciles the output of the new system to the output of the current system to verify the new system performs the operations correctly.
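    The reconciliation of old-system and new-system output can be sketched as below. Both calculation functions are hypothetical; in a real parallel run the "old" side would be the production system and the inputs would be real transaction data.

    ```python
    def legacy_interest(balance_cents):
        """Hypothetical current (old) system's calculation: 5% interest."""
        return balance_cents * 5 // 100

    def new_interest(balance_cents):
        """Hypothetical replacement system's calculation, under audit."""
        return (balance_cents * 5) // 100

    def parallel_audit(old, new, inputs):
        """Run both systems on the same inputs; report every mismatch
        as (input, old_output, new_output) for reconciliation."""
        return [(x, old(x), new(x)) for x in inputs if old(x) != new(x)]

    mismatches = parallel_audit(legacy_interest, new_interest, range(10000))
    print(f"{len(mismatches)} discrepancies to reconcile")
    ```

    An empty mismatch list gives the user evidence that the new system performs the operations the same way the current system does.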
  130. What is incremental integration testing?
    Incremental integration testing is continuous testing of an application as new functionality is added. This may require that various aspects of an application’s functionality are independent enough to work separately, before all parts of the program are completed, or that test drivers are developed as needed. This type of testing may be performed by programmers, software engineers, or test engineers.
  131. What is unit testing?
    Unit testing is the first level of dynamic testing and is first the responsibility of developers and then of the test engineers. Unit testing is considered complete when the expected test results are met or differences are explainable/acceptable.
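    A minimal unit test, using Python's standard `unittest` framework, could look like this. The function under test, `normalize_username`, is a hypothetical example chosen for illustration.

    ```python
    import unittest

    def normalize_username(raw):
        """Hypothetical unit under test: trim whitespace and lowercase."""
        return raw.strip().lower()

    class TestNormalizeUsername(unittest.TestCase):
        def test_strips_and_lowercases(self):
            self.assertEqual(normalize_username("  Alice "), "alice")

        def test_already_normalized_input_unchanged(self):
            self.assertEqual(normalize_username("bob"), "bob")
    ```

    Running `python -m unittest` discovers and executes the test methods; each exercises the single unit in isolation, with no other components involved.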
  132. What is white box testing?
    White box testing is based on knowledge of the internal logic of an application’s code. Tests are based on coverage of code statements, branches, paths and conditions.
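    Branch coverage, one of the criteria mentioned above, can be illustrated with a small sketch. The discount function is hypothetical; the point is that the test inputs are chosen by reading the code so that every branch outcome is exercised.

    ```python
    def classify_discount(total, is_member):
        """Hypothetical function with two branch points,
        giving four outcomes white-box tests should cover."""
        if total >= 100:
            rate = 0.10 if is_member else 0.05
        else:
            rate = 0.02 if is_member else 0.0
        return rate

    # One test per branch outcome, derived from the code's internal logic:
    assert classify_discount(150, True) == 0.10   # total >= 100, member
    assert classify_discount(150, False) == 0.05  # total >= 100, non-member
    assert classify_discount(50, True) == 0.02    # total < 100, member
    assert classify_discount(50, False) == 0.0    # total < 100, non-member
    ```

    A coverage tool can confirm that this set of inputs executes every statement and branch; a purely requirements-driven test set might miss one of the paths.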
  133. What is black box testing?
    Black box testing is functional testing, not based on any knowledge of internal software design or code. Black box testing is based on requirements and functionality.
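    A black box test derives its cases purely from the stated requirement, never from the code. As an illustration, suppose the requirement reads: "a year is a leap year if it is divisible by 4, except century years, which must be divisible by 400." The implementation below exists only so the sketch runs; the tests would be written the same way without ever seeing it.

    ```python
    def is_leap_year(year):
        # Implementation is treated as a black box by the tests below.
        return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

    # Cases derived purely from the requirement, not from reading the code:
    assert is_leap_year(2024) is True    # divisible by 4
    assert is_leap_year(1900) is False   # century not divisible by 400
    assert is_leap_year(2000) is True    # century divisible by 400
    assert is_leap_year(2023) is False   # not divisible by 4
    ```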
  134. What is good code?
    Good code is code that works, is free of bugs, and is readable and maintainable. Organizations usually have coding standards all developers should adhere to, but every programmer and software engineer has different ideas about what is best and what are too many or too few rules. We need to keep in mind that excessive use of rules can stifle both productivity and creativity. Peer reviews and code analysis tools can be used to check for problems and enforce standards.
  135. What is an inspection?
    An inspection is a formal meeting, more formalized than a walk-through, and typically consists of 3-10 people, including a moderator, a reader, and a recorder (who takes notes); the author of the work under review also attends. The subject of the inspection is typically a document, such as a requirements document or a test plan. The purpose of an inspection is to find problems and see what is missing, not to fix anything. The results of the meeting should be documented in a written report. Attendees should prepare for this type of meeting by reading through the document before the meeting starts; most problems are found during this preparation. Preparation for inspections is difficult, but it is one of the most cost-effective methods of ensuring quality, since bug prevention is more cost-effective than bug detection.
  136. What is validation?
    Validation ensures that functionality, as defined in requirements, is the intended behavior of the product; validation typically involves actual testing and takes place after verifications are completed.
  137. What is verification?
    Verification ensures the product is designed to deliver all functionality to the customer; it typically involves reviews and meetings to evaluate documents, plans, code, requirements and specifications; this can be done with checklists, issues lists, walkthroughs and inspection meetings.
  138. How is testing affected by object-oriented designs?
    Well-engineered object-oriented design can make it easier to trace from code to internal design to functional design to requirements.
    While there will be little effect on black box testing (where an understanding of the internal design of the application is unnecessary), white-box testing can be oriented to the application’s objects. If the application was well-designed, this can simplify test design.
  139. How can it be known when to stop testing?
    This can be difficult to determine. Many modern software applications are so complex, and run in such an interdependent environment, that complete testing can never be done. Common factors in deciding when to stop are:
    • Deadlines (release deadlines, testing deadlines, etc.)
    • Test cases completed with certain percentage passed
    • Test budget depleted
    • Coverage of code/functionality/requirements reaches a specified point
    • Bug rate falls below a certain level
    • Beta or alpha testing period ends
  140. What is ‘configuration management’?
    Configuration management covers the processes used to control, coordinate, and track: code, requirements, documentation, problems, change requests, designs, tools/compilers/libraries/patches, changes made to them, and who makes the changes.
  141. What’s a ‘test case’?
    • A test case is a document that describes an input, action, or event and an expected response, to determine if a feature of an application is working correctly.
    • A test case should contain particulars such as test case identifier, test case name, objective, test conditions/setup, input data requirements, steps, and expected results.
    • Note that the process of developing test cases can help find problems in the requirements or design of an application, since it requires completely thinking through the operation of the application. For this reason, it’s useful to prepare test cases early in the development cycle if possible.
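    The particulars listed above can be captured as a simple record. This is an illustrative sketch only: the field names and the sample login case are hypothetical, and a real test management tool would store these in its own schema.

    ```python
    from dataclasses import dataclass, field

    @dataclass
    class TestCase:
        """One test case with the particulars a team typically records."""
        identifier: str
        name: str
        objective: str
        setup: str
        input_data: dict
        steps: list = field(default_factory=list)
        expected_result: str = ""

    login_case = TestCase(
        identifier="TC-042",
        name="Login with valid credentials",
        objective="Verify a registered user can sign in",
        setup="User 'alice' exists with password 'secret'",
        input_data={"username": "alice", "password": "secret"},
        steps=["Open login page", "Enter credentials", "Click Sign In"],
        expected_result="User lands on the dashboard",
    )
    ```

    Writing out a case this explicitly is where the design problems mentioned above tend to surface: an objective or expected result that cannot be stated precisely usually points to a gap in the requirements.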
  142. What’s the role of documentation in QA?
    • Critical. (Note that documentation can be electronic, not necessarily paper.)
    • QA practices should be documented such that they are repeatable.
    • Specifications, designs, business rules, inspection reports, configurations, code changes, test plans, test cases, bug reports, user manuals, etc. should all be documented.
  143. What makes a good software QA engineer?
    • The same qualities a good tester has are useful for a QA engineer.
    • Additionally, they must be able to understand the entire software development process and how it can fit into the business approach and goals of the organization.
    • Communication skills and the ability to understand various sides of issues are important.
    • In organizations in the early stages of implementing QA processes, patience and diplomacy are especially needed.
    • An ability to find problems as well as to see ‘what’s missing’ is important for inspections and reviews.
    • There should ideally be a system for easily finding and obtaining documents and determining what documentation will have a particular piece of information.
    • Change management for documentation should be used if possible.
  144. What makes a good test engineer?
    • A good test engineer has a ‘test to break’ attitude, an ability to take the point of view of the customer, a strong desire for quality, and an attention to detail.
    • Tact and diplomacy are useful in maintaining a cooperative relationship with developers, and an ability to communicate with both technical (developers) and non-technical (customers, management) people is useful.
    • Previous software development experience can be helpful as it provides a deeper understanding of the software development process, gives the tester an appreciation for the developers’ point of view, and reduces the learning curve in automated test tool programming.
    • Judgement skills are needed to assess high-risk areas of an application on which to focus testing efforts when time is limited.
  145. How do you view the contents of the GUI map?
    The GUI Map Editor displays the contents of a GUI map. You can invoke the GUI Map Editor from the Tools menu in WinRunner. It displays the various GUI map files created, and the windows and objects learned into them, with their logical names and physical descriptions.
  146. What is the difference between the GUI map and GUI map files?
    The GUI map is actually the sum of one or more GUI map files. There are two modes for organizing GUI map files. Global GUI Map file: a single GUI Map file for the entire application. GUI Map File per Test: WinRunner automatically creates a GUI Map file for each test created.
    GUI Map file is a file which contains the windows and the objects learned by the WinRunner with its logical name and their physical description.
  147. If the object does not have a name then what will be the logical name?
    If the object does not have a name then the logical name could be the attached text.
  148. What are the reasons that WinRunner fails to identify an object on the GUI?
    WinRunner may fail to identify an object in a GUI for various reasons: the object may not be a standard Windows object, or the browser used may not be compatible with the WinRunner version, in which case the GUI Map Editor will not be able to learn any of the objects displayed in the browser window.
  149. What is the use of Test Director software?
    TestDirector is Mercury Interactive’s software test management tool. It helps quality assurance personnel plan and organize the testing process. With TestDirector you can create a database of manual and automated tests, build test cycles, run tests, and report and track defects. You can also create reports and graphs to help review the progress of planning tests, running tests, and tracking defects before a software release.
  150. Explain WinRunner testing process?
    The WinRunner testing process involves six main stages:
    1. Create GUI Map File so that WinRunner can recognize the GUI objects in the application being tested
    2. Create test scripts by recording, programming, or a combination of both. While recording tests, insert checkpoints where you want to check the response of the application being tested.
    3. Debug Test: run tests in Debug mode to make sure they run smoothly
    4. Run Tests: run tests in Verify mode to test your application.
    5. View Results: review test results to determine the success or failure of the tests.
    6. Report Defects: If a test run fails due to a defect in the application being tested, you can report information about the defect directly from the Test Results window.