Wednesday, May 2, 2007

Important QA Concepts

Acceptance Testing:
Formal testing conducted to determine whether or not a system satisfies its acceptance criteria—enables an end user to determine whether or not to accept the system.
Alpha Testing:
Testing of a software product or system conducted at the developer’s site by the end user.
Ad Hoc Testing:
A testing phase in which the tester tries to 'break' the system by randomly exercising its functionality. It can include negative testing as well.
Agile Testing:
A testing practice that emphasizes a test-first design paradigm.
Automated Testing:
The part of software testing that is assisted by software tools and does not require operator input, analysis, or evaluation.
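For illustration, a minimal automated test written with Python's unittest framework; the add function here is a hypothetical example, not part of any particular tool:

    import unittest

    def add(a, b):
        return a + b   # hypothetical function under test

    class TestAdd(unittest.TestCase):
        def test_add_two_numbers(self):
            # The framework runs and evaluates this with no operator input.
            self.assertEqual(add(2, 3), 5)

    if __name__ == "__main__":
        unittest.main()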
Beta Testing:
Testing conducted at one or more end user sites by the end user of a delivered software product or system.
Basis Path Testing:
A white box test case design technique that uses the algorithmic flow of the program to design tests.
Black Box Testing:
Functional testing based on requirements with no knowledge of the internal program structure or data. Also known as closed-box testing. Black Box testing indicates whether or not a program meets required specifications by spotting faults of omission -- places where the specification is not fulfilled.
Bottom-up Testing:
An integration testing technique that tests the low-level components first, using test drivers that stand in for the not-yet-developed higher-level components and call the low-level components under test.
Boundary Testing:
Testing that focuses on the boundary or limit conditions of the software being tested. Stress testing can also be considered a form of boundary testing.
Boundary Value Analysis:
A test data selection technique in which values are chosen to lie along data extremes. Boundary values include maximum, minimum, just inside/outside boundaries, typical values, and error values.
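A quick sketch in Python, assuming a hypothetical rule that valid ages run from 18 to 65 inclusive; the test data sits on and just around the boundaries:

    def is_valid_age(age):
        return 18 <= age <= 65   # hypothetical rule under test

    assert not is_valid_age(17)  # just outside the lower boundary
    assert is_valid_age(18)      # minimum (on the boundary)
    assert is_valid_age(19)      # just inside the lower boundary
    assert is_valid_age(40)      # typical value
    assert is_valid_age(65)      # maximum (on the boundary)
    assert not is_valid_age(66)  # just outside the upper boundary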
Branch Coverage Testing:
A test method satisfying coverage criteria that requires each decision point to be executed along each possible branch at least once.
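For example, a single if statement has two branch outcomes, so branch coverage needs at least two tests (a hypothetical Python sketch):

    def classify(n):
        if n < 0:                 # decision point with two branches
            return "negative"
        return "non-negative"

    assert classify(-1) == "negative"     # exercises the true branch
    assert classify(0) == "non-negative"  # exercises the false branch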
Breadth Testing:
A test suite that exercises the full functionality of a product but does not test features in detail. It is typically used when there is not enough time to execute all the test cases.
Bug:
A design or implementation flaw that results in symptoms exhibited by a module when the module is subjected to an appropriate test.
Cause-and-Effect (Fishbone) Diagram:
A tool used to identify possible causes of a problem by representing the relationship between some effect and its possible causes.
Cause-effect Graphing:
A testing technique that aids in selecting, in a systematic way, a high-yield set of test cases that logically relates causes to effects to produce test cases. It has a beneficial side effect in pointing out incompleteness and ambiguities in specifications.
Code Complete:
The phase of development where functionality is implemented in its entirety and only bug fixes remain; all functions found in the functional specification have been implemented. A code-complete module may still be far from release, as it may contain many bugs.
Code Coverage:
An analysis method that determines which parts of the software/code have been executed (covered) by the test case suite and which parts have not been executed and therefore may require additional attention.
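Tools such as coverage.py automate this for Python; the toy tracer below only sketches the underlying idea of recording which lines a test actually executes (classify is a hypothetical example):

    import sys

    executed = set()

    def tracer(frame, event, arg):
        if event == "line":
            executed.add(frame.f_lineno)  # record each executed line
        return tracer

    def classify(n):                      # hypothetical code under test
        if n < 0:
            return "negative"
        return "non-negative"

    sys.settrace(tracer)
    classify(5)                           # this "test" runs one branch only
    sys.settrace(None)
    print(sorted(executed))               # covered lines; the rest may need attention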
Code Inspection:
A formal testing technique where the programmer reviews source code with a group who ask questions analyzing the program logic, analyzing the code with respect to a checklist of historically common programming errors, and analyzing its compliance with coding standards.
Concurrency Testing:
Multi-user testing geared toward determining the effects of accessing the same application code, module or database records. Identifies and measures the level of locking, deadlocking and use of single-threaded code and locking semaphores. This is one area where the cause of many bugs previously considered random can be identified.
Conformance Testing:
The process of testing that an implementation conforms to the specification on which it is based. Usually applied to testing conformance to a formal standard.
Debugging:
The process of finding and removing the causes of software failures. Tools used for debugging are called debuggers.
End-to-End Testing:
Testing a complete application environment in a situation that mimics real-world use, such as interacting with a database, using network communications, or interacting with other hardware, applications, or systems if appropriate.
Exhaustive Testing:
Testing which covers all combinations of input values and preconditions for an element of the software under test. This is practically infeasible.
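A back-of-the-envelope illustration of why: a hypothetical function with just three integer parameters, each limited to 1,000 values, already needs a billion test cases:

    combinations = 1000 ** 3
    print(combinations)   # 1000000000 inputs for one small function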
Failure:
The inability of a system or system component to perform a required function within specified limits. A failure may be produced when a fault is encountered.
Fault:
A manifestation of an error in software. A fault, if encountered, may cause a failure.
Fault-based Testing:
Testing that employs a test data selection strategy designed to generate test data capable of demonstrating the absence of a set of pre-specified faults, typically frequently occurring faults.
Function Points:
A consistent measure of software size based on user requirements. Data components include inputs, outputs, etc. Environment characteristics include data communications, performance, reusability, operational ease, etc. Weight scale: 0 = not present, 1 = minor influence, 5 = strong influence.
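A simplified worked example in Python; the component counts, weights, and influence total below are assumptions for illustration, not official IFPUG figures:

    components = {"inputs": 10, "outputs": 7, "inquiries": 5,
                  "files": 3, "interfaces": 2}        # assumed counts
    weights = {"inputs": 4, "outputs": 5, "inquiries": 4,
               "files": 10, "interfaces": 7}          # assumed weights

    unadjusted_fp = sum(components[k] * weights[k] for k in components)

    influence_total = 42                    # assumed sum of the 0-5 ratings
    vaf = 0.65 + 0.01 * influence_total     # a commonly cited adjustment formula
    print(unadjusted_fp * vaf)              # adjusted function points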
Functional Testing:
Application of test data derived from the specified functional requirements without regard to the final program structure. Also known as black box testing.
Gorilla Testing:
Testing one particular module or piece of functionality heavily.
Gray Box Testing:
A combination of Black Box and WhiteBox testing methodologies: testing a piece of software against its specification but using some knowledge of its internal workings.
Heuristics Testing:
Another term for failure-directed testing.
Incremental Analysis:
Incremental analysis occurs when (partial) analysis may be performed on an incomplete product to allow early feedback on the development of that product.
Infeasible Path:
A program statement sequence that can never be executed, i.e., unreachable code.
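A two-line Python illustration; the contradictory condition makes the path infeasible:

    x = 5
    if x > 10 and x < 3:        # can never be true for any value of x
        print("unreachable")    # infeasible path: never executed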
Inspection:
A formal evaluation technique in which software requirements, design, or code are examined in detail by a person or group other than the author to detect faults, violations of development standards, and other problems. It is a generic term for all inspections similar to code inspections.
Integration Testing:
An orderly progression of testing in which software components, hardware components, or both are combined and tested until the entire system has been integrated.
Intrusive Testing:
Testing that collects timing and processing information during program execution that may change the behavior of the software from its behavior in a real environment. Usually involves additional code embedded in the software being tested or additional processes running concurrently with software being tested on the same platform.
Installation Testing:
Confirms that the application under test installs correctly and runs as intended in the target environment; typically covers first-time installation, upgrade, and uninstallation scenarios.
IV&V:
Independent verification and validation is the verification and validation of a software product by an organization that is both technically and managerially separate from the organization responsible for developing the product.
Life Cycle:
The period that starts when a software product is conceived and ends when the product is no longer available for use. The software life cycle typically includes a requirements phase, design phase, implementation (code) phase, test phase, installation and checkout phase, operation and maintenance phase, and a retirement phase.
Localization Testing:
Testing of software that has been adapted for a specific locality, verifying translations and locale-specific conventions such as date, number, and currency formats.
Loop Testing:
A white box testing technique that exercises program loops.
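A sketch of the classic loop-testing cases (skip the loop, run it once, run it many times), using a hypothetical summing function:

    def total(items):
        s = 0
        for x in items:   # the loop under test
            s += x
        return s

    assert total([]) == 0          # zero iterations
    assert total([5]) == 5         # exactly one iteration
    assert total([1, 2, 3]) == 6   # many iterations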
Manual Testing:
That part of software testing that requires operator input, analysis, or evaluation.
Monkey Testing:
Testing a system or an application on the fly, i.e., running just a few tests here and there to ensure the system or application does not crash. It is a form of ad hoc testing.
Mutation Testing:
A method to determine test set thoroughness by measuring the extent to which a test set can discriminate the program from slight variants of the program.
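A minimal sketch: the mutant differs from the hypothetical original by one operator, and a test set is thorough only if some test tells them apart ("kills" the mutant):

    def add(a, b):
        return a + b              # original program

    def add_mutant(a, b):
        return a - b              # slight variant: "+" mutated to "-"

    # A weak test cannot discriminate them (both return 0):
    assert add(0, 0) == add_mutant(0, 0) == 0
    # A stronger test kills the mutant, demonstrating the set's thoroughness:
    assert add(2, 3) == 5
    assert add_mutant(2, 3) != 5  # mutant detected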
Non-intrusive Testing:
Testing that is transparent to the software under test; i.e., testing that does not change the timing or processing characteristics of the software under test from its behavior in a real environment. Usually involves additional hardware that collects timing or processing information and processes that information on another platform.
Negative Testing:
Testing aimed at showing software does not work. Also known as "test to fail".
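A small "test to fail" sketch in Python; divide is a hypothetical function, and the test passes only when invalid input fails in the expected, controlled way:

    import unittest

    def divide(a, b):
        return a / b   # hypothetical function under test

    class NegativeTests(unittest.TestCase):
        def test_divide_by_zero(self):
            with self.assertRaises(ZeroDivisionError):
                divide(1, 0)   # invalid input must fail cleanly

    if __name__ == "__main__":
        unittest.main()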
Path Analysis:
Program analysis performed to identify all possible paths through a program, to detect incomplete paths, or to discover portions of the program that are not on any path.
Path Coverage Testing:
A test method satisfying coverage criteria that requires each logical path through the program to be tested. Paths through the program often are grouped into a finite set of classes; one path from each class is tested.
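For instance, two independent decisions yield 2 x 2 = 4 path classes, so path coverage tests one input from each (a hypothetical Python sketch):

    def ship_cost(weight, express):
        cost = 5 if weight > 10 else 2   # decision 1
        if express:                      # decision 2
            cost += 3
        return cost

    assert ship_cost(20, True) == 8    # heavy + express
    assert ship_cost(20, False) == 5   # heavy + standard
    assert ship_cost(5, True) == 5     # light + express
    assert ship_cost(5, False) == 2    # light + standard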
Peer Reviews:
A methodical examination of software work products by the producer’s peers to identify defects and areas where changes are needed.
Path Testing:
Testing wherein all paths in the program source code are tested at least once.
Performance Testing:
Testing conducted to evaluate the compliance of a system or component with specified performance requirements. Often this is performed using an automated test tool to simulate a large number of users. Also known as "Load Testing".
Positive Testing:
Testing aimed at showing software works. Also known as "test to pass".
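The "test to pass" counterpart of the negative-testing example above, again with a hypothetical divide function:

    def divide(a, b):
        return a / b

    assert divide(10, 2) == 5   # valid input, expected output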
Proof Checker:
A program that checks formal proofs of program properties for logical correctness.
Qualification Testing:
Formal testing, usually conducted by the developer for the end user, to demonstrate that the software meets its specified requirements.
Random Testing:
An essentially black-box testing approach in which a program is tested by randomly choosing a subset of all possible input values. The distribution may be arbitrary or may attempt to accurately reflect the distribution of inputs in the application environment.
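A sketch of property-style random testing in Python; absolute is a hypothetical function, and the seed is fixed only to make the run reproducible:

    import random

    def absolute(x):
        return -x if x < 0 else x   # hypothetical function under test

    random.seed(42)
    for _ in range(1000):
        x = random.randint(-10**6, 10**6)   # randomly chosen input
        assert absolute(x) >= 0             # property that must always hold
        assert absolute(x) in (x, -x)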
Regression Testing:
Selective retesting to detect faults introduced during modification of a system or system component, to verify that modifications have not caused unintended adverse effects, or to verify that a modified system or system component still meets its specified requirements.
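A sketch of a regression test that pins a previously fixed (hypothetical) bug so later modifications cannot quietly reintroduce it:

    import unittest

    def parse_amount(text):
        # Hypothetical function that once crashed on leading whitespace;
        # .strip() is the fix the test below keeps honest.
        return float(text.strip())

    class RegressionTests(unittest.TestCase):
        def test_leading_whitespace_stays_fixed(self):
            self.assertEqual(parse_amount("  3.50"), 3.5)

    if __name__ == "__main__":
        unittest.main()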
Ramp Testing:
Continuously raising an input signal until the system breaks down. A form of stress testing.
Recovery Testing:
Confirms that the program recovers from expected or unexpected events without loss of data or functionality. Events can include shortage of disk space, unexpected loss of communication, or power out conditions.
Reliability:
The probability of failure-free operation for a specified period.
Run Chart:
A graph of data points in chronological order used to illustrate trends or cycles of the characteristic being measured for the purpose of suggesting an assignable cause rather than random variation.
Statement Coverage Testing:
A test method satisfying coverage criteria that requires each statement be executed at least once.
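A hypothetical sketch showing why statement coverage is weaker than the branch coverage defined above:

    def discount(price, member):
        final = price
        if member:
            final = price * 0.9
        return final

    # One test executes every statement (100% statement coverage)...
    assert discount(100, True) == 90.0
    # ...yet branch coverage also demands the False outcome of "if member":
    assert discount(100, False) == 100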
Static Testing:
Verification performed without executing the system’s code. Also called static analysis.
Statistical Process Control:
The use of statistical techniques and tools to measure an ongoing process for change or stability.
