Monday, May 7, 2007

Testing FAQ page 5

What is the difference between adhoc testing, monkey testing and exploratory testing?
Adhoc testing: This kind of testing doesn't have any process, test cases or test scenarios defined or preplanned. It involves simultaneous test design and test execution.
Monkey testing: Monkey testing is testing that runs with no specific test in mind. The monkey in this case is the producer of any input data (whether that be file data or input device data). Keep pressing keys randomly and check whether the software fails or not.
Exploratory testing: Simultaneous learning, test design and test execution. It is a type of adhoc testing, but here the tester does not have much idea about the application; he explores the system in an attempt to learn the application and simultaneously test it.

What is Negative Testing?
Testing the application for fail conditions. Negative testing is testing the application with improper or invalid inputs, for example entering special characters for a phone number.
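A minimal sketch of a negative test in Python, assuming a hypothetical validate_phone() function that is expected to reject anything other than a plain 10-digit number:

import re

def validate_phone(value):
    # Hypothetical validator: accept only 10 digits, reject anything else.
    return bool(re.fullmatch(r"\d{10}", value))

# Negative test cases: improper inputs must be rejected, not accepted (and must not crash).
assert validate_phone("98765@431#") is False   # special characters
assert validate_phone("abcdefghij") is False   # letters instead of digits
assert validate_phone("") is False             # empty input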

What are Testing Techniques?
Black Box and White Box are testing types, not testing techniques.
Testing techniques are as follows:
The most popular Black box testing techniques are:
Equivalence Partitioning, Boundary Value Analysis, Cause-Effect Graphing and Error Guessing.
The White box testing techniques are:
Statement coverage, Decision coverage, Condition coverage, Decision-condition coverage, Multiple condition coverage, Basis Path Testing, Loop testing and Data flow testing.
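A small illustration of Equivalence Partitioning and Boundary Value Analysis, assuming a hypothetical input field that accepts ages from 18 to 60:

def is_valid_age(age):
    # Hypothetical rule: valid ages are 18 to 60 inclusive.
    return 18 <= age <= 60

# Equivalence partitions: one representative value from each partition.
assert is_valid_age(10) is False   # invalid partition (too low)
assert is_valid_age(35) is True    # valid partition
assert is_valid_age(75) is False   # invalid partition (too high)

# Boundary value analysis: values on and just outside each boundary.
assert is_valid_age(17) is False
assert is_valid_age(18) is True
assert is_valid_age(60) is True
assert is_valid_age(61) is False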

What is the difference between bug priority & bug severity?
Priority means how urgently the bug needs to be fixed; severity means how badly it harms the system. Priority tells you how important the bug is, severity tells you how bad the bug is. Severity is constant, whereas priority might change according to the schedule.

What is defect density?
Defect density = Total number of defects / Size of the project
Size of the project can be measured in lines of code (LOC/KLOC), function points, feature points, use cases etc.
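For example, assuming a module of 15 KLOC with 30 reported defects, the defect density is 30 / 15 = 2 defects per KLOC.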

What is the difference between testing and debugging?
Testing: Locating or Identifying Bugs
Debugging: Fixing the identified Bugs

What is CMM and CMMI?
CMM stands for Capability Maturity Model, developed by the Software Engineering Institute (SEI). Before we delve into it, let's understand what a software process is.
A Software Process can be defined as set of activities, methods, practices and transformations that people employ to develop and maintain software and the associated products.
The underlying premise of software process management is that the quality of a software product is largely determined by the quality of the process used to develop and maintain it.
Continuous process improvement is based on many small, evolutionary steps. CMM organizes these steps into 5 maturity levels. Each maturity level comprises a set of process goals that, when satisfied, stabilize an important component of the software process. Organizing the goals into different levels helps the organization prioritize its improvement actions. The five maturity levels are as follows.
1. Initial - The software process is characterized as adhoc and occasionally even chaotic. Few processes are defined and success depends on individual effort and heroics.
2. Repeatable - Basic project management processes are established to track cost, schedule, and functionality. The necessary process discipline is in place to repeat earlier successes on projects with similar applications.
3. Defined - The software process for both management and engineering activities is documented, standardized, and integrated into a standard software process for the organization. All projects use an approved, tailored version of the organization's standard software process for developing and maintaining software.
4. Managed - Detailed measures of the software process and product quality are collected. Both the software process and products are quantitatively understood and controlled.
5. Optimizing - Continuous process improvement is enabled by quantitative feedback from the process and from piloting innovative ideas and technologies.

CMMI: In CMM (aka SW-CMM), the entire emphasis is on software practices. But software is becoming such a large factor in the systems being built today that it is virtually impossible to logically separate the two disciplines. SEI redirected its effort toward the integration of system and software practices, and thus CMMI, which stands for Capability Maturity Model Integration, was born. You can find more info at http://www.sei.cmu.edu/cmmi/

What is six sigma?
Six Sigma stands for Six Standard Deviations from the mean. Initially defined as a metric for measuring defects and improving quality, it evolved into a methodology to reduce defect levels below 3.4 defects per one million opportunities (DPMO).
Six Sigma incorporates the basic principles and techniques used in Business, Statistics, and Engineering. These three form the core elements of Six Sigma. Six Sigma improves the process performance, decreases variation and maintains consistent quality of the process output. This leads to defect reduction and improvement in profits, product quality and customer satisfaction.
Six Sigma experts (Green Belts and Black Belts) evaluate a business process and determine ways to improve upon the existing process.

Testing FAQ page 4

What is incremental integration testing?
Continuous testing of an application as new functionality is added; requires that various aspects of an application's functionality be independent enough to work separately before all parts of the program are completed, or that test drivers or test stubs be developed as needed; done by programmers or by testers.

What is installation testing and how is it performed?
Installation testing is often the most under-tested area in testing. This type of testing is performed to ensure that all installed features and options function properly. It is also performed to verify that all necessary components of the application are, indeed, installed.
Installation testing should take care of the following points:
1. Check whether, while installing, the product checks for dependent software / patches, say Service Pack 3.
2. The product should check for the version of the same product on the target machine; for example, the previous version should not be installed over a newer version.
3. The installer should give a default installation path, say "C:\programs\".
4. The installer should allow the user to install at a location other than the default installation path.
5. Check if the product can be installed over the network.
6. Installation should start automatically when the CD is inserted.
7. The installer should give Remove / Repair options.
8. When uninstalling, check that all registry keys, files, DLLs, shortcuts and ActiveX components are removed from the system.
9. Try to install the software without administrative privileges (log in as guest).
10. Try installing on different operating systems.
11. Try installing on a system with a non-compliant configuration, such as less memory / RAM / HDD.

What is Compliance Testing? What is its significance?
It is performed to check whether the system is developed in accordance with the standards, procedures and policies followed by the company, e.g. completeness of documentation.

What is bebugging testing and incremental testing?
Bebugging: Releasing the build with some known bugs deliberately seeded in it (to check how effectively testing finds them) is called bebugging.
Incremental testing: Level-by-level testing is called incremental testing.

What are the software models?
A software model is a process for the creation of software. The following are a few software models:
1) V-model 2) Spiral model 3) Waterfall model 4) Prototype model

What is Concurrent Testing? And how will you perform it?
Multi-user testing geared towards determining the effects of accessing the same application code, module or database records. Identifies and measures the level of locking, deadlocking and use of single-threaded code and locking semaphores etc.
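A minimal sketch of a concurrency check in Python, where a hypothetical shared counter stands in for a shared record; the point is to verify that simultaneous access does not lose updates:

import threading

counter = 0
lock = threading.Lock()

def increment_many(times):
    # Simulates one user repeatedly updating the same shared record.
    global counter
    for _ in range(times):
        with lock:          # remove the lock to observe lost updates under concurrency
            counter += 1

threads = [threading.Thread(target=increment_many, args=(10000,)) for _ in range(5)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# With correct locking the final value must equal 5 * 10000.
assert counter == 50000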

Testing FAQ page 3

What is cross browser testing?
Cross browser testing - the application is tested with different browsers, for usability testing and compatibility testing.

What is difference between Waterfall model and V model?
The waterfall model is a software development model (a process for the creation of software) in which development is seen as flowing steadily downwards (like a waterfall) through the phases of requirements analysis, design, implementation, testing (validation), integration, and maintenance. To follow the waterfall model, one proceeds from one phase to the next in a purely sequential manner. The model maintains that one should move to a phase only when its preceding phase is completed and perfected. Phases of development in the waterfall model are thus discrete, and there is no jumping back and forth or overlap between them. In the waterfall model the tester's role takes place only in the testing phase.
The V model, or life cycle testing, involves continuous testing of the system during the development process. At predetermined points, the results of the development process are inspected to determine the correctness of the implementation. These inspections identify defects at the earliest possible point. When the project starts, both the system development process and the system test process begin. The team that is developing the system begins the system development process, and the team that is conducting the system test begins planning the system test process. Both teams start at the same point using the same information.

What is the difference between Alpha testing and Beta testing?
Typically, software goes through two stages of testing before it is considered finished. The first stage, called alpha testing, is often performed only by users within the organization developing the software. The second stage, called beta testing , generally involves a limited number of external users.

What is a Baseline document? Can you name any two?
A baseline document is a document that has covered all the details and has gone through a walkthrough. Once a document is baselined it cannot be changed unless a change request has been approved. For instance, the requirements document is baselined, then the high level design document is baselined, and so on.

Briefly explain the Software Testing Life Cycle.
The software testing life cycle contains the following components:
1. Requirements
2. Test plan preparation
3. Test case preparation
4. Test case execution
5. Bug analysis
6. Bug report
7. Bug tracking and closure

What is the difference between System and End-to-End testing?
System testing - black-box type testing that is based on overall requirements specifications; covers all combined parts of a system.
End-to-end testing - similar to system testing, but involves testing the application in an environment that mimics real-world use, such as interacting with a database, using network communications, or interacting with other hardware, applications, or systems if appropriate. Even the transactions performed mimic the end users' usage of the application.

Testing FAQ page 2

What are 5 common problems in the software development process?
poor requirements - if requirements are unclear, incomplete, too general, and not testable, there will be problems.
unrealistic schedule - if too much work is crammed in too little time, problems are inevitable.
inadequate testing - no one will know whether or not the program is any good until the customer complains or systems crash.
featuritis - requests to pile on new features after development is underway; extremely common.
miscommunication - if developers don't know what's needed or customers have erroneous expectations, problems are guaranteed.

What is the difference between a test plan and a use case?
Test plan: It contains an introduction to the client company, scope, an overview of the application, test strategy, schedule, roles and responsibilities, deliverables and milestones.
Use case: It is nothing but user action and system response. It contains the flows: typical flow, alternate flow and exceptional flow. Apart from these it also has a precondition and a postcondition. A use case describes how an end user uses specific functionality in the application.

What is the difference between smoke testing and sanity testing?
The general definition (related to Hardware) of Smoke Testing is: Smoke testing is a safe harmless procedure of blowing smoke into parts of the sewer and drain lines to detect sources of unwanted leaks and sources of sewer odors.
In relation to software, the definition is Smoke testing is non-exhaustive software testing, ascertaining that the most crucial functions of a program work, but not bothering with finer details.
Sanity testing is a brief test of the major functional features of a software application to determine if it is basically operational (or sane).

Differentiate between Static and Dynamic testing?
Test activities that are performed without running the software are called static testing. Static testing includes code inspections, walkthroughs, and desk checks. In contrast, test activities that involve running the software are called dynamic testing.
Static: document review, inspections, reviews.
Dynamic: build testing / testing code / testing the application.

What is the difference between Requirements & Specifications?
Requirements: statements by the customer of what the system has to achieve.
Specifications: implementable requirements.

What is the difference between statement coverage, path coverage and branch coverage?
Statement coverage measures the number of lines executed.
Branch coverage measures the number of executed branches. A branch is an outcome of a decision, so an if statement, for example, has two branches (True and False).
Path coverage usually means coverage with respect to the set of entry/exit paths through the code.
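A small sketch of the difference, assuming a hypothetical function with a single decision:

def classify(x):
    result = "non-negative"
    if x < 0:
        result = "negative"
    return result

# Statement coverage: the single test classify(-1) executes every line at least once.
# Branch coverage: needs both outcomes of the decision, e.g. classify(-1) and classify(1).
# Path coverage: here the two branches are also the only two entry/exit paths, but with
# several decisions the number of paths grows much faster than the number of branches.
assert classify(-1) == "negative"
assert classify(1) == "non-negative"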

Testing FAQ page 1

1. What's the difference between QA and testing?
QA stands for "Quality Assurance" and deals with 'prevention' of defects in the product being developed. It is associated with process and process improvement activities. Testing means "quality control"; its focus is defect detection and removal. Quality control measures the quality of a product, whereas quality assurance measures the quality of the processes used to create a quality product.

2. What is black box/white box testing?
Black-box and white-box are test design methods. Black-box test design treats the system as a "black box" (you can't see what is inside the box), so you design test cases which pour input in at one end of the box and expect a certain specific output from the other end. To run these test cases you don't need to know how the input is transformed into the output inside the box. Black-box is also called behavioral, functional, opaque-box or closed-box testing. White-box test design treats the system as a transparent box which allows one to peek inside the "box", so you see how an input transforms into an output. You design test cases to test the internal logic, paths or branches of the box. White-box is also known as structural, glass-box, clear-box or translucent-box test design.

3. What are unit, component and integration testing?
Unit: The smallest compilable component. A unit typically is the work of one programmer. As defined, it does not include any called sub-components or communicating components in general.
Unit Testing: In unit testing, called components (or communicating components) are replaced with stubs, simulators, or trusted components. Calling components are replaced with drivers or trusted super-components. The unit is tested in isolation.
Component: A unit is a component. The integration of one or more components is a component.
Note: The reason for "one or more" as contrasted to "two or more" is to allow for components that call themselves recursively.
Component testing: The same as unit testing except that all stubs and simulators are replaced with the real thing.
Integration Testing: This test begins after two or more programs or application components have been successfully unit tested. It is conducted by the development team to validate the interaction or communication/flow of information between the individual components which will be integrated.
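A minimal sketch of unit testing in isolation, assuming a hypothetical price calculator whose called component (a tax service) is replaced by a stub, with simple driver code exercising it:

def calculate_total(amount, tax_service):
    # Unit under test: depends on a called component for the tax percentage.
    return amount + amount * tax_service.get_rate_percent() // 100

class StubTaxService:
    # Stub that replaces the real tax component so the unit is tested in isolation.
    def get_rate_percent(self):
        return 10

# Driver code exercising the unit with the stub in place of the real component.
assert calculate_total(200, StubTaxService()) == 220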

4. What's the difference between load and stress testing?
Stress testing is subjecting a system to an unreasonable load while denying it the resources (e.g., RAM, disc, mips, interrupts) needed to process that load. The idea is to stress a system to the breaking point in order to find bugs that will make that break potentially harmful. The system is not expected to process the overload without adequate resources, but to behave (e.g., fail) in a decent manner (e.g., not corrupting or losing data). The load (incoming transaction stream) in stress testing is often deliberately distorted so as to force the system into resource depletion.
Load testing is a test whose objective is to determine the maximum sustainable load the system can handle. Load is varied from a minimum (zero) to the maximum level the system can sustain without running out of resources or having transactions suffer (application-specific) excessive delay.

5. Why does software have bugs?
miscommunication or no communication - as to specifics of what an application should or shouldn't do (the application's requirements).
software complexity - the complexity of current software applications can be difficult to comprehend for anyone without experience in modern-day software development. Multi-tiered applications, client-server and distributed applications, data communications, enormous relational databases, and sheer size of applications have all contributed to the exponential growth in software/system complexity.
programming errors - programmers, like anyone else, can make mistakes.
changing requirements (whether documented or undocumented) - the end-user may not understand the effects of changes, or may understand and request them anyway - redesign, rescheduling of engineers, effects on other projects, work already completed that may have to be redone or thrown out, hardware requirements that may be affected, etc. If there are many minor changes or any major changes, known and unknown dependencies among parts of the project are likely to interact and cause problems, and the complexity of coordinating changes may result in errors. Enthusiasm of engineering staff may be affected. In some fast-changing business environments, continuously modified requirements may be a fact of life. In this case, management must understand the resulting risks, and QA and test engineers must adapt and plan for continuous extensive testing to keep the inevitable bugs from running out of control
Time pressures - scheduling of software projects is difficult at best, often requiring a lot of guesswork. When deadlines loom and the crunch comes, mistakes will be made.
poorly documented code - it's tough to maintain and modify code that is badly written or poorly documented; the result is bugs. In many organizations management provides no incentive for programmers to document their code or write clear, understandable, maintainable code. In fact, it's usually the opposite: they get points mostly for quickly turning out code, and there's job security if nobody else can understand it ('if it was hard to write, it should be hard to read').
software development tools - visual tools, class libraries, compilers, scripting tools, etc. often introduce their own bugs or are poorly documented, resulting in added bugs.

6. What is the difference between verification and validation?
Verification is the process of determining whether the products of a given phase of the software development cycle fulfill the requirements established during the previous phase. This involves reviewing, inspecting, checking, auditing, or otherwise establishing and documenting whether items, processes, services, or documents conform to specified requirements. Validation is the determination of the correctness of the final program or software produced from a development project with respect to the user needs and requirements. This involves actual testing of the product.

QUALITY

What is Quality?
What is quality? or Define quality.
Many quality pioneers have defined quality in different ways. A quality product is defined as one that meets product requirements. But quality can only be seen through the customer's eyes, so the most important definition of quality is meeting customer needs, or understanding customer requirements and expectations and exceeding those expectations. The customer must be satisfied by using the product; then it is a quality product.

Whats the difference between meeting product requirements and meeting customer needs?
Aren't customer needs translated into product requirements? Not always. Though our aim is to accurately capture customer needs into requirements and build a product that satisfies those needs, we sometimes fail to do so because of the following reasons:
- Customers fail to accurately communicate their exact needs
- Captured requirements can be misinterpreted

Can't we define a quality product as one that contains no bugs/defects?
Quality is much more than the absence of defects/bugs. Consider this: though the product may have zero defects, if the usability sucks, i.e. it is difficult to learn and operate the product, then it is not a quality product.

If the product has some defects, can it still be called a quality product?
It depends on the nature of those bugs. In some cases, even though a product has bugs, it can still be called a quality product. Unless the product is very critical, aiming for zero defects is not always cost effective. We should aim for 100% defect 'detection', but given budget, time and resource constraints, we can still release the product with some unfixed or open bugs. If the open bugs cause no loss to the customer, it can still be called a quality product.

Is quality only the testers' responsibility?
No. Quality is everybody's responsibility, including the customer's. We testers identify the deviations and report them, that's it. There are many factors that impact quality, such as maintainability, reusability, flexibility and portability, which the testers can't validate. Testers can only validate the correctness, reliability, usability and interoperability of a product and report the deviations.

When is the right time to catch a bug?
As soon as possible. The cost of fixing the bug keeps increasing exponentially as product development progresses. For example, the cost of fixing a design bug identified in system testing is much more than fixing it if it had been identified during the design phase itself, because now you not only have to rectify the design but also the code, the corresponding documents and the code that depends on it.
Are there any other quality control practices apart from testing? Yes. Inspections, design and code walkthroughs, reviews etc.

What are software quality factors?
Software quality factors are attributes of the software that, if they are wanted and not present, pose a risk to the success of the software. There are 11 main factors and their definitions are given below. The priority and importance of these attributes keeps changing from product to product. For example, if the product being developed needs to be changed quite frequently, then flexibility and reusability of the product need to be given priority. The following are the quality factors:
Correctness: Extent to which a program satisfies its requirements
Reliability: Extent to which a program can be expected to perform its intended function with required precision.
Efficiency: The amount of computing resources and code required by a program to perform a function.
Integrity: Extent to which access to software or data by unauthorized persons can be controlled.
Usability: Effort required to learn, operate, prepare input for, and interpret output of a program.
Maintainability: Effort required to locate and fix an error in an operational program.
Testability: Effort required to test a program to ensure that it performs its intended function.
Flexibility: Effort required to modify an operational program.
Portability: Effort required to transfer software from one configuration to another.
Reusability: Extent to which a program can be used in other applications - related to the packaging and scope of the functions that programs perform.
Interoperability: Effort required to couple one system with another.

How to reduce the amount spent to ensure and build quality? or How to reduce the cost of quality?
The cost of quality includes the total amount spent on preventing errors and on identifying and correcting errors. Coming to reducing this cost: try to build a product that has few or no defects even before it goes to the testing phase, and to achieve this you should spend more money and effort on trying to prevent errors from getting into the product. You must concentrate on building efficient and effective processes and keep continuously improving them by identifying weaknesses in them. You may not reap great benefits immediately, but over the long run you can make significant savings by reducing the cost of quality.

How to reduce the cost of fixing a bug?
Catch it as early as possible. As the development process progresses, the cost of fixing a bug keeps increasing exponentially. Practice life cycle testing.

Test your basic knowledge of PC and networking.

1.How to open the command prompt?
To open a command prompt window in Windows 2000 or XP, click Start > Run, type cmd in the box, and click OK.

2. How to find IP address of your connection?
Go to Start > Run, type 'cmd', then type 'ipconfig'. Add the '/all' switch for more info.

3. How to verify connection to remote computer?
The ping tool verifies connections to remote computers.
example: In cmd type c:>ping 192.168.0.1 -t
-t Ping the specified host until interrupted
-a Resolve addresses to hostnames

4. How to find a path on the network from your PC that is running load test script to web server?
Use the Tracert utility; it runs at the command prompt. It will trace the path from you to the URL or IP address given along with the tracert command. Tracert determines the route taken to a destination by sending ICMP echo packets.

5. How to find what ports are open on your system?
In cmd type c:>netstat. This command gives you a generic look at what ports are open on your system.

6. What TCP/IP Settings are used on computer?
The following TCP/IP settings are used in network troubleshooting:
1. IP Address
2. Subnet Mask
3. Default Gateway
4. DHCP Server
5. DNS Servers

7. What is telnet?
Telnet is a text based communication program that allows you to connect to a remote server over a network:
telnet <server> <port>
<server> is the name or IP address of the remote server to connect to.
<port> is the port number of the service to use for the connection. The default is 23 (TELNET service).

8. How to find a network configuration of your PC?
In cmd type c:> net config workstation
The result displays a list of configuration details: computer name, user name, logon domain, domain DNS name.

9. How to find what program is used as the default for opening file .xyz?
In cmd type C:> assoc .xyz; it shows the file type associated with the .xyz extension, and hence which program will open that file.

10. How to change settings in command prompt?
The first thing you'll want to do is Start > Run > cmd.exe, then right-click the window menu and choose Properties. Try the following values for improvement:
Options > Command History > Buffer Size: 400
Options > Command History > Discard Old Duplicates: True
Options > Edit Options > QuickEdit Mode: True
Layout > Screen buffer size > Height: 900
Layout > Window size > Height: 40

11. How to start DirectX Diagnostic Tool ?
To start the DirectX Diagnostic Tool:
1. Click Start, and then click Run.
2. In the Open box, type dxdiag, and then click OK.

12. How to determine whether there is an issue with the DNS configuration of your connection to your ISP?
At a command prompt, type ipconfig /all, and then press ENTER to display the IP address of your DNS server. If the IP address for your DNS server does not appear, you need to contact your ISP.

13. What do you need to do that your browser will point URL www.YourTest.com to the internal IP address 127.99.11.01?
Make changes in the hosts file in C:\WINDOWS\system32\drivers\etc
The Hosts file is looked at first before going out to the DNS (Domain Name Service) servers.
You have to put the following on new lines at the end of the hosts file:
127.99.11.01 YourTest.com
127.99.11.01 www.YourTest.com

14. What can you suggest to enhance testing process on windows OS?
Put a shortcut to notepad.exe in the SendTo folder. It speeds up work with different files like hosts and configuration files.

15. What is FTP?
FTP is short for File Transfer Protocol. This is the protocol used for file transfer over the Internet.
On this page I put some interview questions for QA and testers. These PC and networking interview questions are very simple and were mainly used for interviewing software testers involved in any type of testing. The interview questions found above are listed in order of complexity. However, all new PC and networking interview questions (regardless of their difficulty) will be added to the bottom of the list. You can find more PC and network related interview questions by searching the web. END PC and Networking Interview Questions

Wednesday, May 2, 2007

IMP QA & Concepts

Structural Testing:
A testing method where the test data is derived solely from the program structure.
Sanity Testing:
Brief test of major functional elements of a piece of software to determine if it is basically operational.
Scalability Testing:
Performance testing focused on ensuring the application under test gracefully handles increases in workload.
Security Testing:
Testing which confirms that the program can restrict access to authorized personnel and that the authorized personnel can access the functions available to their security level.
Smoke Testing:
A quick-and-dirty test that the major functions of a piece of software work. Originated in the hardware testing practice of turning on a new piece of hardware for the first time and considering it a success if it does not catch on fire.
Soak Testing:
Running a system at high load for a prolonged period of time. For example, running several times more transactions in an entire day (or night) than would be expected in a busy day, to identify any performance problems that appear after a large number of transactions have been executed.
Software Requirements Specification:
A deliverable that describes all data, functional and behavioral requirements, all constraints, and all validation requirements for software.
Software Testing:
A set of activities conducted with the intent of finding errors in software.
Static Analysis:
Analysis of a program carried out without executing the program.
Static Analyzer:
A tool that carries out static analysis.
Static Testing:
Analysis of a program carried out without executing the program.
Storage Testing:
Testing that verifies the program under test stores data files in the correct directories and that it reserves sufficient space to prevent unexpected termination resulting from lack of space. This is external storage as opposed to internal storage.
Stress Testing:
Testing conducted to evaluate a system or component at or beyond the limits of its specified requirements to determine the load under which it fails and how. Often this is performance testing using a very high level of simulated load.
Structural Testing:
Testing based on an analysis of internal workings and structure of a piece of software.
System Testing:
Testing that attempts to discover defects that are properties of the entire system rather than of its individual components.
Test Bed:
1) An environment that contains the integral hardware, instrumentation, simulators, software tools, and other support elements needed to conduct a test of a logically or physically separate component.
2) A suite of test programs used in conducting the test of a component or system.
Test Development:
The development of anything required to conduct testing. This may include test requirements (objectives), strategies, processes, plans, software, procedures, cases, documentation, etc.
Test Harness:
A software tool that enables the testing of software components; it links test capabilities to perform specific tests, accepts program inputs, simulates missing components, compares actual outputs with expected outputs to determine correctness, and reports discrepancies.
Testability:
The degree to which a system or component facilitates the establishment of test criteria and the performance of tests to determine whether those criteria have been met.
Testing: The process of exercising software to verify that it satisfies specified requirements and to detect errors. The process of analyzing a software item to detect the differences between existing and required conditions (that is, bugs), and to evaluate the features of the software item (Ref. IEEE Std 829). The process of operating a system or component under specified conditions, observing or recording the results, and making an evaluation of some aspect of the system or component.
Test Case:
Test Case is a commonly used term for a specific test. This is usually the smallest unit of testing. A Test Case will consist of information such as the requirement being tested, test steps, verification steps, prerequisites, outputs, test environment, etc. It is a set of inputs, execution preconditions, and expected outcomes developed for a particular objective, such as to exercise a particular program path or to verify compliance with a specific requirement.
Test Driven Development:
Testing methodology associated with Agile Programming in which every chunk of code is covered by unit tests, which must all pass all the time, in an effort to eliminate unit-level and regression bugs during development. Practitioners of TDD write a lot of tests, often roughly as many lines of test code as production code.
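A minimal sketch of the TDD rhythm, assuming a hypothetical add() function: the test is written first and fails because the code does not exist yet, then just enough code is written to make it pass.

# Step 1 (red): write the test first; it fails because add() does not exist yet.
def test_add():
    assert add(2, 3) == 5

# Step 2 (green): write just enough production code to make the test pass.
def add(a, b):
    return a + b

test_add()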
Test Driver:
A program or test tool used to execute tests. Also known as a Test Harness.
Test Environment:
The hardware and software environment in which tests will be run, and any other software with which the software under test interacts when under test including stubs and test drivers.
Test First Design:
Test-first design is one of the mandatory practices of Extreme Programming (XP). It requires that programmers do not write any production code until they have first written a unit test.
Test Plan:
A document describing the scope, approach, resources, and schedule of intended testing activities. It identifies test items, the features to be tested, the testing tasks, who will do each task, and any risks requiring contingency planning. Ref IEEE Std 829.
Test Procedure:
A document providing detailed instructions for the execution of one or more test cases.
Test Script:
Commonly used to refer to the instructions for a particular test that will be carried out by an automated test tool.
Test Specification:
A document specifying the test approach for a software feature or combination of features and the inputs, predicted results and execution conditions for the associated tests.
Test Suite:
A collection of tests used to validate the behavior of a product. The scope of a Test Suite varies from organization to organization. There may be several Test Suites for a particular product for example. In most cases however a Test Suite is a high level concept, grouping together hundreds or thousands of tests related by what they are intended to test.
Test Tools:
Computer programs used in the testing of a system, a component of the system, or its documentation.
Thread Testing:
A variation of top-down testing where the progressive integration of components follows the implementation of subsets of the requirements, as opposed to the integration of components by successively lower levels.
Top Down Testing:
An approach to integration testing where the component at the top of the component hierarchy is tested first, with lower level components being simulated by stubs. Tested components are then used to test lower level components. The process is repeated until the lowest level components have been tested.
Total Quality Management:
A company commitment to develop a process that achieves high quality products and customer satisfaction.
Traceability Matrix:
A document showing the relationship between Test Requirements and Test Cases.
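A minimal sketch of what such a matrix might look like, using hypothetical requirement and test case IDs:

Requirement         | Test Cases
REQ-001 (login)     | TC-01, TC-02
REQ-002 (reset pwd) | TC-03
REQ-003 (logout)    | TC-04, TC-05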
Test Objective:
An identified set of software features to be measured under specified conditions by comparing actual behavior with the required behavior described in the software documentation.
Unit Testing:
The testing done to show whether a unit (the smallest piece of software that can be independently compiled or assembled, loaded, and tested) satisfies its functional specification or its implemented structure matches the intended design structure.
Usability Testing:
Testing the ease with which users can learn and use a product.
Use Case:
The specification of tests that are conducted from the end-user perspective. Use cases tend to focus on operating software as an end-user would conduct their day-to-day activities.
V- Diagram (model):
A diagram that visualizes the order of testing activities and their corresponding phases of development.
Verification:
The process of determining whether or not the products of a given phase of the software development cycle meet the implementation steps and can be traced to the incoming objectives established during the previous phase. The techniques for verification are testing, inspection and reviewing.
Volume Testing:
Testing which confirms that any values that may become large over time (such as accumulated counts, logs, and data files) can be accommodated by the program and will not cause the program to stop working or degrade its operation in any manner.
Validation:
The process of evaluating software to determine compliance with specified requirements.
Walkthrough:
Usually, a step-by-step simulation of the execution of a procedure, as when walking through code, line by line, with an imagined set of inputs. The term has been extended to the review of material that is not procedural, such as data descriptions, reference manuals, specifications, etc.
White-box Testing:
Testing approaches that examine the program structure and derive test data from the program logic. This is also known as clear box testing, glass-box or open-box testing. White box testing determines if program-code structure and logic is faulty. The test is accurate only if the tester knows what the program is supposed to do. He or she can then see if the program diverges from its intended goal. White box testing does not account for errors caused by omission, and all visible code must also be readable.
Workflow Testing:
Scripted end-to-end testing which duplicates specific workflows, which are expected to be utilized by the end-user.

IMP QA & Concepts

Acceptance Testing:
Formal testing conducted to determine whether or not a system satisfies its acceptance criteria—enables an end user to determine whether or not to accept the system.
Alpha Testing:
Testing of a software product or system conducted at the developer’s site by the end user.
Ad Hoc Testing:
A testing phase where the tester tries to 'break' the system by randomly trying the system's functionality. Can include negative testing as well.
Agile Testing:
Testing practice which emphasizes a test-first design paradigm.
Automated Testing:
That part of software testing that is assisted with software tool(s) that does not require operator input, analysis, or evaluation.
Beta Testing:
Testing conducted at one or more end user sites by the end user of a delivered software product or system.
Basis Path Testing:
A white box test case design technique that uses the algorithmic flow of the program to design tests.
Black Box Testing:
Functional testing based on requirements with no knowledge of the internal program structure or data. Also known as closed-box testing. Black Box testing indicates whether or not a program meets required specifications by spotting faults of omission -- places where the specification is not fulfilled.
Bottom-up Testing:
An integration testing technique that tests the low-level components first using test drivers for those components that have not yet been developed to call the low-level components for test.
Boundary Testing:
Testing that focuses on the boundary or limit conditions of the software being tested. Stress testing can also be considered a form of boundary testing.
Boundary Value Analysis:
A test data selection technique in which values are chosen to lie along data extremes. Boundary values include maximum, minimum, just inside/outside boundaries, typical values, and error values.
Branch Coverage Testing:
A test method satisfying coverage criteria that requires each decision point at each possible branch to be executed at least once.
Breadth Testing:
A test suite that exercises the full functionality of a product but does not test features in detail. In a way it is used when there is not enough time to execute all the test cases.
Bug:
A design or implementation flaw that will result in symptoms exhibited by some module when the module is subjected to an appropriate test.
Cause-and-Effect (Fishbone) Diagram:
A tool used to identify possible causes of a problem by representing the relationship between some effect and its possible causes.
Cause-effect Graphing:
A testing technique that aids in selecting, in a systematic way, a high-yield set of test cases that logically relates causes to effects to produce test cases. It has a beneficial side effect in pointing out incompleteness and ambiguities in specifications.
Code Complete:
Phase of development where functionality is implemented in its entirety; bug fixes are all that are left. All functions found in the functional specification have been implemented. A code complete module may still be far from release, as it may have many bugs.
Code Coverage:
An analysis method that determines which parts of the software/code have been executed (covered) by the test case suite and which parts have not been executed and therefore may require additional attention.
Code Inspection:
A formal testing technique where the programmer reviews source code with a group who ask questions analyzing the program logic, analyzing the code with respect to a checklist of historically common programming errors, and analyzing its compliance with coding standards.
Concurrency Testing:
Multi-user testing geared toward determining the effects of accessing the same application code, module or database records. Identifies and measures the level of locking, deadlocking and use of single-threaded code and locking semaphores. This is one area where the cause for many bugs which were considered random can be identified.
Conformance Testing:
The process of testing that an implementation conforms to the specification on which it is based. Usually applied to testing conformance to a formal standard.
Debugging:
The process of finding and removing the causes of software failures. Tools used for debugging are called debuggers.
End-to-End Testing:
Testing a complete application environment in a situation that mimics real-world use, such as interacting with a database, using network communications, or interacting with other hardware, applications, or systems if appropriate.
Exhaustive Testing:
Testing which covers all combinations of input values and preconditions for an element of the software under test. This is practically infeasible.
Failure:
The inability of a system or system component to perform a required function within specified limits. A failure may be produced when a fault is encountered.
Fault:
A manifestation of an error in software. A fault, if encountered, may cause a failure.
Fault-based Testing:
Testing that employs a test data selection strategy designed to generate test data capable of demonstrating the absence of a set of pre-specified faults, typically, frequently occurring faults.
Function Points:
A consistent measure of software size based on user requirements. Data components include inputs, outputs, etc. Environment characteristics include data communications, performance, reusability, operational ease, etc. Weight scale: 0 = not present, 1 = minor influence, 5 = strong influence.
Functional Testing:
Application of test data derived from the specified functional requirements without regard to the final program structure. Also known as black box testing.
Gorilla Testing:
Testing one particular module or functionality heavily.
Gray Box Testing:
A combination of Black Box and White Box testing methodologies: testing a piece of software against its specification but using some knowledge of its internal workings.
Heuristics Testing:
Another term for failure-directed testing.
Incremental Analysis:
Incremental analysis occurs when (partial) analysis may be performed on an incomplete product to allow early feedback on the development of that product.
Infeasible Path:
A program statement sequence that can never be executed, i.e. unreachable code.
Inspection:
A formal evaluation technique in which software requirements, design, or code are examined in detail by a person or group other than the author to detect faults, violations of development standards, and other problems. It is a generic term for all inspections similar to code inspections.
Integration Testing:
An orderly progression of testing in which software components or hardware components, or both, are combined and tested until the entire system has been integrated.
Intrusive Testing:
Testing that collects timing and processing information during program execution that may change the behavior of the software from its behavior in a real environment. Usually involves additional code embedded in the software being tested or additional processes running concurrently with software being tested on the same platform.
Installation Testing:
Confirms that the application under test installs correctly and that all installed features, options and necessary components are present and function properly on the target environment.
IV&V:
Independent verification and validation is the verification and validation of a software product by an organization that is both technically and managerially separate from the organization responsible for developing the product.
Life Cycle:
The period that starts when a software product is conceived and ends when the product is no longer available for use. The software life cycle typically includes a requirements phase, design phase, implementation (code) phase, test phase, installation and checkout phase, operation and maintenance phase, and a retirement phase.
Localization Testing:
This term refers to testing software that has been adapted (translated text, local formats, etc.) for a specific locality or locale.
Loop Testing:
A white box testing technique that exercises program loops.
Manual Testing:
That part of software testing that requires operator input, analysis, or evaluation.
Monkey Testing:
Testing a system or an application on the fly, i.e. just a few tests here and there to ensure the system or application does not crash. It is a form of adhoc testing.
Mutation Testing:
A method to determine test set thoroughness by measuring the extent to which a test set can discriminate the program from slight variants of the program.
Non-intrusive Testing:
Testing that is transparent to the software under test; i.e., testing that does not change the timing or processing characteristics of the software under test from its behavior in a real environment. Usually involves additional hardware that collects timing or processing information and processes that information on another platform.
Negative Testing:
Testing aimed at showing software does not work. Also known as "test to fail".
Path Analysis:
Program analysis performed to identify all possible paths through a program, to detect incomplete paths, or to discover portions of the program that are not on any path.
Path Coverage Testing:
A test method satisfying coverage criteria that each logical path through the program is tested. Paths through the program often are grouped into a finite set of classes; one path from each class is tested.
Peer Reviews:
A methodical examination of software work products by the producer’s peers to identify defects and areas where changes are needed.
Path Testing:
Testing wherein all paths in the program source code are tested at least once.
Performance Testing:
Testing conducted to evaluate the compliance of a system or component with specified performance requirements. Often this is performed using an automated test tool to simulate a large number of users. Also known as "Load Testing".
Positive Testing:
Testing aimed at showing software works. Also known as "test to pass".
Proof Checker:
A program that checks formal proofs of program properties for logical correctness.
Qualification Testing:
Formal testing, usually conducted by the developer for the end user, to demonstrate that the software meets its specified requirements.
Random Testing:
An essentially black-box testing approach in which a program is tested by randomly choosing a subset of all possible input values. The distribution may be arbitrary or may attempt to accurately reflect the distribution of inputs in the application environment.
Regression Testing:
Selective retesting to detect faults introduced during modification of a system or system component, to verify that modifications have not caused unintended adverse effects, or to verify that a modified system or system component still meets its specified requirements.
Ramp Testing:
Continuously raising an input signal until the system breaks down. A form of stress testing.
Recovery Testing:
Confirms that the program recovers from expected or unexpected events without loss of data or functionality. Events can include shortage of disk space, unexpected loss of communication, or power out conditions.
Reliability:
The probability of failure-free operation for a specified period.
Run Chart:
A graph of data points in chronological order used to illustrate trends or cycles of the characteristic being measured for the purpose of suggesting an assignable cause rather than random variation.
Statement Coverage Testing:
A test method satisfying coverage criteria that requires each statement be executed at least once.
Static Testing:
Verification performed without executing the system’s code. Also called static analysis.
Statistical Process Control:
The use of statistical techniques and tools to measure an ongoing process for change or stability.

Software testing

Software Testing: Exercising the application with a variety of inputs and validating the corresponding behavior. Testing is a process of technical investigation, performed on behalf of stakeholders, that is intended to reveal quality-related information about the product with respect to the context in which it is intended to operate.
Why do we test?
To validate whether the application is doing what it is supposed to do (listing down obvious test cases).
To ensure that the application is not doing anything that it is not supposed to do (listing down unobvious test cases).
To help make ship/no-ship decisions.
Difference Between:
Mistake: When an incorrect result occurs through human interaction, it is a mistake.
Error: When a mistake happens it leads to an error.
Fault: The outward manifestation of an error is a fault; or, if a particular error is hit, then it is a fault.
Failure: A deviation of the software from its expected delivery or service.
Defect: The tester finds the defect.
Bug: A defect accepted by the developer is called a bug.

Tuesday, May 1, 2007

TESTING CONCEPTS

TESTING
Software Testing is the process used to help identify the correctness, completeness, security, and quality of developed computer software.