Sunday, October 19, 2008
ISTQB CERTIFICATION
ISTQB is a non-profit organization responsible for defining various guidelines such as examination structure and regulations, accreditation, certification etc. Working parties within the ISTQB are responsible for developing and maintaining the syllabi and exams. The ISTQB comprises representatives from each existing national board.
What is the ISTQB Certified Tester program?
The ISTQB Certified Tester program provides certification for software testers internationally. There are currently two levels of certification: The Foundation Level and the Advanced Level certificate. For both, international working parties develop and maintain internationally uniform curricula and exams.
What is the advantage of ISTQB certification?
Truly international nature of the certification: This certificate is recognized in multiple countries in multiple continents. Hardly any other certification has adoption in as many countries as this one. Please see http://www.istqb.org/members.php
The people behind this exam: Test experts from various countries constantly work to define and refine the syllabus, the exam pattern, the examination questions and more. Rex Black, Randy Rice, Erik Van Veenendaal and many others from academia as well as industry are involved in this effort.
The multi-tier nature of the certification: This certification distinguishes between a knowledge-based exam (Foundation Level) and a skill-based exam (Advanced Level).
Non-profit nature of ISTQB: Non-profit nature of ISTQB ensures that nobody has a vested interest in promoting a particular Body of Knowledge (BOK) or a particular book. All material developed by ISTQB is developed on a voluntary basis and is made available for free by ISTQB.
How many ISTQB certified testers are there?
Approximately 30,000 software testers currently hold ISTQB compliant certificates. Most of them can be found in Europe, since the program has just recently been introduced into Asia and the U.S.
Is this certification acknowledged across the world ?
This is a truly internationally recognized certification program. For the list of participating nations please visit http://www.istqb.org/members.php
What is the Examination Schedule?
Please check the Examination Schedule page for the dates in various cities. The examination dates can change depending on the number of registrations and other factors. Participants will be informed of the date changes in advance.
Thursday, March 27, 2008
Capability maturity model(CMM)
CMM was developed by the Software Engineering Institute. The Capability Maturity Model (CMM) is a way to develop and refine an organization's processes.
Structure of CMM
The CMM involves the following aspects:
Maturity Levels:
It is a layered framework providing a progression of the discipline needed to engage in continuous improvement. (It is important to state here that an organization develops the ability to assess the impact of a new practice, technology, or tool on its activity. Hence it is not a matter of simply adopting these; rather, it is a matter of determining how innovative efforts influence existing practices. This empowers projects, teams, and organizations by giving them the foundation to support reasoned choice.)
Key Process Areas:
A Key Process Area (KPA) identifies a cluster of related activities that, when performed collectively, achieve a set of goals considered important.
Goals:
The goals of a key process area summarize the states that must exist for that key process area to have been implemented in an effective and lasting way. The extent to which the goals have been accomplished is an indicator of how much capability the organization has established at that maturity level. The goals signify the scope, boundaries, and intent of each key process area.
Common Features: Common features include practices that implement and institutionalize a key process area. These five types of common features include: Commitment to Perform, Ability to Perform, Activities Performed, Measurement and Analysis, and Verifying Implementation.
Key Practices:
The key practices describe the elements of infrastructure and practice that contribute most effectively to the implementation and institutionalization of the key process areas.
What is the Testing Maturity Model?
1. Establishes a baseline for the current level of testing
2. Highlights any inconsistencies between the believed level of maturity and the actual level
3. Provides a roadmap for test process improvement.
Phases of the Testing Maturity Model
Level 1: Initial
Level 2: Phase definition
Level 3: Integration
Level 4: Management and measurement
Level 5: Optimization/defect prevention and quality control
Level 1:
1. Testing is a chaotic process
2. Tests are developed ad hoc
3. The objective is simply to show that the software works
4. Lacks trained staff and resources
5. Testing time cannot be estimated
Level 2:
1. Testing is separated from debugging
2. Testing is a phase that follows coding
3. The primary goal of testing is to show that the software meets its requirements
4. Basic testing techniques are adopted
Level 3:
1. Testing is integrated into the entire life cycle
2. Test objectives are based on requirements
3. Testing is recognized as a professional activity
Level 4:
1. Testing is a measured and quantified process
2. Reviews are held at all phases of the development process
3. Products are tested for quality attributes such as reliability, usability and maintainability
4. Regression testing is performed; test cases are recorded and maintained
5. Defects are logged and assigned a severity
Level 5:
1. The testing process is defined and managed
2. Testing costs and effectiveness can be monitored
3. There is an established procedure for selecting and evaluating testing tools
4. Automated tools are a primary part of the testing process
Wednesday, March 26, 2008
Guidelines on deciding the Severity of Bug:
A sample guideline for assignment of Priority Levels during the product test phase includes:
- Critical / Show Stopper — An item that prevents further testing of the product or function under test can be classified as a Critical bug. No workaround is possible for such bugs. Examples include a missing menu option or a security permission required to access a function under test.
- Major / High — A defect that does not function as expected/designed, or causes other functionality to fail to meet requirements, can be classified as a Major bug. A workaround can be provided for such bugs. Examples include inaccurate calculations or the wrong field being updated.
- Average / Medium — Defects that do not conform to standards and conventions can be classified as Medium bugs. Easy workarounds exist to achieve functionality objectives. Examples include matching visual and text links which lead to different end points.
- Minor / Low — Cosmetic defects which do not affect the functionality of the system can be classified as Minor bugs.
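A sketch of how these four levels might be encoded in a triage helper (the `Severity` enum and `triage` function are illustrative assumptions, not part of any standard):

```python
from enum import IntEnum

class Severity(IntEnum):
    """Severity levels from the guideline above (higher = more severe)."""
    LOW = 1       # Minor: cosmetic, no functional impact
    MEDIUM = 2    # Average: standards violation, easy workaround
    HIGH = 3      # Major: functionality fails requirements, workaround exists
    CRITICAL = 4  # Show stopper: blocks further testing, no workaround

def triage(blocks_testing: bool, affects_function: bool, cosmetic: bool) -> Severity:
    """Map the guideline's criteria to a level (illustrative, not official)."""
    if blocks_testing:
        return Severity.CRITICAL
    if affects_function:
        return Severity.HIGH
    if cosmetic:
        return Severity.LOW
    return Severity.MEDIUM  # standards/convention issues with a workaround
```

Because `IntEnum` values are ordered, bug lists can be sorted directly by severity.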
Guide lines on writing bug description:
A bug can be expressed as “result followed by the action”: the unexpected behavior occurring when a particular action takes place is given as the bug description.
- Be specific. State the expected behavior which did not occur - such as a pop-up that did not appear - and the behavior which occurred instead.
- Use present tense.
- Don’t use unnecessary words.
- Don’t add exclamation points. End sentences with a period.
- DON’T USE ALL CAPS. Format words in upper and lower case (mixed case).
- Mention steps to reproduce the bug compulsorily.
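Put together, a report following these guidelines might look like the following (the field names and the sample defect are hypothetical):

```python
# A hypothetical bug report structured per the guidelines above:
# result stated first, present tense, mixed case, steps to reproduce included.
bug_report = {
    "summary": "Confirmation pop-up does not appear after clicking Save.",
    "steps_to_reproduce": [
        "1. Open the customer form.",
        "2. Edit any field.",
        "3. Click Save.",
    ],
    "expected": "A confirmation pop-up appears.",
    "actual": "The form closes silently; no pop-up appears.",
}
```

The summary states the result (“pop-up does not appear”) followed by the action (“after clicking Save”).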
Bug life cycle
A bug can be defined as abnormal behavior of the software. No software exists without bugs. The elimination of bugs from the software depends upon the efficiency of the testing done on it. A bug is a specific concern about the quality of the application under test (AUT).
Bug Life Cycle:
In software development process, the bug has a life cycle. The bug should go through the life cycle to be closed. A specific life cycle ensures that the process is standardized. The bug attains different states in the life cycle.
The different states of a bug can be summarized as follows:
1. New
2. Open
3. Assign
4. Test
5. Verified
6. Deferred
7. Reopened
8. Duplicate
9. Rejected and
10. Closed
Description of Various Stages:
1. New: When the bug is posted for the first time, its state will be “NEW”. This means that the bug is not yet approved.
2. Open: After a tester has posted a bug, the tester's lead approves that the bug is genuine and changes the state to “OPEN”.
3. Assign: Once the lead changes the state to “OPEN”, he assigns the bug to a developer or developer team. The state of the bug is now “ASSIGN”.
4. Test: Once the developer fixes the bug, he assigns it back to the testing team for the next round of testing. Before releasing the software with the bug fixed, he changes the state of the bug to “TEST”, indicating that the bug has been fixed and released to the testing team.
5. Deferred: A bug changed to the deferred state is expected to be fixed in a later release. Many factors can lead to this state: the bug's priority may be low, there may be a lack of time before the release, or the bug may not have a major effect on the software.
6. Rejected: If the developer feels that the bug is not genuine, he rejects the bug. Then the state of the bug is changed to “REJECTED”.
7. Duplicate: If the bug is reported twice, or two bugs describe the same problem, then one bug's status is changed to “DUPLICATE”.
8. Verified: Once the bug is fixed and the status is changed to “TEST”, the tester tests the bug. If the bug is not present in the software, he approves that the bug is fixed and changes the status to “VERIFIED”.
9. Reopened: If the bug still exists even after the bug is fixed by the developer, the tester changes the status to “REOPENED”. The bug traverses the life cycle once again.
10. Closed: Once the bug is fixed, it is tested by the tester. If the tester feels that the bug no longer exists in the software, he changes the status of the bug to “CLOSED”. This state means that the bug is fixed, tested and approved.
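The life cycle above can be sketched as a transition table; a minimal sketch in Python, where the exact set of legal transitions is an assumption (teams and tools differ):

```python
# Allowed transitions between bug states, per the life cycle described above.
# The exact set of legal transitions is team/tool specific; this is a sketch.
TRANSITIONS = {
    "NEW":       {"OPEN", "REJECTED", "DUPLICATE", "DEFERRED"},
    "OPEN":      {"ASSIGN"},
    "ASSIGN":    {"TEST", "DEFERRED"},
    "TEST":      {"VERIFIED", "REOPENED"},
    "VERIFIED":  {"CLOSED"},
    "REOPENED":  {"ASSIGN"},
    "DEFERRED":  {"ASSIGN"},
    "DUPLICATE": set(),
    "REJECTED":  set(),
    "CLOSED":    set(),
}

def move(state: str, new_state: str) -> str:
    """Return the new state, or raise if the transition is not allowed."""
    if new_state not in TRANSITIONS[state]:
        raise ValueError(f"illegal transition {state} -> {new_state}")
    return new_state

# Walk a bug through a typical fix cycle, including one reopen.
state = "NEW"
for nxt in ("OPEN", "ASSIGN", "TEST", "REOPENED",
            "ASSIGN", "TEST", "VERIFIED", "CLOSED"):
    state = move(state, nxt)
```

Encoding the table this way makes the “standardized process” claim checkable: any tool enforcing it would reject an out-of-order state change.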
While defect prevention is much more effective and efficient in reducing the number of defects, most organizations conduct defect discovery and removal. Discovering and removing defects is an expensive and inefficient process; it is much more efficient for an organization to conduct activities that prevent defects.
Tuesday, March 25, 2008
Common mistakes of Software developers
Introduction:
Most software developers are not even aware that their favorite methods are problematic. Quite often experts are self-taught, so they tend to keep the same bad habits they had when they first began, usually because they never witnessed better ways of building their embedded systems. These experts then train novices, who subsequently acquire the same bad habits. The purpose of this presentation is to raise awareness of common problems, and to provide a start towards eliminating mistakes and thus creating software that is more reliable and easier to maintain.
It is easy to spend a million dollars testing a program. Common estimates of the cost of finding and fixing errors in a program range from 40% to 80% of total development cost. Companies don't spend this kind of money to “verify that a program works”. They spend it because the program doesn't work: it has bugs, and they want them found. No matter what development methodology they follow, their programs still end up with bugs. Beizer's (1990) review estimates the average number of errors in programs released to testing at 1 to 3 bugs per 100 executable statements. There are big differences between programmers, but no one's work is error-free.
One error per 100 statements is an estimate of public bugs: the ones still left in a program after the programmer declares it error-free. Beizer (1984) reported his private bug rate, how many mistakes he made in designing and coding a program, as 1.5 errors per executable statement. This includes all mistakes, including typing errors.
“At this rate, if your programming language allows one executable statement per line, you make 150 errors while writing a 100-line program.”
Most programmers catch and fix more than 99% of their mistakes before releasing a program for testing. Having found so many, no wonder they think they must have found almost all of them. But they haven't. The tester's job is to find the remaining 1%.
Correcting just one of these mistakes within a project can lead to weeks or months of savings in manpower (especially during the maintenance phase of the software life cycle).
Some of the common problems are:
Large if-then-else and case statements:
It is not uncommon to see large if-else statements or case statements in embedded code. These are problematic from three perspectives.
1. They are extremely difficult to test, because the code ends up having many different paths. If the statements are nested, it becomes even more complicated.
2. The difference between best-case and worst-case execution time becomes significant. This leads either to under-utilizing the CPU, or to the possibility of timing errors when the longest path is taken.
3. The difficulty of structural code coverage testing grows exponentially with the number of branches, so branches should be minimized.
This example often confuses new testers who lack programming experience. Developers think their code is always correct and, as mentioned earlier, 99% of errors are corrected by the developers themselves; the remaining 1% are found by testers. Consider the following month-validation check:
if (0 < x && x <= 12) {
    System.out.println("Month is " + x);
} else {
    System.out.println("Invalid input");
}
Consider how this code could fail. Here are some simple, very common programming mistakes:
a) Suppose the programmer wrote less than or equal instead of less than at the lower boundary. The program would then misclassify the boundary value 0. The only way to catch the error is by testing with 0.
b) Similarly, if the code is written with less than 12 instead of less than or equal to 12, the program rejects the valid month 12.
“Testing with just the four boundary characters /, 0, 9, and : will reveal every classification error that the programmer could make by getting an inequality wrong or by mistyping.”
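The boundary lesson above is easy to automate; a minimal sketch in Python, where `is_valid_month` is an assumed implementation of the corrected condition 0 < x <= 12:

```python
def is_valid_month(x: int) -> bool:
    """Month check under test: valid months are 1..12 (i.e. 0 < x <= 12)."""
    return 0 < x <= 12

# Boundary value analysis: test each side of each boundary.
# These four values catch both inequality mistakes described above.
cases = {0: False, 1: True, 12: True, 13: False}
for value, expected in cases.items():
    assert is_valid_month(value) == expected, f"failed at boundary {value}"
```

If the implementation had used `<=` at the lower boundary or `<` at the upper one, exactly one of these four cases would fail.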
Error Handling:
Errors in dealing with errors are common. Error handling errors include failure to anticipate the possibility of errors and protect against them, failure to notice error conditions, and failure to deal with a detected error in a reasonable way. Many programmers correctly detect errors but then branch into untested error recovery routines. These routines’ bugs can cause more damage than the original problem.
Sometimes error dialogs appear while executing tests, and in the worst case on Microsoft Windows the text of the error message cannot be copied. There are tools for copying the text of such error messages, and screenshots can also be taken.
Monday, May 7, 2007
Testing faq page 5
Ad hoc testing: This kind of testing doesn't have any process, test cases, or test scenarios defined or preplanned. It involves simultaneous test design and test execution.
Monkey testing: Monkey testing is testing that runs with no specific test in mind. The monkey in this case is the producer of any input data (whether file data or input device data). Keep pressing keys randomly and check whether the software fails or not.
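In code, a monkey test simply feeds random input and checks that the software does not crash; a minimal sketch in Python (the `parse_month` function under test is a made-up example):

```python
import random
import string

def parse_month(text: str) -> int:
    """Hypothetical function under test: parse a month number or return -1."""
    try:
        value = int(text)
    except ValueError:
        return -1
    return value if 0 < value <= 12 else -1

# Monkey test: hammer the function with random strings and only check
# that it never raises an unhandled exception (no specific test in mind).
random.seed(42)  # fixed seed so a failure is reproducible
for _ in range(1000):
    garbage = "".join(random.choice(string.printable) for _ in range(8))
    result = parse_month(garbage)  # must not crash
    assert isinstance(result, int)
```

A fixed random seed is worth the extra line: without it, a crash found by the monkey may be impossible to reproduce.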
Exploratory testing is simultaneous learning, test design and test execution. It is a type of ad hoc testing, but here the tester does not have much idea about the application; he explores the system in an attempt to learn the application and simultaneously test it.
What is Negative Testing?
Negative testing is testing the application for fail conditions: testing with improper inputs, for example entering special characters in a phone number field.
What is Testing Techniques?
Black Box and White Box are testing types and not testing techniques.
Testing techniques are as follows:-
The most popular black-box testing techniques are:
- Equivalence Partitioning
- Boundary Value Analysis
- Cause-Effect Graphing
- Error Guessing
The white-box testing techniques are:
- Statement coverage
- Decision coverage
- Condition coverage
- Decision-condition coverage
- Multiple condition coverage
- Basis path testing
- Loop testing
- Data flow testing
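As a concrete instance of the first black-box technique, equivalence partitioning picks one representative value per input class; a minimal sketch in Python (the `shipping_fee` function and its weight bands are invented for illustration):

```python
def shipping_fee(weight_kg: float) -> float:
    """Hypothetical function under test: fee by weight band."""
    if weight_kg <= 0:
        raise ValueError("weight must be positive")
    if weight_kg <= 1:
        return 5.0
    if weight_kg <= 10:
        return 12.0
    return 30.0

# Equivalence partitioning: one representative value per partition is enough,
# because every value within a partition should behave the same way.
assert shipping_fee(0.5) == 5.0   # partition (0, 1]
assert shipping_fee(5) == 12.0    # partition (1, 10]
assert shipping_fee(50) == 30.0   # partition (10, infinity)
```

Boundary value analysis would then add the edge values of each partition (1, 10, and just above each) on top of these representatives.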
What is the difference between bug priority & bug severity?
Priority means how urgently the bug needs to be fixed; severity means how badly it harms the system. Priority tells you how important the bug is; severity tells you how bad the bug is. Severity is constant, whereas priority might change according to the schedule.
What is defect density?
Defect density = total number of defects / size of the project. Size can be measured in lines of code (KLOC), function points, feature points, use cases, etc.
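Numerically, for example (the defect and size figures are made up):

```python
def defect_density(defects: int, size_kloc: float) -> float:
    """Defects per KLOC; size could equally be function points or use cases."""
    return defects / size_kloc

# Made-up example: 45 defects found in a 15 KLOC project.
density = defect_density(45, 15.0)
assert density == 3.0  # 3 defects per KLOC
```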
What is the difference between testing and debugging?
Testing: Locating or Identifying Bugs
Debugging: Fixing the identified Bugs
What is CMM and CMMI?
CMM stands for Capability Maturity Model, developed by the Software Engineering Institute (SEI). Before we delve into it, let's understand what a software process is.
A Software Process can be defined as set of activities, methods, practices and transformations that people employ to develop and maintain software and the associated products.
The underlying premise of software process management is that the quality of a software product is largely determined by the quality of the process used to develop and maintain it.
Continuous process improvement is based on many small, evolutionary steps. CMM organizes these steps into 5 maturity levels. Each maturity level comprises a set of process goals that, when satisfied, stabilize an important component of the software process. Organizing the goals into different levels helps the organization prioritize its improvement actions. The five maturity levels are as follows.
1. Initial - The software process is characterized as ad hoc and occasionally even chaotic. Few processes are defined and success depends on individual effort and heroics.
2.Repeatable - Basic project management processes are established to track cost, schedule, and functionality. The necessary process discipline is in place to repeat earlier successes on projects with similar applications.
3.Defined - The software process for both management and engineering activities is documented, standardized, and integrated into a standard software process for the organization. All projects use an approved, tailored version of the organization's standard software process for developing and maintaining software.
4. Managed - Detailed measures of the software process and product quality are collected. Both the software process and products are quantitatively understood and controlled.
5. Optimizing - Continuous process improvement is enabled by quantitative feedback from the process and from piloting innovative ideas and technologies.
CMMI: In CMM (aka SW-CMM), the entire emphasis is on software practices. But software is becoming such a large factor in the systems being built today that it is virtually impossible to logically separate the two disciplines. SEI redirected its effort toward the integration of system and software practices, and thus was born CMMI, which stands for Capability Maturity Model Integration. You can find more info at http://www.sei.cmu.edu/cmmi/
What is six sigma?
Six Sigma stands for six standard deviations from the mean. Initially defined as a metric for measuring defects and improving quality, it became a methodology to reduce defect levels to below 3.4 defects per million opportunities (DPMO).
Six Sigma incorporates the basic principles and techniques used in Business, Statistics, and Engineering. These three form the core elements of Six Sigma. Six Sigma improves the process performance, decreases variation and maintains consistent quality of the process output. This leads to defect reduction and improvement in profits, product quality and customer satisfaction.
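The 3.4 figure comes from a simple defects-per-million-opportunities (DPMO) calculation; a minimal sketch in Python (the example counts are made up):

```python
def dpmo(defects: int, units: int, opportunities_per_unit: int) -> float:
    """Defects Per Million Opportunities."""
    # Multiply first to keep the intermediate arithmetic exact for integers.
    return defects * 1_000_000 / (units * opportunities_per_unit)

# Made-up example: 17 defects across 5,000 units, 10 opportunities each.
rate = dpmo(17, 5_000, 10)
assert rate == 340.0  # far above the 3.4 DPMO Six Sigma target
```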
Six Sigma experts (Green Belts and Black Belts) evaluate a business process and determine ways to improve upon the existing process.