Wednesday, October 31, 2007

Difference between QA, QC and Testing

Most people and organizations are confused about the difference between quality assurance (QA), quality control (QC) and testing. They are closely related, but they are different concepts, and all three are necessary to effectively manage the risks of developing and maintaining software. They are defined below:

* Quality Assurance: A set of activities designed to ensure that the development and/or maintenance process is adequate to ensure a system will meet its objectives.
* QA is preventive: it aims to prevent issues and to correct the process so they do not recur.
* QA is the responsibility of the entire team.

* Quality Control: A set of activities designed to evaluate a developed work product.
* QC is detective: it detects issues and resolves them.
* QC is the responsibility of the individual team member (typically the tester).
* Testing: The process of executing a system with the intent of finding defects. (Note that the "process of executing a system" includes test planning prior to the execution of the test cases.)

QA activities ensure that the process is defined and appropriate. Methodology and standards development are examples of QA activities. A QA review would focus on the process elements of a project - e.g., are requirements being defined at the proper level of detail?

In contrast, QC activities focus on finding defects in specific deliverables - e.g., are the defined requirements the right requirements? Testing is one example of a QC activity, but there are others, such as inspections.

Controversy can arise around who should be responsible for QA and QC activities -- i.e., whether a group external to the project management structure should have responsibility for either QA or QC. The correct answer will vary depending on the situation.
Both QA and QC activities are generally required for successful software development.

* While line management should have the primary responsibility for implementing the appropriate QA, QC and testing activities on a project, an external QA function can provide valuable expertise and perspective.

Error Guessing

Error Guessing is not in itself a testing technique but rather a skill that can be applied to all of the other testing techniques to produce more effective tests (i.e., tests that find defects).

Error Guessing is the ability to find errors or defects in the application under test (AUT) by what appears to be intuition. In fact, testers who are effective at error guessing actually use a range of techniques, including:

* Knowledge about the AUT, such as the design method or implementation technology
* Knowledge of the results of any earlier testing phases (particularly important in Regression Testing)
* Experience of testing similar or related systems (and knowing where defects have arisen previously in those systems)
* Knowledge of typical implementation errors (such as division by zero errors)
* General testing rules of thumb or heuristics.

Error guessing is a skill that is well worth cultivating since it can make testing much more effective and efficient - two extremely important goals in the testing process. Typically, the skill of error guessing comes with experience with the technology and the project. Error guessing is the art of guessing where errors may be hidden. There are no specific tools and techniques for this, but you can write test cases depending on the situation: either while reading the functional documents, or while testing, when you find an error that your documented test cases do not cover.
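
As a small, hypothetical sketch of how a guessed error turns into a test (the class and method names below are invented for illustration), consider guessing that an averaging routine was never exercised with an empty input - a typical division-by-zero situation:

// Hypothetical error-guessing check: guess that average() mishandles an empty list.
public class AverageErrorGuessTest {

    // Unit under test (assumed): returns the arithmetic mean of the values.
    static double average(int[] values) {
        int sum = 0;
        for (int v : values) {
            sum += v;
        }
        return (double) sum / values.length;   // suspect line when values.length == 0
    }

    public static void main(String[] args) {
        // Guessed risky input: an empty array.
        double result = average(new int[0]);
        // Floating-point division by zero yields NaN rather than an exception,
        // so the guessed defect shows up as a nonsensical result.
        System.out.println(Double.isNaN(result)
                ? "Defect found: empty input is not handled"
                : "Empty input handled, result = " + result);
    }
}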

Monday, October 29, 2007

How to access different systems using Web Service?

Lots of approaches to distributed computing have been tried over the years, and many of these are still in use. But none has achieved the same degree of explosive growth and re-use as we have seen with HTML for web UI. The term "web services" encompasses applications that employ a specific combination of technologies to make themselves accessible to other systems running on remote machines. The most significant web services technologies address three questions:

* How do I find web services that I want to use?
* Once I find a service, how do I learn how it works?
* How do I format messages to a web service?

Finding a Web Service:

In the simplest case, you could learn about a web service in the normal course of communicating with your friends, co-workers and business partners. Universal Description, Discovery and Integration (UDDI) offers a more structured approach. UDDI is a standard for establishing and using registries of web services. A company could establish its own private registry of web services available internally, or to its partners and customers. There also are several public UDDI registries that anyone can search, and to which anyone can publish the availability of their own web services.

Understanding a Web Service:

Once you identify a web service that you'd like to use, you need to know how it works: What kinds of messages does it respond to? What does it expect each message to look like? What messages does it return, and how do you interpret the responses? The Web Services Description Language (WSDL) provides a way to declare what messages are expected and produced by a web service, with enough information about their contents to enable using the service successfully with little or no additional information. When you create a web service, you can create a description of the service using WSDL and distribute the description file (often called a WSDL file) to prospective users of the web service, either directly or by including a link to the WSDL file in a UDDI registry entry.

Communicating With a Web Service:

Now that you've obtained the WSDL description of a Web Service, you're ready to invoke it. Web Services communicate with one another via messages in a format known as XML. XML (Extensible Markup Language), like HTML, is a descendent of Standard Generalized Markup Language (SGML). HTML focuses on the way information is to be presented. XML, on the other hand, focuses on the structure of the information, without regard to presentation issues. That's one reason XML is well suited to exchanging information between automated systems. Web services exchange XML messages with one another, typically using either HTTP or SMTP (e-mail) to transport the messages. The Simple Object Access Protocol (SOAP) is a further specification of how to use XML to enable web services to communicate with one another. A SOAP message is just an XML message that follows a few additional rules, most of which deal with how the elements of the message are encoded, and how the message as a whole is addressed.
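
As a minimal sketch of that last point (the endpoint URL and the message body below are invented; a real service's WSDL defines both), a SOAP message is simply XML POSTed over HTTP, which can be done in Java with the standard HttpURLConnection class:

import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;

public class SoapCallSketch {
    public static void main(String[] args) throws Exception {
        // Hypothetical endpoint and request body.
        String endpoint = "http://example.com/services/StockQuote";
        String envelope =
              "<soap:Envelope xmlns:soap=\"http://schemas.xmlsoap.org/soap/envelope/\">"
            + "<soap:Body><GetQuote><symbol>IBM</symbol></GetQuote></soap:Body>"
            + "</soap:Envelope>";

        // A SOAP message is ordinary XML carried in an HTTP POST.
        HttpURLConnection conn = (HttpURLConnection) new URL(endpoint).openConnection();
        conn.setRequestMethod("POST");
        conn.setRequestProperty("Content-Type", "text/xml; charset=utf-8");
        conn.setDoOutput(true);
        try (OutputStream out = conn.getOutputStream()) {
            out.write(envelope.getBytes("UTF-8"));
        }
        System.out.println("HTTP response code: " + conn.getResponseCode());
    }
}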

Web Service

What is a Web Service?

The W3C defines a Web Service as a software system designed to support interoperable machine-to-machine interaction over a network. Web services are frequently just APIs that can be accessed over a network and executed on a remote system hosting the requested services.

Among the many ways devised to enable humans to use software running on distant computers, HTML transported over HTTP and presented via a web browser is surely the most successful yet. By using this relatively simple, accessible message format, applications can be used by people all over the world without installing custom client software on their computers.

Monday, October 22, 2007

Other factors for Automation

Deciding factors for Automation

1. What you automate depends on the tools you use. If the tools have limitations for certain scenarios, those tests remain manual.
2. Is the return on investment worth automating? Is what you get out of automation worth the cost of setting up and supporting the test cases, the automation framework, and the system that runs the test cases?

Criteria for automating

There are two sets of questions to determine whether automation is right for your test case:

Is this test scenario automate-able?

1. Yes, and it will cost a little.
2. Yes, but it will cost a lot.
3. No, it is not possible to automate.

How important is this test scenario?

1. I must absolutely test this scenario whenever possible.
2. I need to test this scenario regularly.
3. I only need to test this scenario once in a while.

If you answered #1 to both questions – definitely automate that test
If you answered #1 or #2 to both questions – you should automate that test
If you answered #2 to both questions – you need to consider if it is really worth the investment to automate

What happens if you can’t automate?
Let’s say that you have a test that you absolutely need to run whenever possible, but it isn’t possible to automate. Your options are

1. Re-evaluate – Do I really need to run this test this often?
2. What’s the cost of doing this test manually?
3. Look for new testing tools.
4. Consider test hooks.

Automation Testing (Pros and Cons)

Pros of Automation

1. If you have to run a set of tests repeatedly, automation is a huge win for you.
2. It gives you the ability to run automation against code that frequently changes to catch regressions in a timely manner.
3. It gives you the ability to run automation in mainstream scenarios to catch regressions in a timely manner.
4. Aids in testing a large test matrix (different languages on different OS platforms). Automated tests can be run at the same time on different machines, whereas the manual tests would have to be run sequentially.

Cons of Automation

1. It costs more to automate. Writing the test cases and writing or configuring the automation framework you’re using costs more initially than running the test manually.
2. You can’t automate visual verification; for example, if you can’t check the font color via code or the automation tool, it remains a manual test.

Manual Testing (Pros and Cons)

Pros of Manual

1. If the test case only runs twice per coding milestone, it most likely should be a manual test. That costs less than automating it.
2. It allows the tester to perform more ad-hoc (exploratory) testing. More bugs are typically found via ad-hoc testing than via automation, and the more time a tester spends playing with the feature, the greater the odds of finding real user bugs.

Cons of Manual

1. Running tests manually can be very time consuming.
2. Each time there is a new build, the tester must rerun all required tests - which after a while would become very mundane and tiresome.

Wednesday, October 17, 2007

Difference between Re-Test and Regression Testing

Re-Test
You have tested an application or a product by executing a test case. The end result deviates from the expected result and is notified as a defect. Developer fixes the defect. The tester executes the same test case that had originally identified the defect to ensure that the defect is rectified.

Regression Test
The developer fixes the defect. The tester re-tests the application to ensure that the defect is rectified. He also identifies a set of test cases whose test scenarios surround the defect, to ensure that the functionality of the application remains stable even after addressing the defect.

On the other hand, if a particular functionality has undergone minor or major modifications, the tester tests the application to identify defects in the changed functionality, followed by the execution of a certain set of test cases surrounding the same functionality to ensure that the application is stable.

Test Efficiency Vs Test Effectiveness

Test Efficiency:
a. Test efficiency is an internal measure: how many resources were consumed within the organization and how well those resources were utilized.
b. Number of test cases executed divided by unit of time (generally per hour).
c. Test efficiency = (Total number of defects found in unit + integration + system) / (Total number of defects found in unit + integration + system + User acceptance testing)
d. Test Efficiency: The amount of code and testing resources required by a program to perform a given function.
e. Testing Efficiency = (No. of defects Resolved / Total No. of Defects Submitted) * 100.
f. Test Efficiency is the rate of bugs found by the tester to the total bugs found.

When the build is sent to the customer-side people for testing (alpha and beta testing), the customer-side people also find some bugs.

Test Efficiency = A/(A+B)
Here A= Number of bugs found by the tester
B= Number of bugs found by the customer side people
This test efficiency should always be greater than 90%
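
For example, if the testers found 45 bugs in-house and the customer-side people found 5 more during alpha/beta testing, then Test Efficiency = 45 / (45 + 5) = 0.9, i.e. 90%, which just meets the guideline above.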

Test Effectiveness:
a. Test effectiveness = How much the customer's requirements are satisfied by the system, how well the customer specifications are achieved by the system, how much effort is put in developing the system.
b. Number of defects found divided by number of test cases executed.
c. Test effectiveness = (Total number of defects injected + Total number of defects found) / (Total number of defects escaped) * 100.
d. Test Effectiveness: It judges the effect of the test environment on the application.
e. Test Effectiveness = Loss due to problems / Total resources processed by the system.
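
As a small worked example of definition (b) above: if 30 defects were found while executing 600 test cases, test effectiveness = 30 / 600 = 0.05 defects per test case executed.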

Monday, October 15, 2007

Use of Drivers and Stubs

A driver module is used to simulate a calling module and call the program unit being tested by passing input arguments. The driver can be written in various ways, (e.g., prompt interactively for the input arguments or accept input arguments from a file).

To test the ability of the unit being tested to call another module, it may be necessary to use a different temporary module, called a stub module. More than one stub may be needed, depending on the number of programs the unit calls. The stub can be written in various ways, (e.g., return control immediately or return a message that indicates it was called and report the input parameters received).

Drivers and Stubs

It is always a good idea to develop and test software in "pieces". But it may seem impossible, because it is hard to imagine how you can test one "piece" if the other "pieces" that it uses have not yet been developed (and vice versa). To solve this kind of difficult problem, we use stubs and drivers.

In white-box testing, we must run the code with predetermined input and check to make sure that the code produces predetermined outputs. Often testers write stubs and drivers for white-box testing.

Driver for Testing:

A driver is a piece of code that passes test cases to another piece of code. A test harness or test driver is supporting code and data used to provide an environment for testing part of a system in isolation. It can be described as a software module which is used to invoke a module under test, provide test inputs, control and monitor execution, and report test results - or, most simply, a line of code that calls a method and passes that method a value.

For example, if you wanted to move a fighter in the game, the driver code would be
moveFighter(fighter, locationX, locationY);

This driver code would likely be called from the main method. A white-box test case would execute this driver line of code and check "fighter.getPosition()" to make sure the player is now on the expected cell on the board.
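
A slightly fuller driver, sketched in Java around the same example (the Fighter class here is a minimal stand-in so the sketch is self-contained), supplies the test inputs and then checks the outcome:

// Hypothetical driver for the moveFighter() unit.
public class MoveFighterDriver {

    static class Fighter {
        int x, y;
        String getPosition() { return x + "," + y; }
    }

    // Unit under test (assumed behaviour: places the fighter on the given cell).
    static void moveFighter(Fighter fighter, int locationX, int locationY) {
        fighter.x = locationX;
        fighter.y = locationY;
    }

    public static void main(String[] args) {
        Fighter fighter = new Fighter();
        moveFighter(fighter, 3, 5);                              // driver passes the test inputs
        boolean pass = "3,5".equals(fighter.getPosition());      // driver checks the result
        System.out.println(pass ? "PASS" : "FAIL: fighter is at " + fighter.getPosition());
    }
}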

Stubs for Testing:

A Stub is a dummy procedure, module or unit that stands in for an unfinished portion of a system.

Four basic types of Stubs for Top-Down Testing are:

1. Display a trace message
2. Display parameter value(s)
3. Return a value from a table
4. Return table value selected by parameter

A stub is a computer program which is used as a substitute for the body of a software module that is or will be defined elsewhere, or a dummy component or object used to simulate the behavior of a real component until that component has been developed.

Ultimately, the dummy method would be completed with the proper program logic. However, developing the stub allows the programmer to call a method in the code being developed, even if the method does not yet have the desired behavior.
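
A small Java sketch of such a stub (the module name and values are invented) combines two of the stub types listed above - displaying a trace message and returning a value from a table - so that callers can be tested before the real module exists:

// Hypothetical stub standing in for an unfinished price-lookup module.
public class PriceServiceStub {

    public double getPrice(String itemCode) {
        // Display a trace message and the parameter value received.
        System.out.println("PriceServiceStub.getPrice called with: " + itemCode);

        // Return a value from a small hard-coded table, selected by the parameter.
        if ("BOOK".equals(itemCode)) return 12.50;
        if ("PEN".equals(itemCode))  return 1.20;
        return 0.0;   // default until the real module is written
    }
}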

Stubs and drivers are often viewed as throwaway code. However, they do not have to be thrown away: Stubs can be "filled in" to form the actual method. Drivers can become automated test cases.

Thursday, October 11, 2007

When should Testing stop?

"When to stop testing" is one of the most difficult questions to a test engineer.
The following are few of the common Test Stop criteria:
1. All the high priority bugs are fixed.
2. The rate at which bugs are found is too small.
3. The testing budget is exhausted.
4. The project duration is completed.
5. The risk in the project is under acceptable limit.

Practically, we feel that the decision to stop testing is based on the level of risk acceptable to the management. As testing is a never-ending process, we can never assume that 100% testing has been done; we can only minimize the risk of shipping the product to the client with X amount of testing done. The risk can be measured by risk analysis, but for a small-duration / low-budget / low-resource project, the risk can be deduced simply by:

• Measuring Test Coverage.
• Number of test cycles.
• Number of high priority bugs.

When Testing should occur?

Ques: When Testing should occur?

Wrong Assumption
Testing is sometimes incorrectly thought of as an after-the-fact activity, performed after programming is done for a product. Instead, testing should be performed at every development stage of the product. Test data sets must be derived and their correctness and consistency should be monitored throughout the development process. If we divide the lifecycle of software development into “Requirements Analysis”, “Design”, “Programming/Construction” and “Operation and Maintenance”, then testing should accompany each of the above phases. If testing is isolated as a single phase late in the cycle, errors in the problem statement or design may incur exorbitant costs. Not only must the original error be corrected, but the entire structure built upon it must also be changed. Therefore, testing should not be isolated as an inspection activity. Rather, testing should be involved throughout the SDLC in order to bring out a quality product.

Ans: Testing Activities in Each Phase

The following testing activities should be performed during the phases
Requirements Analysis - (1) Determine correctness (2) Generate functional test data.
Design - (1) Determine correctness and consistency (2) Generate structural and functional test data.
Programming/Construction - (1) Determine correctness and consistency (2) Generate structural and functional test data (3) Apply test data (4) Refine test data.
Operation and Maintenance - (1) Retest.

Each phase detail:

Requirements Analysis

The following test activities should be performed during this stage.
• Invest in analysis at the beginning of the project - Having a clear, concise and formal statement of the requirements facilitates programming, communication, error analysis and test data generation.

The requirements statement should record the following information and decisions:
1. Program function - what the program must do.
2. The form, format, data types and units for input.
3. The form, format, data types and units for output.
4. How exceptions, errors and deviations are to be handled.
5. For scientific computations, the numerical method or at least the required accuracy of the solution.
6. The hardware/software environment required or assumed (e.g. the machine, the operating system, and the implementation language).

Deciding the above issues is one of the activities related to testing that should be performed during this stage.
• Start developing the test set at the requirements analysis phase - Data should be generated that can be used to determine whether the requirements have been met. To do this, the input domain should be partitioned into classes of values that the program will treat in a similar manner, and for each class a representative element should be included in the test data. In addition, the following should also be included in the test set: (1) boundary values (2) any non-extreme input values that would require special handling (see the worked example after this list).
The output domain should be treated similarly.
Invalid input requires the same analysis as valid input.

• The correctness, consistency and completeness of the requirements should also be analyzed - Consider whether the correct problem is being solved, check for conflicts and inconsistencies among the requirements and consider the possibility of missing cases.
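
As a small, made-up illustration of this partitioning: if a requirement states that an age field must accept whole numbers from 18 to 60, the valid class is 18-60 and the invalid classes are values below 18, values above 60, and non-numeric input; the test set would then include a representative value such as 35, the boundary values 17, 18, 60 and 61, and at least one non-numeric value as invalid input.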

Design

The design document aids in programming, communication, and error analysis and test data generation. The requirements statement and the design document should together give the problem and the organization of the solution i.e. what the program will do and how it will be done.

The design document should contain:
• Principal data structures.
• Functions, algorithms, heuristics or special techniques used for processing.
• The program organization, how it will be modularized and categorized into external and internal interfaces.
• Any additional information.

Here the testing activities should consist of:
• Analysis of design to check its completeness and consistency - the total process should be analyzed to determine that no steps or special cases have been overlooked. Internal interfaces, I/O handling and data structures should especially be checked for inconsistencies.

• Analysis of design to check whether it satisfies the requirements - check whether both requirements and design document contain the same form, format, units used for input and output and also that all functions listed in the requirement document have been included in the design document. Selected test data which is generated during the requirements analysis phase should be manually simulated to determine whether the design will yield the expected values.

• Generation of test data based on the design - The tests generated should cover the structure as well as the internal functions of the design like the data structures, algorithm, functions, heuristics and general program structure etc. Standard extreme and special values should be included and expected output should be recorded in the test data.

• Reexamination and refinement of the test data set generated at the requirements analysis phase.

The first two steps should also be performed by a colleague, not only by the designer/developer.

Programming/Construction

Here the main testing points are:

• Check the code for consistency with design - the areas to check include modular structure, module interfaces, data structures, functions, algorithms and I/O handling.

• Perform the Testing process in an organized and systematic manner with test runs dated, annotated and saved. A plan or schedule can be used as a checklist to help the programmer organize testing efforts. If errors are found and changes made to the program, all tests involving the erroneous segment (including those which resulted in success previously) must be rerun and recorded.

• Ask a colleague for assistance - Some independent party, other than the programmer of the specific part of the code, should analyze the development product at each phase. The programmer should explain the product to the party, who will then question the logic and search for errors with a checklist to guide the search. This is needed to locate errors the programmer has overlooked.

• Use available tools - the programmer should be familiar with various compilers and interpreters available on the system for the implementation language being used because they differ in their error analysis and code generation capabilities.

• Apply Stress to the Program - Testing should exercise and stress the program structure, the data structures, the internal functions and the externally visible functions or functionality. Both valid and invalid data should be included in the test set.

• Test one at a time - Pieces of code, individual modules and small collections of modules should be exercised separately before they are integrated into the total program, one by one. Errors are easier to isolate when the number of potential interactions is kept small. Instrumentation – the insertion of some code into the program solely to measure various program characteristics – can be useful here. A tester should perform array bound checks, check loop control variables, determine whether key data values are within permissible ranges, trace program execution, and count the number of times a group of statements is executed.

• Measure testing coverage/When should testing stop? - If errors are still found every time the program is executed, testing should continue. Because errors tend to cluster, modules appearing particularly error-prone require special scrutiny.
The metrics used to measure testing thoroughness include statement testing (whether each statement in the program has been executed at least once), branch testing (whether each exit from each branch has been executed at least once) and path testing (whether all logical paths, which may involve repeated execution of various segments, have been executed at least once). Statement testing is the coverage metric most frequently used as it is relatively simple to implement.
The amount of testing depends on the cost of an error. Critical programs or functions require more thorough testing than the less significant functions.
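
As a small, invented illustration of the difference between these coverage metrics, consider a method with a single decision:

// Hypothetical unit used to illustrate the coverage metrics above.
public class CoverageExample {

    static int applyDiscount(int price, boolean isMember) {
        if (isMember) {
            price = price - 10;   // executed only when isMember is true
        }
        return price;
    }

    public static void main(String[] args) {
        // This single test achieves 100% statement coverage: every statement runs.
        System.out.println(applyDiscount(100, true));    // prints 90

        // Branch coverage also requires exercising the "false" exit of the decision:
        System.out.println(applyDiscount(100, false));   // prints 100

        // With only one decision, path coverage here coincides with branch coverage.
    }
}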

Operations and maintenance

Corrections, modifications and extensions are bound to occur even for small programs and testing is required every time there is a change. Testing during maintenance is termed regression testing. The test set, the test plan, and the test results for the original program should exist. Modifications must be made to accommodate the program changes, and then all portions of the program affected by the modifications must be re-tested. After regression testing is complete, the program and test documentation must be updated to reflect the changes.

Wednesday, October 10, 2007

Software Testing 10 Rules

1. Test early and test often.

2. Integrate the application development and testing life cycles. You'll get better results and you won't have to mediate between two armed camps in your IT shop.

3. Formalize a testing methodology; you'll test everything the same way and you'll get uniform results.

4. Develop a comprehensive test plan; it forms the basis for the testing methodology.

5. Use both static and dynamic testing.

6. Define your expected results.

7. Understand the business reason behind the application. You'll write a better application and better testing scripts.

8. Use multiple levels and types of testing (regression, systems, integration, stress and load).

9. Review and inspect the work; it will lower costs.

10. Don't let your programmers check their own work; they'll miss their own errors.

Regression Testing

Ques: What is the objective of Regression testing?
Ans: The objective of Regression testing is to test that the fixes have not created any other problems elsewhere. In other words, the objective is to ensure the software has remained intact. A baseline set of data and scripts is maintained and executed to verify that changes introduced during the release have not "undone" any previous code. Expected results from the baseline are compared to results of the software under test. All discrepancies are highlighted and accounted for before testing proceeds to the next level.

Ques: Is the Regression testing performed manually?
Ans: It depends on the initial testing approach. If the initial testing approach is manual testing, then, usually the regression testing is performed manually. Conversely, if the initial testing approach is automated testing, then, usually the regression testing is performed by automated testing.

Ques: What do you do during Regression testing?
Ans:
o Rerunning of previously conducted tests.
o Reviewing previously prepared manual procedures.
o Comparing the current test results with the previously executed test results.

Ques: What are the tools available for Regression testing?
Ans: Although the process is simple (i.e., the test cases that have been prepared can be reused and the expected results are already known), if the process is not automated it can be a very time-consuming and tedious operation.

Some of the tools available for regression testing are:
Record and Playback tools – Here the previously executed scripts can be rerun to verify whether the same set of results are obtained. E.g. Rational Robot.

Ques: What are the end goals of Regression testing?
Ans:
o To ensure that the unchanged system segments function properly
o To ensure that the previously prepared manual procedures remain correct after the changes have been made to the application system
o To verify that the data dictionary of data elements that have been changed is correct.

Regression testing as the name suggests is used to test / check the effect of changes made in the code.

Most of the time the testing team is asked to check last-minute changes in the code just before making a release to the client; in this situation the testing team needs to check only the affected areas.

So, in short, for regression testing the testing team should get input from the development team about the nature / amount of change in the fix, so that the testing team can first check the fix and then the affected areas.

Tuesday, October 9, 2007

Defect Profile

1. Defect - Nonconformance to requirements or the functional / program specification.
2. Bug - A fault in a program, which causes the program to perform in an unintended or unanticipated manner.
3. The Bug Report comes into the picture once the actual testing starts.
4. If a test case's actual and expected results mismatch, we report a bug against that test case.
5. Each bug has a life cycle. When a tester first identifies a bug, he gives that bug the status 'New'.
6. Once the developer team lead goes through the Bug Report, he assigns each bug to the concerned developer and changes the bug status to 'Assigned'. The developer changes the status to 'Open' while working on the fix and to 'Fixed' once it is fixed. In the next cycle we have to check all the fixed bugs: if a bug is really fixed, the concerned tester changes its status to 'Closed'; otherwise he changes it to 'Reviewed-not-ok'. Finally, 'Deferred' is used for bugs that are going to be fixed in the next iteration.

See the following sample template used for Bug Reporting.
7. The name of the Bug Report file also follows a naming convention, such as:
<Project Name> Bug Report <Version Number> <Release Date>
8. The placeholders should be replaced with the actual project name, version number and release date.
For example: Bugzilla Bug Report 1.2.0.3 01_12_04
9. After seeing the name of the file, anybody can easily recognize that this is the Bug Report of such-and-such project and such-and-such version, released on a particular date.
10. It reduces the complexity of opening a file just to find out which project it belongs to.
11.It maintains the details of Project ID, Project Name, Release Version Number and Date on the top of the Sheet.
12. For each bug it maintains:
a) Bug ID
b) Test Case ID
c) Module Name
d) Bug Description
e) Reproducible (Y/N)
f) Steps to Reproduce
g) Summary
h) Bug Status
i) Severity
j) Priority
k) Tester Name
l) Date of Finding
m) Developer Name
n) Date of Fixing

Bug ID: The Bug ID column represents the unique bug number for each bug. Each organization follows its own standard to define the format of the Bug ID.

Test Case ID: This column gives the reference to the Test Case Document, i.e., against which test case the bug was reported. With this reference we can navigate very easily in the Test Case Document for more details.

Module Name: Module Name refers to the module in which the bug was raised. Based on this information we can estimate how many bugs there are in each module.

Bug Description: It gives the summary of the bug - what the bug is, and what actually happened instead of the expected result.

Reproducible: This column is very important for developers; based on it they know whether the bug can be reproduced or not. If it is reproducible, it is much easier for the developer team to debug it; otherwise they will have to try to find it out themselves. It is simply Yes or No.

Steps to Reproduce: This column specifies the complete steps to reproduce the bug. We can think of it as the navigation. It is very useful both for testers and developers to reproduce the bug and to debug it. Steps to reproduce are specified only if the Reproducible column is Yes; otherwise this column is null.

Summary: This column gives the detailed description of the bug.

Bug Status: This column is very important in Bug Report, it is used to track the bug in each level.

1. New: It is given by the tester when he finds the bug.
2. Assigned: It is given by the developer team lead after assigning the bug to the concerned developer.
3. Open: It is given by the developer while he is fixing the bug.
4. Fixed: It is given by the developer after he has fixed the bug.
5. Closed: It is given by the tester if the bug is fixed in the new build.
6. Reviewed-not-ok: It is given by the tester if the bug is not fixed in the new build.
7. Deferred: The bug is going to be fixed in the next iteration.
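
In a bug tracking tool, these statuses typically end up as a simple enumeration; a hypothetical Java sketch:

// Hypothetical representation of the bug life cycle statuses described above.
public enum BugStatus {
    NEW,                // reported by the tester
    ASSIGNED,           // developer team lead has assigned it to a developer
    OPEN,               // developer is working on the fix
    FIXED,              // developer believes the defect is resolved
    CLOSED,             // tester verified the fix in the new build
    REVIEWED_NOT_OK,    // tester found the defect is not actually fixed
    DEFERRED            // fix postponed to the next iteration
}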

Severity: This column tells the effect of the bug on the application. Usually it is given by the testers. For severity, too, various organizations follow different conventions. Here is a sample classification of severity based on its effect.

Severity is specified by the tester according to how much the bug affects the application.
Very High: The tester gives this status when he is not able to continue testing at all, e.g., the application does not open.
High: The tester gives this status if he is not able to test this module but can test some other module.
Medium: The tester gives this status if he is not able to progress further within the current module.
Low: Cosmetic issues, such as a spelling mistake or a look-and-feel problem.

Priority: This column is filled in by the Test Lead. He considers the severity of the bug, the time schedule and the risks associated with the project, especially for that bug. Based on all these aspects, he assigns a priority of Very High, High, Medium or Low.

Tester Name: This column is for the name of the tester who identified that particular bug. Using this column, developers can easily communicate with that particular tester if there is any confusion in understanding the bug description.

Date of Finding: This column contains the date when the tester reported the bug, so that we can report how many bugs were reported on a particular day.

Developer Name: This column contains the name of the developer who fixed that particular bug. This information is very useful when a particular bug is marked as fixed but is still there; testers can then communicate with the concerned developer to clear the doubt.

Date of Fixing: This column contains the date when the developer fixed the bug, so that we can report how many bugs were fixed on a particular day.

Monday, October 8, 2007

Bug Tracking System

Bug Tracking System is an easy-to-use Internet-based application designed to sustain quality assurance by expediting, streamlining and facilitating the management of Bugs, Problem Requests (PR), Change Requests (CR), Enhancement Requests (ER) or any other activity within your work environment.

Bug tracking is the single most important way to improve the quality of your software. Without keeping track of your bugs, there would be no way to maintain control of what each person on your team works on, fixes and finds problems with. It allows you to prioritize and make decisions that affect the quality of your software. It provides quality assurance personnel, developers, and their clients global defect management throughout the software development life cycle.

Thursday, October 4, 2007

Risk Analysis

A risk is a potential for loss or damage to an organization from materialized threats. Risk analysis attempts to identify all the risks and then quantify their severity. A threat, as we have seen, is a possible damaging event. If it occurs, it exploits vulnerability in the security of a computer-based system.
Risk Identification:

1. Software Risks: Knowledge of the most common risks associated with Software development, and the platform you are working on.

2. Business Risks: Most common risks associated with the business using the Software

3. Testing Risks: Knowledge of the most common risks associated with Software Testing for the platform you are working on, tools being used, and test methods being applied.

4. Premature Release Risk: Ability to determine the risk associated with releasing unsatisfactory or untested Software Products.

5. Risk Methods: Strategies and approaches for identifying risks or problems associated with implementing and operating information technology, products and process; assessing their likelihood, and initiating strategies to test those risks.

Traceability means that you would like to be able to trace back and forth how and where any work product fulfills the directions of the preceding (source) product. The matrix deals with the where, while the how you have to do yourself, once you know the where.

Take, for example, the requirement of user-friendliness (UF). Since UF is a complex concept, it is not solved by just one design solution and it is not solved by one line of code. Many partial design solutions may contribute to this requirement and many groups of lines of code may contribute to it.

A Requirements-Design Traceability Matrix puts on one side (e.g. left) the sub-requirements that together are supposed to solve the UF requirement, along with other (sub-)requirements. On the other side (e.g. top) you specify all design solutions. Now you can mark, at the cross-points of the matrix, which design solutions solve (more, or less) each requirement. If a design solution does not solve any requirement, it should be deleted, as it is of no value.

Having this matrix, you can check whether any requirement has at least one design solution and by checking the solution(s) you may see whether the requirement is sufficiently solved by this (or the set of) connected design(s).
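
As a small, made-up illustration, such a matrix might look like this, with an X marking each design solution that contributes to a requirement:

                          Design D1   Design D2   Design D3
UF-1 consistent menus         X           X
UF-2 online help                          X
UF-3 undo support                                     (gap: no design solution)

Here UF-3 has no design solution and is a gap to be closed, while Design D3 solves no requirement at all and could be deleted.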

If you have to change any requirement, you can see which designs are affected. And if you change any design, you can check which requirements may be affected and see what the impact is.

In a Design-Code Traceability Matrix you can do the same to keep trace of how and which code solves a particular design and how changes in design or code affect each other.

A traceability matrix:

• Demonstrates that the implemented system meets the user requirements.
• Serves as a single source for tracking purposes.
• Identifies gaps in the design and testing.
• Prevents delays in the project timeline, which can be brought about by having to backtrack to fill the gaps.