Sunday, December 9, 2007

Server Setup Scripts

Two cases must be tested:
One is to set up databases from scratch, and the other is to set up databases when they already exist.

The minimum list of areas to cover is as follows:
* Whether a setup batch job is available that runs without much operator assistance
(It is not acceptable if it requires an operator to run many batch jobs manually)
* The work environment the setup needs to run in (DOS, NT)
* Environment variables (e.g., is %svr% defined?)
* The time it takes to set up
* Set up databases from scratch
* Set up from existing databases
* Set up log and failure messages
* After setup, check for the following (a verification sketch follows this list):

  • Databases
  • Tables
  • Table attributes (keys, indexes, rules, defaults, column names and column types)
  • Triggers
  • Stored procedures
  • Lookup data
  • User access privileges
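
A minimal post-setup verification sketch, assuming SQL Server catalog views and a hypothetical database name MyAppDB (substitute the names from the actual specification):

-- does the database exist?
SELECT name FROM sys.databases WHERE name = 'MyAppDB';

-- are the expected tables, triggers and stored procedures installed?
USE MyAppDB;
SELECT name, type_desc
FROM sys.objects
WHERE type IN ('U', 'TR', 'P')   -- U = user table, TR = trigger, P = stored procedure
ORDER BY type_desc, name;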

Integration tests of Server

Integration tests should be performed after the 'Database schema testing', 'Stored procedure test' and 'Trigger test' component testing is done. They should call stored procedures intensively to select, update, insert and delete records in different tables and in different sequences. The main purpose is to surface conflicts and incompatibilities such as the following (a sketch of one such sequence appears after the list):

* Conflicts between schema and triggers
* Conflicts between stored procedures and schema
* Conflicts between stored procedures and triggers
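
A minimal sketch of one integration sequence, using hypothetical procedures (sp_InsertOrder, sp_UpdateOrder, sp_DeleteOrder); the point is to drive schema, triggers and procedures together and watch for unexpected errors:

DECLARE @rc int, @OrderId int;

EXEC @rc = sp_InsertOrder @CustomerId = 1, @Amount = 100.00, @OrderId = @OrderId OUTPUT;
IF @rc <> 0 PRINT 'Insert failed - possible conflict between procedure, schema or trigger';

EXEC @rc = sp_UpdateOrder @OrderId = @OrderId, @Amount = 250.00;
IF @rc <> 0 PRINT 'Update failed';

EXEC @rc = sp_DeleteOrder @OrderId = @OrderId;
IF @rc <> 0 PRINT 'Delete failed';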

Friday, December 7, 2007

Trigger Test

“EACH AND EVERY TRIGGER AND TRIGGER ERROR MUST BE TESTED AT LEAST ONCE!!!”

1.1 Updating triggers

Verify the following and compare them with the design specification (a test sketch follows this list):
* Make sure the trigger name spelling is correct
* See if a trigger is generated for a specific table column
* The trigger's update validation
* Update a record with valid data
* Update a record with invalid data that the trigger should prevent, and cover every trigger error
* Update a record while it is still referenced by a row in another table
* Make sure the transaction is rolled back when a failure occurs
* Find out any case in which a trigger is not supposed to roll back the transaction
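
A minimal test sketch, assuming a hypothetical Orders table whose update trigger rejects negative amounts:

-- valid update: should succeed without firing a trigger error
UPDATE Orders SET Amount = 150.00 WHERE OrderId = 1;

-- invalid update: the trigger is expected to raise an error and roll the change back
UPDATE Orders SET Amount = -1.00 WHERE OrderId = 1;

-- verify the valid value is still in place, i.e. the failed update did not stick
SELECT Amount FROM Orders WHERE OrderId = 1;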

1.2 Inserting triggers

Verify the following and compare them with the design specification (a test sketch follows this list):
* Make sure the trigger name spelling is correct
* See if a trigger is generated for a specific table column
* The trigger's insertion validation
* Insert a record with valid data
* Insert a record with invalid data that the trigger should prevent, and cover every trigger error
* Try to insert a record that already exists in the table
* Make sure the transaction is rolled back when an insertion failure occurs
* Find out any case in which a trigger should roll back the transaction
* Find out any failure in which a trigger should not roll back the transaction
* Conflicts between a trigger and a stored procedure/rule
(e.g., a column allows NULL while a trigger doesn't)
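
A minimal test sketch, assuming a hypothetical Orders table whose insert trigger rejects a NULL CustomerId and duplicate OrderId values:

-- valid insert: should succeed
INSERT INTO Orders (OrderId, CustomerId, Amount) VALUES (100, 1, 50.00);

-- duplicate insert: the trigger (or primary key) should raise an error
INSERT INTO Orders (OrderId, CustomerId, Amount) VALUES (100, 1, 50.00);

-- invalid insert: the trigger should reject the NULL CustomerId and roll it back
INSERT INTO Orders (OrderId, CustomerId, Amount) VALUES (101, NULL, 50.00);

-- only the first, valid row should remain
SELECT OrderId, CustomerId, Amount FROM Orders WHERE OrderId IN (100, 101);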

1.3 Deleting triggers

Verify the following and compare them with the design specification (a test sketch follows this list):
* Make sure the trigger name spelling is correct
* See if a trigger is generated for a specific table column
* The trigger's deletion validation
* Delete a record
* Delete a record while it is still referenced by a row in another table
* Cover every trigger error
* Try to delete a record that does not exist in the table
* Make sure the transaction is rolled back when a deletion fails
* Find out any case in which a trigger should roll back the transaction
* Find out any failure in which a trigger should not roll back the transaction
* Conflicts between a trigger and a stored procedure/rule
(e.g., a column allows NULL while a trigger doesn't)
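
A minimal test sketch, assuming hypothetical Customers and Orders tables where Orders references Customers:

-- delete a customer who still has orders: the trigger (or foreign key) should block it
DELETE FROM Customers WHERE CustomerId = 1;

-- delete a record that does not exist: zero rows affected, no error expected
DELETE FROM Customers WHERE CustomerId = 99999;

-- the referenced customer must still be present, i.e. the blocked delete was rolled back
SELECT CustomerId FROM Customers WHERE CustomerId = 1;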

Stored Procedure Test

“EACH AND EVERY STORED PROCEDURE MUST BE TESTED AT LEAST ONCE!!!”

1.1 Individual procedure tests

Verify the following and compare them with the design specification (a catalog-query sketch follows at the end of this subsection):
* Whether a stored procedure is installed in a database
* Stored procedure name
* Parameter names, parameter types and the number of parameters

Outputs:

* When the output is zero (zero rows affected)
* When some records are extracted
* When the output contains many records
* What a stored procedure is supposed to do
* What a stored procedure is not supposed to do
* Write simple queries to see if the stored procedure returns the right data

Parameters:
* Check whether each parameter is required
* Call stored procedures with valid data
* Call procedures with boundary data
* Make each parameter invalid one at a time and run the procedure

Return values:
* Whether a stored procedure returns values
* When a failure occurs, nonzero must be returned.

Error messages:
* Make the stored procedure fail and cause every error message to occur at least once
* Find out any exception that doesn't have a predefined error message

Others:
* Whether a stored procedure grants correct access privilege to a group/user
* See if a stored procedure hits any trigger error, index error, or rule error
* Look into the procedure code and make sure major branches are covered by tests.
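
A minimal catalog-query sketch for the checks above, assuming SQL Server and a procedure named sp_GetInventory (any procedure name from the specification would do):

-- is the procedure installed, and is the name spelled correctly?
SELECT name FROM sys.procedures WHERE name = 'sp_GetInventory';

-- parameter names, types and count
SELECT p.name, TYPE_NAME(p.user_type_id) AS data_type, p.max_length
FROM sys.parameters AS p
JOIN sys.procedures AS pr ON pr.object_id = p.object_id
WHERE pr.name = 'sp_GetInventory';

-- run it and check the return value (nonzero should indicate failure)
DECLARE @rc int;
EXEC @rc = sp_GetInventory @location = 'FL';
SELECT @rc AS return_code;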

1.2 Integration tests of procedures
* Group related stored procedures together and call them in a particular order
* If there are many sequences in which a group of procedures can be called, identify equivalence classes and run tests to cover every class
* Create invalid calling sequences and run a group of stored procedures
* Design several test sequences in which end users are likely to do business, and use them for stress tests

Database Schema Testing

“EACH AND EVERY ITEM IN SCHEMA MUST BE TESTED AT LEAST ONCE!!!”

1.1 Databases and devices

Verify the following and find any differences between the specification and the actual databases:

* Database names
* Data device, log device and dump device
* Enough space allocated for each database
* Database option settings (e.g., the trunc. option)

1.2 Tables, columns, column types, defaults, and rules

Verify the following and find any differences between the specification and the actual tables (a catalog-query sketch follows this list):

* All table names
* Column names for each table
* Column types for each table (int, tinyint, varchar, char, text, datetime; especially the declared length for char and varchar)
* Whether a column allows NULL or not
* Default definitions
* Whether a default is bound to correct table columns
* Rule definitions
* Whether a rule is bound to correct table columns
* Whether access privileges are granted to correct groups
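
A minimal catalog-query sketch for these checks, assuming SQL Server's INFORMATION_SCHEMA views:

-- column names, types, lengths, nullability and defaults for every user table
SELECT TABLE_NAME, COLUMN_NAME, DATA_TYPE,
       CHARACTER_MAXIMUM_LENGTH, IS_NULLABLE, COLUMN_DEFAULT
FROM INFORMATION_SCHEMA.COLUMNS
ORDER BY TABLE_NAME, ORDINAL_POSITION;

-- access privileges granted on tables, by grantee
SELECT GRANTEE, TABLE_NAME, PRIVILEGE_TYPE
FROM INFORMATION_SCHEMA.TABLE_PRIVILEGES
ORDER BY TABLE_NAME, GRANTEE;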

1.3 Keys and indexes

Verify the following and compare them with the design specification (a catalog-query sketch follows this list):

* Primary key for each table (every table should have a primary key)
* Foreign keys
* Matching column data types between a foreign key column and the referenced column in the other table
* Indices, clustered or nonclustered; unique or not unique
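
A minimal catalog-query sketch for these checks, assuming SQL Server 2005 or later catalog views:

-- primary keys and their tables (every table should appear here)
SELECT t.name AS table_name, kc.name AS primary_key
FROM sys.key_constraints AS kc
JOIN sys.tables AS t ON t.object_id = kc.parent_object_id
WHERE kc.type = 'PK';

-- foreign keys with the referencing and referenced tables
SELECT fk.name AS foreign_key,
       OBJECT_NAME(fk.parent_object_id)     AS from_table,
       OBJECT_NAME(fk.referenced_object_id) AS to_table
FROM sys.foreign_keys AS fk;

-- indexes: clustered or nonclustered, unique or not
SELECT OBJECT_NAME(i.object_id) AS table_name, i.name AS index_name, i.type_desc, i.is_unique
FROM sys.indexes AS i
WHERE OBJECTPROPERTY(i.object_id, 'IsUserTable') = 1;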

Thursday, December 6, 2007

Why is back end testing so important?

A back end is the engine of any client/server system. If the back end malfunctions, it may cause system deadlock, data corruption, data loss and bad performance. Many front ends log on to a single SQL server, so a bug in the back end can have a serious impact on the whole system. Too many bugs in a back end will cost tremendous resources to find and fix and will delay system development.

It is very likely that many tests in a front end only hit a small portion of a back end. Many bugs in a back end cannot be easily discovered without direct testing.

Back end testing has several advantages: the back end is no longer a "black box" to testers, and many bugs can be effectively found and fixed in the early development stage. Take Forecast LRS as an example; the number of bugs in the back end was more than 30% of the total number of bugs in the project. When back end bugs are fixed, the system quality increases dramatically.

Http and Https

Hypertext Transfer Protocol (http) is a system for transmitting and receiving information across the Internet. Http serves as a request and response procedure that all agents on the Internet follow so that information can be rapidly, easily, and accurately disseminated between servers, which hold information, and clients, who are trying to access it. Http is commonly used to access html pages, but other resources can be utilized as well through http. In many cases, clients may be exchanging confidential information with a server, which needs to be secured in order to prevent unauthorized access. For this reason, https, or secure http, was developed by Netscape corporation to allow authorization and secured transactions.
In many ways, https is identical to http, because it follows the same basic protocols. The http or https client, such as a Web browser, establishes a connection to a server on a standard port. When a server receives a request, it returns a status and a message, which may contain the requested information or indicate an error if part of the process malfunctioned. Both systems use the same Uniform Resource Identifier (URI) scheme, so that resources can be universally identified. Use of https in a URI scheme rather than http indicates that an encrypted connection is desired.

There are some primary differences between http and https, however, beginning with the default port, which is 80 for http and 443 for https. Https works by transmitting normal http interactions through an encrypted system, so that the information cannot be accessed by any party other than the client and end server. There are two common types of encryption layers: Transport Layer Security (TLS) and Secure Sockets Layer (SSL), both of which encode the data records being exchanged.

When using an https connection, the server responds to the initial connection by offering a list of encryption methods it supports. In response, the client selects a connection method, and the client and server exchange certificates to authenticate their identities. After this is done, both parties exchange the encrypted information after ensuring that both are using the same key, and the connection is closed. In order to host https connections, a server must have a public key certificate, which embeds key information with a verification of the key owner's identity. Most certificates are verified by a third party so that clients are assured that the key is secured.
Https is used in many situations, such as log-in pages for banking, forms, corporate logons, and other applications in which data needs to be secure. However, if not implemented properly, https is not infallible, and therefore it is extremely important for end users to be wary about accepting questionable certificates and cautious with their personal information while using the Internet.

Thursday, November 29, 2007

Delete Vs Truncate Statement

* Delete table is a logged operation, so the deletion of each row gets logged in the transaction log, which makes it slow.

* Truncate table also deletes all the rows in a table, but it doesn't log the deletion of each row; instead it logs the deallocation of the table's data pages, which makes it faster. Note that in SQL Server a truncate issued inside an explicit transaction can still be rolled back, because those page deallocations are logged.

* Truncate table is functionally identical to a delete statement with no WHERE clause: both remove all rows from the table. But truncate table is faster and uses fewer system and transaction log resources than delete.

* Truncate table removes all rows from a table, but the table structure and its columns, constraints, indexes etc., remains as it is.

* With truncate table, the counter used by an identity column for new rows is reset to the seed defined for the column (demonstrated in the sketch below).

* If you want to retain the identity counter, use delete statement instead.

* You cannot use truncate table on a table referenced by a foreign key constraint; instead, use a delete statement without a WHERE clause. Because truncate table does not log individual row deletions, it cannot fire a delete trigger.

* Truncate table may not be used on tables participating in an indexed view.
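
A minimal sketch, using a hypothetical throwaway table, that demonstrates the identity-reset difference:

CREATE TABLE DemoIdentity (Id int IDENTITY(1,1), Name varchar(20));
INSERT INTO DemoIdentity (Name) VALUES ('a');
INSERT INTO DemoIdentity (Name) VALUES ('b');

DELETE FROM DemoIdentity;                      -- logged row by row; identity counter NOT reset
INSERT INTO DemoIdentity (Name) VALUES ('c');  -- gets Id = 3

TRUNCATE TABLE DemoIdentity;                   -- only page deallocations logged; identity reset to seed
INSERT INTO DemoIdentity (Name) VALUES ('d');  -- gets Id = 1

SELECT Id, Name FROM DemoIdentity;
DROP TABLE DemoIdentity;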

Friday, November 23, 2007

Normal Forms

The database community has developed a series of guidelines for ensuring that databases are normalized. These are referred to as normal forms and are numbered from one (the lowest form of normalization, referred to as first normal form or 1NF) through five (fifth normal form or 5NF).

First Normal Form (1NF)
First normal form (1NF) sets the very basic rules for an organized database:
* It contains two-dimensional tables with rows and columns.
* Eliminate duplicative columns from the same table.
* Create separate tables for each group of related data and identify each row with a unique column or set of columns (the primary key).

Second Normal Form (2NF)
Second normal form (2NF) further addresses the concept of removing duplicative data:
* Meet all the requirements of the first normal form.
* Remove subsets of data that apply to multiple rows of a table and place them in separate tables.
* Create relationships between these new tables and their predecessors through the use of foreign keys.

Third Normal Form (3NF)
Third normal form (3NF) goes one large step further (a small example follows this list):
* Meet all the requirements of the second normal form.
* Remove columns that are not dependent upon the primary key.
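
As a small illustrative sketch (hypothetical tables), an Orders table that stores CustomerName and CustomerCity alongside CustomerId violates 3NF, because those columns depend on CustomerId rather than on the OrderId primary key. Normalizing splits the data into two tables:

-- before: customer attributes depend on CustomerId, not on the OrderId key
CREATE TABLE Orders_Unnormalized (
    OrderId      int PRIMARY KEY,
    CustomerId   int,
    CustomerName varchar(50),
    CustomerCity varchar(50),
    Amount       money
);

-- after (3NF): customer attributes live in their own table
CREATE TABLE Customers (
    CustomerId   int PRIMARY KEY,
    CustomerName varchar(50),
    CustomerCity varchar(50)
);

CREATE TABLE Orders (
    OrderId    int PRIMARY KEY,
    CustomerId int REFERENCES Customers (CustomerId),
    Amount     money
);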

Boyce-Codd Normal Form
A table is in Boyce-Codd normal form (BCNF) if and only if, for every one of its non-trivial functional dependencies X → Y, X is a superkey—that is, X is either a candidate key or a superset thereof.

Fourth Normal Form (4NF)
Finally, fourth normal form (4NF) has one additional requirement:
* Meet all the requirements of the third normal form.
* A relation is in 4NF if it has no multi-valued dependencies.
Remember, these normalization guidelines are cumulative. For a database to be in 2NF, it must first fulfill all the criteria of a 1NF database.

Fifth Normal Form (5NF)
The criteria for fifth normal form (5NF and also PJ/NF) are:
* The table must be in 4NF.
* There must be no non-trivial join dependencies that do not follow from the key constraints. A 4NF table is said to be in the 5NF if and only if every join dependency in it is implied by the candidate keys.

Domain/key Normal Form (DKNF)
Domain/key normal form (or DKNF) requires that a table not be subject to any constraints other than domain constraints and key constraints.

Sixth Normal Form (6NF)
A table is in sixth normal form (6NF) if and only if it satisfies no non-trivial join dependencies at all. This obviously means that the fifth normal form is also satisfied. The sixth normal form was only defined when extending the relational model to take into account the temporal dimension.

Thursday, November 22, 2007

Normalization

Normalization is a technique for designing relational database tables to minimize duplication of information and to safeguard the database against certain types of logical or structural problems, namely data anomalies.
Normalization is typically a refinement process after the initial exercise of identifying the data objects that should be in the database, identifying their relationships, and defining the tables required and the columns within each table.
For example, when multiple instances of a given piece of information occur in a table, the possibility exists that these instances will not be kept consistent when the data within the table is updated, leading to a loss of data integrity. A table that is sufficiently normalized is less vulnerable to problems of this kind, because its structure reflects the basic assumptions for when multiple instances of the same information should be represented by a single instance only.
Higher degrees of normalization typically involve more tables and create the need for a larger number of joins, which can reduce performance.

There are two goals of the normalization process:
a. Eliminating redundant data (for example, storing the same data in more than one table) and
b. Ensuring data dependencies make sense (only storing related data in a table).

Both of these are worthy goals as they reduce the amount of space a database consumes and ensure that data is logically stored.

Sunday, November 18, 2007

TRUNCATE TABLE advantages over DELETE statement

* Less transaction log space is used - The DELETE statement removes rows one at a time and records an entry in the transaction log for each deleted row. TRUNCATE TABLE removes the data by deallocating the data pages used to store the table data and records only the page deallocations in the transaction log.
* Fewer locks are typically used - When the DELETE statement is executed using a row lock, each row in the table is locked for deletion. TRUNCATE TABLE always locks the table and page but not each row.
* Without exception, zero pages are left in the table - After a DELETE statement is executed, the table can still contain empty pages. For example, empty pages in a heap cannot be deallocated without at least an exclusive (LCK_M_X) table lock. If the delete operation does not use a table lock, the table (heap) will contain many empty pages. For indexes, the delete operation can leave empty pages behind, although these pages will be deallocated quickly by a background cleanup process.
TRUNCATE TABLE removes all rows from a table, but the table structure and its columns, constraints, indexes, and so on remain. To remove the table definition in addition to its data, use the DROP TABLE statement.

SQL - Truncate and Delete

Truncate and Delete are both used to delete data from a table. Both commands delete only the data of the specified table without modifying or deleting the structure of the table. The two statements differ, however, in many aspects such as syntax, performance and resource usage.

TRUNCATE TABLE

Removes all rows from a table without logging the individual row deletions. TRUNCATE TABLE is functionally the same as the DELETE statement with no WHERE clause; however, TRUNCATE TABLE is faster and uses fewer system and transaction log resources.

Syntax:

TRUNCATE TABLE
[ { database_name.[ schema_name ]. | schema_name . } ]
table_name
[ ; ]

DELETE

The DELETE command in SQL also removes rows from a table, while logging each individual row deletion in the transaction log. A WHERE clause can be used with the DELETE statement to restrict which rows are removed.

Syntax:

DELETE FROM
[ { database_name.[ schema_name ]. | schema_name . } ]
table_name
[ WHERE search_condition ]
[ ; ]

table_name: the name of the table to truncate or from which all rows are removed.
In its simplest form, the query looks like this:

DELETE FROM authors

The above command will delete all data from the table authors.

With a delete statement you can limit the query using a WHERE clause, so that only the records that satisfy the WHERE condition are deleted rather than all records.

With a WHERE clause, it looks like the query below.

DELETE FROM authors Where AuthorId IN (1,2,3)

Wednesday, November 14, 2007

Benefits of Stored Procedures

* Pre-compiled execution - SQL Server compiles each stored procedure once and then reutilizes the execution plan. This results in tremendous performance boosts when stored procedures are called repeatedly.
* Reduced client/server traffic - If network bandwidth is a concern in your environment, you'll be happy to learn that stored procedures can reduce long SQL queries to a single line that is transmitted over the wire.
* Efficient reuse of code and programming abstraction - Stored procedures can be used by multiple users and client programs. If you utilize them in a planned manner, you'll find the development cycle takes less time.
* Enhanced security controls - You can grant users permission to execute a stored procedure independently of underlying table permissions.

Structure

Stored procedures are extremely similar to the constructs seen in other programming languages. They accept data in the form of input parameters that are specified at execution time. These input parameters (if implemented) are utilized in the execution of a series of statements that produce some result. This result is returned to the calling environment through the use of a recordset, output parameters and a return code. That may sound like a mouthful, but you'll find that stored procedures are actually quite simple.

Example:
Let's take a look at a practical example. Assume we have the table shown at the bottom of this page, named Inventory. This information is updated in real-time and warehouse managers are constantly checking the levels of products stored at their warehouse and available for shipment. In the past, each manager would run queries similar to the following:

SELECT Product, Quantity
FROM Inventory
WHERE Warehouse = 'FL'

This resulted in very inefficient performance at the SQL Server. Each time a warehouse manager executed the query, the database server was forced to recompile the query and execute it from scratch. It also required the warehouse manager to have knowledge of SQL and appropriate permissions to access the table information.

We can simplify this process through the use of a stored procedure. Let's create a procedure called sp_GetInventory that retrieves the inventory levels for a given warehouse. Here's the SQL code:

CREATE PROCEDURE sp_GetInventory
@location varchar(10)
AS
SELECT Product, Quantity
FROM Inventory
WHERE Warehouse = @location

The Florida warehouse manager can then access inventory levels by issuing the command
EXECUTE sp_GetInventory 'FL'
The New York warehouse manager can use the same stored procedure to access that area's inventory.
EXECUTE sp_GetInventory 'NY'

Granted, this is a simple example, but the benefits of abstraction can be seen here. The warehouse manager does not need to understand SQL or the inner workings of the procedure. From a performance perspective, the stored procedure will work wonders. SQL Server creates an execution plan once and then reutilizes it by plugging in the appropriate parameters at execution time.

Inventory Table

Stored Procedure

A Stored Procedure is a subroutine available to applications accessing a relational database system. Stored procedures (sometimes called a sproc or SP) are actually stored in the database.

A Stored Procedure is a group of SQL statements that form a logical unit and perform a particular task, and they are used to encapsulate a set of operations or queries to execute on a database server.

Typical uses for stored procedures include data validation (integrated into the database) or access control mechanisms. Furthermore, stored procedures accept input parameters so that a single procedure can be used over the network by several clients using different input data. Stored procedures reduce network traffic and improve performance. Additionally, stored procedures can be used to help ensure the integrity of the database.
Stored procedures are similar to user-defined functions (UDFs). The major difference is that UDFs can be used like any other expression within SQL statements, whereas stored procedures must be invoked using the CALL statement.

CALL procedure(…)

Stored procedures can return result sets, i.e. the results of a SELECT statement. Such result sets can be processed using cursors by other stored procedures by associating a result set locator, or by applications. Stored procedures may also contain declared variables for processing data and cursors that allow it to loop through multiple rows in a table. The standard Structured Query Language provides IF, WHILE, LOOP, REPEAT, and CASE statements, and more. Stored procedures can receive variables, return results or modify variables and return them, depending on how and where the variable is declared.
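
A minimal Transact-SQL sketch of these features, using hypothetical procedure and table names (in T-SQL the procedure is invoked with EXECUTE rather than CALL):

CREATE PROCEDURE usp_CountOrders
    @CustomerId int,
    @OrderCount int OUTPUT
AS
BEGIN
    IF @CustomerId IS NULL
        RETURN 1;            -- nonzero return code signals failure

    SELECT @OrderCount = COUNT(*)
    FROM Orders
    WHERE CustomerId = @CustomerId;

    RETURN 0;                -- success
END
GO

-- call it, reading both the output parameter and the return code
DECLARE @rc int, @count int;
EXEC @rc = usp_CountOrders @CustomerId = 1, @OrderCount = @count OUTPUT;
SELECT @rc AS return_code, @count AS order_count;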

How to test Stored Procedure?

Sunday, November 11, 2007

White Box Testing Techniques

* Basis Path Testing – Each independent path through the code is taken in a pre-determined order. This point is discussed further in another section.

* Flow Graph Notation - The flow graph depicts logical control flow using a diagrammatic notation. Each structured construct has a corresponding flow graph symbol.

* Cyclomatic Complexity - Cyclomatic complexity has a foundation in graph theory and provides an extremely useful software metric. Complexity is computed in one of three ways (a small worked example follows):
1. The number of regions of the flow graph corresponds to the cyclomatic complexity.
2. Cyclomatic complexity, V(G), for a flow graph G is defined as
V(G) = E - N + 2
where E is the number of flow graph edges and N is the number of flow graph nodes.
3. Cyclomatic complexity, V(G), for a flow graph G is also defined as
V(G) = P + 1
where P is the number of predicate nodes contained in the flow graph G.
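
As a small worked example with assumed numbers: a flow graph with 9 edges, 7 nodes and 3 predicate nodes gives V(G) = 9 - 7 + 2 = 4, which agrees with V(G) = P + 1 = 3 + 1 = 4, so four independent paths would need to be exercised.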

* Graph Matrices - The procedure for deriving the flow graph and even determining a set of basis paths is amenable to mechanization. To develop a software tool that assists in basis path testing, a data structure, called a graph matrix can be quite useful.
A Graph Matrix is a square matrix whose size is equal to the number of nodes on the flow graph. Each row and column corresponds to an identified node, and matrix entries correspond to connections between nodes.

* Control Structure Testing - Described below are some of the variations of Control Structure Testing.

Condition Testing:
Condition testing is a test case design method that exercises the logical conditions contained in a program module.

Data Flow Testing:
The data flow testing method selects test paths of a program according to the locations of definitions and uses of variables in the program.

* Loop Testing - Loop Testing is a white box testing technique that focuses exclusively on the validity of loop constructs. Four classes of loops can be defined: simple loops, concatenated loops, nested loops, and unstructured loops.

Simple Loops:
The following sets of tests can be applied to simple loops, where ‘n’ is the maximum number of allowable passes through the loop.
1. Skip the loop entirely.
2. Only one pass through the loop.
3. Two passes through the loop.
4. ‘m’ passes through the loop, where m < n.
5. n-1, n, and n+1 passes through the loop.

Nested Loops:
If we extend the test approach from simple loops to nested loops, the number of possible tests would grow geometrically as the level of nesting increases.
1. Start at the innermost loop. Set all other loops to minimum values.
2. Conduct simple loop tests for the innermost loop while holding the outer loops at their minimum iteration parameter values. Add other tests for out-of-range or excluded values.
3. Work outward, conducting tests for the next loop, but keep all other outer loops at minimum values and other nested loops to “typical” values.
4. Continue until all loops have been tested.

Concatenated Loops:
Concatenated loops can be tested using the approach defined for simple loops, if each of the loops is independent of the other. However, if two loops are concatenated and the loop counter for loop 1 is used as the initial value for loop 2, then the loops are not independent.

Unstructured Loops:
Whenever possible, this class of loops should be redesigned to reflect the use of the structured programming constructs

Black Box Testing Techniques

* Graph Based Testing Methods - Software testing begins by creating a graph of important objects and their relationships, and then devising a series of tests that will cover the graph so that each object and relationship is exercised and errors are uncovered.

* Error Guessing - Error guessing is a skill that is well worth cultivating since it can make testing much more effective and efficient - two extremely important goals in the testing process. Typically, the skill of Error Guessing comes with experience with the technology and the project. Error Guessing is the art of guessing where errors can be hidden. There are no specific tools and techniques for this, but you can write test cases depending on the situation: Either when reading the functional documents or when you are testing and find an error that you have not documented.

* Boundary Value Analysis - In boundary value analysis, test cases are generated using the extremes of the input domain, e.g. maximum, minimum, just inside/outside boundaries, typical values, and error values. BVA is similar to Equivalence Partitioning but focuses on "corner cases".

Advantages of Boundary Value Analysis
1. Robustness Testing - Boundary Value Analysis plus values that go beyond the limits
2. Min - 1, Min, Min +1, Nom, Max -1, Max, Max +1
3. Forces attention to exception handling
4. For strongly typed languages robust testing results in run-time errors that abort normal execution

Limitations of Boundary Value Analysis
BVA works best when the program is a function of several independent variables that represent bounded physical quantities
1. Independent Variables
o NextDate test cases derived from BVA would be inadequate: focusing on the boundary would not leave emphasis on February or leap years
o Dependencies exist with NextDate's Day, Month and Year
o Test cases derived without consideration of the function
2. Physical Quantities
o An example of physical variables being tested, telephone numbers - what faults might be revealed by numbers of 000-0000, 000-0001, 555-5555, 999-9998, 999-9999?

* Equivalence Partitioning - Equivalence partitioning is a black box testing method that divides the input domain of a program into classes of data from which test cases can be derived.
EP can be defined according to the following guidelines (a small example follows this list):
1. If an input condition specifies a range, one valid and two invalid equivalence classes are defined.
2. If an input condition requires a specific value, one valid and two invalid equivalence classes are defined.
3. If an input condition specifies a member of a set, one valid and one invalid equivalence class is defined.
4. If an input condition is Boolean, one valid and one invalid class is defined.
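
As a small illustrative example, suppose an input field accepts an integer age from 18 to 60. Equivalence partitioning gives one valid class (18-60) and two invalid classes (below 18 and above 60), while boundary value analysis would add test values such as 17, 18, 19, 59, 60 and 61.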

* Comparison Testing - There are situations where independent versions of software are developed for critical applications, even when only a single version will be used in the delivered computer based system. It is these independent versions which form the basis of a black box testing technique called Comparison testing or back-to-back testing.

* Orthogonal Array Testing - The Orthogonal Array Testing Strategy (OATS) is a systematic, statistical way of testing pair-wise interactions by deriving a suitable small set of test cases (from a large number of possibilities).

Sunday, November 4, 2007

Testing types and Testing techniques

Testing types

Testing types refer to different approaches towards testing a computer program, system or product. The two types of testing are Black box testing and White box testing, both of which are discussed in detail in this chapter. Another type, termed Gray box testing or Hybrid testing, is currently evolving and combines the features of the two.

Testing Techniques

Testing techniques refer to different methods of testing particular features of a computer program, system or product. Each testing type has its own testing techniques, while some techniques combine the features of both types.

Black box testing techniques:

* Graph Based Testing Methods
* Error Guessing
* Boundary Value analysis
* Equivalence partitioning
* Comparison Testing
* Orthogonal Array Testing

White box testing techniques:

* Basis Path Testing
* Flow Graph Notation
* Cyclomatic Complexity
* Graph Matrices
* Control Structure Testing
* Loop Testing

Difference between Testing Types and Testing Techniques?

Testing types deal with what aspect of the computer software would be tested, while testing techniques deal with how a specific part of the software would be tested.

That is, testing types mean whether we are testing the function or the structure of the software. In other words, we may test each function of the software to see if it is operational or we may test the internal components of the software to check if its internal workings are according to specification.

On the other hand, ‘testing technique’ means what methods or calculations would be applied to test a particular feature of the software (sometimes we test the interfaces, sometimes the segments, sometimes loops, etc.).

Wednesday, October 31, 2007

Difference between QA, QC and Testing

Most people and organizations are confused about the difference between quality assurance (QA), quality control (QC) and testing. They are closely related, but they are different concepts, and all three are necessary to effectively manage the risks of developing and maintaining software. They are defined below:

* Quality Assurance: A set of activities designed to ensure that the development and/or maintenance process is adequate to ensure a system will meet its objectives.
* QA aims to prevent issues and correct the process.
* QA is the responsibility of the entire team.

* Quality Control: A set of activities designed to evaluate a developed work product.
* QC aims to detect issues and resolve them.
* QC is the responsibility of a specific team member (typically the tester).
* Testing: The process of executing a system with the intent of finding defects. (Note that the "process of executing a system" includes test planning prior to the execution of the test cases.)

QA activities ensure that the process is defined and appropriate. Methodology and standards development are examples of QA activities. A QA review would focus on the process elements of a project - e.g., are requirements being defined at the proper level of detail. In contrast,

QC activities focus on finding defects in specific deliverables - e.g., are the defined requirements the right requirements. Testing is one example of a QC activity, but there are others such as inspections.

Controversy can arise around who should be responsible for QA and QC activities -- i.e., whether a group external to the project management structure should have responsibility for either QA or QC. The correct answer will vary depending on the situation.
Both QA and QC activities are generally required for successful software development.

* While line management should have the primary responsibility for implementing the appropriate QA, QC and testing activities on a project, an external QA function can provide valuable expertise and perspective.

Error Guessing

Error Guessing is not in itself a testing technique but rather a skill that can be applied to all of the other testing techniques to produce more effective tests (i.e., tests that find defects).

Error Guessing is the ability to find errors or defects in the AUT by what appears to be intuition. In fact, testers who are effective at error guessing actually use a range of techniques, including:

* Knowledge about the AUT, such as the design method or implementation technology
* Knowledge of the results of any earlier testing phases (particularly important in Regression Testing)
* Experience of testing similar or related systems (and knowing where defects have arisen previously in those systems)
* Knowledge of typical implementation errors (such as division by zero errors)
* General testing rules of thumb or heuristics.

Error guessing is a skill that is well worth cultivating since it can make testing much more effective and efficient - two extremely important goals in the testing process. Typically, the skill of Error Guessing comes with experience with the technology and the project. Error Guessing is the art of guessing where errors can be hidden. There are no specific tools and techniques for this, but you can write test cases depending on the situation: Either when reading the functional documents or when you are testing and find an error that you have not documented.

Monday, October 29, 2007

How to access different systems using Web Service?

Lots of approaches to distributed computing have been tried over the years, and many of these are still in use. But none has achieved the same degree of explosive growth and re-use as we have seen with HTML for web UI. The term "web services" encompasses applications that employ a specific combination of technologies to make themselves accessible to other systems running on remote machines. The most significant web services technologies address three questions:

* How do I find web services that I want to use?
* Once I find a service, how do I learn how it works?
* How do I format messages to a web service?

Finding a Web Service:

In the simplest case, you could learn about a web service in the normal course of communicating with your friends, co-workers and business partners. Universal Description, Discovery and Integration (UDDI) offers a more structured approach. UDDI is a standard for establishing and using registries of web services. A company could establish its own private registry of web services available internally, or to its partners and customers. There also are several public UDDI registries that anyone can search, and to which anyone can publish the availability of their own web services.

Understanding a Web Service:

Once you identify a web service that you'd like to use, you need to know how it works: What kinds of messages does it respond to? What does it expect each message to look like? What messages does it return, and how do you interpret the responses? The Web Services Description Language (WSDL) provides a way to declare what messages are expected and produced by a web service, with enough information about their contents to enable using the service successfully with little or no additional information. When you create a web service, you can create a description of the service using WSDL and distribute the description file (often called a WSDL file) to prospective users of the web service, either directly or by including a link to the WSDL file in a UDDI registry entry.

Communicating With a Web Service:

Now that you've obtained the WSDL description of a Web Service, you're ready to invoke it. Web Services communicate with one another via messages in a format known as XML. XML (Extensible Markup Language), like HTML, is a descendent of Standard Generalized Markup Language (SGML). HTML focuses on the way information is to be presented. XML, on the other hand, focuses on the structure of the information, without regard to presentation issues. That's one reason XML is well suited to exchanging information between automated systems. Web services exchange XML messages with one another, typically using either HTTP or SMTP (e-mail) to transport the messages. The Simple Object Access Protocol (SOAP) is a further specification of how to use XML to enable web services to communicate with one another. A SOAP message is just an XML message that follows a few additional rules, most of which deal with how the elements of the message are encoded, and how the message as a whole is addressed.

Web Service

What is a Web Service?

The W3C defines a Web Service as a software system designed to support interoperable machine-to-machine interaction over a network. Web services are frequently just APIs that can be accessed over a network and executed on a remote system hosting the requested services.

Among the many ways devised to enable humans to use software running on distant computers, HTML transported over HTTP and presented via a web browser is surely the most successful yet. By using this relatively simple, accessible message format, applications can be used by people all over the world without installing custom client software on their computers.

Monday, October 22, 2007

Other factors for Automation

Deciding factors for Automation

1. What you automate depends on the tools you use. If the tools have any limitations, those tests are manual.
2. Is the return on investment worth automating? Is what you get out of automation worth the cost of setting up and supporting the test cases, the automation framework, and the system that runs the test cases?

Criteria for automating

There are two sets of questions to determine whether automation is right for your test case:

Is this test scenario automate-able?

1. Yes, and it will cost a little.
2. Yes, but it will cost a lot.
3. No, it is not possible to automate.

How important is this test scenario?

1. I must absolutely test this scenario whenever possible.
2. I need to test this scenario regularly.
3. I only need to test this scenario once in a while.

If you answered #1 to both questions – definitely automate that test
If you answered #1 or #2 to both questions – you should automate that test
If you answered #2 to both questions – you need to consider if it is really worth the investment to automate

What happens if you can’t automate?
Let’s say that you have a test that you absolutely need to run whenever possible, but it isn’t possible to automate. Your options are:

1. Re-evaluate – Do I really need to run this test this often?
2. What’s the cost of doing this test manually?
3. Look for new testing tools.
4. Consider test hooks.

Automation Testing (Pros and Cons)

Pros of Automation

1. If you have to run a set of tests repeatedly, automation is a huge win for you.
2. It gives you the ability to run automation against code that frequently changes to catch regressions in a timely manner.
3. It gives you the ability to run automation in mainstream scenarios to catch regressions in a timely manner.
4. Aids in testing a large test matrix (different languages on different OS platforms). Automated tests can be run at the same time on different machines, whereas the manual tests would have to be run sequentially.

Cons of Automation

1. It costs more to automate. Writing the test cases and writing or configuring the automation framework you’re using costs more initially than running the test manually.
2. You can’t automate visual references; for example, if you can’t tell the font color via code or the automation tool, it is a manual test.

Manual Testing (Pros and Cons)

Pros of Manual

1. If the test case only runs twice per coding milestone, it most likely should be a manual test; it costs less than automating it.
2. It allows the tester to perform more ad-hoc (random) testing. More bugs are found via ad-hoc testing than via automation, and the more time a tester spends playing with the feature, the greater the odds of finding real user bugs.

Cons of Manual

1. Running tests manually can be very time consuming.
2. Each time there is a new build, the tester must rerun all required tests - which after a while would become very mundane and tiresome.

Wednesday, October 17, 2007

Difference between Re-Test and Regression Testing

Re-Test
You have tested an application or a product by executing a test case. The end result deviates from the expected result and is notified as a defect. Developer fixes the defect. The tester executes the same test case that had originally identified the defect to ensure that the defect is rectified.

Regression Test
Developer fixes the defect. The tester re-tests the application to ensure that the defect is rectified. He also identifies a set of test cases whose test scenarios surround the defect, to ensure that the functionality of the application remains stable even after addressing the defect.

On the other hand, if a particular functionality has undergone minor or major modifications, the tester tests the application to identify defects in the changed functionality, followed by the execution of a certain set of test cases surrounding the same functionality to ensure that the application is stable.

Test Efficiency Vs Test Effectiveness

Test Efficiency:
a. Test efficiency looks inside the organization: how many resources were consumed and how well those resources were utilized.
b. Number of test cases executed divided by unit of time (generally per hour).
c. Test efficiency = (Total number of defects found in unit + integration + system) / (Total number of defects found in unit + integration + system + User acceptance testing)
d. Test Efficiency: Test the amount of code and testing resources required by a program to perform a function.
e. Testing Efficiency = (No. of defects Resolved / Total No. of Defects Submitted) * 100.
f. Test Efficiency is the rate of bugs found by the tester to the total bugs found.

When the build is sent to the customer side people for the testing (Alpha and Beta Testing), the customer side people also find some bugs.

Test Efficiency = A/(A+B)
Here A= Number of bugs found by the tester
B= Number of bugs found by the customer side people
This test efficiency should always be greater than 90% (see the worked example below).
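
As a worked example with made-up numbers: if the testers find 47 bugs and the customer side later reports 3 more, test efficiency = 47 / (47 + 3) = 94%, which satisfies the 90% target.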

Test Effectiveness:
a. Test effectiveness = How much the customer's requirements are satisfied by the system, how well the customer specifications are achieved by the system, how much effort is put in developing the system.
b. Number of defects found divided by number of test cases executed.
c. Test effectiveness = (Total number of defects injected + Total number of defects found) / (Total number of defects escaped) * 100.
d. Test Effectiveness: It judges the effect of the test environment on the application.
e. Test Effectiveness = Loss due to problems / Total resources processed by the system.

Monday, October 15, 2007

Use of Drivers and Stubs

A driver module is used to simulate a calling module and call the program unit being tested by passing input arguments. The driver can be written in various ways, (e.g., prompt interactively for the input arguments or accept input arguments from a file).

To test the ability of the unit being tested to call another module, it may be necessary to use a different temporary module, called a stub module. More than one stub may be needed, depending on the number of programs the unit calls. The stub can be written in various ways, (e.g., return control immediately or return a message that indicates it was called and report the input parameters received).

Drivers and Stubs

It is always a good idea to develop and test software in "pieces". But it may seem impossible, because it is hard to imagine how you can test one "piece" if the other "pieces" that it uses have not yet been developed (and vice versa). To solve this kind of difficult problem, we use stubs and drivers.

In white-box testing, we must run the code with predetermined input and check to make sure that the code produces predetermined outputs. Often testers write stubs and drivers for white-box testing.

Driver for Testing:

A driver is a piece of code that passes test cases to another piece of code. A test harness or test driver is supporting code and data used to provide an environment for testing part of a system in isolation. It can be described as a software module used to invoke a module under test, provide test inputs, control and monitor execution, and report test results; or, most simply, as a line of code that calls a method and passes that method a value.

For example, if you wanted to move a fighter in the game, the driver code would be
moveFighter(Fighter, LocationX, LocationY);

This driver code would likely be called from the main method. A white-box test case would execute this driver line of code and check "fighter.getPosition()" to make sure the player is now on the expected cell on the board.

Stubs for Testing:

A Stub is a dummy procedure, module or unit that stands in for an unfinished portion of a system.

Four basic types of Stubs for Top-Down Testing are:

1 Display a trace message
2 Display parameter value(s)
3 Return a value from a table
4 Return table value selected by parameter

A stub is a computer program which is used as a substitute for the body of a software module that is or will be defined elsewhere or a dummy component or object used to simulate the behavior of a real component until that component has been developed.

Ultimately, the dummy method would be completed with the proper program logic. However, developing the stub allows the programmer to call a method in the code being developed, even if the method does not yet have the desired behavior.

Stubs and drivers are often viewed as throwaway code. However, they do not have to be thrown away: Stubs can be "filled in" to form the actual method. Drivers can become automated test cases.

Thursday, October 11, 2007

When should Testing stop?

"When to stop testing" is one of the most difficult questions to a test engineer.
The following are few of the common Test Stop criteria:
1. All the high priority bugs are fixed.
2. The rate at which bugs are found is too small.
3. The testing budget is exhausted.
4. The project duration is completed.
5. The risk in the project is under acceptable limit.

Practically, we feel that the decision of stopping testing is based on the level of risk acceptable to management. As testing is a never ending process, we can never assume that 100% testing has been done; we can only minimize the risk of shipping the product to the client with X amount of testing done. The risk can be measured by risk analysis, but for a small-duration / low-budget / low-resource project, the risk can be deduced simply by:

• Measuring Test Coverage.
• Number of test cycles.
• Number of high priority bugs.

When Testing should occur?

Ques: When Testing should occur?

Wrong Assumption
Testing is sometimes incorrectly thought of as an after-the-fact activity, performed after programming is done for a product. Instead, testing should be performed at every development stage of the product. Test data sets must be derived and their correctness and consistency should be monitored throughout the development process. If we divide the lifecycle of software development into “Requirements Analysis”, “Design”, “Programming/Construction” and “Operation and Maintenance”, then testing should accompany each of the above phases. If testing is isolated as a single phase late in the cycle, errors in the problem statement or design may incur exorbitant costs. Not only must the original error be corrected, but the entire structure built upon it must also be changed. Therefore, testing should not be isolated as an inspection activity. Rather, testing should be involved throughout the SDLC in order to bring out a quality product.

Ans: Testing Activities in Each Phase

The following testing activities should be performed during the phases
Requirements Analysis - (1) Determine correctness (2) Generate functional test data.
Design - (1) Determine correctness and consistency (2) Generate structural and functional test data.
Programming/Construction - (1) Determine correctness and consistency (2) Generate structural and functional test data (3) Apply test data (4) Refine test data.
Operation and Maintenance - (1) Retest.

Each phase detail:

Requirements Analysis

The following test activities should be performed during this stage.
• Invest in analysis at the beginning of the project - Having a clear, concise and formal statement of the requirements facilitates programming, communication, error analysis and test data generation.

The requirements statement should record the following information and decisions:
1. Program function - What the program must do?
2. The form, format, data types and units for input.
3. The form, format, data types and units for output.
4. How exceptions, errors and deviations are to be handled.
5. For scientific computations, the numerical method or at least the required accuracy of the solution.
6. The hardware/software environment required or assumed (e.g. the machine, the operating system, and the implementation language).

Deciding the above issues is one of the activities related to testing that should be performed during this stage.
• Start developing the test set at the requirements analysis phase - Data should be generated that can be used to determine whether the requirements have been met. To do this, the input domain should be partitioned into classes of values that the program will treat in a similar manner and for each class a representative element should be included in the test data. In addition, following should also be included in the data set: (1) boundary values (2) any non-extreme input values that would require special handling.
The output domain should be treated similarly.
Invalid input requires the same analysis as valid input.

• The correctness, consistency and completeness of the requirements should also be analyzed - Consider whether the correct problem is being solved, check for conflicts and inconsistencies among the requirements and consider the possibility of missing cases.

Design

The design document aids in programming, communication, and error analysis and test data generation. The requirements statement and the design document should together give the problem and the organization of the solution i.e. what the program will do and how it will be done.

The design document should contain:
• Principal data structures.
• Functions, algorithms, heuristics or special techniques used for processing.
• The program organization, how it will be modularized and categorized into external and internal interfaces.
• Any additional information.

Here the testing activities should consist of:
• Analysis of design to check its completeness and consistency - the total process should be analyzed to determine that no steps or special cases have been overlooked. Internal interfaces, I/O handling and data structures should especially be checked for inconsistencies.

• Analysis of design to check whether it satisfies the requirements - check whether both requirements and design document contain the same form, format, units used for input and output and also that all functions listed in the requirement document have been included in the design document. Selected test data which is generated during the requirements analysis phase should be manually simulated to determine whether the design will yield the expected values.

• Generation of test data based on the design - The tests generated should cover the structure as well as the internal functions of the design like the data structures, algorithm, functions, heuristics and general program structure etc. Standard extreme and special values should be included and expected output should be recorded in the test data.

• Reexamination and refinement of the test data set generated at the requirements analysis phase.

The first two steps should also be performed by some colleague and not only the designer/developer.

Programming/Construction

Here the main testing points are:

• Check the code for consistency with design - the areas to check include modular structure, module interfaces, data structures, functions, algorithms and I/O handling.

• Perform the Testing process in an organized and systematic manner with test runs dated, annotated and saved. A plan or schedule can be used as a checklist to help the programmer organize testing efforts. If errors are found and changes made to the program, all tests involving the erroneous segment (including those which resulted in success previously) must be rerun and recorded.

• Asks some colleague for assistance - Some independent party, other than the programmer of the specific part of the code, should analyze the development product at each phase. The programmer should explain the product to the party who will then question the logic and search for errors with a checklist to guide the search. This is needed to locate errors the programmer has overlooked.

• Use available tools - the programmer should be familiar with various compilers and interpreters available on the system for the implementation language being used because they differ in their error analysis and code generation capabilities.

• Apply Stress to the Program - Testing should exercise and stress the program structure, the data structures, the internal functions and the externally visible functions or functionality. Both valid and invalid data should be included in the test set.

• Test one at a time - Pieces of code, individual modules and small collections of modules should be exercised separately before they are integrated into the total program, one by one. Errors are easier to isolate when the number of potential interactions is kept small. Instrumentation – insertion of some code into the program solely to measure various program characteristics – can be useful here. A tester should perform array bound checks, check loop control variables, determine whether key data values are within permissible ranges, trace program execution, and count the number of times a group of statements is executed.

• Measure testing coverage/When should testing stop? - If errors are still found every time the program is executed, testing should continue. Because errors tend to cluster, modules appearing particularly error-prone require special scrutiny.
The metrics used to measure testing thoroughness include statement testing (whether each statement in the program has been executed at least once), branch testing (whether each exit from each branch has been executed at least once) and path testing (whether all logical paths, which may involve repeated execution of various segments, have been executed at least once). Statement testing is the coverage metric most frequently used as it is relatively simple to implement.
The amount of testing depends on the cost of an error. Critical programs or functions require more thorough testing than the less significant functions.
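As a rough illustration of the statement and branch coverage metrics mentioned above, the following Python sketch uses the coverage.py package together with pytest; the tests/ directory is a hypothetical location for the test suite, and the snippet is a minimal example rather than a prescribed setup.

# Minimal coverage-measurement sketch (assumes `pip install coverage pytest`;
# the tests/ directory is hypothetical).
import coverage
import pytest

cov = coverage.Coverage(branch=True)   # branch=True records branch as well as statement coverage
cov.start()

pytest.main(["tests/"])                # run the (hypothetical) test suite

cov.stop()
cov.save()
cov.report(show_missing=True)          # lists the statements and branches never executed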

Operations and maintenance

Corrections, modifications and extensions are bound to occur even for small programs, and testing is required every time there is a change. Testing during maintenance is termed regression testing. The test set, the test plan, and the test results for the original program should already exist; they must be modified to accommodate the program changes, and then all portions of the program affected by the modifications must be re-tested. After regression testing is complete, the program and test documentation must be updated to reflect the changes.

Wednesday, October 10, 2007

Software Testing 10 Rules

1. Test early and test often.

2. Integrate the application development and testing life cycles. You'll get better results and you won't have to mediate between two armed camps in your IT shop.

3. Formalize a testing methodology; you'll test everything the same way and you'll get uniform results.

4. Develop a comprehensive test plan; it forms the basis for the testing methodology.

5. Use both static and dynamic testing.

6. Define your expected results.

7. Understand the business reason behind the application. You'll write a better application and better testing scripts.

8. Use multiple levels and types of testing (regression, systems, integration, stress and load).

9. Review and inspect the work; it will lower costs.

10. Don't let your programmers check their own work; they'll miss their own errors.

Regression Testing

Ques: What is the objective of Regression testing?
Ans: The objective of Regression testing is to verify that the fixes have not created any other problems elsewhere. In other words, the objective is to ensure the software has remained intact. A baseline set of data and scripts is maintained and executed to verify that changes introduced during the release have not "undone" any previously working code. Expected results from the baseline are compared with the results of the software under test. All discrepancies are highlighted and accounted for before testing proceeds to the next level.

Ques: Is the Regression testing performed manually?
Ans: It depends on the initial testing approach. If the initial testing was done manually, the regression testing is usually performed manually as well. Conversely, if the initial testing was automated, the regression testing is usually automated too.

Ques: What do you do during Regression testing?
Ans:
o Rerunning of previously conducted tests.
o Reviewing previously prepared manual procedures.
o Comparing the current test results with the previously executed test results.
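A minimal sketch of the baseline comparison step, assuming the previous cycle's results were stored in a JSON file named baseline.json (both the file name and the result format are illustrative assumptions):

import json

def compare_with_baseline(current_results, baseline_path="baseline.json"):
    """Report every test whose current result differs from the stored baseline."""
    with open(baseline_path) as f:
        baseline = json.load(f)          # e.g. {"TC_001": "PASS", "TC_002": "FAIL"}

    return {
        test_id: (expected, current_results.get(test_id, "NOT RUN"))
        for test_id, expected in baseline.items()
        if current_results.get(test_id, "NOT RUN") != expected
    }                                    # an empty dict means nothing has regressed

# Example usage with hypothetical results from the current regression cycle:
# print(compare_with_baseline({"TC_001": "PASS", "TC_002": "PASS"}))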

Ques: What are the tools available for Regression testing?
Ans: Although the process is simple (the test cases that have been prepared can be reused and the expected results are already known), if it is not automated it can be a very time-consuming and tedious operation.

Some of the tools available for regression testing are:
Record and Playback tools - Here the previously executed scripts can be rerun to verify whether the same set of results is obtained, e.g. Rational Robot.

Ques: What are the end goals of Regression testing?
Ans:
o To ensure that the unchanged system segments function properly
o To ensure that the previously prepared manual procedures remain correct after the changes have been made to the application system
o To verify that the data dictionary of data elements that have been changed is correct.

Regression testing, as the name suggests, is used to check the effect of changes made in the code.

Most of the time the testing team is asked to check last-minute changes in the code just before a release to the client; in this situation the testing team needs to check only the affected areas.

So, in short, for regression testing the testing team should get input from the development team about the nature and extent of the change in the fix, so that the testing team can first check the fix and then the affected areas.

Tuesday, October 9, 2007

Defect Profile

1.Defect - Nonconformance to requirements or functional / program specification.
2.Bug - A fault in a program, which causes the program to perform in an unintended or unanticipated manner.
3.A Bug Report comes into the picture once actual testing starts.
4.If a test case's actual and expected results do not match, we report a bug against that test case.
5.Each bug has a life cycle. When a tester first identifies a bug, he gives it the status 'New'.
6.Once the developer team lead goes through the Bug Report, he assigns each bug to the concerned developer and changes the bug status to 'Assigned'. When the developer starts working on it, he changes the bug status to 'Open', and once it is fixed he changes the status to 'Fixed'. In the next cycle we have to re-check all the fixed bugs: if a bug is really fixed, the concerned tester changes its status to 'Closed'; otherwise the status is changed to 'Reviewed-not-ok'. Finally, 'Deferred' is used for bugs that are going to be fixed in the next iteration.

See the following sample template used for Bug Reporting.
7.The name of the Bug Report file also follows a naming convention, such as:
Project Name -> Bug Report -> Version No -> Release Date
8.Each placeholder should be replaced with the actual project name, version number and release date.
For e.g., Bugzilla Bug Report 1.2.0.3 01_12_04
9.After seeing the name of the file, anybody can easily recognize which project and which version the Bug Report belongs to, and on which date it was released.
10.It avoids having to open a file just to find out which project it belongs to.
11.It maintains the details of Project ID, Project Name, Release Version Number and Date at the top of the sheet.
12. For each bug it maintains:
a) Bug ID
b) Test Case ID
c) Module Name
d) Bug Description
e) Reproducible (Y/N)
f) Steps to Reproduce
g) Summary
h) Bug Status
i) Severity
j) Priority
k) Tester Name
l) Date of Finding
m) Developer Name
n) Date of Fixing

Bug ID: This column represents the unique bug number for each bug. Each organization follows its own standard for the format of the Bug ID.

Test Case ID: This column gives the reference to the Test Case Document, i.e. the test case against which the bug was reported. With this reference we can navigate very easily to the Test Case Document for more details.

Module Name: This refers to the module in which the bug was raised. Based on this information we can estimate how many bugs there are in each module.

Bug Description: It gives a summary of the bug: what the bug is, and what actually happened instead of the expected result.

Reproducible: This column is very important for developers; based on it they know whether the bug can be reproduced. If it is reproducible, it is much easier for the development team to debug; otherwise they first have to find a way to reproduce it. The value is simply Yes or No.

Steps to Reproduce: This column specifies the complete steps to reproduce the bug; we can call it the navigation. It is very useful both for testers and for developers, to reproduce and to debug the bug. It is filled in only when the Reproducible column is Yes; otherwise this column is left empty.

Summary: This column gives the detailed description of the bug.

Bug Status: This column is very important in Bug Report, it is used to track the bug in each level.

1.New: It is given by the tester when he finds the bug
2.Assigned: It is given by the developer team lead after assigning the bug to the concerned developer
3.Open: It is given by the developer while he is fixing the bug
4.Fixed: It is given by the developer after he has fixed the bug
5.Closed: It is given by the tester if the bug is fixed in the new build
6.Reviewed-not-ok: It is given by the tester if the bug is not fixed in the new build
7.Deferred: A bug that is going to be fixed in the next iteration
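The lifecycle above is essentially a small state machine. The sketch below models it with Python's enum module; the allowed transitions are one reasonable reading of the description, not an official workflow.

from enum import Enum

class BugStatus(Enum):
    NEW = "New"
    ASSIGNED = "Assigned"
    OPEN = "Open"
    FIXED = "Fixed"
    CLOSED = "Closed"
    REVIEWED_NOT_OK = "Reviewed-not-ok"
    DEFERRED = "Deferred"

# Allowed transitions as described above (an assumption made for illustration).
TRANSITIONS = {
    BugStatus.NEW: {BugStatus.ASSIGNED, BugStatus.DEFERRED},
    BugStatus.ASSIGNED: {BugStatus.OPEN, BugStatus.DEFERRED},
    BugStatus.OPEN: {BugStatus.FIXED},
    BugStatus.FIXED: {BugStatus.CLOSED, BugStatus.REVIEWED_NOT_OK},
    BugStatus.REVIEWED_NOT_OK: {BugStatus.ASSIGNED},
    BugStatus.DEFERRED: {BugStatus.ASSIGNED},
    BugStatus.CLOSED: set(),
}

def can_move(current: BugStatus, new: BugStatus) -> bool:
    """Check whether a status change is allowed by the lifecycle above."""
    return new in TRANSITIONS[current]

# Example: a fixed bug that fails re-testing goes back for another look.
assert can_move(BugStatus.FIXED, BugStatus.REVIEWED_NOT_OK)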

Severity: This column tells the effect of the bug on the application. It is usually given by the testers. Different organizations follow different conventions for severity; the following is a sample classification based on the bug's effect.

Severity is specified by the tester according to how much the bug affects the application.
Very High: The tester gives this status when testing cannot continue at all, e.g. the application does not open.
High: The tester gives this status when he is not able to test this module but can test some other module.
Medium: The tester gives this status when he is not able to progress further within the current module.
Low: Cosmetic issues, such as a spelling mistake or a look-and-feel problem.

Priority: This column is filled in by the Test Lead. He considers the severity of the bug, the time schedule and the risks associated with the project (especially for that bug), and based on all these aspects assigns the priority as Very High, High, Medium or Low.

Tester Name: This column holds the name of the tester who identified the particular bug. Using this column, developers can easily communicate with that tester if there is any confusion in understanding the bug description.

Date of Finding: This column contains the date when the tester reported the bug, so that we can report how many bugs were raised on a particular day.

Developer Name: This column contains the name of the developer who fixed the particular bug. This information is very useful when a bug is marked fixed but is still present, since the testers can then communicate with the concerned developer to clear up the doubt.

Date of Fixing: This column contains the date when the developer fixed the bug, so that we can report how many bugs were fixed on a particular day.
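For teams that keep the same information in code rather than in a spreadsheet, here is a hedged sketch of the record structure; the field names follow the columns described above, while the class itself and its types are purely illustrative.

from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class BugReport:
    """One row of the bug report sheet described above (illustrative only)."""
    bug_id: str
    test_case_id: str
    module_name: str
    bug_description: str
    reproducible: bool
    steps_to_reproduce: Optional[str]     # left empty when the bug is not reproducible
    summary: str
    bug_status: str                       # New / Assigned / Open / Fixed / Closed / ...
    severity: str                         # Very High / High / Medium / Low
    priority: str                         # Very High / High / Medium / Low
    tester_name: str
    date_of_finding: date
    developer_name: Optional[str] = None  # filled in once the bug is assigned and fixed
    date_of_fixing: Optional[date] = None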

Monday, October 8, 2007

Bug Tracking System

Bug Tracking System is an easy-to-use Internet-based application designed to sustain quality assurance by expediting, streamlining and facilitating the management of Bugs, Problem Requests (PR), Change Requests (CR), Enhancement Requests (ER) or any other activity within your work environment.

Bug tracking is the single most important way to improve the quality of your software. Without keeping track of your bugs, there would be no way to maintain control of what each person on your team works on, fixes and finds problems with. It allows you to prioritize and make decisions that affect the quality of your software. It provides quality assurance personnel, developers, and their clients global defect management throughout the software development life cycle.

Thursday, October 4, 2007

Risk Analysis

A risk is a potential for loss or damage to an organization from materialized threats. Risk Analysis attempts to identify all the risks and then quantify their severity. A threat, as we have seen, is a possible damaging event; if it occurs, it exploits vulnerability in the security of a computer-based system.
Risk Identification:

1. Software Risks: Knowledge of the most common risks associated with Software development, and the platform you are working on.

2. Business Risks: Most common risks associated with the business using the Software

3. Testing Risks: Knowledge of the most common risks associated with Software Testing for the platform you are working on, tools being used, and test methods being applied.

4. Premature Release Risk: Ability to determine the risk associated with releasing unsatisfactory or untested Software Products.

5. Risk Methods: Strategies and approaches for identifying risks or problems associated with implementing and operating information technology, products and process; assessing their likelihood, and initiating strategies to test those risks.

Traceability means that you would like to be able to trace back and forth how and where any work product fulfils the directions of the preceding (source) product. The matrix deals with the where; the how you have to work out yourself, once you know the where.

Take e.g. the Requirement of UserFriendliness (UF). Since UF is a complex concept, it is not solved by just one design-solution and it is not solved by one line of code. Many partial design-solutions may contribute to this Requirement and many groups of lines of code may contribute to it.

A Requirements-Design Traceability Matrix puts on one side (e.g. the left) the sub-requirements that together are supposed to satisfy the UF requirement, along with the other (sub-)requirements. On the other side (e.g. the top) you specify all design solutions. At the crosspoints of the matrix you can then mark which design solutions solve (more or less) which requirement. If a design solution does not solve any requirement, it should be deleted, as it is of no value.

Having this matrix, you can check whether any requirement has at least one design solution and by checking the solution(s) you may see whether the requirement is sufficiently solved by this (or the set of) connected design(s).

If you have to change any requirement, you can see which designs are affected. And if you change any design, you can check which requirements may be affected and see what the impact is.

In a Design-Code Traceability Matrix you can do the same to keep trace of how and which code solves a particular design and how changes in design or code affect each other.
A traceability matrix:

* Demonstrates that the implemented system meets the user requirements.
* Serves as a single source for tracking purposes.
* Identifies gaps in the design and testing.
* Prevents delays in the project timeline, which can be brought about by having to backtrack to fill the gaps.
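A hedged sketch of the Requirements-Design matrix as a simple mapping in Python; the requirement and design IDs are invented for illustration, and the two checks correspond to the 'requirement with no design solution' and 'design solution of no value' situations described above.

# Requirements-Design traceability as a mapping from requirement ID to the
# design solutions that (partially) address it. All IDs are hypothetical.
matrix = {
    "UF-01 consistent menus": {"D-05 navigation component"},
    "UF-02 undo everywhere":  {"D-07 command history", "D-09 toolbar"},
    "UF-03 online help":      set(),        # not yet covered by any design
}
all_designs = {"D-05 navigation component", "D-07 command history",
               "D-09 toolbar", "D-11 splash screen"}

# Requirements with no design solution at all: gaps in the design.
uncovered = [req for req, designs in matrix.items() if not designs]

# Design solutions that solve no requirement: candidates for deletion.
orphans = all_designs - set().union(*matrix.values())

print("Uncovered requirements:", uncovered)
print("Design solutions with no requirement:", orphans)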

Sunday, September 30, 2007

Verification and Validation

Verification:

The standard definition of Verification is: "Are we building the product RIGHT?"

Verification is a process that makes sure that the software product is developed in the right way. The software should conform to its predefined specifications; as product development goes through its different stages, an analysis is done to ensure that all required specifications are met.

During Verification, the work product is reviewed/examined personally by one or more persons in order to find and point out the defects in it. This process helps prevent potential bugs, which could otherwise cause the failure of the project.

Validation:

The standard definition of Validation is: "Are we building the RIGHT product?" It is the process of finding out whether the product being built is the right one.

Whatever software product is being developed, it should do what the user expects it to do. The software product should functionally do what it is supposed to do; it should satisfy all the functional requirements set by the user. Validation is done during or at the end of the development process in order to determine whether the product satisfies the specified requirements.

All types of testing methods are basically carried out during the Validation process. Test plans, test suites and test cases are developed and are used during the various phases of the Validation process. The phases involved in the Validation process are: Code Validation/Testing, Integration Validation/Integration Testing, Functional Validation/Functional Testing, and System/User Acceptance Testing/Validation.

Activities included in Verification and Validation Process:

The two major V&V activities are reviews (including inspections and walkthroughs) and testing.

Reviews:
Reviews are conducted during and at the end of each phase of the life cycle to determine whether established requirements, design concepts, and specifications have been met. Reviews consist of the presentation of material to a review board or panel. Reviews are most effective when conducted by personnel who have not been directly involved in the development of the software being reviewed.

Inspection:
Inspection involves a team of about 3-6 people, led by a leader, which formally reviews the documents and work product during various phases of the product development life cycle. The work product and related documents are presented in front of the inspection team, the members of which carry different interpretations of the presentation. The bugs that are detected during the inspection are communicated to the next level in order to take care of them.

Walkthroughs:
A walkthrough can be considered the same as an inspection but without formal preparation (of any presentation or documentation). During the walkthrough meeting, the presenter/author introduces the material to all the participants in order to make them familiar with it. Although walkthroughs can help in finding potential bugs, they are mainly used for knowledge sharing and communication purposes.

Testing:
Testing is the operation of the software with real or simulated inputs to demonstrate that a product satisfies its requirements and, if it does not, to identify the specific differences between expected and actual results. There are varied levels of software tests, ranging from unit or element testing through integration testing and performance testing, up to software system and acceptance tests.

Friday, September 28, 2007

Testing Definitions

Ad-hoc Testing:
It is done without any formal test plans or test cases. The tester should have a significant understanding of the software before testing it; it is normally done by an experienced tester who has good knowledge of the software to be tested.
This type of testing is also used when the software has to be tested under very tight time constraints.

Exploratory Testing:
It is done by testers who have little or no knowledge of the software they are going to test. They can use this testing to learn the application and to write test cases. It is done at the initial stage of testing, mainly to understand the flow of the software.

Negative Testing:
Negative testing is testing the application with improper inputs, i.e. testing it beyond and below its limits. For example, when a field expects a name:
1) Enter numbers instead of letters.
2) Enter some special/ASCII characters and check the behaviour.
3) Enter a mixture of numbers and characters.
4) If the name has a minimum length, enter a value below that length.
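A minimal sketch of these negative checks as a parameterized test, assuming pytest is available and that validate_name is a hypothetical application function that raises ValueError for invalid names:

import pytest

# validate_name and its module are assumptions made for this sketch; the real
# application would provide its own validation entry point and rules.
from myapp.validation import validate_name

@pytest.mark.parametrize("bad_name", [
    "12345",      # numbers instead of letters
    "@#$%^",      # special/ASCII characters
    "abc123",     # mixture of letters and numbers
    "a",          # below the assumed minimum length
])
def test_name_rejects_invalid_input(bad_name):
    with pytest.raises(ValueError):
        validate_name(bad_name)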

Load Testing:
A number of things should be analyzed during load testing; a few of them are:

1. Response times - Do they appear consistent, or do they degrade over a period of time? Are they higher than expected?
2. Performance of the hardware components - the mid-tier/application server, the HTTP server and the database server. The CPU utilization and JVM memory heap of the application server and the CPU of the database server are important for assessing performance.
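A rough sketch of the response-time part of load testing, using only the Python standard library; the URL, the number of virtual users and the request counts are placeholders, and a real load test would normally use a dedicated tool.

import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor
from statistics import mean

URL = "http://example.com/health"   # placeholder endpoint
VIRTUAL_USERS = 20                  # assumed concurrency level
REQUESTS_PER_USER = 10

def one_user():
    """Issue a series of requests and record each response time in seconds."""
    timings = []
    for _ in range(REQUESTS_PER_USER):
        start = time.perf_counter()
        with urllib.request.urlopen(URL, timeout=10) as resp:
            resp.read()
        timings.append(time.perf_counter() - start)
    return timings

with ThreadPoolExecutor(max_workers=VIRTUAL_USERS) as pool:
    all_timings = [t for user_timings in pool.map(lambda _: one_user(), range(VIRTUAL_USERS))
                   for t in user_timings]

print(f"requests: {len(all_timings)}, "
      f"average: {mean(all_timings):.3f}s, worst: {max(all_timings):.3f}s")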

Software Integration Environment (SIT):
This includes:
a. Server Machine
b. Database Machine
c. Client Machine

In a real-world scenario, the application is tested using different machines for performance and compatibility purposes.

Database Testing:
In DB testing we need to check for:

1. Field size validation.
2. Check constraints.
3. Whether indexes have been created (for performance-related issues).
4. Stored procedures.
5. Whether the field size defined in the application matches that in the DB.
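A small sketch of checking field sizes and constraints, using Python's standard sqlite3 module so that it runs anywhere; the table, columns and expected sizes are hypothetical, and on other databases the same idea would query the engine's catalog views (e.g. INFORMATION_SCHEMA) instead of PRAGMA table_info.

import sqlite3

# Expected column definitions as specified in the application (hypothetical).
EXPECTED = {"username": "VARCHAR(50)", "email": "VARCHAR(100)"}

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (username VARCHAR(50) NOT NULL, "
             "email VARCHAR(100) UNIQUE)")

# PRAGMA table_info returns (cid, name, type, notnull, default, pk) per column.
actual = {row[1]: row[2] for row in conn.execute("PRAGMA table_info(users)")}

for column, expected_type in EXPECTED.items():
    assert actual.get(column) == expected_type, (
        f"{column}: expected {expected_type}, found {actual.get(column)}")

# Constraint check: inserting a duplicate email should be rejected.
conn.execute("INSERT INTO users VALUES ('alice', 'a@example.com')")
try:
    conn.execute("INSERT INTO users VALUES ('bob', 'a@example.com')")
except sqlite3.IntegrityError:
    print("UNIQUE constraint on email enforced as expected")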

QA Life Cycle

It is an integrated system of methodology activities, involving planning, implementation, assessment, reporting and quality improvement, to ensure that the process is of the type and quality needed and expected by the client/customer.

1. Test requirements,
2. Test planning,
3. Test design,
4. Test execution and defect logging,
5. Test reports and acceptance,
6. Sign off.

Test Requirements
1. Requirement Specification documents
2. Functional Specification documents
3. Design Specification documents (use cases, etc)
4. Use case Documents
5. Test Traceability Matrix for identifying Test Coverage

Test Planning
1. Test Scope, Test Environment
2. Different Test phase and Test Methodologies
3. Manual and Automation Testing
4. Defect Mgmt, Configuration Mgmt, Risk Mgmt, etc.
5. Evaluation and identification of test and defect tracking tools

Test Design
1. Test Traceability Matrix and Test coverage
2. Test Scenarios Identification & Test Case preparation
3. Test data and Test scripts preparation
4. Test case reviews and Approval
5. Base lining under Configuration Management

Test Execution and Defect Tracking
1. Executing Test cases
2. Testing Test Scripts
3. Capture, review and analyze Test Results
4. Raising defects and tracking them to closure

Test Reports and Acceptance
1. Test summary reports
2. Test Metrics and process Improvements made
3. Build release
4. Receiving acceptance

Signoff
The signoff template provides a checklist format that the customer can use for reviewing a new system's functionality and other attributes before closing a purchase order or accepting a delivery. It includes checklist areas for functional tests, documentation reviews, issue recording and enhancement requests.

This form can be used standalone for this purpose, or it can be used as the short-form checklist and signoff form accompanying a written User Acceptance Test Plan or Beta Test Plan (see our templates for those 2 documents).

NOTE: This form can also be adapted for review and acceptance of any deliverable between a customer and a provider. Whether the deliverable is a recommendations report from consultants, a user manual from a technical publications firm, a physical hardware system, a software application, a plan for a marketing campaign, etc., this form can be used to list what's expected by the customer, record results of the acceptance review or tests, record open issues to be corrected, and ultimately document acceptance by the customer.

How to use?
When preparing to accept a deliverable—system, product, report, etc.—from a supplier, fill out your version of this form to include the items you want to test and/or review. If possible, consider ahead of time whether any discrepancies will be acceptable for each item. Schedule the review/tests with the supplier and discuss expectations. When you perform the reviews or tests, mark the performance of each item and indicate whether each result is acceptable—will the deliverable be accepted with this issue? Finally, review overall results with supplier, timeline for issue resolution, and whether re-test will be required.

Thursday, September 27, 2007

Testing Glossary

Acceptance Testing: Testing conducted to enable a user/customer to determine whether to accept a software product. Normally performed to validate the software meets a set of agreed acceptance criteria.

Accessibility Testing: Verifying that a product is accessible to people with disabilities (e.g. visual, hearing or cognitive impairments).

Ad Hoc Testing: A testing phase where the tester tries to 'break' the system by randomly trying the system's functionality. Can include negative testing as well. See also Monkey Testing.

Agile Testing: Testing practice for projects using agile methodologies, treating development as the customer of testing and emphasizing a test-first design paradigm. See also Test Driven Development.

Application Binary Interface (ABI): A specification defining requirements for portability of applications in binary form across different system platforms and environments.

Application Programming Interface (API): A formalized set of software calls and routines that can be referenced by an application program in order to access supporting system or network services.

Automated Software Quality (ASQ): The use of software tools, such as automated testing tools, to improve software quality.

Automated Testing:
* Testing employing software tools which execute tests without manual intervention. Can be applied in GUI, performance, API, etc. testing.
* The use of software to control the execution of tests, the comparison of actual outcomes to predicted outcomes, the setting up of test preconditions, and other test control and test reporting functions.

Backus-Naur Form: A metalanguage used to formally describe the syntax of a language.

Basic Block: A sequence of one or more consecutive, executable statements containing no branches.

Basis Path Testing: A white box test case design technique that uses the algorithmic flow of the program to design tests.

Basis Set: The set of tests derived using basis path testing.

Baseline: The point at which some deliverable produced during the software engineering process is put under formal change control.

Benchmark Testing: Tests that use representative sets of programs and data designed to evaluate the performance of computer hardware and software in a given configuration.

Beta Testing: Testing of a pre-release version of a software product, conducted by customers.

Binary Portability Testing: Testing an executable application for portability across system platforms and environments, usually for conformance to an ABI specification.

Black Box Testing: Testing based on an analysis of the specification of a piece of software without reference to its internal workings. The goal is to test how well the component conforms to the published requirements for the component.

Bottom Up Testing: An approach to integration testing where the lowest level components are tested first, then used to facilitate the testing of higher level components. The process is repeated until the component at the top of the hierarchy is tested.

Boundary Testing: Tests which focus on the boundary or limit conditions of the software being tested. (Some of these tests are stress tests.)

Boundary Value Analysis: In boundary value analysis, test cases are generated using the extremes of the input domain, e.g. maximum, minimum, just inside/outside boundaries, typical values, and error values. BVA is similar to Equivalence Partitioning but focuses on "corner cases".
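For instance, a hedged sketch of boundary values for a field assumed to accept integers from 1 to 100; both the range and the accepts_quantity function are invented for illustration.

import pytest

def accepts_quantity(value):
    """Stand-in for the component under test: the valid range is assumed to be 1..100."""
    return 1 <= value <= 100

# Boundary value analysis: minimum, maximum, and just inside/outside each boundary.
@pytest.mark.parametrize("value, expected", [
    (0, False), (1, True), (2, True),        # around the lower boundary
    (99, True), (100, True), (101, False),   # around the upper boundary
])
def test_quantity_boundaries(value, expected):
    assert accepts_quantity(value) == expected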

Branch Testing: Testing in which all branches in the program source code are tested at least once.

Breadth Testing: A test suite that exercises the full functionality of a product but does not test features in detail.

Bug: A fault in a program which causes the program to perform in an unintended or unanticipated manner.

CAST: Computer Aided Software Testing.

Capture/Replay Tool: A test tool that records test input as it is sent to the software under test. The input cases stored can then be used to reproduce the test at a later time. Most commonly applied to GUI test tools.

CMM: The Capability Maturity Model for Software (CMM or SW-CMM) is a model for judging the maturity of the software processes of an organization and for identifying the key practices that are required to increase the maturity of these processes.

Cause Effect Graph: A graphical representation of inputs and the associated outputs effects which can be used to design test cases.

Code Complete: Phase of development where functionality is implemented in entirety; bug fixes are all that are left. All functions found in the Functional Specifications have been implemented.

Code Coverage: An analysis method that determines which parts of the software have been executed (covered) by the test case suite and which parts have not been executed and therefore may require additional attention.

Code Inspection: A formal testing technique where the programmer reviews source code with a group who ask questions analyzing the program logic, analyzing the code with respect to a checklist of historically common programming errors, and analyzing its compliance with coding standards.

Code Walkthrough: A formal testing technique where source code is traced by a group with a small set of test cases, while the state of program variables is manually monitored, to analyze the programmer's logic and assumptions.

Coding: The generation of source code.

Compatibility Testing: Testing whether software is compatible with other elements of a system with which it should operate, e.g. browsers, Operating Systems, or hardware.

Component: A minimal software item for which a separate specification is available.

Concurrency Testing: Multi-user testing geared towards determining the effects of accessing the same application code, module or database records. Identifies and measures the level of locking, deadlocking and use of single-threaded code and locking semaphores.

Conformance Testing: The process of testing that an implementation conforms to the specification on which it is based. Usually applied to testing conformance to a formal standard.

Context Driven Testing: The context-driven school of software testing is a flavor of Agile Testing that advocates continuous and creative evaluation of testing opportunities in light of the potential information revealed and the value of that information to the organization right now.

Conversion Testing: Testing of programs or procedures used to convert data from existing systems for use in replacement systems.

Cyclomatic Complexity: A measure of the logical complexity of an algorithm, used in white-box testing.

Data Dictionary: A database that contains definitions of all data items defined during analysis.

Data Flow Diagram: A modeling notation that represents a functional decomposition of a system.

Data Driven Testing: Testing in which the action of a test case is parameterized by externally defined data values, maintained as a file or spreadsheet. A common technique in Automated Testing.
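A minimal sketch of the idea, with the test data kept in a CSV file; the file name login_cases.csv and the login function under test are hypothetical.

import csv
import pytest

from myapp.auth import login        # hypothetical function under test

def load_cases(path="login_cases.csv"):
    """Each row of the file: username, password, expected result ('ok' or 'denied')."""
    with open(path, newline="") as f:
        return [tuple(row) for row in csv.reader(f)]

@pytest.mark.parametrize("username, password, expected", load_cases())
def test_login_data_driven(username, password, expected):
    assert login(username, password) == expected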

Debugging: The process of finding and removing the causes of software failures.

Defect: Nonconformance to requirements or functional / program specification

Dependency Testing: Examines an application's requirements for pre-existing software, initial states and configuration in order to maintain proper functionality.

Depth Testing: A test that exercises a feature of a product in full detail.

Dynamic Testing: Testing software through executing it. See also Static Testing.

Emulator: A device, computer program, or system that accepts the same inputs and produces the same outputs as a given system.

Endurance Testing: Checks for memory leaks or other problems that may occur with prolonged execution.

End-to-End testing: Testing a complete application environment in a situation that mimics real-world use, such as interacting with a database, using network communications, or interacting with other hardware, applications, or systems if appropriate.

Equivalence Class: A portion of a component's input or output domains for which the component's behavior is assumed to be the same from the component's specification.

Equivalence Partitioning: A test case design technique for a component in which test cases are designed to execute representatives from equivalence classes.

Exhaustive Testing: Testing which covers all combinations of input values and preconditions for an element of the software under test.

Functional Decomposition: A technique used during planning, analysis and design; creates a functional hierarchy for the software.

Functional Specification: A document that describes in detail the characteristics of the product with regard to its intended features.

Functional Testing: same as Black Box Testing.
* Testing the features and operational behavior of a product to ensure they correspond to its specifications.
* Testing that ignores the internal mechanism of a system or component and focuses solely on the outputs generated in response to selected inputs and execution conditions.

Glass Box Testing: A synonym for White Box Testing.

Gorilla Testing: Testing one particular module,functionality heavily.

Gray Box Testing: A combination of Black Box and White Box testing methodologies: testing a piece of software against its specification but using some knowledge of its internal workings.

High Order Tests: Black-box tests conducted once the software has been integrated.

Independent Test Group (ITG): A group of people whose primary responsibility is software testing.

Inspection: A group review quality improvement process for written material. It consists of two aspects: product improvement (of the document itself) and process improvement (of both document production and inspection).

Integration Testing: Testing of combined parts of an application to determine if they function together correctly. Usually performed after unit and functional testing. This type of testing is especially relevant to client/server and distributed systems.

Installation Testing: Confirms that the application under test installs correctly under the supported conditions (e.g. a new installation, an upgrade, or a complete/custom installation) and works as expected after installation.

Localization Testing: This term refers to testing software that has been adapted for a specific locality (language, regional formats and cultural conventions).

Loop Testing: A white box testing technique that exercises program loops.

Metric: A standard of measurement. Software metrics are the statistics describing the structure or content of a program. A metric should be a real objective measurement of something such as number of bugs per lines of code.

Monkey Testing: Testing a system or an application on the fly, i.e. just a few tests here and there to ensure that the system or application does not crash.

Mutation Testing: Testing in which bugs are purposely added to the application to check whether the existing tests detect them.

Negative Testing: Testing aimed at showing software does not work. Also known as "test to fail". See also Positive Testing.

N+1 Testing: A variation of Regression Testing. Testing conducted with multiple cycles in which errors found in test cycle N are resolved and the solution is retested in test cycle N+1. The cycles are typically repeated until the solution reaches a steady state and there are no errors. See also Regression Testing.

Path Testing: Testing in which all paths in the program source code are tested at least once.

Performance Testing: Testing conducted to evaluate the compliance of a system or component with specified performance requirements. Often this is performed using an automated test tool to simulate a large number of users. Also known as "Load Testing".

Positive Testing: Testing aimed at showing software works. Also known as "test to pass". See also Negative Testing.

Quality Assurance: All those planned or systematic actions necessary to provide adequate confidence that a product or service is of the type and quality needed and expected by the customer.

Quality Audit: A systematic and independent examination to determine whether quality activities and related results comply with planned arrangements and whether these arrangements are implemented effectively and are suitable to achieve objectives.

Quality Circle: A group of individuals with related interests that meet at regular intervals to consider problems or other matters related to the quality of outputs of a process and to the correction of problems or to the improvement of quality.

Quality Control: The operational techniques and the activities used to fulfill and verify requirements of quality.

Quality Management: That aspect of the overall management function that determines and implements the quality policy.

Quality Policy: The overall intentions and direction of an organization as regards quality as formally expressed by top management.

Quality System: The organizational structure, responsibilities, procedures, processes, and resources for implementing quality management.

Race Condition: A cause of concurrency problems. Multiple accesses to a shared resource, at least one of which is a write, with no mechanism used by either to moderate simultaneous access.

Ramp Testing: Continuously raising an input signal until the system breaks down.

Recovery Testing: Confirms that the program recovers from expected or unexpected events without loss of data or functionality. Events can include shortage of disk space, unexpected loss of communication, or power out conditions.

Regression Testing: Retesting a previously tested program following modification to ensure that faults have not been introduced or uncovered as a result of the changes made.

Release Candidate: A pre-release version, which contains the desired functionality of the final version, but which needs to be tested for bugs (which ideally should be removed before the final version is released).

Sanity Testing: A brief test of the major functional elements of a piece of software to determine if it is basically operational. See also Smoke Testing.

Scalability Testing: Performance testing focused on ensuring the application under test gracefully handles increases in work load.

Security Testing: Testing which confirms that the program can restrict access to authorized personnel and that the authorized personnel can access the functions available to their security level.

Smoke Testing: A quick-and-dirty test that the major functions of a piece of software work. Originated in the hardware testing practice of turning on a new piece of hardware for the first time and considering it a success if it does not catch on fire.

Soak Testing: Running a system at high load for a prolonged period of time. For example, running several times more transactions in an entire day (or night) than would be expected in a busy day, to identify any performance problems that appear after a large number of transactions have been executed.

Software Requirements Specification: A deliverable that describes all data, functional and behavioral requirements, all constraints, and all validation requirements for the software.

Software Testing: A set of activities conducted with the intent of finding errors in software.

Static Analysis: Analysis of a program carried out without executing the program.

Static Analyzer: A tool that carries out static analysis.

Static Testing: Analysis of a program carried out without executing the program.

Storage Testing: Testing that verifies the program under test stores data files in the correct directories and that it reserves sufficient space to prevent unexpected termination resulting from lack of space. This is external storage as opposed to internal storage.

Stress Testing: Testing conducted to evaluate a system or component at or beyond the limits of its specified requirements to determine the load under which it fails and how. Often this is performance testing using a very high level of simulated load.

Structural Testing: Testing based on an analysis of internal workings and structure of a piece of software. See also White Box Testing.

System Testing: Testing that attempts to discover defects that are properties of the entire system rather than of its individual components.

Testability: The degree to which a system or component facilitates the establishment of test criteria and the performance of tests to determine whether those criteria have been met.

Testing:
* The process of exercising software to verify that it satisfies specified requirements and to detect errors.
* The process of analyzing a software item to detect the differences between existing and required conditions (that is, bugs), and to evaluate the features of the software item (Ref. IEEE Std 829).
* The process of operating a system or component under specified conditions, observing or recording the results, and making an evaluation of some aspect of the system or component.

Test Bed: An execution environment configured for testing. May consist of specific hardware, OS, network topology, configuration of the product under test, other application or system software, etc. The Test Plan for a project should enumerate the test bed(s) to be used.

Test Case:
* Test Case is a commonly used term for a specific test. This is usually the smallest unit of testing. A Test Case will consist of information such as the requirement(s) being tested, test steps, verification steps, prerequisites, outputs, test environment, etc.
* A set of inputs, execution preconditions, and expected outcomes developed for a particular objective, such as to exercise a particular program path or to verify compliance with a specific requirement.

Test Driven Development: Testing methodology associated with Agile Programming in which every chunk of code is covered by unit tests, which must all pass all the time, in an effort to eliminate unit-level and regression bugs during development. Practitioners of TDD write a lot of tests, often roughly as many lines of test code as production code.

Test Driver: A program or test tool used to execute tests. Also known as a Test Harness.

Test Environment: The hardware and software environment in which tests will be run, and any other software with which the software under test interacts when under test including stubs and test drivers.

Test First Design: Test-first design is one of the mandatory practices of Extreme Programming (XP). It requires that programmers do not write any production code until they have first written a unit test.

Test Harness: A program or test tool used to execute tests. Also known as a Test Driver.

Test Plan: A document describing the scope, approach, resources, and schedule of intended testing activities. It identifies test items, the features to be tested, the testing tasks, who will do each task, and any risks requiring contingency planning. Ref IEEE Std 829.

Test Procedure: A document providing detailed instructions for the execution of one or more test cases.

Test Scenario: Definition of a set of test cases or test scripts and the sequence in which they are to be executed.

Test Script: Commonly used to refer to the instructions for a particular test that will be carried out by an automated test tool.

Test Specification: A document specifying the test approach for a software feature or combination of features, and the inputs, predicted results and execution conditions for the associated tests.

Test Suite: A collection of tests used to validate the behavior of a product. The scope of a Test Suite varies from organization to organization. There may be several Test Suites for a particular product for example. In most cases however a Test Suite is a high level concept, grouping together hundreds or thousands of tests related by what they are intended to test.

Test Tools: Computer programs used in the testing of a system, a component of the system, or its documentation.

Thread Testing: A variation of top-down testing where the progressive integration of components follows the implementation of subsets of the requirements, as opposed to the integration of components by successively lower levels.

Top Down Testing: An approach to integration testing where the component at the top of the component hierarchy is tested first, with lower level components being simulated by stubs. Tested components are then used to test lower level components. The process is repeated until the lowest level components have been tested.

Total Quality Management: A company commitment to develop a process that achieves high quality product and customer satisfaction.

Traceability Matrix: A document showing the relationship between Test Requirements and Test Cases.

Usability Testing: Testing the ease with which users can learn and use a product.

Use Case: The specification of tests that are conducted from the end-user perspective. Use cases tend to focus on operating software as an end-user would conduct their day-to-day activities.

User Acceptance Testing: A formal product evaluation performed by a customer as a condition of purchase.

Unit Testing: Testing of individual software components.

Validation: The process of evaluating software at the end of the software development process to ensure compliance with the software requirements. The techniques for validation are testing, inspection and reviewing.

Verification: The process of determining whether or not the products of a given phase of the software development cycle meet the implementation steps and can be traced to the incoming objectives established during the previous phase. The techniques for verification are testing, inspection and reviewing.

Volume Testing: Testing which confirms that any values that may become large over time (such as accumulated counts, logs, and data files), can be accommodated by the program and will not cause the program to stop working or degrade its operation in any manner.

Walkthrough: A review of requirements, designs or code characterized by the author of the material under review guiding the progression of the review.

White Box Testing: Testing based on an analysis of internal workings and structure of a piece of software. Includes techniques such as Branch Testing and Path Testing. Also known as Structural Testing and Glass Box Testing. Contrast with Black Box Testing.

Workflow Testing: Scripted end-to-end testing which duplicates specific workflows which are expected to be utilized by the end-user.