Sunday, December 9, 2007

Server Setup Scripts

Two cases must be tested: setting up databases from scratch, and setting up databases when they already exist.

The minimum list of areas to cover is as follows:
* Whether a setup batch job can run without much operator assistance
(it is not acceptable if an operator must run many batch jobs manually)
* The work environment the setup needs to run in (DOS, NT)
* Environment variables (e.g., is %svr% defined?)
* The time it takes to set up
* Set up databases from scratch
* Set up from existing databases
* Setup log and failure messages
* After setup, check for the following (a verification sketch follows this list):

  • Databases
  • Tables
  • Table attachments (keys, indexes, rules, defaults, column names, and column types)
  • Triggers
  • Stored procedures
  • Lookup data
  • User access privileges
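
Much of this checklist can be automated with a catalog query. Below is a minimal sketch, assuming a SQL Server back end; the database and object names are placeholders, not part of any actual setup script.

    -- Verify that the expected objects were created (all names are hypothetical).
    USE MyDatabase
    GO
    SELECT name, type        -- U = user table, TR = trigger, P = stored procedure
    FROM   sysobjects
    WHERE  name IN ('Customer', 'Orders',     -- expected tables
                    'trg_Customer_Del',       -- expected trigger
                    'usp_GetCustomer')        -- expected procedure
    ORDER BY type, name
    -- Compare the result against the setup specification; a missing row
    -- means the setup script did not create that object.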

Integration tests of Server

Integration tests should be performed after the 'Database schema testing', 'Stored procedure test', and 'Trigger test' component tests are done. They should call stored procedures intensively to select, update, insert, and delete records in different tables and in different sequences. The main purpose is to surface conflicts and incompatibilities such as the following (a calling sketch follows the list):

* Conflicts between schema and triggers
* Conflicts between stored procedures and schema
* Conflicts between stored procedures and triggers
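
For example, a calling sketch along these lines exercises a group of procedures in a realistic sequence; every procedure name and parameter below is hypothetical.

    -- Integration sketch: drive related procedures in sequence and watch
    -- for trigger/constraint conflicts. All names are placeholders.
    DECLARE @CustomerId int, @OrderId int, @rc int

    EXEC @rc = usp_InsertCustomer @Name = 'Acme', @CustomerId = @CustomerId OUTPUT
    EXEC @rc = usp_InsertOrder    @CustomerId = @CustomerId, @OrderId = @OrderId OUTPUT
    EXEC @rc = usp_UpdateOrder    @OrderId = @OrderId, @Quantity = 5

    -- Try to delete the parent while a child row still references it;
    -- a schema/trigger conflict shows up here as an unexpected result.
    EXEC @rc = usp_DeleteCustomer @CustomerId = @CustomerId
    IF @rc = 0 PRINT 'Customer deleted while an order still referenced it'

    EXEC @rc = usp_DeleteOrder    @OrderId = @OrderId
    EXEC @rc = usp_DeleteCustomer @CustomerId = @CustomerId   -- now it should succeed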

Friday, December 7, 2007

Trigger Test

“EACH AND EVERY TRIGGER AND TRIGGER ERROR MUST BE TESTED AT LEAST ONCE!!!”

1.1 Updating triggers

Verify the following and compare them with the design specification (a test sketch follows this list):
* Make sure the trigger name is spelled correctly
* See that the trigger is created on the intended table and column
* The trigger's update validation
* Update a record with valid data
* Update a record with invalid data that the trigger should prevent, and cover every trigger error
* Update a record while it is still referenced by a row in another table
* Make sure the transaction is rolled back when a failure occurs
* Identify any case in which the trigger is not supposed to roll back the transaction
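
A minimal sketch of such a test, assuming SQL Server; the trigger, table, and business rule are hypothetical. The trigger rejects negative salaries; the driver covers the valid case, the error case, and the rollback.

    -- Hypothetical update trigger: reject negative salaries.
    CREATE TRIGGER trg_Employee_Upd ON Employee FOR UPDATE
    AS
    IF EXISTS (SELECT 1 FROM inserted WHERE Salary < 0)
    BEGIN
        RAISERROR ('Salary may not be negative.', 16, 1)
        ROLLBACK TRANSACTION    -- undo the offending update; aborts the batch
    END
    GO
    UPDATE Employee SET Salary = 50000 WHERE EmpId = 1   -- valid: must succeed
    GO
    UPDATE Employee SET Salary = -1 WHERE EmpId = 1      -- invalid: trigger error
    GO
    SELECT Salary FROM Employee WHERE EmpId = 1          -- expect 50000 (rolled back)
    GO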

1.2 Inserting triggers

Verify the following and compare them with the design specification (a test sketch follows this list):
* Make sure the trigger name is spelled correctly
* See that the trigger is created on the intended table and column
* The trigger's insertion validation
* Insert a record with valid data
* Insert a record with invalid data that the trigger should prevent, and cover every trigger error
* Try to insert a record that already exists in the table
* Make sure the transaction is rolled back when an insertion failure occurs
* Identify any case in which the trigger should roll back the transaction
* Identify any failure in which the trigger should not roll back the transaction
* Conflicts between a trigger and a stored procedure/rule
(e.g., a column allows NULL while a trigger doesn't)
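
A sketch of the duplicate-insert case, again with hypothetical names: the trigger rejects an insert whose name already exists, and the driver verifies both the error and the rollback.

    -- Hypothetical insert trigger: reject duplicate customer names.
    CREATE TRIGGER trg_Customer_Ins ON Customer FOR INSERT
    AS
    IF EXISTS (SELECT 1
               FROM Customer c JOIN inserted i ON c.Name = i.Name
               WHERE c.CustomerId <> i.CustomerId)
    BEGIN
        RAISERROR ('Duplicate customer name.', 16, 1)
        ROLLBACK TRANSACTION
    END
    GO
    INSERT Customer (CustomerId, Name) VALUES (1, 'Acme')   -- valid: must succeed
    GO
    INSERT Customer (CustomerId, Name) VALUES (2, 'Acme')   -- duplicate: must fail
    GO
    SELECT COUNT(*) FROM Customer WHERE Name = 'Acme'       -- expect 1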

1.3 Deleting triggers

Verify the following and compare them with the design specification (a test sketch follows this list):
* Make sure the trigger name is spelled correctly
* See that the trigger is created on the intended table and column
* The trigger's deletion validation
* Delete a record
* Delete a record while it is still referenced by a row in another table
* Cover every trigger error
* Try to delete a record that does not exist in the table
* Make sure the transaction is rolled back when a deletion fails
* Identify any case in which the trigger should roll back the transaction
* Identify any failure in which the trigger should not roll back the transaction
* Conflicts between a trigger and a stored procedure/rule
(e.g., a column allows NULL while a trigger doesn't)
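
A sketch of the referenced-row case, with hypothetical names: the trigger blocks deleting a customer who still has orders.

    -- Hypothetical delete trigger: block deleting a referenced customer.
    CREATE TRIGGER trg_Customer_Del ON Customer FOR DELETE
    AS
    IF EXISTS (SELECT 1
               FROM Orders o JOIN deleted d ON o.CustomerId = d.CustomerId)
    BEGIN
        RAISERROR ('Customer still has orders.', 16, 1)
        ROLLBACK TRANSACTION
    END
    GO
    DELETE Customer WHERE CustomerId = 1                 -- referenced: must fail
    GO
    SELECT COUNT(*) FROM Customer WHERE CustomerId = 1   -- expect 1 (still there)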

Stored Procedure Test

“EACH AND EVERY STORED PROCEDURE MUST BE TESTED AT LEAST ONCE!!!”

1.1 Individual procedure tests

Verify the following and compare them with the design specification (a catalog-check sketch follows this list):
* Whether a stored procedure is installed in a database
* Stored procedure name
* Parameter names, parameter types and the number of parameters
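
These checks can be scripted against the catalog. A minimal sketch for SQL Server, with a placeholder procedure name:

    -- Does the procedure exist, and do its parameters match the spec?
    IF OBJECT_ID('usp_GetCustomer') IS NULL
        PRINT 'usp_GetCustomer is missing or its name is misspelled'

    SELECT PARAMETER_NAME, DATA_TYPE, ORDINAL_POSITION
    FROM   INFORMATION_SCHEMA.PARAMETERS
    WHERE  SPECIFIC_NAME = 'usp_GetCustomer'
    ORDER BY ORDINAL_POSITION
    -- Compare names, types, and count against the design specification.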

Outputs:

* When the output is empty (zero rows affected)
* When some records are extracted
* When the output contains many records
* What the stored procedure is supposed to do
* What the stored procedure is not supposed to do
* Write simple queries to verify that the stored procedure returns the right data

Parameters:
* Check whether each parameter is required
* Call the stored procedure with valid data
* Call the procedure with boundary data
* Make each parameter invalid one at a time and run the procedure

Return values:
* Whether a stored procedure returns values
* When a failure occurs, nonzero must be returned.

Error messages:
* Make the stored procedure fail and cause every error message to occur at least once
* Find out any exception that doesn’t have a predefined error message

Others:
* Whether the stored procedure grants the correct access privileges to a group/user
* See if the stored procedure hits any trigger error, index error, or rule error
* Look into the procedure code and make sure all major branches are covered by tests (a driver sketch follows this list)
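
A driver sketch that ties the parameter, return value, and error checks together; the procedure and its parameter are hypothetical:

    -- Call the procedure with valid, boundary, and invalid inputs and
    -- check the return status each time. Names are placeholders.
    DECLARE @rc int

    EXEC @rc = usp_GetCustomer @CustomerId = 1      -- valid: expect 0
    IF @rc <> 0 PRINT 'Unexpected failure for valid input'

    EXEC @rc = usp_GetCustomer @CustomerId = -1     -- boundary/invalid value
    IF @rc = 0 PRINT 'Procedure accepted an invalid id'

    EXEC @rc = usp_GetCustomer @CustomerId = NULL   -- required parameter invalidated
    IF @rc = 0 PRINT 'Procedure accepted NULL for a required parameter'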

1.2 Integration tests of procedures
* Group related stored procedures together and call them in a particular order
* If there are many sequences in which a group of procedures can be called, find the equivalence classes and run tests to cover every class
* Make an invalid calling sequence and run the group of stored procedures
* Design several test sequences in which end users are likely to do business, and run stress tests

Database Schema Testing

“EACH AND EVERY ITEM IN SCHEMA MUST BE TESTED AT LEAST ONCE!!!”

1.1 Databases and devices

Verify the following and find out the differences between the specification and the actual databases (an inspection sketch follows this list):

* Database names
* Data device, log device and dump device
* Enough space allocated for each database
* Database option settings (e.g., the trunc. option)
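
A quick inspection sketch, assuming SQL Server; the database name is a placeholder:

    -- Size, device fragments, and option settings for one database.
    EXEC sp_helpdb 'MyDatabase'
    -- Devices defined on the server (data, log, dump).
    EXEC sp_helpdevice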

1.2 Tables, columns, column types, defaults, and rules

Verify the following and find out the differences between the specification and the actual tables (a catalog query sketch follows this list):

* All table names
* Column names for each table
* Column types for each table (int, tinyint, varchar, char, text, datetime; especially the number of characters for char and varchar)
* Whether a column allows NULL or not
* Default definitions
* Whether a default is bound to correct table columns
* Rule definitions
* Whether a rule is bound to correct table columns
* Whether access privileges are granted to correct groups
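
Most of this list can be pulled from the catalog in one query; a sketch for SQL Server with a placeholder table name:

    -- Actual column names, types, lengths, nullability, and defaults.
    SELECT COLUMN_NAME, DATA_TYPE, CHARACTER_MAXIMUM_LENGTH,
           IS_NULLABLE, COLUMN_DEFAULT
    FROM   INFORMATION_SCHEMA.COLUMNS
    WHERE  TABLE_NAME = 'Customer'
    ORDER BY ORDINAL_POSITION
    -- Diff the result against the design specification.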

1.3 Keys and indexes

Verify the following and compare them with the design specification (a sketch that finds tables without primary keys follows this list):

* Primary key for each table (every table should have a primary key)
* Foreign keys
* Matching column data types between a foreign key column and the referenced column in the other table
* Indices, clustered or nonclustered; unique or not unique
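
A sketch that flags tables violating the every-table-has-a-primary-key rule, using the standard catalog views:

    -- User tables that have no primary key constraint.
    SELECT t.TABLE_NAME
    FROM   INFORMATION_SCHEMA.TABLES t
    WHERE  t.TABLE_TYPE = 'BASE TABLE'
      AND NOT EXISTS (SELECT 1
                      FROM  INFORMATION_SCHEMA.TABLE_CONSTRAINTS c
                      WHERE c.TABLE_NAME = t.TABLE_NAME
                        AND c.CONSTRAINT_TYPE = 'PRIMARY KEY')
    -- sp_helpindex '<table>' then shows whether each index is
    -- clustered/nonclustered and unique/non-unique.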

Thursday, December 6, 2007

Why is back-end testing so important?

A back end is the engine of any client/server system. If the back end malfunctions, it may cause system deadlocks, data corruption, data loss, and poor performance. Many front ends log on to a single SQL server, so a bug in the back end can have a serious impact on the whole system. Too many back-end bugs cost tremendous resources to find and fix, and delay system development.

It is very likely that tests driven from the front end hit only a small portion of the back end, so many back-end bugs cannot easily be discovered without direct testing.

Back-end testing has several advantages: the back end is no longer a "black box" to testers, and many bugs can be found and fixed effectively in the early development stages. Take Forecast LRS as an example: the number of back-end bugs was more than 30% of the total number of bugs in the project. When the back-end bugs were fixed, system quality increased dramatically.

HTTP and HTTPS

Hypertext Transfer Protocol (HTTP) is a system for transmitting and receiving information across the Internet. HTTP serves as a request and response procedure that all agents on the Internet follow so that information can be disseminated rapidly, easily, and accurately between servers, which hold information, and clients, which are trying to access it. HTTP is commonly used to access HTML pages, but other resources can be served over it as well. In many cases, clients exchange confidential information with a server, which needs to be secured in order to prevent unauthorized access. For this reason, HTTPS, or secure HTTP, was developed by Netscape to allow authorization and secured transactions.

In many ways, HTTPS is identical to HTTP, because it follows the same basic protocols. The HTTP or HTTPS client, such as a Web browser, establishes a connection to a server on a standard port. When a server receives a request, it returns a status and a message, which may contain the requested information or indicate an error if part of the process malfunctioned. Both systems use the same Uniform Resource Identifier (URI) scheme, so that resources can be universally identified. Use of https rather than http in a URI indicates that an encrypted connection is desired.

There are some primary differences between HTTP and HTTPS, however, beginning with the default port, which is 80 for HTTP and 443 for HTTPS. HTTPS works by transmitting normal HTTP interactions through an encrypted layer, so that the information cannot be read by any party other than the client and the end server. There are two common types of encryption layers: Transport Layer Security (TLS) and Secure Sockets Layer (SSL), both of which encode the data records being exchanged.

When an HTTPS connection is opened, the server responds to the initial connection by offering a list of encryption methods it supports. In response, the client selects a connection method, and the client and server exchange certificates to authenticate their identities. After this is done, the two parties exchange encrypted information after ensuring that both are using the same key, and then the connection is closed. In order to host HTTPS connections, a server must have a public key certificate, which binds key information to a verification of the key owner's identity. Most certificates are verified by a third party so that clients are assured that the key is secure.

HTTPS is used in many situations, such as log-in pages for banking, forms, corporate logons, and other applications in which data needs to be secure. However, if not implemented properly, HTTPS is not infallible, and therefore it is extremely important for end users to be wary about accepting questionable certificates and cautious with their personal information while using the Internet.

Thursday, November 29, 2007

Delete Vs Truncate Statement

* DELETE is a fully logged operation: the deletion of each row is recorded in the transaction log, which makes it slow.

* TRUNCATE TABLE also deletes all the rows in a table, but it does not log the deletion of each row; instead, it logs the deallocation of the table's data pages, which makes it faster. Because the individual rows are not logged, they cannot be recovered from the transaction log afterwards (in SQL Server, a TRUNCATE can still be rolled back if it is issued inside an explicit transaction).

* TRUNCATE TABLE is functionally similar to a DELETE statement with no WHERE clause: both remove all rows in the table. But TRUNCATE TABLE is faster and uses fewer system and transaction log resources than DELETE.

* TRUNCATE TABLE removes all rows from a table, but the table structure and its columns, constraints, indexes, and so on remain as they are.

* TRUNCATE TABLE resets the counter used by an identity column for new rows to the seed defined for the column.

* If you want to retain the identity counter, use a DELETE statement instead.

* You cannot use TRUNCATE TABLE on a table referenced by a foreign key constraint; instead, use a DELETE statement without a WHERE clause. Because TRUNCATE TABLE does not log individual row deletions, it cannot activate a DELETE trigger.

* TRUNCATE TABLE may not be used on tables participating in an indexed view. (A short demonstration of the DELETE/TRUNCATE differences follows.)
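
A small demonstration of the logging and identity differences; the table is a throwaway example:

    -- DELETE keeps the identity counter; TRUNCATE resets it to the seed.
    CREATE TABLE Demo (Id int IDENTITY(1,1), Name varchar(20))
    INSERT Demo (Name) VALUES ('a')
    INSERT Demo (Name) VALUES ('b')

    DELETE Demo                        -- row-by-row, fully logged
    INSERT Demo (Name) VALUES ('c')    -- identity continues: Id = 3

    TRUNCATE TABLE Demo                -- deallocates pages, resets identity
    INSERT Demo (Name) VALUES ('d')    -- identity restarts: Id = 1

    SELECT Id, Name FROM Demo
    DROP TABLE Demo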

Friday, November 23, 2007

Normal Forms

The database community has developed a series of guidelines for ensuring that databases are normalized. These are referred to as normal forms and are numbered from one (the lowest form of normalization, referred to as first normal form or 1NF) through five (fifth normal form or 5NF).

First Normal Form (1NF)
First normal form (1NF) sets the very basic rules for an organized database:
* Organize the data into two-dimensional tables with rows and columns.
* Eliminate duplicative columns from the same table.
* Create separate tables for each group of related data and identify each row with a unique column or set of columns (the primary key).

Second Normal Form (2NF)
Second normal form (2NF) further addresses the concept of removing duplicative data:
* Meet all the requirements of the first normal form.
* Remove subsets of data that apply to multiple rows of a table and place them in separate tables.
* Create relationships between these new tables and their predecessors through the use of foreign keys.

Third Normal Form (3NF)
Third normal form (3NF) goes one large step further:
* Meet all the requirements of the second normal form.
* Remove columns that are not dependent upon the primary key. (A decomposition sketch follows.)
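
As an illustration of 1NF through 3NF, here is a hypothetical flat order table and its decomposition; every table and column is invented for the example.

    -- Denormalized: customer facts repeat on every order row, and
    -- CustomerCity depends on the customer, not on the order key.
    CREATE TABLE OrderFlat (
        OrderId      int,
        CustomerName varchar(40),
        CustomerCity varchar(40),
        ProductName  varchar(40),
        Quantity     int
    )

    -- Normalized: each fact is stored once and referenced by key.
    CREATE TABLE Customer (
        CustomerId int PRIMARY KEY,
        Name       varchar(40),
        City       varchar(40)
    )

    CREATE TABLE Orders (
        OrderId     int PRIMARY KEY,
        CustomerId  int REFERENCES Customer (CustomerId),
        ProductName varchar(40),
        Quantity    int
    )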

Boyce-Codd Normal Form
A table is in Boyce-Codd normal form (BCNF) if and only if, for every one of its non-trivial functional dependencies X → Y, X is a superkey; that is, X is either a candidate key or a superset of one.

Fourth Normal Form (4NF)
Finally, fourth normal form (4NF) has one additional requirement:
* Meet all the requirements of the third normal form.
* A relation is in 4NF if it has no multi-valued dependencies.
Remember, these normalization guidelines are cumulative. For a database to be in 2NF, it must first fulfill all the criteria of a 1NF database.

Fifth Normal Form (5NF)
The criteria for fifth normal form (5NF and also PJ/NF) are:
* The table must be in 4NF.
* There must be no non-trivial join dependencies that do not follow from the key constraints. A 4NF table is said to be in 5NF if and only if every join dependency in it is implied by the candidate keys.

Domain/key Normal Form (DKNF)
Domain/key normal form (or DKNF) requires that a table not be subject to any constraints other than domain constraints and key constraints.

Sixth Normal Form (6NF)
A table is in sixth normal form (6NF) if and only if it satisfies no non-trivial join dependencies at all. This obviously means that the fifth normal form is also satisfied. The sixth normal form was only defined when extending the relational model to take into account the temporal dimension.

Thursday, November 22, 2007

Normalization

Normalization is a technique for designing relational database tables to minimize duplication of information and to safeguard the database against certain types of logical or structural problems, namely data anomalies.
Normalization is typically a refinement process after the initial exercise of identifying the data objects that should be in the database, identifying their relationships, and defining the tables required and the columns within each table.
For example, when multiple instances of a given piece of information occur in a table, the possibility exists that these instances will not be kept consistent when the data within the table is updated, leading to a loss of data integrity. A table that is sufficiently normalized is less vulnerable to problems of this kind, because its structure reflects the basic assumptions for when multiple instances of the same information should be represented by a single instance only.
Higher degrees of normalization typically involve more tables and create the need for a larger number of joins, which can reduce performance.

There are two goals of the normalization process:
a. Eliminating redundant data (for example, storing the same data in more than one table) and
b. Ensuring data dependencies make sense (only storing related data in a table).

Both of these are worthy goals as they reduce the amount of space a database consumes and ensure that data is logically stored.