InterBase for Desktop Database Users

By: Conference Speaker

Abstract: This paper focuses on two areas: the specific features of InterBase that make it better than a desktop database, and the differences in application design required to take advantage of those features.

This is the technical paper from a talk given at the 11th Annual Inprise & Borland Developer's Conference
By Bill Todd
Bill Todd is president of The Database Group, Inc., a database consulting and development firm based near Phoenix. He is co-author of four database programming books, including Delphi: A Developer's Guide. He is a contributing editor to Delphi Informant, and a frequent speaker at Inprise/Borland Conferences in the U.S. and Europe. Bill is also a member of Team Borland, a nationally known trainer, and has taught Delphi programming classes across the country and overseas. He was an instructor on the 1995, 1996, 1997, and 1998 Borland/Softbite Delphi World Tours.

InterBase for Desktop Database Users

Why bother? What will you gain by moving from your favorite desktop database to InterBase? Making the move requires an investment of time in learning the product and in learning to design your applications differently, so there had better be some benefits to justify the effort. So, what are the benefits?


The first major advantage of client/server architecture is stability. When you use a desktop database, every workstation writes directly to the table and index files, which means that a crash on any workstation during a write can corrupt your database. In a client/server system, only the computer that runs the database server software, in this case InterBase, writes directly to the database. All of the users' workstations send SQL statements to InterBase, which executes them and returns the requested information, if any. This means that a workstation crash cannot possibly corrupt your database. Of course, you can still have database corruption if the computer running InterBase crashes, but this is unlikely if you run InterBase on a dedicated machine, use a robust operating system (Unix or NT), and connect the computer to an uninterruptible power supply. In addition to improved stability, you will also get better performance if you run InterBase on a dedicated machine; however, for small numbers of users you can run InterBase on a machine that also serves as a file server with acceptable results.


Another major advantage of moving to InterBase is database consistency. Although many desktop databases claim to support transactions, few really do. A true transaction must exhibit four characteristics, commonly known as the ACID properties.

  1. Atomicity: All of the changes to all of the tables that are part of a transaction either succeed or fail as a unit.
  2. Consistency: The database is always left in a consistent state. If a transaction cannot be completed, for example because the database crashes, all uncommitted active transactions are rolled back to restore the database to a consistent state.
  3. Isolation: A transaction cannot see any changes to the database made by another transaction that has not yet committed.
  4. Durability: Once a transaction has committed, all of its changes are a permanent part of the database.

The transaction implementation of most desktop databases fails one or more of these tests. For example, the transaction support provided by the BDE for Paradox and dBase tables fails both the consistency and isolation requirements. InterBase implements true transaction support.


Overall, you should expect better performance with InterBase than with a desktop database, and as table size and the number of users increase, the difference in performance will also increase. You will also get better query performance than many desktop databases can deliver, for two reasons. First, InterBase has a much more sophisticated query optimizer than the popular desktop databases. Second, query processing is done on the server, so it is not necessary to read tables or indices across the network to the client workstation to process a query. The only network traffic is the SQL statement sent from the client to the server and the result set returned by the server to the client. The slower the network, the more important this is, so client/server database systems typically provide much better performance across a WAN than desktop database systems do. Stored procedures and triggers also improve performance by shifting processing to the database server and reducing network traffic.

Note, however, that if you convert a desktop database application to InterBase by doing nothing more than moving the data and changing your BDE alias, performance may decrease. Most desktop database applications are designed to browse through entire tables, while SQL database servers such as InterBase are designed to select and modify sets of records.

Reduced Development & Maintenance Time

Client/server systems can also reduce maintenance and application development time. This is particularly true if more than one application will use the same database. The use of Domains (custom column definitions) and Constraints (custom data validation rules) lets you implement data integrity rules (also called business rules) with less effort than coding them in the client application would require. For more complex data validation, triggers provide a way to code the validation routines on the server, where they are shared by all of the applications that use the database. Stored procedures provide a way to program complex data manipulations on the server; these routines can be shared by all of the applications that use the database, and they also improve performance by reducing network traffic. Centralizing business rule enforcement and data manipulation routines on the server also reduces maintenance. If a business rule needs to be changed, or a new one added, you can make the change once, in one place, and it immediately applies to all applications. None of the client applications need to be recompiled or updated in any way.

Set Oriented vs. Navigational Design

Most desktop databases are navigational by design. You can move forward and backward through all of the rows in a table. If you want to change a record, you first locate it by scrolling or by a sequential or indexed search, then modify it. SQL database servers, such as InterBase, are set oriented. They are designed to do one thing very efficiently, and that one thing is process SQL statements. SQL is a set-oriented language: you use the SELECT statement to fetch a set of records, do what you wish with them, then fetch the next set you want to work with.

This means that to get the best performance from a SQL database server, your application design must change. For example, a typical desktop database application might let the user open a customer form and display the first record in the customer table; the user can then browse through the table or search for a specific record. A typical client/server application would also let the user open a customer form, but no data would be displayed. At this point the user can either insert a new record or enter selection criteria that are used to retrieve a set of records to work with. For example, the user might enter the first letters of a company name, such as "First Choice", and the system would retrieve the set of records whose names begin with those letters. The user can now insert, delete, or update records. When the user is finished, he or she enters new selection criteria and fetches another set of records to work with.

Not only must the design of your applications change but so must the components you use to build them. Since SQL servers such as InterBase are designed to execute SQL statements well, you will get the best performance doing everything with SQL, and that means using TQuery. TTable was designed to work well with navigational databases like Paradox and dBase. While you can use TTable with InterBase, there is a lot of overhead, since the BDE must fetch not only the data but also the metadata that describes the fields and their types. You can see the additional SQL statements that the server must execute using SQL Monitor. There is also overhead in using TTable methods, such as FindKey and Locate, because the BDE must translate these into SQL and send the SQL to InterBase. To illustrate the difference, let's look at the sample QRY.DPR project that accompanies this paper.

Figure 1: The QRY project's data module

Figure 1 shows the data module for this project. You can connect to the InterBase database by using a BDE alias or by specifying the connection information directly in the TDatabase component's Params property. In this case the Params property contains:

SERVER NAME=d:\my documents\icon2k\1140ibfordesk\emp.gdb

USER NAME=sysdba

If you are connecting to an InterBase server running on the same machine as your program, you can specify the direct path to the database file for the server name, as shown above. Note that if you want to run the sample program that accompanies this paper, you must change the SERVER NAME path to the location where you installed the sample database. If you are connecting to a remote server, the syntax for specifying the connection depends on the network protocol you are using.

For TCP/IP the syntax is:
     <server>:<path to db>    ->    example: lurch:c:\ibserver\examples\employee.gdb

For IPX/SPX the syntax is:
     <server>@<volume>:<path>    ->    example: flash@vol2:interbase\examples\employee.gdb

For Netbeui the syntax is:
     \\<server>\<drive>:<path>    ->    example: \\lurch\c:\ibserver\examples\employee.gdb

The SQL property of CustQry is set to:

SELECT * FROM Customer WHERE CustNo < 0

Since all of the customer numbers are positive integers, this query returns no records when it is opened. CustQry's RequestLive property is set to True so the result set will be editable. This results in a main form with an empty grid, as shown in figure 2.

Figure 2: The main form when the program starts.

At this point the user can either insert a new record or choose Edit | Find from the menu to find one or more existing records. The find dialog is shown in figure 3.

Figure 3: The Find dialog

In this example the user can find a customer by customer number or by country. Since customer number is the primary key, a search by number will find no more than one record. Searching by country may find multiple records, as shown in figure 4. Finding the desired records is handled by several methods of the data module, starting with FindCustomer.

procedure TMainDm.FindCustomer;
begin
  FindCustForm := TFindCustForm.Create(Self);
  try
    with FindCustForm do
    begin
      if ShowModal = mrOk then
      begin
        if CustNoEdit.Text <> '' then
          FindCustomerByNumber(CustNoEdit.Text)
        else if CountryEdit.Text <> '' then
          FindCustomerByCountry(CountryEdit.Text)
        else begin
          ShowMessage('You must enter a customer number or country.');
        end; //if
      end; //if
    end; //with
  finally
    FindCustForm.Free;
  end; //try
end;

This method creates the dialog box, FindCustForm, and shows it modally. If the user clicks the OK button and the customer number edit box is not empty, the FindCustomerByNumber method is called with the value the user entered in the dialog as its parameter. If the customer number edit control is empty but the country edit control is not, FindCustomerByCountry is called and passed the value the user entered in the country edit box. If the user clicks the OK button and both edit boxes are empty, an error message is displayed. FindCustomerByNumber is shown below.

procedure TMainDm.FindCustomerByNumber(CustNo: String);
begin
  with CustQry do
  begin
    Close;
    SQL.Clear;
    SQL.Add('SELECT * FROM Customer WHERE CustNo = ' + CustNo +
            ' ORDER BY Customer');
    Open;
  end; //with
end;

This method closes the query, clears the SQL property, adds a new SQL statement, which contains the customer number entered by the user, and finally opens the query. The FindCustomerByCountry method is almost identical and is shown below.

procedure TMainDm.FindCustomerByCountry(Country: String);
begin
  with CustQry do
  begin
    Close;
    SQL.Clear;
    SQL.Add('SELECT * FROM Customer WHERE Country = ' +
            QuotedStr(Country) +
            ' ORDER BY Country');
    Open;
  end; //with
end;


Figure 4: Customers whose country = USA

Note that the records in figure 4 are sorted by the Customer field, as specified in the ORDER BY clause of the query. This points to another difference between desktop database applications using TTable and client/server applications using TQuery. With TTable, new or changed records move to the location dictated by the TTable's active index. With TQuery that does not happen. If you want to maintain record order as records are inserted or updated, you will need to refresh the query by closing and reopening it. The sample Qry program does this with the following AfterPost event handler.

procedure TMainDm.CustQryAfterPost(DataSet: TDataSet);
begin
  CustQry.Close;
  CustQry.Open;
end;

Note that this can have the disconcerting side effect of making new or updated records vanish if they do not satisfy the conditions of the WHERE clause. An alternative that keeps the user on the record that was just inserted or updated is to retrieve that record using its primary key. To do that, change the AfterPost event handler to:

procedure TMainDm.CustQryAfterPost(DataSet: TDataSet);
begin
  FindCustomerByNumber(DataSet.FieldByName('CustNo').AsString);
end;

Non-SELECT Queries

The TQuery component can execute any SQL statement that is valid in InterBase, not just SELECT statements that return result sets. You can execute UPDATE, DELETE, INSERT, ALTER TABLE, CREATE TABLE, CREATE INDEX, DROP TABLE, DROP INDEX, or any other valid SQL statement. However, if the SQL statement does not return a result set, you must execute it by calling the TQuery ExecSQL method, not by setting its Active property to True or calling its Open method. This tells the BDE not to expect a result set.
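
As a minimal sketch of this technique (the TQuery name, UpdQry, and the enclosing routine are illustrative and not part of the sample project):

procedure NormalizeCountryNames;
begin
  with UpdQry do
  begin
    SQL.Clear;
    //This statement returns no result set, so use ExecSQL, not Open.
    SQL.Add('UPDATE Customer SET Country = ' + QuotedStr('USA') +
            ' WHERE Country = ' + QuotedStr('United States'));
    ExecSQL;
  end; //with
end;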

Consistency vs. Concurrency

InterBase's mechanism for allowing concurrent access by multiple users is very different from that used by desktop databases. Many desktop databases use pessimistic locking: a row is locked when a client begins editing it, and the lock is retained until the changes to the row are finished and the record is posted. InterBase, like most database servers, uses optimistic locking. In the optimistic model no lock is placed until the record is posted, so the editing cycle is as follows. First, the client reads the record and makes the desired changes. Next, the client posts the changed record. At that time the record is reread and compared to the record as it was before the client made changes. If the two are the same, no other user has changed the record since the client application read it, and the post can proceed: the record is locked and the changes are written to the database.

At this point, even though the record has been locked and updated, the database has not really been changed. This is because all data access in InterBase occurs within a transaction. The real sequence of events for the record update is as follows.

  1. Start a transaction.
  2. Read the row.
  3. Save the before image of the row.
  4. Change the row.
  5. Reread the row to see if another user has changed it.
  6. Lock the row.
  7. Write the row (post the record).
  8. Commit the transaction.
  9. Unlock the row.

The change is not a permanent part of the database, and cannot be seen by other transactions, until its transaction has been committed. Even reads take place within a transaction and must be committed. You can get pessimistic row-level locking with InterBase if you need it. To do so, start a transaction, change the value of any field in the record by assigning it to itself, and post the record. The record is now locked and will remain locked until the transaction commits or rolls back.
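
Expressed directly in SQL, the self-assignment trick might look like this sketch (the table and column names are illustrative):

SET TRANSACTION;
/* Assigning a column to itself creates a new record version,
   locking the row until this transaction commits or rolls back. */
UPDATE customer SET country = country WHERE cust_no = 1001;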

InterBase is also very different from desktop databases and other database servers in the way it behaves in an environment of mixed readers and writers. Consider the following situation. You have a table that contains the inventory records of all of the goods stored in various warehouses around the country, and you need to produce a report for the auditor that shows the total value of the goods in each warehouse. If you want this report to be accurate with your desktop database, you will have to lock the table so that no other user can update a record while the query for the report is running. If you do not, it is possible for another user to post a transaction that transfers an item from one warehouse to another. If you are unlucky, the query that is totaling the value by warehouse will read the records for the warehouse the goods are leaving before the update transaction and read the records for the warehouse the goods are moving to after the update transaction, and you will have counted the same goods twice; or vice versa, and not count them at all. This is a problem not only with desktop databases but with most database servers as well. While locking the entire table will give you a consistent result for the report, it does not do much for concurrency, since no other user can update the database while the report query is running.

However, InterBase's versioning architecture eliminates the problem entirely. With versioning, both the query and the update transactions can run concurrently and the query is still guaranteed to give consistent results. Here is what happens with InterBase.

  1. The SELECT transaction starts.

  2. The UPDATE transaction starts.

  3. The record for the warehouse the goods are leaving is posted, but because there is an active transaction that started before this one, a new version of the record is written, leaving the original record in place. Each record version contains the sequential transaction number of the transaction that created it. The same thing happens when the record for the warehouse the goods are moving to is updated.

  4. The UPDATE transaction commits.

  5. The SELECT transaction reads the records changed by the UPDATE transaction. When it reads these records it discovers that multiple versions exist and selects the version written by the most recent committed transaction that started before the SELECT transaction. This behavior ensures that the SELECT transaction always sees a snapshot of the database as it existed at the instant the transaction started, regardless of any changes made to the database after that instant. In ANSI SQL terms, InterBase provides, for all practical purposes, serializable transaction isolation.

This leads to another advantage of Interbase. You cannot make a backup of a desktop database while the database is in use. If you try your copy will likely be corrupt. However, the Interbase backup utility, GBAK, can produce a consistent backup of your database while it is on-line and being updated. How is this possible? The answer is, in exactly the same way you were able to run a query and get consistent results while updates were taking place. After all, a backup is just a SELECT that reads every row in every table in the database.

Transaction management is automatic if you use the BDE and do not change the default SQLPASSTHRU MODE setting of SHARED AUTOCOMMIT. In this mode the BDE automatically starts a transaction each time you post a record and automatically commits the transaction after the post. However, if you want to wrap multiple changes to a single table, or changes to multiple tables, in a transaction, you will have to control the transaction in code using the StartTransaction, Commit, and Rollback methods of the TDatabase component. To see transactions in operation in the sample application, choose Edit | Enable Transactions from the menu to make the transaction control buttons appear, as shown in figure 5.
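
A sketch of wrapping several changes in one transaction in code (EmpDb is the TDatabase from the sample data module; the posting code in the middle is illustrative):

with MainDm.EmpDb do
begin
  StartTransaction;
  try
    //post changes to one or more tables here
    Commit;
  except
    Rollback; //undo everything since StartTransaction
    raise;
  end; //try
end; //with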

Figure 5: The transaction control buttons in the sample application.

If you have never worked with transactions, run the program, start a transaction, change several records, make sure you post them all, then click the Rollback button and watch all of your changes vanish. Below is the code from the OnClick event handlers of the three buttons.

procedure TMainForm.StartTransactionBtnClick(Sender: TObject);
begin
  with MainDm.EmpDb do
    if not InTransaction then StartTransaction;
end;

procedure TMainForm.CommitBtnClick(Sender: TObject);
begin
  with MainDm.EmpDb do
    if InTransaction then Commit;
end;

procedure TMainForm.RollbackBtnClick(Sender: TObject);
begin
  with MainDm.EmpDb do
    if InTransaction then
    begin
      Rollback;
      MainDm.CustQry.Close;
      MainDm.CustQry.Open;
    end; //if
end;

Two things in this code are worth noting. The first is the use of the InTransaction property to determine whether a transaction is active. The second is the code that closes and reopens the query after the rollback. Remember that calling Rollback tells InterBase to undo the changes in the database. It does not affect the client application in any way, so you must re-query the database to show the user the new state of the actual data. Also remember that transactions in InterBase meet the ACID test described earlier. If you start a transaction, change 20 records in three different tables, and the machine that is running InterBase crashes, the transaction will be rolled back when you restart InterBase. Your database will always be left in a consistent state.


Security

InterBase also provides more sophisticated security than many desktop databases.


Views

A view provides a restricted view of the data in one or more tables. To create a view, use the CREATE VIEW statement, as shown in the following example, which creates the PHONE_DIRECTORY view used later in this section.

CREATE VIEW phone_directory AS
  SELECT first_name, last_name, phone_ext
  FROM employee;

Views provide security because you can grant a user access to a view that returns data from a table the user does not have access to. Views may be either updateable or read-only. For a view to be updateable it must satisfy all of the following conditions.

  1. It returns data from a single table or another updateable view.

  2. All columns that are not included in the view allow null values.

  3. The SELECT statement does not include the DISTINCT keyword, a HAVING clause, aggregate functions, subqueries, user defined functions or stored procedures.

To prevent users from inserting rows that do not satisfy the WHERE clause in the view, or from changing rows so that they no longer satisfy it, add the WITH CHECK OPTION clause as shown below.

CREATE VIEW us_customers AS
  SELECT cust_no, customer, country
  FROM customer
  WHERE country = 'USA'
  WITH CHECK OPTION;

Once you have created a view you can use it exactly as you would a table. For example, using the PHONE_DIRECTORY view from earlier in this section, you could write the following query.

SELECT * FROM phone_directory
ORDER BY last_name;

If the view is updateable you can also use it in UPDATE, DELETE and INSERT statements.

Grant and Revoke

Using the SQL GRANT and REVOKE statements you can grant or revoke any of the following privileges.

SELECT: Retrieve rows from a table or view.

INSERT: Add rows to a table or view.

DELETE: Remove rows from a table or view.

UPDATE: Change values in a table or view.

EXECUTE: Execute a stored procedure.

REFERENCES: Reference the specified columns with a foreign key.

You can also assign UPDATE and REFERENCES privileges at the column level instead of for an entire table. You can manage privileges for groups of users using roles. To take advantage of roles, create one or more roles using the CREATE ROLE statement, grant privileges to each role, then grant the role to one or more users using the GRANT statement.
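
Sketched with illustrative role and user names, the sequence looks like this:

/* Create a role, give it privileges, then grant it to a user. */
CREATE ROLE sales_clerk;
GRANT SELECT, INSERT, UPDATE ON customer TO sales_clerk;
GRANT sales_clerk TO jsmith;
/* A role, like any other privilege, can also be revoked. */
REVOKE sales_clerk FROM jsmith;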

Physical Security

Desktop databases frequently provide the option to encrypt the database. This protects against casual snoopers who might try to view sensitive data by using a disk editor to browse through the database file. InterBase does not offer an encryption option. This is not a problem in a multi-user environment, because operating system security on the database server will protect the database by preventing users from gaining access to the physical file. Client application users do not need to be able to log on to the database server machine, and should not be able to do so. If you use InterBase to store sensitive information on notebook computers that could be lost or stolen, the only way to protect your data is to use Windows 2000 as the operating system and NTFS as the file system, to prevent users without a valid password from gaining access to the database file.

Domains and Check Constraints

While the rules of relational database design apply to InterBase tables just as they do to desktop databases, InterBase provides some tools to make data definition easier. The first of these is the check constraint. Check constraints impose rules to which the value assigned to a column must conform. Check constraints can be included in a CREATE TABLE statement or added to a table definition using the ALTER TABLE statement. For example:

    CHECK (min_salary < max_salary);
    CHECK ( salary >= (SELECT min_salary FROM job WHERE
                        job.job_code = employee.job_code AND
                        job.job_grade = employee.job_grade AND
                        job.job_country = employee.job_country) AND
            salary <= (SELECT max_salary FROM job WHERE
                        job.job_code = employee.job_code AND
                        job.job_grade = employee.job_grade AND
                        job.job_country = employee.job_country));
    CHECK (order_status in ('new', 'open', 'shipped', 'waiting'));

As you can see, check constraints use the same syntax as the WHERE clause of a SELECT statement. This makes check constraints very powerful and flexible, and they are the ideal way to enforce business rules in an InterBase database.
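
For example, a sketch of adding the first of the constraints above to an existing table with ALTER TABLE (the constraint name is illustrative):

ALTER TABLE job
  ADD CONSTRAINT check_salary_range
  CHECK (min_salary < max_salary);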

The second feature that InterBase brings to data definition is domains. A domain is a custom column type. Domains are the ideal tool for ensuring that identical columns have the same definition in every table in which they appear. For example, suppose you need a Boolean data type in several tables in your database. InterBase does not offer a Boolean type, but you can easily create a Boolean domain using the following statement.

CREATE DOMAIN boolean_type AS CHAR(1)
  DEFAULT 'F'
  CHECK (VALUE IN ('T', 'F'));

Each time you create a table that requires a Boolean column, use the domain as shown below.

CREATE TABLE customer
  (cust_no INTEGER NOT NULL PRIMARY KEY,
   active BOOLEAN_TYPE);

Domains save time in creating tables by avoiding the need to specify all of the characteristics of a column each time you use it, and they greatly ease maintenance by centralizing the definition of a column in one place. Using the ALTER DOMAIN statement you can change any part of a domain definition except its data type and NULL/NOT NULL state. For example, executing the following ALTER DOMAIN statement would change the default value to T for every column in the database whose type is BOOLEAN_TYPE.

ALTER DOMAIN boolean_type SET DEFAULT 'T';


Primary Key Fields Cannot Be Null

One very important difference between some desktop databases and InterBase is that columns that are part of a table's primary key cannot be null. Many desktop databases allow this, but InterBase does not. If you plan to convert an application from a desktop database to InterBase, you must make sure that there are no nulls in any primary key column before you try to move the data to InterBase.

Stored Procedures

A stored procedure is a program, written in InterBase's procedure and trigger language, that is stored as part of the database. Stored procedures can be called by client applications or by other stored procedures or triggers. The biggest advantage of stored procedures is reduced network traffic. Since stored procedures are executed by InterBase on the database server machine, you can use them to move much of your data manipulation code to the server. This eliminates transferring data from the server across the network to the client for manipulation, and reduced network traffic means increased performance, particularly across a WAN or any low speed connection.

Stored procedures improve performance in another way. While you can use queries to do many of the same things that could be done in a stored procedure, a query has a significant disadvantage: each time the client application sends a SQL statement to the database server for execution, the statement must be parsed, submitted to the query optimizer, and analyzed to formulate an execution plan. Stored procedures are parsed, optimized, and stored in executable form when they are added to the database. Since a stored procedure does not have to be parsed and optimized each time it is called, it will execute faster than an equivalent query. Stored procedures can also perform much more complex operations than a single query can.

If more than one application will access the database, stored procedures will also save maintenance and development time since any application can call a stored procedure. Maintenance is easier because you can change the stored procedure without having to change or even recompile any of the client applications.

Finally, stored procedures are a valuable part of database security, since stored procedures can access tables the user does not have rights to. For example, suppose a user needs to run reports that show total salary by department and pay grade. While this information must come from the employee salary table, you do not want this user to have access to that table and the individual salaries of all employees. The solution is to write a stored procedure that calculates the summary information from the salary table, and grant the procedure read access to the table. You can then grant the user execute access to the stored procedure. The user does not need any rights to the salary table.

When Should You Use Stored Procedures?

The short answer is: any time you can. There is no disadvantage to using a stored procedure to retrieve data. There are only two limitations. First, you must be able to pass any variable information to the stored procedure as parameters, or place the information in a table that the stored procedure can access. Second, the procedure and trigger language may be too limited for complex calculations.

Using Create Procedure

Stored procedures are created using the CREATE PROCEDURE statement, which has the following syntax.

CREATE PROCEDURE name
 [(param datatype [, param datatype ...])]
 [RETURNS (param datatype [, param datatype ...])]
AS
 [DECLARE VARIABLE var datatype; ...]
BEGIN
 <statements>
END

Input parameters provide a way to pass values that modify the procedure's behavior from the client application to the procedure. For example, if the purpose of the procedure is to compute the total monthly payroll for a department, the department number would be passed to the procedure as an input parameter. Output, or return, parameters provide a way for the procedure to return information to the client application. In this example the total monthly payroll for the department could be returned in an output parameter. A parameter can have any InterBase data type except BLOB or array. The following procedure shows the use of both input and output parameters.

CREATE PROCEDURE sub_tot_budget (head_dept CHAR(3))
RETURNS (tot_budget NUMERIC(15, 2), avg_budget NUMERIC(15, 2),
         min_budget NUMERIC(15, 2), max_budget NUMERIC(15, 2))
AS
BEGIN
 SELECT SUM(budget), AVG(budget), MIN(budget), MAX(budget)
  FROM department
  WHERE head_dept = :head_dept
  INTO :tot_budget, :avg_budget, :min_budget, :max_budget;
 SUSPEND;
END

This stored procedure declares one input parameter, HEAD_DEPT, of type CHAR(3), and four output parameters, TOT_BUDGET, AVG_BUDGET, MIN_BUDGET, and MAX_BUDGET, all of type NUMERIC(15, 2). Note that both the input parameter list and the output parameter list must be enclosed in parentheses.

Calling Stored Procedures

InterBase stored procedures that return result sets through output parameters are called select procedures, because they can be used in place of a table name in a SQL SELECT statement. Stored procedures that do not return result sets are called using the TIbStoredProc component. The StoredProc sample application that accompanies this paper shows examples of calling both types of stored procedures.

Figure 6: Calling a select procedure

Figure 6 shows the sample application displaying the result set returned by calling a select procedure. The stored procedure is shown below.

CREATE PROCEDURE org_chart
RETURNS (head_dept CHAR(25), department CHAR(25),
         mngr_name CHAR(20), title CHAR(5), emp_cnt INTEGER)
AS
DECLARE VARIABLE mngr_no INTEGER;
DECLARE VARIABLE dno CHAR(3);
BEGIN
 FOR SELECT h.department, d.department, d.mngr_no, d.dept_no
  FROM department d
  LEFT OUTER JOIN department h ON d.head_dept = h.dept_no
  ORDER BY d.dept_no
  INTO :head_dept, :department, :mngr_no, :dno
 DO
 BEGIN
  IF (:mngr_no IS NULL) THEN
  BEGIN
   mngr_name = '--TBH--';
   title = '';
  END
  ELSE
   SELECT full_name, job_code
   FROM employee
   WHERE emp_no = :mngr_no
   INTO :mngr_name, :title;
  SELECT COUNT(emp_no)
  FROM employee
  WHERE dept_no = :dno
  INTO :emp_cnt;
  SUSPEND;
 END
END

The program executes this procedure using a TIbQuery component and the following SQL statement.

 SELECT * FROM Org_Chart


Calling stored procedures that return result sets in this way is very powerful because you can use a WHERE clause to return a subset of the data returned by the stored procedure, an ORDER BY clause to return the data in any order you wish, or any other valid SQL construct to summarize or manipulate the data. The Using Parameters tab, shown in Figure 7, provides an example of calling a select procedure that requires input parameters.
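The preceding point about WHERE and ORDER BY can be sketched as follows. This example assumes a hypothetical select procedure named DEPT_TOTALS that returns DEPT_NO and TOTAL columns; the procedure name, columns, and threshold value are illustrative only, not part of the sample application.

 SELECT dept_no, total
 FROM Dept_Totals
 WHERE total > 100000
 ORDER BY total DESC

The server runs the procedure to produce the result set, then applies the WHERE and ORDER BY clauses to it exactly as it would to rows read from a table.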

Figure 7  Calling a select procedure with input parameters

Below are the stored procedure and the SQL statement used to call it.

 CREATE PROCEDURE get_emp_proj (emp_no SMALLINT)
 RETURNS (proj_id CHAR(5))
 AS
 BEGIN
  FOR SELECT proj_id
   FROM employee_project
   WHERE emp_no = :emp_no
   INTO :proj_id
  DO
   SUSPEND;
 END

SELECT * FROM Get_Emp_Proj(:Emp_No)

This procedure requires an input parameter, which is passed in the SQL statement by including it in parentheses following the procedure name. If multiple values were required they would be separated by commas. The following code from the Get Projects button's OnClick event handler assigns a value to the input parameter and runs the stored procedure.

procedure TSpMainForm.GetProjBtnClick(Sender: TObject);
begin
  with SpDm.EmpProjQry do
  begin
    Close;
    Params.ParamByName('Emp_No').Value := StrToInt(EmpNoEdit.Text);
    Open;
  end; //with
end;


The third tab in the sample application, No Result Set, demonstrates calling a non-select procedure, that is, one that does not return any values. However, this procedure does require an input parameter. In this case a TIbStoredProc component is used, as shown in the data module in Figure 8 below.

Figure 8  The sample application data module

The AdjustSalaryProc component was configured by setting its Database property to EmpDb and its StoredProcName property to ADJUST_SALARY_RANGE. The following code from the Adjust Salary Ranges button's OnClick event handler executes the stored procedure.

procedure TSpMainForm.AdjustBtnClick(Sender: TObject);
begin
  with SpDm.AdjustSalaryProc do
  begin
    Params.ParamByName('Factor').Value := StrToFloat(FactorEdit.Text);
    ExecProc;
  end; //with
end;


This stored procedure requires a single input parameter named Factor, which is the factor by which you wish to change all salary ranges.
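Outside of a client application, a non-select procedure like this one can also be run directly from an SQL tool such as isql using an EXECUTE PROCEDURE statement. The 1.05 factor below is just an illustrative value.

 EXECUTE PROCEDURE Adjust_Salary_Range (1.05);

This is the same call the TIbStoredProc component issues on your behalf when you invoke its ExecProc method.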


Triggers

Triggers are identical to stored procedures with the following exceptions.

  1. Triggers are called automatically when the data in the table the trigger is attached to changes.

  2. Triggers take no input parameters.

  3. Triggers do not return values.

  4. Triggers are created by the CREATE TRIGGER statement.


The following CREATE TRIGGER statement shows all of the elements of the CREATE TRIGGER syntax. The keywords CREATE TRIGGER are followed by the trigger name, then the keyword FOR and the name of the table the trigger is attached to. Next comes either ACTIVE or INACTIVE to indicate whether the trigger will function or not. If the trigger is inactive it will not execute. You will see how to activate and deactivate a trigger later in this paper. The next element of the CREATE TRIGGER statement indicates when the trigger will fire. This will be one of the following six conditions.

  1. BEFORE INSERT

  2. AFTER INSERT

  3. BEFORE UPDATE

  4. AFTER UPDATE

  5. BEFORE DELETE

  6. AFTER DELETE

Next comes the optional POSITION keyword followed by an integer. Interbase allows you to attach any number of triggers to the same event. For example, you could have four triggers for the employee table all of which fire after update. This is a great feature because it allows you to modularize your code. However, the order in which the triggers fire may be important. The POSITION keyword lets you control the firing order based on the integer number supplied. In this example the trigger shown below would fire first because its position is zero. If there were three other triggers you might assign them positions of 10, 20 and 30. It is a good idea to leave a gap in the numbering sequence so you can easily insert another trigger at any point in the firing order in the future.

CREATE TRIGGER save_salary_change FOR employee
ACTIVE AFTER UPDATE POSITION 0
AS
BEGIN
    IF (old.salary <> new.salary) THEN
        INSERT INTO salary_history
            (emp_no, change_date, updater_id, old_salary, percent_change)
        VALUES (
            old.emp_no,
            'NOW',
            user,
            old.salary,
            (new.salary - old.salary) * 100 / old.salary);
END

After the keyword AS come any local variable declarations, using the same DECLARE VARIABLE syntax that was used for a stored procedure. Finally comes the procedure body, enclosed in a BEGIN/END block.

One thing to keep in mind as you begin using triggers is that a single change to a record in a database can cause many triggers to fire. A change to table A can fire a trigger that updates table B. The update to table B can fire a trigger that inserts a record in table C. Inserting a new record in table C can fire a trigger that updates table D, and so on. The second important point about triggers is that a trigger is part of the transaction that caused it to fire. Suppose you start a transaction and update a row, the update fires a trigger that changes a second table, and that change fires another trigger that updates a third table. If you then roll back the transaction, your change and all of the changes made by all of the triggers it fired will be rolled back.
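To illustrate, the following sketch of an isql session assumes a trigger on the employee table that logs salary changes to salary_history, as in the example above; the employee number and raise are illustrative values.

 SET TRANSACTION;
 UPDATE employee SET salary = salary * 1.1 WHERE emp_no = 2;
 /* the AFTER UPDATE trigger fires here and inserts a salary_history row */
 ROLLBACK;
 /* both the salary change and the trigger's salary_history row are undone */

Because the trigger ran inside your transaction, there is no way for the history table to record a change that was never committed.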


Conclusion

Interbase is a dramatic step up from a desktop database. It will support more data and more users than any desktop database. It provides true transaction control to ensure the integrity of your data. It offers triggers to centralize enforcement of business rules, and stored procedures to provide high-performance data manipulation routines that can be shared by any number of client applications. It provides the stability of client/server architecture, along with a more sophisticated security system than most desktop databases offer to control who has access to what. Best of all, it is no more difficult to write an application using Interbase than it is to use a desktop database.
