USF Libraries
USF Digital Collections

Unit testing database applications using SpecDB


Material Information

Title:
Unit testing database applications using SpecDB : a database of software specifications
Physical Description:
Book
Language:
English
Creator:
Mikhail, Rana Farid
Publisher:
University of South Florida
Place of Publication:
Tampa, Fla
Publication Date:

Subjects

Subjects / Keywords:
Software Testing
Formal specifications
Reverse engineering
Automated code
Constraint generation
Dissertations, Academic -- Computer Science and Engineering -- Doctoral -- USF
Genre:
bibliography   ( marcgt )
theses   ( marcgt )
non-fiction   ( marcgt )

Notes

Abstract:
ABSTRACT: In this dissertation I introduce SpecDB, a database created to represent and host software specifications in a machine-readable format. The specifications represented in SpecDB are for the purpose of unit testing database operations. A structured representation aids in the processes of both automated software testing and software code generation, based on the actual software specifications. I describe the design of SpecDB, the underlying database that can hold the specifications required for unit testing database operations. Specifications can be fed directly into SpecDB, or, if available, the formal specifications can be translated to the SpecDB representation. An algorithm that translates formal specifications to the SpecDB representation is described. The Z formal specification language has been chosen as an example for the translation algorithm. The outcome of the translation algorithm is a set of machine-readable formal specifications. To demonstrate the use of SpecDB, two automated tools are presented. The first automatically generates database constraints from business rules represented in SpecDB. This constraint generator gives the advantage of enforcing some business rules at the database level for better data quality. The second automated application of SpecDB is a reverse engineering tool that logs the actual execution of the program from the code. By automatically comparing the output of this tool to the specifications in SpecDB, errors of commission are highlighted that might otherwise not be identified. Some errors of commission, including coding unspecified behavior together with correct coding of the specifications, cannot be discovered through black box testing techniques, since these techniques cannot observe what other modifications or outputs have happened in the background.
For example, black box, functional testing techniques cannot identify an error if the software being tested produced the correct specified output but also sent classified data to insecure locations. Accordingly, the decision of whether a software application passed a test depends on whether it coded all the specifications, and only the specifications, for that unit. Automated tools, using the reverse engineering application introduced in this dissertation, can thus automatically decide whether the software passed a test based on the provided specifications.
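The constraint-generation idea described in the abstract can be illustrated with a minimal sketch. The table, column, and business rule below are invented for illustration, and the flat `rule` dictionary is far simpler than SpecDB's actual Constraint and Predicate tables; SQLite stands in for the target DBMS.

```python
import sqlite3

# Hypothetical business rule stored as data, loosely in the spirit of SpecDB's
# predicate representation (names here are illustrative, not the real schema).
rule = {"table": "salary", "column": "amount", "operator": ">=", "value": "0"}

# Generate a database-level CHECK constraint from the stored rule, so the
# rule is enforced by the DBMS itself rather than by application code.
ddl = (
    f"CREATE TABLE {rule['table']} ("
    f"emp_id INTEGER PRIMARY KEY, "
    f"{rule['column']} REAL CHECK ({rule['column']} {rule['operator']} {rule['value']}))"
)

conn = sqlite3.connect(":memory:")
conn.execute(ddl)
conn.execute("INSERT INTO salary VALUES (1, 500.0)")      # satisfies the rule
try:
    conn.execute("INSERT INTO salary VALUES (2, -10.0)")  # violates the rule
    rejected = False
except sqlite3.IntegrityError:
    rejected = True                                       # DBMS rejected the row
print(rejected)
```

Enforcing the rule at the database level means every client of the table gets the same data-quality guarantee, which is the advantage the abstract claims for the constraint generator.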
Thesis:
Dissertation (Ph.D.)--University of South Florida, 2006.
Bibliography:
Includes bibliographical references.
System Details:
System requirements: World Wide Web browser and PDF reader.
System Details:
Mode of access: World Wide Web.
Statement of Responsibility:
by Rana Farid Mikhail.
General Note:
Title from PDF of title page.
General Note:
Document formatted into pages; contains 227 pages.
General Note:
Includes vita.

Record Information

Source Institution:
University of South Florida Library
Holding Location:
University of South Florida
Rights Management:
All applicable rights reserved by the source institution and holding location.
Resource Identifier:
aleph - 001796915
oclc - 156977818
usfldc doi - E14-SFE0001626
usfldc handle - e14.1626
System ID:
SFS0025944:00001


This item is only available as the following downloads:


Full Text
PAGE 1

Unit Testing Database Applications Using SpecDB: A Database of Software Specifications by Rana Farid Mikhail A dissertation submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy Department of Computer Science and Engineering College of Engineering University of South Florida Co-Major Professor: Abraham Kandel, Ph.D. Co-Major Professor: Donald Berndt, Ph.D. Kenneth Christensen, Ph.D. Miguel Labrador, Ph.D. Dewey Rundus, Ph.D. Date of Approval: April 26, 2006 Keywords: Software Testing, Formal Specifications, Reverse Engineering, Automated Code, Constraint Generation Copyright 2006, Rana Farid Mikhail

PAGE 2

DEDICATION

To my beloved late father, Dr. Farid Halim Mikhail; All I am, I owe to you. Your words of wisdom will remain in my heart and mind as long as I live.

PAGE 3

ACKNOWLEDGMENTS

There are many people who helped contribute to the successful completion of this dissertation. First I would like to thank my two advisors, Dr. Abraham Kandel and Dr. Donald Berndt, for providing me with invaluable assistance in performing my research; their feedback always served to improve the work in meaningful ways. I would also like to thank the members of my committee, Dr. Ken Christensen, Dr. Miguel Labrador, and Dr. Dewey Rundus, and also Dr. Geoffery Okogbaa, the chair of my dissertation defense, for their valuable comments and help. I deeply acknowledge Mrs. Judy Hyde, the office manager for NISTP, for her administrative help that enabled me to reach my goal. I am also deeply grateful to my family, for being there for me: my beloved husband, Emad, for his understanding, love and patience; my dear mother, Nadia, for her continuous enthusiastic support and interest in what I was doing; and my brother Ramy, for always believing in me. Finally, but most importantly, my deepest gratitude is for my late father, Dr. Farid Mikhail. Acquiring my doctorate degree was his dream. I am deeply saddened by the fact that he is no longer here to see me realize his wish. But I am sure that completing this dissertation will please him; I am honored to be able to accomplish this and rise to my father's expectations.

PAGE 4

i

TABLE OF CONTENTS

LIST OF TABLES.....v
LIST OF FIGURES.....vii
ABSTRACT.....ix
CHAPTER 1 INTRODUCTION AND LITERATURE SURVEY.....1
1.1 The Need For Testing.....1
1.2 Why Does Software Have Errors.....2
1.3 Software Testing Definitions.....4
1.3.1 Software Errors, Faults and Failures.....4
1.3.2 Software Testing.....5
1.4 When Should Testing Start in the Software Lifecycle.....6
1.5 Types of Testing Techniques.....9
1.6 UML.....10
1.7 Formal Specification Languages.....10
1.8 Current Technology and Available Software Testing Systems in Today’s Market.....11
1.8.1 Types of Automated Testing Tools.....11
1.8.2 Design and Visual Modeling Tools.....12
1.8.2.1 Rational Rose.....12
1.8.2.2 Oracle Designer.....13
1.8.3 Automated Testing Tools.....14
1.8.4 Disadvantage of Record and Play Back Test Automation Tools.....18
1.9 Related Literature.....18
1.9.1 AGENDA: A Test Generator for Relational Database Applications.....19
1.9.2 Executable Software Specifications.....21
1.10 Research Goals and Contributions.....22
1.11 Objectives and Accomplishments.....27
CHAPTER 2 SPECDB: A DATABASE REPRESENTATION OF SOFTWARE SPECIFICATIONS.....29
2.1 Introduction.....29
2.2 The Advantages of a Database Representation of Specifications.....30
2.3 The Design of the Specification Database, SpecDB.....32

PAGE 5

ii

2.4 Entities in the Design and How Each Represents Specific Software Requirements.....37
2.4.1 Operator and Valid Operands.....37
2.4.2 List_of_Tables and Table Description.....38
2.4.3 Types, Types_LOV and Type Restrictions.....39
2.4.3.1 List of Valid Values (LOV).....39
2.4.3.2 Other Restrictions.....39
2.4.4 Variables and Restrictions on Variable Values.....41
2.4.5 Procedure, Function_Declaration, and Subroutine_IP_Description.....42
2.4.6 Subroutine_IP_Instance, Designed_SR_Post_Conditions and Runtime_SR_Post_Conditions.....44
2.4.7 Dataflow, Triggers and Input_Output_Definition.....46
2.4.8 Assignment and Calculation.....48
2.4.9 Predicate.....50
2.4.10 Database Operations, Tables Involved in Database Operations.....52
2.4.11 Classes, Objects and Class Relations.....55
2.5 Conclusion.....56
CHAPTER 3 REPRESENTING FORMAL SPECIFICATIONS IN SPECDB: A TRANSLATION ALGORITHM.....58
3.1 The Translation Algorithm From Formal Specifications to SpecDB.....59
3.2 Assumptions and Restrictions.....60
3.3 Preparing the Formal Specifications for Translation.....60
3.4 Storing Formal Specification Statements into SpecDB.....61
3.4.1 Defining Schema Names and Operations.....61
3.4.2 Types of Schemas.....62
3.4.3 User-Defined Types.....63
3.4.4 Specifying Values for a User-Defined Type.....64
3.4.5 Defining Sets or Tables.....65
3.4.6 Function Declaration.....66
3.4.7 Subroutine Input Restriction.....67
3.4.8 Variable Definition and Subroutine Inputs.....68
3.4.9 Output Definition.....69
3.4.10 Setting Variable Restrictions.....70
3.4.11 Assignment Operation.....71
3.4.12 Assignment Operation and Function Call.....72
3.4.13 Assignment Operation for Sets.....74
3.4.14 Assignment Operation for a Set.....75
3.4.15 Inserting Records in the Database.....75
3.4.16 Output Table Definition.....77
3.4.17 Assigning a Variable to a Calculation.....78
3.4.18 Constraining the Values Populating a Table or Set.....80
3.5 Conclusion.....83

PAGE 6

iii

CHAPTER 4 AUTOMATED CONSTRAINTS GENERATOR: AN APPLICATION OF SPECDB.....84
4.1 The Design of the Additional Tables in SpecDB.....86
4.1.1 Using List_of_Tables and Extending Table_Description.....87
4.1.2 Adding the Constraints Table to SpecDB and Using the Predicate Table.....90
4.1.3 Other Tables Used from SpecDB, Type_Excluded_Ranges and Types_LOV.....94
4.1.3.1 Representing List of Values (LOV) Restrictions.....94
4.1.3.2 Representing Other Field Restrictions.....94
4.2 Automatically Generating Database-Level Constraints: The Algorithm.....95
4.2.1 Generating Simple Static Constraints – The Algorithm.....96
4.2.2 Generating Complex Dynamic Business Rules Enforcement Code.....100
4.2.2.1 Complex Business Rules Generation Algorithm.....102
4.3 Conclusion.....104
CHAPTER 5 A REVERSE ENGINEERING TESTING TOOL USING SPECDB.....105
5.1 Technique.....107
5.1.1 Input and Pre Conditions Classification and Categorization.....108
5.1.1.1 Examples of Inputs from the Database Displayed by the RE Testing Tool.....111
5.1.1.2 Examples of Inputs from the GUI Displayed by the RE Testing Tool.....112
5.1.2 Output and Post Conditions Classification and Categorization.....112
5.1.2.1 Examples of Outputs Displayed by the RE Testing Tool.....115
5.2 Case Study.....116
5.2.1 Problem Statement for the Vacation Salesman Commission Program.....116
5.2.2 Implementation.....117
5.2.3 Program Graph for the Vacation Salesman Commission Problem.....121
5.2.4 DD-Path Graph for the Vacation Salesman Commission Problem.....122
5.3 Examples.....123
5.3.1 Test Case 1.....124
5.3.1.1 Scenario.....124
5.3.1.2 Expected Output.....124
5.3.1.3 Expected GUI Output.....124
5.3.1.4 The RE Tool Display of the Inputs for Test Case 1.....125
5.3.1.5 Process and Actual Output.....126
5.3.1.6 Comparing Expected to Actual Output.....128
5.3.2 Test Case 2.....134
5.3.2.1 Process and Actual Output.....135
5.3.3 Test Case 3.....137
5.4 Conclusion.....139

PAGE 7

iv

CHAPTER 6 ENHANCING OTHER TESTING TOOLS USING SPECDB.....141
6.1 Using the Database Representation of SpecDB to Expand the Testing Domain.....142
6.2 Enhancing State Validation Tools Using SpecDB.....145
6.3 Expanding the Testing Scope Beyond Variables and Database States.....147
6.4 Testing Different Operation Types.....147
6.5 Conclusion.....148
CHAPTER 7 CONCLUSION AND FUTURE WORK.....149
7.1 A Comparison Between Testing Tools.....150
7.1.1 Test Cases and Scenarios.....150
7.1.1.1 Test Case 1.....150
7.1.1.2 Test Case 2.....151
7.1.1.3 Test Case 3.....151
7.1.1.4 Test Case 4.....152
7.1.1.5 Test Case 5.....152
7.1.1.6 Test Case 6.....153
7.1.2 Comparing the Ability of Testing Tools to Identify Errors.....153
7.2 Research Accomplishments at a Glance.....157
7.3 Conclusion and Recommendations.....158
7.4 Future Work at a Glance.....161
REFERENCES.....165
APPENDICES.....173
Appendix A: SQL Scripts for SpecDB Entities.....174
Appendix B: SQL Scripts for SpecDB Constraints.....191
Appendix C: PL-SQL Code for the Constraints Generator.....221
ABOUT THE AUTHOR.....End Page

PAGE 8

v

LIST OF TABLES

Table 1.1: Automated Testing Tools.....17
Table 4.1: Snapshot of SpecDB Table_Field_Description for Constraint 1.....91
Table 4.2: Snapshot of SpecDB Constraint Table for Constraint 1.....91
Table 4.3: Snapshot of SpecDB Predicate Table for Constraint 1.....91
Table 4.4: Snapshot of SpecDB Table_Field_Description for Constraint 2.....92
Table 4.5: Snapshot of SpecDB Constraint Table for Constraint 2.....92
Table 4.6: Snapshot of SpecDB Predicate Table for Constraint 2.....92
Table 5.1: Input and Pre Conditions Classification and Categorization.....111
Table 5.2: Examples of Inputs from a Database.....112
Table 5.3: Examples of GUI Inputs.....112
Table 5.4: Output and Post Conditions Classification and Categorization.....115
Table 5.5: Example of Outputs from the GUI.....115
Table 5.6: DD-Path Graph for the Vacation Salesman Commission Problem.....122
Table 5.7: The RE Testing Tool Displayed Database Inputs for Test Case 1.....126
Table 5.8: The RE Testing Tool Displayed GUI Input for Test Case 1.....126
Table 5.9: GUI Output for Test Case 1 – Part 1.....127
Table 5.10: GUI Output for Test Case 1 – Part 2.....127
Table 5.11: GUI Output for Test Case 1 – Part 3.....127
Table 5.12: GUI Output for Test Case 1 After Fault Injection.....130
Table 5.13: SpecDB Dataflow Table Snapshot – Original Specifications.....131

PAGE 9

vi

Table 5.14: SpecDB Var Table Snapshot – Original Specifications.....131
Table 5.15: SpecDB Input_Output_Definition Table Snapshot – Original Specifications.....132
Table 5.16: SpecDB Var Table Snapshot – RE Tool Implementation Specifications.....133
Table 5.17: RE Testing Tool Display of the GUI Output for Test Case 2.....135
Table 5.18: SpecDB Var Table Snapshot – RE Tool Specifications – Test Case 2.....136
Table 5.19: SpecDB Input_Output_Definition Table Snapshot – RE Tool Implementation Specifications – Test Case 2.....137
Table 5.20: Database Output for Example 3.....139
Table 7.1: A Comparison Between How Testing Tools/Techniques Handle Test Cases 1-6.....154
Table 7.2: A Comparison Between the Capabilities of Testing Tools and Techniques.....156

PAGE 10

vii

LIST OF FIGURES

Figure 1.1: The Relationship Between a Program Specification and its Implementation.....5
Figure 1.2: Dissertation Contribution: Entities and Chapter Layout.....27
Figure 2.1: An Overall Diagram of SpecDB’s Design.....34
Figure 2.2: A Focused Diagram of the General SpecDB Design (Left Section).....35
Figure 2.3: A Focused Diagram of the General SpecDB Design (Right Section).....36
Figure 2.4: Representing Operators and Operator Overloading in SpecDB.....37
Figure 2.5: Representing User Tables and Sets and their Description in SpecDB.....38
Figure 2.6: Representing Types, Type Restrictions and Types Valid Values in SpecDB.....40
Figure 2.7: Representing Variables, Variable Restrictions and Variable Valid Values in SpecDB.....42
Figure 2.8: Representing Units (Sub-routines) in SpecDB.....43
Figure 2.9: Representing Sub-routine Instances and Post Conditions in SpecDB.....45
Figure 2.10: Representing Dataflow, Triggers and System Input and Outputs in SpecDB.....48
Figure 2.11: Representing Assignment Operations and Calculations in SpecDB.....49
Figure 2.12: Representing Predicates and Conditions in SpecDB.....51
Figure 2.13: Representing Database Operations in SpecDB.....54
Figure 2.14: Representing Object Oriented Classes and Objects in SpecDB.....56
Figure 3.1: AddBirthday Schema.....61
Figure 3.2: Report, User-defined Type Specification.....64

PAGE 11

viii

Figure 3.3: FindBirthday Schema.....69
Figure 3.4: Success Schema.....72
Figure 3.5: InitBirthdayBook Schema.....75
Figure 3.6: Remind Schema.....78
Figure 3.7: RAddBirthday Schema.....79
Figure 4.1: SpecDB Original Table_Description.....87
Figure 4.2: Table Field Description Table.....88
Figure 4.3: Constraint Table.....91
Figure 4.4: Representing Complex Constraints in SpecDB.....93
Figure 4.5: Representing Types, Extending the Initial Representation.....95
Figure 5.1: Program Graph for the Vacation Salesman Commission Problem.....121
Figure 5.2: DD-Path Graph for the Vacation Salesman Commission Problem.....122
Figure 5.3: RE Tool Model Diagram.....123
Figure 5.4: The Expected GUI Output for Test Case 1.....125
Figure 5.5: GUI Output for Test Case 2.....135
Figure 6.1: Representing AGENDA Data in SpecDB.....144

PAGE 12

ix

UNIT TESTING DATABASE APPLICATIONS USING SPECDB: A DATABASE OF SOFTWARE SPECIFICATIONS

Rana Farid Mikhail

ABSTRACT

In this dissertation I introduce SpecDB, a database created to represent and host software specifications in a machine-readable format. The specifications represented in SpecDB are for the purpose of unit testing database operations. A structured representation aids in the processes of both automated software testing and software code generation, based on the actual software specifications. I describe the design of SpecDB, the underlying database that can hold the specifications required for unit testing database operations. Specifications can be fed directly into SpecDB, or, if available, the formal specifications can be translated to the SpecDB representation. An algorithm that translates formal specifications to the SpecDB representation is described. The Z formal specification language has been chosen as an example for the translation algorithm. The outcome of the translation algorithm is a set of machine-readable formal specifications.


To demonstrate the use of SpecDB, two automated tools are presented. The first automatically generates database constraints from business rules represented in SpecDB. This constraint generator gives the advantage of enforcing some business rules at the database level for better data quality. The second automated application of SpecDB is a reverse engineering tool that logs the actual execution of the program from the code. By automatically comparing the output of this tool to the specifications in SpecDB, errors of commission are highlighted that might otherwise not be identified. Some errors of commission, such as coding unspecified behavior alongside correct coding of the specifications, cannot be discovered through black box testing techniques, since these techniques cannot observe what other modifications or outputs have happened in the background. For example, black box, functional testing techniques cannot identify an error if the software being tested produced the correct specified output but, moreover, sent classified data to insecure locations. Accordingly, the decision of whether a software application passed a test depends on whether it coded all the specifications, and only the specifications, for that unit. Automated tools, using the reverse engineering application introduced in this dissertation, can thus automatically decide whether the software passed a test based on the provided specifications.


CHAPTER 1

INTRODUCTION AND LITERATURE SURVEY

1.1 The Need For Testing

One of the common but very true jokes about the software industry tells of a software entrepreneur proudly stating that if the automobile industry had advanced in the past decade at the same rate as the computer industry, we would all be driving $25 cars that run 1,000 miles per gallon of gasoline. To which an automobile CEO replies that if cars were like today's software, they would crash twice a day for no reason, and when you called for service, they would tell you to shut down the engine, as well as all windows, get out of the car, lock it, and start over!

Indeed, in the past decade, computer hardware has advanced significantly, offering much higher processing capability and speed. Consequently, the complexity and size of software have increased exponentially too. Software is vital to almost every industry, and we are becoming ever more dependent on our computers to perform everyday tasks. As the complexity and size of software grow, so does the need to ensure the reliability and precision of software outputs. Software errors can be fatal in numerous industries, including the airline, medical and military industries. Billions of dollars are spent on software, software testing and maintenance annually by the DoD. Software errors result in hundreds of millions of dollars in losses annually from the DoD alone [1] (software errors are defined in Section 1.3.1).


Accordingly, a lot of attention is now directed towards assuring software reliability, accuracy, security and usefulness. It is crucial to dedicate substantial resources, including time, effort and budget, during the software lifecycle to verify that the end product is useful, usable and accomplishes the tasks it was built to perform.

1.2 Why Does Software Have Errors

Software errors result from many causes, from misunderstanding of the requirements to poor design, and from tired or careless programmers to rushed deadlines [2]. Often testing is delayed until late in the software development life cycle, and little time is given for testing, debugging, and re-testing. Coverage gaps are also a problem; these occur when not all parts of the software are tested. Some software failures happen due to a specific scenario: a sequence of events that might have been tested individually but not in that particular sequence. The operating environment, various connected hardware peripherals, other concurrently running software and different operating systems can also be sources of software failures [4].

It has been proven that complete testing is impossible [3], meaning that it is impossible to test all values of each possible input, valid and invalid, with their respective combinations, for a software application of considerable size and complexity; the number of test cases would be astronomical and the time needed for testing and verification would be almost endless. For


example, consider a very simple program that takes only three inputs: an integer Age between 0 and 99, a character Initial between A and Z, and another integer representing a Month between 1 and 12.

Age: integer [0..99], Initial: char [A..Z], Month: integer [1..12]

There are 100 possible valid values for Age, 26 valid values for Initial, and 12 valid values for Month. Accordingly, there are 100*26*12 = 31,200 possible combinations of valid values alone for this simple program. If complete testing were required, 31,200 test cases would have to be carried out to test the operation of this program on valid values alone. Even if the 31,200 test cases were run, this would not be considered complete testing. The number of test cases required for complete testing becomes much greater when invalid values are added. Invalid values could appear in one, two or all three inputs. The number increases exponentially if we want to create test cases for all these cases: where only one of the three variables contains an invalid value, where a combination of two of the variables is invalid, or where all three are. For example, since the Age variable is an integer, one can enter any integer from the negative integer boundary to the positive integer boundary. One can also try to enter characters, real numbers or any other value for the Age input and test the program's behavior and how it handles exceptions. The possibilities and combinations of all those cases for this very simple three-input program are thus almost endless. It is also impossible to completely test all values of real numbers, even in a small range, or of strings representing names, for example.
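The count of valid combinations above can be checked mechanically; a minimal sketch, using the Age, Initial and Month domains from the example:

```python
from itertools import product

# Valid domains from the example: Age 0..99, Initial A..Z, Month 1..12.
ages = range(0, 100)                                        # 100 values
initials = [chr(c) for c in range(ord('A'), ord('Z') + 1)]  # 26 values
months = range(1, 13)                                       # 12 values

# Every combination of valid inputs is a distinct test case.
valid_cases = list(product(ages, initials, months))
print(len(valid_cases))  # 31200
```

Even this exhaustive enumeration covers only the valid region of the input space; adding invalid values multiplies the count further, which is the point of the argument above.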


Some transient errors are revealed in complex distributed or embedded control systems only after prolonged usage. This may be a result of a memory overflow, or of minute system clock discrepancies that accumulate to a noticeable difference after prolonged usage. Such errors are very difficult to recognize [77], and require high volume system testing or long sequence testing techniques, and extended random regression testing [79]. Hence there is a need for better software testing techniques, which focus on testing parts of the input domain yet attain good coverage to verify software quality.

1.3 Software Testing Definitions

1.3.1 Software Errors, Faults and Failures

A software error is a mismatch between the program and its specifications, provided that the specifications are accurate and complete. A software application is considered erroneous if it fails to be useful. A software fault is defined as the execution of an error [5]. Faults of commission are those resulting from incorrectly coding specifications, or from coding unspecified requirements. Faults of omission result from failing to implement specified requirements. A software failure happens when a fault of commission is executed while running the software application [5]. From Figure 1.1, the goal of testing is to make sure that

S ∩ P = S = P

If the above equation is reached in testing a software application, the resulting product is very well tested and functional, provided that the specifications are correct, accurate, complete and do not contain any contradictions.
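The set relationship above can be made concrete by treating the specification S and the program behavior P as sets of behaviors; a small sketch, in which the behavior names are hypothetical placeholders:

```python
# S: specified behaviors; P: implemented (programmed) behaviors.
# The behavior names are hypothetical illustrations, not from the text.
S = {"add_record", "delete_record", "print_report"}
P = {"add_record", "delete_record", "log_to_remote_server"}

faults_of_omission = S - P      # specified but never programmed
faults_of_commission = P - S    # programmed but never specified

# Goal of testing: S ∩ P = S = P, i.e. both difference sets are empty.
fully_correct = not faults_of_omission and not faults_of_commission
print(faults_of_omission)    # {'print_report'}
print(faults_of_commission)  # {'log_to_remote_server'}
print(fully_correct)         # False
```

Note that the unspecified `log_to_remote_server` behavior is exactly the kind of fault of commission that black box testing cannot observe, since the specified outputs may still all be correct.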


Figure 1.1: The Relationship Between a Program Specification and its Implementation [5]
[Figure: Venn diagram of the specification S (expected) and the program behavior P (observed); the region S - P holds faults of omission (specs that have not been programmed), S ∩ P the behaviors both specified and programmed, and P - S faults of commission (implemented behaviors that have not been specified).]

1.3.2 Software Testing

Software testing is becoming an important area of research aimed at producing more reliable systems and minimizing programming errors [78]. Since it is impossible to completely test how a software application performs on all values of all of its inputs, as described in Section 1.2, a good strategy has to be followed in order to choose test cases that test the software without coverage gaps. However, there is much more to good testing than running a program several times to check whether it functions correctly [6]. In order to systematically test software, we need to model the environment in which the software is intended to run, select test scenarios that attain good system coverage, execute and evaluate the test cases, correct the errors, and measure the testing progress [4]. Some faults may result from correcting other errors, and hence regression


testing is often used, which entails re-running all test cases after an error has been corrected, to verify that the fix did not inject any errors into a part of the software that was previously running correctly. Software testing has been defined in many ways. Some authors define it as the process of running a software application to verify its correct execution of its valid and complete specifications, and to verify that it runs correctly in the environment it was created for [5,86]. Other authors argue that test scenarios that do not reveal errors or failures are also of great importance, since with such scenarios the testing team is assuring that the system actually implemented the required specifications correctly [6]. Software can also fail by executing too slowly or using too much memory; reasonable execution speed and memory usage are obvious requirements that might not be in the specification documents. A number of good references for software testing are [5,6,553]. Accordingly, it is crucial to create test cases that reflect good coverage of all areas of the software and determine its usability.

1.4 When Should Testing Start in the Software Lifecycle

One of the most important and useful practices in software engineering is the incorporation of software testing from the very beginning of the software lifecycle. The software life cycle is described in more detail in [7,8,9]. Waiting until the software is implemented and only then starting to test was proven in the last decade, by many authors and software team managers, to be a very costly and difficult task to manage and carry out. Testing should start as soon as the analysts finish writing the first draft of the specification document describing what the required system should be built to accomplish. Testers can then begin identifying test cases, which are called system level test cases. System level test cases are very easily


interpreted and describe, in a broad spectrum, what the main functionalities of the software should be. By reading the system level test cases, both the development team and the end users can reach an early agreement and a common understanding of the functionality of the software under construction. After agreeing on the specifications and the system level test cases, the designers can then start defining the details of the system, answering questions ranging from as broad as what programming language should be used down to the minute details of the functionality of each subroutine. Testers have a very important role to play during the design phase. During that phase testers should generate the integration level and unit level test cases. Integration level test cases are targeted towards testing the correctness of the interaction between the different sub-routines (methods, functions, procedures, etc.) and the correctness of the interaction between the different modules or subsystems of the software under construction [5]. Once implementation starts, the programmers should have both the design and the test cases that will be exercised against their implementation. Having the complete set of test cases that cover all possible outputs of the functionality a programmer is implementing will help the programmer understand the design and implement it correctly. There is a strong possibility that the programmer will make sure the implemented code passes the given tests before submitting the code for integration with the rest of the system. There are many advantages to the above approach of starting testing early in the software lifecycle. In this situation, when the programmer is given both the design and the test cases, testing starts very early in the software lifecycle. A test case will be executed multiple times to check that the


system passes it correctly at the unit level, as well as at the system level after integration with the rest of the system modules. There will be no ambiguity in what exactly the programmer should implement; even if the programmer misinterprets the design and specified functionality, reading the test cases and their respective expected outputs will clear much of the ambiguity, and will assure a higher level of software quality compared to when only a design is given to the programmer. Once the implementation phase is over, the majority of the testing phase will have been carried out already. The cost of integration and system level testing will be minimal compared to that of delaying testing until the last phase in the software lifecycle. Also, the overall quality of the implemented software will be much higher than might result given the same resources of time and money but leaving testing until after implementation. It is worth noting that the waterfall model of software development, where all the specifications of the software are known from the beginning, is not what is generally followed in real world software development environments. Test cases should be created for all the specifications as they are gathered for the different builds or versions of the software under construction. Testing should be carried out for each of those versions to make sure that the specifications collected thus far are all implemented correctly. Specifications should be tested to make sure that they do not contradict previously specified requirements. Google, Microsoft, and Sun Microsystems are investing $7.5 million to fund research at UC Berkeley on alternative software development models to the waterfall model, with higher reliability and faster deployment [101].


In conclusion, it is essential that the testing process start from the beginning of the software lifecycle and be concurrent with all its phases, as a better software engineering practice.

1.5 Types of Testing Techniques

There are two approaches to identifying test cases: functional and structural. Functional testing, also often referred to as black box testing, is the process of testing the executable program without any reference to the source code of the software. It is called functional since we treat the software as a function: we provide inputs and observe its outputs. Functional test cases can be reused even when the code changes, is translated into another language, or its structure differs. Another advantage of functional testing is that the test cases can be created during the early phases of development, even before implementation starts, since it is based on the specifications. There are a number of functional testing techniques, including boundary value analysis, equivalence class testing, and decision table-based testing. The reader can refer to [5] for a good coverage of those three techniques. Other black box testing techniques are domain driven, stress driven, specification driven, risk driven, random/statistical, function, regression, scenario, use case, user and exploratory testing. The reader can refer to [7] for an explanation of these techniques. In structural testing, on the other hand, also referred to as white or clear box testing, test cases are created to execute specific branches and paths in the source code. Structural testing techniques include decision-to-decision and define-use testing. The advantage of structural testing is its ability to confirm that all paths in the source code have been executed by test cases, and thus structural techniques can be used as a coverage metric to guide other functional testing techniques.
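Boundary value analysis, the first of the functional techniques listed above, can be sketched for the three-input example from Section 1.2; this is a simplified illustration, not a tool from the text:

```python
def boundary_values(lo, hi):
    """Classic boundary value analysis: min, min+1, nominal, max-1, max."""
    return [lo, lo + 1, (lo + hi) // 2, hi - 1, hi]

# Domains from the earlier example: Age 0..99, Month 1..12.
age_values = boundary_values(0, 99)
month_values = boundary_values(1, 12)

# Single-fault assumption: vary one variable at a time while holding
# the others at nominal values.
nominal_age, nominal_month = 50, 6
test_cases = [(a, nominal_month) for a in age_values] + \
             [(nominal_age, m) for m in month_values]
print(age_values)       # [0, 1, 49, 98, 99]
print(len(test_cases))  # 10
```

Ten cases instead of 31,200 is exactly the trade-off named earlier: test only parts of the input domain while still probing the values where faults cluster.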


1.6 UML

To better understand some types of today's automated testing tools that are targeted towards testing in the requirements definition and design phases, one should understand the Unified Modeling Language (UML). The reader can refer to [11,12,13,14] for UML syntax and diagrams. The majority of software development organizations today use UML in describing a system design. System designers represent the design of a system using UML, a set of diagrams that both technical and non-technical system end users can understand and use as a basis for discussions and reassurance that the software under development is what the end-users actually have in mind. Those UML diagrams are then used as a basis for more detailed system design, implementation and testing. In their book, Business Modeling with UML [11], Eriksson and Penker describe the nine different UML diagrams and the use of each for business modeling. They state that 'UML standardizes notation for describing a process, but it doesn't standardize a process for producing those descriptions' [11]. This leads to the first research objective, namely, to create a standard representation for storing UML or software specifications represented by a formal specification language like Z, since UML diagrams are not machine interpretable.

1.7 Formal Specification Languages

Formal specifications, also known as formal methods, are a mathematical-like notation used to describe the specifications, i.e. what a system should execute, without necessarily going into the details of how the requirements should be implemented. Formal specifications were developed primarily to produce accurate, unambiguous software specifications to


replace informal natural language specifications for better engineered software. The reader can refer to [15-24,65] for a detailed description of formal methods, their use, semantics and syntax.

1.8 Current Technology and Available Software Testing Systems in Today's Market

1.8.1 Types of Automated Testing Tools

Depending on where in the software lifecycle a development team would like to introduce an automated testing tool, the types of tools can be divided into tools for the analysis and specifications gathering phase, the requirements definition phase, the design phase, the implementation phase, and finally the quality assurance and debugging phase. There are also metrics and testing organization tools to aid in the process of managing the testing effort. Some tools are targeted at specific purposes, like test data generators [57], simulators and prototyping tools. There are dozens of automated software testing tools in the market today. The tools range in their functionality from functional black box testing to source code analysis. A good list can be found at the Aptest: Software Testing Resources and Tools website [25]. A lot of the available tools record and play back test scenarios the tester inputs while doing exploratory or scenario testing. Regression testing is the most commonly automated testing technique.


Some of the tools are targeted towards detecting run-time errors and memory leaks. Among those tools are Purify from Rational Software, C Verifier from PolySpace, SWAT from Coverity, Insure++ from Parasoft and GlowCode from Electric Software. Other tools are test case organization tools, which help in the organization of the testing process. There are a number of automated test data generators, and there is a lot of research into the automation of test data creation [69-72,80-85,87-95]. Test data generators generate data to simulate real life data in operational systems for the purpose of testing. Simulators of test environments are also available, as well as automated stress and load testing tools. The following sections take a more detailed look at some of the features and underlying technology in several of the most widely used tools.

1.8.2 Design and Visual Modeling Tools

Design and visual modeling tools are mainly used in the analysis and design phases. These tools help the development team as well as the end users to reach a common understanding of the functionality of the system under construction. Some of the available tools also generate the system backbone from the design produced by the visual modeling tools, including the database tables, classes, modules, etc. The programmers can then write the code in the designated locations. Rational Rose and Oracle Designer are two of the common design and visual modeling tools available in the market.

1.8.2.1 Rational Rose

Rational Rose provides a channel to communicate a system's architecture between the various parties of the development team, from analysts and designers to developers


and other team members. Rational Rose allows the designers to visually map system architecture to unified models and diagrams using models similar to UML. The diagrams, at their most detailed level, only define the interaction between the different parts of the software. You can write code in the subroutine definition area, and thus when you choose to generate the code from Rational Rose, the code that you wrote will also exist where it should in your application. However, Rational Rose does not go into the details of the design or interpret the functionality of the software as it does with the higher level design. Accordingly, the testing tools that are now available and produced by Rational do not depend on the specified functionality, but on the backbone, the actual implementation and user scenarios.

1.8.2.2 Oracle Designer

Oracle is one of the leading companies in database and information systems solutions. One of the products offered by Oracle is Oracle Designer. As stated by Oracle, Designer is "a complete toolset to model, generate, and capture the requirements and design of enterprise applications" [26]. Oracle Designer has many modeling tools to help in the design and analysis capture of a database application. Among the tools included in Designer are the Process Modeler, Dataflow Diagrammer, Function Hierarchy Diagrammer, Entity Relationship Diagrammer, Design Editor, Dependency Manager and Matrix Diagrammer. None of the above tools captures the detailed design and the functionality of the application in a way that could be interpreted by an automated testing tool. The diagrams and matrices are used to communicate system requirements and design between the different development team members, but not


for an automated testing tool that can intelligently understand the specifications and design of the software under construction and accordingly generate good test cases to test it. In the Design Editor, one can design the forms and the components of the application's windows, etc. One can also determine what type of triggers run, and what triggers them, but the code and the meaning of the implementation in those triggers can either be written in plain English, as comments for the programmers, or be written in actual PL-SQL code. However, what is written in the triggers or stored procedures in Designer, whether it is the actual code or a plain English description, is stored as is, so that it exists after the user chooses to generate the application from the stored design in Oracle Designer. Designer automatically generates a script for the specified database tables, constraints, modules for the application, and also forms, triggers, etc. Designer thus stores the structure of the application, whether the database schema architecture or the higher level process interaction, application dataflow and user interface. Accordingly, Oracle Designer lacks the feature proposed by this research: a well defined representation of the detailed design of the application under construction.

1.8.3 Automated Testing Tools

It was essential to survey the current automated testing tools in today's market in order to identify where improvements can be made. The conclusions reached after surveying the market guided much of this research. The products of the most popular vendors of


automated testing tools are discussed in this section, including Mercury Interactive, Rational Software, Segue and Compuware. In the United Kingdom, Mercury Interactive holds at least 50% of the market share in software testing tools. They have a number of automated test tools, including TestDirector [45], WinRunner [46], QuickTest [47] and LoadRunner [48]. Rational Software offers the most complete lifecycle toolset, including testing for the Windows platform. They are the market leaders in object-oriented development tools. The leading testing tools available from Rational Software include Rational Robot, Functional Tester, Performance Tester and Rational Purify. Rational Robot is a test management tool that gives test cases for objects such as lists, menus, and bitmaps. It also provides test cases specific to objects in the development environment. As described by Rational [41], it is a "General purpose test automation tool for client/server applications". Functional Tester, on the other hand, offers "automated functional and regression testing of Java, Web and VS.NET WinForm-based applications", as described by Rational; it aids the process of test script writing and managing [42]. Performance Tester verifies application response time and scalability; it aids in emulating a real time web-based environment where many users are concurrently using the software. Rational Purify aids in the detection of memory leaks and memory corruption for Linux and UNIX [44]. Compuware's major product is QARun. Similar to other automated testing tools, QARun depends on scenario testing, recording the user's actions and respective system


responses into test scripts to test some of the application's functions [52]. Compuware also has a load testing tool. Segue is another organization that provides software testing tools comparable to those of Mercury Interactive and Rational Software. Among the products that it offers are SilkCentral Test Manager, SilkTest 5, and SilkPerformer. Comparable to Rational Robot, SilkCentral is a test management and organization tool [49]. SilkTest is comparable to QuickTest and Rational Functional Tester; it is a regression testing tool that records and plays back the test cases [50]. SilkPerformer is an automated load, stress and performance testing tool [51], comparable to Rational's Performance Tester and Mercury's LoadRunner. All four major automated testing tool vendors have products similar in functionality to those of their competitors. The tools available are mainly categorized into four major types; namely, test management, capture and replay, functional and regression testing, and performance or stress testing tools. Table 1.1 summarizes the functionality of the different testing tools. Test management tools, like Mercury's TestDirector, Rational TestManager, SilkCentral by Segue and QACenter by Compuware, all organize the testing process. Test management tools, as described by Mercury, thus provide a consistent, repeatable process for gathering requirements, planning and scheduling tests, analyzing results, and managing defects and issues [45]. Testers can create test cases and log them in the test management tools to organize the testing process.


Table 1.1: Automated Testing Tools

Organization        | Test Management | Capture and Replay | Functional & Regression Testing | Performance & Stress Testing
--------------------|-----------------|--------------------|---------------------------------|-----------------------------
Mercury Interactive | TestDirector    | WinRunner          | QuickTest                       | LoadRunner
Rational Software   | TestManager     | Rational Robot     | Functional Tester               | Performance Tester, Purify
Segue               | SilkCentral     | SilkTest           | SilkTest5                       | SilkPerformer
Compuware           | QACenter        | QARun              | TestPartner                     | QACenter

Capture and replay tools usually automatically generate test scripts from test scenarios captured during a tester's navigation through the software. Examples of the commonly used capture and playback tools are Mercury's WinRunner, Rational Robot, SilkTest by Segue, and QARun by Compuware. Functional and regression testing tools are also common tools in software test automation. Regression testing tools run test scripts after issues have been resolved, to determine whether the fixes caused other problems in parts of the software that were previously running correctly. Examples of regression and functional testing tools are Mercury's QuickTest, Rational Functional Tester, Segue's SilkTest5, and Compuware's TestPartner. Performance and stress testing tools emulate an environment with thousands of users concurrently using the system. Examples of performance and stress testing tools are Mercury LoadRunner, Rational Performance Tester, Segue SilkPerformer and Compuware QACenter. Rational Purify checks for memory leaks.


1.8.4 Disadvantage of Record and Playback Test Automation Tools

In Section 1.8.3, the functionality of some of the most common software testing tools was outlined. However, none of those tools automatically and intelligently creates test cases based on an understanding of what the software under construction is intended to execute. Some of the available tools record and play back test scenarios produced by the navigation of testers through the product while testing it. In this case, coverage relies on how well the tester or user goes through the different, and possibly numerous, system scenarios. The system responses have to be judged by the tester, who determines whether the system passed the test or not based on his understanding and knowledge of the specifications. The tester thus also acts as the oracle who decides whether the system passed the specific test generated by the scenario in question.

1.9 Related Literature

A number of researchers in the field of software testing emphasize the need for, and the advantages of, a machine readable mapping of software specifications [31, 34, 38]. Other researchers describe techniques to compare software to its formal specifications as a good means of software testing [35,39]. Other techniques, such as those found in [37,40], could be deployed to work efficiently and automatically, given a software's specification in a machine-readable format such as that described in this research. In order to automatically test database operations, Chays et al. in [27, 28] derived the software specifications from the database schema, the program code, and a tester inputting relevant heuristics. The authors noted that it is not possible to fully automate the testing and


verification process unless the formal specification is given in full, which is the focus of this research. A similar approach to testing automation was illustrated in [36], in which the authors depended on the embedded SQL statements programmed in the code, assuming them to be the correct source upon which to generate test cases. The weakness of both approaches lies in the fact that the test cases are generated from the resulting software, not from the specifications, as would be the result of adopting the techniques in this research. In [33], Hung expanded on the work of Chays et al. in [28], designing an input generator for database applications. Hung emphasized the need to compute the expected output automatically. Similar to Chays et al. in [28], Hung generated test cases based on the tester's values for the system inputs in their respective equivalence classes. The test cases were not generated automatically based on the specifications. The research in [58,59,60,61] did not address the problem of automatically checking the results of the tests to verify whether the results are correct compared to the specifications. A number of researchers investigated the automation of testing web database applications, including [62]. Regression testing of database applications was researched in [68]. Define-use path testing was used in [63] to test database driven applications. In [64], Cabal and Tuya tested the SELECT operation in database applications. Automated test case generation techniques were introduced in [32] to test the GUI of applications.

1.9.1 AGENDA: A Test Generator for Relational Database Applications

Chays, Deng and Frankl carried out research titled "AGENDA: A Test Generator for Relational Database Applications" [27,28,54,55,56]. The tests generated in their


research were targeted towards database operations or application queries (insert, delete, update and select statements). There were no test cases targeted at the application code itself, in which these operations are embedded. The tests generated in Chays' research depend on database input and output states, and do not compare the input or output to those specified by the design. The research does not tackle other types of operations where the output database state is not changed, but files, printed reports, magnetic card programming, etc. are the forms of output. Real-world applications that use numerous types of inputs and outputs besides database state changes will benefit only partially from AGENDA.

AGENDA depends on the tester to specify whether the test passed or not; accordingly, the tester is the oracle for AGENDA. AGENDA generates tests based on application code, not the specifications and design of the software, and thus would not recognize errors of omission (parts that are designed but never programmed). When the tests are derived from the actual code, the tool will also not recognize errors of commission, where the programmers wrote parts of code that were never included in the design document. Finally, AGENDA uses two types of functional testing techniques, namely boundary value analysis and equivalence class testing. Not all faults in the software are revealed by these two types of testing alone. Boundary value analysis and equivalence class testing are good techniques to use in combination with other techniques to best test the software.
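The two functional techniques AGENDA relies on can be illustrated with a minimal sketch. The function names and the choice of representatives below are illustrative assumptions, not AGENDA's actual algorithm: for an input specified as an integer range, equivalence class testing picks one representative per class (invalid-low, valid, invalid-high), while boundary value analysis picks values at and around the edges.

```python
# Hedged sketch of boundary value analysis and equivalence class testing
# for an input specified as an integer range [lo, hi].

def equivalence_class_values(lo, hi):
    """One representative per equivalence class: below, inside, above."""
    return [lo - 1, (lo + hi) // 2, hi + 1]

def boundary_values(lo, hi):
    """Values at and adjacent to the range boundaries."""
    return [lo - 1, lo, lo + 1, hi - 1, hi, hi + 1]

# Example: an input specified as 1..12 (e.g. a month number).
print(equivalence_class_values(1, 12))  # [0, 6, 13]
print(boundary_values(1, 12))           # [0, 1, 2, 11, 12, 13]
```

As the sketch shows, both techniques derive test inputs purely from the declared range; neither can reveal a fault whose trigger lies away from the chosen representatives, which is why they are best combined with other techniques.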


1.9.2 Executable Software Specifications

One of the major goals of the dissertation is to represent software specifications in a way that they can be reasoned with by automated software testing tools, to generate intelligent test cases. Intelligent test cases are a minimal set of useful test cases that collectively cover all the software and are capable of identifying errors. Accordingly, it was necessary to review the literature on automated software specifications. In [96], Corning describes how Microsoft internally tests its systems using a technique that automates the execution of human-entered software test case specifications. Accordingly, what is automated is the generation of test cases based on the specifications of each of the test cases. In other words, the test cases are not created using a computationally intelligent technique to generate interesting and important test cases, because the software specifications are not executable (though the test case specifications are executable). In this situation, the testers are the ones who create test cases, which can be automatically re-run with different parameters. The parameters of each test case are the test case specifications, or test specifications in short. Those parameters are executable. Accordingly, automated tools can read the specification of a test case, including the input variable names, possible values, pre-conditions, and other factors, and create multiple test cases that match those specifications. Those multiple test cases can then be run automatically.

Formal specifications are not readily executable [97]. Fuchs believes that formal specifications should be executable if possible [98] and argues against Hayes and Jones


in [97], who wanted to make the distinction between human-readable specifications and machine-executable prototyping. Fuchs also indicates that the executable specifications in his research [98] are at the same high level of abstraction; the aim of the possibly executable specifications is rapid prototyping and facilitating the validation process. In the early nineteen-nineties, automated testing and automated test case generation were not the purpose of those techniques for executing specifications; the arguments were mainly targeted towards rapid prototyping, not automated testing. In [99, 100], the authors describe a mechanism to transform formal specifications into executable code. The result is not, however, a means of representing the specifications to an automated tool that can reason about them and intelligently generate test cases from them.

1.10 Research Goals and Contributions

After surveying today's market for available software testing tools, none were found to generate tests independently from the specifications. The available tools are either code-coverage tools or capture/playback tools. They all rely on the implemented software, either to create tests based on white box testing and the implemented code in question, or on the testers to provide good scenarios or test cases and their corresponding expected output. Accordingly, one can fairly make the assumption that currently the term "automatic testing" is misused to refer to automatic test execution and often misapplied to techniques that only partially automate portions of the testing process. After surveying the available tools, it was found that none of the tools in fact automatically generates test cases for the software; meaning that none of the available tools creates test cases based on its ability to interpret


what the software under construction is implemented to achieve. The only tools close to automatically generating test cases are those for stress testing [29], which create test cases to see how the software will react when, for example, a huge number of users log in at the same time, or, for database systems, create a huge database with very big tables. These tools do not test the actual functionality of the software and whether it performs its tasks correctly or not; they just generate and execute stress tests. Basically, none of the available tools can understand the specifications and the software design and accordingly create its own set of test cases that intelligently test and achieve good test coverage of the software. The main reason such an intelligent test case generator does not exist is that there is no means of representing the specifications, requirements, and design in such a way that they can be interpreted, understood and used by an intelligent tool to generate test cases. Creating this machine-readable and machine-interpretable representation of a system's specifications and design is one of the main goals of this research.

The research contributions are discussed in this section. In Section 1.11, the contributions are listed together with the location in the dissertation where each contribution is discussed. The dissertation research goal is thus to create a unified, machine-readable representation of a software application's units' design and specifications, whereby an automated testing tool could interpret and understand that design and those specifications, automatically generate intelligent test cases to test the specified units of the software, and determine whether the actual output matches the expected specified output; and to use the machine-readable specifications in automated testing tools to demonstrate its use. One of those


tools automatically generates constraints and enforces business rules for database applications at the database level. Another of the automated testing tools uses the specification representation to show errors of commission using a reverse engineering approach.

An automatic test case generator using the unified specification representation (SpecDB) will guarantee good coverage of the software, since it can generate test cases for all the specified functionalities of the software, utilizing the specifications, not the implementation. Such an intelligent test case generating tool can easily find errors of omission, where the programmers did not implement parts of the software. It can also spot errors of commission caused by programming unspecified functionality in the software. Both of the latter types of errors would never be spotted if the tests were not generated based on the program specifications and design. Finally, such an intelligent test case generator can spot faults of commission caused by programming a specified functionality incorrectly.

The goal of this dissertation should not be misinterpreted as a requirements verifier. Requirements verifiers test the requirements document for clarity, consistency and completeness [30]; however, they do not check the actual implemented software against these requirements. A first step, before starting to represent the specifications and the design in a way interpretable by a testing tool, is to use a requirements verifier to check the accuracy of the specifications and the requirements [66]. Hence, the goal of the dissertation is to represent (formal) software specifications in a machine-readable format to aid in the testing and test result verification processes, and to create useful testing tools that make use of the


machine-readable software specifications to intelligently guide the testing process and assure better software quality.

Accordingly, the first objective is to create a standard process for producing and storing UML or formal specifications and other software specification and design constructs, together with detailed functionality specifications. In order to create an intelligent automated test case generation tool for software testing purposes, there has to be a way for the tool to understand what the software is built for: its functionalities, inputs, outputs, processing scenarios, etc. Accordingly, the research objective is to provide that standard way of representing specifications, in a way that can be interpreted by an automated test case generator. This representation of specifications is detailed in Chapter 2, SpecDB: A Database Representation of Software Specifications.

This dissertation includes extensions to the work by Chays, Deng and Frankl on AGENDA [27, 28]. One of the uses of SpecDB is to create an automated tool that generates intelligent test cases from SpecDB based on its understanding of the functionality of the software under test, and tests application units containing both imperative-like code (programming constructs like if, while, etc., abstract data types, etc.) and database operations. This proposed solution can compare actual program outputs to the specified design stored in SpecDB. Therefore, the tool can automatically decide whether or not the test passed, based on its understanding of the available queryable design represented in SpecDB. Based on the specification, automated testing tools using SpecDB can spot errors of omission and commission immediately. In their research [27], Chays and


his research team wrote: "Unless there is a formal specification of the intended behavior of the application, including a specification of how the database state will be changed, it is not possible to fully automate checking of the application's results." Representing the formal specification they referred to in a machine-readable format is the first objective of my research, making it possible to automate the checking of the software's results.

The second objective is to use the representation of the design and specifications to create an intelligent business rules generator for database applications. The proposed representation of the analysis and design of software will be based on a database structure that can be easily queried. Business rules and constraints will automatically be generated to enforce those constraints at the database level, not the application level.

The final part of the research is dedicated to studying the feasibility and effectiveness of structural test case generation, based on the level of detail given by the software specification and design. From the actual implementation, a reverse engineering approach is used to show all the software output and post-conditions after a test is executed. The output and post-conditions are compared to the specified behavior, if represented in SpecDB; accordingly, errors are automatically spotted and the code segments that caused the errors are isolated. The automated structural testing techniques used in the reverse engineering automated testing approach are dataflow testing and path testing.

In Figure 1.2 the contributions of the research are represented. The design of SpecDB is detailed in Chapter 2. The software specifications are stored in SpecDB manually or automatically using the formal specification translation algorithm in Chapter 3. The


usefulness of the SpecDB representation for automatic constraint generation and testing based on reverse engineering is demonstrated using two applications described in Chapters 4 and 5, the automatic constraints generator and the reverse engineering testing tool, respectively. Finally, in Chapter 6, enhancements using SpecDB to available tools in the literature, such as AGENDA, are proposed.

Figure 1.2: Dissertation Contribution: Entities and Chapter Layout

1.11 Objectives and Accomplishments

In order to achieve the research goal described in Section 1.10, the following research objectives were accomplished:


1. Represented specifications in a machine-readable format. This is detailed in Chapter 2, the design of SpecDB.

2. Created a translation algorithm to transform formal specifications written in a formal language like Z to SpecDB. This is described in Chapter 3, storing specifications in SpecDB.

3. Created useful applications that make use of the data stored in SpecDB.

a. In Chapter 4, an automated tool is introduced which generates constraints and business rules to enforce them at the database level, as opposed to the application level, for better reliability.

b. In Chapter 5, a reverse engineering approach to testing database applications is described. It uses specifications to intelligently test software.

4. Suggested enhancements to published work in the field of testing database applications, using the techniques described in this research. In Chapter 6, some enhancements are proposed to AGENDA [27, 28] as an example.
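The database-level enforcement of objective 3a can be sketched in miniature. The rule-storage schema, table names and column names below are illustrative assumptions, not the actual Chapter 4 design: a business rule stored as data is rendered into a CHECK constraint and embedded in the target table's DDL, so the database engine itself rejects violating data.

```python
import sqlite3

# Hypothetical sketch: one stored business rule becomes a database-level
# CHECK constraint. All names here are illustrative, not the SpecDB schema.
con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE Business_Rule (
    rule_name TEXT, target_table TEXT, target_field TEXT,
    operator TEXT, bound TEXT)""")
con.execute("INSERT INTO Business_Rule VALUES "
            "('positive_rate', 'Loan', 'Rate', '>', '0')")

def generate_check_clause(row):
    """Render one stored rule as a SQL column CHECK constraint."""
    name, table, field, op, bound = row
    return f"CONSTRAINT {name} CHECK ({field} {op} {bound})"

clause = generate_check_clause(con.execute("SELECT * FROM Business_Rule").fetchone())
# Embedding the clause in the target table's DDL makes the database engine,
# not the application, enforce the rule.
con.execute(f"CREATE TABLE Loan (Loan_Amount REAL, Rate REAL, {clause})")
con.execute("INSERT INTO Loan VALUES (200000, 5.5)")      # satisfies the rule
try:
    con.execute("INSERT INTO Loan VALUES (200000, -1)")   # violates the rule
    violated = False
except sqlite3.IntegrityError:
    violated = True
print(clause, violated)
```

The point of the sketch is the division of labor: the application never re-checks the rule, because every write path through the database is covered by the generated constraint.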


CHAPTER 2

SPECDB: A DATABASE REPRESENTATION OF SOFTWARE SPECIFICATIONS

2.1 Introduction

From the market survey of existing testing tools in Chapter 1, it was concluded that the current automated tools in the market lack an understanding of the functionality of the software under construction and testing. The literature review also showed how much of the research on automated tools can benefit from machine-readable specifications: tools that can reason with the specifications can automatically generate intelligent test cases and automatically determine whether the software passed the test based on their understanding of the specifications and design. In this chapter, one of the main contributions of the dissertation is presented, namely SpecDB, a database that stores part of the software specifications to aid in the process of automated testing.

The design of SpecDB introduced in this dissertation focuses only on the specifications required for unit testing database operations, and is not, in its current state, a full representation of the complete specifications of a software under construction. However, the techniques used in this chapter can be extended to include more of the software's


specification for other types of testing, for example security, performance, system or integration testing. In the following section, the motivation for the research in, and the development of, the specification representation database SpecDB is emphasized.

2.2 The Advantages of a Database Representation of Specifications

Since neither UML diagrams nor written formal specifications can be used as input to automated testing tools, another channel is required to relay the specifications to automated tools. UML is neither executable nor machine readable. On the other hand, formal specifications are a complete representation, but are difficult to write and not used frequently. A database representation is chosen because it has the benefit of being machine readable while not being as hard to use as formal languages. Once the specifications of the software under construction are expressed in a machine-readable way, the specifications may be used and reasoned about by automated tools to test or generate the code for software applications automatically.

In order to create a software tool that automatically tests parts of another software under construction, or automatically generates parts of the code for another specified software, the former has to have a good understanding of what the latter is intended to do. Accordingly, the automated software code generating/testing tool has to be given, in a machine-readable way, the specifications of the other software being built. The best way a software's specification and design can be given in a machine-readable format is through an adaptation of the already established formal methods. Hence the idea of representing specifications in a machine-readable format evolved. In this chapter a standard methodology is introduced


which represents software specifications in a machine-readable format for the purpose of automated code generation and automated software testing. A database representation of the specifications was chosen, as opposed to other forms of representation, for the advantage of the inherently scalable nature of database designs. The design of SpecDB is scalable, and it is easy to modify or append to the design in order to accommodate more forms of specifications. It is then very simple to update the queries created for previous versions of SpecDB to accommodate the new changes. Runtime errors for database queries are not fatal, since they do not handle memory allocations as linked-list representations do. Another advantage is the use of the already stable and reliable database management systems available in the market, and the ability to create constraints and runtime triggers on database tables. Such constraints can aid in discovering contradictory, incomplete or ambiguous specifications.

Another use of SpecDB is to automatically represent programmed code and, as an output, show the specifications of the programmed code. With this reverse engineering approach, SpecDB can also be used with already existing software. The advantage of this approach is to automatically test programmed software, giving the actual behavior for each operation. The analysts, designers and testers can then review all the software execution results, hence showing errors of commission that might not otherwise be spotted by normal testing techniques that deem a test successful if the specified behavior occurs. However, in some cases, the software errors lie in the fact that the programmed software does more than is specified. For example, an e-commerce application might perform its task of ordering an item, and also (as an unwanted side effect) send credit card information or passwords to


another non-secure location! The latter security leak cannot be spotted by conventional black box functional testing techniques. Using SpecDB in this reverse engineering approach will spot those errors of commission, as will be shown in Chapter 5. Code translation from one programming language to another can also be done using the reverse engineering tool and SpecDB: code from one language can be fed into SpecDB, and the specifications can then be used to automatically generate code in another language.

2.3 The Design of the Specification Database, SpecDB

SpecDB is a database. The entities of the SpecDB database store the (formal) specifications of a software, necessary for unit testing. SpecDB is designed to efficiently and accurately represent those parts of the specifications shown in this chapter. In Section 2.4, the reader will see how SpecDB covers and has a means to represent different forms of specifications. SpecDB is also designed to be scalable, generic, upgradeable and modifiable. Since the model is a database, it is easy to add more entities and/or more entity attributes to represent a specification that cannot otherwise be represented in the SpecDB design given in this chapter. As an example of SpecDB's scalability, in Chapter 4 two tables are added and an existing table in SpecDB is modified to accomplish the task of an automated constraint generator using SpecDB.
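Section 2.2 noted that constraints and triggers on SpecDB's own tables can help discover contradictory specifications. A minimal sketch of that idea follows; the column names are assumptions loosely modeled on the Types entity described later in this chapter, not the actual DDL of Appendix A. A CHECK constraint on the specification table itself rejects a contradictory range restriction, where a type's lower bound exceeds its upper bound.

```python
import sqlite3

# Illustrative sketch (assumed columns, not the exact SpecDB DDL): a CHECK
# constraint on the specification tables themselves flags a contradictory
# specification the moment it is stored.
con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE Types (
    Type_Name TEXT PRIMARY KEY,
    Basic_Type TEXT,
    Range_From_Int INTEGER,
    Range_To_Int INTEGER,
    CHECK (Range_From_Int IS NULL OR Range_To_Int IS NULL
           OR Range_From_Int <= Range_To_Int))""")

# A consistent range specification is accepted ...
con.execute("INSERT INTO Types VALUES ('Month_Number', 'Integer', 1, 12)")
# ... while a contradictory one (lower bound 12, upper bound 1) is rejected.
try:
    con.execute("INSERT INTO Types VALUES ('Bad_Type', 'Integer', 12, 1)")
    caught = False
except sqlite3.IntegrityError:
    caught = True
print(caught)
```

The same pattern extends to triggers that cross-check several specification tables, for example detecting a variable restricted to a field that no stored table description declares.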


The specification representation of SpecDB is designed to be the basis for the process of automated software generation and testing. Automated software code generation and/or testing tools can use SpecDB to understand what a software is designed to do, and hence can accurately test such a software by creating intelligent test cases with good coverage and comparing actual software behavior to its corresponding specified software behavior. With the use of SpecDB, the processes of automated software code generation and automated software testing can become more intelligent, efficient and reliable.

In the design of SpecDB, constructs for both formal specifications and programming were taken into consideration. These rules and constructs can be found in [8, 9, 11-24, 65]. Figure 2.1 shows an overall diagram of SpecDB's design. Throughout Section 2.4 are figures showing parts of the relational database schema of the design of SpecDB. The relations between the entities are shown, as are the types of the fields in each entity: numeric types are identified by the symbol "789", text or character types by "A", and date or time types by a calendar card. Table keys are identified by a "#" symbol, and whether each field is optional or mandatory is identified by "O" or " " respectively. Some fields are constrained to a list of valid values; for example, Is_Constant is a Boolean value that can only be T for true or F for false. To the left of the field name, if the value of the field is constrained to a list of valid values, a relation symbol is shown. Figures 2.2 and 2.3 focus on parts of Figure 2.1 for better readability. Throughout Section 2.4, each entity is described in detail with focused images.


Figure 2.1: An Overall Diagram of SpecDB's Design


Figure 2.2: A Focused Diagram of the General SpecDB Design (Left Section)


Figure 2.3: A Focused Diagram of the General SpecDB Design (Right Section)


2.4 Entities in the Design and How Each Represents Specific Software Requirements

In this section, each of the entities of the SpecDB design is defined and explained in detail, including examples of values for each attribute and how to store the different components of a software specification. First, general entities are defined, among them the Operator, List_of_Tables, Table_Description, Types, Types_LOV (list of valid values) and restriction entities. The SQL scripts for the creation of the entities of SpecDB and their corresponding constraints are provided in Appendices A and B respectively. The reader can refer to those constraints to see the allowable values of the attributes; this may aid in understanding the meaning of the attributes in each entity in SpecDB and what each represents.

2.4.1 Operator and Valid Operands

In this section, the entities of SpecDB that hold operators and valid operands are explained. The Operator table holds the operators of operations used in any software (formal) specification or code. Operators differ from one language to another, and hence SpecDB identifies them with unique IDs to form a generic outcome that is not language dependent.

Figure 2.4: Representing Operators and Operator Overloading in SpecDB


Examples of operator symbols are /, MOD, AND, NOT, etc. If an operator is overloaded, records are added to the Valid_Operands table. The subroutine that defines the operator overloading operation should be stored in the Function_Declaration table (described in Section 2.4.5), with a subroutine name the same as the operator symbol or name. The Over_Loaded attribute in both the Operator and Valid_Operands tables is a Boolean variable that can be True or False. If more than three operands are involved in the operator overloading process, another attribute to represent the Operand4_Type can be added to the Valid_Operands table. The types of the operands have to reference records in the Types table (described in Section 2.4.3).

2.4.2 List_of_Tables and Table_Description

The List_of_Tables entity in SpecDB stores all the software/user-defined sets and tables. The Table_Description entity stores the names of the fields of the user-defined tables and sets, and their respective types. In Chapter 4, this table will be extended to aid in the process of the automated creation of database-level business rules and constraints.

Figure 2.5: Representing User Tables and Sets and their Description in SpecDB
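The operator representation of Section 2.4.1 can be sketched as follows. The column names are assumptions based on the description above, not the DDL of Appendix A: an overloaded operator gets one Operator record plus one Valid_Operands record per signature.

```python
import sqlite3

# Sketch of the Operator / Valid_Operands idea under assumed column names.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE Operator (
    Operator_ID INTEGER PRIMARY KEY,
    Symbol TEXT NOT NULL,
    Over_Loaded TEXT CHECK (Over_Loaded IN ('T', 'F')));
CREATE TABLE Valid_Operands (
    Operator_ID INTEGER REFERENCES Operator(Operator_ID),
    Operand1_Type TEXT,
    Operand2_Type TEXT,
    Operand3_Type TEXT,
    Over_Loaded TEXT CHECK (Over_Loaded IN ('T', 'F')));
""")
# '+' is overloaded: one signature for integers, one for strings.
con.execute("INSERT INTO Operator VALUES (1, '+', 'T')")
con.execute("INSERT INTO Valid_Operands VALUES (1, 'Integer', 'Integer', NULL, 'T')")
con.execute("INSERT INTO Valid_Operands VALUES (1, 'String', 'String', NULL, 'T')")

# An automated tool can recover every valid signature for a symbol:
sigs = con.execute("""SELECT Operand1_Type, Operand2_Type
                      FROM Valid_Operands v JOIN Operator o
                        ON v.Operator_ID = o.Operator_ID
                      WHERE o.Symbol = '+'""").fetchall()
print(sigs)
```

Because the symbol itself is just data keyed by Operator_ID, the same representation serves specifications written in any language, which is the language-independence goal stated above.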


2.4.3 Types, Types_LOV and Type Restrictions

The Types entity stores the non-standard user-defined types. This entity stores types that are not sets, arrays, linked lists, or any type designed for data storage and retrieval; SpecDB represents the latter storage types as user-defined tables in the database, and stores their descriptions in the List_of_Tables and Table_Description entities (described in Section 2.4.2). The Types entity stores user-defined types like month names, days of the week, etc. Each of those user-defined types has a basic type, e.g. String, Integer, etc. Types can also have size restrictions, e.g. a String of 20 characters. SpecDB allows for user-defined restrictions on types too. User-defined restrictions can be one of the following:

2.4.3.1 List of Valid Values (LOV)

In the first kind of type value restriction, the values of a variable of a specific type are constrained to values in a defined list of valid values. A type can be restricted to a List Of valid Values (LOV), i.e. any variable of the specified type can only take on one of the values in the LOV. For example, the type Month_Name can be January, February, etc. In this case, the Has_LOV attribute value of the Types entity for the Month_Name record is set to T for true, and the valid month names are placed in the Types_LOV entity, each in a separate record.

2.4.3.2 Other Restrictions

Types can also be restricted to a set of valid values that are better specified by ranges. In this case, the ranges are stored in the Types table in their respective fields to define


the allowable range. For example, for a Month_Number type, valid values are in the range 1 to 12. Hence Range_From_Int in the Types table record defining the type is set to 1 and Range_To_Int is set to 12. A range can also be specified with inclusions, e.g. all even integers between 0 and 100, including 0. Exclusions from certain defined ranges can also be defined by using the Type_Excluded_Ranges table.

Figure 2.6: Representing Types, Type Restrictions and Types Valid Values in SpecDB

The Types_LOV table contains a record for each value in the list of valid values for each type with LOV restrictions. The table has been de-normalized to include values of all basic types: based on the "type" attribute value in the Types table, the value of the LOV is stored in the relevant attribute. For example, if the basic type of the Month_Number type is Integer, then there will be twelve records in the Types_LOV table, each record having a unique value between 1 and 12 stored in the Int_Value attribute.
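The two kinds of type restriction described above can be sketched from the consumer's side: how a testing tool might validate a candidate value against either a stored LOV or a stored integer range. The column names and the helper function are assumptions for illustration, not part of the actual SpecDB design.

```python
import sqlite3

# Sketch, with assumed column names, of validating a value against a type's
# LOV restriction (Types_LOV) or integer range restriction (Types).
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE Types (Type_Name TEXT PRIMARY KEY, Basic_Type TEXT,
                    Has_LOV TEXT, Range_From_Int INTEGER, Range_To_Int INTEGER);
CREATE TABLE Types_LOV (Type_Name TEXT, Char_Value TEXT, Int_Value INTEGER);
""")
con.execute("INSERT INTO Types VALUES ('Month_Name', 'String', 'T', NULL, NULL)")
for m in ('January', 'February', 'March'):   # abbreviated LOV
    con.execute("INSERT INTO Types_LOV VALUES ('Month_Name', ?, NULL)", (m,))
con.execute("INSERT INTO Types VALUES ('Month_Number', 'Integer', 'F', 1, 12)")

def value_allowed(type_name, value):
    """Check a value against the type's LOV or range restriction."""
    has_lov, lo, hi = con.execute(
        "SELECT Has_LOV, Range_From_Int, Range_To_Int "
        "FROM Types WHERE Type_Name = ?", (type_name,)).fetchone()
    if has_lov == 'T':
        return con.execute(
            "SELECT 1 FROM Types_LOV WHERE Type_Name = ? "
            "AND (Char_Value = ? OR Int_Value = ?)",
            (type_name, value, value)).fetchone() is not None
    return lo <= value <= hi

print(value_allowed('Month_Name', 'March'),    # True: in the LOV
      value_allowed('Month_Number', 13))       # False: outside 1..12
```

A test case generator could invert the same queries to pick valid and invalid representatives of each type automatically.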


2.4.4 Variables and Restrictions on Variable Values

In SpecDB, the VAR entity stores the specifications of a used variable, including its type, which can also be a user-defined type already defined in the Types entity. The SpecDB design takes into account the fact that in the same software, many different variables can have the same name, but be defined in different contexts or sub-routines. This entity stores the qualities each variable should possess. This aids automated testing tools in creating test cases that make sure that throughout the software's execution cycle, and in all its states, each variable maintains its designed properties. If a variable is part of a class or an object, this can be specified, together with its encapsulation property: public, private or protected.

Variables can have pre-defined types, but in turn also have their own restrictions. SpecDB has the flexibility to define restrictions on both types and variables. A variable can have a LOV restriction or other restrictions like the ones previously discussed for type restrictions. A variable can also have a table restriction, i.e. the variable can only take on values that are stored in a specific field in a user-defined table. The Var_LOV table stores values that are preset before a software is executed, e.g. names of the months of the year. On the other hand, a variable can be restricted to take on values that are determined as the software executes; these values are stored in user-defined tables, e.g. names of people in an address book, where the names are continuously added as the address book is used.
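The table-restriction mechanism can be illustrated with a small sketch. The column names and the helper below are illustrative assumptions following the Var_Table_Restrictions description in this section, not the actual DDL: the stored operator decides whether a variable may only take values that already exist in a user table, or only values that do not yet exist in it.

```python
import sqlite3

# Sketch of inclusion/exclusion table restrictions, with assumed columns.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE Var_Table_Restrictions (
    Var_Name TEXT, Restriction_Table TEXT, Restriction_Field TEXT, Operator TEXT);
CREATE TABLE Address_Book (Person_Name TEXT);  -- a user-defined table
""")
con.execute("INSERT INTO Address_Book VALUES ('Alice')")
# A recipient must already be in the address book; a new contact must not be.
con.execute("INSERT INTO Var_Table_Restrictions VALUES "
            "('recipient', 'Address_Book', 'Person_Name', 'IN')")
con.execute("INSERT INTO Var_Table_Restrictions VALUES "
            "('new_contact', 'Address_Book', 'Person_Name', 'NOT IN')")

def satisfies(var_name, value):
    """Check a runtime value against the variable's table restriction."""
    table, field, op = con.execute(
        "SELECT Restriction_Table, Restriction_Field, Operator "
        "FROM Var_Table_Restrictions WHERE Var_Name = ?", (var_name,)).fetchone()
    exists = con.execute(
        f"SELECT 1 FROM {table} WHERE {field} = ?", (value,)).fetchone() is not None
    return exists if op == 'IN' else not exists

print(satisfies('recipient', 'Alice'),      # True: value exists in the table
      satisfies('new_contact', 'Alice'))    # False: value already exists
```

Because the restriction is evaluated against the live user table, it naturally tracks values that accumulate as the software runs, like the address-book names mentioned above.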


Those restrictions are stored in the Var_Table_Restrictions entity. SpecDB gives the flexibility for a variable to be restricted to take on only values from another table, or, by exclusion, to take any value that does not currently exist in the specified table. This is done by specifying the operator in the Var_Table_Restrictions entity.

Figure 2.7: Representing Variables, Variable Restrictions and Variable Valid Values in SpecDB

2.4.5 Procedure, Function_Declaration, and Subroutine_IP_Description

Subroutines, or units, are a major part of software specifications as well as of the actual program code, and are the focus of this research. In the SpecDB database, procedures differ from functions in that functions return a value, whereas procedures are subroutines that do not return a value, yet can change the system state. If a procedure should change the state of the system, then the value of the Changes_System_State field is set to true. Automated testing tools can use this specification to make sure that the programmed subroutines are implemented as designed. The inputs for both procedures


and functions are placed in the Subroutine_IP_Description entity. Inputs are represented with their respective orders using the IP_Order field. For a function that takes two inputs, like Calculate_Interest_Only_Monthly_Mortgage (Loan_Amount, Rate), the record in the Function_Declaration table will be: (Calculate_Interest_Only_Monthly_Mortgage, 2, Real, 68, Monthly_Mortgage, F, F, F, F,, ), where F stands for False. There should be two records in the Subroutine_IP_Description table: (Calculate_Interest_Only_Monthly_Mortgage, 1, Real, Loan_Amount, F, F, T) and (Calculate_Interest_Only_Monthly_Mortgage, 2, Real, Rate, F, F, T). The two inputs can have a restriction to be greater than zero; in this case two records have to be placed in the Var_Restrictions table to describe those restrictions.

Figure 2.8: Representing Units (Sub-routines) in SpecDB

The output value returned by a function (OP) can also have restrictions. SpecDB provides a place to store these pre-determined restrictions on the valid allowable values a function can return.
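The mortgage example above can be sketched with a simplified subset of the columns (the real Function_Declaration record carries more fields than shown here; only the subroutine name, input count, return type and the ordered inputs are reproduced):

```python
import sqlite3

# Simplified sketch of the mortgage declaration example; column subset assumed.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE Function_Declaration (
    Subroutine_Name TEXT PRIMARY KEY,
    Num_Inputs INTEGER,
    Return_Type TEXT);
CREATE TABLE Subroutine_IP_Description (
    Subroutine_Name TEXT REFERENCES Function_Declaration(Subroutine_Name),
    IP_Order INTEGER,
    IP_Type TEXT,
    IP_Name TEXT);
""")
con.execute("INSERT INTO Function_Declaration VALUES "
            "('Calculate_Interest_Only_Monthly_Mortgage', 2, 'Real')")
con.execute("INSERT INTO Subroutine_IP_Description VALUES "
            "('Calculate_Interest_Only_Monthly_Mortgage', 1, 'Real', 'Loan_Amount')")
con.execute("INSERT INTO Subroutine_IP_Description VALUES "
            "('Calculate_Interest_Only_Monthly_Mortgage', 2, 'Real', 'Rate')")

# A test generator can recover the full, ordered signature with one query.
sig = con.execute("""SELECT IP_Name, IP_Type FROM Subroutine_IP_Description
                     WHERE Subroutine_Name = 'Calculate_Interest_Only_Monthly_Mortgage'
                     ORDER BY IP_Order""").fetchall()
print(sig)  # [('Loan_Amount', 'Real'), ('Rate', 'Real')]
```

With the signature recoverable by query, a tool can generate well-typed calls to the unit automatically, drawing each argument's candidate values from the restrictions stored for that input.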
2.4.6 Subroutine_IP_Instance, Designed_SR_Post_Conditions and Runtime_SR_Post_Conditions

Each time a sub-routine is invoked during the life of the software runtime, it is given different parameters. The Subroutine_IP_Instance entity is where the values, or the sources for the values, of the parameters are specified for each one of those instances. For example, suppose that on the seventh call to Calculate_Interest_Only_Monthly_Mortgage in the program specifications or program, the input is taken from the GUI text fields (:HouseValue, :NegotiatedRate) in the Interest_Only window. Accordingly, the records in the Subroutine_IP_Instance table will be (7, 1, Calculate_Interest_Only_Monthly_Mortgage, GUI, :HouseValue, Interest_Only,,,,,,,,,).

The Designed_SR_Post_Conditions table holds the designed post conditions for subroutines, functions and procedures. Those will be compared to the actual post conditions during the testing phase, to spot software errors if there exist discrepancies between the designed subroutine post conditions in this table and the actual runtime subroutine post conditions, declared in the Runtime_SR_Post_Conditions table.

The type of a post condition can be a modification to, or execution of, one of the following: GUI, DB, File, Report, Sequence, Message, Database Privilege or Role, Global Variable, Commit command, Save point command, Rollback, Run another program, Open a window, or terminate the program execution. The source and the destination of the post condition are specified in this table.

If the subroutine should result in a database operation being executed, the unique ID for
that operation is placed in the DB_Operation_Num field, and the actual specifications for that operation are placed in the DB_Operations and DB_Operation_Details tables.

The Runtime_SR_Post_Conditions table is a duplicate of the Designed_SR_Post_Conditions table, with the difference that the actual post conditions that occur due to the execution of a function or procedure are stored in the latter. Each time a sub-routine is instantiated during runtime, an instance number is generated, and the input parameters used to invoke the sub-routine are stored in the Sub_routine_IP_Description table.

Figure 2.9: Representing Sub-routine Instances and Post Conditions in SpecDB

The post conditions of that instance are stored in the Runtime_SR_Post_Conditions table, using that same instance number. The fields in this table are exactly the same as those of the Designed_SR_Post_Conditions table, with the difference that the Instance_Number field is added, and is part of the key to this table. Also, the data in the records
indicate the actual runtime post condition behavior after a sub-routine is run, as opposed to the designed or expected behavior detailed in the Designed_SR_Post_Conditions table. This table will be populated after the software is implemented and executed. An automated testing tool can make use of this table.

2.4.7 Dataflow, Triggers and Input_Output_Definition

In order to specify the order of operations of a unit in an application, the operations have to be given serial numbers. Accordingly, as a preparation stage for representing the specifications in SpecDB, unique dataflow numbers are given to operations in the (formal) specifications; SpecDB assumes that the program is numbered. Every line of the (formal) specification of a software, as well as every line of code generated, is given a unique serial number. SpecDB assumes that every line of code is a simple program statement; unnecessary bundling of code or specification on the same line is forbidden in this representation, to simplify the representation and enhance readability from SpecDB by automated tools. Accordingly, a preparation phase to simplify the specifications or code is necessary. The Dataflow table stores what each line of the (formal) specification represents. SpecDB can also be used to represent actual lines of programmed code and automatically compare the actual behavior to the specified behavior using this reverse engineering approach, in which case the Dataflow table would contain the actual programmed lines of code.

Each position is identified by a number, a successor number, and a type. The position type can be one of the following: Initial, Begin, Input, Assignment, Decision, While
Loop, For Loop, Run exe, Output, Subroutine Call, DB, Open Window, Construct Object, Destruct Object, Call Object Method, End, End Loop, End For, or Terminal. According to the type of the position, the successor number can change. A position number can have many successor numbers and vice versa. For example, a decision point can have two results (true or false), and hence two successors, depending on the result.

Database management systems have structures similar to subroutines, called triggers. The difference between triggers and other subroutines is that procedures or functions are called explicitly, sometimes with arguments, whereas triggers are executed implicitly when a pre-specified event occurs; also, there are no arguments passed to triggers. For our design specification purpose, the Trigger table stores the events that cause triggers to execute. Those cannot be part of the normal Dataflow table, since they do not follow the same dataflow process as the rest of the software specification or code. However, this table will refer to the Dataflow table once a trigger is defined, to adopt the same technique used to describe a normal subroutine body for describing the trigger body. What is named a smart trigger is also described in this table. Smart triggers execute once a pre-specified event occurs to a GUI object, for example, when the mouse pointer enters a specific area in a window or when a button right-click occurs. If the LHS var is assigned a GUI object value, the var_id is the place to identify that GUI object, since all GUI vars used are listed in the VAR table.
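The position/successor scheme above can be sketched as a small walk over an in-memory stand-in for the Dataflow table. The table fragment below is made up for illustration; only the idea that a Decision position carries two successors, selected by the decision's outcome, is taken from the text.

```python
# Hypothetical Dataflow fragment: position number -> type and successor(s).
# A Decision position has two successors, keyed by its boolean outcome.
dataflow = {
    10: {"type": "Begin",      "succ": {None: 20}},
    20: {"type": "Input",      "succ": {None: 30}},
    30: {"type": "Decision",   "succ": {True: 40, False: 50}},
    40: {"type": "Assignment", "succ": {None: 50}},
    50: {"type": "End",        "succ": {}},
}

def trace(start, decisions):
    """Follow successor numbers from `start`, consuming one boolean
    outcome per Decision position encountered."""
    path, pos, outcomes = [], start, iter(decisions)
    while pos is not None:
        path.append(pos)
        node = dataflow[pos]
        if not node["succ"]:
            break
        key = next(outcomes) if node["type"] == "Decision" else None
        pos = node["succ"][key]
    return path

print(trace(10, [True]))   # [10, 20, 30, 40, 50]
print(trace(10, [False]))  # [10, 20, 30, 50]
```

A reverse engineering tool that logs actual execution could produce exactly such a path and compare it against the successors recorded in the specification.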
Figure 2.10: Representing Dataflow, Triggers and System Inputs and Outputs in SpecDB

The Input_Output_Definition table describes how inputs and outputs to and from the software are represented. Inputs and outputs can take many forms: file, database, through the GUI, to output devices like a printer (where a report is printed), or from other input devices. If the input or output is a database operation, then in the Dataflow table the line of code will be of type database operation, not input or output, and hence this type of input or output will not be represented in the Input_Output_Definition table.

2.4.8 Assignment and Calculation

Assignment operations and calculations are discussed in this section. A variable can be assigned a constant or another variable, e.g. a:=65; b:=c. A variable can also be assigned a calculation or a predicate statement resulting in a true or false value, e.g. a:=5+b; e:= (a AND b) OR (c XOR d). If a variable is assigned a calculation, then
there might be a number of records in the Calculation table describing this calculation. A calculation identifier is provided in the Cal_Num field, and then at least one record is inserted in the Calculations table.

Figure 2.11: Representing Assignment Operations and Calculations in SpecDB

In the Calculation table, complex calculations are stored in the form of simple components. Each component involves only two operands and an operator. An operand can be simple, like a variable or a constant, or can be complex and consist in itself of components. An example of a simple calculation is a+b. The LHS is a variable, and thus "a" will be stored in the Left_Var_ID field, "b" will be stored in the Right_Var_ID field, and the operator ID for "+" will be stored in the Operator_ID field; a constant operand, as in a+109, is stored in the same way. For a complex calculation that combines the component a/b with the variable "c", two records are inserted with the same Cal_Num. The first record, with Component_Num = 1, has the Left_Var_ID of "a", the Right_Var_ID of "b", and the Operator_ID of the "/" operator sign. The other record, with the same Cal_Num but Component_Num = 2, will have Left_Operand_Type = "Comp", the Left_Component_Num = 1, the Right_Operand_Type = "Var" and the Right_Var_ID of "c".

2.4.9 Predicate

The Predicate table described in this section stores complex predicate statements or database conditions. Each line of specification or code, i.e. position, has a type that is stored in the Dataflow table. If the Pos_Type is either a Decision, While Loop, For Loop, or Database Operation, the statement details at this line of code are stored in the Predicate table.

In case the Pos_Type is a decision, the decision statement can have a simple or a complex predicate. An example of a simple predicate is: If a = b then ... In this case, there are no calculations and no multiple components; there is only one component, and the left and right operands are variables, or a variable and a constant. An example of a complex predicate is: If (a+b) < (d+11) then ... In this case there are several components, and the components are themselves calculations.

Records are inserted in this table starting with the lowest level components, i.e. the ones that contain only simple predicates, with just variables and constants, i.e. from the inside out. Finally, the predicate that represents the relationship among the already described
components is inserted, using their respective component numbers. The technique is very similar to that used in the Calculation table.

Figure 2.12: Representing Predicates and Conditions in SpecDB

In case the Pos_Type is a loop, the beginning of the loop is a decision statement, determining whether another iteration through the loop should take place or the loop should exit. The same discussion above for decisions is followed to describe a while loop predicate.

In case the Pos_Type is a database operation, the Predicate table is where the WHERE clause of the database operation is stored. Similar to the above examples, the operands are database fields compared to one another, for example: S.ID = Emp.ID. In this example, both the left and the right operands are database fields in the S and Emp tables (or table aliases). This example will be treated as a simple predicate, as discussed above. However, if the WHERE clause had many such simple predicates joined with AND or other operators, they will again each be treated as a component. The components will be
stored in this Predicate table as in the above examples involving only variables and/or constants.

Finally, if the Pos_Type is a set operation, the Predicate table is also used to express operations on sets, for example: N ∈ Known. In this case, the Component_Type is a DB_Table. Database tables are sets, and the records are the tuples belonging to the sets. Left_Table_Name and/or Right_Table_Name are the fields from the Predicate table used in this case.

2.4.10 Database Operations, Tables Involved in Database Operations

In this subsection the DB_Operations, DB_Operation_Tables and DB_Operation_Details tables of SpecDB are discussed. The DB_Operation_Tables table contains a list of the tables involved in each DB operation defined in the DB_Operations table. The DB_Operations table stores database SQL statements. Good references for SQL standards can be found in [73,74,75,76]. Since a database select, insert, delete, update, etc., is each considered one statement, this table is dedicated to describing those operations. There is one record for each SQL statement in the software specification or code.

The WHERE clause is stored in the Predicate table; in this case the DB_Condition_Num is the same as the Pos_Num in the Predicate table. This table is a representation of some of the database operations that can be programmed in a PL/SQL environment.
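The inside-out component numbering shared by the Calculation and Predicate tables (Sections 2.4.8 and 2.4.9) can be sketched as follows. The nested tuples stand in for a parsed expression, and the field names in the emitted records are simplified assumptions about the actual SpecDB columns.

```python
def flatten(expr, components):
    """Flatten a nested (left, operator, right) expression into two-operand
    component records, numbered from the inside out. Returns an operand
    descriptor: ('Var', name) for a leaf or ('Comp', n) for a component."""
    if not isinstance(expr, tuple):
        return ("Var", expr)
    left = flatten(expr[0], components)
    right = flatten(expr[2], components)
    components.append({"Component_Num": len(components) + 1,
                       "Left": left, "Operator": expr[1], "Right": right})
    return ("Comp", len(components))

# (a / b) * c  -->  component 1 is a/b; component 2 combines comp 1 with c.
records = []
flatten((("a", "/", "b"), "*", "c"), records)
for r in records:
    print(r)
# {'Component_Num': 1, 'Left': ('Var', 'a'), 'Operator': '/', 'Right': ('Var', 'b')}
# {'Component_Num': 2, 'Left': ('Comp', 1), 'Operator': '*', 'Right': ('Var', 'c')}
```

The same flattening applies to predicates such as (a+b) < (d+11): each parenthesized calculation becomes a component, and the final record relates the component numbers.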
Following the same technique, all the SQL language functions can be represented in this table. The functions listed and accounted for by fields are just a few examples to show the feasibility of the approach. The DB_Operation_Details table contains the details of the database operations described in the DB_Operations table.

For SELECT statements, the variables to select are inserted in the Field_Name attribute. If there are n fields to select, there will be n records in this table with the same DB_Operation_Num. If the database operation was "SELECT *", then there will be only one record, and the Field_Name will be *. The INTO portion of the select statement, i.e. the variable where the selected value should be stored, is indicated in the INTO_Var_ID attribute. The FROM clause is listed in the DB_Operation_Tables table, with the same DB_Operation_Num. Finally, the WHERE clause is stored in the Predicate table, with the Pos_Num equal to the DB_Condition_Num. If a cursor is defined, the Operation_Type in the DB_Operations table is 'Cursor'; it is treated the same as a select statement, with the addition of identifying the cursor name in the Cursor_Name field in the DB_Operations table. Also, if a fetch operation on a cursor is the DB operation type, again the Cursor_Name field is populated, and all the details of which values to put from the cursor SELECT statement into variables are stored in the DB_Operation_Details table.
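The SELECT decomposition just described can be illustrated with plain record tuples. The statement, tuple layouts and variable names below are illustrative assumptions; only the way one operation spreads across the four tables, joined by DB_Operation_Num and by DB_Condition_Num = Pos_Num, follows the text.

```python
# SELECT Name, Salary INTO :n, :s FROM Emp, S WHERE Emp.ID = S.ID
db_operations = [
    # (DB_Operation_Num, Operation_Type, DB_Condition_Num)
    (1, "Select", 100),
]
db_operation_details = [
    # (DB_Operation_Num, Field_Name, INTO_Var_ID) -- one record per field
    (1, "Name", ":n"),
    (1, "Salary", ":s"),
]
db_operation_tables = [
    # (ID, DB_Operation_Num, Table_Name) -- the FROM clause
    (1, 1, "Emp"),
    (2, 1, "S"),
]
predicate = [
    # (Pos_Num, Left_Table, Left_Field, Operator, Right_Table, Right_Field)
    (100, "Emp", "ID", "=", "S", "ID"),
]

# An automated tool reassembles the statement by DB_Operation_Num and by
# matching the operation's DB_Condition_Num to Pos_Num in the Predicate table.
op_num, op_type, cond = db_operations[0]
fields = [d[1] for d in db_operation_details if d[0] == op_num]
tables = [t[2] for t in db_operation_tables if t[1] == op_num]
where = [p for p in predicate if p[0] == cond]
print(fields, tables, len(where))  # ['Name', 'Salary'] ['Emp', 'S'] 1
```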
For INSERT INTO statements, the table name will be in the DB_Operation_Tables table, with the same DB_Operation_Num. If a specific attribute sequence is specified, there will be a record in this table for each attribute. The attributes will be stored in the Field_Name field. The corresponding value that should be inserted in each attribute will be placed in the appropriate 'type'_value field, according to its type.

Figure 2.13: Representing Database Operations in SpecDB

If the DB statement is an insert statement of the following type: INSERT INTO Table_name SELECT ..., then the Insert_Select attribute will be set to true. Also, the Insert_Select_DB_Oper_Num will be filled out, and the SELECT operation will be defined in the DB_Operations table, under its Pos_Num.

For DELETE DB operations, the table name will be in the DB_Operation_Tables table,
with the same DB_Operation_Num. If there is no WHERE clause, then all the rows in the table will be deleted, and thus the Whole_Table field should be set to true. If there are specific rows to be deleted, identified by a WHERE clause, the WHERE clause will be specified in the Predicate table, under a Pos_Num equal to the position number of the current delete statement in the code. This number will also be stored in the Delete_Where_Pos_Num field.

For UPDATE DB operations, the table name will be in the DB_Operation_Tables table, with the same DB_Operation_Num. The attributes will be stored in the Field_Name field. The corresponding value that should be inserted in each attribute will be placed in the appropriate 'type'_value field, according to its type. (If there are n fields to update, there will be n records in this table with the same DB_Operation_Num.) The WHERE clause is stored in the Predicate table, with the Pos_Num equal to the DB_Condition_Num.

In a query, the same table can be used twice, with different aliases. Accordingly, the key to the DB_Operation_Tables table is a numeric ID.

2.4.11 Classes, Objects and Class Relations

SpecDB can represent object oriented constructs: classes, objects, and relationships between classes. In this section, the entities of SpecDB that hold object oriented constructs are explained.
The Classes table holds the names of the classes; the rest of the information about a class is stored in its respective location, with an identifier pointing to its class. For example, class variables, methods and encapsulation rules are stored in the Var table, described in Section 2.4.4, and in either the Procedure or the Function_Declaration tables respectively, explained in Section 2.4.5. Relationships between classes, like inheritance, association, aggregation, etc., are stored in the Class_Relationships table.

In the Class_Relationships table, two classes from the Classes table are referenced in Class_Name1 and Class_Name2. The relationship type between those two classes is stored in the Relationship attribute in the Class_Relationships table. This attribute has a list of valid values representing the types of relationships among classes, e.g. inheritance, encapsulation, etc.

Figure 2.14: Representing Object Oriented Classes and Objects in SpecDB

2.5 Conclusion

Chapter 2 contains the design of SpecDB, a database designed to hold parts of a software's specification for the purpose of unit testing. Through the use of SpecDB, automated tools can become more intelligent in generating and testing programs, based
on an understanding of what the software under construction is supposed to do, by design. The database design of SpecDB is scalable and thus allows for additions to accommodate more programming constructs or requirements, new forms of input or output, etc., as they evolve in the future. The motivation behind building SpecDB was expressed in Chapter 1: mainly, to aid the process of automated intelligent test case generation based on machine-readable specifications. In order to create these machine-readable specifications and allow for automated reasoning, SpecDB was created. A scalable representation of the specification was chosen for this task, namely a database representation. The standard technique for representing the portions of the software specifications that aid in the process of unit testing has been detailed in this chapter. Each of the entities of the SpecDB design has been explained in detail. Representing software specifications in this manner aids the processes of automated testing and automated code generation. In Chapter 3, a translation algorithm is described to translate formal specifications to the SpecDB representation, showing how a formal software specification can be automatically represented in SpecDB.
CHAPTER 3

REPRESENTING FORMAL SPECIFICATIONS IN SPECDB: A TRANSLATION ALGORITHM

In this chapter, an algorithm that translates formal software specifications to the SpecDB representation is described. The design of SpecDB, the underlying database that stores the formal specifications and design, was detailed in Chapter 2. SpecDB is a database created to represent and store parts of the formal specifications of software in a queryable, machine-readable format. The Z formal specification language has been chosen as an example for the translation algorithm. Algorithms similar to the one shown in this chapter can be created to translate other formal specification languages, such as VDM, to SpecDB. The outcome of the translation algorithm is a machine-readable representation of the original formal specifications. It is worth noting that users of SpecDB do not have to learn formal languages: the specifications can be entered directly into SpecDB as easily as populating any database with the correct data. However, this chapter gives an algorithm to transform formal specifications into the SpecDB representation, if formal specifications were used in the specification and design phases of the software development. The SpecDB representation of formal specifications aids in the processes of both automated software code generation and automated software unit testing, based on the actual software specifications.
3.1 The Translation Algorithm from Formal Specifications to SpecDB

The entities of the SpecDB database store the (formal) specifications of a program. SpecDB is designed to efficiently and accurately represent the portions of the software formal specifications required to carry out the testing techniques. Section 2.4 showed how SpecDB covers and represents different forms of formal specifications. In this section, formal specification fragments will be given together with their corresponding algorithm fragments that store these specifications in SpecDB. SpecDB was also designed to be generic, upgradeable and modifiable, as described in Chapter 1.

The examples given from the Z formal specification language in Section 3.4 are from [18,19,23]. In each subsection of Section 3.4, the Z specification syntax is shown together with the algorithm steps. The algorithm steps are numbered to distinguish them from the explanation text. Not all of the syntax of the Z specification language is discussed in this chapter. Examples are given for some of the algorithms, with numbers corresponding to the steps of the translation algorithm in each specific case, to show the reader how each step is executed. Examples of the most commonly used syntax for documenting specifications are discussed. These examples are demonstrated to prove the feasibility of the approach and show how an automated translation tool can execute this algorithm and transform formal specifications into the machine-readable SpecDB representation. The main idea of the algorithm can be understood from the steps given in the following subsections. Similar algorithms can be created to translate other formal specification syntax to the SpecDB representation as needed. An understanding of the SpecDB design detailed in Chapter 2 is needed to generate extensions to the algorithm explained in this chapter.
3.2 Assumptions and Restrictions

All operators are prefixed with the "&" sign to distinguish them from variables, user-defined types and schemas. User-defined variables and types should not have dashes, spaces or Z symbols. Var_name? is an input. Var_name! is an output. No user-defined variable can end with 1; variables ending with 1 are temporary variable names given by the algorithm, until the variable is given a name in the specifications that follow its type definition.

3.3 Preparing the Formal Specifications for Translation

Before translating the formal specifications into the representation of SpecDB, the following preparation phase tasks have to be performed:

1. Going through the formal specifications, expand complex operations to simple operations.

2. Number the specifications, giving the main control (main menu) the number 1, and increments of 10 to the rest of the specifications, in order to allow room for future enhancements and add-on specifications where needed.

3. Put either the Δ or Ξ symbol before a schema name that is the first line after another schema definition, according to the meaning of the schema.

4. Add necessary brackets to complex calculations and predicates to accurately
describe the required output. A pair of brackets should enclose one operator, with one LHS operand or component within brackets and one RHS operand or component within brackets.

3.4 Storing Formal Specification Statements into SpecDB

In the following subsections of Section 3.4, the syntax of a number of formal specification statements is provided together with its corresponding translation into the SpecDB representation. The most commonly used statements are discussed. Other formal specification statements can also be translated into the SpecDB representation by expanding on the given algorithm with the same scheme, after understanding the entities of the SpecDB design detailed in Chapter 2.

3.4.1 Defining Schema Names and Operations

Program units in the Z formal specification language are given the name schemas. In Figure 3.1, the unit or schema name is AddBirthday.

Figure 3.1: AddBirthday Schema

To represent Z schemas in SpecDB, the following steps in the algorithm have to be executed.
1. Set Current_SR to the schema name. (SR stands for Sub-Routine.)

2. Insert the name of the schema in the Procedure table, setting the Num_of_Inputs field to 0 and Changes_System_State to F for False. The operation remains a procedure until proven to be a function; then it is removed from this table and placed in the Function_Declaration table instead.

Example: for the AddBirthday schema in Figure 3.1, executing the above algorithm will result in the following state:

1. Current_SR := AddBirthday;
2. Insert record in the Procedure table: (AddBirthday, 0, F)

3.4.2 Types of Schemas

After specifying the name of a schema, as in Section 3.4.1, the nature of the schema is specified. For example, in line 70 of the AddBirthday schema in Figure 3.1:

ΔSchema1, or ΔBirthdayBook

This means that AddBirthday is part of another schema (a program in this case), i.e. AddBirthday is a subroutine definition. It indicates that after the execution of the AddBirthday schema, a change will occur in the state of the system defined by Schema1, or BirthdayBook in this case. In this case, i.e. a Δ change in the state, the following should be executed:

1. Update the Function_Declaration or Procedure record for the Current_SR, setting Changes_System_State to T for True.

Example from Figure 3.1: Update the record of the Current_SR (AddBirthday) in the
Procedure table from (AddBirthday, 0, F) to (AddBirthday, 0, T).

Another type of schema exists, one that does not change the state of the system, identified in Z by:

ΞSchema1

This symbol indicates that after the execution of the schema, no change should occur in the state of the system defined by Schema1. In this case:

1. Update the Function_Declaration or Procedure record for the Current_SR, setting Changes_System_State to F for False, if not already F.

3.4.3 User-Defined Types

User-defined types are built from the basic system-defined types, possibly with some constraints. In the Z syntax below, the basic system-defined types for these user-defined types are not specified:

[Type1, Type2, ...]

1. Insert a record in the Types table for every type until the closing bracket is reached.

Example: [NAME, DATE]

Two records should be inserted in the Types table as follows: {NAME, , N} and {DATE, , N}
3.4.4 Specifying Values for a User-Defined Type

A variable type can be a pre-defined system type, or, as part of the specification, other higher-level types can be defined. The Z syntax for values of user-defined types is as follows:

FreeType1 ::= val1 | val2 | ...

In this case, all the values after the ::= sign form a list of valid values that FreeType1 can take, and this implicitly specifies that FreeType1 cannot take any other value. In order to represent this list of values for a user-defined type in SpecDB, the following algorithm should be executed:

1. Insert a record in the Types table (FreeType1,,,L), where L is the value of the LRN field that specifies that the type has a list of valid values.

2. Set LOV_Count = 0.

3. Increment LOV_Count. Each time a "|" symbol is observed, LOV_Count is incremented, indicating that another valid value is specified next; then repeat step 4.

4. Insert a record in the Types_LOV table (FreeType1, LOV_Count, , Val, ...). Val is placed in the Text_Value field since the type is not specified. Repeat steps 3 and 4 if the next symbol read is a "|".

For example, in line 290 of Figure 3.2 below, the REPORT free type definition has three possible values: {ok, already_known, not_known}.

Figure 3.2: REPORT, User-Defined Type Specification
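Steps 1 through 4 above lend themselves to a direct implementation. The following sketch parses a free type definition into a Types record and its Types_LOV records; the tuple layouts are simplified assumptions about the actual table columns.

```python
def translate_free_type(line):
    """Translate `Name ::= val1 | val2 | ...` into one Types record
    (step 1) and one Types_LOV record per value (steps 2-4)."""
    name, _, values = line.partition("::=")
    name = name.strip()
    types_record = (name, None, None, "L")  # step 1: LRN field set to "L"
    lov_records = []
    for lov_count, val in enumerate(values.split("|"), start=1):  # steps 2-4
        lov_records.append((name, lov_count, None, val.strip()))
    return types_record, lov_records

types_rec, lov = translate_free_type("REPORT ::= ok | already_known | not_known")
print(types_rec)  # ('REPORT', None, None, 'L')
print(lov)
# [('REPORT', 1, None, 'ok'), ('REPORT', 2, None, 'already_known'),
#  ('REPORT', 3, None, 'not_known')]
```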
3.4.5 Defining Sets or Tables

Sets can be defined to describe entities in the software design. An example of a set is the set of known people in the birthday book. Sets are defined in formal languages as follows:

A: ℙ B

where A is a set, and B is a type. In order to represent the set definition above in SpecDB, the following algorithm should be executed:

1. If not already inserted, insert B in the Types table: type_name (leave all the rest of the attributes empty).

2. Create a new table, give it the name A, with one attribute (key) of type B, and name this attribute A.

3. Add A to the List_of_Tables table.

4. Add a record in the Table_Description table (A, A, B).

Example: known: ℙ NAME

1. Output: record added to the Types table: (NAME, , )
2. Table 'known' created, with one attribute, known, of type NAME
3. Record added to the List_of_Tables table: (known)
4. Record added to the Table_Description table: (known, known, NAME)
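The four steps above can be sketched against an in-memory database. The bookkeeping column names are assumptions for illustration, and the created set table uses a TEXT column as a stand-in for the user-defined type.

```python
import sqlite3

# Simplified stand-ins for the SpecDB bookkeeping entities.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE Types (type_name TEXT PRIMARY KEY);
CREATE TABLE List_of_Tables (table_name TEXT PRIMARY KEY);
CREATE TABLE Table_Description (table_name TEXT, attr_name TEXT, attr_type TEXT);
""")

def define_set(set_name, type_name):
    """Steps 1-4 for a set definition `set_name: P type_name`."""
    con.execute("INSERT OR IGNORE INTO Types VALUES (?)", (type_name,))    # step 1
    con.execute(f"CREATE TABLE {set_name} ({set_name} TEXT PRIMARY KEY)")  # step 2
    con.execute("INSERT INTO List_of_Tables VALUES (?)", (set_name,))      # step 3
    con.execute("INSERT INTO Table_Description VALUES (?,?,?)",            # step 4
                (set_name, set_name, type_name))

define_set("known", "NAME")
rows = list(con.execute("SELECT * FROM Table_Description"))
print(rows)  # [('known', 'known', 'NAME')]
```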
3.4.6 Function Declaration

System functions are higher-level processes or subroutines that collectively define the system operations. A function returns a value, unlike a procedure. Functions are defined as follows in the Z formal specification language syntax:

Func1: B ⇸ C

where Func1 is a function or set, B is an input type and C is an output type. In order to declare a function in SpecDB, the following algorithm should be executed:

1. Put all types (B and C) in the Types table (type_name), if not already inserted.

2. Consider variables B1 and C1 of types B and C respectively. Insert two records in the Var table: (#, B1, B, Func1, , T, ...), where T is true for the Is_Input attribute and # is the next sequence number in the Var table; and (#, C1, C, Func1, , T, ...), where T is true for the Is_Output attribute.

3. Insert in the Function_Declaration table (Func1, 1, C, #, C1, F, F, F), where # is the Var_ID for C1. All restrictions are false by default until restrictions are declared.

4. Insert in the Sub_Routine_IP_Description table (Func1, 1, B, B1, F, F, F).

5. Add Func1 to the List_of_Tables table.

6. Create a new table, give it the name Func1, with two fields of types B and C, and give them the names B1 and C1.

7. Add a record in the Table_Description table (Func1, B1, B).

8. Add a record in the Table_Description table (Func1, C1, C).
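The declaration steps above can be sketched as a record builder. The tuple layouts are simplified assumptions, and the temporary names B1/C1 follow the convention in Section 3.2 (type name with 1 appended). The example input is the birthday book's birthday function, from NAME to DATE.

```python
def declare_function(name, in_type, out_type, next_var_id):
    """Build the records of steps 2-4 and 7-8 for `name: in_type -> out_type`.
    `next_var_id` stands in for the next sequence number in the Var table."""
    b1, c1 = in_type + "1", out_type + "1"   # temporary variable names (Section 3.2)
    var_records = [                          # step 2
        (next_var_id, b1, in_type, name, "is_input"),
        (next_var_id + 1, c1, out_type, name, "is_output"),
    ]
    function_declaration = (name, 1, out_type, next_var_id + 1, c1,  # step 3
                            "F", "F", "F")
    ip_description = (name, 1, in_type, b1, "F", "F", "F")           # step 4
    table_description = [(name, b1, in_type), (name, c1, out_type)]  # steps 7-8
    return var_records, function_declaration, ip_description, table_description

vars_, fdecl, ipd, tdesc = declare_function("birthday", "NAME", "DATE", 1)
print(fdecl)  # ('birthday', 1, 'DATE', 2, 'DATE1', 'F', 'F', 'F')
print(ipd)    # ('birthday', 1, 'NAME', 'NAME1', 'F', 'F', 'F')
```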
3.4.7 Subroutine Input Restriction

The input of a subroutine (procedure or function) can have restrictions, based on the business rules. Subroutine input restrictions are defined as follows in Z:

A = dom Func1

In order to represent the input restriction above in SpecDB, the following algorithm should be executed:

1. Current_Var_ID := the input Var_ID for Func1.

2. In the Sub_Routine_IP_Description table, update the fields of the Func1 function to reflect this restriction: (Func1, 1, B, , F, T, F), i.e. the input B has a table restriction.

3. Insert a record in the Var_Table_Restrictions table (#, Current_Var_ID, Func1, , I, BelongsTo, A, ,), where # is the current serial number for this table.

Example: known = dom birthday

In this case, the specifications state that the set of people already in the birthday book is the set of known individuals. New people added to the birthday book should thus not already be in it, to avoid multiple entries for the same person. Given the above specification, the algorithm trace should look as follows:

1. Current_Var_ID := 1;
2. Update the Birthday record in the Sub_Routine_IP_Description table: (Birthday, 1, NAME, Name1, F, T, F)
3. Add record to the Var_Table_Restrictions table: (1, 1, Birthday, , 1, BelongsTo, Known, Known, )
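The three steps above can be sketched as a small translation function. The regular expression, helper names and record layouts below are illustrative assumptions; only the mapping from `A = dom Func1` to a BelongsTo restriction follows the algorithm.

```python
import re

def translate_dom_restriction(line, input_var_ids, serial):
    """Translate `A = dom Func1` into the step 2 update and the step 3
    Var_Table_Restrictions record. `input_var_ids` maps each function
    name to the Var_ID of its input (step 1)."""
    m = re.fullmatch(r"(\w+)\s*=\s*dom\s+(\w+)", line.strip())
    set_name, func = m.group(1), m.group(2)
    current_var_id = input_var_ids[func]                       # step 1
    ip_update = (func, 1, "has_table_restriction")             # step 2 (simplified)
    restriction = (serial, current_var_id, func,               # step 3
                   "BelongsTo", set_name)
    return ip_update, restriction

ip, restr = translate_dom_restriction("known = dom birthday", {"birthday": 1}, 1)
print(restr)  # (1, 1, 'birthday', 'BelongsTo', 'known')
```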
3.4.8 Variable Definition and Subroutine Inputs

Variables can be defined as inputs to a subroutine, or in the main operation. They are defined in the Z formal specification language as follows:

Var1? : Type1

In order to introduce the subroutine inputs above in SpecDB, the following algorithm should be executed:

1. Insert in the Var table (#, Var1, Type1, Current_SR, T, , ...). Current_SR is inserted in that record if the variable is defined in the context of an operation definition (schema). Set the value of the Is_Input field to true.

2. Set Current_Var_ID = the serial number given above (#).

3. If the variable is defined in the context of an operation, update the Procedure table, incrementing Num_of_Inputs where Proc_Name = Current_SR.

For example, line 80 in Figure 3.1: name?: NAME

In this case, a variable named name is of the type NAME. This is a general variable definition statement. However, in line 80 of Figure 3.1 it is defined within the scope of a subroutine definition; accordingly, it is an input variable to that subroutine. The trace of the algorithm for the above example should be as follows:

1. Add record in the Var table: (3, name, NAME, AddBirthday, , T, ...)
2. Current_Var_ID := 3
3. AddBirthday record updated in the Procedure table: (AddBirthday, 1, F)
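Steps 1 through 3 above can be sketched against in-memory stand-ins for the Var and Procedure tables. The record layouts are simplified assumptions; unlike the trace above, this fresh database starts empty, so the first Var_ID is 1 rather than 3.

```python
# Simplified stand-ins for two SpecDB entities.
var_table = []  # records: (var_id, name, type, sub_routine, is_input)
procedure_table = {"AddBirthday": {"num_of_inputs": 0, "changes_state": "F"}}

def declare_input(var_name, var_type, current_sr):
    """Steps 1-3 for an input declaration such as `name?: NAME`."""
    var_id = len(var_table) + 1                                       # next serial
    var_table.append((var_id, var_name, var_type, current_sr, True))  # step 1
    current_var_id = var_id                                           # step 2
    if current_sr in procedure_table:                                 # step 3
        procedure_table[current_sr]["num_of_inputs"] += 1
    return current_var_id

current_var_id = declare_input("name", "NAME", "AddBirthday")
print(current_var_id, procedure_table["AddBirthday"]["num_of_inputs"])  # 1 1
```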
3.4.9 Output Definition

Variables can also be defined as an output of some subroutine. In Z, output variables are defined by appending an exclamation mark to the right of the variable name, as follows:

Var3! : Type2

In order to define output variables in SpecDB, the following algorithm should be executed:

1. Insert in the Var table (#, Var3, Type2, Current_SR, T, , ...). Current_SR is inserted in that record if the variable is defined in the context of an operation definition (schema). Set the value of the Is_Output field to true (T).

2. Set Current_Var_ID = the serial number given above (#).

3. If the variable is defined in the context of an operation, then this operation is a function, not a procedure. Add Current_SR to the Function_Declaration table, with the same data as in its record in the Procedure table. Set Op_Type = Type2, and Op_Var_ID = Current_Var_ID. Set all restrictions to False in this record.

4. Remove Current_SR from the Procedure table.

For example, in line 180 of Figure 3.3 below: date!: DATE

Figure 3.3: FindBirthday Schema


In this case, it is understood from the specifications that FindBirthday is a function that returns an output of type DATE. The algorithm trace should result in the following:
1. Add a record to the Var table: (6,date,DATE,FindBirthday, ,T,…)
2. Current_Var_ID := 6
3. Record added to the Function_Declaration table (FindBirthday,1,DATE,6,date,F,F,F,F,…)
4. Delete the FindBirthday record from the Procedure table.

3.4.10 Setting Variable Restrictions

A variable can be restricted based on the business rules or software specifications. In Z, variable restrictions are defined as follows:

Var1? ∉ Set1

The symbol '∉' can be substituted with any other operator; any set operator can be expressed using the same method. The kind of the variable is also indicated: it can be an input variable, with the symbol '?', or an output variable, with the symbol '!', appended to the right of the variable name. In order to represent the variable restrictions defined above in SpecDB, the following algorithm should be executed:
1. Set Current_Var_ID = the Var_ID of Var1 in the Var table, for the Current_SR if any.
2. Update the Var table for Current_Var_ID, setting has_table_restrictions to true.
3. Insert record in the Var_Table_Restrictions table (#,Current_Var_ID,Current_SR,Var1, ,NotBelongTo,Set1, ). Current_SR is inserted in that record if the variable


is defined in the context of an operation definition (schema). (Substitute NotBelongTo with any other operator used.)

For example, line 100 in Figure 3.1 reads: name? ∉ known. The trace of the above algorithm should yield the following results:
1. Current_Var_ID := 3
2. The record for name in the Var table is modified: (3,name,NAME,AddBirthday, ,T,…,T,…)
3. Add a record to the Var_Table_Restrictions table: (2,3,AddBirthday,name, ,NotBelongTo,known,…)

3.4.11 Assignment Operation

The syntax for assignment operations in Z is simple, and defined as follows:

A = B

where A is a variable, and B can be a variable or a constant. However, the same assignment syntax can be used for more complex constructs than simple variables. The following algorithm should be executed to represent an assignment operation in SpecDB:
1. Insert a record in the DataFlow table (Pos#,Suc#,Assign,…). If this statement is the end of a schema definition (operation), then set Suc# to 1, i.e. main menu control, or the Suc# from the calling subroutine.
2. Set Current_Var_ID = the Var_ID for A in its subroutine context, if this statement is defined in a subroutine.


3. If B is a variable, set Current_Var_ID2 = the Var_ID for B in its subroutine context, if this statement is defined in a subroutine.
4. Insert record in the Assignment table (Pos#,Current_Var_ID,Var,…,Current_Var_ID2,…). If B is a constant, then replace Var with Const in the RHS_Type field, and put the value in the correct location according to its type.

For example, line 320 in Figure 3.4 reads: result! = ok.

Figure 3.4: Success Schema (Z schema, lines 300-320)

Below is the trace of the algorithm for the above example:
1. Add record to the DataFlow table (320,1,Assign,…)
2. Current_Var_ID := 9
3. Add record to the Assignment table (320,9,Const,Text,…,ok,…)

3.4.12 Assignment Operation and Function Call

The following formal specification syntax is more complex than those discussed above. In formal specifications, we can define both a function call and an assignment at the same time, where the output of the function call is directly assigned to a variable, as in the example below:

Var2! = func1(var1?)


In order to define function calls and assign the output to a variable in SpecDB, the following algorithm should be executed:
1. Insert a record in the DataFlow table (Pos#,Suc#,Assign,…). If this statement is the end of a schema definition (operation), then set Suc# to 1, i.e. main menu control, or the Suc# from the calling position.
2. Set Current_Var_ID = the Var_ID for var1? in its subroutine context, if this statement is defined in a subroutine.
3. Insert record in the Sub_Routine_IP_Instance table (#,1,func1,Var,Current_Var_ID,…). Repeat this for all inputs to the function, in order, incrementing the IP_Order field each time.
4. Set Inst_Num = the # used in the record inserted above in Sub_Routine_IP_Instance.
5. Set Current_Var_ID = the Var_ID for Var2! in its subroutine context, if this statement is defined in a subroutine.
6. Insert record in the Assignment table (Pos#,Current_Var_ID,Func, ,func1,Inst_Num,…), where the RHS is set to Func and FuncName to func1.

For example, in line 200 in Figure 3.3, date! = birthday(name?), the function birthday is called, given the variable name? as an input; the output of the function is stored in the output variable date!. Below is the trace of the algorithm for this example:
1. Add record to the DataFlow table (200,1,Assign,…)
2. Current_Var_ID := 5, where 5 is the Var_ID for name in its subroutine.
3. Add record to the Sub_Routine_IP_Instance table: (1,1,birthday,Var,5)


4. Inst_Num := 1
5. Current_Var_ID := 6
6. Add record to the Assignment table (200,6,Func,…,birthday,1,…)

3.4.13 Assignment Operation for Sets

As mentioned in Subsection 3.4.11, assignment operations can be more complex. In this section, the syntax of assigning a set the value of another set is discussed:

Set1 = Set2

This is a complex operation that will be broken down in the preparation phase. The resulting operations will be as follows:

Pos#:   Drop Set1
Pos#+1: Set1 = Set2 (create new table Set1 = Set2)

1. If Set1 already exists, drop the table and continue: add record in the DataFlow table (Pos#,Pos#+1,DB,…), and add a record in the DB_Operations table (Pos#,Drop_Table, ,…,Set1), where Set1 is the Table_Name field.
2. Create a new table, give it the name Set1, with fields equal to those of Set2.
3. Add record in the DataFlow table (Pos#,Suc#,DB,…). If step 1 was executed, then Pos#+1 should replace Pos#, also in the next step. If this statement is the end of a schema definition (operation), then set Suc# to 1, i.e. main menu control, or the Suc# from the calling subroutine.


4. Add a record in the DB_Operations table (Pos#(+1),Create_Table, ,…,Set1,Set2,), where Set1 and Set2 are the Table_Name and As_Table_Name fields.

3.4.14 Assignment Operation for a Set

The formal specification syntax allows for resetting set values, as follows:

Set1 = ∅

In this case, the assignment operation on the set deletes all the elements in the set, i.e. all the table records, and changes the state of the system as follows:
1. Add record in the DataFlow table (Pos#,Suc#,DB,…). If this statement is the end of a schema definition (operation), then set Suc# to 1, i.e. main menu control, or the Suc# from the calling subroutine.
2. Add a record in the DB_Operations table (Pos#,Delete,…)
3. Add a record in the DB_Operation_Tables table (Pos#,Set1,)
4. Add a record in the DB_Operation_Details table (Pos#,1,…,T,…), where the Whole_Table field is set to T for true.

An example is shown in line 280 in the InitBirthdayBook schema in Figure 3.5 below.

Figure 3.5: InitBirthdayBook Schema (Z schema, lines 260-280)

3.4.15 Inserting Records in the Database

Elements can be added to sets; in other words, records can be added to tables. The syntax in Z is as follows:

Set1 = Set1 ∪ {Var1 ↦ Var2}


In order to represent this complex operation in SpecDB, the following algorithm should be executed:
1. Add a record in the DataFlow table (Pos#+2,Suc#,DB,…). If this statement is the end of a schema definition (operation), then set Suc# to 1, i.e. main control, or the Suc# from the calling subroutine.
2. Add a record in the DB_Operations table (Pos#+2,Insert,…).
3. Add a record in the DB_Operation_Tables table (Pos#+2,Set1,…)
4. Set Current_Var_ID = the Var_ID for Var1 in its specific subroutine, if any.
5. Add a record in the DB_Operation_Details table (Pos#+2,1,…,Current_Var_ID)
6. Set Current_Var_ID = the Var_ID for Var2 in its specific subroutine, if any.
7. Add a record in the DB_Operation_Details table (Pos#+2,2,…,Current_Var_ID)
(Current_Var_ID is inserted in the From_Var_ID field for steps 5 and 7.)

For example, line 110 in Figure 3.1 reads: birthday' = birthday ∪ {name? ↦ date?}. This is a complex operation that is broken down in the preparation phase. The resulting operations will be as follows:

Line 110: birthday_ = birthday (creating a new table)
Line 112: birthday_ = birthday_ ∪ {name? ↦ date?}

For line 112: add record in the DataFlow table (112,1,DB,…); add record to DB_Operations (112,Insert,…); add record to DB_Operation_Tables (112,birthday_,…); set Current_Var_ID := the Var_ID of name?,


e.g. 3, and add a record to DB_Operation_Details (112,1,…,3,…); then set Current_Var_ID := the Var_ID of date?, e.g. 4, and add a record to DB_Operation_Details (112,2,…,4,…).

3.4.16 Output Table Definition

A system may give various forms of outputs, and sets can be one of those output types. In Z, output sets are defined as follows:

Var1! : ℙ Type1

Var1 is an output table or set in this case; this is known from its type. In order to represent this definition in SpecDB, the following algorithm should be executed:
1. If not already inserted, insert Type1 in the Types table: type_name (all the rest of the attributes will be empty).
2. If table Var1 exists, drop it: add record in the DataFlow table (Pos#,Suc#,DB,…) and add record in the DB_Operations table (Pos#,Drop_Table,…,Var1,…), where Var1 is the table name. If not, add Var1 to the List_of_Tables table.
3. Create a new table, give it the name Var1, with one attribute (key) of type Type1, and name this attribute Var1.
4. Add a record in the Table_Description table (Var1,Var1,Type1)
5. Insert a record in the Var table (#,Var1,DB_Table,Current_SR,T,,…). Current_SR is inserted in that record if the variable is defined in the context of an operation definition (schema). The true (T) is the value of the Is_Output field.
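Steps 2 through 4 of the output-table algorithm can be sketched as follows; the Z-type-to-SQL-type mapping and the helper name are assumptions for this sketch.

```python
import sqlite3

con = sqlite3.connect(":memory:")

def define_output_table(table_name, type_name, sql_type="TEXT"):
    """Steps 2-4 above for 'Var1! : P Type1': drop the table if it already
    exists, then recreate it with a single key attribute named after the
    table itself, and return the Table_Description record to insert."""
    con.execute(f"DROP TABLE IF EXISTS {table_name}")
    con.execute(f"CREATE TABLE {table_name} ({table_name} {sql_type} PRIMARY KEY)")
    return (table_name, table_name, type_name)

# Line 240 of the Remind schema: cards! : P NAME
record = define_output_table("Cards", "NAME")
print(record)   # ('Cards', 'Cards', 'NAME')
```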


If Var1 is the only output for a subroutine, the subroutine still cannot be considered a function, since tables cannot be passed as parameters. The output table will be available for access outside the scope of the current subroutine; when this subroutine, or any other utilizing the same table, is invoked, this table will be dropped and recreated, and the data redefined. For example, consider line 240 in Figure 3.6.

Figure 3.6: Remind Schema (Z schema, lines 210-250)

In this case the algorithm will add Cards to the List_of_Tables table; create table Cards, with one attribute (key), named Cards, of type NAME; add a record in the Table_Description table (Cards,Cards,NAME); and add a record in the Var table (8,Cards, ,Remind, ,T,…), where the Is_Output attribute is set to T.

3.4.17 Assigning a Variable to a Calculation

The Z syntax of a calculation assigned to a variable is given below:

Var1 = ((Func2 ∧ Func3) ∨ Func4)

Any other right-hand-side calculation can be represented using the same method. The operators can be of any type that matches the results of the operands. In order to represent this complex calculation and assignment operation in SpecDB, the following algorithm should be executed:


1. Add record in the DataFlow table (Pos#,Suc#,Assign,…). If this statement is the end of a schema definition (operation), then set Suc# to 1, i.e. main menu control, or the Suc# from the calling subroutine.
2. Set Current_Var_ID = the Var_ID for Var1 in its subroutine context, if this statement is defined in a subroutine.
3. Insert record in the Assignment table (Pos#,Current_Var_ID,Calc, ,Pos#,…), where Cal_Num = Pos#.
4. Reset Component_Num = 0.
5. Increment Component_Num, following the rule for inserting components in the Calculation or Predicate table.
6. Insert record in the Calculation table (Pos#,Component_Num,Func,…,Func2,Func,…,Func3,,AND).
7. If the calculation is a complex one like the one shown, repeat steps 5 and 6 until all the components of the calculation are specified.

Line 440 in Figure 3.7 is an example of assigning the variable result! either the value "Success" or "AlreadyKnown", depending on whether the input name and date already exist in the birthday book:

440  result! = (AddBirthday ∧ Success) ∨ AlreadyKnown

Figure 3.7: RAddBirthday Schema (Z schema, lines 390-440)
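The repetition of steps 5 and 6 amounts to flattening a nested expression into numbered component records, where a component may refer to an earlier one. The tuple shape below is a simplified view of a Calculation-table row, assumed for illustration:

```python
def decompose(expr, rows):
    """Flatten a nested (left, op, right) tuple into ordered component
    records; an operand is either a function name or a reference to a
    previously inserted component."""
    left, op, right = expr
    l = decompose(left, rows) if isinstance(left, tuple) else left
    r = decompose(right, rows) if isinstance(right, tuple) else right
    rows.append((len(rows) + 1, l, op, r))   # (Component_Num, lhs, op, rhs)
    return f"comp#{len(rows)}"

rows = []
# Line 440: result! = (AddBirthday AND Success) OR AlreadyKnown
decompose((("AddBirthday", "AND", "Success"), "OR", "AlreadyKnown"), rows)
for row in rows:
    print(row)
# (1, 'AddBirthday', 'AND', 'Success')
# (2, 'comp#1', 'OR', 'AlreadyKnown')
```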


3.4.18 Constraining the Values Populating a Table or Set

The syntax of the formal specification language Z allows for specifying how a set or a table can be populated. In the example below, a table is populated from specific results of a query on another table:

Var1! = {n : Set1 | func1(n) = var2}

Var1 is an output table (set); func1 is also a table, or a database operation on a table. The first part of this statement, "Var1! = {", populates a table (set); assigning a variable to a set means that this variable is really a set of values, which may include zero or more values. The second part, "{n : Set1 |", introduces a new variable of type Set1; it also indicates that n is in itself a set (table) that is a subset of Set1. The "|" indicates that there is a restriction on n: it does not include all the values of Set1, just those that satisfy the condition on the right-hand side. The final part of this statement is the restriction, "func1(n) = var2". These four parts will be defined in SpecDB as in this translation to an SQL statement:

Insert into Var1
  (Select Field_Name1 from Set1
   Where Field_Name1 in
     (Select Field_Name2 from func1
      Where Field_Name3 = Var2))


The statement will be specified as above, using Pos# and Pos#+1. The field names and the quantities will be known by inquiring the already specified design.
1. Insert into the DataFlow table (Pos#,Suc#,DB,…). If this statement is the end of a schema definition (operation), then set Suc# to 1, i.e. main menu control, or the Suc# from the calling subroutine. In this case Suc# should not equal Pos#+1, since this value will be used to define this statement.
2. Insert record in the DB_Operations table (Pos#,Insert,,Pos#,…,Var1,,T), where the value of DB_Condition_Num = Pos#, Table_Name = Var1 and As_Select = T for true.
3. Set Field_Name = Field_Name from the Table_Description table where Table_Name = Set1.
4. Insert record in DB_Operation_Details (Pos#,1,,Field_Name,…)
5. Insert record in DB_Operation_Tables (Pos#,Set1,)
6. Reset Component_Num = 1 for the Predicate table; with each insert in the Predicate table with the same Pos#, increment the Component_Num.
7. Insert record in Predicate (Pos#,Component_Num,DB_Field,…,Field_Name,…,Pos#+1,…), where Left_DB_Field = Field_Name and In_DB_Oper_Num = Pos#+1.
8. Insert record in the DB_Operations table (Pos#+1,Select, ,Pos#+1,…)
9. Set Field_Name = SR_Var_Name from the SubRoutine_IP_Description table where Subroutine_Name = func1 and type = that of Set1.
10. Insert record in DB_Operation_Details (Pos#+1,1,,Field_Name,…)


11. Insert record in DB_Operation_Tables (Pos#+1,func1,)
12. Set Field_Name = OP_Var_Name from the Function_Declaration table where Func_Name = func1.
13. Set Current_Var_ID to the Var_ID of var2.
14. Reset Component_Num = 1 for the Predicate table; with each insert in the Predicate table with the same Pos#, increment the Component_Num.
15. Insert record in Predicate (Pos#,Component_Num,DB_Field,…,Field_Name,Var,Current_Var_ID,…,Equal), where Left_DB_Field = Field_Name, Right_Component_Type = Var, Right_Var_ID = Current_Var_ID and Operator_ID = Equal.

Figure 3.6, line 250, is an example of such an operation: cards! = {n : known | birthday(n) = today?}. The steps for this complex operation will be defined in SpecDB as in the following translation of the statement at line 250:

Insert into cards
  (Select known from known
   Where known in
     (Select Name1 from Birthday
      Where Date1 = today))

The statement should be specified as above, using Pos# 250 and Pos# 251.
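Assuming known and birthday have been realized as tables (with the column names Name1 and Date1 taken from the translation above), the generated statement behaves as follows in a small in-memory database:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE known (known TEXT);              -- the set of known names
CREATE TABLE birthday (Name1 TEXT, Date1 TEXT);
CREATE TABLE cards (cards TEXT);              -- output set of the Remind schema
INSERT INTO known VALUES ('Ann'), ('Bob'), ('Eve');
INSERT INTO birthday VALUES ('Ann','25-Mar'), ('Bob','12-Jun'), ('Eve','25-Mar');
""")

# The translated statement for line 250, with today? bound to '25-Mar'.
today = "25-Mar"
con.execute("""
INSERT INTO cards
  SELECT known FROM known
  WHERE known IN (SELECT Name1 FROM birthday WHERE Date1 = ?)
""", (today,))
print(sorted(r[0] for r in con.execute("SELECT cards FROM cards")))
# ['Ann', 'Eve']
```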


3.5 Conclusion

In Chapter 3, examples were given to show how software specifications written in a formal specification language like Z can be represented in SpecDB. The formal specification language Z was chosen because it is one of the more common formal specification languages. Representing formal specifications in SpecDB aids the processes of automated testing and automated code generation. Through the use of SpecDB, automated tools can become more intelligent in generating and testing programs, based on an understanding of what the software under construction is supposed to do, by design. The scalable database design of SpecDB allows for additions to accommodate more programming constructs, new forms of input and output, and new formal specification syntax as they evolve in the future.

In this chapter, a method to translate some examples of formal specifications written in Z into the unified SpecDB representation was illustrated. Automated tools can easily be developed, based on the algorithms in this chapter, to carry out the translation from well-formulated and complete specifications written in Z, or other formal methods, into the SpecDB representation. Once the specifications have been stored in SpecDB, automated tools can reason about them and thus automatically and intelligently create test cases to test the software. In Chapter 4 and Chapter 5, two examples of such intelligent testing applications are discussed: an automated constraint generator that enforces business rules at the database level of applications, and a reverse engineering automated testing tool.


CHAPTER 4
AUTOMATED CONSTRAINTS GENERATOR: AN APPLICATION OF SPECDB

In this chapter, one of the tools that uses SpecDB as a foundation is introduced: an automated database constraints generator that creates constraints from business rules represented in SpecDB. The main advantage of database-level constraint creation is the enforcement of some of the business rules at the database level, and not only at the application level. If business rules are enforced only within the code of the application running on a database, with no database-level constraints on the tables, the possibility exists for inconsistent, unreliable data. Errors in the data may result from not enforcing and coding the business rules every time a change is made to the state of the database, or when the data is altered at the database level rather than the application level. The algorithm used to automatically generate database-level constraints is explained. The tables from SpecDB used in this process are also highlighted, together with an explanation of how the individual entities collectively describe business rules represented by the specifications expressed in SpecDB.


The use of SpecDB illustrated in this chapter focuses only on the specifications required to generate database-level constraints for the purpose of testing the validity of the data in a database. In some production databases, database administrators prefer to drop constraints in order to gain speed during system use. This strategy may result in faster system response, but can threaten data consistency, completeness, correctness and integrity. However, if the technique explained in this chapter is used to generate constraints, data errors will be exposed, giving a chance to clean the production data. The tool described in this chapter can also be used in the data cleaning phase of building a data warehouse.

The algorithm in this chapter automatically generates a script for all the database constraints representing data-level business rules. When the script is run, all the constraints are created in the database. Once the constraints are created and the database integrity is checked, all the data that does not satisfy the constraints will be identified. The algorithm also creates another script for dropping all the constraints, should the system administrators wish to drop them again. As business rules are added to or changed in SpecDB, the algorithm in this chapter will generate the new or modified constraints.

A new table, Constraints, is introduced for SpecDB, in addition to the original design; an extension to one of the existing tables, Table_Description, is also made to include some required business rules for the fields. The addition and extension of these tables was necessary to store some business rules to aid the process of automatic constraint generation. From SpecDB, Type_Restrictions, Type_LOV, Predicate and DB_Operation_Tables are also used.
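The paired create and drop scripts can be sketched as follows; the rule list here is a hypothetical stand-in for the business rules that would be read out of SpecDB:

```python
# Each entry: (table, constraint_name, clause) -- assumed SpecDB-derived input.
rules = [
    ("Employee", "emp_age_ck", "CHECK (Age >= 18)"),
    ("Employee", "emp_ssn_uq", "UNIQUE (SSN)"),
]

# One script to create every constraint, and a companion script to drop them
# again if the administrators prefer to run without database-level checks.
create_script = "\n".join(
    f"ALTER TABLE {t} ADD CONSTRAINT {n} {c};" for t, n, c in rules)
drop_script = "\n".join(
    f"ALTER TABLE {t} DROP CONSTRAINT {n};" for t, n, _ in rules)

print(create_script)
print(drop_script)
```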


The extensions of the above-mentioned tables highlight the scalability of the SpecDB implementation. Other tools can be created, like the one presented in this chapter, to test other sections of software applications. If an extension of SpecDB is required to represent the specifications necessary for the completion of a new automated tool's function, a technique similar to the one used in this chapter can be adopted to extend the SpecDB representation.

4.1 The Design of the Additional Tables in SpecDB

In this section, each of the new entities of the SpecDB design is defined and explained in more detail. SpecDB is a database; its entities store the specifications of a software application. SpecDB is designed to efficiently and accurately represent the specifications necessary for unit testing database applications. In the representation described in Chapter 2, it was stated that the same concept used to create SpecDB can be used to add more tables to represent other aspects of a software application, as needed. In this chapter, a new table, Constraints, is added to SpecDB, in addition to the original design. Also, Table_Description was extended to become Table_Fields_Description. The addition and extension of these tables was necessary to store some business rules, and the storage of business rules in SpecDB provided the basis for automatic constraint generation for the purpose of data validity testing. From SpecDB, the List_Of_Tables, Type_Restrictions, Type_LOV, Predicate and DB_Operation_Tables tables are also used.


Throughout the subsections of Section 4.1 are figures showing parts of the relational database schema of the design of SpecDB. The relations between the entities are shown, as are the types of the fields in each entity: numeric types are identified by the symbol "789", text or character types are identified by "A", and date or time types are identified by a calendar card. Table keys are identified by a "#" symbol, and whether each field is optional or mandatory is identified by "O" or "*" respectively. Some fields are constrained to a list of valid values; for example, Is_Unique is a Boolean value that can only be T for true or F for false. If the value of a field is constrained to a list of valid values, a relation symbol is shown to the left of the field name. All the table specifications and field types are written in an SQL script in Appendix A.

4.1.1 Using List_of_Tables and Extending Table_Description

The List_of_Tables entity stores all the software/user-defined sets and tables. The Table_Description entity stores the names of the fields of the user-defined tables and sets, and their respective types.

Figure 4.1: SpecDB Original Table_Description

The SpecDB Table_Description entity is extended to become Table_Fields_Description, as shown in Figure 4.2. This table stores the fields of the user-defined tables and the features of each field. The Field_ID is the Table_Name concatenated with the


Field_Name, separated by an underscore symbol. Accordingly, Field_ID is a unique value in this table for every record (since a table cannot have two fields with the same name). The Field_ID is used to identify a specific field in the rest of the tables joined to Table_Fields_Description. Is_Primary_Key is a Boolean value indicating whether the field is a primary key of its entity; if Is_Primary_Key is true, then the algorithm in Section 4.2 will create a PRIMARY KEY constraint for this field on its table. Is_Unique is also a Boolean value, indicating whether the field is unique for each record; if Is_Unique is true, then the algorithm in Section 4.2 will create a UNIQUE constraint for this field on its table. Optional_Mandatory can be either O for optional or M for mandatory, indicating whether the field can be left blank for each record; if the value of Optional_Mandatory is "M", then the algorithm in Section 4.2 will create a NOT NULL constraint for this field on its table. Has_LOV_Restrictions is also a Boolean value, used to identify whether there is a list of valid values that the values of this field have to take.

Figure 4.2: Table_Fields_Description Table


Any other value not in the valid list of values is considered invalid. If the value of Has_LOV_Restrictions is true, the algorithm in Section 4.2 creates a table for the valid values referencing the Types_LOV table; it also creates a REFERENCES constraint on this field. Has_Table_Restrictions is similar to Has_LOV_Restrictions: if the value of this field is true, then Which_Table will store the table that this field references, and Which_Field will store the specific field referenced in the specified table. The algorithm in Section 4.2 will create a REFERENCES constraint if the value of Has_Table_Restrictions is true. Finally, Has_Range_Restrictions indicates whether there are other restrictions on the range of valid values for the specified field. For example, the value of "Age" for employees should be any value between 18 and 74. A list of values could be created; however, for situations such as this one, it is best to specify a valid range of values. In this case, the algorithm in Section 4.2 creates a CHECK constraint. For the latter example, the range is specified in the Types table, and a CHECK constraint is created to enforce that range of valid values.

Other restrictions, like specifying a range excluding one or more values, are stored in the Type_Excluded_Ranges table. The Constraints table also stores some other, more complex restrictions, as shown in Section 4.1.2. If there are other types of constraints, the Has_Constraint_Restrictions attribute is set to true. Multiple constraints can exist for the same field; accordingly, the number of the other constraints is specified in the Num_of_Constraint_Restrictions attribute.
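The flag-to-constraint mapping described above can be sketched as a small generator. The dictionary below is a hypothetical Table_Fields_Description record, and the emitted DDL follows Oracle-style syntax; both are assumptions for this sketch, not the tool's actual output.

```python
def constraints_for(field):
    """Map one Table_Fields_Description record (simplified to a dict) to the
    constraint statements the Section 4.2 algorithm would emit."""
    table, name = field["table_name"], field["field_name"]
    out = []
    if field.get("is_primary_key") == "T":
        out.append(f"ALTER TABLE {table} ADD PRIMARY KEY ({name});")
    if field.get("is_unique") == "T":
        out.append(f"ALTER TABLE {table} ADD UNIQUE ({name});")
    if field.get("optional_mandatory") == "M":
        out.append(f"ALTER TABLE {table} MODIFY {name} NOT NULL;")
    if field.get("has_range_restrictions") == "T":
        lo, hi = field["range_from"], field["range_to"]
        out.append(f"ALTER TABLE {table} ADD CHECK ({name} BETWEEN {lo} AND {hi});")
    return out

# The employee-age example: mandatory, with a valid range of 18 to 74.
age = {"table_name": "Employee", "field_name": "Age",
       "optional_mandatory": "M", "has_range_restrictions": "T",
       "range_from": 18, "range_to": 74}
for ddl in constraints_for(age):
    print(ddl)
```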


4.1.2 Adding the Constraints Table to SpecDB and Using the Predicate Table

The Constraints table was added to specify certain business rules that are not represented in the Table_Fields_Description table, as well as dynamic business rules. The constraints specified in Section 4.1.1 in the Table_Fields_Description table are static, meaning that a requirement such as "an SSN is unique" is a fact that needs to be enforced on the tables regardless of the data in the tables. However, some business rules are dynamic. For example, the salary of an employee should not exceed that of his/her manager: the salary of the manager is not pre-set, i.e. not static, and different employees have different managers. In order to enforce such business rules at the database level, the Constraints table is added to SpecDB, and other tables, like DB_Operation_Tables and Predicate, are used to store those business rules. The Constraints table is defined in Figure 4.3. The tables used by a constraint are stored in DB_Operation_Tables; the conditions and rules are stored in the Predicate table.

On_Field is a Boolean value indicating whether the constraint is on a specific field. If On_Field is true, then there is a constraint on the value of the specified field; for example, the employee age has to be greater than 18. In this case Field_Name, DB_Name and Table_Name will store the name of the field (Age in this example), the name of the database where the table is, and the name of the table containing the specified field, respectively. Field_Name and Table_Name are required to link the Table_Field_Description table and the Constraints table. A snapshot of the data in the SpecDB tables representing Constraint 1 is shown in Tables 4.1, 4.2 and 4.3.

Constraint 1: employee age ≥ 18
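Once generated, Constraint 1 behaves as follows. The sketch uses SQLite's inline CHECK syntax (rather than a generated ALTER TABLE script) so that it runs against an in-memory database; the column names are assumptions.

```python
import sqlite3

con = sqlite3.connect(":memory:")
# Constraint 1 attached at table-creation time: employee age >= 18.
con.execute("CREATE TABLE Employee (Emp_ID INTEGER PRIMARY KEY, Age INTEGER "
            "CONSTRAINT EAgeGT18 CHECK (Age >= 18))")
con.execute("INSERT INTO Employee VALUES (1, 30)")      # satisfies the rule
try:
    con.execute("INSERT INTO Employee VALUES (2, 15)")  # violates the rule
    rejected = False
except sqlite3.IntegrityError:
    rejected = True
print(rejected)   # True: the rule is now enforced at the database level
```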


Table 4.1: Snapshot of SpecDB Table_Field_Description for Constraint 1

Table_Name  Field_Name  ..  Is_Unique  Has_Range_Restrictions  Has_Constraint_Restrictions  ..
Employee    Age         ..  F          F                       T                            ..

Table 4.2: Snapshot of SpecDB Constraint Table for Constraint 1

Constraint_Name  Constraint_Num  On_Field  Field_Name  Table_Name  ..  Constraint_Clause_Pred_Num  ..
EAgeGT18         27              T         Age         Employee    ..  79                          ..

Table 4.3: Snapshot of SpecDB Predicate Table for Constraint 1

Pos_Num  Component_Num  Left_Operand_Type  ..  Lt_Table_Name  Lt_DB_Field  Right_Operand_Type  ..  Right_Num_Const  Operator_ID
79       1              DB_Field           ..  Employee       Age          Const               ..  18               >=

If On_Field is false, this means that the constraint is not defined on only one field: there are a number of tables, or a number of fields, involved in the constraint, i.e. a complex business rule or constraint. In this case the relation between the values of some fields in specified tables has to be governed by the constraint.

Figure 4.3: Constraint Table

If, for example, the salary of an employee should not exceed that of his/her manager, the employee entity is involved twice in this constraint, once as a manager and once as an


employee. There are also other fields involved: the salary, the manager's ID, and the employee ID. The employee table will be used twice in this case, once to reference the employee and once to reference the manager. For each case the alias is different; for example, alias 'e' is used for the employee and alias 'm' is used for the manager. The rule itself is placed in the Predicate table. To illustrate how this constraint is represented in SpecDB, snapshots of the SpecDB tables needed to represent it are shown below.

Constraint 2: employee's salary ≤ employee's manager's salary.

Table 4.4: Snapshot of SpecDB Table_Field_Description for Constraint 2

Table_Name  Field_Name  ..  Is_Unique  Has_Range_Restrictions  Has_Constraint_Restrictions  ..
Employee    Salary      ..  F          F                       T                            ..

Table 4.5: Snapshot of SpecDB Constraint Table for Constraint 2

Constraint_Name  Constraint_Num  On_Field  Field_Name  Table_Name  Constraint_Clause_Pred_Num  Condition_Pred_Pos_Num  ..
ESalLTMSal       28              F         Salary      Employee    80                          81                      ..

Table 4.6: Snapshot of SpecDB Predicate Table for Constraint 2

Pos_Num  Component_Num  Left_Operand_Type  ..  Left_Table_Name  Left_Table_Alias  Left_DB_Field  Rt_Op_Type  Rt_Table_Name  Rt_Table_Alias  Rt_DB_Field  Operator_ID
80       1              DB_Field           ..  Employee         e                 Salary         DB_Field    Employee       m               Salary       <=
81       1              DB_Field           ..  Employee         e                 Manager_ID     DB_Field    Employee       m               Emp_ID       =

The constraint in Table 4.5 states that there exists a restriction on the employee salary. The constraint itself (employee salary ≤ manager salary) is placed in the Predicate table at Pos_Num 80. However, this is not the complete constraint: the relation between the


two employees is also stated in the Predicate table, at Pos_Num 81. Using the Predicate table in combination with the Constraints table allows a wide variety of constraints and business rules to be represented in SpecDB.

Figure 4.4: Representing Complex Constraints in SpecDB

Among the complex business rules, for example: the age of an employee should not exceed 125% of that of his manager, or the increase in an employee's salary should not exceed 125% of his previous salary. The latter example is a comparison between the value of a field before and after a modification. To identify old and new values, :old.field_name and :new.field_name are used, respectively, in the Left_DB_Field or Right_DB_Field field in the Predicate table. This complex business rule requires a constraint clause in the Predicate table with two components. The first component is :old.salary * 1.25; in this case the left operand is a database field, the right operand is a constant and the operator is the


multiplication operator. The second component constrains :new.salary <= component 1, i.e. :new.salary <= :old.salary * 1.25.

4.1.3 Other Tables Used from SpecDB: Type_Excluded_Ranges and Types_LOV

SpecDB allows for user-defined restrictions on the fields of the user-defined entities. Restrictions can be one of the following:

4.1.3.1 Representing List of Values (LOV) Restrictions

A List Of valid Values, i.e. any variable of the specified type can only take on one of the values in the LOV. In this case, the valid values for a specified field are placed in the Types_LOV entity, each in a separate record.

4.1.3.2 Representing Other Field Restrictions

If a type is restricted to a set of valid values that are better specified by ranges, the ranges are specified in the Types table. If there are some values to exclude from the range of valid values, or even a range to exclude, the rules are stored in the Type_Excluded_Ranges entity to define the allowable range. For example, for a Month_Number type, valid values are in the range 1 to 12. Hence the value of the "Has_Range" field in the Types table is set to True, Range_From_Int is set to 1 and Range_To_Int is set to 12. A range can also be specified with exclusions, e.g. integers from -150 to 150 excluding 0. In this case the range -150 to 150 is specified in the Types table, and the exclusion of the value 0 is specified in the int_value field of a record in the Type_Excluded_Ranges table.
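A range restriction with exclusions translates naturally into a SQL CHECK predicate. The sketch below, a hypothetical illustration rather than the tool's actual code, composes such a predicate from a range plus a list of excluded values or sub-ranges; the field names are assumptions for the example.

```python
# Building a CHECK predicate from a Types-table range plus
# Type_Excluded_Ranges exclusions (single values or (lo, hi) sub-ranges).

def check_clause(field, lo, hi, exclusions=()):
    """Compose a SQL CHECK predicate: value in [lo, hi], minus exclusions."""
    parts = [f"{field} BETWEEN {lo} AND {hi}"]
    for excl in exclusions:
        if isinstance(excl, tuple):           # a sub-range to exclude
            parts.append(f"{field} NOT BETWEEN {excl[0]} AND {excl[1]}")
        else:                                 # a single excluded value
            parts.append(f"{field} <> {excl}")
    return " AND ".join(parts)

print(check_clause("month_number", 1, 12))
# month_number BETWEEN 1 AND 12
print(check_clause("reading", -150, 150, exclusions=[0]))
# reading BETWEEN -150 AND 150 AND reading <> 0
```

The second call mirrors the -150 to 150 excluding 0 example above: one BETWEEN term from the Types table and one exclusion term per Type_Excluded_Ranges record.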


Figure 4.5: Representing Types, Extending the Initial Representation

4.2 Automatically Generating Database-Level Constraints: The Algorithm

After describing the tables used by the automatic constraint generator tool in Section 4.1, this section describes the algorithm required to generate the different types of constraints outlined in the previous section. There are two main modules for the automated constraint generator. The first is a straightforward constraint generator with little computational intelligence that generates static constraints on the values of the fields. This first module creates database-level constraints. The second is a more complex constraint generator that handles complex business rules that cannot be expressed by simple database constraint operations, but need more complex stored procedures for triggers to enforce the business rules at the database level. Accordingly, the second module automatically generates stored procedure code for triggers, as opposed to the simple database constraints of the first module.


In Section 4.2.1 simple constraints are generated. In Section 4.2.2, the algorithm for the stored procedure code generator is described.

4.2.1 Generating Simple Static Constraints – The Algorithm

In this section, the simple algorithm for the automatic constraint creator is described. The actual code is provided in Appendix C. The constraints automatically created by the algorithm in this section handle primary key constraints, uniqueness, not-null (mandatory) constraints, simple check constraints for static field values, lists of values and relation constraints.

1. Open a file to create the SQL script that creates the constraints, and another to drop all the constraints.

2. For every constraint created and appended to the SQL constraint script file, append a drop constraint command to the drop constraints script, so as to create two SQL script files, one for creating all the constraints and one for dropping them.

3. For every record in the Table_Field_Description table do

a. if the field is a primary key (Primary_Key = T)
then append to the script file the Alter Table command to add a primary key constraint on the current table (Table_Name) for the current field (Field_Name).

b. if the field value is unique (Is_Unique = T)
then append to the script file the Alter Table command to add a unique constraint on the current table for the current field.

c. if the field is mandatory (Mandatory = T)


then append to the script file the Alter Table command to add a Not Null constraint on the current table for the current field.

d. if the field should have a value from a list of valid values
then append to the script file the following commands:
Create table command; give it the name of the table appended to the field, for example Employee_MonthHired.
Insert into the newly created table all the values from Types_LOV where the Type_Name is the current field ID.
Alter the current table, adding a foreign key constraint referencing the newly created table.

e. if the field has table restrictions, i.e. it references another field in the same or another table, this is a referential integrity constraint that is enforced by the RDBMS once the constraint is created. Accordingly, there is no need to create triggers to check whether the constraint is satisfied upon the deletion of a record in the referenced table, since this is enforced by the RDBMS. However, we need to specify the referential integrity constraint as done below:
then append to the script file a foreign key constraint for the current field and table, referencing the table and field specified in the which_table and which_field attributes.


f. if the field has a range of valid values
then query the Types table for the range boundaries (range_from, range_to values).
Append a Check constraint to the script file to check that the value of the field is between the two boundary values obtained.
Since there can be one or more exclusions from the range of valid values, query the Type_Restrictions table for exclusions on the valid range specified.
For each exclusion, determine whether it is a single value excluded from the range, or a range to exclude.
Append to the script another Check constraint to check that the value of the field is not between the specified range in the exclusion, or is not the specific value to exclude.

g. if the field has constraint restrictions
then query the Constraint table for the current table and field.
if a constraint exists in the Constraint table where the On_Field attribute is set to true
then using the Constraint_Clause_Pred_Num attribute for that constraint, query the Predicate table for the record(s) where Pos_Num = Constraint_Clause_Pred_Num.
If there is only one record (i.e. one component) in the Predicate table
then append to the script file a check constraint on the current field


and table to satisfy the predicate, i.e. the left operand followed by the operator ID, followed by the right operand.
Else if there are multiple components,
use a temporary empty text variable or database text field to build the constraint from the predicate components.
Starting with component 1, append to the text variable or DB field the left operand, operator ID, and right operand; repeat with the following components until all components are built.
Append all components to one another with the correct operators into one predicate.
Append to the script file the Check constraint for the current field and table with the built predicate.

4. Commit the changes to the open files and close the two script files.

Case 3.g is for representing a constraint similar to constraint 1 discussed in Section 4.1.2. In this case, a constraint can have a simple one-component predicate, e.g. Age >= 18, or it can have a complex multiple-component predicate, e.g. Annual Salary + Annual Bonus <= 125% Annual Salary. For the latter example, the constraint can be on the Annual Bonus field. Three components in the Predicate table represent this constraint:

1. Annual Salary * 1.25 ……. Component 1
2. Annual Salary + Annual Bonus ……. Component 2
3. Component 2 <= Component 1
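The per-record loop of steps 3.a through 3.c can be sketched as follows. This is a minimal illustration under assumed row layouts, not the Appendix C code; the real tool reads SpecDB's Table_Field_Description table and also emits the matching drop-constraints script.

```python
# Sketch of the simple static constraint generator: one pass over
# Table_Field_Description rows, emitting ALTER TABLE statements for
# primary key, uniqueness, and mandatory (not-null) flags.

def generate_constraints(rows):
    """Emit ALTER TABLE statements for PK / unique / mandatory flags."""
    script = []
    for r in rows:
        tbl, fld = r["table"], r["field"]
        if r.get("primary_key"):
            script.append(f"ALTER TABLE {tbl} ADD CONSTRAINT pk_{tbl} "
                          f"PRIMARY KEY ({fld});")
        if r.get("is_unique"):
            script.append(f"ALTER TABLE {tbl} ADD CONSTRAINT uq_{tbl}_{fld} "
                          f"UNIQUE ({fld});")
        if r.get("mandatory"):
            script.append(f"ALTER TABLE {tbl} MODIFY {fld} NOT NULL;")
    return script

rows = [{"table": "Employee", "field": "emp_ID", "primary_key": True},
        {"table": "Employee", "field": "SSN", "is_unique": True,
         "mandatory": True}]
for stmt in generate_constraints(rows):
    print(stmt)
```

Each generated statement would be appended to the create-constraints script, with a corresponding DROP CONSTRAINT line appended to the second script, as step 2 specifies.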


The algorithm specifies the use of a temporary database table containing one text field, or a text variable, to form the complete constraint predicate statement before appending the constraint to the script file.

4.2.2 Generating Complex Dynamic Business Rules Enforcement Code

More complex business rules involving dynamic checking of the data cannot be enforced using simple database constraints like those described in Section 4.2.1. Complex operations similar to constraint 2 discussed in Section 4.1.2 can only be created using database assertions or stored procedure code for a database-level trigger. Accordingly, in this section we look at how the automated constraint generator automatically generates procedure code for triggers to enforce those complex dynamic business rules. The algorithm is better understood following an example. Constraint 2 is used for this purpose in this section:

Constraint 2: employee's salary <= employee's manager's salary.

There are two main criteria for specifying such a constraint. The first is the constraint that needs to be satisfied; in this example it is:

e.salary <= m.salary

The second criterion is the relationship(s) between the entities that are involved in this constraint. For Constraint 2, the relationship is the specification 'Employee's Manager', which, translated into PL/SQL code, renders a where clause: e.manager_id = m.employee_id.


The first criterion, the constraint that needs to be satisfied, can be a simple one-component constraint, or it can be more complex, requiring more than one component in the Predicate table to represent it. The same applies to the second criterion, the relationship between the entities. Again, there can be none, one or more components in the predicate to specify the relationship between the entities involved in the constraint. The data needed for the automatic creation of procedure code to enforce complex dynamic constraints is represented in the design of the SpecDB tables discussed in Section 4.1. From Tables 4.4, 4.5 and 4.6, the algorithm in Section 4.2.2.1 automatically generates the procedure code for a trigger to enforce Constraint 2.

The full algorithm covering all cases is decision intensive; one of the cases is given here to illustrate the idea. However, there are other paths in the algorithm where the operands are not database fields but, for example, constants. Handling components in the Predicate table with constant operands is simpler than handling database field operands, as shown in Section 4.2.2.1. An example of handling constants was shown in Section 4.2.1. There are also paths through the algorithm where the key to the table being modified is a composite key, meaning that multiple fields collectively form the database table key. Before the algorithm below is executed, the database table key is identified from the Table_Field_Description table. In the algorithm shown in Section 4.2.2.1, a simple database table key is handled. The same technique can be used to handle composite keys. Finally, there are cases in the algorithm involving multiple components for a predicate. The technique used in the algorithm in Subsection 4.2.1 to handle complex multiple-component predicates is also used here for the same purpose.


4.2.2.1 Complex Business Rules Generation Algorithm

The algorithm described in this section is part of the whole decision-intensive algorithm to automatically create procedure code for database-level triggers, as discussed in the previous section. A file is created for the procedure code of the trigger; every statement in the algorithm is appended to the file. Also, the values of the fields involved in the update operation that triggers the execution of the procedure code created by the algorithm below are referred to as 'New' values, in the 'Current' record.

1. Open a file to create the procedure code that creates the constraint, and another to drop all the triggers created.

2. Determine the field name that is being updated from the Constraint table.

3. Create a Trigger to be executed before update on Constraint.Table_Name, Constraint.Field_Name.

4. Determine the type of the field stored in right_DB_field in the Predicate table.

5. Create a variable of the type identified in step 4; append the alias to the right_DB_field and use the result as the name of the variable (e.g. m_salary).

6. Determine the key to the table being modified.

7. Query the Constraint table to identify the pos_num for the predicate representing the constraint clause (a), and that representing the condition (b).


8. Append a select statement as follows:
Select right_table_alias.right_db_field into the variable created in step 5 (the data used here is from (a) in step 7).
From right_table_name right_table_alias, left_table_name left_table_alias (the data used here is from (a) in step 7).
Where left_table_alias.Key = CurrentKey and left_table_alias.left_db_field operator_id right_table_alias.right_db_field (the data used here is from (b) in step 7).

9. From step 2, the field being updated has been determined; now determine the new value of that field (Current_Field & New_Field_Value).

10. Append to the file the decision statement: If Not (New_Field_Value operator_id right_table_alias.right_db_field) (the data used here is from (a) in step 7) then raise and handle an exception.

For Constraint 2, the following Select statement is created by the above algorithm:

Select m.salary into m_salary
from employee m, employee e
where e.employee_id = Current_Emp_ID
and m.employee_id = e.manager_id;
if not (Current_Salary <= m_salary) then raise_exception;
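The composition of steps 8 and 10 can be sketched as a string builder. This is a hypothetical illustration under assumed dictionary keys (mirroring the Predicate table columns), not the tool's actual generator; the values follow the Constraint 2 example.

```python
# Composing the trigger's SELECT and check statement from the constraint
# clause row (Pos_Num 80) and the condition row (Pos_Num 81).

def trigger_check(clause, cond, key_field, key_value):
    """Build the PL/SQL-style SELECT-into plus the IF NOT check."""
    var = f"{clause['rt_alias']}_{clause['rt_field']}"       # e.g. m_salary
    select = (
        f"SELECT {clause['rt_alias']}.{clause['rt_field']} INTO {var}\n"
        f"FROM {clause['rt_table']} {clause['rt_alias']}, "
        f"{clause['left_table']} {clause['left_alias']}\n"
        f"WHERE {clause['left_alias']}.{key_field} = {key_value}\n"
        f"AND {cond['left_alias']}.{cond['left_field']} "
        f"{cond['operator']} {cond['rt_alias']}.{cond['rt_field']};"
    )
    check = (f"IF NOT (:new.{clause['left_field']} {clause['operator']} {var}) "
             f"THEN raise_exception; END IF;")
    return select + "\n" + check

clause = {"left_table": "employee", "left_alias": "e", "left_field": "salary",
          "rt_table": "employee", "rt_alias": "m", "rt_field": "salary",
          "operator": "<="}
cond = {"left_alias": "e", "left_field": "manager_id",
        "rt_alias": "m", "rt_field": "employee_id", "operator": "="}
print(trigger_check(clause, cond, "employee_id", "Current_Emp_ID"))
```

The clause row supplies the SELECT target and the final comparison, while the condition row supplies the join between the two Employee aliases, exactly as in the generated statement above.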


4.3 Conclusion

In this chapter, an automated tool that generates database constraints from business rules is described. It is also shown how to enforce those rules at the database level by automatically creating constraints and assertions using the data stored in SpecDB. The design of a new table, Constraint, is explained, and an extension of the design of the Table_Description table of SpecDB, renamed Table_Field_Description, is shown. Those modifications were made in order to allow for the representation of business rules. The automated tool introduced in this chapter creates one script to create all the constraints and another to drop them. System administrators can choose to run production systems with fewer constraints, or no constraints at all, in order to enhance the speed of production systems. They can run the tool described in this chapter to create the constraints and automatically test the validity of the data at certain times of the week, maintaining the accuracy, completeness and integrity of the system. Through the use of SpecDB, automated tools can become more intelligent in generating and testing programs, based on an understanding of what the software under construction is supposed to do, by design. The design of SpecDB allows for future enhancement and additions to accommodate more programming constructs or requirements as they evolve.


CHAPTER 5

A REVERSE ENGINEERING TESTING TOOL USING SPECDB

Many testers and available automated testing tools decide that a software application has passed a specific test case if the expected outputs match the actual outputs, based on black box testing. This is, however, not absolute proof of the quality of the software for that test scenario. The software application can still commit implicit errors that are not directly reflected in the output values. For example, in a database application, if the social security number of an employee is given as an input, and the system outputs the correct required information about that employee, as specified for an operation, this is not enough for a tester or a testing tool to decide that the software passed for this operation. There is a possibility that some other records in the database have been mistakenly deleted or modified during or after the output is displayed. In another example, from a system security perspective, an e-commerce tool takes credit card information and customer confidential data, carries out the transaction, sends a sales order to the correct department, and accomplishes all the tasks to complete the sale transaction. The system could also send the credit card information to an external non-secure site. Such a security leak cannot be identified by black box testing techniques.


The motivation to think of a testing tool solution that would spot such implicit errors of commission arose from the above discussion, where the software application carries out operations that are not part of the specifications. White box testing techniques, in which the actual implementation code is examined, are a good basis for such a solution. If a testing tool can trace an implemented program, generate the specifications of the actual implementation, and present them in a non-technical form that can be understood by testers and end users, it will then be very easy to spot such errors of commission. In this chapter the operation of this tool is explained. It is called a reverse engineering testing tool, or RE tool, since it generates the specification of the implemented software from its implementation. Such specifications can then be fed into SpecDB as an instance of software specifications and can easily be automatically compared to another set of specifications, the required specifications of that software, i.e. comparing actual to specified behavior. The determination of whether the implemented software matches the required specifications can then be easily made, with the guarantee that no errors of commission have gone unnoticed.

In this chapter, a reverse engineering approach is used to log the actual execution of a program, due to some input, using white box testing techniques. Once the program actions have been logged, they are presented in a way that non-technical testers and other members of the software development team can understand. The actual outcome can then be easily matched and compared to the expected output, based on the program specifications and oracles that determine what the expected program output should be for the specified inputs.

The decision-to-decision path (also known as DD-path) white box testing technique is modified and used to implement this tool, but for a different purpose; namely, to determine


the actual program behavior given specific input, and not as a coverage matrix to determine which parts of the software are covered by some test cases, or as a guide to generate test cases to cover all DD-paths in a program. Instead, the technique for DD-path testing is used to determine actual program behavior.

5.1 Technique

The main idea of the reverse engineering (RE) testing tool is to follow the program execution and report the inputs it receives, the actual outputs it generates, and the process it follows. These three perspectives, input, actual output and process, are all represented in a simple format so that non-programmers are able to read and interpret how the program executed and whether there are specifications that it did not execute. This outcome will aid in the process of identifying and isolating faults. Faults are identified when the tester (or automated tool, using SpecDB) compares the expected output to the actual output. Faults are isolated by following the process execution behavior provided as an outcome of the approach explained in this chapter and comparing it to the specifications. For a specific test case on specified inputs, the RE tool described in this chapter will show how the executed portion of the software got its inputs and how it generated its outputs, and finally it presents all the outputs of the executed software. Those three outcomes of the tool (inputs, computation, and output) are presented in an easily understandable format described in the following sub-sections.

Triggers are programmed units that execute if a pre-specified event occurs. For example, a database trigger can be implemented to check the constraint on a table when a certain field is


modified. GUI triggers execute when a certain event happens in the GUI, for example a button click, a menu item click, a modification to the value of a field, etc. In order to make the RE tool more effective, it is designed not to test the specified unit in isolation, ignoring changes that occur due to triggers that execute as a result of the state changes or output results of the unit being tested, but to consider all the modifications that happen to the system, including those of triggers. A domino effect might result from following triggers, since a state change or an output of one trigger may trigger another to execute. The specifications of the RE testing tool include that it should query all triggers, and note the possible domino effect, displaying as part of its own output all the system and state changes as well as all the outputs that result from the execution of the unit being tested.

5.1.1 Input and Pre Conditions Classification and Categorization

The first outcome of the RE testing tool is identifying all the inputs used by the software in its execution of the test case. Inputs can be both explicit and implicit. Explicit inputs are those given to the software using a GUI component or a command line. Implicit inputs are values that the software implicitly acquires during execution, for example when the application queries a database table for the value of a specific field in a specific record. Such values are used in the computations, and the outcome of the software is affected by those implicitly acquired inputs too. The format in which the RE tool identifies what kinds of input the program received is described in this section.

Database applications are dynamic by nature, depending on the input state of the database. This means that when a test case is executed on a database application with the


same input values several times, the output can be different for each execution if the data stored in the database changed, and yet it can be decided that the application passed that test. Database applications are thus unlike a function, which depends solely on explicit parameters. For example, a calculator application is a function: each time you enter 4 × 7 into a calculator, it should output the same value, 28. However, each time you query the social security database for the number of living citizens, it might give you a different number. It is thus important for testers or automated testing tools working on such data-centric applications to know the inputs, both explicit and implicit, to be able to judge whether the software being tested has passed a test case.

Inputs can be presented to a program in many forms. Among the forms are GUI objects and data from a database or files. It is assumed that other forms of input, like barcode readers, can be considered as data from GUI components, since a barcode reader replaces manual keyboard entry of data in a text field, for example.

For inputs from the database, a specific database table, field and row identifier are required to determine a single value as an input. For example, the Employee table has a Salary field. There are as many values for this field as there are records, i.e. employees, in this table. An employee identifier is also required to specify a single input salary value. Once the value is extracted from the database, it can be stored in a variable to be used in the computation process. For queries that return multiple values of the same field, i.e. no record identifier is chosen, the results of those queries are placed at run time in what Oracle calls a cursor. Oracle uses cursors to handle computations on input values, one record at a time. The end result is that only one value per field is handled for computation at a time in a


loop, i.e. one record at a time. Accordingly, the above assumption that one field and one row identifier provide a single input is also valid when using a cursor in a loop for computation involving a query that returns multiple rows. In this case, in each iteration through the loop, the value of the input variable will change because the row identifier changes.

Inputs to a program can also be presented through a GUI component, like a text box, a choice from a drop-down menu, or a value from a combo box containing a list of possible values, etc. If input is taken from a GUI component, several identifiers are needed to specify which GUI component the input is taken from, unless every component has a unique identification number or name. For example, if an employee first name is entered in a text field, the window identifier where that text field is located should be given as part of the component definition, together with an identifier of that specific text field, since there can be many text fields in the same window, a text field for last name, for example. The type of the component should also be specified for computation purposes; in this example the component is a text field.

If input is taken from a file, the file name and location are specified. The type of file is necessary for computation purposes, for example a text or binary file, etc.

In Table 5.1, the specifications for the different forms of input are summarized, i.e. how each input to the software being tested is identified as an outcome of the RE testing tool. The RE testing tool determines all the kinds of inputs given to or acquired by a software application during execution of a scenario (test case). Each input is given a number and described


depending on its type, i.e. obtained from a database (DB), GUI or file. Types of GUI components can include "Button", "Text", "Radio" for a radio button and "Check" for a check box, etc.

Table 5.1: Input and Pre Conditions Classification and Categorization

| Category      | Classification                              |
| Database (DB) | Table, Field, Row #, Value, Into variable   |
| GUI           | Window, Component, Type, Value, Into variable |
| File          | Name/location, Type, Into variable          |

In the next subsections, examples of the first set of outputs provided by the RE testing tool are given, namely the identified inputs of a software execution. The examples show how the tool communicates the implicit and explicit inputs to a software application in a non-technical form, following the categorization in Table 5.1.

5.1.1.1 Examples of Inputs from the Database Displayed by the RE Testing Tool

Table 5.1 summarizes how the RE testing tool represents a database input to the program. Table 5.2 shows some examples of database inputs. Table 5.2 specifies that the fifth input given to/acquired by the program is from the database, specifically from the Salary field in the Employee table, at row #14. The value of Salary at row 14 when read was 56000, and this value was assigned to the internal variable "Salary".


Table 5.2: Examples of Inputs from a Database

| Input # | Category | Table    | Field    | Row # | Value  | Into variable |
| 5       | DB       | Employee | Salary   | 14    | 56000  | Salary        |
| 6       | DB       | Employee | EmpLName | 14    | Gibson | LastName      |

5.1.1.2 Examples of Inputs from the GUI Displayed by the RE Testing Tool

Table 5.1 also summarizes how the RE testing tool represents a GUI input to the program. Table 5.3 shows some examples of how the testing tool identifies GUI inputs.

Table 5.3: Examples of GUI Inputs

| Input # | Category | Window         | Component     | Type    | Value     | Into variable |
| 1       | GUI      | Employee_Entry | Save          | Button  | Click     |               |
| 2       | GUI      | Employee_Entry | SSN           | TextBox | 123456789 | E_SSN         |
| 3       | GUI      | Employee_Entry | Female        | Radio   | Y         | Female        |
| 4       | GUI      | Employee_Entry | Speak_Spanish | Check   | N         | Spanish       |

Table 5.3 indicates that the first input to the program was a click on the Save button in the Employee_Entry window. This might have triggered the operation or computation that is taking place. The second input was the social security number, taken from the SSN field; the value taken was 123456789 and it was saved in the E_SSN variable to be used in the computation. The outcome of the RE tool is represented in this simple form to aid the process of specification comparison and isolating the sources of errors. The outcome can also be stored directly in SpecDB, as will be shown in Section 5.3.

5.1.2 Output and Post Conditions Classification and Categorization

The second and most important outcome of the RE testing tool is the full set of outputs and system state modifications that resulted from the software during execution of a
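One way to picture the log entries of Tables 5.2 and 5.3 is as simple records following the Table 5.1 categorization. The sketch below is an assumption about how such records might be modeled; the field names mirror Table 5.1, and the formatting is purely illustrative.

```python
# A hypothetical record type for the RE tool's input log, following the
# Category/Classification layout of Table 5.1.

from dataclasses import dataclass
from typing import Optional

@dataclass
class InputRecord:
    input_num: int
    category: str              # "DB", "GUI", or "File"
    source: tuple              # (table, field, row) or (window, component, type)
    value: Optional[str] = None
    into_variable: Optional[str] = None

    def describe(self):
        """Render one log line in a non-technical, readable form."""
        loc = ".".join(str(p) for p in self.source)
        dest = f" -> {self.into_variable}" if self.into_variable else ""
        return f"#{self.input_num} [{self.category}] {loc} = {self.value}{dest}"

rec = InputRecord(5, "DB", ("Employee", "Salary", 14), "56000", "Salary")
print(rec.describe())  # #5 [DB] Employee.Salary.14 = 56000 -> Salary
```

A GUI entry such as row 1 of Table 5.3 would use the (window, component, type) form of the source tuple, with no destination variable for a button click.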


scenario or test case. There is a great advantage to displaying to the development and quality assurance team all the outputs and changes to the system state, both explicit and implicit, so that hidden errors of commission and security flaws can be identified easily. Similar to the inputs, there are explicit and implicit outputs, or post-conditions, of the execution of the software application being tested. Explicit outputs are those that can be observed by functional black box testing techniques. Examples are the display of data in GUI components, a printed report, etc. However, there can also be some other implicit outputs, like exporting information to another non-secure location. This kind of implicit output, which cannot be spotted by black box testing techniques, and the post-conditions of the software execution of a particular test case scenario are all displayed as an outcome of the RE testing tool.

In the remaining part of this section, an explanation is given to show how the RE tool describes different forms of system outputs and post-conditions, including system state modifications and both implicit and explicit system outputs. In Table 5.4, the RE testing tool's description of twelve different forms of outputs or post-conditions is given, namely: database (DB), GUI, file, sequence, message, report, variable, role, privilege, commit, savepoint and finally rollback. A modification in the database state could either be a deleted or updated row, or a newly inserted row. If the change in the database is a deletion or insertion, only the row number is presented. If the database change resulted from a row, or a field in a row, that has been updated, the row number, field name, value stored and the variable from which the value was taken are all presented by the RE testing tool. In case a database sequence has been modified, the new value is displayed by the


RE tool. Also, if a message is displayed, the message text is given. If a report is one of the outputs of the software, the RE testing tool displays the destination of the report, whether it is to a printer, a file, or on the screen. If a global (persistent) variable has been modified after an operation is completed, the RE testing tool also displays its new value. This can help isolate the source of some errors depending on calculations based on global variables. If the software application created or changed a database user's role, or granted or revoked database privileges, the RE tool also indicates that the role or privilege, specified by the displayed identifier, has been granted to or revoked from the specified user.

In database applications, if modifications have been made but not committed (saved), and a rollback operation has been triggered, the data in the database could become corrupt, or the operation that triggered the rollback command would reverse the operation of another unit previously executed. System testers can have a hard time isolating where the resulting database state got its values after testing a previous operation and affirming that it operates correctly. When the RE testing tool displays all the operations of the system and all the implicit outputs, the software development and testing team can immediately recognize whether the execution of the scenario being tested committed the changes to the database or not, and accordingly spot errors of omission, namely forgetting to commit the changes to the database, so that the executed operations will not be rolled back accidentally.


Table 5.4: Output and Post Conditions Classification and Categorization

| Category      | Classification                                  |
| Database (DB) | Table, Change, Field, Row #, Value, From variable |
| GUI           | Window, Component, Type, Value, From variable   |
| File          | Name/location, Type, Change                     |
| Sequence      | Name, New Value                                 |
| Message       | Text                                            |
| Report        | Name, Destination                               |
| Variable      | Name, Value, From variable                      |
| Role          | User, Role Identifier, Change                   |
| Privilege     | User, Privilege Identifier, Change              |
| Commit        | Database Name                                   |
| SavePoint     | Database Name, Savepoint Name                   |
| Rollback      | Database Name, Savepoint Name                   |

The RE testing tool also displays the operation of the unit or software being tested, showing step by step how the outputs got their values. In Section 5.3, an example of the execution of the RE testing tool is given, showing how it displays the inputs, outputs and operation of the software being tested.

5.1.2.1 Examples of Outputs Displayed by the RE Testing Tool

As described in the previous section, the RE testing tool displays several types of outputs or post-conditions resulting from the execution of a scenario. Table 5.5 shows an example of how the RE testing tool displays a GUI output of the application being tested. In this case, the RE testing tool identifies that the application running the scenario has displayed the employee picture from a file located in C:/images/gib1.jpg, in the Emp_Picture GUI component on the Employee_Data window or screen.

Table 5.5: Example of Outputs from the GUI

| Output # | Category | Window        | Component   | Type  | Value              | From variable |
| 1        | GUI      | Employee_Data | Emp_Picture | Image | C:/images/gib1.jpg |               |


5.2 Case Study

After discussing the benefits and motivation for the creation of the RE testing tool, and explaining how it displays the different forms of inputs and outputs of the software being tested, this section describes a case study that shows the functionality of the tool. To demonstrate the RE testing tool's ability to spot errors, three examples are given in Section 5.3. The examples are different scenarios for the same case study program: an implementation of the vacation salesman commission problem, adopted from [5]. Section 5.2.1 explains the specifications, or problem statement. Section 5.2.2 shows the implementation of the program, to allow for white box testing and to show how the RE testing tool traces the computation of the software and displays its outputs. As noted previously in this chapter, the RE testing tool adopts a technique similar to DD-path testing; accordingly, the DD-paths and the DD-path graph are shown in Sections 5.2.3 and 5.2.4. Section 5.3 then gives three examples using the problem described in Section 5.2, to show how the RE testing tool can help spot errors.

5.2.1 Problem Statement for the Vacation Salesman Commission Program

The vacation salesman commission problem is adopted from [5]. Since it is a reasonably small problem, it serves to show the functionality of the RE testing tool. This section gives the problem statement of the vacation salesman commission program.

A vacation planner sells cruises and airline tickets and books shore excursions to a certain destination. A one-week cruise costs $450 per person, airline tickets from his city to the cruise port of departure cost $300 per person, and the shore excursions and activities


package costs $250 per person. The vacation planner hires a salesman. A salesperson has to sell at least one cruise ticket, one airline ticket, and one shore activities package per month to stay in the job. The vacation planner has a maximum of 70 cruise tickets to book per month, 80 airline tickets, and 90 shore excursion packages. Occasionally, the salesman reports to his employer which vacation items he sold and how many of each. The employer enters this in the database, under the salesman's name and the date of sale. At the end of the month, the employer computes the salesman's commission by simply clicking a button. The salesman gets 10% on all sales up to and including $10000, 15% on the next $8000, and 20% on any sales in excess of $18000. An on-screen report is issued for the salesman showing the total number of cruises, airline tickets, and shore excursions sold. The report also shows the salesperson's total dollar sales and his commission.

5.2.2 Implementation

In order for the RE testing tool to accomplish its task of identifying faults of commission or omission, a technique similar to DD-path testing is used in the RE testing tool to determine how the software application being tested computed the resulting output, what output values it gave, and how it modified the system state, if it did. A detailed explanation of DD-path testing can be found in [5]. White box testing techniques require the actual implementation code; accordingly, the implementation of the vacation salesman commission program is presented in this section. The implementation will be used in the examples in Section 5.3, and also to generate the DD-paths for this program. Each line of the implementation of the Vacation Salesman Commission program shown


below is numbered, to help in the DD-path generation in the subsequent sections and in the examples following the case study. The code is written in PL-SQL; it is triggered when the Calculate Commission GUI button is clicked, to calculate the commission of the given salesman in the given month and year. The salesman ID, month, and year are three text fields in the GUI window; the implementation uses these values as inputs. After querying the database for all the sales of the month and calculating the commission, the program displays the total cruises, air tickets, and activities sold, the total sales, and the commission in five text fields on the same GUI window. Accordingly, the system takes three GUI inputs and further information from the database, and displays five outputs. All exception handling and error checking code has been removed for simplicity; the code given is executed after checking that all input is present and in the correct format. Below is the implementation:


1  Salesman Commission Calculate Button WHEN-BUTTON-PRESSED Trigger
2  Declare cruise, air_ticket, activity integer;
3  Cruise_Price, Air_Price, Activity_Price, Total_Sales, commission real;
4  Number_Cruises_Sold, Number_Air_Tickets_Sold, Number_Activities_Sold integer;
5  Dollar_Cruise_sales, Dollar_Air_sales, Dollar_Activity_sales real;
6  cursor all_month_sales is
7  select s_cruise, s_air_ticket, s_activity from sales
8  where s_salesman = :salesman and s_month = :month and s_year = :year;
9  Begin
10 Cruise_Price := 450;
11 Air_Price := 300;
12 Activity_Price := 250;
13 Number_Cruises_Sold := 0;
14 Number_Air_Tickets_Sold := 0;
15 Number_Activities_Sold := 0;
16 open all_month_sales;
17 loop
18 fetch all_month_sales into cruise, air_ticket, activity;
19 exit when all_month_sales%notfound;
20 Number_Cruises_Sold := Number_Cruises_Sold + cruise;
21 Number_Air_Tickets_Sold := Number_Air_Tickets_Sold + air_ticket;
22 Number_Activities_Sold := Number_Activities_Sold + activity;
23 end loop;
24 close all_month_sales;
25 :Total_Cruises_Sold := Number_Cruises_Sold;
26 :Total_Air_Ticket_Sold := Number_Air_Tickets_Sold;


27 :Total_Activities_Sold := Number_Activities_Sold;
28 Dollar_Cruise_sales := Cruise_Price * Number_Cruises_Sold;
29 Dollar_Air_sales := Air_Price * Number_Air_Tickets_Sold;
30 Dollar_Activity_sales := Activity_Price * Number_Activities_Sold;
31 Total_Sales := Dollar_Cruise_sales + Dollar_Air_sales + Dollar_Activity_sales;
32 :Total_Sales := Total_Sales;
33 if Total_Sales > 18000
34 then
35 commission := 0.10 * 10000;
36 commission := commission + (0.15 * 8000);
37 commission := commission + (0.20 * (Total_Sales - 18000));
38 else if Total_Sales > 10000
39 then
40 commission := 0.10 * 10000;
41 commission := commission + 0.15 * (Total_Sales - 10000);
42 else commission := 0.10 * Total_Sales;
43 end if;
44 end if;
45 :Salesman_Commission := commission;
46 end;
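As a cross-check on the branch logic in lines 33-44 above, the tiered commission rule can be restated outside PL-SQL. The following Python sketch mirrors the problem statement; the function and variable names are ours, not part of the implementation.

```python
def commission(total_sales):
    """Tiered commission per the problem statement: 10% on sales up to
    and including $10,000, 15% on the next $8,000, and 20% on any
    sales in excess of $18,000."""
    if total_sales <= 10000:
        return 0.10 * total_sales
    if total_sales <= 18000:
        return 0.10 * 10000 + 0.15 * (total_sales - 10000)
    # Top tier: full first two tiers plus 20% of the excess over $18,000.
    return 0.10 * 10000 + 0.15 * 8000 + 0.20 * (total_sales - 18000)
```

For total sales of $18,500, the scenario used in Test Case 1, this yields $1,000 + $1,200 + $100 = $2,300, matching the expected commission.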


5.2.3 Program Graph for the Vacation Salesman Commission Problem

From the implementation in Section 5.2.2, one can construct the program graph shown in Figure 5.1. A detailed description of how to obtain this program graph from the implementation is available in [5]. The program graph is the stage that needs to be created in order to extract the program's DD-paths. The DD-paths are used to trace program behavior on specific input.

Figure 5.1: Program Graph for the Vacation Salesman Commission Problem (nodes correspond to statements 9-46 of the implementation)


Table 5.6: DD-Path Graph for the Vacation Salesman Commission Problem

DD-path  Nodes
A        9
B        10-16
C        17
D        18
E        19
F        20-23
G        24-32
H        33
I        34-37
J        38
K        39-41
L        42
M        43
N        44
O        45
P        46

5.2.4 DD-Path Graph for the Vacation Salesman Commission Problem

From the program graph in Figure 5.1 in Section 5.2.3, the DD-path graph in Figure 5.2 can be automatically derived from the DD-paths in Table 5.6. Jorgensen [5] explains how to derive the DD-paths for an implementation. The DD-path graph and the DD-paths are used in the RE tool. Figure 5.3 shows a higher-level module diagram for the RE tool. The unit implementation and test case scenario are taken as inputs; the different processes of the RE tool are shown, followed by the RE tool output, representing the inputs and outputs to and from the system, both implicit and explicit, and also the process that the software executed to reach the results shown.

Figure 5.2: DD-Path Graph for the Vacation Salesman Commission Problem
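Table 5.6 makes it straightforward to collapse an executed-statement trace into the DD-path sequence reported in Section 5.3. The following Python sketch shows one way to do this; the table is transcribed from Table 5.6, but the collapsing logic is our reading of the step, not the RE tool's actual internals.

```python
# DD-paths from Table 5.6: each path covers a contiguous range of
# statement numbers in the implementation of Section 5.2.2.
DD_PATHS = {
    "A": (9, 9),   "B": (10, 16), "C": (17, 17), "D": (18, 18),
    "E": (19, 19), "F": (20, 23), "G": (24, 32), "H": (33, 33),
    "I": (34, 37), "J": (38, 38), "K": (39, 41), "L": (42, 42),
    "M": (43, 43), "N": (44, 44), "O": (45, 45), "P": (46, 46),
}

def path_of(stmt):
    """Return the DD-path label covering a statement number."""
    for label, (lo, hi) in DD_PATHS.items():
        if lo <= stmt <= hi:
            return label
    raise ValueError(f"statement {stmt} not in any DD-path")

def traversed(statement_trace):
    """Collapse an executed-statement trace into its DD-path sequence."""
    paths = []
    for stmt in statement_trace:
        label = path_of(stmt)
        if not paths or paths[-1] != label:
            paths.append(label)
    return "".join(paths)

# The statement trace of Test Case 1 (two fetched rows, top-tier branch):
trace = ([9] + list(range(10, 17)) + [17, 18, 19] + list(range(20, 24))
         + [17, 18, 19] + list(range(20, 24)) + [17, 18, 19]
         + list(range(24, 33)) + [33] + list(range(34, 38)) + [44, 45, 46])
```

Applied to the trace above, `traversed` reproduces the path sequence ABCDEF CDEF CDE GHI NOP that the RE testing tool reports for Test Case 1.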


In the following Section 5.3, three examples that test the vacation salesman commission program are demonstrated, based on the implementation, DD-paths, and DD-path graph generated in this section.

Figure 5.3: RE Tool Model Diagram (modules: Code Simplifier/Code Line Numbers, Program Graph, DD_Paths Identifier, DD_Paths Graph Creator, Executed DD_Paths Extractor; inputs: Unit Code, Test Case; outputs: Explicit Inputs, Implicit Inputs, Process, Explicit Outputs, Implicit Outputs)

5.3 Examples

From the case study described in Section 5.2, three examples are discussed in the following subsections. Each example demonstrates a different state, showing how the RE tool can spot errors of commission that are not identified by the other tools discussed in the literature review in Chapter 1.


5.3.1 Test Case 1

5.3.1.1 Scenario

This example demonstrates a simple scenario for the purpose of showing how the RE tool technique works. The salesman reports twice to his employer during a certain month. The first time, he reports that he sold 8 cruise vacations, 12 airline tickets, and 10 shore excursion packages. The second time, he reports that he sold 12 cruise vacations, 8 airline tickets, and 4 shore excursion packages. The employer enters the data as sent each time; therefore, the database will have two records in the sales table for that specific salesman for that specific month. At the end of the month, the employer enters the employee name and the required month and year, then hits the Calculate Commission button to view the report on the screen.

5.3.1.2 Expected Output

According to the calculations, the total sales amount sums to $18500. Since the sales are greater than $18000, the salesperson should get 10% on all sales up to and including $10000, 15% on the next $8000, and 20% on the remaining $500, adding up to $2300 in commission.

5.3.1.3 Expected GUI Output

The expected GUI output on a system monitor for the above scenario is shown in Figure 5.4. There is no database state change expected from this scenario; it is only an inquiry, not a data entry scenario. Accordingly, the input and output database states


should be identical. Also, there should not be any other output or post condition beyond the expected GUI output shown in Figure 5.4.

Figure 5.4: The Expected GUI Output for Test Case 1

Employee: Gibson    Month: 9    Year: 2003
Total Cruises Sold this month: 20
Total Air Tickets Sold this month: 20
Total Shore Excursions Sold this month: 14
Total Sales: $18500
The Salesman Commission for this month: $2300

5.3.1.4 The RE Tool Display of the Inputs for Test Case 1

When the RE testing tool runs on the vacation salesman commission program implementation for the scenario specified in test case 1, it produces three main outcomes. First, it gives all the inputs, implicit and explicit; those inputs are shown in this section. Second, it shows which DD-paths have been traversed and the operation of the software, including how it computed each output value. The third outcome of the RE testing tool is a list of all the outputs produced by the software during execution of the test case. The operation and outputs are shown in the next section. Tables 5.7 and 5.8 show how the RE testing tool displays the inputs of the vacation salesman commission program, given from the GUI and acquired from the database. These two tables are the first outcomes of the RE testing tool for test case 1.


Table 5.7: The RE Testing Tool Displayed Database Inputs for Test Case 1

Input #  Category  Table  Field         Row #  Value  Into variable
1        DB        Sales  s_cruise      15     8      cruise
2        DB        Sales  s_air_ticket  15     12     air_ticket
3        DB        Sales  s_activity    15     10     activity
4        DB        Sales  s_cruise      17     12     cruise
5        DB        Sales  s_air_ticket  17     8      air_ticket
6        DB        Sales  s_activity    17     4      activity

Table 5.8: The RE Testing Tool Displayed GUI Input for Test Case 1

Input #  Category  Window           Component  Type     Value   Into variable
7        GUI       Commission_Calc  Salesman   TextBox  Gibson  NULL
8        GUI       Commission_Calc  Month      TextBox  9       NULL
9        GUI       Commission_Calc  Year       TextBox  2003    NULL

5.3.1.5 Process and Actual Output

The RE testing tool also displays two other outcomes: the outputs of the program after executing the test case, and how the values of the outputs were obtained. The RE testing tool displays the following for test case 1.

At execution time, the program traversed the following nodes in the DD-path graph: ABCDEF CDEF CDE GHI NOP. The following statements were executed: 9, 10-16, 17, 18, 19, 20-23, 17, 18, 19, 20-23, 17, 18, 19, 24-32, 33, 34-37, 44, 45, 46.

The RE testing tool also displays the outputs and the calculations. The development or testing team can refer to the calculations if errors are found in the output; the calculations, or the operation of the program unit resulting in the values of the outputs, thus aid in fault isolation. The following is what the RE testing tool displays for test case 1; outputs are tabulated.


Cruise_Price = 450
Air_Price = 300
Activity_Price = 250
Number_Cruises_Sold = 0
Number_Air_Tickets_Sold = 0
Number_Activities_Sold = 0
Number_Cruises_Sold = 0 + 8
Number_Air_Tickets_Sold = 0 + 12
Number_Activities_Sold = 0 + 10
Number_Cruises_Sold = 8 + 12
Number_Air_Tickets_Sold = 12 + 8
Number_Activities_Sold = 10 + 4

Table 5.9: GUI Output for Test Case 1 Part 1

Output #  Category  Window          Component              Type     Value  From Variable
1         GUI       CommissionCalc  Total_Cruises_Sold     Textbox  20     Number_Cruises_Sold
2         GUI       CommissionCalc  Total_Air_Ticket_Sold  Textbox  20     Number_Air_Tickets_Sold
3         GUI       CommissionCalc  Total_Activities_Sold  Textbox  14     Number_Activities_Sold

Dollar_Cruise_sales = 450 * 20
Dollar_Air_sales = 300 * 20
Dollar_Activity_sales = 250 * 14
Total_Sales = 9000 + 6000 + 3500

Table 5.10: GUI Output for Test Case 1 Part 2

Output #  Category  Window          Component    Type     Value  From Variable
4         GUI       CommissionCalc  Total_Sales  TextBox  18500  Total_Sales

Since Total_Sales > 18000:
commission = 0.10 * 10000
commission = 1000 + (0.15 * 8000)
commission = 2200 + (0.20 * (18500 - 18000))

Table 5.11: GUI Output for Test Case 1 Part 3

Output #  Category  Window          Component            Type     Value  From Variable
5         GUI       CommissionCalc  Salesman_Commission  Textbox  2300   Commission


The RE testing tool thus displays and confirms that there were only five outputs from the program for test case 1. No other database outputs or state changes have been executed by the unit.

5.3.1.6 Comparing Expected to Actual Output

In this example, it is clear that the expected output in Figure 5.4 matches the actual output in Tables 5.9, 5.10, and 5.11; therefore, there is little or no need to look at the execution process. The specifications of the implemented program can be fed into SpecDB, together with the actual outputs and the calculation formulas. Those implementation specifications can then be compared to the required specifications, and an automated tool can correctly decide whether or not the implementation matches the required specifications. If there are discrepancies between the actual and the required specifications, the outcome of the RE tool helps in isolating the source of the errors: because the RE tool numbers the implementation statements, it can indicate which statement caused the difference between the required specifications and the actual code.

SpecDB is used to host the specifications resulting from the RE testing tool. An example is given in this section to clarify how an automatic comparison between the required specifications and those resulting from the RE testing tool, i.e., the specifications of the implemented code, can be made using SpecDB. A snapshot of a portion of the specifications is used to demonstrate the principal technique. The specifications of the output for the vacation salesman commission problem are then represented in SpecDB. In test case 1, both the required and actual specifications


match. However, to show how a discrepancy can be automatically identified, a software error is injected. Software fault injection was researched in [102]. The injected software error is not in the code but in the GUI: assume that in the vacation salesman commission program, after the employer entered the correct salesman identification, month, and date, and pressed the Calculate Commission button, the program did not display anything in the five fields. The same program unit implementation as in test case 1 is used again; by viewing the result of the RE testing tool in the previous sections, it can be seen that the program unit executed correctly. For the purpose of comparing the actual specifications to the faulty ones, the scenario is changed so that the output is not displayed in the correct window but on another screen. From the user's point of view, no output is generated when he clicks the button; although the output is computed correctly, it is displayed in the wrong location in the GUI. Given the required specifications and the observed specifications of the fault-injected test case 1, resulting from the RE tool, it is a trivial task to note where the error occurred. After injecting this fault, the results in Tables 5.9, 5.10, and 5.11 should be slightly different: the window name and/or the component name should change. The RE testing tool outputs in this case display the values in Table 5.12.


For the purpose of this example, only a snapshot of the original required specifications represented in SpecDB is shown, in the Var, Dataflow, and Input_Output_Definition tables.

Table 5.12: GUI Output for Test Case 1 After Fault Injection

Output #  Category  Window       Component              Type     Value  From Variable
1         GUI       Sale_Report  Total_Cruises_Sold     Textbox  20     Number_Cruises_Sold
2         GUI       Sale_Report  Total_Air_Ticket_Sold  Textbox  20     Number_Air_Tickets_Sold
3         GUI       Sale_Report  Total_Activities_Sold  Textbox  14     Number_Activities_Sold
4         GUI       Sale_Report  Total_Sales            Textbox  18500  Total_Sales
5         GUI       Sale_Report  Salesman_Commission    Textbox  2300   Commission

Tables 5.13, 5.14, and 5.15 contain snapshots of the data in the Var, Dataflow, and Input_Output_Definition tables in SpecDB, reflecting the original required specifications. For simplification, only the fields of the SpecDB tables that are used are shown here. Note that the program line numbers in Section 5.2.2 are not the same as the POS_Num values in the specifications. The line numbers used in the implementation in Section 5.2.2 were for the purpose of extracting the DD-path graph. When the specifications are entered into SpecDB, directly or through the RE tool, different POS_Num numbers are given, to reflect actual program actions. For example, all variable or type declarations are inserted into the Var or Type tables prior to inserting the dataflow data into the Dataflow table.


Table 5.13: SpecDB Dataflow Table Snapshot – Original Specifications

Pos_Num  Suc_Num  Pos_Type  ..
16       17       Output    ..
17       18       Output    ..
18       19       Output    ..
23       24       Output    ..
34       35       Output    ..
..       ..       ..        ..

The above specifications in the Dataflow table in SpecDB specify that the sixteenth operation that should take place in the implemented vacation salesman program should be an output. The output is defined in the Input_Output_Definition table, shown in Table 5.15 below. Before examining the snapshot of the specifications in the Input_Output_Definition table, we need to look at the snapshot of the SpecDB Var table, in Table 5.14.

Table 5.14: SpecDB Var Table Snapshot – Original Specifications

Var_ID  Var_Name                 Type  Is_Input  Is_Output  Is_GUI_object  GUI_window      ..
7       Total_Sales              Real  F         F          F                              ..
8       Commission               Real  F         F          F                              ..
9       Number_Cruises_Sold      Int   T         F          F                              ..
10      Number_air_tickets_Sold  Int   T         F          F                              ..
11      Number_Activities_Sold   Int   T         F          F                              ..
18      Total_cruise_sold        Int   F         T          T              CommissionCalc  ..
19      Total_air_sold           Int   F         T          T              CommissionCalc  ..
20      Total_activities_sold    Int   F         T          T              CommissionCalc  ..
21      TotalSales               Real  F         T          T              CommissionCalc  ..
22      SalesmanCommission       Real  F         T          T              CommissionCalc  ..
..      ..                       ..    ..        ..         ..             ..              ..

The Var table snapshot shown in Table 5.14 indicates that the variable Total_cruise_sold is an output, specifically a GUI numeric output that should be displayed in the CommissionCalc window or screen. Finally, from the above two SpecDB tables and the snapshot of the SpecDB Input_Output_Definition table in Table 5.15, it is specified that the output in the sixteenth operation is a display of the


value of the Number_Cruises_Sold variable (Var_ID = 9) in the Total_cruise_sold GUI output (Var_ID = 18) in the CommissionCalc window or screen.

Table 5.15: SpecDB Input_Output_Definition Table Snapshot – Original Specifications

Pos_Num  Var_ID  Input_Output  Source_Destination_Var_ID  ..
16       18      Output        9                          ..
17       19      Output        10                         ..
18       20      Output        11                         ..
23       21      Output        7                          ..
34       22      Output        8                          ..
..       ..      ..            ..                         ..

Having examined the snapshot of the required specifications stored in SpecDB above, we now examine a snapshot of the specifications of the program after fault injection. Once the specifications of the faulty program are represented in SpecDB as a result of the RE testing tool, the errors can be automatically isolated and reported, as shown below. Table 5.12 demonstrated the RE testing tool's display of the GUI output for test case 1 after fault injection. The outcome of the RE tool is then stored in SpecDB, in the same three tables: Var, Dataflow, and Input_Output_Definition. For test case 1 after fault injection, the RE testing tool specifications stored in the Dataflow and Input_Output_Definition tables are identical to those of Tables 5.13 and 5.15. Table 5.16, however, shows that the five output variables (text boxes) are in another GUI window.


Comparing the two instances of the SpecDB tables to one another, those of the required specifications and those of the implemented code resulting from the RE tool, is a trivial task that is easy to automate. However, much attention should be dedicated to identifying variable names and matching the implemented variable names to those of the specified design. If the implementation uses the exact variable names that appear in the specifications and design, then comparing the RE testing tool specifications of the implementation to the required specifications is an easy task. The same consideration should be given to the operation position numbers in the Dataflow table, to facilitate the comparison of the two sets of specifications and the process of fault isolation. The next example illustrates how the RE testing tool helps spot faults of commission, when the program either generates an unspecified output or generates the wrong output due to an incorrect computation.

Table 5.16: SpecDB Var Table Snapshot – RE Tool Implementation Specifications

Var_ID  Var_Name                 Type  Is_Input  Is_Output  Is_GUI_object  GUI_window   .
7       Total_Sales              Real  F         F          F                           .
8       Commission               Real  F         F          F                           .
9       Number_Cruises_Sold      Int   T         F          F                           .
10      Number_air_tickets_Sold  Int   T         F          F                           .
11      Number_Activities_Sold   Int   T         F          F                           .
18      Total_cruise_sold        Int   F         T          T              Sale_Report  .
19      Total_air_sold           Int   F         T          T              Sale_Report  .
20      Total_activities_sold    Int   F         T          T              Sale_Report  .
21      Total_Sales              Real  F         T          T              Sale_Report  .
22      Salesman_Commission      Real  F         T          T              Sale_Report  .
..      ..                       ..    ..        ..         ..             ..           .
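The automated comparison described above can be sketched as a field-by-field diff of the two Var-table snapshots, keyed on Var_ID. The following Python sketch is illustrative only: rows are modeled as dictionaries, and the `diff_var_tables` helper is our assumption about how such a comparator could work, not SpecDB's actual mechanism.

```python
# Minimal sketch of an automated diff between the required SpecDB Var
# rows and those produced by the RE tool. Field names follow the table
# snapshots; the comparison logic itself is an illustrative assumption.

def diff_var_tables(required, implemented):
    """Report (var_id, field, required_value, implemented_value) for
    every field that differs between the two snapshots."""
    errors = []
    for var_id, req_row in required.items():
        impl_row = implemented.get(var_id)
        if impl_row is None:
            errors.append((var_id, "missing", None, None))
            continue
        for field, req_val in req_row.items():
            if impl_row.get(field) != req_val:
                errors.append((var_id, field, req_val, impl_row.get(field)))
    return errors

# Reduced rows for Var_ID 18 from Tables 5.14 (required) and 5.16 (RE tool):
required = {18: {"Var_Name": "Total_cruise_sold",
                 "GUI_window": "CommissionCalc"}}
implemented = {18: {"Var_Name": "Total_cruise_sold",
                    "GUI_window": "Sale_Report"}}
```

Running the diff on these rows flags exactly the injected fault: Var_ID 18 is displayed in the Sale_Report window instead of the specified CommissionCalc window.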


5.3.2 Test Case 2

In test case 1, the correct implementation from Section 5.2.2 was used. To demonstrate how the RE testing tool spots errors of commission, a slight modification of the code is used: if the program is built from a wrong implementation, it will produce errors. Accordingly, in test case 2, only one line of code is modified to show how the RE testing tool identifies errors of commission. Per the specifications, the data entered by the employer in the salesman identification, month, and year fields should not change when the user clicks the Calculate Commission button; the five other fields should display the required information. In test case 2, we assume that line 25 of the program contains the statement:

25 :month := Number_Cruises_Sold;

instead of:

25 :Total_Cruises_Sold := Number_Cruises_Sold;

In this case, the programmer made an error while writing the statement at line 25. Consequently, the actual output displayed on the GUI screen appears as in Figure 5.5. The GUI output clearly indicates an error, since the total sales and commission amounts are based on more than just 20 air tickets and 14 shore excursions. The value in the month field has also changed to 20. Since an error is obvious in the GUI output, the report displayed by the RE testing tool helps to isolate where the error occurred. In test case 2, the actual input is identical to that in test case 1, detailed in Tables 5.7 and 5.8.


Figure 5.5: GUI Output for Test Case 2

Employee: Gibson    Month: 20    Year: 2003    [Calculate Commission]
Total Cruises Sold this month:
Total Air Tickets Sold this month: 20
Total Shore Excursions Sold this month: 14
Total Sales: $18500
The Salesman Commission for this month: $2300

5.3.2.1 Process and Actual Output

For test case 2, the DD-paths and all calculations are identical to the process shown in test case 1. Hence, the calculation of the total number of cruises sold is correct; the program just does not display it in its proper position. The only difference between the correct program implementation in test case 1 and that of test case 2 is in the first set of outputs, shown in Table 5.17. Output number 1 in Table 5.17 highlights the error.

Table 5.17: RE Testing Tool Display of the GUI Output for Test Case 2

Output #  Category  Window          Component              Type     Value  From Variable
1         GUI       CommissionCalc  Month                  Textbox  20     Number_Cruises_Sold
2         GUI       CommissionCalc  Total_Air_Ticket_Sold  Textbox  20     Number_Air_Tickets_Sold
3         GUI       CommissionCalc  Total_Activities_Sold  Textbox  14     Number_Activities_Sold


From the GUI output, the tester can see that the Total_Cruises_Sold text box was not modified by the program, whereas the Month text box was. This is a fault of commission, since the program should not have modified the Month field; in this example, it was given the value 20. If there were a restriction limiting the integer values allowed in the Month text box to the range 1 to 12, this might have caused a run-time error as well. There is also a fault of omission, since the program did not modify the Total_Cruises_Sold text box, when it should have displayed in it the value of the variable Number_Cruises_Sold. These faults can easily be spotted by the testers once the RE testing tool displays the program output shown in Table 5.17 and the process. The RE testing tool technique thus far has helped identify the faults by showing the source of the implementation's actual output; it has also helped isolate where the problem occurred.

As in test case 1, the results of the RE testing tool in SpecDB are shown. The required specifications represented in SpecDB remain unchanged for all the test cases. In test case 2, the specifications from the RE testing tool help to automatically isolate the source of the error and show that the specifications of the implementation differ in only one location. Table 5.18 shows that the information in the Var table resulting from the RE tool specifications is correct and matches the original specifications.

Table 5.18: SpecDB Var Table Snapshot – RE Tool Specifications – Test Case 2

Var_ID  Var_Name           Type  Is_Input  Is_Output  Is_GUI_object  GUI_window      ..
18      Total_cruise_sold  Int   F         T          T              CommissionCalc  ..
..      ..                 ..    ..        ..         ..             ..              ..


Table 5.19 shows the specifications of the implemented code, resulting from the RE testing tool. Compared to Table 5.15, the error lies in the Var_ID: it should have been 18 (Total_cruise_sold), not 16 (Month). By a trivial automated comparison of the two sets of specifications, the error can be easily isolated.

Table 5.19: SpecDB Input_Output_Definition Table Snapshot – RE Tool Implementation Specifications – Test Case 2

Pos_Num  Var_ID  Input_Output  Source_Destination_Var_ID  ..
16       16      Output        9                          ..
..       ..      ..            ..                         ..

5.3.3 Test Case 3

The third example shows errors that are most probably missed by the testing team: errors that happen in the database and are not immediately visible to the tester. This happens when all the items in the GUI return the expected output, yet more happens in the background that the user or tester cannot immediately see, and that leaves the database in a possibly unstable or wrong state, i.e., containing erroneous data. Such implicit errors can be easily identified when the tester examines the outcome of the RE tool containing the list of program outputs given in the actual output table. The following example demonstrates these invisible errors of commission and/or omission. The advantage of this technique lies in its ability to recognize errors as early as the first run of a specific unit. To demonstrate this, the following scenario is used. The expected output of the Calculate Commission unit is only to fill five fields in the GUI, not to change the underlying database in any way. In this example, the code at line 46 is changed to show how the RE testing tool identifies hidden errors of commission. The original code at line 46 of the implementation was:


46 end;

Instead, we assume that the programmer wrote the following lines of code:

46 update sales
47 set s_cruise = Total_Sales
48 where s_salesman = :salesman and s_month = :month and s_year = :year;
49 end;

This updates all of the salesman's cruise sales in that specific month and year (taken from the GUI text fields) to the total dollar sales amount for the cruises, air tickets, and activities. This is of course an error. However, it is not visible to the tester if she uses ordinary testing procedures to compare the actual output to the expected output. The outputs are correct the first time, but then the data in the database changes, and when the unit is executed again with the same inputs, it generates different values each time! Such an error may not be visible during the testing phase, when the database is not fully populated; but after a year of usage, if the employer wants a report of each salesman's sales and commissions earned, the report will give him totally different numbers from those the salesmen actually received during the year.

In order to spot such an error early, one needs either to compare the database state before and after the execution of the unit code or to use the technique proposed in this chapter. The output process reveals the following, making it very easy for the tester to identify that there is an error of commission, where the unit was programmed to execute more than it was intended to execute. The actual input will be identical to that in test case 1.
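The first alternative mentioned above, comparing the database state before and after the unit executes, can be sketched as a row-level diff of two snapshots of the sales table. The snapshot layout below is an illustrative assumption (rows keyed by an arbitrary row id, mirroring the rows of Tables 5.7 and 5.20); it is not part of the RE tool itself.

```python
# Minimal sketch of a before/after database-state comparison for the
# sales table. Row ids and field names mirror Tables 5.7 and 5.20;
# the snapshot representation is an assumption made for illustration.

def changed_rows(before, after):
    """Return {row_id: (old_row, new_row)} for rows the unit modified."""
    return {rid: (before[rid], after[rid])
            for rid in before
            if rid in after and before[rid] != after[rid]}

# State before the unit runs (the two sales records of test case 1):
before = {15: {"s_cruise": 8,  "s_air_ticket": 12, "s_activity": 10},
          17: {"s_cruise": 12, "s_air_ticket": 8,  "s_activity": 4}}
# After the faulty lines 46-48 run, both rows' s_cruise become 18500:
after = {15: {"s_cruise": 18500, "s_air_ticket": 12, "s_activity": 10},
         17: {"s_cruise": 18500, "s_air_ticket": 8,  "s_activity": 4}}
```

Diffing the two snapshots flags rows 15 and 17 as modified, the same hidden database change the RE testing tool reports directly in Table 5.20, without the tester having to capture and store full database states.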


The process will have statements 46-48 extra. The output will be identical to that of test case 1: five GUI outputs will be displayed by the RE testing tool, in addition to the database modification in Table 5.20, which the RE testing tool displays at the end of the program operation.

Table 5.20: Database Output for Example 3

Output #  Category  Table  Change   Field     Row #  Value  From Var
6         DB        Sales  Updated  s_cruise  15     18500  Total_Sales
7         DB        Sales  Updated  s_cruise  17     18500  Total_Sales

Similar errors can be spotted when executing the Commit database command after an error has been introduced into the database. Other errors that can be spotted with this technique are those resulting from deleting correct information that was stored in the database earlier but not yet committed, when a Rollback command is executed. When the results of the RE tool are fed into SpecDB, it is shown that the implementation produces an extra set of specifications that does not exist in the original specifications, thus isolating an error of commission.

5.4 Conclusion

In this chapter, one of the applications of SpecDB is introduced: a reverse engineering approach that logs the actual execution of the program from the code for some input. Once the program actions have been logged, the specifications of the actually executed code are fed into SpecDB to allow automatic matching against the expected specifications. In this chapter, the actual program outputs and post conditions are presented as an output of the RE testing tool, in a way that non-technical testers and other members of the software development team can understand. As a result of the RE testing


tool, the development team will have two documents: the required specifications and the actual implemented specifications. The observed output can then be easily compared to the expected output, based on the program specifications and oracles that determine what the expected program output should be. If the specifications of the software were already logged in SpecDB, then both sets of specifications can be automatically compared, and errors automatically reported. Specifications can be stored in SpecDB from the RE tool without a specific input for a test case. The RE testing tool, as shown in the case study in this chapter, automates the process of program DD-paths graph creation. After traversing the code with the white box testing technique shown using DD-paths testing, the specifications of the implemented code can be fed directly into SpecDB following the dataflow of the implementation. Accordingly, the RE tool can feed the specifications of the implemented units into SpecDB without the input data from a test case.


CHAPTER 6

ENHANCING OTHER TESTING TOOLS USING SPECDB

A survey of related literature and the tools currently available in the market was given in Chapter 1. In Chapter 2, the motivation for the creation of SpecDB was discussed together with its detailed design. In Chapters 4 and 5 two testing tools using SpecDB were introduced. The focus of this chapter is on using the machine-readable specifications represented in SpecDB to enhance tools developed or designed by other researchers. As an example, AGENDA, developed by Chays et al. in [27,28,54,55,56,57], is chosen. The research involving AGENDA was summarized in Chapter 1. Given the machine-interpretable specifications of SpecDB, the research on AGENDA and other testing tools or techniques can be enhanced. One of the motivations for the creation of SpecDB was to enhance already existing automated testing systems and complete current recommendations in some of the published literature. This chapter focuses on improvements that can be made to those testing tools using the SpecDB representation and the tools described in the previous chapters of this dissertation. In each section a limitation of other tools is emphasized, a solution using SpecDB is proposed, and examples of the solution are provided.


6.1 Using the Database Representation of SpecDB to Expand the Testing Domain

Some testing tools, including AGENDA, use linked lists and files to store the information needed about the rules, constraints, and schema of the application being tested. Tools implemented using linked lists are not as easy to modify as database queries and database schema designs. Accordingly, scalability is an advantage of a database design over a linked-list representation. Using the SpecDB database is a more effective approach, since many more software specifications can be stored in it. Storing more specifications results in an ability to automatically create better test cases and to automate the process of test case result verification. Enforcing the constraints on the SpecDB database is also a means of testing the specifications, to a great extent, for completeness and accuracy. The authors of AGENDA investigated memory capacity and had concerns about size limits for the linked lists used. The use of tables in SpecDB to store and access the information, as opposed to linked lists, eliminates the concern about memory overflow. To demonstrate how storing more specifications results in an ability to automatically create better test cases, examples are given to show what test cases can be generated by AGENDA compared to those created by a tool using SpecDB. In the vacation salesman program described in Chapter 5, the only portion of the unit implementation in Section 5.2.2 that can be tested using AGENDA is lines 7 and 8 (the Select statement) of the 46 lines of code. Tools built on SpecDB, however, can test imperative code as well. The RE testing tool demonstrated in Chapter 5 traversed the entire unit implementation, as shown by the three


examples in Chapter 5. Accordingly, better test coverage is possible using SpecDB. In order to benefit from the results of other research, however, it is shown how uncomplicated a task it is to expand SpecDB to represent the information used by other tools. Again this is demonstrated with an example from AGENDA. Translating the information needed and used by AGENDA in linked lists to the SpecDB representation is shown in Figure 6.1. This figure shows how the data used to generate test cases and database states in AGENDA can be stored in SpecDB. Each database operation to test in AGENDA is given a unique query ID. This ID is stored in SpecDB in the DB_Test_Operation table (operation ID). The host variables used or updated in the query or database operation are stored in the Host_Var table, along with their descriptions. In AGENDA only one attribute is observed for modifications per operation. The SpecDB representation expands this simple query testing to include multiple attributes per operation, by storing those attributes in the Attributes_Changed table. Multiple pre- and post-conditions can also be observed and tested using the SpecDB representation. The Pre_Post_Conditions table in SpecDB stores the operation number together with the corresponding condition number. The conditions are stored in the Predicate table, using the Condition_Num to reference the Pos_Num. The authors of AGENDA explained that only simple numeric calculations for post-conditions can be carried out using their tool. In the SpecDB representation, very complex operations can be specified using the generality of the Predicate table and the complicated operations it can represent. The Predicate table can also represent operations on types other than numbers: text, dates, etc.
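The translation just described can be sketched as a few relational tables. The layout below (in Python/sqlite3) follows the table names given above, but the exact column layouts are illustrative assumptions, not SpecDB's actual schema:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    -- One row per database operation under test (AGENDA's query ID).
    CREATE TABLE DB_Test_Operation (operation_id INTEGER PRIMARY KEY,
                                    description  TEXT);
    -- Several modified attributes may be recorded per operation,
    -- unlike AGENDA's single observed attribute.
    CREATE TABLE Attributes_Changed (operation_id INTEGER,
                                     table_name   TEXT,
                                     attribute    TEXT);
    -- Links an operation to any number of pre/post-conditions.
    CREATE TABLE Pre_Post_Conditions (operation_id  INTEGER,
                                      condition_num INTEGER,
                                      kind          TEXT);
""")

conn.execute("INSERT INTO DB_Test_Operation VALUES (1, 'annual raise update')")
conn.executemany("INSERT INTO Attributes_Changed VALUES (1, 'employees', ?)",
                 [("salary",), ("last_raise_date",)])
conn.executemany("INSERT INTO Pre_Post_Conditions VALUES (1, ?, ?)",
                 [(10, "pre"), (11, "post"), (12, "post")])

# A query, rather than a linked-list traversal, answers
# "which attributes may operation 1 modify?"
attrs = [r[0] for r in conn.execute(
    "SELECT attribute FROM Attributes_Changed "
    "WHERE operation_id = 1 ORDER BY rowid")]
print(attrs)  # ['salary', 'last_raise_date']
```

Adding a new kind of specification is then a schema change plus a query, rather than a change to traversal code, which is the scalability argument made above.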


Figure 6.1: Representing AGENDA Data in SpecDB

Data groups used by AGENDA to populate the tables with valid data and generate test cases are stored in the Data_Group table in SpecDB, together with the probability of each group. The probability was used by AGENDA as a heuristic to guide the processes of data and test case generation. Boundary values are stored in the Type_Restrictions table in SpecDB. In SpecDB every attribute in every table can have a distinct type with specified allowable values. Using this representation in SpecDB allows for more complex type restrictions, for example specifying a range (from -5 to 5) excluding one or more values (like 0). If the attribute is better described by a list of values (e.g. month name), the values are represented in the Types_LOV table. Using AGENDA, the tester specifies some valid values for each of the data groups to guide the database state and test case generation processes. The list of valid values for each data


group is stored in the Equivalence_Class table in SpecDB. The attribute can be of any basic type. Other more complex types can also be used by expanding this table in SpecDB, giving the tester more flexibility. The List_of_Tables table in SpecDB hosts the required information to be used by AGENDA. The attributes and their respective constraints are stored in the Table_Field_Description table in SpecDB. In AGENDA, information about the constraints on each attribute is gathered, but the testing tool does not check that these constraints are enforced. The automated constraint generator tool created using SpecDB creates the constraints, and triggers for more complex business rules, to enforce them at the database level as opposed to the application level, hence producing a better database state.

6.2 Enhancing State Validation Tools Using SpecDB

In order to test the operation of parts of a software application, some tools, including AGENDA, check the state of the resulting system after test case execution. Another enhancement to AGENDA using the SpecDB representation is that of the state validator. AGENDA creates log tables to log all modifications after update, delete, or insert operations. Temporary constraints are created on the log tables to check the database state changes after test execution. AGENDA also checks whether the number of rows affected by the test execution equals that expected. If not, AGENDA reports an error, indicating that either too many or too few records have been affected. It is then the tester's task to go through the database and the log tables to identify the error.
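The log-table mechanism can be reconstructed with a trigger that records each modification; comparing the log's row count with the expected count is then a single query. The sketch below uses Python/sqlite3 and invented table names, and illustrates the mechanism rather than AGENDA's actual implementation:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE employees (name TEXT, salary REAL);
    -- Log table in the spirit of AGENDA's state validator.
    CREATE TABLE employees_log (name TEXT, old_salary REAL, new_salary REAL);
    CREATE TRIGGER log_update AFTER UPDATE ON employees
    BEGIN
        INSERT INTO employees_log VALUES (OLD.name, OLD.salary, NEW.salary);
    END;
""")
conn.executemany("INSERT INTO employees VALUES (?, ?)",
                 [("ann", 50000), ("bob", 40000), ("cal", 45000)])

# Test execution: the annual raise is specified to affect all employees.
conn.execute("UPDATE employees SET salary = salary * 1.05")

expected_rows = 3
(affected,) = conn.execute("SELECT COUNT(*) FROM employees_log").fetchone()
# Too many or too few logged rows indicates an error, as AGENDA reports.
print("ok" if affected == expected_rows else
      f"error: {affected} rows affected, expected {expected_rows}")
```

The trigger fires once per modified row, so the log's cardinality directly gives the affected-row count that the validator compares against the expectation.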


There are some errors that can occur and not be identified by AGENDA with its current technique. AGENDA checks that the post-conditions have been fulfilled, using the temporary constraints on the log tables. However, if the specified post-conditions have been fulfilled but other modifications have also been made to the database, i.e. errors of commission, that would be a software error that would not be spotted by AGENDA. For example, suppose the operation being tested gives an annual raise to all employees. AGENDA will make sure that the salaries of all the employees have been raised by the specified corresponding rates. It would not, however, spot an erroneous change in the department made to all employees. AGENDA would also not spot other operations in the database that are not linked to the database table being updated, for example an operation that grants privileges to unauthorized users to access the database. Using SpecDB the latter problem is solved by using the reverse engineering approach to testing the database system, where all of the modifications made to the system are logged for each test operation. These errors of commission can be spotted very easily using this technique, as shown in the examples of Chapter 5. If the number of rows modified equals that expected, and the post-conditions are satisfied by the temporary constraints on the log tables, but other attributes have been modified too, AGENDA does not report an error. In order to report such software errors, using SpecDB more constraints can be automatically generated on the log tables to check that all the other attributes (other than the ones in the Attributes_Changed table for the specified operation) have not changed. If the post-conditions have not been satisfied, or have been satisfied but other attributes have also been modified, the SpecDB tool will report an error.
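The proposed extra check, verifying that attributes outside the operation's Attributes_Changed entry stayed untouched, amounts to a column-by-column comparison of the before and after states. A sketch with illustrative table and column names:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employees (name TEXT, salary REAL, dept TEXT)")
conn.executemany("INSERT INTO employees VALUES (?, ?, ?)",
                 [("ann", 50000, "sales"), ("bob", 40000, "hr")])

columns = ["name", "salary", "dept"]
allowed = {"salary"}  # per the spec, the annual raise may only touch salary

before = list(conn.execute("SELECT rowid, * FROM employees ORDER BY rowid"))

# Buggy raise: it also rewrites every employee's department.
conn.execute("UPDATE employees SET salary = salary * 1.05, dept = 'sales'")

after = list(conn.execute("SELECT rowid, * FROM employees ORDER BY rowid"))

# Flag any change to a column the specification did not authorize.
violations = []
for old, new in zip(before, after):
    for col, o, n in zip(columns, old[1:], new[1:]):
        if o != n and col not in allowed:
            violations.append((old[0], col))
print(violations)  # [(2, 'dept')] -- bob's department changed unexpectedly
```

The salary changes pass because salary is listed in the operation's allowed attributes; only the unspecified department change is reported, which is exactly the commission error AGENDA's current checks would miss.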


6.3 Expanding the Testing Scope Beyond Variables and Database States

Testing tools that are not based on the specifications are limited in the areas of the software that can be tested. AGENDA, for example, tackles only variables and database states. It does not tackle operations where the variables or database states are not changed but files, printed reports, magnetic card programming, etc. are. Many applications use numerous types of inputs and outputs besides database state changes. SpecDB's reverse engineering testing tool handles all these different types of inputs and outputs, as seen in Chapter 5.

6.4 Testing Different Operation Types

Another limitation of testing tools not based on software specifications is the types of operations tested. AGENDA tests four types of database operations, namely insert, update, delete, and select. Using the reverse engineering testing tool of SpecDB, other database operations can also be tested and their results logged, together with imperative code, i.e. programming constructs that are not related to database operations. Using SpecDB the following database operations can be tested: Grant, Revoke, Create and Drop (view, table, constraint, sequence, etc.), Roles, Access Sequence, Commit, Rollback, and Savepoint. Other operations can be easily added to SpecDB. The reverse engineering SpecDB testing tool observes all modifications that happen in the database state, the application variables, and the input/output domains. Each of the above mentioned database operations is logged with all its details, as described for the reverse engineering SpecDB testing tool in Chapter 5.
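A reverse engineering logger of this kind must classify every executed statement, including operations like Grant, Revoke, Commit, and Rollback that row-oriented tools ignore. The classifier below is a minimal assumed sketch; in practice an instrumented database driver or DBMS hook would feed it the actual statements:

```python
# Statement kinds the RE testing tool logs, per the list above.
LOGGED_KINDS = {"GRANT", "REVOKE", "CREATE", "DROP", "COMMIT", "ROLLBACK",
                "SAVEPOINT", "INSERT", "UPDATE", "DELETE", "SELECT"}

def classify(stmt):
    """Return the operation kind of one SQL statement, or 'OTHER'."""
    first = stmt.strip().split(None, 1)[0].upper()
    return first if first in LOGGED_KINDS else "OTHER"

def log_execution(statements):
    """Build a simple execution log of (sequence number, kind, text)."""
    return [(i, classify(s), s.strip())
            for i, s in enumerate(statements, start=1)]

trace = log_execution([
    "update sales set s_cruise = 18500 where s_salesman = 'smith'",
    "grant insert on sales to public",  # an unspecified privilege grant
    "commit",
])
print([kind for _, kind, _ in trace])  # ['UPDATE', 'GRANT', 'COMMIT']
```

Once Grant and Commit operations appear in the log alongside the data modifications, comparing the log against the specifications in SpecDB exposes privilege-related errors of commission that row-count checks never see.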


6.5 Conclusion

In this chapter four areas of enhancement offered by the SpecDB representation of the software specifications over other tools were proposed. As an example from the literature, the AGENDA research of Chays et al. [27,28,54,55,56,57] was chosen. Additional enhancements can be made, using the SpecDB representation, to other tools in the market and the literature, through the machine-interpretable specifications stored in SpecDB. Many researchers have confirmed that having such a queryable formal specification that can be used in their tools would automate and enhance the process of test case generation and result verification [31, 34, 38]. As future work, many of those proposals can be implemented using the SpecDB technique, to test systems using the proposed automated ideas published, based on the software's machine-readable specifications. In Chapter 7 a comparison between the different testing tools discussed in Chapter 1 and SpecDB is given, to show some of the limitations and strengths of each tool. A summary of all the dissertation contributions and recommended future work is also given in Chapter 7.


CHAPTER 7

CONCLUSION AND FUTURE WORK

In this dissertation the design of SpecDB, a database created to represent and host software specifications in a machine-readable format, is introduced. This scalable database representation was shown to aid in the processes of both automated software code generation and automated software testing, based on the actual software specifications. One of the main contributions of the dissertation is the creation of SpecDB; its design is detailed in Chapter 2. SpecDB is the database that can hold the specifications required for unit testing database software. An algorithm was created, in Chapter 3, to show how to translate formal specifications into the SpecDB representation. Some examples of automated testing applications benefiting from the machine-readable specifications represented in SpecDB were created and discussed in Chapters 4 and 5. This chapter concludes with a summary of the research contributions and accomplishments. In Section 7.2 the contributions are summarized in points. In Section 7.3 a more general discussion of the research accomplishments is summarized together with some proposals for future work and research continuation ideas.


7.1 A Comparison Between Testing Tools

In this section six test case scenarios are described. With these scenarios, the ability of different tools to identify errors in the software being tested is highlighted. The testing tools used in this comparison are AGENDA, SpecDB, the RE testing tool, and the Rational, Mercury, and Segue sets of tools. In the next section the test cases and scenarios are described first. Then a comparison follows, denoting the strengths and limitations of each of the tools mentioned above.

7.1.1 Test Cases and Scenarios

7.1.1.1 Test Case 1

The case study and implementation in Chapter 5 is also used for this test case. The salesman reports twice to his employer during a certain month. The first time he reports that he sold 8 cruise vacations, 12 airline tickets, and 10 shore excursion packages. The second time he reports that he sold 12 cruise vacations, 8 airline tickets, and 4 shore excursion packages. The employer enters the data as sent each time. Therefore the database will have two records in the sales table for that specific salesman for that specific month. At the end of the month the employer enters the employee name and the required month and year, then hits the Calculate Commission button to view the report on the screen. In this scenario, it is assumed that the code is all error free and matches the specifications. No errors should be identified.


7.1.1.2 Test Case 2

In test case 2 the same implementation as in test case 1 is used, changing only one line of code. Per the specifications, the data entered by the employer in the salesman identification, month, and year fields should not change when the user clicks on the Calculate Commission button. The five other fields should display the required information. In test case 2, we will assume that the program in line 25 has the following statement:

25 :month := Number_Cruises_Sold;

instead of

25 :Total_Cruises_Sold := Number_Cruises_Sold;

Accordingly, the testing tool should identify two errors: changing the month value, and not displaying the total cruises sold.

7.1.1.3 Test Case 3

In this test case, the same code used in test case 1 is adopted; however, the code in line 46 is changed. The original code in line 46 of the implementation was:

46 end;

Instead, we will assume that the programmer put the following lines of code:

46 update sales
47 set cruise = Total_Sales
48 where s_salesman = :salesman and s_month = :month and s_year = :year;
49 end;

As discussed in Chapter 5, this results in modifying the data in the database, and will generate different values for the same salesman for the same month each time the test case is run. One error of commission, resulting in two database modifications, should


be identified as an error. If this case is run only once, the error will not be identified by any functional testing tool. If run several times on the same inputs, functional testing techniques might spot the difference in the output data each time.

7.1.1.4 Test Case 4

Again in test case 4, the implementation is the same as that used in test case 1, with the exception of displaying the output in another window. From the user's point of view, no output is generated when he clicks the button; although the output is computed correctly, it is displayed in the wrong location in the GUI. Ten errors should be identified, since each of the five outputs is both missing from its specified location and displayed in an unspecified one.

7.1.1.5 Test Case 5

In this test case, the same code used in test case 1 is adopted here too, changing only the code in line 46. The original code in line 46 of the implementation was:

46 end;

Instead, we will assume that the programmer put code that grants privileges to all users to enter sales information, a privilege that should only be given to the employer. The same test case can also include other database operations, including Revoke, Create and Drop (view, table, constraint, sequence, etc.), Roles, Access Sequence, Commit, Rollback, and Savepoint. All functional testing tools will fail to identify this error, no matter how many times the test is run. Although this test case involves a database operation, AGENDA cannot test it, because it is a Grant operation.


7.1.1.6 Test Case 6

Test case 6 is similar to test case 5, but assuming that code is written before the end to export employee names, social security numbers, addresses, and dates of birth to another remote location. No functional testing tool will identify this error, even if the test is run many times. High volume testing techniques [77,79] also cannot spot this kind of error. High volume testing is when the test case is executed many times, even if the result is correct, to identify whether the system will react differently over time, for example from a slight system clock difference that accumulates over time.

7.1.2 Comparing the Ability of Testing Tools to Identify Errors

In this section, a comparison is given between different testing tools to show how each reacts to the errors in each scenario of the test cases in Section 7.1.1. A tabulation of the results is presented in Table 7.1. The test cases selected in Section 7.1.1 were chosen to show only the differences between the tools; there are many test cases for which all or most of the tools react similarly. In Table 7.1 the instances where the tools react similarly are removed, and only the areas where they differ in capability are included. The functionality of each testing tool is observed and compared based on a variety of factors. Among the comparison criteria are whether the tool can test imperative code, i.e. performs structural testing; the coverage of the test cases that can be handled by the tool; and the percentage of code in the implementation that can be tested by the tool.


Table 7.1: A Comparison Between How Testing Tools/Techniques Handle Test Cases 1-6

[Columns: Can run test; Can spot error; Can spot errors given different inputs for the same scenario; Can automatically verify results; Isolates error sources. Rows: test cases 1-6 for each tool group: SpecDB based tools, AGENDA, WinRunner/Robot/SilkTest, QuickTest/Functional SilkTest5, and LoadRunner/Performance Tester/Silk Performer. Most cell markings, including check marks and the placement of entries such as "User" and "To an extent", are not recoverable from the extracted text.]

As seen from Table 7.2, there is some functionality that can be performed by other tools, namely memory management, stress testing, etc., that cannot be performed by SpecDB tools using the current design of SpecDB. However, as proved previously, the SpecDB


representation is readily scalable. The current design of SpecDB, detailed in Chapter 2, is for the purpose of unit testing. However, SpecDB can be expanded easily, as previously shown in Chapter 4, to accommodate more specifications to test other parts of the software, including security, memory usage, etc. There are unique capabilities associated with SpecDB based tools that other testing tools cannot perform, as shown in Figure 7.1; namely the ability of the tools to automatically identify errors in execution if the inputs to a test case changed, and the ability to isolate the source of errors in the code. Functional testing tools, such as regression testing, stress testing, scenario testing, and record-and-playback tools, need the testers to verify the result of each test case. Since the tools have no specifications, the testers need to identify the expected outputs of each test case. These tools cannot run the same test scenarios with different inputs and automatically verify the output unless, for each of those test cases, the expected output is given to the tool. For SpecDB based tools, however, since the specifications are available, the expected output can be generated automatically and the test case results verified against the specifications. The dissertation research concludes with this comparison of some of the most widely used testing tools against the capabilities of SpecDB based tools. In the next sections the contributions are summarized, and recommendations for future work on SpecDB and SpecDB based tools are suggested.
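The automatic result verification claimed for specification-based tools reduces to a mechanical comparison once expected outputs can be derived from the stored specifications. A sketch, with the specification encoded as a plain dict for illustration rather than SpecDB's actual tables:

```python
def verify(spec, actual):
    """Compare actual test outputs to specification-derived expected values.

    Both arguments map output names to values; the returned list of
    mismatches lets the error source be isolated by name.
    """
    errors = [(name, expected, actual.get(name))
              for name, expected in spec.items()
              if actual.get(name) != expected]
    # Outputs produced but never specified are errors of commission.
    errors += [(name, None, actual[name])
               for name in actual if name not in spec]
    return errors

# Expected outputs derived from the specification vs. observed outputs;
# the variable names echo test case 2, where the month field is overwritten.
spec = {"Total_Cruises_Sold": 20, "month": 5}
actual = {"Total_Cruises_Sold": 20, "month": 12}
print(verify(spec, actual))  # [('month', 5, 12)]
```

Because the mismatch is reported by output name, the tester sees not just that the test failed but which specified output diverged, which is the error-source isolation capability noted above.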


Table 7.2: A Comparison Between the Capabilities of Testing Tools and Techniques

[Capability columns: Can automatically generate tests; Readily scalable; Can automatically verify results; Unit coverage; Memory/performance testing; Stress testing; Tests DB operations; Structural testing; Functional testing; Handles complex operations; Can isolate error sources. Tool rows, with the annotations that survive in the extracted text: SpecDB based tools (unit coverage 100%); AGENDA ("to an extent"; DB operations only); WinRunner, Robot, SilkTest (only test scripts from user scenarios; based on scenarios; considered functions observed only); QuickTest, Functional SilkTest5 (only user scenario test scripts; based on scenarios; considered functions observed only); LoadRunner, Performance Tester, Silk Performer (observed only); Purify (observed only). The remaining cell markings are not recoverable from the extracted text.]


7.2 Research Accomplishments at a Glance

In this section the research contributions are outlined; in Section 7.3 the contributions are discussed in more detail. In this dissertation, the following was accomplished:

1. Software specifications are represented in a machine-readable format. This was detailed in Chapter 2, the design of SpecDB.

2. A translation algorithm to transform formal specifications written in a formal language like Z to the machine-readable SpecDB representation is created. This was shown in Chapter 3, storing specifications in SpecDB.

3. Useful applications that make use of the data stored in SpecDB are developed.
   a. In Chapter 4 an automated tool was introduced that generates constraints and business rules to enforce them at the database level, as opposed to the application level, for better reliability.
   b. In Chapter 5 a reverse engineering approach to testing database applications was introduced. It uses specifications to intelligently test software.

4. Enhancements to published work in the field of testing database applications were suggested, using the techniques described in this research, in Chapter 6; some enhancements to AGENDA, created by Chays et al., were proposed.


7.3 Conclusion and Recommendations

In this dissertation, the design of SpecDB, a database to hold parts of a software's specification, is investigated. Representing software specifications in this manner aids the processes of automated testing and automated code generation. Through the use of machine-readable specifications represented in SpecDB, automated tools can become more intelligent in generating and testing programs, based on an understanding of what the software under construction is supposed to do, by design. The design of SpecDB, discussed in Chapter 2, is scalable, allowing for enhancements and additions to accommodate more forms of specifications or business requirements as they evolve in the future, or other parts of the software application, e.g. security or networking. In this dissertation, the idea of a unified standard to represent the portions of the software specifications necessary to aid in the process of unit testing was explained. In Chapter 3, an algorithm is presented to translate examples of formal specification statements to the SpecDB representation, showing how a formal software specification can be automatically represented in SpecDB. Future work includes implementing this algorithm, and also creating an automated intelligent software testing tool to test software represented by SpecDB. As an application of SpecDB, and to illustrate its purpose, an automated tool was developed in Chapter 4 to generate business rules and enforce them at the database level by automatically creating constraints and assertions using the data stored in SpecDB. The design of a new table in SpecDB, Constraints, was discussed, and the design of the Table_Description table of


SpecDB was extended and renamed Table_Fields_Description. Those modifications were made in order to allow for the representation of business rules, and to show how the main idea of SpecDB is scalable and easily modifiable to accommodate the representation of different specifications, proving that the design of SpecDB allows for future enhancements and additions to accommodate more types of requirements as they evolve. The automated tool introduced in Chapter 4 creates one script to create all the constraints and another to drop them. System administrators can choose to run the production system with fewer constraints, or no constraints at all, in order to enhance the speed of production systems. They can then run the automatic constraint generating tool demonstrated in Chapter 4 to create the constraints and automatically test the validity of the data during certain times of the week, to maintain the accuracy, completeness, and integrity of the system. The use of SpecDB in a reverse engineering testing tool is discussed in Chapter 5. The reverse engineering approach was used to log the actual execution of the program due to some input, extracting the specifications of the implemented code. Once the program actions have been logged, they are presented in simple explanations which non-technical testers and other members of the software development team can understand. The actual outcome can also be easily matched and automatically compared to the expected output, based on the program specifications in SpecDB that determine what the expected program output should be given the specified inputs. Finally, in Chapter 6, some enhancements to published


literature, using SpecDB, were suggested; as an example, AGENDA [27,28,54] was used to show the suggested enhancements. The representation of specifications in SpecDB can be used as a completion of AGENDA, the work done by Chays et al. in [27,28,54,55,56,57]. AGENDA uses the database schema as a specification of the relations between the different entities in the software, and to extract the constraints on the relations or the values of the attributes. AGENDA depends on the tester's user scenarios to test a database system, as well as the tester's input files describing valid attribute values and their respective distribution probabilities. It generates tests by parsing application code, not the specifications and design of the software. AGENDA then generates test data to populate the database, and uses the populated database as the input state for the testing operations. The data generated by AGENDA are only valid data that satisfy the constraints imposed by the implemented schema. The tester suggests inputs for test cases, and finally AGENDA provides a log file of the output database state after each test operation has executed, recording the modifications that happened in the database and output variables as a result of the test execution. The tester then uses those log files to determine whether the database output state is correct, given the database input state before the test was executed with the given input variable values. The authors repeatedly highlight the fact that the process of verifying the test results cannot be automated, since there is no formal specification available (i.e. available for an automated tool to interpret and reason with).


SpecDB provides the machine-readable specifications that can be used with tools like AGENDA to automate the process of verifying the output state of a database after a test has been executed. In production level databases, where there are thousands and maybe millions of records, it is difficult and time consuming for testers to verify the output and determine whether or not a test failed. SpecDB can also be used as an input to systems like AGENDA, instead of the testers' files. Relying on testers to supply values for attributes or test inputs leaves the tester with the task of attaining good system coverage with the values he/she provides. Automating this process too will provide the coverage needed without any coverage gaps. Given SpecDB as one of the inputs to AGENDA, better data can be generated to populate the database, and better test cases can be generated, with better system coverage. As another future work suggestion, the specifications stored in SpecDB can be used to automate the process of mathematically inducing rules to determine the validity of the output state after a test has been performed, and thus fully automate the process of output validation.

7.4 Future Work at a Glance

From the above description of the contributions introduced in this dissertation, and the ideas to continue the work summarized in Section 7.2, the following future work ideas are proposed:

1. Expand on the design of SpecDB to allow for testing other areas of the software, including system security, networking, etc. An example of this was presented in Chapter 4, to show the scalability of the SpecDB representation.


2. Expand the translation algorithm, using the same idea as in Chapter 3, to include other forms of formal specification syntax, as needed for the testing process.

3. Implement the algorithm in Chapter 3, to automatically read formal specifications and translate them to the SpecDB representation. This is a simple coding task that requires no further contribution.

4. Create a code generating tool that automatically implements software from the specifications in SpecDB. This is mostly a coding task, and requires a good understanding of the design of SpecDB, where and how specifications are represented, and the interactions between the different entities of SpecDB.

5. Create other intelligent automated testing tools that benefit from the machine-readable software specifications in SpecDB, automatically reason with the specifications, and create test cases based on an understanding of the specifications. This task requires the deployment of some artificial intelligence techniques to automatically generate intelligent and useful test cases to test software based on its specifications detailed in SpecDB. The test cases generated should attain good system coverage based on a white box system coverage matrix, perhaps as part of the reverse engineering tool explained in Chapter 5. The result is a very useful intelligent automated testing tool that crowns the work established in this research. All the requirements to build this tool are detailed in this dissertation. Many simple test cases can be directly generated from the basic data in SpecDB, including


boundary analysis test cases and type restrictions, as seen in Chapter 4. However, since the SpecDB representation handles more complex specifications and business rules, the proposed automated testing tool can create much more intelligent, important and useful test cases by reasoning with the specifications in SpecDB, as well as generic-requirements test cases covering memory, exception handling, etc.

6. Implement the automatic comparison tool, which compares the actual system behavior reported by the reverse engineering tool in Chapter 5 to that specified in SpecDB, and reports system errors or any discrepancies.

7. Automate the process of output-state validation by mathematically inducing rules to determine the validity of the output state after a test has been performed. For database applications with thousands of records in production databases, it is very hard to manually validate the output state of the database after an operation has been executed. Accordingly, the proposed automated database output-state validation tool can decide whether the output state matches the specified post-conditions.

8. Create a GUI SpecDB-populating tool to populate SpecDB with the software specifications. Instead of writing formal specifications, or manually populating SpecDB with the specifications, the human-language specifications can be fed into SpecDB using this GUI tool. The GUI tool can ask questions and give choices for answers. Based on the input, the specifications are stored in the correct location in


SpecDB. The GUI tool can also make sure that the specifications are complete and do not contradict one another, i.e., it can test the specifications themselves.

All the proposed future work tasks outlined in this section can be completed by building on the contributions achieved in this research. The introduction of the machine-readable specifications of SpecDB is certainly the most important step, and the basis for creating intelligent automated testing tools that can aid the testing task immensely, if not fully automate the process of testing software applications. After completion of the future work recommended in this dissertation, a fully automated intelligent test-case generator and output-validator tool, based on the contributions of this dissertation, should be within reach.
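Item 5's claim that many simple test cases can be derived directly from the basic data in SpecDB can be illustrated with a small sketch. The sketch below is a hypothetical illustration, not part of the dissertation's implementation: it assumes a type restricted to an integer range, mirroring the Range_From_Int/Range_To_Int columns of the Types table in Appendix A, and derives classic boundary-analysis values from it.

```python
# Hypothetical sketch: deriving boundary-analysis test values from a
# SpecDB-style integer type range (cf. the Types table in Appendix A).
def boundary_values(range_from, range_to):
    """Return (valid, invalid) boundary-analysis test values:
    the two bounds, their inner neighbours, and one value just
    outside each end of the valid range."""
    valid = [range_from, range_from + 1, range_to - 1, range_to]
    invalid = [range_from - 1, range_to + 1]
    return valid, invalid

# Example: a type restricted to the range 1..100.
valid, invalid = boundary_values(1, 100)
```

A real generator would read these ranges from the Types and Type_Restrictions tables rather than take them as function arguments.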
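The output-state validation proposed in item 7 can likewise be sketched. This is an illustrative assumption rather than the dissertation's design: post-conditions are flattened into (table, column, key column, key, expected value) tuples, loosely echoing how SpecDB's post-condition tables name target tables and fields, and a checker queries the database after the operation under test to confirm each one holds.

```python
import sqlite3

# Illustrative sketch of automated output-state validation: each
# post-condition names a table, a column, a key and an expected value,
# and the checker confirms the database state satisfies it.
def validate_output_state(conn, post_conditions):
    failures = []
    for table, column, key_col, key, expected in post_conditions:
        cur = conn.execute(
            f"SELECT {column} FROM {table} WHERE {key_col} = ?", (key,))
        row = cur.fetchone()
        if row is None or row[0] != expected:
            failures.append((table, column, key, expected,
                             row[0] if row else None))
    return failures  # an empty list means the output state matches

# Example: after a hypothetical debit operation, account 1 should
# be left with a balance of 50.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance INTEGER)")
conn.execute("INSERT INTO accounts VALUES (1, 50)")
failures = validate_output_state(
    conn, [("accounts", "balance", "id", 1, 50)])
```

This is what makes such a checker valuable for errors of commission: it can also sweep tables the specification says should be unchanged, which black-box inspection of declared outputs would miss.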


APPENDICES


174 Appendix A: SQL Scripts for SpecDB Entities The following SQL provides the creation scripts for the entities of the SpecDB database Prompt Creating Table 'Valid_Operands' Create Table Valid_Operands (Operator_Id Number(2) Not Null ,Operand1_Type Varchar2(10) ,Operand2_Type Varchar2(10) ,Operand3_Type Varchar2(10) ,Over_Loaded Varchar2(1) Default 'F' )/ Prompt Creating Table 'Designed_Sr_Post_Conditions' Create Table Designed_Sr_Post_Conditions (Sub_Routine_Name Varchar2(50) Not Null ,Post_Cond_Num Integer Not Null ,Source Varchar2(5) ,Post_Cond_Type Varchar2(10) ,From_Var_Id Integer ,From_Gui_Obj_Name Varchar2(30) ,From_Gui_Window Varchar2(30) ,From_Seq_Name Varchar2(20) ,From_Char_Const_Value Varchar2(1) ,From_Text_Const_Value Varchar2(100) ,From_Int_Const_Value Integer ,From_Real_Const_Value Real ,From_Bool_Const_Value Varchar2(1) Default 'T' ,From_Date_Const_Value Date ,From_Time_Const_Value Date ,To_Gui_Obj_Name Varchar2(30) ,To_Gui_Window Varchar2(30) ,To_Var_Id Integer ,To_File_Name Varchar2(100) ,To_File_Type Varchar2(10) ,To_File_Change Varchar2(10) ,To_File_New_Location Varchar2(100) ,To_Message_Text Varchar2(240) ,To_Report_Name Varchar2(100) ,To_Report_Destination Varchar2(10) ,To_Report_File_Name Varchar2(100) ,To_Inc_Seq_Name Varchar2(30) ,Run_Exe_Name Varchar2(100) ,Open_Window_Name Varchar2(40) ,Savepoint_Rollback_Name Varchar2(30) ,Db_Operation_Num Integer )/

PAGE 188

175 Appendix A: (Continued) Prompt Creating Table 'Actual_Input' Create Table Actual_Input (Ai_Ip# Number(7) Not Null ,Ai_Test_Case# Number(7) Not Null ,Ai_Category Varchar2(4) ,Ai_Db_Table Varchar2(20) ,Ai_Db_Field Varchar2(25) ,Ai_Db_Row# Integer ,Ai_Value Varchar2(240) ,Ai_Into_Var Varchar2(240) ,Ai_Gui_Window Varchar2(240) ,Ai_Gui_Component Varchar2(240) ,Ai_Type Varchar2(240) ,Ai_File_Name Varchar2(240) )/ Prompt Creating Table 'Var' Create Table Var (Var_Id Integer Not Null ,Var_Name Varchar2(15) Not Null ,Type Varchar2(10) ,Sub_Routine_Name Varchar2(30) ,Length Integer ,Is_Input Varchar2(1) Default 'T' ,Is_Output Varchar2(1) Default 'T' ,Is_Constant Varchar2(1) Default 'T' ,Is_Gui_Object Varchar2(1) Default 'T' ,Class_Name Varchar2(240) ,Object_Name Varchar2(240) ,Encapsulation Varchar2(10) ,Gui_Window Varchar2(240) ,Has_Lov_Restrictions Varchar2(1) Default 'T' ,Has_Table_Restrictions Varchar2(1) Default 'T' ,Has_Other_Restrictions Varchar2(1) Default 'T' ,Num_Of_Other_Restrictions Integer ,Physical_Logical Varchar2(1) ,Mandatory Varchar2(1) Default 'T' ,Default_Char_Value Varchar2(1) ,Default_Text_Value Varchar2(100) ,Default_Int_Value Integer ,Default_Real_Value Real ,Default_Bool_Value Varchar2(1) Default 'T' ,Default_Date_Value Date ,Default_Time_Value Date

PAGE 189

176 Appendix A: (Continued) ,Current_Char_Value Varchar2(1) ,Current_Real_Value Real ,Current_Text_Value Varchar2(100) ,Current_Int_Value Integer ,Current_Bool_Value Varchar2(1) Default 'T' ,Current_Date_Value Date ,Current_Time_Value Date )/ Prompt Creating Table 'List_Of_Tables' Create Table List_Of_Tables (Table_Name Varchar2(240) ,Composite_Primary_Key Varchar2(1) Default 'T' ,Order_To_Fill Number )/ Prompt Creating Table 'Types_Lov' Create Table Types_Lov (Type_Name Varchar2(20) Not Null ,Lov_Id Number Not Null ,Int_Value Integer ,Char_Value Varchar2(1) ,Text_Value Varchar2(240) ,Real_Value Real ,Date_Value Date ,Time_Value Date ,Bool_Value Smallint Default 0 )/ Prompt Creating Table 'Assignment' Create Table Assignment (Pos_Num Integer Not Null ,Assign_To_Var_Id Number(3) Not Null ,Rhs_Type Varchar2(5) ,Const_Type Varchar2(10) ,Cal_Num Integer ,Func_Name Varchar2(100) ,Instance_Num Integer ,Var_Id Number(3) ,Const_Int_Value Integer ,Const_Char_Value Varchar2(1) ,Const_Real_Value Real ,Const_Date_Value Date ,Const_Text_Value Varchar2(100) ,Const_Boolean_Value Varchar2(1) Default 'T' ,Const_Time_Value Date

PAGE 190

177 Appendix A: (Continued) ,Sequence_Name Varchar2(30) )/ Prompt Creating Table 'Sub_Routine_Ip_Description' Create Table Sub_Routine_Ip_Description (Subroutine_Name Varchar2(30) Not Null ,Ip_Order Integer Not Null ,Type Varchar2(240) Not Null ,Sr_Var_Name Varchar2(240) ,Has_Lov_Restrictions Char ,Has_Table_Restrictions Char ,Has_Other_Restrictions Char )/ Prompt Creating Table 'Predicate' Create Table Predicate (Pos_Num Number(7) Not Null ,Component_Num Number(7) Not Null ,Left_Operand_Type Varchar2(5) Default 'Comp' ,Left_Component_Num Integer ,Left_Var_Id Integer ,Left_Text_Const Varchar2(240) ,Left_Num_Const Float(12) ,Left_Table_Name Varchar2(240) ,Left_Table_Alias Varchar2(240) ,Left_Db_Field Varchar2(240) ,Left_Func_Name Varchar2(240) ,Left_Func_Inst_Num Integer ,In_Db_Oper_Num Integer ,Right_Operand_Type Varchar2(5) Default 'Comp' ,Right_Component_Num Integer ,Right_Var_Id Integer ,Right_Text_Const Varchar2(240) ,Right_Num_Const Float(12) ,Right_Func_Name Varchar2(240) ,Right_Func_Instance_Num Integer ,Right_Table_Name Varchar2(240) ,Right_Table_Alias Varchar2(240) ,Right_Db_Field Varchar2(240) ,Operator_Id Integer Not Null )/ Prompt Creating Table 'Pre_Post_Conditions' Create Table Pre_Post_Conditions (Db_Operation_Id Number Not Null ,Condition_Num Number Not Null

PAGE 191

178 Appendix A: (Continued) ,Pre_Post Varchar2(4) )/ Prompt Creating Table 'Actual_Output' Create Table Actual_Output (Ao_Op# Integer Not Null ,Ao_Test_Case# Integer Not Null ,Ao_Category Varchar2(240) ,Ao_Db_Table Varchar2(240) ,Ao_Change Varchar2(240) ,Ao_Field Varchar2(240) ,Ao_Row# Varchar2(240) ,Ao_Value Varchar2(240) ,Ao_From_Var Varchar2(240) ,Ao_Window Varchar2(240) ,Ao_Component Varchar2(240) ,Ao_Type Varchar2(240) ,Ao_Name Varchar2(240) ,Ao_Text Varchar2(240) ,Ao_Destination Varchar2(240) ,Ao_User Varchar2(240) ,Ao_Role_Id Varchar2(240) ,Ao_Privilege_Id Varchar2(240) ,Ao_Database Varchar2(240) ,Ao_Savepoint Varchar2(240) )/ Prompt Creating Table 'Class_Relationships' Create Table Class_Relationships (Relationship_Id Number Not Null ,Class_Name1 Varchar2(240) Not Null ,Relationship Varchar2(30) Not Null ,Class_Name2 Varchar2(240) Not Null )/ Prompt Creating Table 'Input_Output_Definition' Create Table Input_Output_Definition (Pos_Num Integer Not Null ,Var_Id Number(3) Not Null ,Var_Sequence Integer Not Null ,Input_Output Varchar2(1) ,Source_Destination_Var_Id Integer ,Type Varchar2(240) ,Report_Name Varchar2(240) ,Destination_Device Varchar2(240) ,File_Name Varchar2(240) ,File_Type Varchar2(240)

PAGE 192

179 Appendix A: (Continued) ,File_Append_Overwrite Varchar2(1) ,Message_Text Varchar2(240) )/ Prompt Creating Table 'Triggers' Create Table Triggers (Trigger_Name Varchar2(30) Not Null ,Dataflow_Pos_Num Integer ,On_Gui_Object Varchar2(1) Default 'T' ,Gui_Object_Var_Id Integer ,Gui_Trigger_Event Varchar2(240) ,On_Table Varchar2(1) Default 'T' ,Before_After Varchar2(1) ,Table_Insert Varchar2(1) Default 'T' ,Table_Delete Varchar2(1) Default 'T' ,Table_Update Varchar2(1) Default 'T' ,Field_Name Varchar2(20) ,Table_Name Varchar2(30) ,For_Each_Row Varchar2(1) Default 'T' ,When_Condition_Num Integer )/ Prompt Creating Table 'Db_Operation_Details' Create Table Db_Operation_Details (Db_Operation_Num Integer Not Null ,Order_Num Integer Not Null ,Field_Type Varchar2(10) ,Field_Name Varchar2(30) ,Into_Var_Id Number ,From_Var_Id Integer ,Char_Value Varchar2(1) ,Text_Value Varchar2(100) ,Int_Value Integer ,Real_Value Real ,Bool_Value Varchar2(1) Default 'T' ,Date_Value Date ,Time_Value Date ,Whole_Table Varchar2(1) Default 'T' ,Order_By_Requested Varchar2(1) Default 'T' ,Order_Desc Varchar2(1) Default 'T' ,Order_By_First Varchar2(15) ,Order_By_Second Varchar2(15) ,Order_By_Third Varchar2(15) ,Order_By_Fourth Varchar2(15) ,Insert_Select Varchar2(1) Default 'T'

PAGE 193

180 Appendix A: (Continued) ,Insert_Select_Db_Oper_Num Number ,Delete_Where_Pos_Num Number )/ Prompt Creating Table 'Types' Create Table Types (Type_Name Varchar2(20) Not Null ,Type Varchar2(10) ,Type_Size Integer ,Has_Lov Varchar2(1) Default 'T' ,Has_Range Varchar2(1) Default 'T' ,Even_Odd Varchar2(1) ,Range_From_Int Integer ,Range_To_Int Integer ,Range_From_Real Real ,Range_To_Real Real ,Range_From_Char Char ,Range_To_Char Char ,Range_From_Date Date ,Range_To_Date Date )/ Prompt Creating Table 'Expected_Output' Create Table Expected_Output (Eo_Op# Integer Not Null ,Eo_Test_Case# Integer Not Null ,Eo_Category Varchar2(240) ,Eo_Db_Table Varchar2(240) ,Eo_Change Varchar2(240) ,Eo_Field Varchar2(240) ,Eo_Row# Varchar2(240) ,Eo_Value Varchar2(240) ,Eo_From_Var Varchar2(240) ,Eo_Window Varchar2(240) ,Eo_Component Varchar2(240) ,Eo_Type Varchar2(240) ,Eo_Name Varchar2(240) ,Eo_Text Varchar2(240) ,Eo_Destination Varchar2(240) ,Eo_User Varchar2(240) ,Eo_Role_Id Varchar2(240) ,Eo_Privilege_Id Varchar2(240) ,Eo_Database Varchar2(240) ,Eo_Savepoint Varchar2(240) )/

PAGE 194

181 Appendix A: (Continued) Prompt Creating Table 'Db_Operations' Create Table Db_Operations (Db_Operation_Num Integer Not Null ,Operation_Type Varchar2(6) ,Db_Name Varchar2(30) ,Db_Condition_Num Integer ,Sequence_Name Varchar2(240) ,Sequence_Incremented Varchar2(1) Default 'T' ,Sequence_Start_With Number ,Savepoint_Rollback_Name Varchar2(30) ,Role_Priv_Name Varchar2(6) ,Role_Priv_User Varchar2(20) ,Priv_Object_Name Varchar2(240) ,Grant_Admin_Option Varchar2(1) Default 'T' ,View_Name Varchar2(240) ,Table_Name Varchar2(240) ,As_Table_Name Varchar2(240) ,As_Select Varchar2(1) Default 'T' ,Cursor_Name Varchar2(240) )/ Prompt Creating Table 'Db_Test_Operation' Create Table Db_Test_Operation (Operation_Id Number Not Null )/ Prompt Creating Table 'Test_Templates' Create Table Test_Templates (Db_Operation_Id Number Not Null ,Test_Template_Name Varchar2(30) Not Null )/ Prompt Creating Table 'Type_Restrictions' Create Table Type_Restrictions (Type_Name Varchar2(20) Not Null ,Restriction# Integer Not Null ,Include_Exclude Varchar(1) ,Has_Range Char ,Even_Or_Odd Varchar2(1) ,Range_From_Int Integer ,Range_To_Int Integer ,Int_Value Integer ,Range_From_Char Char ,Range_To_Char Char ,Char_Value Char

PAGE 195

182 Appendix A: (Continued) ,Range_From_Real Real ,Range_To_Real Real ,Real_Value Real ,Range_From_Date Date ,Range_To_Date Date ,Date_Value Date ,Range_From_Time Date ,Range_To_Time Varchar2(240) ,Time_Value Date ,Text_Value Varchar2(240) )/ Prompt Creating Table 'Attributes_Changed' Create Table Attributes_Changed (Db_Operation_Id Number Not Null ,Attribute Varchar2(30) ,Of_Table Varchar2(40) )/ Prompt Creating Table 'Type_Excluded_Ranges' Create Table Type_Excluded_Ranges (Type_Name Varchar2(20) Not Null ,Restriction_Num Integer Not Null ,Has_Range Char ,Range_From_Int Integer ,Range_To_Int Integer ,Int_Value Integer ,Range_From_Char Char ,Range_To_Char Char ,Char_Value Char ,Range_From_Real Real ,Range_To_Real Real ,Real_Value Real ,Range_From_Date Date ,Range_To_Date Date ,Date_Value Date ,Range_From_Time Date ,Range_To_Time Varchar2(240) ,Time_Value Date ,Text_Value Varchar2(240) )/

PAGE 196

183 Appendix A: (Continued) Prompt Creating Table 'Sub_Routine_Ip_Instance' Create Table Sub_Routine_Ip_Instance (Instance_Num Integer Not Null ,Ip_Order Integer Not Null ,Subroutine_Name Varchar2(30) Not Null ,Source_Type Varchar2(5) ,From_Var_Id Integer ,From_Gui_Object_Name Varchar2(50) ,From_Gui_Window Varchar2(50) ,From_Seq_Name Varchar2(30) ,From_Char_Const_Value Varchar2(1) ,From_Text_Const_Value Varchar2(100) ,From_Int_Const_Value Integer ,From_Real_Const_Value Real ,From_Bool_Const_Value Varchar2(1) Default 'T' ,From_Date_Const_Value Date ,From_Time_Const_Value Date ,Object_Name Varchar2(240) )/ Prompt Creating Table 'Function_Declaration' Create Table Function_Declaration (Func_Name Varchar2(240) Not Null ,Num_Of_Inputs Integer ,Op_Type Varchar2(240) Not Null ,Op_Var_Id Integer ,Op_Var_Name Varchar2(30) ,Op_Has_Lov_Restrictions Varchar2(1) Default 'T' ,Op_Has_Table_Restictions Varchar2(1) Default 'T' ,Op_Has_Other_Restrictions Varchar2(1) Default 'T' ,Changes_System_State Varchar2(1) Default 'T' ,Class_Name Varchar2(240) ,Encapsulation Varchar2(10) )/ Prompt Creating Table 'Var_Restrictions' Create Table Var_Restrictions (Var_Id Integer ,Restriction_Num Integer Not Null ,Sub_Routine_Name Varchar2(50) ,Var_Name Varchar2(30) ,Input_Or_Output Char ,Include_Exclude Varchar(1) ,Has_Range Char ,Even_Or_Odd Varchar2(1)

PAGE 197

184 Appendix A: (Continued) ,Equivalence_Class Varchar2(240) ,Range_From_Int Integer ,Range_To_Int Integer ,Int_Value Integer ,Range_From_Char Char ,Range_To_Char Char ,Char_Value Char ,Range_From_Real Real ,Range_To_Real Real ,Real_Value Real ,Range_From_Date Date ,Range_To_Date Date ,Date_Value Date ,Range_From_Time Date ,Range_To_Time Varchar2(240) ,Time_Value Date ,Text_Value Varchar2(240) )/ Prompt Creating Table 'Host_Var' Create Table Host_Var (Db_Operation_Id Number Not Null ,Host_Var_Name Varchar2(30) Not Null ,Input_Output Varchar2(1) ,Direct_Indirect Varchar2(1) ,On_Attribute Varchar2(30) ,Of_Table Varchar2(40) )/ Prompt Creating Table 'Objects' Create Table Objects (Object_Name Varchar2(240) Not Null ,Class_Name Varchar2(240) Not Null )/ Prompt Creating Table 'Data_Group' Create Table Data_Group (Db_Operation_Id Number Not Null ,Data_Group_Name Varchar2(240) Not Null ,Attribute Varchar2(30) Not Null ,Of_Table Varchar2(40) ,Probability Number )/

PAGE 198

185 Appendix A: (Continued) Prompt Creating Table 'Var_Table_Restrictions' Create Table Var_Table_Restrictions (Var_Tab_Rst_Id Integer Not Null ,Var_Id Integer Not Null ,Sub_Routine_Name Varchar2(240) ,Var_Name Varchar2(30) ,Input_Or_Output Varchar2(1) ,Operator Varchar2(240) Not Null ,Which_Table Varchar2(240) Not Null ,Which_Field Varchar2(240) ,Equivalence_Class Varchar2(240) )/ Prompt Creating Table 'Operator' Create Table Operator (Operator_Id Number(2) Not Null ,Symbol_Syntax Varchar2(5) Not Null ,Over_Loaded Varchar2(1) Default 'F' )/ Prompt Creating Table 'Constants' Create Table Constants (Constant_Name Varchar2(240) ,Type Varchar2(240) ,Int_Value Integer ,Real_Value Real ,Char_Value Char ,String_Value Varchar2(240) ,Date_Value Date ,Time_Value Date )/ Prompt Creating Table 'Expected_Input' Create Table Expected_Input (Ei_Ip# Number(7) Not Null ,Ei_Test_Case# Number(7) Not Null ,Ei_Category Varchar2(4) ,Ei_Db_Table Varchar2(20) ,Ei_Db_Field Varchar2(25) ,Ei_Db_Row# Integer ,Ei_Value Varchar2(240) ,Ei_Into_Var Varchar2(240) ,Ei_Gui_Window Varchar2(240) ,Ei_Gui_Component Varchar2(240) ,Ei_Type Varchar2(240) ,Ei_File_Name Varchar2(240) )/

PAGE 199

186 Appendix A: (Continued) Prompt Creating Table 'Char_Equiv_Class' Create Table Char_Equiv_Class (Var_Name Varchar2(15) Not Null ,Range_From Varchar2(1) Not Null ,Range_To Varchar2(1) Not Null ,Equivalence_Class Integer Not Null )/ Prompt Creating Table 'Runtime_Sr_Post_Conditions' Create Table Runtime_Sr_Post_Conditions (Sub_Routine_Name Varchar2(50) Not Null ,Post_Cond_Num Integer Not Null ,Instance_Num Integer Not Null ,Source Varchar2(5) ,Post_Cond_Type Varchar2(10) ,From_Var_Id Integer ,From_Gui_Obj_Name Varchar2(30) ,From_Gui_Window Varchar2(30) ,From_Seq_Name Varchar2(20) ,From_Char_Const_Value Varchar2(1) ,From_Text_Const_Value Varchar2(100) ,From_Int_Const_Value Integer ,From_Real_Const_Value Real ,From_Bool_Const_Value Varchar2(1) Default 'T' ,From_Date_Const_Value Date ,From_Time_Const_Value Date ,To_Gui_Obj_Name Varchar2(30) ,To_Gui_Window Varchar2(30) ,To_Var_Id Integer ,To_File_Name Varchar2(100) ,To_File_Type Varchar2(10) ,To_File_Change Varchar2(10) ,To_File_New_Location Varchar2(100) ,To_Message_Text Varchar2(240) ,To_Report_Name Varchar2(100) ,To_Report_Destination Varchar2(10) ,To_Report_File_Name Varchar2(100) ,To_Inc_Seq_Name Varchar2(30) ,Run_Exe_Name Varchar2(100) ,Savepoint_Rollback_Name Varchar2(20) ,Open_Window_Name Varchar2(40) ,Db_Operation_Num Integer )/


Prompt Creating Table 'Equivalence_Class'
Create Table Equivalence_Class
 (Db_Operation_Id Number Not Null
 ,Data_Group_Name Varchar2(240) Not Null
 ,Attribute_Name Varchar2(30)
 ,Int_Value Integer
 ,Real_Value Real
 ,Char_Value Varchar2(1)
 ,Text_Value Varchar2(100)
 ,Date_Value Date
 )
/
Prompt Creating Table 'Dataflow'
Create Table Dataflow
 (Pos_Num Integer Not Null
 ,Suc_Num Integer Not Null
 ,Pos_Type Varchar2(9) Default 'Process'
 ,Result Varchar2(1) Default 'T'
 ,Procedure_Name Varchar2(240)
 ,Instance_Num Integer
 ,Open_Window_Name Varchar2(240)
 ,Run_Exe_Name Varchar2(240)
 ,Object_Name Varchar2(240)
 ,Class_Name Varchar2(240)
 )
/
Prompt Creating Table 'Real_Equiv_Class'
Create Table Real_Equiv_Class
 (Var_Name Varchar2(15) Not Null
 ,Range_From Real Not Null
 ,Range_To Real Not Null
 ,Equivalence_Class Integer Not Null
 )
/
Prompt Creating Table 'Procedure'
Create Table Procedure
 (Proc_Name Varchar2(30) Not Null
 ,Num_Of_Inputs Integer
 ,Changes_System_State Varchar2(1) Default 'T'
 ,Class_Name Varchar2(240)
 ,Encapsulation Varchar2(10)
 )
/


Prompt Creating Table 'Db_Operation_Tables'
Create Table Db_Operation_Tables
 (Id Integer Not Null
 ,Db_Operation_Num Integer
 ,Db_Constraint_Num Number
 ,Table_Name Varchar2(30) Not Null
 ,Table_Alias Varchar2(240)
 )
/
Prompt Creating Table 'Data_Group_Values'
Create Table Data_Group_Values
 (Db_Operation_Id Number Not Null
 ,Data_Group_Name Varchar2(240) Not Null
 ,Int_Value Integer
 ,Real_Value Real
 ,Char_Value Varchar2(1)
 ,Text_Value Varchar2(100)
 ,Date_Value Date
 )
/
Prompt Creating Table 'Int_Equiv_Class'
Create Table Int_Equiv_Class
 (Var_Name Varchar2(15) Not Null
 ,Range_From Integer Not Null
 ,Range_To Integer Not Null
 ,Equivalence_Class Integer Not Null
 )
/
Prompt Creating Table 'Var_Lov'
Create Table Var_Lov
 (Id Number Not Null
 ,Var_Id Integer
 ,Var_Name Varchar2(30)
 ,Sub_Routine_Name Varchar2(240)
 ,Input_Or_Output Char
 ,Equivalence_Class Varchar2(240)
 ,Char_Value Varchar2(1)
 ,Text_Value Varchar2(100)
 ,Int_Value Integer
 ,Real_Value Real
 ,Date_Value Date
 ,Time_Value Date
 )
/


Prompt Creating Table 'Constraints'
Create Table Constraints
 (Constraint_Name Number Not Null
 ,Constraint_Num Integer Not Null
 ,Constraint_Or_Assertion Varchar2(1)
 ,On_Field Varchar2(1) Default 'T'
 ,Field_Name Varchar2(240)
 ,Table_Name Varchar2(240)
 ,Db_Name Varchar2(240)
 ,Condition_Predicate_Pos_Num Integer
 ,Constraint_Clause_Pred_Num Number
 ,Db_Op_Tables_Id Integer
 ,Db_Operation_Num Integer
 ,Db_Operator_Id Number(2)
 )
/
Prompt Creating Table 'Calculation'
Create Table Calculation
 (Cal_Num Integer Not Null
 ,Component_Num Integer Not Null
 ,Left_Operand_Type Varchar2(5) Default 'Comp' Not Null
 ,Left_Component_Num Integer
 ,Left_Var_Id Integer
 ,Left_Text_Const Varchar2(240)
 ,Left_Num_Const Float(12)
 ,Left_Func_Name Varchar2(50)
 ,Left_Func_Instance_Num Integer
 ,Right_Operand_Type Varchar2(5) Default 'Comp' Not Null
 ,Right_Component_Num Integer
 ,Right_Var_Id Integer
 ,Right_Text_Const Varchar2(240)
 ,Right_Num_Const Float(12)
 ,Right_Func_Name Varchar2(50)
 ,Right_Func_Instance_Num Integer
 ,Operator_Id Integer Not Null
 )
/
Prompt Creating Table 'Classes'
Create Table Classes
 (Class_Name Varchar2(240) Not Null
 )
/


Prompt Creating Table 'Table_Field_Description'
Create Table Table_Field_Description
 (Has_Constraint_Restriction Varchar2(1) Default 'T'
 ,Table_Name Varchar2(240) Not Null
 ,Field_Name Varchar2(240) Not Null
 ,Field_Id Varchar2(240)
 ,Field_Type Varchar2(240)
 ,Is_Primary_Key Varchar2(1) Default 'T'
 ,Is_Unique Varchar2(1) Default 'T'
 ,Mandatory Varchar2(1) Default 'T'
 ,Length Integer
 ,Has_Lov_Restriction Varchar2(1) Default 'T'
 ,Has_Table_Restriction Varchar2(1) Default 'T'
 ,Has_Range_Restriction Varchar2(1) Default 'T'
 ,Num_Of_Constraint_Restriction Integer
 ,Which_Table Varchar2(240)
 ,Which_Field Varchar2(240)
 )
/

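The equivalence-class tables above (Char_Equiv_Class, Real_Equiv_Class, Int_Equiv_Class) each map a variable name and a value range to an equivalence-class number, which a test generator can query to decide which class a concrete input value falls into. A minimal sketch, using Python's sqlite3 instead of the Oracle environment the scripts target, and an invented 'age' variable with hypothetical class boundaries:

```python
import sqlite3

# Assumption: SQLite stands in for Oracle; the 'age' ranges are illustrative.
conn = sqlite3.connect(":memory:")
conn.execute(
    """Create Table Int_Equiv_Class
         (Var_Name          Varchar(15) Not Null
         ,Range_From        Integer     Not Null
         ,Range_To          Integer     Not Null
         ,Equivalence_Class Integer     Not Null)"""
)
# Three classes for a hypothetical 'age' input: negative, minor, adult.
rows = [("age", -999, -1, 1), ("age", 0, 17, 2), ("age", 18, 150, 3)]
conn.executemany("Insert Into Int_Equiv_Class Values (?, ?, ?, ?)", rows)

def class_of(var_name, value):
    """Return the equivalence class whose range contains value, or None."""
    cur = conn.execute(
        """Select Equivalence_Class From Int_Equiv_Class
            Where Var_Name = ? And ? Between Range_From And Range_To""",
        (var_name, value),
    )
    row = cur.fetchone()
    return row[0] if row else None

print(class_of("age", 25))  # -> 3
```

A test generator would then pick one representative value per class rather than exhaustively enumerating the input domain.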

Appendix B: SQL Scripts for SpecDB Constraints

Prompt Creating Primary Key On 'Designed_Sr_Post_Conditions'
Alter Table Designed_Sr_Post_Conditions Add (Constraint Post_Cond_Pk Primary Key (Post_Cond_Num))
/
Prompt Creating Primary Key On 'Actual_Input'
Alter Table Actual_Input Add (Constraint Ip_Pk Primary Key (Ai_Ip#, Ai_Test_Case#))
/
Prompt Creating Primary Key On 'Var'
Alter Table Var Add (Constraint Var_Pk Primary Key (Var_Id))
/
Prompt Creating Primary Key On 'Types_Lov'
Alter Table Types_Lov Add (Constraint Tlov_Pk Primary Key (Lov_Id))
/
Prompt Creating Primary Key On 'Assignment'
Alter Table Assignment Add (Constraint Fun_Pk Primary Key (Pos_Num))
/
Prompt Creating Primary Key On 'Sub_Routine_Ip_Description'
Alter Table Sub_Routine_Ip_Description Add (Constraint Sr_Id_Pk Primary Key (Subroutine_Name, Ip_Order))
/
Prompt Creating Primary Key On 'Predicate'
Alter Table Predicate Add (Constraint Pre_Pk Primary Key (Pos_Num, Component_Num))
/


Prompt Creating Primary Key On 'Actual_Output'
Alter Table Actual_Output Add (Constraint Actual_Op_Pk Primary Key (Ao_Op#, Ao_Test_Case#))
/
Prompt Creating Primary Key On 'Class_Relationships'
Alter Table Class_Relationships Add (Constraint Cls_Rel_Pk Primary Key (Relationship_Id))
/
Prompt Creating Primary Key On 'Input_Output_Definition'
Alter Table Input_Output_Definition Add (Constraint Odn_Pk Primary Key (Var_Id, Pos_Num, Var_Sequence))
/
Prompt Creating Primary Key On 'Triggers'
Alter Table Triggers Add (Constraint Trg_Pk Primary Key (Trigger_Name))
/
Prompt Creating Primary Key On 'Db_Operation_Details'
Alter Table Db_Operation_Details Add (Constraint Dit_Pk Primary Key (Db_Operation_Num, Order_Num))
/
Prompt Creating Primary Key On 'Types'
Alter Table Types Add (Constraint Type_Pk Primary Key (Type_Name))
/
Prompt Creating Primary Key On 'Expected_Output'
Alter Table Expected_Output Add (Constraint A_Op_Pk Primary Key (Eo_Op#, Eo_Test_Case#))
/


Prompt Creating Primary Key On 'Db_Operations'
Alter Table Db_Operations Add (Constraint Db_Oper_Pk Primary Key (Db_Operation_Num))
/
Prompt Creating Primary Key On 'Db_Test_Operation'
Alter Table Db_Test_Operation Add (Constraint Dbto_Pk Primary Key (Operation_Id))
/
Prompt Creating Primary Key On 'Test_Templates'
Alter Table Test_Templates Add (Constraint Tte_Pk Primary Key (Db_Operation_Id, Test_Template_Name))
/
Prompt Creating Primary Key On 'Type_Restrictions'
Alter Table Type_Restrictions Add (Constraint Type_Res_Pk Primary Key (Restriction#))
/
Prompt Creating Primary Key On 'Type_Excluded_Ranges'
Alter Table Type_Excluded_Ranges Add (Constraint Type_Exran_Pk Primary Key (Restriction_Num))
/
Prompt Creating Primary Key On 'Sub_Routine_Ip_Instance'
Alter Table Sub_Routine_Ip_Instance Add (Constraint Sr_Ipin_Pk Primary Key (Instance_Num, Ip_Order))
/
Prompt Creating Primary Key On 'Function_Declaration'
Alter Table Function_Declaration Add (Constraint Func_Dcl_Pk Primary Key (Func_Name))
/


Prompt Creating Primary Key On 'Var_Restrictions'
Alter Table Var_Restrictions Add (Constraint Tres_Pk Primary Key (Restriction_Num))
/
Prompt Creating Primary Key On 'Host_Var'
Alter Table Host_Var Add (Constraint Hvr_Pk Primary Key (Db_Operation_Id, Host_Var_Name))
/
Prompt Creating Primary Key On 'Objects'
Alter Table Objects Add (Constraint Obj_Pk Primary Key (Object_Name))
/
Prompt Creating Primary Key On 'Data_Group'
Alter Table Data_Group Add (Constraint Dgp_Pk Primary Key (Data_Group_Name, Db_Operation_Id))
/
Prompt Creating Primary Key On 'Var_Table_Restrictions'
Alter Table Var_Table_Restrictions Add (Constraint Fvtr_Pk Primary Key (Var_Tab_Rst_Id))
/
Prompt Creating Primary Key On 'Operator'
Alter Table Operator Add (Constraint Opr_Pk Primary Key (Operator_Id))
/
Prompt Creating Primary Key On 'Expected_Input'
Alter Table Expected_Input Add (Constraint Exop_Pk Primary Key (Ei_Ip#, Ei_Test_Case#))
/


Prompt Creating Primary Key On 'Char_Equiv_Class'
Alter Table Char_Equiv_Class Add (Constraint Ces_Pk Primary Key (Var_Name, Range_From, Range_To))
/
Prompt Creating Primary Key On 'Runtime_Sr_Post_Conditions'
Alter Table Runtime_Sr_Post_Conditions Add (Constraint Rt_Post_Cond_Pk Primary Key (Post_Cond_Num, Instance_Num))
/
Prompt Creating Primary Key On 'Dataflow'
Alter Table Dataflow Add (Constraint Daw_Pk Primary Key (Pos_Num, Suc_Num))
/
Prompt Creating Primary Key On 'Real_Equiv_Class'
Alter Table Real_Equiv_Class Add (Constraint Rec_Pk Primary Key (Var_Name, Range_From, Range_To))
/
Prompt Creating Primary Key On 'Procedure'
Alter Table Procedure Add (Constraint Proced_Pk Primary Key (Proc_Name))
/
Prompt Creating Primary Key On 'Db_Operation_Tables'
Alter Table Db_Operation_Tables Add (Constraint Otab_Pk Primary Key (Id))
/
Prompt Creating Primary Key On 'Int_Equiv_Class'
Alter Table Int_Equiv_Class Add (Constraint Iec_Pk Primary Key (Var_Name, Range_From, Range_To))
/


Prompt Creating Primary Key On 'Var_Lov'
Alter Table Var_Lov Add (Constraint Fvlov_Pk Primary Key (Id))
/
Prompt Creating Primary Key On 'Constraints'
Alter Table Constraints Add (Constraint Con_Pk Primary Key (Constraint_Num))
/
Prompt Creating Primary Key On 'Calculation'
Alter Table Calculation Add (Constraint Calc_Pk Primary Key (Cal_Num, Component_Num))
/
Prompt Creating Primary Key On 'Classes'
Alter Table Classes Add (Constraint Cls_Pk Primary Key (Class_Name))
/
Prompt Creating Primary Key On 'Table_Field_Description'
Alter Table Table_Field_Description Add (Constraint Tdesc_Pk Primary Key (Table_Name, Field_Name))
/
Prompt Creating Unique Key On 'Sub_Routine_Ip_Description'
Alter Table Sub_Routine_Ip_Description Add (Constraint Sr_Ipdesc_Uk Unique (Subroutine_Name, Sr_Var_Name))
/
Prompt Creating Unique Key On 'Sub_Routine_Ip_Instance'
Alter Table Sub_Routine_Ip_Instance Add (Constraint Sr_Ipinst_Uk Unique (Subroutine_Name))
/


Prompt Creating Unique Key On 'Table_Field_Description'
Alter Table Table_Field_Description Add (Constraint Field_Id_Uk Unique (Field_Id))
/
Prompt Creating Check Constraint On 'Valid_Operands'
Alter Table Valid_Operands Add (Constraint Avcon_1144240102_Opera_000
  Check (Operand1_Type In ('Char', 'Text', 'Int', 'Real', 'Bool', 'Date', 'Time', 'Db_Table')))
/
Prompt Creating Check Constraint On 'Valid_Operands'
Alter Table Valid_Operands Add (Constraint Avcon_1144240102_Opera_001
  Check (Operand2_Type In ('Char', 'Text', 'Int', 'Real', 'Bool', 'Date', 'Time', 'Db_Table')))
/
Prompt Creating Check Constraint On 'Valid_Operands'
Alter Table Valid_Operands Add (Constraint Avcon_1144240102_Opera_002
  Check (Operand3_Type In ('Char', 'Text', 'Int', 'Real', 'Bool', 'Date', 'Time', 'Db_Table')))
/
Prompt Creating Check Constraint On 'Valid_Operands'
Alter Table Valid_Operands Add (Constraint Avcon_1144240102_Over__000 Check (Over_Loaded In ('T', 'F')))
/
Prompt Creating Check Constraint On 'Designed_Sr_Post_Conditions'
Alter Table Designed_Sr_Post_Conditions Add (Constraint Avcon_1144240102_Sourc_000
  Check (Source In ('Var', 'Const', 'Gui', 'Seq')))
/
Prompt Creating Check Constraint On 'Designed_Sr_Post_Conditions'
Alter Table Designed_Sr_Post_Conditions Add (Constraint Avcon_1144240102_Post__000
  Check (Post_Cond_Type In ('Gui', 'Db', 'File', 'Report', 'Seq', 'Message', 'Priv_Role',
    'Global_Var', 'Commit', 'Savepoint', 'Rollback', 'Run_Exe', 'Open_Win', 'Terminate')))
/


Prompt Creating Check Constraint On 'Designed_Sr_Post_Conditions'
Alter Table Designed_Sr_Post_Conditions Add (Constraint Avcon_1144240102_From__000
  Check (From_Bool_Const_Value In ('T', 'F')))
/
Prompt Creating Check Constraint On 'Designed_Sr_Post_Conditions'
Alter Table Designed_Sr_Post_Conditions Add (Constraint Avcon_1144240102_To_Fi_000
  Check (To_File_Change In ('Create', 'Delete', 'Modify', 'Relocate')))
/
Prompt Creating Check Constraint On 'Designed_Sr_Post_Conditions'
Alter Table Designed_Sr_Post_Conditions Add (Constraint Avcon_1144240102_To_Re_000
  Check (To_Report_Destination In ('Printer', 'Screen', 'File')))
/
Prompt Creating Check Constraint On 'Actual_Input'
Alter Table Actual_Input Add (Constraint Avcon_1144240102_Ai_Ca_000 Check (Ai_Category In ('Db', 'Gui', 'File')))
/
Prompt Creating Check Constraint On 'Var'
Alter Table Var Add (Constraint Avcon_1144240102_Is_In_000 Check (Is_Input In ('T', 'F')))
/
Prompt Creating Check Constraint On 'Var'
Alter Table Var Add (Constraint Avcon_1144240102_Is_Ou_000 Check (Is_Output In ('T', 'F')))
/
Prompt Creating Check Constraint On 'Var'
Alter Table Var Add (Constraint Avcon_1144240102_Is_Co_000 Check (Is_Constant In ('T', 'F')))
/
Prompt Creating Check Constraint On 'Var'
Alter Table Var Add (Constraint Avcon_1144240102_Is_Gu_000 Check (Is_Gui_Object In ('T', 'F')))
/


Prompt Creating Check Constraint On 'Var'
Alter Table Var Add (Constraint Avcon_1144240102_Encap_000
  Check (Encapsulation In ('Private', 'Protected', 'Public')))
/
Prompt Creating Check Constraint On 'Var'
Alter Table Var Add (Constraint Avcon_1144240102_Has_L_000 Check (Has_Lov_Restrictions In ('T', 'F')))
/
Prompt Creating Check Constraint On 'Var'
Alter Table Var Add (Constraint Avcon_1144240102_Has_T_000 Check (Has_Table_Restrictions In ('T', 'F')))
/
Prompt Creating Check Constraint On 'Var'
Alter Table Var Add (Constraint Avcon_1144240102_Has_O_000 Check (Has_Other_Restrictions In ('T', 'F')))
/
Prompt Creating Check Constraint On 'Var'
Alter Table Var Add (Constraint Avcon_1144240102_Manda_000 Check (Mandatory In ('T', 'F')))
/
Prompt Creating Check Constraint On 'Var'
Alter Table Var Add (Constraint Avcon_1144240102_Defau_000 Check (Default_Bool_Value In ('T', 'F')))
/
Prompt Creating Check Constraint On 'Var'
Alter Table Var Add (Constraint Avcon_1144240102_Curre_000 Check (Current_Bool_Value In ('T', 'F')))
/
Prompt Creating Check Constraint On 'List_Of_Tables'
Alter Table List_Of_Tables


Add (Constraint Avcon_1144240102_Compo_000 Check (Composite_Primary_Key In ('T', 'F')))
/
Prompt Creating Check Constraint On 'Types_Lov'
Alter Table Types_Lov Add (Constraint Avcon_1144240102_Bool__000 Check (Bool_Value In (0, 1)))
/
Prompt Creating Check Constraint On 'Assignment'
Alter Table Assignment Add (Constraint Avcon_1144240102_Rhs_T_000
  Check (Rhs_Type In ('Calc', 'Func', 'Var', 'Const', 'Seq', 'Gui', 'Db')))
/
Prompt Creating Check Constraint On 'Assignment'
Alter Table Assignment Add (Constraint Avcon_1144240102_Const_000
  Check (Const_Type In ('Char', 'Text', 'Int', 'Real', 'Bool', 'Date', 'Time', 'Db_Table')))
/
Prompt Creating Check Constraint On 'Assignment'
Alter Table Assignment Add (Constraint Avcon_1144240102_Const_001 Check (Const_Boolean_Value In ('T', 'F')))
/
Prompt Creating Check Constraint On 'Sub_Routine_Ip_Description'
Alter Table Sub_Routine_Ip_Description Add (Constraint Avcon_1144240102_Has_L_001
  Check (Has_Lov_Restrictions In ('T', 'F')))
/
Prompt Creating Check Constraint On 'Sub_Routine_Ip_Description'
Alter Table Sub_Routine_Ip_Description Add (Constraint Avcon_1144240102_Has_T_001
  Check (Has_Table_Restrictions In ('T', 'F')))
/
Prompt Creating Check Constraint On 'Sub_Routine_Ip_Description'
Alter Table Sub_Routine_Ip_Description Add (Constraint Avcon_1144240102_Has_O_001
  Check (Has_Other_Restrictions In ('T', 'F')))
/


Prompt Creating Check Constraint On 'Predicate'
Alter Table Predicate Add (Constraint Avcon_1144240102_Left__000
  Check (Left_Operand_Type In ('Comp', 'Var', 'Const', 'Func', 'Db_Table', 'Db_Field')))
/
Prompt Creating Check Constraint On 'Predicate'
Alter Table Predicate Add (Constraint Avcon_1144240102_Right_000
  Check (Right_Operand_Type In ('Comp', 'Var', 'Const', 'Func', 'Db_Table', 'Db_Field')))
/
Prompt Creating Check Constraint On 'Pre_Post_Conditions'
Alter Table Pre_Post_Conditions Add (Constraint Avcon_1144240102_Pre_P_000 Check (Pre_Post In ('Pre', 'Post')))
/
Prompt Creating Check Constraint On 'Actual_Output'
Alter Table Actual_Output Add (Constraint Avcon_1144240102_Ao_Ca_000
  Check (Ao_Category In ('Db', 'Gui', 'File', 'Sequence', 'Message', 'Report', 'Variable',
    'Role', 'Privilege', 'Commit', 'Savepoint', 'Rollback')))
/
Prompt Creating Check Constraint On 'Class_Relationships'
Alter Table Class_Relationships Add (Constraint Avcon_1144240102_Relat_000
  Check (Relationship In ('Inherits_From', 'Aggregate_Of', 'Associated_With', 'Uses',
    'Instantiates', 'Metaclass_Of')))
/
Prompt Creating Check Constraint On 'Triggers'
Alter Table Triggers Add (Constraint Avcon_1144240102_On_Gu_000 Check (On_Gui_Object In ('T', 'F')))
/
Prompt Creating Check Constraint On 'Triggers'
Alter Table Triggers Add (Constraint Avcon_1144240102_On_Ta_000 Check (On_Table In ('T', 'F')))
/
Prompt Creating Check Constraint On 'Triggers'
Alter Table Triggers


Add (Constraint Avcon_1144240102_Befor_000 Check (Before_After In ('B', 'A')))
/
Prompt Creating Check Constraint On 'Triggers'
Alter Table Triggers Add (Constraint Avcon_1144240102_Table_000 Check (Table_Insert In ('T', 'F')))
/
Prompt Creating Check Constraint On 'Triggers'
Alter Table Triggers Add (Constraint Avcon_1144240102_Table_001 Check (Table_Delete In ('T', 'F')))
/
Prompt Creating Check Constraint On 'Triggers'
Alter Table Triggers Add (Constraint Avcon_1144240102_Table_002 Check (Table_Update In ('T', 'F')))
/
Prompt Creating Check Constraint On 'Triggers'
Alter Table Triggers Add (Constraint Avcon_1144240102_For_E_000 Check (For_Each_Row In ('T', 'F')))
/
Prompt Creating Check Constraint On 'Db_Operation_Details'
Alter Table Db_Operation_Details Add (Constraint Avcon_1144240102_Field_000
  Check (Field_Type In ('Char', 'Text', 'Int', 'Real', 'Bool', 'Date', 'Time', 'Db_Table')))
/
Prompt Creating Check Constraint On 'Db_Operation_Details'
Alter Table Db_Operation_Details Add (Constraint Avcon_1144240102_Bool__001 Check (Bool_Value In ('T', 'F')))
/
Prompt Creating Check Constraint On 'Db_Operation_Details'
Alter Table Db_Operation_Details Add (Constraint Avcon_1144240102_Whole_000 Check (Whole_Table In ('T', 'F')))
/
Prompt Creating Check Constraint On 'Db_Operation_Details'
Alter Table Db_Operation_Details Add (Constraint Avcon_1144240102_Order_000 Check (Order_By_Requested In ('T', 'F')))
/


Prompt Creating Check Constraint On 'Db_Operation_Details'
Alter Table Db_Operation_Details Add (Constraint Avcon_1144240102_Order_001 Check (Order_Desc In ('T', 'F')))
/
Prompt Creating Check Constraint On 'Db_Operation_Details'
Alter Table Db_Operation_Details Add (Constraint Avcon_1144240102_Inser_000 Check (Insert_Select In ('T', 'F')))
/
Prompt Creating Check Constraint On 'Types'
Alter Table Types Add (Constraint Avcon_1144240102_Has_L_002 Check (Has_Lov In ('T', 'F')))
/
Prompt Creating Check Constraint On 'Types'
Alter Table Types Add (Constraint Avcon_1144240102_Has_R_000 Check (Has_Range In ('T', 'F')))
/
Prompt Creating Check Constraint On 'Types'
Alter Table Types Add (Constraint Avcon_1144240102_Even__000 Check (Even_Odd In ('E', 'O')))
/
Prompt Creating Check Constraint On 'Expected_Output'
Alter Table Expected_Output Add (Constraint Avcon_1144240102_Eo_Ca_000
  Check (Eo_Category In ('Db', 'Gui', 'File', 'Sequence', 'Message', 'Report', 'Variable',
    'Role', 'Privilege', 'Commit', 'Savepoint', 'Rollback')))
/
Prompt Creating Check Constraint On 'Db_Operations'
Alter Table Db_Operations Add (Constraint Avcon_1144240102_Opera_003
  Check (Operation_Type In ('Select *', 'Select', 'Insert', 'Update', 'Delete', 'Create View',
    'Create_Table', 'Drop_Table', 'Create_Seq', 'Access_Seq', 'Create Role', 'Select Count',
    'Commit', 'Rollback', 'Savepoint', 'Grant', 'Revoke', 'Cursor', 'Fetch')))
/


Prompt Creating Check Constraint On 'Db_Operations'
Alter Table Db_Operations Add (Constraint Avcon_1144240102_Seque_000 Check (Sequence_Incremented In ('T', 'F')))
/
Prompt Creating Check Constraint On 'Db_Operations'
Alter Table Db_Operations Add (Constraint Avcon_1144240102_Grant_000 Check (Grant_Admin_Option In ('Grant', 'Admin')))
/
Prompt Creating Check Constraint On 'Db_Operations'
Alter Table Db_Operations Add (Constraint Avcon_1144240102_As_Se_000 Check (As_Select In ('T', 'F')))
/
Prompt Creating Check Constraint On 'Type_Restrictions'
Alter Table Type_Restrictions Add (Constraint Avcon_1144240102_Inclu_000 Check (Include_Exclude In ('I', 'E')))
/
Prompt Creating Check Constraint On 'Type_Restrictions'
Alter Table Type_Restrictions Add (Constraint Avcon_1144240102_Has_R_001 Check (Has_Range In ('T', 'F')))
/
Prompt Creating Check Constraint On 'Type_Restrictions'
Alter Table Type_Restrictions Add (Constraint Avcon_1144240102_Even__001 Check (Even_Or_Odd In ('E', 'O')))
/
Prompt Creating Check Constraint On 'Type_Excluded_Ranges'
Alter Table Type_Excluded_Ranges Add (Constraint Avcon_1144240102_Has_R_002 Check (Has_Range In ('T', 'F')))
/
Prompt Creating Check Constraint On 'Sub_Routine_Ip_Instance'
Alter Table Sub_Routine_Ip_Instance Add (Constraint Avcon_1144240102_Sourc_001
  Check (Source_Type In ('Var', 'Const', 'Gui', 'Seq')))
/
Prompt Creating Check Constraint On 'Sub_Routine_Ip_Instance'
Alter Table Sub_Routine_Ip_Instance


Add (Constraint Avcon_1144240102_From__001 Check (From_Bool_Const_Value In ('T', 'F')))
/
Prompt Creating Check Constraint On 'Function_Declaration'
Alter Table Function_Declaration Add (Constraint Avcon_1144240102_Op_Ha_000
  Check (Op_Has_Lov_Restrictions In ('T', 'F')))
/
Prompt Creating Check Constraint On 'Function_Declaration'
Alter Table Function_Declaration Add (Constraint Avcon_1144240102_Op_Ha_001
  Check (Op_Has_Table_Restictions In ('T', 'F')))
/
Prompt Creating Check Constraint On 'Function_Declaration'
Alter Table Function_Declaration Add (Constraint Avcon_1144240102_Op_Ha_002
  Check (Op_Has_Other_Restrictions In ('T', 'F')))
/
Prompt Creating Check Constraint On 'Function_Declaration'
Alter Table Function_Declaration Add (Constraint Avcon_1144240102_Chang_000
  Check (Changes_System_State In ('T', 'F')))
/
Prompt Creating Check Constraint On 'Function_Declaration'
Alter Table Function_Declaration Add (Constraint Avcon_1144240102_Encap_001
  Check (Encapsulation In ('Private', 'Protected', 'Public')))
/
Prompt Creating Check Constraint On 'Var_Restrictions'
Alter Table Var_Restrictions Add (Constraint Avcon_1144240102_Input_000 Check (Input_Or_Output In ('I', 'O')))
/
Prompt Creating Check Constraint On 'Var_Restrictions'
Alter Table Var_Restrictions Add (Constraint Avcon_1144240102_Inclu_001 Check (Include_Exclude In ('I', 'E')))
/
Prompt Creating Check Constraint On 'Var_Restrictions'


Alter Table Var_Restrictions Add (Constraint Avcon_1144240102_Has_R_003 Check (Has_Range In ('T', 'F')))
/
Prompt Creating Check Constraint On 'Var_Restrictions'
Alter Table Var_Restrictions Add (Constraint Avcon_1144240102_Even__002 Check (Even_Or_Odd In ('E', 'O')))
/
Prompt Creating Check Constraint On 'Host_Var'
Alter Table Host_Var Add (Constraint Avcon_1144240102_Input_001 Check (Input_Output In ('I', 'O')))
/
Prompt Creating Check Constraint On 'Host_Var'
Alter Table Host_Var Add (Constraint Avcon_1144240102_Direc_000 Check (Direct_Indirect In ('D', 'I')))
/
Prompt Creating Check Constraint On 'Var_Table_Restrictions'
Alter Table Var_Table_Restrictions Add (Constraint Avcon_1144240102_Input_002 Check (Input_Or_Output In ('I', 'O')))
/
Prompt Creating Check Constraint On 'Operator'
Alter Table Operator Add (Constraint Avcon_1144240102_Over__001 Check (Over_Loaded In ('T', 'F')))
/
Prompt Creating Check Constraint On 'Expected_Input'
Alter Table Expected_Input Add (Constraint Avcon_1144240102_Ei_Ca_000 Check (Ei_Category In ('Db', 'Gui', 'File')))
/
Prompt Creating Check Constraint On 'Runtime_Sr_Post_Conditions'
Alter Table Runtime_Sr_Post_Conditions Add (Constraint Avcon_1144240102_Sourc_002
  Check (Source In ('Var', 'Const', 'Gui', 'Seq')))
/
Prompt Creating Check Constraint On 'Runtime_Sr_Post_Conditions'
Alter Table Runtime_Sr_Post_Conditions Add (Constraint Avcon_1144240102_Post__001 Check (Post_Cond_Type


  In ('Gui', 'Db', 'File', 'Report', 'Seq', 'Message', 'Priv_Role', 'Global_Var',
    'Commit', 'Savepoint', 'Rollback', 'Run_Exe', 'Open_Win', 'Terminate')))
/
Prompt Creating Check Constraint On 'Runtime_Sr_Post_Conditions'
Alter Table Runtime_Sr_Post_Conditions Add (Constraint Avcon_1144240102_From__002
  Check (From_Bool_Const_Value In ('T', 'F')))
/
Prompt Creating Check Constraint On 'Runtime_Sr_Post_Conditions'
Alter Table Runtime_Sr_Post_Conditions Add (Constraint Avcon_1144240102_To_Fi_001
  Check (To_File_Change In ('Create', 'Delete', 'Modify', 'Relocate')))
/
Prompt Creating Check Constraint On 'Runtime_Sr_Post_Conditions'
Alter Table Runtime_Sr_Post_Conditions Add (Constraint Avcon_1144240102_To_Re_001
  Check (To_Report_Destination In ('Printer', 'Screen', 'File')))
/
Prompt Creating Check Constraint On 'Dataflow'
Alter Table Dataflow Add (Constraint Avcon_1144240102_Pos_T_000
  Check (Pos_Type In ('Initial', 'Begin', 'Input', 'Assign', 'Decision', 'Whileloop', 'Forloop',
    'Run_Exe', 'Output', 'Call', 'Db', 'Open_Window', 'Construct_Object', 'Destruct_Object',
    'Call_Object_Method', 'End', 'Endfor', 'End Loop', 'Terminal')))
/
Prompt Creating Check Constraint On 'Dataflow'
Alter Table Dataflow Add (Constraint Avcon_1144240102_Resul_000 Check (Result In ('T', 'F')))
/
Prompt Creating Check Constraint On 'Procedure'
Alter Table Procedure Add (Constraint Avcon_1144240102_Chang_001 Check (Changes_System_State In ('T', 'F')))
/
Prompt Creating Check Constraint On 'Procedure'
Alter Table Procedure


Add (Constraint Avcon_1144240102_Encap_002 Check (Encapsulation In ('Private', 'Protected', 'Public')))
/
Prompt Creating Check Constraint On 'Var_Lov'
Alter Table Var_Lov Add (Constraint Avcon_1144240102_Input_003 Check (Input_Or_Output In ('I', 'O')))
/
Prompt Creating Check Constraint On 'Constraints'
Alter Table Constraints Add (Constraint Avcon_1144240102_Const_002 Check (Constraint_Or_Assertion In ('C', 'A')))
/
Prompt Creating Check Constraint On 'Constraints'
Alter Table Constraints Add (Constraint Avcon_1144240102_On_Fi_000 Check (On_Field In ('T', 'F')))
/
Prompt Creating Check Constraint On 'Calculation'
Alter Table Calculation Add (Constraint Avcon_1144240102_Left__001
  Check (Left_Operand_Type In ('Comp', 'Var', 'Const', 'Func', 'Db_Table', 'Db_Field')))
/
Prompt Creating Check Constraint On 'Calculation'
Alter Table Calculation Add (Constraint Avcon_1144240102_Right_001
  Check (Right_Operand_Type In ('Comp', 'Var', 'Const', 'Func', 'Db_Table', 'Db_Field')))
/
Prompt Creating Check Constraint On 'Table_Field_Description'
Alter Table Table_Field_Description Add (Constraint Avcon_1144240102_Has_C_000
  Check (Has_Constraint_Restriction In ('T', 'F')))
/
Prompt Creating Check Constraint On 'Table_Field_Description'
Alter Table Table_Field_Description Add (Constraint Avcon_1144240102_Is_Pr_000
  Check (Is_Primary_Key In ('T', 'F')))
/
Prompt Creating Check Constraint On 'Table_Field_Description'


Alter Table Table_Field_Description Add (Constraint Avcon_1144240102_Is_Un_000 Check (Is_Unique In ('T', 'F')))
/
Prompt Creating Check Constraint On 'Table_Field_Description'
Alter Table Table_Field_Description Add (Constraint Avcon_1144240102_Manda_001 Check (Mandatory In ('T', 'F')))
/
Prompt Creating Check Constraint On 'Table_Field_Description'
Alter Table Table_Field_Description Add (Constraint Avcon_1144240102_Has_L_003
  Check (Has_Lov_Restriction In ('T', 'F')))
/
Prompt Creating Check Constraint On 'Table_Field_Description'
Alter Table Table_Field_Description Add (Constraint Avcon_1144240102_Has_T_002
  Check (Has_Table_Restriction In ('T', 'F')))
/
Prompt Creating Check Constraint On 'Table_Field_Description'
Alter Table Table_Field_Description Add (Constraint Avcon_1144240102_Has_R_004
  Check (Has_Range_Restriction In ('T', 'F')))
/
Prompt Creating Foreign Key On 'Valid_Operands'
Alter Table Valid_Operands Add (Constraint Oprnd_Opr_Fk Foreign Key (Operator_Id) References Operator (Operator_Id))
/
Prompt Creating Foreign Key On 'Designed_Sr_Post_Conditions'
Alter Table Designed_Sr_Post_Conditions Add (Constraint Post_Cond_Proc_Fk Foreign Key (Sub_Routine_Name) References Procedure (Proc_Name))
/
Prompt Creating Foreign Key On 'Designed_Sr_Post_Conditions'
Alter Table Designed_Sr_Post_Conditions Add (Constraint


Post_Cond_Fdcl_Fk Foreign Key (Sub_Routine_Name) References Function_Declaration (Func_Name))
/
Prompt Creating Foreign Key On 'Var'
Alter Table Var Add (Constraint Var_Cls_Fk Foreign Key (Class_Name) References Classes (Class_Name))
/
Prompt Creating Foreign Key On 'Var'
Alter Table Var Add (Constraint Var_Odn_Fk Foreign Key (Var_Id) References Input_Output_Definition (Var_Id))
/
Prompt Creating Foreign Key On 'Var'
Alter Table Var Add (Constraint Var_Obj_Fk Foreign Key (Object_Name) References Objects (Object_Name))
/
Prompt Creating Foreign Key On 'Types_Lov'
Alter Table Types_Lov Add (Constraint Tlov_Tdesc_Fk Foreign Key (Type_Name) References Table_Field_Description (Field_Id))
/
Prompt Creating Foreign Key On 'Types_Lov'
Alter Table Types_Lov Add (Constraint Tlov_Tps_3_Fk Foreign Key (Type_Name) References Types (Type_Name))
/
Prompt Creating Foreign Key On 'Assignment'
Alter Table Assignment Add (Constraint


Fun_Calc_Fk Foreign Key (Cal_Num) References Calculation (Cal_Num))
/
Prompt Creating Foreign Key On 'Assignment'
Alter Table Assignment Add (Constraint Fun_Daw_Fk Foreign Key (Pos_Num) References Dataflow (Pos_Num))
/
Prompt Creating Foreign Key On 'Assignment'
Alter Table Assignment Add (Constraint Fun_Fdcl Foreign Key (Func_Name) References Function_Declaration (Func_Name))
/
Prompt Creating Foreign Key On 'Sub_Routine_Ip_Description'
Alter Table Sub_Routine_Ip_Description Add (Constraint Srp_Proc_Fk Foreign Key (Subroutine_Name) References Procedure (Proc_Name))
/
Prompt Creating Foreign Key On 'Sub_Routine_Ip_Description'
Alter Table Sub_Routine_Ip_Description Add (Constraint Finrest1_Fdcl_Fk Foreign Key (Subroutine_Name) References Function_Declaration (Func_Name))
/
Prompt Creating Foreign Key On 'Sub_Routine_Ip_Description'
Alter Table Sub_Routine_Ip_Description Add (Constraint Srp_Tps_3_Fk Foreign Key (Type) References Types (Type_Name))
/
Prompt Creating Foreign Key On 'Predicate'
Alter Table Predicate Add (Constraint Pre_Opr Foreign Key


(Operator_Id) References Operator (Operator_Id))
/
Prompt Creating Foreign Key On 'Predicate'
Alter Table Predicate Add (Constraint Right_Var_Fk Foreign Key (Right_Var_Id) References Var (Var_Id))
/
Prompt Creating Foreign Key On 'Predicate'
Alter Table Predicate Add (Constraint Pre_Trg_Fk2 Foreign Key (Pos_Num) References Triggers (Trigger_Name))
/
Prompt Creating Foreign Key On 'Predicate'
Alter Table Predicate Add (Constraint Pre_Daw_Fk Foreign Key (Pos_Num) References Dataflow (Pos_Num))
/
Prompt Creating Foreign Key On 'Predicate'
Alter Table Predicate Add (Constraint Left_Comp_Fk Foreign Key (Left_Component_Num, Pos_Num) References Predicate (Component_Num, Pos_Num))
/
Prompt Creating Foreign Key On 'Predicate'
Alter Table Predicate Add (Constraint Left_Var_Fk Foreign Key (Left_Var_Id) References Var (Var_Id))
/
Prompt Creating Foreign Key On 'Predicate'
Alter Table Predicate Add (Constraint


Right_Comp_Fk Foreign Key (Right_Component_Num, Pos_Num) References Predicate (Component_Num, Pos_Num))
/
Prompt Creating Foreign Key On 'Pre_Post_Conditions'
Alter Table Pre_Post_Conditions Add (Constraint Ppn_Dbto_Fk Foreign Key (Db_Operation_Id) References Db_Test_Operation (Operation_Id))
/
Prompt Creating Foreign Key On 'Class_Relationships'
Alter Table Class_Relationships Add (Constraint Crfk2 Foreign Key (Class_Name2) References Classes (Class_Name))
/
Prompt Creating Foreign Key On 'Class_Relationships'
Alter Table Class_Relationships Add (Constraint Crfk1 Foreign Key (Class_Name1) References Classes (Class_Name))
/
Prompt Creating Foreign Key On 'Input_Output_Definition'
Alter Table Input_Output_Definition Add (Constraint Odn_Daw_Fk Foreign Key (Pos_Num) References Dataflow (Pos_Num))
/
Prompt Creating Foreign Key On 'Db_Operation_Details'
Alter Table Db_Operation_Details Add (Constraint Op_Detail_Fk Foreign Key (Db_Operation_Num) References Db_Operations (Db_Operation_Num))
/


Prompt Creating Foreign Key On 'Test_Templates'
Alter Table Test_Templates Add (Constraint Tte_Dbto_Fk Foreign Key (Db_Operation_Id) References Db_Test_Operation (Operation_Id))
/
Prompt Creating Foreign Key On 'Type_Restrictions'
Alter Table Type_Restrictions Add (Constraint Type_Res_Tdesc_Fk Foreign Key (Type_Name) References Table_Field_Description (Field_Id))
/
Prompt Creating Foreign Key On 'Type_Restrictions'
Alter Table Type_Restrictions Add (Constraint Type_Res_Tps_Fk Foreign Key (Type_Name) References Types_Old (Type_Name))
/
Prompt Creating Foreign Key On 'Attributes_Changed'
Alter Table Attributes_Changed Add (Constraint Ace_Dbto_Fk Foreign Key (Db_Operation_Id) References Db_Test_Operation (Operation_Id))
/
Prompt Creating Foreign Key On 'Type_Excluded_Ranges'
Alter Table Type_Excluded_Ranges Add (Constraint Tres_1_Tps_3_Fk Foreign Key (Type_Name) References Types (Type_Name))
/
Prompt Creating Foreign Key On 'Type_Excluded_Ranges'
Alter Table Type_Excluded_Ranges Add (Constraint Type_Exran_Tdesc_Fk Foreign Key


(Type_Name) References Table_Field_Description (Field_Id))
/
Prompt Creating Foreign Key On 'Sub_Routine_Ip_Instance'
Alter Table Sub_Routine_Ip_Instance Add (Constraint Srp_1_Obj_Fk Foreign Key (Object_Name) References Objects (Object_Name))
/
Prompt Creating Foreign Key On 'Sub_Routine_Ip_Instance'
Alter Table Sub_Routine_Ip_Instance Add (Constraint Sp_Fk Foreign Key (Subroutine_Name, Ip_Order) References Sub_Routine_Ip_Description (Subroutine_Name, Ip_Order))
/
Prompt Creating Foreign Key On 'Sub_Routine_Ip_Instance'
Alter Table Sub_Routine_Ip_Instance Add (Constraint Srp_Proc1_Fk Foreign Key (Subroutine_Name) References Procedure_1 (Proc_Name))
/
Prompt Creating Foreign Key On 'Sub_Routine_Ip_Instance'
Alter Table Sub_Routine_Ip_Instance Add (Constraint Finrest_Fdcl_Fk Foreign Key (Subroutine_Name) References Function_Declaration_1 (Func_Name))
/
Prompt Creating Foreign Key On 'Function_Declaration'
Alter Table Function_Declaration Add (Constraint Fdcl_Cls_Fk Foreign Key (Class_Name) References Classes (Class_Name))
/
Prompt Creating Foreign Key On 'Function_Declaration'
Alter Table Function_Declaration Add (Constraint Fdcl_Tps_3_Fk Foreign Key (Op_Type) References Types (Type_Name))
/


Prompt Creating Foreign Key On 'Var_Restrictions'
Alter Table Var_Restrictions Add (Constraint Varres_Fdcl_Fk Foreign Key (Sub_Routine_Name) References Function_Declaration (Func_Name))
/
Prompt Creating Foreign Key On 'Var_Restrictions'
Alter Table Var_Restrictions Add (Constraint Varres_Var_Fk Foreign Key (Var_Id) References Var (Var_Id))
/
Prompt Creating Foreign Key On 'Var_Restrictions'
Alter Table Var_Restrictions Add (Constraint Var_Fip_Fk Foreign Key (Sub_Routine_Name, Var_Name) References Sub_Routine_Ip_Description (Subroutine_Name, Sr_Var_Name))
/
Prompt Creating Foreign Key On 'Host_Var'
Alter Table Host_Var Add (Constraint Hvr_Dbto_Fk Foreign Key (Db_Operation_Id) References Db_Test_Operation (Operation_Id))
/
Prompt Creating Foreign Key On 'Objects'
Alter Table Objects Add (Constraint Obj_Cls_Fk Foreign Key (Class_Name) References Classes (Class_Name))
/
Prompt Creating Foreign Key On 'Data_Group'
Alter Table Data_Group Add (Constraint Dgp_Dbto_Fk Foreign Key (Db_Operation_Id) References Db_Test_Operation (Operation_Id))
/


Prompt Creating Foreign Key On 'Var_Table_Restrictions'
Alter Table Var_Table_Restrictions Add (Constraint Vtr_Fip_Fk Foreign Key (Sub_Routine_Name, Var_Name) References Sub_Routine_Ip_Description (Subroutine_Name, Sr_Var_Name))
/
Prompt Creating Foreign Key On 'Var_Table_Restrictions'
Alter Table Var_Table_Restrictions Add (Constraint Vtr_Fdcl Foreign Key (Sub_Routine_Name) References Function_Declaration (Func_Name))
/
Prompt Creating Foreign Key On 'Var_Table_Restrictions'
Alter Table Var_Table_Restrictions Add (Constraint Vtr_Var_Fk Foreign Key (Var_Id) References Var (Var_Id))
/
Prompt Creating Foreign Key On 'Runtime_Sr_Post_Conditions'
Alter Table Runtime_Sr_Post_Conditions Add (Constraint Post_Con_1_Fdcl_Fk Foreign Key (Sub_Routine_Name) References Function_Declaration (Func_Name))
/
Prompt Creating Foreign Key On 'Runtime_Sr_Post_Conditions'
Alter Table Runtime_Sr_Post_Conditions Add (Constraint Post_Con_1_Proc_Fk Foreign Key (Sub_Routine_Name) References Procedure (Proc_Name))
/
Prompt Creating Foreign Key On 'Equivalence_Class'
Alter Table Equivalence_Class Add (Constraint Ecs_Dgp_Fk Foreign Key (Db_Operation_Id, Data_Group_Name) References Data_Group (Db_Operation_Id, Data_Group_Name))
/


Prompt Creating Foreign Key On 'Dataflow'
Alter Table Dataflow Add (Constraint Daw_Obj_Fk Foreign Key (Object_Name) References Objects (Object_Name))
/
Prompt Creating Foreign Key On 'Dataflow'
Alter Table Dataflow Add (Constraint Daw_Cls_Fk Foreign Key (Class_Name) References Classes (Class_Name))
/
Prompt Creating Foreign Key On 'Procedure'
Alter Table Procedure Add (Constraint Proc_Cls_Fk Foreign Key (Class_Name) References Classes (Class_Name))
/
Prompt Creating Foreign Key On 'Db_Operation_Tables'
Alter Table Db_Operation_Tables Add (Constraint Otab_Con_Fk Foreign Key (Db_Constraint_Num) References Constraints (Constraint_Num))
/
Prompt Creating Foreign Key On 'Db_Operation_Tables'
Alter Table Db_Operation_Tables Add (Constraint Otab_Db_Oper_Fk Foreign Key (Db_Operation_Num) References Db_Operations (Db_Operation_Num))
/
Prompt Creating Foreign Key On 'Data_Group_Values'
Alter Table Data_Group_Values Add (Constraint Dgv_Dgp_Fk Foreign Key (Db_Operation_Id, Data_Group_Name) References Data_Group (Db_Operation_Id, Data_Group_Name))
/


Prompt Creating Foreign Key On 'Var_Lov'
Alter Table Var_Lov Add (Constraint Vlov_Fdcl Foreign Key (Var_Id, Sub_Routine_Name) References Function_Declaration (Op_Var_Id, Func_Name))
/
Prompt Creating Foreign Key On 'Var_Lov'
Alter Table Var_Lov Add (Constraint Vlov_Var_Fk Foreign Key (Var_Id) References Var (Var_Id))
/
Prompt Creating Foreign Key On 'Var_Lov'
Alter Table Var_Lov Add (Constraint Vlov_Srp_Fk Foreign Key (Sub_Routine_Name, Var_Name) References Sub_Routine_Ip_Description (Subroutine_Name, Sr_Var_Name))
/
Prompt Creating Foreign Key On 'Constraints'
Alter Table Constraints Add (Constraint Con_Db_Oper_Fk Foreign Key (Db_Operation_Num) References Db_Operations (Db_Operation_Num))
/
Prompt Creating Foreign Key On 'Constraints'
Alter Table Constraints Add (Constraint Con_Tdesc_Fk Foreign Key (Field_Name, Table_Name) References Table_Field_Description (Field_Name, Table_Name))
/
Prompt Creating Foreign Key On 'Constraints'
Alter Table Constraints Add (Constraint Con_Opr_Fk Foreign Key (Db_Operator_Id) References Operator (Operator_Id))
/
Prompt Creating Foreign Key On 'Calculation'
Alter Table Calculation Add (Constraint Right_Comp_Var_Fk2 Foreign Key (Right_Var_Id) References Var (Var_Id))
/


Prompt Creating Foreign Key On 'Calculation'
Alter Table Calculation Add (Constraint Left_Comp_Var_Fk2 Foreign Key (Left_Var_Id) References Var (Var_Id))
/
Prompt Creating Foreign Key On 'Calculation'
Alter Table Calculation Add (Constraint Left_Comp_Calc_Fk Foreign Key (Left_Component_Num, Cal_Num) References Calculation (Component_Num, Cal_Num))
/
Prompt Creating Foreign Key On 'Calculation'
Alter Table Calculation Add (Constraint Right_Comp_Calc_Fk Foreign Key (Right_Component_Num, Cal_Num) References Calculation (Component_Num, Cal_Num))
/
Prompt Creating Foreign Key On 'Calculation'
Alter Table Calculation Add (Constraint Calc_Opr_Fk Foreign Key (Operator_Id) References Operator (Operator_Id))
/
Prompt Creating Foreign Key On 'Calculation'
Alter Table Calculation Add (Constraint Calc_Fdcl_Fk2 Foreign Key (Right_Func_Name) References Function_Declaration (Func_Name))
/
Prompt Creating Foreign Key On 'Calculation'
Alter Table Calculation Add (Constraint Calc_Fdcl_Fk Foreign Key (Left_Func_Name) References Function_Declaration (Func_Name))
/


Appendix C: PL-SQL Code for the Constraints Generator

PROCEDURE "CONSTRAINT_LOADER"
--is
(Directory IN VARCHAR2, File_Name IN VARCHAR2) is
  T_Table_Name varchar2(240);
  T_Field_ID varchar2(240);
  T_Field_Name varchar2(240);
  T_Field_Type varchar2(240);
  T_Primary_Key varchar2(1);
  T_Is_Unique varchar2(1);
  T_Mandatory varchar2(1);
  T_LENGTH INT;
  T_Has_LOV_Restrictions varchar2(1);
  T_Has_Table_Restrictions varchar2(1);
  T_Has_Range_Restrictions varchar2(1);
  T_Has_Constraint_Restrictions varchar2(1);
  T_Num_of_Constraint_Restrictions INT;
  T_Which_Table varchar2(240);
  T_Which_Field varchar2(240);
  F_Type_Name VARCHAR2(20);
  F_Restriction_Num int;
  F_Include_Exclude VARCHAR2(1);
  F_Has_Range char(1);
  F_Even_Or_Odd VARCHAR2(1);
  F_Range_From_Int int;
  F_Range_To_Int int;
  F_Int_Value int;
  F_Range_From_Char char;
  F_Range_To_Char char;
  F_Char_Value char;
  F_Range_From_Real real;
  F_Range_To_Real real;
  F_Real_Value real;
  F_Range_From_Date date;
  F_Range_To_Date date;
  F_Date_Value date;
  F_Range_From_Time date;
  F_Range_To_Time date;
  F_Time_Value date;
  F_Text_Value VARCHAR2(240);
  ScriptFile UTL_FILE.FILE_TYPE;
  LOV_Tab_Name varchar2(20);
  included_ranges int;
  excluded int;
  int1 int;
  int2 int;
  char1 char;
  char2 char;
  real1 real;
  real2 real;
  date1 date;
  date2 date;


  CURSOR Field_Desc IS
    Select * From Table_Field_Description;
  CURSOR field_restrictions IS
    Select * From TYPE_EXCLUDED_RANGES Where Type_Name = T_Field_ID;
BEGIN
  -- A file should already be created, or else an exception handler should be created
  ScriptFile := UTL_FILE.FOPEN(Directory, File_Name, 'w');
  OPEN Field_Desc;
  LOOP
    Fetch Field_Desc Into
      T_Table_Name, T_Field_ID, T_Field_Name, T_Field_Type, T_Primary_Key,
      T_Is_Unique, T_Mandatory, T_LENGTH, T_Has_LOV_Restrictions,
      T_Has_Table_Restrictions, T_Has_Range_Restrictions,
      T_Has_Constraint_Restrictions, T_Num_of_Constraint_Restrictions,
      T_Which_Table, T_Which_Field;
    EXIT WHEN Field_Desc%NOTFOUND;
    IF (T_Primary_Key = 'T') OR (T_Primary_Key = 't') -- Primary Key Constraint
    THEN
      UTL_FILE.PUTF(ScriptFile,'ALTER TABLE %s ADD CONSTRAINT %s_PK PRIMARY KEY (%s); \n', T_Table_Name, T_Field_Name, T_Field_Name);
    END IF;
    IF (T_Is_Unique = 'T') OR (T_Is_Unique = 't') -- Unique Field Constraint
    THEN
      UTL_FILE.PUTF(ScriptFile,'ALTER TABLE %s ADD CONSTRAINT %s_UK UNIQUE (%s); \n', T_Table_Name, T_Field_Name, T_Field_Name);
    END IF;
    IF (T_Mandatory = 'M') OR (T_Mandatory = 'm') -- Not NULL constraint
    THEN
      UTL_FILE.PUTF(ScriptFile,'ALTER TABLE %s MODIFY %s NOT NULL; \n', T_Table_Name, T_Field_Name);
    END IF;


    IF (T_Has_LOV_Restrictions = 'T') OR (T_Has_LOV_Restrictions = 't') -- Has a List of Valid Values
    THEN
      LOV_Tab_Name := T_Table_Name || '_' || T_Field_Name || '_LOV';
      UTL_FILE.PUTF(ScriptFile,'CREATE TABLE %s (%s Varchar2, valid_value %s);\n', LOV_Tab_Name, T_Field_Name, T_Field_Type);
      IF T_Field_Type IN ('INTEGER','SMALLINT')
      THEN
        UTL_FILE.PUTF(ScriptFile,'INSERT INTO %s SELECT INT_Value FROM Types_LOV WHERE Type_Name = %s ;\n', LOV_Tab_Name, T_Field_ID);
      ELSIF T_Field_Type IN ('CHAR','VARCHAR2','LONG')
      THEN
        UTL_FILE.PUTF(ScriptFile,'INSERT INTO %s SELECT CHAR_Value FROM Types_LOV WHERE Type_Name = %s ;\n', LOV_Tab_Name, T_Field_ID);
      ELSIF T_Field_Type IN ('NUMBER','REAL')
      THEN
        UTL_FILE.PUTF(ScriptFile,'INSERT INTO %s SELECT REAL_Value FROM Types_LOV WHERE Type_Name = %s ;\n', LOV_Tab_Name, T_Field_ID);
      ELSIF T_Field_Type IN ('BOOLEAN')
      THEN
        UTL_FILE.PUTF(ScriptFile,'INSERT INTO %s SELECT BOOL_Value FROM Types_LOV WHERE Type_Name = %s ;\n', LOV_Tab_Name, T_Field_ID);
      -- Other types can also be added here, if in the Types_LOV table; just change the string value of the T_Field_Type and the name of the field after the Select statement
      END IF;
      UTL_FILE.PUTF(ScriptFile,'ALTER TABLE %s ADD CONSTRAINT %s_FK FOREIGN KEY(%s) REFERENCES %s(valid_value); \n', T_Table_Name, T_Field_Name, T_Field_Name, LOV_Tab_Name);
    END IF;
    IF (T_Has_Table_Restrictions = 'T') OR (T_Has_Table_Restrictions = 't') -- References another field in the same or a different table
    THEN
      UTL_FILE.PUTF(ScriptFile,'ALTER TABLE %s ADD CONSTRAINT %s_FK FOREIGN KEY(%s) REFERENCES %s(%s); \n', T_Table_Name, T_Field_Name, T_Field_Name, T_Which_Table, T_Which_Field);
    END IF;


    IF (T_Has_Range_Restrictions = 'T') OR (T_Has_Range_Restrictions = 't') -- if there are other restrictions in the range, including certain numbers, or excluding others, etc.
    THEN
      -- 1. Look in TYPES, to define the range of valid values.
      IF T_Field_Type IN ('INTEGER','SMALLINT')
      THEN
        select min(range_from_int), max(range_to_int) into int1, int2
          from Types where type_name = T_Field_ID and upper(has_range) = 'T';
        UTL_FILE.PUTF(ScriptFile,'ALTER TABLE %s ADD CONSTRAINT %s_CK CHECK(%s BETWEEN %s AND %s); \n', T_Table_Name, T_Field_Name, T_Field_Name, int1, int2);
      ELSIF T_Field_Type = 'CHAR'
      THEN
        select min(range_from_char), max(range_to_char) into char1, char2
          from Types where type_name = T_Field_ID and upper(has_range) = 'T';
        UTL_FILE.PUTF(ScriptFile,'ALTER TABLE %s ADD CONSTRAINT %s_CK CHECK(%s BETWEEN %s AND %s); \n', T_Table_Name, T_Field_Name, T_Field_Name, char1, char2);
      ELSIF T_Field_Type IN ('NUMBER','REAL')
      THEN
        select min(range_from_real), max(range_to_real) into real1, real2
          from Types where type_name = T_Field_ID and upper(has_range) = 'T';
        UTL_FILE.PUTF(ScriptFile,'ALTER TABLE %s ADD CONSTRAINT %s_CK CHECK(%s BETWEEN %s AND %s); \n', T_Table_Name, T_Field_Name, T_Field_Name, real1, real2);
      ELSIF T_Field_Type = 'DATE'
      THEN
        select min(range_from_date), max(range_to_date) into date1, date2
          from Types where type_name = T_Field_ID and upper(has_range) = 'T';


        UTL_FILE.PUTF(ScriptFile,'ALTER TABLE %s ADD CONSTRAINT %s_CK CHECK(%s BETWEEN %s AND %s); \n', T_Table_Name, T_Field_Name, T_Field_Name, date1, date2);
      --ELSIF T_Field_Type = 'TIME'
      --THEN
      --  select range_from_TIME, range_to_TIME into TIME1, TIME2 from Type_Restrictions where type_name = T_Field_ID;
      --  UTL_FILE.PUTF(ScriptFile,'ALTER TABLE %s ADD CONSTRAINT %s_CK CHECK(%s BETWEEN %s AND %s); \n',
      --    T_Table_Name, T_Field_Name, T_Field_Name, TIME1, TIME2);
      END IF;
      Select count(*) into excluded From TYPE_EXCLUDED_RANGES Where Type_Name = T_Field_ID;
      if excluded > 0 -- there is at least one excluded range or value
      then
        OPEN field_restrictions;
        LOOP
          Fetch field_restrictions Into
            F_Type_Name, F_Restriction_Num, F_Has_Range,
            F_Range_From_Int, F_Range_To_Int, F_Int_Value,
            F_Range_From_Char, F_Range_To_Char, F_Char_Value,
            F_Range_From_Real, F_Range_To_Real, F_Real_Value,
            F_Range_From_Date, F_Range_To_Date, F_Date_Value,
            F_Range_From_Time, F_Range_To_Time, F_Time_Value, F_Text_Value;
          EXIT WHEN field_restrictions%NOTFOUND;
          IF T_Field_Type IN ('INTEGER','SMALLINT')
          THEN
            IF upper(F_Has_Range) = 'T'
            then
              UTL_FILE.PUTF(ScriptFile,'ALTER TABLE %s ADD CONSTRAINT %s_CK CHECK(%s NOT BETWEEN %s AND %s); \n', T_Table_Name, T_Field_Name,


                T_Field_Name, F_Range_From_Int, F_Range_To_Int);
            else
              UTL_FILE.PUTF(ScriptFile,'ALTER TABLE %s ADD CONSTRAINT %s_CK CHECK(%s <> %s); \n', T_Table_Name, T_Field_Name, T_Field_Name, F_Int_Value);
            end if; -- has range or single value
          ELSIF T_Field_Type = 'CHAR'
          THEN
            IF upper(F_Has_Range) = 'T'
            then
              UTL_FILE.PUTF(ScriptFile,'ALTER TABLE %s ADD CONSTRAINT %s_CK CHECK(%s NOT BETWEEN %s AND %s); \n', T_Table_Name, T_Field_Name,
                T_Field_Name, F_Range_From_Char, F_Range_To_Char);
            else
              UTL_FILE.PUTF(ScriptFile,'ALTER TABLE %s ADD CONSTRAINT %s_CK CHECK(%s <> %s); \n', T_Table_Name, T_Field_Name, T_Field_Name, F_Char_Value);
            end if; -- has range or single value
          ELSIF T_Field_Type IN ('NUMBER','REAL')
          THEN
            IF upper(F_Has_Range) = 'T'
            then
              UTL_FILE.PUTF(ScriptFile,'ALTER TABLE %s ADD CONSTRAINT %s_CK CHECK(%s NOT BETWEEN %s AND %s); \n', T_Table_Name, T_Field_Name,
                T_Field_Name, F_Range_From_Real, F_Range_To_Real);
            else
              UTL_FILE.PUTF(ScriptFile,'ALTER TABLE %s ADD CONSTRAINT %s_CK CHECK(%s <> %s); \n', T_Table_Name, T_Field_Name, T_Field_Name, F_Real_Value);
            end if; -- has range or single value
          ELSIF T_Field_Type = 'DATE'
          THEN
            IF upper(F_Has_Range) = 'T'
            then
              UTL_FILE.PUTF(ScriptFile,'ALTER TABLE %s ADD CONSTRAINT %s_CK CHECK(%s NOT BETWEEN %s AND %s); \n', T_Table_Name, T_Field_Name,
                T_Field_Name, F_Range_From_Date, F_Range_To_Date);
            else
              UTL_FILE.PUTF(ScriptFile,'ALTER TABLE %s ADD CONSTRAINT %s_CK CHECK(%s <> %s); \n', T_Table_Name, T_Field_Name,


                T_Field_Name, F_Date_Value);
            end if; -- has range or single value
          END IF; -- field type dispatch
        END LOOP;
        CLOSE field_restrictions; -- close before the next outer-loop iteration reopens it
      END IF; -- there is at least one excluded range or value
    END IF;
  END LOOP;
  UTL_FILE.FFLUSH(ScriptFile);
  CLOSE Field_Desc;
  COMMIT;
  UTL_FILE.FCLOSE(ScriptFile);
END;
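The procedure writes the generated DDL through UTL_FILE, so it must run in an Oracle session where the target directory is accessible to UTL_FILE. A minimal invocation sketch follows; the directory name CONSTRAINT_DIR and the file name constraints.sql are illustrative assumptions, not values taken from the dissertation:

```sql
-- Hypothetical invocation sketch: assumes CONSTRAINT_LOADER is compiled in the
-- current schema and 'CONSTRAINT_DIR' names a directory usable by UTL_FILE.
BEGIN
  CONSTRAINT_LOADER(Directory => 'CONSTRAINT_DIR',
                    File_Name => 'constraints.sql');
END;
/
-- The generated script can then be applied in SQL*Plus:
-- @constraints.sql
```

Writing the constraints to a script rather than executing them directly lets the generated DDL be reviewed before it is applied to the target schema.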


ABOUT THE AUTHOR

Rana Mikhail finished her Bachelor's degree in Computer Science at the American University in Cairo (AUC) in 1998. She received her M.Sc. degree in Computer Science from AUC in 1999. She worked as a decision support systems consultant for two years and then started her Ph.D. in Computer Science in 2000. Her research interests include database systems, software testing, data mining, data warehousing, and software engineering, including systems analysis and design.