Data & Knowledge Engineering 32 (2000) 291–313

www.elsevier.com/locate/datak

Towards generating a data integrity standard

Moshe Zviran a,*, Chanan Glezer b

a Information Science Department, Claremont Graduate University, 130 E. Ninth St., Claremont, CA 91711, USA
b Department of Industrial Engineering and Management, Ben-Gurion University of the Negev, P.O. Box 653, Beer-Sheva 84105, Israel

Received 26 October 1998; received in revised form 21 June 1999; accepted 7 August 1999

Abstract

The tremendous growth in size, complexity and value of the organization's data resources has given rise to an urgent need for a data integrity standard that will provide a consensus definition, a common measure and a set of tools for evaluating the various models and mechanisms in this domain. This paper attempts to pave the way for such a data integrity standard. It discusses various definitions of data integrity and indicates the one that best serves as a common definition. It then provides a description and assessment of two prominent data integrity models: the Biba model and the Clark–Wilson model. Next, a framework for evaluating these models is proposed and operationalized. The final section discusses conclusions and makes some practical recommendations that derive from the comparison of the models. © 2000 Elsevier Science B.V. All rights reserved.

Keywords: Data security; Data integrity; Protection

1. Introduction

Data security is composed of three major factors: secrecy, availability and integrity [17]. Data secrecy focuses on preventing unauthorized disclosure; data availability deals with methods of preventing denial of authorized access; and data integrity refers to preventing unauthorized modification of data.

Basic data security techniques were developed early in the history of computer-based systems to protect the data resource. Elementary password controls became associated with the protection of data files to maintain their secrecy and integrity. In addition, data integrity elements were installed in batch processing systems of the 1950s and 1960s to minimize the error content in the data. Check digits, radix checks, range checks and content checks were extensively applied, particularly as part of the effort to prevent transcription errors from corrupting the data [22].

With the rapid advance of information technology, greater emphasis has been placed on secrecy and availability. The sophisticated control mechanisms that have been developed in these domains

* Corresponding author. Present address: Faculty of Management, The Leon Recanati School of Business Administration, Tel Aviv University, 69978 Tel Aviv, Israel. Tel.: +972-3-6409671; fax: +972-3-6407741. E-mail address: [email protected] (M. Zviran).

0169-023X/00/$ - see front matter © 2000 Elsevier Science B.V. All rights reserved. PII: S0169-023X(99)00042-7


have proved effective in limiting and preventing unauthorized disclosure while minimizing denial of service to authorized users [23]. However, as the organization's data resources grow in size, complexity and value, an urgent need arises for models and mechanisms to prevent unauthorized manipulation or modification of data by users, and for a data integrity standard that will provide a common measure and a set of tools for evaluating the various models and mechanisms in this domain [3,18]. An examination of previous computer security standards, however, reveals that none of them provides comprehensive coverage of the data integrity issue. Aspects of integrity are mentioned as part of other standards (e.g., connection integrity in the ISO 7498 standard, DoD TCSEC [12]), but no adequate standard to protect the integrity of the data resource has yet been proposed. Consequently, from the data integrity view, a top priority is to generate a standard that will define the issue and establish an acceptable set of mechanisms to provide the necessary security services.

To cope with the need to establish a data integrity standard, two major issues need to be addressed. The first is the lack of a commonly accepted definition for data integrity. Several definitions have been proposed and widely discussed in the literature. However, each tackles different aspects of data integrity and none of them is accepted as a common ground for articulating the issues associated with data integrity. A further problem that stems from the lack of a consensus definition and unified interpretation of data integrity is that each of the integrity models is based on its own definition, making their evaluation extremely difficult [2]. The second issue is the absence of a standard data integrity model. Two major models have been proposed, the Biba model and the Clark–Wilson model, neither of which, however, has become an industry standard.
Moreover, even a framework for evaluating and comparing these models has not yet been defined. Consequently, confusion appears to exist with respect to the practical interpretation and application of the popular integrity models in real systems. Finally, there is no real guidance on the notion of `external security requirements' for systems that focus on countering integrity threats.

This paper addresses the voids mentioned above and presents an initial framework for a data integrity standard. First, it discusses the various definitions of integrity and attempts to identify a common and comprehensive definition. It then provides a description and an assessment of two prominent data integrity models: the Biba model, which is based on a hierarchical lattice of integrity levels [7], and the Clark–Wilson model, which is based on certification and enforcement rules that preserve integrity by taking the system from one valid state to another [10]. A framework for evaluating the models is proposed and operationalized for comparing the two. The final section discusses conclusions and makes some practical recommendations that derive from the comparison of the models. It is, however, important to keep in mind that since each model was developed in a different environment and with different aims, there is no common ground for raising one of them as a universal standard.

2. Defining integrity

Many definitions of integrity in the context of computer systems have been proposed in the literature, focusing on various aspects of the issue and attempting to achieve different goals. The definitions either focus on a single element or on a group of several elements. They can be divided into three major clusters: single-element data-focused definitions, single-element non-data-focused definitions, and multi-element focused (comprehensive) definitions.

For the data-focused definitions, Fernandez et al. [13] view data integrity as a complement to data security rather than an integral ingredient. These authors suggest that data integrity is concerned with the correctness of the database content and that it can be compromised by failures originating in user, program or system actions. Thus, data integrity is to be treated as a multifaceted concept consisting of semantic integrity, concurrency control and recovery mechanisms. Clark and Wilson [10] speak in terms of ``data that are free from unauthorized manipulation''. Ceri and Garziotti [9] present an approach in which the designer of the consistency constraints specifies a set of repair actions for each constraint. Once a consistency violation is detected, the system automatically selects one of the repair actions for one of the violated constraints, performs it, and restarts the consistency check. Ceri et al. [8] further describe a specific architecture for constraint definition and enforcement. The components of the architecture include a Constraints Editor for introducing constraints by the end-users, a Rule Generator for producing a set of active rules (called a maximal rule set) that enforce the constraints, a Rule Analyzer to determine the partial order on the constraints, a Rule Selector for post-optimization by excluding redundant rules for non-critical constraints, and finally a Run-Time System that executes user-supplied transactions. Motro [16] suggests that database integrity has two complementary components: validity, which guarantees that all false information is excluded from the database, and completeness, which guarantees that all true information is included.
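The detect-repair-recheck cycle described by Ceri and Garziotti can be sketched in a few lines. This is only a minimal illustration of the idea, not their architecture: the database, constraint and repair functions below are invented for the example.

```python
# Minimal sketch of a detect-repair-recheck cycle in the spirit of
# Ceri and Garziotti: each consistency constraint carries a set of
# repair actions; on a violation, one repair action is applied and
# the whole consistency check restarts. All names are illustrative.

def repair_loop(db, constraints, max_rounds=100):
    """Repeatedly check constraints; on a violation, apply one of the
    violated constraint's repair actions and restart the check."""
    for _ in range(max_rounds):
        violated = next((c for c in constraints if not c["check"](db)), None)
        if violated is None:
            return db  # a consistent state has been reached
        violated["repairs"][0](db)  # select and perform a repair action
    raise RuntimeError("no consistent state reached")

# Hypothetical example: salaries must be non-negative; the repair
# action clamps offending values to zero.
db = {"salary": [1200, -50, 300]}
constraints = [{
    "check": lambda d: all(s >= 0 for s in d["salary"]),
    "repairs": [lambda d: d.update(salary=[max(s, 0) for s in d["salary"]])],
}]
repair_loop(db, constraints)
```

After one repair round the check succeeds and the loop terminates with a consistent database state.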
In order to ensure these components, the author proposes using two types of constraints: validity constraints and completeness constraints. The constraints are stored in a set of meta-relations that mirror the actual database relations. Whenever a query is issued to a relational database, both the data and the constraints are retrieved. The model uses the constraints to certify the integrity of the answers as fully or partially valid and fully or partially complete.

Moerkotte and Lockemann [15] use the term consistency as an equivalent to integrity. They define a database to be consistent if its current state, and perhaps the transition that led to it, obey a given set of conditions that reflect laws governing states and transitions in the miniworld. Consistency is achieved in part by utilizing concepts offered by the DBMS data model, and by proper design of the database schema that determines the use of these concepts (data model and schema consistency are collectively referred to as internal consistency). Laws that are not covered in this way must be explicitly formulated in the form of conditions called consistency constraints (external consistency). To implement their ideas the authors propose an architecture for a system that supports a user in an environment where he or she issues a transaction that violates one or more consistency constraints. The goal is to automatically identify symptoms of inconsistencies, then derive the causes of these inconsistencies and finally suggest repair transactions that are appended to the user's transaction in order to regain consistency of the database. Comparable intentions have been reported for relational databases.

Awad and Gotterer [1] define the property of integrity as ``a situation where data in a database is correct at all times''. Date [11] provides a more detailed definition that views data integrity in the context of accuracy, correctness and validity. The challenge is to guard a database against invalid updates and to prevent undesired modification of data caused by concurrent execution of valid transactions. Such manipulation or modification can be caused by accidental mistakes, system malfunctions or malicious manipulation. According to Bertino [6], data integrity is jointly ensured by an access control mechanism and by semantic integrity constraints. Whenever a user tries to modify some data, the access control mechanism verifies that the user has the right to modify the data, whereas the semantic integrity subsystem verifies that the updated data are semantically correct. Semantic correctness is verified against a set of conditions, or predicates, that the database state must satisfy.

For the second cluster of definitions, Biba [7] claims that integrity does not imply guarantees concerning the absolute behavior of a computer system. He considers a computer system to possess the property of integrity if it can be trusted to adhere to a well-defined code of behavior and therefore perform as it was intended to perform by its creator. No a priori statement as to the properties of this behavior is relevant. Terry and Wiseman [20] adopt the approach of defining security as the simple real-world requirement that a job is carried out with respect to secrecy considerations, in a correct manner, and that it is done only if it is in some sense appropriate. In their paper they speak of three aspects of security: confidentiality, integrity and appropriateness. They define integrity as a property of state; a correctness. In their model of a state machine, all states need to be correct for the machine to be correct overall.

For the third cluster of definitions, Henning and Walker [14] offer a comprehensive definition of integrity covering six areas:
a. How correct the information is thought to be.
b. Level of confidence that the information is from the original source.
c. Correctness of the functioning of the process using the information.
d. Level of correspondence of the process function to the designed intent.
e. How correct the information in an object is initially.
f. Confidence that the information in an object is unaltered.

This definition appears to be more comprehensive than the previous ones since it provides coverage of all the areas that logically fall under the title of integrity. Ruthberg and Polk [19] discuss the definition developed by the integrity working group (IWG) of the Invitational Workshop on Data Integrity, which seems to be the most comprehensive:

Integrity – a property that data, an information process, computer equipment and/or software, people, etc. or any collection of these entities meet an a priori expectation of quality that is satisfactory and adequate in some circumstance. The attributes of quality can be general in nature and implied by the context of the discussion, or specific and in terms of some intended usage or application.

This definition addresses many issues that are commonly associated with the notion of data integrity while remaining broad enough to be applied to many environments. The broad range prevents restriction of the definition to data integrity alone and makes it a good tool for comparison. The axiom here is that the broader the standard, the greater the number of definitions that can be measured against it.


A closer examination of the definition reveals that it is composed of the following key elements:
1. A priori expectation. This term emphasizes that there must be a goal or desired outcome (i.e., expectation) for the element being studied for integrity.
2. Quality. This term refers to the attributes that characterize the element being studied. The most common attributes contained within the heading of quality are accuracy, timeliness, consistency, and completeness. Ruthberg and Polk state that integrity is not quality itself but rather the ``extent to which the qualities (i.e., accuracy, precision, timeliness, etc.) taken together are considered adequate for a given purpose''.

Fig. 1 summarizes and synthesizes the integrity definitions mentioned above in a two-dimensional format: one dimension refers to the scope of the integrity definition (e.g., systems, data, process) and the second focuses on the goals which the definition aims to maintain or achieve. As is evident from Fig. 1, the IWG definition is the most comprehensive one. It addresses many aspects that are commonly associated with the notion of data integrity while remaining broad enough to be applied to many environments. Consequently, due to its flexibility and completeness, this definition can best serve as a standard for comparing other definitions of data integrity. Nevertheless, for the same reasons the IWG definition is also the most difficult one to implement and achieve.

The IWG definition is most suitable for groupware applications, workflow management systems and project management systems such as CASE tools. These systems encapsulate mechanisms to protect the complex relationships among processes, entities, documents and people, in an attempt to achieve a standard of quality in terms of timeliness, accuracy and precision. Another example of an architecture that implements the broad scope of the IWG definition is Trusted Oracle 7 (www.oracle.com), a high-security database server product based directly on Oracle 7 technology.
In addition to the range of features and functionality of Oracle 7, Trusted Oracle 7 includes enhanced data security capabilities for processing sensitive, proprietary and even classified information. Trusted Oracle 7 automatically labels all data and enforces a centralized security policy, enabling storage and processing of data at different levels of sensitivity on a single machine – without risk of compromise. Trusted Oracle 7 provides integrity using mechanisms that support system integrity and mechanisms that enforce relational database integrity. System integrity ensures that a data item inserted into the system is the same in content when it is subsequently retrieved. It is achieved by using several mechanisms that enable concurrency and serializability of transactions, as well as discretionary and mandatory access control, which prevent unauthorized modification and deletion of data by users. Relational database integrity is supported by the use of declarative entity and referential integrity constraints as defined in the ISO/ANSI SQL89 standard. Trusted Oracle 7 enables the enforcement of cross-level integrity and cross-level confidentiality. In cases where confidentiality conflicts with integrity, Trusted Oracle 7 allows the user to specify which of the two takes precedence (e.g., polyinstantiation – allowing a new record at a lower security level to have the same primary key value as a higher-level record, in order to conceal the existence of the higher-level record from the end-user).

Fig. 1. Comparing the data integrity definitions.

3. The Biba integrity model

The Biba model for system integrity [7] is the outcome of a research project prepared for the US Air Force by the MITRE Corporation. Biba developed his integrity model on the basis of the assumption that integrity is the dual of secrecy, presenting several policies for protection of integrity and tailoring each policy for implementation in a Multics environment. Biba's model [7] is based on a hierarchical lattice of integrity levels. Several basic elements are used in the model:


· Subjects (S): Active, information-processing elements of a computing system.
· Objects (O): Passive information-repository elements of a computing system.
· Integrity levels (I): A set of levels with a relation (leq) defining a partial order (`less than or equal'). Integrity levels are directly analogous to those used for security level assignments.
· il: A function defining the integrity level of each subject and object.
· o: A relation (a subset of S × O) defining the capability of a subject to observe an object.
· m: A relation (a subset of S × O) defining the capability of a subject to modify an object.
· i: A relation (a subset of S × S) defining the capability of a subject to invoke another subject.

Integrity is evaluated at the subsystem level, where a subsystem is some subset of a system's subjects and objects isolated on the basis of function or privilege. A computer system is defined to be composed of any number of subsystems. Biba classifies integrity threats to the intended performance of a subsystem along two dimensions: source and type. The threat source can be either external or internal. In an external threat, one subsystem attempts to change the behavior of another by supplying false data or improperly invoking functions. Internal threats arise if a component of a subsystem is malicious or incorrect. Biba notes that internal threats are addressed by program verification techniques. The threat type can be either direct or indirect. Direct threats involve only a subject and an accessed data object, whereas indirect threats refer to a much larger scenario in which a data object is corrupted through a transfer path of corrupted procedures and data.

The Biba model elements support two classes of integrity policies: mandatory and discretionary. They differ in the manner in which a protection policy, once made, may be changed.
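The basic elements listed above can be collected into a small sketch. Biba's model is formal; the encoding below is only one illustrative reading of it, with levels, subjects and objects invented for the example and a total order standing in for the general lattice.

```python
# Illustrative encoding of Biba's basic elements: integrity levels
# with a total order standing in for `leq', the labeling function il,
# and the observe/modify/invoke relations as explicit sets of pairs.
LEVELS = {"Low": 0, "Medium": 1, "High": 2}

def leq(a, b):
    """The `leq' relation on integrity levels (in general a partial
    order; a total order suffices for this sketch)."""
    return LEVELS[a] <= LEVELS[b]

# il assigns an integrity level to each subject and object
# (hypothetical subjects/objects, in the spirit of the figures).
il = {"S1": "High", "O1": "High", "O2": "Medium"}

# o, m, i as explicit relations (subsets of S x O, S x O, S x S).
o = {("S1", "O1"), ("S1", "O2")}   # observe capability
m = {("S1", "O2")}                 # modify capability
i = set()                          # invoke capability
```

The mandatory policies discussed next are then just different rules over these sets: each policy decides, given il and a requested pair, whether to grant access and whether any level changes.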
A mandatory integrity policy is a protection policy which, once defined for an object, is unchangeable and must be satisfied for all states of the system (as long as the object exists). A discretionary policy, on the other hand, is one which may be dynamically defined by the user (during the existence of an object).

3.1. Mandatory integrity controls

Mandatory integrity control policies cannot be bypassed, avoided, or altered by users. Each policy in this category must meet two requirements: it must identify the objects that require protection and it must determine when requests to access data are permissible. This is the access control for the system. Each of the three policies presented by Biba meets these criteria. The policies use different constraints to limit data access while identifying protected objects for the system.

The Low-Watermark Policy is based on the premise that the integrity level of a subject is dynamic and will change depending on his or her previous behavior: specifically, the integrity level of the subject will be determined by the integrity level of the most recently accessed object. The integrity level of the objects in the system will not change; the data in the objects remain at a constant level, with the collection of subjects permitted to access those objects constantly changing. Under this policy it is possible for subjects to downgrade their own integrity level to the lowest level in the system, hence the name low-watermark. The main drawback of the policy is that access to the lowest-level objects will decrease the integrity level of the subject to the lowest level. Subjects can be reinitialized in order to restore their original level of integrity. However, this is obviously not an event that should occur frequently. The policy allows for altering integrity levels downward but it does not allow subjects to increase their integrity level. In addition to changing the integrity level of subjects, Biba [7] introduces a version of this policy called the Low-Watermark Policy for Objects, where the integrity level of modified objects may change.

The Low-Watermark Policy is depicted in Fig. 2. In this figure, the subject (S1) possesses a High integrity level before it accesses object O2. This means that S1 is authorized to access objects that are labeled High (such as O1). When S1 expands its domain and attempts to access O2, the following sequence occurs: first, access is granted, and second, S1 is assigned an integrity level of Medium. This results from the fact that the level of O2 is Medium. S1 has downgraded its own integrity level from High to Medium by requesting and being granted access to O2. O1 is now out of S1's domain and cannot be accessed by S1. Any subsequent actions by S1 to access objects with lower integrity levels than O2 (i.e., Low) will result in the integrity level of S1 being further reduced. The goal of this policy is to prevent the indirect sabotage of object integrity by subjects.

Fig. 2. The Low-Watermark Policy.

The Ring Policy is designed to address attempts by subjects to directly modify objects. This policy fixes the integrity levels of both subjects and objects at a constant level. Modifications are allowed only to objects of lesser or equal integrity level. This policy increases the flexibility of the system by allowing observation (reading) of objects at any level. Subjects are allowed to observe any object, even those which possess a higher integrity level than the subject. The trade-off for increased flexibility is decreased integrity assurance. Observation of all objects by all subjects increases the possibility of contamination of data contained in high-level objects.

The Strict Integrity Policy can be considered the complement or dual of the security policy presented by Bell and LaPadula [5]. It consists of three axioms that provide the same functions as the Low-Watermark Policy but without allowing change in the integrity level of a subject. Whereas the Low-Watermark Policy prevents contamination of high-integrity objects by changing the integrity level of subjects to that of the object most recently accessed, the Strict Integrity Policy forbids access by lower-level subjects to a higher-level object. The three axioms used in the policy are the Simple Integrity Condition, the Integrity *-property and the Invocation Property.

The Simple Integrity Condition states that a subject cannot observe objects of lesser integrity. This rule constrains the use of objects (data or procedures) to those to whose non-malicious character (by virtue of their integrity level) the subject can attest; that is, those objects having an integrity level greater than or equal to that of the subject. Biba considers execute access to be equivalent to observe access, so objects must have an integrity level greater than or equal to that of the requesting subject in order to be executed. The Integrity *-property states that a subject cannot modify objects of higher integrity. This rule ensures that objects may not be directly modified by subjects possessing insufficient privilege. The Invocation Property states that a subject may not send messages to subjects of higher integrity. Since invocation is a logical request for a service from one subject to another, it is considered a special case of modification and therefore follows directly from the Integrity *-property.

The Strict Integrity Policy is depicted in Fig. 3. Subject S1 possesses an integrity level of Medium. This gives S1 the ability to observe objects at the High level, modify objects at the Low level and modify/observe objects at the Medium level. In this case, S1 can observe objects O1 and O4 and modify O2. O3 is at the same level as S1 and can therefore be both modified and observed. S1's level will not change even though it may modify lower-level objects. This constant subject integrity level is the difference between the Strict Integrity Policy and the Low-Watermark Policy.

Fig. 3. The Strict Integrity Policy.

All in all, a basic premise of the Biba model is the concept of ``no-write-up, no-read-down''. Low-level data are more open to unauthorized manipulation and can therefore be contaminated. High-level data can likewise be contaminated if low-level data are allowed to enter into a process using the high-level data. The restrictions presented by Biba prevent a user with low-level authorization from possibly destroying the integrity of high-level data.

3.2. Discretionary integrity controls

Discretionary controls can be modified by a user, or group of users, who are placed on an authorization list which specifies the ability to alter discretionary controls. A user has the ability to define his or her own integrity controls after access to an object is made, thereby making the controls discretionary. Two discretionary policies discussed by Biba are Access Control Lists (ACLs) and Rings.

An access control list is a defined set of subjects authorized to access a specific object. Each object within the system has its own access control list. This mechanism is discretionary because the list of subjects can be modified by an authorized user. Certain users, such as system administrators, have the authorization to dictate which subjects are allowed access to which objects. This is based on the present integrity levels of both the subjects and the objects.
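The contrast between the mandatory policies described in Section 3.1 can be made concrete in a few lines of code. This is a rough sketch under our own naming, with levels totally ordered Low < Medium < High, not Biba's formal notation.

```python
# Sketch contrasting two of Biba's mandatory policies. Under the
# Low-Watermark Policy an observe is always granted but drags the
# subject's level down to the observed object's level; under the
# Strict Integrity Policy an observe of a lower-integrity object
# is simply refused and the subject's level never changes.
LEVELS = {"Low": 0, "Medium": 1, "High": 2}

def low_watermark_observe(subject_level, object_level):
    """Grant the access; the subject's new level is the minimum of
    its current level and the object's level."""
    return min(subject_level, object_level, key=LEVELS.get)

def strict_observe(subject_level, object_level):
    """Simple Integrity Condition: observe only objects of greater
    or equal integrity."""
    return LEVELS[object_level] >= LEVELS[subject_level]

def strict_modify(subject_level, object_level):
    """Integrity *-property: modify only objects of lesser or equal
    integrity."""
    return LEVELS[object_level] <= LEVELS[subject_level]
```

In the scenario of Fig. 2, a High subject observing a Medium object drops to Medium under the low-watermark rule; under strict integrity the same observe request leaves the subject's level untouched, and observes of lower-level objects are denied outright.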

Fig. 4. Rings.


The use of access control lists creates the problem of identifying those subjects authorized to modify the ACL. This problem can be solved by externally defining those subjects with modification authority and keeping this list of authorized subjects to a minimum. Fewer subjects with modification authority means less opportunity for either inadvertent or malicious sabotage.

The ring policy described here is similar to the ring policy used in the mandatory controls, with the exception that the access privileges of subjects can be modified. As illustrated in Fig. 4, the integrity of objects is protected by allowing modification only by subjects within a specified integrity ring (domain) which is established for each subject. The subjects can observe or modify only those objects that are within their respective ring. The figure shows that the rings may overlap, as O1 is within the rings of both S1 and S2. Objects that are outside a subject's ring are not accessible by that subject.

4. The Clark–Wilson integrity model

The Clark–Wilson integrity model [10] draws a comparison between military and commercial security policies and uses the findings of this comparison to formulate a model that can be used to preserve data integrity. The authors clearly distinguish between the needs of the military and commercial environments concerning data integrity and use the DoD's Trusted Computer System Evaluation Criteria [12] to set standards for their model.

Clark and Wilson define `data integrity' as meaning that data are free from unauthorized manipulation and in a valid state. Free from unauthorized manipulation means that unauthorized users have not altered the data in any way. Valid state suggests that the protected data meet the requirements of the integrity policy. The concept of validity means that the data are in the same unaltered condition that they were in when they were received. Separation of duty and well-formed transaction mechanisms are used to ensure this validity.
Separation of duty attempts to ensure the external consistency of data objects: the correspondence between a data object and the real-world object it represents. This correspondence is ensured indirectly by separating all operations into several subparts and requiring that each subpart be executed by a different person. A well-formed transaction is structured so that a user cannot manipulate data arbitrarily, but only in constrained ways that preserve or ensure the internal consistency of the data.

The Clark–Wilson integrity model is built on the premise that ensuring integrity is a two-part process, consisting of certification and enforcement. Both of these terms are used in reference to data that must be protected against manipulation. The model begins by identifying constrained data items (CDIs), the items that need to be covered by the model and to which the model is applied. Verification that CDIs are within the constraints of the data integrity model is accomplished by Integrity Verification Procedures, or IVPs, which ensure that the data are in a valid state before any operations are performed. Transformation procedures (TPs) move CDIs between valid states and are used to change the collection of CDIs that correspond to each valid state. Moving from one valid state to another changes the applicable CDIs, and this change is performed by a TP or a set of TPs. The TP is used in the sense of a well-formed transaction in a commercial integrity model. By allowing only TPs to change CDIs, the integrity of each CDI is ensured.
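The interplay of CDIs, IVPs and TPs can be sketched as follows. The names, the account-balance CDI and the no-negative-balance policy are all invented for illustration; they are not Clark and Wilson's notation.

```python
# Minimal sketch of the Clark-Wilson building blocks: an IVP checks
# that all CDIs satisfy the integrity policy (a valid state), and
# only certified TPs move the CDIs from one valid state to another.
cdis = {"account_balance": 100}  # a hypothetical constrained data item

def ivp(state):
    """Integrity Verification Procedure: confirm the state is valid
    (here, an invented no-negative-balance policy)."""
    return all(v >= 0 for v in state.values())

def tp_deposit(state, amount):
    """A Transformation Procedure: takes the CDIs from one valid
    state to another valid state."""
    state["account_balance"] += amount

assert ivp(cdis)       # the initial state is valid
tp_deposit(cdis, 50)   # only a TP manipulates the CDI
assert ivp(cdis)       # the TP preserved validity
```

The rules presented below then split responsibility for this picture between the system (enforcement) and a security officer (certification).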


The term CDI derives from the requirement that only a TP can alter the data. When the CDIs meet the requirements of the integrity policy, a condition known as a `valid state' holds. CDIs remain continuously in a valid state if they are altered only by TPs; the TPs take the CDIs from one valid state to another and thereby maintain data integrity. Enforcement of the requirement that only TPs manipulate CDIs can be accomplished by the system itself. Validating the initial IVP, which confirms that the CDIs meet the integrity policy requirements, and validating the TPs can be accomplished only by a trusted user (i.e., a security officer). This verification is done by comparing the IVP and each TP against the integrity policy in use. This two-step process is the basis of the Clark–Wilson model: enforcement of the TP requirement by the system, and certification of each TP by the security officer.

Clark and Wilson developed a set of rules for both the certification and enforcement requirements: five rules concerning certification, labeled C1–C5, and four rules concerning enforcement, labeled E1–E4. They are given in order of implementation.

(C1) IVPs are required to ensure that all CDIs are in valid states when an IVP is executed.
(C2) All TPs must be able to take a CDI from one valid state to another valid state, thereby ensuring integrity of the CDI.
(E1) The system must ensure that only TPs are allowed to manipulate CDIs. It must also allow a relationship to be created that identifies a user with the TPs available to that user, as well as the CDIs that those TPs are allowed to access (an access triple).
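To make rules C2 and E1 concrete, the following minimal sketch (illustrative Python, not part of the Clark–Wilson paper; all class and function names are hypothetical) shows a system that refuses to let a user manipulate a CDI unless an access triple authorizes that user, that TP and that CDI together:

```python
class IntegrityViolation(Exception):
    pass

class System:
    def __init__(self):
        self.cdis = {}        # CDI name -> current value
        self.triples = set()  # authorized (user, tp_name, cdi_name) access triples

    def authorize(self, user, tp_name, cdi_name):
        self.triples.add((user, tp_name, cdi_name))

    def run_tp(self, user, tp_name, tp, cdi_name):
        # E1: a CDI may be manipulated only via an authorized access triple.
        if (user, tp_name, cdi_name) not in self.triples:
            raise IntegrityViolation(f"{user} may not run {tp_name} on {cdi_name}")
        # C2: the TP carries the CDI from one valid state to another.
        self.cdis[cdi_name] = tp(self.cdis[cdi_name])

system = System()
system.cdis["balance"] = 100
system.authorize("alice", "deposit", "balance")
system.run_tp("alice", "deposit", lambda v: v + 50, "balance")  # permitted: 150
```

An attempt by an unauthorized user (one with no matching triple) to run the same TP on the same CDI raises an exception, which is the sense in which enforcement is "accomplished by the system".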

These three rules are concerned with the internal consistency of the CDIs. The requirements they specify are met, and enforced, by the proper functioning of the system.

(E2) The relations developed in E1 must be stored by the system so that users are only capable of accessing those TPs for which they are authorized.
(C3) The relations created by E1 and stored by the system under E2 must meet the requirements of the integrity policy.
(E3) The system must be capable of identifying each user and verifying that users are allowed to use only those TPs for which they are cleared.

These rules develop the requirement that each user be identified upon initial access to the system, so that only the appropriate TPs are available. This limits access to TPs, and therefore to CDIs, to authorized users.

(C4) All TPs must be capable of writing to a write-only CDI the information necessary to reconstruct the TP if required. This creates a `log' recording the occurrence of each TP as well as the design of the TP itself.

Rule C4 establishes an audit trail for each TP: pertinent information about each TP execution is captured so that independent reconstruction of the TP is possible, and the resulting log serves as a document for audit.
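The audit trail of rule C4 can be sketched as follows (an illustrative assumption, not the paper's own design): the log is itself a write-only CDI, and every TP execution appends a record sufficient to reconstruct the operation.

```python
import time

class AuditLog:
    """A write-only CDI: records can be appended but never altered or removed."""
    def __init__(self):
        self._entries = []

    def append(self, record):
        self._entries.append(dict(record))

    def entries(self):
        # Auditors receive a read-only snapshot of the log.
        return tuple(self._entries)

def run_tp_with_audit(log, user, tp_name, tp, value):
    new_value = tp(value)
    # C4: capture who ran which TP, on what data, and with what result,
    # so that the operation can be independently reconstructed later.
    log.append({"user": user, "tp": tp_name,
                "before": value, "after": new_value, "at": time.time()})
    return new_value

log = AuditLog()
result = run_tp_with_audit(log, "alice", "deposit", lambda v: v + 50, 100)
```

Exposing the log only through `append` and a read-only `entries` view mirrors the "write-only CDI" constraint: nothing in the system can rewrite history.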


The next rule (C5) addresses a component of the model not yet mentioned: the unconstrained data item (UDI), a data item not covered by the integrity model. UDIs are important because they represent the most common method for entering new data into the system; Clark and Wilson give the example of a user typing information at a keyboard. A TP can accept unconstrained data as input and then alter the value of certain CDIs based on these UDIs. Rule C5 is therefore necessary to provide for certification of UDIs.

(C5) A TP must be capable of taking a UDI as input and transforming it into a CDI in a valid state. If this cannot be done, the UDI must be rejected by the TP.
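Rule C5 can be illustrated with a hypothetical TP (the validation policy below is an assumption for the example, not from the model) that either upgrades raw input to a CDI in a valid state or rejects it outright:

```python
class RejectedUDI(Exception):
    pass

def amount_tp(udi):
    """Upgrade a raw string UDI to a non-negative integer CDI, or reject it."""
    try:
        value = int(udi)
    except ValueError:
        raise RejectedUDI(f"not a number: {udi!r}")
    if value < 0:
        raise RejectedUDI(f"negative amount: {value}")
    return value  # now a CDI in a valid state

cdi = amount_tp("42")  # accepted and transformed into a CDI
```

The essential point is that there is no third outcome: unconstrained input is either brought within the integrity policy or excluded from the protected data entirely.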

The final rule (E4) prevents a user from creating a TP and then executing it without any certification taking place. Enforcement of this rule prevents bypassing of the certification requirements.

(E4) An individual with the ability to certify IVPs or TPs must not be capable of executing those same IVPs or TPs.
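A minimal sketch of the E4 check (illustrative names; a real system would also integrate this with the access triples of E1) records who certified each TP and refuses execution both for uncertified TPs and for the certifier's own TPs:

```python
class SeparationViolation(Exception):
    pass

class CertificationRegistry:
    def __init__(self):
        self.certified_by = {}  # tp_name -> user who certified it

    def certify(self, officer, tp_name):
        self.certified_by[tp_name] = officer

    def check_may_execute(self, user, tp_name):
        officer = self.certified_by.get(tp_name)
        if officer is None:
            # E4 implies uncertified TPs must not run at all.
            raise SeparationViolation(f"{tp_name} has not been certified")
        if officer == user:
            # E4: the certifier may not execute what he certified.
            raise SeparationViolation(
                f"{user} certified {tp_name} and therefore may not execute it")

registry = CertificationRegistry()
registry.certify("security_officer", "deposit")
registry.check_may_execute("alice", "deposit")  # allowed: a different user
```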

This combination of rules forms the basis of the system. The enforcement rules, which correspond to application-independent security functions, and the certification rules, which correspond to application-specific definitions of integrity, together define the system. The authors endeavor to place as much responsibility as possible on the enforcement rules, thereby limiting the certification requirements. This is desirable because of the complexity of the certification process compared with the enforcement capability of the system.

The Clark–Wilson model has been acknowledged as a new approach to defining and maintaining data integrity. A great deal of follow-on work has taken the basics of the Clark–Wilson model and attempted to refine it for implementation on specific computer systems; this work has served to highlight some of the strengths of the model.

The Clark–Wilson model described above was further extended by Abrams et al. [2], who propose several more powerful mechanisms that support external integrity. They claim that these new mechanisms need to support objectives such as external consistency, separation of duties, internal consistency, and error recovery. For example, to achieve separation of duties they adopt the ideas of primary CDIs and enabling sequences as a complementary measure to the access triples of the Clark–Wilson model. The enabling sequences must ensure that changes to a primary CDI have the necessary corroboration. The idea of primary CDIs and enabling sequences allows one to focus attention on which CDIs require separation of duties and on whether this separation is adequately supported by enabling sequences of events.

5. Comparison of the data integrity models

Each of the data integrity models described above addresses the issue from its own perspective. Thus, in order to compare and assess the models, a generic framework needs to be applied. Such a framework must provide a common ground for evaluation so that each of the


models can be evaluated individually. The proposed framework consists of three domains that cover the underlying definition of integrity, the concepts used in the model, and the relative advantages and limitations of the model. The domains and the detailed characteristics used in the framework are depicted in Fig. 5. After applying the framework, recommendations concerning acceptance of each model can be formulated.

5.1. Definition of integrity used in the model

The definition of integrity used in each model is examined to determine its adequacy, completeness, and assumptions. The definitions are measured against the standard set by the definition adopted as a benchmark, namely the IWG definition.

5.1.1. Adequacy/completeness

Adequacy, as stated in the IWG standard, is concerned with the areas of integrity that are addressed by the definition. The IWG standard definition itself is both adequate and complete, as it addresses many of the areas frequently associated with data integrity. Analysis of the models produces the following results.

Biba on adequacy/completeness: The Biba definition treats integrity as a relative rather than an absolute measure. There is no a priori statement concerning the performance specifications of the system; rather, the system need only perform to the designer's intent, whatever that intent may be [21]. This perspective makes the Biba definition extremely broad. It places the responsibility for integrity on the ability of the creator to design a system in which integrity can actually be achieved. Because of this, the Biba definition lacks specific detail and is general enough to be applied to almost any system or subsystem; it provides flexibility, but standardization is missing. The conclusion is that the Biba definition of integrity is adequate but not complete.

Fig. 5. A framework for comparing the data integrity models.


Clark–Wilson on adequacy/completeness: The Clark–Wilson integrity definition is based on prevention of unauthorized manipulation of data. Data that are in a valid state are maintained in that valid state, thereby ensuring integrity, only if authorized manipulations are performed on or with the data. This definition is broad enough to be applied to many different environments, although it does not address the issue of determining whether the data are initially in a valid state. The valid-state concept serves to isolate the data and label them as worthy of protection, which is essential in setting limits on the items that need protection. The definition used in Clark–Wilson is the more useful because of its applicability to many types of environments. It is complete with respect to the IWG standard and, as a result, quite adequate.

The conclusion of this section is that both the Biba and the Clark–Wilson integrity definitions are adequate in accordance with the IWG standard.

5.1.2. Assumptions

The assumptions made concerning the integrity definition in each model are analyzed to determine whether the definition is realistic. Assumptions may be so strong that they make the integrity definition, and possibly the entire model, unacceptable for implementation in any real-world environment. Caution should therefore be exercised when attaching assumptions to any data integrity definition.

Biba on assumptions: The assumptions made in Biba's definition are:
1. The system being evaluated is designed in such a way that integrity can actually be achieved.
2. External verification is performed on the system to ensure that it is functioning properly.
3. Classification labels exist for integrity levels. These labels are quite similar to the levels attached to the security classifications used for military information.

Each of these assumptions is based on sound reasoning. The design of the system is irrelevant from the perspective of Biba's model. Likewise, external verification is a realistic condition to expect before implementation of integrity controls. The existence of integrity classification labels is necessary and not an unreasonable expectation. The conclusion after examining these assumptions is that the Biba definition rests on assumptions that are both necessary and reasonable.

Clark–Wilson on assumptions: The Clark–Wilson integrity definition incorporates three assumptions:
1. Data are initially received in a valid state. There is no mechanism within the model to test for validity; it is simply assumed.
2. The initial integrity verification procedure (IVP), which confirms that the data items requiring protection meet certain conditions, is assumed to be a valid process itself.
3. The data item and the real-world object it represents are assumed to correspond closely.

Each of these assumptions is acceptable, with the possible exception of the first. The assumption that data are in a valid state, specifically that they are correct and in their original form, creates a precondition that is not easily met. It is somewhat unrealistic to assume that all data are received in a correct state: many things can happen to data that change either their format or their content. Designing a system on the basis of an integrity definition that requires received data to be in a valid state is probably not the best approach to the data integrity problem.


The conclusion is that each of the models is based on sound, reasonable assumptions in this domain that do not damage its credibility. The necessary assumptions are not liabilities for either of the models.

5.2. Concepts on which the model is based

This section examines the central theme of each model and its relation to secrecy. It describes the internal building blocks that serve as the foundation of each model and the model's external relationship with the issue of secrecy. These criteria help to determine compatibility with the objectives of the DoD's Trusted Computer System Evaluation Criteria.

5.2.1. Central theme

The basic theme of each model should rest on sound, provable principles that make the model practical rather than merely theoretical. A thorough evaluation of each model's basic concepts is necessary before a decision can be made concerning acceptance and implementation. The central theme is also important for determining compatibility with DoD requirements.

Biba on central theme: The central theme of the Biba model is the development of a hierarchical lattice with "no-write-up, no-read-down" restrictions, used to identify authorized users and separate users by type. This system is effective in preventing modifications by unauthorized individuals. Biba implements the "no-write-up, no-read-down" restrictions through both mandatory and discretionary controls. Integrity classifications, with either military-oriented or commercial-oriented labels, assign data to different levels. The use of mandatory and discretionary controls, along with the assignment of classification labels, supports the central theme of this model.

Clark–Wilson on central theme: The Clark–Wilson model is built on two premises: the well-formed transaction and separation of duties. A well-formed transaction is designed so that it allows only authorized modifications of data. Such a transaction prohibits unauthorized manipulation, thereby preserving the integrity of the data. The transaction, or process, must be designed in such a way that the well-formed label may be applied; this is not a trivial matter, especially in large-scale systems. The separation-of-duties premise is necessary to preserve a correspondence between data objects and the real-world objects they represent. This separation prevents unauthorized manipulation by breaking an operation down into several subparts and requiring that each subpart be executed by a different individual, so that no single user can execute an entire operation. Unless there is collusion among users, this prevents malicious tampering with the data.

The conclusion of this section is that the central themes of the Biba and Clark–Wilson models are largely practical, making implementation possible.

5.2.2. Relation to secrecy

The second question to be answered concerns the relationship between each data integrity model and secrecy. This issue is important because of the possible incorporation


of the model into a complete security policy that addresses both disclosure and manipulation. An examination of the two data integrity models and their relationship to secrecy will determine whether they can be incorporated into the TCSEC to fill the existing void.

Biba on relation to secrecy: Of the two models analyzed, the Biba model has the stronger relation to secrecy. Biba takes the Bell–LaPadula model and creates its dual for integrity. The mechanisms of the Bell–LaPadula model are incorporated into Biba, allowing both models to be implemented simultaneously. The requirement for integrity classification labels in Biba matches perfectly the security labels developed in Bell–LaPadula. This ties an integrity policy to a security policy, creating a complete protection policy for both access control and modification control of data.

Clark–Wilson on relation to secrecy: The Clark–Wilson model relates to secrecy in that it can limit the data a user can access, which is a method of disclosure control. The model uses separation of duties and well-formed transactions to prevent one user from being able to execute all steps of a specific process. This helps to preserve the integrity of the data while at the same time establishing an access control mechanism. Because of this feature, Clark–Wilson has a strong relation to secrecy and also to the access control requirement that characterizes a secure military system.

The conclusion drawn from the analysis in this area is that the Biba model has the stronger relation to secrecy.

5.3. Advantages and limitations

The advantages and limitations of each data integrity model must be evaluated and understood before selecting a model for implementation. An examination of these areas will help determine the suitability of each model for different applications. It will also help establish whether the weaknesses of a model preclude its acceptance.

5.3.1. Description of strengths and weaknesses

The strengths of each model are important for analyzing the benefits to be gained by accepting it, and for determining which voids in the current security policy the model can fill. While noting strengths is important, the emphasis in this section is on limitations, because the limitations of each model are the deciding factor in determining whether it can be accepted and implemented as a standard. If a model has limitations that make it unacceptable and these limitations cannot be corrected, then the model is inappropriate, regardless of its strengths. If the limitations can be corrected, then the model can be considered for acceptance.

Biba's strengths and weaknesses: The main strength of the Biba model was its attempt to identify integrity as the dual of secrecy. Biba took the Bell–LaPadula model [5], which is concerned with the unauthorized disclosure of information, and created a similar model to address unauthorized manipulation of information. In so doing, Biba was a pioneer in identifying integrity as a topic separate from secrecy.

A second strength of the Biba model is that it offers a variety of policies for both mandatory and discretionary controls. This variety increases the probability of successful integration of an


integrity policy as part of a security plan. Each of the policies has different requirements and specifications, which may or may not fit into the design of a security plan; the designer thus has more than one option when deciding on an appropriate integrity policy.

Nevertheless, the Biba model suffers from several weaknesses. It is designed for implementation in systems featuring a ring architecture, especially the Multics kernel system. The policies are tailored to this system and are not applicable for implementation on other systems. While approaching integrity as the dual of secrecy, the Biba model ignores the topic of secrecy itself. Since the Bell–LaPadula model [5], on which the DoD secrecy policies are based, does not completely address integrity, Biba attempted to fill this void; in doing so, however, he omits any discussion of secrecy. In addition, Biba's Strict Integrity Policy does not specify how to assign appropriate integrity labels comparable to the criteria used by government classification systems for assigning disclosure levels. In sum, the policies presented by Biba are not flexible enough for implementation in real-world applications: they are not only too Multics-specific but also incapable of being altered to fit systems that do not conform to the particular policy specifications.

Clark–Wilson's strengths and weaknesses: The definition of integrity used in the Clark–Wilson model treats integrity as a concept within the context of a computer system. The model offers a working definition that is applied effectively to the area of computer data, supports the definition offered, and builds a framework targeted at maintaining integrity within the scope of the definition.

A second advantage of the Clark–Wilson model is that it identifies the features of a computer system in which integrity is the main goal. The model provides a blueprint of nine basic rules that must be established and implemented in systems used to maintain integrity. Adherence to these rules allows the construction of a valid, working integrity mechanism.

Several criticisms have been leveled at the Clark–Wilson model. One is its inability to keep the integrity controls strictly internal [19]. The dual process of certification and enforcement takes into account the environment both internal and external to the system: enforcement is accomplished internally by the system itself, while certification is performed externally by a security officer. This means that the system will maintain the integrity of data that have been verified externally before being entered; it may accept data that have been entered incorrectly, whether accidentally or maliciously. External verification declares the data to be in a valid state, and the system then accepts the data and maintains their integrity.

A second criticism of the Clark–Wilson model is that, by requiring IVPs, it needlessly complicates the certification process. As mentioned earlier, it is desirable to shift as much of the verification responsibility as possible to enforcement, because enforcement can be done by the system. Since an IVP is essentially a special type of TP, the separate requirement for IVPs is redundant. This redundancy runs counter to the authors' desire to keep the certification rules minimal, given the complexity and manual work that certification entails.

A third criticism of the Clark–Wilson model is that it is applicable only at a single level of granularity, that is, at one size and resolution of the protected system elements. Badger [4] has developed rules concerning integrity policies and how they relate to the level of granularity. The dominant rule is that, at each level of granularity, the integrity policy should specify how the state may change in terms of the next lower level of granularity. As it is presented, the


Clark–Wilson model is designed for use at a single granularity level. The inability of the model to be implemented in a multi-granular environment limits its range of applicability.

5.3.2. Correction of deficient areas

This section examines the noted weaknesses of each model and attempts to determine whether they can be corrected. An acceptance decision must be based on a thorough evaluation of a model's weaknesses. If the weaknesses cannot be overcome, the options available to the decision maker are limited: the option of accepting the model with modification is eliminated. If corrections can be made, then the feasibility of making them must be analyzed, since the corrections may involve processes that excessively complicate the model and, in solving the original weakness, actually create another.

Biba on correction of deficient areas: As noted, the main weakness of Biba is its heavy orientation toward ring-architecture systems, which makes it somewhat inflexible. The feasibility of applying it to systems featuring other types of architecture must be determined. The principles of the model are valid for application to any type of system, even though the specific details are not, and indeed it can be adapted to other architectures without major modifications. To address the missing issue of secrecy, a plan for integrating an integrity policy into a secrecy policy is needed, so as to create a security policy that can be considered complete.

Clark–Wilson on correction of deficient areas: The Clark–Wilson requirement that the procedures accessing protected data be certified is its main limitation and must be dealt with before acceptance. There is a real need in this model for the procedures to be certified as functioning properly. The assumption that the data were received in a valid state, and are therefore worthy of protection, is acceptable for the data but not for the certification of the procedures. This limitation cannot be overcome without adversely affecting the proper functioning of the mechanisms in the Clark–Wilson model.

The analysis performed allows recommendations to be made concerning the suitability of the two data integrity models; both are suitable candidates for filling the void described in Section 1. Fig. 6 summarizes the comparison of the two models using the evaluation framework, eliciting the following conclusions:
1. The Biba data integrity model is capable of implementation in a ring-architecture system, but the integrity definition and the practical concepts on which it is based are inadequate.
2. The Clark–Wilson data integrity model offers an adequate integrity definition and is based on sound, provable concepts. With the added capability of integrity-label attachment, it is probably the more practical model for acceptance as a standard.

Fig. 6. Comparison of the data integrity models.

6. Conclusions and recommendations

In their fragmented state, data may appear meaningless, but viewed in total they can constitute one of the most critical assets of an organization and, as such, should be adequately managed and secured. Basic data security techniques have been developed to protect the data resource, but the emphasis has been placed mainly on secrecy and availability rather than on integrity. Nevertheless, as the organization's data resources grow in size, complexity and value, an urgent need arises for models and mechanisms to prevent unauthorized manipulation or modification of data, and for a data integrity standard that will provide a common measure and a set of tools for evaluating the various models and mechanisms in this domain. The framework proposed in this paper attempts to lay the groundwork for establishing such a standard, comprising two elements: the IWG definition as a commonly accepted definition of data integrity, and the Clark–Wilson model as a recommended data integrity model. Analysis of each model within the guidelines of the framework gives rise to the following conclusions:

1. While the IWG definition of integrity is accepted as a standard for application within the framework, there is no agreement in either the military or the commercial environment on a single acceptable definition to serve as a standard. The primary reason for this is the lack of research in the area of data integrity. Because there exist situations in which unauthorized manipulation may be more harmful than unauthorized disclosure, data integrity is very much a concern in today's computer-based information systems.
2. The Clark–Wilson data integrity model is the most appropriate for incorporation into the TCSEC as an integrity standard. It is recommended for the following reasons:
   a. The integrity definition used in the Clark–Wilson model is both adequate and complete with respect to the IWG standard. It is sufficiently broad for application in many different environments, including the military, surpassing the Biba integrity definition in range and applicability.


   b. The Clark–Wilson model has a strong relation to secrecy. The ability of the model to limit the data a user can access performs the function of disclosure control. The separation-of-duties and well-formed-transaction concepts limit the ability of any one user to perform all steps in a process, which has the effect of preserving the integrity of the data involved in that process.
   c. The Clark–Wilson model identifies the features of a system in which integrity is the primary goal. It presents nine rules to implement in order to safeguard the integrity of the data used in the system, as a blueprint for building an effective integrity enforcement system.
   d. The Clark–Wilson model has the potential for integrity labeling similar to military information classification labeling. At present it cannot attach integrity labels, but given its ability to limit the data each user can access through the separation-of-duties and well-formed-transaction concepts, the addition of this capability is possible. The Biba model is actually more suitable than Clark–Wilson in this specific area, in that it has a greater potential for the successful implementation of labeling, owing to its relationship with the Bell–LaPadula security model.
   e. The Clark–Wilson model has no major limitations. No area of the model is deficient enough to overshadow its advantages or to prohibit its acceptance.

Applying the framework provides results that can be used to select a single data integrity model for implementation within military computer environments. Based on the conclusions stated above, the following recommendations are made:

1. The Clark–Wilson data integrity model should be accepted as the basis for an integrity policy to be incorporated into the TCSEC.
2. A useful implementation of the Clark–Wilson model must include more than TPs, IVPs and their management and control to ensure the integrity of the enterprise. Additional considerations, including specific organizational policies for separation of duties and the goals and techniques associated with integrity validation, must influence systems analysis and design, so that system-implemented and administratively handled mechanisms interact appropriately to ensure higher-level integrity objectives [2].
3. The Clark–Wilson model captures a fundamental view of integrity for a trusted system. Abrams et al. [2] suggest the following refinements to the model:
   · External consistency and user roles: reserved CDIs encapsulated within vendor-supplied TPs.
   · Separation of duty: primary CDIs and enabling sequences of TPs.
   · Integrity applied to the mechanisms that implement integrity: encapsulate the audit mechanism and provide an append-audit kernel call.
   · External consistency and I/O devices: ensure a trusted path for input from a keyboard or from a terminal associated with a printer, and treat output devices as CDIs.
4. The TCSEC and the Clark–Wilson model should adopt a data integrity labeling scheme similar to the scheme currently in use for data secrecy. There should be separate levels of integrity classification, with all applicable data properly classified. These integrity classifications should restrict both manipulation and modification, with mechanisms in place to grant such privileges only to authorized individuals. While the integrity labels need not be identical to the Top Secret, Secret, and Confidential labels used for security purposes, they do need to follow a similar pattern. Labels such as High, Medium, and Low are acceptable, provided that the mechanism that enforces integrity is capable of distinguishing authorized access requests


from unauthorized requests. There should be three data integrity levels to correspond to the three data security levels. 5. More research is needed to adapt the Clark±Wilson model to preserve integrity in object-oriented database systems. The revised authorization model should be able to support the semantic richness of object-oriented data models and the constructs used in these models such as classes, object instances, versions, and encapsulation. References [1] E.M. Awad, M.H. Gotterer, Database Management, Boyd and Fraser Publishing Company, 1992. [2] M.D. Abrams, E.G. Amroso, L.J. LaPadula, T.F. Lunt, J.G. Williams, Report on an integrity research study group, Computers and Security 12 (7) (1993) 679±689. [3] L.K. Barker, L.D. Nelson, Security standards ± government and commercial, AT&T Technical Journal, 1988, pp. 9±18. [4] L. Badger, A model for specifying multi-granularity integrity policies, in: Proceedings of the 1989 IEEE Symposium on Security and Privacy, April 1989, pp. 84±91. [5] D.E. Bell, L.J. LaPadula, Secure Computer Systems: Mathematical Foundations and Model, The Mitre Corporation, Bedford, MA, November 1973. [6] E. Bertino, Data security, Data & Knowledge Engineering 25 (1998) 199±216. [7] K.J. Biba, Integrity Considerations for Secure Computer Systems, The Mitre Corporation, Bedford, MA, April 1977. [8] S. Ceri, P. Fraternali, S. Paraboschi, L. Tanca, Automatic generation of production rules for integrity constraints, ACM Transactions on Database Systems 19 (3) (1994) 367±422. [9] S. Ceri, F. Garziotti, Speci®cation and management of database integrity constraints through logic programming, Polytechnic di Milano, Technical Report 88-025, 1988. [10] D.D. Clark, D.R. Wilson, A comparison of commercial and military security policies, in: Proceedings of the 1987 IEEE Symposium on Security and Privacy, April 1987, pp. 184±194. [11] C.J. Date, An introduction to Database Systems, sixth ed., Addison-Wesley, New York, 1995. 
[12] DoD, Trusted Computer System Evaluation Criteria, US Department of Defense, National Computer Security Center, August 1983.
[13] E.B. Fernandez, R.C. Summers, C. Wood, Database Security and Integrity, Addison-Wesley, New York, 1981.
[14] R.R. Henning, S.A. Walker, Data integrity vs data security: a workable compromise, in: Proceedings of the 10th National Computer Security Conference, October 1987.
[15] G. Moerkotte, P.C. Lockemann, Reactive consistency control in deductive databases, ACM Transactions on Database Systems, 1991, pp. 52–64.
[16] A. Motro, Integrity = validity + completeness, ACM Transactions on Database Systems, 1989, pp. 480–502.
[17] C. Pfleeger, Security in Computing, second ed., Prentice-Hall, Englewood Cliffs, NJ, 1997.
[18] J.E. Roskos, S.R. Welke, J.M. Boone, T. Mayfield, Integrity in the Department of Defense Computer Systems, IDA Paper P-2316, Institute for Defense Analyses, 1990.
[19] Z.G. Ruthberg, W.T. Polk, Report of the Invitational Workshop on Data Integrity, Government Printing Office, 1989.
[20] P. Terry, S. Wiseman, A new security policy model, in: Proceedings of the IEEE Computer Society Symposium on Security and Privacy, May 1989.
[21] S.R. Welke, J. Roskos, J. Boone, T. Mayfield, Taxonomy of integrity models, implementations and mechanisms, in: Proceedings of the 13th National Computer Security Conference, October 1990.
[22] S.J. Waters, Methodology of computer systems design, The Computer Journal 17 (1) (1974) 17–24.
[23] M. Zviran, W.J. Haga, Evaluating password techniques for multilevel authentication mechanisms, The Computer Journal 36 (3) (1993) 227–237.

Moshe Zviran is a senior lecturer of Information Systems in the Faculty of Management, The Leon Recanati Graduate School of Business Administration, Tel Aviv University. He received his B.Sc. degree in Mathematics and Computer Science and M.Sc. and Ph.D. degrees in Information Systems from Tel Aviv University, Israel, in 1979, 1982 and 1988, respectively. He has held academic positions at the Claremont Graduate University, CA, the Naval Postgraduate School, CA, and Ben-Gurion University, Israel. His research interests include the planning, development and management of information systems, information systems security, and information systems in health care and medicine. He is also a consultant in these areas for a number of leading organizations. Dr. Zviran's research has been published in MIS Quarterly, Communications of the ACM, Journal of Management Information Systems, IEEE Transactions on Engineering Management, Information & Management, Information Systems, Omega, The Computer Journal, Journal of Medical Systems and other journals. He is also co-author (with N. Ahituv and S. Neumann) of Information Systems for Management (Tel Aviv, Dyonon, 1996).


Chanan Glezer is a lecturer of Information Systems in the Department of Industrial Engineering and Management at Ben-Gurion University of the Negev. Dr. Glezer holds a Ph.D. in MIS from Texas Tech University and an M.B.A. and B.Sc. in General Science from Tel Aviv University. His main research interest is the development and evaluation of architectures to address organizational coordination tasks such as meeting scheduling. He is also working on topics such as interoperable electronic product catalogues, network information retrieval systems, systems integration and management of the IS function. He has published in Communications of the ACM, Journal of Medical Systems, and the Journal of Organizational Computing and Electronic Commerce. He has over eight years of MIS-related work experience with the Israel Defense Forces, serving as an application programmer, systems designer and MIS officer.