Efficient Detection of Failure Modes in Electronic Commerce Protocols

Sigrid Gürgens
GMD - German National Research Center for Information Technology
Rheinstrasse 75, 64295 Darmstadt, Germany
[email protected]

Javier Lopez
Computer Science Dept., Universidad de Malaga
Campus de Teatinos, Malaga - 29071
[email protected]

René Peralta
Center for Cryptography, Computer, and Network Security
University of Wisconsin, Milwaukee, WI 53201
[email protected]

(c) 1999 IEEE. Personal use of this material is permitted. However, permission to reprint/republish this material for advertising or promotional purposes or for creating new collective works for resale or redistribution to servers or lists, or to reuse any copyrighted component of this work in other works must be obtained from the IEEE.

Abstract

The design of key distribution and authentication protocols has been shown to be error-prone. These protocols form building blocks of the more complex protocols used for electronic commerce transactions, and consequently these new protocols are likely to contain flaws that are even more difficult to find. In this paper we present a search method for detecting potential security flaws in such protocols. Our method relies on automatic theorem-proving tools. Among other results, we present our analysis of a protocol recently standardized by the German standardization organization DIN for use in digital signature applications for smartcards. Our analysis resulted in the standard being supplemented with comments that explain the permissible use of cryptographic keys.

1. Introduction

Electronic Commerce is a general concept that covers any form of business interaction using information and communication technology. This includes communication between companies, between companies and their customers, and between companies and public administrations [2], [3]. It aims at improving efficiency in terms of lower costs, improving effectiveness in terms of widening market potential and better meeting customers' needs, and enhancing product and service innovation through customer-supplier interactivity.

Most of the work on electronic commerce has focused on providing client/server model based applications [9], a class of distributed applications which involves two types of systems: a client system, which is used directly by the end-user, and a server system, which makes sophisticated resources or services available to the end-user via the client system. Client/server applications thus meet the needs of digital commercial transactions well. Additionally, they can be deployed on wide area networks such as the Internet: the Internet facilitates one-time or short-term relationships between transacting parties (client and server), while the Web makes it easy to find buyers and sellers for goods and services and, increasingly, provides efficient means of quickly negotiating business agreements.

However, there is a potential drawback. The electronic systems and infrastructures that support electronic commerce are susceptible to abuse, misuse, and failure in many ways. Tremendous damage can occur to any electronic commerce participant, including commercial traders, financial institutions, service providers, and consumers. From the business perspective, the consequences can include direct financial loss resulting from fraud, theft of valuable confidential information, loss of business opportunity through disruption of service, unauthorized use of resources, and loss of customer confidence. The potential growth of Internet-based commerce is curtailed by concerns about the security of systems, and organizations are finding that the use of the Internet carries many risks that do not arise in the real world. Thus, despite the rewards of conducting business on the Internet, major corporations have been slow to adopt this technology because of the risks involved.

2. Formal methods for cryptographic protocol verification

Traditionally, the security of electronic communication is provided by applying cryptographic algorithms to the messages, i.e. by using what are called cryptographic protocols. By this means the authentic and confidential exchange of valuable data can be achieved, access to resources can be controlled, and the identity of communicating entities can be verified. However, the design of cryptographic protocols is a difficult task. There are many examples of protocols that were first considered to be secure and were later shown to contain flaws (see e.g. [5], [15]). Thus, in the last ten to fifteen years, the problem of formal methods for the security analysis of protocols has received increasing attention and has become a research area of its own.

The two most important approaches in this area can be categorized as the development of logics about security (so-called authentication logics) and the development of model checking tools. An excellent partial overview is given in [16]. A seminal paper on authentication logics is [5]. Work in this area has produced significant results in finding protocol flaws, but also appears to have limitations which will be hard to overcome within the paradigm. Model checking involves the definition of a state space (typically modelling the "knowledge" of different participants in a protocol) and transition rules (formulas) which define the protocol being analyzed, the network properties, and the capabilities of an enemy. Initial work in this area can be traced to [10]. Much has already been accomplished. A well-known software tool is C. Meadows's NRL Protocol Analyzer [18]. Other promising work includes G. Lowe's use of the Failures Divergences Refinement Checker for CSP [15], S. Schneider's use of CSP [21], and the model checking algorithms of Marrero, Clarke, and Jha [16]. Although much work has been dedicated to the problem, it is far from being solved.
On the other hand, the security of electronic commerce transactions is an even more complex problem, and additional requirements arise that exceed the need for confidentiality and authenticity. Secure electronic commerce functions such as transaction certification, electronic notary functions, operational performance, commerce disposal, anonymity, auditing, etc. are necessary for a successful deployment of electronic commerce transactions. These functions in turn produce requirements like identification of the sender and/or the receiver of a message, certification of delivery, confirmation of arrival, timestamping, detection of tampering, electronic storage of originality-guaranteed information, approval functions, and access control. Moreover, some electronic commerce transactions involve payment protocols (see [11], [1], [22], [7]) with an increasing number of participants: issuers, cardholders, merchants, acquirers, payment gateways, and Certification Authorities (CAs) are involved in the same protocol. This again makes the confidentiality problem harder to solve, as part of the information must be made available only to some of the participants.

Like traditional electronic communication, electronic commerce transactions have been adapted to use cryptography to satisfy the security requirements discussed above. However, the design of a secure electronic commerce protocol is much more complicated than the design of traditional key distribution and authentication protocols, so the need for formal methods that assist the design process is apparent. In this paper we present our approach to the formal analysis of cryptographic protocols using the theorem prover Otter, developed at Argonne National Laboratory, Chicago (for a detailed description see [24]); we discuss our design and its implementation, present our analysis of a protocol that was standardized by the German standardization organization DIN, and explain our results.

3. The communication and attack model

We will call the participants of cryptographic protocols "agents". Agents are not necessarily people; they can be, for example, computers acting autonomously or processes in an operating system. The agents act in cryptographic protocols by sending and receiving messages and by performing particular checks on messages. In what follows,

s: A -> B: m

denotes that in step s of a protocol execution (which we will call a protocol run throughout this paper) agent A sends message m to agent B. crypt(K, (m1, m2, ..., mr)) denotes the transformation of the ordered tuple of messages (m1, m2, ..., mr) using the cryptographic algorithm crypt and the key K. If we need to distinguish between different algorithms, we use scrypt for a symmetric algorithm, acrypt for an asymmetric algorithm, and sig for a signature algorithm. Furthermore, PKX denotes the public key and SKX the private key of agent X. Some protocols involve a special agent called a Trusted Third Party (TTP) which is trusted by all agents with respect to certain services (such as generating valid certificates).

The security analysis of protocols does not deal with weaknesses of the cryptographic algorithms used. In what follows, we therefore assume that the following security properties hold:

1. Messages encrypted with a key K can only be decrypted by those who possess the inverse key K^-1.
2. A key cannot be guessed.
3. Given m, knowledge of the key K is necessary to produce a ciphertext which is the encryption of a plaintext P containing m.
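The notation and the first two security properties can be prototyped as operations on symbolic terms. The following Python sketch is our own illustration (the function names and the "pub:"/"priv:" key labels are hypothetical), not the encoding used in the paper:

```python
from typing import Tuple

def crypt(key: str, payload: Tuple[str, ...]) -> tuple:
    """Symbolic encryption crypt(K, (m1, ..., mr)): wrap the tuple with the key name."""
    return ("crypt", key, payload)

def inverse(key: str) -> str:
    """Inverse-key map: pub:X <-> priv:X for an asymmetric algorithm;
    a symmetric key is its own inverse."""
    if key.startswith("pub:"):
        return "priv:" + key[4:]
    if key.startswith("priv:"):
        return "pub:" + key[5:]
    return key

def decrypt(key: str, ciphertext: tuple):
    """Property 1: the payload is recovered only with the inverse key;
    with any other key nothing is learned."""
    tag, used_key, payload = ciphertext
    if tag == "crypt" and inverse(used_key) == key:
        return payload
    return None

c = crypt("pub:B", ("m1", "m2"))
print(decrypt("priv:B", c))  # ('m1', 'm2')
print(decrypt("priv:A", c))  # None
```

In a symbolic model of this kind, "cannot be guessed" (property 2) is captured simply by never generating a key term that the agent was not given.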

While the first two items describe generally accepted properties of cryptographic algorithms, the last one, which Boyd called the "cohesive property" in [4], does not hold in general. Boyd, and also Clark and Jacob in [6], show that under certain conditions particular modes of some cryptographic algorithms make it possible to generate a ciphertext by concatenation of blocks. These papers were important in that they drew attention to hidden cryptographic assumptions in "proofs" of security of cryptographic protocols. In fact, it is clear now that the number (and types) of hidden assumptions usually present in security proofs is much broader than what Boyd and Clark and Jacob pointed out.

All communication is assumed to occur over insecure communication channels. What an insecure communication channel is can be defined in several ways. In our model we assume that there is a malicious agent E controlling all communication. E has the ability to intercept messages and, after intercepting, to change the message to anything it can compute. This includes changing the destination of the message and the supposed identity of the sender. Furthermore, E is considered a valid agent and thus also owns a symmetric key KE,TTP and/or a key pair (PKE, SKE) for an asymmetric algorithm. (Which keys E owns in an actual analysis depends on the type of algorithms used in the protocol.) The enemy E is also allowed to initiate communication with other agents and to generate random numbers. Figure 1 depicts our communication model.
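E's ability to derive new data from intercepted messages can be prototyped as a fixed-point computation over a set of symbolic terms: the knowledge set is closed under taking message parts and under decrypting ciphertexts whose inverse key is already known. The sketch below is our own toy illustration (the paper's actual encoding uses Otter predicates); messages are tuples, and ciphertexts are ("crypt", key, payload) triples treated as atomic except for decryption:

```python
def inverse(key):
    """pub:X <-> priv:X; symmetric keys are their own inverse (hypothetical labels)."""
    if key.startswith("pub:"):
        return "priv:" + key[4:]
    if key.startswith("priv:"):
        return "pub:" + key[5:]
    return key

def closure(initial):
    """Smallest superset of `initial` closed under taking message parts
    and deciphering ciphertexts whose inverse key is already known."""
    known = set(initial)
    changed = True
    while changed:
        changed = False
        for item in list(known):
            derived = []
            if isinstance(item, tuple) and item[:1] == ("crypt",):
                _, key, payload = item
                if inverse(key) in known:   # decrypt only with the inverse key
                    derived.append(payload)
            elif isinstance(item, tuple):
                derived.extend(item)        # every part of a message becomes known
            for d in derived:
                if d not in known:
                    known.add(d)
                    changed = True
    return known

# E intercepts a session key sent encrypted under E's own public key:
known = closure({("crypt", "pub:E", ("Kab", "Na")), "priv:E"})
print("Kab" in known, "Na" in known)  # True True
```

Unrestricted, a set like this grows without bound once E may also *construct* new encryptions and concatenations, which is exactly why the knowledge definition has to be limited, as discussed below.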

[Figure 1 shows the agents A, B, and the TTP, each communicating only via the enemy E.]

Figure 1. The communication model.

E intercepts all messages, and the agents and the TTP only receive messages sent by E. What E can send depends on what it is able to generate, and this in turn depends on what E knows. The knowledge of E can be recursively defined as follows:

1. E knows the names of the agents.
2. E knows the keys it owns (e.g. its public key PKE and its private key SKE).
3. E knows its own random numbers.
4. E knows every message it has received.
5. E knows every part of a message it has received (where a ciphertext is considered one part of a message).
6. E knows everything it can generate by enciphering something it knows using something it knows as a key.
7. E knows every plaintext it can generate by deciphering a ciphertext it knows, provided it knows the inverse of the key used as well, and the ciphertext was generated with an algorithm that allows message recovery (i.e. which is not a one-way function).
8. E knows every concatenation of data it knows.
9. E knows the origin and destination of every message.
10. At every instance of the protocol, E knows the "state" of every agent, i.e. E knows what messages they are expecting to receive, whether or not they will accept a particular message at a particular moment of the protocol run, what they will send after having received (and accepted) a message, etc.

Note that the above implies that E never forgets. In contrast, the other agents forget everything which they are not specifically instructed to remember by the protocol description.

At every state in a possible search path, after having received an agent's message, E has the possibility to forward this message or to send whatever message it can generate to one of the agents. Obviously, the search space will grow enormously. Thus, we must limit the concept of knowledge if the goal is to produce a verification tool which can be used in practice. Therefore we restrict the definition in various ways in order to make E's knowledge set finite. For example, if the protocol to be analyzed uses a public key algorithm, we do not allow E to generate ciphertexts using data as a key which is neither the private nor the public part of a key pair. We do this on an ad-hoc basis; this problem needs further exploration. (We note, however, that leaving this part of our model open allows for a very general tool. This is in part responsible for the success of the tool, as documented in later sections.)

4. Automatic testing of protocol instantiations

As the title of this section suggests, we make a distinction between a protocol and a "protocol instantiation". We deem this a necessary distinction because protocols in the literature are often under-specified: a protocol may be implemented in any of several ways by the applications programmer.

For describing a possible state of a protocol we use predicates containing every agent's and E's information: the list of actions already performed and the list of data known by E up to the current step of the protocol; the data each of the agents has stored so far for later use (session keys, nonces, the name of the desired communication partner, etc.); the protocol runs the agent assumes to be taking part in and the next message it will accept in each of these runs; the agent which sent the current message; the intended recipient; the message; the number of actions already taken; and the number of times E changed a message instead of forwarding it. Using these predicates we formulate formulas which describe the transformation of protocol states. In particular, we have:

- formulas describing under which conditions the agents take which action. The actions of a protocol participant can generally be described as follows: if agent X receives a message and this message passes some particular checks, then it takes whatever action is called for in the protocol description; if the checks fail, it halts.

- formulas that apply to every protocol analysis (e.g. whenever a protocol participant sends a message it is received by the enemy E, a message sent by E to agent X is received by X, etc.).

- formulas that describe the possible actions of an intruder E. Whenever E receives a message, everything that E can learn from this message is added to the knowledge list. Using this knowledge list, new messages are constructed and new statements are deduced in which E sends these new messages to the agents. This is the crucial part of the analysis, since in general the number of messages that E can construct from the knowledge list at each step of the protocol is infinite. So we have to reduce the possibilities of message construction in a way that does not exclude a possible attack if one should exist.

A typical formula in our modeling of a protocol will look like:

send_E(action_list, knowledge_list, from_agent, to_agent, message, ...) AND check(...) AND ...
-> send_agent(update(action_list), update(knowledge_list), new_from_agent, new_to_agent, new_message, ...).

We use equations to describe the properties of the algorithms, keys, etc. used in the protocol. For example, the equations

acrypt(key(x, pub), acrypt(key(x, priv), y)) = y
acrypt(key(x, priv), acrypt(key(x, pub), y)) = y

formalize the usage of an asymmetric signature algorithm with message recovery, while using only the second equation is equivalent to using a one-way signature algorithm. Finally, we add the negation of a predicate which describes a non-desired state of the protocol (e.g. agent A believing that it shares a secret key with B while this key is an element of E's knowledge list).

We start the analysis with a predicate describing the initial state of the protocol run, including the initial knowledge of the enemy E. Otter then deduces new predicates using this initial state, the formulas described above, and logical inference rules. These new predicates are again used to deduce new predicates. The process stops if either no more predicates can be deduced or Otter finds a contradiction between one of the predicates and the negation of the non-desired state. The action list of this last predicate then shows a possible attack. Sometimes the attack can be avoided by adding a check to the formula describing the respective action, in which case we can perform a further analysis to see whether the protocol contains more weaknesses. This allows for the simultaneous exploration of many possible attacks on the protocol. Otter provides other useful facilities as well; for example, it is possible to assign different weights to predicates, which allows the search strategy to be directed (to do depth-first instead of breadth-first search, to disregard undesired predicates, to only allow the enemy to change messages a certain number of times in each analysis run, etc.). For a detailed description we refer the reader to [24].

5. Some protocols and their analysis

In this section we present two different protocols and our analysis using Otter.

5.1 A protocol for digital signature applications in smartcards

Digital signatures will be important cryptographic components of many electronic commerce applications. This was taken into account in Germany by enacting the "Digital Signature Law" in 1997 [13]. Since smartcards are considered the appropriate devices for securely storing signature keys, the German standardization organization DIN has developed a standard describing the general design of digital signature applications in smartcards in accordance with the above mentioned law [20]. We now describe a protocol of this standard that uses digital signatures for authentication purposes. Generally it is assumed that the digital signature application, located for example in a PC, controls the smartcard (ICC) by using

a so-called interface device (IFD). By sending the respective commands to the IFD, the application causes the IFD to communicate with the smartcard accordingly. The actual computations (e.g. of ciphertext) are accomplished by the IFD. For simplicity we will use IFD and the digital signature application synonymously. Note that the smartcard always acts as the responding entity, i.e. it never sends commands itself.

In the first phase of this protocol the certificates of the smartcard and the smartcard reader, respectively, are exchanged for verification and for storage of the keys needed for the authentication phase. In the second phase, ICC and IFD exchange encrypted signatures on their respective random numbers and on secrets K1 and K2 that are later used to generate a session key for securing further communication. Phase 3 consists of reading a display message stored in the smartcard. This step is already performed in a security mode called "secure messaging", where each message contains a cryptographic checksum for authentication purposes and confidential data are encrypted. Reading the display message is only allowed for an authenticated smartcard reader whose certificate contains the access right "read", while the certificate of the smartcard itself contains no access right. The protocol assumes that a signature algorithm with the possibility of message recovery is used (i.e. an algorithm which is not a one-way function). After a successful run of the protocol the smartcard and the smartcard reader share a session key that they can use for authentic and confidential communication. Since the objective of this standard is to describe only the process in the smartcard, the actions of the smartcard reader are not of interest here.
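The message-recovery assumption deserves emphasis, because it is what the attack described later turns against the card. With textbook RSA as a stand-in (our own toy numbers, purely illustrative), signing is the private-key operation, so "encrypting" a signature under the matching public key undoes it:

```python
# Toy textbook-RSA sketch (illustrative numbers only) of a signature
# algorithm with message recovery: acrypt(PKX, sig(SKX, m)) = m.
n, e, d = 3233, 17, 2753            # toy modulus 61*53 and matching key pair

def sig(sk, m):                     # sign = private-key exponentiation
    return pow(m, sk, n)

def acrypt(pk, m):                  # encrypt = public-key exponentiation
    return pow(m, pk, n)

m = 1234                            # stands for the signed data (padding, PR1, K1, hash)
assert acrypt(e, sig(d, m)) == m    # encryption under the matching public key
                                    # recovers the signed data
```

Consequently, if a card that signs with SKICC can be persuaded to encrypt the result under its own public key PKICC instead of the reader's key, the two operations cancel and the signed data leave the card in plaintext.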

IFD                                      ICC
1: SELECT FILE                      ->
2:                                  <-   OK
3: READ BINARY                      ->
4:                                  <-   CertICC
5: MSE (SET <PKCA>)                 ->
6:                                  <-   OK
7: VERIFY CERTIFICATE (CertIFD)     ->
8:                                  <-   OK

In this first phase of the authentication process the smartcard reader IFD selects the file that contains the certificate of the smartcard ICC (step 1) and reads it (step 3). After having checked that the certificate is valid (we assume here that both IFD and ICC have been issued a certificate by the same certification authority CA), IFD determines the key PKCA that the smartcard shall use for the subsequent certificate check with the command MSE (Manage Security Environment) (step 5) and then sends its own certificate CertIFD (step 7). When ICC finds IFD's certificate valid, it stores the certificate's public key (i.e. the public key PKIFD of IFD) together with the certificate holder's distinguished name (IDIFD) and the respective access right in a temporary file for later use.

Then the second phase of the protocol starts. In an earlier version of the standard this consisted of the following steps:

IFD                                      ICC
 9: MSE (SET <SKICC>)               ->
10:                                 <-   OK
11: INTERNAL AUTH (RIFD, IDIFD)     ->
12:    <-  acrypt(PKIFD, sig(SKICC, ('61', PR1, K1, h(PR1, K1, RIFD, IDIFD), 'BC')))
13: GET CHALLENGE                   ->
14:                                 <-   RICC
15: MSE (SET <PKIFD>)               ->
16:                                 <-   OK
17: EXTERNAL AUTH (acrypt(PKICC, sig(SKIFD, ('61', PR2, K2, h(PR2, K2, RICC, IDICC), 'BC'))))  ->
18:                                 <-   OK

In step 9 IFD determines the signature key SKICC to be used by the smartcard. Then it requests the ICC to authenticate itself by sending the command INTERNAL AUTH in step 11 with its challenge RIFD and its distinguished name IDIFD, where IDIFD determines the public key to be used by the smartcard for the encryption of the signature. (The smartcard searches for IDIFD in the temporary file in which it has stored the certificate data received in step 7 and uses the respective key.) The smartcard now generates a random number PR1 for padding purposes and a secret K1 to be used later as part of the new session key, and then generates a signature, using the key determined in step 9, over the following data: the hexadecimal padding value '61', the random number PR1, the key part K1, the hash value of PR1, K1, IFD's random number RIFD and IFD's distinguished name IDIFD (both received in the previous step), and finally the hexadecimal value 'BC'. This signature is then encrypted using the key determined by IDIFD, and the resulting ciphertext is sent to IFD. IFD decrypts the ciphertext using its own private key and checks whether the signature is valid, using for this the public key that was transmitted with the certificate in step 4 of the protocol run. Then IFD requests ICC's challenge RICC (step 13) and again determines a key with the command MSE which the smartcard shall use subsequently to verify IFD's signature (step 15). Finally, in step 17, IFD generates its own authentication token by signing the equivalent data using its signature key SKIFD and encrypting the signature with ICC's public key PKICC. For this it generates a padding random number PR2 and the second secret part K2 of the session key to be generated and used later on. The smartcard uses the key determined in step 9 for decrypting the ciphertext and the key determined in step 15 for checking the signature.

After successful mutual authentication the IFD is allowed to read the display message of the smartcard, which shall inform the smartcard owner of the positive result of the authentication. This part of the communication is already processed in secure messaging mode, which means that each data field carries a cryptographic checksum; additionally, the display message is encrypted using a symmetric algorithm. The key K used for this is a combination of the secret parts K1 and K2 exchanged in the authentication phase.

5.2 Our analysis

The first step of analyzing a protocol is to describe the valid agents' actions. Since in the first version of the standard explained above the possible use of the keys was not explicitly restricted in any way, we allowed the smartcard to use any key for any purpose, except that it only uses keys with the purpose "certificate check" for checking certificates. As was pointed out to us later, this is not in accordance with [14], a DIN standard describing security-related smartcard commands such as MSE. However, we believe that a designer of a smartcard operating system may not be aware of the key restrictions either if they are not explicitly mentioned, which justifies our formalization. For verifying a certificate we let the smartcard only check the certificate's signature (the verification of certificates is out of the scope of [20]), which is the way this will probably be implemented. (Note that the smartcard is not able to check a validity date, as it does not own a clock.) Second, we had to decide what the knowledge of E is. We decided to model E as a malicious interface device that does not own any valid certificate or any other data useful for an attack. Our analysis showed that the following protocol run was possible:

IFD (E)                                  ICC
1: SELECT FILE                      ->
2:                                  <-   OK
3: READ BINARY                      ->
4:                                  <-   CertICC
5: MSE (SET <PKCA>)                 ->
6:                                  <-   OK
7: VERIFY CERTIFICATE (CertICC)     ->
8:                                  <-   OK

In the first phase E sends in step 7 the smartcard's certificate, received in step 4, instead of its own. The smartcard checks that the signature is valid but does not perform any additional checks; in particular, it does not check that the certificate holder's name differs from its own. Then it stores the certificate's public key PKICC, its holder's identification IDICC, and the access right with respect to the display message in the respective file, i.e. the smartcard stores its own certificate data. A possible authentication phase found by our analysis is the following:

IFD (E)                                  ICC
 9: MSE (SET <SKICC>)               ->
10:                                 <-   OK
11: INTERNAL AUTH (RE, IDICC)       ->

E correctly determines SKICC as the signature key to be used by the smartcard in step 12, but with IDICC as the second item of the INTERNAL AUTH command in step 11 it requests the smartcard to use the smartcard's own public key as encryption key (the key that was stored in step 7). Since the protocol is designed for an algorithm with message recovery, where acrypt(PKX, sig(SKX, m)) = m for any agent X, the smartcard consequently does not send its encrypted signature in the next step. Instead, it sends plaintext; in particular, it sends the first secret part K1 of the session key in the clear:

12:    <-  '61', PR1, K1, h(PR1, K1, RE, IDICC), 'BC'

Now E requests the smartcard's challenge as described in the standard, but then, instead of determining the correct public key for signature verification, it demands that the smartcard use its own signature key for this purpose:

13: GET CHALLENGE                   ->
14:                                 <-   RICC
15: MSE (SET <SKICC>)               ->
16:                                 <-   OK

Then E generates a "signature" by using the smartcard's public key received in step 4 of the protocol and encrypts this data, again using the same key. The resulting ciphertext is then sent to the smartcard:

17: EXTERNAL AUTH (acrypt(PKICC, "sig"(PKICC, ('61', PR2, K2, h(PR2, K2, RICC, IDICC), 'BC'))))  ->

The smartcard uses its private key SKICC (determined in step 9) to decrypt the ciphertext and then, according to the key specified in step 15, uses this key again for checking the "signature". Since the keys PKICC and SKICC correspond to each other, the smartcard finds the signature valid and answers with

18:                                 <-   OK

Now E can try to read the display message of the smartcard, but it will not succeed, since the certificate data stored by the smartcard in the first phase of the protocol are the smartcard's own data, and the smartcard is not allowed to access its display message. However, we may assume that a malicious interface device will be able to present the necessary confirmation of the successful authentication on its own. At the end of this protocol run E owns the two secrets that form the session key to be used by the smartcard subsequently for secure messaging purposes. If the smartcard accepts the above protocol run as a successful authentication, i.e. if it does not insist on the display message being read successfully, E is now able to perform whatever action the application allows. In particular, E is able to generate digital signatures using the smartcard's signature key.

As we pointed out at the beginning of this section, the use of the keys described above is not in accordance with [14]. In general, signature keys may not be used for encryption and vice versa. In particular, the smartcard is not allowed to use its own public key for the encryption of its signature in step 12. However, in the new version of the standard it is still possible to present the smartcard its own certificate in the first protocol phase. In this case the authentication process will break off at the INTERNAL AUTH command, but it might be hard in practice to find the reason for this (the certificate was verified as being valid). We believe that it is advisable to have the smartcard accept only those certificates that are reasonable in an authentication process. We also believe that the programmer of the smartcard operating system might not be aware of the key restrictions if they are not explicitly mentioned in the protocol description.

After the presentation of our analysis the standard was changed to explicitly disallow the attack described above: notes were added that explain which keys the smartcard is not allowed to use for which purpose. Additionally, the way the keys the smartcard shall use are determined was changed. In the new version (of November 30th, 1998) one command MSE (SET (<key-reference1, key-reference2>)) is used in step 9 to determine both keys to be used by the smartcard later on (steps 15 and 16 are now obsolete), which prevents the use of the smartcard's signature key for the signature verification during the EXTERNAL AUTH command. This new version of the standard has not yet been analyzed.

5.3 The TMN protocol

We used Otter for the analysis of various other protocols, among them the TMN protocol presented by Tatebayashi, Matsuzaki, and Newman in [23], which uses both an asymmetric and a symmetric algorithm. This protocol presents various problems, the most interesting of which is a flaw due to the homomorphic property of RSA (the multiplication of two encryptions being the same as the encryption of the multiplication of the two respective plaintexts). A possible attack that exploits this property was first pointed out by Simmons (see [23]). We analyzed this protocol and were able to formally verify this attack, in particular since the above property can easily be formalized using our methodology. Note that most of the approaches we described in section 2 do not have the necessary generality to formalize the homomorphic property of RSA. We also found various other difficulties of this protocol.
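The homomorphic property that the Simmons attack exploits can be demonstrated with toy textbook-RSA numbers (our own illustration; the values are not from the TMN protocol):

```python
# The product of two RSA encryptions equals the encryption of the
# product of the plaintexts, so an attacker can "blind" a victim's
# ciphertext with a factor of its own choosing.
n, e = 3233, 17                       # toy public key (modulus 61*53)

def enc(m):
    return pow(m, e, n)               # textbook RSA encryption

key_victim, blind = 42, 7             # victim's secret, attacker's blinding value
blinded = (enc(key_victim) * enc(blind)) % n
assert blinded == enc((key_victim * blind) % n)
# An attacker who gets the server to decrypt `blinded` can then recover
# key_victim by multiplying the result with the inverse of `blind` mod n.
```

An equational model can express this directly, e.g. as a rule of the shape enc(x) * enc(y) = enc(x * y), which is what our methodology formalizes; approaches without such equational reasoning cannot represent the attack.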

6. Conclusions and future work

The design of secure cryptographic protocols is a difficult task. Many protocols have been found to contain flaws only several years after they were published. Due to their additional requirements, the design of secure electronic commerce protocols is even harder than the design of traditional key distribution and authentication protocols. For this reason, flaws are less likely to be found, and formal methods and tools that enable the detection of such flaws are of great importance.

In this paper we have introduced our approach, a formal verification method for cryptographic protocols that uses Otter, an automatic theorem prover. We have presented our analysis of a protocol recently standardized by the German standardization organization DIN for use in digital signature applications for smartcards. To our knowledge, this is the only formal analysis performed on this protocol. Our analysis found a previously unknown attack on the protocol which is possible if the smartcard operating system's programmer is not aware of certain restrictions with respect to the keys that the smartcard is allowed to use. As a consequence, the new version of the standard includes notes that explicitly state that certain keys may not be used for certain purposes. The standard was also changed with respect to the way the keys to be used by the smartcard are determined.

We applied our methods to other protocols as well; among the known attacks which we also found by our methods are the Simmons attack on the TMN protocol, and the Lowe [15] and the Meadows [17] attacks on the Needham-Schroeder public key protocol [19]. In particular, we analyzed the Needham-Schroeder protocol for symmetric algorithms (see [19]). Our analysis found the attack first shown by Denning and Sacco in [8] and, in addition, a new attack not previously published (which we called the "arity" attack) that exploits the fact that the agent does not check the length of the ciphertext (see [12] for a detailed description).

Our approach has shown itself to be a powerful and flexible way of analyzing protocols. However, we do not yet have a tool which can be used by application programmers. Analyzing protocols with our methods requires some expertise in the use of Otter as well as some expertise in cryptology. Although much of what we now do can clearly be automated (e.g. it would not be too hard to write a compiler that maps protocol descriptions to Otter inference rules and demodulators), it is not clear how much of the search strategy can be automated. This is the subject of work in progress. Future work will concentrate on extending our formalization to cover further aspects of electronic commerce.
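The envisaged compiler step can be illustrated with a minimal sketch. The arrow notation for protocol steps and the clause syntax below are simplified assumptions for illustration, not the actual Otter input format used in the analysis:

```python
# Sketch of mapping a protocol step written as "A -> B: {Na}Kab"
# to an Otter-style inference rule. The clause shape (negative literal
# implies positive literal) is a simplification assumed for this example.
import re

def step_to_clause(step: str) -> str:
    """Translate 'Sender -> Receiver: {Msg}Key' into a toy Otter clause string."""
    m = re.match(r"(\w+)\s*->\s*(\w+):\s*\{(\w+)\}(\w+)", step)
    if not m:
        raise ValueError(f"unparsable step: {step!r}")
    sender, receiver, msg, key = m.groups()
    # "if Sender sends the ciphertext to Receiver, then Receiver knows it"
    return (f"-sends({sender},{receiver},enc({msg},{key})) | "
            f"knows({receiver},enc({msg},{key})).")

print(step_to_clause("A -> B: {Na}Kab"))
# -sends(A,B,enc(Na,Kab)) | knows(B,enc(Na,Kab)).
```

A real translator would also have to emit demodulators for algebraic properties such as the RSA homomorphism discussed in section 5.3, which is where most of the hand work currently lies.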

References

[1] Mastercard and VISA Corporations. Secure Electronic Transactions (SET) Specification. http://www.setco.org, 1996.
[2] Electronic Commerce. A Guide for Better Practice. G8 Pilot Project on Global MarketPlace for SMEs. http://www.ispo.cec.be/Ecommerce/MoU/default.htm, 1997.
[3] Memorandum of Understanding. Open Access to Electronic Commerce for European SMEs. G8 Pilot Project on Global MarketPlace for SMEs. http://medianet.kauhajoki.fi/paraddis/sme/ec.htm, 1998.
[4] C. Boyd. Hidden assumptions in cryptographic protocols. In Proceedings of IEEE, volume 137, pages 433–436. IEEE, 1990.
[5] M. Burrows, M. Abadi, and R. Needham. A logic of authentication. Report 39, Digital Systems Research Center, Palo Alto, California, Feb 1989.
[6] J. Clark and J. Jacob. On the security of recent protocols. Information Processing Letters, 56:151–155, 1995.
[7] B. Clifford Neuman and G. Medvinsky. NetCheque, NetCash, and the characteristics of Internet payment systems. In MIT Workshop on Internet Economics, 1995.
[8] D. Denning and G. Sacco. Timestamps in key distribution protocols. Communications of the ACM, 24:533–536, 1981.
[9] R. Dixon. Client/Server and Open Systems. John Wiley and Sons, 1996.
[10] D. Dolev and A. Yao. On the security of public-key protocols. IEEE Transactions on Information Theory, 29:198–208, 1983.
[11] M. Bellare et al. iKP – a family of secure electronic payment protocols. In First Usenix Workshop on Electronic Commerce, 1995.
[12] S. Gürgens and R. Peralta. Efficient automated testing of cryptographic protocols. Technical report, GMD – German National Research Center for Information Technology, 1998.
[13] Digital Signature Law, Article 3 of the Law for Information and Communication Services (IuKDG), August 1997.
[14] ISO/IEC. ISO/IEC CD 7816-8.2: Identification cards – Integrated circuit(s) cards with contacts – Part 8: Security related interindustry commands, June 1997.
[15] G. Lowe. Breaking and fixing the Needham-Schroeder public-key protocol using CSP and FDR. In Second International Workshop, TACAS '96, volume 1055 of LNCS, pages 147–166. Springer-Verlag, 1996.
[16] W. Marrero, E. M. Clarke, and S. Jha. A model checker for authentication protocols. In DIMACS Workshop on Cryptographic Protocol Design and Verification, http://dimacs.rutgers.edu/Workshops/Security/, 1997.
[17] C. Meadows. A system for the specification and verification of key management protocols. In IEEE Symposium on Security and Privacy, pages 182–195. IEEE Computer Society Press, New York, 1991.
[18] C. A. Meadows. Analyzing the Needham-Schroeder public key protocol: A comparison of two approaches. In Proceedings of ESORICS. Springer, 1996.
[19] R. Needham and M. Schroeder. Using encryption for authentication in large networks of computers. Communications of the ACM, pages 993–999, 1978.
[20] DIN NI-17.4. Spezifikation der Schnittstelle zu Chipkarten mit Digitaler Signatur-Anwendung/Funktion nach SigG und SigV. DIN, November 1998.
[21] S. Schneider. Verifying authentication protocols with CSP. In IEEE Computer Security Foundations Workshop. IEEE, 1997.
[22] M. Sirbu and J. D. Tygar. NetBill: An electronic commerce system optimized for network delivered information services. In IEEE Compcon '95, 1995.
[23] M. Tatebayashi, N. Matsuzaki, and Newman. Key distribution protocol for digital mobile communication systems. In G. Brassard, editor, Advances in Cryptology – CRYPTO '89, volume 435 of LNCS, pages 324–333. Springer-Verlag, 1991.
[24] L. Wos, R. Overbeek, E. Lusk, and J. Boyle. Automated Reasoning: Introduction and Applications. McGraw-Hill, 1992.

