Green Cryptography: Cleaner Engineering Through Recycling

Justin Troutman
Extorque Information Security Engineering
[email protected]

Vincent Rijmen
Dept. of Electrical Engineering, K.U.Leuven and IBBT
Graz University of Technology
[email protected]

June 16, 2010
(The original draft is dated April 1, 2008.)

Abstract

We introduce the concept of “green cryptography,” which adopts the principle of recycling cryptographic design strategies, components, and primitives; in this essay, we’ll focus on the AES and its underlying block cipher, Rijndael. Cryptographic implementation is met with a mature and minimalist, “do a lot with a little” design paradigm – mature in that it recycles the rigorously cryptanalyzed AES, and minimalist in that it recycles the AES for both encryption and authentication, via generic composition, where we encrypt, then authenticate, separately (e.g., AES-CTR-then-AES-CMAC), or via an integrity-aware confidentiality mode of operation based on generic composition, where encryption and authentication are handled in a single mode (e.g., EAX). The end result is an implementation-centric framework for achieving the strongest notions of confidentiality and integrity, while retaining simplicity within the implementation. In short, recycling-based green cryptography is aimed at sustainable security within scalable implementations. We take a concise look – with an emphasis on symmetric cryptography – at some of the issues responsible for why cryptography usually ends up looking bad in practice, and fails to establish the right threat model, let alone realize it; this is largely due to a lack of cryptographic competence and the dreaded habit of crammed-in-and-cobbled-together design. To address these issues, we, with the assistance, and comedic relief, of Alice and Bob, give several rules of thumb for sufficient and simplistic cryptographic implementations. Be prepared for a bowl of acronymous porridge, but don’t worry; we’ll make sure it’s as easy to swallow as possible, and it might even up your Scrabble game. So, to the pulpit we go, ready to preach a sermon so desperately in need of being heard, and to which heed should be taken.


Contents

1 Green Cryptography
2 The Cryptographers (Or, AESthetically speaking)
  2.1 Recycling Design Strategies
  2.2 Recycling Components
  2.3 Recycling Primitives
  2.4 The Family Tree
  2.5 Limitations of Recycling
  2.6 Margins of Security
3 The Point of Diminishing Returns
4 The Developers (Or, Alice tutors Bob)
  4.1 Recycling: It’s good for the ecosystem, and your cryptosystem.
  4.2 If you want confidentiality, you want integrity too.
    4.2.1 Meet MAC
    4.2.2 Use the AES: It Comes With The Standardized Territory
  4.3 Putting it together with cryptographic Elmer’s
    4.3.1 Authenticated Encryption Via Generic Composition
    4.3.2 Authenticated Encryption Via Integrity-aware Modes
    4.3.3 Juxtaposing generic composition and integrity-aware modes
5 The Good, the Bad, and the Proof
  5.1 IND-CCA2 ∧ INT-CTXT security is what we want.
  5.2 Brief definitions, or “Briefinitions,” if you will.
  5.3 Mathematicians versus Cryptographers
6 From Developer to Implementation to User
7 Conclusion
8 Because One Conclusion Isn’t Enough
9 Acknowledgments


1 Green Cryptography

Green cryptography^1 is a school of thought that observes a natural progression of cryptographic primitives, from the moment they’re designed to the moment they’re deployed; it is realized through recycling, where design strategies, components, and primitives are recycled, supported by a twofold argument: Firstly, if we are comfortable, cryptanalytically speaking, with a design strategy, component, or primitive, we should attempt to recycle it. This makes sense, because the most compelling reason to use a design strategy, component, or primitive is that it has “earned its bones,” cryptanalytically. Secondly, to ensure the simplicity of implementations, we should recycle primitives whenever and wherever possible. This also makes sense, because complexity is the culprit behind nearly all instances of cryptography failing in practice. Concisely, recycling-based green cryptography is about maximizing confidence in cryptographic primitives while minimizing complexity in their implementation. A concept of practical heft, we’d say.

In the first section of this essay, we look at green cryptography from the perspective of cryptographers, who, despite having certain design criteria to meet, can be quite liberal in choosing what they recycle and how they recycle it, be it design strategies, components, or primitives. This portrays the more voluntary aspect of recycling, where most everything is at the cryptographer’s discretion. In the second section of this essay, we look at green cryptography from the perspective of developers, who are often bound by policies that dictate which cryptographic primitives they can, and cannot, use; needless to say, this often means sticking to standards and standards only. This portrays the involuntary aspect of recycling, where neither cryptographers nor developers, despite their ability to influence, get to choose what becomes a standard. Fortunately, the poster child for good recycling at work happens to be a standard – the AES.
From being the brainchild of Vincent Rijmen and Joan Daemen, to its ratification by NIST as a standard, to its current role as the most cryptanalytical-attention-grabbing block cipher in use today, it’s an attractive solution, and one that was tailored to fit into numerous environments. Of course, we have no qualms about potential niche applications with environmental constraints for which another secure block cipher may be more suitable. For the sake of this essay, though, we’re going to focus on the AES, the cryptographic primitive behind it, Rijndael, the components of which it is composed, and the design strategy on which it all rests. Not only that, but we’re going to serve an algorithmic appetizer to show you the right way to go about recycling the AES for both symmetric message encryption and message authentication, in order to achieve the strongest notions of confidentiality and integrity, with implementation simplicity in mind.

Recycling is a grand concept, and what it can do for an environment is worth the effort in doing it. We recycle for a cleaner ecosystem, and most everyone “gets it,” from those who design recyclables to those who end up using them and tossing them in the recycle bin. Of course, we could do a lot better^2, but most of us are aware of what happens when we do and what happens when we don’t. In cryptography, it’s much the same; we recycle for a cleaner cryptosystem. It’s a process that takes place from “conception to cellophane,” and one that preaches the fruits of mature (i.e., security) and minimalist (i.e., simplicity) cryptography – cryptosystems that achieve the strongest notions of security while being gutted of all the “unnecessaries.”

On the cryptographer side of things, green cryptography is about fostering the maturation of cryptographic design by recycling design strategies, components, and primitives. On the developer side of things, green cryptography is about doing a lot with a little, and recycling mature primitives whenever and wherever you can. Less is more! Simplicity is king! Cryptographic design boasts an impeccable track record – arguably the best out of all the layers of the proverbial security onion. Tragically, cryptographic deployment isn’t following suit, which is evidenced by all of the cryptosystems that fall apart due to excessive complexities. Green cryptography is our effort to educate and condition all those involved in the birth of a cryptosystem, so the paradigm of recycling can be fully realized.

1. And so another term enters the cryptographic lexicon.
2. A gigantic understatement, we know.

2 The Cryptographers (Or, AESthetically speaking)

First, it’s the cryptographers’ turn. They engage in three primary levels of recycling: design strategies, components, and primitives. In the following subsections, we’ll break each level down, with an emphasis on how cryptographers recycle each within the context of the AES.

2.1 Recycling Design Strategies

For a cryptographer, this is the “game plan.” Before the components are formed and the primitives named, a design strategy is hatched. The design strategy lays out how components will be shaped, in order to meet specific cryptographic goals; that is, to ensure resistance against specific cryptanalytical techniques, such as linear cryptanalysis (in the known-plaintext attack model) and differential cryptanalysis (in the chosen-plaintext attack model). In particular, we like the wide trail strategy[1, 2], which renders efficient round transformations and allows for provable bounds on the correlation of linear trails and the weight of differential trails. In other words, the goal of this strategy is to provide provable security against linear and differential cryptanalysis, while achieving high performance on a myriad of platforms. The predecessors and successors of Rijndael, and even stream ciphers and hash functions, have latched onto this strategy – a culmination of conservative design, across the board. It’s a recipe for building a primitive that facilitates analysis such that security properties are easily captured, while not sacrificing its ability to perform, and perform well, most everywhere you throw it.

2.2 Recycling Components

Most folks are familiar with the colorful, animated names^3 that are given to primitives, but don’t think about the guts that lie beneath the epidermis of a name tag. We can look at the composition of a cryptographic primitive in much the same way as we look at the composition of ourselves, in that we’re both composed of “organs^4,” which have been designed to carry out specific functions and achieve certain properties; if they fail to do so, it could jeopardize our survival. It’s no different with a cryptographic primitive. To ensure a “healthy” primitive, cryptographers employ a toolkit’s worth of techniques for determining the resistance of a primitive’s organs against an onslaught of numerous attacks; this requires the cryptographer to also be a good cryptanalyst, because to know what to do, you need to know what not to do. In other words, to understand security is to first recognize insecurity. Cryptanalysis determines whether or not advertised security claims are being lived up to, by looking at individual components, interaction between components, and the composite primitive as a whole.

While the wide trail strategy selects different components according to separate design criteria, the general focus remains the same for each component: optimize the worst-case behavior. This optimization is approached in two ways: local and global. Local optimization is concerned with building a round transformation for which the worst-case behavior for one round is optimized; global optimization is concerned with building a round transformation for which the worst-case behavior for a sequence of rounds is optimized. Both of these approaches are used to determine the number of rounds necessary to provide resistance against linear and differential cryptanalysis; for a security margin, more rounds are usually tacked on^5. Ultimately, the wide trail strategy allows us to evaluate and select components independently, while providing security bounds for the composite.

Rijndael’s round transformation consists of four transformations, or steps: SubBytes, ShiftRows, MixColumns, and AddRoundKey. With the exception of the final round, which omits the MixColumns step, Rijndael recycles them all in a specific number of iterations, or rounds – the number of which depends on the block and key length. Recycling components leads to designs with simple descriptions; the benefit of this is two-fold. Not only does a simple description form a more attractive target for cryptanalysts, who will be more inclined to spend time evaluating the primitive, but it facilitates the correctness of implementations, which, as you’ll see later on in the essay, is the make-or-break dictator of cryptography’s ability to do its job in practice. We want the highest level of confidence in a primitive that we can get, and this two-fold benefit of recycling components is a great way to go about contributing to that.

3. Blowfish, Twofish, Serpent, and Square. Lion, Tiger, Shark, and a Bear! So many names for us to rhyme, but it just so happens that we don’t have the time.
4. We can’t help but see the parallel between cryptanalysis and a nice, retro-inspiring game of Operation™.
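The “simple description” point is easy to see in code. Below is a toy skeleton of the round schedule just described – the same four steps recycled every round, with the final round omitting MixColumns. The step functions are inert placeholders (this is not the real AES math); only the scheduling of the steps is what we’re illustrating.

```python
# Toy sketch of the Rijndael round structure: the four steps are recycled
# each round, and the final round omits MixColumns. The step functions are
# placeholders (NOT the real transformations); only the schedule is shown.

def sub_bytes(state):    return state  # placeholder for the S-box layer
def shift_rows(state):   return state  # placeholder for the byte permutation
def mix_columns(state):  return state  # placeholder for the MDS diffusion layer
def add_round_key(state, rk): return state  # placeholder for the key XOR

def rijndael_skeleton(state, round_keys, trace):
    """Apply the round schedule; 11 round keys gives the AES-128 case (10 rounds)."""
    state = add_round_key(state, round_keys[0]); trace.append("AddRoundKey")
    n_rounds = len(round_keys) - 1
    for r in range(1, n_rounds + 1):
        state = sub_bytes(state);  trace.append("SubBytes")
        state = shift_rows(state); trace.append("ShiftRows")
        if r < n_rounds:  # the final round omits MixColumns
            state = mix_columns(state); trace.append("MixColumns")
        state = add_round_key(state, round_keys[r]); trace.append("AddRoundKey")
    return state

trace = []
rijndael_skeleton(bytes(16), [bytes(16)] * 11, trace)  # 11 round keys: AES-128
print(trace.count("MixColumns"))  # 9: every round but the last
```

A dozen lines capture the whole control flow – exactly the kind of description that invites cryptanalysis and keeps implementations honest.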

2.3 Recycling Primitives

When you think of the AES, encryption is usually the concept that comes to mind; after all, it is called the Advanced Encryption Standard. However, there’s an equally important concept that the AES can address: authentication. MACs, or Message Authentication Codes, preserve integrity. Two examples of recycling the AES for this purpose are CMAC[3], a block cipher-based MAC that was built to accommodate the AES as its underlying primitive, and EAX[4], an integrity-aware confidentiality mode, which takes care not only of message authentication but of message encryption as well. Not only can we recycle primitives for different purposes, such as encryption and authentication, but we can also recycle primitives in the design of new primitives. As for recycling the AES in block ciphers and MACs, we say, “Feel free.” Heck, even stream ciphers are taking notes![5]

2.4 The Family Tree

To further our chances of getting that “Oh! I had no idea recycling was that big of a deal” reaction from you, we thought some visuals should be thrown in. With that in mind, we present to you a pedigree of Rijndael. In-laws, outlaws, and even the sister’s best friend’s boyfriend’s grandmother’s hairdresser – if they recycled something, they’re on some branch of the recycling tree.

5. Of course, you’ve got to be careful when you consider margins of security. See §2.6.

Figure 1: Family tree of cryptographic primitive recycling.

In Figure 1, we see two essential elements of Rijndael that have been recycled numerous times. The first element is the branch number; this concept was proposed in 1995 as a measure for the diffusion quality of a map. From this came the MDS-based diffusion, pioneered in SHARK, which later evolved into the two-step diffusion exhibited in Square. And then there’s the S-box based on inversion in a finite field; it’s because of its superior nonlinearity that it has been recycled over and over again. This pedigree of Rijndael’s predecessors, successors, and other related primitives – block ciphers, stream ciphers, and hash functions – shows around a decade’s worth of recycling, which makes it evident that recycling is a viable design paradigm, and one that’s well established and liberally applied.
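The branch-number idea can be checked concretely. The matrix below is the published AES MixColumns matrix, circ(2, 3, 1, 1) over GF(2⁸); being MDS, it has branch number 5 on 4-byte columns, so any column with a single nonzero byte must map to a column with all four bytes nonzero. The exhaustive weight-1 check is our own small sketch of that property.

```python
# Checking the MDS/branch-number property of AES MixColumns: branch number 5
# means a weight-1 input column always yields a weight-4 output column.

def xtime(a: int) -> int:
    """Multiply by x (i.e., by 2) in GF(2^8) modulo the AES polynomial 0x11b."""
    a <<= 1
    return (a ^ 0x1B) & 0xFF if a & 0x100 else a

def gmul(a: int, c: int) -> int:
    """Multiply a by a MixColumns constant c (only 1, 2, 3 are ever needed)."""
    return {1: a, 2: xtime(a), 3: xtime(a) ^ a}[c]

def mix_column(col):
    """AES MixColumns applied to one 4-byte column."""
    m = [(2, 3, 1, 1), (1, 2, 3, 1), (1, 1, 2, 3), (3, 1, 1, 2)]
    return [gmul(col[0], r[0]) ^ gmul(col[1], r[1]) ^
            gmul(col[2], r[2]) ^ gmul(col[3], r[3]) for r in m]

# Every single-nonzero-byte input column must diffuse into all four bytes.
ok = all(
    all(b != 0 for b in mix_column([v if i == pos else 0 for i in range(4)]))
    for pos in range(4)
    for v in range(1, 256)
)
print(ok)  # True: weight-1 in, weight-4 out, consistent with branch number 5
```

(A full branch-number proof covers all 2³² inputs; the weight-1 slice above is the part that’s cheap to verify exhaustively.)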


2.5 Limitations of Recycling

While the wide trail strategy can certainly be recycled in the design of hash functions, as exhibited in the designs of both Whirlpool[6] and Maelstrom-0[7], recycling the AES in hash functions is met with uncertainty. Truth be told, we don’t know enough about what constitutes good criteria for designing cryptographically-secure hash functions. Furthermore, there’s a significant difference to consider. Hash functions don’t have secret keys; block ciphers do. Given that, a block cipher used in a hash function should be secure in situations where the key is not secret. Unfortunately, there aren’t any good methods for evaluating the security of a block cipher when a cryptanalyst knows the key. Fret not, though; cryptographers are working on it, and we’re hopeful that the in-progress SHA-3 competition will be a prime catalyst for generating answers in this matter. In fact, quite a few of the candidates have recycled the AES in some way, so we’d say the conditions for answers are quite favorable. And with that, we give you a pedigree of SHA-3 candidates that recycle from the AES.

Figure 2: Pedigree of SHA-3 candidates that recycle from Rijndael.

In Figure 2, to show the continued viability of recycling, we look at a number of SHA-3 candidates that look to the AES for structural advice; with a Wikipedia-styled “[citation needed],” it has been said that the AES is the real winner of the SHA-3 competition, due to how much the SHA-3 candidates have recycled from it.

2.6 Margins of Security

Mmm. Security margins. It’s this one metric that’s predominantly responsible for the “preference battle” between Rijndael, Twofish, and Serpent; it’s a battle with regiments spanning forums galore. “Serpent has the largest margin of security, so it’s the most secure,” says a zealous advocate of the tank-of-a-block-cipher that Serpent is. First things first; neither this section nor this essay is anti-security-margin. The concept of a security margin is quite useful as a design metric, but its usefulness is bounded by its limitations; to get the most out of it, it pays to be aware of them. Don’t assume that a block cipher with the most rounds is the most secure; it’s not as easy as a round-measuring competition. Suppose we have two block ciphers, A and B. A is a Feistel, while B is an SPN. There’s a truncated differential attack on A that covers 7 of its 10 rounds and a higher-order differential-linear attack on B that covers 20 of its 40 rounds. On the surface, we might say that B is stronger than A, because it still has 20 rounds left untouched, whereas A is only 3 rounds shy of a full break. On the other hand, this assumption starts to lose foundation when you consider that we’re talking about two block ciphers based on different structural designs, employing two different round transformations, and analyzed under two different attack models. It could very well be the case that extending the attack on B’s 20 remaining rounds is easier than extending the attack on A’s 3 remaining rounds; then again, maybe it’s the other way around. The number of rounds in question isn’t conclusive; it all depends on what’s going on inside the round transformation. To further support this point, let’s look at an aspect of Rijndael’s round transformation that outscores the round transformations of Twofish and Serpent, respectively: full diffusion. First, Twofish is a Feistel network, unlike Rijndael and Serpent, which are Substitution-Permutation Networks (SPN).
With a Feistel, each bit of a plaintext block is modified at least once after two rounds. Because of this, we need to introduce the notion of a cycle, where one cycle equals two rounds, in the case of a Feistel, and one round, in the case of an SPN, which allows us to compare Feistels and SPNs. Easy enough. Rijndael achieves full diffusion after two rounds, while Serpent achieves full diffusion after three rounds, with Twofish pulling in last by achieving full diffusion after four rounds. Here are some tables to put these numbers in perspective:

Table 1
Block Cipher   Cycles (Rounds)   Full Diffusion Steps
Serpent        32 (32 rounds)    10.67
Rijndael       10 (10 rounds)    5
Twofish         8 (16 rounds)    4

Table 1: Figures reflect the ratio of cycles to full diffusion steps for each block cipher set at its defined number of rounds.


Table 2
Block Cipher   Cycles (Rounds)   Full Diffusion Steps
Rijndael       10 (10)           5
Serpent        10 (10)           3.33
Twofish         5 (10)           2.5

Table 2: Figures reflect the ratio of cycles (rounds) to full diffusion steps for all block ciphers set at 10 rounds.

Table 3
Block Cipher   Full Diffusion Steps   Cycles (Rounds)
Rijndael       5                      10 (10)
Serpent        5                      15 (15)
Twofish        5                      10 (20)

Table 3: Figures reflect the necessary number of cycles (rounds) to achieve 5 full diffusion steps.

In Table 1, the block ciphers are set at their defined number of rounds – Rijndael, 10; Twofish, 16; and Serpent, 32 – assuming a 128-bit block length and key length; this assumption applies to all tables. Obviously, Serpent’s juggernaut-of-a-round-count brings it out on top. In Table 2, the block ciphers are set to 10 rounds, showing the number of full diffusion steps achieved. In Table 3, the block ciphers are set to the necessary number of rounds in order to achieve 5 full diffusion steps. In Tables 2 and 3, the efficiency of Rijndael’s round transformation is obvious.
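Under the cycle convention just defined, every figure in the three tables follows from one datum per cipher: how many cycles a full diffusion step takes. A small sketch recomputing them (the per-cipher diffusion figures are the ones stated in the text):

```python
# Recomputing the table figures from one datum per cipher: cycles per full
# diffusion step (one cycle = one SPN round or two Feistel rounds).

CIPHERS = {
    # name: (cycles per full diffusion step, rounds per cycle, defined rounds)
    "Serpent":  (3, 1, 32),   # SPN, full diffusion after 3 rounds
    "Rijndael": (2, 1, 10),   # SPN, full diffusion after 2 rounds
    "Twofish":  (2, 2, 16),   # Feistel, full diffusion after 4 rounds
}

for name, (cyc_per_step, rounds_per_cycle, rounds) in CIPHERS.items():
    cycles = rounds // rounds_per_cycle
    print(f"{name}: {cycles} cycles ({rounds} rounds) -> "
          f"{cycles / cyc_per_step:.2f} full diffusion steps")          # Table 1
    print(f"  at 10 rounds: {10 // rounds_per_cycle / cyc_per_step:.2f} steps")  # Table 2
    print(f"  for 5 steps:  {5 * cyc_per_step * rounds_per_cycle} rounds")       # Table 3
```

Running it reproduces Serpent’s 10.67, Rijndael’s 5, and Twofish’s 4 from Table 1, and likewise the Table 2 and Table 3 columns.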

3 The Point of Diminishing Returns

Bear in mind that all measurements of security margins are valid only for the attacks that we know about now. Adding more rounds for the sake of a larger security margin sounds a bit like building dikes or levees a bit higher, to withstand an increase in water level. Unfortunately, it doesn’t work in the same deterministic way. Sure, if future cryptanalytical methods are a lot like the cryptanalytical methods of today, but perhaps a little better, then those block ciphers that padded themselves with extra rounds will profit. On the other hand, if these future attacks are completely different, then we have no way of gauging which block ciphers have the largest security margin at the present.

Revisiting a tried-and-true analogy, look at the block cipher as a house that you’re trying to protect against a burglar. Since we have a lot of experience with securing doors, we might put multiple locking mechanisms in place; hopefully, they will thwart the burglar’s attempts by outsmarting his lockpicking capabilities. This may work, but only if we assume that the burglar will only try to pick the locks. What happens if he decides to climb through the window or descend from the chimney? Ultimately, our security margin depends on burglars not getting much better at lockpicking in the future; if they get better at some other means of entry, then we’re out of luck.

And then comes convenience. In practice, security and usability must strike a balance; otherwise, it might turn into a “versus.” Tragically for us, as cryptographers, convenience trumps security all too often. After a (short) while, 15 locks on our front door will become more cumbersome than they’re worth; this is why we don’t do it in the first place (setting aside that this is probably a lousy use of security dollars anyway). When designing primitives, we’re often faced with the task of designing them such that they fit and perform well in a variety of architectures. Some applications will impose very tight constraints, and it’s applications like this that often answer the question, “How much security margin do I want and can I afford it?” Just as a bulky primitive may sacrifice its implementability for extra security margin padding, so will any security feature be ditched if its users find the overhead too much of a hassle. It’s not that security margins are a bad idea; it’s certainly wise to be conservative. Just as well, though, it needs to make sense in context, because practical cryptography is often dictated by non-cryptography – and even non-security – issues. For security to be at its optimal game, it needs to do its job as transparently as possible. Don’t be too noticeable. As consumers of security, we’re willing to make trade-offs, but only if it doesn’t hinder our ability to live out the human condition as we see fit.

4 The Developers (Or, Alice tutors Bob)

Now, it’s the developers’ turn to pick up where the cryptographers left off, by recycling primitives. With a bit of comedic relief from Alice and Bob, we’ll look at why developers should recycle, two fundamental goals they should be concerned with – confidentiality and integrity – and how to go about achieving those goals through recycling.

4.1 Recycling: It’s good for the ecosystem, and your cryptosystem.

With a cryptographic palette full of block ciphers to choose from, you might find yourself wondering which to use. The answer is simpler than you might think: Use the AES[8, 9], unless there are constraints that don’t allow you to. (A possible exception would be niche applications that impose environmental constraints for which another block cipher is more suitable.) The most compelling reason for using the AES is based on what happens as a result of being a standard - it receives more cryptanalytical attention than any of the other AES finalist candidates, which gives us a solid cryptographic position on recommending it. Not only that, but it fits in with the engineering principle of “recycling primitives.” Oh, and we don’t want to hear any of that “I need other block ciphers just in case the AES is broken” sputter; the odds that you’ll make an implementation blunder are a stack of magnitudes greater than the surfacing of a practical attack on the AES. So, here’s something we ask of you: Unless you absolutely cannot, use the AES. “What’s so holy about the alliance between the AES and recycling primitives?” you might question. Alice, you take it from here.

“Hey Alice, I picked up some things at the Diffie Mart.” “Please tell me you didn’t forget the Hellman’s mayonnaise.” “Calm down, Alice. It’s right here.” “Whew. So, what else did you pick up?” “Well, they had a sale on fresh Twofish and Blowfish, so I got a filet of each. Let’s see, I bought a box of HMAC^6 ’n cheese - the SHArp^7 cheddar kind. Oh, and while I was in the check-out lane, I saw a MARS^8 bar. I know, I know, but I’ve been craving one.” “Bob, why do we need all of that?” “The more, the merrier, Alice! The more, the merrier!” “Oh Bob, now we have to worry about cooking fish, and you know I can’t cook fish. HMAC ’n cheese and a MARS bar? Seriously, Bob. Seriously. Things would be a lot easier on our wallets, and our stomachs, if we just stick with our AES diet. It has all we need, and there’s a lot of analysis to back it.” “Can I just keep the MARS bar?” “Bob.” “Yeah, yeah, Alice. You win.”

6. Hash function-based Message Authentication Code.
7. Secure Hash Algorithm.
8. IBM’s submission to the AES selection process.

Bob is a complete smörgåsbord addict. To him, more is more, and more is good. What he’s missing is the fact that cryptographic implementation isn’t served well by this way of thinking. Academia-borne cryptographic design has a remarkably good-looking track record, and you’d hope that this would carry over to its implementation. Tragically, this is far from the case. In practice, when cryptography fails, it’s almost never the cryptography’s fault; if we’re looking to point fingers, we usually look in the direction of the implementation. Many developers, like Bob, try to cram in as much cryptography as possible. Oh yeah. Let’s throw in every block cipher known to Bruce. How about an assortment of stream ciphers and hash functions? How many do you intend to invite to this primitive party? If you’re a “Bob,” you might think something along the lines of, “Aren’t more options better?” If you’re an “Alice,” you’ll see that more options lead to more complexity. The more you attempt to shove into an implementation, the more likely you are to make a mistake. And believe me when I say that even the most subtle of mistakes can pack a huge punch. Don’t get caught up in the illusion that you need a lot to do a lot; in fact, you can do a lot with a little. Less is more. Be resourceful. Reuse what’s already there; that is, recycle. If you can use a single primitive for multiple purposes, then by all means, do so. Simplify the implementation by using good design rationale, because the fruits of your decisions are passed on to the implementation and the user.
Don’t burden the implementation with unnecessary complexities or smother the user with a plethora of configurations, as this is where things usually go wrong. By using the AES as the underlying primitive in our encryption scheme, authentication scheme, and PRF^9, or pseudorandom function, we do our implementation both cryptographic and engineering favors. Two birds, one - oh, you know^10. You see, by recycling primitives, we not only simplify the implementation, but we take advantage of the cryptanalysis that the AES has received, in each of the schemes in which we recycle it. This reflects the perpetually mutual relationship between cryptography and engineering, within the context of recycling primitives, when the correctness and security of the implementation are at stake. It’s great to know what should be in your toolbox, but that doesn’t amount to much if you don’t know how to use it; this is why you’re probably about to say, “So, guys, we know we need confidentiality and integrity, and we know we can use the AES to achieve both, but how do we put this all together?” Time to break out the alphabet adhesive.
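The “one primitive, two goals” recycling above can be sketched as Encrypt-then-MAC. The essay’s actual recommendation is AES-CTR-then-AES-CMAC; since Python’s standard library ships neither AES nor CMAC, a toy hash-counter keystream stands in for AES-CTR and HMAC-SHA256 for AES-CMAC. The composition - encrypt, then tag the ciphertext, then verify before decrypting - is the point, not the stand-in primitives.

```python
# Encrypt-then-MAC generic composition, sketched with stdlib stand-ins.
# toy_ctr is NOT AES-CTR and HMAC-SHA256 is NOT AES-CMAC; they only make
# the composition runnable without a crypto library.

import hashlib
import hmac

def toy_ctr(key: bytes, nonce: bytes, data: bytes) -> bytes:
    """Illustrative XOR keystream (hash of key||nonce||counter). NOT secure."""
    stream = b""
    block = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + nonce + block.to_bytes(8, "big")).digest()
        block += 1
    return bytes(d ^ s for d, s in zip(data, stream))

def seal(enc_key, mac_key, nonce, plaintext):
    """Encrypt first, then compute the tag over the ciphertext (and nonce)."""
    ct = toy_ctr(enc_key, nonce, plaintext)
    t = hmac.new(mac_key, nonce + ct, hashlib.sha256).digest()
    return ct, t

def open_(enc_key, mac_key, nonce, ct, t):
    """Verify the tag in constant time; only then decrypt."""
    expect = hmac.new(mac_key, nonce + ct, hashlib.sha256).digest()
    if not hmac.compare_digest(expect, t):
        raise ValueError("tag mismatch: ciphertext rejected")
    return toy_ctr(enc_key, nonce, ct)

enc_key, mac_key, nonce = b"E" * 16, b"M" * 16, b"N" * 8  # distinct keys per goal
ct, t = seal(enc_key, mac_key, nonce, b"attack at dawn")
print(open_(enc_key, mac_key, nonce, ct, t))  # b'attack at dawn'
```

Note the two distinct keys: recycling one primitive does not mean recycling one key across encryption and authentication.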

4.2 If you want confidentiality, you want integrity too.

“Hey Alice. You’re looking splendid, as usual.” “You’re not too bad yourself, Bob.” “I don’t know about you, Alice, but I’m feeling a little exposed. Why don’t we put on our matching sweaters that Vincent and Joan gave us? 100% SPN^11. How much more elegant can encryption get? The block pattern is great. Argyle just doesn’t do it for me anymore.” “You’re right, Bob. The length is just right for me, too - 128 bits. But, aren’t you forgetting something?” “We’ve got encryption. What more do we need?” “We might as well go out in our birthday paradox suits, if all we’re wearing is encryption.” “I’m not following you, Alice.” “Bob, Bob, Bob. How do we get confidentiality?” “We use encryption, Alice. Are you feeling alright?” “I’m fine, Bob, but if you don’t put on your MAC^12, you won’t be.” “Wait a minute. If all I want is confidentiality, why are you telling me to put on my MAC? I already checked the forecast and I don’t think I’ll need integrity today.” “You might not think you’ll need it, but you probably will. Put it on, Bob. Now.” “Okay, okay, Alice. Easy.”

Alice and Bob. At times, it’s all they can do to agree on a key, but Bob’s leading lady is going to save his ciphertext. If I were to ask you, “How do you achieve confidentiality?” your retort might also be, “That’s easy; use encryption.” And so it goes that confidentiality is all too often the only goal that’s considered. There’s another goal, though, that, if not met, results in grave consequences. For those of you who don’t know about it - you probably should; for those of you who know about it, but don’t think you need it - you’re probably wrong. That goal is integrity, and the consequences for not achieving this goal are often far worse than those for not achieving confidentiality. Now you may be thinking to yourself, “Why does he insist that we need integrity? Okay, so there are consequences for not having it, but we want confidentiality, not integrity.” The reality is quite the contrary. When we want confidentiality, the obvious thought is, “we need encryption,” but, realistically, it’s much wiser to say that if we want confidentiality, “we need authentication, too.” If that’s not enough for you, consider that one of these consequences might include losing confidentiality, as well. In [10], Bellovin cuts to the chase with his cut-and-paste attacks on IPsec, and makes a point, and very clearly at that: “It is quite clear that encryption without integrity checking is all but useless. We strongly recommend that all systems mandate joint use of the two options.” That was 1996.

9. Pseudorandom function; the MACs we discuss in this essay, namely the identical OMAC1 and CMAC constructions, are good PRFs, which implies that they’re also good MACs.
10. No birds were harmed during the writing of this essay.
11. Substitution-Permutation Network.
In [11], Vaudenay furthers the case for authentication with his side-channel attacks on CBC padding, with applications to IPsec. By mirroring the work of Bleichenbacher, in [12], and Manger, in [13], in the asymmetric case, Vaudenay shows that the symmetric case is just as vulnerable to these types of side channels when an adversary is able to distinguish between valid and invalid ciphertexts. In [14], Black and Urtubia extend this work, in a more efficient manner, and conclude that "perhaps it is time to view authentication as a strongly-desirable property of any symmetric encryption scheme, including those where privacy is the only security goal," and that "the way to prevent all of these attacks is to insist on integrity of ciphertexts13 in addition to semantic security14 as the 'proper' notion of privacy for symmetric encryption schemes." That was 2002 – six years later. Some folks mistakenly look at ciphertext as plaintext that has been sprayed with OFF! repellent, such that any curious, malicious mosquitoes (malsquitoes, if you will) would be quickly deterred from any potential attempt at latching on. Unfortunately, S. C. Johnson & Son isn't in the cryptography business, and even when using a good block cipher confidentiality mode of operation, such as CTR or CBC, ciphertext is still malleable. Yet, here we are in 2009, another seven years later, reviving the pioneering work of yesterday, with the "in the times" updates of today, begging, pleading, preaching, and near bullying, all for one cause. You say confidentiality is what you want – not integrity. We're telling you that if you want the former, you better have the latter. In short, if you don't want it – tough. Use it anyway! Confucius says. Simon too. Jantje zegt. O mestre mandou. Alright, they didn't, but we're pretty sure they could be easily convinced.

12. Message Authentication Code.
13. See §4.
14. See §4.
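Since "malleable" can feel abstract, here is a toy sketch of the problem. The keystream below is a hash-in-counter-mode stand-in for AES-CTR (purely illustrative, not for real use); the point is that anyone who knows or guesses the plaintext can rewrite it, bit for bit, without ever touching the key – and nothing in the ciphertext alone will tell you it happened.

```python
import hashlib

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    # Toy CTR-style keystream built from a hash; a stand-in for AES-CTR,
    # purely to illustrate malleability. Do not use for real encryption.
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def encrypt(key, nonce, plaintext):
    return xor(plaintext, keystream(key, nonce, len(plaintext)))

decrypt = encrypt  # XOR-based encryption is its own inverse

key, nonce = b"k" * 16, b"n" * 8
ct = encrypt(key, nonce, b"PAY $0000100 TO ALICE")
# The attacker never touches the key: XORing the ciphertext with the
# difference between the known plaintext and a chosen one rewrites it.
delta = xor(b"PAY $0000100 TO ALICE", b"PAY $9999999 TO MALET")
tampered = xor(ct, delta)
assert decrypt(key, nonce, tampered) == b"PAY $9999999 TO MALET"
```

Without a MAC over the ciphertext, the receiver decrypts the forged message as happily as the genuine one.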


4.2.1 Meet MAC

Meet MAC, otherwise known by its full name, Message Authentication Code. If you've already met, it won't hurt to get to know each other a little better. Laymanizing this as much as possible, a MAC achieves the goal of integrity; if an adversary attempts to manipulate ciphertext, a MAC detects it. You feed the MAC a shared secret key and a message of arbitrary length, and it spits out a tag. Let's say Alice is sending a message to Bob. She'll encrypt it, then compute a tag on the ciphertext15 [15, 16], which will accompany the ciphertext on its way to Bob. When Bob receives the message, he recomputes a tag on the ciphertext, using the same shared secret key, and compares it with the tag that accompanied the ciphertext. If the tags match, the integrity of the message has been preserved; if not, the ciphertext has probably been tampered with. When we're ready to put this wisdom into action, we might, for example, recommend CMAC16 for message authentication; it's a MAC that uses a block cipher as its underlying primitive. This allows us to use the AES17, which we already like to use for message encryption. Why do we like the AES, and why do we want to use it for both encryption (i.e., to achieve confidentiality) and authentication (i.e., to achieve integrity)? Well, go pour yourself a glass of your poison-of-choice18, then sit back down. There's more crypto-sorcery to come.
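The tag-then-verify flow above can be sketched in a few lines. Python's standard library has no CMAC, so HMAC-SHA256 stands in for AES-CMAC here; the `tag`/`verify` names are ours. The structure is what matters: Bob recomputes the tag and compares it, in constant time, before trusting anything.

```python
import hmac
import hashlib

def tag(ka: bytes, ciphertext: bytes) -> bytes:
    # HMAC-SHA256 stands in for AES-CMAC (the stdlib has no CMAC);
    # the verify-before-trust flow is the same either way.
    return hmac.new(ka, ciphertext, hashlib.sha256).digest()

def verify(ka: bytes, ciphertext: bytes, received: bytes) -> bool:
    # compare_digest compares in constant time, so an attacker can't
    # learn where the comparison first fails.
    return hmac.compare_digest(tag(ka, ciphertext), received)

ka = b"shared-authentication-key-32byte"
ct = b"ciphertext Alice already produced"
t = tag(ka, ct)                        # Alice appends t to ct
assert verify(ka, ct, t)               # Bob: tags match, integrity holds
assert not verify(ka, ct + b"x", t)    # tampered ciphertext is rejected
```

Note the use of `hmac.compare_digest` rather than `==`: a naive byte-by-byte comparison can leak, through timing, how much of a forged tag was correct.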

4.2.2 Use the AES: It Comes With The Standardized Territory

Once more, we want to reinforce our recommendation of using the AES. Take the foremost cryptographers from all corners of this planetary marble we call Earth, throw them together into a free-for-all, and you've got yourself one big cryptographic shindig bound to churn out something good. Untethered access to the best of the best – a cryptographic primitive can't ask for much more. Such cryptanalysis wasn't limited to the AES selection process, however; in fact, once Rijndael became a standard, it became a target, and the most popular one at that, marking the beginning of an ongoing cryptanalytical bombardment that won't cease. The AES carries an enormous burden, and one that policies everywhere will require it to shoulder on its own. In such a case, the reasoning behind recommending the AES becomes as clear as it is simple; after all, it's receiving more cryptanalytical attention than any other block cipher. We can't think of a more compelling reason to recommend a cryptographic primitive. Fielded designs should be designs we're comfortable with, and in whose structure we have confidence. What we want developers to take from this is that they shouldn't look at implementing the AES as merely "policy"; they should look at it as being prudent. We've been blessed with a healthy standard. Embrace it, whenever and wherever possible.

4.3 Putting it together with cryptographic Elmer's

It's not enough to know the goals you want to achieve, or even the tools necessary for achieving them, if you don't know how to use the tools. Whenever cryptographic implementations fall apart, it's often because the developer had the right idea, and the right tools, but failed to correctly and securely realize it. We've helped you define the right goals, and given you the tools to achieve them, so now let's break out the cryptographic glue you'll need to stick it all together.

15. For cryptographic reasons, as examined in [15, 16], we recommend computing a tag on the ciphertext, instead of the plaintext.
16. Block cipher-based Message Authentication Code.
17. Advanced Encryption Standard.
18. Ours happens to be sweet iced tea.

4.3.1 Authenticated Encryption Via Generic Composition

By now, we're going to assume that you understand the goals of confidentiality and integrity, as well as the mantra of, "If you want confidentiality, you want integrity too" [17]. We're going to lay out a simple framework for achieving these two goals through a generic composition of message encryption, to address confidentiality, and message authentication, to address integrity. Simply put, we're dealing with two base schemes: Message Encryption (ME = (Ke, E, D)) and Message Authentication (MA = (Ka, T, V)). AE = (K, E, D) is the authenticated encryption scheme obtained by composing ME and MA in the Encrypt-then-Authenticate composition19. You'll probably notice that we've chosen to encrypt first, then authenticate; this is crucial, because it's the only generically secure composition, in the sense that it's secure for all possible secure instantiations of its constituent primitives. Being secure from all angles, it's the least error-prone, and anything that makes it easier for developers to get right is a gigantic plus. Not only that, but it allows us to achieve the strongest notions of confidentiality and integrity: IND-CCA2 ∧ INT-CTXT.

Algorithm K:
  Ke ←$ Ke
  Ka ←$ Ka
  Return Ke||Ka

Algorithm E(Ke||Ka, M):
  C′ ←$ E(Ke, M)
  τ′ ←$ T(Ka, C′)
  C ← C′||τ′
  Return C

Algorithm D(Ke||Ka, C):
  Parse C as C′||τ′
  M ← D(Ke, C′)
  v ← V(Ka, C′, τ′)
  If v = 1 then return M, else return ⊥

The premise is simple: Key generation algorithm, K, is used to derive both the encryption key, Ke, and the authentication key, Ka. Alice encrypts a plaintext message, M, using encryption algorithm E, with encryption key Ke, yielding the ciphertext, C′. She then computes a tag, τ′, on the ciphertext, C′, using tagging algorithm T, with authentication key Ka. She appends the tag, τ′, to the ciphertext, C′, yielding the tagged ciphertext, C, which she sends to Bob. Bob parses C as C′||τ′. He then verifies the ciphertext, C′, using verification algorithm V, with authentication key Ka. If the tag he computes matches the tag, τ′, that Alice sent along with the ciphertext, C′, he proceeds to decrypt the ciphertext, C′, using decryption algorithm D, with encryption key Ke, thus returning the plaintext message, M (i.e., if v = 1 then return M, signifying a valid ciphertext); if it doesn't match, the ciphertext is discarded (i.e., else return ⊥, signifying an invalid ciphertext).

19. In [16], the authors refer to this as "Encrypt-then-MAC," which is equivalent to our "Encrypt-then-Authenticate." We made this alteration in order to better coincide with our base schemes of Message Encryption, ME, and Message Authentication, MA; as such, their notation for Symmetric Encryption, SE, and the MAC key, Km, becomes Message Encryption, ME, and authentication key, Ka, respectively. However, there is no change in the meaning of the notation, so please refer to [16] for a detailed look at this composition and its associated security proofs.
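The Encrypt-then-Authenticate composition can be sketched in Python. A hash-based toy stream cipher stands in for AES-CTR, and HMAC-SHA256 for AES-CMAC (neither AES-CTR nor CMAC lives in the stdlib); the names `gen_keys`, `seal`, and `open_` are ours. The structure is what the algorithms above prescribe: independent keys, tag computed over the ciphertext, and verification before any plaintext is returned.

```python
import hmac
import hashlib
import os

TAGLEN = 32  # bytes of HMAC-SHA256 output

def gen_keys():
    # Algorithm K: sample independent encryption and authentication keys.
    return os.urandom(16), os.urandom(32)  # (Ke, Ka)

def _stream(ke: bytes, nonce: bytes, n: int) -> bytes:
    # Toy hash-counter keystream standing in for AES-CTR.
    out = b""
    i = 0
    while len(out) < n:
        out += hashlib.sha256(ke + nonce + i.to_bytes(8, "big")).digest()
        i += 1
    return out[:n]

def seal(ke: bytes, ka: bytes, m: bytes) -> bytes:
    # Algorithm E: encrypt first, then tag the *ciphertext* (nonce included).
    nonce = os.urandom(8)
    body = nonce + bytes(x ^ y for x, y in zip(m, _stream(ke, nonce, len(m))))
    return body + hmac.new(ka, body, hashlib.sha256).digest()

def open_(ke: bytes, ka: bytes, c: bytes):
    # Algorithm D: verify the tag; only a valid ciphertext gets decrypted.
    body, t = c[:-TAGLEN], c[-TAGLEN:]
    if not hmac.compare_digest(hmac.new(ka, body, hashlib.sha256).digest(), t):
        return None  # the "bottom" symbol: invalid ciphertext
    nonce, ct = body[:8], body[8:]
    return bytes(x ^ y for x, y in zip(ct, _stream(ke, nonce, len(ct))))

ke, ka = gen_keys()
c = seal(ke, ka, b"attack at dawn")
assert open_(ke, ka, c) == b"attack at dawn"
assert open_(ke, ka, c[:-1] + bytes([c[-1] ^ 1])) is None  # forgery rejected
```

Swap in a real AES-CTR implementation and a real CMAC and the shape stays identical; that interchangeability is exactly what "generic" composition buys you.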

4.3.2 Authenticated Encryption Via Integrity-aware Modes

What if we had a single, self-contained mode of operation that addressed both confidentiality and integrity? You're in luck; we do. Confidentiality modes of operation, such as CTR20, are, well, confidentiality-only; they do nothing for integrity. With an integrity-aware mode of operation, you're getting integrity preservation on top of confidentiality preservation. Two birds, one – there's really no need to call PETA21. Oh, and such modes are often designed to directly achieve IND-CCA2 ∧ INT-CTXT security. A prime example of this is EAX [4], a mode for authenticated-encryption with associated data, or AEAD, where associated data refers to information that you wouldn't encrypt, such as a packet header. (On the other hand, while this associated data may not require confidentiality, it is often needed for the sake of authenticity.) It's a two-pass scheme, meaning that encryption is taken care of in the first pass, while authentication is handled by the second pass. What's particularly nifty about EAX is that, if you were to peer below its epidermis, you'd see a generic composition, known as EAX2; in fact, it uses CTR mode for confidentiality and OMAC1 for integrity22, with the latter being equivalent to CMAC. While a generic composition uses two separate keys – one for encryption and one for authentication – EAX2 collapses two keys into one. If you're partial towards generic composition, you really can't go wrong with EAX2; if you're in the market for an integrity-aware mode of operation, you really can't go wrong with EAX, either, given that EAX2 makes up its guts. As far as authenticated encryption is concerned, cryptographers have ensured that you're well covered, and efficiently so. We're taking out more birds than even Hitchcock's imagination could summon.
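EAX2's single-key collapse works by domain separation: the same key is used under distinct, tweaked OMAC computations, so the two roles never collide. The sketch below illustrates the *idea* of "one key in, two roles out" with an HMAC-based PRF and labels of our own choosing; it is not EAX's actual construction, which tweaks OMAC under one key rather than deriving subkeys.

```python
import hmac
import hashlib

def subkeys(master: bytes):
    # Domain-separated PRF calls turn one master key into two subkeys,
    # one per role. This is the *spirit* of EAX2's single-key collapse,
    # not its actual construction (EAX2 tweaks OMAC under one key).
    ke = hmac.new(master, b"\x00" + b"encrypt", hashlib.sha256).digest()[:16]
    ka = hmac.new(master, b"\x01" + b"authenticate", hashlib.sha256).digest()
    return ke, ka

ke, ka = subkeys(b"one-master-key-32-bytes-long!!!!")
assert (ke, ka) == subkeys(b"one-master-key-32-bytes-long!!!!")  # deterministic
assert ke != ka[:16]  # distinct roles get distinct key material
```

The payoff is practical: one key to generate, store, and rotate, instead of two, with no loss in the separation of duties.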

4.3.3 Juxtaposing generic composition and integrity-aware modes

In a generic composition, the encryption and authentication schemes are separated, and applied in an arbitrary order; in our case, we've chosen to encrypt first, then authenticate. As these schemes handle two different conceptual goals – confidentiality and integrity – it makes sense, aesthetically, to apply them separately; in turn, this makes the correctness of the implementation more clearly defined, while making it more flexible, as well. With an integrity-aware mode, encryption and authentication are wed together into a single mode, which requires a lot less from the developer, in regards to thinking about encryption and authentication separately; instead, the developer thinks about one mode that does it all. This could very well make mistakes less likely. Aside from these cryptographic and engineering observations, there are performance attributes to take into consideration. One of the most obvious of these attributes is exemplified through EAX, which, for example, collapses two keys into one, while a generic composition requires two keys; this saves on key material and key-setup. When all is said and done, you can get IND-CCA2 ∧ INT-CTXT security with either. You can do either securely, so don't be afraid to consider the advantages of whichever may be especially well-suited to your environment.

20. Counter mode.
21. Refer to §3 for reassurance. Cryptographer's honor.
22. One-key Message Authentication Code.

5 The Good, the Bad, and the Proof

5.1 IND-CCA2 ∧ INT-CTXT security is what we want.

Going into the "whys" of this is far beyond the scope of this essay, but in short, when we tout the importance of authenticated encryption, below that generality lies what we really want: IND-CCA2 ∧ INT-CTXT security – the two notions that capture the strongest notions of confidentiality and integrity. To achieve this, we need an IND-CPA encryption scheme and a SUF-CMA authentication scheme, in the Encrypt-then-Authenticate composition; within the context of recycling the AES, this could be something along the lines of first encrypting with AES-CTR, then computing a MAC on the ciphertext using AES-CMAC.

5.2 Brief definitions, or "Briefinitions," if you will.

We've thrown several attack-model acronyms at you, without any definitions, thus far, and with good reason: We want to spare you from the math. In the interest of not keeping you completely in the dark, here are some brief definitions, or "briefinitions," if you will. If you're looking for directions to the Carnaval de Matemática23, consult the references we've provided – especially the work of Bellare and Namprempre, in [16].

In a chosen-plaintext attack (CPA), the adversary has access to an encryption oracle, which he will query with arbitrary plaintexts to be encrypted, with the corresponding ciphertexts being returned; the adversary is allowed to make multiple, adaptive queries based on information gained from previous encryptions. IND-CPA security means that the adversary is not able24 to distinguish between the encryptions of different messages, even if he has the ability to make arbitrary encryptions.

In an adaptive chosen-ciphertext attack (CCA2), the adversary also has access to a decryption oracle, which he will query with arbitrary ciphertexts to be decrypted, with the corresponding plaintexts being returned; the adversary is allowed to make multiple, adaptive queries based on information gained from previous encryptions or decryptions. IND-CCA2 security means that the adversary is not able to distinguish between the encryptions of different messages, even if he has the ability to make arbitrary encryptions and decryptions.

To have integrity of ciphertexts (INT-CTXT), the adversary must not be able to produce a ciphertext that was not previously produced by the sender, regardless of whether or not the underlying plaintext is "new." The adversary is allowed to mount a chosen-message attack. For a MAC to be strongly unforgeable under a chosen-message attack (SUF-CMA), the adversary must not be able to find a new message-tag pair, even after a chosen-message attack. Note that any PRF is a SUF-CMA MAC.
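For intuition, the IND-CPA definition is often phrased as a "left-or-right" game, which is simple enough to wire up in code. This hypothetical sketch (our names, not a standard API) fixes a hidden bit, answers encryption queries, and uses a throwaway one-time-pad "scheme" whose ciphertexts are uniformly random, so the hidden bit really is unguessable.

```python
import os
import secrets

def make_lr_oracle(encrypt):
    # A left-or-right oracle: a hidden bit b is fixed once, and every
    # query (m0, m1) returns the encryption of m_b. IND-CPA says no
    # efficient adversary guesses b with probability much above 1/2.
    b = secrets.randbelow(2)
    def oracle(m0: bytes, m1: bytes) -> bytes:
        assert len(m0) == len(m1)  # equal lengths, or length alone leaks b
        return encrypt([m0, m1][b])
    return oracle, b

def toy_encrypt(m: bytes) -> bytes:
    # One-time pad with the pad thrown away: ciphertexts are uniformly
    # random bytes, so this toy "scheme" gives zero advantage (and, of
    # course, no way to decrypt; it's only here to make the game run).
    pad = os.urandom(len(m))
    return bytes(x ^ y for x, y in zip(m, pad))

oracle, b = make_lr_oracle(toy_encrypt)
c = oracle(b"attack at dawn", b"attack at dusk")
assert b in (0, 1)
assert len(c) == len(b"attack at dawn")  # only the length is revealed
```

The CCA2 game adds a decryption oracle to the same setup; INT-CTXT asks the adversary to forge a ciphertext the oracle never produced.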
To recap, when considering authenticated encryption schemes, it's common to think in terms of coupling an integrity notion, like INT-CTXT, with a confidentiality notion, like IND-CPA. Let's suppose our authenticated encryption scheme meets the criteria for INT-CTXT ∧ IND-CPA. INT-CTXT ∧ IND-CPA → IND-CCA2 (read → as "implies"), which means that our authenticated encryption scheme is IND-CCA2 ∧ INT-CTXT secure – exactly what we want. Apply a SUF-CMA MAC, like AES-CMAC, to the ciphertext of an IND-CPA secure encryption scheme, like AES-CTR, or use an integrity-aware mode built to achieve IND-CCA2 ∧ INT-CTXT security directly, and you get just that.

23. When in need of spice, reference Brazil.
24. Read "is not able" as "computationally infeasible."

5.3 Mathematicians versus Cryptographers

CTR, OMAC1, CMAC, and EAX carry proofs of security that depend on the underlying block cipher being a secure PRP, or pseudorandom permutation. Furthermore, they’re components in generic compositions and integrity-aware modes of operation that are proven to be secure in the sense of IND-CCA2 ∧ INT-CTXT. But, we can’t toss in a term like “proof,” without sprinkling in a little on the heated debates hanging heavily around it, like gold around Mr. T. Unfortunately, “provable security” has been juggled a bit too carelessly by some and misunderstood by others, so the opposition’s sentiment[18] is understood. They’re not absolute proofs of anything, so don’t stretch them that far. Look at them as rigorous proofs, given in the context of information security. Practical consequences aren’t guaranteed. We’ll call on the automobile industry for an analogy. A car is advertised as getting x miles per gallon (MPG), and under certain, well-defined laboratory conditions, it will do just that. This is a rigorous number, obtained in a reproducible manner (like a consumption proof). However, if you were to, let’s say, drive in the city, over the hills, up the mountains, or with a lead foot – any real-life environment – the car will consume more – sometimes vastly more. You might be led to believe that the MPG figure is inaccurate or lacking mathematical soundness; this is certainly not the case. It’s accurate science, and in the ideal world, it works perfectly; it’s in the real world that its impact is limited. Obviously, in regards to security proofs, there’s a need to better gauge the balance between our expectations and their limitations. Despite all of the opinionated back-and-forth, when the dust settles, we’re of the mindset that provable security is a useful cryptographic design metric. Security proofs are a good “sanity check,” if you will. 
They make it easier to home in on potential design weaknesses, which, believe you me, can save you from the most brain-jangling of cryptographic migraines. Yes! That's right! Real systems can fail [10, 14] for not taking adaptive chosen-ciphertext attacks into consideration! Proper integrity preservation, through the use of a MAC (Aha!) or an integrity-aware mode (Aha Part Deux!), addresses these attacks. One could, and probably should, write a book about provable security and its cans and cannots, but we'll leave you with this to consider: "Cleanliness" is a desirable attribute for making analysis easier, from design to deployment, and provable security is a proponent of that. It's easier to design a good system if you can clearly define your goals and assumptions, and justify your design claim, such that if the assumptions hold, your system meets the goals. Ad hoc approaches won't pass muster in this department. Now, place your bets. We know where ours will be.

6 From Developer to Implementation to User

With implementations like loose slacks, we've got a belt, yet we can't seem to put it on right. Developers don't need to spend so much time worrying about which block cipher and key length to use; this is likely to be the easiest part of the decision-making process. Cryptography is like that good student in class, always doing what you ask of it, and doing it well. In other words, your attention is direly needed elsewhere; rest assured that cryptography won't be the first to misbehave. Sear this into long-term memory: Cryptography is only as effective as the implementation allows it to be, so the logical path to better odds in getting it right is to simplify the way developers look at it. It follows that the implementation, and essentially, the user, are at the mercy of the developer's decisions. If the developers aren't given the heads up, things start to look pretty bleak as you travel from developer to implementation to user. If you get it right, cryptography will be good to you. We'll go so far as to say that it's likely the most dependable part of your system. (If cryptography is the weakest part of your system, you're doing incredibly well. Teach us.) While we tout the simplification of cryptographic design decisions, cryptography isn't paint-by-numbers simple; it can be downright difficult.

So, if you take anything from this essay: Don't do cryptography without a cryptographer on board! Be that the exhortation of our careers. There are some things that we can't do, so we call a professional. For instance, you wouldn't perform a root canal on yourself; you'd trust this procedure to a dentist. (At least, we hope you would; if you've ever done this yourself, you're probably Chuck Norris or his offspring.) On the other hand, there are some things we could probably learn to do ourselves, with a little time and patience, yet we still often call a professional. Take painting a room or installing ceramic floor tile, for example. These are two DIY projects that are more easily outsourced. Besides, by having an experienced someone do the job, you reduce the likelihood of mistakes happening, which, as you know by now, is exactly what cryptographic implementation counts on – reduced likelihood of mistakes. Why is it that folks trust professionals to do jobs, both PhD-only and piece-of-cake, yet they chance the fantasy that they can take on the intricately artistic and scientific nooks and crannies of cryptographic design?
Although we’d like to tell you that educating developers on how to properly implement cryptography will solve the issue, this is where the bad news comes in. It’s not enough that developers know how to use the tools properly. First, they need to know which tools they should be using; this is what we know as threat modeling. It’s where a good design begins, so you’d better get it right, because everything thereafter depends on it. Look at this as the digital analog of a physical structure’s foundation; everything else will collapse under the weight of a failure to threat model properly. Risking this without know-how is risking it all. While the correctness and security of an implementation dictates whether or not the cryptography can do its job, the threat model dictates which cryptography is right for the job, and it requires what a cryptographer’s forte delivers. We can educate developers to a certain extent, but, after all, reaching security is first recognizing insecurity. Knowing what to do requires knowing what not to do. And, let us be the first to tell you - this takes a certain kind of expertise that’s born of experience, experience, and more experience. There’s no substitute for having a cryptographer on hand. The reality is that we don’t need better cryptography; we need better implementations. The more we simplify the way developers approach cryptography, the simpler, and better, the implementations will be. There’s an enormous gap between cryptographers and developers, and education is one of the ways we can lessen it. Whaddya say? We’re game if you are.

7 Conclusion

Educating developers is a solid effort in mitigating the potential hazards of implementing cryptography, but it's far from a panacea; if anything, it's a small, but worthwhile, piece of weaponry in this "war on incompetence" (don't mind the parallel of political satire). Bear in mind that much of the cryptographic software you see is the product of software vendors. Not cryptographers. Not even security folks. Without a cryptographer close by, or, at least, a well-versed-in-cryptography security expert, you're placing your bets on software vendors to establish the right threat model; it's a crapshoot, really, and the odds aren't in favor of the consumer.

We'll also take a moment to mention the open-source versus closed-source debate. "Open-source is inherently more secure," preach some open-source zealots. Wrong; inherently, it's no more secure than closed-source. Open-source proponents dwell on the idea that the more eyes you have looking, the more secure the implementation will be; in reality, it's not about how many eyes are looking – it's whose eyes are looking. (To establish a case for this, imagine a commercial entity that has the financial resources to hire a cryptanalyst to analyze their closed-source product, while a similar open-source project, although boasting a lively community, hasn't the resources to get the right pair of eyes looking at their code.) It's all about potential, and we are certainly all for the open-source model as the most potentially secure. Openness has always been the pulse of cryptographic design, and we should expect nothing less from its implementation. It's important to understand, however, that openness works in cryptographic design because, well, cryptographers and cryptanalysts are the ones doing the designing and analyzing. When it comes to implementing cryptography, it's usually non-cryptographers behind the scenes, so it's simply not enough to open the source – you've got to point the right people to it. We say, "Keep it open," but, just as importantly, realize that opening it doesn't mean they'll come; it might have worked for Ray in Field of Dreams, but you're most likely going to have to find them.
While we only skim the surface of what needs to be considered, this essay will have done its job if it steers developers towards more fruitful cryptography, through mature and minimalist design. We hope that you’ll add these tidbits to your arsenal of design philosophies, and wield them liberally, the next time you’re faced with implementing things cryptographic. It’s about time that the instantiation of cryptography reflects the decorated heritage that decades of academic research have begotten. On behalf of Alice and Bob, we bid you farewell and better cryptography. Oh, and allow us to stress, once more, a point so dear to our cryptographic hearts, in a Ray Parker Jr. slash Ghostbusteresque tone: Who you gonna call? Cryptographers! We ain’t ’fraid of no crypto.25

8 Because One Conclusion Isn't Enough

Such a bugbear of an issue, this complexity, that we were muscled into verbosity. As this essay might suggest, we're pretty hardcore when it comes to cryptography, but we realize that cryptography is usually the strongest link of any system; when things go wrong, this is rarely where it occurs. It's because of the implementation, and complexity is usually the culprit. The more options you have, the more complexity you have; complexity is security's worst enemy. That's a given. In fact, had Noah Webster been a cryptographer, we'd probably have this:

Main Entry: com·plex·i·ty
Pronunciation: \kəm-ˈplek-sə-tē, käm-\
Function: noun, enemy
Inflected Form(s): plural com·plex·i·ties
Date: 1661
1. security's worst enemy
2. the antithesis of security
3. an anathema to implementations

And, if we had our way, we'd even employ different languages to emphasize our disdain for complexity:

4. bicho de sete cabeças; bête noire; omethingsay eallyray adbay

Unfortunately, folks tend to overemphasize the application of cryptography. Romanticize it, even. Why? Well, it's sexier. More seems to be better, naïvely, but when it comes to security – cryptography included – less is more. So, developers: Focus on the more problematic areas of security; i.e., not cryptography. Implementing cryptography should be as simple and painless as possible; piling it on is not the answer. There's no need to obsess over it; do it right and it will do its job. Besides, it will be the least of your worries. Security's effectiveness is dictated by the soundness of its implementation, so each design decision should be implementation-centric – "implementationally benign," if you will. Infrastructures begotten of such design decisions are much more sustainable as they scale, and that's good. Really good. Implementations shouldn't adopt a Rube Goldberg design strategy, to say the very least. If the prowess of Vincent and Joan can turn a block cipher into a creature of elegance and poise, there's no reason that its implementation need turn into a byzantine mess of bad decisions. Signing off, take this as our version of a much-needed cryptographic stimulus package for security decisions – both for the clean-up of bad-decision aftermath and the influence of those decisions pending. Here's hoping you dodge the former. Look at green cryptography as good ergonomics for your cryptoposture. Ante up for a greener cryptoscape.

25. And so it goes that we're not paid for our pun.

9 Acknowledgments

We extend our utmost gratitude to David Wagner, Chanathip Namprempre, Bruce Schneier, Paulo Barreto, Eli Biham, and Peter Gutmann, for their indispensable insight and much-appreciated assistance in bettering the way developers look at cryptographic engineering. And an extra special thanks to Ludmila Marques Lopes Troutman for breathing graphical life into the concept of recycling. We also appreciate the work of Daniel Day-Lewis, who, through his character, Daniel Plainview, taught us that there is no greater shame to cast upon someone than to drink their milkshake. Be this our attempt to keep that from happening to you.


References

[1] J. Daemen and V. Rijmen, "The Wide Trail Design Strategy," in Proceedings of the 8th IMA International Conference on Cryptography and Coding, (London, UK), pp. 222–238, Springer-Verlag, 2001.

[2] J. Daemen and V. Rijmen, "Security of a Wide Trail Design," in INDOCRYPT '02: Proceedings of the Third International Conference on Cryptology, (London, UK), pp. 1–11, Springer-Verlag, 2002.

[3] M. Dworkin, "Recommendation for Block Cipher Modes of Operation: The CMAC Mode for Authentication." NIST Special Publication 800-38B, 2005.

[4] M. Bellare, P. Rogaway, and D. Wagner, "The EAX Mode of Operation," in Fast Software Encryption (FSE) 2004, (Berlin, Germany), pp. 389–407, Springer-Verlag, 2004.

[5] S. Halevi, D. Coppersmith, and C. S. Jutla, "Scream: A Software-Efficient Stream Cipher," in FSE '02: Revised Papers from the 9th International Workshop on Fast Software Encryption, (London, UK), pp. 195–209, Springer-Verlag, 2002.

[6] P. S. L. M. Barreto and V. Rijmen, "The Whirlpool Hashing Function." First open NESSIE Workshop, Leuven, Belgium, 2000.

[7] D. L. G. Filho, P. S. L. M. Barreto, and V. Rijmen, "The Maelstrom-0 Hash Function." VI Brazilian Symposium on Information and Computer Systems Security – SBSeg'2006, 2006.

[8] "Specification for the Advanced Encryption Standard (AES)." FIPS Publication 197, 2001.

[9] J. Daemen and V. Rijmen, The Design of Rijndael. Secaucus, NJ, USA: Springer-Verlag New York, Inc., 2002.

[10] S. M. Bellovin, "Problem Areas for the IP Security Protocols," in SSYM'96: Proceedings of the 6th Conference on USENIX Security Symposium, Focusing on Applications of Cryptography, (Berkeley, CA, USA), pp. 1–16, USENIX Association, 1996.

[11] S. Vaudenay, "Security Flaws Induced by CBC Padding – Applications to SSL, IPSEC, WTLS ...," in EUROCRYPT '02: Proceedings of the International Conference on the Theory and Applications of Cryptographic Techniques, (London, UK), pp. 534–546, Springer-Verlag, 2002.

[12] D. Bleichenbacher, "Chosen Ciphertext Attacks Against Protocols Based on the RSA Encryption Standard PKCS #1," in CRYPTO '98: Proceedings of the 18th Annual International Cryptology Conference on Advances in Cryptology, (London, UK), pp. 1–12, Springer-Verlag, 1998.

[13] J. Manger, "A Chosen Ciphertext Attack on RSA Optimal Asymmetric Encryption Padding (OAEP) as Standardized in PKCS #1 v2.0," in CRYPTO '01: Proceedings of the 21st Annual International Cryptology Conference on Advances in Cryptology, (London, UK), pp. 230–238, Springer-Verlag, 2001.

[14] J. Black and H. Urtubia, "Side-Channel Attacks on Symmetric Encryption Schemes: The Case for Authenticated Encryption," in Proceedings of the 11th USENIX Security Symposium, (Berkeley, CA, USA), pp. 327–338, USENIX Association, 2002.

[15] H. Krawczyk, “The Order of Encryption and Authentication for Protecting Communications (or: How Secure Is SSL?),” in CRYPTO ’01: Proceedings of the 21st Annual International Cryptology Conference on Advances in Cryptology, (London, UK), pp. 310–331, Springer-Verlag, 2001. [16] M. Bellare and C. Namprempre, “Authenticated Encryption: Relations among Notions and Analysis of the Generic Composition Paradigm,” in ASIACRYPT ’00: Proceedings of the 6th International Conference on the Theory and Application of Cryptology and Information Security, (London, UK), pp. 531–545, Springer-Verlag, 2000. [17] J. Troutman, “The Virtues of Mature and Minimalist Cryptography (Adapted from, “Alice Tutors Bob on Mature and Minimalist Cryptography”),” IEEE Security and Privacy, vol. 6, no. 4, pp. 62–65, 2008. [18] B. Schneier, “Mathematicians vs. Cryptographers.” Schneier on Security (2007/09 Archive), 2007.

