2014 19th IEEE European Test Symposium (ETS)
A New Efficiency Criterion for Security Oriented Error Correcting Codes

Yaara Neumeier and Osnat Keren
Faculty of Engineering, Bar-Ilan University
Email: [email protected], [email protected]

Abstract—Security oriented codes are considered one of the most efficient countermeasures against fault injection attacks. Their efficiency is usually measured in terms of their error masking probability. This criterion is applicable in cases where it is possible to distinguish between random errors and malicious attacks. In practice, if the induced errors are not fixed for several clock cycles, it is difficult to distinguish between the two. Moreover, a decoder that tries to correct a tampered word can conceal the fact that the device is under attack. This paper defines a new criterion, named t-robustness, for evaluating the efficiency of robust codes that provide both reliability and security. An error correcting code is called t-robust if it can correct up to t errors and at the same time detect any attack that changes the data. The paper presents a general structure for concatenated codes that have this property.

I. INTRODUCTION

A device is considered reliable and secure if it can cope with random errors caused by nature and provide security against malicious attacks. Error correcting codes, and in particular linear codes, can provide reliability against random errors of small multiplicity. Error detecting codes, and in particular nonlinear robust codes, can provide security against fault injection attacks.

From the reliability perspective, the efficiency of a code is measured in terms of its minimum distance. The minimum distance between the codewords determines the error correction capability of the code. From the security perspective, errors can be injected into the data at any multiplicity, and therefore the efficiency of a code is measured in terms of its undetected error probability, also referred to as the error masking probability of the code. Clearly, as the minimum distance of the code increases, more errors can be corrected, and hence more injected errors are masked. Consequently, reliability and security may seem to be conflicting requirements.

The error masking probability is the probability that an injected error maps a codeword onto another codeword [6]. A code is characterised by its maximal error masking probability. Codes that can detect any error with nonzero probability are called robust codes. Codes that can detect some errors with nonzero probability but mask others are called partially robust codes. Robust codes are presented, for example, in [1, 6, 10]; partially robust nonlinear codes of rate greater than one half are presented in [3, 6, 7]. Security-oriented codes with error correction capability are presented in [2, 11]. As shown in these papers, nonlinear multi-error correcting codes can be constructed by concatenating linear and nonlinear redundant bits. For example, the generalized Phelps code and the Vasilev code are partially robust nonsystematic codes of rate larger than one half that can correct a single error. However, their error masking probability is relatively high [11].

An efficiency criterion for codes with error correction capability is presented in [12]. The criterion reflects the efficiency of codes in cases where the system (i.e., the decoder) can distinguish between errors caused by nature and malicious attacks. Such a distinction can be made if the error changes much more slowly than the data, that is, in cases where the injected errors last several consecutive clock cycles and the number of cycles is known to the system designer [5]. However, when the decoder cannot identify the source of the errors, the system may become vulnerable to malicious attacks of a new type: an adversary can "hide" the fact that the device is under attack by taking advantage of the error correction capability of the code. Namely, when the decoder "corrects" a (maliciously) tampered word into a legal codeword, it conceals the fact that the system is under attack.

In this work we suggest a new criterion for evaluating the efficiency of security oriented error correcting codes. We say that a code is both reliable and secure if it can correct any error of multiplicity t or less, and detect with probability greater than zero any error of multiplicity greater than t. We call a code that meets this requirement a t-robust code. A t-robust code can detect any attack that tries to change the inputs to the receiving module and at the same time maintain reliable operation. The requirements for a t-robust code are rather strict; nevertheless, as we show in Section IV, it is possible to construct t-robust codes.

The remainder of the paper is organized as follows: In the next section we recall known efficiency criteria. In Section III we introduce the t-robust criterion and provide bounds on the rate of a t-robust code. In Section IV we introduce an algebraic structure of a t-robust concatenated code. Section V concludes the paper.

II. PRELIMINARIES - EFFICIENCY CRITERIA

Let C(n, k, d) be a systematic code of length n, dimension k, and minimum distance d. A codeword of a systematic code has two parts: an information part x, which is a binary vector of length k, and a redundancy part f(x), which is a binary vector of length r = n − k. The redundant bits are calculated from the information bits. It is convenient to divide the redundant bits into two sets: one set consists of r_L bits that are a linear function of the information bits, and the second set contains the remaining r_N = r − r_L bits. Without loss of generality, a codeword c in a systematic code has the format c = (x, f_L(x), f_N(x)), where f_L : F_2^k → F_2^{r_L} is the encoding function for the linear redundant bits and f_N : F_2^k → F_2^{r_N} is the encoding function for the nonlinear redundant bits (F_2^k denotes the linear space of dimension k over GF(2)). The minimum distance of the code is the minimum Hamming distance between any pair of (distinct) codewords in C, that is,

d = min_{c1, c2 ∈ C, c1 ≠ c2} d_H(c1, c2) = min_{c1, c2 ∈ C, c1 ≠ c2} wt_H(c1 ⊕ c2),

where d_H and wt_H stand for Hamming distance and Hamming weight, respectively. A code of minimum distance d = 2t + 1 can correct any error vector of multiplicity (i.e., Hamming weight) less than or equal to t and detect any error vector of Hamming weight less than or equal to d − 1. Errors of Hamming weight greater than or equal to d may be always detected, never detected, or detected with some probability.

The error correction capability of the code does not depend on the probability distribution of the codewords. This is not the case for the error masking probability of a code. The error masking probability of a code is defined as follows:

Definition 1. Denote by p(c) the probability that codeword c will be used. The probability that an error e is masked by the codewords is

Q(e) = ∑_{c ∈ C} p(c) δ(c ⊕ e),

where δ(τ) is the characteristic function of the code C, that is, δ(τ) = 1 if τ ∈ C, and it equals 0 otherwise. If all the codewords are equally likely to appear, the error masking probability equals

Q(e) = ( ∑_{c ∈ C} δ(c ⊕ e) ) / |C|.

Note that no assumption is made on the probability that an error e occurs. In contrast to errors caused by nature (whose probability distribution can be characterized), attacks induce errors of any multiplicity. Indeed, we assume that an attacker can inject any error he chooses whenever he wants.

The detection kernel of a code, denoted by K_d, consists of all the error vectors that are never detected (Q(e) = 1). The kernel K_d is a linear subspace. Codes that can detect any nonzero error, i.e., codes with |K_d| = 1, are called robust codes. Codes whose kernel is of size 1 < |K_d| < |C| are called partially robust codes.

Efficiency criteria for robust and partially robust codes are usually defined for codes whose codewords are uniformly distributed [7, 12]. This way, the code's properties depend solely on the set of binary vectors that forms it; they do not depend on the probability distribution of the codewords nor on the specific implementation of the encoder/decoder. In practice, however, the codewords are not uniformly distributed [15]. As shown in [13, 14], an inadequate encoding may turn a robust code into a partially robust code (and vice versa). The maximal error masking probability of a code whose codewords are uniformly distributed is denoted by Q_mc,

Q_mc = max_{e ∉ K_d} Q(e).
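To make Definition 1 concrete, here is a short Python sketch (ours; the helper names Q and Qmc are illustrative and not from the paper) that evaluates the error masking probability of every nonzero error of a small code under the uniform-distribution assumption and returns the maximum over errors outside the detection kernel:

```python
from itertools import product

def Q(codewords, e):
    # Error masking probability of error e (Definition 1), assuming uniformly
    # distributed codewords: the fraction of codewords c with c xor e again in C.
    C = set(codewords)
    return sum(1 for c in C if tuple(a ^ b for a, b in zip(c, e)) in C) / len(C)

def Qmc(codewords, n):
    # Maximal error masking probability over nonzero errors outside the detection
    # kernel K_d (errors with Q(e) = 1 are always masked and are skipped).
    C = set(codewords)
    vals = [Q(C, e) for e in product((0, 1), repeat=n) if any(e)]
    return max(v for v in vals if v < 1)

# The (3, 2) robust code used later in Example 1: codewords (x1, x2, x1 OR x2).
CN = [(x1, x2, x1 | x2) for x1 in (0, 1) for x2 in (0, 1)]
print(Qmc(CN, 3))   # 0.5, i.e. the lower bound 2^{-r} with r = 1
```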

The maximal error masking probability of a robust code is greater than or equal to the average of Q(e). Therefore,

Q_mc ≥ ( ∑_e Q(e) ) / 2^n = 2^{-r}.

In binary codes, there exists at least one nonzero error vector that maps one codeword onto another. Consequently, Q_mc is lower bounded as follows:

Theorem 1 ([7]).

Q_mc ≥ max(2^{1-k}, 2^{-r}).    (1)

Definition 2. A code that satisfies (1) with equality is called an optimum code.

If the code rate is greater than one half, i.e., r < k, then Q_mc of an optimum code is equal to 2^{-r}. In other words, every redundant bit directly improves the attack detection capability of the code. In fact, only two optimum or close-to-optimum robust codes are known so far: the Quadratic-Sum code and the Punctured-Cubic codes [6, 10]. However, these codes have no error correction capability. Most of the known error correcting codes are partially robust; some of them, e.g., the Generalized Vasilev code, the Phelps code, and the One Switching Code, are optimum [12].

In [12], the authors introduced the following bound for robust (or partially robust) systematic codes. Unlike the previously mentioned bounds, this bound takes into account both the correction capability of the code and its error masking probability.

Theorem 2 ([12]). Denote by r_0 the smallest possible number of redundant bits for a systematic code with minimum Hamming distance d and length n. For any C(n, k, d) code with error masking probability Q_mc and detection kernel of dimension ω_d,

2^k ≤ Q_mc (2^n − 2^k 2^{r_0} + (2^k − 2^{ω_d}) 2^{r_N}) + 2^{ω_d}.    (2)

It follows from the theorem that for robust codes with large enough k we have

Q_mc ≥ 2^{-r} · 1 / (1 + 2^{-r_L} − 2^{r_0 − r}).
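As a side note (ours, not part of [12]), inequality (2) can be rearranged into an explicit lower bound on Q_mc whenever the term multiplying Q_mc is positive. The helper below simply evaluates that rearrangement; the parameter values in the last line are a hypothetical choice for illustration only:

```python
def qmc_lower_bound(n, k, r0, rN, wd):
    # Rearranging (2): 2^k <= Qmc*(2^n - 2^k*2^r0 + (2^k - 2^wd)*2^rN) + 2^wd
    # gives Qmc >= (2^k - 2^wd) / D, provided the denominator D is positive.
    D = 2 ** n - 2 ** k * 2 ** r0 + (2 ** k - 2 ** wd) * 2 ** rN
    assert D > 0, "rearrangement is only meaningful for a positive denominator"
    return (2 ** k - 2 ** wd) / D

# Hypothetical parameters, for illustration only (wd = 0 corresponds to a robust code).
print(qmc_lower_bound(n=31, k=24, r0=5, rN=2, wd=0))
```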

Fig. 1: The conventional mathematical model of fault attacks.

Fig. 2: Mathematical model of fault attacks for systems with a decoder.

Fig. 3: General architecture of the checker and the decoder from Ex. 1.

It is important to point out that all the bounds presented so far evaluate the efficiency of the code with respect to the tampered codeword (refer to Figure 1). They do not take into account the role of the decoder, which may change the received word. As we show next, when the decoder tries to correct the received word, the efficiency of the code degrades.

Example 1. Consider a circuit whose functionality is defined according to Table 1. The circuit has three inputs and six outputs; however, there are only four legal output vectors (codewords). These four codewords form a code whose minimal Hamming distance is three. Notice that the outputs x1 and x2 carry all the information required to calculate the other outputs. Therefore, the code is a systematic C(6, 2, 3) code. Each codeword is of the form c = (x, f_L(x), f_N(x)); namely, the six bits can be divided into three parts: a two-bit information word (x1 and x2), a three-bit linear redundancy part (x3, x4 and x5), and a single nonlinear redundancy bit x6. Note that (x, f_L(x)) is a codeword in a linear code C_L(5, 2, 3) defined by the generating matrix G = [1 0 1 0 1; 0 1 1 1 0], and (x, f_N(x)) is a codeword in a nonlinear C_N(3, 2, 1) Punctured Cubic code with P = (0, 1) [10].

The linear part (x, f_L(x)), denoted as C_L, has minimal distance d = 3, and thereby the error correction ability of C is t = 1. The nonlinear part (x, f_N(x)) forms a robust code, denoted as C_N, with Q_mc = 0.5, since its error masking probability function equals Q(e) = 1 if e = 000, Q(e) = 0 if e = 001, and Q(e) = 0.5 otherwise. Since Q_mc = 0.5 = 2^{-r} (here r = r_N = 1), the code is an optimum robust code (refer to Eq. 1).

At the receiving end, the decoder has two blocks: a linear decoder and a nonlinear checker. The general structure of the hardware is given in Fig. 3 and a detailed implementation is given in Fig. 4. Assume that a vector v = c ⊕ e is received. The linear decoder calculates the syndrome s = Hv^T, where H = [1 1 1 0 0; 0 1 0 1 0; 1 0 0 0 1]. The syndrome may take one of eight possible values. If the calculated s is the all-zero word, then v is a codeword in C_L; otherwise, an error has occurred. Five out of the seven nonzero syndromes correspond to a single-bit error, which is corrected by the decoder. The two remaining syndromes indicate an error of multiplicity greater than one which cannot be corrected, and thus a flag indicating a decoding failure is raised. The nonlinear checker calculates x1 ∨ x2 and compares the result to f_N; if a difference is detected, a flag is raised, indicating an attack. If either the nonlinear flag or the decoding-failure flag is raised, an attack is detected.

Notice that the linear decoder does not check for an error in f_N, and the nonlinear checker does not check for an error in f_L. As a result, the error vector (000001), which is of weight one, is not corrected by the linear decoder. Thus, the resulting coding scheme does not correct all errors of multiplicity one (even though t = 1). Moreover, the error (000001) will be detected with some probability by the nonlinear checker and will be mistakenly considered an attack, and its information will be discarded. Clearly, in this case, the combination of a linear code (for correcting random errors of small multiplicity) and a nonlinear robust code (for probabilistic detection of any error vector) degrades both the detection and the correction abilities. In other words, reliability and security may conflict.
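For readers who wish to experiment with this example, the following Python sketch (ours) models the checker and decoder of Fig. 3 at the behavioral level, using the matrices G and H and the nonlinear check x1 ∨ x2 given above; it is a functional sketch, not the authors' hardware implementation:

```python
# Behavioral model of the checker/decoder of Fig. 3 (a sketch, not the authors'
# gate-level design), using H and f_N(x) = x1 OR x2 as reconstructed above.

H = [[1, 1, 1, 0, 0],
     [0, 1, 0, 1, 0],
     [1, 0, 0, 0, 1]]

def syndrome(v5):
    # s = H * v^T over GF(2), computed on the 5-bit linear part (x, f_L(x))
    return tuple(sum(h & b for h, b in zip(row, v5)) % 2 for row in H)

# Map each single-bit-error syndrome to the position it corrects (5 of the 7
# nonzero syndromes; the remaining 2 signal a decoding failure).
SINGLE = {syndrome([1 if j == i else 0 for j in range(5)]): i for i in range(5)}

def receive(v):
    # v = (x1, x2, x3, x4, x5, x6); returns (possibly corrected word, attack flag)
    v5, x6 = list(v[:5]), v[5]
    s = syndrome(v5)
    fail = False
    if s != (0, 0, 0):
        if s in SINGLE:
            v5[SINGLE[s]] ^= 1   # correct a single-bit error in (x, f_L(x))
        else:
            fail = True          # uncorrectable pattern: decoding-failure flag
    mismatch = (v5[0] | v5[1]) != x6   # nonlinear checker: x1 OR x2 vs. f_N
    return v5 + [x6], (fail or mismatch)

# The weight-one error 000001 on the all-zero codeword is not corrected and is
# flagged as an attack, as discussed above.
print(receive([0, 0, 0, 0, 0, 1]))   # -> ([0, 0, 0, 0, 0, 1], True)
```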

Table 1: Functionality of the circuit in Ex. 1.

Inputs          Outputs
i1 i2 i3 | x1 x2 | x3 x4 x5 | x6
 0  0  0 |  0  0 |  0  0  0 |  0
 0  0  1 |  1  0 |  1  0  1 |  1
 0  1  0 |  0  1 |  1  1  0 |  1
 0  1  1 |  0  1 |  1  1  0 |  1
 1  0  0 |  0  0 |  0  0  0 |  0
 1  0  1 |  1  0 |  1  0  1 |  1
 1  1  0 |  1  1 |  0  1  1 |  1
 1  1  1 |  1  1 |  0  1  1 |  1

(Here x = (x1, x2), f_L(x) = (x3, x4, x5), and f_N(x) = x6.)

III. THE t-ROBUST CRITERION

Denote by B_t a Hamming ball of radius t,

B_t = {v | v ∈ F_2^n, wt_H(v) ≤ t}.


Fig. 5: Types of errors at the input of the checker in Figure 1: the error e1 is always detected, the error e2 is never detected, the error e3 is detected with a nonzero probability.

Fig. 4: The checker and the decoder from Ex. 1.

The size of B_t is ∑_{i=0}^{t} C(n, i). Denote by v + B_t a Hamming ball centered on a vector v ∈ F_2^n. Note that if d_H(v, u) ≥ 2t + 1, then the two Hamming balls v + B_t and u + B_t are disjoint. Let C(n, k, d) be a code of dimension k. Denote by UB the union of the Hamming balls centered on the codewords,

UB = ∪_{c ∈ C} (c + B_t).

Since the code has an error correction capability of up to t errors, |UB| = 2^k |B_t|.

Let c ∈ C be the transmitted codeword and let y = c + e be the received word. Consider a decoder that can correct up to t errors. Denote by Dec(y) the decoder's output. One of the following three scenarios may happen:
• If y is a codeword, then Dec(y) = y.
• If y is not a codeword but it is within a Hamming ball of radius t centered on a codeword c′ ∈ C, i.e., y ∈ c′ + B_t, then the decoder's output is Dec(y) = c′. If the multiplicity of e is smaller than or equal to t, then the decoder computes the correct word (c′ = c); otherwise a decoding error occurs.
• If y ∉ UB, the decoder declares that it fails to decode y. That is, the decoder indicates that the codeword was tampered with by an error of multiplicity greater than t.

Figures 5 and 6 illustrate the difference between the error masking probabilities in a system without a decoder (see Figure 1) and a system with a decoder (see Figure 2). Without the decoder, whenever the received word is not a codeword the attack is detected. That is, in order to go unnoticed, an attacker must inject an error vector that maps all the codewords onto the code (refer to Fig. 5). However, in a system with a decoder, the attacker's task becomes simpler. Now, it is sufficient to find an error vector that maps all the codewords into the union of Hamming balls of radius t around the codewords (refer to Fig. 6). Namely, a decoder helps the attacker, since |UB| > |C|.

Fig. 6: Types of errors at the output of the decoder in Figure 2: the error e1 is always detected as it maps the code C outside UB, the error e2 is never detected as it maps C into UB, the error e3 is detected with a probability that depends on the size of the intersection between e3 + C and UB.

We call a code that can provide both reliability and security (in spite of the decoder) a t-robust code. The formal definition of t-robustness is the following:

Definition 3 (t-robust code). Let C ⊆ GF(q)^n be a code of length n and minimum distance d = 2t + 1. The code is called t-robust if for any e,
• if wt_H(e) ≤ t, then Dec(c + e) = c;
• if wt_H(e) > t, then there exists at least one c ∈ C for which Dec(c + e) ∉ C.

Note that if the code has no error correcting capability (t = 0), the above definition agrees with the conventional definition of robust codes. In this sense, robust codes can be considered a class of t-robust codes. In [4], the security requirement is that any error up to a certain multiplicity is corrected and any repeated error is detected with nonzero probability. Here, the requirement is stronger: we require that a t-robust code detect (with nonzero probability) any error of multiplicity greater than t; no assumption about repeated errors is made.

The errors that are masked by the decoder form the correction kernel of the code, K_c. The error masking probability of a t-robust code is defined with respect to K_c (and not with respect to K_d as in conventional robust codes). The definition is the following:

Definition 4. The generalised error masking probability, denoted by Q̂(e), is

Q̂(e) = ( ∑_{c ∈ C} δ(Dec(c + e)) ) / |C|.    (3)

The maximal generalised error masking probability, denoted by Q̂_mc, is Q̂_mc = max_{e ∉ K_c} Q̂(e).
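The following Python sketch (ours; Dec here is a generic bounded-distance decoder that works on the whole received word, matching the model of this section rather than the split decoder of Fig. 3) computes the generalised error masking probability of Definition 4:

```python
def xor(a, b):
    return tuple(x ^ y for x, y in zip(a, b))

def Dec(y, C, t):
    # Bounded-distance decoder: return the (unique, since d = 2t+1) codeword
    # within Hamming distance t of y, or None to signal a decoding failure.
    for c in C:
        if sum(xor(y, c)) <= t:
            return c
    return None

def Q_hat(e, C, t):
    # Generalised error masking probability of eq. (3): the fraction of
    # codewords c for which Dec(c + e) is again a codeword.
    return sum(1 for c in C if Dec(xor(c, e), C, t) is not None) / len(C)

# The C(6, 2, 3) code of Example 1 with t = 1; the error 011100 maps every
# codeword into UB, so it is never detected (Example 2 works this out by hand).
C = [(0,0,0,0,0,0), (0,1,1,1,0,1), (1,0,1,0,1,1), (1,1,0,1,1,1)]
print(Q_hat((0,1,1,1,0,0), C, 1))   # 1.0
```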

TABLE I: Standard array for the code in Ex. 1. The first row contains the codewords; the leftmost column contains the correctable error patterns (the Hamming ball B_1).

col. 1   col. 2   col. 3   col. 4
000000   011101   101011   110111
000001   011100   101010   110110
000010   011111   101001   110101
000100   011001   101111   110011
001000   010101   100011   111111
010000   001101   111011   100111
100000   111101   001011   010111

Example 2. Consider the code C(6, 2, 3) from Ex. 1. A decoder for this code can be implemented as a table (refer to Table I). If the received vector v appears in the table, it is corrected to the codeword written in the first row of its column. If v is not in the table, the system declares that it is under attack. Assume that the error 011100 was injected:
1) If c = 000000 then v = 011100, and the output of the decoder is the (wrong) codeword 011101.
2) If c = 011101 then v = 000001, and the output is the (wrong) codeword 000000.
3) If c = 101011 then v = 110111, which is a (wrong) codeword.
4) If c = 110111 then v = 101011, which is a (wrong) codeword.
Consequently, the error vector 011100 is never detected, since it is masked by the correcting ability of the code; thus Q̂(e = 011100) = 1. As a result, the code C is not t-robust. Note that here B_t is the set of all vectors of length 6 and Hamming weight smaller than or equal to one, i.e., the leftmost column in Table I, and UB is the entire table.

Q̂_mc is lower bounded by the average of Q̂(e); therefore,

Theorem 3.

Q̂_mc ≥ 2^{-r} |B_t|.    (4)

Proof. The average of Q̂(e) is

( ∑_{e ∈ F_2^n} Q̂(e) ) / 2^n = ( ∑_{e ∈ F_2^n} ∑_{c ∈ C} δ(Dec(c + e)) / |C| ) / 2^n
                             = ( ∑_{c ∈ C} ∑_{e ∈ F_2^n} δ(Dec(c + e)) ) / 2^{k+n}
                             = |UB| / 2^n
                             = 2^{-r} |B_t|.

A perfect code is a code that fulfills the Hamming bound [9], i.e., a code for which |UB| = 2^n. Clearly, a perfect code is not a t-robust code. In fact, a perfect code is the worst code in terms of security, since any attack is masked by the decoder. A t-robust code must have |UB| < 2^n.

Recall that the decoder fails to produce a legal codeword whenever it gets a word that is not in UB. Therefore, every word v in F_2^n \ UB can be represented as v = c_i + e_i, where c_i ∈ C and e_i is an error vector that is detected with a nonzero probability. In fact, v has exactly |C| = 2^k representations of this form. Denote by E(v) the set of error vectors {e | v = c + e, c ∈ C}. Since any pair of distinct codewords corresponds to a distinct pair of error vectors, i.e., e_i ≠ e_j for i ≠ j, the size of E(v) equals 2^k. The union of E(v) over all the v's in F_2^n \ UB forms the set of error vectors that are detected with nonzero probability. All other errors are always masked by the decoder, since they do not map even a single codeword outside UB. In a t-robust code all the errors of multiplicity greater than t are detected with nonzero probability; therefore,

Theorem 4. Any C(n, k, d) t-robust code satisfies

|B_t| (2^k + 1) ≤ 2^n.    (5)

Proof. There are 2^n − |B_t| errors of multiplicity greater than t. A t-robust code can detect all these errors with a nonzero probability. Therefore,

2^n − |B_t| = | ∪_{v ∈ F_2^n \ UB} E(v) |.

The size of the union is upper bounded by the number of words outside UB times the size of E(v). Thereby,

2^n − |B_t| ≤ (2^n − 2^k |B_t|) 2^k.    (6)

The latter inequality is satisfied if

2^k ≤ 2^n / |B_t| − 1.
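Both the lower bound of Theorem 3 and condition (5) of Theorem 4 are easy to evaluate numerically. The sketch below (ours; helper names are illustrative) does so for the C(6, 2, 3) code of Examples 1-2, for the C(31, 24, 3) code constructed in Section IV (Example 3), and for the perfect [7, 4, 3] Hamming code:

```python
from math import comb

def ball(n, t):
    # |B_t| = sum_{i=0..t} C(n, i)
    return sum(comb(n, i) for i in range(t + 1))

def qhat_lower_bound(n, k, t):
    # Theorem 3: Q_hat_mc >= 2^{-r} * |B_t|, with r = n - k
    return 2.0 ** (-(n - k)) * ball(n, t)

def satisfies_theorem4(n, k, t):
    # Theorem 4, condition (5): |B_t| * (2^k + 1) <= 2^n
    return ball(n, t) * (2 ** k + 1) <= 2 ** n

print(qhat_lower_bound(6, 2, 1))      # 0.4375 for the C(6,2,3) code of Ex. 1-2
print(qhat_lower_bound(31, 24, 1))    # 0.25   for the C(31,24,3) code of Ex. 3
print(satisfies_theorem4(31, 24, 1))  # True   (condition (5) holds)
print(satisfies_theorem4(7, 4, 1))    # False  (the perfect [7,4,3] Hamming code)
```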

Next, we show that there exist t-robust codes that meet the lower bound on the error masking probability (Theorem 3); however, their code rate is smaller than the maximal rate derived from Theorem 4.

IV. CONSTRUCTION OF A t-ROBUST CODE

Usually, codes that provide both reliability and security are concatenated codes. There are several ways to concatenate two codes; here we employ serial concatenation, where the inner (i.e., close to the channel) code is a linear code with an error correcting capability of t errors, and the outer (i.e., close to the source) code is a robust code. The structure of the code is shown in Figure 7.

Fig. 7: Serial concatenation - Encoder

Fig. 8: Serial concatenation - Decoder

Construction 1. Let C_N(k + r_N, k) be a systematic robust code with error masking probability Q_mc, and let f_N : F_2^k → F_2^{r_N} be its nonlinear encoding function. Let C_L(k + r_N + r_L, k + r_N, d*) be a linear code with distance d* = 2t + 1, and let f_L : F_2^{k + r_N} → F_2^{r_L} be its encoding function. A concatenated code C(n, k, d ≥ d*) consists of words of the form

C = {c = (x, f_N(x), f_L(x, f_N(x))) : x ∈ F_2^k}.

Theorem 5. A C(n, k, d ≥ 2t + 1) concatenated code with a decoder that corrects up to t errors is a t-robust code with Q̂_mc = Q_mc.
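A minimal encoder sketch of Construction 1 is given below. It is ours and makes two assumptions for the sake of a small, self-contained example: the inner code C_L is the systematic [7, 4, 3] Hamming code, and f_N is a toy nonlinear placeholder with k = 3 and r_N = 1, standing in for a real robust encoding such as the Quadratic-Sum or Punctured-Cubic codes [6, 10]:

```python
# Parity part of a systematic generator G = [I | P] of the [7, 4, 3] Hamming
# code, used here as the inner linear code C_L (an assumption for this sketch).
P = [[1, 1, 0],
     [1, 0, 1],
     [0, 1, 1],
     [1, 1, 1]]

def f_N(x):
    # Toy nonlinear redundancy bit (r_N = 1), a placeholder for a real robust
    # encoding such as the Quadratic-Sum or Punctured-Cubic codes [6, 10].
    return [(x[0] & x[1]) ^ x[2]]

def f_L(u):
    # Linear redundancy of C_L: u * P over GF(2).
    return [sum(u[i] & P[i][j] for i in range(len(u))) % 2
            for j in range(len(P[0]))]

def encode(x):
    u = list(x) + f_N(x)   # outer (robust) codeword (x, f_N(x))
    return u + f_L(u)      # inner (linear) code appends f_L(x, f_N(x))

print(encode([1, 0, 1]))   # a codeword of the concatenated C(7, 3, >=3) code
```

A matching receiver would run a bounded-distance decoder for C_L and then re-encode and compare the nonlinear part, raising an attack flag on a mismatch or a decoding failure, in the spirit of Fig. 8.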

Example 3. Let C_N be a (24 + 2, 24) Quadratic-Sum code with Q_mc = 2^{-2}. Let C_L be the [31, 26, 3] single error correcting Hamming code. Then the concatenated C(31, 24, 3) code is a 1-robust code with a generalized maximal error masking probability of Q̂_mc = 2^{-2}. This code is optimum, as it meets the lower bound of Theorem 3, i.e., it has the smallest possible Q̂_mc.

Table IV presents t-robust codes with t = 1 for different values of k, constructed using Construction 1. For simplicity, C_N is a binary Quadratic-Sum (QS) robust code with Q̂_mc = Q_mc = 0.5; the nonlinear redundancy bit equals x_1 x_2 ⊕ x_3 x_4 ⊕ ... ⊕ x_{k-1} x_k [6]. The linear codes (C_L) are cyclic codes defined by a generator polynomial [9]. The coefficients of the polynomials are given in the table in hexadecimal representation. The polynomials were calculated using MATLAB's built-in command 'cyclpoly'.

TABLE IV: 1-robust codes obtained from Construction 1.

 k  | r_L | r_N |  n | d | generator polynomial (hex)
  8 |  5  |  1  | 14 | 4 | 27
 16 |  7  |  1  | 24 | 4 | 99
 32 | 12  |  1  | 45 | 3 | 1009
 64 | 19  |  1  | 84 | 4 | 80EBD
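The nonlinear redundancy bit of the binary QS code in the table is a one-line computation; the sketch below (ours) also checks that the listed parameters satisfy n = k + r_N + r_L:

```python
def qs_bit(x):
    # Binary Quadratic-Sum redundancy bit: x1*x2 xor x3*x4 xor ... xor x_{k-1}*x_k
    # (k is assumed even, as in the table).
    return sum(x[i] & x[i + 1] for i in range(0, len(x), 2)) % 2

# Consistency check of the listed parameters: n = k + r_N + r_L.
rows = [(8, 5, 1, 14), (16, 7, 1, 24), (32, 12, 1, 45), (64, 19, 1, 84)]
assert all(n == k + rN + rL for (k, rL, rN, n) in rows)

print(qs_bit([1, 0, 1, 1, 0, 0, 1, 1]))   # 1*0 ^ 1*1 ^ 0*0 ^ 1*1 = 0
```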

V. CONCLUSIONS

The paper deals with the problem of detecting fault injection attacks on devices that employ error correcting codes. Such devices are more vulnerable to fault injection attacks than devices that employ only a nonlinear checker, since the decoder can conceal the fact that the device is under attack. The paper presents a new criterion, named t-robustness, for evaluating security oriented error correcting codes. A t-robust code provides reliability by correcting up to t errors and provides security by detecting, with nonzero probability, any attack that tries to change the data. The paper introduces bounds on the maximal rate of t-robust codes and their error masking probability. It is shown that it is possible to construct t-robust codes by a serial concatenation of a linear code and a nonlinear robust code. The concatenated codes meet the lower bound on the error masking probability.

REFERENCES

[1] N. Admaty, S. Litsyn, and O. Keren, "Punctuating, Expurgating and Expanding the q-ary BCH Based Robust Codes," The 27th IEEE Convention of Electrical and Electronics Engineers in Israel, 2012, pp. 1-5.
[2] S. Engelberg and O. Keren, "A Comment on the Karpovsky-Taubin Code," IEEE Trans. Info. Theory, Vol. 57, No. 12, pp. 8007-8010, 2011.
[3] G. Gaubatz, B. Sunar, and M. G. Karpovsky, "Non-linear residue codes for robust public-key arithmetic," Workshop on Fault Diagnosis and Tolerance in Cryptography (FDTC), 2006.
[4] S. Ge, Z. Wang, P. Luo, and M. Karpovsky, "Reliable and Secure Memories Based on Algebraic Manipulation Detection Codes and Robust Error Correction," Proc. Int. Depend. Symp., 2013.
[5] S. Ge, Z. Wang, P. Luo, and M. Karpovsky, "Secure Memories Resistant to Both Random Errors and Fault Injection Attacks Using Nonlinear Error Correction Codes," Proc. Workshop on Hardware and Architectural Support for Security and Privacy (HASP), 2013.
[6] M. G. Karpovsky, K. J. Kulikowski, and Z. Wang, "Robust Error Detection in Communication and Computation Channels," Keynote paper, Int. Workshop on Spectral Techniques, 2007.
[7] M. G. Karpovsky and A. Taubin, "A New Class of Nonlinear Systematic Error Detecting Codes," IEEE Trans. Info. Theory, Vol. 50, No. 8, pp. 1818-1820, 2004.
[8] K. Kulikowski, Z. Wang, and M. G. Karpovsky, "Comparative analysis of fault attack resistant architectures for private and public key cryptosystems," Workshop on Fault-tolerant Cryptographic Devices, 2008.
[9] F. J. MacWilliams and N. J. A. Sloane, The Theory of Error-Correcting Codes, North-Holland, 1977.
[10] Y. Neumeier and O. Keren, "Punctured Karpovsky-Taubin Binary Robust Error Detecting Codes for Cryptographic Devices," IEEE International On-Line Testing Symposium, March 2012, pp. 156-161.
[11] Z. Wang, M. G. Karpovsky, and K. Kulikowski, "Design of Memories with Concurrent Error Detection and Correction by Non-Linear SEC-DED Codes," Journal of Electronic Testing, Vol. 26, 2010.
[12] Z. Wang, M. G. Karpovsky, and K. J. Kulikowski, "Replacing Linear Hamming Codes by Robust Nonlinear Codes Results in a Reliability Improvement of Memories," Proc. Int. Symp. on Dependable Computing, July 2009.
[13] I. Shumsky and O. Keren, "Security-Oriented State Assignment," TRUDEVICE, The 1st Workshop on Trustworthy Manufacturing and Utilization of Secure Devices, 2013.
[14] I. Shumsky, O. Keren, and M. Karpovsky, "Robustness of Security-Oriented Binary Codes under Non-uniform Distribution of Codewords," Dependable Computing and Communications Symposium at the International Conference on Dependable Systems and Networks (DSN-DCCS), pp. 25-30, 2013.
[15] V. Tomashevich, S. Srinivasan, F. Foerg, and I. Polian, "Cross-level Protection of Circuits Against Faults and Malicious Attacks," IEEE 18th International On-Line Testing Symposium (IOLTS), Sitges, pp. 150-155, 2012.