TABLE I
LIKELIHOOD RATIO TEST AND EQUIVALENT TESTS

Signal Level   Likelihood Ratio Threshold   Equivalent Tests (threshold in parentheses)
1.0            100                          c in [0.5, 2.0]: Linear Wilcoxon (18); same as Min (18) or Symmetric (6,6,6)
1.0            36                           c in [0.5, 2.0]: Linear Wilcoxon (17); same as Symmetric (6,6,5)
1.0            11.39                        c in [0.5, 1.0]: Symmetric (6,6,4); c in [0.5, 2.0]: Linear Wilcoxon (16)
2.0            2.55                         c in [1.0, 2.0]: Symmetric (6,6,4)
Without some knowledge of the signal level and the noise parameter c, an optimum choice of a particular test from the class of monotone admissible tests does not seem possible.
A Block Coding Technique for Encoding Sparse Binary Patterns

GENGSHENG ZENG AND NASIR AHMED
Abstract-This correspondence introduces an efficient method for encoding sparse binary patterns. The method is very simple to implement and performs in a near-optimum way.
Manuscript received April 2, 1988; revised October 14, 1988. This work was supported by the Sandia National Laboratories, Albuquerque, NM 87185. The authors are with the Department of Electrical and Computer Engineering, University of New Mexico, Albuquerque, NM 87131. IEEE Log Number 8926691.
I. INTRODUCTION

The purpose of this correspondence is to introduce an efficient method for encoding sparse binary patterns (images), where the term "sparse" implies that the patterns consist of a small number of ones relative to the number of zeros. The technique we consider will be referred to as block coding. It is shown that block coding enables us to encode sparse binary patterns with average codeword lengths L_av(p) that compare very closely to the source entropy H(p) when p is small, where p is the probability of finding a one in the given pattern. Since L_av(p) closely approximates H(p), we can view such block codes as being close to optimum for encoding sparse binary patterns [1].

The sparse pattern we deal with is assumed to be a memoryless binary source. This kind of pattern is found in a 3-D authentication scheme [2]. In data compression, the patterns are usually not memoryless sources. However, when LPC (linear predictive coding) is applied, the resulting error pattern is very close to a memoryless model. Yasuda [3] presented some effective methods to decorrelate 2-D facsimile patterns. For example, Boolean algebra prediction functions [3, p. 834] are shown to be very effective for typical typewritten English and Japanese documents and weather maps. The error patterns that result via prediction are sparse, and hence our block coding technique may be useful for this application also. After the block coding method is introduced, it is compared to some other existing methods.

II. BLOCK CODING

For the purposes of discussion, we consider a (128 x 128) sparse binary pattern in which the probability of finding a one is p = 0.01. As such, the probability of finding a zero is q = 1 - p = 0.99. If this pattern is scanned on a row-by-row basis, it follows that we obtain a 1-dimensional array consisting of n = 16384 bits. The proposed block coding scheme consists of the following steps.

1) Map the 2-D image into a 1-dimensional array by row-by-row scanning. The 1-dimensional array consists of n = 2^M ones and zeros.
2) Divide the 2^M-bit string obtained in Step 1) into 2^a blocks, with each block consisting of 2^b bits; it then follows that M = a + b.
3) Between any two adjacent blocks, introduce a comma, which is encoded as a "0."
4a) If there is no one in a block, then no coding is needed for the block.
4b) If there are ones in a block, then assign each one a prefix "1" followed by b bits to indicate its location in the block. This location is with respect to the left end of the 2^b-bit string and is numbered from 0 through 2^b - 1. The reason for the prefix "1" is to realize an instantaneous code.
5) The bit-string resulting from Step 4) is the desired code.

The above steps are best illustrated by a simple example. Suppose we consider the (4 x 4) binary pattern in Fig. 1. Then the results obtained via Steps 1)-5) above are as summarized in Fig. 1, for the case M = 4 and a = b = 2; a short encoder/decoder sketch follows Fig. 1. The decoding procedure is just the reverse of the coding procedure. In general, if we have k ones and n - k zeros, the code length, L(k), is given by
L(k) = (number of commas) + k (coding bits per one)
     = (2^a - 1) + k(b + 1)
     = (2^a - 1) + k(M - a + 1).                                 (1)
The probability of k ones and n - k zeros occurring is

Pr(k) = C(n, k) p^k q^(n-k),                                     (2)

where C(n, k) denotes the binomial coefficient.
Given pattern (4 x 4):
  0 1 0 0
  0 1 1 0
  0 0 0 0
  0 0 0 0
Step 1) yields: 0100011000000000
Steps 2)-4) with 2^a = 2^b = 4 yield:
  0100 , 0110 , 0000 , 0000
  101  0 101110 0      0
Step 5) yields the desired code: 101010111000

Fig. 1. Illustrative example related to the block encoding scheme.
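To make the steps concrete, the following is a minimal Python sketch of Steps 2)-5) and the corresponding decoder. The function names block_encode and block_decode are ours, and the sketch assumes the row-by-row scan of Step 1) has already produced the flat 0/1 array.

def block_encode(bits, a, b):
    # Encode a flat 0/1 sequence of length 2**(a+b) (Steps 2)-5)).
    assert len(bits) == 2 ** (a + b)
    size = 2 ** b
    out = []
    for j in range(2 ** a):                  # Step 2: 2**a blocks of 2**b bits
        if j > 0:
            out.append("0")                  # Step 3: comma between adjacent blocks
        block = bits[j * size:(j + 1) * size]
        for pos, bit in enumerate(block):
            if bit:                          # Step 4b: prefix "1" + b-bit location
                out.append("1" + format(pos, "0{}b".format(b)))
    return "".join(out)                      # Step 5: the desired code

def block_decode(code, a, b):
    # Invert block_encode: recover the flat 0/1 sequence.
    bits = [0] * 2 ** (a + b)
    block, i = 0, 0
    while i < len(code):
        if code[i] == "0":                   # comma: advance to the next block
            block += 1
            i += 1
        else:                                # "1" + b-bit location of a one
            pos = int(code[i + 1:i + 1 + b], 2)
            bits[block * 2 ** b + pos] = 1
            i += 1 + b
    return bits

Applied to the Fig. 1 pattern with a = b = 2, block_encode returns the 12-bit string 101010111000, in agreement with L(3) = (2^2 - 1) + 3(2 + 1) = 12 from (1).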
Fig. 2. Average codeword length comparison (horizontal axis: probability for "1", from 0.00005 to 0.291; vertical axis: average codeword length).
The average code length, L_av, is found as

L_av = sum_{k=0}^{n} Pr(k) L(k)
     = sum_{k=0}^{n} C(n, k) p^k q^(n-k) [(2^a - 1) + k(M - a + 1)]
     = (2^a - 1) + (M - a + 1) np.                               (3)

Here, L_av is a function of a. We now choose a such that L_av achieves its minimum. Setting the derivative of L_av with respect to a to zero yields

2^a ln 2 - np = 0

or

a = log2(np / ln 2).                                             (6)

For the case we are considering, n = 16384 and p = 0.01. Thus, a = 7.885, which upon rounding yields a = 8. Substituting M = 14, n = 2^M = 16384, a = 8, and p = 0.01 in (3) leads to

L_av = 1401.88 bits.                                             (7)
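The arithmetic above can be reproduced with a few lines of Python; the sketch below simply evaluates (6) and (3) for the running example.

import math

n, p = 16384, 0.01
M = int(math.log2(n))                        # M = 14
a_real = math.log2(n * p / math.log(2))      # eq. (6): optimum a = 7.885...
a = round(a_real)                            # a = 8
L_av = (2 ** a - 1) + (M - a + 1) * n * p    # eq. (3): 1401.88 bits
print(a_real, a, L_av)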
The value of L_av in (7) enables us to locate a point on the L_av^(block)(p) curve in Fig. 2 corresponding to p = 0.01. The remaining points are obtained in a similar manner.

III. COMPARISON

It is well known that the lower bound for the average length of any code is the source entropy H(p). In our example, the source entropy is given by

H(p) = n(-p log2 p - q log2 q).                                  (8)
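Continuing the sketch above, the bound (8) can be evaluated the same way; for p = 0.01 it comes to roughly 1324 bits, so the block code's 1401.88 bits is within about 6 percent of the entropy.

import math

n, p = 16384, 0.01
q = 1 - p
H = n * (-p * math.log2(p) - q * math.log2(q))   # eq. (8): ~1323.7 bits
print(H, 1401.88 / H)                            # ratio ~1.06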
Fig. 2 shows that H(p) is indeed the lower bound of all the coding methods. Obviously, if the pattern is too sparse, one can just code it by the coordinates of its ones. When n = 2^14, each one requires 14 bits; thus, the average codeword length is
L_av^(coor) = 14np,                                              (9)

which is a straight line in Fig. 2. Another technique we compare with is the well-known run-length coding [3], [4]. Its average codeword length is denoted as L_av^(run) in Fig. 2. Last, but certainly not least, another block coding [5] will be compared. Since this method was first developed by Kunt [6], we refer to it as Kunt's method. In Kunt's method, a pattern of n bits is divided into n_1 blocks, each block having n_2 bits, i.e., n = n_1 n_2. If a block is all zero, it is coded by a "0"; otherwise it is coded by a prefix "1" followed by the original block. Clearly, the average codeword length is given by
L_av^(Kunt) = n_1 + n_1 n_2 [1 - Probability(all-zero block)].   (10)

As indicated in [5], it is difficult to find the optimum block size. Here, we find the optimum block size by brute force (see the sketch after the list below) and plot L_av^(Kunt) versus p in Fig. 2. In Fig. 2, the horizontal axis runs from p = 0.00005 to p = 0.291; this is because when p > 0.291, the average codeword lengths for all methods are greater than 16384. As observed from Fig. 2, the best coding methods are listed below.
0.191 < p < 0.291        Kunt's block coding
0.00018 < p < 0.191      Block coding
p < 0.00018              Coordinate coding.
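The brute-force search mentioned above can be sketched as follows; the helper names are ours, the expected length follows (10) with Probability(all-zero block) = q^(n_2) for a memoryless source, and only power-of-two block sizes dividing n are tried.

import math

def kunt_avg_length(n, p, n2):
    # Eq. (10): one flag bit per block, plus n2 bits for each nonzero block.
    n1 = n // n2
    return n1 + n1 * n2 * (1 - (1 - p) ** n2)

def best_kunt(n, p):
    # Brute-force search over power-of-two block sizes n2.
    sizes = (2 ** i for i in range(1, int(math.log2(n)) + 1))
    return min((kunt_avg_length(n, p, n2), n2) for n2 in sizes)

print(best_kunt(16384, 0.01))   # (average length, best n2) for the running example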
When 0.0014 < p < 0.0059, run-length coding is a little better than block coding. But run-length coding needs storage of a codebook, whereas block coding does not.

IV. CONCLUSION

It has been shown that the proposed block coding is very efficient for encoding sparse patterns, e.g., for p < 0.191 when n = 16384. Usually, error patterns resulting via prediction satisfy p < 0.191. Therefore, block coding could find applications in coding such error patterns. Also, as indicated in Fig. 2, L_av^(block)(p) is close to the optimal standard H(p) when p is small. If a block length is optimized for p = p_0, but the actual p turns out to be p_1, the resulting loss in performance is |L_av^(block)(p_0) - L_av^(block)(p_1)|, as indicated in Fig. 2.
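One way to estimate this mismatch loss from (3) is to pick a for the design value p_0 and evaluate (3) at the actual p_1; the sketch below (our helper names, with p_0 = 0.01 and p_1 = 0.005 as hypothetical values) does exactly that.

import math

def block_avg_length(n, p, a):
    # Eq. (3) with M = log2(n).
    M = int(math.log2(n))
    return (2 ** a - 1) + (M - a + 1) * n * p

def best_a(n, p):
    # Integer a minimizing eq. (3); eq. (6) gives the real-valued optimum.
    M = int(math.log2(n))
    return min(range(1, M + 1), key=lambda a: block_avg_length(n, p, a))

n, p0, p1 = 16384, 0.01, 0.005      # p0: design value, p1: actual value
a0, a1 = best_a(n, p0), best_a(n, p1)
loss = block_avg_length(n, p1, a0) - block_avg_length(n, p1, a1)
print(a0, a1, loss)                  # extra bits paid for the mismatched block length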
REFERENCES

[1] N. Abramson, Information Theory and Coding. New York: McGraw-Hill, 1963, ch. 4.
[2] R. Krukar, N. Ahmed, and D. Park, "Authentication scheme using Walsh encoding of random patterns," IEEE Trans. Electromagn. Compat. (Walsh Functions Applications), Aug. 1988.
[3] Y. Yasuda, "Overview of digital facsimile coding techniques in Japan," Proc. IEEE, vol. 68, pp. 830-845, July 1980.
[4] N. S. Jayant and P. Noll, Digital Coding of Waveforms: Principles and Applications to Speech and Video. Englewood Cliffs, NJ: Prentice-Hall, 1984.
[5] M. Kunt and O. Johnsen, "Block coding of graphics: A tutorial review," Proc. IEEE, vol. 68, pp. 770-786, July 1980.
[6] M. Kunt, "Comparaison de techniques d'encodage pour la réduction de redondance d'images facsimilé à deux niveaux," Ecole Polytechnique Fédérale de Lausanne, Lausanne, Switzerland, Thesis 183, 1974.