On-Line Compression of High Precision Printer Images by Evolvable Hardware

M. Salami, H. Sakanashi (1), M. Tanaka (2), M. Iwata, T. Kurita and T. Higuchi
Electrotechnical Laboratory (ETL)
1-1-4 Umezono, Tsukuba, Ibaraki, 305-0047, Japan.
Email: [email protected], Fax: +81-298-54-5871

Abstract

This paper describes an image compression system based on Evolvable Hardware (EHW) for High Precision Printers (HPP). These printers are especially flexible for book publishing, but require large disk space for images, particularly at higher resolutions. To increase printing speed and reduce disk space, the images should be compressed. The compression system must be 1) adaptive, so that it changes depending on image characteristics, and 2) on-line, which means implemented in hardware. Standard compression methods use a simple template-change strategy that is not efficient for HPP images. We used an EHW system to compress HPP images in real time. EHW is a type of adaptive hardware that allows evolutionary algorithms to change the hardware configuration in real time. It works as fast as other compression systems (such as the JBIG standard), but changes the image modeling to reflect changes in the image characteristics. Simulation results show more than a 50% increase in compression ratio over JBIG for the printer system.
1. Introduction
In the last decade digital data has become the language of communication between electronic systems. It has a few properties that make it very suitable as input and output for electronic systems: first, it is easy to transmit or store at high speed; second, it can represent many types of data, such as voice, image or video; and third, it can be compressed to reduce its size. Although many systems can work with digital data, for better performance a system should handle the data effectively by providing optimal speed and, if possible, reducing the size of the data by compression. Today's electronic printers use digital data for printing and book publishing. They can be used for distributed printing and on-demand printing with high speed and flexibility. For book publishing these printers have to handle hundreds of pages and print them thousands of times. For example, offset printers usually handle book printing and commonly operate at a speed of 250 pages/min. High Precision Printers are another kind of electronic printer that uses digital data for book publishing. They are much more flexible than offset printers for book publishing, but their operating speed is only 72 pages/min. The data size of a page for HPP is typically very large, especially for color images. To make HPP more favorable compared to offset printers we have to improve the printing speed with compression. Compression reduces the size of the data for one page, allowing HPP to double or triple the printing speed and at the same time reduce the disk space per page. By reducing the data size, HPP can also print higher-resolution images. There are a few problems with data compression in HPP. First, the data must be compressed losslessly, which means the compression ratio will be low. Second, the encoding process must be fast and must not create large delays for the printer. Third, the compression ratio must be relatively high to justify the additional system hardware.
Fourth, the decoding process must be extremely fast because it directly determines the speed of the HPP. The image data for HPP can be classified as a type of bilevel data.

(1) Complex Systems Engineering, Hokkaido University, Japan. (2) Mitsubishi Heavy Industries, Ltd., Nagoya, Japan.

There are a few algorithms for bilevel lossless compression, like those in JBIG (Joint Bi-level Image Experts Group) [1] or
pattern matching [2], but they are not efficient for HPP images. For example, the adaptive template changes of JBIG are not sufficient for HPP, and the progressive transmission in JBIG is unnecessary because the printer works with only the base-layer images. The high speed required by printers rules out algorithms like pattern matching because of their large computation time. Our approach to compression is similar to other algorithms in that it consists of modeling and encoding. However, because of the large data size, any single fixed model for one image will be ineffective. The model for HPP must change during compression based on the characteristics of the image. Additionally, for a new image we need to build a different model. Most algorithms for lossless bilevel compression can only be implemented in software [3] or have a nearly fixed model. Applying different models is very difficult in software or hardware, so we decided to work with a parametric model and change its parameters during compression. To satisfy the speed of HPP, we need to implement this parametric model in hardware and use an efficient algorithm (again in hardware) to adaptively change the parameters in real time. There are many questions to address with this approach, such as how to define the parametric model, what algorithm is efficient for adaptive changes in real time, and how the HPP will be connected to this system for compressing the image. For the parametric model we used context modeling, but with much more freedom, allowing every location in the template to change adaptively. Since this is a new problem and there is no efficient and fast algorithm for it, we used Genetic Algorithms (GAs) because of their speed and robustness in solving parametric problems. Simulation results show this evolvable hardware (GAs plus hardware) produces very satisfactory compression ratio and speed for HPP.
It is an alternative to the standard JBIG compression system when the image characteristics change in real time and adaptation must be done in hardware. We discuss this configuration further in the next sections. The remainder of this paper describes how an evolvable hardware system can be used for image compression in HPP. Section two discusses why adaptive change is important for image compression in HPP. Section three introduces evolutionary algorithms, with emphasis on GAs. Section four explains evolvable hardware architectures in more detail. Section five discusses the EHW configuration used for image compression in HPP. Section six describes the hardware interface between the EHW and the printer system. Section seven presents simulation results and compares the compression performance of EHW and JBIG.
2. Adaptive template selection in JBIG
In bilevel image compression, the pixels in the neighborhood of the pixel being coded are treated as the context in which coding takes place. This means that the patterns formed by the pixels in the neighborhood influence how the pixel is coded. One standard system for bilevel image compression is JBIG. It consists of two parts: 1) context selection and 2) an arithmetic coder. The context selection part determines the context based on a default template. A template is like a mask that determines which pixels in the neighborhood of the current pixel will be used as the context (Figure 1). The context consists of ten pixels in the standard JBIG algorithm for base-layer compression.
Figure 1. The lowest resolution layer JBIG image encoder (block diagram: the image feeds an adaptive template selection unit and a model template; the resulting context, together with the current pixel, drives the adaptive arithmetic encoder (QM-Coder) to produce the compressed output).
The location of the pixels in the encoding template is very important, and the compression ratio depends on the selection of the pixels in the template. There are reports of 30% better compression ratios for halftone images when the best template is selected [4, 5]. However, the location of the best template depends on the characteristics of the image. For on-line operation, which is essential for printers, it is impossible to know such characteristics beforehand. Instead, we have to search for the best template and compress the image simultaneously. In JBIG there is a simple algorithm to determine one pixel of this template adaptively. However, the search area for this pixel is very limited, and it is not useful for most images in a printer system. Most printer images require a larger search area for the adaptive pixels. Furthermore, all pixels in the template should be selected adaptively. For example, in printer images there is a very strong relationship between the current pixel being encoded and 5-6 neighboring pixels at a far vertical distance. The standard JBIG algorithm cannot properly compress these kinds of images because it is based on one adaptive pixel and its search area is limited to only 24 horizontal locations. The next two sections discuss Genetic Algorithms and Evolvable Hardware. By using the on-line adaptation property of EHW, it is possible to search for these adaptive pixels and change the template in real time. If the changes are correct, a better compression ratio will be achieved.
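To make the template mechanism concrete, the template-to-context mapping can be sketched in a few lines of Python. This is an illustration of the idea, not the JBIG standard's exact logic, and the template layout below is made up for the example.

```python
# Illustrative sketch of context modeling: a template is a list of
# (row, col) offsets relative to the current pixel, and the bits found
# at those offsets are packed into an integer context index.

def context_index(image, y, x, template):
    """Pack the neighborhood pixels selected by `template` into an int.

    `image` is a 2-D list of 0/1 pixels; offsets falling outside the
    image are treated as white (0), a common convention for bilevel
    coders.
    """
    ctx = 0
    for dy, dx in template:
        ny, nx = y + dy, x + dx
        inside = 0 <= ny < len(image) and 0 <= nx < len(image[0])
        ctx = (ctx << 1) | (image[ny][nx] if inside else 0)
    return ctx

# A ten-pixel template gives 2**10 = 1024 distinct contexts, matching
# the ten-pixel JBIG template described above. This layout is
# hypothetical, not the JBIG default template:
template = [(-2, -1), (-2, 0), (-2, 1),
            (-1, -2), (-1, -1), (-1, 0), (-1, 1), (-1, 2),
            (0, -2), (0, -1)]
image = [[0, 1, 0, 1], [1, 0, 1, 0], [0, 0, 1, 1]]
print(context_index(image, 2, 2, template))
```

The arithmetic coder then keeps separate probability estimates per context index, which is why moving template pixels to more predictive locations directly improves the compression ratio.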
3. Genetic Algorithms
During the 1950s researchers became interested in genetic processes and the possibility of emulating them in computer systems. The formal theory was initially developed by John Holland and his students, and in recent years it has been applied to problems such as optimization and learning. Genetic Algorithms are probabilistic algorithms, and their behavior is still in many ways not well understood. A GA operates on a problem that is specified in terms of a number of parameters. These may be the values of coefficients for a function optimization, parameters for the real-time operation of an industrial plant, or structural details of a neural network, such as the number of neurons in each layer or the learning rates. One key feature of GAs is that they hold a population of such parameter sets, so that many points in the problem space are manipulated simultaneously. The population is initially generated either at random or by some heuristic. The former is used when the aim is to compare different algorithms; the latter may be more appropriate when the object is to solve a real problem. Each set of parameters may be regarded as a vector, but the traditional name is a string. Another key feature of Holland's GA [6] is that these parameters are bit strings, with real- or integer-valued parameters coded by an appropriate number of bits [7]. Each string is rated by running the system it specifies. For a function evaluation this may be very quick; for an aircraft simulation or a neural network, the evaluation might take minutes or even hours. A new population is then generated by choosing the best strings preferentially. A simple way of doing this is to allocate children in proportion to test performance (or rather, in proportion to the ratio of a string's test performance to the average of all the strings).
With no other operators affecting the population, the result is that the best string increases in number exponentially and rapidly takes over the whole population. Novel structures are generated by recombination, a process resembling the reproduction mechanism in nature. Two members of the new population are chosen at random, and offspring are produced by mixing parameters from the parents. In the earliest work, a single-point crossover was performed in which parameters were copied from one parent up to some randomly chosen point, and the remainder taken from the other parent. Thus the strings ABCD and EFGH might be crossed to produce AFGH and EBCD. Much subsequent work on GAs has studied the relative merits of different recombination algorithms; the preferred form of recombination is problem- and coding-dependent. A second operator that introduces diversity is mutation, in which the value of a parameter is changed arbitrarily. Unlike recombination, this process is not the major source of new structures but serves to produce occasional new "ideas", and to replace combinations that might be lost in
the stochastic selection processes. The precise role of mutation depends on the coding used in the genes. The cycle for a basic genetic algorithm is as follows: generate a population of parameter sets; evaluate each member against the problem; select pairs of members for reproduction on the basis of performance; recombine parameter sets from the pairs and mutate a few of the offspring to generate the new population; then restart the cycle. Figure 2 shows one cycle of this process.
Figure 2. Generating a new population by genetic operators (the old population passes through selection, crossover at a randomly chosen crossover point, and mutation of individual bits to form the new population; the diagram illustrates this on example 8-bit strings).
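The cycle above can be sketched in Python. The operator choices here (binary tournament selection, single-point crossover, bitwise mutation, one elite) match the GA settings quoted later in the paper, but they are one reasonable instantiation rather than a unique definition, and the OneMax fitness is a toy stand-in.

```python
import random

random.seed(0)

def evolve(fitness, length=8, pop_size=20, generations=30,
           p_cross=0.8, p_mut=0.03):
    """One possible coding of the cycle in Figure 2: evaluate, select,
    recombine with single-point crossover, and mutate individual bits."""
    pop = [[random.randint(0, 1) for _ in range(length)]
           for _ in range(pop_size)]
    for _ in range(generations):
        best_now = max(pop, key=fitness)
        new_pop = [best_now[:]]               # elitism: keep the best string
        while len(new_pop) < pop_size:
            # binary tournament selection for each parent
            p1 = max(random.sample(pop, 2), key=fitness)
            p2 = max(random.sample(pop, 2), key=fitness)
            child = p1[:]
            if random.random() < p_cross:     # single-point crossover
                cut = random.randrange(1, length)
                child = p1[:cut] + p2[cut:]
            for i in range(length):           # bitwise mutation
                if random.random() < p_mut:
                    child[i] ^= 1
            new_pop.append(child)
        pop = new_pop
    return max(pop, key=fitness)

# Toy fitness: count of 1-bits ("OneMax"); the GA should approach all ones.
best = evolve(fitness=sum)
print(best)
```

In the compression system described later, the chromosome encodes template pixel locations and the fitness is derived from the compressed size of a stripe, but the cycle itself is unchanged.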
4. Evolvable Hardware
It is possible to use genetic algorithms to design, optimize or, finally, evolve a hardware device. If an evolutionary algorithm designs the hardware, it is called a hardware design algorithm; if it optimizes the hardware, it is called a hardware optimization algorithm. We are interested in the third option, which uses an evolutionary algorithm to evolve the hardware in real time. A good definition of evolvable hardware appears in [8]: "Evolvable systems are digital systems where reconfiguration of one of its parts is concurrent with execution". This definition distinguishes evolvable hardware from reconfigurable hardware, where reconfiguration and execution happen in two separate phases. The advantage of evolvable hardware is its capacity for on-line adaptation, which can be used for image compression in printers. Research on EHW was initiated at the Electrotechnical Laboratory in Japan around 1992 [9]. The first practical evolvable hardware system was described in [10] and consists of PLDs (Programmable Logic Devices) that can be changed by an evolutionary algorithm for pattern recognition. This system was implemented on reconfigurable hardware, but in fact it checked the patterns in real time and classified them concurrently. Some EHW systems are based on Cellular Automata (CA) [11] and some on specialized hardware adapted to an application [12]. We used specialized hardware here to guarantee a reasonable compression ratio compared with other image compression systems. Evolvable hardware gives us the opportunity to have hardware with the ability to change. There are two main questions regarding changing hardware: "how to change" and "what to change". Despite disagreement among researchers about what to change, most prefer evolutionary algorithms such as GAs to achieve the change.
The range of platforms for this change in hardware is very wide. In some cases a software representation of hardware in an HDL (Hardware Description Language) is used as the platform of change; this software in turn controls a hardware unit on an FPGA (Field Programmable Gate Array) board. Keymeulen et al. [13] use an FPGA for robot navigation, while Thompson [14] uses an FPGA for designing analog circuits. Murakawa [15] uses special hardware for Radial Basis Functions. Sipper [11] and deGaris [16] use Cellular Automata. For industrial applications these models are too simple or too slow to compete with conventional systems, and application-specific platforms are needed for acceptable performance. Nearly all platforms use Genetic Algorithms to make the changes to the hardware. These GAs can be implemented in hardware or software. A software GA provides more flexibility in the algorithms, but it forces the evolvable hardware to use a large space for implementation.
On the other hand, a hardware GA makes the final system smaller and faster but limits the flexibility of the changes to the hardware.
5. EHW model for bilevel image compression
The JBIG standard approach to bilevel image compression emphasizes speed and a high compression ratio. These two factors are difficult to achieve at the same time: if one of them increases, the other decreases. Although the EHW aims to increase the compression ratio, the evolutionary algorithm inherently requires a large computation time to be successful. In the EHW configuration we tried to strike a compromise between the required speed and a higher compression ratio. The EHW in fact searches all possible templates, including the JBIG default template, and selects the best one for further compression. Figure 3 shows the proposed EHW model for compressing printer images. In this configuration the template for context selection is changed by genetic algorithms depending on the characteristics of the image. In theory the system must always produce a better compression ratio than the JBIG standard, since the JBIG default template is included in the search space.
Figure 3. Printer image encoder based on EHW (the image is held in a memory buffer; the Genetic Algorithms engine drives one model template and QM-Coder pair to evaluate candidate templates, while the best template drives a second model template and QM-Coder pair that produces the compressed code from the context and the current pixel).
In Figure 3, two QM-Coder units, two model template modules and the Genetic Algorithms search engine are used to find the best template and apply it to future compression. The system also sends these templates to the decoder for decompressing the image (along with the actual compressed data). It is possible, however, to use only one QM-Coder and one model template under a time-sharing scheme; we used that approach to reduce the cost of the hardware implementation.
6. Printer system and the interface hardware to EHW
The rough flow of processes for electronic printing is shown in Figure 4. The image contents to be printed are created using desktop publishing (DTP) systems on computers. The original data is a color image represented in PostScript format, and it is divided into four halftone dot images by the Raster Image Processor (RIP). Each halftone dot image is binary image data corresponding to one of four colors: cyan (C), magenta (M), yellow (Y) and black (K). The total size of the four images is much larger than the size of the data before the RIP; one A4-size color image at 800 dpi requires about 31 Mbytes of storage. A digital printer prints different pages at a speed of 72 pages/min, and hence it must have huge disk space and a very fast data-transfer bus. In order to reduce the costs of storage and communication, the halftone dot images must be compressed as much as possible, and the compressed data must be expanded faster than the printing speed. Figure 5 shows a brief diagram of the hardware implementation of the bilevel image encoder using the EHW shown in Figure 3. This compression module has two function modes, training mode and compression mode, and their processes are almost identical; the only difference is that the module outputs codes only in compression mode. A halftone dot image is compressed along the path represented by gray arrows in Figure 5. At the beginning, a number of lines from the original data are stored in the line memory. The pixels which may be used to code the current pixel are then selected and stored in the
reference buffer. These two components are controlled by the core processor, which also runs the GA operations.
Figure 4. The configuration of the electronic printer system (DTP on a computer produces the original image, e.g. in PostScript; the Raster Image Processor (RIP) turns it into four binary halftone dot images, C, M, Y and K, about 31 Mbytes for an A4 color page; lossless, high-ratio compression stores them on a hard disk array, and fast expansion during decompression feeds the printing machine at 72 pages/min).
In training mode, all members of the GA population are sent to the context generator one by one as candidate templates for compression. A context, generated from the pixels in the reference buffer using a template, is sent to the arithmetic coder together with the current pixel from the reference buffer. We used a QM-Coder as the arithmetic coder because it is a very fast and efficient entropy coder for binary data. The arithmetic coder outputs the size of the compressed code as the fitness value of that member of the GA population.
Figure 5. Configuration of the compression module in the electronic printer system (image data enters a line memory and then a reference buffer inside the reconfigurable hardware; the context generator applies a template to drive the arithmetic coder, which outputs the compressed code and a fitness value; the core processor running the GA controls these units over a local bus).
In compression mode, only the best template (the one with the largest fitness in the GA population) is used to generate a context, with the arithmetic coder outputting the compressed codes. The compressed codes plus the best templates are transferred to the decoder. In this system the speed of decoding is much faster than the encoding speed, which is essential for printers. However, in situations where only a small number of pages is printed, the encoding process becomes the bottleneck of the system. For a more acceptable system we would have to implement the encoding part in hardware as well.
7. Simulation results
The proposed EHW system was applied to compressing bilevel images for a printer system. The size of images for a printer is fairly large, especially when color images are involved: one color image is around 31 Mbytes and is divided into four separate 8-Mbyte images. For a printer system there are three kinds of images. 1) Halftone: the image has only two tones, black and white, where the ratio of black areas to white areas is close to one. 2) Text: the image contains vast white areas and very small black areas, like a page of text. 3) Image and text: a mixture of the other two kinds. The first kind, which contains only halftone data, is the most difficult to compress. The third kind is also difficult to compress because of its halftone areas. The second kind is much easier to compress, and the methods for compressing it are well developed. However, these methods are normally inefficient at compressing the halftone
kind of images. Although the JBIG standard method was proposed to overcome this problem by introducing one algorithm to compress all three kinds of images, it is not effective for compressing printer images. Figure 6 shows two examples: a halftone image (C plane) and a halftone+text image (K plane) for a printer. The size of one image is 9472 by 6464 pixels.
Figure 6. The halftone image (C Plane) and halftone+text image (K plane) from the printer system
In the JBIG algorithm there is only one adaptive pixel in the template, and that is not efficient for compressing the first and third kinds of printer images. We have simulated the cases where the number of undefined pixel locations in the template is more than one; the results are in Table 1. In all simulations the template size is ten, which means there are 1024 contexts for the QM-Coder. The first case is based on JBIG with AT-max (the adaptive pixel range) equal to 8, and in the next four cases the number of undefined pixel locations in the template is changed to one, two, three and six. The search area for these undefined pixel locations is 256 pixels around the current pixel. The compression ratio (CR) is calculated as the ratio of the number of bits required to represent the data before compression to the number of bits required after compression.

Table 1. The effect of the number of undefined pixel locations in the template on compression ratio (CR)

Number of undefined pixel locations | Halftone image-C (CR) | Halftone+text image-K (CR)
JBIG (AT-max = 8)                   | 3.35                  | 7.06
One pixel                           | 4.32                  | 7.49
Two pixels                          | 5.16                  | 7.49
Three pixels                        | 5.59                  | 8.75
Six pixels                          | 5.98                  | 9.87
In these simulations no evolutionary algorithm is used; the undefined pixel locations in the template are selected according to their effectiveness for compression, using statistics and heuristics computed over the whole image. The table covers two image files: the C plane of the halftone image and the K plane of the halftone+text image (Figure 6). For the halftone image, when the six best pixels are inserted into the default template the compression ratio is 78% better than JBIG (and 38% better than the one-adaptive-pixel case). The table shows that the location of pixels in the template is essential to a good compression ratio. However, including a larger number of undefined pixel locations in the template requires more computation time, more space for statistics, and more complex heuristic methods. In our hardware approach based on GAs, the EHW finds
the best 10 undefined pixel locations in the template in real time and applies them for compression in hardware. There are two approaches to searching for these pixels. The first is called two-pass compression: in the first pass it finds the best locations by examining all the image data, and in the second pass it compresses the image from top to bottom with the best template. In this approach the best template is transmitted to the decoder at the beginning of the compressed data. This method is slow but produces a very good compression ratio. In some variants only part of the image is used for selecting the best template, to speed up the operation, but these cannot adapt properly to changes in the image because only part of it is examined. The second approach is called one-pass compression: while the image is being compressed, the best pixel locations in the template are searched for simultaneously. Intermediate changes in pixel locations are sent to the decoder along with the compressed code. This method is faster than the previous one but generally gives a lower compression ratio. We compared the two approaches in our simulations and observed that the two-pass method gives a slightly better compression ratio than one-pass (Table 2), but because speed is very important for the compression process, the second approach is more favorable for printers. In the one-pass approach an area of the image (a stripe) is selected, and the genetic algorithm uses this data for its fitness calculations. The GA is limited in the number of generations because the data is not enough to run it for a long time, so it is essential for the GA to converge quickly within a limited number of generations.
After the GA reaches the end of the stripe, the best chromosome in the population is used as the template for future compression and is sent to the decoder as well (some conditions can be set for this change of template). A few parameters determine the success of this approach. The first is how many pixels the GA must handle at a time. Typically the total template size is 10, and the GA works with a few pixels and replaces only part of the whole template. It is possible for the GA to work with ten pixels and replace the template entirely, but since the replacement pixels must be included in the compressed code, a ten-pixel GA carries more overhead data to represent these changes. It is also more difficult to search for 10 pixels than for a smaller number. On the other hand, a ten-pixel GA should naturally produce a better compression ratio because it searches all possible combinations, but that holds only if there is enough time to search. In any case, the chromosome length for the GA is 80 bits if the adaptive template contains 10 pixels. In our work we used a one-pixel GA and found it produced a 10% better compression ratio than the ten-pixel GA. In the one-pixel GA, the GA works each time on part of the full chromosome (8 bits): for every new stripe it searches for the best location of the next pixel in the adaptive template. In the ten-pixel approach, the GA always works with the full chromosome (80 bits). The second parameter is the search area for the GA. If this area is too large the GA cannot find good locations in the limited time, but if it is too small the compression ratio is reduced. A small area is nevertheless preferable because, in an image, the probability of finding a good location falls with distance. We tested search areas of 256 and 512 pixels and observed that the 512-pixel area actually gives a lower compression ratio than the 256-pixel area.
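The text fixes the chromosome at 8 bits per adaptive pixel (80 bits for ten pixels, addressing a 256-pixel search area), but does not spell out how a gene maps to a template position. The decoding below is therefore a plausible assumption for illustration: a 16-wide by 16-tall causal window above the current pixel, scanned row-major.

```python
# Hypothetical decoding of the 80-bit chromosome described above:
# ten genes of 8 bits, each addressing one of 256 neighborhood positions.

def decode_gene(gene):
    """Map an 8-bit gene (0..255) to an assumed (dy, dx) template offset."""
    assert 0 <= gene < 256
    dy = -(gene // 16) - 1   # rows strictly above the current pixel
    dx = (gene % 16) - 8     # columns -8 .. +7 around it
    return dy, dx

def decode_chromosome(bits):
    """Split an 80-bit chromosome into ten 8-bit genes and decode each."""
    assert len(bits) == 80
    return [decode_gene(int("".join(map(str, bits[i:i + 8])), 2))
            for i in range(0, 80, 8)]

print(decode_gene(0b00010011))  # gene 19 -> (-2, -5) under this mapping
```

Under this framing, the one-pixel GA of the text searches one 8-bit gene per stripe, while the ten-pixel GA searches all 80 bits at once.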
For the following simulations we selected a 256-pixel area in the neighborhood of the current pixel for the GA search. The third important parameter is the size of the stripe. This parameter is similar to the sample size for fitness calculation in a noisy environment [17]. A larger sample size gives more accurate fitness values, but because the number of generations decreases, the GA cannot search properly. On the other hand, a small sample size produces fitness values with a high percentage of noise and reduces the efficiency of the GA operations. Figure 7 compares the compression ratio when different stripe lengths are used for compressing the halftone image in Figure 6. In all simulations the GA is used with the following properties: population size 20, mutation probability 0.03, crossover probability 0.8, tournament selection, elitism included, linear scaling.
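The one-pass control flow described above can be summarized as follows; `run_ga` and `compress_stripe` are hypothetical stand-ins for the GA engine and QM-Coder of Figure 5.

```python
# Sketch of the one-pass flow: each stripe is compressed with the current
# best template, the template update is shipped in-band to the decoder,
# and the GA evolves an improved template on that stripe for later stripes.

def one_pass_compress(stripes, initial_template, run_ga, compress_stripe):
    template = initial_template
    stream = []
    for stripe in stripes:
        stream.append(("template", template))            # in-band update
        stream.append(("code", compress_stripe(stripe, template)))
        template = run_ga(stripe, template)              # evolve for next
    return stream

# Toy stand-ins, just to exercise the control flow:
stream = one_pass_compress(
    stripes=["s0", "s1", "s2"],
    initial_template="T0",
    run_ga=lambda stripe, t: t + "+",
    compress_stripe=lambda stripe, t: stripe + "|" + t,
)
print(stream)
```

Note that each template record precedes the code it was used for, so the decoder always has the template in hand before decoding the stripe.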
Figure 7. Compression ratio vs. the length of stripe (compression ratio, ranging from about 4.6 to 6.4, plotted against the number of lines in a stripe, from 0 to 1200).
The best stripe length according to Figure 7 is 128 lines; stripe lengths above or below this value produce lower compression ratios. Table 2 compares the performance of EHW with JBIG on 8 test images. Four of them are the halftone images which together make up one color printer image; the other four are the planes of a halftone+text color image. On average, EHW produces a 51% better compression ratio than JBIG with the one-pass approach. We also compressed a text image with EHW and JBIG. For this particular image the EHW compression ratio was 30.57 with the one-pass approach (31.38 for two-pass) and the JBIG compression ratio was 30.01, showing that the EHW result is slightly better.

Table 2. Compression ratios of JBIG and EHW (one-pass and two-pass) on the printer images. HT = halftone, H+T = halftone+text

Image        | JBIG (AT-max=8) | EHW (two-pass) | EHW (one-pass) | Improvement by EHW (one-pass)
HT image-K   | 6.34            | 7.53           | 7.34           | 16%
HT image-C   | 3.35            | 6.49           | 6.03           | 80%
HT image-M   | 3.37            | 6.43           | 5.75           | 77%
HT image-Y   | 5.45            | 6.68           | 6.44           | 18%
H+T image-K  | 7.97            | 9.73           | 10.53          | 32%
H+T image-C  | 5.83            | 9.70           | 9.83           | 69%
H+T image-M  | 5.78            | 9.66           | 9.56           | 65%
H+T image-Y  | 7.06            | 10.53          | 10.57          | 50%
Average      | 5.64            | 8.34           | 8.25           | 51%
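The improvement percentages in Table 2 are ratio-based, i.e. 100 * (CR_EHW / CR_JBIG - 1), rounded to the nearest percent. A quick check against two rows of the table:

```python
def improvement(cr_ehw, cr_jbig):
    """Percent improvement implied by two compression ratios."""
    return round(100 * (cr_ehw / cr_jbig - 1))

# HT image-C (JBIG 3.35, one-pass EHW 6.03) and HT image-K (6.34, 7.34):
print(improvement(6.03, 3.35))  # 80, as in Table 2
print(improvement(7.34, 6.34))  # 16, as in Table 2
```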
8. Discussion
Although we showed that this system can produce a better compression ratio (8.25) than the JBIG standard (5.64) at a similar speed, it has some disadvantages too. The volume of hardware for this system (including the GA) is greater than for the JBIG system, but the extra hardware performs the template adaptation task. To have an adaptive system that changes the template for any kind of image, we have to pay the price of extra hardware: the standard JBIG system has no efficient search mechanism, whereas the EHW, because of its adaptive template strategy, needs GA hardware and therefore more components. Another important point is the performance of the EHW on text images. In the simulations above we mainly used halftone images or a halftone+text image for testing the system. For text images the performance of the EHW is similar to JBIG, because the location of the adaptive pixels is less critical in text images, allowing a fixed template to compress the
image efficiently. For text images other factors like run-length and differential coding are more important.
9. Conclusions
This paper described an on-line adaptive system based on evolvable hardware for high precision printer images. The system is used for compressing bilevel images in printing machines for book publishing. The internal architecture of the system was explained, and simulation results show a 51% better compression ratio than the standard JBIG system on a set of printer test images. The speed of compression is similar to the JBIG system, but the implementation requires more hardware because of the GA adaptation unit. The EHW compresses halftone printer images much better than JBIG, with similar performance on text images.
Acknowledgment This work is supported by MITI Real World Computing Project (RWCP). We thank Dr. Otsu and Dr. Ohmaki in Electrotechnical Laboratory for their support.
References
[1] JBIG, Progressive Bi-Level Image Compression, ISO/IEC International Standard 11544, 1993.
[2] Mohiuddin K., Rissanen J.J. and Arps R., "Lossless Binary Image Compression Based on Pattern Matching", Proceedings of the International Conference on Computers, Systems and Signal Processing, Bangalore, India, 1984.
[3] Howard P.G., "Lossless and Lossy Compression of Text Images by Soft Pattern Matching", Proceedings of the Data Compression Conference 1996 (DCC96), IEEE Computer Society Press, 1996, pp. 210-219.
[4] Forchhammer S. and Jansen K.S., "Data Compression of Scanned Halftone Images", IEEE Transactions on Communications, Vol. 42, No. 2, Feb. 1994, pp. 1881-1893.
[5] Forchhammer S., "Adaptive Context for JBIG Compression of Bi-Level Halftone Images", Proceedings of the Data Compression Conference 1993 (DCC93), IEEE Computer Society Press, 1993, p. 431.
[6] Holland J.H., "Adaptation in Natural and Artificial Systems", The University of Michigan Press, Ann Arbor, 1975.
[7] Goldberg D.E., "Genetic Algorithms in Search, Optimization and Machine Learning", Addison-Wesley, 1989.
[8] Micheli G.D. and Gupta R.K., "Hardware/Software Co-Design", Proceedings of the IEEE, Vol. 85, No. 3, March 1997, pp. 349-365.
[9] Higuchi T., et al., "Evolvable Hardware with Genetic Learning", in Simulation of Adaptive Behaviour, MIT Press, 1992.
[10] Higuchi T., et al., "Evolvable Hardware and its Applications to Pattern Recognition and Fault-Tolerant Systems", Proceedings of the International Workshop: Toward Evolvable Hardware, October 1995.
[11] Sipper M., "Designing Evolware by Cellular Automata", Proceedings of the First International Conference on Evolvable Systems (ICES96), Lecture Notes in Computer Science, Springer-Verlag, 1996, pp. 81-95.
[12] Salami M., Iwata M. and Higuchi T., "Lossless Image Compression by Evolvable Hardware", Proceedings of the Fourth European Conference on Artificial Life (ECAL97), MIT Press, 1997, pp. 407-416.
[13] Keymeulen D., et al., "Evolvable Hardware: A Robot Navigation System Testbed", New Generation Computing Journal, Vol. 16, No. 2, Springer-Verlag, N.Y., Feb. 1998.
[14] Thompson A., "Silicon Evolution", in J.R. Koza et al., eds., Proceedings of Genetic Programming 1996 (GP96), MIT Press, 1996, pp. 444-452.
[15] Murakawa M., Yoshizawa S., Kajitani I. and Higuchi T., "Evolvable Hardware for Generalized Neural Networks", Proceedings of the Fifteenth International Joint Conference on Artificial Intelligence (IJCAI97), Morgan Kaufmann Publishers, 1997, pp. 1146-1151.
[16] deGaris H., "CAM-BRAIN: The Evolutionary Engineering of a Billion Neuron Artificial Brain by 2001 which Grows/Evolves at Electronic Speed Inside a Cellular Automata Machine (CAM)", in Toward Evolvable Hardware, Sanchez E. and Tomassini M. (eds.), Lecture Notes in Computer Science, Springer-Verlag, 1996, pp. 76-98.
[17] Miller B.L. and Goldberg D.E., "Genetic Algorithms, Selection Schemes and the Varying Effects of Noise", Evolutionary Computation, Vol. 4, No. 2, 1996, pp. 113-131.