Study of Encoders for binary data transmission channels without interference. Simulation of the Encoder functioning using the Electronic Workbench software

S. Draghici¹, C. Anghel Drugarin², E. Raduca³
{s.draghici; c.anghel; e.raduca}@uem.ro

¹PhD Student, Politehnica University of Timisoara; ²PhD Lecturer, IT Engineering; ³Prof. PhD Eng., Eftimie Murgu University of Resita, Romania

Abstract

The paper aims to create and test a data transmission channel coder for binary channels with no interference or with insignificant disturbances. These are the simplest types of data channels known in the theory of information transmission and encoding. We intend to test the encoder using Electronic Workbench 5.12, a software package with a simple graphical interface and libraries sufficient to simulate and test the functioning of most analog and digital electronic circuits. We will use a message source that generates four distinct messages, occurring with probabilities that are negative powers of two and encoded through four distinct code words which form a prefix code of size 4. This is important because, once this type of encoder is built, we can generalize to probability distributions of a source S with n messages of the type 1/2, 1/4, ..., 1/2^(n-1), 1/2^(n-1), where 1/2^(n-1) is the probability of the least likely message in a sequence of n messages generated by the source.

Keywords: data channel coder, message source, probability distribution, inconsistent codes, prefix codes, Shannon-Fano method.


1. Introduction

1.1. Data transmission channels, entropy, optimal coding

The data transmitted through a binary channel (symmetrical or not) represent a series of binary symbols: 0 and 1. In code theory, a code alphabet is the set of symbols used to encode the information provided by a source (signals, symbols, messages, etc.). For the binary data channel, the code alphabet is the set given by the relation:

{X} = {0, 1}                                                         (1)

In code theory it is considered that, when the level of interference in the data transmission channel is low enough for the transmitted symbols to be received correctly, the channel is called a channel without interference. For the encoder study we will consider a source S that provides N messages {S} with probabilities given by the set of probabilities {P}, thus:

{S} = {s1, s2, ..., sN} and {P} = {p(s1), p(s2), ..., p(sN)}         (2)

For the binary data transmission channel, the encoding operation consists of putting the set {S} of messages into one-to-one correspondence with the set {C} of code words. The code words have a certain length given by the number of symbols that build the word; thus, we can also define a set of code word lengths, noted {L}. The sets {C} and {L} are given by the relations:

{C} = {c1, c2, ..., cN} and {L} = {l1, l2, ..., lN}                  (3)

The code words may or may not have equal lengths. A basic requirement is that a code should be as short as possible, because the lower the number of elementary signals per message, the higher the information transmission speed through a data channel of a given capacity. A coding method that takes this into account is to assign longer code combinations to the less frequent messages and shorter combinations to the more frequent ones. This coding method leads to optimal codes. These codes are also inconsistent codes, i.e. variable-length codes (those in which all code words have the same length are called consistent codes). Obviously, inconsistent codes are shorter than consistent ones, but a problem with inconsistent codes is the recognition of the end of each code word. Introducing a separation signal between code words leads to a considerable increase of the average length of the transmitted messages, which is not desirable. However, it is possible to build an inconsistent code that allows one code combination to be distinguished from another at reception without using separation signals between the code combinations. Such a code must be designed so that no shorter code combination is the beginning, i.e. the prefix, of a longer code combination. These are codes with the prefix property. All prefix codes are also uniquely decodable: only a single sequence of source messages corresponds to any given sequence of code words. Also, the code in which the average number of elementary signals per code combination is minimum is called the optimal code; it is the most economical code. We will see in what follows, where we study and design an encoder for channels without interference, that the code we use is uniquely decodable, has the prefix property and is optimal. The code is the more economical (optimal) the smaller the average length of the code combinations (noted L̄).
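As a brief illustration (our own minimal Python sketch, not part of the original paper; the non-prefix code {0, 01, 10} is chosen arbitrarily for contrast), the practical difference between an arbitrary variable-length code and a prefix code shows up as soon as a concatenated sequence has to be decoded:

```python
def all_parses(stream, codewords):
    """Return every way the binary string `stream` can be split into codewords."""
    if stream == "":
        return [[]]
    parses = []
    for c in codewords:
        if stream.startswith(c):
            for rest in all_parses(stream[len(c):], codewords):
                parses.append([c] + rest)
    return parses

# A non-prefix code: "0" is the prefix of "01", so some streams decode two ways.
ambiguous = ["0", "01", "10"]
print(all_parses("010", ambiguous))   # [['0', '10'], ['01', '0']] -> not uniquely decodable

# The prefix code used later in the paper: no code word is the prefix of another.
prefix = ["0", "10", "110", "111"]
print(all_parses("0110", prefix))     # [['0', '110']] -> exactly one parse
```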

L̄ is given by the relation:

L̄ = ∑_{i=1}^{N} p(si)·li                                             (4)

In the present paper we are interested in finding the minimum possible value of the average length L̄; only in this case do we use the data transmission channel most efficiently. The minimum of the average length L̄ is given by the first principle of C. Shannon, which states: when the messages of a source S, providing the amount of information H(S), are encoded with m elementary signals (in our case m = 2, the channel being binary), L̄ cannot be lower than H(S)/log2 m. In our case we can write the inequality:

L̄ ≥ H(S)                                                             (5)

This is because we have an encoder for a binary channel with no interference; thus m = 2, the code alphabet is given by relation (1) and the logarithm is taken in base two. So, in this case, the average length of the code combinations will be at least equal to the source entropy, and we will choose the code combinations so that we have equality in relation (5). As a coding method we will use Shannon-Fano coding, because the probabilities of occurrence of the source messages selected in our case are negative powers of two. Otherwise, we would have used Huffman coding. The latter modifies the message source (it performs a reduction of the source), being less complicated, and it is of course used when the probabilities of occurrence of the source messages are not negative powers of two. Both coding methods are based on tabular procedures and on tree representations (graph theory), and both construct optimal, uniquely decodable codes.
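The Shannon-Fano splitting procedure described in the next section can be summarized in a few lines of code. The sketch below is our own illustrative Python (not part of the original paper): messages are sorted by decreasing probability and recursively split into two groups of (approximately) equal total probability, the upper group receiving a 0 and the lower group a 1.

```python
def shannon_fano(symbols):
    """symbols: list of (name, probability) pairs; returns {name: codeword}."""
    symbols = sorted(symbols, key=lambda sp: sp[1], reverse=True)
    codes = {name: "" for name, _ in symbols}

    def split(group):
        if len(group) <= 1:
            return
        total = sum(p for _, p in group)
        best_diff, cut, acc = None, 1, 0.0
        # choose the cut that makes the two partial sums as equal as possible
        for i in range(1, len(group)):
            acc += group[i - 1][1]
            diff = abs(2 * acc - total)
            if best_diff is None or diff < best_diff:
                best_diff, cut = diff, i
        for name, _ in group[:cut]:
            codes[name] += "0"
        for name, _ in group[cut:]:
            codes[name] += "1"
        split(group[:cut])
        split(group[cut:])

    split(symbols)
    return codes

source = [("s1", 1/2), ("s2", 1/4), ("s3", 1/8), ("s4", 1/8)]
print(shannon_fano(source))   # {'s1': '0', 's2': '10', 's3': '110', 's4': '111'}
```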

1.2. Short presentation of the Shannon-Fano coding procedure

We propose to encode a set of four messages of the source S. Within the Shannon-Fano coding procedure we write the source messages in a table, in decreasing order of their occurrence probabilities, thus:

Table 1.
Source S | P(si) | xi1 | xi2 | xi3 | Code word length (li)
s1       | 1/2   | 0   | -   | -   | 1
s2       | 1/4   | 1   | 0   | -   | 2
s3       | 1/8   | 1   | 1   | 0   | 3
s4       | 1/8   | 1   | 1   | 1   | 3

As we notice, at each step the source messages are split into two groups of (approximately) equal total probability, and each group of messages is assigned a 0 or a 1. The procedure is continued until all source messages are exhausted. Considering Table 1, we will write the sets {S} and {P} together as follows:

[S, P] = ( s1   s2   s3   s4  )
         ( 1/2  1/4  1/8  1/8 )                                      (6)

Thus, we represent more compactly the distribution of the messages s1, s2, s3, s4 of the source S. The probabilities satisfy:

∑_{i=1}^{4} p(si) = 1                                                (7)

For the resulting code, the source entropy H(S) is:


H(S) = -∑_{i=1}^{4} p(si)·log2 p(si) = (1/2)·1 + (1/4)·2 + (1/8)·3 + (1/8)·3 = 1.75 bit/event      (8)

According to the first principle of C. Shannon, for the binary data transmission channel with no interference, L̄min = H(S) = 1.75. According to relation (9), the average length of the code words is:

L̄ = ∑_{i=1}^{4} p(si)·li = 1.75                                      (9)

so we can state that the code efficiency (defined as η = L̄min / L̄) is 1 in this case.

So, the code is optimal and has the prefix property. The prefix property is shown in the following graph:

Fig. 1. The prefix property graph

The signals provided by the source are s1, s2, s3, s4 and they occur with the following probabilities:

p(s1) = 1/2, p(s2) = 1/4, p(s3) = 1/8, p(s4) = 1/8.

They are assigned with the code words c1=0, c2=10, c3=110 and c4=111.
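The numbers in relations (8) and (9) can be checked directly. The following short Python computation (ours, for verification only) reproduces H(S) = L̄ = 1.75 bit and the unit efficiency claimed above, and also confirms the prefix property of the four code words:

```python
from math import log2

p = {"s1": 1/2, "s2": 1/4, "s3": 1/8, "s4": 1/8}
code = {"s1": "0", "s2": "10", "s3": "110", "s4": "111"}

H = -sum(pi * log2(pi) for pi in p.values())       # source entropy, relation (8)
L = sum(p[s] * len(code[s]) for s in p)            # average code word length, relation (9)
print(H, L, H / L)                                 # 1.75 1.75 1.0 -> efficiency eta = 1

# prefix property: no code word is the prefix of another one
words = list(code.values())
assert not any(a != b and b.startswith(a) for a in words for b in words)
```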

1.3. Short presentation of the Electronic Workbench software

The Electronics Workbench software, developed by the company Interactive Image Technologies Ltd., is a CAD ("Computer Aided Design") application intended for the design and simulation of electrical and electronic circuits. Besides the creation of circuits and the simulation of their functioning with the help of various indicating symbols, measuring devices, power supplies and signal generators, the software allows some complex analyses of the operation of electronic circuits (display of the DC voltage at marked points of the diagram, plotting of the amplitude-frequency and phase-frequency characteristics, transient analysis, display of the signal harmonic components, waveforms, etc.). For digital circuits, it provides TTL logic levels (the +Vcc indicator, used in the logic diagram of the direct mod-8 counter in this paper) as well as CMOS logic levels (the +Vdd indicator). In order to simplify the indication of the levels 0 and 1, we can use at the digital circuit output a so-called "indicator lamp", which when lit (red in the present paper) indicates the logic value 1 and when not lit indicates the logic value 0. It is very useful in testing the presented counter diagram, because we can easily follow the correlation between the state table and the logical sequence of the states at the counter output at each clock pulse; basically, this type of indicator confirms or disproves the proper functioning of the counter through its lighting. In the simulation and testing of the encoder we will use Electronic Workbench version 5.12; there are much newer versions (10.1 being the latest), but the version used here is sufficient for simulating the data channel encoder. The files used in the simulation of the diagrams with Electronic Workbench 5.12 are saved with the extension .EWB and can be further used in newer variants of Electronic Workbench, including MultiSim, as well as in most variants of p-Spice software.

2. The study and design of the encoder for the data transmission channel with no interference

Due to the probability distribution of the messages emitted by the source S, shown in relation (6), we will choose a frame of eight messages in the data transmission. It is simply a matter of a common denominator, the largest denominator being eight. The message frame will contain four messages s1, two messages s2, one message s3 and one message s4 (see the arithmetic check below). We propose, for example, the sequence of messages s4 s1 s2 s1 s3 s1 s2 s1; it is not the only possible sequence for such an encoder. The coding will be made so as to achieve the most efficient use of the data channel, and the obtained code is uniquely decodable (which is indeed the case for the presented sequence, because we have a prefix code). The message source can be a source with controlled rate, in the sense that the messages it generates do not occur at equally spaced points in time, but at certain times given by the command block of the encoder. The model of the signal source is a deterministic one, but the source may also have a statistical character; thus, the proposed encoding model has a certain generality. The source will generate the four messages si, i = 1...4, as pulses of logic level 1 at output i and logic level 0 at the other outputs. We notice that the message s4 generated by the source S needs one state (out of the eight possible states of the data frame), message s3 needs one state, message s2 needs two distinct states, and s1 needs four distinct states, for a total of eight states. It is obvious that we need a counter with eight states; we propose a direct mod-8 synchronous counter controlled by eight clock pulses.
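As a quick arithmetic check (ours, purely illustrative), the composition of the eight-message frame follows directly from the common-denominator argument: with probabilities 1/2, 1/4, 1/8, 1/8 and a frame of 8 = 2³ messages, the expected counts are whole numbers:

```python
probabilities = {"s1": 1/2, "s2": 1/4, "s3": 1/8, "s4": 1/8}
frame_length = 8   # the largest denominator of the probabilities
print({s: int(p * frame_length) for s, p in probabilities.items()})
# {'s1': 4, 's2': 2, 's3': 1, 's4': 1} -> e.g. the frame s4 s1 s2 s1 s3 s1 s2 s1
```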

2.1. Synthesis of the direct mod-8 synchronous counter

We propose to build this counter with J-K flip-flops, which are order-2 flip-flops and the ones most often used in the design of counters. The number of flip-flops is given by the relation:

n = [log2 N] + 1                                                     (10)

where N is the maximum number to which the counter counts (in our case N = 7) and [ ] is the integer-part function. Thus, we will need three J-K flip-flops (n = 3). It will be a 3-bit counter; we note the bits with A, B, C, A being the least significant one (LSB) and C the most significant one (MSB). In the design of the mod-8 synchronous counter we take into account the state table of the three bits, the sequence of clock pulses and hence the eight states of the counter, thus:

Table 2.
Clock pulse number           | 1  | 2  | 3  | 4  | 5  | 6  | 7  | 8
Bit A (point in time t)      | 0  | 1  | 0  | 1  | 0  | 1  | 0  | 1
Bit B (point in time t)      | 0  | 0  | 1  | 1  | 0  | 0  | 1  | 1
Bit C (point in time t)      | 0  | 0  | 0  | 0  | 1  | 1  | 1  | 1
Bit A (point in time t+1)    | 1  | 0  | 1  | 0  | 1  | 0  | 1  | 0
Bit B (point in time t+1)    | 0  | 1  | 1  | 0  | 0  | 1  | 1  | 0
Bit C (point in time t+1)    | 0  | 0  | 0  | 1  | 1  | 1  | 1  | 0
Message generated by source  | s4 | s1 | s2 | s1 | s3 | s1 | s2 | s1

Table 2 gives the correspondence between the state of the counter at the point in time t and its state at the next point in time t+1. This correspondence in the state table will help us minimize the Boolean expressions of the logical functions A^(t+1), B^(t+1) and C^(t+1), which all depend on the variables A^t, B^t and C^t. The minimal expressions for A^(t+1), B^(t+1) and C^(t+1), being Boolean functions of only three variables (A^t, B^t and C^t), will be obtained with the help of Veitch-Karnaugh diagrams. Thus:

The Veitch-Karnaugh diagram for C^(t+1) (columns B^t A^t, rows C^t):

C^(t+1)   | B^tA^t = 00 | 01 | 11 | 10
C^t = 0   |     0       |  0 |  1 |  0
C^t = 1   |     1       |  1 |  0 |  1

Minimizing, we obtain the expression of C^(t+1) as follows:

C^(t+1) = C^t·B̄^t + C̄^t·B^t·A^t + C^t·Ā^t·B^t                        (11)

This relation will be brought to the general form of the equation of a J-K flip-flop, in order to determine the coefficients JC and KC that appear in it; the general equation of a J-K flip-flop is:

C^(t+1) = J_C·C̄^t + K̄_C·C^t                                          (12)

To bring relation (11) to a form similar to relation (12), we rewrite it using the absorption and De Morgan formulas, thus:

C^(t+1) = C^t·(B̄^t + Ā^t·B^t) + C̄^t·A^t·B^t = C^t·(B̄^t + Ā^t) + C̄^t·A^t·B^t      (13)

According to this last relation, and observing that by De Morgan B̄^t + Ā^t is the complement of A^t·B^t, the coefficients JC and KC have the values:

J_C = K_C = B^t·A^t                                                  (14)

The Boolean expression of B^(t+1) will be obtained from the following Veitch-Karnaugh diagram:


Minimizing, according to the presented diagram, we obtain:

B^(t+1) = B̄^t·A^t + Ā^t·B^t                                          (15)

According to this relation, as well as to relation (12), the coefficients JB and KB have the following values:

J_B = K_B = A^t                                                      (16)

The Boolean expression of A^(t+1) will be deduced from the minimization presented in the following Veitch-Karnaugh diagram:

Hence:

A^(t+1) = Ā^t = 1·Ā^t + 0·A^t                                        (17)

Similarly, taking into account relation (12), we deduce the values of JA and KA:

J_A = K_A = 1                                                        (18)

Taking into account the coefficients J and K deduced in relations (14), (16) and (18), we obtain the logic diagram of the direct mod-8 counter (drawn with the Electronic Workbench 5.12 software):


Fig. 2. The signals.

The CLOCK signal is applied by toggling, with the space key, between the TTL logic level 1 (+Vcc) and the TTL logic level 0 (ground), as shown in figure 2; any other key can be assigned instead. There are also clock signal generators in the Electronic Workbench 5.12 libraries, but we prefer to keep the scheme simple.
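The synthesized excitation functions J_A = K_A = 1, J_B = K_B = A^t and J_C = K_C = A^t·B^t can also be cross-checked in software. The sketch below is our own Python model (not an EWB export); it applies relation (12) to each flip-flop, steps the counter through eight clock pulses and reproduces the counting sequence of Table 2:

```python
def jk_next(q, j, k):
    """Next state of a J-K flip-flop: Q(t+1) = J*(not Q) + (not K)*Q, relation (12)."""
    return (j and not q) or (not k and q)

A = B = C = False                      # counter cleared: state 000
for pulse in range(1, 9):
    state = (int(A), int(B), int(C))   # bits (A, B, C) = (LSB, middle, MSB) at time t
    jA = kA = True                     # relation (18)
    jB = kB = A                        # relation (16)
    jC = kC = A and B                  # relation (14)
    # synchronous counter: all three flip-flops switch on the same clock edge
    A, B, C = jk_next(A, jA, kA), jk_next(B, jB, kB), jk_next(C, jC, kC)
    print("pulse", pulse, "state t:", state, "-> state t+1:", (int(A), int(B), int(C)))
```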

2.2. Synthesis of the main encoder

We have seen that the source S generates four different messages si, i = 1, ..., 4; we consider that they are pulses with logic level 1 on output i and logic level 0 on the other outputs. Thus, we build the truth table that shows the logic correspondence between the messages si, i = 1, ..., 4 generated by the source S and the signals Si, i = 1, ..., 4 at the main encoder output. The role of the data channel encoder is to assign to each message si, i = 1, ..., 4 generated by the source S a unique code word ci, i = 1, ..., 4. The code words ci at the encoder output are combinations of binary symbols, which we note as follows:

ci = [xi1, xi2, xi3], i = 1, ..., 4                                  (19)

When the source S generates the message si, the symbols xi1, xi2, xi3 will occur at the encoder output. This will be highlighted in the simulations with the Electronic Workbench 5.12 software in what follows.


Obviously, these symbols xi1, xi2, xi3, i = 1, ..., 4, which build the code words ci, i = 1, ..., 4, will be loaded into a right shift register of parallel-input serial-output (PISO) type. We will present and simulate it separately, in order not to complicate the scheme. The resulting table for the main encoder is as follows:

Table 3.
Message from source S: si | S1 | S2 | S3 | S4 | xi1 | xi2 | xi3
s1                        | 1  | 0  | 0  | 0  | 0   | (0) | (0)
s2                        | 0  | 1  | 0  | 0  | 1   | 0   | (0)
s3                        | 0  | 0  | 1  | 0  | 1   | 1   | 0
s4                        | 0  | 0  | 0  | 1  | 1   | 1   | 1

The logic 0 values in brackets indicate that the initial state of the cells of the right shift register is logic 0, after they have previously been cleared. Obviously, we also need a logic scheme for the control of the encoding block. We will present it as a block scheme in the next chapter, because its simulation is more difficult in Electronic Workbench 5.12 and not very suggestive; moreover, it would complicate the figures presented here. As noticed (see Table 2), the signals Si, i = 1, ..., 4 are obtained from the counter outputs A, B and C as:

S1 = A
S2 = Ā·B
S3 = Ā·B̄·C
S4 = Ā·B̄·C̄                                                           (20)

The symbols xi1, xi2, xi3, i = 1, ..., 4 are:

xi1 = S2 + S3 + S4
xi2 = S3 + S4
xi3 = S4

The symbols "+" and "·" are the well-known Boolean operators (OR and AND). Hence, the scheme of the data channel encoder for a channel with no interference is the following:


Fig. 3. Clock 2, 4, 6 and 8.

Figure 3 represents the code word at the output associated with clock pulse 6; in this case S1 = 1 and S2 = S3 = S4 = 0, and the code word c1 = [0, (0), (0)] is channelled to the output, as noticed in the figure above (see Table 3 as well as Table 2). The representation was made only for clock pulse 6, to avoid unnecessary similar figures.

Fig. 4. Clock 3 and 7.

For the same reason, figure 4 was drawn only for clock pulse 3; the code word channelled to the output is the same for clock pulses 3 and 7, i.e. c2 = [1, 0, (0)]; here S2 = 1 and S1 = S3 = S4 = 0.


Fig. 5. Clock 5

In this case, S3=1 and S1=S2=S4=0, and the code word at the output is:

c3 = [1, 1, 0] , as noticed in the figure as well.

Fig. 6. Clock 1

In this case, S4=1 and S1=S2=S3=0, and the code word at the output is:

c4 = [1, 1, 1]. So we have all four code words: c1, c2, c3 and c4. As we notice, the encoder transforms the sequence of messages provided by the source S (s4 s1 s2 s1 s3 s1 s2 s1) into the sequence of code words c4 c1 c2 c1 c3 c1 c2 c1, i.e. at the output we will have the following sequence of 14 binary symbols: 111 0 10 0 110 0 10 0, corresponding to the 8 clock pulses applied to the mod-8 counter. The total length of this binary sequence is 14 because 8 × 1.75 = 14, 1.75 being the average length of a code word, equal to the entropy H(S) of the source S for a binary data transmission channel and for an absolutely optimal code such as the one in this case. Obviously, the sequence of the 14 binary symbols 0 and 1 is a serial one, while at the encoder outputs, as noticed in figures 3, 4, 5 and 6, the sequence of binary symbols is a parallel one, corresponding to the code words c1 = [0], c2 = [1, 0], c3 = [1, 1, 0] and c4 = [1, 1, 1]. It is clear that we need to load them into a right shift register of parallel-input serial-output (PISO) type. For this reason, the code words c1 and c2 were presented in figures 3 and 4, as resulted from the simulation with the EWB 5.12 software, as being equal in length with c3 and c4; but the logic 0 symbols represented between brackets indicate that the cells of the shift register are initially set to the logic 0 state, after previously being cleared. Hence, in practice, the four code words do not have equal lengths: on the contrary, c1 is of length 1, c2 of length 2, and c3 and c4 of length 3. They are the code words of a size-4 code with the prefix property, as presented in figure 1.
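The whole chain described so far — counter state, decoded message, parallel symbols xi1 xi2 xi3, serial stream — can be mirrored in a few lines of software. The sketch below is our own illustrative Python (the variable names follow the paper's notation); it regenerates the 14-symbol sequence 111 0 10 0 110 0 10 0 and decodes it back by exploiting the prefix property:

```python
def message(A, B, C):
    """Message decoded from the counter bits (A = LSB, C = MSB), relation (20)."""
    if A:
        return "s1"      # S1 = A
    if B:
        return "s2"      # S2 = notA * B
    if C:
        return "s3"      # S3 = notA * notB * C
    return "s4"          # S4 = notA * notB * notC

code = {"s1": "0", "s2": "10", "s3": "110", "s4": "111"}

stream = ""
for state in range(8):                                   # the eight counter states
    A, B, C = state & 1, (state >> 1) & 1, (state >> 2) & 1
    stream += code[message(A, B, C)]
print(stream, len(stream))                               # 11101001100100 14

# receiver side: the prefix property allows decoding without any separator
decoded, word = [], ""
for bit in stream:
    word += bit
    if word in code.values():
        decoded.append(next(s for s, c in code.items() if c == word))
        word = ""
print(decoded)           # ['s4', 's1', 's2', 's1', 's3', 's1', 's2', 's1']
```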

2.3. Synthesis of the right shift register

It is a 3-cell register (corresponding to the three symbols xi1, xi2 and xi3 which build the code words c1, c2, c3 and c4), i.e. a 3-bit register. The easiest way to build such a PISO right shift register with 3 cells is by using D-type flip-flops: it is basically a cascade of these flip-flops, with parallel inputs and the serial output taken from the last flip-flop in the chain. The scheme is simple because the D flip-flops transfer the input to the output at the clock signal. But they are order-1 flip-flops, and for the uniformity of the encoder scheme we prefer the J-K flip-flops, which are order-2 flip-flops and which, furthermore, we already used in the design of the direct mod-8 synchronous counter. We will therefore use the conversion of D-type flip-flops into J-K flip-flops. We know that the equation of a J-K flip-flop (for a cell with output signals Q and Q̄) is given by relation (12), where the place of the variable C is taken by the variable Q. Connecting a NOT circuit between the input J and the input K of the J-K flip-flop, we obtain:

Fig. 7. D type flip-flop


Q^(t+1) = J·Q̄^t + J·Q^t = J·(Q̄^t + Q^t) = J                          (21)

This relation basically represents the equation of a D-type flip-flop, Q^(t+1) = D^t, the difference being only one of notation: J = D. In relation (12) we have replaced K̄ = J, because the NOT circuit of figure 7 makes K = J̄. We will present and simulate the right shift register built with J-K flip-flops for the code combination ctest = [0, 1, 0], a combination that is not among the four code words forming the presented prefix code. We will simulate only the loading of the code combination, its shift to the right (the logic 1 symbol appearing at the output of the last flip-flop) and the clearing of the register cells. These are presented in the following figures, obtained from the simulations with the Electronic Workbench 5.12 software:

Fig. 8. Loading the code combination ctest = [0, 1, 0] into the register

We can observe that the loading occurs when the L key (L = Load) is active on logic 0. Once loading is performed, the L key becomes inactive on logic 1 and, by toggling with the space key, we perform the serial right shift of the code combination. Within the simulations (of the direct mod-8 counter, as well as of the encoder), the toggling done with the space key acts on the falling edge of the clock pulse.


Fig. 9. Right shift

The logic 1 symbol applied on input xi2 (controlled by key 2) appears at the output of the register, as we can observe in figure 9. The key C (C = Clear) controls the clearing of the register cells (J-K flip-flops); it is active on logic 1, as noticed in the following figure 10.

Fig. 10. Deletion of the register cells

Obviously, for the counter (and thus the message source S) to generate the next message, respectively the next code combination, we need a circuit for recognizing the end of the code word. Moreover, since the simulated encoder contains many digital circuits, we need synchronization: signal generators that provide synchronizing pulses for the counter, for the shift register, etc. In short, we need the control logic of the data transmission channel encoder scheme.
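The behaviour simulated in figures 8-10 (parallel load when L = 0, right shift on each clock pulse, clear with the C key) can also be captured as a small software model. The sketch below is our own illustrative Python, not an EWB netlist; it loads the test combination ctest = [0, 1, 0] and shifts it out serially:

```python
class PisoRegister:
    """3-cell parallel-in serial-out right shift register (modelled after figs. 8-10)."""
    def __init__(self, n=3):
        self.cells = [0] * n             # flip-flop outputs, index 0 = first cell

    def load(self, bits):                # L key active on logic 0: parallel load
        self.cells = list(bits)

    def shift_right(self, serial_in=0):  # one clock pulse: the last cell feeds the serial output
        out = self.cells[-1]
        self.cells = [serial_in] + self.cells[:-1]
        return out

    def clear(self):                     # C key active on logic 1: reset all cells
        self.cells = [0] * len(self.cells)

reg = PisoRegister()
reg.load([0, 1, 0])                      # ctest = [0, 1, 0]
print([reg.shift_right() for _ in range(3)])   # [0, 1, 0] shifted out bit by bit
reg.clear()
```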


2.4. The encoder control scheme

For the generation of the signals synchronizing the source S (i.e. the direct mod-8 counter) with the right shift register, we can use a multivibrator circuit. The Electronic Workbench software is richer in analog circuits than in digital ones, so we can build a multivibrator from discrete electronic components; any model of multivibrator circuit can be used: the Jordan connection (with the polarization resistances connected to the load or to the external source) or the Schmitt connection. We can also use digital integrated circuits which contain multivibrators. For simplicity, however, we use a rectangular-signal source with adjustable duty cycle, available in the Electronic Workbench 5.12 libraries. We connect to it the virtual oscilloscope existing within this simulation software in order to see the shape of the generated signal. The output voltage levels are set to +5 V and 0 V (TTL logic levels). See the following figure:

Fig. 11. Simulation of the multivibrator circuit

The chosen working frequency (2 Hz) is only indicative; it was set like this so that the image on the virtual oscilloscope can be seen more clearly. The commutation occurs between +5 V and 0 V, as noticed, with a duty cycle η = 1/2.

In addition, in order to recognize the end of the code words c1, c2 and c3 we only need a NOT circuit, because the last digit of these words is logic 0. Slightly more complicated is recognizing the end of the code word c4, for which we need a two-cell left shift register of serial-input serial-output (SISO) type. We also need a monostable circuit that controls the loading of the code word symbols into the PISO right shift register. Without separately simulating these components, we present below the block scheme of the data channel encoder for a data channel with no interference or with insignificant disturbances:

[Block scheme (figure): blocks SOURCE (outputs A, B, C), CLC, Encoder (symbols Xi3, Xi2, Xi1), right shift register cells Ai, Bi, Ci, Decoder, monostable circuit M.C. (Load), multivibrator circuit (clock, outputs Q, Q'), left shift register cells B1, C1.]

Fig. 12. Block scheme of the data channel encoder.

The previous figure presents the block scheme of the encoder together with its control logic. The abbreviations used in the encoder block scheme are: CLC = combinational logic circuit; M.C. = monostable circuit. B1 and C1 are the two cells of the SISO left shift register (built with J-K flip-flops). To recognize the end of the code word c4 = [1, 1, 1], the last digit xi3 of the code word c4 is shifted simultaneously to the right in the right shift register and to the left in the left shift register. It thus reaches, at the same time, the cells Ci and C1 of the two registers; it passes through the OR gate and determines the opening of the AND gate in the rhythm of the clock pulses, which control: 1) the generation of the next message by the message source S via the mod-8 counter; 2) the switching of the M.C. circuit to logic level 1 for a period T/2 < t < T, where T denotes the period of the clock signal generated by the multivibrator circuit with duty cycle η = 1/2. During the period T/2 < t < T the symbols xi1, xi2 and xi3 of the code word are loaded into the right shift register. When the M.C. switches back to logic level 0, the serial shift of the code word symbols takes place and they are sent towards the decoder (as in the simulations of the right shift register functioning). When recognizing the end of the other code words c1, c2 and c3, the last symbol xi3 = 0 and the OR gate is driven directly via the NOT circuit; the left shift register is no longer needed. Otherwise, the functioning of the encoder control logic is the same as in the situations presented in points 1) and 2) above.
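The end-of-word logic described above reduces to two rules: a code word of this prefix code ends either on the first 0 (c1, c2, c3, detected through the NOT circuit) or after three consecutive 1s (c4, detected with the two-cell left shift register). A software equivalent of this recognition rule (ours, purely illustrative; the function name split_codewords is our own) is sketched below:

```python
def split_codewords(stream):
    """Cut the serial stream at every end-of-codeword event of the prefix code
    {0, 10, 110, 111}: a 0 ends a word, and so does a third consecutive 1."""
    words, word, ones = [], "", 0
    for bit in stream:
        word += bit
        ones = ones + 1 if bit == "1" else 0
        if bit == "0" or ones == 3:   # NOT-circuit rule, or the c4 = 111 rule
            words.append(word)
            word, ones = "", 0
    return words

print(split_codewords("11101001100100"))
# ['111', '0', '10', '0', '110', '0', '10', '0']
```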

3. Experimental results

The scheme obtained for the data channel coder for a binary channel with no interference (together with the channel decoder) can be used to create a laboratory model for the study of binary data channel encoding and decoding. Normally, data encoding implies decoding at the reception side; a classical and more general block scheme for a data transmission channel (not necessarily binary and not necessarily without interference) is the following:

Fig. 13. General block scheme of a data channel coder and decoder.


For the coder synthesized in this paper, the source coder (from fig. 13) is represented by the set made up of the source S and the signals Si, i = 1, ..., 4 provided by the CLC, while the channel coder is represented by the Encoder (see fig. 12), the one that channels the code symbols xi1, xi2 and xi3, together with the PISO right shift register. We can see that, depending on the probability distribution of the message source S, the encoding schemes for data channels without interference admit generalizations of order n, thus: for a source S of n messages (s1, s2, ..., sn) and a probability distribution of the type

[S, P] = ( s1   s2    ...  s(n-1)      sn        )
         ( 1/2  1/2²  ...  1/2^(n-1)   1/2^(n-1) )                   (22)

we will clearly need a mod-2^(n-1) counter that provides messages on n-1 bits; the code words will contain the symbols xi1, xi2, ..., xi(n-1), being of maximum length n-1, and the PISO right shift register will be on n-1 bits, containing n-1 flip-flops. In general, this is how data are encapsulated within a data transmission frame in modern systems for a prefix code. Noting relations (14), (16) and (18), which give the values of the coefficients J and K for the direct mod-8 counter, we can generalize these relations by mathematical induction: if we note the n-1 flip-flops of the counter with A1, A2, ..., A(n-1), the coefficients J and K of the flip-flop A(n-1) with the highest binary rank will be given by the relation:

J_{A(n-1)} = K_{A(n-1)} = ∏_{i=2}^{n-1} A(i-1)                       (23)

This relation is valid for a direct mod-2^(n-1) counter. We can notice that two-input AND logic gates can be used in the implementation of such a counter. The PISO right shift register will contain in this case n-1 cells (flip-flops). These constructions generalize, which can be proved by mathematical induction. Of course, n cannot be very high, out of technological considerations: switching delays, digital signal propagation, combinational hazards, etc. intervene.
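Relation (22) and the accompanying claims (code words of maximum length n-1, average length L̄ equal to H(S)) can be sanity-checked for small n with a few lines of Python. This is our own illustrative sketch; the helper names dyadic_source and comma_code are ours:

```python
from math import log2

def dyadic_source(n):
    """Probabilities 1/2, 1/4, ..., 1/2**(n-1), 1/2**(n-1), as in relation (22)."""
    return [1 / 2**k for k in range(1, n)] + [1 / 2**(n - 1)]

def comma_code(n):
    """The corresponding prefix code: k ones followed by a 0, the last word being all ones."""
    return ["1" * k + "0" for k in range(n - 1)] + ["1" * (n - 1)]

for n in (4, 5, 6):
    p, c = dyadic_source(n), comma_code(n)
    H = -sum(pi * log2(pi) for pi in p)
    L = sum(pi * len(ci) for pi, ci in zip(p, c))
    print(n, c, round(H, 3), round(L, 3))   # L equals H(S); the maximum length is n-1
```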

3.1. Practical ways of implementing the data encoder

The set of AND, OR, NOT logic gates specific to the CLC and to the Encoder, as well as the control scheme, can be implemented with standard digital integrated circuits: CDB 400E, CDB 403E, CDB 404E, CDB 405E, CDB 410E (if we work with inverting logic) or CDB 408E, CDB 409E (for non-inverting logic), all in TTL technology; or MMC 4001, MMC 4011, MMC 4014 (for the PISO shift register) in CMOS technology. These are older models of digital integrated circuits, but they still exist on the market. Newer models are the ones from Texas Instruments: SN74S04, SN74S00, SN74H00, SN54S00, SN54H00, SN54S04 (the last three for military use). The practical implementation methods are varied, as one can see. Within the Electronic Workbench simulations we can also use digital integrated circuits existing in the libraries of this software, but they carry different names and do not always have a physical counterpart. For the clock generator we can use, from Texas Instruments, SN74LS362, SN74LS424, etc. For the direct mod-8 counter we can use SN54S163 (it works up to 40 MHz), SN54163 (up to 25 MHz), etc. For the 3-bit right shift register we can use SN54165, SN54LS165, SN54166, SN54LS166; they are all 8-bit PISO shift registers, built with D-type flip-flops and operating over the military temperature range, working up to 40 MHz. In this case, we will use only 3 of their bits in the implementation of the data coder.

4. Conclusions

In this article we used the term data channel coder for both the source coder and the channel coder, considering them part of a single coding block scheme. In practice it is not necessarily so: there may be several coders, as well as several data channel decoders. In a more complex data transmission system there are also signal regenerators and signal amplifiers, for both the electrical and the optical medium, devices for encapsulating data in a frame structure (frame and multiframe), etc. More recently, ATM (Asynchronous Transfer Mode) technology is used, working with high-speed data packet switching. One way of encoding data is structuring the data transmission frame, which must be known by the equipment at the receiving end. Data transfer and routing systems use virtual routers, which perform a certain encoding of the data packets sent to the recipient. We can conclude that the most important result is the study and simulation of the 3-bit binary data channel coder, since validating it with the Electronic Workbench 5.12 software basically implies the possibility of testing any kind of data channel coder, regardless of the number of bits on which it operates. As an observation, the sequence of messages of the source S (s4 s1 s2 s1 s3 s1 s2 s1) presented here is not unique: any other sequence is possible, provided we respect the probability distribution of the source S (relation (6) for this particular case, or relation (22) for the general case). This holds for the Shannon-Fano encoding method; for the Huffman method the principle is much the same, only the schemes implementing the source coder, the Encoder and even the PISO shift register can be different, while the general block scheme stays the same (see figure 12). In the future, other simulation software can be used, such as MultiSim or Proteus 9.1, these being of higher quality and with libraries rich in digital integrated circuits.

Acknowledgment

The authors acknowledge the support of the Managing Authority of the Eftimie Murgu University of Resita and of the Politehnica University of Timisoara. Warm thanks to Prof. PhD Eng. Octavian Prostean, the PhD students' advisor, for his professional support of our research activities.
