Explaining Structured Errors in Gigabit Ethernet
Andrew W. Moore, Laura B. James, Richard Plumb and Madeleine Glick
IRC-TR-05-032
March 2005
Explaining Structured Errors in Gigabit Ethernet
Andrew W. Moore, Laura B. James, Richard Plumb and Madeleine Glick
University of Cambridge, Computer Laboratory: [email protected]
University of Cambridge, Department of Engineering, Centre for Photonic Systems: {lbj20,rgp1000}@eng.cam.ac.uk
Intel Research, Cambridge: [email protected]
Abstract— A physical layer coding scheme is designed to make optimal use of the available physical link, providing functionality to higher components in the network stack. This paper presents results of an exploration of the errors observed when an optical Gigabit Ethernet link is subject to attenuation. The results were unexpected: some data payloads suffer a far higher probability of error than others. An analysis of the coding scheme, combined with an analysis of the physical light path, leads us to conclude that the cause is an interaction between the physical layer and the 8B/10B block coding scheme. We consider the physics of a common type of laser used in optical communications, and how the frequency characteristics of data may affect performance. A conjecture is made that there is a need to build converged systems, with the combinations of physical, data-link, and network layers optimised to interact correctly. In the meantime, what will become increasingly necessary is both an identification of the potential for failure and the need to plan around it.

Topic Keywords: Optical Networks, Network Architectures, System design.

I. INTRODUCTION

Many modern networks are constructed as a series of layers. The use of layered design allows for the modular construction of protocols, each providing a different service, with all the inherent advantages of a module-based design. Network design decisions are often based on assumptions about the nature of the underlying layers. For example, the design of an error-detecting algorithm, such as a packet checksum, will be based upon a number of premises about the nature of the data over which it is to work, and assumptions about the fundamental properties of the underlying communications channel over which it is to provide protection.

Yet the nature of the modular, layered design of network stacks has caused this approach to work against the architects, implementers and users. There exists a tension between the desire to place functionality in the most appropriate sub-system, ideally optimised for each incarnation of the system, and the practicalities of modular design, intended to allow independent developers to construct components that will inter-operate with each other through well-defined interfaces. Past experience has led to assumptions being made in the construction or operation of one layer's design that can lead to incorrect behaviour when combined with another layer.

There are numerous examples describing the problems caused when layers do not behave as the architects of certain system parts expected. One is the previous vendor recommendation (and common practice) of disabling UDP checksums because the underlying Ethernet CRC was considered sufficiently strong; this led to data corruption for UDP-based NFS and network name (DNS) services [1]. Another example is the re-use of the 7-bit scrambler for data payloads [2], [3]. The 7-bit scrambling of certain data payloads (inputs) results in data that is (mis)identified by the SONET [4] system as belonging to the control channel rather than the information channel. Also, Tennenhouse [5] noted how layered multiplexing had a significant negative impact upon an application's jitter.

It is our conjecture that this naïve layering, while often considered a laudable property in computer communications networks, can lead to irreconcilable faults due to differences in such fundamental measures as the number and nature of errors in a channel: a misunderstanding of the underlying properties or needs of an overlaid layer. We concentrate upon the naïve layering present when assumptions are made about the properties of protocol layers below or above the one being designed. This is further exacerbated by layers evolving as new technologies drive speed, availability, and so on. Often such evolution is not explicitly taken into account by existing layers (such as networks, transport-systems or applications); in fact, quite the opposite situation occurs, where existing layers are assumed to operate normally.

Outline: Section II describes our motivations for this work, including new optical networking technologies which change previous assumptions about the physical layer, and the implications of the limits on the quantity of power useable in optical networks. In Section III we present a study of the 8B/10B block-coding system of Gigabit Ethernet [6]: the interaction between an (N,K) block code, an optical physical layer, and data transported using that block code. In Section IV we document our findings on the reasons behind the interactions observed. Further to our experiments with Gigabit Ethernet, in Section V we illustrate how the issues we identify have ramifications for systems with increasing physical complexity. Alongside conclusions, Section VI states a number of the directions that further work may take.

II. MOTIVATIONS

A. Optical Networks

Current work in all areas of networking has led to increasingly complex architectures: our interest is focused upon the field of optical networking, but this is also true in the wireless domain. Our exploration of the robustness of network systems is motivated by the increased demands of these new optical systems.
Additionally, we are increasingly impacted by operator practice. For example, researchers have observed that up to 60% of faults in an ISP-grade network are due to optical events [13]: defined as ones where the error is assumed to result directly from operational faults of in-service equipment. While the majority of these will be catastrophic events (e.g., cable breaks), a discussion with the authors allows us to speculate that a non-trivial percentage of these events will be due to the issues of layer-interaction discussed in this paper.

Wavelength Division Multiplexing (WDM), capable for example of 160 wavelengths each operating at 10 Gbps per channel, and able to operate over 5,000 km without regeneration [7], is a core technology in the current communications network. However, the timescale of change for such long-haul networks is very long: the same scale as deployment, involving months to years of planning, installation and commissioning. To take advantage of capacity developments at the shorter timescales relevant to the local area network, as well as system and storage area networks, packet switching and burst switching techniques have seen significant investigation (e.g. [8], [9]). Optical systems such as [10] and [11] have led us to recognise that the need for higher data-rates and designs with larger numbers of optical components are forcing us towards what traditionally have been technical limits.

Further to the optical-switching domain, there have been changes in the construction and needs of fibre-based computer networks. In deployments containing longer runs of fibre, using large numbers of splitters for measurement and monitoring, and active optical devices, the overall system loss may be greater than in today's point-to-point links, and the receivers may have to cope with much lower optical powers. Increased fibre lengths used to deliver Ethernet services, e.g., Ethernet in the first mile [12], along with a new generation of switched optical networks, are examples of this trend.

B. The Power Problem

If all other variables are held constant, an increase in data rate will require a proportional increase in transmitter power. A certain number of photons per bit must be received to guarantee any given bit error rate (BER), even if no thermal noise is present. The arrival of photons at the receiver is a Poisson process. If, for a given receiver, 20 photons must arrive per bit to ensure a target BER, doubling the bit rate means that the time in which the 20 photons must arrive is halved. The number of photons sent per unit time must thus be doubled (doubling the transmission power) to maintain the BER.

The ramification of this is that a future system operating at higher rates will either require twice the power (a 3dB increase), be able to operate with 3dB less power for the given information rate (equivalent to the channel being noisier by that proportion), or accept a compromise between these. Fibre nonlinearities impose limitations on the maximum optical power able to be used in an optical network. Subsequently, we maintain that a greater understanding of the low-power behaviour of coding schemes will provide invaluable insight for future systems.

For practical reasons, including availability of equipment, its wide deployment, the tractability of the problem space, and well-documented behaviour, we concentrate upon the 8B/10B codec.
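As a back-of-the-envelope illustration of this scaling (a sketch only: the wavelength is an assumption, and the 20 photons per bit from the argument above is treated as an illustrative sensitivity floor), the required optical power follows directly from the photon arrival rate:

```python
# Sketch: transmitter power needed to keep a fixed photons-per-bit budget
# as the bit rate rises. Assumed values (photons/bit, wavelength) are
# illustrative only.
from math import log10

PLANCK = 6.626e-34          # J.s
C = 3.0e8                   # m/s
WAVELENGTH = 1550e-9        # m, typical transmission band (assumption)
PHOTONS_PER_BIT = 20        # receiver sensitivity budget (assumption)

def required_power_dbm(bit_rate_hz: float) -> float:
    """Average optical power (dBm) to deliver PHOTONS_PER_BIT at this rate."""
    photon_energy = PLANCK * C / WAVELENGTH          # joules per photon
    power_watts = PHOTONS_PER_BIT * photon_energy * bit_rate_hz
    return 10 * log10(power_watts / 1e-3)

for rate in (1.25e9, 2.5e9):                         # GbE line rate and double
    print(f"{rate/1e9:.2f} Gbps -> {required_power_dbm(rate):.1f} dBm")
# Doubling the bit rate raises the requirement by 10*log10(2), i.e. about 3 dB.
```

The absolute numbers here are a quantum-limited floor; practical receivers need far more power, but the 3dB-per-doubling scaling is the same.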
C. 8B/10B Block Coding

The 8B/10B codec, originally described by Widmer & Franaszek [14], is widely used. This scheme converts 8 bits of data for transmission (ideal for any octet-orientated system) into a 10 bit line code. Although this adds a 25% overhead, 8B/10B has many valuable properties: a transition density of at least 3 per 10 bit code-group and a maximum run length of 5 bits for clock recovery, along with virtually no DC spectral component. These characteristics also reduce the possible signal damage due to jitter, which is particularly critical in optical systems, and can also reduce multimodal noise in multimode fibre connections.

This coding scheme is royalty-free, well understood, and sees current use in a wide range of applications. In addition to being the standard Physical Coding Sublayer (PCS) for Gigabit Ethernet [6], it is used in the Fibre Channel system [15]. This codec is also used for the 800Mbps extensions to the IEEE 1394 / Firewire standard [16], and 8B/10B will be the basis of coding for the electrical signals of the upcoming PCI Express standard [17].

The 8B/10B codec defines encodings for data octets and control codes which are used to delimit the data sections and maintain the link. Individual codes or combinations of codes are defined for Start of Packet, End of Packet, line Configuration, and so on. Also, Idle codes are transmitted when there is no data to be sent, to keep the transceiver optics and electronics active. The Physical Coding Sublayer (PCS) of the Gigabit Ethernet specification [6] defines how these various codes are used.

Individual ten-bit code-groups are constructed from the groups generated by 5B/6B and 3B/4B coding on the first five and last three bits of a data octet respectively. During this process the bits are re-ordered, such that the last bits of the octet for transmission are encoded at the start of the 10-bit group. This is because the last 5 bits of the octet are encoded first, into the first 6 bits of code, and then the first 3 bits of the octet are encoded to the final 4 transmitted bits. Some examples are given in Table I; the running disparity is the sign of the running sum of the code bits, where a one is counted as 1 and a zero as -1. During an Idle sequence between packet transmissions, the running disparity is changed (if necessary) to -1 and then maintained at that value. Both control and data codes may change the running disparity or may preserve its existing value; examples of both types are shown in Table I.

TABLE I
EXAMPLES OF 8B/10B CONTROL AND DATA CODES

Type     Octet   Octet bits   Current RD -   Current RD +   Note
data     0x00    000 00000    100111 0100    011000 1011    preserves RD value
data     0xf2    111 10010    010011 0111    010011 0001    swaps RD value
control  K27.7   111 11011    110110 1000    001001 0111    preserves RD value
control  K28.5   101 11100    001111 1010    110000 0101    swaps RD value
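As a concrete sketch of how a transmitter selects between the two columns of Table I while tracking running disparity (only the four Table I entries are included, this is not the full codec, and the helper names and the simplified disparity rule are ours):

```python
# Sketch of 8B/10B code-group selection using only the Table I entries.
# Keys are octet values (or control-code names); each entry gives the
# 10-bit code-group to send when the current running disparity is -1 or +1.
TABLE_I = {
    0x00:    {-1: "1001110100", +1: "0110001011"},   # D0.0, preserves RD
    0xF2:    {-1: "0100110111", +1: "0100110001"},   # D18.7, swaps RD
    "K27.7": {-1: "1101101000", +1: "0010010111"},   # control, preserves RD
    "K28.5": {-1: "0011111010", +1: "1100000101"},   # control, swaps RD
}

def disparity(code_group: str) -> int:
    """Sum of the code bits, with a one counted as +1 and a zero as -1."""
    return code_group.count("1") - code_group.count("0")

def encode(symbols, running_disparity=-1):
    """Encode a sequence of octets/control names, tracking running disparity."""
    line_bits = []
    for sym in symbols:
        code = TABLE_I[sym][running_disparity]
        line_bits.append(code)
        d = disparity(code)
        if d != 0:                      # a non-neutral group flips the RD
            running_disparity = +1 if d > 0 else -1
        # a neutral group (d == 0) preserves the existing RD
    return line_bits, running_disparity

codes, rd = encode([0x00, 0xF2, 0x00], running_disparity=-1)
print(codes, rd)   # the 0xf2 group has disparity +2, so RD swaps to +1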
The code-group used for the transmission of an octet depends upon the existing running disparity, hence the two alternative codes given in the table. A received code-group is compared against the set of valid code-groups for the current receiver running disparity, and decoded to the corresponding octet if it is found. If the received code is not found in that set, the specification states that the group is deemed invalid. In either case, the received code-group is used to calculate a new value for the running disparity. A code-group received containing errors may thus be decoded and considered valid. It is also possible for an earlier error to throw off the running disparity calculation, causing a later code-group to be deemed invalid because the running disparity at the receiver is no longer correct. This can amplify the effect of a single bit error at the physical layer.

Line coding schemes, although they handle many of the physical layer constraints, can introduce problems. In the case of 8B/10B coding, a single bit error on the line can lead to multiple bit errors in the received data byte. For example, with one bit error the code-group D0.1 (current running disparity negative) becomes the code-group D9.1 (also negative disparity); these decode to give bytes with 4 bits of difference. In addition, the running disparity after the code-group may be miscalculated, potentially leading to future errors. There are other similar examples in [6].
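A sketch of the receive-side rule described above, again restricted to the Table I subset (the dictionary of valid code-groups, the helper names, and the simplified disparity update are ours, not the full PCS):

```python
# Sketch of the PCS receive rule described above, restricted to the four
# Table I code-groups. VALID[rd] maps each 10-bit group that is legal when
# the receiver's running disparity is rd to the octet/control name it decodes to.
VALID = {
    -1: {"1001110100": 0x00, "0100110111": 0xF2,
         "1101101000": "K27.7", "0011111010": "K28.5"},
    +1: {"0110001011": 0x00, "0100110001": 0xF2,
         "0010010111": "K27.7", "1100000101": "K28.5"},
}

def disparity(code_group: str) -> int:
    return code_group.count("1") - code_group.count("0")

def receive(code_group: str, running_disparity: int):
    """Return (decoded symbol or None, new running disparity)."""
    symbol = VALID[running_disparity].get(code_group)   # None => deemed invalid
    d = disparity(code_group)
    if d != 0:                                           # RD is recalculated
        running_disparity = +1 if d > 0 else -1          # whether valid or not
    return symbol, running_disparity

# A group corrupted into another member of the valid set decodes without
# complaint; a group corrupted out of the set is flagged, and a stale RD can
# make a later, uncorrupted group appear invalid.
print(receive("1001110100", -1))   # (0, -1): valid D0.0
print(receive("1001110101", -1))   # (None, +1): one bit flipped, not in this subset
```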
III. BIT ERROR RATE VERSUS PACKET ERROR

As an illustration of the discontinuity between measurements made by engineers of the physical layer and those working in the network layers, we compare bit error rate (BER) measurements with measurements of packet loss. This Section contains an extended form of the results first presented in James et al. [18].

A. Experimental Method
We investigate Gigabit Ethernet on optical fibre (1000BASE-X [6]) under conditions where the received power is sufficiently low as to induce errors in the Ethernet frames. We assume that the frame check sequence (FCS) mechanism within Ethernet is sufficiently strong to catch these errors; the resulting dropped frames and packet loss will then fall with significantly higher probability than the norm upon certain hosts, applications and perhaps users.

In our main test environment an optical attenuator is placed in one direction of a Gigabit Ethernet link. A traffic generator feeds a Fast Ethernet link to an Ethernet switch, and a Gigabit Ethernet link is connected between this switch and a traffic sink and tester (Figure 1). The variable optical attenuator and an optical isolator are placed in the fibre in the direction from the switch to the sink. We had previously noted interference due to reflection and took the precaution of removing this aspect from the results. A packet capture and measurement system is implemented within the traffic sink using an enhanced driver for the SysKonnect SK-9844 network interface card (NIC). Among a number of additional features, the modified driver allows application processes to receive error-containing frames that would normally be discarded. As well as purpose-built code for the receiving system, we use a special-purpose traffic generator and comparator. Pre-constructed test data in tcpdump format is transmitted from one or more traffic generators using an adapted version of tcpfire [19]. Transmitted frames are compared to their received versions and, if they differ, both original and errored frames are stored for later analysis.

Fig. 1. Main test environment

1) Octet Analysis: Each octet for transmission is coded by the Physical Coding Sublayer of Gigabit Ethernet using 8B/10B into a 10 bit code-group or symbol, and we analyse these for frames which are received in error at the octet level. By comparing the two possible transmitted symbols for each octet in the original frame to the two possible symbols corresponding to the received octet, we can deduce the bit errors which occurred in the symbol at the physical layer. In order to infer which symbol was sent and which received, we assume that the combination giving the minimum number of bit errors on the line is the most likely to have occurred. This allows us to determine the line errors which most probably occurred. Various types of symbol damage may be observed. One of these is the single-bit error caused by the low signal to noise ratio at the receiver. A second form of error results from a loss of bit clock causing smeared bits, where a subsequent bit is read as having the value of the previous bit. A final example results from the loss of symbol clock synchronisation. This can lead to the symbol boundaries being misplaced, so that a sequence of several symbols, and thus several octets, will be incorrectly recovered.
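This inference step amounts to a minimum-Hamming-distance search over the (at most) two candidate code-groups on each side. A sketch follows; `codes_for_octet` is a placeholder for a full 8B/10B lookup, which is not reproduced here:

```python
# Sketch of the symbol-error inference described above. For each octet we
# consider the (up to) two code-groups it could have been sent as, and the
# (up to) two code-groups the received octet could correspond to, and pick
# the pairing with the fewest differing line bits. `codes_for_octet` is a
# hypothetical helper returning the RD- and RD+ code-groups for an octet.
from itertools import product

def hamming(a: str, b: str) -> int:
    return sum(x != y for x, y in zip(a, b))

def infer_line_error(sent_octet: int, received_octet: int, codes_for_octet):
    """Return (sent code-group, received code-group, bit errors on the line)."""
    candidates = product(codes_for_octet(sent_octet),
                         codes_for_octet(received_octet))
    return min(((s, r, hamming(s, r)) for s, r in candidates),
               key=lambda t: t[2])
```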
2) Real Traffic: Some of the results presented here were obtained with real network traffic, referred to as the day-trace. This network traffic was captured from the interconnect between a large research institution and the Internet over the course of two working days. We consider it to contain a representative sample of network traffic for an academic/research organisation of approximately 150 users.

Fig. 2. Cumulative distribution of errors versus position in frame (offset of octet within frame; structured test data and pseudo-random data)

Other traffic tested included pseudo-random data, consisting of a sequence of frames of the same number and size as the day-trace data, although each is filled with a stream of octets whose values were drawn from a pseudo-random number generator. Structured test data consists of a single frame containing the repeated octets 0x00–0xff, to make a frame 1500 octets long. The low error testframe consists of 1500 octets of 0xCC data (selected for a low symbol error rate); the high error testframe is 1500 octets of 0x34 data (which displays a high symbol error rate).
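For reference, the structured payloads are trivial to regenerate (a sketch only; frame headers and the FCS are omitted):

```python
# Sketch: payloads used for the structured tests (headers/FCS omitted).
FRAME_LEN = 1500

def structured_payload() -> bytes:
    """Repeated octets 0x00..0xff, truncated to a 1500-octet frame."""
    pattern = bytes(range(256))
    return (pattern * (FRAME_LEN // len(pattern) + 1))[:FRAME_LEN]

low_error_testframe  = bytes([0xCC]) * FRAME_LEN   # low symbol error rate
high_error_testframe = bytes([0x34]) * FRAME_LEN   # high symbol error rate

assert len(structured_payload()) == FRAME_LEN
```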
3) Bit Error Rate Measurements: Optical systems experiments commonly use a Bit Error Rate Test kit (BERT) to assess the behaviour of the system [20]. This comprises both a data source and a receiving unit, and compares the incoming signal with the known transmitted one. Individual bit errors are counted both during short sampling periods and over a fixed period (defined in time, bits, or errors). The output data can be used to modulate a laser, and a photodiode can be placed before the BERT receiver to set up an optical link; optical system components can then be placed between the two parts of the BERT for testing. Usually a pseudo-random bit sequence is used, but any defined bit sequence can be transmitted repeatedly and the error count examined.

For the BER measurements presented here, a directly modulated 1548nm laser was used. The optical signal was then subjected to variable attenuation before returning via an Agilent Lightwave (11982A) receiver unit into the BERT (Agilent parts 70841B and 70842B). The BERT was programmed with a series of bit sequences, each corresponding to a frame of Gigabit Ethernet data encoded as it would be for the line in 8B/10B. Purpose-built code is used to convert a frame of known data into the bit-sequence suitable for the BERT. The bit error rates for these packet bit sequences were measured at a range of attenuation values, using identical BERT settings for all frames (e.g. the 0/1 thresholding value).

B. Initial Results

We illustrate how errors are position independent but dependent upon the encoded data. Figure 2 contrasts the position of errors in the pseudo-random and structured frames. The day-trace results are not shown because this data, consisting of packets of varying length, is not directly comparable. The positions of the symbols most subject to error can be observed. For the random data the position distribution is uniform, illustrating that there is no frame-position dependence in the errors, but the structured data clearly shows that certain payload-values display significantly higher error-rates.

Figure 3 indicates the number of incorrectly received symbols per frame. The low error testframe generated no symbols in error and thus no frames in error. The high error testframe results show an increase in errored symbols per frame. It is important to note that the errors occur uniformly across the whole packet and that there are no correlations evident between the positions of errors within the frame. We interpret this result as confirming that errors are highly localised within a frame, and from this we are able to assume that the error-inducing events occur over small (bit-time) time scales.

Fig. 3. Cumulative distribution of symbol errors per frame (symbols in error per frame; structured test data, high error testframe, low error testframe)

Figures 4(a) and 4(b) show bit error rate and packet error rate, respectively, for a range of received optical power. The powers in these two figures differ due to the different experimental setups: packet error is measured using the SysKonnect 1000BASE-X NIC as a receiver, and bit error with an Agilent Lightwave receiver, and each has a different sensitivity. We see that the different testframes lead to substantially different BER performance. Importantly, the relationship between the test data and the BER results bears little relation to the packet error rates for the same test data. While not presented here, the results of Figure 4 have been validated over an optical power range of 4dB.

Fig. 4. Contrasting packet-error and bit-error rates versus received power (received power / dBm; structured test data, high error testframe, low error testframe): (a) bit error rate versus received power; (b) packet error rate versus received power
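For reference, the conversion a network engineer might naively apply between the two measures treats bit errors as independent and uniform; the data-dependence reported here is precisely what breaks this model (a sketch, with an illustrative frame length):

```python
# Naive, independent-error conversion between BER and packet error rate.
# Under the data-dependent errors reported here this model does not hold;
# it is shown only as the baseline assumption being contrasted.
FRAME_OCTETS = 1500            # illustrative payload size
LINE_BITS = FRAME_OCTETS * 10  # 8B/10B sends 10 line bits per octet

def packet_error_rate(ber: float, line_bits: int = LINE_BITS) -> float:
    """P(at least one line-bit error in the frame) for independent errors."""
    return 1.0 - (1.0 - ber) ** line_bits

for ber in (1e-10, 1e-8, 1e-6):
    print(f"BER {ber:.0e} -> naive PER {packet_error_rate(ber):.2e}")
```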
What we can illustrate here is that a uniformly-distributed set of random data, once encoded with 8B/10B, will not suffer code-errors with the same uniformity. We considered that the Gigabit Ethernet PCS, 8B/10B, was actually the cause of this non-uniformity. While not illustrated here, sets of wide-ranging experiments allowed us to conclude that Ethernet frames containing a given octet of a certain value were up to 100 times more likely to be received in error (and thus dropped), when compared with a similar-sized packet that did not contain such octets.

IV. CAUSE AND EFFECT

A. Effects on data sequences

We have found that individual errored octets do not appear to be clustered within frames but are independent of each other. However, we are interested in whether earlier transmitted octets have an effect on the likelihood of a subsequent octet being received in error. We had anticipated that the use of running disparity in 8B/10B would present itself as a correlation between errors in current codes and the value of previous codes.

We collect statistics on how many times each transmitted octet value is received in error, and also store the sequence of octets transmitted preceding this. The error counts are stored in 2D matrices (or histograms) of size 256 x 256, representing each pair of octets in the sequence leading up to the errored octet: one for the errored octet and its immediate predecessor, one for the predecessor and the octet before that, and so on. We normalise the error counts for each of these histograms by dividing by the matrix representing the frequency of occurrence of this octet sequence in the original transmitted data. We then scale each histogram matrix so that the sum of all entries in each matrix is 1.
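A sketch of this normalisation step in numpy (the array names and the example data are ours):

```python
# Sketch of the per-pair normalisation described above.
# error_counts[a, b] : times octet value b was received in error when preceded
#                      (at the relevant lag) by octet value a
# occurrences[a, b]  : times that octet pair appeared in the transmitted data
import numpy as np

def normalised_histogram(error_counts: np.ndarray,
                         occurrences: np.ndarray) -> np.ndarray:
    """256x256 error frequencies, corrected for pair frequency, summing to 1."""
    with np.errstate(divide="ignore", invalid="ignore"):
        rate = np.where(occurrences > 0, error_counts / occurrences, 0.0)
    return rate / rate.sum()     # scale so that all entries sum to 1

# Example with placeholder data:
rng = np.random.default_rng(0)
occ = rng.integers(1, 100, size=(256, 256))
err = rng.binomial(occ, 0.01)
h = normalised_histogram(err, occ)
assert np.isclose(h.sum(), 1.0)
```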
Figure 5(a) shows the error frequencies for the "current octet" (the correct transmitted value of octets received in error), x_i, on the x-axis, versus the octet which was transmitted before each specific errored octet, x_{i-1}, on the y-axis. Figure 5(b) shows the preceding octet and the octet before that: x_{i-1} versus x_{i-2}. Vertical lines in Figure 5(a) are indicative of an octet that is error-prone independently of the value of the previous octet. In contrast, horizontal bands indicate a correlation of errors with the value of the previous octet.

It can be seen from Figure 5 that while correlations between errors and the value in error, or the immediately previous value, are significant, beyond this there is no apparent correlation. The equivalent plot for x_{i-2} versus x_{i-3} produces a featureless white square.

Fig. 5. Error counts for pseudo-random data octets (axes span 0x00 to 0xFF): (a) error counts for x_i vs. x_{i-1}; (b) error counts for x_{i-1} vs. x_{i-2}

B. 8B/10B code-group frequency components and their effects

It is illustrative to consider the octets which are most subject to error, and the 8B/10B codes used to represent them. In the pseudo-random data, the following ten octets give the highest error probabilities (independent of the preceding octet value): 0x43, 0x8A, 0x4A, 0xCA, 0x6A, 0x0A, 0x6F, 0xEA, 0x59, 0x2A. It can be seen that these commonly end in A, and this causes the first 5 bits of the code-group to be 01010. The octets not beginning with this sequence in general contain at least 4 alternating bits. Of the ten octets giving the lowest error probabilities (independent of the previous octet), which are 0xAD, 0xED, 0x9D, 0xDD, 0x7D, 0x6D, 0xFD, 0x2D, 0x3D and 0x8D, the concluding D causes the code-groups to start with 0011.

Fast Fourier Transforms (FFTs) were generated for data sequences consisting of repeated instances of the code-groups of 8B/10B. Examining the FFTs of the code-groups for the high error octets (Figures 6(a) and 6(b), for example), we see that the peak corresponding to the base frequency (625MHz, half the line rate) is pronounced in most cases, although there is no such feature in the FFTs of the code-groups of the low error octets (Figures 6(c) and 6(d)).

Fig. 6. Contrasting FFTs for a selection of code-groups (normalised amplitude versus frequency, 0 to 625 MHz): (a) FFT of code-group for high error octet 0x4A; (b) FFT of code-group for high error octet 0x0A; (c) FFT of code-group for low error octet 0xAD; (d) FFT of code-group for low error octet 0x9D
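The frequency content of a repeated code-group is straightforward to reproduce (a sketch; we sample one point per line bit at the 1.25 Gbps line rate, so the analysis band ends at 625 MHz, and the alternating pattern used below is an idealised stand-in rather than a specific code-group from the tables):

```python
# Sketch: spectrum of a repeated 10-bit code-group at the GbE line rate.
# One sample per bit at 1.25 Gbps puts the folding frequency at 625 MHz,
# which is where a 0101... pattern concentrates its energy.
import numpy as np

LINE_RATE = 1.25e9                      # bits per second

def code_group_spectrum(code_group: str, repeats: int = 256):
    """Return (frequencies in Hz, normalised FFT magnitude) for the repeated group."""
    bits = np.array([int(b) for b in code_group] * repeats, dtype=float)
    bits -= bits.mean()                 # remove the DC term
    spectrum = np.abs(np.fft.rfft(bits))
    freqs = np.fft.rfftfreq(bits.size, d=1.0 / LINE_RATE)
    return freqs, spectrum / spectrum.max()

freqs, mag = code_group_spectrum("0101010101")   # idealised alternating pattern
print(freqs[np.argmax(mag)])                      # ~6.25e8: peak at 625 MHz
```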
in Figure 5) give much higher error probabilities than
Fast Fourier Transforms (FFTs) were generated for
the individual octets. The noted high error octets (eg.
data sequences consisting of repeated instances of the
0x8A) do occur in the top ten high error octet pairs and
Receiver output amplitude
C. Laser Physics
traces from "good" data blocks traces from "bad" 10101010 block
0.5
0.06
0.4
0.05
0.3
0.04
An eye diagram is commonly used to assess the 460 480 500 520
0.2
effects of noise, distortion and jitter can be seen in
0.1 0
physical layer of a communications system, where the
200
0
400
600
800
1000
time in ps
Fig. 8.
Eye diagram: DFB laser at 1.25Gbps NRZ
a single diagram [20]. Received analogue waveforms (before any decision circuit) for each bit of a pseudorandom sequences are superimposed as in Figure 8. The central open part of the eye represents the margin between “1”s and “0”s, where both the height (repre-
normally follow an octet giving a code-group ending in 10101 or 0101, such as 0x58, which serves to further empasise that frequency component.
senting amplitude margin) and the width (representing timing margin) are significant. The diagram shown is computed and includes statistical jitter and amplitude
The 8B/10B codec defines both data and control
noise from the laser source but not thermal noise in the
encodings, and these are represented on a 1024x1024
receiver, the inclusion of which would have broadened
space in Figure 7(a), which shows valid combinations
the lines significantly and makes the whole diagram less
of the current code-group ( (! ) and the preceding one
clear. Electrically, semiconductor lasers are just simple
( ()
). The regions of valid and invalid code-groups are
diodes, but the interaction between electron and photon
defined by the codec’s use of 3B/4B and 5B/6B blocks
populations within the device makes the modulation
(see II-C).
response complex. A first-order representation of the
laser and driver may be obtained via a pair of rate In Figure 7(a) the octet errors found in the day-trace have been displayed on this codespace, showing the regions of high error concentration for real Internet data. It can be seen that these tend to be clustered and that the clusters correspond to certain features of the code-
equations, one each for electrons (N) and photons (P): 021 043
5
0>
687:9
023
*,L ;8= 0&1
1
1BADCFE
?
9 1 9
9
1BADCFEMI
?HG
1JI
L+N K
1
(
1
C
9