

Nanotechnology 24 (2013) 384013 (9pp)

doi:10.1088/0957-4484/24/38/384013

Neuromorphic function learning with carbon nanotube based synapses

Karim Gacem1, Jean-Marie Retrouvey2,3, Djaafar Chabi2,3, Arianna Filoramo1, Weisheng Zhao2,3, Jacques-Olivier Klein2,3 and Vincent Derycke1

1 CEA, IRAMIS, Service de Physique de l'Etat Condensé (CNRS URA 2464), Laboratoire d'Electronique Moléculaire, F-91191 Gif-sur-Yvette, France
2 IEF, Univ Paris-Sud, UMR 8622, F-91405 Orsay, France
3 CNRS, F-91405 Orsay, France

E-mail: [email protected] and [email protected]

Received 20 December 2012, in final form 17 May 2013
Published 2 September 2013
Online at stacks.iop.org/Nano/24/384013

Abstract

The principle of using nanoscale memory devices as artificial synapses in neuromorphic circuits is recognized as a promising way to build ground-breaking circuit architectures tolerant to defects and variability. Yet, actual experimental demonstrations of neural-network-type circuits based on non-conventional/non-CMOS memory devices and displaying function learning capabilities remain very scarce. We show here that carbon-nanotube-based memory elements can be used as artificial synapses, combined with conventional neurons and trained to perform functions through the application of a supervised learning algorithm. The same ensemble of eight devices can notably be trained multiple times to code successively any three-input linearly separable Boolean logic function, despite device-to-device variability. This work thus represents one of the very few demonstrations of actual function learning with synapses based on nanoscale building blocks. The potential of such an approach for the parallel learning of multiple and more complex functions is also evaluated.

1. Introduction

As the size of electronic devices approaches ultimate nanoscale dimensions, the issues of defects and device-to-device variability are becoming increasingly important in the design and operation of electronic circuits. In particular, traditional electronic functions that are rigidly implemented at the fabrication step, and cannot be adjusted or reprogrammed afterwards, will have a decreasing chance of operating satisfactorily. Conversely, other paradigms could be better suited to operating networks of nanodevices. In particular, the neuromorphic approach is often cited as one of the most promising ways to tackle certain classes of complex problems while coping with device imperfections [1-6]. However, in recent years experimental demonstrations have to a large extent been limited to the characterization of individual nano-synapses [7-14], and reported results at the function and circuit levels remain very scarce [15-18]. The possibility of obtaining a collective behavior relevant for an application from the behavior of elementary synapses largely remains to be proven. Methods to move from individual behavior to collective functions in the framework of artificial neural networks (ANNs) have been well known since the 1960s and were generalized to multilayer networks in the 1980s, but translating them to an assembly of nanodevices used as synapses requires the insertion of peripheral elements around each synapse, which significantly lowers the prospects for high-density integration. Thus, one great challenge is to drastically simplify the learning algorithms usually involved in software applications of neural networks so that they become compatible with nanodevice implementation, without losing their efficiency.


In this paper, we demonstrate that an assembly of carbon-nanotube-based memory devices can be used as synapses in a neural network type of circuit that is able to learn its function in a post-fabrication step through a supervised learning algorithm. This corresponds to the experimental implementation of a strategy we first introduced in [19]. As an example for this demonstration, we use Boolean logic functions and notably show that the same nanotube-based circuit can be trained and re-trained to perform all the linearly separable two-input and three-input logic functions. This learning is achieved with devices that display a high level of variability, thus highlighting the interest of the neuromorphic approach when dealing with largely non-ideal devices based on nanoscale compounds. With the proposed strategy, the number of accessible functions scales very favorably with the number of devices: while eight devices (forming four synapses) allow the coding of the 104 linearly separable three-input logic functions, the addition of just two synapses (four devices) would give access to five-input logic functions, among which >90 000 are linearly separable.

2. Carbon-nanotube-based memory devices

Carbon nanotubes (CNTs) are known for their exceptional electronic properties, which translate into excellent performance of individual devices, in particular field-effect transistors [20-22]. Yet, even though simple functions have been demonstrated [23-29], the large device-to-device variability arising from the diversity of CNTs and from the difficulty of placing CNTs at very precise locations [30] has prevented the development of efficient CNT-based circuits in classical design architectures, a situation that is unlikely to find a convincing solution any time soon despite continuous progress. In our recent work, we studied in detail a particular type of carbon nanotube device named the OG-CNTFET (optically gated carbon nanotube field-effect transistor) [31-33]. It consists of a carbon nanotube FET coated with a photo-conducting polymer, which makes it light sensitive and provides it with non-volatile memory capabilities. The channel of the FET can be an individual single-wall carbon nanotube (SWNT) [31, 34] or a random network of CNTs [33]. In the following, we consider series of CNT-network devices in a back-gate configuration (with a 10 nm-thin SiO2 gate dielectric), as displayed in figure 1(a). An example of the transfer characteristics ID(VGS) of eight as-made CNTFETs (i.e. before the polymer coating has been applied) from the same series is presented in figure 1(b). It displays a high level of device-to-device variability originating from differences in the morphology of the CNT networks (CNT density and orientation); in particular, the ON-state current varies by a factor of ten between the two extreme devices. Such a level of variability would compromise the use of these devices in conventional logic circuits, but turns out not to be an issue in neuromorphic ones, as demonstrated in section 4. As shown in the example of figure 1(c), after polymer functionalization the conductivity of the CNT channel of an OG-CNTFET, measured in the dark at a constant and positive gate bias, is strongly modified by a light pulse. This effect is associated with the trapping, in the SiO2 gate dielectric, of photo-induced electrons generated in the polymer layer [32, 35]. These electrons shift the threshold voltage of the transistor in a non-volatile way, to such an extent that the device remains in the ON-state after the light has been turned off. Applying negative bias pulses on the gate electrode [31] or positive bias pulses on the drain electrode [33] then allows the density of trapped electrons, and thus the channel conductivity, to be adjusted. Interestingly, we have shown that OG-CNTFETs possess the characteristics required to be used as synapses in neural network circuits [19, 33, 36]. In particular: (i) the conductance of the channel can be finely adjusted over a large range, (ii) the resulting state is stored in a non-volatile way, (iii) programming can be performed by applying bias pulses on the input/output terminals (source and/or drain), and (iv) the gate electrode can be shared by a series of devices and used as an enable/disable signal for the programming. Indeed, a positive gate voltage protects the device conductance from being altered by source or drain programming pulses [36]. Note that while our previous experimental studies focused on the device level, here we tackle the combined properties of multiple devices, as proposed in [19], and use them for the first time to build a demonstrator of a circuit with function learning capabilities. For that purpose, we embed the series of OG-CNTFETs in a circuit whose simplified schematic is presented in figure 1(d). The details of this circuit are presented in section 3.2.
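Properties (i)-(iv) can be condensed into a small behavioral model, which is reused in the sketches of the following sections. The Python below is a minimal sketch under assumed, normalized conductance units; the class name OGCNTFET, the conductance range and the fixed linear programming step are illustrative choices of ours, not extracted device parameters.

```python
import random

class OGCNTFET:
    """Minimal behavioral sketch of an optically gated CNTFET used as a
    programmable resistor (normalized conductance units, assumed values)."""

    def __init__(self, g_max=1.0, g_min=0.1, variability=0.3):
        # Device-to-device variability: each device draws its own range,
        # loosely mimicking the ~10x ON-current spread of figure 1(b).
        scale = 1.0 + variability * (2.0 * random.random() - 1.0)
        self.g_max = g_max * scale          # most conductive state (RMIN)
        self.g_min = g_min * scale          # least conductive state (RMAX)
        self.g = self.g_max
        self.gate_protected = True          # positive VGS blocks programming

    def light_preset(self):
        """Global light pulse: return to the most conductive memory state."""
        self.g = self.g_max

    def program_pulse(self, step=0.05):
        """Source/drain pulse: detrap charges, i.e. DECREASE the conductance,
        unless the shared gate currently protects the device."""
        if not self.gate_protected:
            self.g = max(self.g_min, self.g - step)
```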

3. Proposed circuit architecture

3.1. Principle of perceptrons

From a mathematical point of view, an artificial neural network is a graph whose nodes are the neurons and whose links are the weighted synaptic connections. Each neuron first computes a continuous value (the post-synaptic potential) corresponding to a weighted sum of the states of its neighbors. Then it computes its state as the output value of a nonlinear decision function with the post-synaptic potential as input argument. Learning corresponds to determining the set of connection weights (or synaptic weights) that minimizes, for a set $\{x^1, \ldots, x^M\}$ of $M$ input vectors $x^k = \{x_1^k, \ldots, x_N^k\}$, the error between the expected output states $t^k = \{t_1^k, \ldots, t_N^k\}$ and those actually obtained, $y^k = \{y_1^k, \ldots, y_N^k\}$. When restricted to single-layer networks, most supervised learning algorithms have in common the updating of each synaptic weight $W_{i,j}$, from input $i$ to output $j$, following a variant of the Widrow-Hoff mean least square 'Delta' rule [37]:

$$\Delta W_{i,j} = \alpha \sum_{k} x_i^k \left( y_j^k - t_j^k \right) \qquad (1)$$

where $\alpha$ is the learning step. The learning process is repeated until the error $y_j^k - t_j^k$ becomes null (or sufficiently small) for every input pattern $x^k$. Here, we focus on monolayer differential architectures, in which the binary input states are provided to the network as pairs of complementary voltages $(X^+, X^-)$.
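For reference, a plain software rehearsal of this rule for one neuron is sketched below, with inputs and targets coded as ±1 to match the differential voltage coding introduced next. The correction is applied with the error-correcting sign (so that the error y − t shrinks), consistent with the qualitative rules listed further down; function and variable names are our own, not the paper's.

```python
from itertools import product

def delta_rule_train(truth_table, n_inputs, alpha=0.1, max_epochs=100):
    """Single-layer 'Delta' rule of equation (1); returns learned weights."""
    w = [0.0] * (n_inputs + 1)                 # last entry = learnable bias
    patterns = [list(x) + [1] for x in product([-1, 1], repeat=n_inputs)]
    for _ in range(max_epochs):
        errors = 0
        for x, t in zip(patterns, truth_table):
            y = 1 if sum(wi * xi for wi, xi in zip(w, x)) > 0 else -1
            if y != t:
                errors += 1
                for i in range(len(w)):
                    w[i] += alpha * x[i] * (t - y)   # error-correcting update
        if errors == 0:
            return w                               # function learned
    raise RuntimeError("no convergence: function may not be linearly separable")

# Example: three-input AND, i.e. truth-table column '00000001'
print(delta_rule_train([-1] * 7 + [1], 3))
```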


Figure 1. (a) Left: optical microscope image of a series of eight nanotube-based programmable resistors forming three signed synapses and one bias input. Right: scanning electron microscope image of one device showing the patterned network of carbon nanotubes used as channel material. (b) Transfer characteristics of eight nanotube transistors from the same series. (c) Effect of illumination on the transfer characteristics of an OG-CNTFET. Curves are acquired in the dark, before and after a light pulse. After illumination and at a fixed VGS value (1.5 V), the channel resistance can be programmed continuously between RMIN and RMAX by applying electrical pulses on the source (or drain) electrode. (d) Simplified schematic representation of the circuit used for the learning of three-input logic functions.

The pair (VH, VL) codes logic 'one' and (VL, VH) codes logic 'zero'; the input voltage VH is positive while VL is negative. In multilayer networks, the parameter α plays an important role and must be carefully adjusted for the learning process to converge. Here, with monolayer networks, it can be chosen arbitrarily (e.g. α = 1) as long as the sign of ΔWi,j is preserved, and convergence is very fast: usually fewer than ten presentations of the pattern set are sufficient. In this context, the principle of the learning method applied here to the single-layer network with binary states can be described as follows. The synaptic weight should increase when:

• (a) a positive input (VH) is connected to an output that is low while expected high,
• (b) a negative input (VL) is connected to an output that is high while expected low.

Conversely, the synaptic weight should decrease when:

• (c) a positive input (VH) is connected to an output that is high while expected low,
• (d) a negative input (VL) is connected to an output that is low while expected high.

3.2. Proposed implementation using OG-CNTFETs

More specifically, in our approach we use the above-described OG-CNTFETs as synaptic elements. Light is used only once, before the learning of the circuit's function, and globally over the whole synaptic array: it serves as a preset signal that sets all the devices in their most conductive memory state. A possible drawback of this circuit could come from the fact that the conductivity of one particular device can only be decreased by electrical stimuli, while a synaptic weight should evolve in both directions.


Figure 2. Detailed description of the learning circuit associated with an array of OG-CNTFETs. The blue-shaded area is implemented on an FPGA board.

As proposed in [19, 12], this limitation can be overcome by grouping the devices in pairs to form signed synapses. In this way, each pair can: (i) yield a positive or negative contribution to the total signal entering the neuron, and (ii) be modified in both directions (increasing and decreasing weight). Learning a function consists in collectively adjusting the conductivity of all the devices in the synaptic array so as to set the suitable synaptic weights. The triggering threshold of the neuron can be adapted and depends on the function to be learnt. Thanks to the differential inputs and the gate protection control [36], selection elements to manage sneak paths are not required for this array. Such selection elements would not be necessary either in the case of multiple neurons connected to a crossbar array of synapses. The learning scheme described above is implemented with the learning circuit detailed in figure 2. At this stage, it combines an FPGA board (DE2, Altera) with a custom PCB on which the nanotube device array is mounted in a DIL package; discrete analog circuits convert the logic levels between the array of OG-CNTFETs and the FPGA. In the FPGA, a finite-state machine (FSM) periodically sends global control signals. A 3-bit counter scans the truth table of the function to be learned, feeds the neuron inputs Xi and selects the expected outputs Tj. Each input Xi controls a dual analog multiplexer that drives the presynaptic voltages Xi+ and Xi- for a pair of OG-CNTFETs with either the positive voltage V+ = VH or the negative voltage V- = VL. One input pair, corresponding to the bias neuron (i.e. the learnable threshold), is held at a constant bias. The OG-CNTFETs are used as programmable resistors: they share the same gate and output electrodes, and the entire learning process takes place at a fixed positive VGS bias. In this configuration, the network directly computes a linear combination Vj of the inputs Xi, and a voltage comparator produces the binary output Yj of the neuron, which is stored in a D-latch. The FSM then successively asserts signals A and B, corresponding to the two learning steps.

During the first step, signal VP2p is asserted to send a programming pulse VP in place of the high level VH, so that all gates are protected except those associated with the error output Yj = +1 while Tj = -1. During the second step, signal VP2m is asserted to send a programming pulse VP in place of the low level VL on all entries V- at a low level, while all transistor gates are protected except those corresponding to the error Yj = -1 while Tj = +1. The combination of these two pulses imposes a variation of all the conductances associated with an output error. This variation follows the conditions imposed by the sign of the product between input and error, as required by equation (1). In addition, the update of all synaptic weights can be controlled collectively from the input and gate voltages, without having to insert any selection elements into the matrix.
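This two-step scheme can be emulated on top of the device sketch of section 2. In the sketch below (names and the idealized resistive summation are our assumptions), each signed synapse is a pair (G+, G-) whose effective weight is their difference; step 1 depresses the device currently driven by VH, and step 2 the device driven by VL, exactly when the corresponding error sign occurs.

```python
def neuron_output(pairs, x, v_h=1.0, v_l=-1.0):
    """Idealized post-synaptic potential: conductance-weighted combination
    of the input voltages, followed by a comparator at 0 V."""
    num = den = 0.0
    for (dev_p, dev_m), xi in zip(pairs, x):
        vp, vm = (v_h, v_l) if xi == +1 else (v_l, v_h)
        num += dev_p.g * vp + dev_m.g * vm
        den += dev_p.g + dev_m.g
    return +1 if num / den > 0 else -1

def hardware_learning_step(pairs, x, y, t, pulse=0.05):
    """Two-step, depression-only weight update with gate protection."""
    if y == t:
        return                                  # no error: nothing programmed
    for (dev_p, dev_m), xi in zip(pairs, x):
        dev_p.gate_protected = dev_m.gate_protected = True
        if y == +1 and t == -1:                 # step 1 (signal VP2p)
            target = dev_p if xi == +1 else dev_m   # the device seeing VH
        else:                                   # step 2 (signal VP2m)
            target = dev_m if xi == +1 else dev_p   # the device seeing VL
        target.gate_protected = False
        target.program_pulse(pulse)
        target.gate_protected = True
```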

4. Function learning

4.1. Learning of all the two-input logic functions

We now present the principle of function learning and the main experimental results. To illustrate the method, we start with the demonstration of two-input logic function learning using six OG-CNTFETs. Figure 3(a) illustrates the learning cycle and figure 3(b) recalls the truth table of the AND logic function to be learned. In the following, all functions are labeled, for simplicity, by the last column of their truth table. First, a single light pulse is applied to the whole circuit to set the six devices in their most conductive state (RMIN). Each input of the truth table is applied successively and the initial function is determined. For that purpose, the analog value Voutput corresponding to the post-synaptic potential is compared with a threshold (dotted red line in figure 3(c)) and converted into a digital output. In the example of figure 3, this initial function is '0010' (figure 3(d), upper part). Then, for each line of the truth table, the obtained output is compared with the expected output.


Figure 3. (a) General principle of the learning algorithm, (b) example of the truth table used in (c) and (c) example of a two-input function learning. Top black curves: input values, blue curve: output signal before digital conversion using the dotted red line as threshold, red curve: expected output for the function to be learned (AND), bottom black curve: measured output. (d) Read output signal for the four combinations of inputs before (blue) and after (black) digital conversion, before and after the learning cycle.

Whenever the obtained and expected outputs differ, the synaptic weights are slightly modified. Depending on the sign of the error signal, the conductivity of the left or right device of the pair is decreased by applying a programming pulse on the corresponding input, as explained in section 3.1. This programming pulse partially removes the nearby trapped charges, thereby modifying the resistance of the device. Once all the lines of the truth table have been presented at the input, either the function is learned or the cycle restarts with a second presentation of the truth table. Provided sufficiently small corrections are applied to the synaptic weights at each presentation of the truth table, this strategy always converges for linearly separable logic functions. In the example of figure 3(c), it can be seen that after just three presentations of the truth table, the expected and obtained digital outputs already match. However, this intermediate state is not yet stable, because some of the output values are too close to the threshold level, with a difference comparable to the noise level. Conversely, after seven presentations of the truth table, the four levels are clearly separated from the threshold and the function is satisfactorily learned and stable. At that point, the learning stage is completed and the function can be used at fixed synaptic weights, corresponding to a fixed trapped-charge landscape. Then, if required, a new function can be prepared on the same circuit by applying a new light pulse that resets the trapped charges, i.e. the initial synapse states.
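Putting the pieces together, the full learning cycle of figure 3(a) can be rehearsed in software with the sketches introduced in the previous sections (again, all names are ours and the model is only qualitative):

```python
from itertools import product

def learn_function(pairs, truth_table, n_inputs, max_cycles=50):
    """Repeat truth-table presentations until error-free; returns the number
    of presentations needed, or None if learning did not converge."""
    for dev_p, dev_m in pairs:                  # single global light preset
        dev_p.light_preset()
        dev_m.light_preset()
    patterns = [list(x) + [1] for x in product([-1, 1], repeat=n_inputs)]
    for cycle in range(1, max_cycles + 1):
        errors = 0
        for x, t in zip(patterns, truth_table):
            y = neuron_output(pairs, x)
            if y != t:
                errors += 1
                hardware_learning_step(pairs, x, y, t)
        if errors == 0:
            return cycle
    return None

# Eight devices = three signed synapses + one bias pair, as in figure 1(a)
pairs = [(OGCNTFET(), OGCNTFET()) for _ in range(4)]
print(learn_function(pairs, [-1] * 7 + [1], 3))   # three-input AND
```

Calling learn_function again on the same pairs with a new truth table mimics the re-training of the array to a different function after a fresh light preset.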

In this way, we have demonstrated that the same circuit with six CNT devices can be used to prepare all the eight linearly separable two-input Boolean logic functions. The result of the learning of these eight functions is presented in figure 4.

4.2. Learning of three-input logic functions

The number of accessible functions grows extremely fast as the number of inputs increases, so that they become too numerous to be systematically plotted in this paper. Nevertheless, most of them can be grouped into classes of functions that are equivalent through logic inversion of an input or output, or through exchange of inputs. There are 104 linearly separable logic functions with three inputs, and the three classes illustrated below (NAND and X1.X2./X3 belong to the same class) are representative of 72 of them (see figure 5). The 32 remaining functions not covered by these three classes are either two-input (24), one-input (six) or zero-input (two) functions.
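The counts quoted here (and the 94 572 figure of section 5.2) can be checked by brute force: a Boolean function is linearly separable exactly when some integer weight/threshold combination realizes it, and small integer bounds suffice for few inputs. A rough sketch, with the weight bound wmax=5 as our assumption:

```python
from itertools import product

def linearly_separable(n, wmax=5):
    """Enumerate threshold functions f(x) = [w.x + b > 0] of n Boolean
    inputs over an integer grid; returns the set of truth tables found."""
    inputs = list(product([0, 1], repeat=n))
    found = set()
    for w in product(range(-wmax, wmax + 1), repeat=n):
        for b in range(-n * wmax, n * wmax + 1):
            found.add(tuple(int(sum(wi * xi for wi, xi in zip(w, x)) + b > 0)
                            for x in inputs))
    return found

print(len(linearly_separable(2)))   # 14 of the 16 two-input functions
print(len(linearly_separable(3)))   # 104, as quoted in the text
```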

5. Discussion and conclusions

5.1. Tolerance to device variability

One of the main reasons to develop neuromorphic architectures is to obtain circuits that are tolerant to defects and variability. As shown in figure 1(b), the transistors used in this study differ by a factor of ten in their ON-state current, and their threshold voltages are also dispersed.


Figure 4. Result of the learning of all the linearly separable two-input logic functions by the same circuit.

However, in the proposed approach, neither the exact values of RMIN and RMAX nor the initial channel conductance after the fabrication steps are of critical importance. One of the few requirements for satisfactory function learning is the existence of a common range of programmable resistivity values at a given (and fixed) positive VGS bias. Future progress on device-to-device variability would notably increase the width of this common resistivity range; together with a further reduction of the noise level, it would allow an increase in the number of logic inputs (see section 5.2). The main source of noise is the sensitivity of CNTFETs to their environment. Due to their 1D nature, high carrier mobility and limited density of states, CNTs are an ideal channel material for very sensitive charge sensors [38, 39], which is one of the reasons to use them in the present context. In return, however, they are extremely sensitive to minute changes in their electrostatic environment, and in particular to unwanted charge trapping/detrapping events. In a recent study, we notably showed how water and oxygen affect the operation and stability of OG-CNTFETs [35] and proposed improvement strategies. Importantly, we showed that without control of the humidity conditions, a stabilization delay is required after the initial light-induced preset step. As the preset is performed only once, before the beginning of the learning cycle, this would only affect performance in cases where the learned function must be modified often. Another source of variability could concern the device-to-device efficiency of the electron detrapping mechanism. In principle, the variability of this parameter is limited, since it relates mainly to the gate efficiency (oxide thickness and permittivity) and to the density of electron-accepting states, which are similar for all devices. However, even large differences in this parameter would not prevent function learning. Indeed, by performing numerical simulations [19], we predicted that such an architecture is insensitive to amplitude variations of the weight update steps: only the sign of the variation is actually significant. Such tolerance enables effective hardware learning for high-density networks of nanodevices.
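The claim that only the sign of the weight update matters can be illustrated with a variant of the software perceptron in which every update amplitude is drawn at random, loosely mimicking dispersion of the detrapping efficiency (the amplitude range and names are our assumptions, not simulation parameters from [19]):

```python
import random
from itertools import product

def train_sign_only(truth_table, n_inputs, max_epochs=500):
    """Learning with random step amplitudes but the prescribed update sign;
    returns the number of epochs needed, or None."""
    w = [0.0] * (n_inputs + 1)
    patterns = [list(x) + [1] for x in product([-1, 1], repeat=n_inputs)]
    for epoch in range(1, max_epochs + 1):
        errors = 0
        for x, t in zip(patterns, truth_table):
            y = 1 if sum(wi * xi for wi, xi in zip(w, x)) > 0 else -1
            if y != t:
                errors += 1
                for i in range(len(w)):
                    step = random.uniform(0.01, 0.2)     # amplitude varies
                    w[i] += step if x[i] * (t - y) > 0 else -step
        if errors == 0:
            return epoch
    return None

# Three-input NAND still converges despite the randomized amplitudes
print(train_sign_only([1] * 7 + [-1], 3))
```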

5.2. Potential in terms of device and circuit scaling

The proposed demonstration is based on nanotube networks that are 7 µm in width. The robustness of such network-based devices allowed us to focus our attention on the learning algorithm rather than on technological issues. However, OG-CNTFETs can also be built on individual SWNTs, as we showed in [31, 33]. In that case, the channel is just 1-2 nm in width and its length could in principle be scaled down below 10 nm, as shown for conventional nanotube FETs [22]. Recently, we demonstrated that multiple memory devices can in fact be integrated along a single SWNT and programmed independently, without the need for individual gate electrodes [40]. This shows that in future studies each function could use a single SWNT to store all the associated synaptic weights, although we have not yet attempted function learning at that scale. In terms of function complexity, the number of accessible functions scales very rapidly with the number of devices in the proposed strategy: with only 12 OG-CNTFETs per neuron, one could in principle code any of the 94 572 linearly separable five-input logic functions. Accessing nonlinearly separable functions would require a multilayer ANN, whereas the learning approach described here applies to monolayer networks. As long as we consider programmable logic applications, this is not an important limitation. Indeed, in any digital design flow, complex logic elements are split into simpler functions based on logic gates implemented in a standard cell library. Here, the primitives of the library must be linearly separable, but this does not preclude the possibility of cascading them to build complex nonlinearly separable functions.


Figure 5. Result of the learning of four linearly separable three-input logic functions by the same circuit.

On the other hand, the neural approach is also likely to be of interest for non-logic applications, notably pattern recognition. In that case, it would remain feasible to run software off-chip learning (e.g. with a back-propagation algorithm) in order to identify the functions required in the different layers, and then to learn these functions on-chip, layer by layer. In addition, increasing the function complexity and the number of logic inputs requires a sufficiently large range of conductance levels. In [33], we demonstrated the ability of OG-CNTFETs to control their resistance ratio with values from 1 to 64. With such resolution, the work presented in [41] allows us to extrapolate the possibility of learning complex functions with at least 16 binary inputs, provided that the same large range of programmable resistivity values is accessible for all the devices. The circuit presented above allows the repeated learning of one function (one neuron). The extension to the parallel learning of multiple functions is particularly natural. Indeed, we showed in [36] and [19] that OG-CNTFET synapses can be arranged in crossbar arrays in which the synapses associated with the same neuron (function) share the same source and gate electrodes, while the synapses connected to the same input share the same drain electrode. The shared gate electrode can very efficiently be used to protect the already-trained synapses from being modified as learning continues. It thus represents a very powerful addressing strategy for arrays of synapses that requires neither diodes nor selection transistors.

From a technological point of view, shared gate electrodes can be implemented using oxidized silicon wires as back-gate electrodes [34]. The parallel learning of multiple functions in crossbar arrays of OG-CNTFETs is beyond the scope of this paper and requires extensive additional efforts at the technological level. However, the supervised learning algorithm presented here is directly usable for such parallel learning of multiple functions. In addition, the digital control emulated here on an FPGA board could be efficiently implemented through CMOS co-integration. Indeed, the whole process flow associated with the fabrication of OG-CNTFETs is fully compatible with their integration directly above prefabricated CMOS circuitry. This represents another important advantage of CNTs in this context: very few channel materials can be so naturally deposited on top of pre-existing circuitry (and predefined back-gate electrodes) while conserving a high carrier mobility.

5.3. Conclusion

In conclusion, we have demonstrated the first implementation of artificial synapses based on carbon nanotubes. We combined an array of non-conventional memory devices based on nanoscale building blocks with a supervised learning algorithm implemented in conventional electronics to achieve, successively, the neuromorphic learning of all the three-input linearly separable logic functions. The proposed approach combines several key advantages. In particular, it displays a remarkable tolerance to variability at the device level and does not require any selection device or diode.


Thanks to the common gate electrodes, this strategy can be extended to learn multiple functions in parallel in a very efficient way, and the considered nanotube-based synapse arrays could be fabricated above CMOS circuits. Finally, the proposed strategy and learning method are readily transferable to any purely electric memristive two-terminal device (dipole), as long as its characteristics include a bias range in which its non-volatile conductance state remains unaffected, a property shared by several recently demonstrated memristors.

Acknowledgments

The authors thank C Gamrat, D Querlioz, G Agnus, D Vuillaume, S Lenfant, F Ardiaca, J-P Bourgoin and C Maneux for valuable inputs. The work was partially funded by the French National Research Agency through the Panini Project (ANR-07-ARFU-008) and the European Union through the FP7 Project Nabab (Contract FP7-216777).

References

[1] Turel O, Lee J H, Ma X L and Likharev K K 2004 Neuromorphic architectures for nanoelectronic circuits Int. J. Circuit Theory Appl. 32 277
[2] Snider G S 2007 Self-organized computation with unreliable, memristive nanodevices Nanotechnology 18 365202
[3] He M, Klein J O and Belhaire E 2008 Design and electrical simulation of on-chip neural learning based on nanocomponents Electron. Lett. 44 575
[4] Strukov D B 2011 Smart connections Nature 476 403
[5] Querlioz D, Bichler O and Gamrat C 2011 Simulation of a memristor-based spiking neural network immune to device variations IJCNN: Proc. Int. Joint Conf. on Neural Networks p 1775
[6] Zhao W, Querlioz D, Klein J O, Chabi D and Chappert C 2012 IEEE Int. Symp. Circuit Syst. (ISCAS) pp 2509-12
[7] Jo S H, Chang T, Ebong I, Bhadviya B B, Mazumder P and Lu W 2010 Nanoscale memristor device as synapse in neuromorphic systems Nano Lett. 10 1297
[8] Ohno T, Hasegawa T, Tsuruoka T, Terabe K, Gimzewski J K and Aono M 2011 Short-term plasticity and long-term potentiation mimicked in single inorganic synapses Nature Mater. 10 591
[9] Seo K et al 2011 Nanotechnology 22 254023
[10] Nayak A, Ohno T, Tsuruoka T, Terabe K, Hasegawa T, Gimzewski J K and Aono M 2012 Controlling the synaptic plasticity of a Cu2S gap-type atomic switch Adv. Funct. Mater. 22 3606-13
[11] Alibart F, Pleutin S, Bichler O, Gamrat C, Serrano-Gotarredona T, Linares-Barranco B and Vuillaume D 2012 A memristive nanoparticle/organic hybrid synapstor for neuroinspired computing Adv. Funct. Mater. 22 609-16
[12] Kuzum D, Jeyasingh R G D, Lee B and Wong H S P 2012 Nanoelectronic programmable synapses based on phase change materials for brain-inspired computing Nano Lett. 12 2179-86
[13] Bichler O, Suri M, Querlioz D, Vuillaume D, DeSalvo B and Gamrat C 2012 Visual pattern extraction using energy-efficient '2-PCM synapse' neuromorphic architecture IEEE Trans. Electron Devices 59 2206
[14] Suri M, Bichler O, Querlioz D, Traoré B, Cueto O, Perniola L, Sousa V, Vuillaume D, Gamrat C and DeSalvo B 2012 Physical aspects of low power synapses based on phase change memory devices J. Appl. Phys. 112 054904
[15] Choi H, Jung H, Lee J, Yoon J, Park J, Seong D, Lee W, Hasan M, Jung G Y and Hwang H 2009 An electrically modifiable synapse array of resistive switching memory Nanotechnology 20 345201
[16] Kim K H, Gaba S, Wheeler D, Cruz-Albrecht J M, Hussain T, Srinivasa N and Lu W 2012 A functional hybrid memristor crossbar-array/CMOS system for data storage and neuromorphic applications Nano Lett. 12 389-95
[17] Park S et al 2012 RRAM-based synapse for neuromorphic system with pattern recognition function IEDM Tech. Dig. 10.2.1-4
[18] Bichler O, Zhao W, Alibart F, Pleutin S, Lenfant S, Vuillaume D and Gamrat C 2012 Pavlov's dog associative learning demonstrated on synaptic-like organic transistors Neural Comput. 24 1-18
[19] Liao S Y et al 2011 Design and modeling of a neuro-inspired learning circuit using nanotube-based memory devices IEEE Trans. Circuits Syst. 58 2172-81
[20] Javey A, Guo J, Farmer D B, Wang Q, Yenilmez E, Gordon R G, Lundstrom M and Dai H J 2004 Self-aligned ballistic molecular transistors and electrically parallel nanotube arrays Nano Lett. 4 1319
[21] Chaste J, Lechner L, Morfin P, Fève G, Kontos T, Berroir J M, Glattli D C, Happy H, Hakonen P and Plaçais B 2008 Single carbon nanotube transistor at GHz frequency Nano Lett. 8 525
[22] Franklin A D, Luisier M, Han S J, Tulevski G, Breslin C M, Gignac L, Lundstrom M S and Haensch W 2012 Sub-10 nm carbon nanotube transistor Nano Lett. 12 758-62
[23] Bachtold A, Hadley P, Nakanishi T and Dekker C 2001 Logic circuits with carbon nanotube transistors Science 294 1317-20
[24] Derycke V, Martel R, Appenzeller J and Avouris P 2001 Carbon nanotube inter- and intra-molecular logic gates Nano Lett. 1 453-6
[25] Javey A, Wang Q, Ural A, Li Y M and Dai H J 2002 Carbon nanotube transistor arrays for multistage complementary logic and ring oscillators Nano Lett. 2 929-32
[26] Chen Z, Appenzeller J, Lin Y M, Sippel-Oakley J, Rinzler A G, Tang J, Wind S J, Solomon P M and Avouris P 2006 An integrated logic circuit assembled on a single carbon nanotube Science 311 1735
[27] Kang S J, Kocabas C, Ozel T, Shim M, Pimparkar N, Alam M A, Rotkin S V and Rogers J A 2007 High-performance electronics using dense, perfectly aligned arrays of single-walled carbon nanotubes Nature Nanotechnol. 2 230-6
[28] Cao Q, Kim H S, Pimparkar N, Kulkarni J P, Wang C, Shim M, Roy K, Alam M A and Rogers J A 2008 Medium-scale carbon nanotube thin-film integrated circuits on flexible plastic substrates Nature 454 495
[29] Ding L, Zhang Z, Liang S, Pei T, Wang S, Li Y, Zhou W, Liu J and Peng L M 2012 CMOS-based carbon nanotube pass-transistor logic integrated circuits Nature Commun. 3 677
[30] Park H, Afzali A, Han S J, Tulevski G S, Franklin A D, Tersoff J, Hannon J B and Haensch W 2012 High-density integration of carbon nanotubes via chemical self-assembly Nature Nanotechnol. 7 787-91
[31] Borghetti J, Derycke V, Lenfant S, Chenevier P, Filoramo A, Goffman M, Vuillaume D and Bourgoin J-P 2006 Optoelectronic switch and memory devices based on polymer-functionalized carbon nanotube transistors Adv. Mater. 18 2535
[32] Anghel C, Derycke V, Filoramo A, Lenfant S, Giffard B, Vuillaume D and Bourgoin J P 2008 Nanotube transistors as direct probes of the trap dynamics at dielectric-organic interfaces of interest in organic electronics and solar cells Nano Lett. 8 3619-25


[33] Agnus G, Zhao W S, Derycke V, Filoramo A, Lhuillier Y, Lenfant S, Vuillaume D, Gamrat C and Bourgoin J-P 2010 2-terminal carbon nanotube programmable devices for adaptive architectures Adv. Mater. 22 702-6
[34] Agnus G, Filoramo A, Lenfant S, Vuillaume D, Bourgoin J-P and Derycke V 2010 High-speed programming of nanowire-gated carbon nanotube memory devices Small 6 2659-63
[35] Brunel D, Levesque P L, Ardiaca F, Martel R and Derycke V 2013 Control over the interface properties of nanotube-based optoelectronic memory devices Appl. Phys. Lett. 102 013103
[36] Zhao W S, Agnus G, Derycke V, Filoramo A, Gamrat C and Bourgoin J-P 2010 Nanotube devices based crossbar architecture: toward neuromorphic computing Nanotechnology 21 175202
[37] Widrow B and Hoff M E 1960 Adaptive switching circuits IRE WESCON Convention Record 4 96-104
[38] Gruneis A, Esplaniu M J, Garcia-Sanchez D and Bachtold A 2007 Detecting individual electrons using a carbon nanotube field-effect transistor Nano Lett. 7 3766
[39] Brunel D, Mayer A and Mélin T 2010 Imaging the operation of a carbon nanotube charge sensor at the nanoscale ACS Nano 4 5978
[40] Brunel D, Anghel C, Kim D Y, Tahir S, Lenfant S, Filoramo A, Kontos T, Vuillaume D, Jourdain V and Derycke V 2013 Integrating multiple memory devices on a single carbon nanotube Adv. Funct. Mater. doi:10.1002/adfm.201300775
[41] Chabi D, Querlioz D, Zhao W and Klein J-O 2013 Robust learning approach for neuro-inspired nanoscale crossbar architecture ACM J. Emerg. Technol. Comput. Syst. at press
