PRL 103, 108104 (2009)    PHYSICAL REVIEW LETTERS    week ending 4 SEPTEMBER 2009

Dynamical Reconnection and Stability Constraints on Cortical Network Architecture

P. A. Robinson
School of Physics, University of Sydney, Sydney, New South Wales 2006, Australia
Brain Dynamics Center, Sydney Medical School–Western, University of Sydney, Westmead Hospital, Westmead, New South Wales 2145, Australia

J. A. Henderson, E. Matar, and P. Riley
School of Physics, University of Sydney, Sydney, New South Wales 2006, Australia

R. T. Gray
National Center in HIV Epidemiology and Clinical Research, University of New South Wales, Sydney, New South Wales 2010, Australia

(Received 5 April 2009; published 4 September 2009)

Stability under dynamical changes to network connectivity is invoked alongside previous criteria to constrain brain network architecture. A new hierarchical network is introduced that satisfies all these constraints, unlike more commonly studied regular, random, and small-world networks. It is shown that hierarchical networks can simultaneously have high clustering, short path lengths, and low wiring costs, while being robustly stable under large-scale reconnection of substructures.

DOI: 10.1103/PhysRevLett.103.108104

PACS numbers: 87.19.ll, 87.19.lj, 89.75.Hc

The brain comprises a network of components connected in a complex architecture that evolves on geological time scales and develops during the life of each organism. These anatomical connections support a set of functional connections that change continually in response to stimuli and resulting information processing demands, including learning and memory. Both connectivities display hierarchies [1–8], whose origins are poorly understood. Certainly they have high clustering, like regular networks [Fig. 1(a)], and short mean path lengths like random networks [Fig. 1(b)], leading to suggestions of small-world (SW) structure [see Fig. 1(c)] [1–4,9]. The hierarchical networks (HNs) in Figs. 1(d) and 1(e) resemble experimental connectivities seen in Fig. 1(f) [5]. Similar structures occur in networks of cellular, metabolic, evolutionary, ecological, social, genetic, economic, transport, communication, and other interactions [4,6,10,11], suggesting wide applicability of common underlying structural and dynamic principles, as yet unknown.

Many attempts to find underlying principles for brain architectures, especially cortical ones, have been based on postulates that the brain maximizes measures such as complexity, mutual information, and/or clustering of connections in graphs corresponding to observed connectivities [1,4,8]. However, such approaches still leave the question of why a given measure should be maximal. Physical constraints have also been proposed as determinants of brain architecture. These include minimizing wiring length and volume to conform with available cranial and metabolic capacity, and the competing imperative of minimizing the number of information processing steps to enable rapid responses. Stability of network activity has been invoked to obtain additional constraints on network architecture [3,6,11–13]. Only certain combinations of

network size, connectivity, and connection strength (or gain) are permitted for the network to remain stable [3,4,6,12]; we discuss these further below. Recently, attempts to minimize path length and total connections, while maximizing stability, have pointed to HNs being able to satisfy both constraints simultaneously [6]. However, maximal stability is unlikely to be an appropriate constraint because real brain dynamics is close to marginal stability, which facilitates adaptability and flexibility of responses [3,14,15], whereas maximal stability suppresses activity and instability leads to seizure [3,15].

In this Letter we discuss known physical constraints on possible architectures of brain networks and infer new ones from dynamical and structural considerations. We show that dynamical stability during evolution (increase of brain size and complexity), development and aging (brain growth, neural pruning, degeneration), learning and forgetting (formation and deletion of connections), and information processing (transient connections) imposes powerful additional constraints. We introduce a HN general enough to have regular, random, and SW limits. By analyzing structure and dynamics, the latter using a simple physiologically based model, we show that our HN can satisfy all constraints considered, unlike several widely studied architectures. Consequences for networks subject to evolution, development, and information processing demands are discussed, with implications for what structures can exist.

The networks in Fig. 1 have n nodes that each represent large populations of neurons (typically 10⁷–10⁹ in applications, corresponding to 2–40 cm² of cortex). They are as follows. (a) Regular, with each node connected to its k nearest neighbors. (b) A random network (RN), where connections are made with probability p. (c) A small-world network (SWN) [4,9], which comprises a regular "spine," connected with high probability, and random off-diagonal connections of lower probability [4,9]. (Many similar rules exist; we choose this as typical.) (d) and (e) Our new HN, where m on-diagonal blocks of size s are filled with connection probability pm and n = ms. The first level of off-diagonal blocks of the matrix are also of size s and have intrablock connection probability pc. Subsequent levels of off-diagonal blocks are successively of size 2s, 4s, 8s, ..., n/2, with connection probabilities pc q, pc q², ..., where q sets the rate of decrease of connectivity with level. Thus connections, not nodes, are hierarchical (e.g., consistent with the fact that neural plasticity acts on connections, not nodes), and we find that the HN degree distribution is Gaussian, not scale free. (The brain has log₂ m ≈ 5–10 levels, depending how one counts, e.g., neuron, assembly, minicolumn, hypercolumn, Brodmann area, lobe, hemisphere.) A RN is recovered for pc = pm = p and q = 1. The above SWN is approximated by q = 1 and pc ≪ pm ≈ 1, and the regular case by pc = 0, or by q = 0 and pc = pm ≈ 1. Sporns examined structural properties of a network corresponding to the case pc = pm = 1 with s a power of 2 [1]. (f) Experimental cat cortical connectivity [5].
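To make the block construction above concrete, the following sketch (ours, not from the Letter; the function name hierarchical_cm is ours, and the binary, directed CM convention is an assumption consistent with Fig. 1) generates a random instance of the HN connection matrix:

```python
import numpy as np

def hierarchical_cm(m, s, pm, pc, q, rng=None):
    """Random instance of the hierarchical network (HN) connection matrix.

    m  : number of on-diagonal blocks (a power of 2); n = m*s nodes
    s  : on-diagonal block size
    pm : connection probability inside on-diagonal blocks
    pc : probability for first-level off-diagonal blocks of size s
    q  : per-level decay, so level l has probability pc*q**(l-1)
    """
    rng = np.random.default_rng() if rng is None else rng
    n = m * s
    prob = np.zeros((n, n))
    levels = int(np.log2(m))                # off-diagonal block sizes s, 2s, ..., n/2
    for lvl in range(1, levels + 1):
        b = s * 2 ** (lvl - 1)              # block size at this level
        p = pc * q ** (lvl - 1)             # connection probability at this level
        for i in range(0, n, 2 * b):        # fill both off-diagonal quadrants
            prob[i:i + b, i + b:i + 2 * b] = p
            prob[i + b:i + 2 * b, i:i + b] = p
    for i in range(0, n, s):                # on-diagonal blocks
        prob[i:i + s, i:i + s] = pm
    C = (rng.random((n, n)) < prob).astype(float)
    np.fill_diagonal(C, 0.0)                # no self-connections
    return C

# Example: the small-world regime of Fig. 2(a), n = 64, m = 16, s = 4.
C = hierarchical_cm(m=16, s=4, pm=1.0, pc=0.8, q=0.3)
```

Setting pc = pm = p and q = 1 in this generator recovers the RN limit noted above, so the same construction covers the limiting cases.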

FIG. 1. Schematic connection matrices (CMs) of networks, with neural populations labeling rows and columns, and white entries for a connection between a given row and column. (a) Regular network. (b) Random network. (c) Small-world network. (d) Hierarchical network (HN) shaded to show levels. (e) Example HN. (f) Cat cortex CM adapted from [5].


© 2009 The American Physical Society


We now list requirements that must be satisfied by realistic brain networks and discuss them vs the above architectures, assuming n is large, regular networks have k/n ≪ 1, RNs are sparse, and SWNs and HNs are in their SW regimes. Table I shows which network architectures in Fig. 1 can satisfy the following criteria:

I. There be high clustering C to match experiment, with C being the mean ratio of actual connections between neighbors of nodes to the maximum possible number of such connections. For RNs, C ≈ p, which is usually small; regular networks have C = 3(k − 2)/[4(k − 1)] → 3/4 for large k; SWNs and HNs have 0 ≤ C ≤ 1, depending on parameters, but have high clustering in general.

II. Characteristic path lengths L be short to minimize processing steps, certainly much shorter than for regular networks, where L ∝ n. RNs have L ≈ ln n / ln(np) [4], while a fully connected network has L = 1. The functional forms for SWNs and HNs range from regular to near random, depending on parameters. Figure 2(a) shows that our HN has a SW regime with high C and low L when 0.5 ≲ pc ≲ 1 and 0.2 ≲ q ≲ 0.4 for n = 64, thus satisfying criteria I and II. [Similar results are seen for a case with 4 main blocks, as in Fig. 2(b), which is closer to Fig. 1(f). The main difference is that the contours are shifted toward the lower left because fewer modules have fewer hierarchical levels and are thus easier to keep connected.] A separate analysis shows that in this region the HN has modularity qualitatively similar to cat cortex: when the data are divided into Na modules for analysis, modularity peaks at small Na then decreases [1].

III. The total number of connections Nc increase in proportion to n to keep wiring length a constant fraction of total brain volume at large n; Nc cannot increase more slowly without the network becoming disconnected. Regular networks satisfy this criterion with Nc ∝ n, but RNs and SWNs have Nc ∝ n². We find HNs have Nc ∝ n[1 + nq^(log₂ n)] → n for large n, provided q < 1/2, so HNs can satisfy criteria I–III simultaneously.
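As an illustration of criteria I and II (again our sketch, not the authors' code; symmetrizing the CM to an undirected, unweighted graph is a simplifying assumption), C and L can be estimated with networkx, reusing the hierarchical_cm generator sketched earlier:

```python
import networkx as nx
import numpy as np

def clustering_and_path_length(C):
    """Mean clustering coefficient and characteristic path length of a CM,
    symmetrized to an undirected, unweighted graph (a simplification)."""
    G = nx.from_numpy_array(np.maximum(C, C.T))
    return nx.average_clustering(G), nx.average_shortest_path_length(G)

# Probe the (pc, q) plane of Fig. 2(a) for n = 64, m = 16, s = 4, pm = 1.
for pc, q in [(0.8, 0.3), (0.2, 0.1)]:
    A = hierarchical_cm(m=16, s=4, pm=1.0, pc=pc, q=q)
    try:
        Cc, L = clustering_and_path_length(A)
        print(f"pc={pc}, q={q}: C = {Cc:.2f}, L = {L:.2f}")
    except nx.NetworkXError:                 # disconnected graph: L -> infinity
        print(f"pc={pc}, q={q}: disconnected")
```

Points inside the SW regime quoted above should give high C with L only slightly above the RN value, while small pc and q tend to disconnect the network, as near the L = 2.5 contour of Fig. 2.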
TABLE I. Networks that can (Y) or cannot (N) satisfy criteria I–IV (see text). Regular networks are assumed to have k/n ≪ 1, RNs to be sparse, and SWNs and HNs to be in their SW regimes. Modular networks without hierarchy fail criterion II (q ≈ 0) or criteria III and IV (q ≈ 1).

Criterion       I    II   III  IV
Regular         Y    N    Y    Y
Random          N    Y    N    N
Small world     Y    Y    N    N
Hierarchical    Y    Y    Y    Y


FIG. 2. (a) Clustering coefficient C (shading) and path length L (contours) vs pc and q for the HN with n = 64, m = 16, s = 4, pm = 1, Nc = ⟨k⟩n [cf. Eq. (3)]. The network becomes disconnected (L → ∞) slightly below the L = 2.5 contour. (b) As for (a), but for m = 4 and s = 16.


IV. Networks be divisible into, or combinable from, two subnetworks of any relative size: (a) Without changing their architecture or the strength of more than a small fraction f of the total number of connections Nc, ideally such that f → 0 as n → ∞. This lowers the mean connection density ⟨k⟩/n without disconnecting the network or sending it unstable, as would occur in a RN if the total connections or mean connection density, respectively, were held constant. During evolution, this criterion allows specialization by splitting of structures, or brain growth by duplication of substructures, a common evolutionary path to large organ size [16] and a natural route to hierarchical architectures (we do not rule out other mechanisms or further reorganization and evolution of function after duplication of a module). It also permits assembly and disassembly of functional networks to meet changing stimulus-processing needs, e.g., to combine inputs from different senses. Conversely, it rules out changes of connections that would alter the architecture, although such modifications may occur in pathologies. (b) Without appreciably changing the probability of stability Ps. This dynamical reconnectability criterion is more stringent than only requiring stability under small perturbations [11] and ensures, for example, that marginally stable networks can be combined or divided while preserving marginal stability.

We describe each node's aggregate dynamics via a low-frequency version of a physiologically based model that has successfully accounted for a wide variety of brain dynamics [3,15]. Here, where all nodes are assumed identical and excitatory, we incorporate the mean decay rate γ ≈ 100 s⁻¹ of activity (neural spiking), the mean decay (α ≈ 50 s⁻¹) and rise (β ≈ 200 s⁻¹) rates of soma responses to incoming activity, and the matrix G = gC of gains [a constant gain g ≥ 0 multiplies the connection matrix (CM) C, and internodal delays are zero]. This provides a starting point for discussion of network stability, which can be generalized to more complex cases (e.g., multiple neural types, inhibition) [3,15].


If we assume the network dynamics has a fixed point, we must determine if it is linearly stable via solutions of

0 = det(G − ΛI),   (1)

Λ = (1 − iω/α)(1 − iω/β)(1 − iω/γ)²,   (2)

where I is the identity matrix and ω is the angular frequency. Stable solutions have eigenvalues with Im ω < 0; marginal stability has small negative values of Im ω. We explore stability of the dynamics given by (1) on our HN by examining the roots ω vs the above parameters. Similar analyses of regular, random, and SW networks were done previously [3]. The main results are: (i) as seen in Fig. 3(a), the eigenvalue spectrum of a HN has a circular cluster centered on the origin and progressively smaller clusters along the positive real axis, one for each level of hierarchy; (ii) the mean principal eigenvalue is

⟨λ₁⟩ = ⟨k⟩ = s{pm + pc[1 − (2q)^(log₂ m)]/[1 − 2q]},   (3)

which converges to a finite limit for q < 1/2 as m = n/s → ∞; (iii) the clusters corresponding to each level have approximately normal distributions that overlap at higher levels [see Fig. 3(b)]; and (iv) a sharp transition from high to low Ps occurs when the total nodal gain is unity (g⟨k⟩ = 1), as in Fig. 3(c) and for other architectures [3].

We next examine the effects of dividing networks in half while preserving their architecture (similar results are found for unequal divisions; combination of subnetworks is analogous). For regular networks, division involves removing only a few (∼k) connections; for RNs and SWNs, roughly half must be removed [e.g., the upper right and lower left quadrants in Fig. 1(b)]. In HNs, ∼n²pc q^(log₂ m − 1) connections need be removed, which is small for large m and q < 1/2. As seen in Table I, only the regular and HN cases satisfy criterion IV(a).

The dynamical reconnection criterion IV(b) was tested by examining the effects of division or combination while holding all other parameters constant. The rationale is that, especially in functional reconnection, one does not expect the entire brain to change its connections to compensate for every change in functional connectivity in each of its subnetworks; such changes happen quickly and sample a wide variety of configurations.
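A numerical sketch of results (ii) and (iv) follows (our illustration, reusing hierarchical_cm from the earlier sketch; the zero-frequency stability proxy gλ₁ < 1 is an assumption standing in for the full roots of Eqs. (1) and (2), chosen to be consistent with the g⟨k⟩ = 1 transition):

```python
import numpy as np

def lambda_max(C):
    """Largest real part of the CM spectrum, i.e., the principal eigenvalue."""
    return np.linalg.eigvals(C).real.max()

def mean_k(m, s, pm, pc, q):
    """Eq. (3): <lambda_1> = <k> = s*{pm + pc*[1 - (2q)**log2(m)]/(1 - 2q)}; q != 1/2."""
    return s * (pm + pc * (1.0 - (2.0 * q) ** np.log2(m)) / (1.0 - 2.0 * q))

# Fig. 3 parameters: n = 512, m = 128, s = 4, pm = pc = 1, q = 0.22.
m, s, pm, pc, q = 128, 4, 1.0, 1.0, 0.22
lam = [lambda_max(hierarchical_cm(m, s, pm, pc, q)) for _ in range(20)]
print(f"ensemble <lambda_1> = {np.mean(lam):.2f}, Eq. (3) gives {mean_k(m, s, pm, pc, q):.2f}")

# Probability of stability Ps vs gain g: a sharp drop is expected near g<k> = 1.
for g in (0.05, 1.0 / np.mean(lam), 0.15):
    stable = [g * lambda_max(hierarchical_cm(m, s, pm, pc, q)) < 1 for _ in range(50)]
    print(f"g = {g:.3f}: Ps ~ {np.mean(stable):.2f}")
```

Halving such a network deletes only the top-level off-diagonal blocks, roughly n²pc q^(log₂ m − 1) entries, so λ₁ and hence the g⟨k⟩ = 1 stability boundary barely move; this is the reconnectability property that Fig. 4 quantifies.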


FIG. 3. HN properties for n = 512, m = 128, s = 4, pm = 1, pc = 1, q = 0.22, Nc = ⟨k⟩n from Eq. (3). (a) λ spectrum. (b) Distribution of Re λ from an ensemble of HNs. (c) Ps vs g.

FIG. 4. Principal connection matrix eigenvalue λ₁ vs network size n for a RN (△) with p = 0.34, a SWN (+) with p = 0.25 off diagonal and a regular spine with k = 4, a regular network (×) with k = 11, and a HN (○) with pm = 1, pc = 1, q = 0.22, s = 4, Nc = ⟨k⟩n.


In contrast, architectures that satisfy criterion IV(b) enable such reconnections to occur freely without much affecting stability. Figure 4 shows that, of the four types considered, only regular networks and HNs have stability that remains nearly fixed on division in half. RN and SW stability is significantly changed because of the large numbers of connections that must be removed (introduced) to divide (combine) subnetworks, with both becoming unstable as n increases [4,10] (and overstable when divided). In networks whose nodes have identically distributed connections (as here), criterion III implies criterion IV: the condition Nc ∝ n from criterion III implies a constant number of connections per node, independent of n. The Gerschgorin circle theorem bounds λ₁ above by the largest row sum of G minus the diagonal entry of that row. Given criterion III, this bound is independent of n, thus satisfying criterion IV(b). As n changes, a negligible number of connections per node need be altered to maintain stability, so criterion IV(a) is satisfied.

The above results show the importance of maintaining stability under network reconnection and how hierarchical architecture can achieve this simply. The dynamical reconnectability criterion proposed here provides powerful constraints on allowable brain networks, ruling out not only regular and random architectures (which also fail the path-length and clustering constraints, respectively) but also commonly considered small-world architectures. This strongly limits structures accessible to evolution and development. Our HN shows that all the constraints discussed can be satisfied simultaneously (other architectures that do so may yet be discovered), which may help explain why experimental CMs show evidence of hierarchy. Other proposed criteria can be explored in this framework, e.g., (i) the suggestion that networks have balanced excitation and inhibition [14]; (ii) the similar criterion that networks be in a critical state of marginal stability [3,13,15,17], which would allow large excitatory and inhibitory gains to coexist, enabling rapid adaptability when either is changed by a small fraction [14], as in many engineering control systems, and whose approximately scale-free activity may help to seed hierarchical structure during development, possibly facilitating coupling to stimuli at all scales; and (iii) criteria that dictate that architectures should restrict the spread of abnormal activity [13]. Our results imply that the first two criteria would be relatively easy to maintain dynamically in HNs, whose structure also favors the third [13].

Dynamical reconnectability will likely constrain architectures of networks other than neural ones, since many networks need to remain stable as their parts commence and cease interactions, and many have a hierarchical structure. Examples include webs of interacting species, whose subnetworks may only interact in certain seasons or years; economic and commercial networks, where analogs of functional connections are made temporarily between and within firms to accomplish projects; and transport and power networks, where links must be continually made and broken without causing failures.


In these and other areas [11], dynamical reconnectability may dictate what structures can be robustly realized in the real world, and the ease with which HNs fulfill this criterion may explain their widespread occurrence.

The Australian Research Council supported this work.

[1] O. Sporns, BioSystems 85, 55 (2006); E. Bullmore and O. Sporns, Nat. Rev. Neurosci. 10, 186 (2009).
[2] T. I. Netoff, R. Clewley, S. Arno, T. Keck, and J. A. White, J. Neurosci. 24, 8075 (2004); M. Breakspear, E. T. Bullmore, K. Aquino, P. Das, and L. M. Williams, NeuroImage 30, 1230 (2006); S. Achard and E. Bullmore, PLoS Comput. Biol. 3, e17 (2007); O. Sporns, D. R. Chialvo, M. Kaiser, and C. C. Hilgetag, Trends Cogn. Sci. 8, 418 (2004); C. Zhou, L. Zemanová, G. Zamora, C. C. Hilgetag, and J. Kurths, Phys. Rev. Lett. 97, 238103 (2006); O. Sporns and J. D. Zwi, Neuroinformatics 2, 145 (2004).
[3] R. T. Gray and P. A. Robinson, Neurocomputing 71, 1373 (2008); 70, 1000 (2007); R. T. Gray, C. K. C. Fung, and P. A. Robinson, Neurocomputing 72, 1565 (2009).
[4] R. Albert and A.-L. Barabási, Rev. Mod. Phys. 74, 47 (2002); S. Boccaletti, V. Latora, Y. Moreno, M. Chavez, and D.-U. Hwang, Phys. Rep. 424, 175 (2006); N. Kashtan and U. Alon, Proc. Natl. Acad. Sci. U.S.A. 102, 13773 (2005).
[5] J. W. Scannell, C. Blakemore, and M. P. Young, J. Neurosci. 15, 1463 (1995); http://sites.google.com/a/brain-connectivity-toolbox.net/bct/datasets.
[6] R. K. Pan and S. Sinha, Phys. Rev. E 76, 045103 (2007); E. A. Variano, J. H. McCoy, and H. Lipson, Phys. Rev. Lett. 92, 188701 (2004).
[7] C. J. Honey, R. Kötter, M. Breakspear, and O. Sporns, Proc. Natl. Acad. Sci. U.S.A. 104, 10240 (2007).
[8] O. Sporns, G. Tononi, and G. M. Edelman, Cereb. Cortex 10, 127 (2000).
[9] R. Monasson, Eur. Phys. J. B 12, 555 (1999); M. E. J. Newman and D. J. Watts, Phys. Rev. E 60, 7332 (1999).
[10] A. W. Rives and T. Galitski, Proc. Natl. Acad. Sci. U.S.A. 100, 1128 (2003); M. Girvan and M. E. J. Newman, ibid. 99, 7821 (2002); E. Ravasz, A. L. Somera, D. A. Mongru, Z. N. Oltvai, and A.-L. Barabási, Science 297, 1551 (2002); R. M. May, Nature (London) 238, 413 (1972).
[11] R. J. Prill, P. A. Iglesias, and A. Levchenko, PLoS Biol. 3, e343 (2005).
[12] V. K. Jirsa and M. Ding, Phys. Rev. Lett. 93, 070602 (2004).
[13] M. Kaiser, M. Görner, and C. C. Hilgetag, New J. Phys. 9, 110 (2007).
[14] M. V. Tsodyks and T. J. Sejnowski, Netw. Comput. Neural Syst. 6, 111 (1995).
[15] P. A. Robinson, C. J. Rennie, and J. J. Wright, Phys. Rev. E 56, 826 (1997); G. Deco, V. K. Jirsa, P. A. Robinson, M. Breakspear, and K. Friston, PLoS Comput. Biol. 4, e1000092 (2008).
[16] R. Dawkins, Climbing Mount Improbable (Viking, London, 1998).
[17] J. M. Beggs and D. Plenz, J. Neurosci. 23, 11167 (2003).

