Functional complexity emerging from anatomical constraints in the brain: the significance of network modularity and rich-clubs

Gorka Zamora-López,1,2,∗ Yuhan Chen,3,4,5 Gustavo Deco,1,2,6 Morten L. Kringelbach,7,8,9 and Changsong Zhou3,4,10,11,12

1 Center for Brain and Cognition, Universitat Pompeu Fabra, Barcelona, Spain
2 Department of Information and Communication Technologies, Universitat Pompeu Fabra, Barcelona, Spain
3 Department of Physics, Hong Kong Baptist University, Hong Kong, P.R. China
4 Centre for Nonlinear Studies, Hong Kong Baptist University, Hong Kong, P.R. China
5 State Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University, P.R. China
6 Institució Catalana de Recerca i Estudis Avançats, Universitat Pompeu Fabra, Barcelona, Spain
7 Department of Psychiatry, University of Oxford, Oxford, UK
8 Center of Functionally Integrative Neuroscience (CFIN), Aarhus University, Aarhus, Denmark
9 Oxford Functional Neurosurgery and Experimental Neurology Group, Nuffield Departments of Clinical Neuroscience and Surgical Sciences, University of Oxford, UK
10 The Beijing-Hong Kong-Singapore Joint Centre for Nonlinear and Complex Systems, Hong Kong, P.R. China
11 Beijing Computational Science Research Center, Beijing, China
12 Research Centre, HKBU Institute of Research and Continuing Education, Shenzhen, China

arXiv:1602.07625v1 [q-bio.NC] 24 Feb 2016
The major structural ingredients of the brain and neural connectomes have been identified in recent years. These are (i) the arrangement of the networks into modules and (ii) the presence of highly connected regions (hubs) forming so-called rich-clubs. It has been speculated that the combination of these features allows the brain to segregate and integrate information, but a detailed investigation of their functional implications has been missing. Here, we examine how these network properties shape the collective dynamics of the brain. We find that both ingredients are crucial for the brain to host complex dynamical behaviour. When the connectomes of C. elegans, cats, macaques and humans are compared to surrogate networks in which one of the two features is destroyed, the functional complexity of the perturbed networks is always decreased. Moreover, a comparison between simulated and empirically obtained resting-state functional connectivity indicates that the human brain at rest is in a dynamical state that reflects the largest complexity its anatomical connectome is able to host. In other words, the brain operates at the limit of the network resources it has at hand. Finally, we introduce a new model of hierarchical networks that successfully combines modular organisation with rich-club forming hubs. Our model hosts more complex dynamics than the hierarchical network models previously defined and widely used as benchmarks.
I. INTRODUCTION
The study of interconnected natural systems by means of network analysis has uncovered the existence of common principles of organisation across scientific domains. Two of these striking and pervasive features are the organisation of networks into modules and the presence of highly connected nodes, or hubs. It was soon recognised that these two features are signatures of hierarchical organisation, but attempts to incorporate both into realistic network models have had limited success [1]. Detailed investigations of anatomical neural and brain networks shed light on how nature takes advantage of both features [2, 3]. The nervous system acquires information about the environment through different channels, known as sensory modalities. At early stages the information of each channel is processed independently by specialised neural components, giving rise to the formation of segregated clusters and a modular structure. On the other hand, an adequate and efficient integration of the information from those different channels is necessary for survival [4, 5].
∗ [email protected]
This integration step might be performed by the highly connected hubs, which have access to the information from many channels [2, 3, 6–8]. The question is thus to identify the topologies that optimally balance the coexistence of segregated subsystems with an efficient integration of their information. This question was investigated in [9] by a series of numerical experiments. Starting from a population of random graphs, an evolutionary algorithm would select those maximising a measure of complexity intended to quantify a balanced integration and segregation [2]. In subsequent iterations the winners would be mutated (slightly rewired) to produce another population to start over. This procedure gave rise to networks with interconnected communities, capturing the relevance of modules for the segregation of information. In such modular networks, however, integration is not efficient because it happens either via global synchrony, which is undesired in neural networks, or via unorganised cross-communication between the clusters. More recent analyses of anatomical cortical networks have uncovered the missing piece of the puzzle. Highly connected cortical areas have access to the information of the different sensory modalities and are densely interconnected with each other, forming a rich-club [2, 11, 12]. Similar structure
has also been identified in the neural network of the nematode Caenorhabditis elegans [13].

In this paper we investigate how these two ingredients, modular organisation and highly connected hubs, influence the collective dynamics of neural networks. To that end we consider the neural connectivity of the C. elegans, the corticocortical networks of cats and macaque monkeys, and the structural connectome of the human brain based on diffusion imaging and tractography. We first make a theoretical estimation of the functional connectivity that these anatomical networks give rise to, and then we calculate the complexity of the functional connectivity in terms of its variability. By increasing the coupling strength of the links we can evaluate the evolution of the complexity as the networks undergo a transition towards global synchrony. Functional complexity emerges for intermediate levels of the coupling, when the collective dynamics are far from the two trivial extremal cases: independence and global synchrony. A comparison of the real networks to modified versions of them, in which either the presence of hubs forming rich-clubs or the modular structure is destroyed, reveals that both topological features are crucial ingredients for high functional complexity. In the modified networks complexity is always reduced. In the case of the human dataset we could also compare the complexity of the estimated functional connectivity to the empirical resting-state functional connectivity obtained via functional magnetic resonance imaging. The comparison shows that the human brain, at rest, lies in a dynamical state that reflects the largest complexity that the anatomical connectome is able to host.

Based on the topological features found in neural and brain networks, we introduce a new model of hierarchical networks which successfully combines nested modules with the presence of hubs. This is achieved by replacing the random inter-modular connectivity by a preferential attachment rule that centralises inter-modular communication through a few nodes. The richness of dynamical regimes that can be hosted in such a network is highlighted: it achieves larger complexity than well-known graph models, namely random graphs, scale-free graphs, modular networks and hierarchically modular networks.

The manuscript is organised as follows. First we introduce a measure of functional complexity that is based on the variability of the values that the functional connectivity matrix takes. We then introduce a novel nonlinear model to map the expected cross-correlation matrix out of the structural connectivity matrix with only one parameter, the global coupling strength. The mapping resembles a diffusive process, with a decay for longer paths to prevent information from flowing perpetually through the network. In Section III we investigate the complexity of neural networks and compare the results to those obtained after perturbing their topologies. In Section IV we compare the complexity of random and hierarchical network models, and we introduce a new model of modular hierarchical networks with centralised inter-modular connectivity.
II. MEASURING FUNCTIONAL COMPLEXITY
Despite the common use of the term "complex networks", a formal quantitative measure of how complex a network is, compared to others, is still missing. Here we take an indirect approach and estimate the complexity of the collective dynamical behaviour that the network can host. Given a network of N coupled dynamical nodes, e.g. neurones or oscillators, the pair-wise correlation matrix R reflects the degree of interdependence among the nodes. When the nodes are disconnected their time-courses are independent of each other; hence, no complex collective dynamics emerge and all correlation values r_ij are near zero for i ≠ j. In the opposite extreme, global synchrony is a state that emerges from the interactions of the nodes; however, global synchrony is not a complex state either, because all nodes follow the same behaviour. In this case all the values of the correlation matrix are r_ij = 1. Complex dynamical interactions happen only when the collective dynamics are characterised by intermediate states between independence and global synchrony. In terms of the correlation matrix, such states are characterised by a broad distribution of r_ij values, see Figure 1.

Following the argumentation above, we define the functional complexity C of a coupled dynamical system in terms of the broadness of the distribution p(r_ij) of its correlation values r_ij. The broader the distribution, the more complex the system is; the narrower the distribution, as happens in both the uncoupled and the globally synchronised states, the less complex it is. Therefore we calculate C as the integral between the curves of p(r_ij) and of the uniform distribution p̄. If p(r_ij) is computed using m bins, then p̄ = 1/m and the integral between the curves is replaced by the sum of their differences:

    C = 1 - (1/C_m) Σ_{l=1}^{m} | p_l(r_ij) - 1/m | ,        (1)
where | · | denotes the absolute value and C_m = 2(m-1)/m is the summed difference between the uniform distribution p̄ = 1/m and a Dirac-delta distribution, obtained when all r_ij take the same value. After this normalisation C → 0 when p(r_ij) is a narrow peak around any correlation value and C → 1 when it approaches the uniform distribution p̄. As we are only interested in the pair-wise interactions, the diagonal values of R are discarded in the calculation of C.
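As a concrete illustration, Equation (1) can be implemented in a few lines. The sketch below is ours (the function and variable names are not from the paper's code, and NumPy is assumed); it bins the off-diagonal correlation values into m bins over [0, 1] and applies the normalisation described above.

```python
import numpy as np

def functional_complexity(R, m=50):
    """Functional complexity of a correlation matrix R, as in Eq. (1).

    A minimal sketch: the off-diagonal entries of R (assumed to lie in
    [0, 1], as in the paper) are binned into m bins and the resulting
    distribution is compared against the uniform distribution 1/m.
    """
    N = R.shape[0]
    mask = ~np.eye(N, dtype=bool)          # discard the diagonal r_ii
    rij = R[mask]
    counts, _ = np.histogram(rij, bins=m, range=(0.0, 1.0))
    p = counts / counts.sum()              # normalised histogram p_l
    # C_m = 2(m-1)/m: summed difference between the uniform distribution
    # and a delta distribution in which all r_ij take the same value.
    Cm = 2.0 * (m - 1) / m
    return 1.0 - np.abs(p - 1.0 / m).sum() / Cm
```

With this normalisation the function returns values close to 0 for narrowly peaked distributions and close to 1 for a nearly uniform one.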
This measure of functional complexity is easily applied to both empirically obtained and simulated data. In the following we present a mapping that allows the correlation matrix R to be estimated analytically from an adjacency matrix A, without the need to run simulations. It also allows comparisons across networks using a single parameter, the global coupling strength.
FIG. 1. Illustration of the measure of functional complexity. When the collective dynamics of a network are close to independence or to global synchrony (top and bottom panels), the distribution of the cross-correlation values is characterised by a narrow peak close to r_ij = 0 or to r_ij = 1. Complex dynamical interactions happen when the collective behaviour is characterised by intermediate states, leading to a broad distribution of the correlation values (middle panel). The broader this distribution, the more complex the system can be considered, with the limiting case of the uniform distribution p̄ = 1/m (red line; m is the number of bins).

A. Mapping functional connectivity from anatomical connectomes

The collective dynamics of coupled systems depend on several factors such as the topology of the network, the dynamical model of the nodes and the coupling function between them. In the present paper our aim is to investigate the influence of the connection topology and to keep other effects aside, e.g. the influence of the parameters of local dynamical models. Therefore we restrict our investigations to a simple non-linear diffusive model with decay for longer paths. Following [2, 14], the steady state of a linear system whose N elements x = (x_1, x_2, ..., x_N) are each driven by an independent Gaussian noise with unit variance, ξ = (ξ_1, ξ_2, ..., ξ_N), is described by x_i = g Σ_j A_ij x_j + ξ_i, where g is the coupling strength. Written in matrix form:

    x = gA · x + ξ .        (2)

In practical terms the variable x_i might be interpreted as the activity level of a cortical area [15, 16], or as the mean firing rate of its neurones [17]. The covariance matrix of this multivariate Gaussian system can be computed analytically by solving the system as x = (1 - gA)^{-1} · ξ = Q · ξ, and averaging over the states produced by successive values of ξ:

    COV(X) = ⟨x ⊗ x^T⟩_t = ⟨(Q · ξ) ⊗ (ξ^T · Q^T)⟩_t = Q · Q^T ,

where ⊗ is the outer product and ⟨·⟩_t is the average over time. The system has several poles and diverges to infinity when g equals the reciprocal of any of the eigenvalues of the adjacency matrix A [2]. In order to compare networks of different size and density, the coupling strength shall be normalised such that g̃ = g λmax, where λmax is the largest eigenvalue of A (equivalently, A is normalised by λmax). In this case the series for Q converges for all g̃ ∈ [0, 1).

At this point we note that the matrix Q can be expanded into a series as:

    Q = (1 - gA)^{-1} = 1 + gA + g^2 A^2 + g^3 A^3 + ··· = Σ_{l=0}^{∞} g^l A^l .        (3)

It is well known that the total number of paths of length l between two nodes is given exactly by the powers of the adjacency matrix, A^l. This includes paths with internal recurrent loops. From this point of view Q_ij represents the total influence exerted by node i over node j, accumulated over all possible paths of all lengths. It assumes that paths of any length are equally influential, and the series only converges because the powers g^l decrease faster than the number of paths A^l of length l increases. Mathematically, the general problem is to find a set of coefficients {c_l} for which the series Σ_{l=0}^{∞} c_l A^l converges for any adjacency matrix and coupling strength. One solution to this problem is to consider the coefficients c_l = 1/l!. In this case the influence matrix Q becomes, by definition, the exponential of the matrix gA:

    Q_exp = Σ_{l=0}^{∞} (g^l A^l) / l! = 1 + gA + (g^2 A^2)/2! + (g^3 A^3)/3! + ··· = e^{gA} .        (4)

From a physical point of view this represents a diffusion process on the network with a nonlinear decay for longer paths, and it has the advantage of being free of the divergence problem of the previous approach. Since information decays in neural systems, due to the stochasticity of synaptic transmission and the interaction between excitatory and inhibitory neurones in local circuits, it is natural to assume that shorter paths are more relevant than longer ones. The exponential of the adjacency matrix is also used in the analysis of networks to define communicability, a measure that generalises the concept of distance [18].

In the following we compute the covariance matrices as stated above, COV(X) = Q · Q^T, but replacing the matrix Q by Q_exp.
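A minimal sketch of this mapping, under our own naming and assuming NumPy/SciPy (scipy.linalg.expm for the matrix exponential): the adjacency matrix is normalised by its largest eigenvalue, Q_exp is computed, and the covariance is converted to a correlation matrix.

```python
import numpy as np
from scipy.linalg import expm

def exponential_fc(A, g):
    """Estimated correlation (FC) matrix from an adjacency matrix A
    using the exponential mapping of Eq. (4).  A sketch with our naming;
    g is the coupling after normalising A by its largest eigenvalue.
    """
    lam_max = np.max(np.real(np.linalg.eigvals(A)))
    Q = expm(g * A / lam_max)        # Q_exp = e^{gA} on the normalised A
    cov = Q @ Q.T                    # COV(X) = Q_exp · Q_exp^T
    d = np.sqrt(np.diag(cov))
    return cov / np.outer(d, d)      # covariance normalised to correlation

# Usage sketch: complexity along the transition to synchrony, reusing the
# functional_complexity() sketch from Section II (both names are ours).
# couplings = np.linspace(0.1, 10.0, 50)
# curve = [functional_complexity(exponential_fc(A, g)) for g in couplings]
```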
III. FUNCTIONAL COMPLEXITY OF NEURAL NETWORKS
In this section we investigate the functional complexity of anatomical brain and neural connectomes. We study the binary corticocortical connectivities of cats, macaque monkeys and humans, and the neuronal wiring of the C. elegans. For all networks we investigate the evolution of their functional complexity as the coupling strength g is increased and the collective dynamics undergo a transition from independence to global synchrony. At every value of g the covariance matrix COV of the network is estimated using the exponential mapping Q_exp in Equation (4). Then, the correlation matrix R is calculated by normalising the covariance matrix as usual. The level of global synchrony is evaluated by the mean correlation value ⟨r_ij⟩, with the diagonal entries r_ii discarded. Mean correlations are computed applying Fisher's z-transformation. The functional complexity is computed from R following Equation (1), considering the distribution of correlation values p(r_ij) with m = 50 bins and correlation values ranging from 0 to 1.

Because the coupling required to reach global synchrony depends on the size and the density of the network, the interesting range of g at which the transition happens is different in every case. For convenience and for illustrative reasons we normalise the adjacency matrix by its largest real eigenvalue λmax before the calculation of Q_exp. This normalisation aligns the transition to synchrony (for most networks) to the range g̃ ∈ [0, 10), where g̃ = g λmax. From now on we refer to the coupling strength of the normalised system simply as g.

In order to show that modular structure and rich-club forming hubs are the two fundamental ingredients leading to complex dynamical regimes, we compare the results to two types of surrogate networks: (i) rewired networks that conserve the degree distribution and (ii) random modular networks that preserve the community structure of the original network, see Methods. In the rewired networks the hubs are still present, although the modular structure vanishes and the rich-club is weakened to its random expectation. On the contrary, the modularity-preserving networks contain no hubs. For completeness, we also compare the results to those of random graphs with the same number of nodes and links. All results are averages over 1000 realisations.

The results are shown in Figure 2. As expected, the functional complexity follows a bell shape in all cases as the coupling strength increases and the collective network dynamics undergo the transition from a set of uncoupled nodes to global synchrony. Complexity emerges at intermediate states. All real networks (solid lines) achieve larger complexity than any of the surrogates along the whole range of g. The bar plots summarise the peak values of complexity. The lowest value always corresponds to the random graphs (dotted lines), while the rewired (dashed lines) and the modularity-preserving
(dot-dashed lines) networks take the intermediate values. These results show that it is the combination of hubs and modular structure that allows the real networks to reach the highest functional complexities. Destroying either of these features, the hubs or the modular structure, leads to a notable reduction in complexity. Another observation is that the transition to synchrony of the real networks is "slower" than that of the network models. This implies that there is a wide range of g for which the complexity remains high, and that the real networks are more robust against perturbations that might lead the brain into the undesired state of global synchrony.

Lastly, we compare the functional complexity estimated from the anatomical connectomes with the complexity of the brain at work. First, we consider structural connectivity (SC) matrices for 21 participants obtained through diffusion imaging and tractography, see Methods. For increasing values of g we estimate the simulated FC using the exponential mapping and calculate the mean correlation and the functional complexity for each participant, grey solid lines in Figures 3(A) and (B). The average over all the curves is represented by the solid black curves. Second, we computed empirical functional connectivity (FC) matrices from resting-state fMRI for a cohort of 16 participants, see Methods. For each of the empirical FC matrices we compute its mean correlation and functional complexity, represented as solid grey horizontal lines. The red solid lines represent the average of the empirical values. Comparing these results and focusing on the intersection between the empirical and the theoretically estimated outcomes, we find that the human brain at rest lies (within the limitations of cross-subject variability) at the largest functional complexity that its anatomical structure can host. Moreover, we note that this intersection happens at the coupling strength at which the simulated FCs best fit the empirical average FC, Figure 3(C). Here we quantify closeness as the Euclidean distance between the two matrices, with the diagonal entries ignored.

So far, we have found that the key ingredient leading anatomical connectomes to reach high complexity is the combination of modular architecture with hubs forming rich-clubs. We have also found that the human brain, at rest, lies in a dynamical state which matches the largest possible complexity that the underlying anatomical connectome can host. Next, we need to better understand how those anatomical features give rise to large functional complexity. Therefore, in the following we will study the complexity of common network models.
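For reference, the degree-preserving surrogates used in the comparison above can be generated with the classic link-switching procedure described in Methods. The sketch below is our own illustration for undirected binary graphs (NumPy assumed), not the GAlib routine used by the authors.

```python
import numpy as np

def rewire_preserving_degrees(A, niter_per_link=10, seed=None):
    """Degree-preserving link switching for an undirected binary graph.

    A sketch of the rewired surrogates (our code, not GAlib): repeatedly
    pick two links (i, j) and (k, l) and swap them to (i, l) and (k, j)
    whenever this creates no self-loop and no duplicate link.
    """
    rng = np.random.default_rng(seed)
    A = A.copy()
    edges = [tuple(e) for e in np.argwhere(np.triu(A, 1))]
    for _ in range(niter_per_link * len(edges)):
        e1, e2 = rng.choice(len(edges), size=2, replace=False)
        i, j = edges[e1]
        k, l = edges[e2]
        if len({i, j, k, l}) < 4:
            continue                      # would create a self-loop
        if A[i, l] or A[k, j]:
            continue                      # would create a duplicate link
        A[i, j] = A[j, i] = A[k, l] = A[l, k] = 0
        A[i, l] = A[l, i] = A[k, j] = A[j, k] = 1
        edges[e1], edges[e2] = (i, l), (k, j)
    return A
```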
IV. FUNCTIONAL COMPLEXITY OF SYNTHETIC NETWORK MODELS
In this section we explore the functional complexity of common synthetic network models. We start by investigating random graphs and scale-free networks.
FIG. 2. Functional complexity of anatomical connectomes. Comparison of the evolution of the complexity for four neural and brain networks (solid lines) to the results for network ensembles that perturb the original topology: random graphs of the same size and number of links (dotted lines), rewired networks conserving only the degrees of the nodes (dashed lines), and modularity-preserving random graphs (dash-dotted lines). The right-hand panels summarise the largest complexity achieved in each case.
We then study modular and hierarchical networks. We also introduce a new model of hierarchical networks with a centralised hierarchy through local and global rich-clubs. This model is inspired by observations in neural and brain networks. As in the previous section, we first estimate the expected functional connectivity (FC), or correlation matrix, of the adjacency matrices by applying the exponential mapping, and then we calculate the functional complexity from the FC matrix using Equation (1).
A. Random and scale-free networks
We begin by studying random graphs of N = 1000 nodes and scale-free networks of the same size. Networks were generated with link densities ρ = {0.01, 0.05, 0.1, 0.3}. Figures 4(A) and (B) show the results for the random graphs and Figures 4(C) and (D) the results for the scale-free networks. As expected, the mean correlation increases monotonically with the coupling strength. Full circles (•) and full triangles (▲) highlight the coupling at which the mean correlation reaches 0.85. As seen, dense networks are easier to synchronise than sparse networks. This is highlighted in Figure 4(E), where the real coupling (before normalisation) needed for the network to achieve ⟨r_ij⟩ = 0.85 is observed to decrease with density. The functional complexity of all networks peaks in the middle of the transition. For large enough densities this happens at g ≈ 3.5 in random networks and at g ≈ 3.8 in scale-free networks. The largest complexity decreases
with density in random graphs but in scale-free networks it converges to C = 0.5, Figure 4(F). The complexity of scale-free networks is notably higher than that of random graphs. The reason for this difference is that in scale-free networks hubs synchronise with each other at weaker couplings than the rest of the nodes [19, 20]; therefore, at intermediate values of g a synchronised population of nodes coexists with other populations of weakly synchronised and of uncorrelated nodes. See the correlation matrices in Supplementary Figure S1.
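The scale-free networks studied here were generated with the static-model rule described in Methods, where node i is chosen with probability p(i) ∝ i^(-α) and the resulting degree exponent is γ = (1 + α)/α. A minimal sketch of that rule, with our own function name and NumPy as an assumption:

```python
import numpy as np

def scale_free_graph(N, L, gamma=3.0, seed=None):
    """Static-model scale-free graph (a sketch, our naming): nodes are
    ranked i = 1..N and chosen with probability p(i) proportional to
    i**(-alpha), with alpha = 1/(gamma - 1), until L distinct links exist.
    """
    rng = np.random.default_rng(seed)
    alpha = 1.0 / (gamma - 1.0)
    w = np.arange(1, N + 1, dtype=float) ** (-alpha)
    p = w / w.sum()                        # selection probabilities p(i)
    A = np.zeros((N, N), dtype=int)
    links = 0
    while links < L:
        i = rng.choice(N, p=p)
        j = rng.choice(N, p=p)
        if i == j or A[i, j]:
            continue                       # skip self-loops and duplicates
        A[i, j] = A[j, i] = 1
        links += 1
    return A
```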
B. Modular networks
We now investigate modular networks without hierarchies, for later comparison. We generate networks of N = 256 nodes arranged into four modules of 64 nodes. Both the internal links within the modules and the external links across modules are seeded at random. We compare networks of varying strength of modular organisation by tuning the ratio of internal to external links while conserving the total mean degree ⟨k⟩ = 24. Therefore, the mean internal degree κi of the modules is varied from 12 to 24 and the mean external degree κe across modules from 12 to 0. The strength of the modular organisation is quantified by the modularity measure q [21]. As κi increases and κe decreases the modules become more pronounced, until they become disconnected when κi = 24 and κe = 0.

The first conclusion of the comparison is that achieving global synchrony requires stronger coupling as modularity increases, see Figure 5A. The easiest network to synchronise is the random graph. Figure 5B shows three snapshots of the transition for the network with (κe, κi) = (5, 19). As coupling increases, the modules become internally synchronised, but synchronising them with each other requires further effort. The sparser the connections between the modules, the more difficult it is to synchronise them.
The behaviour of the functional complexity is rather different and does not increase monotonically with modularity, Figures 5C and D. There is an optimal balance between the internal and the external degrees for which the complexity is maximised. In our case this happens for the networks with (κe, κi) = (5, 19) and modularity q = 0.50. In the following, we repeat the analysis for networks with both modular and hierarchical organisation.
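A sketch of how the modular graphs used in this comparison can be generated (our code, not the GAlib HMRandomGraph function): N = 256 nodes in four modules, with intra- and inter-modular links seeded uniformly at random until the desired mean internal and external degrees are reached.

```python
import numpy as np

def modular_graph(N=256, nmodules=4, k_int=19, k_ext=5, seed=None):
    """Random modular graph (a sketch with our naming): equal-sized
    modules, with N*k_int/2 links seeded inside modules and N*k_ext/2
    links seeded across modules, all chosen uniformly at random.
    """
    rng = np.random.default_rng(seed)
    size = N // nmodules
    labels = np.repeat(np.arange(nmodules), size)   # module of each node
    A = np.zeros((N, N), dtype=int)
    L_int = N * k_int // 2
    L_ext = N * k_ext // 2
    placed_int = placed_ext = 0
    while placed_int < L_int or placed_ext < L_ext:
        i, j = rng.integers(0, N, size=2)
        if i == j or A[i, j]:
            continue
        same = labels[i] == labels[j]
        if same and placed_int < L_int:
            A[i, j] = A[j, i] = 1
            placed_int += 1
        elif not same and placed_ext < L_ext:
            A[i, j] = A[j, i] = 1
            placed_ext += 1
    return A, labels
```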
FIG. 3. Functional complexity of the human resting state. Comparison between the complexity of the theoretically estimated and the empirically measured functional connectivity in the resting-state dynamics of humans. We obtained both structural connectivity (SC) via diffusion imaging and tractography and resting-state functional connectivity (FC) from BOLD signals. (A) Mean correlation and (B) functional complexity of the FC matrices. The curves correspond to the results for the estimated FC given an anatomical SC as the coupling g increases. The horizontal lines correspond to the results for the empirical FC (one FC matrix per subject, plotted as lines for visualisation). Grey curves are the results for every subject and bold lines are the population averages. (C) Euclidean distance between the population-average empirical FC matrix and the theoretically estimated FC matrices at different values of g. The bold blue curve is the population average. The light vertical line corresponds to the g at which the fit between the estimated and the empirical FC is best.
C. Hierarchical and modular networks

We finish the section on synthetic networks by studying the complexity of modular networks with nested hierarchies. We compare three models: the first two are well known in the literature, and we introduce a new model motivated by the properties found in real brain networks, see Figure 6.
1. Fractally hierarchical and modular networks
In an attempt to combine the modular organisation and the scale-free degree distribution found in many biological networks, Ravasz and Barabási proposed a tree-like, self-similar network model [1, 24], see Figure 6(A). The generating motif of size N0 is formed of a central hub surrounded by a one-dimensional ring of N0 - 1 nodes which are connected to their first neighbours. To add hierarchical levels, every node is replaced by such a motif, in which the original node becomes the new local hub. Finally, to achieve a scale-free-like degree distribution, the hubs are connected to all non-hub nodes in the branches below. The example in Figure 6(A) is the version with N0 = 5 and three hierarchical levels, with a total of 125 nodes. For the calculations of the complexity we consider a version of the model with N0 = 6 and three hierarchical levels, with a total size of N = 216. Due to the deterministic nature of the model, this is the closest we can approximate the 256 nodes of the other networks studied.

The evolution of the mean correlation and the complexity for increasing coupling strength are shown in Figures 7(A) and (B). The mean correlation of the Ravasz-Barabási network is indistinguishable from that of the rewired networks conserving the degrees of the nodes. The model achieves a very poor complexity, which is surpassed by both the rewired and the random counterparts. The distinctive behaviour of the random networks in this case can be explained by their sparse density (see Figure 4); the network has a density of only ρ = 0.031. The reason why the Ravasz-Barabási model fails even to match the complexity of the rewired networks, despite having a scale-free-like degree distribution, is that, by construction, the hubs are preferentially connected to the non-hub nodes of the branches below them. This choice leads to a situation in which the hubs are poorly connected with each other, contrary to what happens in a rich-club. The density of connections within the subset of hubs is only ρ = 0.33; see Supplementary Figure S2.
FIG. 4. Functional complexity of random and scale-free networks. (A) and (B) Mean correlation and functional complexity for random graphs of N = 1000 nodes as the network dynamics undergo the transition from independence to global synchrony, tuned by the coupling strength g. Curves are for random networks of increasing link density. (C) and (D) Same as (A) and (B) but for scale-free networks. (E) Coupling strength required for the network to achieve a mean correlation of ⟨r_ij⟩ = 0.85, depending on link density. The real coupling is g_real = g / λmax. (F) Largest complexity values achieved by random and scale-free networks of different densities. All results are averages of 100 realisations.
FIG. 5. Functional complexity of modular networks. (A) Total mean correlation and (C) functional complexity of modular networks as the coupling strength increases. Networks consist of N = 256 nodes partitioned into 4 communities of 64 nodes. Curves for networks with increasing modularity (ratio of internal to external links) are shown; the total number of links is conserved. The blue line corresponds to random graphs of the same size and number of links. (B) Example of the evolution of the correlation matrices R. As the coupling g increases the distribution becomes bimodal, reflecting the stronger correlations within the modules and the weaker correlations across modules. Data correspond to one modular network realisation with q = 0.542. (D) Largest complexity achieved by modular networks of varying modularity. The blue dashed line corresponds to the maximal complexity achieved by equivalent random graphs. All data points are averages of 100 realisations.
FIG. 6. Hierarchical and modular network models: (i) The Ravasz & Barabási model is a fractally hierarchical structure that was proposed to reproduce the hierarchical features of many natural networks which possess both modules and a scale-free degree distribution. (ii) The model by Arenas, Díaz-Guilera & Pérez-Vicente is a nested modular network in which sets of random subgraphs (the modules) link at random to form larger communities [22, 23]. (iii) A new hierarchical and modular network model that combines the presence of both a nested modular hierarchy and a scale-free-like degree distribution. This is achieved by centralising the inter-modular connectivity through local and global hubs that form local and global rich-clubs.
2. Nested hierarchical model with random connectivity
In [22], Arenas, Díaz-Guilera and Pérez-Vicente introduced a hierarchical network model in which a network of N = 256 nodes is divided into four modules of 64 nodes, each subdivided into four submodules of 16 nodes, Figure 6B. The links within and across modules at all levels are placed at random. The hierarchy is defined by increasing density at the deeper levels. In the previous section we found that, in a network composed of four modules of 64 nodes, complexity was optimised when the mean external and internal degrees were (κe, κi) = (5, 19). Taking these values as the starting point, we set the mean degree of the nodes at the first level to κ1 = κe = 5, and the remaining 19 connections are distributed among the two deeper levels. The combination κ2 = 6 and κ3 = 13 optimises the complexity achieved by the model.

The results for the mean correlation and the functional complexity are shown in Figures 7C and D. The behaviour of the nested random networks is very similar to that of the modular networks. The transition into global synchrony is governed by the synchronisability of the four large modules with each other, as happened for the modular networks, because synchronisation between the small modules at the deepest level is rather easy to achieve. The largest complexity reached by the model is C = 0.48, only slightly above that of the modular networks (C = 0.46).
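The nested random model can be built with the same random seeding applied level by level. The sketch below is our illustration (NumPy assumed) for the configuration used here, with per-level mean degrees (κ1, κ2, κ3) = (5, 6, 13); it is not the GAlib implementation.

```python
import numpy as np

def nested_random_graph(N=256, k_levels=(5, 6, 13), seed=None):
    """Nested hierarchical random graph (a sketch, our naming), assuming
    N = 256 nodes in 4 modules of 64, each split into 4 submodules of 16.
    k_levels gives the mean degree contributed at each level:
    (inter-module, inter-submodule, within-submodule).
    """
    rng = np.random.default_rng(seed)
    mod = np.arange(N) // 64              # first-level module of each node
    sub = np.arange(N) // 16              # second-level submodule
    A = np.zeros((N, N), dtype=int)

    def seed_links(nlinks, accept):
        placed = 0
        while placed < nlinks:
            i, j = rng.integers(0, N, size=2)
            if i == j or A[i, j] or not accept(i, j):
                continue
            A[i, j] = A[j, i] = 1
            placed += 1

    k1, k2, k3 = k_levels
    seed_links(N * k1 // 2, lambda i, j: mod[i] != mod[j])
    seed_links(N * k2 // 2, lambda i, j: mod[i] == mod[j] and sub[i] != sub[j])
    seed_links(N * k3 // 2, lambda i, j: sub[i] == sub[j])
    return A
```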
3. Hierarchical and modular networks with centralised connectivity
Neural and brain networks possess both a modular structure and a broad degree distribution. The hubs are
scattered among the modules but densely interconnected with each other, forming a rich-club [2, 11, 12]. While in the nested random model the connections between the modules are placed at random, in neural and brain networks inter-module connections and communication paths tend to be centralised through the hubs [25]. Hence, we now propose a hierarchical model that combines both features. For that we modify the nested hierarchical model to replace the random connectivity between modules by a preferential attachment rule, see Methods. The inter-modular links at the second level are planted using exponent γ2 = 2.0 and the links between the four major modules at the first level are placed with γ1 = 1.7. These values of the exponents are chosen such that the hubs in the resulting networks form a rich-club with internal density ρ ∼ 0.8, as in real neuronal networks, see Supplementary Figure S2.

The results for the mean correlation and the functional complexity are shown in Figures 7E and F. As before, curves for equivalent random networks and for rewired networks are included for comparison. The largest complexity achieved by the nested centralised networks, C = 0.57, exceeds the values reached by any other model investigated here. Also, the decay of its tail is slower than that of random networks, indicating robustness against global synchrony, but faster than that of modular and of nested random networks, due to the influence of the hubs.

So far, we have shown that a network model constructed using the topological features found in empirical neural networks, a combination of modular structure with hubs centralising the communication between modules, achieves larger complexity than the other network models. Especially relevant is the improvement over the fractal model by Ravasz and Barabási, which has been the only network model proposed so far to combine the modular organisation and the presence of hubs in biological networks.
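The centralisation step of the new model can be sketched as follows (our code, not the GAlib HMCentralisedGraph function): inter-modular links are planted with the static preferential-attachment weights p(i) ∝ i^(-α), α = 1/(γ - 1), applied within each module, e.g. with γ = 2.0 between submodules and γ = 1.7 between the four major modules.

```python
import numpy as np

def centralised_intermodular_links(A, labels, nlinks, gamma, rng):
    """Plant nlinks inter-modular links with a preferential attachment
    rule (a sketch, our naming): within each module the nodes are ranked
    and chosen with probability proportional to rank**(-alpha), with
    alpha = 1/(gamma - 1), so that a few nodes per module centralise
    the cross-modular connections.
    """
    N = A.shape[0]
    alpha = 1.0 / (gamma - 1.0)
    w = np.zeros(N)
    for m in np.unique(labels):
        idx = np.where(labels == m)[0]
        w[idx] = np.arange(1, len(idx) + 1, dtype=float) ** (-alpha)
    p = w / w.sum()
    placed = 0
    while placed < nlinks:
        i = rng.choice(N, p=p)
        j = rng.choice(N, p=p)
        if i == j or labels[i] == labels[j] or A[i, j]:
            continue                 # only links across different modules
        A[i, j] = A[j, i] = 1
        placed += 1
    return A

# Usage sketch (module_labels is a hypothetical array of module indices):
# rng = np.random.default_rng()
# A = centralised_intermodular_links(A, module_labels, nlinks=256 * 5 // 2,
#                                    gamma=1.7, rng=rng)
```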
FIG. 7. Functional complexity of hierarchical and modular networks. Mean correlation (upper panels) and functional complexity (lower panels) of the dynamics on three hierarchical and modular network models at different coupling strengths, representing the transition from node independence to global synchrony. (A) and (B) Ravasz & Barabási model, (C) and (D) HM random model, and (E) and (F) centralised HM network model. All curves represent averages over 100 network realisations. Variances were very small in all cases and are not shown. For the rewired cases of the HM random and HM centralised models (dashed lines in (C)-(F)), 100 HM networks were generated and for each realisation 100 rewired networks were evaluated. The apparent dash-dotted curves in (C) and (E) are the overlap of the results for the random and rewired cases.
V. SUMMARY AND DISCUSSION
In the present paper we have investigated the richness of the collective dynamical regimes that networks with different characteristics can host. For that, we have proposed a measure of functional complexity which quantifies the variability in the strength of the functional interactions between the nodes. It captures the fact that complexity vanishes in the two extreme cases: when the nodes are dynamically independent of each other and when the system is globally synchronised. Functional complexity arises in intermediate states, when the collective dynamics spontaneously organise into heterogeneous clusters that interact with each other.

Following the discovery in recent years that neural and brain networks are organised into modular structures with hubs forming rich-clubs that facilitate cross-modular communication, we have tested whether this architecture leads to complex collective dynamics. First, we have found that perturbing neural and brain networks such that their modular structure is destroyed while the degrees of the nodes are conserved, or the other way around, leads to networks with reduced functional complexity. The result is in agreement with the observation that rich-clubs increase the set of attractors in a network of spin-glass elements beyond that of a scale-free topology [26]. We also find that the regime of high complexity is stable and robust against the network accidentally shifting towards global synchrony.
These observations reinforce previous indications that the combination of modules with rich-club forming hubs might be essential for the working of the brain, in particular for the capacity of neural systems to simultaneously process information of different modalities in parallel, by separated groups of neurones, and to integrate that multi-sensory information [2, 6, 7, 27, 28].

Second, we have compared theoretically estimated functional connectivity (based on structural connectivity via DTI and tractography) to empirically obtained resting-state functional connectivity from fMRI in humans. The result shows that the human brain, at rest, lies in a dynamical state which is characterised by the largest functional complexity the underlying anatomical network can host. This carries profound implications for understanding the relationship between structural and functional connectivity. It is now well known that resting-state activity is highly structured into spatiotemporal patterns [29]. Although the origin and the detailed role of the resting-state dynamics are still debated, there is wide agreement that both consciousness and cognitive capacities benefit from the presence of a large pool of accessible states and from the ability to switch between them. This is supported by the finding that the dynamical repertoire of the brain is drastically decreased during sleep [30] and under anaesthesia [31, 32]. The variability of brain signals has been found to increase with age from childhood to adulthood [33]. Maturation appears to lead to a brain with greater functional variability, which may
reflect a broader repertoire of metastable brain states and more fluid transitions among them. Structurally, we can assume that the major anatomical pathways remain relatively unaltered over time in healthy brains, which imposes permanent constraints on the computational capacity. The question is then how the brain could host a wide dynamical repertoire within the constraints of the underlying anatomical network. A possible explanation is that information does not always have to flow over the shortest paths. Indeed, there exist multiple alternative paths of processing between any two regions in the brain [25, 34]. The simple dynamical model we used here, Equation (4), captures this in the sense that for weak coupling strengths information is transmitted only through the shortest paths, causing segregation, while at strong couplings all paths of all lengths become active. In this regime integration is favoured, at the cost of the network becoming globally synchronised, which is an undesirable, pathological state. It is only at intermediate regimes of the coupling, when an adequate balance between the relevance of short and long paths is reached, that the network achieves its largest complexity. A complementary proposal is that the computational power of the network could increase significantly if the brain were able to selectively activate or inactivate communication paths according to the sensory stimulation and the ongoing activity [28, 35]. While the precise mechanisms for this routing are still debated [36, 37], our result already indicates that the brain operates at the limit of the network resources it has at hand.

Last but not least, we have introduced a new model of hierarchical and modular networks that leads to higher functional complexity than any of the models previously proposed and commonly used as benchmarks. Our model succeeds where previous efforts have failed: it combines nested modules with highly connected nodes. Especially remarkable is the improvement over the fractal model by Ravasz and Barabási, which was introduced to explain the coexistence of modular organisation and a scale-free-like degree distribution in biological networks [1, 24]. Their model fails to foster complex dynamics; it even performs worse than comparable random and rewired graphs. The problem is that, by construction, the hubs of the Ravasz-Barabási networks are preferentially connected to the non-hub nodes in the deeper branches. As a consequence the hubs are sparsely connected with each other (see Supplementary Figure S2), contrary to the observations in brain connectomes, whose hubs tend to form rich-clubs. Our model solves the issue by centralising the cross-modular communication through hubs with a preferential attachment rule.
A. Related work
This work builds upon the efforts of Tononi, Sporns & Edelman, who tried to answer the question of which network structure optimally allows the brain to simultaneously segregate and integrate information [9, 38].
We complete here the search for such structures by incorporating the knowledge gained over recent years about the topological ingredients of neural and brain networks: modular structures, highly connected hubs and the formation of rich-clubs. Nevertheless, the bridge between high complexity and the balance between integration and segregation is not fully understood yet.

From a technical point of view, we have proposed an alternative measure of functional complexity which is based on the variability of the pair-wise values in the functional connectivity matrix. Our measure satisfactorily vanishes in both extremal cases: dynamical independence and global synchrony. On the contrary, the 'neural complexity' measure proposed in [2] grows monotonically with the coupling strength and diverges to infinity when the network is globally synchronised (see Supplementary Material for details). Our measure derives from the one introduced in [39, 40]. Here, we have opted to replace the nonlinear influence of their entropy-based measure by a linear comparison (the integral) between the observed and the uniform distributions. Our measure is also more robust than entropy to changes in the number of bins taken for the distribution. The variability of the correlation values has also been employed before to evaluate the richness of the dynamical repertoire [30, 31]. Despite the success of those efforts to capture the increased variability of complex states, they lack an adequate normalisation.
B. Limitations and Outlook
As a graph model, the hierarchical model we have presented is very satisfactory. From a biological point of view, we recognise the need to define models which can explain how brain networks developed their current topology in the course of evolution. We foresee that the key ingredients of such evolutionary models are: (i) identification of the "driving forces", e.g., the balance between integration and segregation, and (ii) a growth process that accounts for the increase in the number of neurones or in cortical surface over time [41, 42]. Other ingredients shall include (iii) the trade-off between the cost and the efficiency of the resulting networks given the spatial constraints of the brain [43–45] and (iv) the patterns of axonal growth during development [46].

The measure of functional complexity we have introduced is an excellent candidate as a clinical marker for connectivity-related conditions. As explained above, the measure decreases when the balance between segregation and integration is broken. Both cases, too much segregation or too much integration, can be regarded as pathological. Over the last decade it has been consistently reported how resting-state functional connectivity differs between healthy subjects and patients suffering from diverse conditions [47]. Most of those reports are based on the graph analysis of functional connectivity, which unfortunately depends on several arbitrary choices [48, 49],
e.g., the need to set a threshold to binarise the correlation matrix. Our measure of complexity requires no such arbitrary choices, and it is easy to apply and to interpret. It is also very fast to compute and is thus suitable for online monitoring systems.

The temporal evolution of the functional complexity is a very relevant application that we have not addressed here. For example, investigation of the fluctuations in complexity during rest will further inform us about the transient dynamical states the brain undergoes and about its dynamical repertoire. Very important is also the examination of the variation in complexity during tasks and under external stimuli. As recently stated, segregation and integration should be truly understood as the response of the brain to inputs [50]: segregation being the capacity of the brain to convey the amount of information provided by individual external inputs, and integration its ability to combine external inputs distributed across different brain regions. A complete characterisation of these phenomena, however, still requires further developments, because the complexity of a coupled dynamical system is composed of two aspects. One is the formation of complex coalitions between the nodes. This is the interaction complexity that we have studied here and which we have named functional complexity, for consistency with the nomenclature of 'functional connectivity'. The other aspect is the complexity of the time-courses traced by the individual nodes. In this sense, an oscillator is less complex than a chaotic attractor because oscillatory behaviour is predictable; but random noise is also not complex despite being unpredictable [51].

As a final remark, we note that although we have restricted our investigations to the study of neural and brain networks, we are confident that modular organisation with centralised intercommunication is a general principle of organisation in biological networks. It is reasonable to argue that the assumption of a balance between integration and segregation as the principal driving force shaping the large-scale topology of the brain is also applicable to other networked biological systems. As an example, we recently reported that the transcriptional regulatory network of Mycobacterium tuberculosis shares fundamental properties with those of neural networks [52]. In the end, the transcriptional regulatory network is responsible, in this small bacterium, for collecting information about the environment through different channels, for processing and interpreting that information, and for combining it efficiently to "take decisions" that improve the chances of survival.
VI. MATERIALS AND METHODS

A. Connectivity datasets
Caenorhabditis elegans: The C. elegans is a small nematode, approximately 1 mm long, and is one of the most studied organisms. Its nervous system consists of
302 neurones which communicate through gap junctions and chemical synapses. We use the collation performed by Varshney et al. in [13]; the data can be obtained at http://wormatlas.org/neuronalwiring.html. After organising and cleaning the data we ended up with a network of N = 275 neurones and L = 2990 links between them. For the general purposes of the paper we consider two neurones connected if there is at least one gap junction or one chemical synapse between them. We ignored all neurones that receive no inputs because they would always be dynamically independent. The resulting network has a density of ρ = 0.04 and a reciprocity of 0.470, meaning that 47% of the links join neurones A and B in both directions while the remaining 53% connect two neurones in only one direction. Most of the reciprocal connections come from the gap junctions, which are always bidirectional; only 21% of the chemical synapses are devoted to connecting two neurones in both directions.

Cat cortex: The dataset of the corticocortical connections in cats was created after an extensive collation of literature reporting anatomical tract-tracing experiments [53, 54]. It consists of a parcellation of one cerebral hemisphere into N = 53 cortical areas and L = 826 fibres of axons between them. After application of data mining methods [53, 55] the network was found to be organised into four distinguishable clusters which closely follow functional subdivisions: visual, auditory, somatosensory-motor and frontolimbic. The network has a density of ρ = 0.30 and 73% of the connections are reciprocal.

Macaque monkey: The macaque network is based on a parcellation of one cortical hemisphere into N = 95 areas and L = 2390 directed projections between them [43]. The dataset, which can be downloaded from http://www.biological-networks.org, is a collation of tract-tracing experiments gathered in the CoCoMac database (http://cocomac.org) [56]. Ignoring all cortical areas that receive no input, we ended up with a reduced network of N = 85 cortical areas. The network has a density of ρ = 0.33 and a reciprocity of r = 0.74.

Human structural connectivity: Structural connectivity was acquired from 21 healthy right-handed volunteers. Full details can be found in [57, 58]. Diffusion imaging data were acquired from all participants on a Philips Achieva 1.5 Tesla Magnet in Oxford using a single-shot echo planar sequence with coverage of the whole brain. The scanning parameters were: echo time (TE) = 65 ms, repetition time (TR) = 9390 ms, reconstructed matrix of 176 x 176, reconstructed voxel size of 1.8 x 1.8 mm and slice thickness of 2 mm. Furthermore, DTI data were acquired with 33 optimal nonlinear diffusion gradient directions (b = 1200 s/mm2) and 1 non-diffusion weighted volume (b = 0). We used the Automated Anatomical Labeling (AAL) template to parcellate the entire brain into 90 cortical and subcortical regions (45 in each hemisphere) as well as 26 cerebellar regions (cerebellum and vermis) [59]. The parcellation was conducted in the diffusion MRI native space. We estimated the connectivity probability by
applying probabilistic tractography at the voxel level, using a sampling of 5000 streamline fibres per voxel. The connectivity probability from region i to region j was defined as the proportion of fibres passing through voxels in i that reach voxels in j [60]. Because of the dependence of the tractography on the seeding location, the probability from i to j is not necessarily equivalent to that from j to i. However, these two probabilities were highly correlated; we therefore defined the undirected connectivity probability Pij between regions i and j by averaging them. We implemented the calculation of the regional connectivity probability using in-house Perl scripts. Regional connectivity was normalised using the region's volume, expressed in number of voxels. The 21 networks so constructed were all composed of N = 116 brain regions and a number of links ranging from L = 1110 undirected links for the sparsest case (density ρ = 0.17) to L = 1614 for the densest (ρ = 0.24). These networks are individually used for the results in Figure 3.

In order to derive an average connectome, used for the results in Figure 2, we performed an iterative procedure which automatically prunes outlier links (datapoints falling outside 1.5 times the inter-quartile range). For each link between regions i and j, outlier values of Cij are identified among the initial 21 measures available for the link (one per subject) as those values falling outside 1.5 times the inter-quartile range (IQR). If outliers are identified, we remove them from the dataset and search again for outliers. The procedure stops when no further outliers are identified. This method allows the data to be cleaned without having to set an arbitrary threshold for the minimally accepted prevalence of a link across subjects. The average network contains approximately the same number of links as the individual matrices. Defining the average connectivity by computing the simple mean across the 21 Cij matrices (a usual approach in the literature) leads to an average connectivity matrix that contains more than twice as many links as the matrices of individual subjects. For consistency with the datasets of the cats and the macaque monkeys, we show in the paper the results for the subnetworks formed only by the N = 76 cortical regions (38 per hemisphere), ignoring all subcortical areas. We found qualitatively the same results in the cortical subnetwork and in the full-brain network, with the only difference that the cerebellum and the vermis form a very densely interconnected community that synchronises easily.

Human functional connectivity: Data were collected at CFIN, Aarhus University, Denmark, from 16 healthy right-handed participants (11 men and 5 women, mean age: 24.75 +/- 2.54 years). All participants were recruited through the online recruitment system at Aarhus University, excluding anyone with psychiatric or neurological disorders (or a history thereof). The study was approved by the internal research board at CFIN, Aarhus University, Denmark. Ethics approval was granted by the Research Ethics Committee of the Central Denmark Region (De Videnskabsetiske Komiter for Region Midtjylland).
Written informed consent was obtained from all participants prior to participation. The MRI data (structural MRI and rs-fMRI) were collected in one session on a 3T Siemens Skyra scanner at CFIN, Aarhus University, Denmark. The parameters for the structural MRI T1 scan were as follows: voxel size of 1 mm3; reconstructed matrix size of 256 x 256; echo time (TE) of 3.8 ms and repetition time (TR) of 2300 ms. The resting-state fMRI data were collected using whole-brain echo planar images (EPI) with TR = 3030 ms, TE = 27 ms, flip angle = 90 degrees, reconstructed matrix size = 96 x 96, voxel size of 2 x 2 mm with a slice thickness of 2.6 mm and a bandwidth of 1795 Hz/Px. Approximately seven minutes of resting-state data were collected per subject.

We used the automated anatomical labeling (AAL) template to parcellate the entire brain into 116 regions [59]. The linear registration tool from the FSL toolbox (www.fmrib.ox.ac.uk/fsl, FMRIB, Oxford) [61] was used to coregister the EPI image to the T1-weighted structural image. The T1-weighted image was coregistered to the T1 template of ICBM152 in MNI space [62]. The resulting transformations were concatenated, inverted and further applied to warp the AAL template [59] from MNI space to the EPI native space, where nearest-neighbour interpolation ensured that the discrete labelling values were preserved. Thus the brain parcellations were conducted in each individual's native space.

Preprocessing of the functional fMRI data was carried out using MELODIC (Multivariate Exploratory Linear Decomposition into Independent Components) Version 3.14 [63], part of FSL (FMRIB's Software Library, www.fmrib.ox.ac.uk/fsl). We used the default parameters of this imaging pre-processing pipeline on all participants: motion correction using MCFLIRT [61]; non-brain removal using BET [64]; spatial smoothing using a Gaussian kernel of FWHM 5 mm; grand-mean intensity normalisation of the entire 4D dataset by a single multiplicative factor; and high-pass temporal filtering (Gaussian-weighted least-squares straight line fitting, with sigma = 50.0 s). We used tools from FSL to extract and average the time courses from all voxels within each AAL cluster. We then used Matlab (The MathWorks Inc.) to compute the pairwise Pearson correlation between all 116 regions, applying Fisher's transform to the r-values to obtain the z-values for the final 116 x 116 FC-fMRI matrix.
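The authors computed the final FC matrices with FSL and Matlab; as an illustration, the same last step can be written in a few lines of NumPy (the function name is ours):

```python
import numpy as np

def fc_matrix(timecourses):
    """Pairwise Pearson correlations between regional time courses
    (array of shape regions x timepoints), Fisher z-transformed as
    described above.  A minimal sketch with our naming.
    """
    R = np.corrcoef(timecourses)      # Pearson correlation, regions x regions
    np.fill_diagonal(R, 0.0)          # avoid arctanh(1) = inf on the diagonal
    Z = np.arctanh(R)                 # Fisher r-to-z transform
    return Z
```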
B.
Community detection
After application of data-mining methods, the corticocortical network of the cat was found to be organised into 4 distinguishable clusters which closely follow functional subdivisions: visual, auditory, somatosensory-motor and frontolimbic [53, 55]. For the other three real datasets we investigated their modular structure using Radatools
(http://deim.urv.cat/~sergio.gomez/radatools.php), a software package that detects graph communities by alternating different optimisation methods. We ran the community detection such that it first performed a coarse-grained identification of the communities using Newman and Girvan's method [21] and then applied the 'Tabu Search' method by Gómez and Arenas [23]. A final optimisation of the partitions was performed by a reposition method. The neural network of C. elegans was partitioned into four modules of sizes 8, 64, 92 and 111 neurones, with modularity q = 0.417. The corticocortical network of the macaque monkey was divided into three modules of 4, 38 and 47 cortical areas, with q = 0.402. The average human corticocortical connectome was divided into three modules of sizes 20, 26 and 30, with modularity q = 0.33. In two of the modules one cortical hemisphere dominates, while the third module contains left and right areas in similar numbers.
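For readers who prefer a scriptable alternative to Radatools, a comparable partition and modularity value can be obtained with NetworkX; note that its greedy modularity optimisation is a substitute for, not a reproduction of, the Newman-Girvan plus tabu-search pipeline used here. A minimal sketch, treating the network as undirected:

```python
import numpy as np
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities, modularity

def detect_communities(A):
    """Partition a network given by adjacency matrix A and return the
    communities together with the modularity Q of the partition."""
    G = nx.from_numpy_array(np.asarray(A))   # undirected (weights kept if present)
    communities = greedy_modularity_communities(G)
    q = modularity(G, communities)
    return [sorted(c) for c in communities], q
```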
C.
Surrogates and synthetic network models
The network analysis, the generation of network models and the randomisation of networks have been performed using GAlib (https://github.com/gorkazl/pyGAlib), a library for graph analysis in Python. The network generation functions are located in the submodule gamodels.py. Random graphs: A random graph is generated by choosing two nodes uniformly at random and connecting them provided they were not connected yet. The procedure is repeated until the desired number of links has been added. The function RandomGraph generates random graphs of desired number of nodes N and number of links L. Scale-free networks: Random graphs with scale-free degree distribution were generated following the method in [65]. The nodes are ranked as i = 1, 2, . . . , N and they are assigned a weight p(i) = i^{-α} / Σ_j j^{-α}. To place the links, two nodes i and j are chosen at random with probabilities p(i) and p(j) respectively, and they become connected if they were not already linked. The procedure is repeated until the desired number of links is reached. Scale-free networks generated with this preferential attachment rule achieve, in the limit of large and sparse networks, a degree distribution p(k) ∝ k^{-γ} with γ = (1 + α)/α > 2. Tuning α in the range [0, 1) generates scale-free networks with exponent γ ∈ [2, ∞). The function ScaleFreeGraph generates scale-free networks with desired size N, number of links L and exponent γ. Rewired networks: Given a real network, it is often desirable for comparison to construct equivalent random graphs which also have the same degree distribution as the original network. A common procedure is to iteratively choose two links at random, e.g. (i, j) and (i′, j′), and to switch them such that (i, j′) and (i′, j) are the new links. The method is usually attributed to Maslov & Sneppen [66] but it had been proposed by several authors before [67–71]. The function RewireNetwork returns rewired versions of a given network. In order to guarantee the convergence of the algorithm into the subspace of maximally random graphs of given degree sequence, the link-switching step is repeated for 10 × L iterations, where L is the number of links. Modular and nested-hierarchical networks: Random modular networks are generated, as for the random graphs, by choosing two nodes at random and connecting them if they were not previously linked. The only difference is that the two nodes can be chosen either from the same module or from different modules. The nested hierarchical model with random connectivity [22] is an extension of this procedure in which modules are subdivided into further modules. The function HMRandomGraph generates both modular and nested hierarchical networks by seeding links between randomly chosen nodes of the same or of different modules until each module internally, and the connectivity across modules, contain the desired numbers of links. Nested-hierarchical networks with centralised connectivity: While in the nested hierarchical random model the connections between the modules are seeded at random, in neural and brain networks inter-module connections and communication paths tend to be centralised through the hubs [25]. We here propose a model of hierarchical and modular networks that combines both features. For that we modify the nested hierarchical model, replacing the random connectivity between modules by a preferential attachment rule. We start by creating the 16 random graphs of 16 nodes each, with mean degree κ3 = 13, that form the deepest level. Then the nodes of each submodule are ranked as i = 1, 2, . . . , N3 = 16 and are assigned a weight p(i) = i^{-α} / Σ_j j^{-α}. To place the inter-modular links, two nodes i and j are chosen at random from two different modules with probabilities p(i) and p(j) respectively, and they become connected if they were not already linked. The procedure is repeated at each hierarchical level until the mean degrees of inter-modular links reach κ2 = 6 and κ1 = 5, as in the nested random hierarchical networks. As before, this rule yields, in the limit of large and sparse networks, a degree distribution p(k) ∝ k^{-γ} with γ = (1 + α)/α > 2 [65], so tuning α in the range [0, 1) produces exponents γ ∈ [2, ∞). The inter-modular links at the second level are planted using γ2 = 2.0 and the links between the four major modules at the first level are placed with γ1 = 1.7. The function HMCentralisedGraph generates nested-modular hierarchical networks with the inter-modular links centralised, seeding the inter-modular links at each level with a preferential attachment rule of desired exponent γ.
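The sketch below re-implements two of these recipes, the static scale-free generator and the degree-preserving rewiring, in plain Python/NumPy. It is an illustration of the procedures described above, not a copy of the GAlib functions ScaleFreeGraph and RewireNetwork.

```python
import numpy as np

def scale_free_graph(N, L, gamma, rng=None):
    """Static model of [65]: node i gets weight p(i) ~ i**(-alpha), with
    alpha = 1/(gamma - 1); node pairs are drawn with these probabilities
    and linked until L links are placed (assumes L well below N*(N-1)/2)."""
    rng = rng or np.random.default_rng()
    alpha = 1.0 / (gamma - 1.0)
    w = np.arange(1, N + 1, dtype=float) ** (-alpha)
    p = w / w.sum()
    A = np.zeros((N, N), dtype=int)
    links = 0
    while links < L:
        i, j = rng.choice(N, size=2, p=p)
        if i != j and A[i, j] == 0:
            A[i, j] = A[j, i] = 1
            links += 1
    return A

def rewire_network(A, factor=10, rng=None):
    """Degree-preserving (Maslov-Sneppen) rewiring of an undirected
    network: repeatedly switch two links (i,j),(k,l) -> (i,l),(k,j),
    rejecting switches that would create self-loops or multi-edges."""
    rng = rng or np.random.default_rng()
    A = A.copy()
    edges = [tuple(e) for e in np.argwhere(np.triu(A, 1))]
    for _ in range(factor * len(edges)):
        a, b = rng.choice(len(edges), size=2, replace=False)
        i, j = edges[a]
        k, l = edges[b]
        if len({i, j, k, l}) < 4 or A[i, l] or A[k, j]:
            continue                      # skip invalid switches
        A[i, j] = A[j, i] = A[k, l] = A[l, k] = 0
        A[i, l] = A[l, i] = A[k, j] = A[j, k] = 1
        edges[a], edges[b] = (i, l), (k, j)
    return A
```

As in the text, the rewiring performs 10 x L attempted link switches, which in practice suffices to reach the ensemble of maximally random graphs with the given degree sequence.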
Network            N    L           Density      Reciprocity   Comms.   Modularity
C. elegans         275  2990        0.04         0.47          4        0.417
Cat                53   826         0.30         0.73          4        0.270
Macaque            85   2356        0.33         0.74          3        0.402
Human (21 subs)    76   655 - 1061  0.23 - 0.37  undir.        —        —
Human (average)    76   935         0.33         undir.        3        0.33
TABLE I. Summary of neural and brain network datasets: In this paper we have used four real datasets of structural connectivity. The table summarises their principal properties: size N (number of neurones or cortical areas), number of links L, connection density, fraction of reciprocal links, number of communities identified (Comms.) and modularity of the corresponding partition. The human structural connectivity is the only undirected dataset because tractography does not resolve the directionality of the projections.
Modularity-preserving random graphs: Given a real network with a partition of its nodes into n communities, random graphs with the same modular structure can be generated. To do so, we first count the number of links Lrs between all pairs of communities r, s = 1, 2, . . . , n, where Lrr is the number of internal links within community r and Lrs (r ≠ s) is the number of links from community r to community s. The generation procedure is then the same as for the random modular networks, now seeding the specific number of links counted in each case. The resulting random modular networks have the same modularity q as the original network because the ratio of internal to external links and the average degree of the nodes in every community are conserved. The function ModularityPreservingGraph returns random graphs with the same modular structure as a given real network. Ravasz–Barabási networks: The Ravasz–Barabási model is composed of a ring of N0 − 1 nodes connected to their first neighbours and surrounding a central hub. To generate subsequent hierarchical levels, every node becomes the central node of a copy of the original motif. Finally, the central nodes are connected to all the non-hub nodes in the lower hierarchical levels of the branch they belong to. The function RavaszBarabasiGraph creates Ravasz–Barabási networks with desired size N0 of the original motif and desired number of hierarchical levels.
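A minimal sketch of the modularity-preserving surrogate described above (again an illustration of the recipe, not the GAlib function ModularityPreservingGraph): count the links within and between the communities of the original undirected network and seed the same numbers of links at random.

```python
import numpy as np

def modularity_preserving_graph(A, partition, rng=None):
    """Random graph conserving the numbers of links within and between
    the communities of an undirected network A; `partition` is a list
    of lists of node indices."""
    rng = rng or np.random.default_rng()
    A = np.asarray(A)
    B = np.zeros((len(A), len(A)), dtype=int)
    for r, comm_r in enumerate(partition):
        for s, comm_s in enumerate(partition[r:], start=r):
            block = A[np.ix_(comm_r, comm_s)]
            # L_rr counts each internal link once; L_rs counts all links between r and s
            Lrs = int(np.triu(block, 1).sum()) if r == s else int(block.sum())
            placed = 0
            while placed < Lrs:
                i, j = rng.choice(comm_r), rng.choice(comm_s)
                if i != j and B[i, j] == 0:
                    B[i, j] = B[j, i] = 1
                    placed += 1
    return B
```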
ACKNOWLEDGMENTS

The authors thank Mario Senden for discussions and early revision of the manuscript. GZL has been supported by the European Union Seventh Framework Programme FP7/2007-2013 under grant agreement number PIEF-GA-2012-331800 and by the German Federal Ministry of Education and Research (Bernstein Center II, grant no. 01GQ1001A). YC and CSZ were supported by the Hong Kong Baptist University (HKBU) Strategic Development Fund, the Hong Kong Research Grant Council (GRF12302914), HKBU FRG2/14-15/025 and the National Natural Science Foundation of China (No. 11275027). GD is supported by the ERC Advanced Grant DYSTRUCTURE (295129) and by the Spanish Research Project PSI2013-42091-P; MLK is supported by the European Research Council (ERC) Consolidator Grant CAREGIVING (615539).

[1] E. Ravasz and A.-L. Barabási. Hierarchical organization in complex networks. Phys. Rev. E, 67:026112, 2003.
[2] G. Zamora-López, C. S. Zhou, and J. Kurths. Cortical hubs form a module for multisensory integration on top of the hierarchy of cortical networks. Front. Neuroinform., 4:1, 2010.
[3] G. Zamora-López, C. S. Zhou, and J. Kurths. Exploring brain function from anatomical connectivity. Front. Neurosci., 5:83, 2011.
[4] A. R. Damasio. The brain binds entities and events by multiregional activation from convergence zones. Neural Comp., 1:123–132, 1989.
[5] J. M. Fuster. Cortex and Mind: unifying cognition. Oxford University Press, New York, 2003.
[6] B. J. Baars. Global workspace theory of consciousness: toward a cognitive neuroscience of human experience. Progress in Brain Research, 150:45–53, 2005.
[7] M. Shanahan. A spiking neuron model of cortical broadcast and competition. Consciousness and Cognition, 17:288–303, 2007.
[8] O. Sporns, C. J. Honey, and R. Kötter. Identification and classification of hubs in brain networks. PLoS ONE, 10:e1049, 2007.
[9] O. Sporns and G. M. Tononi. Classes of network connectivity and dynamics. Complexity, 7(1):28–38, 2001.
[10] G. Tononi, O. Sporns, and G. M. Edelman. A measure for brain complexity: relating functional segregation and integration in the nervous system. Proc. Nat. Acad. Sci., 91:5033–5037, 1994.
[11] M. P. van den Heuvel and O. Sporns. Rich-club organization of the human connectome. J. Neurosci., 31(44):15775–15786, 2011.
[12] L. Harriger, M. P. van den Heuvel, and O. Sporns. Rich club organization of macaque cerebral cortex and its role in network communication. PLoS ONE, 7(9):e46497, 2012.
[13] L. R. Varshney, B. L. Chen, E. Paniagua, D. H. Hall, and D. B. Chklovskii. Structural properties of the Caenorhabditis elegans neuronal network. PLoS Comput. Biol., 7(2):e1001066, 2011.
[14] A. Papoulis. Probability, random variables and stochastic processes. McGraw-Hill, New York, 1991.
[15] R. Kötter and F. T. Sommer. Global relationship between anatomical connectivity and activity propagation in the cerebral cortex. Phil. Trans. R. Soc. London B, 355:127–134, 2000.
[16] M. P. Young, C.-C. Hilgetag, and J. W. Scannell. On imputing function to structure from the behavioural effects of brain lesions. Phil. Trans. R. Soc. Lond. B, 355:147–161, 2000.
[17] P. beim Graben, C. S. Zhou, M. Thiel, and J. Kurths, editors. Lectures in supercomputational neuroscience. Springer-Verlag, Berlin, 2007.
[18] E. Estrada and N. Hatano. Communicability in complex networks. Phys. Rev. E, 77:036111, 2008.
[19] C. S. Zhou and J. Kurths. Hierarchical synchronization in networks of oscillators with heterogeneous degrees. Chaos, 16:015104, 2006.
[20] T. Pereira. Hub synchronization in scale-free networks. Phys. Rev. E, 82:036201, 2010.
[21] M. E. J. Newman and M. Girvan. Finding and evaluating community structure in networks. Phys. Rev. E, 69:026113, 2004.
[22] A. Arenas, A. Díaz-Guilera, and C. Pérez-Vicente. Synchronization reveals topological scales in complex networks. Phys. Rev. Lett., 96:114102, 2006.
[23] A. Arenas, A. Fernández, and S. Gómez. Analysis of the structure of complex networks at different resolution levels. New J. Phys., 10:053039, 2008.
[24] E. Ravasz, A. L. Somera, D. A. Mongru, Z. N. Oltvai, and A.-L. Barabási. Hierarchical organization of modularity in metabolic networks. Science, 297:1551–1555, 2002.
[25] G. Zamora-López, C. S. Zhou, and J. Kurths. Graph analysis of cortical networks reveals complex anatomical communication substrate. Chaos, 19:015117, 2009.
[26] M. Senden, G. Deco, M. A. de Reus, R. Goebel, and M. P. van den Heuvel. Rich club organization supports a diverse set of functional network configurations. NeuroImage, 96:174–178, 2014.
[27] A. R. Damasio. Time-locked multiregional retroactivation: A systems-level proposal for the neuronal substrates of recall and recognition. Cognition, 33:25–62, 1989.
[28] S. L. Bressler and J. A. S. Kelso. Cortical coordination dynamics and cognition. Trends Cogn. Sci., 5(1):26–36, 2001.
[29] R. F. Betzel, M. A. Erikson, M. Abell, B. F. O'Donnell, W. P. Hetrick, and O. Sporns. Synchronization dynamics and evidence for a repertoire of network states in resting EEG. Front. Comput. Neurosci., 6:74, 2012.
[30] G. Deco, P. Hagmann, A. G. Hudetz, and G. Tononi. Modeling resting-state functional networks when the cortex falls asleep: Local and global changes. Cereb. Cortex, 24(12):3180–3194, 2013.
[31] A. G. Hudetz, C. J. Humphries, and J. R. Binder. Spin-glass model predicts metastable brain states that diminish in anesthesia. Front. Syst. Neurosci., 8:234, 2014.
[32] A. G. Hudetz, X. Liu, and S. Pillay. Dynamic repertoire of intrinsic brain states is reduced in propofol-induced unconsciousness. Brain Connectivity, 0(0):1–13, 2014.
[33] A. R. McIntosh, N. Kovacevic, and R. J. Itier. Increased brain signal variability accompanies lower behavioral variability in development. PLoS Comput. Biol., 4(7):e1000106, 2008.
[34] J. Goñi, M. P. van den Heuvel, A. Avena-Koenigsberger, N. Velez de Mendizabal, R. F. Betzel, A. Griffa, P. Hagmann, B. Corominas-Murtra, J. P. Thiran, and O. Sporns. Resting-brain functional connectivity predicted by analytic measures of network communication. Proc. Nat. Acad. Sci., 111(2):833–838, 2014.
[35] S. L. Bressler and V. Menon. Large-scale brain networks in cognition: emerging methods and principles. Trends Cogn. Sci., 14:277–290, 2010.
[36] L. L. Colgin, T. Denninger, M. Fyhn, T. Hafting, T. Bonnevie, O. Jensen, M.-B. Moser, and E. I. Moser. Frequency of gamma oscillations routes flow of information in the hippocampus. Nature, 462:353–357, 2009.
[37] O. Jensen, B. Gips, T. O. Bergmann, and M. Bonnefond. Temporal coding organized by coupled alpha and gamma oscillations prioritize visual processing. Trends Neurosci., 37(7):357–369, 2014.
[38] G. Tononi, G. M. Edelman, and O. Sporns. Complexity and coherency: integrating information in the brain. Trends Cogn. Sci., 2:12, 1998.
[39] M. Zhao, C. S. Zhou, Y. Chen, B. Hu, and B.-H. Wang. Complexity versus modularity and heterogeneity in oscillatory networks: Combining segregation and integration in neural systems. Phys. Rev. E, 82:046225, 2010.
[40] M. Zhao, C. S. Zhou, J. Lü, and C. H. Lai. Competition between intra-community and inter-community synchronization and relevance in brain cortical networks. Phys. Rev. E, 84:016109, 2011.
[41] M. Kaiser and C. C. Hilgetag. Modelling the development of cortical systems networks. Neurocomputing, 58–60:297–302, 2004.
[42] M. Kaiser and C.-C. Hilgetag. Spatial growth of real-world networks. Phys. Rev. E, 69:036103, 2004.
[43] M. Kaiser and C. C. Hilgetag. Nonoptimal component placement, but short processing paths, due to long-distance projections in neural systems. PLoS Comp. Biol., 2(7):e95, 2006.
[44] Y. Chen, S. Wang, C.-C. Hilgetag, and C. S. Zhou. Trade-off between multiple constraints enables simultaneous formation of modules and hubs in neural systems. PLoS Comp. Biol., 9(3):e1002937, 2013.
[45] D. Samu, A. K. Seth, and T. Nowotny. Influence of wiring cost on the large-scale architecture of human cortical connectivity. PLoS Comp. Biol., 10(4):e1003557, 2014.
[46] S. Lim and M. Kaiser. Developmental time windows for axon growth influence neuronal network topology. Biol. Cybern., 109:275–286, 2015.
[47] M. D. Fox and M. Greicius. Clinical applications of resting state functional connectivity. Front. Syst. Neurosci., 4:19, 2010.
[48] A. Fornito, A. Zalesky, and M. Breakspear. Graph analysis of the human connectome: Promise, progress, and pitfalls. NeuroImage, 80:426–444, 2013.
[49] D. Papo, M. Zanin, J. A. Pineda-Pardo, S. Boccaletti, and J. M. Buldú. Functional brain networks: great expectations, hard times and the big leap forward. Phil. Trans. R. Soc. B, 369:20130525, 2014.
[50] G. Deco, G. Tononi, M. Boly, and M. L. Kringelbach. Rethinking segregation and integration: contributions of whole-brain modelling. Nat. Rev. Neurosci., 16:430–439, 2015.
[51] R. Wackerbauer, A. Witt, H. Atmanspacher, J. Kurths, and H. Scheingraber. A comparative classification of complexity measures. Chaos, Solitons & Fractals, 4(1):133–173, 1994.
[52] F. Klimm, J. Borge-Holthoefer, N. Wessel, J. Kurths, and G. Zamora-López. Individual node's contribution to the mesoscale of complex networks. New J. Phys., 16:125006, 2014.
[53] J. W. Scannell and M. P. Young. The connectional organization of neural systems in the cat cerebral cortex. Curr. Biol., 3(4):191–200, 1993.
[54] J. W. Scannell, C. Blakemore, and M. P. Young. Analysis of connectivity in the cat cerebral cortex. J. Neurosci., 15(2):1463–1483, 1995.
[55] C. C. Hilgetag and M. Kaiser. Clustered organisation of cortical connectivity. Neuroinformatics, 2:353–360, 2004.
[56] R. Kötter. Online retrieval, processing, and visualization of primate connectivity data from the CoCoMac database. Neuroinformatics, 2:127–144, 2004.
[57] J. Cabral, E. Hugues, M. L. Kringelbach, and G. Deco. Modeling the outcome of structural disconnection on resting-state functional connectivity. NeuroImage, 62:1342–1353, 2012.
[58] T. J. van Hartevelt, J. Cabral, G. Deco, A. L. Green, A. Moller, T. Z. Aziz, and M. L. Kringelbach. Neural plasticity in human brain connectivity: The effects of long term deep brain stimulation of the subthalamic nucleus in Parkinson's disease. PLoS ONE, 9(1):e86496, 2014.
[59] N. Tzourio-Mazoyer, B. Landeau, D. Papathanassiou, F. Crivello, O. Etard, N. Delcroix, B. Mazoyer, and M. Joliot. Automated anatomical labeling of activations in SPM using a macroscopic anatomical parcellation of the MNI MRI single-subject brain. NeuroImage, 15(1):273–289, 2002.
[60] T. E. Behrens, H. J. Berg, S. Jbabdi, M. F. Rushworth, and M. W. Woolrich. Probabilistic diffusion tractography with multiple fibre orientations: What can we gain? NeuroImage, 34:144–155, 2007.
[61] M. Jenkinson, P. Bannister, M. Brady, and S. Smith. Improved optimization for the robust and accurate linear registration and motion correction of brain images. NeuroImage, 17:825–841, 2002.
[62] D. Collins, P. Neelin, T. Peters, and A. C. Evans. Automatic 3D intersubject registration of MR volumetric data in standardized Talairach space. J. Computer Assisted Tomography, 18:192–205, 1994.
[63] C. F. Beckmann and S. M. Smith. Probabilistic independent component analysis for functional magnetic resonance imaging. IEEE Trans. Med. Imaging, 23(2):137–152, 2004.
[64] S. Smith. Fast robust automated brain extraction. Human Brain Mapping, 17(3):143–155, 2002.
[65] K.-I. Goh, B. Kahng, and D. Kim. Universal behaviour of load distribution in scale-free networks. Phys. Rev. Lett., 87:278701, 2001.
[66] S. Maslov and K. Sneppen. Specificity and stability in topology of protein networks. Science, 296:910–913, 2002.
[67] L. Katz and J. H. Powell. Probability distributions of random variables associated with a structure of the sample space of sociometric investigations. Ann. Math. Stat., 28:442–448, 1957.
[68] P. W. Holland and S. Leinhardt. The statistical analysis of local structure in social networks. In Sociological Methodology, pages 1–45. Jossey-Bass, San Francisco, 1977.
[69] A. R. Rao and S. Bandyopadhyay. A Markov chain Monte Carlo method for generating random (0,1)-matrices with given marginals. Sankhya A, 58:225–242, 1996.
[70] R. Kannan, P. Tetali, and S. Vempala. Simple Markov-chain algorithms for generating bipartite graphs and tournaments. Random Structures and Algorithms, 14:293–308, 1999.
[71] J. M. Roberts. Simple methods for simulating sociomatrices with given marginal totals. Social Networks, 22:273–283, 2000.
Supplementary material for: "Functional complexity emerging from anatomical constraints in the brain: the significance of network modularity and rich-clubs." by Gorka Zamora-López, Yuhan Chen, Gustavo Deco, Morten L. Kringelbach, and Changsong Zhou
In the following pages we extend the information provided in the main article. First, the evolution of the correlation matrices as the coupling strength increases is shown for some of the models presented. Then, the formation of rich-clubs in the four neural networks studied in the paper is shown. The rich-club properties of these networks have been extensively reported in the literature; we include them here for completeness. We also add the rich-club analysis for the new hierarchical and modular network model and for the Ravasz–Barabási model. Finally, we revisit the measure of complexity defined by Tononi, Sporns & Edelman (1994) to study its limitations.
I.
COMPARISON OF CORRELATION MATRICES FOR DIFFERENT MODELS
In this paper we have introduced a measure of complexity that is based on the variability of the pairwise values taken by the cross-correlation matrix R, or functional connectivity (FC) matrix. To better understand the measure, we show in Figure S1 the correlation matrices for some of the models investigated. As the coupling strength increases the networks undergo a transition from independence to global synchrony. At weak couplings the distribution p(rij) is narrow, with values near rij = 0, while at strong couplings p(rij) becomes another narrow distribution approaching rij → 1. Complex behaviour emerges at intermediate values of the coupling, when partial coalitions between the nodes form; this is reflected in a broadening of the distribution. This holds for all models except the Ravasz–Barabási model, in which p(rij) remains a narrow peak at every coupling, evidencing the lack of complex dynamics.
FIG. S1. Evolution of the correlation matrices with increasing coupling strength g for several network models (scale-free, HM random, HM centralised and Ravasz–Barabási). All matrices are adjusted to the same limits, with blue corresponding to rij = 0 and red to rij = 1. All results are for a single sample of each network model rather than ensemble averages.
II.
RICH-CLUB OF NEURAL AND SYNTHETIC NETWORKS
A complex network is said to have a rich-club when the nodes with the largest degrees are densely interconnected. To quantify this behaviour Zhou and Mondragón introduced the measure k-density, Φ(k′), which is the density of links in the subnetwork composed of the nodes with degree k > k′ [S1]. In other words, Φ(k′) is the ratio between the number of links L′ contained in the subnetwork composed of the nodes with degree k > k′ and all the possible links ½ N′(N′ − 1) in that subnetwork, where N′ is the number of nodes with k > k′, i.e. the size of the subnetwork. The factor ½ applies to undirected networks. Formally written:

\Phi(k') = \frac{2 L'}{N'(N'-1)} .   (S1)
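Equation (S1) translates directly into code. The sketch below is our own minimal implementation (not the one used to produce Figure S2); it returns the whole curve Φ(k′) for k′ = 0, ..., kmax − 1 from an adjacency matrix A and handles directed networks by using the average of in- and out-degrees, as explained in the text below.

```python
import numpy as np

def k_density(A):
    """Rich-club curve Phi(k') of Eq. (S1) for k' = 0 .. k_max - 1.
    For directed networks the degree of a node is taken as the average
    of its in- and out-degree, as done in the text."""
    A = np.asarray(A)
    deg = 0.5 * (A.sum(axis=0) + A.sum(axis=1))
    kmax = int(np.ceil(deg.max()))
    phi = np.full(kmax, np.nan)
    for k in range(kmax):
        nodes = np.where(deg > k)[0]
        Np = len(nodes)
        if Np < 2:
            break                       # too few nodes left; the curve ends here
        sub = A[np.ix_(nodes, nodes)]
        # For symmetric A every link is counted twice, supplying the factor 2
        # of Eq. (S1); for directed A this is the directed density L'/(N'(N'-1)).
        phi[k] = sub.sum() / (Np * (Np - 1))
    return phi
```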
Now, Φ(k′) can be computed repeatedly for all k′ = 0, 1, 2, . . . , kmax − 1 (where kmax is the largest degree observed in the network) and the resulting curve drawn. The initial point at k′ = 0 is the original density of links of the network. The question is thus whether, for successive k′, the curve grows above the initial density Φ(0), remains stable, or decreases below Φ(0). Three of the four real networks investigated are directed. Since the k-density is a priori defined for undirected networks, in those cases we define the degree of node i as the average of its input and output degrees, k_i = (k_i^in + k_i^out)/2. This is a reasonable approximation due to the high fraction of reciprocal connections in these networks and the large correlation between input and output degrees. For more detailed applications the k-density can be computed ranking the nodes according to their k^in or their k^out separately. The results in Figure S2 show how the k-density of the four real networks (black solid lines) monotonically ascends and reaches very large values, a clear indication of the presence of a rich-club. To determine the composition of the rich-club in each case we consider the set of hubs remaining at the degree k′ for which Φ(k′) exceeds 0.8. The table below summarises their rich-club properties: the degree k′ at which the k-density becomes larger than 0.8, the number of nodes remaining at that k′, and the actual density Φ(k′) of the subnetwork formed by them.
Network      k′   Φ(k′)   Size   Neurones or cortical areas
C. elegans   32   0.833   5      AVAL, AVAR, AVBL, AVBR, PVCR
Cat          23   0.864   11     20a, 7, AES, EPp, 6m, 5Al, 1a, 1g, CGp, 35, 36
Macaque      40   0.833   7      46, 7a, 7b, LIPd, LIPv, MT, VIPl
Human        35   0.833   6      Precuneus (L/R), FrontSup. (L/R), OccMid. (L/R)

TABLE I. Summary of rich-clubs in the neural and brain network datasets: In this paper we have used four real datasets of structural connectivity. The table summarises their rich-club properties: the degree k′ for which the k-density exceeds 0.8, the number of nodes remaining at that k′ (that is, the size of the rich-club), the names of those neurones or areas, and the actual density Φ(k′) of the subnetwork formed by them.

FIG. S2. Comparison of the k-density of real and model networks (the C. elegans, cat, macaque and human connectomes, and the HM centralised and Ravasz–Barabási models). All results for the rewired and random surrogates are the average curves of ensembles of 1000 realisations, except for the HM centralised model: since this model is stochastic and every realisation is different, we generated 100 networks, whose average curve is shown as the black solid line, and for each of the 100 networks 100 realisations of the rewired and of the random graphs were created. For comparison we also include the ensemble-average k-density curves for three null models, the same used in the comparisons of complexity: (i) rewired networks conserving the degrees of the nodes, (ii) random graphs of the same size and number of links as the network, and (iii) random networks with the same modular structure as the original network (see the Materials and Methods section of the main text).

As expected, the rewired networks (dashed lines) follow closely the k-density of the real networks. At the largest degrees, however, the real networks still achieve the largest values. In the case of the macaque and of the human tractography this relationship is the closest, implying that the presence of the rich-club might be "explained" by their degree distribution alone. Random networks (dotted lines) and modularity-preserving random networks (dash-dotted lines) also tend to increase Φ(k′), although significantly more slowly than the real and the rewired networks, and they only reach maximal densities slightly above the initial Φ(0). Their early cut-off arises because the degree distribution of random graphs is Poissonian, with all nodes having degrees comparable to the mean, whereas the neural and brain networks have broad degree distributions with largest degrees well above the expectation for random graphs. Finally, the k-density of the hierarchically modular network with centralised inter-modular connectivity (HM centralised) is shown in Figure S2 to corroborate that the model gives rise to a rich-club with the parameters used for the results in Figure 7 of the main text. The k-density of the Ravasz–Barabási model supports our argument that it fails to generate networks with densely interconnected hubs despite the scale-free-like degree distribution it attains.
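Given the Φ(k′) curve from the earlier sketch, the rich-club membership reported in the table can be extracted as follows. rich_club_members is a hypothetical helper of ours, and it assumes the curve does cross the 0.8 threshold.

```python
import numpy as np

def rich_club_members(A, phi, threshold=0.8):
    """Degree k' at which the k-density first exceeds `threshold` and the
    indices of the nodes remaining at that k' (the rich-club members)."""
    A = np.asarray(A)
    deg = 0.5 * (A.sum(axis=0) + A.sum(axis=1))
    k_prime = int(np.argmax(phi > threshold))   # first k' with Phi(k') > threshold
    return k_prime, np.where(deg > k_prime)[0]
```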
III.
LIMITATIONS OF THE ‘NEURAL COMPLEXITY’ MEASURE
Tononi, Sporns & Edelman (1994) introduced a measure of complexity, named 'neural complexity', intended to quantify the balanced coexistence of both local and global collective coherent behaviour in coupled dynamical systems of multiple elements. The main idea was that the measure would become largest for networks that balance segregation and integration. Functional segregation is regarded as the relative statistical independence between groups of elements of the system, and functional integration as the opposite, the global statistical dependence between the groups. A system is thus complex when it contains dynamical clusters that are only weakly correlated with one another. Given a multivariate dynamical system (or network) X consisting of N coupled components (nodes), the measure of complexity was defined as the sum of the average mutual information of all possible bipartitions of the system, for bipartitions of size k = 1, 2, . . . , N/2:
C_N(X) = \sum_{k=1}^{N/2} \left\langle MI(X_j^k ; \tilde{X}_j^k) \right\rangle_j .   (S2)
Here MI stands for mutual information and {X_j^k, \tilde{X}_j^k} is a bipartition of the system into two complementary subsets of nodes of sizes k and N − k respectively, such that X = X_j^k ∪ \tilde{X}_j^k; the angular brackets in Eq. (S2) denote the average over all such bipartitions j of size k. The mutual information between two subsets can be computed in terms of the integration measure I, also defined in [S2]. Integration is itself an extension of mutual information to more than two variables: I(X) = \sum_{i=1}^{N} H(x_i) − H(X), where H(x_i) is the entropy of each variable (node) and H(X) is the joint entropy of the coupled system as a whole. The mutual information between two complementary subsets can then be rewritten as:

MI(X^k ; \tilde{X}^k) = I(X) - I(X^k) - I(\tilde{X}^k) .   (S3)
In real applications, measuring the joint distribution of multivariate time-series can be unfeasible because it requires large amounts of data to be available. This problem can be avoided by estimating integration I(X) out of the pairwise
cross-correlation matrix R(X) of the system as I(X) = -\frac{1}{2} \log(|R(X)|), where |R| stands for the determinant of the correlation matrix. Substituting in Equation (S3), the mutual information for a given bipartition becomes:

MI(X^k ; \tilde{X}^k) = \frac{1}{2} \left[ \log|R(X^k)| + \log|R(\tilde{X}^k)| - \log|R(X)| \right]   (S4)
                      = \frac{1}{2} \log\!\left[ \frac{|R(X^k)|\, |R(\tilde{X}^k)|}{|R(X)|} \right],   (S5)

where R(X^k) and R(\tilde{X}^k) are the two sub-matrices of R(X) restricted to the nodes in the subsets X^k and \tilde{X}^k respectively.

FIG. S3. Comparison of complexity measures. Evolution of the complexity of one random graph of N = 100 nodes and density ρ = 0.2 as the coupling strength increases. For each value of g the correlation matrix R of the network is estimated using the exponential mapping and the two measures of complexity are calculated from R. (A) Functional complexity [Equation (1) of the main text]; (B) neural complexity C_N(X).
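Equations (S2)-(S5) can be turned into a direct numerical estimate of C_N(X) from a correlation matrix. Because the number of bipartitions explodes with N, the sketch below samples a fixed number of random bipartitions per size k (the exhaustive sum quickly becomes infeasible, as discussed below); it is our own illustrative implementation, not the code used for Figure S3.

```python
import numpy as np

def integration(R):
    """Integration I = -0.5 * log|R| estimated from a correlation matrix."""
    _, logdet = np.linalg.slogdet(R)
    return -0.5 * logdet

def neural_complexity(R, samples=100, rng=None):
    """Estimate C_N(X) of Eq. (S2): average the mutual information of
    Eqs. (S3)-(S5) over `samples` random bipartitions for every size k."""
    rng = rng or np.random.default_rng()
    N = len(R)
    I_total = integration(R)
    C = 0.0
    for k in range(1, N // 2 + 1):
        mi = []
        for _ in range(samples):
            idx = rng.permutation(N)
            sub, rest = idx[:k], idx[k:]
            mi.append(I_total
                      - integration(R[np.ix_(sub, sub)])
                      - integration(R[np.ix_(rest, rest)]))
        C += np.mean(mi)
    return C
```

With a near-diagonal R (weak coupling) the estimate is close to zero; as the system approaches global synchrony the log-determinants diverge, which is precisely the pathology discussed next.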
This measure suffers from a few limitations. On the one hand, it requires computing the mutual information of all possible bipartitions. For a network of size N there are \sum_{k=0}^{N/2} \frac{N!}{k!\,(N-k)!} such bipartitions, which makes the computation feasible only for very small networks. The problem can be partially overcome by computing the result for only a small sample of bipartitions. On the other hand, the measure takes its largest value when the system is globally synchronised, diverging to infinity. When the nodes are uncoupled C_N(X) = 0, as expected, and as the coupling strength of the links increases C_N(X) grows monotonically. The problem is that when all values of the correlation matrix are rij = 1, the mutual information of any bipartition MI(X^k; \tilde{X}^k) in Equation (S2) becomes infinite. To demonstrate this, imagine that the network is almost synchronised: there is a small number 0 < δ ≪ 1 such that rii = 1 and rij = 1 − δ for all i ≠ j:
\mathrm{COR}(X) = \begin{pmatrix}
1 & 1-\delta & 1-\delta & \cdots \\
1-\delta & 1 & 1-\delta & \cdots \\
1-\delta & 1-\delta & 1 & \cdots \\
\vdots & \vdots & \vdots & \ddots
\end{pmatrix}_{(N \times N)} .   (S6)
In this case the determinant of R can be expressed in closed form and the integration reduces to I(X) = -\frac{1}{2} \log(|\mathrm{COR}(X)|) = -\frac{1}{2} \log\!\left( N \delta^{N-1} - (N-1)\delta^{N} \right). Substituting in Equation (S5) we obtain the mutual information for a given bipartition:

MI(X_j^k ; \tilde{X}_j^k) = \frac{1}{2} \log\!\left( \frac{\left[\, k(1-\delta)+\delta \,\right] \left[\, (N-k)(1-\delta)+\delta \,\right]}{\delta \left[\, N(1-\delta)+\delta \,\right]} \right) .   (S7)

When δ → 0 this expression can be approximated by

MI(X_j^k ; \tilde{X}_j^k) = \frac{1}{2} \log\!\left( \frac{k\,(N-k)}{\delta\, N} \right) ,   (S8)
which diverges to infinity as δ → 0. Thus, neural complexity C_N(X) becomes infinite in the globally synchronised state, which contradicts the intention of the measure: in the globally synchronised state there is no segregation and hence no optimal balance between segregation and integration. Figure S3 shows the numerical comparison between functional complexity and neural complexity applied to the same random graph of N = 100 nodes. As seen, C_N(X) increases monotonically with coupling strength, whereas our measure, functional complexity, correctly vanishes again at strong g.
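The divergence predicted by Eq. (S8) is easy to verify numerically: build the correlation matrix of Eq. (S6) for decreasing δ and evaluate the mutual information of one bipartition. A small illustrative check (our own, not part of the original analysis):

```python
import numpy as np

def mi_bipartition(R, k):
    """MI between the first k nodes and the rest, via Eq. (S5)."""
    logdet = lambda M: np.linalg.slogdet(M)[1]
    return 0.5 * (logdet(R[:k, :k]) + logdet(R[k:, k:]) - logdet(R))

N, k = 100, 50
for delta in (1e-1, 1e-2, 1e-3, 1e-4):
    R = np.full((N, N), 1.0 - delta)          # correlation matrix of Eq. (S6)
    np.fill_diagonal(R, 1.0)
    approx = 0.5 * np.log(k * (N - k) / (delta * N))   # Eq. (S8)
    print(f"delta = {delta:.0e}   MI = {mi_bipartition(R, k):.3f}   Eq. (S8) = {approx:.3f}")
```

The exact value and the approximation agree closely and both grow without bound as δ shrinks.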
[S1] S. Zhou and R. J. Mondragón. The rich-club phenomenon in the internet topology. IEEE Comm. Lett., 8(3):180–182, 2004.
[S2] G. Tononi, O. Sporns, and G. M. Edelman. A measure for brain complexity: relating functional segregation and integration in the nervous system. Proc. Nat. Acad. Sci., 91:5033–5037, 1994.