Exploring Connectivity Dynamics using Deep Neural Network Models from Magnetoencephalography Data
Zachary J. Harper1,2, Roseric Azondekon2, Charles M. Welzig1
1 Departments of Neurology and Physiology, Medical College of Wisconsin, Milwaukee, WI 53226, USA
2 College of Engineering & Applied Science, University of Wisconsin-Milwaukee, Milwaukee, WI 53211, USA
Introduction
While modern machine learning techniques, such as deep neural network architectures, can be excellent for interrogating complex dynamics in various biological data sources, small sample size and excess input dimensionality can be detrimental to training and learning performance. To reduce data requirements, we optimize the spatiotemporal dynamic representation of magnetoencephalographic (MEG) data to fewer parameters. Spatial dimensions are reduced to macroscale parcel pairs linked by resting-state functional connectivity networks [1]. Temporal and frequency dimensions are reduced to the band-limited power of coherence between parcel pairs. This project evaluates a machine learning approach to search for spatiotemporal dimensions that accommodate successful deep neural network training to classify motor task epochs.
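To illustrate the reduction just described, the sketch below computes magnitude-squared coherence between two parcel time series and collapses it into one value per canonical band. The function name, Welch segment length, and simulated signals are assumptions for illustration; only the general idea of band-limited coherence between parcel pairs follows the text.

# Sketch: reduce a parcel-pair interaction to band-limited coherence values.
# Band edges follow the bands named later in the poster; other settings are illustrative.
import numpy as np
from scipy.signal import coherence

def band_limited_coherence(x, y, fs, bands):
    """Return mean magnitude-squared coherence of x and y within each band."""
    freqs, cxy = coherence(x, y, fs=fs, nperseg=fs * 2)   # 2 s Welch segments
    out = {}
    for name, (lo, hi) in bands.items():
        mask = (freqs >= lo) & (freqs < hi)
        out[name] = cxy[mask].mean()
    return out

if __name__ == "__main__":
    fs = 500                                    # assumed sampling rate (Hz)
    t = np.arange(0, 10, 1 / fs)
    x = np.sin(2 * np.pi * 10 * t) + np.random.randn(t.size)
    y = np.sin(2 * np.pi * 10 * t + 0.3) + np.random.randn(t.size)
    bands = {"theta": (4, 8), "alpha": (8, 15), "beta": (15, 26)}
    print(band_limited_coherence(x, y, fs, bands))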
Spatiotemporal encoding
Step 1: Parse four network interactions and label according to subject, condition and time.
Step 2: Gather spectral power from three bands per frame using a custom wavelet transform.
Step 3: Normalize values at each sample into intensities in a sensor-space grid.
Step 4: Combine band intensities into lossless RGB PNG images.
Step 5: Use 2-D gridded cubic interpolation to expand to 49 by 49 pixels (steps 2-5 are sketched below).
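As referenced in step 5 above, a minimal Python sketch of steps 2-5 follows. The helper names (morlet_power, encode_frame), the use of one center frequency per band, and the sequential grid layout are assumptions for illustration; only the overall flow (wavelet band power, normalized grid, RGB combination, cubic interpolation to 49 by 49 pixels) follows the listed steps.

# Sketch of encoding steps 2-5 (hypothetical names; the study's custom wavelet
# transform and sensor-space grid layout are only approximated here).
import numpy as np
from scipy.ndimage import zoom          # 2-D cubic interpolation (order=3)
from PIL import Image                   # lossless PNG output

def morlet_power(signal, fs, freq, n_cycles=7):
    """Power of `signal` at `freq` via convolution with a complex Morlet wavelet."""
    sigma_t = n_cycles / (2 * np.pi * freq)
    t = np.arange(-4 * sigma_t, 4 * sigma_t, 1 / fs)
    wavelet = np.exp(2j * np.pi * freq * t) * np.exp(-t**2 / (2 * sigma_t**2))
    wavelet /= np.sqrt(np.sum(np.abs(wavelet) ** 2))
    return np.abs(np.convolve(signal, wavelet, mode="same")) ** 2

def encode_frame(parcel_signals, fs, band_freqs, frame, size=49):
    """Steps 2-5: band power per parcel at one sample -> normalized grid -> RGB image."""
    n_parcels = len(parcel_signals)
    side = int(np.ceil(np.sqrt(n_parcels)))
    rgb = np.zeros((side, side, 3))
    for channel, freq in enumerate(band_freqs):      # one frequency band per color channel
        grid = np.zeros(side * side)
        for p, sig in enumerate(parcel_signals):     # real encoder uses a spatial sensor layout
            grid[p] = morlet_power(sig, fs, freq)[frame]
        if grid.max() > 0:
            grid = grid / grid.max()                 # step 3: normalize into intensities
        rgb[:, :, channel] = grid.reshape(side, side)
    rgb = zoom(rgb, (size / side, size / side, 1), order=3)   # step 5: cubic interpolation
    return Image.fromarray((np.clip(rgb, 0, 1) * 255).astype(np.uint8))

# encode_frame(signals, fs=500, band_freqs=[6, 11, 20], frame=1000).save("frame.png")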
Frequency Connectivity Mapping
To optimize input representation of coherence bands between parcels, we compute multi-dimensional frequency-specific connectivity subspaces [3], examine task-based synchronization [4], and detect information flow directionality and causal relations [5, 6]. Clustered putative hubs marked in the image at left are sample seeds for producing network interaction representations.
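As one concrete example of a synchronization measure in the spirit of [4], the sketch below computes a phase-locking value between two band-limited parcel signals. It is illustrative only: the filter settings and simulated data are assumptions, and it is not the frequency-specific multivariate measure [3] or the directionality estimators [5, 6] used in this work.

# Sketch: phase-locking value (PLV) between two band-limited parcel signals.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def plv(x, y, fs, band):
    """Phase-locking value of x and y within `band` = (low, high) Hz."""
    b, a = butter(4, band, btype="bandpass", fs=fs)
    phase_x = np.angle(hilbert(filtfilt(b, a, x)))
    phase_y = np.angle(hilbert(filtfilt(b, a, y)))
    return np.abs(np.mean(np.exp(1j * (phase_x - phase_y))))

# Example: PLV in the alpha band (8-15 Hz) for two noisy 10 Hz oscillations.
fs = 500
t = np.arange(0, 10, 1 / fs)
x = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.randn(t.size)
y = np.sin(2 * np.pi * 10 * t + 0.4) + 0.5 * np.random.randn(t.size)
print(plv(x, y, fs, (8, 15)))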
MEG Signal processing
Unique characteristics in functional connectivity are represented in space, shown in the fingerprints at right, and through temporal dynamics assessed within respective band-limited power envelopes. As putative hub voxels are not consistent across subject morphologies, we begin by clustering subject-specific, mesoscale resting-state MRI data into stable macroscale network hubs [1]. Band-limited power interactions between these hubs provide rich, low-dimensional data that preserve spatiotemporal features critical for deep learning and classifier performance.
1. DataCheck: Perform sanity checks on MEG data
2. BadData: Clean bad segments and channels
3. ICAclass: Correct signal artifacts using ICA
4. tMEGpreproc: Segment continuous signals and organize categorical tags and response accuracy per trial
5. icamne: Project sensor maps of independent components
6. icablpenv: Estimate the band-limited power (BLP) envelope of source-level signals
7. icablpcorr: Create a dense connectome at the source level (steps 6-7 are sketched below)
8. icaimagcoh: Create source-level, frequency-specific connectomes using the "Multivariate Interaction Measure"
9. bfblpenv: Similar to icablpenv, but projects parcellated results using the Yeo et al. 2011 17-network parcellation for each frequency band
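For orientation, pipeline steps 6-7 can be sketched roughly as follows. The filter order, smoothing window, and use of Pearson correlation are assumptions; the actual HCP megconnectome scripts differ in implementation detail.

# Sketch of steps 6-7: band-limited power (BLP) envelopes per source signal,
# then a dense BLP-correlation connectome.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def blp_envelope(signal, fs, band, win_s=0.4):
    """BLP envelope of `signal` within `band`, smoothed with a win_s-second moving average."""
    b, a = butter(4, band, btype="bandpass", fs=fs)
    power = np.abs(hilbert(filtfilt(b, a, signal))) ** 2
    win = np.ones(int(win_s * fs)) / int(win_s * fs)
    return np.convolve(power, win, mode="same")

def blp_connectome(parcel_signals, fs, band):
    """Pairwise correlation of BLP envelopes across parcels (dense connectome)."""
    envelopes = np.vstack([blp_envelope(sig, fs, band) for sig in parcel_signals])
    return np.corrcoef(envelopes)

# Example: alpha-band connectome for a handful of simulated parcel time series.
fs = 500
t = np.arange(0, 30, 1 / fs)
signals = [np.sin(2 * np.pi * 10 * t + p) + np.random.randn(t.size) for p in range(5)]
print(blp_connectome(signals, fs, band=(8, 15)).shape)   # (5, 5)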
Experiment
Models are trained using MEG data from 61 participants in the Human Connectome Project [2]. Input epochs are extracted from two motor task conditions: onset of the electromyogram (EMG) signal during left hand movement, and EMG onset during left foot movement. Foot sensors are placed on the lateral superior surface of the extensor digitorum brevis muscle and on the medial malleolus. Hand sensors are placed on the first dorsal interosseous muscle between the thumb and index finger, and on the styloid process of the ulna at the wrist.
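To illustrate how an EMG-onset epoch might be located, the sketch below thresholds the rectified, smoothed EMG trace and cuts an MEG epoch around the detected onset. The threshold rule, baseline window, and epoch lengths are assumptions, not the HCP task pipeline's trigger definitions.

# Sketch: EMG onset detection by thresholding the rectified, smoothed EMG trace.
import numpy as np

def emg_onset_sample(emg, fs, smooth_s=0.05, n_sd=3):
    """Return the first sample where smoothed |EMG| exceeds baseline mean + n_sd * SD."""
    win = int(smooth_s * fs)
    envelope = np.convolve(np.abs(emg), np.ones(win) / win, mode="same")
    baseline = envelope[: int(0.5 * fs)]           # assume the first 0.5 s is rest
    threshold = baseline.mean() + n_sd * baseline.std()
    above = np.flatnonzero(envelope > threshold)
    return int(above[0]) if above.size else -1

def cut_epoch(meg, onset, fs, pre_s=0.5, post_s=1.0):
    """Slice an MEG epoch (channels x time) around the EMG onset sample."""
    start = max(onset - int(pre_s * fs), 0)
    return meg[:, start: onset + int(post_s * fs)]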
Results
We successfully trained a deep learning system to classify the motor task conditions from the MEG-derived network representation input. The average time required to train each seeded input set on a four-GPU system is 5.94 minutes. The highest accuracy, with 71.19% of input frames per epoch correctly classified, is produced by training on the lower-frequency theta, alpha and beta bands with motor cortex seeds. Higher-frequency gamma bands with motor cortex seeds performed lower, at 60.52% of frames per epoch. Prefrontal cortex seeded input data did not mediate performance above chance; lower-frequency and gamma bands trained to 49.08% and 48.02% accuracy, respectively.
Machine Learning
The deep learning system extracts features by convolving kernels (shown at left) over input images, then processing them through a convolutional neural network architecture (shown below).
[CNN architecture diagram: Input → Convolutions → Subsampling → Full connection → Gaussian connection → Output]
Four seeded parcel pairs are selected per input training iteration based either on spatial contiguity or presence in functional networks. For each subject, all images are shuffled and sorted by the two conditions. Approximately 10 million spatiotemporal encoding images are indexed by condition, seeds and bands, then shuffled within a Lightning Memory-Mapped Database. Data from 30 subjects are allocated for network validation and 5 subjects for model testing. Trained models attempt to classify test data; results are unshuffled to recreate the temporal sequence of images per condition, shown in the Results section. Image classification models are trained using the Inception v2 network [7] on four NVIDIA Titan X GPUs. The learning progress shown above represents training in β/γ, α & θ bands with seeds in the motor cortex. The plot at right shows training on γ-high, γ-mid & γ-low bands with the same seeds. Higher accuracy indicates less entropy in training data.
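The subject allocation and shuffled LMDB indexing described above could be sketched as follows. The key format, PNG-bytes payload, and helper names are assumptions; only the shuffle-within-database scheme and the 30-validation / 5-test subject split follow the text.

# Sketch: index encoded frames by condition, seed set and band group, shuffle,
# and write them into a Lightning Memory-Mapped Database (LMDB).
import lmdb, random
from pathlib import Path

def build_lmdb(frames, db_path, map_size=1 << 40):
    """frames: list of (condition, seed_set, band_group, frame_idx, png_path) tuples."""
    random.shuffle(frames)                        # shuffle within the database
    env = lmdb.open(db_path, map_size=map_size)
    with env.begin(write=True) as txn:
        for i, (cond, seeds, bands, idx, png) in enumerate(frames):
            key = f"{i:09d}_{cond}_{seeds}_{bands}_{idx:06d}".encode()
            txn.put(key, Path(png).read_bytes())
    env.close()

def split_subjects(subjects, n_val=30, n_test=5, seed=0):
    """Subject-wise allocation into training, validation and test sets."""
    subjects = sorted(subjects)
    random.Random(seed).shuffle(subjects)
    return subjects[n_val + n_test:], subjects[:n_val], subjects[n_val:n_val + n_test]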
The figure above shows coherence waveforms that mediate the strongest performance and classifier precision throughout the time series. Plots are theta (4-8 Hz), alpha (8-15 Hz) and low beta (15-26 Hz) in the right hemisphere. Colors represent functional network activation in the somatosensory-motor (SM) and dorsal attention (DA) networks. The bottom plot shows classifier accuracy over the time series, where yellow marks periods in which the classifier is accurate during epochs of each condition. An advantage of deep learning architectures is that we can explore what spatiotemporal patterns the system is learning and extract features that would otherwise be hard to observe. Displayed below are a single input frame and network activations from the first convolutional layer of a trained model. These activations represent salient spatiotemporal patterns that mediate classification.
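As an illustration of extracting first-layer activations, the sketch below registers a forward hook on a GoogLeNet-style torchvision model as a stand-in for the trained Inception v2 classifier; the layer name, preprocessing, and input filename are assumptions.

# Sketch: capture first-convolution-layer activations for one input frame.
import torch
from torchvision import models, transforms
from PIL import Image

model = models.googlenet(weights=None).eval()   # untrained stand-in; load trained weights here
activations = {}

def save_activation(name):
    def hook(module, inputs, output):
        activations[name] = output.detach()
    return hook

model.conv1.register_forward_hook(save_activation("conv1"))

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
frame = preprocess(Image.open("frame.png").convert("RGB")).unsqueeze(0)
with torch.no_grad():
    model(frame)
print(activations["conv1"].shape)   # e.g. (1, 64, 112, 112) feature maps to visualize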
Outlook
Successful classifier performance in both the lower frequency and gamma bands is consistent with prior research [8, 9]. We will assess characteristics of this exploratory modeling approach in two directions. First, further study is required to understand the factors that mediate higher performance in lower frequency bands. Second, the processing requirements inherent to connectivity research must be reduced in order to apply this approach to closed-loop and real-time applications. Reducing the classifier input space can help identify specific regions and frequencies that mediate classification. This information can help define masks for source-localized data, and eventually raw sensor data, to train real-time classifiers.
References & Acknowledgements
1. Yeo BT, Krienen FM, Sepulcre J, Sabuncu MR, Lashkari D, Hollinshead M, Roffman JL, Smoller JW, Zöllei L, Polimeni JR, et al.: The organization of the human cerebral cortex estimated by intrinsic functional connectivity. Journal of Neurophysiology 2011, 106(3):1125-1165.
2. Van Essen DC, Smith SM, Barch DM, Behrens TEJ, Yacoub E, Ugurbil K: The WU-Minn Human Connectome Project: An overview. NeuroImage 2013, 80:62-79.
3. Ewald A, Marzetti L, Zappasodi F, Meinecke FC, Nolte G: Estimating true brain connectivity from EEG/MEG data invariant to linear and static transformations in sensor space. NeuroImage 2012, 60(1):476-488.
4. Lachaux JP, Rodriguez E, Martinerie J, Varela FJ: Measuring phase synchrony in brain signals. Human Brain Mapping 1999, 8(4):194-208.
5. Nolte G, Ziehe A, Nikulin VV, Schlögl A, Krämer N, Brismar T, Müller KR: Robustly estimating the flow direction of information in complex physical systems. Physical Review Letters 2008, 100(23).
6. Chen Y, Bressler SL, Ding M: Dynamics on networks: Assessing functional connectivity with Granger causality. Computational and Mathematical Organization Theory 2009, 15(4):329-350.
7. Szegedy C, Vanhoucke V, Ioffe S, Shlens J, Wojna Z: Rethinking the Inception architecture for computer vision. In: 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2016). IEEE Computer Society; 2016: 2818-2826.
8. Kauhanen L, Nykopp T, Lehtonen J, Jylänki P, Heikkonen J, Rantanen P, Alaranta H, Sams M: EEG and MEG brain-computer interface for tetraplegic patients. IEEE Trans Neural Syst Rehabil Eng 2006, 14(2):190-193.
9. Muthukumaraswamy SD: High-frequency brain activity and muscle artifacts in MEG/EEG: A review and recommendations. Frontiers in Human Neuroscience 2013, 7.
"Data were provided [in part] by the Human Connectome Project, WU-Minn Consortium (Principal Investigators: David Van Essen and Kamil Ugurbil; 1U54MH091657) funded by the 16 NIH Institutes and Centers that support the NIH Blueprint for Neuroscience Research; and by the McDonnell Center for Systems Neuroscience at Washington University."