Combining Multiple Sources of Data for Situational Awareness of Geomagnetic Disturbances

Sébastien Guillon1, Martin de Montigny2, Innocent Kamwa2
Chumki Basu, Manikandan Padmanaban
IBM Research, India Research Laboratory, Bangalore, India
{chumbasu, manikap2}@in.ibm.com
Abstract—With the increasing complexity of the grid and its increasing vulnerability to large-scale natural events, control room operators need tools that enable them to react to events faster. This is especially true for high-impact events such as geomagnetic disturbances (GMDs). In this paper, we present a data-driven approach to building a predictive model of GMDs that combines information from multiple sources, such as synchrophasors and magnetometers. We evaluate the utility of our model on real GMD events and discuss some interesting results.

Index Terms-- wide-area situational awareness, synchrophasors, geomagnetic disturbances.
I. INTRODUCTION
Wide-area monitoring systems that provide situational awareness of large-scale disturbances play a critical role in maintaining grid resiliency. One particular kind of disturbance is a geomagnetic disturbance (GMD), or storm, which may produce geomagnetically induced currents (GIC). In this paper, we present a systematic, data-driven approach to combining data from multiple sources (synchrophasor data, geomagnetic/geoelectric field data, expert knowledge) in a stream computing environment to infer properties related to power system impacts. Specifically, we are interested in inferring time delay and understanding properties of GMDs based on time delay. Time delay is an estimate of the time between the onset of magnetic activity and harmonics observed on the grid. We can also interpret time delay as the lead or early warning time during which the operator can take corrective action.

II. BACKGROUND
GMDs are one type of high-impact, low-frequency event that can affect the power system [1][2][3][4][5][6]. The resulting geomagnetically induced currents (GIC) can cause an increase in reactive power consumption and harmonics generation, which can lead to blackouts such as the one experienced on March 13, 1989 [7]. One approach to modelling GIC is layered Earth modelling [8][9] to calculate geoelectric fields.
978-1-4673-8040-9/15/$31.00 ©2015 IEEE
1 Hydro-Québec, TransÉnergie
2 Hydro-Québec, IREQ
[email protected], {demontigny.martin,kamwa.innocent}@ireq.ca In this paper, we use voltage distortion level as an index of impact severity due to GIC. Voltage distortion is a function of GIC, network topology and grid operating condition. Most of the events discussed in the paper may be considered as recorded under similar grid conditions (e.g., no series compensation were removed, line maintenance was similar and most transformers were the same) and similar network topology (e.g., due to its radial network, same transmission lines were operated). Hydro-Quebec grid conditions are directly reflected in terms of network topology due to its radial network. GIC depends on geoelectric field and network topology. Again, due to its main transmission characteristic, network topology is considered similar. We propose a systematic, data-driven approach to understanding geomagnetic disturbances by combining data from heterogeneous sources - voltage harmonics from synchrophasors, sensor data from magnetometers, domain knowledge from post-event analyses of past geomagnetic disturbances [10][11][12] and subject matter expert (SME) inputs. Although previous work on expert systems relied on SMEs to hand-craft rules, we extracted a simple "domain model" from post-disturbance reports and adapted it using SME operational knowledge. We then evaluated the model on the task of inferring properties of GIC-related grid effects. III.
DATA SOURCES
A.
Geomagnetic/Geoelectric Field Data Geomagnetic data are available from Canadian magnetic observatories at a temporal resolution of one data sample per minute1. The data includes, Bx (northward), By (eastward), and Bz (vertical downward) components of the magnetic field. Similarly, geoelectric field data, E, including Ex (northward) and Ey (eastward) components of the electric field, are available at the same resolution. In this paper, to develop our approach, we consider training data for Feb18-2011, Sep271
1 We acknowledge use of data available at the Natural Resources Canada (NrCan), Government of Canada website: http://geomag.nrcan.gc.ca/indexeng.php. Daily data can be downloaded at the end of a 24-hour period.
2011, Oct 25, 2011, Mar 9, 2012, Mar 12, 2012, Oct 1, 2012, and Oct 9, 2013 (UTC) from two magnetometer locations, OTT (Ottawa) and SNK (Sanikiluaq). We also have synchrophasor data (event records) for the same days at 10 locations in Quebec. We use a magnetic index, the K-index, computed by Natural Resources Canada (NrCan) for three Canadian observatories and available from their website. Two of them provide K-values used to compute the planetary Kp-index [13]. Since Kp-index forecasts are computed 24-72 hours in advance (at 3-hour intervals), by definition, they serve as a type of coarse "early warning" for utilities of storm severity. We propose to take advantage of the high temporal resolution of the geomagnetic and geoelectric field data to monitor the characteristics of geomagnetic activity within a 3-hour interval. Our training data has two characteristics: 1) significant geomagnetic activity was observed, as reported by K-indices, and 2) fluctuations in voltage harmonics were observed, as reported by synchrophasors. We hypothesize that data from these complementary sources are correlated and that we can use the first source to build a predictive model of events in the second source. We test our approach on magnetometer data and voltage harmonics from synchrophasors for actual grid events that occurred during the period 2011-2013. The geoelectric field depends on the magnetic field (B) and the soil characteristic (Z); in the frequency domain, E(ω) = Z(ω)B(ω). We use calculated E given by NrCan, based on a typical global Quebec Earth model [3]. In the Quebec area, local geology may differ from the global Earth model and hence from the calculated geoelectric field. For the same B, if we notice higher correlation between calculated E and harmonic distortion at one location, then we have an indication that the local geology may follow the NrCan Earth model. If we notice poor correlation, then the local geology may differ from the NrCan model.
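As a rough numerical illustration of the frequency-domain relation E(ω) = Z(ω)B(ω), one could estimate a geoelectric field component from a magnetometer series via the FFT. This is a sketch only: it substitutes a uniform half-space impedance, with an assumed conductivity, for the layered NrCan Quebec Earth model used in the paper.

```python
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability (H/m)

def geoelectric_from_b(b_nt, dt=60.0, sigma=1e-3):
    """Estimate a geoelectric field component (V/km) from a magnetic field
    component (nT) via E(w) = Z(w) B(w) / mu0, using a uniform half-space
    surface impedance Z(w) = sqrt(i*w*mu0/sigma). The conductivity sigma
    is an illustrative assumption, not the layered Quebec Earth model."""
    b = np.asarray(b_nt, dtype=float) * 1e-9         # nT -> T
    B = np.fft.rfft(b - b.mean())                    # detrended spectrum
    w = 2.0 * np.pi * np.fft.rfftfreq(b.size, d=dt)  # angular frequency (rad/s)
    Z = np.sqrt(1j * w * MU0 / sigma)                # half-space impedance (ohm)
    e = np.fft.irfft(Z * B / MU0, n=b.size)          # E = Z * H = Z * B / mu0, in V/m
    return e * 1e3                                   # V/m -> V/km
```

With one-minute samples (dt=60), this mirrors the resolution of the observatory data described above.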
By looking at different locations with the same B and comparing results of calculated E, we could draw inferences about the local geology.

B. Synchrophasor Data
We have ground truth in the form of voltage harmonics per phase recorded by 10 synchrophasors in the Hydro-Québec service region [14]. For training, we have 2nd-8th order harmonics sampled every 4 seconds for grid events recorded on the training set days. From these measurements, we computed levels of the even harmonics distortion (EHD) ratio. Based on operational knowledge, we use a voltage distortion of 0.6% per phase, sustained over a time window of seconds, as the baseline limit violation threshold for a GIC-related grid event. Taking voltage distortion from the three phases over a period of a few seconds, we can pinpoint voltage distortion that is GIC-related. We use the same EHD ratio threshold for training and testing. For testing, we evaluate the performance of our predictive models on events occurring on other dates in our sample data from 2011-2013.

IV. PROBLEM APPROACH
Our objective is to automate analysis of geomagnetic disturbances by designing a "cognitive" system that generates
inferences and recommendations about GMDs. We hypothesized that geomagnetic/geoelectric field data are good predictors of GIC-related harmonics activity on the grid and can be used to alert operators in advance of large-scale events.

A. Definitions and Problem Statement
To formulate the problem of generating warnings for large-scale grid disturbances as one of event prediction, we examined related work on using time-series data to predict telecommunications failures [15][16] and computer system failures [17][18]. We base our problem formulation on prior work on predicting rare events in the telecommunications network [15][16], which focused on non-numerical data. We broadened this formulation to handle multiple data streams that could contain both numerical and non-numerical data. (A space weather forecast feed is an example of a data stream containing non-numerical data that can be time-aligned with numerical data streams. In this paper, we consider only numerical data streams.) Since these streams originate from independent data sources, we conjecture that they are related based on temporal correlation [19]. We also conjecture that we can learn features from one stream, which can be used to predict the occurrence of an event on the other stream. We were motivated to adapt this formulation since GIC is a complex, geophysical problem, and as shown in previous work [20], two time instants may have the same GIC but not necessarily the same values of B or dB/dt. Event prediction allows us to ease the constraints of strict time-alignment. We define two data streams, X and Y, which record sensor measurements. The sampling rate of the measurements can be different but each measurement has an associated timestamp, and therefore, the streams can be time-aligned. Measurements on X can trigger an alarm, a, represented by a feature, f, if the one-minute data sample from X exceeds a pre-defined threshold for f.
Similarly, measurements on Y represent some observed event, e, if 30% of the data samples in a one-minute time window exceed a pre-defined threshold (i.e., the EHD ratio). We note that all alarms on X originate from magnetometer data streams and all events on Y originate from harmonics data streams. For an alarm, a, triggered at time tX, we define A as the alarm monitoring time [15], during which any stream, Y, is monitored for the onset of an event, e. Borrowing from economic theory [21], we deconstruct the monitoring time, A, of an alarm by dividing it into two intervals: a recognition lag time (B) and an action lag time (A-B) (Figure 1). The recognition lag represents the time needed by an operator to notice a change in the environment (alarm) and decide on a course of action. The action lag represents the time the operator has to take corrective action. The time delay for an alarm, D, gives the operator an upper bound on the time to complete an action. Although technically we define D as starting from tAlarm, in practice, we do not consider the operator recognition time, B, as actionable time. We discuss time delay in detail in a later section. As in [15], an alarm, a, triggered on X at tX is a true predictor for an event, e, detected on Y at tY, iff B ≤ tY - tX ≤ A. For a given alarm, if there are no events that satisfy the above criterion, then it is a false alarm. For an event, e, on Y, we have a correct prediction of e if there is at least one alarm, a, on X no earlier than tY-A and no later than tY-B.
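Under these definitions, labeling each alarm as a true predictor or a false alarm reduces to a window check on timestamps. A minimal sketch (the function and variable names are ours, not the authors'):

```python
from bisect import bisect_left

def classify_alarms(alarm_times, event_times, A, B):
    """Label each alarm a true predictor if some event at t_e satisfies
    B <= t_e - t_a <= A; otherwise it is a false alarm.
    Both arguments are sorted lists of timestamps in minutes."""
    labels = []
    for t_a in alarm_times:
        i = bisect_left(event_times, t_a + B)  # first event at or after t_a + B
        labels.append(i < len(event_times) and event_times[i] <= t_a + A)
    return labels
```

For example, with A=60 and B=10, an alarm at t=0 predicts an event at t=50, while an alarm at t=45 does not, since the event falls inside its recognition lag.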
To address this tradeoff, we ensure that precision does not drop below some acceptable minimum while maximizing recall.
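This selection policy, maximize recall subject to a precision floor, can be sketched as follows. The (threshold, precision, recall) triples would come from a precision-recall sweep over limit violation thresholds; the names here are ours:

```python
def pick_threshold(pr_points, min_precision=0.25):
    """pr_points: iterable of (threshold, precision, recall) tuples.
    Return the threshold that maximizes recall while keeping precision at or
    above the floor, breaking ties in favor of higher precision."""
    feasible = [(r, p, h) for (h, p, r) in pr_points if p >= min_precision]
    if not feasible:
        return None  # no operating point meets the precision floor
    _, _, best_h = max(feasible)  # max by recall, then precision
    return best_h
```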
[Figure 1 schematic: an alarm at tAlarm starts the alarm monitoring time (A), which divides into the recognition lag (B) and the action lag (A-B); the time delay (D) runs from tAlarm toward the observed grid effect.]
Figure 1. Alarm lag times and time delay.
B. Model Generation
Deriving a set of good features that model the domain is typically an iterative process that requires human engineering. Given that post-disturbance analyses play an integral role in power systems resiliency and often discuss causal factors of events, we benefited from these reports by automating some of the cognitive aspects of feature engineering. First, we extracted observations from key sections (e.g., abstract and figure captions) of post-disturbance reports [10][11] using lexico-syntactic patterns. Examples of these features include dB/dt (rate of change of magnetic field) and E (electric field). We then expanded the feature set with the help of an SME and converged on the following features, F:
D. Time Delay We define time delay, D, as the time between the triggering of some alarm, a, on X, and the time all consequent grid effects, e, are observed on Y. In practice, we discount the alarm recognition time, B, and introduce the constraint that time delay cannot exceed the alarm monitoring time, A. We calculate D such that recall is maximized for the given alarm type. For different limit violation threshold settings of the alarm, we attain different precision and recall values. So, we qualify the selection of D with the threshold value, h, that gives us the best precision for the maximum value of recall. (Threshold settings are discussed in the experimental methodology section.) Another way to interpret D is that it is the lead or early warning time the operator has to take action. Since the monitoring time of alarms may overlap resulting in multiple alarms being "active" at any given time, our cognitive system uses time delay to compute a simple function, R, for ranking alarms and deciding which alarm to service next. We rank the alarms according to time delay depending on the K-value or severity (G1-G5) of the current situation (storm) as shown in (1). Corrective actions can be recommended to the operator for each alarm type such that any dependencies are preserved and the time to complete an action does not exceed its corresponding D. (Note that when the situation is not severe, the operator may prefer to service alarms according to additional criteria.)
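One plausible reading of the ranking function in (1) is a min-max normalization of time delay under severe conditions, so that the alarm with the least remaining action time is serviced first. A sketch under that assumption (`active`, `k_index`, and `severe` are our illustrative names):

```python
def rank_alarms(active, k_index=0, severe=False):
    """active: dict of alarm id -> time delay D (minutes).
    When K >= 5, or the storm severity is at least G1, rank by
    R = 1 - (D - Dmin) / (Dmax - Dmin), highest R (shortest D) first."""
    if not (k_index >= 5 or severe):
        return sorted(active)  # non-severe: the operator may use other criteria
    d_min, d_max = min(active.values()), max(active.values())
    span = (d_max - d_min) or 1.0  # guard against identical delays
    rank = {a: 1.0 - (d - d_min) / span for a, d in active.items()}
    return sorted(active, key=lambda a: rank[a], reverse=True)
```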
f1: dB/dt (rate of change of total magnetic field)
f2: dBx/dt (rate of change of northward component of magnetic field)
f3: dBH/dt (rate of change of horizontal component of magnetic field)
f4: Emag (magnitude of the electric field)
f5: Ex-ang (direction change in Ex-angle within 10-minute time window)
f6: Ey-ang (direction change in Ey-angle within 10-minute time window)

R = 1 - (D - Dmin) / (Dmax - Dmin), if (K >= 5) or (sev >= G1)    (1)
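Features f1-f4 can be computed directly from the one-minute B and E component samples; a sketch follows. The direction-change term here approximates f5/f6 by the E-vector angle change over a 10-minute window, since the paper leaves the exact per-component definitions to the authors:

```python
import numpy as np

def gmd_features(bx, by, bz, ex, ey, dt=60.0, win=10):
    """Compute f1-f4 plus an approximate direction-change feature from
    one-minute B (nT) and E (V/km) component samples."""
    bx, by, bz, ex, ey = (np.asarray(v, dtype=float) for v in (bx, by, bz, ex, ey))
    f1 = np.gradient(np.sqrt(bx**2 + by**2 + bz**2), dt)  # dB/dt (total field)
    f2 = np.gradient(bx, dt)                              # dBx/dt (northward)
    f3 = np.gradient(np.hypot(bx, by), dt)                # dBH/dt (horizontal)
    f4 = np.hypot(ex, ey)                                 # Emag
    ang = np.unwrap(np.arctan2(ey, ex))                   # E direction (rad)
    lagged = np.concatenate([np.repeat(ang[0], win), ang[:-win]])
    f56 = np.abs(ang - lagged)                            # angle change over win samples
    return f1, f2, f3, f4, f56
```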
V. EXPERIMENTAL METHODOLOGY AND RESULTS
Now, we model grid behavior observed at the synchrophasors as a function of these features. Unlike a typical learning problem where the objective is to learn a unified model for all synchrophasors, the range of values of one or more of the magnetometer features is likely to vary based on location due to changes in local geology [11]. Since we do not have a one-to-one correspondence between magnetometers and synchrophasors, we first map each synchrophasor to its most correlated magnetometer and then learn its predictive features.
Next, we combine features using the following methods: conjunction, disjunction and concatenation. A conjunctive/disjunctive feature represents the co-occurrence of multiple features within the same time window (e.g., "f5 AND f6 within time=X", "f5 OR f6 within time=X"). A concatenative feature represents a temporal ordering or sequence of features (e.g., "f1 followed by f2 within time=X"). Since this is a recursive definition, each of the combined features may itself be a complex feature.

C. Evaluation
We use precision and recall to evaluate predictive performance. Recall is the percentage of events in Y for which we have alarms in X. As in [15], we use normalized precision, the percentage of alarms generated that are true predictors. In this domain, the cost of a missed alarm (false negative) is greater than the cost of a false alarm due to the catastrophic potential of these storms. Therefore, we should maximize recall. However, we do not want to flood the operator console with many false alarms, which ultimately may be ignored.
A. Experimental Design
We studied two magnetometers, SNK and OTT. Our SME provided an initial mapping of synchrophasors to these magnetometers based on prior correlation studies: the synchrophasor at location T (SynchrophasorT) is highly correlated with SNK, and the other 9 synchrophasors are highly correlated with OTT. We encode these mappings as domain knowledge. We ran our experiments on different magnetometer-synchrophasor pairs, but present results for OTT since this magnetometer is in the same zone (sub-auroral) as the main network of synchrophasors. For SynchrophasorT, we also expect that there will be differences in the feature signatures we learn for OTT vs. SNK. For each synchrophasor, we detected the total number of events for a given date in our test set. We tested our models on event data for 2012, and we present results for April 23, 2012 and July 15, 2012.
B. Results and Discussion
We consider two window sizes, A=30 and A=180 minutes, representing small and large alarm monitoring times. Then, we varied the recognition time, setting B=10 and B=15. For smaller A, B=10 generally outperformed B=15, so we only show the results of these runs. Each point on the P-R curve for a feature represents a specific value of the limit violation threshold for that feature. Our target is the upper right corner of the P-R plot, which represents the best performance. In Figure 2, we plot P-R curves representing the predictive models for OTT-SynchrophasorT. We observe a trend in the results for f1-f4 and f7: increasing the action lag time (A-B) gives us better performance in most cases. This is consistent with what we expect since there may be a time delay from the instant when the alarm is triggered to the time when the grid effect is observed. A sample curve in Figure 2, dB_A30_B10, shows the levels of precision and recall for dB/dt, using A=30 and B=10, for different threshold settings. From this curve, if we want to maximize recall with a minimum precision of 25%, we choose the limit violation threshold for the point at recall=0.65 and precision=0.25. From Figure 2, we see that the best performing models are f=Emag and f=Ecomp with A=180 and B=10.
To summarize, we often have to trade off precision to achieve high recall. However, higher recall reduces the number of false negatives, and we miss fewer real events. For GMDs, it is critical to minimize the number of false negatives. In Table 2, we list some statistics for our features. We find that the highest precision occurs for electric field magnitude, Emag (0.74), and the lowest for dBx (0.22). Variation of Bx (the northward component) alone is therefore insufficient to characterize a GMD event; GMD events are more strongly related to electric field magnitude. Features dB and dBh have nearly the same precision, but dBh has a shorter time delay. Since the only difference is dBz, we conjecture that while dBz delays the
We evaluated the predictive performance of f1, f2, f3, f4, f5, f6 and one complex feature, f7, which expresses both a temporal relationship and a conjunctive relationship between individual features: f7 = "f4 followed by (f5 AND f6)". We performed multiple runs, varying the size of the time windows, A (30 minutes, 180 minutes) and B (10 minutes, 15 minutes). For each run, for each fi, we computed the range of values, (min, max), of that feature from the training phase and computed a micro-averaged precision-recall (P-R) curve by varying the limit violation threshold in increments of 10%. All results are evaluated on the test set. Each P-R curve represents a prediction model (features + threshold settings). By plotting these curves on the same graph, we can compare the relative performance of these models.
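This threshold sweep can be sketched as below; `label_fn` is a hypothetical stand-in that, for a given threshold, pools the counts of true predictors, false alarms, and missed events across the test set (yielding the micro-average):

```python
def pr_sweep(f_min, f_max, label_fn, steps=10):
    """Vary the limit violation threshold in 10% increments of the
    training-range (min, max) and collect (threshold, precision, recall)
    points; label_fn(h) returns pooled (tp, fp, fn) counts at threshold h."""
    curve = []
    for i in range(1, steps + 1):
        h = f_min + i * (f_max - f_min) / steps
        tp, fp, fn = label_fn(h)
        precision = tp / (tp + fp) if (tp + fp) else 0.0
        recall = tp / (tp + fn) if (tp + fn) else 0.0
        curve.append((h, precision, recall))
    return curve
```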
Although D1 and D7 remained the same, precision decreased as expected in both cases (more false alarms): precision(D1)=41.6% and precision(D7)=46.5%. This suggests that while f1 is less discriminative than f7 in filtering out false alarms, its consequent harmonic distortion may be detected before that of f7.
Figure 2. Performance of select features for OTT-SynchrophasorT
We repeated the experiments by adding a month of non-event days to the test set.
Figure 3. Inferring time delay of feature dB/dt for OTT-SynchrophasorT
By varying the size of A (starting with 30 minutes) in fixed increments, we can infer the time delay, D, between the time of the occurrence of an alarm type and the time when the grid effects are observed. In Figure 3, we plot P-R curves for f1, dB/dt, for select values of A. We found a maximum achievable precision of 54% when recall=100%. The smallest value of A that gives us this performance is 80. Phrasing this differently, once we observe changes in dB/dt at OTT, we expect there to be harmonic distortion on the grid observed at SynchrophasorT in the next 80 minutes. The corresponding time delay, D1, for f1 is 70. Similarly, in Figure 4, the maximum achievable precision for f7 when recall=100% is 47.6%. The smallest value of A that gives us this performance is 90, and the computed time delay, D7, for f7 is 80. Since precision(D1) > precision(D7), we infer that f1 performs better than f7 on our test data. Table 1 gives limit violation threshold values corresponding to the P-R curves for D1 and D7.
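This procedure of growing A until recall saturates can be sketched as follows; `eval_model` is a hypothetical callback returning (precision, recall) for a given window setting:

```python
def infer_time_delay(eval_model, B=10, a_grid=range(30, 181, 10)):
    """Find the smallest alarm monitoring time A at which recall reaches 1.0
    with the best precision observed, and report the time delay D = A - B."""
    best = None  # (precision, A) of the best operating point so far
    for A in a_grid:
        precision, recall = eval_model(A, B)
        # strict '>' keeps the smallest A among equally precise settings
        if recall == 1.0 and (best is None or precision > best[0]):
            best = (precision, A)
    if best is None:
        return None
    precision, A = best
    return {"A": A, "B": B, "D": A - B, "precision": precision}
```

For a model matching the dB/dt result above (precision 0.54 at recall 1.0 from A=80 onward), this returns D=70.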
Figure 4. Inferring time delay of complex feature for OTT-SynchrophasorT
        dB (D1 = 70)                    Ecomp (D7 = 80)
Threshold   Precision  Recall   Threshold   Precision  Recall
  9.9615    0.019685   1         14.8312    0.03252    1
 19.923     0.064725   1         29.6237    0.11834    1
 29.8845    0.16949    1         44.4162    0.24096    1
 39.8461    0.34483    1         59.2087    0.37037    1
 49.8076    0.54054    1         74.0012    0.47619    1
 59.7691    0.5        0.5       88.7937    0.52       0.65
 69.7306    0.71429    0.5      103.5862    0.65       0.65
 79.6921    0.83333    0.5      118.3788    0.72222    0.65
 89.6536    0.90909    0.5      133.1713    0.76471    0.65
 99.6152    1          0.5      147.9638    0.83333    0.5

Table 1. Limit violation thresholds for P-R values.

Feature   Settings   Recall   Precision   Time delay   False negatives
dBx       A80_B10    1        0.22        70           0
Ecomp     A90_B10    1        0.476       80           0
dB        A90_B10    1        0.54        80           0
dBh       A80_B10    1        0.55555     70           0
Emag      A90_B10    1        0.741       80           0

Table 2. Determining feature relevancy for GMDs.
response time, it does not provide additional information for event recognition. The shortest delay is for dBx and dBh, but dBh has higher precision. This suggests that dBh is more relevant for GMDs than dBx and dB. Emag has much higher precision than dBh; the main difference is that the electric field takes account of local geology. We confirm that, for a GMD event, impact depends on the soil properties. We find that we may achieve higher precision at the expense of greater time delay, which introduces a second tradeoff. For SynchrophasorT, we find that Emag is a better feature since we achieve high precision while maximizing recall and bounding time delay to the maximum value for this location. Evaluating the precision of our overall approach is a challenging task and an open research topic. Recent results were published using a Quebec Earth model to model the effects of uncertainty [3]; they show that the calculated values of E have error bars of 40%. Related results on Earth models in the US have also been published recently [22]. We are not aware of other published papers on this topic.

VI. CONCLUSION
In this paper, we have shown how a fine-grained, localized analysis of geomagnetic activity within a 3-hour time window can be used to predict potential impact at nearby locations on the grid. This analysis can be used to infer properties such as time delay, which would ultimately help operators react to GMD events faster and understand their characteristics.
REFERENCES
[1] J. G. Kappenman, "Geomagnetic Disturbances and Impacts upon Power System Operation," in The Electric Power Engineering Handbook, L. L. Grigsby (Ed.), 2nd ed., pp. 16-1–16-22, CRC Press/IEEE Press, 2007.
[2] T. S. Molinski, "Why Utilities Respect Geomagnetically Induced Currents," Journal of Atmospheric and Solar-Terrestrial Physics, 64, 16, 1765-1778, 2002.
[3] D. H. Boteler, "The Evolution of Quebec Earth Models Used in Modelling Geomagnetically Induced Currents," IEEE Transactions on Power Delivery, vol. PP, issue 99, Dec. 2014.
[4] D. H. Boteler, R. M. Shier, T. Watanabe, and R. E. Horita, "Effects of Geomagnetically Induced Current in the BC Hydro 500 kV System," IEEE Trans. Power Del., vol. 4, no. 1, pp. 818-823, January 1989.
[5] N. Takasu, T. Oshi, F. Miyawaki, S. Saito, and Y. Fujiwara, "An Experimental Analysis of DC Excitation of Transformers by Geomagnetically Induced Currents," IEEE Trans. Power Del., vol. 9, no. 2, pp. 1173-1182, April 1994.
[6] R. S. Girgis and C-D. Ko, "Calculation Techniques and Results of Effects of GIC Currents as Applied to Two Large Power Transformers," IEEE Trans. Power Del., vol. 7, no. 2, pp. 699-705, April 1992.
[7] P. Czech, S. Charo, H. Huynk, and A. Dutel, "The Hydro-Québec System Blackout of March 13, 1989: System Response to Geomagnetic Disturbance," Proc. of EPRI Conference on Geomagnetically Induced Current, San Francisco, Nov. 8-10, 1989.
[8] R. Pirjola, "Review on the Calculation of Surface Electric and Magnetic Fields and of Geomagnetically Induced Currents in Ground-based Technological Systems," Surveys in Geophysics, 23(1), 71-90, 2002.
[9] D. H. Boteler, "Geomagnetically Induced Currents: Present Knowledge and Future Research," IEEE Trans. on Power Delivery, 9, 50-58, 1994.
[10] L. Bolduc, P. Langlois, D. Boteler, and R. Pirjola, "A Study of Geoelectromagnetic Disturbances in Québec, 2. Detailed Analysis of a Large Event," IEEE Transactions on Power Delivery, 13, 4, 1998.
[11] L. Bolduc, P. Langlois, D. Boteler, and R. Pirjola, "A Study of Geoelectromagnetic Disturbances in Québec, 1. General Results," IEEE Transactions on Power Delivery, 13, 4, 1998.
[12] L. Bolduc, "GIC Observations and Studies in the Hydro-Québec Power System," Journal of Atmospheric and Solar-Terrestrial Physics, 64, 1793-1802, 2002.
[13] M. Menvielle and A. Berthelier, "The K-derived Planetary Indices: Description and Availability," Rev. Geophysics, 29, 3, 415-432, 1991.
[14] I. Kamwa, J. Beland, G. Trudel, R. Grondin, C. Lafond, and L. McNabb, "Wide-Area Monitoring and Control at Hydro-Québec: Past, Present and Future," Proc. of IEEE Power Engineering Society General Meeting, 2006.
[15] G. M. Weiss and H. Hirsh, "Learning to Predict Rare Events in Event Sequences," Proc. of the Fourth International Conference on Knowledge Discovery and Data Mining, 359-363, 1998.
[16] G. M. Weiss, "Timeweaver: A Genetic Algorithm for Identifying Predictive Patterns in Sequences of Events," Proc. of the Genetic and Evolutionary Computation Conference, 1999.
[17] R. K. Sahoo, A. J. Oliner, I. Rish, M. Gupta, J. E. Moreira, S. Ma, R. Vilalta, and A. Sivasubramaniam, "Critical Event Prediction for Proactive Management in Large-scale Computer Clusters," Proc. of the Ninth International Conference on Knowledge Discovery and Data Mining, 2003.
[18] E. Aharoni, S. Fine, Y. Goldschmidt, O. Lavi, O. Margalit, M. Rosen-Zvi, and L. Sphigelman, "Smarter Log Analysis," IBM Journal of Research and Development, vol. 55, no. 5, paper 10, Sep./Oct. 2011.
[19] F. He, X. Mu, and L. Liu, "Wavelet De-noising and Correlation Analysis in GIC Signal during Magnetic Storm," International Workshop on Modeling, Simulation and Optimization, Dec. 2008.
[20] A. Viljanen, "The Relation between Geomagnetic Variations and their Time Derivatives and Implication for Estimation of Induction Risks," Geophysical Research Letters, vol. 24, no. 6, pp. 631-634, 1997.
[21] F. Rendigs and C. E. Hinshaw (eds.), Forecasting and Recognizing Business Cycle Turning Points, National Bureau of Economic Research, 1968.
[22] L. H. Wei, M. Homemeiter, and J. H. Gannon, "Surface Electric Fields for North America during Historical Geomagnetic Storms," Space Weather, vol. 11, 451-462, 2013.