Article

An Ensemble-Boosting Algorithm for Classifying Partial Discharge Defects in Electrical Assets

Abdullahi Abubakar Mas'ud 1,*, Jorge Alfredo Ardila-Rey 2, Ricardo Albarracín 3 and Firdaus Muhammad-Sukki 4

1 Department of Electrical and Electronics Engineering, Jubail Industrial College, P.O. Box 10099, Jubail Industrial City 31261, Saudi Arabia
2 Department of Electrical Engineering, Universidad Técnica Federico Santa María, Santiago 8940000, Chile; [email protected]
3 Departamento de Ingeniería Eléctrica, Electrónica, Automática y Física Aplicada, Escuela Técnica Superior de Ingeniería y Diseño Industrial, Universidad Politécnica de Madrid, Ronda de Valencia 3, Madrid 28012, Spain; [email protected]
4 School of Engineering, Robert Gordon University, Aberdeen AB10 7GJ, Scotland, UK; [email protected]
* Correspondence: [email protected]; Tel.: +96-653-813-8814

Received: 18 July 2017; Accepted: 4 August 2017; Published: 8 August 2017

Abstract: This paper presents an ensemble-boosting algorithm (EBA) for classifying partial discharge (PD) patterns in the condition monitoring of insulation in electrical assets. The approach is an optimization technique that creates a sequence of artificial neural networks (ANNs), where the training data for each constituent of the sequence is selected based on the performance of the previous ANNs. Four different PD fault scenarios were manufactured in the high-voltage (HV) laboratory: cylindrical voids in methacrylate, a point-air-plane configuration, a ceramic bushing with a contaminated surface and a transformer affected by internal PD. A PD dataset was collected, pre-processed and prepared for use in the improved boosting algorithm using statistical techniques. The EBA is extensively compared with the widely used single artificial neural network (SNN). Results show that the proposed approach can effectively improve the generalization capability for the PD patterns. The applicability of the proposed technique to both online and offline practical PD recognition is examined.

Keywords: condition monitoring; insulation diagnosis; electrical assets; partial discharge; artificial neural networks; single artificial neural network; ensemble-boosting algorithm

1. Introduction

High-voltage (HV) equipment maintenance is an important aspect of the power industry, and the reliability and continuity of service of this equipment are of utmost importance. To prevent damage to equipment and minimize shutdowns, it is important to establish and follow appropriate criteria for insulation selection [1] and then to test and monitor HV apparatuses and analyze them to prevent potential faults. Among the effects that can lead to premature aging of the HV insulation systems (i.e., polymers) used in electrical machines (such as motors or transformers) are electrical, thermal, environmental and mechanical stress [2]. Most of these negative effects can provoke serious degradation of the insulation of the equipment; as a result, effective condition-monitoring (CM), condition-based maintenance (CBM) and preventive maintenance actions are required [3,4]. One well-known CM technique is partial discharge (PD) testing. PDs are electrical discharges that commonly occur in HV apparatus and whose continuous activity can cause a total breakdown of the electrical insulation system of the machinery or asset.


Thus, it is essential to detect PD faults at an early stage, before they lead to disastrous and expensive failures of electrical apparatus. It is known that PDs are ionization effects that can be classified into three main classes of sources, namely internal, surface and corona [5,6]. Internal PD occurs in voids filled with gas or liquid. These sources are the most dangerous for insulation systems, and CM of electrical machines based on PD measurements largely aims to identify this kind of source and its evolution over time in solid insulation systems [3–7], both for rotating machines such as motors [8,9] and for transformers [10]. Surface PD occurs between two dielectrics, commonly between the air and the insulation system, for instance due to pollution or moisture on the insulators of transformers [11]; detecting this type of source is therefore of utmost importance for the correct operation of power transformers. Corona discharges occur in gases or liquids subjected to highly divergent electric fields near sharp metallic points [5].

PDs are commonly represented superimposed on an AC cycle of the voltage reference. This representation is known as a phase-resolved PD (PRPD) pattern, because each kind of PD source produces a typical form. Internal PD shows a cluster spanning from the zero crossings, 0° or 180°, of the AC signal, whose pulses have different peak amplitudes. For surface PD, the PRPD pattern accumulates around the maximum and minimum of the AC voltage; the signals also have different peak amplitudes depending on the semi-cycle. Finally, corona PDs commonly appear around 90° and 270°, with all the pulses having similar peak amplitudes.

The first step in PD CM is capturing the PD data as either PRPD 2D or phase-amplitude-number (phi-q-n) 3D patterns. Based on the literature, the phi-q-n representation has been the most widely used [12,13]. These patterns are pre-processed and converted to 2D-derived forms in order to extract certain statistical features that can be applied as input to any pattern recognition tool. However, PRPD patterns represented as 2D plots have not been well investigated as inputs for PD pattern recognition by artificial intelligence techniques such as artificial neural networks (ANNs) [14,15]. The aim of this paper is to extract suitable PD fingerprints from the PRPD without having to carry out any statistical feature extraction.

This manuscript investigates the robustness of the ensemble-boosting algorithm (EBA) in recognizing different PD fault sources within different HV insulation systems. The results are then compared with the widely applied single neural network (SNN). Several pattern recognition techniques have been applied for PD classification [16–18], and among them the SNN has been the most successful. The EBA has been considered in this paper because research shows that it has high recognition capability when applied to datasets from other disciplines [19,20]. PD fault source identification was first tested in the laboratory with two test objects: cylindrical vacuoles in methacrylate and a point-plane arrangement in air. Moreover, two additional PD sources that can take place in typical electrical elements in an industrial environment, a ceramic bushing with a contaminated surface and internal defects in a power transformer, have been analyzed to study the robustness of the EBA.
This paper is organized as follows: Section 1 is the introduction; Section 2 describes the experimental setup; Section 3 provides the description of the EBA; Section 4 explains the SNN algorithm; Section 5 describes the PD input fingerprints for the EBA and the SNN; Section 6 presents the strategies for training and testing the EBA; Section 7 presents the results and discussion; and Section 8 gives the conclusions.

2. Experimental Setup

In order to generate a stable PD activity that correctly evaluates the aging of each of the test objects used in this paper, a classical indirect detection circuit has been implemented together with a pre-programmed acquisition system. The circuit and acquisition system are configured to acquire each PD pulse without the presence of external sources or disturbances associated with electrical noise. According to the requirements of the IEC 60270 standard [21], the measuring circuit consists of an HV variable source of up to 100 kV and a capacitive voltage divider, which acts as a low-impedance path for the high-frequency currents from the PD pulses (see Figure 1).


For this circuit, the synchronization with the grid frequency was done through a measuring impedance Zm. This setup enables two things: (1) obtaining the reference signal; and (2) plotting each one of the PD sources in conventional PRPD patterns to be used later for the evaluation of the state of the insulation.

Figure 1. Experimental setup used according to the IEC 60270 standard.

The PD signals were captured by using a high-frequency current transformer (HFCT) with a bandwidth of up to 80 MHz, which was coupled to the branch of the voltage divider just after Zm. These transients were stored and processed by an acquisition system formed by a NI-PXIe-1082 chassis, a NI-PXI-5124 acquisition board with a sampling frequency of 200 MS/s, and a NI-PXIe-8115 controller. For all test objects, PDs were acquired by establishing a voltage level where PD activity was stable. Additionally, the trigger level of the acquisition system was placed above the laboratory background noise in order to obtain only signals associated with PD from each test object.

The measurement procedure started once the PD activity was held stable at a certain voltage level depending on the type of test object being investigated. Then, by maintaining the initial voltage level for 7 h, the PD activity was recorded every 20 min. In each measurement, the number of pulses captured was 2000. This was carried out to obtain a statistically significant sample and a clear PRPD in terms of the number of pulses. In total, for each test object, 20 measurements composed of 2000 PD signals were performed, which indicates that the total number of PD pulses stored and processed for each test object was 40,000.

Four test objects were used to create the different types of PDs obtained experimentally:

• Cylindrical vacuoles in methacrylate: For the generation of internal PD, three pieces of methacrylate with two perforations in the centerpiece (to obtain two artificial cylindrical vacuoles of 3 mm height each) were used, as illustrated in Figure 2a. The test object was placed between two electrodes subjected to HV and immersed in mineral oil to avoid surface discharges.
• Point-air-plane configuration: To obtain a stable corona PD activity, a thin metal tip (connected to the HV source) was placed 3 cm above a metallic ground plane, as presented in Figure 2b. To avoid undesired arcing, the supporting frame is made of insulating material.

In order to apply the proposed technique to electrical assets that closely resemble a typical real environment, the following two specimens were also used:

• Ceramic bushing with contaminated surface: This test object is a ceramic bushing contaminated with saline solution to favor the ionization paths for surface discharges, see Figure 2c. HV was applied at its ends through two electrodes.

• Transformer affected by internal PD: A power transformer, 150 kVA, 12 kV/400 V, using oil-paper as an insulating system, was tested, as indicated in Figure 2d. The rated voltage was applied between the primary winding and ground (with the secondary grounded).

Figure 2. Test objects: (a) Cylindrical vacuoles; (b) Point-plane experimental specimen; (c) Contaminated ceramic bushing, and; (d) Power transformer (150 kVA).

3. Description of the EBA

Boosting involves the creation of a sequence of ANNs, where the training data for each constituent of the sequence is selected based on the performance of the previous ANNs. Boosting works in such a way that the testing data wrongly predicted by preceding ANNs have a higher selection probability than the testing data that was accurately predicted. In essence, boosting always tries new ANN members for the ensemble that are more capable of predicting the testing set that the current ensemble cannot predict well. The two fundamental forms of boosting that exist in practice are arcing and AdaBoost [22]. The arcing technique is analogous to bagging, where training fingerprints of size m are selected for a number of ANNs by randomly choosing (with replacement) training vectors from the original m input examples. Unlike bagging, the likelihood of choosing an example is not the same across the training examples; this probability depends on how often that example was misclassified by the previous ANNs forming the ensemble. The arcing technique initially sets the probability of selecting each training example to 1/M, where M is the size of the training sample, and applies a simple mechanism to obtain the probability of including examples in the training fingerprints. For the i-th example in the training data, letting ni be the number of times that instance was misclassified by previous ANNs, the probability Pj of choosing example j to be among the next ANN training fingerprints is determined as follows:

\[
P_j = \frac{1 + n_j^{4}}{\sum_{i=1}^{N} \left( 1 + n_i^{4} \right)} \qquad (1)
\]
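As a minimal numerical illustration of Equation (1), the selection probabilities can be computed directly from the misclassification counts; the counts below are hypothetical values chosen only for this example.

```matlab
% Illustrative computation of Eq. (1); the counts in n are hypothetical.
n = [0 3 1 0 5 2];                 % n(i): times example i has been misclassified
P = (1 + n.^4) ./ sum(1 + n.^4);   % selection probability for each example
% With no misclassifications yet (all n = 0), P reduces to the uniform 1/M.
```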

Breiman [22] determined the value of the power "4" numerically, after trial and error with different parameters. AdaBoost is another ensemble technique that applies either of the following procedures for choosing training fingerprints for the ensemble [23]:

(a) Choosing the training examples based on the probability of each example.
(b) Using all of the examples and weighting the error of each training example by its probability (i.e., instances having higher probability have more effect on the error).

AdaBoost has the advantage that each training example forms at least part of the training fingerprints. Similar to arcing, the AdaBoost technique also initially sets the probability of selecting a training example to 1/M, where M is the size of the training sample. It is assumed that εr represents the sum of the probabilities of the misclassified examples for the presently trained ANN classifier Cr. Then, for subsequent ANNs, these probabilities are generated by multiplying the probability of the examples wrongly classified by Cr by a factor βr = (1 − εr)/εr, and all probabilities are then normalized to sum to 1. AdaBoost aggregates all ANN predictions using a weighted voting criterion, where Cr is weighted by the term log(βr). These weights make AdaBoost disregard the outputs of ANNs that appear inaccurate compared to other predictions. In this work, the arcing technique will be applied because of its wide application.
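The arcing-based selection loop described above can be sketched as follows. This is an illustration rather than the authors' implementation; train_member and classify_member are hypothetical stand-ins for the MLP training and prediction detailed in Section 6.

```matlab
% Sketch of arcing-style ensemble construction (illustrative, not the
% authors' code). train_member and classify_member are hypothetical helpers.
function nets = arcing_ensemble(X, T, K)
    % X: M-by-D training fingerprints, T: M-by-1 class labels, K: ensemble size.
    M = size(X, 1);
    n = zeros(M, 1);                           % misclassification counts
    nets = cell(K, 1);
    for k = 1:K
        P = (1 + n.^4) ./ sum(1 + n.^4);       % Eq. (1)
        idx = sample_with_replacement(P, M);   % arcing-selected training set
        nets{k} = train_member(X(idx, :), T(idx));
        pred = classify_member(nets{k}, X);
        n = n + (pred(:) ~= T(:));             % update counts of misclassified examples
    end
end

function idx = sample_with_replacement(P, M)
    % Draw M indices with replacement according to the probabilities P.
    c = cumsum(P(:));
    idx = zeros(M, 1);
    for i = 1:M
        idx(i) = find(rand() <= c, 1, 'first');
    end
end
```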

4. The Single Neural Network (SNN)

Over the years, the SNN has been extensively applied for recognizing PD patterns with great success [9,12,14]. ANNs are artificial intelligence models that imitate the way humans categorize patterns [24]. The arrangement of the ANN comprises the input layer, the hidden layer and the output layer. The size of the output layer is related to the number of classes to be classified. In each layer of the ANN, there are one or more neurons, which calculate the sum of the incoming signals and pass it to a non-linear squashing function, for example a sigmoid function, to produce an output signal. The neurons in the ANN are related to each other by synapses commonly known as weights. A learning cycle of the ANN is shown in Figure 3 below.

Figure 3. A learning cycle of the artificial neural network (ANN): the input vector is fed to the ANN, the resulting output vector is compared with the target vector, and the differences are used to adjust the weights.
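As a minimal numerical sketch of the learning cycle in Figure 3 (input vector, output vector, comparison with the target vector and weight adjustment), the following trains a single sigmoid neuron by gradient descent; the toy data, learning rate and epoch count are hypothetical and are not taken from the paper.

```matlab
% Minimal sketch of the Figure 3 learning cycle for one sigmoid neuron; the
% toy data, learning rate and epoch count are hypothetical.
X = [0 0; 0 1; 1 0; 1 1]';          % input vectors (columns)
T = [0 0 0 1];                      % target vector (logical AND)
w = 0.2 * rand(2, 1) - 0.1;         % weights initialized within -0.1 to 0.1
b = 0.2 * rand() - 0.1;
lr = 0.5;                           % learning rate
for epoch = 1:5000
    y = 1 ./ (1 + exp(-(w' * X + b)));   % forward pass (sigmoid output)
    err = T - y;                         % difference between target and output
    grad = err .* y .* (1 - y);          % sigmoid derivative term
    w = w + lr * (X * grad');            % adjust weights
    b = b + lr * sum(grad);              % adjust bias
end
```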

In most ANNs, the training begins with randomly chosen weights, normally within −0.1 to 0.1. These weights are continuously adjusted based on the difference between the target value and the output value according to a certain algorithm. The back-propagation (BP) algorithm is the most widely used algorithm for the ANN and has been very successful in recognizing PD patterns [12,14]. The BP algorithm is a kind of supervised learning comprising forward and backward passes [14,24]. In the BP algorithm, the input and output pattern data are continuously fed to the ANN, and at each instant the error is backpropagated and the weights are updated until a certain minimum error is accomplished.

5. PD Input Fingerprints for the EBA and the SNN

Initially, the PD data is captured as PRPD patterns showing the PD magnitude (in V) and the corresponding phase angle. The PRPD plot is cumulatively acquired by continuously capturing data over a number of power cycles and comprises the PD amplitude (in V) at different phase angles of the half power cycle. Based on this, the PRPD is then processed into a different plot showing the maximum amplitude versus phase angle, which serves as the input for training and testing the developed EBA.

A tremendous amount of simulation time would be required for the EBA to learn the original PRPD patterns of the four PD defects, so the PRPD data is further reduced. Initially, the PRPD is captured as a vector of amplitude and a vector of phase angle, each of size (2594 × 1). The vector of phase angles is rearranged in ascending order from 0° to 360°, and the amplitude vector is matched accordingly. To reduce the PRPD, the phase values are grouped into bins of 6° resolution, and in each bin the maximum amplitude is taken as the data.
Therefore, each PRPD pattern is now represented as a (1 × 60) vector; a minimal sketch of this reduction is given below.
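The following sketch assumes the raw phase and amplitude vectors are already available; here they are filled with synthetic values for illustration, since only their sizes are known from the text, and the variable names are illustrative.

```matlab
% Sketch of the PRPD reduction described above. phi and q stand for the raw
% (2594 x 1) phase (degrees) and amplitude (V) vectors.
phi = 360 * rand(2594, 1);              % hypothetical raw phase angles
q   = abs(randn(2594, 1));              % hypothetical raw PD amplitudes
[phi, order] = sort(phi);               % rearrange phases in ascending order
q = q(order);                           % match the amplitude vector accordingly
edges = 0:6:360;                        % 6-degree bins -> 60 bins
fingerprint = zeros(1, 60);
for b = 1:60
    inBin = phi >= edges(b) & phi < edges(b + 1);
    if any(inBin)
        fingerprint(b) = max(q(inBin)); % maximum amplitude within each bin
    end
end
% fingerprint is the (1 x 60) vector used as input to the SNN and the EBA.
```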


Figure 4 shows the PRPD patterns and their corresponding reduced patterns of amplitude versus phase angle for application to the ANN and the EBA. The PRPD pattern of the vacuoles in methacrylate (Figure 4a) shows that the discharges are centered around 20°~120° in the positive half power cycle and 180°~320° in the negative half cycle, with an uncalibrated PD amplitude of 0.6 V. It can be observed that the PD patterns are wide in phase and amplitude because of the large surface area and volume available for the discharges to occur, which consequently leads to a larger deposition of conducting particles within the vacuole surface. Figure 4b shows the corona discharge pattern from the point-plane arrangement. There is practically a smaller number of discharges in the positive half power cycle and a large number of discharges in the negative half power cycle concentrated within 15 mV amplitude. The small number of discharges in the positive half power cycle is an indication of positive onset streamers [25]. These positive streamers are current pulses of short duration, low repetition rate and high amplitude created by the large amount of ionic development during streamer growth around the anode. The large number of discharges in the negative half power cycle is an indication of Trichel pulses, which are negative discharges of small amplitude and short duration succeeding each other [25]. Figure 4c shows the PD faults of the ceramic bushing with a contaminated surface, where the discharges are found to be centered around 20°~140° in the positive half power cycle and 200°~300° in the negative half power cycle. This is an indication of surface discharges due to ionization paths created on the ceramic bushing. This kind of PD can deteriorate the insulation through trapped gases and possibly the formation of chemicals [13]. Figure 4d shows surface PD in the power transformer insulation, where the discharge distribution is similar to that of the contaminated bushing in Figure 4c. These types of discharges are triggered by high electric field stress along the paper-oil transformer insulation and in the long term can lead to tracking damage.

Figure 4. Phase-resolved partial discharge (PRPD) patterns and their corresponding reduced patterns for the single artificial neural network (SNN) and the ensemble-boosting algorithm (EBA) for: (a) Cylindrical vacuoles in methacrylate; (b) Point-air-plane configuration; (c) Ceramic bushing with contaminated surface, and; (d) Power transformer (150 kVA).

6. Strategies for Training and Testing the SNN and EBA

The training and testing parameters for the SNN and EBA consist of PRPD raw data, in both the positive and negative half power cycles, of the cylindrical vacuoles, point-air-plane configuration, ceramic bushing and power transformer. Their reduced PRPD vectors are applied as input to the SNN and EBA. In the development of the SNN and EBA for PD recognition, the following procedures were adopted to obtain optimum results:

1. Use the PRPD data as inputs for training and testing both the SNN and the EBA.
2. An EBA consisting of six SNNs was constructed. Each SNN is a multilayer perceptron network (MLPN). MLPNs were chosen because of their performance and overall success rate among ANN models [24]. A number of MLPNs were selected for this investigation in order to have a practical number of diverse models and thereby improve the generalization capability of the EBA. In determining the input fingerprints for the EBA, the training fingerprints are selected for a number of ANNs by randomly choosing (with replacement) training vectors from the original input examples. For every training session of the EBA with a certain PD defect, the six EBA members that best predict the PD fingerprints are chosen after up to 10 iterations, as shown in Figure 5. The output of the EBA is obtained by dynamically weighting the ensemble members' outputs [26]. In designing the EBA, the hidden layer and the learning rate are continuously varied, though the momentum rate remains the same. The aim of the momentum rate is to hasten the training session while offering strong resistance to irregular weight changes. The learning rate is normally chosen to be between 0 and 1. After trial and error with several layers, 10 hidden layers were found to be optimal. In this investigation, a learning rate of 0.05 is chosen, while a momentum rate of 0.6 is adopted for all instances. No validation samples were applied in this work due to the limited data samples. For each training session of the neural network, training is said to be completed when the mean square error target of 10−3 is reached.
3. Backpropagation is applied to train each ANN member in the ensemble. The BP algorithm is a kind of supervised learning [25].
4. In this work, the "newff" function in the Matlab toolbox was used for training the SNN and EBA. "Tansig" is used as the hidden layer function of the neural network, while "logsig" is applied as the output layer function. The training algorithm of the neural network is "traingdm" (an illustrative configuration is sketched after this list).
5. The time needed for the SNN and EBA to learn the PRPD patterns must be short enough.
6. Since the ANN is sensitive to different initial weights and biases, results must be obtained over a number of iterations.
7. In the comparison of the SNN and EBA, the same network parameters are chosen for both cases.
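Item 4 can be read as a conventional Matlab Neural Network Toolbox configuration. The sketch below is illustrative only and is not the authors' script; the epoch limit and the sizes of P (60-row training fingerprints), T (targets) and Ptest are assumptions, and the "10 hidden layers" mentioned above is simplified here to a single hidden layer of 10 neurons.

```matlab
% Illustrative newff/traingdm configuration following item 4 (not the
% authors' exact script). P: 60-by-Ntrain training fingerprints,
% T: targets (one row per PD class), Ptest: 60-by-Ntest testing fingerprints.
net = newff(minmax(P), [10 size(T, 1)], {'tansig', 'logsig'}, 'traingdm');
net.trainParam.lr     = 0.05;   % learning rate
net.trainParam.mc     = 0.6;    % momentum rate
net.trainParam.goal   = 1e-3;   % mean square error target
net.trainParam.epochs = 5000;   % assumed upper bound on training epochs
net = train(net, P, T);         % gradient-descent training with momentum
Y   = sim(net, Ptest);          % outputs for the testing fingerprints
```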

Figure 5. The learning process of the EBA.

The training and testing strategies for both the SNN and the EBA are as follows. First, both the SNN and the EBA are trained with any one of the four PD defects (cylindrical vacuoles, point-air-plane configuration, ceramic bushing and power transformer), and testing is carried out with the same PD defect. This is important in order to understand and compare the capabilities of the SNN and EBA in recognizing certain PD defects from PRPD raw data. The training fingerprints comprise (20 × 60) raw PRPD data of a certain defect. For each PD defect, the testing fingerprints comprise eight rows of the matrix of that defect, that is, an (8 × 60) matrix.

7. Results and Discussion

In this section, results showing the performance capabilities of the SNN and EBA are presented. Figures 6–9 show the recognition rate when both the SNN and the EBA are trained with one PD defect and testing is carried out with the same PD defect. For each figure, results were obtained over 100 iterations of the weights and biases of the neural network. The average result for both the SNN and the EBA trained and tested with one PD defect is shown in Figure 10. It is interesting to note that the EBA obtains a recognition rate of up to 97% when trained and tested with the vacuole defect, while for the SNN trained and tested with the transformer PD fault a recognition rate of 95% can be seen. On average over the PD defects, the EBA provides about 90% recognition efficiency, while the SNN provides around 80%. The results clearly imply that, for both the SNN and the EBA trained and tested with a similar PRPD pattern, there is consistently a greater average recognition probability for the EBA than for the SNN. The EBA also consistently shows a lower variance and standard error of the mean than the SNN, as shown in Figure 11.
However, the standard mean square error over the SNN, as shown in Figure 11. However, the variance and standard variance and standard error ofrates the mean recognition rates the EBA to and be lower in vacuoles error of the mean recognition for the EBA appear to for be lower in appear vacuoles transformer PD and transformer PD faults as compared to others. These might be due to the fact that PD faults of vacuole and transformer are unique and do not vary considerably even with prolonged degradation, unlike corona and bushing PD faults that have a complex mechanism and degradation. The overall


The overall result indicates an improved performance of the EBA algorithm over the widely applied SNN for PD defects.

A presumption regarding the effectiveness of the EBA and SNN led to a rigorous experimental investigation, which revealed that the EBA is efficient in recognizing PD patterns, is capable of learning from exemplars, and produces a better generalization than the SNN. This was demonstrated in Figures 6–9, where, with just 20 training exemplars and eight testing sets, a classification probability of up to 90% for some input classes was found. In addition, no misclassifications were recorded when all the fingerprints were tested against each other to check their resemblance. Additionally, a noticeable improvement in the classification potential of measures based on PRPD raw data has been presented. The results imply that the EBA can be considered as a potential algorithm for online PD detection problems.
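The per-defect statistics behind Figures 10 and 11 (average recognition rate, variance and standard error of the mean over the 100 random initializations) can be summarized as in the following sketch, where the recognition-rate vector is hypothetical.

```matlab
% Sketch of the summary statistics reported in Figures 10 and 11; r is a
% hypothetical vector of recognition rates from 100 runs with different
% random initial weights and biases.
r = min(1, 0.9 + 0.03 * randn(100, 1));  % hypothetical recognition rates
avgRate = mean(r);                       % average recognition rate (Figure 10)
sigma2  = var(r);                        % variance (Figure 11)
sem     = std(r) / sqrt(numel(r));       % standard error of the mean (Figure 11)
```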

Figure 6. Training and testing both the SNN and EBA with the vacuole defect.
Figure 7. Training and testing both the SNN and EBA with the corona defect.
Figure 8. Training and testing both the SNN and EBA with the defect on the bushing surface.
4040 6060 8080 100 100 Number of iterations Number ofof iterations Number iterations Figure9.9.Training Trainingand andtesting testingboth boththe theSNN SNNand andEBA EBAwith with defect a transformer. Figure defect inin a transformer. Figure 9.9. Training and testing both the SNN and defect inina transformer. Figure Training and testing both the SNN andEBA EBAwith with defect a transformer.

Recognition rate Recognition rate Recognition rate

100 100 100 80 8080 60 6060 40 4040 20 2020 0 00

Vacoule vacuole Vacoule Vacoule vacuole vacuole

corona bushing transformer transformer corona bushing corona bushing transformer corona bushing bushingtransformer transformer corona corona bushing transformer EBA SNN EBA EBA SNN SNN

Figure 10. Recognition efficiencies of training and testing with each of the PD defects.
Figure 11. Variance and standard error of the mean recognition efficiencies of the SNN and EBA for each defect (σS = variance of the SNN, σSM = standard error of the mean of the SNN, σE = variance of the EBA, σEM = standard error of the mean of the EBA).

8. Conclusions

In this paper, an EBA was proposed for recognizing PD defects from PRPD raw data captured for four PD fault classes, namely cylindrical vacuoles, a point-air-plane configuration, a ceramic bushing and a power transformer. The results of the EBA were extensively compared with those of the widely applied SNN. In this work, raw PRPD data were applied as input parameters for the SNN and the EBA without having to carry out the widely used statistical feature extraction. The results clearly show that the EBA generally outperforms the SNN in recognizing these defects. This is a clear indication that the application of the EBA can improve the generalization capability and accuracy of the SNN in the pattern classification of PD faults. The results further imply that it is possible to integrate the EBA into PD-based condition monitoring of transformers, underground cables and electrical machines.

As part of further research, it will be necessary to investigate the robustness of the EBA for complex PD faults and for faults mixed with noise. It will also be necessary to test the EBA with field measurements to verify that the lab-based PD data recognition can be successfully applied in practical investigations. To further verify the effectiveness of the EBA, it is also necessary to compare its performance with other ensemble techniques such as random forest and XGBoost.

Acknowledgments: The authors hereby acknowledge the support given by the High-Voltage research laboratories of Universidad Técnica Federico Santa María in conducting the experiments presented in this paper. The authors acknowledge the support of the Chilean Research Council (CONICYT), under the project Fondecyt 11160115.

Author Contributions: Abdullahi Abubakar Mas'ud wrote the paper. Jorge Alfredo Ardila-Rey carried out the experimental measurements. Ricardo Albarracín contributed with the introduction, proposed improvements for the analysis of the results and for the manuscript. Firdaus Muhammad-Sukki carried out a proofreading on the article and provided suggestions for improvement.

Conflicts of Interest: The authors declare no conflict of interest.

References

1. Barré, O.; Napame, B. The Insulation for Machines Having a High Lifespan Expectancy, Design, Tests and Acceptance Criteria Issures. Machines 2017, 5, 7. [CrossRef]
2. Haque, S.M.; Ardila-Rey, J.A.; Masud, A.A.; Umar, Y.; Albarracín, R. Electrical Properties of Different Polymeric Materials and their Applications: The Influence of Electric Field. In Properties and Applications of Polymer Dielectrics; Boxue, D., Ed.; InTech: Rijeka, Croatia, 2017; Chapter 3.

3. Jardine, A.K.; Lin, D.; Banjevic, D. A review on machinery diagnostics and prognostics implementing condition-based maintenance. Mech. Syst. Signal Process. 2006, 20, 1483–1510. [CrossRef]
4. Gill, P. Electrical Power Equipment Maintenance and Testing, 2nd ed.; CRC Press: Boca Raton, FL, USA, 2008.
5. Kreuger, F.H. Partial Discharge Detection in High-Voltage Equipment; Butterworths: London, UK, 1989.
6. Boggs, S. Partial discharge. III. Cavity-induced PD in solid dielectrics. IEEE Electr. Insul. Mag. 1990, 6, 11–16. [CrossRef]
7. Cavallini, A.; Montanari, G.C.; Contin, A.; Pulletti, F. A new approach to the diagnosis of solid insulation systems based on PD signal inference. IEEE Electr. Insul. Mag. 2003, 19, 23–30. [CrossRef]
8. Stone, G.C.; Sedding, H.G.; Costello, M.J. Application of partial discharge testing to motor and generator stator winding maintenance. IEEE Trans. Ind. Appl. 1996, 32, 459–464. [CrossRef]
9. Asiri, Y.; Vouk, A.; Renforth, L.; Clark, D.; NeuralWare, J.C. Neural network based classification of partial discharge in HV motors. In Proceedings of the 2011 Electrical Insulation Conference (EIC), Annapolis, MD, USA, 5–8 June 2011; pp. 333–339.
10. Markalous, S.M.; Tenbohlen, S.; Feser, K. Detection and location of partial discharges in power transformers using acoustic and electromagnetic signals. IEEE Trans. Dielectr. Electr. Insul. 2008, 15, 1576–1583. [CrossRef]
11. Albarracín, R.; Ardila-Rey, J.A.; Masud, A.A. On the Use of Monopole Antennas for Determining the Effect of the Enclosure of a Power Transformer Tank in Partial Discharges Electromagnetic Propagation. Sensors 2016, 16, 148. [CrossRef] [PubMed]
12. Gulski, E.; Krivda, A. Neural networks as a tool for recognition of partial discharges. IEEE Trans. Electr. Insul. 1993, 28, 984–1001. [CrossRef]
13. Gulski, E. Computer-Aided Recognition of Partial Discharges Using Statistical Tools; Delft University: Delft, The Netherlands, 1991.
14. Masud, A.A.; Albarracín, R.; Ardila-Rey, J.A.; Muhammad-Sukki, F.; Illias, H.A.; Bani, N.A.; Munir, A.B. Artificial Neural Network Application for Partial Discharge Recognition: Survey and Future Directions. Energies 2016, 9, 574. [CrossRef]
15. Lai, K.X.; Phung, B.T.; Blackburn, T.R. Application of data mining on partial discharge Part I: Predictive modelling classification. IEEE Trans. Dielectr. Electr. Insul. 2010, 17, 846–854. [CrossRef]
16. Hao, L.; Lewin, P.L. Partial discharge source discrimination using a support vector machine. IEEE Trans. Dielectr. Electr. Insul. 2010, 17, 189–197. [CrossRef]
17. Ma, H.; Chan, J.C.; Saha, T.K.; Ekanayake, C. Pattern recognition techniques and their applications for automatic classification of artificial partial discharge sources. IEEE Trans. Dielectr. Electr. Insul. 2013, 20, 468–478. [CrossRef]
18. Günter, S.; Bunke, H. New Boosting Algorithms for Classification Problems with Large Number of Classes Applied to a Handwritten Word Recognition Task. In International Workshop on Multiple Classifier Systems; Springer: Berlin/Heidelberg, Germany, 2003; pp. 326–335.
19. Ivanov, Y.; Hamid, R. Weighted ensemble boosting for robust activity recognition in video. Mach. Graph. Vis. 2006, 15, 415.
20. IEC 60270. High-Voltage Test Techniques—Partial Discharge Measurements, 3rd ed.; International Electrotechnical Commission (IEC): New Delhi, India, 2000.
21. Opitz, D.; Maclin, R. Popular ensemble methods: An empirical study. J. Artif. Intell. Res. 1999, 11, 169–198.
22. Breiman, L. Bagging Predictors. Mach. Learn. 1996, 24, 123–140. [CrossRef]
23. Schapire, R.E. The Strength of Weak Learnability. Mach. Learn. 1990, 5, 197–227. [CrossRef]
24. Haykin, S.S. Neural Networks: A Comprehensive Foundation; Prentice Hall: Upper Saddle River, NJ, USA, 1998.
25. Giao, T.N. Partial discharge XIX: Discharge in air part I: Physical mechanisms. IEEE Electr. Insul. Mag. 1995, 11, 23–29.
26. Masud, A.A.; Stewart, B.G.; McMeekin, S.G. Application of an ensemble neural network for classifying partial discharge patterns. Electr. Power Syst. Res. 2014, 110, 154–162.

© 2017 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
