Three Data Partitioning Strategies for Building Local Classifiers
Indrė Žliobaitė
TU Eindhoven
2010, September 20

Set up

● Ensembles: the training set for each member comes from a randomized procedure; evaluation is based on the competence of each member
● Local classifiers: the training set for each member comes from a deterministic procedure; evaluation is based on an assigned region of competence



Specific types of ensembles, which:
● partition the data into non-intersecting regions
● train one classifier per partition
● use classifier assignment for the final decision

[Figure: input space split into five regions, Classifier 1 to Classifier 5, one classifier per region]

Set up
● We will explore three data partitioning strategies
● We will build a meta ensemble consisting of local experts

Motivation
● divide and conquer
● use different views of the same learning problem
● assess the impact of class labels on the partitions
● building blocks for handling contexts / concept drift

Partitioning

Three partitioning techniques
● Cluster the input data
● Cluster each class separately
● Partition based on a selected feature

Toy data

Clustering all (CLU)
● Cluster the input data
● Build classifiers
● Select the relevant classifier
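To make the CLU procedure concrete, here is a minimal sketch, assuming scikit-learn with k-means and logistic regression as illustrative stand-ins (the slides do not specify the clusterer or the base classifier):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

class CLUEnsemble:
    """CLU: cluster the input data, train one local classifier per
    cluster, and route each test instance to the classifier of its
    nearest cluster (the assigned region of competence)."""

    def __init__(self, k=5):
        self.k = k  # number of partitions

    def fit(self, X, y):
        self.clusterer = KMeans(n_clusters=self.k, n_init=10).fit(X)
        # Assumes every cluster contains examples of more than one class.
        self.experts = {
            c: LogisticRegression(max_iter=1000).fit(
                X[self.clusterer.labels_ == c], y[self.clusterer.labels_ == c])
            for c in range(self.k)
        }
        return self

    def predict(self, X):
        regions = self.clusterer.predict(X)  # nearest centroid = relevant classifier
        return np.array([self.experts[c].predict(x.reshape(1, -1))[0]
                         for c, x in zip(regions, X)])
```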

Clustering within classes (CLU2)
● Cluster the first class (clusters A, B)
● Cluster the second class (clusters C, D)
● Build the classifiers pairwise (A-C, A-D, B-C, B-D)
● Select the two closest clusters = the relevant classifier
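A minimal sketch of the CLU2 idea under the same assumptions (k-means per class, logistic regression as the base classifier; both are stand-ins, not specified on the slides):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

class CLU2Ensemble:
    """CLU2: cluster each class separately, train one classifier per
    pair of clusters (one from each class), and select the pair whose
    centroids are closest to the test instance."""

    def __init__(self, m=2):
        self.m = m  # clusters per class

    def fit(self, X, y):
        a, b = np.unique(y)  # assumes two classes, as in the experiments
        km_a = KMeans(n_clusters=self.m, n_init=10).fit(X[y == a])
        km_b = KMeans(n_clusters=self.m, n_init=10).fit(X[y == b])
        self.cent_a, self.cent_b = km_a.cluster_centers_, km_b.cluster_centers_
        Xa, ya, la = X[y == a], y[y == a], km_a.labels_
        Xb, yb, lb = X[y == b], y[y == b], km_b.labels_
        # One pairwise classifier per (cluster of class a, cluster of class b).
        self.experts = {
            (i, j): LogisticRegression(max_iter=1000).fit(
                np.vstack([Xa[la == i], Xb[lb == j]]),
                np.concatenate([ya[la == i], yb[lb == j]]))
            for i in range(self.m) for j in range(self.m)
        }
        return self

    def predict(self, X):
        preds = []
        for x in X:
            i = np.argmin(np.linalg.norm(self.cent_a - x, axis=1))  # closest cluster, class a
            j = np.argmin(np.linalg.norm(self.cent_b - x, axis=1))  # closest cluster, class b
            preds.append(self.experts[(i, j)].predict(x.reshape(1, -1))[0])
        return np.array(preds)
```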

Partitioning based on a feature (FEA)
● Slice the data along a selected feature and build classifiers
● Select the relevant classifier
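A minimal FEA sketch; quantile slicing on one feature is an assumption here (the slides say only "slice the data"), chosen so each slice gets roughly equal training data:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

class FEAEnsemble:
    """FEA: slice the data into k bins along one selected feature,
    train one classifier per slice, route test instances by bin."""

    def __init__(self, feature=0, k=5):
        self.feature, self.k = feature, k

    def fit(self, X, y):
        col = X[:, self.feature]
        # k-1 interior quantile edges -> k slices of roughly equal size.
        self.edges = np.quantile(col, np.linspace(0, 1, self.k + 1)[1:-1])
        bins = np.digitize(col, self.edges)
        # Assumes every slice is non-empty and contains both classes.
        self.experts = {
            b: LogisticRegression(max_iter=1000).fit(X[bins == b], y[bins == b])
            for b in range(self.k)
        }
        return self

    def predict(self, X):
        bins = np.digitize(X[:, self.feature], self.edges)
        return np.array([self.experts[b].predict(x.reshape(1, -1))[0]
                         for b, x in zip(bins, X)])
```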

Experiments

Experiments
● CLU, CLU2, FEA and the meta ensemble (MMM)
● Baselines: naive (NAI), random partitioning (RAN) and no partitioning (ALL)
● Classification datasets from various domains:
  ● dimensionalities 7-58
  ● sizes 500-44,000
  ● two classes
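The slides do not state how MMM combines the three strategies; a plausible minimal reading is a majority vote over the local experts, sketched below with the combination rule as an explicit assumption:

```python
import numpy as np
from collections import Counter

class MetaEnsemble:
    """MMM: combine the three local-expert strategies.
    Majority voting is an ASSUMED combination rule; the slides only
    say MMM is a meta ensemble over CLU, CLU2 and FEA."""

    def __init__(self, members):
        self.members = members  # e.g. [CLUEnsemble(), CLU2Ensemble(), FEAEnsemble()]

    def fit(self, X, y):
        for m in self.members:
            m.fit(X, y)
        return self

    def predict(self, X):
        votes = np.stack([m.predict(X) for m in self.members])  # (n_members, n_samples)
        # Majority vote per test instance; ties broken by first-seen label.
        return np.array([Counter(col).most_common(1)[0][0] for col in votes.T])
```

Under this reading, each member contributes a different inductive bias, which fits the summary slide's observation that the individual strategies are mediocre while the meta ensemble performs well.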

Intuition
● Partitioning makes sense if CLU, CLU2, FEA < ALL (partitioned classifiers reach lower testing error than no partitioning)
● Small sample size problem if ALL < CLU, CLU2, FEA

Results (methods ranked from best to worst per dataset)
[dataset name garbled]: … NAI > ALL > CLU > RAN > CLU2
Shut: FEA > EEE > CLU2 > CLU > RAN > ALL > NAI
Marc: EEE > FEA > CLU > CLU2 > ALL > RAN > NAI
Spam: EEE > CLU > FEA > RAN > CLU2 > ALL > NAI
Elec: EEE > CLU > RAN > FEA > CLU2 > ALL > NAI
Chess: EEE > CLU > ALL > CLU2 > RAN > FEA > NAI

How many partitions?

[Figure: two panels, 'shut' data and 'elec' data; testing error vs. number of partitions (k), with curves for ALL, CLU, CLU2, FEA, RAN and MMM]

Summary
● Better with more partitions, but there is a risk of a small training sample
● Mediocre performance individually
● The meta ensemble performs well

Outlook
● Partitioning with constraints (e.g. ignoring label information)
● Data partitioning for contextual and adaptive learning

Thank you

Transition images from www.images.com