2011 IEEE International Conference on Granular Computing
Parallel Reduction Based on Condition Attributes

Dayong Deng, Dianxun Yan, Lin Chen
College of Mathematics, Physics and Information Engineering
Zhejiang Normal University
Jinhua, China, 321004
Email:
[email protected]
Abstract—In this paper, we propose a parallel algorithm for obtaining reducts. The algorithm first divides a decision system into a number of subsystems, then reduces the condition attributes in each subsystem and merges some subsystems together. This process is repeated until all subsystems are merged into one, so that condition attributes are reduced efficiently. The effectiveness of our method is analyzed and validated through experiments on seven datasets.

Keywords-rough sets; decomposition; parallel reducts; dynamic reducts;

I. INTRODUCTION

Rough set theory is a valid mathematical tool for dealing with imprecise, vague and incomplete information [1], [4]. Several researchers have tried to handle incremental, dynamic or very large data with rough set theory. Deng and Huang [9], [11] presented a method of discernibility matrix and function between two decision tables, which computes reducts in a parallel, distributed and incremental way. Liu [6] introduced an algorithm for the smallest attribute reducts under increasing data. Wang and Wang [7] proposed a distributed algorithm of attribute reduction based on discernibility matrix and function. In fact, Liu's algorithm [6] and Wang's algorithm [7] are special cases of Deng's algorithm [9], [11]. Zheng et al. [8] presented an incremental algorithm based on the positive region. Kryszkiewicz and Rybinski [10] introduced an algorithm for attribute reducts in composed decision systems. Deng [13] presented a method of attribute reduction by voting in a series of decision subsystems. Bazan et al. [2], [3] introduced the concept of dynamic reducts to deal with large amounts of data or incremental data. In [14], [15], [16], [17], [23] we extended rough set theory and introduced a new method to compute the stable reducts of a series of decision subsystems, called parallel reducts. Parallel reducts keep all the advantages of dynamic reducts while avoiding at least two of their drawbacks: the time complexity of computing dynamic reducts is NP-complete, and the method of obtaining them is not complete, because the intersection of all Pawlak reducts of a series of sub-tables may be empty. The time complexity of computing parallel reducts can be the same as that of the best algorithm for Pawlak reducts, and parallel reducts can be obtained in all cases.

In [23] we proposed a method for parallel reducts based on a matrix of attribute significances, which obtains both parallel reducts and dynamic reducts in polynomial time. In [24] we presented a method of decision system decomposition with which an inconsistent decision system is divided into a family of consistent decision sub-tables; the parallel reducts of this family are equal to the attribute reducts obtained from Hu's discernibility matrices and functions. Both parallel reducts and dynamic reducts can deal with a large amount of data, but not with a large number of condition attributes. In this paper, we propose a new method that decomposes the condition attributes of a decision system into parts and removes redundant condition attributes in a parallel way. The algorithm for condition attribute reducts is only a framework, to which heuristic information can be added to improve its efficiency.
II. PRELIMINARY KNOWLEDGE

A. Rough Sets

An information system is a pair $S = (U, A)$, where $U$ is a universe of discourse with a finite number of objects (or entities) and $A$ is a set of attributes defined on $U$. Each $a \in A$ corresponds to a function $a : U \to V_a$, where $V_a$ is called the value set of $a$. Elements of $U$ are called situations, objects or rows, interpreted as, e.g., cases or states [3], [6].

With any subset of attributes $B \subseteq A$, we associate the information set of any object $x \in U$:

$Inf_B(x) = \{(a, a(x)) : a \in B\}$

An equivalence relation, called the $B$-indiscernibility relation, is defined by

$IND(B) = \{(x, y) \in U \times U : Inf_B(x) = Inf_B(y)\}$

Two objects $x, y$ satisfying the relation $IND(B)$ are indiscernible by the attributes from $B$. $[x]_B$ denotes the equivalence class of $IND(B)$ containing $x$. An attribute $a \in B$ can be reduced (removed) in the information system $S = (U, A)$ if $IND(B - \{a\}) = IND(B)$.
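To make the indiscernibility relation concrete, the following is a minimal Python sketch. The representation of an information system as a dictionary of attribute-value rows and the helper name ind_partition are our own illustrative choices, not from the paper; the data is the decision system of Table I in Section III.

```python
from collections import defaultdict

def ind_partition(table, B):
    """Partition the universe into the equivalence classes of IND(B).

    `table` maps each object to a dict of attribute values; `B` is a
    list of attribute names. Objects with identical values on every
    attribute in B fall into the same class.
    """
    classes = defaultdict(set)
    for x, row in table.items():
        signature = tuple(row[a] for a in B)   # the information set Inf_B(x)
        classes[signature].add(x)
    return list(classes.values())

# The decision system DS of Table I (Section III); d is the decision attribute.
DS = {
    'x1': {'a': 1, 'b': 1, 'c': 1, 'e': 0, 'f': 1, 'd': 0},
    'x2': {'a': 1, 'b': 1, 'c': 1, 'e': 0, 'f': 1, 'd': 1},
    'x3': {'a': 0, 'b': 1, 'c': 1, 'e': 0, 'f': 1, 'd': 0},
    'x4': {'a': 0, 'b': 1, 'c': 1, 'e': 0, 'f': 1, 'd': 1},
    'x5': {'a': 0, 'b': 0, 'c': 0, 'e': 1, 'f': 1, 'd': 2},
}

print(ind_partition(DS, ['a', 'b']))  # [{'x1','x2'}, {'x3','x4'}, {'x5'}]
```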
In an information system $S = (U, A)$, where $B \subseteq A$ is a subset of attributes and $X \subseteq U$ is a subset of the universe, the sets

$\underline{B}(X) = \{x \in U : [x]_B \subseteq X\}$

$\overline{B}(X) = \{x \in U : [x]_B \cap X \neq \emptyset\}$

are called the $B$-lower approximation and the $B$-upper approximation of $X$, respectively. The lower approximation is also called the positive region, denoted by $POS_B(X)$.

In a decision system $DS = (U, A, d)$, where $\{d\} \cap A = \emptyset$, the decision attribute $d$ divides the universe $U$ into parts, denoted by $U/d = \{Y_1, Y_2, \ldots, Y_p\}$, where each $Y_i$ is an equivalence class. The positive region is defined as

$POS_A(d) = \bigcup_{Y_i \in U/\{d\}} POS_A(Y_i)$

Sometimes the positive region $POS_B(d)$ is also denoted by $POS_B(DS, d)$.

In rough set theory there are various definitions of attribute reducts [1], [4], [12]; the most popular is the Pawlak reduct (reduct for short) of a decision system. It can be stated as follows.

Definition 1. Given a decision system $DS = (U, A, d)$, $B \subseteq A$ is called a reduct of the decision system $DS$ if $B$ satisfies the following two conditions:
(1) $POS_B(d) = POS_A(d)$;
(2) for any $S \subset B$, $POS_S(d) \neq POS_A(d)$.
The set of all reducts of a decision system $DS$ is denoted by $RED(DS)$.
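Definition 1 can be tested directly by brute force on small tables. A sketch continuing the code above (pos_region and is_reduct are our names; the exhaustive subset test is exponential and only for illustration):

```python
from itertools import combinations

def pos_region(table, B, d='d'):
    """POS_B(d): objects whose IND(B)-class lies inside one decision class."""
    return {x
            for cls in ind_partition(table, B)   # reuses the sketch above
            for x in cls
            if len({table[y][d] for y in cls}) == 1}

def is_reduct(table, A, B, d='d'):
    """Definition 1: B preserves POS_A(d) and no proper subset of B does."""
    target = pos_region(table, A, d)
    if pos_region(table, B, d) != target:
        return False
    return all(pos_region(table, list(S), d) != target
               for r in range(len(B))
               for S in combinations(B, r))

A = ['a', 'b', 'c', 'e', 'f']
print(pos_region(DS, A))    # {'x5'}: x1/x2 and x3/x4 are inconsistent pairs
DT = {x: DS[x] for x in ('x1', 'x3', 'x5')}  # a consistent sub-table of DS
print(is_reduct(DT, A, ['b']))               # True on this sub-table
```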
B. Dynamic Reducts

The purpose of dynamic reducts is to obtain stable reducts from decision subsystems. Their definitions are given below.

Definition 2. If $DS = (U, A, d)$ is a decision system, then any system $DT = (U', A, d)$ such that $U' \subseteq U$ is called a subsystem of $DS$. By $P(DS)$ we denote the set of all subsystems of $DS$.

Let $DS = (U, A, d)$ be a decision system and $F \subseteq P(DS)$. By $DR(DS, F)$ we denote the set

$DR(DS, F) = RED(DS) \cap \bigcap_{DT \in F} RED(DT)$

Any element of $DR(DS, F)$ is called an $F$-dynamic reduct of $DS$.

Definition 3. Let $DS = (U, A, d)$ be a decision system and $F \subseteq P(DS)$. By $GDR(DS, F)$ we denote the set

$GDR(DS, F) = \bigcap_{DT \in F} RED(DT)$

Any element of $GDR(DS, F)$ is called an $F$-generalized dynamic reduct of $DS$.

C. Parallel Reducts

Both dynamic reducts and parallel reducts aim to obtain stable condition reducts from a series of decision subsystems, but dynamic reducts have at least two drawbacks [15], [16], [17], [23], [24]: (1) the problem of obtaining dynamic reducts is NP-hard; (2) the method of obtaining dynamic reducts is not complete. Parallel reducts overcome both drawbacks.

Definition 4. Let $DS = (U, A, d)$ be a decision system, $P(DS)$ the set of all subsystems of $DS$, and $F \subseteq P(DS)$. $B \subseteq A$ is called a parallel reduct of $F$ ($F$-parallel reduct for short) iff $B$ satisfies the following two conditions:
(1) For any subsystem $DT \in F$,

$POS_B(DT, d) = POS_A(DT, d)$

(2) For any $S \subset B$, there exists at least one subsystem $DT \in F$ such that

$POS_S(DT, d) \neq POS_A(DT, d)$

Definition 5. Let $PRED$ be the set of parallel reducts of $F$. The intersection of the elements of $PRED$ is called the core of the $F$-parallel reducts, denoted by

$PCORE = \bigcap PRED$
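Definition 4 differs from Definition 1 only in quantifying over a family $F$ of subsystems. A direct check, reusing pos_region and combinations from the sketches above (the family F below is an arbitrary illustration, not from the paper):

```python
def is_parallel_reduct(table, A, F, B, d='d'):
    """Definition 4: B is an F-parallel reduct of the decision system.

    F is a list of sub-universes; each U' in F induces the subsystem
    DT = (U', A, d) of Definition 2.
    """
    def sub(U_prime):
        return {x: table[x] for x in U_prime}

    # Condition (1): B preserves the positive region in every DT in F.
    if any(pos_region(sub(U), B, d) != pos_region(sub(U), A, d) for U in F):
        return False
    # Condition (2): each proper subset of B fails on at least one DT in F.
    return all(any(pos_region(sub(U), list(S), d) != pos_region(sub(U), A, d)
                   for U in F)
               for r in range(len(B))
               for S in combinations(B, r))

F = [{'x1', 'x3', 'x5'}, {'x2', 'x4', 'x5'}]   # an arbitrary family
print(is_parallel_reduct(DS, A, F, ['b']))     # True for this family
```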
III. DECISION SYSTEM DECOMPOSITION FOR CONDITION ATTRIBUTES

Both parallel reducts and dynamic reducts divide a decision system into parts, but they only decompose the elements (objects) of the decision system, not the condition attributes. In the following paragraphs we investigate how to obtain attribute reducts when the condition attributes of a decision system are decomposed into parts.

A decision system $DS = (U, A, d)$ can be divided into $DS_i = (U, A_i, d)$ $(i = 1, 2, \ldots, k;\ k < |A|)$, where $A_i \subseteq A$ and $A = \bigcup_{i=1}^{k} A_i$. When a decision system $DS = (U, A, d)$ is divided into a series of decision sub-systems $DS_i = (U, A_i, d)$ $(i = 1, 2, \ldots, k;\ k < |A|)$, its corresponding information system $IS = (U, A)$ is decomposed into a family of information sub-systems $IS_i = (U, A_i)$ $(i = 1, 2, \ldots, k;\ k < |A|)$. For these decision sub-systems and their corresponding information sub-systems, the following two theorems hold.

Theorem 1. For an information sub-system $IS_1 = (U, A_1)$ and its corresponding decision sub-system $DS_1 = (U, A_1, d)$, if an attribute $a \in A_1$ can be reduced in the information sub-system $IS_1$, it can also be reduced in the decision sub-system $DS_1$.

Proof: Suppose $a \in A_1$ can be reduced in the information sub-system $IS_1$. According to the definition of a reducible attribute in an information system, $IND(A_1 - \{a\}) = IND(A_1)$, that is, for any object $x \in U$, $[x]_{A_1 - \{a\}} = [x]_{A_1}$. Therefore $POS_{A_1 - \{a\}}(d) = POS_{A_1}(d)$, which is to say that the attribute $a$ can be reduced in the decision sub-system $DS_1$.

According to Theorem 1, a reduct of a decision system is a subset of a reduct of its corresponding information system. In some situations we can therefore regard a reduct of an information system as a reduct of its corresponding decision system.

Theorem 2. Assume $IS_1 = (U, A_1)$ and $IS_2 = (U, A_2)$ are two information sub-systems with $A_1 \subseteq A_2$. If an attribute $a \in A_1$ can be reduced in $IS_1$, it can also be reduced in $IS_2$.

Proof: Let $A_3 = A_2 - A_1$. Because the attribute $a \in A_1$ can be reduced in $IS_1$, $IND(A_1 - \{a\}) = IND(A_1)$. Since $IND(B \cup C) = IND(B) \cap IND(C)$ for any $B, C \subseteq A$, it follows that $IND(A_1 \cup A_3 - \{a\}) = IND(A_1 \cup A_3)$, i.e., $IND(A_2 - \{a\}) = IND(A_2)$. So the attribute $a$ can be reduced in $IS_2$.
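Theorems 1 and 2 justify removing attributes inside each information sub-system early. The following is a greedy sketch of such an information-system reduction, reusing ind_partition from above; the function name and the removal order are our own choices:

```python
def reduce_information_system(table, attrs):
    """Greedily drop attributes that are reducible in IS = (U, attrs).

    An attribute a is reducible when IND(attrs - {a}) = IND(attrs);
    by Theorem 2 an attribute dropped in a sub-system would still be
    reducible after that sub-system is merged into a larger one.
    """
    def partition_key(B):
        # A comparable canonical form of the IND(B) partition.
        return frozenset(frozenset(cls) for cls in ind_partition(table, B))

    B = list(attrs)
    for a in reversed(list(B)):
        rest = [x for x in B if x != a]
        if partition_key(rest) == partition_key(B):
            B = rest   # IND(B - {a}) = IND(B): a is redundant
    return B

# On IS1 = (U, {a, b, c}) of Table II, c is redundant, as in the example
# of Section III; a different removal order could return ['a', 'c'],
# which is an equally valid reduct of IS1.
print(reduce_information_system(DS, ['a', 'b', 'c']))   # ['a', 'b']
```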
According to Theorems 1 and 2, we can design an algorithm framework for attribute reducts in a decision system with many condition attributes, as follows.

Algorithm 1. The algorithm framework of condition attribute reduction in a parallel way over condition attributes (REDPCA for short).

1) Decompose the information system related to the decision system $DS$ into $k$ information sub-systems $IS_i = (U, A_i)$ $(i = 1, 2, \ldots, k)$ according to the condition attributes of the decision system, where $A = \bigcup_{i=1}^{k} A_i$.
2) For $i = 1$ to $k$ do $RED_{IS}(IS_i)$.
3) Loop the following steps until the information sub-systems are combined into one:
   a) Select several information sub-systems and combine them into one information sub-system $IS_j$ $(j = 1, 2, \ldots, m;\ m < k)$. // $m$ is the new number of information sub-systems; in the loop it becomes smaller and smaller until it equals 1.
   b) For $j = 1$ to $m$ do $RED_{IS}(IS_j)$.
4) Call $RED(DS')$. // $DS'$ is the new decision system consisting of all the reduced information sub-systems together with the decision attribute $d$.
5) Output the condition reducts of the decision system $DS = (U, A, d)$.

In Algorithm 1, $RED_{IS}()$ is any algorithm for information system reduction, and $RED()$ is any algorithm for decision system reduction. The number $k$ of information sub-systems can be chosen according to actual needs. The time complexity of the algorithm is determined by $RED_{IS}()$, $RED()$ and by how the decision system is decomposed; we leave this problem to future investigation. Algorithm 1 can compute in a parallel way whenever several information sub-systems reduce their redundant condition attributes simultaneously. The idea of Algorithm 1 is to repeatedly delete redundant condition attributes that depend on other condition attributes, and it is only an algorithm framework: heuristic information can be used both in decomposing the information system and in reducing the redundant condition attributes.

Besides heuristic information, some problems in Algorithm 1 remain to be solved. Firstly, into how many information sub-systems should the attributes be divided? Secondly, how should the information system be decomposed? Thirdly, which types of decision systems are suited to the method of Algorithm 1? We will continue to investigate these problems in our future research. A Python sketch of the framework is given below.
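The sketch below is a minimal sequential rendering of REDPCA, reusing ind_partition, pos_region and reduce_information_system from above. The round-robin decomposition, the pairwise merging order, and the greedy positive-region pass standing in for $RED()$ are our own simple choices, not prescribed by the paper; a real implementation would run steps 2 and 3b in parallel (e.g. one process per sub-system) and could plug in any $RED_{IS}()$ and $RED()$.

```python
def redpca(table, attrs, k=2, d='d'):
    """Sequential sketch of the REDPCA framework (Algorithm 1)."""
    # Step 1: decompose the condition attributes into k groups
    # (round-robin here; any partition of A works).
    groups = [attrs[i::k] for i in range(k)]
    # Step 2: RED_IS on every sub-system; the calls are independent,
    # so they could run in parallel processes.
    groups = [reduce_information_system(table, g) for g in groups]
    # Step 3: merge two sub-systems at a time and reduce again.
    while len(groups) > 1:
        merged = groups[0] + groups[1]
        groups = [reduce_information_system(table, merged)] + groups[2:]
    # Step 4: RED(DS') - here a greedy pass that drops attributes while
    # the positive region is preserved (returns one reduct, not all).
    B = groups[0]
    target = pos_region(table, B, d)
    for a in reversed(list(B)):
        rest = [x for x in B if x != a]
        if pos_region(table, rest, d) == target:
            B = rest
    return B

DT3 = {x: DS[x] for x in ('x1', 'x4', 'x5')}   # a consistent sub-table
print(redpca(DT3, ['a', 'b', 'c', 'e', 'f']))  # ['a', 'c']
```

On the full Table I, which is inconsistent, the outcome of step 4 depends on which notion of decision-system reduct $RED()$ implements; the check after Table IV revisits the worked example.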
Example. $DS = (U, A, d)$ is a decision system, shown in Table I, where $A = \{a, b, c, e, f\}$ is the set of condition attributes and $d$ is the decision attribute. Following Algorithm 1, suppose that the decision system $DS$ is decomposed into two information systems $IS_1$ and $IS_2$, shown in Table II and Table III respectively. The attributes $c$ and $f$ can be reduced in parallel in Table II and Table III. The reduced Table II and Table III are merged into Table IV, giving a new decision system $DS_2$ in which the condition attributes can be reduced further. It is easy to see that $\{a, b\}$ and $\{a, e\}$ are reducts of the new decision system $DS_2$, and they are also reducts of the decision system $DS$. In this process some reducts may be lost; for example, $\{a, c\}$ is a reduct of the decision system $DS$, but it cannot be obtained from the new decision system $DS_2$.

Table I: DECISION SYSTEM DS

U    a  b  c  e  f  d
x1   1  1  1  0  1  0
x2   1  1  1  0  1  1
x3   0  1  1  0  1  0
x4   0  1  1  0  1  1
x5   0  0  0  1  1  2

Table II: INFORMATION SYSTEM IS1

U    a  b  c
x1   1  1  1
x2   1  1  1
x3   0  1  1
x4   0  1  1
x5   0  0  0

Table III: INFORMATION SYSTEM IS2

U    e  f
x1   0  1
x2   0  1
x3   0  1
x4   0  1
x5   1  1

Table IV: DECISION SYSTEM DS2

U    a  b  e  d
x1   1  1  0  0
x2   1  1  0  1
x3   0  1  0  0
x4   0  1  0  1
x5   0  0  1  2
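The example's claims can be checked mechanically. Table I is inconsistent (x1/x2 and x3/x4 agree on every condition attribute but differ on $d$), and the reducts named in the example are exactly the minimal attribute subsets that preserve the indiscernibility classes of the full condition set. A brute-force check under that reading, reusing ind_partition and combinations from the sketches above:

```python
def ind_reducts(table, attrs):
    """All minimal B of attrs with IND(B) = IND(attrs) (brute force)."""
    def key(B):
        return frozenset(frozenset(c) for c in ind_partition(table, B))

    full = key(attrs)
    preserving = [set(S)
                  for r in range(1, len(attrs) + 1)
                  for S in combinations(attrs, r)
                  if key(S) == full]
    return [B for B in preserving if not any(P < B for P in preserving)]

print(ind_reducts(DS, ['a', 'b', 'e']))            # [{'a','b'}, {'a','e'}]
print(ind_reducts(DS, ['a', 'b', 'c', 'e', 'f']))  # also finds the lost {'a','c'}
```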
IV. EXPERIMENTS

The data in our experiments come from the UCI repository of machine learning databases, shown in Table V. We use the RIDAS system (developed by Chongqing University of Posts and Telecommunications) to compute Pawlak reducts and to normalize the data.

In the experiment, the condition attributes are first divided into several information sub-systems of two condition attributes each. Then, redundant condition attributes are reduced in every information sub-system respectively. Thirdly, adjacent pairs of groups are merged and redundant condition attributes are reduced again, until all information sub-systems are combined into a single one. At last, we reduce the redundant condition attributes in the resulting decision system. The configuration of our computer is as follows: Computer Model: DELL OPTIPLEX GX260; CPU: Intel Pentium IV 1.8GHz; Memory: 768MB RAM; Hard disk: 40GB; OS: Windows XP Professional (5.1 2600).

Table V: EXPERIMENTAL DATA

Data                              Condition  Decision  Instances
Abalone                           8          1         4177
Blood Transfusion Service Center  4          1         748
Liver Disorders                   6          1         345
Pima Indians Diabetes             8          1         768
Tic-Tac-Toe Endgame               9          1         958
Yeast                             8          1         1484
Chess (King-Rook vs. King-Pawn)   36         1         3196

The results of our experiments are given in Table VI. In Table VI, the compound symbol $x + y : z$ means that $x$ condition attributes are reduced across all information sub-systems, $y$ condition attributes are reduced in the final decision system, and $z$ is the time consumed in reducing the redundant attributes in the final decision system. The compound symbol $u : v$ means that $u$ attributes are redundant and $v$ is the time consumed in reducing them in the original decision system. The unit of time is the second.

Table VI: COMPARISON OF REDPCA AND PAWLAK REDUCTS

Data                              REDPCA           Pawlak Reducts
Abalone                           4 + 0: 3.619     4: 5.865
Blood Transfusion Service Center  1 + 0: 0.015     1: 0.021
Liver Disorders                   1 + 1: 0.026     2: 0.031
Pima Indians Diabetes             4 + 1: 0.143     5: 0.224
Tic-Tac-Toe Endgame               1 + 0: 0.238     1: 0.295
Yeast                             3 + 1: 0.399     3: 0.65
Chess (King-Rook vs. King-Pawn)   2 + 5: 23.371    7: 24.549

From Table VI we can see that several condition attributes are already reduced in the information sub-systems, and the others can be reduced in the new decision system. Because many redundant attributes have been removed beforehand, the time consumed in reducing the new decision system drops considerably.
V. CONCLUSION

In this paper, an algorithm framework for attribute reducts has been proposed. The algorithm can obtain attribute reducts of a decision system by parallel computing. We first decompose a decision system into several information subsystems and reduce the redundant attributes in each information subsystem respectively, then combine the reduced information subsystems into bigger ones and repeat the above reducing process until the information subsystems are combined into one. At last we reduce the redundant attributes in the resulting decision system. The principal idea of this algorithm framework is to divide the condition attributes into parts and then reduce redundant attributes in a parallel way. In these reducing processes heuristic information can be added so that the time complexity of the algorithm is reduced. In future research, we will investigate heuristic information for the algorithm framework of attribute reducts.
ACKNOWLEDGMENT

This work is partially supported by the Foundation of Zhejiang Education Office (No. Y200805421). The authors would like to thank Doctor Fan Min from Zhangzhou Normal University, who has given us many helpful suggestions.

REFERENCES

[1] Z. Pawlak, Rough Sets: Theoretical Aspects of Reasoning about Data. Kluwer Academic Publishers, Dordrecht, 1991.

[2] J. G. Bazan, A Comparison of Dynamic and Non-dynamic Rough Set Methods for Extracting Laws from Decision Tables. In: L. Polkowski and A. Skowron (eds.), Rough Sets in Knowledge Discovery 1: Methodology and Applications, Physica-Verlag, Heidelberg, 1998, pp. 321-365.
[3] J. G. Bazan, H. S. Nguyen, S. H. Nguyen, P. Synak and J. Wroblewski, Rough Set Algorithms in Classification Problem. In: L. Polkowski, S. Tsumoto, T. Y. Lin (eds.), Rough Set Methods and Applications, Physica-Verlag, 2000, pp. 49-88.

[4] Q. Liu, Rough Sets and Rough Reasoning. Science Press (in Chinese), 2001.

[5] G. Wang, Calculation Methods for Core Attributes of Decision Table. Chinese Journal of Computers (in Chinese), Vol. 26, No. 5, 2003, pp. 611-615.

[6] Z. Liu, An Incremental Arithmetic for the Smallest Reduction of Attributes. Acta Electronica Sinica (in Chinese), Vol. 27, No. 11, 1999, pp. 96-98.

[7] J. Wang and J. Wang, Reduction algorithms based on discernibility matrix: The ordered attributes method. Journal of Computer Science and Technology, Vol. 16, No. 6, 2001, pp. 489-504.

[8] Z. Zheng, G. Wang and Y. Wu, A Rough Set and Rule Tree Based Incremental Knowledge Acquisition Algorithm. In: Proceedings of the 9th International Conference on Rough Sets, Fuzzy Sets, Data Mining, and Granular Computing, 2003, pp. 122-129.

[9] D. Deng and H. Huang, A New Discernibility Matrix and Function. In: Proceedings of the First International Conference on Rough Sets and Knowledge Technology (RSKT2006), LNAI 4062, 2006, pp. 114-121.

[10] M. Kryszkiewicz and H. Rybinski, Finding Reducts in Composed Information Systems. In: Proceedings of the International Workshop on Rough Sets and Knowledge Discovery (RSKD'93), 1993, pp. 259-268.

[11] D. Deng, Research on Data Reduction Based on Rough Sets and Extension of Rough Set Models (Doctoral Dissertation). Beijing Jiaotong University, 2007.

[12] D. Deng, H. Huang and X. Li, Comparison of Various Types of Reductions in Inconsistent Decision Systems. Acta Electronica Sinica, Vol. 35, No. 2, 2007, pp. 252-255.

[13] D. Deng, Attribute Reduction among Decision Tables by Voting. In: Proceedings of the 2008 IEEE International Conference on Granular Computing, 2008, pp. 183-187.

[14] D. Deng, J. Wang and X. Li, Parallel Reducts in a Series of Decision Subsystems. In: Proceedings of the Second International Joint Conference on Computational Sciences and Optimization (CSO2009), Sanya, Hainan, China, 2009, pp. 377-380.

[15] D. Deng, Comparison of Parallel Reducts and Dynamic Reducts in Theory. Computer Science (in Chinese), Vol. 36, No. 8A, 2009, pp. 176-178.

[16] D. Deng, Parallel Reducts and Its Properties. In: Proceedings of the 2009 IEEE International Conference on Granular Computing, 2009, pp. 121-125.

[17] D. Deng, (F, ε)-Parallel Reducts in a Series of Decision Subsystems. In: Proceedings of the Third International Joint Conference on Computational Sciences and Optimization (CSO2010), 2010, pp. 372-376.

[18] S. Liu, Q. Sheng and Z. Shi, A New Method for Fast Computing Positive Region. Journal of Computer Research and Development (in Chinese), Vol. 40, No. 5, 2003, pp. 637-642.

[19] J. Wang, S. Chen and A. Luo, Study for Dynamic Reduct Based on Rough Set. Mini-Micro Systems (in Chinese), Vol. 27, No. 11, 2006, pp. 2057-2060.

[20] J. G. Bazan, Dynamic Reducts and Statistical Inference. In: Proceedings of the Sixth International Conference on Information Processing and Management of Uncertainty in Knowledge-Based Systems (IPMU'96), July 1-5, Granada, Spain, Vol. 2, 1996, pp. 1147-1152.

[21] X. Hu and N. Cercone, Learning in Relational Databases: a Rough Set Approach. Computational Intelligence, Vol. 11, No. 2, 1995, pp. 323-337.

[22] A. Skowron and C. Rauszer, The discernibility matrices and functions in information systems. In: Intelligent Decision Support: Handbook of Applications and Advances of the Rough Sets Theory. Kluwer Academic Publishers, Dordrecht, 1991, pp. 331-362.

[23] D. Deng, D. Yan and J. Wang, Parallel Reducts Based on Attribute Significance. LNAI 6401, 2010, pp. 336-343.

[24] D. Deng, D. Yan, J. Wang and L. Chen, Parallel Reducts and Decision System Decomposition. In: Proceedings of the Fourth Joint Conference on Computational Sciences and Optimization (CSO2011), 2011, pp. 799-803.