Mining Class-Association Rules with Constraints

Dang Nguyen and Bay Vo
Abstract. Numerous fast algorithms for mining class-association rules (CARs) have been developed recently. However, in the real world, end-users are often interested in only a subset of class-association rules. In particular, they may consider only rules that contain a specific item or a specific set of items. The naïve strategy is to apply such item constraints in a post-processing step, but this requires considerable effort and time. This paper proposes an effective method for integrating constraints that express the presence of user-defined items (for example, (Bread AND Milk)) into the class-association rule mining process. First, we design a tree structure in which each node contains the constrained itemset. Second, we develop a theorem and a proposition for quickly pruning infrequent nodes and weak rules. Finally, we propose an efficient algorithm for mining CARs with item constraints. Experiments show that the proposed algorithm outperforms the post-processing approach.
1 Introduction

The integration of classification and association rule mining was first introduced by Liu et al. in 1998 [1]. The problem is described as follows. First, the complete set of CARs that satisfy the user-specified minimum support and minimum confidence thresholds is mined from the training dataset. Second, a subset of CARs is then selected to form the classifier. Numerous approaches have been proposed to solve this problem. Examples include CBA [1], CMAR [2], CPAR [3], MCAR [4], ACME [5], ECR-CARM [6], LOCA [7], and CAR-Miner [8].
In practice, end-users often consider only a subset of CARs, for instance, those that contain a user-defined itemset. Item constraints reduce the number of CARs and shrink the search space, so the performance of the mining process can be improved. Additionally, constrained CARs help to discover rules that are interesting or useful to a particular end-user. For example, when classifying the risk of populations for HIV infection, epidemiologists often concentrate on rules that include demographic information such as sex, age, and marital status. In this context, the present study considers constraints in the form of the presence of specific items in rule antecedents.

The main contributions of this paper are as follows. Firstly, we propose a tree structure named Single Constraint Rule-tree (SCR-tree) for efficiently mining CARs with item constraints. At the first level, the tree contains the constrained node, which includes the constrained itemset, and frequent nodes, which include frequent 1-itemsets. At the following levels, the tree contains constrained nodes only. Secondly, we develop a theorem and a proposition for quickly pruning infrequent nodes and weak classification rules. Finally, we propose a fast algorithm for mining CARs with item constraints.
2 Preliminary Concepts

Let D be a training dataset with n attributes {A1, A2, ..., An} and |D| objects (cases). Let C = {c1, c2, ..., ck} be a list of class labels. A specific value of an attribute Ai and a class are denoted by the lower-case letters ai and c, respectively.

Definition 1. An itemset is a set of pairs, each of which consists of an attribute and a specific value for that attribute, denoted by {(Ai1, ai1), (Ai2, ai2), ..., (Aim, aim)}.

Definition 2. A Constraint_Itemset is a specific itemset considered by end-users.

Definition 3. A class-association rule r has the form {(Ai1, ai1), ..., (Aim, aim)} → cj, where {(Ai1, ai1), ..., (Aim, aim)} is an itemset and cj ∈ C is a class label.

Definition 4. A strong rule is the rule with the highest confidence among the rules generated from a given node. Otherwise, a rule is called a weak rule.

Definition 5. The actual occurrence ActOcc(r) of rule r in D is the number of objects in D that match r's antecedent.

Definition 6. The support of rule r, denoted by Sup(r), is the number of objects in D that match r's antecedent and are labeled with r's class.

Definition 7. The confidence of rule r, denoted by Conf(r), is defined as Conf(r) = Sup(r) / ActOcc(r).
A sample training dataset is shown in Table 1, where each OID is an object identifier. It contains eight objects, three attributes, and two classes (1 and 2).
Consider the rule r: {(A, a1)} → 1. We have ActOcc(r) = 3 and Sup(r) = 2 because there are three objects with A = a1, of which two have class 1. In addition, Conf(r) = Sup(r) / ActOcc(r) = 2/3.
Table 1 Example of a training dataset

OID   A    B    C    Class
1     a1   b1   c1   1
2     a1   b2   c1   2
3     a2   b2   c1   2
4     a3   b3   c1   1
5     a3   b1   c2   2
6     a3   b3   c1   1
7     a1   b3   c2   1
8     a2   b2   c2   2
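To make Definitions 5-7 concrete, the following Python sketch computes ActOcc, Sup, and Conf for the rule {(A, a1)} → 1 over the dataset in Table 1 (the data encoding and function names are our own illustration, not part of the paper):

# Each object in Table 1 is encoded as (attribute-value dict, class label).
dataset = [
    ({"A": "a1", "B": "b1", "C": "c1"}, 1),
    ({"A": "a1", "B": "b2", "C": "c1"}, 2),
    ({"A": "a2", "B": "b2", "C": "c1"}, 2),
    ({"A": "a3", "B": "b3", "C": "c1"}, 1),
    ({"A": "a3", "B": "b1", "C": "c2"}, 2),
    ({"A": "a3", "B": "b3", "C": "c1"}, 1),
    ({"A": "a1", "B": "b3", "C": "c2"}, 1),
    ({"A": "a2", "B": "b2", "C": "c2"}, 2),
]

def act_occ(antecedent, data):
    # Definition 5: objects matching the rule antecedent.
    return sum(1 for obj, _ in data
               if all(obj[a] == v for a, v in antecedent.items()))

def sup(antecedent, label, data):
    # Definition 6: matching objects that also carry the rule's class.
    return sum(1 for obj, cls in data
               if cls == label and
               all(obj[a] == v for a, v in antecedent.items()))

ant, cls = {"A": "a1"}, 1
a, s = act_occ(ant, dataset), sup(ant, cls, dataset)
print(a, s, s / a)  # 3 2 0.666..., i.e., Conf(r) = 2/3 (Definition 7)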
3 Related Work

3.1 Mining Association Rules with Item Constraints

Since the introduction of mining association rules with item constraints [9], various strategies have been proposed. The naïve strategy, the post-processing approach, first mines frequent itemsets by using an algorithm such as Apriori [10], FP-Growth [11], or Eclat [12] and then filters out the itemsets that do not satisfy the item constraints in a post-processing step. Examples include Apriori+ [13] and FP-Growth+ [14]. This kind of strategy is very inefficient because all frequent itemsets must be generated and a huge number of candidate itemsets often must be tested in the last step. Another strategy, constrained itemset filtering, integrates the item constraints into the actual mining process in order to generate only the frequent itemsets that satisfy the constraints. Since this strategy can exploit the properties of the constraints much more effectively, its execution time is much lower than those of other strategies. CAP [13] and MFS-Contain-IC [15] belong to this group.

These two strategies for mining association rules with item constraints cannot be applied to mining CARs with item constraints because they do not generate constrained CARs directly. Moreover, to calculate the confidence of association rules, algorithms for mining constrained association rules have to scan the original database again to count the support of rule antecedents: since the frequent itemsets in rule antecedents do not contain the constrained itemsets, their support cannot be known directly.
3.2 CAR-Miner-Post Algorithm

Liu et al. [1] proposed a method for mining CARs based on the Apriori algorithm. However, the method is time-consuming because it generates many candidates and scans the dataset several times. Vo and Le proposed another method for mining CARs by using an Equivalence Class Rule tree (ECR-tree) [6]. An efficient algorithm, called ECR-CARM, was also proposed in their paper. ECR-CARM scans the dataset only once and uses the intersection of object identifiers to determine the support of itemsets quickly. However, it needs to generate and test a huge number of candidates because each node in the tree contains all values of one attribute. Nguyen et al. [8] modified the ECR-tree structure to speed up the mining time. In their enhanced tree, named MECR-tree, each node contains only one value of an attribute instead of the whole group. Moreover, they provided theorems to identify the support of child nodes and prune unnecessary nodes quickly. Based on the MECR-tree and these theorems, they presented the CAR-Miner algorithm for effectively mining CARs.

However, CAR-Miner cannot be applied directly to mining CARs with item constraints. To deal with item constraints, an extended version of CAR-Miner named CAR-Miner-Post is proposed here. Firstly, CAR-Miner is used to discover all CARs from the dataset. Secondly, a post-processing step filters out rules that do not satisfy the item constraints. The pseudo code of the CAR-Miner-Post algorithm is shown in Figure 1.

Input: Dataset D, minSup, minConf, and Constraint_Itemset
Output: All CARs satisfying minSup, minConf, and Constraint_Itemset
1. CARs = CAR-Miner(Lr, minSup, minConf)
2. Constraint_CARs = filterRules(CARs, Constraint_Itemset)

Procedure: filterRules(CARs, Constraint_Itemset)
3. Constraint_CARs = ∅
4. for each rule ∈ CARs do
5.     if Constraint_Itemset ⊆ rule.antecedent then
6.         Constraint_CARs = Constraint_CARs ∪ {rule}

Fig. 1 CAR-Miner-Post algorithm
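As a point of reference, the filtering step in Figure 1 amounts to a subset test over every mined rule. A minimal Python sketch of filterRules follows (the rule representation is an assumption made for illustration):

def filter_rules(cars, constraint_itemset):
    # Keep only rules whose antecedent contains every
    # (attribute, value) pair of the constrained itemset.
    constraint = set(constraint_itemset)
    return [r for r in cars if constraint <= set(r["antecedent"])]

# Example: keep rules whose antecedent contains (A, a3) and (B, b3).
cars = [
    {"antecedent": [("A", "a3"), ("B", "b3")], "class": 1},
    {"antecedent": [("C", "c1")], "class": 1},
]
print(filter_rules(cars, [("A", "a3"), ("B", "b3")]))  # first rule only

Note that this test is cheap per rule; the cost of CAR-Miner-Post lies in having to mine the full rule set first.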
For details on the CAR-Miner algorithm, please refer to the study by Nguyen et al. [8]. CAR-Miner-Post is easily implemented with slight modifications of the original CAR-Miner, but it fails to exploit the properties of the constraints. The main drawback of this approach thus lies in its computational cost. In the proposed method, we instead push the constraints as deep inside the computation as possible. Most notably, rather than building all tree nodes, which is computationally expensive, we form only the tree nodes that can generate rules satisfying the item constraints, which speeds up the process.
We use the example in Table 1 to illustrate the process of CAR-Miner-Post with minSup = 20%, minConf = 60%, and Constraint_Itemset = {(A, a3), (B, b3)}. Figure 2 shows the result of this process. In total, 13 classification rules generated from the dataset in Table 1 satisfy minSup = 20% and minConf = 60%. However, only two rules also satisfy Constraint_Itemset = {(A, a3), (B, b3)}, as shown in Table 2.
[Figure: the MECR-tree built by CAR-Miner-Post has root {} with frequent 1-itemset nodes 1 × a1(127, (2,1)), 1 × a2(38, (0,2)), 1 × a3(456, (2,1)), 2 × b2(238, (0,3)), 2 × b3(467, (3,0)), 4 × c1(12346, (3,2)), and 4 × c2(578, (1,2)), and child nodes 3 × a2b2(38, (0,2)), 3 × a3b3(46, (2,0)), 5 × a3c1(46, (2,0)), 6 × b2c1(23, (0,2)), 6 × b3c1(46, (2,0)), and 7 × a3b3c1(46, (2,0)).]

Fig. 2 Tree generated by CAR-Miner-Post for the dataset in Table 1
Table 2 Rules that satisfy minSup = 20%, minConf = 60%, and Constraint_Itemset = {(A, a3), (B, b3)}

ID   Node                     CARs                                              Sup   Conf
1    3 × a3b3(46, (2,0))      If A = a3 and B = b3 then Class = 1               2     2/2
2    7 × a3b3c1(46, (2,0))    If A = a3 and B = b3 and C = c1 then Class = 1    2     2/2
4 Mining Class-Association Rules with Item Constraints

4.1 Tree Structure

This paper proposes the SCR-tree structure, in which each node contains the following information:

1. att: a list of attributes.
2. values: a list of values, each of which belongs to one attribute in att.
3. (Obidset_1, Obidset_2, ..., Obidset_k): each Obidset_i is the set of identifiers of the objects that contain the node's itemset and belong to class ci.
4. pos: stores the position of the class with the maximum cardinality of Obidset_i, i.e., pos = arg max_{i ∈ [1,k]} |Obidset_i|.
5. total: stores the sum of the cardinalities of all Obidset_i, i.e., total = ∑_{i=1}^{k} |Obidset_i|.
6. const: indicates whether the node contains the constrained itemset.

Unlike the MECR-tree, the SCR-tree stores at the first level not only the frequent nodes containing frequent 1-itemsets but also the constrained node containing the constrained itemset. At the following levels, the SCR-tree stores only constrained nodes. Thus, it is not necessary to generate all rules, as done in CAR-Miner-Post, which noticeably improves mining time.

For example, consider the node containing itemset X = {(A, a3), (B, b3)}. X is contained in objects 4 and 6, both of which belong to class 1. Therefore, the node 3 × a3b3(46, ∅) is added to the SCR-tree if minSup is 2. This node has att = 3, values = {a3, b3}, Obidset_1 = 46, Obidset_2 = ∅, pos = 1 (marked by underlining position 1 in the list of Obidsets), and total = 2. pos is 1 because the cardinality of the Obidset for class 1 is maximal (2 versus 0).

We use a bit representation for itemset attributes. For instance, the attributes AB can be represented by 11 in binary, so the att value of these attributes is 3. Bitwise operations can then be used to quickly join itemsets, as shown in the sketch below.
4.2 Proposed Algorithm

In this section, we first introduce a theorem and a proposition as the basis of the proposed method. Then, we present an efficient and fast algorithm called Single Constraint CAR-Miner (SC-CAR-Miner) for mining CARs with item constraints based on them.

Proposition 1. To remove redundant rules, if multiple rules generated from a given node satisfy minSup and minConf, the strong rule is selected (see Definition 4). This implies that the rule has the form itemset → c_pos with Sup(r) = |Obidset_pos| ≥ minSup and Conf(r) = |Obidset_pos| / total ≥ minConf.

Assuming that minSup is 2 and minConf is 40%, the node 4 × c1(146, 23) generates two rules, namely r1: c1 → 1 and r2: c1 → 2, that satisfy minConf; r1 is selected since Conf(r1) = 3/5 > Conf(r2) = 2/5.

Theorem 1. Given two nodes att1 × values1(Obidset1_i) and att2 × values2(Obidset2_i), if att1 = att2 and values1 ≠ values2, then Obidset1_i ∩ Obidset2_i = ∅.

Proof. Since att1 = att2 and values1 ≠ values2, there exist val1 ∈ values1 and val2 ∈ values2 such that val1 and val2 have the same attribute but different values. Thus, if an object with identifier OID contains val1, it cannot contain val2. Hence, for every OID ∈ Obidset1_i, it follows that OID ∉ Obidset2_i. Consequently, Obidset1_i ∩ Obidset2_i = ∅.

Theorem 1 implies that if two itemsets X and Y have the same attributes, it is not necessary to combine them into itemset XY since Sup(XY) = 0.
Consider two nodes 1 × a1(17, 2) and 1 × a2(∅, 38), both having attribute att = 1: it can be seen that Obidset_i(a1a2) = Obidset_i(a1) ∩ Obidset_i(a2) = ∅. Similarly, 3 × a1b1(1, ∅) ∩ 3 × a1b2(∅, 2) = ∅ since both a1b1 and a1b2 have the same attributes (AB) but different values.
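In code, Theorem 1 becomes a one-line pruning test on the attribute bitmasks, sketched here with a hypothetical node representation:

def can_join(node_i, node_j):
    # Theorem 1: when both nodes cover exactly the same attributes,
    # their Obidsets are disjoint and Sup(XY) = 0, so skip the join.
    return node_i["att"] != node_j["att"]

n_a1 = {"att": 1, "values": ("a1",)}  # node 1 x a1
n_a2 = {"att": 1, "values": ("a2",)}  # node 1 x a2
n_c1 = {"att": 4, "values": ("c1",)}  # node 4 x c1
print(can_join(n_a1, n_a2))  # False: same attribute A, join pruned
print(can_join(n_a1, n_c1))  # True: attributes A and C differ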
The pseudo code of the proposed algorithm is shown in Figure 3.

Input: Dataset D, minSup, minConf, and Constraint_Itemset
Output: All CARs satisfying minSup, minConf, and Constraint_Itemset

Procedure: FIND-Lr(D, minSup, Constraint_Itemset)
1.  Constraint_Node = findConstraint_Node(D, minSup, Constraint_Itemset);
2.  Frequent_Node = findFrequent_Node(D, minSup);
3.  Lr = Constraint_Node ∪ Frequent_Node;

SC-CAR-Miner(Lr, minSup, minConf)
4.  CARs = ∅;
5.  for all li ∈ Lr.children do
6.      if li.const = false then
7.          break;
8.      GENERATE-RULE(li, minConf);
9.      Pi = ∅;
10.     for all lj ∈ Lr.children, with j > i do
11.         if lj.att ≠ li.att then                   // using Theorem 1
12.             O.att = li.att ∪ lj.att;              // using a bitwise operation
13.             O.values = li.values ∪ lj.values;
14.             O.Obidset_i = li.Obidset_i ∩ lj.Obidset_i;
15.             O.pos = arg max_i |O.Obidset_i|;
16.             O.total = ∑_i |O.Obidset_i|;
17.             O.const = true;
18.             if |O.Obidset_{O.pos}| ≥ minSup then  // using Proposition 1
19.                 Pi = Pi ∪ O;
20.     SC-CAR-Miner(Pi, minSup, minConf);

GENERATE-RULE(l, minConf)
21. conf = |l.Obidset_{l.pos}| / l.total;
22. if conf ≥ minConf then                            // using Proposition 1
23.     CARs = CARs ∪ {l.itemset → c_pos (|l.Obidset_{l.pos}|, conf)};

Fig. 3 SC-CAR-Miner algorithm for mining CARs with item constraints
Firstly, the root node of the SCR-tree (Lr) is formed as the union of the constrained node and the set of frequent nodes at the first level of the tree (Lines 1-3). Note that infrequent nodes (based on Proposition 1) are excluded from Lr. Also, nodes whose attributes are contained in the attributes of the constrained node are not added to Lr because they cannot combine with the constrained node to form frequent child nodes. Then, the procedure SC-CAR-Miner is called with the parameters Lr, minSup, and minConf to mine all CARs with item constraints from dataset D.

The SC-CAR-Miner procedure considers each constrained node li together with every other node lj in Lr with j > i (Lines 5-7 and 10) to generate a candidate child node O. For each pair (li, lj), the algorithm checks whether lj.att ≠ li.att (Line 11, using Theorem 1). If the condition holds, it computes the elements att, values, Obidset_i, pos, and total for the new node O (Lines 12-16) and marks O as a constrained node (Line 17). After computing all information of node O, the algorithm uses Proposition 1 to check whether this node can generate a rule satisfying minSup (Line 18); if so, it adds node O to Pi (Pi is initialized empty in Line 9) (Line 19). Finally, the procedure SC-CAR-Miner is called recursively with the new set Pi as its input parameter (Line 20).

The procedure GENERATE-RULE(l, minConf) generates a rule from node l. It first computes the confidence of the rule (Line 21); if the confidence satisfies minConf by Proposition 1 (Line 22), the rule is added to the set of CARs (Line 23).
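To make the control flow of Figure 3 concrete, here is a compact Python sketch of the recursive mining step. It is an illustration under our own data representation (a node is a dict holding att, values, obidsets, and const), not the authors' implementation:

def make_node(att, values, obidsets, const):
    # SCR-tree node: obidsets[i] is the set of OIDs labeled with class i.
    pos = max(range(len(obidsets)), key=lambda i: len(obidsets[i]))
    total = sum(len(o) for o in obidsets)
    return {"att": att, "values": values, "obidsets": obidsets,
            "pos": pos, "total": total, "const": const}

def generate_rule(node, min_conf, cars):
    # Lines 21-23: emit the node's strong rule if it passes minConf.
    s = len(node["obidsets"][node["pos"]])
    conf = s / node["total"] if node["total"] else 0.0
    if conf >= min_conf:
        cars.append((node["values"], node["pos"], s, conf))

def sc_car_miner(level, min_sup, min_conf, cars):
    # Lines 4-20: expand constrained nodes only; frequent-only nodes
    # (const = False) sit at the end of the level as join partners.
    for i, li in enumerate(level):
        if not li["const"]:
            break
        generate_rule(li, min_conf, cars)
        children = []
        for lj in level[i + 1:]:
            if lj["att"] == li["att"]:
                continue  # Theorem 1: the Obidsets would be disjoint
            obidsets = [oi & oj for oi, oj
                        in zip(li["obidsets"], lj["obidsets"])]
            child = make_node(li["att"] | lj["att"],
                              li["values"] + lj["values"],
                              obidsets, True)
            if len(child["obidsets"][child["pos"]]) >= min_sup:
                children.append(child)  # Proposition 1 pruning
        sc_car_miner(children, min_sup, min_conf, cars)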
4.3 Example

Consider the dataset in Table 1 with minSup = 20%, minConf = 60%, and Constraint_Itemset = {(A, a3), (B, b3)}; the SCR-tree constructed by the proposed algorithm is shown in Figure 4. The process of mining CARs with item constraints by using SC-CAR-Miner is explained as follows.

The root node (Lr = {}) contains child nodes including both the constrained node 3 × a3b3(46, ∅) and the frequent nodes {4 × c1(146, 23), 4 × c2(7, 58)} at the first level. Nodes {1 × a1(17, 2), 1 × a2(∅, 38), 1 × a3(46, 5), 2 × b2(∅, 238), 2 × b3(467, ∅)} are also frequent. However, their attributes belong to the attribute AB (3 in bit representation) of the constrained node 3 × a3b3(46, ∅), so they are removed from the root node Lr.

The procedure SC-CAR-Miner then generates nodes at the second level and lower with the parameter Lr. Note that SC-CAR-Miner is executed only for the constrained node 3 × a3b3(46, ∅). We use the node li = 3 × a3b3(46, ∅) as an example for illustrating the process of SC-CAR-Miner. li joins with all nodes following it in Lr:

• With node lj = 4 × c1(146, 23): since lj.att ≠ li.att, five elements are computed:
1. O.att = li.att ∪ lj.att = 3 | 4 = 7, or 111 in bit representation
2. O.values = li.values ∪ lj.values = a3b3 ∪ c1 = a3b3c1
3. O.Obidset_i = li.Obidset_i ∩ lj.Obidset_i = (46, ∅) ∩ (146, 23) = (46, ∅)
4. O.pos = 1
5. O.total = 2

Since |O.Obidset_pos| = 2 ≥ minSup, O is added to Pi (by Proposition 1). Therefore, we have Pi = {7 × a3b3c1(46, ∅)}.

• With node lj = 4 × c2(7, 58): since lj.att ≠ li.att, five elements are computed:
1. O.att = li.att ∪ lj.att = 3 | 4 = 7, or 111 in bit representation
2. O.values = li.values ∪ lj.values = a3b3 ∪ c2 = a3b3c2
3. O.Obidset_i = li.Obidset_i ∩ lj.Obidset_i = (46, ∅) ∩ (7, 58) = (∅, ∅)
4. O.pos = 0
5. O.total = 0

Since |O.Obidset_pos| = 0 < minSup, O is not added to Pi (by Proposition 1).
After Pi is created, SC-CAR-Miner is called recursively with parameters Pi, minSup, and minConf. Because Pi has only one node, namely 7 × a3b3c1(46, ∅), the rule from this node is generated by the procedure GENERATE-RULE.

Rules with item constraints are easily generated in the same step of traversing node li by calling the procedure GENERATE-RULE(li, minConf) (Line 8). For instance, while traversing the node li = 3 × a3b3(46, ∅), the procedure computes the confidence of the candidate rule (Line 21): conf = |li.Obidset_{li.pos}| / li.total = 2/2 = 1. The rule {(A, a3), (B, b3)} → 1 (2, 1) is added to the rule set CARs because conf ≥ minConf. The meaning of this rule is: if A = a3 and B = b3 then Class = 1 (support = 2 and confidence = 100%).
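Running the sketch given after the Figure 3 walkthrough on the Table 1 data with the same parameters reproduces these two constrained rules (this reuses the make_node and sc_car_miner helpers defined there; the absolute minSup of 2 corresponds to 20% of eight objects):

# Root level Lr: the constrained node first, then the frequent nodes
# whose attributes do not overlap the constraint (C = c1 and C = c2).
lr = [
    make_node(3, ("a3", "b3"), [{4, 6}, set()], True),  # 3 x a3b3(46, {})
    make_node(4, ("c1",), [{1, 4, 6}, {2, 3}], False),  # 4 x c1(146, 23)
    make_node(4, ("c2",), [{7}, {5, 8}], False),        # 4 x c2(7, 58)
]
cars = []
sc_car_miner(lr, min_sup=2, min_conf=0.6, cars=cars)
for values, cls, s, conf in cars:
    print(values, "-> class", cls + 1, f"(sup={s}, conf={conf:.0%})")
# ('a3', 'b3') -> class 1 (sup=2, conf=100%)
# ('a3', 'b3', 'c1') -> class 1 (sup=2, conf=100%)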
[Figure: the SCR-tree has root {} with first-level nodes 3 × a3b3(46, ∅), 4 × c1(146, 23), and 4 × c2(7, 58), and a single second-level node 7 × a3b3c1(46, ∅).]

Fig. 4 SCR-tree for the dataset in Table 1
It can be seen that the SC-CAR-Miner algorithm generates only the CARs with item constraints instead of all CARs, as done in CAR-Miner-Post. Consequently, SC-CAR-Miner lowers the storage cost while improving the mining speed.
Table 3 Characteristics of the experimental datasets

Dataset      #attributes   #classes   #distinctive values   #objects
Breast       12            2          737                   699
German       21            2          1,077                 1,000
Lymph        18            4          63                    148
Poker-hand   11            10         95                    1,000,000
5 Experiments

All experiments were conducted on a computer with an Intel Core i5 M 540 CPU at 2.53 GHz and 4 GB of RAM, running Windows 7 Enterprise (32-bit) SP1. The experimental datasets were obtained from the University of California Irvine Machine Learning Repository (http://mlearn.ics.uci.edu). The algorithms were coded in C# using MS Visual Studio .NET 2010 Express. The characteristics of the experimental datasets and the experimental results are given in Tables 3 and 4, respectively. minConf = 50% was used for all experiments.

Table 4 Experimental results
Dataset      minSup (%)   Constraint_Itemset    #CARs     CAR-Miner-Post (s)   SC-CAR-Miner (s)
Breast       1            {(2, 1)}              761       0.090                0.027
             0.5          {(2, 1)}              1,154     0.128                0.031
             0.3          {(2, 1)}              1,632     0.173                0.035
             0.1          {(2, 1)}              42,904    4.117                0.334
German       4            {(1, 0), (3, 1)}      680       0.657                0.042
             3            {(1, 0), (3, 1)}      1,554     1.118                0.058
             2            {(1, 0), (3, 1)}      4,540     2.351                0.086
             1            {(1, 0), (3, 1)}      23,429    8.061                0.268
Lymph        4            {(1, 3), (2, 2)}      1,720     3.452                0.046
             3            {(1, 3), (2, 2)}      3,624     4.794                0.061
             2            {(1, 3), (2, 2)}      25,220    17.527               0.271
             1            {(1, 3), (2, 2)}      118,884   52.361               1.223
Poker-hand   3            {(1, 4)}              5         22.104               2.798
             2            {(1, 4)}              5         22.290               2.960
             1            {(1, 4)}              5         22.365               2.979
             0.5          {(1, 4)}              110       55.853               7.027
The meaning of Constraint_Itemset = {(2, 1)} is that the obtained rules must include attribute 2 with value 1 in the rule antecedent. Similarly, when Constraint_Itemset = {(1, 3), (2, 2)}, the final rules must contain attribute 1 with value 3 and attribute 2 with value 2 in the rule antecedent.
The results show that SC-CAR-Miner is much more efficient than CAR-Miner-Post in all experiments. For example, on the Lymph dataset with Constraint_Itemset = {(1, 3), (2, 2)} and minSup = 1%, the mining time of SC-CAR-Miner is 1.223 s while that of CAR-Miner-Post is 52.361 s; for this case, SC-CAR-Miner is 42.8 times faster.
6 Conclusions and Future Work

This paper proposed an efficient method for mining CARs with item constraints, where the constraints take the form of a specific itemset. Unlike post-processing approaches, the proposed approach generates only the rules that satisfy the item constraints. The framework of the proposed algorithm is based on the SCR-tree structure, which includes only nodes containing the constrained itemset, together with a theorem and a proposition for quickly pruning infrequent nodes and weak classification rules. To validate the effectiveness and efficiency of the proposed method, a series of experiments was conducted on four datasets, namely Breast, German, Lymph, and Poker-hand. The experimental results show that the proposed method outperforms the post-processing method. In the future, the SC-CAR-Miner algorithm will be extended to mining CARs with item constraints that are Boolean expressions over the presence of items (for example, ((Shirts AND Shoes) OR Outerwear)) in rule antecedents.

Acknowledgments. This research is funded by Viet Nam National Foundation for Science and Technology Development (NAFOSTED).
References

1. Liu, B., Hsu, W., Ma, Y.: Integrating classification and association rule mining. In: 4th International Conference on Knowledge Discovery in Databases and Data Mining, pp. 80–86 (1998)
2. Li, W., Han, J., Pei, J.: CMAR: Accurate and efficient classification based on multiple class-association rules. In: IEEE International Conference on Data Mining, pp. 369–376 (2001)
3. Yin, X., Han, J.: CPAR: Classification based on predictive association rules. In: 3rd SIAM International Conference on Data Mining, pp. 331–335 (2003)
4. Thabtah, F., Cowling, P., Peng, Y.: MCAR: Multi-class classification based on association rule. In: 3rd ACS/IEEE International Conference on Computer Systems and Applications, pp. 33–39 (2005)
5. Thonangi, R., Pudi, V.: ACME: An associative classifier based on maximum entropy principle. In: Jain, S., Simon, H.U., Tomita, E. (eds.) ALT 2005. LNCS (LNAI), vol. 3734, pp. 122–134. Springer, Heidelberg (2005)
6. Vo, B., Le, B.: A novel classification algorithm based on association rules mining. In: Richards, D., Kang, B.-H. (eds.) PKAW 2008. LNCS (LNAI), vol. 5465, pp. 61–75. Springer, Heidelberg (2009)
7. Nguyen, L.T., Vo, B., Hong, T.P., Thanh, H.C.: Classification based on association rules: A lattice-based approach. Expert Systems with Applications, 11357–11366 (2012)
8. Nguyen, L.T., Vo, B., Hong, T.P., Thanh, H.C.: CAR-Miner: An efficient algorithm for mining class-association rules. Expert Systems with Applications, 2305–2311 (2013)
9. Srikant, R., Vu, Q., Agrawal, R.: Mining association rules with item constraints. In: 3rd International Conference on Knowledge Discovery in Databases and Data Mining, pp. 67–73 (1997)
10. Agrawal, R., Srikant, R.: Fast algorithms for mining association rules. In: 20th International Conference on Very Large Data Bases, pp. 487–499 (1994)
11. Han, J., Pei, J., Yin, Y.: Mining frequent patterns without candidate generation. In: ACM SIGMOD International Conference on Management of Data, pp. 1–12 (2000)
12. Zaki, M.J., Parthasarathy, S., Ogihara, M., Li, W.: New algorithms for fast discovery of association rules. In: 3rd International Conference on Knowledge Discovery in Databases and Data Mining, pp. 283–286 (1997)
13. Ng, R.T., Lakshmanan, L.V.S., Han, J., Pang, A.: Exploratory mining and pruning optimizations of constrained association rules. In: ACM SIGMOD International Conference on Management of Data, pp. 13–24 (1998)
14. Lin, W.Y., Huang, K.W., Wu, C.A.: MCFPTree: An FP-tree-based algorithm for multi-constraint patterns discovery. International Journal of Business Intelligence and Data Mining, 231–246 (2010)
15. Duong, H., Truong, T., Le, B.: An efficient algorithm for mining frequent itemsets with single constraint. In: Nguyen, N.T., van Do, T., Thi, H.A. (eds.) ICCSAMA 2013. SCI, vol. 479, pp. 367–378. Springer, Heidelberg (2013)