Sampled Weighted Min-Hashing for Large-Scale Topic Mining

Gibran Fuentes-Pineda and Ivan Vladimir Meza-Ruiz

arXiv:1509.01771v2 [cs.LG] 8 Sep 2015

Instituto de Investigaciones en Matemáticas y en Sistemas, Universidad Nacional Autónoma de México

Abstract. We present Sampled Weighted Min-Hashing (SWMH), a randomized approach to automatically mine topics from large-scale corpora. SWMH generates multiple random partitions of the corpus vocabulary based on term co-occurrence and agglomerates highly overlapping inter-partition cells to produce the mined topics. While other approaches define a topic as a probabilistic distribution over a vocabulary, SWMH topics are ordered subsets of such a vocabulary. Interestingly, the topics mined by SWMH underlie themes from the corpus at different levels of granularity. We extensively evaluate the meaningfulness of the mined topics both qualitatively and quantitatively on the NIPS (1.7K documents), 20 Newsgroups (20K), Reuters (800K) and Wikipedia (4M) corpora. Additionally, we compare the quality of SWMH with Online LDA topics for document representation in classification.

Keywords: large-scale topic mining, min-hashing, co-occurring terms

1 Introduction

The automatic extraction of topics has become very important in recent years, since topics provide a meaningful way to organize, browse and represent large-scale collections of documents. Among the most successful approaches to topic discovery are directed topic models such as Latent Dirichlet Allocation (LDA) Blei et al. [2003] and Hierarchical Dirichlet Processes (HDP) Teh et al. [2004], which are directed graphical models with latent topic variables. More recently, undirected graphical models have also been applied to topic modeling (e.g., Boltzmann Machines Salakhutdinov and Hinton [2009], Srivastava et al. [2013] and Neural Autoregressive Distribution Estimators Larochelle and Lauly [2012]). The topics generated by both directed and undirected models have been shown to underlie the thematic structure of a text corpus. These topics are defined as distributions over the terms of a vocabulary, and documents in turn as distributions over topics. Traditionally, inference in topic models has not scaled well to large corpora; however, more efficient strategies have been proposed to overcome this problem (e.g., Online LDA Hoffman et al. [2010] and stochastic variational inference Mimno et al. [2012]). Undirected topic models can also be trained efficiently using approximate strategies such as Contrastive Divergence Hinton [2002]. In this work, we explore the mining of topics based on term co-occurrence. The underlying intuition is that terms consistently co-occurring in the same documents are likely to belong to the same topic.

Table 1. SWMH topic examples.

NIPS
  introduction, references, shown, figure, abstract, shows, back, left, process, . . . (51)
  chip, fabricated, cmos, vlsi, chips, voltage, capacitor, digital, inherent, . . . (42)
  spiking, spikes, spike, firing, cell, neuron, reproduces, episodes, cellular, . . . (17)
20 Newsgroups
  algorithm, communications, clipper, encryption, chip, key
  lakers, athletics, alphabetical, pdp, rams, pct, mariners, clippers, . . . (37)
  embryo, embryos, infertility, ivfet, safetybelt, gonorrhea, dhhs, . . . (37)
Reuters
  prior, quarterly, record, pay, amount, latest, oct
  precious, platinum, ounce, silver, metals, gold
  udinese, reggiana, piacenza, verona, cagliari, atalanta, perugia, . . . (64)
Wikipedia
  median, householder, capita, couples, racial, makeup, residing, . . . (54)
  decepticons', galvatron's, autobots', botcon, starscream's, rodimus, galvatron
  avg, strikeouts, pitchers, rbi, batters, pos, starters, pitched, hr, batting, . . . (21)

The resulting topics correspond to ordered subsets of the vocabulary rather than distributions over such a vocabulary. Since finding co-occurring terms is a combinatorial problem that lies in a large search space, we propose Sampled Weighted Min-Hashing (SWMH), an extended version of Sampled Min-Hashing (SMH) Fuentes Pineda et al. [2011]. SMH partitions the vocabulary into sets of highly co-occurring terms by applying Min-Hashing Broder [2000] to the inverted file entries of the corpus. The basic idea of Min-Hashing is to generate random partitions of the space so that sets with high Jaccard similarity are more likely to lie in the same partition cell. One limitation of SMH is that the generated random partitions are drawn from uniform distributions. This setting is not ideal for information retrieval applications, where weighting has a positive impact on the quality of the retrieved documents Salton and Buckley [1988], Buckley [1993]. For this reason, we extend SMH by allowing weights in the mining process, which effectively replaces the uniform distribution with a distribution based on weights. We demonstrate the validity and scalability of the proposed approach by mining topics in the NIPS, 20 Newsgroups, Reuters and Wikipedia corpora, which range from small (thousands of documents) to large scale (millions of documents). Table 1 presents some examples of mined topics and their sizes. Interestingly, SWMH can mine meaningful topics at different levels of granularity. The remainder of the paper is organized as follows. Section 2 reviews the Min-Hashing scheme for pairwise set similarity search. The proposed approach for topic mining by SWMH is described in Sect. 3. Section 4 reports the experimental evaluation of SWMH as well as a comparison against Online LDA. Finally, Sect. 5 concludes the paper with some discussion and future work.

2 Min-Hashing for Pairwise Similarity Search

Min-Hashing is a randomized algorithm for efficient pairwise set similarity search (see Algorithm 1). The basic idea is to define MinHash functions h with the property that the probability of any two sets A1 , A2 having the same MinHash value is equal to their Jaccard Similarity, i.e.,

P[h(A1) = h(A2)] = |A1 ∩ A2| / |A1 ∪ A2| ∈ [0, 1].    (1)

Each MinHash function h is realized by generating a random permutation π of all the elements and assigning the first element of a set to appear in the permutation as its MinHash value. The rationale behind Min-Hashing is that similar sets will have a high probability of taking the same MinHash value, whereas dissimilar sets will have a low probability. To cope with random fluctuations, multiple MinHash values are computed for each set from independent random permutations. Remarkably, it has been shown that the proportion of identical MinHash values between two sets is an unbiased estimator of their Jaccard similarity Broder [2000]. Taking into account the above properties, in Min-Hashing similar sets are retrieved by grouping l tuples g1, . . . , gl of r different MinHash values as follows

g1(A1) = (h1(A1), h2(A1), . . . , hr(A1))
g2(A1) = (hr+1(A1), hr+2(A1), . . . , h2·r(A1))
· · ·
gl(A1) = (h(l−1)·r+1(A1), h(l−1)·r+2(A1), . . . , hl·r(A1)),

where hj(A1) is the j-th MinHash value of A1. Thus, l different hash tables are constructed and two sets A1, A2 are stored in the same hash bucket of the k-th hash table if gk(A1) = gk(A2), k = 1, . . . , l. Because similar sets are expected to agree in several MinHash values, they will be stored in the same hash bucket with high probability. In contrast, dissimilar sets will seldom have the same MinHash value and therefore the probability that they have an identical tuple will be low. More precisely, the probability that two sets A1, A2 agree in the r MinHash values of a given tuple gk is

P[gk(A1) = gk(A2)] = sim(A1, A2)^r.

Therefore, the probability that two sets A1, A2 have at least one identical tuple is

Pcollision[A1, A2] = 1 − (1 − sim(A1, A2)^r)^l.

The original Min-Hashing scheme was extended by Chum et al. [2008] to weighted set similarity, defined as

simhist(H1, H2) = Σ_i wi · min(H1^i, H2^i) / Σ_i wi · max(H1^i, H2^i) ∈ [0, 1],    (2)

where H1^i, H2^i are the frequencies of the i-th element in the histograms H1 and H2, respectively, and wi is the weight of the element. In this scheme, instead of generating random permutations drawn from a uniform distribution, the permutations are drawn from a distribution based on element weights. This extension allows the use of popular document representations based on weighting schemes such as tf-idf, and has been applied to image retrieval Chum et al. [2008] and clustering Chum and Matas [2010].
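To make the scheme above concrete, the following Python sketch (ours, not the authors' implementation) computes r·l MinHash values per set via random permutations and groups them into the l tuples g1, . . . , gl; the toy sets and all names are illustrative.

```python
import random

def make_permutations(universe_size, r, l, seed=0):
    """One random permutation of the element ids per MinHash function (r*l in total)."""
    rng = random.Random(seed)
    perms = []
    for _ in range(r * l):
        p = list(range(universe_size))
        rng.shuffle(p)
        perms.append(p)
    return perms

def minhash_signature(s, perms):
    """MinHash value of set s under each permutation: the smallest permuted rank."""
    return [min(perm[e] for e in s) for perm in perms]

def minhash_tuples(signature, r):
    """Group the r*l MinHash values into l tuples of r values (g_1, ..., g_l)."""
    return [tuple(signature[k:k + r]) for k in range(0, len(signature), r)]

perms = make_permutations(universe_size=1000, r=3, l=100)
A1, A2 = set(range(0, 60)), set(range(20, 100))      # Jaccard similarity 40/100 = 0.4
sig1, sig2 = minhash_signature(A1, perms), minhash_signature(A2, perms)
# The proportion of matching MinHash values estimates the Jaccard similarity.
print(sum(a == b for a, b in zip(sig1, sig2)) / len(sig1))   # close to 0.4
```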

3 Sampled Min-Hashing for Topic Mining

Min-Hashing has been used in document and image retrieval and classification, where documents and images are represented as bags of words. Recently, it was also successfully applied to retrieving co-occurring terms by hashing the inverted file lists instead of the documents Chum et al. [2008], Fuentes Pineda et al. [2011].


Algorithm 1: Pairwise Similarity Search by Min-Hashing
Data: Database of sets A = A1, . . . , AN and query set q
Result: Similar sets to q in A
Indexing
  1. Compute l MinHash tuples gi(Aj), i = 1, . . . , l for each set Aj, j = 1, . . . , N in A.
  2. Construct l hash tables and store each set Aj, j = 1, . . . , N in the buckets corresponding to gi(Aj), i = 1, . . . , l.
Querying
  1. Compute the l MinHash tuples gi(q), i = 1, . . . , l for the query set q.
  2. Retrieve the sets stored in the buckets corresponding to gi(q), i = 1, . . . , l.
  3. Compute the similarity between each retrieved set and q and return those with similarity greater than a given threshold ε.
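A minimal sketch of Algorithm 1 (ours), assuming a function g that returns the l MinHash tuples of a set, e.g., built from the helpers shown earlier:

```python
from collections import defaultdict

def jaccard(a, b):
    return len(a & b) / len(a | b)

def build_index(sets, g, l):
    """Indexing: store each set in the bucket of each of its l MinHash tuples."""
    tables = [defaultdict(list) for _ in range(l)]
    for j, A in enumerate(sets):
        for k, t in enumerate(g(A)):
            tables[k][t].append(j)
    return tables

def query(q, sets, tables, g, threshold):
    """Querying: collect sets sharing at least one tuple with q, then filter by similarity."""
    candidates = set()
    for k, t in enumerate(g(q)):
        candidates.update(tables[k].get(t, []))
    return [(j, jaccard(q, sets[j])) for j in candidates
            if jaccard(q, sets[j]) > threshold]

# Example wiring with the helpers above:
# g = lambda A: minhash_tuples(minhash_signature(A, perms), r=3)
```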

In particular, Fuentes-Pineda et al. [2011] proposed Sampled Min-Hashing (SMH), a simple strategy based on Min-Hashing to discover objects from large-scale image collections. In the following, we briefly describe SMH using the notation of terms, topics and documents, although it can be generalized to any type of dyadic data. The underlying idea of SMH is to mine groups of terms with a high Jaccard Co-occurrence Coefficient (JCC), i.e.,

JCC(T1, . . . , Tk) = |T1 ∩ T2 ∩ · · · ∩ Tk| / |T1 ∪ T2 ∪ · · · ∪ Tk|,    (3)

where the numerator corresponds to the number of documents in which terms T1, . . . , Tk co-occur and the denominator is the number of documents that contain at least one of the k terms. Thus, Eq. 1 can be extended to multiple co-occurring terms as

P[h(T1) = h(T2) = · · · = h(Tk)] = JCC(T1, . . . , Tk).    (4)
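For illustration, the JCC of k terms can be computed directly from their inverted file lists (a sketch under our naming):

```python
def jcc(term_lists):
    """Jaccard Co-occurrence Coefficient of k terms, each given as the set of
    document ids in which it appears (its inverted file list)."""
    return len(set.intersection(*term_lists)) / len(set.union(*term_lists))

# jcc([{1, 2, 3, 5}, {2, 3, 5, 8}, {2, 3, 5}]) == 3 / 5
```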

From Eqs. 3 and 4, it is clear that the probability that all terms T1, . . . , Tk have the same MinHash value depends on how correlated their occurrences are: the more correlated they are, the higher the probability of taking the same MinHash value. This implies that terms consistently co-occurring in many documents will have a high probability of taking the same MinHash value. In the same way as in pairwise Min-Hashing, l tuples of r MinHash values are computed to find groups of terms with identical tuples, which become co-occurring term sets. By choosing r and l properly, the probability that a group of k terms has an identical tuple approximates a unit step function such that

Pcollision[T1, . . . , Tk] ≈ 1 if JCC(T1, . . . , Tk) ≥ s*, and ≈ 0 if JCC(T1, . . . , Tk) < s*.

Here, the selection of r and l is a trade-off between precision and recall. Given s* and r, we can determine l by setting Pcollision[T1, . . . , Tk] to 0.5, which gives

l = log(0.5) / log(1 − s*^r).
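The choice of l can be scripted directly from this expression. Rounding to the nearest integer (our assumption about how the values were derived) reproduces the numbers of tables used in Sect. 4:

```python
from math import log

def number_of_tables(s_star, r):
    """l such that P_collision = 1 - (1 - s*^r)^l equals 0.5 at JCC = s*."""
    return int(round(log(0.5) / log(1.0 - s_star ** r)))

for r in (3, 4):
    print([number_of_tables(s, r) for s in (0.15, 0.13, 0.10)])
# -> [205, 315, 693] and [1369, 2427, 6931]
```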


Fig. 1. Partitioning of the vocabulary by Min-Hashing.

In SMH, each hash table can be seen as a random partitioning of the vocabulary into disjoint groups of highly co-occurring terms, as illustrated in Fig. 1. Different partitions are generated, and groups of discriminative and stable terms belonging to the same topic are expected to lie in overlapping inter-partition cells. Therefore, we cluster co-occurring term sets that share many terms in an agglomerative manner. We measure the proportion of terms shared between two co-occurring term sets C1 and C2 by their overlap coefficient, namely

ovr(C1, C2) = |C1 ∩ C2| / min(|C1|, |C2|) ∈ [0, 1].

Since a pair of co-occurring term sets with high Jaccard similarity will also have a large overlap coefficient, finding pairs of co-occurring term sets can be sped up by using Min-Hashing, thus avoiding the overhead of computing the overlap coefficient between all pairs of co-occurring term sets. The clustering stage merges chains of co-occurring term sets with high overlap coefficients into the same topic. As a result, co-occurring term sets associated with the same topic can belong to the same cluster even if they do not share terms with one another, as long as they are members of the same chain. In general, the generated clusters have the property that for any co-occurring term set, there exists at least one co-occurring term set in the same cluster with which it has an overlap coefficient greater than a given threshold ε. We explore the use of SMH to mine topics from documents, but we judge term co-occurrence by the Weighted Co-occurrence Coefficient (WCC), defined as

WCC(T1, . . . , Tk) = Σ_i wi · min(T1^i, · · · , Tk^i) / Σ_i wi · max(T1^i, · · · , Tk^i) ∈ [0, 1],    (5)

where T1^i, · · · , Tk^i are the frequencies with which terms T1, . . . , Tk occur in the i-th document and the weight wi is given by the inverse of the size of the i-th document.
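Both coefficients are straightforward to compute from the inverted file; the sketch below (our code and naming) represents each term's list as a dictionary mapping document ids to term frequencies and uses the inverse document size as the weight, as in the text:

```python
def overlap(c1, c2):
    """Overlap coefficient between two co-occurring term sets (sets of term ids)."""
    return len(c1 & c2) / min(len(c1), len(c2))

def wcc(term_lists, doc_lengths):
    """Weighted Co-occurrence Coefficient of k terms.
    term_lists: one dict per term mapping document id -> term frequency.
    Weights w_i are the inverse of the document size."""
    docs = set().union(*term_lists)              # documents containing any of the terms
    num = den = 0.0
    for i in docs:
        freqs = [t.get(i, 0) for t in term_lists]
        w = 1.0 / doc_lengths[i]
        num += w * min(freqs)
        den += w * max(freqs)
    return num / den
```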


Algorithm 2: Topic mining by SWMH
Data: Inverted file lists T = T1, . . . , TN
Result: Mined topics O = O1, . . . , OM
Partitioning
  1. Compute l MinHash tuples gi(Tj), i = 1, . . . , l for each list Tj, j = 1, . . . , N in T.
  2. Construct l hash tables and store each list Tj, j = 1, . . . , N in the buckets corresponding to gi(Tj), i = 1, . . . , l.
  3. Mark each group of lists stored in the same bucket as a co-occurring term set.
Clustering
  1. Find pairs of co-occurring term sets with overlap coefficient greater than a given threshold ε.
  2. Form a graph G with co-occurring term sets as vertices and edges defined between pairs with overlap coefficient greater than ε.
  3. Mark each connected component of G as a topic.
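A compact sketch of the clustering stage of Algorithm 2 (ours), using the overlap function above; for clarity it checks all pairs exhaustively, whereas the paper speeds this step up with Min-Hashing:

```python
def cluster_term_sets(term_sets, eps):
    """One topic per connected component of the overlap graph."""
    n = len(term_sets)
    parent = list(range(n))                      # union-find over term sets

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    for a in range(n):
        for b in range(a + 1, n):
            if overlap(term_sets[a], term_sets[b]) > eps:
                parent[find(a)] = find(b)        # merge the two components

    topics = {}
    for a in range(n):
        topics.setdefault(find(a), set()).update(term_sets[a])
    return list(topics.values())
```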

We exploit the extended Min-Hashing scheme of Chum et al. [2008] to efficiently find such co-occurring terms. We call this topic mining strategy Sampled Weighted Min-Hashing (SWMH) and summarize it in Algorithm 2.
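The input to Algorithm 2 can be prepared with a simple pass over the tokenized corpus; this sketch (ours, with tokenization left to the caller) builds the inverted file lists and the document sizes that define the weights wi:

```python
from collections import defaultdict

def build_inverted_file(documents):
    """documents: list of token lists. Returns per-term inverted lists
    (doc id -> term frequency) and the document lengths used for the weights."""
    lists = defaultdict(dict)
    doc_lengths = []
    for i, tokens in enumerate(documents):
        doc_lengths.append(len(tokens))
        for tok in tokens:
            lists[tok][i] = lists[tok].get(i, 0) + 1
    return lists, doc_lengths

# lists, doc_lengths = build_inverted_file([doc.split() for doc in raw_texts])
```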

4 Experimental Results

In this section, we evaluate different aspects of the mined topics. First, we present a comparison between the topics mined by SWMH and SMH. Second, we evaluate the scalability of the proposed approach. Third, we use the mined topics to perform document classification. Finally, we compare SWMH topics with Online LDA topics. The corpora used in our experiments were NIPS, 20 Newsgroups, Reuters and Wikipedia^1. NIPS is a small collection of articles (3,649 documents), 20 Newsgroups is a larger collection of mail newsgroups (34,891 documents), Reuters is a medium-size collection of news (137,589 documents) and Wikipedia is a large-scale collection of encyclopedia articles (1,265,756 documents)^2. All the experiments presented in this work were performed on an Intel(R) Xeon(R) 2.66 GHz workstation with 8 GB of memory and 8 processors. However, we would like to point out that the current version of the code is not parallelized, so we did not take advantage of the multiple processors.

^1 Wikipedia dump from 2013-09-04.
^2 All corpora were preprocessed to cut off terms that appeared less than 6 times in the whole corpus.


Fig. 2. Number of mined topics for SMH and SWMH in the (a) NIPS and (b) Reuters corpora.

4.1 Comparison between SMH and SWMH

For these experiments, we used the NIPS and Reuters corpora and different values of the parameters s* and r, which define the number of MinHash tables. We set the similarity parameter s* to 0.15, 0.13 and 0.10 and the tuple size r to 3 and 4. These parameters yielded the following numbers of tables: 205, 315, 693, 1369, 2427 and 6931. Figure 2 shows the effect of weighting on the number of mined topics. First, notice the breaking point on both plots when passing from 1369 to 2427 tables. This effect corresponds to resetting s* to 0.10 when changing r from 3 to 4. Lower values of s* are stricter and therefore fewer topics are mined. Figure 2 also shows that the number of mined topics is significantly reduced by SWMH, since the colliding terms not only need to appear in similar documents but now also in similar proportions. The effect of using SWMH is also noticeable in the number of terms that compose a topic. The maximum reduction was 73% in NIPS and 45% in Reuters.

4.2 Scalability evaluation

To test the scalability of SWMH, we measured the time and memory required to mine topics in the Reuters corpus while increasing the number of documents to be analyzed. In particular, we performed 10 experiments with SWMH, each increasing the number of documents by 10%^3. Figure 3 illustrates the time taken to mine topics as we increase the number of documents and as we increase an index of complexity given by a combination of the size of the vocabulary and the average number of times a term appears in a document. As can be noticed, in both cases the time grows almost linearly and is in the thousands of seconds. The mining times for the corpora were: NIPS, 43 s; 20 Newsgroups, 70 s; Reuters, 4,446 s; and Wikipedia, 45,834 s. These times contrast with the time required by Online LDA to model 100 topics^4: NIPS, 60 s; 20 Newsgroups, 154 s; and Reuters, 25,997 s. Additionally, we set Online LDA to model 400 topics on the Reuters corpus, which took 3 days.

^3 The parameters were fixed to s* = 0.1, r = 3, and an overlap threshold of 0.7.
^4 https://github.com/qpleple/online-lda-vb was adapted to use our file formats.


Fig. 3. Time scalability for the Reuters corpus.

Table 2. Document classification for the 20 Newsgroups corpus.

Model        Topics  Accuracy  Avg. score
205          3394    59.9      60.6
319          4427    61.2      64.3
693          6090    68.9      70.7
1693         2868    53.1      55.8
2427         3687    56.2      60.0
6963         5510    64.1      66.4
Online LDA   100     59.2      60.0
Online LDA   400     65.4      65.9

Memory usage follows a behavior similar to the time figures. The maximum memory required was: NIPS, 141 MB; 20 Newsgroups, 164 MB; Reuters, 530 MB; and Wikipedia, 1,500 MB.

4.3 Document classification

In this evaluation we used the mined topics to create a document representation based on the similarity between topics and documents. This representation was used to train an SVM classifier to predict the class of each document. In particular, we focused on the 20 Newsgroups corpus for this experiment, using its typical setting for document classification (60% training, 40% testing). Table 2 shows the performance of different variants of topics mined by SWMH and of Online LDA topics. The results illustrate that the number of topics is relevant for the task: Online LDA with 400 topics is better than with 100 topics. A similar behavior can be noticed for SWMH; however, the parameter r has an effect on the content of the topics and therefore on the performance.
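The paper does not spell out the document-topic similarity used for this representation; the following sketch (our choices throughout) scores each document by the fraction of each topic's terms it contains and trains a linear SVM with scikit-learn:

```python
import numpy as np
from sklearn.svm import LinearSVC

def topic_features(doc_term_ids, topics):
    """One feature per mined topic: fraction of the topic's terms present in the
    document (our choice of similarity; the paper does not fix one)."""
    doc = set(doc_term_ids)
    return np.array([len(doc & t) / len(t) for t in topics])

def train_classifier(train_docs, train_labels, topics):
    X = np.vstack([topic_features(d, topics) for d in train_docs])
    return LinearSVC().fit(X, train_labels)
```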

4.4 Comparison between mined and modeled topics

In this evaluation we compare the quality of the topics mined by SWMH against Online LDA topics for the 20 Newsgroups and Reuters corpora. For this we measure topic coherence, which is defined as

C(t) = Σ_{m=2}^{M} Σ_{l=1}^{m−1} log [ D(vm, vl) / D(vl) ],

where D(vl) is the document frequency of the term vl and D(vm, vl) is the co-document frequency of the terms vm and vl Mimno et al. [2011].
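A direct implementation of the coherence above (our code); following the formula as written, the +1 smoothing of the numerator used by Mimno et al. is omitted, so the top terms are assumed to co-occur at least once:

```python
from math import log

def coherence(topic_terms, doc_sets, M=10):
    """Topic coherence C(t) over the first M terms of a topic.
    doc_sets maps each term to the set of documents containing it, so document
    frequency and co-document frequency are set sizes."""
    v = topic_terms[:M]
    c = 0.0
    for m in range(1, len(v)):
        for l in range(m):
            co = len(doc_sets[v[m]] & doc_sets[v[l]])
            c += log(co / len(doc_sets[v[l]]))
    return c
```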


Fig. 4. Coherence of topics mined by SWMH vs Online LDA topics in the (a) 20 Newsgroups and (b) Reuters corpora.

This metric depends on the first M elements of the topics; for our evaluations we fixed M to 10. However, we remark that the comparison is not direct, since SWMH and Online LDA topics are different in nature: SWMH topics are subsets of the vocabulary with uniform distributions, while Online LDA topics are distributions over the complete vocabulary. In addition, Online LDA generates a fixed number of topics, in the hundreds, while SWMH produces thousands of topics. For the comparison we chose the n best mined topics by ranking them with an ad hoc metric based on the co-occurrence of the first element of each topic. For the purpose of the evaluation we limited SWMH to the 500 best-ranked topics. Figure 4 shows the coherence for each corpus. In general, we can see a difference in the shape and quality of the coherence box plots. However, we notice that SWMH produces a considerable number of outliers, which calls for further research on the ranking of the mined topics and its relation to coherence.

5 Discussion and Future Work

In this work we presented a large-scale approach to automatically mine topics in a given corpus based on Sampled Weighted Min-Hashing. The mined topics consist of subsets of highly correlated terms from the vocabulary. The proposed approach is able to mine topics in corpora ranging from thousands of documents (approximately 1 minute) to millions of documents (approximately 7 hours), including topics similar to the ones produced by Online LDA. We found that the mined topics can be used to represent a document for classification. We also showed that the complexity of the proposed approach grows linearly with the number of documents. Interestingly, some of the topics mined by SWMH are related to the structure of the documents (e.g., in NIPS the words in the first topic correspond to parts of an article) and others to specific groups (e.g., team sports in 20 Newsgroups and Reuters, or the Transformers universe in Wikipedia). These examples suggest that SWMH is able to generate topics at different levels of granularity. Further work has to be done to make sense of overly specific topics or to filter them out. In this direction, we found that weighting the terms has the effect of discarding several irrelevant topics and producing more compact ones. Another alternative is


to restrict the vocabulary to the most frequent terms, as done by other approaches. Other interesting directions for future work include exploring other weighting schemes, finding a better representation of documents from the mined topics, and parallelizing SWMH.

Bibliography

David M. Blei, Andrew Y. Ng, and Michael I. Jordan. Latent Dirichlet allocation. Journal of Machine Learning Research, 3:993–1022, 2003.

Andrei Z. Broder. On the resemblance and containment of documents. Computer, 33(11):46–53, 2000.

Christopher Buckley. The importance of proper weighting methods. In Proceedings of the Workshop on Human Language Technology, pages 349–352, 1993.

Ondrej Chum and Jiri Matas. Large-scale discovery of spatially related images. IEEE Transactions on Pattern Analysis and Machine Intelligence, 32:371–377, 2010.

Ondrej Chum, James Philbin, and Andrew Zisserman. Near duplicate image detection: min-hash and tf-idf weighting. In Proceedings of the British Machine Vision Conference, 2008.

Gibran Fuentes Pineda, Hisashi Koga, and Toshinori Watanabe. Scalable object discovery: A hash-based approach to clustering co-occurring visual words. IEICE Transactions on Information and Systems, E94-D(10):2024–2035, 2011.

Geoffrey E. Hinton. Training products of experts by minimizing contrastive divergence. Neural Computation, 14(8):1771–1800, 2002.

Matthew D. Hoffman, David M. Blei, and Francis Bach. Online learning for latent Dirichlet allocation. In Advances in Neural Information Processing Systems 23, 2010.

Hugo Larochelle and Stanislas Lauly. A neural autoregressive topic model. In Advances in Neural Information Processing Systems 25, pages 2717–2725, 2012.

David Mimno, Hanna M. Wallach, Edmund Talley, Miriam Leenders, and Andrew McCallum. Optimizing semantic coherence in topic models. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 262–272. ACL, 2011.

David Mimno, Matthew D. Hoffman, and David M. Blei. Sparse stochastic inference for latent Dirichlet allocation. In International Conference on Machine Learning, 2012.

Nitish Srivastava, Ruslan Salakhutdinov, and Geoffrey Hinton. Modeling documents with a deep Boltzmann machine. In Proceedings of the Conference on Uncertainty in Artificial Intelligence, 2013.

Ruslan Salakhutdinov and Geoffrey E. Hinton. Replicated softmax: An undirected topic model. In Advances in Neural Information Processing Systems 22, pages 1607–1614, 2009.

Gerard Salton and Christopher Buckley. Term-weighting approaches in automatic text retrieval. Information Processing & Management, 24(5):512–523, 1988.

Yee Whye Teh, Michael I. Jordan, Matthew J. Beal, and David M. Blei. Hierarchical Dirichlet processes. Journal of the American Statistical Association, 101, 2004.
