Deep Learning for Content-Based, Cross-Modal Retrieval of Videos and Music

Sungeun Hong ([email protected]), Woobin Im ([email protected]), and Hyun S. Yang ([email protected])
School of Computing, KAIST
Figure 1: Motivation/concept figure: Our model performs retrieval tasks between video and music, using image frames for the video parts and audio signals for the music parts. (a) Eiffel Tower scenes ⇐⇒ music based on the Paris night view ("Ballad du Paris" by François Parisi); (b) Taj Mahal scenes ⇐⇒ traditional Indian music (a song by A.R. Rahman). We define this task as content-based music-video retrieval (CBMVR).
ABSTRACT
In the context of multimedia content, a modality can be defined as a type of data item, such as text, images, music, or videos. To date, only limited research has been conducted on cross-modal retrieval of suitable music for a specified video, or vice versa. Moreover, much of the existing research relies on metadata such as keywords, tags, or associated descriptions that must be individually produced and attached after the fact. This paper introduces a new content-based, cross-modal retrieval method for video and music that is implemented through deep neural networks. The proposed model consists of a two-branch network that extracts features from the two different modalities and embeds them into a single embedding space. We train the network via a cross-modal ranking loss such that videos and music with similar semantics end up close together in the embedding space. In addition, to preserve the inherent characteristics within each modality, the proposed single-modal structure loss is also used for training. Owing to the lack of a dataset for evaluating cross-modal video-music tasks, we constructed a large-scale video-music pair benchmark. Finally, we introduce reasonable quantitative and qualitative experimental protocols. The experimental results on our dataset are expected to serve as a baseline for subsequent studies of less-mature video-to-music and music-to-video related tasks.
KEYWORDS Content-based music-video retrieval (CBMVR), deep multimodal embedding, MV-NET, music recommendation, video retrieval
1 INTRODUCTION
Music, images, and videos are widely used in everyday life, and as a result, research in this field has been actively conducted over recent decades. However, to date, the relation between music and visual modalities (e.g., image or video) has not been adequately investigated. Previous studies have focused primarily on how data of a single modality might be processed in various tasks, e.g., music genre classification, music information retrieval, melody extraction, image retrieval, and video classification [1–5]. Some pioneering approaches that explore the relation between music and visual modalities have been developed [6–10]. However, these techniques mainly use metadata, which is separately attached to each individual item of music or visual data, rather than content-based information that can be derived directly from the music or visual data itself.

We argue that content-based approaches are more appropriate than metadata-based approaches for several reasons. First, metadata-based systems require individually written meta-information for each data item, which can be impractical for large-scale datasets. Moreover, in this situation, as mentioned in [11], unpopular multimedia items can be neglected because their meta-information is scarce or difficult to obtain. In contrast, content-based approaches do not require meta-information; thus, it is possible to employ machine-learning techniques to learn the relationship between items of different modalities based on large amounts of data. Second,
metadata-based approaches often rely on a hard-coded mapping function that links concepts from a visual modality to music templates, and this imposes limitations on their applications. For example, to generate soundtracks for user-generated videos, Yu et al. [12] mapped a video's geometric meta-information to a mood tag and then recommended music with the corresponding mood tag. In contrast, content-based approaches use only information derived from the data items themselves. Indeed, because of these advantages, there has already been a shift from metadata-based approaches to content-based approaches in both the music and visual modality domains [5, 11, 13].

In this study, we undertook the task of finding music that suits a given video and vice versa using a content-based approach (see Fig. 1), which can be used for bidirectional retrieval. We refer to this task as content-based music-video retrieval (CBMVR). The main challenge in this undertaking is to design a model that is rich enough to reason within a single domain about the heterogeneous contents of images and music. The second challenge is designing a cross-modal model that has no requirements for metadata and, additionally, has no hard-coded mapping functions or assumptions about specific rules. The third challenge is the difficulty of obtaining well-matched video-music pairs, owing to the fact that the matching criteria in this case are more ambiguous than in other cross-modal tasks such as image-to-text retrieval. Our key insight is that we can leverage large-scale music video datasets by treating each audio signal and visual signal from the same music video as a ground truth pair. This is motivated by studies showing that customers usually use album cover art as a visual cue when browsing music in record stores [7]. We are also inspired by the fact that professional producers carefully create each album cover or music video considering the characteristics of the singer and the song [10]. In our approach, once large-scale ground truth pairs are obtained, the relationship between audio signals and visual signals is then learned through deep neural networks. Concretely, the contributions of this research are three-fold:

(1) We have constructed a music-video network called MV-NET, which is a deep neural network that infers the latent alignment between music and videos using only the contents of each modality. Our model associates the two modalities through a two-branch network with one branch for each modality. This is followed by an embedding layer that maps the music and videos into a single multimodal embedding space. We train the network via a cross-modal ranking loss, such that videos and music with similar semantics end up close together in the multimodal space. To preserve inherent characteristics within each modality, such as the rhythm of the music or the color of the image, the single-modal structure loss we designed was also used to train the network.

(2) We have compiled a large-scale benchmark called MV-200K, composed of 205,000 video-music pairs including official music videos, parody music videos, and user-generated videos that are categorized as "music video" based on metadata, context, and content signals in the YouTube-8M dataset. In contrast, most previous studies on music videos have used only official music videos, which limits the amount of data. To our knowledge, the largest such dataset contains 1,600 music videos [14].
In those previous studies, the musical style could be biased because official music videos are produced primarily for popular songs.
(3) Finally, we have proposed reasonable experimental protocols for testing CBMVR systems, and using these protocols, we have carried out experiments whose results can serve as a baseline for subsequent studies of less-mature video-music related tasks. In particular, for quantitative performance measurements of CBMVR, we used Recall@K, which is a standard protocol widely used in other cross-modal tasks. We have also conducted a user test that examines users' preferences for videos that suit given music, and vice versa. Additionally, we have carried out a qualitative investigation of the query results returned by MV-NET.
2 RELATED WORK
Early investigations of the relationship between music and visual modalities have involved studies using album covers as the visual modality [6–8, 15]; more recent research has included studies using music videos [9, 10, 14]. These approaches can be divided into two categories: retrieval and classification.
2.1 Video-Music Related Retrieval
As retrieval is based on matching, we provide a short survey here of techniques that have addressed matching between videos and music. Chao et al. [8] introduced a technique to recommend suitable music for photo albums. The technique utilized a cross-modal graph in which synsets of mood tags obtained from images and music are used as vertices, and relations between synsets are expressed as edges. Sasaki et al. [16] proposed a one-directional video-to-music recommendation using the valence-arousal plane. The study most closely related to ours is [10], in which semantic representations of music and images for cross-modal matching tasks were developed. To connect music and images, they adopted text from lyrics as a go-between medium (i.e., metadata) and applied canonical correlation analysis [17]. These existing studies usually use metadata (e.g., mood tags and lyrics) for matching between different modalities. In contrast, we directly connect items from the music and visual modalities using only the contents of the modalities' data items. Specifically, we embed features extracted from music and videos into a multimodal space and learn the relationship between them by training a deep neural network.
2.2 Video-Music Related Classification
The task of classification has also been explored in previous studies. To predict music genre tags, Libeks et al. [15] proposed a method based on color and texture features in promotional photographs and album covers. Music genre classification based on a set of color features and affective features extracted from music videos has also been proposed [14]. In that study, a set of various approaches based on psychological or perceptive models was applied to capture different kinds of semantic information. Among approaches that consider deep neural networks, the work by Acar et al. [9] is closely related to ours. They presented a framework for the affective labeling of music videos in which higher-level representations are learned from low-level audio-visual features using CNNs. However, unlike our model, they use a deep architecture only as a mid-level feature extractor for music and images and do not learn the relationship between features extracted from each modality. To the best of our knowledge, the work we present here is the first study in which a trainable neural network is used to infer the relationship (specifically, latent alignment) between video and music without metadata.
Figure 2: Outline of the proposed method: Given a video and its associated music as input, we extract video features through a pre-trained convolutional neural network (CNN) and extract music features through low-level audio feature extractors. For each modality, the features are then aggregated and fed into a two-stream neural network that is followed by an embedding layer. The embedding network is trained by two losses with different purposes; these are discussed in detail in Sec. 3.3.
3 THE PROPOSED METHOD
Our goal was to design a model that infers the latent alignment between video and music (see Fig. 2), enabling the retrieval of music that suits a given video and vice versa.
3.1 Music Feature Extraction
To represent the music part, we follow the approach in [18, 19], which first decomposes an audio signal into harmonic and percussive components. We then apply log-amplitude scaling to each component to avoid numerical underflow. Next, we slice the components into shorter segments called local frames (or windowed excerpts) and extract multiple features from each component of each frame. Frame-level features. (1) Spectral features: The first type of audio feature is derived from spectral analysis. In particular, we first apply the fast Fourier transform and the discrete wavelet transform to the windowed signal in each local frame. From the magnitude spectral results, we compute summary features including the spectral centroid, the spectral bandwidth, the spectral rolloff, and the first- and second-order polynomial features of a spectrogram [20]. (2) Mel-scale features: To extract more meaningful features, we compute the Mel-scale spectrogram of each frame as well as the Mel-frequency Cepstral Coefficients (MFCC). Further, to capture variations of timbre over time, we also use delta-MFCC features, which are the first- and second-order differences in MFCC features over time. (3) Chroma features: While Mel-scaled representations efficiently capture timbre, they provide poor resolution of pitches
and pitch classes. To rectify this issue, we use chroma short-time Fourier transform [21] as well as chroma energy normalized [22]. (4) Etc.: We use the number of time domain zero-crossings as an audio feature in order to detect the amount of noise in the audio signal. We also use the root-mean-square energy for each frame. Table 1 summarizes the audio features utilized in our approach. Music-level features. For each frame-level feature, we calculate three music-level features that capture properties of the entire music signal. In particular, we calculate first- and second-order statistics (i.e., mean and variance) of each frame-level feature as well as maximum top K ordinal statistics, which provide vectors consisting of the K largest values for each dimension of frame-level features. Finally, all the calculated music-level features are concatenated, and the result is passed from the feature extraction phase to the embedding neural network, as shown in the music-stream portion of Fig. 2.
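To make the feature pipeline above concrete, the following is a minimal sketch of the frame-level and music-level extraction using librosa. The frame/hop sizes, the choice of 20 MFCCs, and K = 5 are assumptions for illustration, not values reported in the paper, and the exact feature set here is only an approximation of Table 1.

```python
# Sketch of the Sec. 3.1 music feature pipeline (assumed parameters).
import numpy as np
import librosa

def frame_level_features(path, sr=22050, n_fft=2048, hop=512):
    y, _ = librosa.load(path, sr=sr)
    harmonic, percussive = librosa.effects.hpss(y)        # harmonic/percussive decomposition
    feats = []
    for comp in (harmonic, percussive):
        S = np.abs(librosa.stft(comp, n_fft=n_fft, hop_length=hop))
        mel = librosa.feature.melspectrogram(S=S**2, sr=sr)
        log_mel = librosa.power_to_db(mel)                # log-amplitude scaling
        mfcc = librosa.feature.mfcc(S=log_mel, n_mfcc=20)
        feats += [
            librosa.feature.spectral_centroid(S=S, sr=sr),
            librosa.feature.spectral_bandwidth(S=S, sr=sr),
            librosa.feature.spectral_rolloff(S=S, sr=sr),
            librosa.feature.poly_features(S=S, order=2),  # low-order polynomial fits
            log_mel,
            mfcc,
            librosa.feature.delta(mfcc, order=1),
            librosa.feature.delta(mfcc, order=2),
            librosa.feature.chroma_stft(S=S**2, sr=sr),
            librosa.feature.chroma_cens(y=comp, sr=sr, hop_length=hop),
            librosa.feature.zero_crossing_rate(comp, frame_length=n_fft, hop_length=hop),
            librosa.feature.rms(S=S),
        ]
    n = min(f.shape[1] for f in feats)                    # align frame counts across features
    return np.vstack([f[:, :n] for f in feats])           # shape: (feature_dims, num_frames)

def music_level_features(frame_feats, top_k=5):
    # Mean, variance, and the K largest values per dimension, concatenated into one vector.
    top = np.sort(frame_feats, axis=1)[:, -top_k:].ravel()
    return np.concatenate([frame_feats.mean(axis=1), frame_feats.var(axis=1), top])
```

The resulting vector from `music_level_features` corresponds to the concatenated music-level feature that is fed into the music branch of the embedding network.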
3.2 Video Feature Extraction
Table 1: Low-level audio features used in our approach

Type        Audio features
Spectral    spectral centroid, spectral bandwidth, spectral rolloff, poly-feature (1st and 2nd order)
Mel-scale   MFCC, Mel-spectrogram, delta-MFCC (1st and 2nd order)
Chroma      chroma-CENS, chroma-STFT
Etc.        zero-crossing rate, root-mean-square energy

To represent the video part, we employ the approach given in [23].

Frame-level features. Because the MV-200K dataset contains a large number of videos, it is impractical to train the deep network from scratch. Instead, we extract frame-level features from the individual frames using an Inception network [4] previously trained on the large-scale ImageNet dataset [24]. We then apply whitened principal component analysis (WPCA) so that the normalized features are approximately multivariate Gaussian with zero mean and identity covariance. As indicated by [23], this makes the gradient steps across the feature dimensions independent; as a result, the learning process for the embedding space, which we run later, converges quickly and is relatively insensitive to changes in the learning rate.

Video-level features. Once frame-level features are obtained, we apply feature aggregation techniques and then perform concatenation as in Sec. 3.1. We then apply a global normalization process, which subtracts the mean vector from all the features, and we apply principal component analysis (PCA) [25]. Finally, we perform L2 normalization to obtain video-level features. For further details, please refer to [23].
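As a rough illustration of the video-side pipeline just described, the sketch below uses randomly generated stand-ins for the pre-trained Inception frame features and sklearn's PCA for both the whitening step and the final dimensionality reduction; all dimensions and the top-K value are placeholders rather than settings from the paper.

```python
# Sketch of the Sec. 3.2 video feature pipeline (assumed shapes and dimensions).
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
# One (num_frames, feat_dim) array per video; real Inception pool features would be 2048-d.
train_video_frames = [rng.standard_normal((int(rng.integers(60, 120)), 512)) for _ in range(200)]

# Whitened PCA fitted once on all training frames: zero mean, identity covariance afterwards.
all_frames = np.concatenate(train_video_frames, axis=0)
wpca = PCA(n_components=256, whiten=True).fit(all_frames)

def video_level_feature(frames, top_k=5):
    """Aggregate one video's whitened frame features (mean, variance, top-K per dimension)."""
    w = wpca.transform(frames)
    top = np.sort(w, axis=0)[-top_k:, :].ravel()
    return np.concatenate([w.mean(axis=0), w.var(axis=0), top])

# Global normalization (mean subtraction), PCA, and L2 normalization over all videos.
V = np.stack([video_level_feature(f) for f in train_video_frames])
V = V - V.mean(axis=0)
V = PCA(n_components=128).fit_transform(V)
V /= np.linalg.norm(V, axis=1, keepdims=True) + 1e-12     # final video-level features
```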
3.3 Multimodal Embedding
The final step is to embed the separately extracted features of the heterogeneous music and video modalities into a single, common vector space. The music and video embedding occurs via a two-branch neural network, with a separate branch for each modality. The network is trained by cross-modal ranking constraints and single-modal structure constraints, which serve different purposes.

The two-branch neural network. The extracted music and video features are fed into separate branches of a two-branch neural network. Inspired by multimodal image-text embeddings [26], each branch contains several fully connected (FC) layers with rectified linear unit (ReLU) nonlinearities. Recall that the video features are extracted from a pre-trained deep architecture, whereas the music features are simply concatenated statistics of low-level audio features. To compensate for the relatively low-level audio features, we make the audio branch of the network deeper than the video branch. In particular, our empirical tests found that using three FC layers for the music and two for the video is effective. The final outputs of the two branches are L2-normalized for convenient calculation of cosine similarity, which is used in our ranking loss function, described next.

Cross-modal ranking constraint. For cross-modal matching, we wish to ensure that a video is closer to suitable music than to unsuitable music, and vice versa. We use a cross-modal ranking constraint inspired by triplet ranking loss [27], in which similar input items are mapped to nearby feature vectors in the embedding space for a homogeneous, single modality. Assume triplets of items consisting of an arbitrary anchor, a positive cross-modal sample that is a ground truth pair item, and a negative cross-modal sample that is not paired with the anchor, as can be seen in Fig. 3. The aim of the cross-modal ranking constraint is to minimize the distance between an anchor and a positive sample while maximizing the distance between an anchor and a negative sample.

Single-modal structure constraint. In the process of embedding with the cross-modal ranking constraint, which pulls similar music and videos close together, the inherent characteristics of each modality may be destroyed. These inherent characteristics can include, for example, rhythm, tempo, or timbre in music, and brightness, color, or texture of image frames. To address this issue, we devised a novel single-modal structure loss, described below.
Figure 3: Two constraint concepts for multimodal embedding. (a) Cross-modal ranking constraint, which maps closely related music and videos to feature vectors that are close to each other in the multimodal space. (b) Single-modal structure constraint, which allows the relative positional relationships between feature vectors within a single-modal space to be retained in the multimodal space. Identical shapes signify feature vectors obtained from the same modality (e.g., video or music); identical colors indicate a ground truth matching pair. Here, the single-modal space and the multimodal space represent spaces before and after passing through the embedding network, respectively.
Suppose that there are three single-modal features extracted from different music videos using the process in Sec. 3.1 or Sec. 3.2. As these features have not yet been fed into the embedding network for cross-modal matching, they maintain modality-specific characteristics. During training, to preserve these characteristics in the multimodal embedding space, we leverage the relative distances between items in the single-modal space, as depicted in Fig. 3.

Embedding network loss. Given a mini-batch of N music videos, we can obtain N pairs of embedded features of the form (v_i, m_i), where i ranges over the mini-batch. Here, v_i and m_i are the features from the video and music of the i-th music video after passing through the two-stream neural network (i.e., the embedding network). In this scenario, for the cross-modal ranking constraint, we can build two types of triplets, (v_i, m_i, m_j) and (m_i, v_i, v_j), where i ≠ j are two different indices of music videos. In addition, for the single-modal structure constraint, we use two further types of triplets, (m_i, m_j, m_k) and (v_i, v_j, v_k), where i ≠ j ≠ k are three different indices of music videos. Taking all these triplets into consideration, we set the network loss as
$$
L = \lambda_1 \sum_{i \neq j} \max(0,\; v_i^\top m_j - v_i^\top m_i + e)
  + \lambda_2 \sum_{i \neq j} \max(0,\; m_i^\top v_j - m_i^\top v_i + e)
  + \lambda_3 \sum_{i \neq j \neq k} C_{ijk}(m)\,(m_i^\top m_j - m_i^\top m_k)
  + \lambda_4 \sum_{i \neq j \neq k} C_{ijk}(v)\,(v_i^\top v_j - v_i^\top v_k)
\tag{1}
$$
Here, e is a margin constant for the hinge loss. λ_1 and λ_2 balance the impact of the cross-modal ranking loss for video-to-music and music-to-video matching, respectively. Recall that both video and music features are L2-normalized; therefore, as a distance measure, we use the negative value of the dot product between the video feature and the music feature. The impact of the single-modal structure loss within music and within video is balanced by λ_3 and λ_4, respectively. The function C(·) in Eq. 1 is defined as

$$
C_{ijk}(x) = \operatorname{sign}(x_i^\top x_k - x_i^\top x_j) - \operatorname{sign}\bigl({x'_i}^\top x'_k - {x'_i}^\top x'_j\bigr)
\tag{2}
$$

where sign(x) returns 1 if x is positive, 0 if x equals zero, and −1 if x is negative. Here, x_i, x_j, and x_k are the trainable features in the multimodal space, and x'_i, x'_j, and x'_k are the corresponding single-modal features before passing through the embedding network.
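The following is a sketch of Eq. (1) and Eq. (2) in PyTorch, under the assumption that v and m are the L2-normalized mini-batch outputs of the two branches and v0, m0 are the corresponding pre-embedding single-modal features. The margin, the λ weights, and the dense enumeration of all triplets are illustrative choices; with the batch size used in the paper, one would sample triplets rather than materialize all N^3 combinations.

```python
import torch

def cbmvr_loss(v, m, v0, m0, margin=0.1, lambdas=(1.0, 1.0, 0.1, 0.1)):
    """v, m: (N, d) L2-normalized embedded video/music features of one mini-batch.
    v0, m0: single-modal features before the embedding network (any dimension)."""
    l1, l2, l3, l4 = lambdas
    n = v.size(0)
    sim = v @ m.t()                                      # sim[i, j] = v_i^T m_j
    pos = sim.diag()                                     # similarities of ground truth pairs
    off_diag = ~torch.eye(n, dtype=torch.bool, device=v.device)

    # Cross-modal ranking terms (first two sums of Eq. 1).
    v2m = torch.clamp(sim - pos.unsqueeze(1) + margin, min=0)[off_diag].sum()
    m2v = torch.clamp(sim.t() - pos.unsqueeze(1) + margin, min=0)[off_diag].sum()

    def structure(x, x0):
        # Single-modal structure terms (last two sums of Eq. 1) with C_ijk from Eq. 2;
        # the sign terms act as constant coefficients since sign() has zero gradient.
        s, s0 = x @ x.t(), x0 @ x0.t()
        d = s.unsqueeze(2) - s.unsqueeze(1)              # d[i, j, k] = x_i^T x_j - x_i^T x_k
        d0 = s0.unsqueeze(2) - s0.unsqueeze(1)
        c = torch.sign(-d.detach()) - torch.sign(-d0)    # C_ijk(x)
        idx = torch.arange(n, device=x.device)
        distinct = (idx[:, None, None] != idx[None, :, None]) & \
                   (idx[:, None, None] != idx[None, None, :]) & \
                   (idx[None, :, None] != idx[None, None, :])   # keep only i != j != k
        return (c * d)[distinct].sum()

    return l1 * v2m + l2 * m2v + l3 * structure(m, m0) + l4 * structure(v, v0)

# Example with random mini-batch features (batch of 8, embedding dimension 512):
emb_v = torch.nn.functional.normalize(torch.randn(8, 512), dim=1)
emb_m = torch.nn.functional.normalize(torch.randn(8, 512), dim=1)
loss = cbmvr_loss(emb_v, emb_m, torch.randn(8, 1024), torch.randn(8, 4096))
```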
4 CONSTRUCTION OF THE DATASET AND THE EMBEDDING NETWORK
We needed to build a dataset of music videos as the main source of music-video pairs. Therefore, for several reasons, we used videos from YouTube-8M [23], a large-scale labeled video dataset that consists of millions of YouTube video IDs and associated labels. The main reason is that the YouTube-8M dataset provides a large number of music videos, including official music videos, parody music videos, and user-generated videos with background music. Second, YouTube-8M officially provides visual features obtained by a state-of-the-art CNN model, and we could use these features directly as the starting point for the embedding process of video features. For the music features, we downloaded all videos with a "music video" label and then separated out the audio component using FFmpeg [28]. Thus, we obtained 205,000 video-music pairs, and the sets for training, validation, and test comprised 200K, 4K, and 1K pairs, respectively.

We can now provide additional details of the two-branch embedding network. Given that high-level video features are extracted from the pre-trained CNN, we stacked only two FC layers with 2048 and 512 nodes for the video branch. On the other hand, the music features are extracted from low-level audio features, so for audio we stacked three FC layers with 2048, 1024, and 512 nodes. Our approach uses the Adam optimizer [29] with a learning rate of 0.0003 and a dropout scheme with probability 0.9. We selected a training mini-batch size of 2,000 video-music pairs.
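The sketch below instantiates the two-branch embedding network with the layer sizes and optimizer settings reported above. The input feature dimensions are placeholders (they depend on the exact feature extraction configuration), and the dropout value is interpreted here as a keep probability of 0.9 (i.e., a drop rate of 0.1), which is one possible reading of the setting above.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MVNet(nn.Module):
    """Two-branch embedding network: three FC layers for music, two FC layers for video."""
    def __init__(self, music_dim, video_dim, embed_dim=512, drop=0.1):
        super().__init__()
        self.music = nn.Sequential(
            nn.Linear(music_dim, 2048), nn.ReLU(), nn.Dropout(drop),
            nn.Linear(2048, 1024), nn.ReLU(), nn.Dropout(drop),
            nn.Linear(1024, embed_dim),
        )
        self.video = nn.Sequential(
            nn.Linear(video_dim, 2048), nn.ReLU(), nn.Dropout(drop),
            nn.Linear(2048, embed_dim),
        )

    def forward(self, music_feat, video_feat):
        # L2-normalized outputs so that dot products are cosine similarities.
        m = F.normalize(self.music(music_feat), dim=1)
        v = F.normalize(self.video(video_feat), dim=1)
        return v, m

model = MVNet(music_dim=4096, video_dim=1024)              # placeholder input dimensions
optimizer = torch.optim.Adam(model.parameters(), lr=3e-4)  # Adam with learning rate 0.0003
```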
5 EXPERIMENTS FOR VIDEO-MUSIC RETRIEVAL
To test the effectiveness of our model, we carried out three experimental evaluations of the CBMVR task.
5.1 The Recall@K Metric
The protocol. Recall@K is a standard protocol for cross-modal retrieval, especially in image-text retrieval [26, 33]. For a given value of K, it measures the percentage of queries in a test set for which at least one correct ground truth match was ranked among the top K matches. For example, if we consider video queries that request suitable music, then Recall@10 tells us the percentage of
video queries for which the top ten results include a ground truth music match. To our knowledge, our study is the only work using Recall@K to quantitatively evaluate the performance of bidirectional video-music retrieval. The results we present in this section provide a baseline for future CBMVR tasks.

Key factors. Motivated by [32, 25], we used a test set of 1,000 videos and their corresponding music tracks. Several key factors affected the performance of our model under the Recall@K protocol, as shown in Table 2. As a baseline, we provide the expected value of Recall@K for random retrieval from 1,000 video-music pairs. All experimental results in the table were obtained from our model using only the cross-modal ranking loss (i.e., λ_3 and λ_4 were set to zero). The first section of the table shows the influence of the constraint weights. It is apparent that performance generally improves by giving more weight to λ_1 than to λ_2. However, we confirmed that setting λ_1 to five or more does not improve Recall@K. We also analyzed the results by changing the number of layers in the music and video branches of the embedding network. Using only one layer for each modality resulted in low performance, from which it can be inferred that a deeper network is needed to generate an effective embedding. As the number of layers increases, the performance tends to be higher. We also confirmed that when the number of layers for the video part is more than three, the network weights do not converge during training. This appears to be due to the degradation problem that often occurs in deep networks, as mentioned in [34].

Comparison with other methods. Table 3 presents comparisons of MV-NET with previous work. As baselines, we present results from linear models such as PCA [25] and techniques based on partial least squares (PLS) [30]. We also present the result of canonical correlation analysis (CCA) [31], which finds linear projections that maximize the correlation between projected vectors from the two modalities. To fairly compare the conventional linear models with our own model using the bidirectional loss (i.e., the cross-modal ranking loss), we evaluated the performance of our model without ReLU nonlinearities. We also present the results of our linear model using a one-directional loss (i.e., λ_1, λ_3, and λ_4 were set to zero), which is very similar to the WSABIE image-text model [35]. Inspired by a technique [36] that applied the domain-adversarial neural network (DANN) [32] to image-text retrieval, we also applied DANN to our model. From the experiment, it is clear that it is difficult to obtain good retrieval performance with conventional linear models. In our nonlinear models, going from one-directional to bidirectional constraints improves the retrieval results by 2-3% for R@1 and by a larger amount for the R@10 and R@25 protocols. We also confirmed that applying DANN to our model gives similar or even worse results, suggesting that vanilla domain adaptation is not suitable for CBMVR tasks. It is apparent from this table that using both the cross-modal and single-modal losses results in a significant performance improvement over using the cross-modal loss only for R@10 and R@25.
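For concreteness, the following is a minimal sketch of the Recall@K protocol on a test set of embedded pairs, assuming row i of the query and gallery matrices corresponds to the i-th ground truth video-music pair and that both matrices are L2-normalized; the random embeddings stand in for the real test-set features.

```python
import numpy as np

def recall_at_k(query_emb, gallery_emb, ks=(1, 10, 25)):
    sim = query_emb @ gallery_emb.T                      # cosine similarity matrix
    order = np.argsort(-sim, axis=1)                     # gallery items sorted by similarity
    # Rank (0-based) at which each query's ground truth item is retrieved.
    ranks = np.array([int(np.where(order[i] == i)[0][0]) for i in range(len(sim))])
    return {k: float(np.mean(ranks < k)) for k in ks}

# Example with 1,000 random embeddings in place of the test-set video/music embeddings:
rng = np.random.default_rng(0)
video_emb = rng.standard_normal((1000, 512))
video_emb /= np.linalg.norm(video_emb, axis=1, keepdims=True)
music_emb = rng.standard_normal((1000, 512))
music_emb /= np.linalg.norm(music_emb, axis=1, keepdims=True)
print(recall_at_k(video_emb, music_emb))                 # video-to-music retrieval
print(recall_at_k(music_emb, video_emb))                 # music-to-video retrieval
```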
5.2 A Human Preference Test
Applying Recall@K to the relatively subjective task of cross-modal video-music retrieval is one way to evaluate performance, but it might not be the most appropriate protocol. Therefore, we also conducted a subjective evaluation with human subjects.
Table 2: Retrieval results on MV-200K with respect to the key factors of MV-NET

                                      Music-to-video           Video-to-music
Parameter                             R@1    R@10   R@25       R@1    R@10   R@25
Expected value of R@K                 0.1    1.0    2.5        0.1    1.0    2.5
Constraint weight (λ1, λ2)
  (1, 1)                              8.4    19.3   28.6       7.3    17.6   29.2
  (1, 3)                              6.1    12.3   18.8       5.8    11.4   17.3
  (1, 5)                              5.2    12.0   17.5       4.7    11.6   16.9
  (3, 1)                              8.5    19.2   29.1       7.2    20.5   27.5
  (5, 1)                              8.7    18.7   27.4       7.7    18.0   29.3
Number of layers (Nmusic, Nvideo)
  (1, 1)                              4.3    16.1   25.6       2.9    15.0   24.9
  (2, 2)                              7.5    16.5   27.3       7.3    16.0   26.4
  (2, 3)                              7.4    16.8   26.4       6.7    16.2   25.9
  (3, 2)                              8.5    19.2   29.1       7.2    20.5   27.5
  (4, 2)                              8.2    18.5   27.3       7.5    16.5   25.5

Table 3: Comparison of results on MV-200K using different methods

                                                 Music-to-video           Video-to-music
           Method                                R@1    R@10   R@25       R@1    R@10   R@25
Linear     PCA [25]                              0.0    0.5    2.1        0.1    1.1    2.7
           PLSSVD [30]                           0.7    5.9    13.0       0.8    6.8    14.1
           PLSCA [30]                            0.8    6.3    13.7       0.4    7.1    14.1
           CCA [31]                              2.7    14.0   25.8       1.8    14.3   26.6
           Ours (one-directional)                4.2    15.3   24.4       3.9    15.7   26.4
           Ours (cross)                          4.3    16.3   27.0       3.5    15.3   26.9
           Ours (cross + single)                 4.0    16.7   26.5       3.4    14.3   25.9
Nonlinear  Ours (one-directional)                6.5    13.5   19.9       5.2    12.0   18.6
           Ours (cross)                          8.5    19.2   29.1       7.2    20.5   27.5
           Ours (cross + single)                 8.9    25.2   37.9       8.2    23.3   35.7
           Ours (cross + DANN [32])              7.7    19.7   28.6       8.0    19.9   29.8
           Ours (cross + single + DANN [32])     8.5    24.0   37.9       6.7    22.9   35.4
cross: cross-modal ranking loss, single: single-modal structure loss
Outline. We developed a web-based application in which test subjects were asked to pick whichever of two cross-modal items they thought best matched a given query, as shown in Fig. 4. Fifty-three people participated in this test. We gave each subject 120 questions containing 60 video-to-music and 60 music-to-video selection tasks.

Selection pairs. To analyze the users' subjectivity in detail, we used three different types of answer pairs. In some queries, the user was required to choose between a ground truth match and a randomly chosen match (G-R). Other queries required a choice between a ground truth match and a search match (G-S); still other queries offered a choice between a search match and a random match (S-R). Here, "ground truth" indicates the cross-modal item extracted from the music video from which the query item was also extracted; "random" means a randomly selected item; and "search" indicates the cross-modal item closest to the query in the embedding space of our MV-NET model. To compare the performance of human and machine (i.e., our model), we also examined what the machine preferred between ground truth items (G) and randomly selected items (R). In this experiment, we assumed that the machine always selects the item closer to the query in the embedding space.
Analysis. Table 4 presents the results of the preference test, where each row (such as G>R) indicates the percentage of answers in which the subject preferred one match to another (such as preferring ground truth over random). Note that the human subjects' preference for G>R is over 80%, which is much higher than the expected value for random answers (i.e., 50%). It follows that a human's preference for selecting video and music obtained from the same music video is fairly high, and this supports the validity of using video and music pairs from music videos for learning. Compared to G>R, G>S shows a clear decrease in human preference. In other words, humans had a very clear preference for ground truth over random matches, but that preference for ground truth decreases when the provided alternative is the search result from our model. This indicates that our model's results are better than randomly selected results; that is, the retrieved results from our model are similar to the ground truth items, which makes it difficult for the subjects to choose. We can also see that the preference for S>R is over 70%, which validates the effectiveness of our method in this subjective task. Interestingly, the performance of our model on the G>R task is similar to human performance, suggesting a correlation between video and music regardless of whether the judgment is made by a human or a machine.
Figure 4: Screenshots from our human preference test: (a) Video-to-Music selection; (b) Music-to-Video selection.

Figure 5: Box plot showing the range of individual differences in preferences (x-axis: question type, i.e., V2M and M2V versions of G>R, G>S, and S>R; y-axis: accuracy): The green line indicates the expected value if the user were to select randomly. The red lines indicate the median preference value taken across all subjects; the blue boxes represent the middle 50% of subjects. The dotted black lines represent the full range of percentage values from all subjects, excluding outliers, which are marked as red crosses.

Table 4: Results of the human preference test, showing the percentage of subjects preferring one possible match to another

Subject    Preference    V2M      M2V      Total
Human      G>R           81.98    81.98    81.98
           G>S           65.57    64.62    65.09
           S>R           71.79    74.25    73.02
Machine    G>R           78.45    78.50    78.48
G: ground truth, R: random, S: search result. V2M: Video-to-Music, M2V: Music-to-Video.
To show the distribution of preferences in each task in more detail, we also present a box plot in Fig. 5.
5.3 A Video-Music Qualitative Experiment
We also investigated the quality of the retrieval results for CBMVR tasks by using a test set comprising 1,000 video-music pairs, as shown in Fig. 6. In the figure, pictures with purple outlines represent the video items, and blue boxes represent the music items. Because it is very difficult to convey music or video itself in a figure, in both cases we use a single key frame of the music video to represent its video item or music item. Although the retrieval task was performed based only on content, we present in the figure a single piece of genre meta-information for each item to better convey the music or video content characteristics. Overall, the figure shows the results of each query in order from best match (on the left) to worst match (on the right). Even with the few examples
shown in Fig. 6, it is clear that the retrieved results accurately reflect the characteristics of the music or videos. Finally, to demonstrate the flexibility of MV-NET, we performed additional experiments using video and music that had been assigned by YouTube-8M to a general category (e.g., "boxing," "motorsport," and "U.S. army") rather than just "music video" (see Fig. 7). Once again, the query results appear to be meaningful with respect to aspects such as the gender and race of the characters. To directly visualize the retrieval performance of the proposed technique, we have released a demo video online (https://youtu.be/ZyINqDMo3Fg).
6 CONCLUSION
In this paper, we introduced a deep neural network (MV-NET) that associates video and music through a two-branch network with a separate branch for each modality. The two-branch network is followed by a multimodal embedding of both modalities into a single embedding space. We trained the network using a cross-modal ranking loss such that the embedded location of each video and music item is close to suitable items from the other modality. A single-modal structure loss was also used to preserve the inherent characteristics within each modality. In the CBMVR experiments measured by Recall@K, our model outperformed conventional cross-modal approaches. To evaluate how well our model relates the two cross-modal sources, we also conducted an evaluation with human subjects and confirmed that the retrieval results are good enough to be confused with the ground truth by the subjects. Finally, in a qualitative experiment, we confirmed that the music genre and the gender or race of characters in videos could be learned by MV-NET. In future work, we plan to expand our approach to a fully trainable architecture comprising low-level feature extraction as well as the embedding network.
Figure 6: Bidirectional video-music retrieval results on the MV-200K dataset (Music-to-Video and Video-to-Music retrieval; for each query, results are ordered from most similar on the left to least similar on the right): Note that the genre information written below each item was not used during the content-based retrieval, but that information aids in a qualitative assessment of the retrievals.
Figure 7: Additional bidirectional video-music retrieval results using queries from videos and music that had been assigned by YouTube-8M to a general category (query categories include boxing, motorsport, U.S. Army, carnival, ballet, wedding, GoPro, and Pokemon; for each query, results are ordered from most similar to least similar): Once again, the general category metadata written below each item was not used during the content-based retrieval, but it aids in a qualitative assessment.
REFERENCES
[1] Rudolf Mayer, Robert Neumayer, and Andreas Rauber. Combination of audio and lyrics features for genre classification in digital audio collections. In Proceedings of the 16th ACM International Conference on Multimedia, pages 159–168. ACM, 2008.
[2] Sangeun Kum, Changheun Oh, and Juhan Nam. Melody extraction on vocal segments using multi-column deep neural networks. In ISMIR, 2016.
[3] Joon Hee Kim, Brian Tomasik, and Douglas Turnbull. Using artist similarity to propagate semantic information. In ISMIR, volume 9, pages 375–380, 2009.
[4] Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir Anguelov, Dumitru Erhan, Vincent Vanhoucke, and Andrew Rabinovich. Going deeper with convolutions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1–9, 2015.
[5] Ji Wan, Dayong Wang, Steven Chu Hong Hoi, Pengcheng Wu, Jianke Zhu, Yongdong Zhang, and Jintao Li. Deep learning for content-based image retrieval: A comprehensive study. In Proceedings of the 22nd ACM International Conference on Multimedia, pages 157–166. ACM, 2014.
[6] Eric Brochu, Nando De Freitas, and Kejie Bao. The sound of an album cover: Probabilistic multimedia and IR. In Workshop on Artificial Intelligence and Statistics, 2003.
[7] Rudolf Mayer. Analysing the similarity of album art with self-organising maps. In International Workshop on Self-Organizing Maps, pages 357–366. Springer, 2011.
[8] Jiansong Chao, Haofen Wang, Wenlei Zhou, Weinan Zhang, and Yong Yu. TuneSensor: A semantic-driven music recommendation service for digital photo albums. In Proceedings of the 10th International Semantic Web Conference (ISWC), 2011.
[9] Esra Acar, Frank Hopfgartner, and Sahin Albayrak. Understanding affective content of music videos through learned representations. In International Conference on Multimedia Modeling, pages 303–314. Springer, 2014.
[10] Xixuan Wu, Yu Qiao, Xiaogang Wang, and Xiaoou Tang. Bridging music and image via cross-modal ranking analysis. IEEE Transactions on Multimedia, 18(7):1305–1318, 2016.
[11] Aaron Van den Oord, Sander Dieleman, and Benjamin Schrauwen. Deep content-based music recommendation. In Advances in Neural Information Processing Systems, pages 2643–2651, 2013.
[12] Yi Yu, Zhijie Shen, and Roger Zimmermann. Automatic music soundtrack generation for outdoor videos from contextual sensor information. In Proceedings of the 20th ACM International Conference on Multimedia, pages 1377–1378. ACM, 2012.
[13] Ritendra Datta, Jia Li, and James Z. Wang. Content-based image retrieval: Approaches and trends of the new age. In Proceedings of the 7th ACM SIGMM International Workshop on Multimedia Information Retrieval, pages 253–262. ACM, 2005.
[14] Alexander Schindler and Andreas Rauber. An audio-visual approach to music genre classification through affective color features. In European Conference on Information Retrieval, pages 61–67. Springer, 2015.
[15] Janis Libeks and Douglas Turnbull. You can judge an artist by an album cover: Using images for music annotation. IEEE MultiMedia, 18(4):30–37, 2011.
[16] Shoto Sasaki, Tatsunori Hirai, Hayato Ohya, and Shigeo Morishima. Affective music recommendation system based on the mood of input video. In International Conference on Multimedia Modeling, pages 299–302. Springer, 2015.
[17] John Shawe-Taylor and Nello Cristianini. Kernel Methods for Pattern Analysis. Cambridge University Press, 2004.
[18] Francisco Jesus Canadas-Quesada, Pedro Vera-Candeas, Nicolas Ruiz-Reyes, Julio Carabias-Orti, and Pablo Cabanas-Molero. Percussive/harmonic sound separation by non-negative matrix factorization with smoothness/sparseness constraints. EURASIP Journal on Audio, Speech, and Music Processing, 2014(1):1–17, 2014.
[19] Keunwoo Choi, George Fazekas, Mark Sandler, and Kyunghyun Cho. Convolutional recurrent neural networks for music classification. arXiv preprint arXiv:1609.04243, 2016.
[20] Zhouyu Fu, Guojun Lu, Kai Ming Ting, and Dengsheng Zhang. A survey of audio-based music classification and annotation. IEEE Transactions on Multimedia, 13(2):303–319, 2011.
[21] Maksim Khadkevich and Maurizio Omologo. Reassigned spectrum-based feature extraction for GMM-based automatic chord recognition. EURASIP Journal on Audio, Speech, and Music Processing, 2013(1):15, 2013.
[22] Meinard Müller and Sebastian Ewert. Chroma toolbox: MATLAB implementations for extracting variants of chroma-based audio features. In Proceedings of the 12th International Conference on Music Information Retrieval (ISMIR), 2011.
[23] Sami Abu-El-Haija, Nisarg Kothari, Joonseok Lee, Paul Natsev, George Toderici, Balakrishnan Varadarajan, and Sudheendra Vijayanarasimhan. YouTube-8M: A large-scale video classification benchmark. arXiv preprint arXiv:1609.08675, 2016.
[24] Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. ImageNet: A large-scale hierarchical image database. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 248–255. IEEE, 2009.
[25] Ian Jolliffe. Principal Component Analysis. Wiley Online Library, 2002.
[26] Liwei Wang, Yin Li, and Svetlana Lazebnik. Learning deep structure-preserving image-text embeddings. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 5005–5013, 2016.
[27] Kilian Q. Weinberger and Lawrence K. Saul. Distance metric learning for large margin nearest neighbor classification. Journal of Machine Learning Research, 10(Feb):207–244, 2009.
[28] Fabrice Bellard, M. Niedermayer, et al. FFmpeg. Available from: http://ffmpeg.org, 2012.
[29] Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
[30] Jacob A. Wegelin et al. A survey of partial least squares (PLS) methods, with emphasis on the two-block case. University of Washington, Department of Statistics, Tech. Rep., 2000.
[31] David R. Hardoon, Sandor Szedmak, and John Shawe-Taylor. Canonical correlation analysis: An overview with application to learning methods. Neural Computation, 16(12):2639–2664, 2004.
[32] Yaroslav Ganin, Evgeniya Ustinova, Hana Ajakan, Pascal Germain, Hugo Larochelle, François Laviolette, Mario Marchand, and Victor Lempitsky. Domain-adversarial training of neural networks. Journal of Machine Learning Research, 17(59):1–35, 2016.
[33] Andrej Karpathy and Li Fei-Fei. Deep visual-semantic alignments for generating image descriptions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3128–3137, 2015.
[34] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 770–778, 2016.
[35] Jason Weston, Samy Bengio, and Nicolas Usunier. WSABIE: Scaling up to large vocabulary image annotation. In IJCAI, volume 11, pages 2764–2770, 2011.
[36] Gwangbeen Park and Woobin Im. Image-text multi-modal representation learning by adversarial backpropagation. arXiv preprint arXiv:1612.08354, 2016.