ADA: A Game-Theoretic Perspective on Data Augmentation for Object Detection

Sima Behpour, University of Illinois at Chicago
Kris M. Kitani, Carnegie Mellon University
Brian D. Ziebart, University of Illinois at Chicago
Abstract

The use of random perturbations of ground truth data, such as random translation or scaling of bounding boxes, is a common heuristic used for data augmentation that has been shown to prevent overfitting and improve generalization. Since the design of data augmentation is largely guided by reported best practices, it is difficult to understand if those design choices are optimal. To provide a more principled perspective, we develop a game-theoretic interpretation of data augmentation in the context of object detection. We aim to find optimal adversarial perturbations of the ground truth data (i.e., the worst-case perturbations) that force the object bounding box predictor to learn from the hardest distribution of perturbed examples for better test-time performance. We establish that the game-theoretic solution, the Nash equilibrium, provides both an optimal predictor and an optimal data augmentation distribution. We show that our adversarial method of training a predictor can significantly improve test time performance for the task of object detection. On the ImageNet object detection task, our adversarial approach improves performance by over 16% compared to the best performing data augmentation method.
Figure 1. Tiger localization example with three different bounding box annotations illustrates ambiguity in ‘ground truth’ labels.

1. Introduction

There is no guarantee that human-labeled ‘ground-truth’ annotations of an image dataset are error free. Consider the bounding box annotations of three annotators of the image in Figure 1. Do all boxes contain the object? Are all three bounding boxes equally correct? Is there one bounding box which is most helpful for learning a detection model? These questions highlight the ambiguity in the annotation task. In response, many helpful heuristics have been utilized in the literature to obtain more consistent annotations. To deal with inter-annotator disagreement [33, 19, 27], previous work has relied primarily on reasonable heuristics for augmenting the ground truth through consensus [30, 23, 35, 22]. Despite these efforts, it is not clear if there is a principled approach for identifying the optimal ground truth distribution in the context of supervised learning. To address in part the uncertainty of the ‘ground truth’ annotations, dataset augmentation methods can be used to synthesize new annotations of images by perturbing annotations. In fact, heuristic data augmentation preprocessing, such as random translation, flipping, or scaling, has been shown to be essential for many modern visual learning tasks using deep networks. However, manually choosing perturbations to improve performance can be an error-prone process. While increasing the modes of data perturbations may effectively increase the amount of training data, it can also cause the learned predictor to drift. In this way, current techniques for data augmentation are more of an art than a science. Towards a more principled approach to data augmentation, we propose to integrate annotation perturbations directly into the learning process. We do this by introducing an adversarial function that generates maximally perturbed versions of the ground truth, making it as hard as possible for the predictor to learn. The adversary, however, is not completely free to perturb the data. It must retain certain
feature statistics (e.g., the features of the new bounding box distribution should still be close to the features of the original bounding box). Formally, we pose the data augmentation problem as a zero-sum game between a player (the predictor model) seeking to maximize performance and a constrained adversary (the augmented data distribution) seeking to minimize expected performance [8, 1, 32]. To the best of our knowledge, this is the first work to provide a theoretical basis for data augmentation in terms of an adversarial two-player zero-sum game. As a consequence of our game-theoretic formulation, we develop a novel adversarial loss function that identifies the optimal data augmentation strategy, which leads to the most robust predictor possible (i.e., trained for the worst-case perturbation of data). In our experiments, we focus on the task of object detection and show that our proposed adversarial data augmentation technique consistently improves performance over various competing loss functions, data augmentation levels, and deep network architectures.
2. Related Work

It is common to assume that the ground truth is singular and error-free. However, disagreement between annotators is a widely-known problem for many computer vision tasks [33], as well as a major concern [19] when constructing annotated computer vision corpora. In large part, the difficulty arises because the set of possible “ground truth” annotations is typically extremely large for vision tasks. It is a powerset of possible descriptions (e.g., words, noun phrases) in annotation tasks, multi-partitions of the pixels (exponential in the number of pixels) in segmentation tasks, and the set of possible bounding boxes (quadratic in the number of pixels) for localization tasks. Methods to form a “consensus” annotation and to improve the annotation process through crowd-sourcing have been developed by averaging or combining together different independent annotations [30], verifying annotations with other independent annotators [23], and other strategies [35, 22]. For example, the ILSVRC2012 image dataset employs bounding box drawing, quality verification, and coverage verification as three separate subtasks [25] in a crowd-sourcing pipeline. In the construction of that dataset, proposed bounding boxes are rejected 37.8% of the time [25], illustrating the inherent disagreement between annotators and the uncertainty of the task. Despite the added safeguards of the verification process, recent evaluations have also been performed by removing a substantial fraction of the training examples that are considered to have poor quality bounding boxes [27, 6, 34, 26, 21]. Many state-of-the-art methods for object detection [14] are based on CNNs, and incorporate other improvements such as the use of very large scale datasets, more efficient GPU computation, and data augmentation [3]. Recently,
most of the literature on data augmentation studies effective augmentation methods for CNN features that increase the performance of various tasks (e.g., classification, object recognition) [28, 20, 16, 11]. Chatfield et al. [3] apply the data augmentation techniques commonly used with CNN-based methods to shallow methods and show an analogous performance boost. Paulin et al. [20] claim that, given a large set of possible transformations, not all transformations are equally informative, and adding uninformative transformations increases training time with no gain in accuracy. They propose the Image Transformation Pursuit (ITP) algorithm for the automatic selection of a compact set of transformations. Complementary to our work, data augmentation can also be used to guard against adversarial attacks [7, 18, 9, 13, 29]. Total variance minimization and image quilting have been presented as very effective defenses against adversarial-example attacks on image-classification systems [9]. The strength of these data augmentation defenses lies in their non-differentiable nature and their inherent randomness, which make them difficult for an adversary to circumvent. Our work is different in that we seek to optimize the data augmentation process as part of a supervised learning problem.
3. Problem Formulation

In order to understand the underlying theory of adversarial data augmentation proposed in this work, we must first understand the role of the annotation distribution, $p(y|x)$, which describes the distribution over labels $y$ (e.g., a bounding box annotation) given a feature vector $x$ (e.g., an RGB image). Note that a training dataset $\mathcal{D} = \{y_n, x_n\}_{n=1}^{N}$ induces an annotation distribution $p(y|x)$. In other words, each label $y_n$ in the training set can be interpreted to be a sample from the annotation distribution which is conditioned on a feature vector $x \in \mathbb{R}^D$. When there is absolute certainty in the ground truth annotation, the annotation distribution $p(y|x)$ is an indicator function where it is one for the true label $y^*$ and zero otherwise.
3.1. Data Augmentation

The process of data augmentation is a method of altering the annotation distribution. A typical method for data augmentation generates new examples $\tilde{\mathcal{D}}$ by perturbing the training data $\mathcal{D}$. For example, if the label is a structured output like a bounding box (i.e., a vector of four values), we can generate a new structured label $\tilde{y}$ for the same image by slightly perturbing the original ‘ground truth’ bounding box $y^*$. This data augmentation process creates a new underlying annotation distribution $\tilde{p}(y|x)$. Since data augmentation can be used to generate multiple new labels for the same feature vector $x$, the annotation probability $p(y|x)$ becomes a soft distribution over labels.
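To make the conventional perturbation step concrete, the following is a minimal sketch (not the method proposed in this paper) of how a single ground truth box might be randomly translated and rescaled to produce an augmented label $\tilde{y}$; the function name and perturbation ranges are illustrative assumptions.

```python
import numpy as np

def jitter_box(box, img_w, img_h, max_shift=0.1, max_scale=0.1, rng=None):
    """Heuristically perturb a ground truth box (x1, y1, x2, y2).

    Randomly translates and rescales the box by up to a fraction of its
    size, then clips it to the image; each call yields one augmented label.
    """
    rng = np.random.default_rng() if rng is None else rng
    x1, y1, x2, y2 = box
    w, h = x2 - x1, y2 - y1

    dx = rng.uniform(-max_shift, max_shift) * w      # horizontal shift
    dy = rng.uniform(-max_shift, max_shift) * h      # vertical shift
    s = 1.0 + rng.uniform(-max_scale, max_scale)     # scale factor

    cx, cy = x1 + w / 2 + dx, y1 + h / 2 + dy        # perturbed center
    w, h = w * s, h * s
    new_box = np.array([cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2])
    return np.clip(new_box, [0, 0, 0, 0], [img_w, img_h, img_w, img_h])
```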
Now if we are given a loss function $\ell(\hat{y}, y)$ describing the distance between an estimated label $\hat{y}$ and annotation label $y$, we can compute the expected loss of the estimated label under the annotation distribution as:
$$\sum_{y \in \mathcal{Y}} P(y|x)\,\ell(\hat{y}, y). \tag{1}$$
Notice that the expected loss is the smallest when the estimated label matches the annotation distribution. Conversely, the expected loss grows larger when the estimated label is far from the annotation distribution. It is important to note here that this marginalization over the annotation distribution is rarely made explicit in the loss function in most modern supervised learning objective functions because the distribution is assumed to be an indicator function at the ‘ground truth’ label. Now consider the probabilistic predictor $f(y|x)$ which maps a feature vector $x$ to a distribution over labels $y$. The expected loss over the entire dataset $\mathcal{D}$ under the predictor distribution and annotation distribution is defined as:
$$\min_{f \in \Gamma} \sum_{x \in \mathcal{D}} \sum_{y'} f(y'|x) \overbrace{\sum_{y} P(y|x)\,\ell(y', y)}^{\text{expected loss for input } x}. \tag{2}$$
The goal of supervised learning is to find the optimal predictor f (from some set of predictors Γ), that minimizes the above expected loss over the labeled training data. Understanding this verbose form of the supervised learning objective function is critical for the formulation that follows.
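As a small illustration of Eq. (2), the following sketch computes the inner expected loss for one input over a discretized candidate label set; the variable names and toy numbers are assumptions for illustration only.

```python
import numpy as np

def expected_loss(f, P, L):
    """Expected loss of Eq. (2) for one input x over a discrete label set.

    f : (k,) predictor distribution over candidate labels y'
    P : (k,) annotation distribution over candidate labels y
    L : (k, k) pairwise loss matrix, L[i, j] = loss(y'_i, y_j)
    """
    return float(f @ L @ P)

# With an indicator annotation distribution, the expectation reduces to the
# ordinary loss against the single 'ground truth' label.
L = np.array([[0.0, 0.6], [0.6, 0.0]])
f = np.array([0.9, 0.1])          # predictor mass on candidate 0
P = np.array([1.0, 0.0])          # certainty that candidate 0 is correct
print(expected_loss(f, P, L))     # 0.06
```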
3.2. Adversarial Data Augmentation

If we adopt a pessimistic view of the annotated data and assume uncertainty in the ‘ground truth’ annotations, we can use data augmentation to perturb the ‘ground truth’ annotations to reflect this uncertainty. We can go further and assume the worst case: that the quality of the annotation distribution is maximally perturbed. In other words, we make a strong pessimistic assumption that the annotation distribution was generated by an adversary. By making this worst case assumption, we hypothesize that we can train a more robust predictor that is resilient to large perturbations it might encounter at test time. Figure 2 illustrates the three possible choices of annotation distributions for a single image. More formally, instead of the common Empirical Risk Minimization (ERM) objective of Eq. (2), we aim to learn a predictor $f$ that optimizes the following adversarial objective function:
$$\min_{f \in \Gamma} \sum_{x \in \mathcal{D}} \sum_{y'} f(y'|x) \max_{P(y|x)} \sum_{y} P(y|x)\,\ell(y', y). \tag{3}$$
Figure 2. Types of annotation distributions. Adversarial augmentation selects bounding boxes that are maximally different from the ground truth but still contain important object features.

Notice that the maximization sub-problem has been inserted into the objective function, which reflects our assumption that the annotation distribution is adversarial (i.e., the worst case distribution). One might quickly notice that this is an unreasonable objective function without some additional constraints because the adversarial annotation distribution can be arbitrarily bad. In the next section we will incorporate constraints that limit the adversary from deviating very far from the original ground truth annotations.
3.3. Game Formulation

Our claim is that data augmentation should be included in the learning problem instead of being an independent data pre-processing step. By incorporating data augmentation into the predictor learning problem, we obtain a saddle point optimization problem where we pit a predictor trying to minimize the loss against an adversarial annotation distribution that is trying to maximize the loss. In this form, the optimization can be seen as a minimax problem over a zero-sum two-player game. In the language of game theory, the player (predictor) selects a label from a mixed strategy $y' \sim f(y'|x)$ to minimize the loss, while the opponent (annotation distribution) selects an annotation from the adversarial distribution $y \sim P(y|x)$ to maximize the loss. The equilibrium point of the game yields both the optimal predictor and an optimal data annotation distribution. The game is zero-sum because the negative loss of the player (predictor) is exactly the gain of the adversary (annotation distribution). The value or payoff of the game for a particular feature vector $x$ is the expected loss of the predictor distribution against the adversary's annotation distribution:
$$\mathbb{E}_{y'|x \sim f,\; y|x \sim P}\left[\ell(y', y)\right] = \sum_{y', y} f(y'|x)\,\ell(y', y)\,P(y|x) \tag{4}$$
$$= \mathbf{f}^\top \mathbf{G}\,\mathbf{p}. \tag{5}$$
The expected loss of the game can also be written in matrix form, where $\mathbf{f}$ is the vector of probabilities obtained from the predictor over all labels, $\mathbf{G}$ is the game matrix where each element contains the loss between two labels, and $\mathbf{p}$ is the annotation distribution vector. The adversarial objective function, Eq. (3), in its current form is problematic because the adversarial annotation distribution is free to perturb the ground truth annotations in arbitrary ways that have no similarity to the original annotations. This can be prevented by constraining the adversarial annotation distribution to choose label distributions in a way that retains feature statistics of the original ground truth annotation. For example, we may want the mean of a set of augmented bounding box annotations to be the same as the mean of the original bounding box annotation. Formally, we can define the first-order statistic of the ground truth data as:
$$\mathbb{E}_{y,x \sim \mathcal{D}}[\phi(y, x)] = \frac{1}{N} \sum_{n=1}^{N} \phi(y_n, x_n), \tag{6}$$
where $(y_n, x_n)$ is the $n$th training example in $\mathcal{D}$. We are now ready to define the constrained adversarial optimization problem.

Definition 1. The Primal Adversarial Data Augmentation (ADA-P) game is:
$$\min_{f} \max_{P} \; \mathbb{E}_{x \sim \mathcal{D},\; y'|x \sim f,\; y|x \sim P}\left[\ell(y', y)\right] \quad \text{such that:} \tag{7}$$
$$\mathbb{E}_{x \sim \mathcal{D},\; y|x \sim P}[\phi(y, x)] = \mathbb{E}_{y,x \sim \mathcal{D}}[\phi(y, x)],$$
where $f(y'|x)$ and $P(y|x)$ are distributions over all potential predicted labels for each feature vector $x$.

Due to strong Lagrangian duality [2], a dual problem with an equivalent solution can be formulated by including the constraint in the objective function using a vector of Lagrangian multipliers, $\theta$. This resulting Lagrangian potential $\theta^\top \phi(\cdot, \cdot)$ links together a set of otherwise independent zero-sum games.

Definition 2. The Dual Adversarial Data Augmentation (ADA-D) game is:
$$\min_{\theta} \; \mathbb{E}_{x, y^* \sim \mathcal{D}} \left[ \min_{f} \max_{P} \; \mathbb{E}_{y' \sim f,\; y \sim P} \left[ \ell(y', y) + \theta^\top \{\phi(y, x) - \phi(y^*, x)\} \right] \right]. \tag{8}$$

We make two important observations based on this dual optimization perspective. First, since $P$ is adversarially chosen, there is no need to restrict or parameterize $f$ to avoid overfitting to $P$ as is typically done in supervised learning. Instead, the feature potential based on $\theta$ is learned to provide constraints on the adversary that make prediction easier. Further, since $P$ is chosen after $f$ in Eq. (8), the predictor is incentivized to randomize.
4. Adversarial Object Localization

Up to this point, we have described our proposed adversarial data augmentation learning approach in general terms, as it can apply to many structured output tasks. Now, we shift our focus to the concrete problem of object detection. This explicit focus will help us to describe our approach in concrete terms.

Label Space. Each structured output label $y$ is represented by the four coordinates of a bounding box. The domain of a label is denoted $\mathcal{Y}$. The set of all possible bounding boxes $\mathcal{Y}$ is very large for an image of modest size, and therefore it is rarely practical to evaluate all possible bounding boxes. This means that the sums over labels used in the formulation above (Section 3) are not tractable and that some form of distribution approximation is needed. To discretize the label space $\mathcal{Y}$, we use a bounding box proposal algorithm, EdgeBox [15], to generate a set of $k$ bounding boxes to define the label space $\mathcal{Y}$.

Feature Statistics. To represent the feature statistics $\phi(y, x)$ of a bounding box $y$ over an image $x$, we use the FC7 features of the VGG16 [24] network. Concretely, it is a 4096-dimensional vector over a sub-image defined by the bounding box $y$. The feature statistic constraint $\{\phi(y', x) - \phi(y^*, x)\}$ described in the ADA-D definition represents the difference between the FC7 features of an arbitrary bounding box $y'$ and the FC7 features of the ground truth bounding box $y^*$. Also known as the perceptual loss, this quantity ensures that the adversarial bounding box label remains perceptually similar to the ground truth bounding box label.

Loss Function. The loss function used for object localization is based on the classical intersection over union (IoU) score, $\mathrm{IoU}(y, y') = \mathrm{area}(y \cap y') / \mathrm{area}(y \cup y')$, where $y$ and $y'$ are two bounding boxes. In this work, we focus on losses defined in terms of the amount of non-overlap, $\ell(y, y') = 1 - \mathrm{IoU}(y, y')$, which equals one when $y$ and $y'$ are disjoint, zero when they are identical, and smoothly transitions in between those extremes. Another loss function we use is the overlap loss with a threshold:
$$\ell_{t_\alpha}(y, y') = \begin{cases} 1 & \mathrm{IoU}(y, y') < \alpha \\ 0 & \mathrm{IoU}(y, y') \ge \alpha, \end{cases} \tag{9}$$
which assigns binary loss to bounding boxes depending on the overlap threshold.
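The following is a small self-contained sketch of these two losses, assuming boxes are given as (x1, y1, x2, y2) tuples; the function names are illustrative.

```python
def iou(box_a, box_b):
    """Intersection over union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

def overlap_loss(box_a, box_b):
    """Non-overlap loss 1 - IoU: zero for identical boxes, one if disjoint."""
    return 1.0 - iou(box_a, box_b)

def thresholded_loss(box_a, box_b, alpha=0.5):
    """Thresholded overlap loss of Eq. (9): binary loss at IoU threshold alpha."""
    return 1.0 if iou(box_a, box_b) < alpha else 0.0
```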
4.1. Game Matrix

As noted in Eq. (5), the expected loss of the adversarial game can be written in matrix form, $\mathbf{f}^\top \mathbf{G}\,\mathbf{p}$. The game (or payoff) matrix $\mathbf{G}$ for ADA-D can be constructed from Eq. (8) as a $|\mathcal{Y}| \times |\mathcal{Y}|$ matrix, where each element is defined as:
$$G(y', y) = \ell(y', y) + \theta^\top \{\phi(y, x) - \phi(y^*, x)\}, \tag{10}$$
where the first term $\ell(\cdot, \cdot)$ is the IoU-based loss and the second term is the weighted difference between the FC7 features of the annotation distribution label $y$ and the ground truth label $y^*$.

Figure 3. Example Game Matrix for a duck image with three bounding boxes. Each black bounding box is a potential label for the same duck image.

To better understand the structure of the game matrix, we can decompose it as $\mathbf{G} = \mathbf{G}_\ell + \mathbf{G}_\Phi$. The elements of each matrix are illustrated for a toy example in Figure 3. The first matrix $\mathbf{G}_\ell$ contains the pairwise loss between the label of the predictor and the label of the adversary, $\ell(y', y)$. The second matrix $\mathbf{G}_\Phi$ contains the difference in feature statistics between the adversarial label and the ground truth label, $\theta^\top \{\phi(y, x) - \phi(y^*, x)\}$. Since the constraint matrix $\mathbf{G}_\Phi$ does not depend on the predictor label $y'$, each row is identical. The elements of the last column are all $-1$ in this example because the feature statistics of the bounding box $y_3$ are very different from the ground truth bounding box $y^*$, whereas the first two columns are zero because the contents of their bounding boxes are similar.
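As a rough sketch of this construction (under the assumption that candidate boxes, their feature statistics, and a pairwise loss such as 1 − IoU are already available), the payoff matrix of Eq. (10) can be assembled as follows; names and shapes are illustrative.

```python
import numpy as np

def build_game_matrix(boxes, feats, gt_index, theta, loss_fn):
    """Construct G = G_loss + G_phi of Eq. (10) over candidate boxes.

    boxes    : list of k candidate boxes (the discretized label space Y)
    feats    : (k, d) array of feature statistics phi(y, x) per box
    gt_index : index of the ground truth box y* within the candidates
    theta    : (d,) Lagrange multiplier / potential weight vector
    loss_fn  : pairwise loss, e.g. the non-overlap loss 1 - IoU
    """
    k = len(boxes)
    G_loss = np.array([[loss_fn(boxes[i], boxes[j]) for j in range(k)]
                       for i in range(k)])
    # The feature potential depends only on the adversary's label y (columns),
    # not on the predictor's label y', so every row of G_phi is identical.
    psi = (feats - feats[gt_index]) @ theta   # theta^T (phi(y) - phi(y*))
    G_phi = np.tile(psi, (k, 1))
    return G_loss + G_phi
```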
4.2. Nash Equilibria

The solution of the game, the Nash equilibrium pair $(\mathbf{f}, \mathbf{p})$, is defined as the optimal strategy for each player such that:
$$\max_{\mathbf{p}'} \mathbf{f}^\top \mathbf{G}\, \mathbf{p}' \;\le\; \mathbf{f}^\top \mathbf{G}\, \mathbf{p} \;\le\; \min_{\mathbf{f}'} \mathbf{f}'^{\top} \mathbf{G}\, \mathbf{p}. \tag{11}$$
For the example in Figure 3, the lower potential for the third bounding box in $\mathbf{G}_\Phi$ offsets the larger loss that might be produced in $\mathbf{G}_\ell$ by having $p(y_3) > 0$. Due to the symmetries in $\mathbf{G}$, the equilibrium solution is then simply: $P(y_1|x) = P(y_2|x) = f(y'_1|x) = f(y'_2|x) = 0.5$ and $P(y_3|x) = f(y'_3|x) = 0$. In general, this equilibrium solution pair can be obtained efficiently using a pair of linear programs:
$$\min_{v,\, \mathbf{f} \ge 0} v \;\;\text{such that:}\;\; \mathbf{f}^\top \mathbf{G} \le v\mathbf{1}^\top \;\text{and}\; \mathbf{f}^\top \mathbf{1} = 1; \;\text{and} \tag{12}$$
$$\max_{v,\, \mathbf{p} \ge 0} v \;\;\text{such that:}\;\; \mathbf{G}\,\mathbf{p} \ge v\mathbf{1} \;\text{and}\; \mathbf{p}^\top \mathbf{1} = 1,$$
where $v$ is the value of the game (i.e., the expected loss). The first linear program finds the $\mathbf{f}$ that minimizes the game value against the worst choice of $\mathbf{p}'$, using the left-hand side of Eq. (11) via constraints for each deterministic choice of $\mathbf{p}'$ (i.e., the $\mathbf{1}$ vector). The second linear program is constructed in a likewise manner to obtain $\mathbf{p}$.
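The paper solves these linear programs with Gurobi (Section 4.4); as a lightweight sketch of the same pair of programs, scipy.optimize.linprog can be used as follows. The helper name and interface are assumptions for illustration.

```python
import numpy as np
from scipy.optimize import linprog

def solve_zero_sum_game(G):
    """Solve min_f max_p f^T G p for a loss matrix G (payoff to the adversary).

    Returns the minimizer's mixed strategy f, the maximizer's mixed
    strategy p, and the game value v; a sketch assuming a dense G.
    """
    n, m = G.shape

    # Predictor LP: variables [f (n), v]; minimize v s.t. G^T f <= v * 1.
    c = np.r_[np.zeros(n), 1.0]
    A_ub = np.c_[G.T, -np.ones(m)]           # G^T f - v <= 0 for every column
    A_eq = np.r_[np.ones(n), 0.0][None, :]   # probabilities sum to one
    res_f = linprog(c, A_ub=A_ub, b_ub=np.zeros(m), A_eq=A_eq, b_eq=[1.0],
                    bounds=[(0, None)] * n + [(None, None)])
    f, v = res_f.x[:n], res_f.x[-1]

    # Adversary LP: variables [p (m), v]; maximize v s.t. G p >= v * 1.
    c = np.r_[np.zeros(m), -1.0]
    A_ub = np.c_[-G, np.ones(n)]             # v - G p <= 0 for every row
    A_eq = np.r_[np.ones(m), 0.0][None, :]
    res_p = linprog(c, A_ub=A_ub, b_ub=np.zeros(n), A_eq=A_eq, b_eq=[1.0],
                    bounds=[(0, None)] * m + [(None, None)])
    p = res_p.x[:m]

    return f, p, v
```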
Algorithm 1 ADA Equilibrium Computation
Input: Image x; Parameters θ; Ground truth y*
Output: Nash equilibrium (f, p)
 1: Y ← EdgeBox(x)
 2: Φ ← CNN(Y, x)
 3: ψ ← θ^T (Φ − Φ(y*))
 4: S_p ← S_f ← argmax_y ψ(y)
 5: repeat
 6:   (f, p, v_p) ← solveGame(ψ(S_p), loss(S_f, S_p))
 7:   (y_new, v_max) ← max_y E_{y'∼f}[loss(y, y') + ψ(y)]
 8:   if v_p ≠ v_max then
 9:     S_p ← S_p ∪ {y_new}
10:   end if
11:   (f, p, v_f) ← solveGame(ψ(S_p), loss(S_f, S_p))
12:   (y'_new, v_min) ← min_{y'} E_{y∼p}[loss(y, y')]
13:   if v_f ≠ v_min then
14:     S_f ← S_f ∪ {y'_new}
15:   end if
16: until v_p = v_max = v_f = v_min
17: return (f, p)
4.3. Constraint Generation for Large Games

In practice, forming and solving a zero-sum adversarial game over a very large set of labels (e.g., the set of all possible bounding boxes in an image) for each image is computationally expensive. To obtain the same result more efficiently, we employ a constraint-generation method [17, 32] to solve ADA-D without explicitly constructing the entire payoff matrix $\mathbf{G}$. It is based on the key insight that the equilibrium distributions, $\mathbf{f}$ and $\mathbf{p}$, both assign zero probabilities to many bounding boxes, and those bounding boxes can then be effectively removed from the game matrix without changing the solution. The basic strategy of constraint generation is to use a set of the most violated constraints to grow a game matrix that supports the equilibrium distribution, but is much smaller than the full game matrix. The approach works by iteratively obtaining a Nash equilibrium for a game defined over a subset of the possible labels (not all of them), finding a player's best response strategy (either the predictor or the annotation distribution) to that equilibrium distribution. Then the best response to the set of opponent strategies defining the game is added as a new strategy. When additional best responses no longer improve either player's game value, the subgame equilibrium is guaranteed to be an equilibrium to the larger game [17].
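A rough Python sketch of this best-response (double-oracle) loop, reusing the solve_zero_sum_game helper from the previous sketch; the payoff decomposition, initialization, and tolerance are simplifying assumptions rather than the exact implementation.

```python
import numpy as np

def double_oracle(loss_fn, psi, boxes, tol=1e-6, max_iter=100):
    """Grow small strategy sets for the predictor (rows) and adversary
    (columns) until best responses stop improving either player's value.

    psi[j] is the adversary's feature potential theta^T(phi(y_j) - phi(y*));
    the payoff to the adversary is loss(y'_i, y_j) + psi[j].
    """
    k = len(boxes)
    payoff = lambda i, j: loss_fn(boxes[i], boxes[j]) + psi[j]
    start = int(np.argmax(psi))
    S_f, S_p = [start], [start]            # predictor rows, adversary columns

    for _ in range(max_iter):
        G = np.array([[payoff(i, j) for j in S_p] for i in S_f])
        f, p, v = solve_zero_sum_game(G)   # LP sketch from above

        # Adversary best response: column maximizing the predictor's expected loss.
        col_vals = [sum(f[a] * payoff(i, j) for a, i in enumerate(S_f))
                    for j in range(k)]
        j_star = int(np.argmax(col_vals))
        # Predictor best response: row minimizing the adversary's expected payoff.
        row_vals = [sum(p[b] * payoff(i, j) for b, j in enumerate(S_p))
                    for i in range(k)]
        i_star = int(np.argmin(row_vals))

        improved = False
        if col_vals[j_star] > v + tol and j_star not in S_p:
            S_p.append(j_star)
            improved = True
        if row_vals[i_star] < v - tol and i_star not in S_f:
            S_f.append(i_star)
            improved = True
        if not improved:
            break
    return f, p, S_f, S_p
```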
4.4. Algorithm Details

Algorithm 1 details the structure of the ADA equilibrium computation. The pre-processing step, extracting the image box proposals (EdgeBox) and their CNN features, is addressed in lines 1-4. The CNN features are extracted from the last convolutional layer of the deep network used in the corresponding experiment. $S_p$ and $S_f$ denote the strategy sets (chosen box proposals) of the $p$ and $f$ game players. The main loop is presented in lines 5-16. solveGame obtains a Nash equilibrium using linear programming; the Gurobi LP solver [10] is used to solve the linear programs. Constraint generation corresponds to the max and min operations in lines 7 and 12. After reaching the loop termination condition (line 16), the $f$ and $p$ distributions are returned. Model parameters $\theta$ are obtained using convex optimization (gradient-based methods [4]), which reaches convergence exactly when the data augmentations match the features of the training data: $\mathbb{E}_{x \sim \mathcal{D},\, y|x \sim P}[\phi(y, x)] = \mathbb{E}_{y,x \sim \mathcal{D}}[\phi(y, x)]$. The gradient is simply the difference between these two expectations. At testing time, we employ a similar inference procedure. It likewise uses constraint generation, in the form of best responses to equilibria over sets of strategies, to find a set of strategies supporting the equilibrium of the full game. A final prediction is produced by taking the most probable predictor strategy under the equilibrium distribution, $\hat{y} = \operatorname{argmax}_y P(y|x)$, as the predicted bounding box.
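A minimal sketch of the gradient step implied here, assuming the equilibrium adversarial distribution p and the candidate feature statistics are available; the paper uses adaptive gradient methods [4], whereas this sketch takes a plain step with an illustrative learning rate.

```python
import numpy as np

def theta_gradient_step(theta, p, feats, gt_index, lr=1e-3):
    """One (sub)gradient step for the dual parameters theta.

    p        : (k,) adversarial annotation distribution at equilibrium
    feats    : (k, d) feature statistics phi(y, x) of the candidate boxes
    gt_index : index of the ground truth box y*
    The gradient for one example is the difference between the adversary's
    expected features and the ground truth features; it vanishes when the
    augmented annotations match the feature statistics of the training data.
    """
    grad = p @ feats - feats[gt_index]
    return theta - lr * grad
```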
5. Experiments

In the following experiments, our aim is to show that our proposed adversarial optimization for data augmentation will provide meaningful improvements in test time prediction performance. We first compare our adversarial data augmentation (ADA) objective function against baseline models with no data augmentation on the task of localization in Section 5.1. Second, we compare our approach to baseline models with varying levels of data augmentation in Section 5.2. Third, we evaluate our approach on the task of detection (joint recognition and localization) in Section 5.3. Finally, we use our adversarial optimization over various deep features to show consistent improvements across networks in Section 5.4.

Baselines. To analyze the performance of our adversarial objective function (ADA) for object detection, we benchmark it against two classical objective functions. (1) SSVM: The structured output support vector machine (SSVM) [31] is a large margin classifier with a variable margin depending on a structured loss function $\ell$. The objective function is defined as:
$$\hat{\theta} = \operatorname{argmin}_{\theta} \; \lambda\|\theta\|^2 + \sum_{n} \xi_n \tag{13}$$
$$\text{s.t.} \quad \theta^\top \left(\phi(y_n^*, x_n) - \phi(y, x_n)\right) \ge \ell(y_n^*, y) - \xi_n \quad \forall\, y,$$
where $\theta$ is the weight vector, $\phi$ is the feature function (image feature statistic), $\ell$ is the loss function, and $\xi$ is the slack variable. To solve the SSVM objective function, we employ an iterative constraint generation strategy to accelerate the learning process by adding a few constraints per iteration (instead of the entire constraint set defined by each label $y \in \mathcal{Y}$). At test time, we generate a set of bounding box proposals (EdgeBox) and take the bounding box with the highest potential using the learned weight vector, $\hat{y} = \operatorname{argmax}_y \theta^\top \phi(y, x)$, to identify the predicted structured output.

Table 1. No augmentation baseline comparison (IoU > 0.5), ImageNet object categories.
Model          | Plane | Bird | Bus  | Car   | Cat  | Cow   | Dog  | Horse | Monitor | Sofa | mAP
ADA+VGG (Ours) | 92.0  | 93.5 | 92.0 | 100.0 | 89.1 | 100.0 | 93.0 | 96.4  | 96.0    | 90.0 | 94.2
Softmax+VGG    | 84.0  | 86.5 | 84.0 | 87.0  | 70.9 | 77.5  | 62.0 | 72.7  | 72.0    | 80.0 | 77.7
SSVM+VGG       | 90.0  | 82.5 | 82.0 | 82.0  | 40.0 | 87.5  | 72.0 | 72.7  | 90.0    | 78.0 | 77.7

Table 2. No augmentation baseline comparison (IoU > 0.7), ImageNet object categories.
Model          | Plane | Bird | Bus  | Car  | Cat  | Cow  | Dog  | Horse | Monitor | Sofa | mAP
ADA+VGG (Ours) | 58.0  | 61.5 | 64.0 | 91.0 | 30.9 | 77.4 | 58.0 | 58.2  | 61.8    | 61.9 | 62.3
Softmax+VGG    | 47.6  | 45.7 | 40.0 | 62.8 | 20.0 | 42.5 | 25.1 | 25.4  | 31.4    | 44.2 | 38.5
SSVM+VGG       | 51.8  | 55.5 | 44.0 | 61.7 | 21.8 | 54.7 | 31.6 | 43.6  | 56.0    | 57.3 | 47.8

(2) Softmax: The soft maximum (logistic regression) objective function is a probabilistic predictor. For the softmax objective function, we estimate a distribution over all proposed bounding boxes $y$ that maximizes the conditional likelihood of proposed bounding boxes with an IoU above a given threshold:
$$\hat{\theta} = \operatorname{argmax}_{\theta} \prod_{n} P(y_n | x_n; \theta) \tag{14}$$
$$= \operatorname{argmax}_{\theta} \prod_{n} \frac{e^{\theta^\top \phi(y_n, x_n)}}{\sum_{y} e^{\theta^\top \phi(y, x_n)}}, \tag{15}$$
where $\theta$ is the weight vector, $\phi$ is the potential function (FC7 feature), and $\ell$ is the loss function. At test time, we compute the Bayesian optimal decision to identify the most likely bounding box from a set of proposed bounding boxes according to:
$$\hat{y} = \operatorname{argmin}_{y} \sum_{y' \in \mathcal{Y}} P(y'|x; \theta)\,\ell(y, y'), \tag{16}$$
where $P(y|x; \theta)$ is the learned conditional distribution parameterized by $\theta$ and $\ell$ is the loss function.
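A small sketch of this Bayes-optimal decision rule (Eqs. (14)-(16)), assuming per-proposal features, learned weights θ, and a precomputed pairwise loss matrix; the names are illustrative.

```python
import numpy as np

def softmax_predict(feats, theta, loss_matrix):
    """Bayes decision of Eq. (16) over a set of proposed boxes.

    feats       : (k, d) features phi(y, x) of the proposed boxes
    theta       : (d,) learned softmax weights
    loss_matrix : (k, k) pairwise losses, e.g. 1 - IoU between proposals
    Returns the index of the box minimizing expected loss under P(y|x; theta).
    """
    scores = feats @ theta
    scores -= scores.max()                      # numerical stability
    P = np.exp(scores) / np.exp(scores).sum()   # P(y | x; theta)
    expected = loss_matrix @ P                  # expected loss of each decision
    return int(np.argmin(expected))
```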
5.1. Baseline Comparisons with No Augmentation

We begin with the simplest evaluation, where we compare our proposed adversarial data augmentation approach with two baseline methods that use only the ground truth annotation, without augmenting the training data, to learn a predictor. We compare our method ADA+VGG against SSVM+VGG and Softmax+VGG. The suffix +VGG for each objective function specifies the deep network from which the features are used, in this case VGG16 [24]. We compute the mean Average Precision (mAP) score for several classes in the ImageNet dataset for each of the competing methods. We train a bounding box predictor for each object category, and consider an object to be correctly detected when the IoU is greater than a threshold. We emphasize here that we are decoupling the recognition task from the localization task by learning a class-specific bounding box regressor and testing only on images that contain the target class. Later experiments will evaluate both recognition and localization. For this experiment, we give results for two thresholds, 50% IoU and 70% IoU, for each object category. Since our method explicitly augments the dataset as part of the optimization process whereas the two baselines have no data augmentation, we expect our approach to outperform the two baselines. The test time localization accuracies at 50% IoU on 10 object classes from ImageNet are given in Table 1. As expected, we observed significant test time improvement in bounding box regression accuracy for every object category. On average, our proposed adversarial data augmentation approach improved mean average precision by 17 percentage points. We repeated the same experiment for a stricter loss, the 0.7-thresholded IoU loss function. The mean average precision of the predicted bounding boxes is given in Table 2. Since we are evaluating performance with a stricter loss function, the absolute mAP values decrease as expected. However, notice that our proposed approach still obtains a significant improvement over the baseline algorithms, improving mAP by 15 percentage points over the strongest baseline, SSVM+VGG.
5.2. Baseline Comparisons with Augmentation

We now compare the performance of our approach to the strongest baseline model, SSVM+VGG, trained with different levels of data augmentation. As mentioned earlier, data augmentation, such as random translations of bounding boxes, is a common heuristic used to help supervised learning methods avoid overfitting. We prepare five levels of data augmentation to train the SSVM+VGG baseline. Instead of using random translations within a range of the ground truth bounding box annotation, which can contain many similar bounding boxes, we use the EdgeBox proposal algorithm to generate a diverse set of bounding boxes. We keep the top 250 EdgeBox proposals with the highest scores and filter them according to 5 thresholds with respect to the original ground truth bounding box: (1) IoU > 50%; (2) IoU > 60%; (3) IoU > 70%; (4) IoU > 75%; and (5) IoU > 80%. We denote the experiment using the subscript t50 to represent a model trained on a collection of bounding boxes with IoU > 50%. We consider the bounding boxes that pass the threshold test as new ‘ground truth’ and use them as the training set. We note here again that our proposed method automatically selects (gives weights to) the bounding box proposals during the learning process and does not require a separate augmentation step.
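A brief sketch of how such a fixed augmentation level might be constructed, reusing the iou helper from the earlier sketch; the function name and defaults are illustrative assumptions.

```python
def augment_by_iou(proposals, scores, gt_box, iou_thresh=0.7, top_k=250):
    """Build a fixed augmentation set as in the SSVM_t* baselines.

    Keep the top_k highest-scoring proposals, then retain those whose IoU
    with the ground truth box exceeds iou_thresh as extra 'ground truth'.
    """
    ranked = sorted(zip(scores, proposals), key=lambda s: s[0], reverse=True)[:top_k]
    return [box for score, box in ranked if iou(box, gt_box) >= iou_thresh]
```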
Table 3. Effect of data augmentation (IoU > 70%), ImageNet object categories.
Augmentation    | Plane | Bird | Bus  | Car  | Cat  | Cow  | Dog  | Horse | Monitor | Sofa | Avg
SSVM_t50+VGG    | 53.8  | 57.9 | 49.7 | 64.0 | 22.6 | 59.9 | 37.5 | 45.5  | 56.7    | 57.8 | 50.5
SSVM_t60+VGG    | 54.7  | 58.9 | 52.7 | 67.7 | 23.7 | 64.9 | 42.0 | 48.6  | 57.3    | 58.4 | 52.9
SSVM_t70+VGG    | 56.4  | 61.6 | 56.8 | 70.8 | 25.4 | 67.3 | 49.1 | 51.9  | 58.6    | 58.8 | 55.7
SSVM_t75+VGG    | 52.6  | 61.0 | 51.7 | 64.4 | 20.2 | 61.2 | 42.6 | 44.0  | 57.3    | 56.0 | 51.1
SSVM_t80+VGG    | 49.8  | 52.0 | 44.9 | 60.3 | 20.2 | 55.8 | 33.1 | 41.4  | 55.8    | 52.7 | 46.6
ADA+VGG (Ours)  | 58.0  | 61.5 | 64.0 | 91.0 | 30.9 | 77.4 | 58.0 | 58.2  | 61.8    | 61.9 | 62.3
Table 4. Effect of the number of augmented data annotations. ADA outperforms the best SSVM+VGG baseline configuration by 12%.
Model | SSVM+VGG (k=1) | k=2  | k=4  | k=6  | k=8  | k=10 | k=12 | ADA+VGG
mAP   | 77.6           | 79.7 | 81.4 | 83.8 | 83.7 | 79.8 | 75.3 | 94.2
Table 5. Detection performance comparison (IoU > 70%), ImageNet object categories.
Model          | Plane | Bird | Bus  | Car  | Cat  | Cow  | Dog  | Horse | Monitor | Sofa | Avg
ADA+VGG (Ours) | 46.0  | 55.5 | 60.0 | 86.0 | 25.4 | 70.0 | 47.0 | 52.7  | 60.0    | 48.0 | 55.1
SSVM+VGG       | 42.0  | 46.0 | 38.0 | 53.0 | 16.4 | 52.5 | 25.0 | 36.4  | 42.0    | 42.0 | 39.3
Softmax+VGG    | 40.0  | 42.5 | 42.0 | 55.0 | 16.4 | 32.5 | 16.0 | 29.1  | 22.0    | 34.0 | 33.0
The results of this experiment are shown in Table 3. We note that augmenting SSVM in this manner improves test performance compared to the model without data augmentation (Table 2). We also observe that the performance gain is maximized around the 70% overlap threshold. However, we find that, with the exception of the Bird class, the performance gains of data augmentation do not reach the performance of our ADA approach. Our proposed approach still outperforms the model with the best level of data augmentation by 6.4 percentage points. We also performed a second data augmentation experiment by varying only the number of bounding boxes with the top EdgeBox scores (instead of using IoU) for every image label, to understand how the amount of augmented data affects test time performance. At test time, we use the 50% overlap criterion for successful localization. Table 4 shows the mAP performance over the same 10 object categories in ImageNet as a function of the number of augmented data annotations per image. The augmented data annotations are selected from a ranked list of EdgeBox proposals from each image. The performance of SSVM+VGG tops out at 8 augmented data annotations and is still 12% below our proposed approach (94.2% mAP).
5.3. Detection Performance Comparison

We now address the object detection task of jointly locating and recognizing the category of an unknown object, to evaluate the performance of our approach on a harder task. In order for a model to obtain a correct result, the predictor must output the correct category label and also generate a bounding box that overlaps with the ground truth by at least 70% IoU. For our baseline models, SSVM+VGG and Softmax+VGG, we use the best performing data augmentation scheme from Table 3, which includes EdgeBox proposals that have a 70% IoU threshold with the original ground truth annotation.
Figure 4. The top five augmented bounding boxes (left to right) under the different augmentation strategies (rows: Adversarial Boxes (proposed); Top-K EdgeBoxes by score; Top-K EdgeBoxes by IoU). Our adversarial augmentation approach generates much more diverse bounding boxes to augment the training data than choosing solely based on IoU, yet the boxes are at the same time still relevant to the target object, unlike general EdgeBoxes.

Table 6. ADA generalization across deep architectures. VOC2007 mAP for IoU > 0.5.
Model               | Aero | Bike | Bird | Boat | Bottle | Bus  | Car  | Cat  | Chair | Cow  | Table | Dog  | Horse | Mbike | Person | Plant | Sheep | Sofa | Train | TV   | mAP
ADA+VGG16           | 68.5 | 71.5 | 67.8 | 63.3 | 48.6   | 76.5 | 78.8 | 80.9 | 50.9  | 78.5 | 64.5  | 79.6 | 71.8  | 73.2  | 66.4   | 30.2  | 70.6  | 72.6 | 80.8  | 62.8 | 67.9
SSVM+VGG16          | 73.6 | 76.4 | 63.7 | 46.1 | 44.0   | 76.0 | 78.4 | 80.0 | 41.6  | 74.2 | 62.8  | 79.8 | 78.0  | 72.5  | 64.3   | 35.0  | 67.2  | 67.2 | 70.8  | 71.4 | 66.1
SVM+VGG16 [5]       | 73.4 | 77.0 | 63.4 | 45.4 | 44.6   | 75.1 | 78.1 | 79.8 | 40.5  | 73.7 | 62.2  | 79.4 | 78.1  | 73.1  | 64.2   | 35.6  | 66.8  | 67.2 | 70.4  | 71.1 | 66.0
ADA+AlexNet fc7     | 62.4 | 70.0 | 63.6 | 63.0 | 44.8   | 72.2 | 75.5 | 79.5 | 44.6  | 81.6 | 64.0  | 81.5 | 70.2  | 68.5  | 71.4   | 69.5  | 65.0  | 71.2 | 81.4  | 59.8 | 68.0
SSVM+AlexNet fc7    | 68.2 | 72.9 | 57.3 | 44.2 | 41.8   | 66.0 | 74.3 | 69.2 | 34.6  | 54.7 | 54.3  | 61.3 | 69.8  | 68.7  | 58.5   | 34.6  | 63.6  | 52.5 | 62.6  | 63.5 | 58.6
SVM+AlexNet fc7 [5] | 68.1 | 72.8 | 56.8 | 43.0 | 36.8   | 66.3 | 74.2 | 67.6 | 34.4  | 63.5 | 54.5  | 61.2 | 69.1  | 68.6  | 58.7   | 33.4  | 62.9  | 51.1 | 62.5  | 64.8 | 58.5
ADA+ResNet101       | 76.4 | 74.8 | 72.4 | 64.0 | 52.5   | 84.0 | 81.9 | 86.0 | 48.5  | 83.5 | 64.8  | 82.0 | 73.5  | 77.0  | 72.4   | 36.6  | 74.4  | 74.8 | 81.4  | 65.6 | 71.3
SSVM+ResNet101      | 68.0 | 70.2 | 69.3 | 54.3 | 46.5   | 76.2 | 78.8 | 85.0 | 46.8  | 80.2 | 63.2  | 78.1 | 69.5  | 71.4  | 61.8   | 36.8  | 68.1  | 69.1 | 73.6  | 64.5 | 66.5
Table 5 shows the object detection performance when evaluated at the 70% IoU threshold for correctness. We again find strong support for our adversarial approach to dealing with uncertainty. Specifically, we find that ADA provides the best performance for all object classes. Though the relative performance advantage differs by object type, for classes like Dog, the improvement over the other approaches is nearly double. On average, ADA provides a significant performance improvement of 15.8 percentage points on this task over the strongest performing SSVM+VGG baseline.
5.4. Generalization Across Deep Features

To understand how our proposed adversarial loss function, ADA, generalizes across various deep architectures, we compare mAP over 20 categories from the VOC2007 dataset for several combinations of deep architecture using our proposed adversarial loss function. In the pre-processing step of these experiments, the box proposals of every image are extracted using EdgeBox. These box proposals are employed as input to the respective deep networks. The CNN features are extracted from the last convolutional layer of the respective deep network, and these box proposals and their CNN features are employed in training the respective classifier. We report the generalized version of ADA and SSVM with AlexNet [14], VGG, and ResNet101 [12]. For the VOC 2007 dataset, we use 5000 training images and 4952 testing images.

6. Conclusions

In this paper, we have developed a game-theoretic formulation for data augmentation that perturbs image annotations adversarially. This provides robustness in the learned predictor that is achieved by training from a number of augmentations that are adaptively selected to be difficult,
while still approximating the ground truth annotation. We demonstrated the benefits for object localization and detection using experiments over ten different object classes for the ILSVRC2012 dataset and twenty different object classes for the VOC2007 dataset, showing significant improvements for our approach under 50% and 70% thresholded IoU evaluation measures.
References

[1] K. Asif, W. Xing, S. Behpour, and B. D. Ziebart. Adversarial cost-sensitive classification. In Proceedings of the Conference on UAI, 2015.
[2] S. Boyd and L. Vandenberghe. Convex optimization. Cambridge University Press, 2004.
[3] K. Chatfield, K. Simonyan, A. Vedaldi, and A. Zisserman. Return of the devil in the details: Delving deep into convolutional nets. arXiv preprint arXiv:1405.3531, 2014.
[4] J. Duchi, E. Hazan, and Y. Singer. Adaptive subgradient methods for online learning and stochastic optimization. Journal of Machine Learning Research, 12(Jul):2121-2159, 2011.
[5] R. Girshick, J. Donahue, T. Darrell, and J. Malik. Rich feature hierarchies for accurate object detection and semantic segmentation. arXiv:1311.2524, 2014.
[6] R. Gokberk Cinbis, J. Verbeek, and C. Schmid. Multi-fold MIL training for weakly supervised object localization. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2409-2416, 2014.
[7] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio. Generative adversarial nets. In Advances in Neural Information Processing Systems, pages 2672-2680, 2014.
[8] P. D. Grünwald and A. P. Dawid. Game theory, maximum entropy, minimum discrepancy, and robust Bayesian decision theory. Annals of Statistics, 32:1367-1433, 2004.
[9] C. Guo, M. Rana, M. Cisse, and L. van der Maaten. Countering adversarial images using input transformations. arXiv preprint arXiv:1711.00117, 2017.
[10] Gurobi Optimization, Inc. Gurobi optimizer reference manual, 2016.
[11] K. He, X. Zhang, S. Ren, and J. Sun. Spatial pyramid pooling in deep convolutional networks for visual recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence, 37(9):1904-1916, 2015.
[12] K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 770-778, 2016.
[13] S. Joon Oh, M. Fritz, and B. Schiele. Adversarial image perturbation for privacy protection: a game theory perspective. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1482-1491, 2017.
[14] A. Krizhevsky, I. Sutskever, and G. E. Hinton. ImageNet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems, pages 1097-1105, 2012.
[15] C. L. Zitnick and P. Dollár. Edge boxes: Locating object proposals from edges. In European Conference on Computer Vision (ECCV), September 2014.
[16] I. Masi, A. T. Tran, T. Hassner, J. T. Leksut, and G. Medioni. Do we really need to collect millions of faces for effective face recognition? In European Conference on Computer Vision, pages 579-596. Springer, 2016.
[17] H. B. McMahan, G. J. Gordon, and A. Blum. Planning in the presence of cost functions controlled by an adversary. In Proceedings of the International Conference on Machine Learning, pages 536-543, 2003.
[18] S.-M. Moosavi-Dezfooli, A. Fawzi, O. Fawzi, and P. Frossard. Universal adversarial perturbations. arXiv preprint arXiv:1610.08401, 2016.
[19] S. Nowak and S. Rüger. How reliable are annotations via crowdsourcing: a study about inter-annotator agreement for multi-label image annotation. In Proceedings of the International Conference on Multimedia Information Retrieval, pages 557-566. ACM, 2010.
[20] M. Paulin, J. Revaud, Z. Harchaoui, F. Perronnin, and C. Schmid. Transformation pursuit for image classification. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3646-3653, 2014.
[21] S. Reed, H. Lee, D. Anguelov, C. Szegedy, D. Erhan, and A. Rabinovich. Training deep neural networks on noisy labels with bootstrapping. arXiv preprint arXiv:1412.6596, 2014.
[22] C. Sagonas, G. Tzimiropoulos, S. Zafeiriou, and M. Pantic. A semi-automatic methodology for facial landmark annotation. In The IEEE Conference on CVPR Workshops, June 2013.
[23] J. M. Saragih, S. Lucey, and J. F. Cohn. Deformable model fitting by regularized landmark mean-shift. International Journal of Computer Vision, 91(2):200-215, 2011.
[24] K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014.
[25] H. Su, J. Deng, and L. Fei-Fei. Crowdsourcing annotations for visual object detection. In Workshops at the Twenty-Sixth AAAI Conference on Artificial Intelligence, 2012.
[26] S. Sukhbaatar and R. Fergus. Learning from noisy labels with deep neural networks. arXiv preprint arXiv:1406.2080, 2(3):4, 2014.
[27] C. Szegedy, S. Ioffe, and V. Vanhoucke. Inception-v4, Inception-ResNet and the impact of residual connections on learning. arXiv preprint arXiv:1602.07261, 2016.
[28] L. Taylor and G. Nitschke. Improving deep learning using generic data augmentation. arXiv preprint arXiv:1708.06020, 2017.
[29] F. Tramèr, A. Kurakin, N. Papernot, D. Boneh, and P. McDaniel. Ensemble adversarial training: Attacks and defenses. arXiv preprint arXiv:1705.07204, 2017.
[30] P. Upchurch, D. Sedra, A. Mullen, H. Hirsh, and K. Bala. Interactive consensus agreement games for labeling images. In Fourth AAAI Conference on Human Computation and Crowdsourcing, 2016.
[31] A. Vedaldi. A MATLAB wrapper of SVMstruct. http://www.vlfeat.org/~vedaldi/code/svm-struct-matlab, 2011.
[32] H. Wang, W. Xing, K. Asif, and B. Ziebart. Adversarial prediction games for multivariate losses. In Advances in Neural Information Processing Systems, pages 2710-2718, 2015.
[33] P. Welinder, S. Branson, P. Perona, and S. J. Belongie. The multidimensional wisdom of crowds. In Advances in Neural Information Processing Systems, pages 2424-2432, 2010.
[34] T. Xiao, T. Xia, Y. Yang, C. Huang, and X. Wang. Learning from massive noisy labeled data for image classification. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2691-2699, 2015.
[35] X. Zhu and D. Ramanan. Face detection, pose estimation, and landmark localization in the wild. In CVPR, 2012 IEEE Conference on, pages 2879-2886. IEEE, 2012.