Draft, Sept. 2016.
Rain structure transfer using an exemplar rain image for synthetic rain image generation

Chang-Hwan Son and Xiao-Ping Zhang
{[email protected]; [email protected]}
Ryerson University, Toronto, Canada

Abstract: This letter proposes a simple method for transferring the rain structures of a given exemplar image (rain image) into a target image (non-rain image). Given the exemplar rain image and its corresponding masked rain image, rain patches containing real-life rain structures are extracted randomly, and residual rain patches are then obtained by subtracting the mean patch from each rain patch. Next, residual rain patches are selected randomly and added to the given target image along a raster scanning direction. To reduce boundary artifacts around the added patches on the target image, minimum error boundary cuts are found using dynamic programming, and blending is conducted between overlapping patches. Our experiments show that the proposed method can generate realistic rain images with rain structures similar to those in the exemplar images. Moreover, the proposed method is expected to be useful for rain removal: non-rain images and the synthetic rain images generated via the proposed method can be used to train classifiers (e.g., deep neural networks) in a supervised manner.

Keywords: Structure transfer, dynamic programming, supervised learning, rain removal, image-based rendering, feature descriptors

1. Introduction: Rain forms structures in captured images, and these rain structures can prevent computer vision algorithms (e.g., car/sign detection, visual saliency, scene parsing) from working effectively [1]. Most computer vision algorithms depend on feature descriptors such as the scale invariant feature transform (SIFT) [2] and histograms of oriented gradients (HOG) [3]. These descriptors are built from gradient magnitudes and orientations, so rain structures can corrupt the extracted features (see the short sketch at the end of this section). For this reason, rain removal is a necessary tool [4]. However, learning classifiers (e.g., deep neural networks) in a supervised manner requires collecting pairs of rain and clean patches, which in turn requires synthetic rain image generation. Various methods [5-9] transfer the colors, textures, or styles of exemplar images into target images. Different from these conventional methods, this letter presents a rain structure transfer method.
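As a concrete illustration of the point about gradient-based descriptors, the hypothetical Python/NumPy sketch below (not part of the original letter's implementation; all names are illustrative) computes the per-pixel gradient magnitude and orientation from which SIFT and HOG are built; a single bright streak added to a flat patch creates strong spurious gradients that would dominate any local orientation histogram.

```python
import numpy as np

def gradient_mag_ori(img):
    # Per-pixel gradient magnitude and orientation: the raw quantities
    # from which SIFT [2] and HOG [3] descriptors are built.
    gy, gx = np.gradient(img.astype(np.float64))
    mag = np.hypot(gx, gy)
    ori = np.arctan2(gy, gx)  # orientation in radians, range [-pi, pi]
    return mag, ori

# A flat gray patch: no structure, so gradient magnitudes are all zero.
clean = np.full((32, 32), 0.5)

# The same patch with a hypothetical bright vertical rain streak.
rainy = clean.copy()
rainy[:, 15:17] += 0.4

mag_clean, _ = gradient_mag_ori(clean)
mag_rainy, ori_rainy = gradient_mag_ori(rainy)
print("max gradient magnitude, clean:", mag_clean.max())  # 0.0
print("max gradient magnitude, rainy:", mag_rainy.max())  # > 0 at streak edges
# The streak's edges inject strong gradients at a single orientation, which
# would dominate the orientation histograms of a HOG-style descriptor.
```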
2. Proposed Algorithm: The detailed procedure for generating synthetic rain images is as follows (a code sketch of the main loop is given after the list):
- Prepare an exemplar rain image and a target non-rain image.
- Define the rain regions in the exemplar image using a brush tool in Adobe Photoshop.
- Extract rain patches from the masked rain regions of the exemplar image.
- Repeat the following steps:
  o Select one rain patch randomly from the extracted rain patches.
  o Obtain the residual rain patch from the selected rain patch with the following equation: P_o = P_r - mean(P_r), where P_r is the selected rain patch and mean() is a function that produces a mean vector from an input vector.
  o Add the residual rain patch (P_o) to the non-rain patch (P_t) at the (i, j) block position in the target image, thereby producing the synthetic rain patch (P_s = P_o + P_t).
  o Find the minimum error boundary cut [8] using dynamic programming for the rain patch (P_r), and then blend the overlapping areas between the synthetic rain patches along the minimum error boundary cut.
  o Move to the next block position in the target image (raster scanning order).
- Until the current block position (i, j) reaches the last block position.
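The following is a minimal sketch of the loop above in Python/NumPy; the original implementation is in Matlab and is not reproduced here. The function name transfer_rain is an illustrative assumption, and for brevity the sketch feathers the patch overlaps linearly instead of computing the minimum error boundary cut of [8].

```python
import numpy as np

rng = np.random.default_rng(0)

def transfer_rain(target, rain_patches, patch=24, overlap=4):
    # Tile residual rain patches over a grayscale target in raster order.
    # rain_patches: list of (patch, patch) arrays cut from the masked rain
    # regions of the exemplar image; target: 2-D array with values in [0, 1].
    base = target.astype(np.float64)
    out = base.copy()
    step = patch - overlap
    h, w = base.shape
    for i in range(0, h - patch + 1, step):          # raster scan over blocks
        for j in range(0, w - patch + 1, step):
            p_r = rain_patches[rng.integers(len(rain_patches))]
            p_o = p_r - p_r.mean()                   # residual: P_o = P_r - mean(P_r)
            p_s = base[i:i+patch, j:j+patch] + p_o   # synthetic: P_s = P_o + P_t
            # Feather the top/left overlap strips against what is already
            # placed (a simple stand-in for the minimum error boundary cut).
            weight = np.ones((patch, patch))
            if i > 0:
                weight[:overlap, :] *= np.linspace(0.0, 1.0, overlap)[:, None]
            if j > 0:
                weight[:, :overlap] *= np.linspace(0.0, 1.0, overlap)[None, :]
            region = out[i:i+patch, j:j+patch]
            out[i:i+patch, j:j+patch] = (1.0 - weight) * region + weight * p_s
    return np.clip(out, 0.0, 1.0)
```

Given a grayscale target with values in [0, 1] and a list of rain patches cut from the masked exemplar, rainy = transfer_rain(target, rain_patches) yields a synthetic rain image.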
3. Experimental Results: To find the minimum error boundary cut [8], we used the source code provided by its authors. The proposed method was implemented in Matlab (R2016a) and tested on Windows 7. As shown in Figs. 1-4, the proposed method can generate realistic rain images by transferring the real-life rain structures of the exemplar images into the target images.
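For completeness, below is a self-contained sketch, not the authors' released code, of the dynamic-programming step of [8] that finds a minimum error boundary cut through the squared-error surface of a vertical overlap strip; the function name and interface are illustrative assumptions.

```python
import numpy as np

def min_error_boundary_cut(overlap_a, overlap_b):
    # Minimum error boundary cut through a vertical overlap strip, in the
    # style of image quilting [8]. overlap_a / overlap_b are the (h, w)
    # strips where two patches overlap; returns one cut column per row.
    err = (overlap_a.astype(np.float64) - overlap_b.astype(np.float64)) ** 2
    h, w = err.shape
    cost = err.copy()
    # Forward pass: each cell accumulates the cheapest path from the top,
    # moving at most one column left or right per row.
    for r in range(1, h):
        for c in range(w):
            lo, hi = max(c - 1, 0), min(c + 2, w)
            cost[r, c] += cost[r - 1, lo:hi].min()
    # Backtrack from the cheapest cell in the bottom row.
    cut = np.empty(h, dtype=int)
    cut[-1] = int(np.argmin(cost[-1]))
    for r in range(h - 2, -1, -1):
        c = cut[r + 1]
        lo, hi = max(c - 1, 0), min(c + 2, w)
        cut[r] = lo + int(np.argmin(cost[r, lo:hi]))
    return cut  # pixels left of cut[r] come from patch A, right from patch B
```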
Fig. 1. Resulting images: (a) an exemplar 'rain' image, (b) masked 'rain' image, (c) target 'non-rain' image, (d) synthetic rain image generated via the proposed rain structure transfer, (e) target 'non-rain' image, and (f) synthetic rain image generated via the proposed rain structure transfer.
Fig. 2. Resulting images: (a) an exemplar 'rain' image, (b) scribbled 'rain' image, (c) target 'non-rain' image, and (d) synthetic rain image generated via the proposed rain structure transfer.
Fig. 3. Resulting images: (a) an exemplar 'rain' image, (b) masked 'rain' image, (c) target 'non-rain' image, and (d) synthetic rain image generated via the proposed rain structure transfer.
Fig. 4. Resulting images: (a) an exemplar 'rain' image, (b) scribbled 'rain' image, and (c) synthetic rain image generated via the proposed rain structure transfer.

4. Implementation for rain removal: Most regression models with learned parameters (e.g., deep neural networks) are trained on patch pairs. For this purpose, blending the overlapping areas between synthetic rain patches is unnecessary: to learn such a model, only the extracted non-rain patches and their corresponding synthetic rain patches are needed (a sketch of this pair-generation step is given below).
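As noted in Section 4, the training data can be generated without any blending step: each clean patch is paired directly with its synthetic rain patch. A minimal sketch under the same assumptions as the earlier snippets (illustrative function name make_patch_pairs, grayscale images in [0, 1]) could be:

```python
import numpy as np

rng = np.random.default_rng(0)

def make_patch_pairs(target, rain_patches, patch=24, n_pairs=10000):
    # Build (clean, rainy) patch pairs for supervised rain removal.
    # No boundary-cut blending is needed: each clean patch P_t is paired
    # directly with its synthetic rain patch P_s = P_t + (P_r - mean(P_r)).
    h, w = target.shape
    clean = np.empty((n_pairs, patch, patch))
    rainy = np.empty((n_pairs, patch, patch))
    for k in range(n_pairs):
        i = int(rng.integers(h - patch + 1))
        j = int(rng.integers(w - patch + 1))
        p_t = target[i:i+patch, j:j+patch].astype(np.float64)
        p_r = rain_patches[rng.integers(len(rain_patches))]
        clean[k] = p_t
        rainy[k] = np.clip(p_t + (p_r - p_r.mean()), 0.0, 1.0)
    return clean, rainy  # e.g., inputs/targets for a denoising-style network
```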
5. Conclusion: In this letter, a rain structure transfer method is proposed. Exemplar rain images are used to transfer real-life rain structures into given target images. Our experiments show that the proposed method produces realistic rain images whose rain structures resemble those of the given exemplar rain images.

References
[1] L.-W. Kang, C.-W. Lin, and Y.-H. Fu, "Automatic single-image-based rain streaks removal via image decomposition," IEEE Transactions on Image Processing, vol. 21, no. 4, pp. 1742-1755, Apr. 2012.
[2] D. G. Lowe, "Distinctive image features from scale-invariant keypoints," International Journal of Computer Vision, vol. 60, no. 2, pp. 91-110, Nov. 2004.
[3] N. Dalal and B. Triggs, "Histograms of oriented gradients for human detection," in Proc. IEEE Conference on Computer Vision and Pattern Recognition, San Diego, CA, Jun. 2005, pp. 886-893.
[4] C.-H. Son and X.-P. Zhang, "Rain removal via shrinkage of sparse codes and learned rain dictionary," in Proc. IEEE International Conference on Multimedia & Expo Workshops, Jul. 2016.
[5] C.-H. Son, X.-P. Zhang, and K. Lee, "Near-infrared coloring via a contrast preserving mapping model," in Proc. IEEE Global Conference on Signal and Information Processing, Orlando, FL, Dec. 2015, pp. 677-681.
[6] E. Reinhard, M. Ashikhmin, B. Gooch, and P. Shirley, "Color transfer between images," IEEE Computer Graphics and Applications, vol. 21, no. 5, pp. 34-41, Sep./Oct. 2001.
[7] C.-H. Son, C.-H. Lee, K.-H. Park, and Y.-H. Ha, "Real-time color matching between camera and LCD based on 16-bit LUT design in mobile phone," Journal of Imaging Science and Technology, vol. 51, no. 4, pp. 348-359, Jul./Aug. 2007.
[8] A. A. Efros and W. T. Freeman, "Image quilting for texture synthesis and transfer," in Proc. ACM SIGGRAPH, Aug. 2001, pp. 341-346.
[9] Y. Shih, S. Paris, C. Barnes, W. T. Freeman, and F. Durand, "Style transfer for headshot portraits," ACM Transactions on Graphics, vol. 33, no. 4, Jul. 2014.