2015 IEEE International Conference on Signal and Image Processing Applications (ICSIPA)

Background Modelling by Codebook Technique for Automated Video Surveillance with Shadow Removal

Jharana Rani Rabha, Member, IEEE

Abstract—Background modelling is a technique for object detection that models the background of a video sequence. It is applied for monitoring in surveillance applications and must be robust enough to detect objects under difficult conditions. The internal model should be sensitive enough to detect all moving objects while coping with the technique's challenging aspects: local and global illumination variation, motion changes, the presence of shadows, camouflage, changes in background geometry and high-frequency background variation. The basic principle of background modelling is background subtraction, through which motion segmentation is done. During motion segmentation, obtaining an accurate shape of the object is one of the prime challenges. When light is occluded by an object, a shadow forms; during motion segmentation the shadow is detected as part of the foreground object, which is disastrous for the motion algorithm. In this work we propose a background modelling technique with shadow removal for an automated video surveillance system. The algorithm has been tested on different datasets, and the experimental results validate the implemented algorithms.

I. INTRODUCTION-I

An automated video surveillance system is a monitoring system for busy environments; detection of important targets, for instance in a war zone, is essential for military surveillance. Surveillance systems work on the principle of background subtraction. A computer vision system is the machine equivalent of human visual perception: it uses a computer to extract high-level information from digital images. The principle of background subtraction is to subtract the current frame from the upcoming frames. The usual surveillance topology is a fixed CCD camera acquiring live video for analysis; fig. 1 shows an illustration of such a surveillance plan. The video is then digitized for the detection work, which the system can carry out without the boredom and detrimental effects of human observation. The video surveillance system uses hybrid colour and texture information, and sequences of images need to be processed for detection. Segmentation is one of the main operations in the system, and a shadow removal technique is used to correct moving-object segmentation errors so that the final segmentation result is correct. Accurate detection therefore requires good segmentation on top of a good background model.

Fig. 1. An illustration of an Institute campus with surveillance.

This paper is divided into two parts. In Part A, the codebook background modelling technique of Kim et al. [1] is described for background subtraction and modelling. In Part B, we describe a shadow removal algorithm. Both algorithms are implemented as integral parts of a single vision system. The detection results of the codebook algorithm are also compared with another background subtraction model, the Visual Background Extractor (ViBe).

II. METHODOLOGY-I

A. Background Modelling by Codebook Technique

Consecutive frames are snapshots of a dynamic scene with largely homogeneous pixel features. The objective of this algorithm is to extract the foreground objects by modelling the background of a particular video sequence with the codebook technique. In this project we first implemented and experimented with the codebook algorithm and then worked on its drawbacks; the resulting outcomes show that some criteria still remain to be covered.

B. Construction of the Codebook

During training, let {r_1, r_2, ..., r_N} be the samples of a single pixel over N frames. The codebook of that pixel is C = {c_1, c_2, c_3, ..., c_L}, a set of L codewords; how much variation the model captures depends on the size L. Each codeword c_i, 1 ≤ i ≤ L, holds brightness and intensity values together with a set of temporal variables:

   u_i = (R̄_i, Ḡ_i, B̄_i): the mean of the R, G and B values of the pixels matched to that codeword;

   Br_i = ⟨I_min(i), I_max(i)⟩: the minimum and maximum brightness (intensity) values among the pixels matched to that codeword;

   τ_i = ⟨f_i, λ_i, p_i, q_i⟩: the temporal variables of the codeword, where
      f = the frequency with which the codeword occurred,
      λ = the maximum time interval during training in which the codeword did not reoccur (its negative run-length),
      p, q = the first and last access times of the codeword during training.
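For concreteness, here is a minimal sketch of this per-pixel codeword record in Python; the field names and the list-based codebook are illustrative choices, not from the paper:

    from dataclasses import dataclass
    from typing import Tuple

    @dataclass
    class Codeword:
        """One codeword c_i of a per-pixel codebook."""
        u: Tuple[float, float, float]  # u_i = (R_i, G_i, B_i), mean RGB of matched pixels
        i_min: float                   # Br_i lower value: minimum matched brightness I
        i_max: float                   # Br_i upper value: maximum matched brightness I
        f: int                         # f_i: frequency, number of matched samples
        lam: int                       # lambda_i: maximum negative run-length
        p: int                         # p_i: first access time during training
        q: int                         # q_i: last access time during training

    # The codebook C of a pixel is a plain list of such codewords: C = [c_1, ..., c_L].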


During training, each sample r_t = (R, G, B) at time t is compared with the codewords in the pixel's codebook. The match is evaluated in terms of colour distortion and brightness (intensity), using the colour model described in subsection D. If a matching codeword exists, its values are updated; if no match is found, a new codeword is added to the codebook.

C. Codebook Algorithm

1. L ← 0, C ← ∅ (an empty set)

2. For t = 1 to N do

   i. r_t = (R, G, B), I = R + G + B

   ii. Find the codeword c_m in C = {c_i | 1 ≤ i ≤ L} matching r_t based on two conditions (a) and (b):

      (a) colordist(r_t, u_i) ≤ ε1

      (b) brightness(I, ⟨I_min(i), I_max(i)⟩) = true

      where ε1 is the sampling threshold.

   iii. If C = ∅ or no match is found, then L ← L + 1 and create a new codeword c_L by setting

      u_L = (R, G, B)
      Br_L = ⟨I, I⟩
      τ_L = ⟨1, t − 1, t, t⟩

   iv. Otherwise, update the matched codeword c_m, which consists of u_m = (R̄_m, Ḡ_m, B̄_m), Br_m = ⟨I_min(m), I_max(m)⟩ and τ_m = ⟨f_m, λ_m, p_m, q_m⟩, by setting

      u_m = ((f_m R̄_m + R)/(f_m + 1), (f_m Ḡ_m + G)/(f_m + 1), (f_m B̄_m + B)/(f_m + 1))   (1)

      Br_m = ⟨min{I, I_min(m)}, max{I, I_max(m)}⟩   (2)

      τ_m = ⟨f_m + 1, max{λ_m, t − q_m}, p_m, t⟩   (3)

   end for

3. For each codeword c_i, i = 1, 2, 3, ..., L, wrap around λ_i by setting

      λ_i ← max{λ_i, N − q_i + p_i − 1}   (4)
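The training loop above can be summarized in a short Python sketch. It relies on the Codeword record sketched earlier and on the colordist() and brightness() tests of subsection D (sketched there); the threshold EPS1 is an assumed value, since the paper does not state one:

    EPS1 = 10.0  # sampling threshold epsilon_1 (assumed value, not given in the paper)

    def train_codebook(samples):
        """Build one pixel's codebook from its N training samples r_1..r_N (steps 1-3)."""
        C = []                                             # step 1: L <- 0, C <- empty set
        N = len(samples)
        for t, (R, G, B) in enumerate(samples, start=1):   # step 2
            I = R + G + B                                  # step i
            match = next((cw for cw in C                   # step ii: conditions (a), (b)
                          if colordist((R, G, B), cw.u) <= EPS1
                          and brightness(I, cw.i_min, cw.i_max)), None)
            if match is None:                              # step iii: create codeword c_L
                C.append(Codeword(u=(R, G, B), i_min=I, i_max=I,
                                  f=1, lam=t - 1, p=t, q=t))
            else:                                          # step iv: update via eqs. (1)-(3)
                f = match.f
                match.u = tuple((f * u + x) / (f + 1)      # eq. (1): running RGB mean
                                for u, x in zip(match.u, (R, G, B)))
                match.i_min = min(I, match.i_min)          # eq. (2): brightness bounds
                match.i_max = max(I, match.i_max)
                match.lam = max(match.lam, t - match.q)    # eq. (3): temporal variables
                match.f, match.q = f + 1, t
        for cw in C:                                       # step 3: wrap-around lambda, eq. (4)
            cw.lam = max(cw.lam, N - cw.q + cw.p - 1)
        return C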

D. Color and Brightness Distortion

The codebook algorithm deals with local and global illumination changes, whether they happen naturally or artificially. The colour distortion measure works on the basis of colour normalization. Under normalization, dark pixels carry the highest uncertainty, which makes dark regions unstable: false detections appear as clusters in or around the dark regions. We therefore need to know how the values of those pixels change under varying illumination. As the lighting conditions vary, a pixel's brightness decreases or increases over time, so the observed values of a pixel are distributed in an elongated shape directed towards the origin (0, 0, 0) of the RGB axes.

Separated background pixel values lie along the axis of their codeword, bounded by low and high limits on intensity (brightness).

Consider an input pixel r_t = (R, G, B) and a codeword c_i, where

      u_i = (R̄_i, Ḡ_i, B̄_i)   (5)

      ||r_t||² = R² + G² + B²,  ||u_i||² = R̄_i² + Ḡ_i² + B̄_i²   (6)

      ⟨r_t, u_i⟩² = (R̄_i R + Ḡ_i G + B̄_i B)²   (7)

Hence, considering the projection p of r_t onto the axis of u_i,

      p² = ||r_t||² cos²θ = ⟨r_t, u_i⟩² / ||u_i||²   (8)

and substituting this p value, the colour distortion δ is

      colordist(r_t, u_i) = δ = √(||r_t||² − p²)   (9)

In this algorithm the allowed brightness change is kept within the limits [I_low, I_hi]; a pixel whose codeword is updated within these limits remains stable. The bounds are defined as

      I_low = α I_max   (10)

      I_hi = min(β I_max, I_min/α)   (11)

where α < 1 < β. Hence, the brightness bound is evaluated through

      brightness(I, ⟨I_min, I_max⟩) = true,  if I_low ≤ I ≤ I_hi
                                      false, otherwise   (12)
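A direct transcription of eqs. (5)-(12) into Python might look as follows; ALPHA and BETA are assumed values chosen only to satisfy α < 1 < β:

    import math

    ALPHA, BETA = 0.5, 1.2  # assumed values with alpha < 1 < beta; the paper gives none

    def colordist(r, u):
        """Colour distortion delta between sample r_t and codeword mean u_i, eqs. (5)-(9)."""
        R, G, B = r
        Ru, Gu, Bu = u
        norm_r2 = R * R + G * G + B * B            # ||r_t||^2, eq. (6)
        norm_u2 = Ru * Ru + Gu * Gu + Bu * Bu      # ||u_i||^2, eq. (6)
        if norm_u2 == 0:                           # degenerate codeword at the origin
            return math.sqrt(norm_r2)
        dot2 = (R * Ru + G * Gu + B * Bu) ** 2     # <r_t, u_i>^2, eq. (7)
        p2 = dot2 / norm_u2                        # p^2 = ||r_t||^2 cos^2(theta), eq. (8)
        return math.sqrt(max(norm_r2 - p2, 0.0))   # delta = sqrt(||r_t||^2 - p^2), eq. (9)

    def brightness(I, i_min, i_max):
        """Brightness test of eq. (12) with the bounds of eqs. (10)-(11)."""
        i_low = ALPHA * i_max                      # I_low = alpha * I_max, eq. (10)
        i_hi = min(BETA * i_max, i_min / ALPHA)    # I_hi = min(beta*I_max, I_min/alpha), eq. (11)
        return i_low <= I <= i_hi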

After building the codebook we need to refine it, since it contains codewords for all the training images, including the moving foreground and noise. This is done through temporal filtering, in which only codewords representing the true background are allowed to remain. The quantity used is the negative run-length λ created during training: the maximum interval of time, or gap, before a codeword reoccurs within the training period. Hence, after temporal filtering the codebook is

      C_tf = {c_L | c_L ∈ C ∧ λ_L ≤ M_ψ}   (13)

where M_ψ is set to half of the number of training samples, i.e.

      M_ψ ≤ N/2   (14)
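As a sketch, the temporal filtering of eqs. (13)-(14) is a one-line filter over the trained codebook, reusing the Codeword record from above:

    def temporal_filter(C, N):
        """Keep only codewords whose maximum negative run-length lambda does not
        exceed M_psi = N/2, eqs. (13)-(14); the rest are foreground or noise."""
        M_psi = N // 2
        return [cw for cw in C if cw.lam <= M_psi]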


E. Background Subtraction Algorithm

For an input pixel r observed at test time, the subtraction label BS_t(r) is computed as follows.

I. r = (R, G, B), I = R + G + B

II. Considering the filtered codebook C_tf of equation (13), find a codeword c_m matching r based on two conditions:

      colordist(r, u_m) ≤ ε2

      brightness(I, ⟨I_min(m), I_max(m)⟩) = true

   where ε2 is the detection threshold.

III. BS_t(r) = foreground, if there is no match
               background, otherwise   (15)
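Again as a minimal sketch, reusing the helper functions of subsection D; EPS2 is an assumed value:

    EPS2 = 15.0  # detection threshold epsilon_2 (assumed value, not given in the paper)

    def subtract(r, C_tf):
        """Label a pixel r against its temporally filtered codebook C_tf, eq. (15)."""
        R, G, B = r
        I = R + G + B                                  # step I
        for cw in C_tf:                                # step II: look for a match
            if colordist(r, cw.u) <= EPS2 and brightness(I, cw.i_min, cw.i_max):
                return "background"                    # a matching codeword exists
        return "foreground"                            # step III: no match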

III. INTRODUCTION-II

Shadows are usually a local illumination problem, as shown in fig. 2, and they can take any shape and size. The challenging task is understanding the scene, as shown in fig. 3. Static shadows are formed by static objects such as buildings, trees and parked cars; they do not affect a moving-object detection system, because they are modelled as part of the background. Moving shadows, on the other hand, create problems and are harmful to the algorithm; they appear due to moving objects such as vehicles and pedestrians.

Fig. 2. (a) Shadow cast by a moving object (figure courtesy of PET); (b) shadow cast by a static object.

Fig. 3. Moving cast shadows: a challenging issue in scene understanding.

The existence of a shadow in the result can lead to inaccurate detection. The system uses hybrid colour and texture information, so an accurate and robust detection of multiple moving objects during segmentation is important; this is one of the most challenging tasks in a motion algorithm. The real-time challenge is detecting the shadow and correcting it for the motion detection system.

The presence of shadows in an image, as shown in fig. 4, causes distortion in the motion detection algorithm. Shadow correction has therefore been an increasingly addressed problem over the years.

Fig. 4. Motion segmentation: (a) background image, (b) current image and (c) segmented image.

Beyond the identification of the correct object in video surveillance, sports-event interpretation, human-computer interaction, advanced human interfaces and virtual reality are some of the main applications of the shadow removal and correction process.

IV. METHODOLOGY-II

A. Shadow Correction

In [9], Saritha Murali et al. describe a new shadow removal algorithm for single images, with detection performed in the LAB colour space; it is aimed at static-image shadow correction. Initially the RGB image is converted to a LAB image, which is then processed further. Darker pixels, which are less illuminated due to occlusion, are considered shadow candidates, and they are easy to locate in the LAB space because the L channel provides the lightness information. In addition, for outdoor images the B channel values are lower in shadow areas.

Building on this, a frame-level shadow removal algorithm is proposed and implemented as part of the application; the implementation validates the approach and yields a proper shadow correction. The block diagram is shown in fig. 5.

Fig. 5. Block diagram of the shadow removal algorithm.

Suppose a particular live video is acquired from a fixed camera X. Its captured video sequences are Y_x, with Y = {Y_x | 0 ≤ x ≤ N}, and each sequence Y_x consists of frames F = {F_t^x | 0 ≤ x ≤ N} over time t for N frames. The computation is then carried out over all acquired video frames, YF = {Y_x F_t^x | 0 ≤ x ≤ N}.
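A minimal sketch of splitting an acquired video into its frame sequence, assuming OpenCV for capture; the function name and path handling are illustrative:

    import cv2  # assuming OpenCV is used for video capture

    def split_into_frames(video_path):
        """Split an acquired video Y_x into its frame sequence F = {F_t | 0 <= t <= N}
        (step 1 of the algorithm below)."""
        cap = cv2.VideoCapture(video_path)
        frames = []
        while True:
            ok, frame = cap.read()   # read the frame F_t at the next time step
            if not ok:
                break                # end of the sequence
            frames.append(frame)     # frames are returned in BGR channel order
        cap.release()
        return frames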

The algorithm steps are as follows (a code sketch of steps 2-5 is given after the list):

1. Convert the whole video into frames for processing, i.e. split the sequence with respect to time.

2. Convert every RGB frame in the sequence to a LAB image.


3. Compute the mean values of the pixels in the L, A and B planes of each image separately.

4. Check whether mean(A_i) + mean(B_i) ≤ 256, where i | 0 ≤ i ≤ N and N is the number of frames.

5. If so, classify the pixels with a value L_i ≤ (mean(L_i) − standard deviation(L_i)/3) as shadow pixels, and consider the remaining pixels non-shadow pixels.

6. Compute the detection result from the classified L_i plane.
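A per-frame sketch of steps 2-5 in Python, assuming OpenCV; note that the threshold 256 matches OpenCV's 8-bit LAB encoding, in which the A and B channels are offset by 128:

    import cv2
    import numpy as np

    def detect_shadow_mask(frame_bgr):
        """Return a boolean shadow mask for one frame, following steps 2-5 above."""
        lab = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2LAB)          # step 2: RGB -> LAB
        L, A, B = cv2.split(lab.astype(np.float32))
        mean_L, mean_A, mean_B = L.mean(), A.mean(), B.mean()     # step 3: plane means
        if mean_A + mean_B <= 256:                                # step 4
            thresh = mean_L - L.std() / 3                         # step 5 threshold
            return L <= thresh                                    # shadow pixels
        return np.zeros(L.shape, dtype=bool)                      # no pixels classified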
