Multimed Tools Appl DOI 10.1007/s11042-015-3060-0
A video steganography algorithm based on Kanade-Lucas-Tomasi tracking algorithm and error correcting codes

Ramadhan J. Mstafa 1 & Khaled M. Elleithy 1
Received: 26 August 2015 / Accepted: 3 November 2015
© Springer Science+Business Media New York 2015
Abstract Due to the significant growth of video data over the Internet, video steganography has become a popular choice. The effectiveness of any steganographic algorithm depends on its embedding efficiency, embedding payload, and robustness against attackers. The lack of a preprocessing stage, weak security, and low visual quality of stego videos are the major issues of many existing steganographic methods. The preprocessing stage includes the procedure of manipulating both the secret data and the cover videos prior to the embedding stage. In this paper, we address these problems by proposing a novel video steganographic method based on Kanade-Lucas-Tomasi (KLT) tracking using Hamming codes (15, 11). The proposed method consists of four main stages: a) the secret message is preprocessed using Hamming codes (15, 11), producing an encoded message; b) face detection and tracking are performed on the cover videos to determine the region of interest (ROI), defined as the facial regions; c) the encoded secret message is embedded using an adaptive LSB substitution method in the ROIs of the video frames, where 1 LSB, 2 LSBs, 3 LSBs, and 4 LSBs of each facial pixel are utilized to embed 3, 6, 9, and 12 bits of the secret message, respectively; and d) the secret message is extracted from the RGB color components of the facial regions of the stego video. Experimental results demonstrate that the proposed method achieves higher embedding capacity as well as better visual quality of stego videos. Furthermore, the two preprocessing steps increase the security and robustness of the proposed algorithm as compared to state-of-the-art methods.

Keywords Video steganography · Smart video transmission · Hamming codes · Face detection · Smart video tracking · KLT tracking · ROI
* Ramadhan J. Mstafa
[email protected]

Khaled M. Elleithy
[email protected]

1 Department of Computer Science and Engineering, University of Bridgeport, Bridgeport, CT 06604, USA
1 Introduction

Since the Internet has become a popular avenue for people to obtain desired information, attackers can hack intellectual property rights and valuable information of others with little effort [3]. Steganography is defined as the art of hiding data in order to prevent the detection of the secret message; it is the process of concealing valuable information inside other, ordinary data [43]. As a result, the mixed data (the stego object) must resemble its original form. The carrier data is also called the cover data [24, 32] and can take many forms, such as text, audio, image, and video. The secret information is defined as a message, which can likewise be any type of data [23]. The main goal of steganography is to remove any suspicion a hacker might have about the transmission of a secret message; a steganographic algorithm is useless if any suspicion is raised. The human visual system (HVS) cannot recognize a slight distortion in the cover data [37]. However, if the size of the secret message is large compared to the size of the cover data, the distortion will be visible to the naked eye and the steganographic algorithm will be defeated [17]. Figure 1 illustrates the general block diagram of steganography concepts.

Embedding efficiency and embedding payload are the two main factors of any successful steganographic algorithm [9, 15]. Embedding efficiency involves two main questions [28, 31]: 1) How securely does the steganography algorithm embed the secret message inside the cover data? 2) How good is the visual quality of the stego data after the embedding process? An algorithm with high embedding efficiency embeds the message into the cover objects using cryptographic techniques in order to improve the system's security [13, 34]. Stego objects with high quality and a low modification rate appear normal to an attacker and avoid drawing any suspicion to the transmission of hidden data. The more efficient the steganography scheme is, the more difficult it will be for steganalytical detectors to detect the hidden message [12, 33]. In addition, the embedding payload, which is the amount of secret information embedded inside the cover data, is an important factor that any steganography algorithm seeks to increase while maintaining the visual quality of the stego objects. In traditional steganographic schemes, embedding efficiency and embedding payload are in conflict [5, 34]: if the embedding payload is increased, the visual quality of the stego videos decreases, which reduces the algorithm's efficiency. The embedding efficiency of a steganographic scheme therefore depends directly on its embedding payload.
Fig. 1 General block diagram of steganography algorithm
To increase the embedding payload with a low modification rate of the cover data, many steganography algorithms have been proposed using alternative methods. These algorithms use block code and matrix encoding principles, including Bose-Chaudhuri-Hocquenghem (BCH) codes, Hamming codes, cyclic codes, Reed-Solomon codes, and Reed-Muller codes [11, 45]. In addition, robustness is another factor, which measures the steganography algorithm's resistance against signal processing operations and attacks. Signal processing operations include compression, geometric transformation, filtering, and cropping. An algorithm is robust when the receiver side extracts the secret message correctly, without any errors. Highly efficient steganography algorithms are robust against both signal processing operations and noise [27, 44].

Many existing steganographic algorithms are designed without taking into account preprocessing stages, such as selecting the ROI for the embedding process, encrypting, and encoding the secret message, focusing instead on the embedding strategies alone. As a result, these algorithms lack security, robustness, and imperceptibility. In this paper, we address these issues by proposing a new imperceptible steganographic technique that incorporates several preprocessing stages, improving security and robustness. The main contributions of this research work are as follows:

1) A novel video steganography algorithm based on KLT tracking using Hamming codes is proposed, overcoming the limitations of some state-of-the-art steganographic algorithms in terms of security, robustness, and imperceptibility.
2) A portion of the video frames is utilized for the embedding process instead of the entire frames, in the sense that we track the facial regions in the video, leading to improved visual quality of stego videos. Moreover, it is very challenging for attackers to determine the location of the secret message in the video frames, as the secret message is embedded into facial regions only, which change from frame to frame, hence preserving the security of the embedded data.
3) Error correcting codes, namely Hamming codes (15, 11), are used to encode the secret message prior to the embedding process, making the proposed method more secure and robust during the transmission process.
4) The proposed method maintains a reasonable trade-off between visual quality, embedding payload, and robustness, making it more suitable for real-time security systems.

The rest of the paper is organized as follows: Section 2 discusses the background for the paper, which includes some state-of-the-art methods related to our proposed work, Hamming codes, face detection using the Viola-Jones algorithm, and KLT-based face tracking. The proposed method is detailed in Section 3, followed by experimental results and discussion in Section 4. Finally, Section 5 concludes the paper and suggests some future directions.
2 Background

In this section, we briefly present the main concepts necessary to understand our proposed work. First, we review some state-of-the-art methods that are closely related to the present work. Then we discuss Hamming codes (15, 11), one of the main building blocks of the proposed method. Next, the process of face detection based on the Viola-Jones algorithm is presented. Finally, we illustrate the process of face tracking based on the KLT tracking algorithm.
2.1 Related work

Khupse et al. [20] proposed an adaptive video steganography scheme using steganoflage, applied to ROIs of video frames. The authors used human skin color as the cover data for embedding the secret message, with morphological dilation and filling operations used as a skin detector. After the video frames have been converted to the YCbCr color space, the frame with the minimum mean square error is selected for the data embedding process, and only the Cb component of this particular frame is used for embedding the secret message [20]. This scheme is very limited in capacity because only one frame is selected for the data embedding process. Also, the skin detection method would give different results when applied to people with different skin colors.

Alavianmehr et al. [1] proposed a robust lossless data hiding algorithm for raw video based on the histogram distribution constrained (HDC) approach. In this algorithm, the luminance (Y) component of each video frame is separated into blocks, and the secret message is embedded into the luminance blocks by shifting the values of each block's arithmetic differences. The authors state that the algorithm is reversible and robust against video compression [1]. However, this algorithm utilizes only the luminance component to embed the secret message.

Moon et al. [25] proposed a secure video steganography algorithm based on a computer forensic technique. After the video is converted into frames, the secret message is embedded inside the cover frames using the LSB method. The message is encrypted and authenticated using a special key before it is hidden in the 4 LSBs of the cover frames. The authentication key is also embedded into a specific frame known to the receiver side. The purpose of using the computer forensic method is to determine the authenticity of the received videos [25]. This algorithm is not robust enough against video processing and noise because it operates in the spatial domain.

Kelash et al. [19] proposed a video steganography algorithm based on histogram variation. First, video frames are selected for the embedding process based on the histogram constant value (HCV), which is a predefined threshold. A frame whose average histogram variation is larger than the HCV is picked for embedding; otherwise, it is discarded. The selected video frames are divided into blocks, and the differences of consecutive pixels are computed. The process of embedding a secret message into the 3 LSBs of each pixel is controlled by its 4 MSBs [19]. This algorithm is limited because the embedding payload increases only when the HCV value is decreased.

Paul et al. [29] proposed a new video steganography approach that hides secret data inside the video. After the abrupt scene changes throughout the video frames are detected, the secret message is embedded into these detected frames. The histogram difference method is used to determine whether each frame contains an abrupt scene change or not. The embedding process operates in the spatial domain, and the 3-3-2 LSB approach is used to hide the secret message. The steganography algorithm is efficient due to the randomization of the pixel positions [29]. However, the number of frames that contain an abrupt scene change is limited; as a result, the embedding payload becomes low.

Cheddad et al. [6] proposed a skin tone video steganography algorithm based on the YCbCr color space. The YCbCr color space is a useful color transformation used in many techniques such as compression and object detection. The correlation between the three color channels (RGB) is removed, so that the intensity (Y) is separated from the
chrominance components, blue (Cb) and red (Cr). After the human skin regions are detected, only the Cr component of these regions is utilized for embedding the secret message. The algorithm is compared with the F5 and S-Tools steganography algorithms [6]. Overall, the algorithm has a low embedding payload because it embeds the secret message into only the Cr component of the skin region.

Bhole et al. [2] proposed a video steganography algorithm based on a random byte hiding technique. The secret message is randomly embedded into the video frames, using the first frame as an index that contains the control information of the data embedding process. The information hiding process in each row of a frame relies on the first pixel's value: for example, if the first pixel's value in a row is xx, then the message is hidden at locations xx+N of that row, where N is shared only between sender and receiver. In addition, the authors used the LSB method [2]. Any video-related operation will destroy the hidden data because the direct pixel domain is used.

The previously mentioned algorithms lack the robustness to resist attackers, and they offer little flexibility to increase the amount of the secret message while maintaining a reasonable trade-off with visual quality. This paper proposes a novel video steganography algorithm based on the KLT tracking algorithm using Hamming codes (15, 11).
2.2 Hamming codes

In this section, the Hamming codes technique is explained and discussed through a specific Hamming (15, 11) example. Hamming codes are among the most powerful binary linear codes. These codes can detect and correct errors that occur in a binary block of data during the communication between parties [35]. The codeword, which is the result of encoding a message with the Hamming codes technique, includes both the original data and extra data with a minimum amount of redundancy. In general, if p is the number of parity bits, a positive integer with p ≥ 2, then the length of the codeword is n = 2^p − 1, the size of the message that needs to be encoded is k = 2^p − p − 1, and the number of parity bits added to the message is p = n − k, giving a rate of r = k/n [7, 26]. In this paper, Hamming codes (15, 11) are used (n = 15, k = 11, and p = 4), which can identify and correct a single bit error. A message M (m1, m2, …, mk) is encoded by adding p extra parity bits (p1, p2, p3, p4) to become a codeword of 15-bit length, which is then transmitted through a communication channel to the receiver end. The common arrangement of message and parity data in these codes is to place the parity bits at positions 2^i (i = 0, 1, …, p − 1), as follows:

p1, p2, m1, p3, m2, m3, m4, p4, m5, m6, m7, m8, m9, m10, m11    (1)
During the encoding and decoding processes, Hamming codes (15, 11) use the generator matrix G and the parity-check matrix H. At the transmitter, a message M, which consists of 11 bits, is multiplied by the generator matrix G and the result is reduced modulo 2. The 15-bit codeword X is obtained and is ready to be sent:

$X_{(1 \times n)} = M_{(1 \times k)} \, G_{(k \times n)}$    (2)
At the receiver, the encoded data (message + parity), which is a 15-bit codeword R, is received and checked for errors. The received codeword R is multiplied by the parity-check matrix H and the result is reduced modulo 2.
A 4-bit syndrome vector Z (z1, z2, z3, z4) is obtained. If the received message is correct, then Z must be all zeros (0000); otherwise, one or more bits of the received message were flipped during the transmission, and the error correction process must be applied:

$Z_{(1 \times p)} = R_{(1 \times n)} \, H^{T}$    (3)
where

$H = \begin{bmatrix} 1 & 0 & 0 & 0 & 1 & 0 & 0 & 1 & 1 & 0 & 1 & 0 & 1 & 1 & 1 \\ 0 & 1 & 0 & 0 & 1 & 1 & 0 & 1 & 0 & 1 & 1 & 1 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 & 1 & 1 & 0 & 1 & 0 & 1 & 1 & 1 & 1 & 0 \\ 0 & 0 & 0 & 1 & 0 & 0 & 1 & 1 & 0 & 1 & 0 & 1 & 1 & 1 & 1 \end{bmatrix}$
The reason for using parity bits in Hamming codes is to protect the message during communication. In Hamming codes (15, 11), each parity bit is computed from 7 message bits, so each parity group covers 8 bits in total, as illustrated in Fig. 2. Hamming codes (15, 11) are explained through the example below, where the message M1 consists of 11 bits (11111111111), X1 is the transmitted codeword, R1 is the received codeword, and Z1 is the syndrome:

1) Calculate X1 = M1 × G, which gives the vector (777711111111111). By applying modulo 2 to this vector, the 15-bit codeword (111111111111111) is obtained and sent to the destination side.
2) To recover the correct message on the receiver side, the syndrome vector Z1 must be all zeros after taking its modulo 2.
3) For example, if R1 = 111111111111111 is received error-free, then Z1 will be (0000).
4) However, assume the channel is noisy and one of the bits is flipped during the transmission, so that the received codeword R1 = 111111111011111 contains a single bit error. The syndrome Z1 then becomes (0101).
Fig. 2 Venn diagram of the Hamming codes (15, 11)
5) Checking the syndrome Z1 against the parity-check matrix H shows that Z1 (0101) equals the 10th column of H (the 10th row of H^T), which indicates that the 10th bit of R1 was flipped.
6) Upon flipping the 10th bit of R1 from 0 back to 1, R1 is corrected to (111111111111111).
7) Hence, the original 11-bit message M1 (11111111111) is obtained by taking R1 and discarding the first 4 bits (the parity bits).

A minimal code sketch of this encoding and decoding procedure is given below.
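The following Python sketch (NumPy assumed) mirrors the worked example above; it is not the authors' implementation, only an illustration of Hamming (15, 11) encoding and single-error correction using the parity-check matrix H shown earlier.

```python
import numpy as np

# Parity-check matrix H (4x15) as given above: H = [I_4 | A]
H = np.array([
    [1, 0, 0, 0, 1, 0, 0, 1, 1, 0, 1, 0, 1, 1, 1],
    [0, 1, 0, 0, 1, 1, 0, 1, 0, 1, 1, 1, 1, 0, 0],
    [0, 0, 1, 0, 0, 1, 1, 0, 1, 0, 1, 1, 1, 1, 0],
    [0, 0, 0, 1, 0, 0, 1, 1, 0, 1, 0, 1, 1, 1, 1],
], dtype=np.uint8)
A = H[:, 4:]                                        # 4x11 part mixing message bits
G = np.hstack([A.T, np.eye(11, dtype=np.uint8)])    # generator: X = M*G (mod 2)

def encode(m):
    """Encode an 11-bit message into a 15-bit codeword [p1..p4 | m1..m11]."""
    m = np.asarray(m, dtype=np.uint8)
    return (m @ G) % 2                              # Eq. (2), reduced modulo 2

def decode(r):
    """Correct a single bit error and return the 11 message bits."""
    r = np.asarray(r, dtype=np.uint8).copy()
    z = (r @ H.T) % 2                               # Eq. (3): syndrome Z
    if z.any():                                     # non-zero syndrome: one flipped bit
        # the syndrome equals the column of H at the erroneous position
        pos = np.where((H.T == z).all(axis=1))[0][0]
        r[pos] ^= 1                                 # flip it back
    return r[4:]                                    # drop the 4 parity bits (step 7)

# Worked example: all-ones message, 10th bit flipped in transit
m1 = np.ones(11, dtype=np.uint8)
x1 = encode(m1)                                     # fifteen ones
r1 = x1.copy(); r1[9] ^= 1                          # corrupt the 10th bit
assert (decode(r1) == m1).all()                     # syndrome (0,1,0,1) locates and fixes it
```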
2.3 Face detection

To detect the facial area in the first video frame, one of the most powerful and fastest object detection algorithms, the Viola-Jones object detection algorithm, is used. The Viola-Jones detector is chosen for the proposed steganographic algorithm because of its three major contributions. The first contribution is the integral image, a new image representation that allows the selected (Haar-like) features to be computed much faster than in other detectors [41]. The second contribution is building a feature-based classifier using the AdaBoost algorithm. The third contribution is the cascade structure, which combines many complex classifiers [18]. The cascade object detector eliminates unimportant areas, such as the image background, and focuses on the important areas of the image that contain a given object, such as a facial region [39, 42]. Figure 3 shows the process of detecting the facial region in a video frame using the Viola-Jones face detection algorithm.
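As a concrete illustration (not the paper's MATLAB implementation), the sketch below runs OpenCV's pretrained Viola-Jones Haar cascade on a single frame; the file name first_frame.png is a hypothetical placeholder.

```python
import cv2

# OpenCV ships a pretrained Viola-Jones (Haar cascade) frontal-face detector
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

frame = cv2.imread("first_frame.png")              # first cover-video frame
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
# Each detection is an (x, y, w, h) bounding box around a face, i.e. the ROI
faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
for (x, y, w, h) in faces:
    cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
```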
2.4 KLT face tracking

In this section, we introduce the KLT tracking algorithm, which is used for feature selection and object tracking. Running facial detection on every video frame is costly because it requires a high computation time [21, 22]. In addition, when a person moves quickly or tilts his head, the detector may fail, depending on the training of the classifier. Therefore, it is important to have an alternative algorithm that tracks the face throughout the video frames. Once the Viola-Jones detector is applied to the first frame to detect the facial region, the KLT tracking algorithm is applied throughout the remaining video frames. The KLT algorithm operates by finding good feature points (Harris corners) in the facial area of the first frame. These feature points are then tracked throughout all the video frames [36, 38]. Each feature point has a corresponding point between two consecutive frames, and the displacement of the corresponding point pairs can be computed as motion vectors.
Fig. 3 Detecting the ROI in the first video frame. (a) original video frame, (b) detected facial region frame after applying the Viola-Jones face detection
The process of tracking the facial region depends on the movement of the centers of the features in two successive video frames. The following equations describe the process of face tracking across the video frames [4, 10]:

$R_t = R_{t-1} + (C_t - C_{t-1})$    (4)

$C_t = \frac{1}{|f_t|} \sum_i f_t(i)$    (5)

$C_{t-1} = \frac{1}{|f_{t-1}|} \sum_i f_{t-1}(i)$    (6)
where Rt and Rt−1 represent the face areas in two adjacent video frames, Ct and Ct−1 are the centers of the feature positions in the two consecutive frames, and ft and ft−1 are the feature points in the current and previous frames, respectively [14]. Figure 4 displays the process of tracking the facial regions throughout the video frames using the KLT algorithm.
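A possible OpenCV-based sketch of this detect-once, track-thereafter scheme is shown below (again an illustration rather than the authors' code); it tracks corner features with pyramidal Lucas-Kanade optical flow and shifts the face box by the centroid displacement of Eqs. (4)-(6).

```python
import cv2
import numpy as np

def track_face(video_path):
    """Detect the face in the first frame, then track its box with KLT."""
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    cap = cv2.VideoCapture(video_path)
    ok, frame = cap.read()
    prev_gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    x, y, w, h = cascade.detectMultiScale(prev_gray, 1.1, 5)[0]   # first ROI
    roi_mask = np.zeros_like(prev_gray)
    roi_mask[y:y + h, x:x + w] = 255
    pts = cv2.goodFeaturesToTrack(prev_gray, 200, 0.01, 5, mask=roi_mask)
    boxes = [(x, y, w, h)]
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        new_pts, st, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, pts, None)
        good_new, good_old = new_pts[st == 1], pts[st == 1]
        # Eqs. (5)-(6): feature centroids; Eq. (4): move the box by C_t - C_{t-1}
        dx, dy = good_new.mean(axis=0) - good_old.mean(axis=0)
        x, y = int(round(x + dx)), int(round(y + dy))
        boxes.append((x, y, w, h))
        prev_gray, pts = gray, good_new.reshape(-1, 1, 2)
    cap.release()
    return boxes                        # one facial bounding box per frame
```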
3 The proposed steganography methodology

In this section, we propose a novel video steganography algorithm based on the KLT tracking algorithm using Hamming codes (15, 11). Tables 1 and 2 provide the data embedding and
Fig. 4 Face tracking in video frames. (a), (b), and (c) three original different frames in tested video, (d), (e), and (f) show facial regions that are tracked in the three frames using KLT tracking algorithm
Table 1 Data embedding of the proposed algorithm

Input: V //Video, M //Secret message in characters, Key1, Key2 //Stego keys
Output: SV //Stego video
Initialize km, pm, p;
B ← M; //Convert the alphabetic secret message to the binary array
// Stego keys
Key1 ← Length(B)/11; //Length of the secret message
Key2 ← rand(2^15, Key1, 1)'; //Randomization of the seed Key1
EB ← E(B, [Key1]); //Encrypt the binary array by Key1
for1 i = 1 : (Key1*15) do //Encode each 11 bits of encoded message by Hamming (15, 11)
    g(1:11) ← get(EB(km : km+11));
    E_EB ← encode(g, 15, 11);
    temp(1:15) ← get(Key2(i));
    Ecdmsg(pm : pm+15) ← xor(E_EB, temp);
    pm+15; km+11;
end1
Read(V); //Read input video, {Vf1, Vf2, …, Vfn} are video frames (n frames)
FBox1 ← Face_detector(Vf1); //Call the Viola-Jones face detector for the first frame Vf1
Non_Face(Vf1) ← Key1, Key2; //Embed keys (Key1 and Key2) into the non-facial regions of the first frame Vf1
for2 t = 1 : n do //For each video frame, track the face and its corner box points
    FBoxt ← Face_KLT(Vft); //Call the KLT face tracking algorithm
    Non_Face(Vft) ← Edges(FBoxt(xz, yz)); //Embed edge points of each facial box into the non-facial area of its frame (z = 1, 2, 3, and 4)
    B_mat = mask(Edges(FBoxt(xz, yz)), Vfx, Vfy); //Identify the binary mask of the facial regions of size (Vfx, Vfy)
    //Embed the encoded message into the 1 LSB, 2 LSBs, 3 LSBs, or 4 LSBs of each frame's facial region (FBox1,2,…,n)
    for3 i = 1 : Vfx do
        for4 j = 1 : Vfy do
            if5 B_mat(i, j) == 1
                LSB_R1,2,3, or 4(FBoxt(i, j)) ← Ecdmsg(p+1, 4, 7, or 10);
                LSB_G1,2,3, or 4(FBoxt(i, j)) ← Ecdmsg(p+2, 5, 8, or 11);
                LSB_B1,2,3, or 4(FBoxt(i, j)) ← Ecdmsg(p+3, 6, 9, or 12);
                p+3, 6, 9, or 12;
            end5
        end4
    end3
end2
get SV //Obtain the stego video
extracting algorithms for the proposed scheme, respectively. The proposed method comprises the following four stages:
3.1 Secret message preprocessing stage

The secret text message is a digital data type based on the ASCII codes of its characters; each character has a unique, fixed 8-bit code. In this work, a sizable text file is used as the secret message, and it is preprocessed before the embedding phase.
Table 2 Data extracting of the proposed algorithm

Input: SV //Stego video
Output: M //Secret message in characters
Initialize km, pm, p;
{Sf1, Sf2, …, Sfn} ← Read(SV); //Convert the stego video into frames (n stego frames)
Extract [Key1, Key2] from (Sf1); //Extract keys (Key1 and Key2) from the non-facial region of the first stego frame Sf1
for1 t = 1 : n do
    Extract [Edges(FBoxt(xz, yz))] from (Non_Face(Sft)); //Extract edge points of each facial box FBox from the non-facial areas of its stego frame (z = 1, 2, 3, and 4)
    Extract [FBoxt] from (Sft); //Identify the region of interest (facial region) by its edges
    B_mat = mask(Edges(FBoxt(xz, yz)), Sfx, Sfy); //Identify the binary mask of the facial regions of size (Sfx, Sfy)
    //Extract the encoded message from the 1 LSB, 2 LSBs, 3 LSBs, or 4 LSBs of each frame's facial region (FBox1,2,…,n)
    for2 i = 1 : Sfx do
        for3 j = 1 : Sfy do
            if4 B_mat(i, j) == 1
                Ecdmsg(p+1, 4, 7, or 10) ← LSB_R1,2,3, or 4(FBoxt(i, j));
                Ecdmsg(p+2, 5, 8, or 11) ← LSB_G1,2,3, or 4(FBoxt(i, j));
                Ecdmsg(p+3, 6, 9, or 12) ← LSB_B1,2,3, or 4(FBoxt(i, j));
                p+3, 6, 9, or 12;
            end4
        end3
    end2
end1
for5 i = 1 : (Key1*15) do //Decode each 15 bits of extracted data by Hamming (15, 11)
    Sg(1:15) ← get(Ecdmsg(km : km+15));
    temp(1:15) ← get(Key2(i));
    E_EB ← xor(Sg, temp);
    EB ← decode(E_EB, 15, 11);
    pm+11; km+15;
end5
B ← D(EB, [Key1]); //Decrypt the binary array by Key1
M ← B; //Convert the binary array to the alphabetic characters
get M //Recover the secret message
First, all characters in the text file are converted into ASCII codes in order to generate an array of binary bits. Then, for security purposes, the binary array is encrypted using a key (Key1) that represents the size of the secret message. A shuffle encryption method is used to change the indices of all bits of the binary array through a permutation process; this scrambles the message and protects it from hackers. Since the binary linear block Hamming codes (15, 11) are used, the encrypted array is divided into 11-bit blocks, and every block is encoded by the Hamming codes (15, 11) to generate a 15-bit block. Consequently, the encoder extends the size of the message by adding four parity bits to each block. Another key (Key2) is utilized as a seed to generate random 15-bit numbers, and each number is XORed with a 15-bit encoded block. By using two keys, Hamming codes, and the XOR operation, the security of the proposed algorithm is improved.
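The sketch below, which reuses the encode function from the Hamming sketch in Section 2.2, is one hedged reading of this preprocessing pipeline; the Key2 seed value is a made-up placeholder, and the permutation stands in for the paper's shuffle encryption.

```python
import numpy as np

def preprocess_message(text, encode, key2_seed=12345):
    """Text -> bit array -> Key1 shuffle -> Hamming (15,11) -> XOR with Key2 masks."""
    bits = np.unpackbits(np.frombuffer(text.encode("ascii"), dtype=np.uint8))
    bits = np.concatenate([bits, np.zeros((-len(bits)) % 11, dtype=np.uint8)])
    key1 = len(bits) // 11                       # number of 11-bit blocks
    rng1 = np.random.default_rng(key1)           # Key1 drives the permutation
    bits = bits[rng1.permutation(len(bits))]     # shuffle "encryption"
    rng2 = np.random.default_rng(key2_seed)      # Key2 seeds the random 15-bit masks
    blocks = []
    for block in bits.reshape(-1, 11):
        codeword = encode(block)                 # 15-bit Hamming codeword
        mask = rng2.integers(0, 2, 15, dtype=np.uint8)
        blocks.append(codeword ^ mask)           # XOR hardens each block
    return key1, np.concatenate(blocks)          # Key1 and the encoded payload
```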
3.2 Face detection and tracking stage

As previously mentioned, the facial regions in the video frames must be identified, because they serve as the cover data for embedding the secret message. To detect the facial region in the first video frame, the Viola-Jones detector is applied; then, throughout the remaining video frames, the face is tracked using the KLT tracking algorithm. Since the proposed algorithm is based on face detection and KLT tracking, the secret message is hidden only in video frames that contain facial features; non-facial frames are discarded without embedding. Hiding secret messages inside facial regions makes data extraction more challenging for attackers, as these regions change in every frame of the underlying video.
3.3 Data embedding stage

After the facial region is detected and tracked, it is extracted from the video frames. In each frame, the cover data of the proposed algorithm is the facial region of interest, and this ROI changes from frame to frame according to the size of the facial bounding box. The ROI extraction is not always accurate and may include pixels outside the facial bounding box, for example when the face is tilted; therefore, a binary mask is applied. This mask sets the pixels located inside the bounding polygon to "1" and the pixels outside the bounding polygon to "0". The binary mask is applied to all video frames; its main advantage is that it determines the number of pixels, and their positions, that are involved in the embedding process of every frame. Figure 5 illustrates the role of the binary mask in identifying the facial region in each video frame.

The four edge points of the bounding box in each frame are embedded into a specific non-facial area known by both sender and receiver; each box needs 80 bits per frame (40 bits for the X-axis and 40 bits for the Y-axis). Moreover, in order to transmit the keys to the receiver party, both keys are embedded into the non-facial region of the first video frame. Then, the data embedding stage begins: the secret message blocks are hidden in the LSBs of the red, green, and blue color components of the facial region in all video frames.
Fig. 5 Binary mask of one video frame
In the proposed algorithm, 1 LSB, 2 LSBs, 3 LSBs, and 4 LSBs of the three color components of each facial pixel are utilized in order to embed 3, 6, 9, and 12 bits of the hidden message, respectively. Upon completion, the stego frames are reconstructed into a stego video, which is sent via the communication channel to the receiver party. Figure 6 illustrates the block diagram of the data embedding stage.
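A simplified sketch of this adaptive LSB substitution step is given below; it covers only the in-ROI pixel substitution (not the key or bounding-box side channels), and the payload layout is an assumption for illustration.

```python
import numpy as np

def embed_in_roi(frame, face_mask, payload_bits, k=2):
    """Hide payload bits in the k least-significant bits of the R, G and B
    channels of every masked (facial) pixel, i.e. 3*k bits per pixel."""
    out, idx = frame.copy(), 0
    ys, xs = np.nonzero(face_mask)                  # pixels inside the binary mask
    for py, px in zip(ys, xs):
        for c in range(3):                          # R, G, B components
            if idx + k > len(payload_bits):
                return out                          # payload exhausted
            chunk = payload_bits[idx:idx + k]
            value = int("".join(str(int(b)) for b in chunk), 2)
            out[py, px, c] = ((out[py, px, c] >> k) << k) | value  # overwrite k LSBs
            idx += k
    return out
```

With k = 1, 2, 3, or 4 this corresponds to the 3, 6, 9, or 12 bits per facial pixel described above.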
3.4 Data extraction stage

At the receiver side, the stego video is divided into frames, and both keys are extracted from the non-facial area of the first frame. In each video frame, the four corner points of the facial box are extracted from the non-facial regions; the binary mask of each frame is generated from these points, and the exact facial region is identified. Then, the hidden message is extracted by taking 3, 6, 9, or 12 bits from the 1 LSB, 2 LSBs, 3 LSBs, or 4 LSBs, respectively, of each facial pixel of all video frames. The extracted bits from all video frames are stored in a binary array, which is segmented into 15-bit blocks. Each block is XORed with the 15-bit number that was randomly generated by Key2, and the resulting 15-bit blocks are decoded using the Hamming (15, 11) decoder to produce 11-bit blocks. Since the sender encrypted the secret message, the obtained array must be decrypted using Key1. The final array is divided into 8-bit ASCII codes to recover the characters of the original message. Figure 7 illustrates the block diagram of the data extracting stage.
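For completeness, a matching extraction sketch (the mirror of embed_in_roi above, under the same illustrative assumptions) could look like this:

```python
import numpy as np

def extract_from_roi(stego_frame, face_mask, n_bits, k=2):
    """Read back n_bits payload bits from the k LSBs of the masked pixels."""
    bits = []
    ys, xs = np.nonzero(face_mask)
    for py, px in zip(ys, xs):
        for c in range(3):                                        # R, G, B components
            if len(bits) >= n_bits:
                return bits[:n_bits]
            value = int(stego_frame[py, px, c]) & ((1 << k) - 1)  # read k LSBs
            bits.extend(int(b) for b in format(value, f"0{k}b"))  # MSB-first order
    return bits[:n_bits]
```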
4 Experimental results and discussion

This section presents experimental results obtained with MATLAB (version R2013a).
Fig. 6 Block diagram for the data embedding stage
Fig. 7 Block diagram for the data extracting stage
A dataset of five videos (Video1, Video2, Video3, Video4, and Video5) in the audio video interleave (AVI) format is used. The tested videos are videoconferencing sequences recorded with a laptop camera. The cover videos have a 640×480 pixel resolution at 30 frames per second and a data rate of 8856 kbps. Each video contains a face throughout its entire 413 frames. The size of each tested video is 378.18 Megabits (each frame is about 0.916 Megabits). In all video frames, the secret message is a large text file segmented according to the size of the ROI.
4.1 Visual quality assessment metrics and evaluation

In order to evaluate the transparency of the proposed algorithm, different metrics are applied. The experimental results are measured using the peak signal-to-noise ratio (PSNR), a non-perceptual objective metric used to compute the difference between the cover and stego videos. The PSNR and the mean square error (MSE) are calculated by the following equations:

$PSNR = 10 \log_{10} \left( \frac{MAX_C^2}{MSE} \right) \; (dB)$    (7)

$MSE = \frac{1}{m\,n\,h} \sum_{i=1}^{m} \sum_{j=1}^{n} \sum_{k=1}^{h} \left[ C(i,j,k) - S(i,j,k) \right]^2$    (8)
C and S refer to the original and distorted frames, respectively; m and n are the frame dimensions, and h is the number of color channels, indexed by k (k = 1, 2, and 3 for R, G, and B). Since the HVS has a nonlinear behavior, the PSNR does not correlate well with perceived visual quality. In order to better assess the perceptual quality of the stego videos, the PSNR-HVS (PSNRH) and PSNR-HVS-M (PSNRM) metrics are also utilized. PSNRM is a modified version of
PSNRH. Both PSNRH and PSNRM are perceptual metrics based on frequency-domain coefficients [8], and PSNRM models the perceived quality of the stego videos more accurately than PSNRH [30]:

$PSNRH = 10 \log_{10} \left( \frac{MAX_C^2}{MSE_{hvs}} \right) \; (dB)$    (9)

$PSNRM = 10 \log_{10} \left( \frac{MAX_C^2}{MSE_{hvs\_m}} \right) \; (dB)$    (10)
MSEhvs and MSEhvs_m rely on the 8×8 DCT coefficients of the original and distorted blocks and on a weighting factor matrix [8]. Figure 8 shows the visual quality comparison when one LSB of each of the R, G, and B color channels is used for embedding the hidden message. Here, the averages of PSNR, PSNRH, and PSNRM for the five experiments are 53.07, 64.01, and 74.51 dB, respectively; the visual quality of the stego videos is close to that of the original videos. Figure 9 shows the visual quality comparison when two LSBs of each of the three color components are utilized to hide the secret message. The averages of PSNR, PSNRH, and PSNRM for the five videos are 43.10, 56.91, and 64.75 dB, respectively. Table 3 summarizes the visual quality results (PSNR, PSNRH, and PSNRM) for the five experiments using 1 LSB, 2 LSBs, 3 LSBs, and 4 LSBs. In Fig. 10, the visual quality comparison is shown when three LSBs of each of the RGB pixel components are used for the embedding payload. Here, the averages of PSNR, PSNRH, and PSNRM for the five experiments equal 36.86, 50.29, and 55.56 dB, respectively. Figure 11 illustrates the visual quality comparison when four LSBs of each color channel are used for embedding the hidden message.
Fig. 8 Averages of visual qualities for five videos using 1 LSB
Fig. 9 Averages of visual qualities for five experiments using 2 LSBs
The averages of PSNR, PSNRH, and PSNRM for the five videos are 33.81, 43.54, and 46.81 dB, respectively. Among the three metrics, PSNRM reports the highest quality scores. In conclusion, due to the high values of PSNR, PSNRH, and PSNRM, the proposed algorithm provides consistently good visual quality for the stego videos.
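As a reference, the PSNR/MSE of Eqs. (7)-(8) can be computed per frame with a few lines of code; the sketch below (NumPy assumed) averages the squared error over all pixels and the three color channels.

```python
import numpy as np

def psnr(cover, stego, max_val=255.0):
    """Eqs. (7)-(8): MSE over the m*n*h samples of a frame, then PSNR in dB."""
    diff = cover.astype(np.float64) - stego.astype(np.float64)
    mse = np.mean(diff ** 2)
    if mse == 0:
        return float("inf")            # identical frames
    return 10.0 * np.log10(max_val ** 2 / mse)
```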
Table 3 Visual quality comparison for the five experiments using each of 1 LSB, 2 LSBs, 3 LSBs, and 4 LSBs in the R, G, and B color components

Visual quality   No. of LSBs in each R, G, and B   Video1   Video2   Video3   Video4   Video5
PSNR             1 LSB                             53.08    53.53    52.23    53.93    52.63
                 2 LSBs                            43.11    43.56    42.26    43.96    42.66
                 3 LSBs                            36.87    37.32    36.02    37.72    36.42
                 4 LSBs                            33.81    34.26    32.96    34.66    33.36
PSNRH            1 LSB                             64.02    64.47    63.17    64.87    63.57
                 2 LSBs                            56.91    57.36    56.06    57.76    56.46
                 3 LSBs                            50.30    50.75    49.45    51.15    49.85
                 4 LSBs                            43.54    43.99    42.69    44.39    43.09
PSNRM            1 LSB                             74.51    74.96    73.66    75.36    74.06
                 2 LSBs                            64.75    65.20    63.90    65.60    64.30
                 3 LSBs                            55.56    56.01    54.71    56.41    55.11
                 4 LSBs                            46.81    47.26    45.96    47.66    46.36

4.2 Embedding payload based evaluation

According to [40], the proposed scheme has a high embedding payload. The obtained embedding capacity ratios of the proposed algorithm, when using 1 LSB, 2 LSBs, 3 LSBs, and 4 LSBs, are 5.5, 10.9, 16.4, and 21.9 %, respectively.
Fig. 10 Averages of visual qualities for five videos using 3 LSBs
Table 4 illustrates the amount of the embedded secret message in each tested video using 1 LSB, 2 LSBs, 3 LSBs, and 4 LSBs. All tested videos are equal in size (378.18 Megabits each). Since facial features are present throughout all video frames, the secret message is embedded across the whole frame sequence.
Fig. 11 Averages of visual qualities for five experiments using 4 LSBs
Table 4 Total embedding capacity of the secret message (Megabits) that can be hidden in each tested video using 1 LSB, 2 LSBs, 3 LSBs, and 4 LSBs. The video format is AVI with size = 378.18 Megabits, frame resolution = 640×480 pixels, and frame rate = 30 per second. Each tested video contains a total of 413 frames

                                   Embedding capacity (Megabits)
No. of LSBs in each R, G, and B    Video1   Video2   Video3   Video4   Video5
1 LSB                              20.80    20.72    20.96    20.64    20.88
2 LSBs                             41.60    41.44    41.92    41.28    41.76
3 LSBs                             62.40    62.16    62.88    61.92    62.64
4 LSBs                             83.20    82.88    83.84    82.56    83.52
Moreover, the size of each frame is 0.916 Megabits. The embedding capacity ratio can be calculated by the following equation:

$Embedding\ ratio = \frac{Size\ of\ embedded\ message}{Video\ size} \times 100\,\%$    (11)
A number of experiments were conducted to compare both the embedding payload and the visual quality of the proposed algorithm with other related algorithms. Table 5 shows the comparison of the data embedding ratios of our proposed algorithm with the others, and Fig. 12 compares the average visual quality of the proposed algorithm with that of the related algorithms. The comparison demonstrates that our algorithm outperforms the three related algorithms from the literature in both visual quality and embedding capacity. Figure 13 summarizes the average data embedding payload of the five experiments for the proposed algorithm using 1 LSB, 2 LSBs, 3 LSBs, and 4 LSBs.
4.3 Robustness evaluation

In order to evaluate the performance of the proposed algorithm in retrieving the secret message successfully, two objective metrics, the bit error rate (BER) and the similarity function (SF), are used. They compare the embedded and extracted secret messages to determine whether the message was corrupted during the communication. The BER and SF are calculated with the following equations [16]:

$BER = \frac{\sum_{i=1}^{a} \sum_{j=1}^{b} \left[ M(i,j) \oplus \hat{M}(i,j) \right]}{a\,b} \times 100\,\%$    (12)
Table 5 The comparison of embedding capacity ratios for the proposed algorithm with other existent algorithms

          Proposed algorithm   Alavianmehr et al. [1]   Cheddad et al. [6]   Tse-Hua et al. [40]
1 LSB     5.5 %                1.34 %                   0.08 %               0.50 %
2 LSBs    10.9 %               2.68 %                   0.17 %               1.00 %
3 LSBs    16.4 %               4.02 %                   0.26 %               1.50 %
4 LSBs    21.9 %               5.36 %                   0.34 %               2.00 %
Fig. 12 Visual quality comparison of our algorithm with Alavianmehr et al. [1], Cheddad et al. [6], and Tse-Hua et al. [40] existent algorithms
$SF = \frac{\sum_{i=1}^{a} \sum_{j=1}^{b} M(i,j)\,\hat{M}(i,j)}{\sqrt{\sum_{i=1}^{a} \sum_{j=1}^{b} M(i,j)^2}\;\sqrt{\sum_{i=1}^{a} \sum_{j=1}^{b} \hat{M}(i,j)^2}}$    (13)
where M and M̂ are the embedded and extracted secret messages, respectively, and a and b are the dimensions of the secret message. The proposed algorithm is tested against various attacks: Gaussian noise with zero mean and variance = 0.01 and 0.001, Salt & Pepper noise with density = 0.01 and 0.001, and median filtering.
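Both metrics are easy to reproduce; the following sketch (NumPy assumed, messages given as 0/1 bit arrays) is a direct transcription of Eqs. (12) and (13).

```python
import numpy as np

def ber(embedded, extracted):
    """Eq. (12): percentage of mismatched bits between the two messages."""
    m = np.asarray(embedded, dtype=np.uint8)
    m_hat = np.asarray(extracted, dtype=np.uint8)
    return 100.0 * np.count_nonzero(m ^ m_hat) / m.size

def sf(embedded, extracted):
    """Eq. (13): normalized similarity between embedded and extracted bits."""
    m = np.asarray(embedded, dtype=np.float64)
    m_hat = np.asarray(extracted, dtype=np.float64)
    denom = np.sqrt((m ** 2).sum()) * np.sqrt((m_hat ** 2).sum())
    return (m * m_hat).sum() / denom
```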
Fig. 13 Average of the data embedding payload for five videos using each of the 1 LSB, 2 LSBs, 3 LSBs, and 4 LSBs
Table 6 BER and SF values for the five distorted videos against three attacks

Type of attack             Video1          Video2          Video3          Video4          Video5
                           SF    BER %     SF    BER %     SF    BER %     SF    BER %     SF    BER %
No attacks                 1.00  0         1.00  0         1.00  0         1.00  0         1.00  0
Salt & pepper, D=0.01      0.76  24.5      0.76  24.2      0.75  25.1      0.76  23.9      0.75  24.8
Salt & pepper, D=0.001     0.76  23.7      0.76  24        0.77  23.4      0.76  24.3      0.77  23.1
Gaussian white, V=0.01     0.69  31.4      0.69  31.1      0.68  32        0.69  30.8      0.68  31.7
Gaussian white, V=0.001    0.69  30.9      0.69  30.6      0.69  31.5      0.70  30.3      0.69  31.2
Median filtering           0.79  21.4      0.79  21.1      0.78  22        0.79  20.8      0.78  21.7
To achieve robustness, a low BER and a high SF must be obtained. Table 6 lists the BER and SF values for the five experiments. Since the proposed algorithm operates in the spatial domain, the BER and SF values are reasonable but not ideal. Table 7 summarizes the performance of the proposed algorithm under these attacks: the visual qualities (PSNR, PSNRH, and PSNRM) of the distorted videos are calculated and found to be acceptable.
Table 7 Visual quality comparison for the five distorted videos against three attacks

Visual quality   Type of attack              Video1   Video2   Video3   Video4   Video5
PSNR             Impulsive, D=0.01           23.17    23.62    22.32    24.02    22.72
                 Impulsive, D=0.001          31.20    31.65    30.35    32.05    30.75
                 Gaussian white, V=0.01      18.95    19.40    18.10    19.80    18.50
                 Gaussian white, V=0.001     28.87    29.32    28.02    29.72    28.42
                 Median filtering            25.26    25.71    24.41    26.11    24.81
PSNRH            Impulsive, D=0.01           35.23    35.68    34.38    36.08    34.78
                 Impulsive, D=0.001          45.27    45.72    44.42    46.12    44.82
                 Gaussian white, V=0.01      30.01    30.46    29.16    30.86    29.56
                 Gaussian white, V=0.001     39.93    40.38    39.08    40.78    39.48
                 Median filtering            34.47    34.92    33.62    35.32    34.02
PSNRM            Impulsive, D=0.01           38.06    38.51    37.21    38.91    37.61
                 Impulsive, D=0.001          48.17    48.62    47.32    49.02    47.72
                 Gaussian white, V=0.01      33.37    33.82    32.52    34.22    32.92
                 Gaussian white, V=0.001     45.08    45.53    44.23    45.93    44.63
                 Median filtering            38.99    39.44    38.14    39.84    38.54

4.4 Security analysis

Despite the high embedding payload, the security of the steganography algorithm is improved. The reason is that the cover data (the facial regions) changes from frame to frame, so attackers have an extremely difficult time determining the location of the hidden message. In addition, since two keys are used prior to the start of the embedding process, attackers are further prevented from reading the secret message. Moreover, because the Hamming codes (15, 11) are applied to the secret message prior to the data embedding stage, hackers have additional obstacles to overcome in order to read the secret message.
5 Conclusion and future work

This paper proposed a novel video steganography algorithm based on the KLT tracking algorithm using Hamming codes (15, 11). The proposed method consists of four stages: 1) the secret message preprocessing stage; 2) the facial region detection and tracking stage; 3) the data embedding stage; and 4) the data extraction stage. As verified by a number of experiments, the proposed method achieves a high embedding efficiency by incorporating the preprocessing stages. An encouraging finding is the high visual quality of the stego videos, with average PSNRM values of 74.51, 64.75, 55.56, and 46.81 dB when 1 LSB, 2 LSBs, 3 LSBs, and 4 LSBs are utilized, respectively. The proposed algorithm also achieves a high embedding payload: the amount of secret data in each video is about 2.6, 5.2, 7.8, and 10.4 Mbytes when 1 LSB, 2 LSBs, 3 LSBs, and 4 LSBs are applied, respectively. In addition, the use of two keys, Hamming codes (15, 11), and the XOR operation improves the security of the proposed algorithm, and the experiments under the various attacks described above demonstrate its robustness. Although the proposed method achieves higher embedding capacity, better visual quality of the stego videos, and improved security and robustness, further improvement is possible. In the future, we intend to focus on the following points, which will further improve the payload, security, and robustness:

1) Applying the proposed steganography algorithm in the frequency domain to improve both the data embedding payload and the robustness against various attacks.
2) In addition to facial regions, utilizing moving objects in videos as regions of interest to further increase the embedding capacity, for instance in videos that contain multiple moving objects such as pedestrians and vehicles.
Acknowledgments The authors are sincerely thankful to the associate editor and the anonymous reviewers for their useful suggestions and constructive comments, which improved the quality of our research work. We are also grateful to Ms. Camy Deck of the English Department, University of Bridgeport, Bridgeport, USA, for proofreading our work.
References

1. Alavianmehr MA, Rezaei M, Helfroush MS, Tashk A (2012) A lossless data hiding scheme on video raw data robust against H.264/AVC compression. In: Computer and Knowledge Engineering (ICCKE), 2012 2nd International eConference on 194–198. doi:10.1109/iccke.2012.6395377
2. Bhole AT, Patel R (2012) Steganography over video file using Random Byte Hiding and LSB technique. In: Computational Intelligence & Computing Research (ICCIC). IEEE Int Conf 1–6. doi:10.1109/iccic.2012.6510230
3. Cheddad A, Condell J, Curran K, Mc Kevitt P (2009) A secure and improved self-embedding algorithm to combat digital document forgery. Sig Process 89(12):2324–2332. doi:10.1016/j.sigpro.2009.02.001
4. Cheddad A, Condell J, Curran K, Mc Kevitt P (2009) A skin tone detection algorithm for an adaptive approach to steganography. Sig Process 89(12):2465–2478. doi:10.1016/j.sigpro.2009.04.022
5. Cheddad A, Condell J, Curran K, Mc Kevitt P (2010) Digital image steganography: survey and analysis of current methods. Sig Process 90(3):727–752. doi:10.1016/j.sigpro.2009.08.010
6. Cheddad A, Condell J, Curran K, McKevitt P (2008) Skin tone based Steganography in video files exploiting the YCbCr colour space. In: Multimedia and Expo. IEEE Int Conf 905–908. doi:10.1109/icme.2008.4607582
7. Chin-Chen C, Kieu TD, Yung-Chen C (2008) A high payload steganographic scheme based on (7, 4) hamming code for digital images. In: Electronic Commerce and Security. Int Symp 16–21. doi:10.1109/isecs.2008.222
8. Egiazarian K, Astola J, Ponomarenko N, Lukin V, Battisti F, Carli M (2006) New full-reference quality metrics based on HVS. In: CD-ROM proceedings of the second international workshop on video processing and quality metrics, Scottsdale, USA
9. Farschi S, Farschi H (2012) A novel chaotic approach for information hiding in image. Nonlinear Dyn 69(4):1525–1539. doi:10.1007/s11071-012-0367-5
10. Fassold H, Rosner J, Schallauer P, Bailer W (2009) Realtime KLT feature point tracking for high definition video. GraVisMa
11. Fontaine C, Galand F (2007) How Can Reed-Solomon Codes Improve Steganographic Schemes? In: Furon T, Cayre F, Doërr G, Bas P (eds) Information Hiding, vol 4567. Lect Notes Comput Sci. Springer Berlin Heidelberg, pp 130–144. doi:10.1007/978-3-540-77370-2_9
12. Guangjie L, Weiwei L, Yuewei D, Shiguo L (2011) An adaptive matrix embedding for image steganography. In: Multimedia Information Networking and Security (MINES). Third Int Conf 642–646. doi:10.1109/mines.2011.138
13. Guangjie L, Weiwei L, Yuewei D, Shiguo L (2012) Adaptive steganography based on syndrome-trellis codes and local complexity. In: Multimedia Information Networking and Security (MINES). Fourth Int Conf 323–327. doi:10.1109/mines.2012.55
14. Guo-Shiang L, Tung-Sheng T (2013) A face tracking method using feature point tracking. In: Information Security and Intelligence Control (ISIC). Int Conf 210–213. doi:10.1109/isic.2012.6449743
15. Hasnaoui M, Mitrea M (2014) Multi-symbol QIM video watermarking. Sig Process Image Commun 29(1):107–127. doi:10.1016/j.image.2013.07.007
16. He Y, Yang G, Zhu N (2012) A real-time dual watermarking algorithm of H.264/AVC video stream for video-on-demand service. AEU Int J Electron Commun 66(4):305–312. doi:10.1016/j.aeue.2011.08.007
17. Islam S, Modi MR, Gupta P (2014) Edge-based image steganography. EURASIP J Inf Secur 2014(1):1–14
18. Isukapalli R, Elgammal A, Greiner R (2005) Learning a dynamic classification method to detect faces and identify facial expression. In: Zhao W, Gong S, Tang X (eds) Analysis and Modelling of Faces and Gestures, vol 3723. Lect Notes Comput Sci. Springer Berlin Heidelberg, pp 70–84. doi:10.1007/11564386_7
19. Kelash HM, Abdel Wahab OF, Elshakankiry OA, El-sayed HS (2013) Hiding data in video sequences using steganography algorithms. In: ICT Convergence (ICTC). Int Conf 353–358. doi:10.1109/ictc.2013.6675372
20. Khupse S, Patil NN (2014) An adaptive steganography technique for videos using Steganoflage. In: Issues and Challenges in Intelligent Computing Techniques (ICICT). Int Conf 811–815. doi:10.1109/icicict.2014.6781384
21. Leibe B, Leonardis A, Schiele B (2008) Robust object detection with interleaved categorization and segmentation. Int J Comput Vis 77(1–3):259–289. doi:10.1007/s11263-007-0095-3
22. Lucas BD, Kanade T (1981) An iterative image registration technique with an application to stereo vision. In: IJCAI, 674–679
23. Lusson F, Bailey K, Leeney M, Curran K (2013) A novel approach to digital watermarking, exploiting colour spaces. Sig Process 93(5):1268–1294. doi:10.1016/j.sigpro.2012.10.018
24. Masoumi M, Amiri S (2013) A blind scene-based watermarking for video copyright protection. AEU Int J Electron Commun 67(6):528–535. doi:10.1016/j.aeue.2012.11.009
25. Moon SK, Raut RD (2013) Analysis of secured video steganography using computer forensics technique for enhance data security. In: Image Information Processing (ICIIP). IEEE Second Int Conf 660–665. doi:10.1109/iciip.2013.6707677
26. Mstafa RJ, Elleithy KM (2014) A highly secure video steganography using Hamming code (7, 4). In: Systems, Applications and Technology Conference (LISAT). IEEE Long Island 1–6. doi:10.1109/lisat.2014.6845191
27. Mstafa RJ, Elleithy KM (2015) A high payload video steganography algorithm in DWT domain based on BCH codes (15, 11). In: Wireless Telecommunications Symposium (WTS). 1–8. doi:10.1109/wts.2015.7117257
28. Mstafa RJ, Elleithy KM (2015) A novel video steganography algorithm in the wavelet domain based on the KLT tracking algorithm and BCH codes. In: Systems, Applications and Technology Conference (LISAT). IEEE Long Island 1–7. doi:10.1109/lisat.2015.7160192
29. Paul R, Acharya AK, Yadav VK, Batham S (2013) Hiding large amount of data using a new approach of video steganography. In: Confluence 2013: The Next Generation Information Technology Summit (4th International Conference) 337–343. doi:10.1049/cp.2013.2338
30. Ponomarenko N, Silvestri F, Egiazarian K, Carli M, Astola J, Lukin V (2007) On between-coefficient contrast masking of DCT basis functions. In: Proceedings of the Third International Workshop on Video Processing and Quality Metrics
31. Qazanfari K, Safabakhsh R (2014) A new steganography method which preserves histogram: Generalization of LSB++. Inform Sci 277:90–101. doi:10.1016/j.ins.2014.02.007
32. Qian Z, Feng G, Zhang X, Wang S (2011) Image self-embedding with high-quality restoration capability. Digit Sig Process 21(2):278–286. doi:10.1016/j.dsp.2010.04.006
33. Rupa C (2013) A digital image steganography using sierpinski gasket fractal and PLSB. J Inst Eng India B 94(3):147–151. doi:10.1007/s40031-013-0054-z
34. Sadek M, Khalifa A, Mostafa MM (2014) Video steganography: a comprehensive review. Multimed Tools 1–32. doi:10.1007/s11042-014-1952-z
35. Sarkar A, Madhow U, Manjunath BS (2010) Matrix embedding with pseudorandom coefficient selection and error correction for robust and secure steganography. IEEE Trans Inf Forensics Secur 5(2):225–239. doi:10.1109/tifs.2010.2046218
36. Shi J, Tomasi C (1994) Good features to track. Proc IEEE Conf Comput Vis Pattern Recognit Comput Soc 593–600. doi:10.1109/cvpr.1994.323794
37. Subhedar MS, Mankar VH (2014) Current status and key issues in image steganography: a survey. Comput Sci Rev 13–14:95–113. doi:10.1016/j.cosrev.2014.09.001
38. Tomasi C, Kanade T (1991) Detection and tracking of point features. School of Computer Science, Carnegie Mellon Univ, Pittsburgh
39. Torres-Pereira E, Martins-Gomes H, Monteiro-Brito A, de Carvalho J (2014) Hybrid Parallel Cascade Classifier Training for Object Detection. In: Bayro-Corrochano E, Hancock E (eds) Progress in Pattern Recognition, Image Analysis, Computer Vision, and Applications, vol 8827. Lect Notes Comput Sci. Springer International Publishing, pp 810–817. doi:10.1007/978-3-319-12568-8_98
40. Tse-Hua L, Tewfik AH (2006) A novel high-capacity data-embedding system. IEEE Trans Image Process 15(8):2431–2440. doi:10.1109/tip.2006.875238
41. Viola P, Jones M (2004) Robust real-time face detection. Int J Comput Vis 57(2):137–154. doi:10.1023/B:VISI.0000013087.49260.fb
42. Viola P, Jones M (2001) Rapid object detection using a boosted cascade of simple features. In: Computer Vision and Pattern Recognition, 2001. CVPR 2001. Proceedings IEEE Comput Soc Conf 511:I-511–I-518. doi:10.1109/cvpr.2001.990517
43. Wang X-y, Wang C-p, Yang H-y, Niu P-p (2013) A robust blind color image watermarking in quaternion Fourier transform domain. J Syst Softw 86(2):255–277. doi:10.1016/j.jss.2012.08.015
44. Yiqi T, KokSheik W (2014) An overview of information hiding in H.264/AVC compressed video. IEEE Trans Circuits Syst Video Technol 24(2):305–319. doi:10.1109/tcsvt.2013.2276710
45. Zhang R, Sachnev V, Kim H (2009) Fast BCH syndrome coding for steganography. In: Katzenbeisser S, Sadeghi A-R (eds) Information Hiding, vol 5806. Lect Notes Comput Sci. Springer Berlin Heidelberg, pp 48–58. doi:10.1007/978-3-642-04431-1_4
Ramadhan J. Mstafa is originally from Duhok, Kurdistan Region, Iraq. He is pursuing his PhD degree in Computer Science and Engineering at the University of Bridgeport, Bridgeport, Connecticut, USA. He received his Bachelor's degree in Computer Science from the University of Salahaddin, Erbil, Iraq, and his Master's degree in Computer Science from the University of Duhok, Duhok, Iraq. He is an IEEE Student Member. His research areas of interest include image processing, mobile communication, security, and steganography.
Dr. Elleithy is the Associate Vice President of Graduate Studies and Research at the University of Bridgeport, where he is a professor of Computer Science and Engineering. His research interests are in the areas of wireless sensor networks, mobile communications, network security, quantum computing, and formal approaches for design and verification. He has published more than three hundred research papers in international journals and conferences in his areas of expertise. Dr. Elleithy has more than 25 years of teaching experience, and his teaching evaluations are distinguished in all the universities he has joined. He has supervised hundreds of senior projects, MS theses, and Ph.D. dissertations, and has supervised several Ph.D. students. He has developed and introduced many new undergraduate and graduate courses, as well as new teaching and research laboratories in his area of expertise. Dr. Elleithy is the editor or co-editor of 12 books published by Springer. He is a member of the technical program committees of many international conferences in recognition of his research qualifications, and he has served as a guest editor for several international journals. He was the chairman of the International Conference on Industrial Electronics, Technology & Automation (IETA 2001), 19–21 December 2001, Cairo, Egypt. He is also the General Chair of the 2005–2013 International Joint Conferences on Computer, Information, and Systems Sciences, and Engineering virtual conferences.