Approaches to Multimedia and Security: Cryptographic and Watermarking Algorithms

Jana Dittmann
GMD-IPSI, Dolivostrasse 15, D-64293 Darmstadt, Germany
+49-6151-869845
[email protected]

Petra Wohlmacher
University of Klagenfurt, Villacherstrasse 161, 9020 Klagenfurt, Austria
+43 463 2700854
[email protected]

Klara Nahrstedt
Department of Computer Science, University of Illinois at Urbana-Champaign, Urbana, IL 61801, USA
[email protected]
ABSTRACT
Regarding security, particularly in the field of multimedia, the requirements on security increase. Whether and in which way security mechanisms can be applied to multimedia data and their applications needs to be analyzed for each purpose separately. This is mainly due to the structure and complexity of multimedia. Based on the main issues of IT-security, this paper introduces the most important security requirements that must be fulfilled by today's multimedia systems. Our paper describes the security measures used to satisfy these requirements. These measures are based on modern cryptographic mechanisms and digital watermarking techniques as well as on security infrastructures. Furthermore, we focus on a survey of revocation methods for digital signatures, a media-independent classification scheme for watermarking, and an example for securing RSVP for multimedia applications.

Digital certificates form a basis that allows entities to trust each other. Due to different constraints, a certificate is only valid within a specific period of time. Coming from several threats, there are important reasons why its validity must be terminated sooner than assigned and thus the certificate needs to be revoked. This paper provides a classification of revocation methods.

Today a wide variety of watermarking techniques has been proposed, but it is quite difficult to classify the approaches and to measure their quality. Our intention in this paper is therefore also to discuss the main watermarking parameters and to present a media-independent classification scheme as a basis for quality evaluation.

Additionally, distributed multimedia applications require end-to-end quality of service (QoS) in order to be accepted and used. We propose a solution for securing RSVP messages in a flexible, efficient and scalable manner. Our solution extends the RSVP protocol with scalable QoS protection, using a hybrid hierarchical security approach. A preliminary version of parts of this paper was presented at the ACM'00 Workshop on Multimedia and Security [15] and at the special session on Multimedia and Security at ICME 2000 [16].

Keywords
Multimedia and security, revocation of digital signatures, digital watermarking, classification, evaluation service, QoS

1. INTRODUCTION
Recently, security has become one of the most significant and challenging problems for spreading new information technology. Since digital data can easily be copied, multiplied without information loss, and manipulated without any detection, security solutions are required that counter these threats. Security solutions are especially of interest for fields such as distributed production processes and electronic commerce, since their producers so far provide only access control mechanisms to prevent misuse and theft of material.

IT-systems are commonly used for many kinds of applications, increasingly applications dedicated to multimedia. In this context, IT-systems are called multimedia systems. Obviously, secure and trustworthy actions and interactions are also important requirements for multimedia systems. Whether or not a multimedia system fulfils these requirements will have a substantial influence on the acceptance of multimedia.

In order to assess the trustworthiness of IT-systems in general, catalogues of security criteria have been published [17,19,29,30,31]. One of the most important is the Europe-wide valid ITSEC catalogue of criteria [19], which contains criteria for evaluating the security of IT-systems. This catalogue defines security criteria within different classifications regarding the following three basic threats:
• threat to confidentiality (unauthorized revealing of information),
• threat to integrity (unauthorized modification of data),
• threat to availability (unauthorized withholding of information or resources).

Starting from these three threats, the basic requirements for the security of a given multimedia system can be derived. Security requirements are met by security measures, which generally consist of several security mechanisms; security services can be implemented by security mechanisms. Overall, the security requirements are described within a security policy, which also defines which measures are implemented to realize these requirements.

In this paper we present the most important security requirements, measures, and security mechanisms. Furthermore, we discuss problems arising with media data, revocation methods for digital signatures, a media-independent watermarking classification scheme, and an example for securing RSVP for multimedia applications.
2. REQUIREMENTS AND MEASURES
The following security requirements are essential for multimedia systems. These requirements can be met by the succeeding security measures:
• Confidentiality: Cipher systems are used to keep information secret from unauthorized entities.
• Data integrity: The alteration of data can be detected by means of one-way hash functions, message authentication codes, digital signatures (especially content-based digital signatures), fragile digital watermarking, and robust digital watermarking.
• Data origin authenticity: Message authentication codes, digital signatures, fragile digital watermarking, and robust digital watermarking enable the proof of origin.
• Entity authenticity: The identity of entities taking part in a communication can be proven by authentication protocols. These protocols ensure that an entity is the one it claims to be.
• Non-repudiation: Non-repudiation mechanisms prove to involved parties and third parties whether or not a particular event occurred or a particular action happened. The event or action can be the generation of a message, the sending of a message, the receipt of a message, or the submission or transport of a message. Non-repudiation certificates, non-repudiation tokens, and protocols establish the accountability of information. These mechanisms are based on message authentication codes or digital signatures combined with notary services, timestamping services and evidence recording.
The security measures mentioned above use cryptographic mechanisms and digital watermarking techniques. A short introduction to both approaches is given in the next section. The focus is also on the problems with multimedia data that arise from applying cryptographic mechanisms.

3. SECURITY MECHANISMS
Security mechanisms applied in multimedia systems are based on cryptographic mechanisms as well as on digital watermarking techniques.

3.1 Cryptographic Mechanisms
Modern cryptographic mechanisms are mainly based on different unproven assumptions of complexity theory concerning "easy" and "hard" computation of functions. In this context, the term "easy" computation stands in contrast to the term "hard" computation. Intuitively, "easy" and "hard" computation can be described as follows [36]: a problem is called "easy" if there exists an algorithm for the problem which operates in polynomial time; the problem is called "hard" if no polynomial-time algorithm is known. In cryptography, two types of these functions are used in particular: one-way functions and trapdoor one-way functions [28].

The most important cryptographic mechanisms are implemented by use of cryptosystems. These systems consist of two sets of functions, a set of keys parameterizing these functions, and sets on which these functions operate. Cryptosystems are subdivided into private-key cryptosystems and public-key cryptosystems. In private-key cryptosystems, the communicating entities share a key K, which must strictly be kept secret, called the secret key. The size of the key space from which K is chosen must be large enough to make it hard to find the right key K. Public-key cryptosystems are based on trapdoor one-way functions. Each entity holds a key pair (PK,SK). This pair consists of a private key SK and a public key PK corresponding to SK. The key SK must strictly be kept secret; the key PK may be made public. Given a public key PK, it is computationally infeasible to find the private key SK if the trapdoor information is unknown. In other words, even with the most powerful computers it is computationally infeasible to deduce SK from PK within a given period of time.

Usually cryptographic mechanisms take each bit of data as input to calculate the output that is needed to provide the security mechanism. Additionally, the mechanisms have the property that if one bit of the input or output is changed, the decryption or the verification will fail in most cases. Multimedia applications need a high performance, and their data can be altered due to transmission errors, higher compression rates or scaling operations. Therefore, it can be difficult to define a suitable input for the cryptographic mechanisms. If cryptographic mechanisms are applied directly to the whole amount of media data, several problems may occur. These problems and their solutions are described in more detail in the following sections for the corresponding cryptographic mechanisms.

3.2 Digital Watermarking Techniques
Digital watermarking techniques, based on steganographic systems, offer the possibility to embed information directly into the media data. Besides cryptographic mechanisms, watermarking represents an efficient technology to ensure both data integrity and data origin authenticity. Watermarking techniques, usually used for digital imagery and now also applied to audio and 3D models, are relatively young and their number is growing at an exponential rate. Copyright, customer or integrity information is embedded, using a secret key, into the media data as transparent patterns. Because the security information is integrated into the media data, one cannot ensure confidentiality of the media data itself, only of the security information by using a secret key.

In section 6 we discuss a media-independent classification based on application areas for digital watermarking.

4. SECURITY MEASURES
In this section we present mechanisms for the realization of security measures and point out problems of multimedia security.

4.1 Confidentiality
Confidentiality can be achieved by means of cipher systems. These systems are used to keep information secret from unauthorized entities. Private-key and some public-key cryptosystems can be used as cipher systems. For performance reasons, large amounts of data are enciphered by a session-key scheme (also known as a hybrid cryptosystem). This scheme applies both a private-key and a public-key cryptosystem. None of these mechanisms provides protection after deciphering, for example to check whether the data is presented in an unchanged form or who the owner of the data is in order to ensure copyright protection.
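To make the structure of such a session-key (hybrid) scheme concrete, the following Python sketch combines a toy symmetric cipher with textbook RSA on tiny parameters. Both primitives are illustrative stand-ins only; they are neither secure nor taken from any particular system discussed in this paper.

import os
import hashlib

def keystream(key: bytes, length: int) -> bytes:
    """Toy stream cipher: SHA-256 in counter mode (illustration only, not secure)."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def sym_encrypt(key: bytes, data: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

# Textbook RSA with tiny parameters as a stand-in for the public-key part.
# (n, e) plays the role of the recipient's public key PK, d the private key SK.
n, e, d = 3233, 17, 2753            # 3233 = 61 * 53, for illustration only

def pk_encrypt(m: int) -> int:      # "encrypt with PK"
    return pow(m, e, n)

def sk_decrypt(c: int) -> int:      # "decrypt with SK"
    return pow(c, d, n)

# Sender: encipher the bulk media data with a fresh session key,
# then protect the session key with the recipient's public key.
media = b"large amount of media data ..."
session_key = os.urandom(16)
ciphertext = sym_encrypt(session_key, media)
wrapped_key = [pk_encrypt(b) for b in session_key]   # byte-wise, toy only

# Recipient: recover the session key with the private key, then decipher.
recovered_key = bytes(sk_decrypt(c) for c in wrapped_key)
assert sym_encrypt(recovered_key, ciphertext) == media

Only the short session key passes through the expensive public-key operation; the bulk data is handled by the fast symmetric cipher, which is exactly the performance argument made above.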
Within streaming applications, usually a huge amount of data needs to be transmitted from a sender to a receiver in a time-critical and confidential manner. Even the session-key scheme fails to deliver the necessary level of performance. Another problem, deriving from the huge amount of data, is that media data can be changed due to transmission errors, higher compression rates, or scaling operations during transmission or lifetime. Using common encryption methods, the decryption of a ciphertext block may then fail: these methods have the property that the original plaintext cannot be recovered if one bit of the ciphertext block is altered. A general solution to these problems is partial encryption: instead of encrypting the whole amount of data, only special parts of the entire data are enciphered. If the selection is well chosen, a sound confidentiality of the whole data can be achieved.

Considering the results from performance measurements in secure video systems, several methods for partial encryption of video data have been proposed in the last few years [26,33,35]. The basic idea of these approaches is to encrypt only the relevant information, for example motion vectors, coefficients or header information. MPEG-1/MPEG-2 and H.261/H.263 are widespread compression standards used in most of today's video conferencing applications. They are well suited for partial encryption because, on the one hand, they make use of the DCT, which has a high potential for dividing data into more relevant and less relevant parts (entropy of the coefficients). On the other hand, large amounts of video data are encoded by reference to preceding or succeeding blocks (inter-coded blocks), so that only the referenced (intra-coded) blocks have to be protected.

There are several sophisticated approaches for applying partial encryption to non-scalable, standard-based hybrid video coding schemes like MPEG video. For layered (scalable) coding schemes, base layer encryption is an alternative: it does not require content parsing and therefore has a much lower overall computational complexity than partial MPEG encryption. Note that for base layer encryption the amount of encrypted data has to be determined a priori, whereas partial MPEG encryption allows different security levels even if a video has already been encoded.
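The following Python sketch illustrates the principle of partial encryption on a simplified, hypothetical bitstream of intra- and inter-coded units. The unit structure and the toy cipher are assumptions made for illustration; they do not reflect the actual MPEG syntax or the schemes of [26,33,35].

import os
import hashlib

def sym_encrypt(key: bytes, data: bytes) -> bytes:
    """Toy XOR stream cipher as in the previous sketch (illustration only)."""
    stream = b""
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, stream))

# Hypothetical simplified bitstream: a list of (block_type, payload) units.
# Only "intra" units (the referenced blocks) are enciphered; "inter" units
# that merely reference them are left in the clear.
frame = [
    ("intra", b"\x10\x22\x31\x07"),   # relevant: carries the picture content
    ("inter", b"\x02\x01"),           # motion-compensated difference only
    ("inter", b"\x00\x03"),
    ("intra", b"\x0f\x1a\x2c\x11"),
]

key = os.urandom(16)

def partial_encrypt(units, key):
    protected = []
    for i, (kind, payload) in enumerate(units):
        if kind == "intra":                      # selected, relevant part only
            payload = sym_encrypt(key + bytes([i]), payload)
        protected.append((kind, payload))
    return protected

protected_frame = partial_encrypt(frame, key)
# Decryption applies the same selective rule; the XOR cipher is its own inverse.
restored = partial_encrypt(protected_frame, key)
assert restored == frame

Because only the selected units pass through the cipher, the cryptographic workload scales with the relevant fraction of the stream rather than with its total size.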
4.2 Data Integrity
The integrity of data can be checked by means of one-way hash functions. Furthermore, some mechanisms presented in the following section 4.3 can be applied for detecting alterations of data. These mechanisms cannot prevent data manipulations, but they make such manipulations detectable; therefore, they are called detection mechanisms. The protected data itself remains in plaintext. A hash function maps strings of arbitrary length to strings of a maximum or fixed length. Hash functions are public, i.e. no secret information is used for the computation of a hash value. Thus, everyone knowing the function may compute the hash value and thereby check the integrity of the data. Regarding multimedia applications, media data can be changed by compression or scaling without manipulating the content. Therefore, hash functions are not very appropriate if they are applied to the media data directly. To solve this problem, hash functions should be applied to data describing the semantics of the media stream. These data are called feature codes and represent the content of the media; for image data, DCT coefficients or edge characteristics [18,27] can be used.
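As a minimal illustration of the feature-code idea, the following Python sketch hashes a coarsely quantized block-mean representation of an image instead of its raw pixels. This particular feature code is an assumption chosen for simplicity; it is not the method of [18] or [27].

import hashlib

def feature_code(pixels, block=8, levels=8):
    """Toy feature code: coarsely quantized block means of a grayscale image.

    `pixels` is a list of rows of 0..255 values. Small pixel-level changes
    (mild compression noise) usually leave the quantized means untouched,
    while content changes alter them.
    """
    h, w = len(pixels), len(pixels[0])
    code = []
    for by in range(0, h, block):
        for bx in range(0, w, block):
            vals = [pixels[y][x]
                    for y in range(by, min(by + block, h))
                    for x in range(bx, min(bx + block, w))]
            mean = sum(vals) / len(vals)
            code.append(int(mean * levels / 256))   # quantize to a few levels
    return bytes(code)

def content_hash(pixels) -> str:
    return hashlib.sha256(feature_code(pixels)).hexdigest()

# Usage: a mild, content-preserving distortion keeps the hash stable,
# while a content change (here: blanking a region) does not.
image = [[min(255, 8 * x) for x in range(32)] for y in range(32)]
noisy = [[min(255, v + 1) for v in row] for row in image]
edited = [row[:] for row in image]
for y in range(8, 16):
    for x in range(8, 16):
        edited[y][x] = 0

print(content_hash(image) == content_hash(noisy))    # True: small distortion absorbed
print(content_hash(image) == content_hash(edited))   # False: content change detected

Real content-based hashes use more robust features, but the structure is the same: hash the feature code, not the raw samples.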
4.3 Data Origin Authenticity
Message authentication codes (MACs), digital signatures (especially content-based digital signatures), fragile digital watermarks, and robust digital watermarks assure data origin authenticity. Additionally, the first three mechanisms also ensure data integrity. Similar to the mechanisms for data integrity, all four mechanisms are detection mechanisms and the protected data remains in plaintext.

A MAC is a one-way hash function which is parameterized by a secret key. Only those entities that know the secret key may calculate the MAC. Public-key cryptosystems can be used to generate and verify digital signatures [22,25]. A digital signature of an entity A (the signer) on data m depends on m and, additionally, on the private key of A. Each user is able to verify the authenticity of a signature created by A (verification) using the public key of A.

Regarding multimedia data, digital signatures could be used, e.g., for image/video authentication to ensure trustworthiness by means of public-key cryptosystems. However, applying digital signatures directly to digital image data is vulnerable to image processing techniques like conversion, compression or scaling: the image material is changed irreversibly without content modifications. Although the content of the image has not been changed and the viewers still have the same image impression, the signature verification would fail. Manipulations can thus be divided into content-preserving and content-changing manipulations. Digital signatures should therefore be applied to feature codes of the media data. Feature codes have to be used which are not altered by allowed operations such as scaling and conversion of media formats. Because feature codes should represent the content of the media, these mechanisms are called content-based authentication codes or content-based digital signatures.

Media integrity using digital watermarking differs from the cryptographic mechanisms introduced above (hash functions, message authentication codes, digital signatures, and content-based digital signatures), where the check value is appended to the data. Watermarking uses the redundancy of media data to slightly modify the media and embed the integrity information: the verification data is embedded in the media rather than appended to it. Possessing the appropriate secret key K, it is possible to verify the watermark and to evaluate whether the data was altered, particularly tampered with, by checking the embedded information. The use of public-key cryptosystems is at the moment unknown for fragile watermarking. Several techniques and concepts have been introduced for image and audio data as fragile digital watermarks using private-key cryptosystems. The existing approaches follow different strategies of tamper detection. Some approaches are very sensitive to changes, like check values; others try to recognize only content changes [18,27]. The latter approaches are usually called content-fragile watermarks. The problem with embedding the content itself as a watermark is that watermarking techniques usually cannot embed more than 10 to 100 bytes; it is therefore impossible to embed a content description at a data rate of 1 kByte or more. The solution for content-fragile watermarking is to combine a robust watermarking technique with the content characteristics for integrity detection. The main idea is to initialize a robust watermarking pattern with the content of the media.

A robust mark is designed to resist attacks that attempt to remove or destroy the mark. The intention is to embed an owner, producer or customer identification into the media data to ensure copyrights, using a private-key cryptosystem. As for fragile watermarks, secure public-key techniques are not known today. The robust watermark should remain present even after media processing or attacks, even if the content is manipulated.
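To contrast an appended check value with an embedded one, the following Python sketch shows a key-dependent fragile watermark that hides an HMAC-derived bit pattern in the least significant bits of selected samples. The embedding rule, positions and parameters are illustrative assumptions rather than one of the published schemes cited above.

import hmac, hashlib

MAC_BITS = 256  # length of the embedded check value (SHA-256 HMAC)

def _carrier(samples):
    """Samples with the LSBs at the embedding positions cleared."""
    cleared = list(samples)
    for i in range(MAC_BITS):
        cleared[i] &= ~1
    return bytes(cleared)

def _mac_bits(key: bytes, samples) -> list:
    digest = hmac.new(key, _carrier(samples), hashlib.sha256).digest()
    return [(byte >> (7 - k)) & 1 for byte in digest for k in range(8)]

def embed(key: bytes, samples: list) -> list:
    """Hide the keyed check value in the LSBs of the first MAC_BITS samples."""
    marked = list(samples)
    for i, bit in enumerate(_mac_bits(key, samples)):
        marked[i] = (marked[i] & ~1) | bit
    return marked

def verify(key: bytes, samples: list) -> bool:
    extracted = [samples[i] & 1 for i in range(MAC_BITS)]
    return extracted == _mac_bits(key, samples)

# Usage: the mark changes each carrier sample by at most 1 and breaks
# as soon as the marked media is modified.
key = b"shared secret key"
media = [((i * 37) % 251) for i in range(1024)]      # stand-in for pixel values
marked = embed(key, media)
print(verify(key, marked))           # True
tampered = list(marked)
tampered[500] += 4                   # manipulate one sample outside the LSB area
print(verify(key, tampered))         # False

The check value travels inside the media itself, which is the defining difference from a signature or MAC appended to the file.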
4.4 Entity Authenticity
Often it is necessary to ensure the authenticity of entities, e.g. for guaranteeing that the communicating parties (persons as well as devices) are indeed the ones they claim to be. Schemes enabling such a proof are called authentication protocols. The simplest version of an authentication protocol is the challenge-response protocol. Basically, such a protocol works as follows: the verifier sends the claimant a randomly generated number, the so-called challenge. The claimant returns a response to the verifier which consists of a value generated from the challenge and a secret. The verifier is thereby able to prove that the claimant possesses the secret. For each authentication a new challenge is generated. These protocols can be implemented, e.g., based on private-key or public-key cryptosystems [23].
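A minimal sketch of such a challenge-response exchange, here based on a private-key cryptosystem with an HMAC as the response function (an illustrative choice, not a specific mechanism from [23]):

import hmac, hashlib, secrets

shared_secret = b"secret known to claimant and verifier"

# Verifier: issue a fresh, unpredictable challenge for every run.
def make_challenge() -> bytes:
    return secrets.token_bytes(16)

# Claimant: prove possession of the secret without revealing it.
def respond(secret: bytes, challenge: bytes) -> bytes:
    return hmac.new(secret, challenge, hashlib.sha256).digest()

# Verifier: recompute the expected response and compare in constant time.
def check(secret: bytes, challenge: bytes, response: bytes) -> bool:
    expected = hmac.new(secret, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

challenge = make_challenge()
response = respond(shared_secret, challenge)
print(check(shared_secret, challenge, response))          # True
print(check(shared_secret, make_challenge(), response))   # False: replayed response fails

Because a fresh challenge is generated for every authentication, a recorded response cannot be replayed later.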
4.5 Non-Repudiation
Within legal proceedings, digital signatures on their own are not sufficient to link data and actions to their originators. Here, security infrastructures and security techniques may be used to provide evidence that will be accepted by courts. So-called non-repudiation mechanisms [24], which are based on private-key cryptosystems (message authentication codes) or public-key cryptosystems (digital signatures), support such security techniques. They comprise non-repudiation certificates, non-repudiation tokens and protocols. Trusted Third Parties (TTPs) supply notary services, timestamping services and evidence recording. By means of these mechanisms, it can be proven to involved parties and third parties whether or not a particular event occurred or a particular action happened. The event or action may be generating a message, sending a message, receiving a message or transmitting a message. Accordingly, these mechanisms are subdivided into non-repudiation of origin, non-repudiation of delivery, non-repudiation of submission, and non-repudiation of transport.

4.6 Public-Key Infrastructure
The use of public-key cryptosystems raises two problems:
• By means of session-key schemes, the encrypted session key (and thus the plaintext) may be recovered only with the private key of the recipient (so-called addressed confidentiality). However, it cannot be ascertained whether or not the public key used for the encryption of the session key actually belongs to a particular person (or device).
• Using digital signatures and signature-based authentication protocols, it can be checked whether the signature on particular data was generated with a specific key by verifying the digital signature. Thus, the authenticity of a message or communication can be proven. However, it is not provable whether or not the keys used actually belong to a certain person.
Obviously, an authentic link between the public key and its owner is needed. Such a link is provided by so-called public-key certificates [20,21]. For the issuing of certificates a trustworthy authority, a trust center (TC), is needed. Trust centers authenticate the link of users to their public keys, and can provide further services like non-repudiation, revocation handling, timestamping, auditing and directory services. Within a trust center, these services are provided by special components. Each trust center, and even each of its components, complies with a security policy. This policy regulates, e.g., the generation and distribution of certificates, and how the availability of the services is ensured.
5. DIGITAL CERTIFICATES: A SURVEY OF REVOCATION METHODS
Nowadays, large security infrastructures are being developed and established to meet security requirements. Public-key infrastructures and certificates, for example, form a basis that allows entities to trust each other. Certificates are generated and issued by a trustworthy authority named certification authority (CA). The security policy of a CA describes the lifecycle of key pairs or attributes, respectively, and thus the validity period of a certificate. Within this period, its reliability can be assured. The validity of the public key or attribute is specified in the certificate and signed together with other data by the CA. Therefore, certificates are unforgeable and are usually securely submitted to an authority that provides certificates; this authority is mostly called a directory. Typically, the validity period of a certificate is between several months and two years. But in some circumstances a certificate must be revoked, i.e. its validity must be terminated sooner than assigned. The revocation management needs to be clearly defined for CAs, directories, and users: CAs must provide a revocation service in a trustworthy manner and therefore publish a proper security policy. A user needs to know how and when a revocation must be initiated and how he is informed about it. A revocation is initiated by the owner of the certificate (the subject), by an authorized representative already mentioned in the certificate, or by a CA. Only the CA revokes certificates, and it complies with a revocation request only if the initiator is able to prove his authorization. Usually, the status of all certificates is submitted to the directory, which answers user requests concerning the validity of certificates. Depending on the security policy, this service might also be provided by another authority. Additionally, revocation methods must fulfill further requirements: a revocation needs to be fast, efficient, timely and particularly suited for large infrastructures. It is therefore necessary, e.g., to reduce the number of time-consuming calculations, such as verifications of digital signatures, by applying other mechanisms, or to minimize the amount of transmitted data. It is also desirable that a method supports suspending a certificate temporarily (placing it on hold) and reusing it. To prove the validity of a certificate, a user has to perform different tests, some of which are critical. One of the most critical is to determine whether a certificate has been revoked or not. Usually, a user determining whether a specific certificate has been revoked sends a request to a directory. The request contains at least a serial number, which represents a unique identifier for each certificate. The response includes the serial number, status, date and reason of revocation, and is then analyzed by the user. All revocation methods have in common that an authentic verification key of the (root) CA is required. In the following we give a short classification, point out the main reasons for the revocation of certificates, and give an overview of different kinds of revocation methods.

5.1 Classification and Reasons for Revocation
Methods for revocation can be classified in different ways:
(1) By the way of checking: The check can be performed either offline or online; sometimes both methods are applied. Within an offline scheme, the validity information is precomputed by a CA and then distributed to the requester by a non-trusted directory. Within an online scheme, the status information is provided online by a trusted directory: a proof of validity is performed during each request and provides up-to-date information.
(2) By the kind of list: Negative (black) lists contain revoked certificates, positive (white) lists contain valid certificates. Sometimes both mechanisms are combined.
(3) By the way of providing evidence: Direct evidence is given if a certificate is mentioned in a positive or negative list, respectively; it is then supposed to be not revoked or revoked, respectively. Indirect evidence is given if a certificate cannot be found on a list and therefore the contrary is assumed.
(4) By the way of distributing information: either via a push or a pull mechanism.
Coming from several threats, there are important reasons why a certificate needs to be revoked [38]:
• Key compromise: The private key of the subject (user) or of the issuer (CA) has been compromised or is suspected to be compromised (e.g. broken or stolen).
• Change of affiliation: Some information in the certificate about the subject or any other information is no longer valid.
• Superseded: The certificate is superseded; no further reasons are made available.
• Cessation of operation: The certificate is no longer needed for its assigned purpose.
Additionally, there are some further reasons why a certificate needs to be revoked (sorted in descending order of urgency):
• Algorithm compromise: The signature algorithm used by the CA has been broken in general, or the algorithm of the certified public key is compromised. This might be caused by new advances in algorithm theory, number theory or computing capabilities.
• Revocation of a superordinate certificate: A certificate that is part of the certification path is revoked.
• Loss or defect of the security token, loss of password or PIN: Either the subject of the certificate has lost its physical equipment or the equipment is damaged. Regularly, a password or a PIN protects the token from unauthorized access and can be lost, too.
• Change of key usage: The certified key can no longer be used for its assigned purpose.
• Change of security policy: The CA no longer works under its defined policy, e.g. it ceases to support a service for certificates.
Usually, the status information about the certificate includes the reason for the revocation.
5.2 Certificate Revocation Lists
Certificate revocation lists (CRLs) were introduced together with X.509 certificates in 1988 by ITU-T (formerly CCITT). Since the second edition of the X.509 Recommendation in 1993, revocation lists are based on an improved version 2 by ITU-T and ISO/IEC [38]. A CRL represents a negative list giving indirect evidence and is provided offline. CRLs are issued periodically, usually monthly. A CRL contains a list of serial numbers of revoked certificates together with their date of revocation, as well as the date of its generation and the latest date of the next issue. Optionally, more information, e.g. the reason of revocation, can be added. Finally, the CRL is digitally signed by the issuing CA; thus, its freshness and authenticity can be checked. CRLs are periodically sent to a directory. Users requesting the validity of a certificate receive the full CRL. They then check its actuality and verify the signature of the CRL. If this succeeds, they determine whether the queried certificate is included in the CRL or not. If the serial number cannot be found, the certificate is supposed to be still valid. Because CRLs are straightforward, they are easy to understand and thus widely used. Since the validity period of certificates is long and the number of users is immense, CRLs can grow extremely large, so that a great amount of data needs to be transmitted. The fact that a CRL is only up to date at its point of issue led to the definition of so-called delta-CRLs. A delta-CRL is issued between two CRL updates. It includes only the changes since the last issued CRL and so enhances efficiency. Delta-CRLs contain sequence numbers that allow the completeness of the CRL information to be verified.
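The client-side check described above can be sketched as follows. The CRL layout is a simplified, hypothetical structure and the signature verification is only a placeholder; this is not the X.509 v2 CRL encoding.

from datetime import datetime

def signature_valid(crl: dict, ca_public_key) -> bool:
    """Placeholder for verifying the CA's signature over the CRL contents."""
    return True  # assumed to be verified for the purpose of this sketch

def certificate_revoked(serial: int, crl: dict, ca_public_key, now: datetime) -> bool:
    # 1. The CRL must be authentic ...
    if not signature_valid(crl, ca_public_key):
        raise ValueError("CRL signature invalid")
    # 2. ... and fresh: reject a list whose announced successor is overdue.
    if now > crl["next_update"]:
        raise ValueError("CRL is stale, fetch a newer one")
    # 3. Indirect evidence: absence from the list means "supposed to be valid".
    return serial in crl["revoked"]

crl = {
    "this_update": datetime(2000, 10, 1),
    "next_update": datetime(2000, 11, 1),
    "revoked": {1021: datetime(2000, 9, 14), 4711: datetime(2000, 9, 30)},
}

print(certificate_revoked(4711, crl, ca_public_key=None, now=datetime(2000, 10, 15)))  # True
print(certificate_revoked(1234, crl, ca_public_key=None, now=datetime(2000, 10, 15)))  # False

The cost argument above is visible here: the whole "revoked" set has to be transferred and kept fresh even though only a single serial number is of interest.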
Besides CRLs, further revocation methods can be classified as follows:
• The certificate revocation system (CRS) [42] was introduced by Silvio Micali in 1995. His idea uses on-line/off-line signatures [37]. He improved the idea in 1996, redefining the CRS by revocation status, and obtained a patent in 1998 [44]. A CRS mixes positive and negative lists and thus gives direct evidence. The validity status of each certificate is treated separately: a user sending a query concerning the validity of a single certificate will get a response containing individual, short information about this certificate. Depending on the update schedule, the system can operate either online or offline.
• Certificate revocation trees (CRTs) were introduced by Paul Kocher in 1998 [39,40] and are based on hash trees [41]. CRTs are negative lists but also support information about non-revoked certificates (mixed form): regarding the sorted set of revoked certificates, all still valid certificates can be assigned to specific validity intervals. Thus, they give direct evidence.
• Another method is the Online Certificate Status Protocol (OCSP) [43] developed by the IETF. It specifies a protocol used to determine the current validity status of a certificate online. OCSP is designed for X.509 certificates but may also work with other kinds of certificates. The protocol can be used instead of or even together with CRLs if more timely information about the status is required. Information about the way to obtain a certificate's status can be included within the extension fields of an X.509 certificate.
Beside the presented methods, further revocation methods exist. Whether and in which way a revocation method is suited must be analyzed according to its purpose. An important aspect for this decision is its cost: high costs derive from the great amount of transmitted data that is needed to provide a proper revocation, but also from measures to ensure the availability of timely data. Using offline systems, the time period between two updates is commonly long and therefore the validity cannot be assured exactly; however, this is sufficient for the purposes of some applications. Online systems, appropriate where more timely information is needed, are obviously more expensive than offline systems. Another aspect is whether a revocation method is applicable to storage equipment like smart cards or other security tokens. The knowledge about the different revocation methods is not very widely spread. Efficient and practicable methods are still needed and are a topic of today's research. A main requirement for new developments and new ideas is that they can easily be integrated into the widely used X.509 certificates.
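To illustrate the hash-tree idea behind certificate revocation trees [39,40,41], the following sketch builds a Merkle tree over statements about revoked serial-number ranges and verifies a short membership proof against the root, which in a CRT would be signed by the CA. The leaf layout and hashing details are illustrative assumptions, not Kocher's exact construction.

import hashlib

def H(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def build_tree(leaves):
    """Return all levels of a Merkle tree, from the leaf hashes up to the root."""
    level = [H(leaf) for leaf in leaves]
    levels = [level]
    while len(level) > 1:
        if len(level) % 2:                       # duplicate the last node if odd
            level = level + [level[-1]]
        level = [H(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        levels.append(level)
    return levels

def proof(levels, index):
    """Sibling hashes needed to recompute the root for one leaf."""
    path = []
    for level in levels[:-1]:
        if len(level) % 2:
            level = level + [level[-1]]
        path.append((level[index ^ 1], index % 2))   # (sibling, am-I-the-right-child)
        index //= 2
    return path

def verify(leaf, path, root):
    node = H(leaf)
    for sibling, is_right_child in path:
        node = H(sibling + node) if is_right_child else H(node + sibling)
    return node == root

# Leaves assert which ranges of serial numbers are revoked; the directory hands
# out one leaf plus its path, and the user checks it against the signed root.
leaves = [b"revoked:100-120", b"revoked:300-300", b"revoked:415-440"]
levels = build_tree(leaves)
root = levels[-1][0]
print(verify(leaves[1], proof(levels, 1), root))            # True
print(verify(b"revoked:999-999", proof(levels, 1), root))   # False

Only a logarithmic number of hashes has to be transmitted per query, which is the efficiency argument for CRTs compared to shipping a full CRL.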
6. MEDIA-INDEPENDENT CLASSIFICATION SCHEME
Digital watermarking is a technology capable of solving important practical security problems. It is a highly multidisciplinary field that combines media and signal processing with cryptography, communication theory, coding theory, signal compression, and the theory of human perception. Interest in this field has recently increased because of the wide spectrum of applications it addresses. Although a wide variety of techniques has been proposed, it is rather difficult to classify the approaches and to measure their quality. Our intention is to classify the different watermarking schemes and to find quality measures. Our classification scheme takes the application areas into account and shows which parameters and attacks are essential. In comparison to Pitas [11], we do not analyze algorithmic details and do not refer to the domain where the watermark is embedded. Our goal is to give users a system (available over the web) to find the appropriate watermarking function along with its parameters.

6.1 Application-based Classification
In general, we find the following watermarking classes based on application areas for digital watermarking:
• Copyright Watermark: watermarking the data with an owner or producer identification.
• Fingerprint Watermark: watermarking the data with customer identifications to track and trace legal or illegal copies.
• Copy Control or Broadcast Watermark: ensuring copyrights with customer rights protocols, for example for copy or receipt control.
• Annotation Watermark: annotations or captions of the media data; this kind of watermark is also used to embed descriptions of the value or content of the data.
• Integrity Watermark: besides the authentication of the author or producer, we want to ensure the integrity of the data and recognize manipulations.
In our classification scheme we do not consider watermarking as an information hiding technique for secure covert communication.
6.2 Watermarking Parameters
The most important properties of digital watermarking techniques are robustness, security, imperceptibility/transparency, complexity, capacity, the possibility of verification, and invertibility:
• Robustness describes whether the watermark can be reliably detected after media operations. It is important to note that robustness does not include attacks on the embedding scheme that are based on knowledge of the embedding algorithm or on the availability of the detector function. Robustness means resistance to "blind", non-targeted modifications or common media operations; for example, the StirMark or 2Mosaik tools [12] attack the robustness of watermarking algorithms with geometrical distortions. For manipulation recognition, the watermark has to be fragile in order to detect altered media.
• Security describes whether the embedded watermarking information cannot be removed beyond reliable detection by targeted attacks based on full knowledge of the embedding algorithm and the detector (except the key) and on the knowledge of at least one watermarked data item. The concept of security includes procedural attacks, such as the IBM attack [3], and attacks based on partial knowledge of the carrier modifications due to message embedding [7] or the embedding of templates [13]. The security aspect also includes the false positive detection rates.
• Transparency relates to the properties of the human sensory system. A transparent watermark causes no artefacts or quality loss.
• Complexity describes the effort and time needed to embed and retrieve a watermark. This parameter is essential for real-time applications. Another aspect is whether the original data is required in the retrieval process or not; here we distinguish between non-blind and blind watermarking schemes.
• Capacity describes how many information bits can be embedded. It also addresses the possibility of embedding multiple watermarks in one document in parallel.
• The verification procedure describes whether we have a private verification, like private-key functions, or a public verification possibility, like the public-key algorithms in cryptography.
• Invertibility describes the possibility of reproducing the original data during the watermark retrieval.
The optimization of these parameters is mutually competitive and cannot be achieved for all of them at the same time. If we want to embed a large message, we cannot require high robustness simultaneously; a reasonable compromise is always a necessity. Conversely, if robustness to strong distortion is an issue, the message that can be reliably hidden must not be too long.
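To make the interplay of transparency, robustness and blind detection concrete, the following toy sketch embeds an additive spread-spectrum watermark into a one-dimensional signal and detects it by correlation without the original. Strengths, thresholds and the pseudo-random pattern are illustrative assumptions, not a particular published scheme.

import random

def pattern(key: int, length: int):
    """Key-dependent pseudo-random +/-1 spreading sequence (the secret pattern)."""
    rng = random.Random(key)
    return [1 if rng.random() < 0.5 else -1 for _ in range(length)]

def embed(signal, key, strength=5.0):
    w = pattern(key, len(signal))
    return [s + strength * p for s, p in zip(signal, w)]

def detect(signal, key, threshold=2.5):
    """Blind detection: correlate the mean-removed signal with the key pattern."""
    w = pattern(key, len(signal))
    mean = sum(signal) / len(signal)
    corr = sum((s - mean) * p for s, p in zip(signal, w)) / len(signal)
    return corr > threshold

random.seed(7)
host = [random.uniform(0.0, 255.0) for _ in range(16384)]   # stand-in for sample values
marked = embed(host, key=1234)

print(detect(marked, key=1234))    # True: correct key, high correlation
print(detect(marked, key=9999))    # False: wrong key, correlation near zero
# Robustness against a mild, non-targeted distortion (additive noise):
noisy = [s + random.uniform(-1.0, 1.0) for s in marked]
print(detect(noisy, key=1234))     # True: the mark survives this distortion

Raising the embedding strength improves robustness but hurts transparency, and shortening the signal (or embedding more bits) weakens the detection statistic, which is exactly the competition between parameters described above.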
6.3 Important Parameters
Each of the five classes of watermarks has its own quality parameters and standards. In Table 1 we point out the general watermarking parameters described in section 6.2 for each of the five watermark classes; these can be used as general quality metrics. Our goal is to classify new and existing algorithms into this scheme. We will offer a web interface where researchers and industry can register their algorithms in a classification database to provide user transparency. In addition, we show possible attacks, which depend on the application area [2]. For example, the Fingerprint Watermark has to deal with the coalition attack: when we watermark the original with different user identifications, we produce different copies, and customers could collude by comparing their different copies in order to find and destroy the Fingerprint Watermark [1,5].
Table 1: Important parameters and attacks

Copyright Watermark
  Parameters: high robustness; high security; non-perceptual; blind methods are usually more practicable; capacity should fit the needs of a rightful owner identification; verification process usually private, public verification can also be desirable.
  Attacks: Mosaik attack [12]; StirMark attack [9,12]; geometrical attacks [2,6]; histogram attacks [10]; template attacks [13]; forgery attacks [8]; rightful ownership attacks (invertibility) [3].

Fingerprint Watermark
  Parameters: see Copyright Watermark; also non-blind techniques are useful.
  Attacks: see Copyright Watermark; additionally the coalition attack [1,5].

Copy Control/Broadcast Watermark
  Parameters: see Fingerprint Watermark; low complexity required.
  Attacks: see Copyright Watermark.

Annotation Watermark
  Parameters: robustness in most cases less important; security is usually not important; blind methods with low complexity are preferable; high capacity; verification process usually private, public may be desirable.
  Attacks: in most cases no interest in attacking the watermark.

Integrity Watermark
  Parameters: see Copyright Watermark, but robustness only until the semantics of the data is destroyed (semi-fragile, content-fragile).
  Attacks: forgery attacks [8]; rightful ownership attacks (invertibility) [3]; attacks on the fragility [4].

The Integrity Watermark is a fragile watermark that is readily altered or destroyed when the host image is modified through a linear or nonlinear transformation. The sensitivity of fragile watermarks to modification leads to their use in image authentication. Fridrich [14] classifies fragile, semi-fragile, robust watermarks, and self-embedding as means for detecting both malicious and inadvertent changes to digital media. Currently, fragile watermarks are very sensitive to changes and can detect every possible change in pixel values. That can be of interest for parties who want to verify that an image has not been edited, damaged, or altered since it was marked. But in many applications we have to cope with several allowed post-production editing processes which do not manipulate the content of the image or video data. The semi-fragile schemes try to address this problem. These techniques are moderately robust, and the value identifying the presence of the watermark (a correlation in most cases) can serve as a measure of tampering. The problem with these schemes is that we cannot recognize whether the content or the message of the media was affected or manipulated. Therefore, we need approaches which can distinguish malicious changes from innocent image processing operations. Such techniques [14] could be termed authentication of the visual content. We have to distinguish between content-preserving and content-changing manipulations. Most existing techniques use threshold-based decisions on the visual integrity. The main problem is to face the wide variety of allowed content-preserving operations: as we see in the literature, most algorithms address the problem of compression, but very often scaling, format conversion or filtering are also allowed transformations. Most existing techniques recognize scaling, format conversion or compression as integrity violations, but to allow several post-production editing processes we need more sophisticated approaches. Another problem we recognize for the Integrity Watermark is the interplay of robustness and the recognition of manipulations: the robustness has to be adapted to content-changing and content-preserving manipulations, which must be addressed by the watermarking algorithms. It appears that no single scheme can provide precise localization of manipulations without being too sensitive. Depending on the application area, the user has to choose the appropriate technique.

7. SECURING RSVP FOR MULTIMEDIA APPLICATIONS
In this section we give an example design for securing RSVP for multimedia applications. Distributed multimedia applications require end-to-end quality of service (QoS) in order to be accepted and used. One approach to achieve end-to-end QoS is to provide end-to-end resource reservations. The Resource ReSerVation Protocol (RSVP) [47,45] is a unicast and multicast signalling protocol for setting up network bandwidth reservations. RSVP sets up a distributed state in routers and hosts and is used by a host to request a specific QoS from the network for particular application data streams or flows. It is also used by routers to deliver QoS requests to all nodes along the path(s) of the flows and to establish and maintain state to provide the requested service. The RSVP protocol raises several security issues [47,45,48]:
(1) Message integrity and node (router) authentication: RSVP reservation request messages could be corrupted or spoofed, leading to theft of service or denial of service by unauthorized persons.
(2) User authentication: Policy control will depend largely on the positive authentication of the user responsible for each reservation request.
(3) Non-repudiation: A user should not be able to falsely deny later that he sent a message.
(4) Confidentiality: The RSVP request messages must not be visible to outside parties. This implies the need for encryption of RSVP messages.
(5) Replay attacks: RSVP messages could be replayed by intruders, causing wasteful reservations and/or theft of service.
(6) Traffic analysis: Outside parties should not be able to infer the RSVP requests by analyzing the traffic. This depends on the strength of the encryption mechanism used.
(7) Cut-and-paste attacks: Outside parties may copy an RSVP request from one RSVP flow and paste it into another, leading to denial of service or theft of service by unauthorized persons.
(8) Denial-of-service attacks: These can be achieved by (a) corrupting packets, (b) degrading network utilization, (c) spoofed messages by unauthorized persons leading to theft of service, and (d) dropping of packets by unauthorized persons.
7.1 RSVP with Scalable QoS Protection (RSVP-SQoS)
The current solutions for securing RSVP suffer from high overhead [46] and scalability problems [48]. To make the design scalable, we divide the network into domains or subnetworks and modify the algorithm in [48]. Each subnetwork has an ingress node and an egress node: all traffic entering a subnetwork passes through the ingress node and all outgoing traffic passes through the egress node. For this hierarchical network we design a hybrid protocol, called RSVP with Scalable QoS Protection (RSVP-SQoS), consisting of two major protocol processing approaches, one within a subnetwork and one across subnetworks. The security assumptions for the two approaches are different: within a subnetwork, we assume a weaker threat model than across subnetworks. Within a subnetwork we therefore use delayed integrity checking, whereas when RSVP messages cross subnetworks, a stronger integrity check is done, with encryption if necessary. Authentication is done using digital signatures. The delayed integrity check within a subnetwork is performed by making each egress node send a feedback message for integrity checking of the RSVP QoS parameters within that subnetwork. While this integrity checking is going on, the egress node can either wait (pessimistic approach) or forward the packet ahead (optimistic approach). Whenever an intrusion is detected, TearDown messages are sent, tearing the connection down and preventing further intrusions. The detailed protocol, together with the advantages and disadvantages of the two approaches, is given in [15].

In summary, our design views the network as a set of subnetworks; the security assumptions within a security domain are less strict than those across subnetworks, and the RSVP messages are processed differently within a subnetwork and across subnetworks. The protocol achieves integrity checking of the RSVP messages, authenticates the sender of the messages, and protects against replay attacks. It aims at providing a scalable and flexible solution that minimizes the delay in detecting intrusions with low overhead. The lower overhead in time is achieved by subnet-to-subnet authentication as opposed to hop-by-hop authentication [46], and the lower overhead in space is achieved by not storing all keys.
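The delayed integrity check within one subnetwork can be sketched as follows: the ingress node keeps a keyed digest of the QoS parameters it forwarded, the egress node reports back a digest of what it received, and a mismatch triggers a TearDown. Message formats, key handling and function names are illustrative assumptions and do not reproduce the actual RSVP-SQoS messages of [15].

import hmac, hashlib

subnet_key = b"key shared by ingress and egress of this subnetwork"

def qos_digest(flow_id: str, qos: dict) -> bytes:
    """Keyed digest over the QoS parameters of one reservation."""
    blob = flow_id.encode() + b"|" + repr(sorted(qos.items())).encode()
    return hmac.new(subnet_key, blob, hashlib.sha256).digest()

# Ingress node: forward the RESV message into the subnetwork and remember
# what was forwarded, so it can later be compared against egress feedback.
forwarded = {}
def ingress_forward(flow_id: str, qos: dict):
    forwarded[flow_id] = qos_digest(flow_id, qos)
    # ... forward the (unmodified) RSVP message to the next hop ...

# Egress node: in the optimistic variant, forward immediately and send the
# feedback digest back for the delayed check.
def egress_feedback(flow_id: str, received_qos: dict) -> bytes:
    return qos_digest(flow_id, received_qos)

# Ingress node: delayed integrity check; a mismatch means some router inside
# the subnetwork tampered with the QoS parameters, so the reservation is torn down.
def delayed_check(flow_id: str, feedback: bytes) -> str:
    if hmac.compare_digest(forwarded[flow_id], feedback):
        return "reservation kept"
    return "TearDown sent"

ingress_forward("flow-1", {"bandwidth_kbps": 512, "delay_ms": 40})
ok = egress_feedback("flow-1", {"bandwidth_kbps": 512, "delay_ms": 40})
bad = egress_feedback("flow-1", {"bandwidth_kbps": 8192, "delay_ms": 40})
print(delayed_check("flow-1", ok))    # reservation kept
print(delayed_check("flow-1", bad))   # TearDown sent

Only the two border nodes of a subnetwork perform cryptographic operations and store keys, which is where the time and space savings over hop-by-hop authentication come from.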
8. SUMMARY
Whether or not the presented security mechanisms can easily be used for multimedia systems must be examined for each kind of multimedia data and each multimedia application separately. Additionally, it must be checked on which features the mechanisms need to be based. All together, the main difference to non-media data is that cryptographic mechanisms usually have to be applied to special parts of the media data:
• For providing confidentiality: Instead of encrypting the whole data, only special parts of the entire data are encrypted (partial encryption). This ensures a high performance.
• For providing authenticity: Message authentication codes and digital signature schemes have to be applied to the content, i.e. the feature code, of the media data, which is robust to allowed media operations.
• For providing data originality: New techniques have to be developed which can guarantee that data are presented in an unchanged form and not in a copy, and therefore support copyright protection.
In order to provide non-repudiation services and revocation management for cryptographic mechanisms as well as for watermarking techniques, a proper security infrastructure has to be established and proper security policies must be defined.

9. ACKNOWLEDGMENTS
Our thanks to Vanish Talwar for his contributions to the research on RSVP for multimedia applications.

10. REFERENCES
[1] D. Boneh, J. Shaw: Collusion-Secure Fingerprinting for Digital Data. In Proc. of CRYPTO '95, Springer LNCS 963, pp. 452-465, 1995.
[2] I. J. Cox, J.-P. M. G. Linnartz: Public watermarks and resistance to tampering. In Proc. of IEEE Int. Conf. on Image Processing, 1997, available only on CD-ROM.
[3] S. Craver, N. Memon, B. Yeo, M. Yeung: Can Invisible Watermarks Resolve Rightful Ownerships? Technical Report RC 20509, IBM Research Division, July 1996.
[4] J. Dittmann, A. Steinmetz, R. Steinmetz: Content-based Digital Signature for Motion Pictures Authentication and Content-Fragile Watermarking. In Proc. of IEEE Multimedia Computing and Systems, June 7-11, 1999, Florence, Italy, Volume 1, pp. 574-579, 1999.
[5] J. Dittmann, A. Behr, M. Stabenau, P. Schmitt, J. Schwenk, J. Ueberberg: Combining digital Watermarks and collision secure Fingerprints for digital Images. In Proc. of the SPIE Conference on Electronic Imaging '99, Security and Watermarking of Multimedia Contents, 24-29 January 1999, San Jose, USA, Proceedings of SPIE Vol. 3657, pp. 171-182, 1999.
[6] J. Dittmann, F. Nack, A. Steinmetz, R. Steinmetz: Interactive Watermarking Environments. In Proc. of the International Conference on Multimedia Computing and Systems, Austin, Texas, USA, 1998, pp. 286-294.
[7] J. Fridrich: Methods for Data Hiding. Working paper, Center for Intelligent Systems & Department of Systems Science and Industrial Engineering, SUNY Binghamton, 1997.
[8] M. Holliman, N. Memon: Counterfeiting Attack on Linear Watermarking Schemes. In Workshop Security Issues in Multimedia Systems, IEEE Multimedia Systems Conference '98, Austin, Texas, 1998.
[9] M. Kutter, F. Petitcolas: Fair Benchmark for Image Watermarking Systems. In Proc. of the SPIE Conference on Electronic Imaging '99, Security and Watermarking of Multimedia Contents, 24-29 January 1999, San Jose, USA, Proceedings of SPIE Vol. 3657, pp. 226-239, 1999.
[10] M. J. J. Maes: Twin peaks: The histogram attack to fixed depth image watermarks. In Proc. of the Workshop on Information Hiding, Portland, April 1998.
[11] N. Nikolaidis, I. Pitas: Digital Image Watermarking: an Overview. In Proc. of IEEE Multimedia Computing and Systems, June 7-11, 1999, Florence, Italy, Volume 1, pp. 1-6, 1999.
[12] F. Petitcolas, R. Anderson, M. Kuhn: Attacks on copyright marking systems. In Proc. of the Second International Workshop on Information Hiding '98, 14-17 April 1998, Portland, Oregon, USA, Springer LNCS 1525, pp. 219-239, 1998.
[13] T. Pun: Watermark Attacks. DFG V3D2 Watermarking Workshop, http://www.lnt.de/~watermarking, 1999.
[14] J. Fridrich: Methods for Tamper Detection of Digital Images. In J. Dittmann, K. Nahrstedt, P. Wohlmacher (Eds.), Multimedia and Security, Workshop at ACM Multimedia '99, Orlando, Florida, USA, Oct. 30 - Nov. 5, 1999, pp. 29-34, 1999.
[15] Proceedings ACM Multimedia 2000 Workshops, November 4, 2000, Los Angeles, California, ISBN 1-58113-311-1, 2000.
[16] J. Dittmann, K. Nahrstedt, P. Wohlmacher: Approaches to Multimedia and Security. IEEE International Conference on Multimedia and Expo, 30 July - 2 August 2000, New York, USA, pp. 1275-1278, ISBN 0-7803-6536-4, 2000.
[17] Department of Defense: Department of Defense Trusted Computer System Evaluation Criteria (Orange Book). DOD 5200.28-STD, Dec. 1985.
[18] J. Dittmann: Digitale Wasserzeichen: Grundlagen, Verfahren, Anwendungsgebiete. Springer, 2000.
[19] Information Technology Security Evaluation Criteria (ITSEC): Provisional Harmonised Criteria. V 1.2, Jun. 1991.
[20] ISO/IEC 9594-8 | ITU-T Recommendation X.509: Final Text of Draft Amendments DAM 1 to ITU-T Recommendation X.509 (1993) | ISO/IEC 9594-8 on Certificate Extensions. ISO/IEC JTC 1/SC 21/WG 4 and ITU-T Q15/7, Dec. 1996.
[21] ISO/IEC 9594-8 | ITU-T Recommendation X.509: Information technology - Open Systems Interconnection - The Directory. Part 8: Authentication Framework, 1993.
[22] ISO/IEC 9796:1991 Information technology - Security techniques - Digital signature scheme giving message recovery.
[23] ISO/IEC 9798: Information technology - Security techniques - Entity authentication. Part 1: General. Part 2: Mechanisms using encipherment algorithms. Part 3: Mechanisms using a public-key algorithm.
[24] ISO/IEC 13888: Information technology - Security techniques - Non-repudiation. Part 1: General. Part 2: Using private-key techniques. Part 3: Using public-key techniques.
[25] ISO/IEC 14888:1998 Information technology - Security techniques - Digital signatures with appendix.
[26] T. Kunkelmann: Sicherheit für Videodaten. Vieweg, 1998.
[27] C.-Y. Lin, S.-F. Chang: A Robust Image Authentication Method Surviving JPEG Lossy Compression. SPIE Storage and Retrieval for Image and Video Databases, San Jose, CA, USA, Jan. 1998.
[28] A. J. Menezes, P. C. van Oorschot, S. A. Vanstone: Handbook of Applied Cryptography. CRC Press, 1997.
[29] National Computer Security Center: Trusted Database Management System Interpretation of the Trusted Computer System Evaluation Criteria. NCSC-TG-021, V 1, Apr. 1991.
[30] National Computer Security Center: Trusted Network Interpretation of the Trusted Computer System Evaluation Criteria (Red Book). NCSC-TG-005, V 1, Jul. 1987.
[31] NATO: NATO Trusted Computer System Evaluation Criteria (Blue Book). NATO AC/35-D/1027, 1987.
[32] F. Petitcolas, R. Anderson: Weaknesses of Copyright Marking Systems. Multimedia and Security Workshop at the Sixth ACM International Multimedia Conference, Sep. 12-16, 1998, Bristol, England; workshop notes published by GMD GmbH, GMD Report 41, 1998, pp. 55-61.
[33] L. Qiao, K. Nahrstedt: Comparison of MPEG Encryption Algorithms. IEEE Journal on Computer Graphics & Applications, Special Issue: Data Security in Image Communication and Network, Vol. 22, No. 4, Pergamon, 1998, pp. 437-448.
[34] L. Qiao, K. Nahrstedt: Non-invertible Watermarking Scheme for MPEG Audio. SPIE Security and Watermarking of Multimedia Contents, San Jose, CA, Jan. 1999, pp. 194-202.
[35] L. Qiao, K. Nahrstedt: Watermarking Methods for MPEG Encoded Video: Towards Resolving Rightful Ownership. IEEE Multimedia Computing and Systems Conference, Austin, TX, June 1998, pp. 276-286.
[36] A. Salomaa: Public-Key Cryptography. Springer, 1996.
[37] S. Even, O. Goldreich, S. Micali: On-Line/Off-Line Digital Signing. In Proc. of CRYPTO '89, pp. 263-275.
[38] International Telecommunication Union: ITU-T Recommendation X.509 (1997 E): Information technology - Open Systems Interconnection - The Directory: Authentication Framework, June 1997 (also published as ISO/IEC International Standard 9594-8).
[39] P. Kocher: A Quick Introduction to Certificate Revocation Trees. http://www.valicert.com/resources/bodyIntroRevocation.html
[40] P. Kocher: On Certificate Revocation and Validation. In Proc. of Financial Cryptography 1998, pp. 172-177.
[41] R. Merkle: Secrecy, Authentication, and Public-Key Systems. Ph.D. Dissertation, Department of Electrical Engineering, Stanford University, 1979.
[42] S. Micali: Enhanced Certificate Revocation. Technical Memo MIT/LCS/TM-542, 1995.
[43] M. Myers, R. Ankney, A. Malpani, S. Galperin, C. Adams: X.509 Internet Public Key Infrastructure Online Certificate Status Protocol - OCSP. IETF, Jun. 1999. http://www.rfc-editor.org/rfc/rfc2560.txt
[44] United States Patent No. 5,793,868: Certificate Revocation System. Silvio Micali, Date of Patent: 11 Aug. 1998.
[45] A. Mankin, F. Baker, B. Braden, S. Bradner, M. O'Dell, A. Romanow, A. Weinrib, L. Zhang: Resource ReSerVation Protocol (RSVP) - Version 1 Applicability Statement: Some Guidelines on Deployment. RFC 2208, 1997.
[46] F. Baker, B. Lindell, M. Talwar: RSVP Cryptographic Authentication. RFC 2747, January 2000.
[47] R. Braden, L. Zhang, S. Berson, S. Herzog, S. Jamin: Resource ReSerVation Protocol (RSVP) - Version 1 Functional Specification. RFC 2205, September 1997.
[48] T.-L. Wu, S. Wu, F. Gong: Securing QoS: Threats to RSVP Messages and Their Countermeasures. Proceedings of IWQoS, pp. 62-64, 1999.