Protecting web users from phishing, spoofing and malware

Sunday, February 26, 2006

Amir Herzberg
Dept. of Computer Science, Bar Ilan University

Abstract

We describe the current state of web security, and identify the main problems. We then present proposals for improvements, including: a secure site identification widget; secure and convenient `single click logon`; improved validation certificates; and the use of public-key signatures and automated resolutions and penalties to defend against malicious content, including malware. The web and its users suffer from a growing amount, and a growing variety, of malicious, criminal abuse, despite the deployment of sophisticated cryptographic protocols (SSL/TLS). We believe that modest improvements to browser security indicators and mechanisms can prevent many of these abuses, including many of the phishing, spoofing, malware and cross-site scripting attacks. These proposals focus on secure usability aspects, and should make browsing easier rather than more cumbersome; the performance requirements are modest. Our discussion is largely based on experience and conclusions from developing TrustBar [HG04], an improved security-indicator extension to the FireFox browser, including feedback received from users, surveys, and empirical data collected.

1. Current Web Security: Mostly SSL-based Logon

The importance of security to web users and services is obvious, and indeed essentially all browsers and almost all web servers support advanced, public-key cryptographic protocols, mainly the Secure Socket Layer (SSL) protocol (or its standardized version, the Transport Layer Security (TLS) standard); for details see e.g. [R00]. The main goal of SSL is to protect sensitive traffic, such as credit card numbers and passwords, sent by a consumer to web servers (e.g. merchant sites and e-banking logon pages).

Simplified description of SSL as used in most sites. SSL operation is divided into two phases: a handshake phase and a data transfer phase. We illustrate this in Figure 2, for a connection between a client and an imaginary bank site (http://www.bank.com). During the handshake phase, the browser confirms that the server has a domain-name public key certificate. Such a certificate is a statement signed (digitally) by a trusted entity, called a Certificate Authority (CA), specifying a public key PK_server and authorizing its use with the domain name www.bank.com contained in the specified web address (URL). The certificate is digitally signed by the CA; this proves to the browser that the CA believes that the owner of the domain name www.bank.com is also the owner of the public key PK_server.

Next, the browser chooses a random key k, and sends to the server Encrypt_PKserver(k), i.e. the key k encrypted using the public key PK_server. The browser also sends MAC_k(messages), i.e. a Message Authentication Code, computed using key k over the previous messages. This proves to the server that an adversary did not tamper with the messages to and from the client. The server returns MAC_k(messages) (with the last message from the browser added to messages); this proves to the browser that the server was able to decrypt Encrypt_PKserver(k), and therefore owns PK_server (i.e., it holds the corresponding private key). This concludes the handshake phase.

The data transfer phase uses the established shared secret key to authenticate and encrypt requests and responses. Again simplifying, the browser computes Encrypt_k(Request, MAC_k(Request)) for each Request, and the server computes Encrypt_k(Response, MAC_k(Response)) for each Response. This protects the confidentiality and integrity of requests and responses.
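To make the handshake concrete, here is a minimal sketch in Python, using the `cryptography` package. The fixed `transcript` string stands in for the real handshake messages, and the RSA key size and OAEP padding are our illustrative assumptions; real SSL/TLS negotiates these details.

```python
import os
from cryptography.hazmat.primitives import hashes, hmac
from cryptography.hazmat.primitives.asymmetric import rsa, padding

# Server key pair; in real SSL/TLS, PK_server arrives in a CA-signed certificate.
server_private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
PK_server = server_private_key.public_key()
oaep = padding.OAEP(mgf=padding.MGF1(hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

# Handshake, client side: choose k, send Encrypt_PKserver(k) and MAC_k(messages).
transcript = b"client-hello|server-hello|certificate"  # placeholder for prior messages
k = os.urandom(32)                                     # random shared key k
encrypted_k = PK_server.encrypt(k, oaep)               # Encrypt_PKserver(k)
mac = hmac.HMAC(k, hashes.SHA256())
mac.update(transcript)
client_mac = mac.finalize()                            # MAC_k(messages)

# Handshake, server side: recover k, check the client's MAC, and return a MAC
# over the extended transcript, proving the ability to decrypt Encrypt_PKserver(k).
k_at_server = server_private_key.decrypt(encrypted_k, oaep)
check = hmac.HMAC(k_at_server, hashes.SHA256())
check.update(transcript)
check.verify(client_mac)                               # raises if tampered with
reply = hmac.HMAC(k_at_server, hashes.SHA256())
reply.update(transcript + client_mac)
server_mac = reply.finalize()                          # returned to the browser
print("handshake complete; shared key k established")
```

In the data transfer phase, each request and response would similarly be protected as Encrypt_k(Request, MAC_k(Request)), e.g. with AES for the encryption and the same HMAC construction for the MAC.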

Notice that the handshake phase, as described, authenticates the server (via its certificate), but not the client. SSL includes an optional mechanism for client authentication, using a client certificate, but it is rarely used. Instead, users usually identify themselves to the site by presenting their name and a password. This sensitive data is transferred over SSL, encrypted with the key shared with the server during the handshake phase. The security of this process depends on several correct-usage assumptions; a failure of any of these assumptions can lead to exposure of the password. Here are three assumptions related to user behavior; unfortunately, as we show, all three have common failure scenarios:

Illustration 2: Unprotected logon form, yet with a padlock image

First assumption: Users will send their password only via an SSL/TLS protected connection.

Failure scenarios: Current browsers indicate the use of SSL by a padlock in the status area. Some browsers, e.g. FireFox, recently added further indicators, such as a yellow background and another padlock icon in the location bar. However, users often forget to check these indicators; indeed, many users are not aware of their significance. In fact, many logon pages are not protected by an SSL/TLS connection; most of these sites invoke SSL/TLS to protect the password in transit, but since the logon form itself is on an unprotected page, an attacker could serve a look-alike page that sends the password to the attacker instead, and users would not be able to detect this. Adding to users' confusion, many of these pages contain an image of a padlock as part of the page itself; see e.g. the unprotected logon page of Chase in Illustration 2, and notice the padlock image included as part of the page (rather than in the status or address/location bar). Such failures occur on several sensitive, widely used logon forms, e.g. those of PayPal, Chase, Microsoft Passport, Bank of America, and (currently) many more (see the I-NFL Hall of Shame). Several works have also shown `advanced spoofing attacks` that present fake SSL indicators and/or a fake location (URL) [FB*97, FS*01, LY03, YS02, YYS02].

Illustration 3: Unprotected logon in the reo.com domain (not BankOfAmerica)

Second assumption: Users should specify or confirm the identity of the site as specified in the certificate.

Failure scenarios: Browsers validate that the address (URL) of the site is authorized by the certificate. However, users often do not directly specify the address; instead, the browser often takes the address from a link in another (potentially malicious) page, or from a (potentially malicious) email message. Sites can remove the address/location bar, and possibly include a fake one, to prevent users from detecting the use of a wrong address. Furthermore, most users are not aware of the structure of domain names, and may e.g. believe that the domain name BankOfAmerica.reo.com belongs to Bank of America; indeed, this example is the real address of one of the logon services of Bank of America, see Illustration 3. Finally, many users do not check the address bar, even among the (relatively few) who are aware of the structure of domain names and of the risk of deceptive domain names.

Third assumption: Users should use an independent, hard-to-guess password for each service, and should never provide passwords in an insecure environment.

Failure scenarios: Users often use weak passwords, listed in `dictionaries` of common passwords. Also, users often reuse the same password for multiple sites, and often provide their passwords in insecure environments such as public terminals.

To conclude, notice that most attention is focused on the threat of password exposure (and subsequent abuse). However, a malicious web site can have other goals, such as distributing malware or misleading information. In the next section, we discuss mechanisms to improve the ability of the user to identify sites securely. We then focus on the problem of secure logon, and finally discuss protection against misleading/malicious web sites.

2. Site identification widget

Since we cannot rely on users to identify sites by the URL or by the contents of the SSL certificate, browsers should help users identify sites by a secure site identification widget, such as TrustBar [HG04]. The main element of the site identification widget is a textual string or a graphical element identifying the site, such as the organization name or logo. Two possible sources for this identifying string or graphic are:

1. User defined (`petname`). For example, TrustBar allows users to `type in` a name, or to select an image on the page as the logo (from the right-mouse-click menu); see Illustration 4. A sketch of such a petname table follows below.

2. Certified, from the `organization` field of the SSL certificate or from a `logotype` certificate, sent by the site or received from a `site identification server`.
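The following Python sketch shows one way a widget might keep such a petname table; keying entries by both domain and certificate fingerprint is our illustrative assumption, so that a look-alike site with a different certificate shows no petname.

```python
# Hypothetical petname table for a site identification widget.
from dataclasses import dataclass

@dataclass
class Petname:
    name: str                # user-typed name, e.g. "My bank"
    logo_path: str | None    # image selected by the user, if any

petnames: dict[tuple[str, str], Petname] = {}

def set_petname(domain: str, cert_fingerprint: str, entry: Petname) -> None:
    petnames[(domain, cert_fingerprint)] = entry

def lookup(domain: str, cert_fingerprint: str) -> Petname | None:
    # An unknown (domain, certificate) pair yields None: the widget then falls
    # back to the certified organization name, or warns that the site is
    # unidentified.
    return petnames.get((domain, cert_fingerprint))
```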

Illustration 4: An SSL protected page with TrustBar

Notice that in TrustBar (Illustration 4), when we identify a site by an SSL certificate, we also display `Identified by:` and the name of the certificate authority (CA). This was recently adopted by the developers of version 7 of Internet Explorer [F05], see Illustration 5, with two changes. The first change is that the name of the site and the `Identified by:` text alternate in the same area, to save screen `real estate`. The second change is more significant, namely displaying the identification widget only for sites with extended validation certificates.

Illustration 5: Site identification (above) and CA identification (below) in IEv7 (beta)

Extended validation certificates. As the name implies, extended validation certificates will involve extended, standardized identity-validation mechanisms (compared to the process for current SSL certificates). This will help prevent attackers from obtaining a certificate for a domain which is deceptively similar to a domain of a sensitive logon site (usually of a financial institution). Several browser vendors and certificate authorities are involved in defining the details of extended validation certificates, including the process by which an entity will become an extended validation certificate authority accepted by browsers. This recognizes the reality that the existing requirements to become a (standard) certificate authority accepted by browsers do not mandate specific identification mechanisms. Furthermore, while users may remove (and add) authorities to the list accepted by the browser, very few modify the `default` list. The `extended validation` approach, as well as the current default certificate authority list, assumes that users expect to receive, with the browser, a service that chooses trustworthy (extended validation) certificate authorities. This is based on two assumptions. The first assumption seems hardly disputable: users want to delegate the choice of trusted certificate authorities. The second assumption is that (almost) all users want browser vendors to choose trustworthy certificate authorities, directly or via third-party programs such as WebTrust.

Delegation to security service providers. Users may want to delegate the choice of trusted certificate authorities to other providers of security and network services, such as anti-malware services or a corporate security manager. Browsers, or extensions such as TrustBar, may allow users to perform such delegation, possibly providing a list of reputable security service providers, and allowing this list to be customized in a configuration file, e.g. by security software, a corporation or an ISP. When the user explicitly delegates the selection of trustworthy certificate authorities to a non-default provider, e.g. her anti-malware service provider, it may be unnecessary to add the `Identified by:` display.

Public-protest-period certificates are an alternative to `extended validation certificates`, where the certificate request, including the name of the organization, the URL and possibly a logo (image), is published for a sufficient public-protest period before the certificate is issued. The public-protest period allows sites, e.g. banks, to prevent the issuing of misleading certificates. Similar mechanisms have been used successfully, for many years, in many existing official registration services (e.g. for trademarks and company names).

The Picture In Picture (PIP) attack is a simple, generic attack against site identification widgets. Consider the partial screen shot presented in Illustration 6 below. What do you see?

Many users may believe that what they see here are two browser windows: an external unprotected window, and an internal SSL-protected window of E*Trade. Both windows use TrustBar. Even experts, trying to access the E*Trade site, may ignore the seemingly-irrelevant DigiCrime window, and enter their password. However, exactly the same display could be the result of a Picture In Picture (PIP) spoofing attack. In this attack, there is really only one browser window, from DigiCrime.com; the `internal window`, appearing to be a window of E*Trade, is actually a Java applet emulating a browser displaying the E*Trade window. Notice that if the browser allows a web page to display windows without any security indicators, as is possible in most current browsers, then the attacker can simply present the `internal, emulated` page, with no need to display the `external, container` page.

To prevent the PIP attack, we recommend the following additional security user interface mechanisms:

1. Customized security indicators, e.g. displaying for each user a unique, secret image as part of the browser's security indicators; see `dynamic security skins` [DT05].

2. Mandatory and highly visible borders on all browser windows. Without such borders, an attacker may be able to overlay a valid browser window (with correct security indicators, possibly including customized security indicators) with another browser window containing only a Java applet – no borders, bars or menus. By placing this applet page exactly over the password dialog, or over the entire `real` page, the attacker can trick users, e.g. into entering the password.

3. A significant change in the appearance of non-active web pages, e.g. `shadowed`, with reduced visibility. This effect may be invoked only upon entering a sensitive or protected site.

Secure identification without SSL. Many sensitive web pages are not protected and identified (by SSL), often due to the overhead of the SSL handshake, and in particular its public key operations. This may be solved by using a shared-key SSL handshake, e.g. the TLS Ticket proposal [SZ*06]. Another alternative is to authenticate the page contents, using digital signatures or a shared-key message authentication code. This alternative allows third-party identification of sites, e.g. identification of non-SSL login sites of banks by a security service provider (without depending on authentication by the bank, e.g. using TLS Ticket).

To conclude, improved site identification, such as done by TrustBar (and planned for IEv7), can reduce the success probability of spoofing and phishing attacks. However, a significant probability of spoofing may remain. We need complementary mechanisms, which will reduce the damage from spoofing. We present such mechanisms in the next two sections. The next section suggests improvements to the logon process, to prevent password exposure. The following section suggests mechanisms to filter and block malicious content, including malware.

3. Single-Click Logon for Security and Convenience

Most sensitive websites use a username-password login process; in the previous section, we concluded that even with improved site-identity indicators, many users may fail to detect spoofed sites. We now present an improved, `single-click` login process, improving both convenience and security; the solution will even protect users of sites that use SSL only to protect the password submission but not the login form, e.g. Chase (Illustration 2).

Client-only single-click logon. We first present a client-only single-click logon process that does not require any change in the server, or any hardware token. This motivates the adoption of improved logon mechanisms by clients before servers adopt them, avoiding a `chicken and egg deadlock` situation. Our improved logon mechanism builds on the existing login manager supported by many browsers, e.g. FireFox. The login manager is an optional mechanism that stores the username-password pair used for each login. Upon entering a site from this list, the browser automatically populates the username and password fields. To prevent abuse, a `master password` is required before the first auto-filling of a password (in each session). Special care must be taken to protect the master password itself; in particular, the current `master password` dialog in the FireFox browser can be subject to a PIP attack; this can be prevented using the general-purpose defenses described earlier, or by prompting for the master password before displaying the first web page.

We propose to combine the login manager with the site identification widget. Namely, instead of auto-filling the username and password fields and having the user click the `submit` button as provided by the site, we propose to disable the submit button in the page. Instead, the user will be instructed to click on the `site identification widget`. On identified logon pages, the widget will automatically display the logo (or name/petname) of the site. The widget may also provide a pull-down menu with logos/names of other known (login) sites, allowing `single-click` login even from another site. Therefore, users never enter their username and password into a (possibly malicious, spoofed) web page. Instead, the widget securely submits the password and username, using SSL. If the widget does not yet know the password and username for a logon site, it will securely prompt the user. The widget checks that the password is strong and different from other stored passwords (if any). The list of usernames and passwords can be encrypted under the master key and saved to disk or moved to new machines easily, as done e.g. for the private-key keyring in PGP [Z95]; a minimal sketch of such a keyring follows.
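The following Python sketch illustrates such an encrypted keyring; the JSON layout, the PBKDF2 parameters and the use of Fernet are assumptions made for this illustration, not a description of any browser's actual login manager.

```python
import base64, json, os
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC

def keyring_cipher(master_password: str, salt: bytes) -> Fernet:
    # Derive the keyring-encryption key from the user's master password.
    kdf = PBKDF2HMAC(algorithm=hashes.SHA256(), length=32,
                     salt=salt, iterations=600_000)
    return Fernet(base64.urlsafe_b64encode(kdf.derive(master_password.encode())))

salt = os.urandom(16)
cipher = keyring_cipher("master password, entered once per session", salt)

keyring = {"https://www.bank.com": {"username": "alice", "password": "s3cr3t!"}}
blob = cipher.encrypt(json.dumps(keyring).encode())  # safe to store or move

# On another machine (with the same salt), the master password recovers the list.
restored = json.loads(keyring_cipher(
    "master password, entered once per session", salt).decrypt(blob))
assert restored == keyring
```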

We next use the keyring for login from a remote, untrusted computer.

Single-click logon proxy service. The client-only logon mechanism requires a single click only when used from the user's computer, where the passwords and usernames are stored. We can expand the solution to allow the user to log on from a remote computer which does not have the (encrypted) keyring, if the user is willing to place limited trust in a new entity which we call a single-click logon proxy service.

In a simple single-click proxy design, the user sends the master password to the proxy over a secure (SSL) connection, e.g. using the secure identification widget. The user also sends her (encrypted) keyring to the proxy, possibly in advance. This allows the proxy to open another SSL connection, to the web server to which the user wants to log on, and use the user's password to log on to that site. This simple design exposes the entire keyring and master password to the proxy; other designs can reduce this trust requirement.

Single-click with server and/or token support. Some additional improvements are possible when the web server is willing to support advanced single-click logon functionality, and/or when the user has a secure hardware token (e.g. phone, PDA, …). We first discuss the case of server support for single-click logon. This can reduce the computational workload of SSL on the server (and also on the client, but this is usually less important). Specifically, we avoid public key cryptography. This requires support for a shared-key SSL handshake, e.g. the TLS Ticket proposal [SZ*06].

Next, assume that the user also has a secure hardware token, such as a smart card, mobile phone, personal digital assistant (PDA), etc. Such a device can securely identify the user, e.g. using biometrics, and authenticate itself to the server in a secure manner, using strong cryptographic mechanisms, e.g. (efficient) shared-key encryption. A secure token has some inherent advantages compared to a software-only solution; most notably, it stores the authentication key inside it, so even if used on a computer controlled by an attacker, the key is not exposed. We can use the token to store the key shared with the site, to improve security, especially for the mobile user (using a computer which is not `his own`). Suppose that the token shares a key k with the server; at every logon, we can use a separate key, e.g. PRF_k(time), to limit the possible damage from exposure of the key by the (rogue) computer (see the sketch at the end of this section). Here, PRF is a pseudo-random function as defined in [GGM86], realized e.g. by a block cipher such as the Advanced Encryption Standard (AES).

Server authentication after logon. The single-click logon process improves the ability of the site to identify the customer, but does not authenticate the server to the client. There are many reasons why server authentication may be critical; a spoofed server may be able to give misleading information or malware, or to receive confidential information from the user. Therefore, even after single-click logon, server authentication is critical. Since the server is confident of the identity of the user, and communicates with the user over a secure (SSL) connection, there is a simple solution. Namely, the server can present to the user a highly visible yet personalized `greeting`, e.g. a picture selected in advance by the user, and/or a user-specific audio signal (music). A spoofed site will not know the correct greeting for the user, and therefore the attack is likely to fail.
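The per-logon key PRF_k(time) can be realized with a block cipher, as noted above. Here is a minimal Python sketch; encoding the time into a single AES block is our illustrative choice.

```python
import struct, time
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def prf(k: bytes, t: int) -> bytes:
    # A block cipher applied to a single block is a standard realization of a
    # PRF on short inputs; here the input is the logon time t.
    block = struct.pack(">Q", t).rjust(16, b"\x00")  # encode t into one AES block
    encryptor = Cipher(algorithms.AES(k), modes.ECB()).encryptor()
    return encryptor.update(block) + encryptor.finalize()

token_key = bytes(16)                             # all-zero demo key; the real k
                                                  # never leaves the secure token
per_logon_key = prf(token_key, int(time.time()))  # PRF_k(time)
```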

4. Defending against malicious content

The single-click logon mechanisms can help protect users against password theft by spoofed web sites, e.g. due to phishing attacks (fake email with a link to the spoofed site). However, there are other common attacks which this will not prevent. In particular, another common technique for password theft is the use of key-loggers or other malicious software (malware) running on the user's computer. Malware can cause damage in many other ways, e.g. allow attackers to bypass firewalls and attack an internal network (`trapdoor`), send confidential information to an attacker outside the organization (`Trojan` or spyware), send spam or perform Denial of Service attacks, and more. We focus on browser security, and discuss prevention of malware distribution via web sites. Some of the ideas can be adapted to prevent malware distribution via other protocols, e.g. email and instant messaging.

Browsers have several defenses against the distribution of malware. Specifically, browsers prompt the user for approval before saving a file, or before performing other security-sensitive operations based on content received from the network (e.g. adding a new certificate authority). Unfortunately, users often approve such requests without fully realizing the implications (the `click-through syndrome`). Even security-savvy users often install software from the Web, usually from `reputable sites`. Therefore, security here relies on site authentication. Unfortunately, existing site authentication mechanisms are very vulnerable, and even secure identification widgets will only provide a reasonable level of site authentication, definitely not a perfect one. How, then, can we protect against download of malware from a spoofed download site?

Furthermore, there are other malicious-content attacks where browsers may not even require the user's approval (before using the malicious content). One example is applets and scripts (e.g. JavaScript), executed by default, in most browsers, without asking the user's permission. Browsers execute applets and scripts automatically, since the possible operations are intentionally restricted, with the goal of preventing damage to the user; this is called the sandbox model. Another example is content such as graphics, audio and video files, or other file types which are `viewed` by different `helper applications`, such as Adobe Acrobat Reader or Microsoft Word. However, there are many ways in which scripts, applets and even images can be dangerous. In particular, secure implementation of the sandbox model is challenging, and there have been numerous vulnerabilities allowing attacks via scripts, applets, documents and even images. In particular, some of the most common web attacks involve different forms of cross site scripting (XSS), where a web site is tricked into sending a malicious script to the browser, usually embedded in a web page. Such scripts often expose confidential information of the user of the site, e.g. cookies allowing access to the user's account, or otherwise attack the user, the site, or other Internet services.

Furthermore, attackers can send content which is simply deceptive, attacking the user directly rather than exploiting a vulnerability of the computer or of the site. This is how most phishing/spoofing sites work: they copy, or simply link to, the content, e.g. images and logos, of the original, cloned site, thereby misleading users into believing they are at the trusted (financial, download, …) site. There are also other types of malicious, undesirable content. For example, parents and organizations often want to block types of content they deem inappropriate, such as pornography. We next briefly discuss two approaches to deal with malicious content received via the web: blacklists of `bad` sites, and the use of signed content and reputation.

Blacklists of malicious content sites. One natural approach to protect users from spoofed web sites, and potentially other sites containing malicious content, is to maintain a `blacklist` of such sites. When the browser reaches such a page, it can warn the user and/or block access to the page. Several extensions and a few browsers now support such blacklists. Blacklists rely on databases of spoofed (or, more generally, malicious) web sites, maintained by dedicated organizations and/or built from feedback from users of security browser extensions. They can be very effective in blocking known malicious sites, and if they are sufficiently accurate, browsers could completely block access to the suspect sites – implementing the `defend, don't ask` principle. Blacklists have already been applied, for years, in other applications. Most notably, to block spam, many incoming mail servers rely on one or more blacklists of mail servers and domains which may be sources of spam; servers usually query these lists using the DNS protocol.

However, blacklists have significant limitations and drawbacks; many of these problems are well known from the use of blacklists for spam prevention, and some are even more severe for web sites. As a result, we expect blacklists to be very useful in the short term, but to have limited long-term value, and to require complementary measures. Some of the problems are:

1. Blacklists are reactive; they only list identified servers. Setting up a new site is very easy and inexpensive, and can be automated.

2. Current anti-spoofing blacklists seem to always operate based on the server's domain name. However, buying a new domain is trivial for attackers, and almost cost free. Furthermore, listing domain names still allows for attacks by DNS poisoning, which is often still possible. This will require listing IP addresses and then IP address blocks – as done by anti-spam blacklists. However, this is likely to result – as for spam – in the listing of larger and larger address blocks, cutting off innocent users as well as attackers.

3. The control and management of such blacklists requires a very large, manual operation and expense, and on the other hand gives the operator control over availability. This may have undesirable social and economic consequences. The market for anti-spoofing blacklist services is just emerging, so it is hard to predict how much of a problem this may be, but some of the early prices that we have seen are quite high. This may make this method applicable only to one or very few vendors of browsers, as well as large ISPs and corporations.

4. Attackers are often able to exploit only a part of the services of a site, e.g. by buying site hosting services (without controlling the entire web server). Blocking the entire server may not be a viable solution. Consider, for example, an attacker using a hosting service such as Akamai. Cross site scripting, mentioned above, is an extreme case, where the site is not malicious at all (but only has a common vulnerability). Notice that this problem does not exist for anti-spam blacklists.

We conclude that blacklists of malicious domains can be an important defensive mechanism, but additional, complementary or alternative mechanisms are needed. We next describe a specific proposal for such a complementary / alternative mechanism to prevent malicious-content attacks via the web.

Default block mode: only signed content. Finally, let us sketch a new direction for blocking malicious content, which we call default block mode. Recall that blacklisting allows all web content except for content from blacklisted sites. Instead, default block mode will block all web objects, or at least all objects which are considered more dangerous, except for objects specifically approved. Default block mode may be applied by the user, e.g. upon clicking a logon bookmark or a special `block mode` button. Alternatively, default block mode may be applied as a filtering service, by the Internet Service Provider (ISP) or by the corporation (for corporate users). We can apply default block mode to one or more types of potentially malicious content, including executables (software downloads, applets, scripts, macros, etc.) as well as `passive` content such as images.

Simple forms of default blocking are already available in existing browsers. One method is blocking certain forms of content based on source address, i.e. allowing that content only from a specific list of servers (a `white list`). One example is the `Trusted Zone` in the Internet Explorer browser. Another example is the NoScript extension to the FireFox browser, preventing the execution of scripts (except for designated sites). Notice that both of these existing mechanisms focus on executables, and on blocking based on source address. However, blocking based on source address may be insufficient. In particular, it will fail to block malicious content inserted into legitimate sites, e.g. in a cross-site scripting (XSS) attack, or when some parts of a site are legitimate and other parts are malicious, e.g. in personal pages of a campus. Furthermore, blocking by source address fails if the attacker is able to perform address spoofing, e.g. via Domain Name System (DNS) poisoning. Finally, blocking by source address may be overly restrictive, since it essentially requires listing every (non-malicious) site; users may find it acceptable as a method to block executables (e.g. scripts), but it would usually be inappropriate for blocking other forms of malicious content, such as images.

A possible solution is default blocking of everything except properly, digitally signed content. In particular, instead of blocking all scripts except those from a list of permitted sources (as in NoScript), the browser may block all scripts except those carrying an appropriate digital signature (a sketch of such a check follows the rating-label list below).

An appropriate signature will be one signed by a rating authority trusted by the user, or trusted by a security service provider to whom the user delegated the content-filtering decision. The rating labels are a crucial element of our design. These ratings should indicate well-defined, easily disputable aspects relevant to the blocking decision, e.g.:

1. This script/executable does not contain malware.
2. This image does not contain any logo, trademark or malware.
3. This image contains only logos and trademarks of Foo.com Inc.
4. This page or image does not contain pornographic materials inappropriate for minors.
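As a sketch of how a browser or extension might enforce default block mode with such signed ratings, the following Python fragment checks a rating authority's RSA-PSS signature over a script together with its rating label before allowing execution; the label format and the choice of signature scheme are assumptions made for this illustration.

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding

PSS = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                  salt_length=padding.PSS.MAX_LENGTH)

def allow_script(script: bytes, rating: bytes, signature: bytes,
                 authority_pk) -> bool:
    # Run `script` only if a trusted rating authority signed (rating || script)
    # and the rating is one the user's policy accepts.
    try:
        authority_pk.verify(signature, rating + script, PSS, hashes.SHA256())
    except InvalidSignature:
        return False                  # unsigned or tampered content: default block
    return rating == b"no-malware"    # a well-defined, disputable rating label
```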

By using well-defined, easily disputable ratings, security service providers will be able to use reputation and accreditation services to ensure the quality of the content-filtering process. Further research is required to turn this idea into a viable design, and in particular to address the performance and usability issues. Notice that the technique of default blocking except for properly signed content is already applied for specific forms of (executable) content. Specifically, the Internet Explorer browser allows, by default, automated installation and execution of ActiveX controls signed by authorities trusted by Microsoft. A more general trusted-executable system was proposed and deployed [R95], however without automated blocking.

References

[DT05] Rachna Dhamija and J. Doug Tygar. The Battle Against Phishing: Dynamic Security Skins. In Proc. ACM Symposium on Usable Security and Privacy (SOUPS 2005), pages 77-88, 2005.

[F05] Rob Franco. Better Website Identification and Extended Validation Certificates in IE7 and Other Browsers. Microsoft Developer Network's IEBlog, November 21, 2005 (http://blogs.msdn.com/ie/archive/2005/11/21/495507.aspx).

[FB*97] Edward W. Felten, Dirk Balfanz, Drew Dean, and Dan S. Wallach. Web Spoofing: An Internet Con Game. Proceedings of the Twentieth National Information Systems Security Conference, Baltimore, October 1997. Also Technical Report 540-96, Department of Computer Science, Princeton University.

[FS*01] Kevin Fu, Emil Sit, Kendra Smith, and Nick Feamster. Dos and Don'ts of Client Authentication on the Web. In Proceedings of the 10th USENIX Security Symposium, Washington, D.C., August 2001.

[GGM86] Oded Goldreich, Shafi Goldwasser, and Silvio Micali. How to Construct Random Functions. Journal of the ACM (JACM), 33(4):792-807, October 1986.

[HG04] A. Herzberg and A. Gbara. Protecting (even) Naïve Web Users, or: Preventing Spoofing and Establishing Credentials of Web Sites. DIMACS Technical Report 2004-23, May 2004.

[LY03] Tieyan Li and Yongdong Wu. Trust on Web Browser: Attack vs. Defense. International Conference on Applied Cryptography and Network Security (ACNS '03), Kunming, China, October 16-19, 2003. Springer LNCS.

[R00] Eric Rescorla. SSL and TLS: Designing and Building Secure Systems. Addison-Wesley, 2000.

[R95] Aviel D. Rubin. Trusted Distribution of Software Over the Internet. Proc. ISOC Symposium on Network and Distributed System Security, pp. 47-53, February 1995.

[SZ*06] J. Salowey, H. Zhou, P. Eronen, and H. Tschofenig. Transport Layer Security Session Resumption without Server-Side State. Internet Draft draft-salowey-tls-ticket-07.txt, expires July 29, 2006.

[YS02] Zishuang (Eileen) Ye and Sean Smith. Trusted Paths for Browsers. USENIX Security Symposium 2002, pp. 263-279.

[YYS02] Eileen Zishuang Ye, Yougu Yuan, and Sean Smith. Web Spoofing Revisited: SSL and Beyond. Technical Report TR2002-417, February 1, 2002.

[Z95] Philip R. Zimmermann. The Official PGP User's Guide. MIT Press, Boston, 1995.