FeelTrust: Providing Trustworthy Communications in Ubiquitous Mobile Environment

Giuliana Carullo, Aniello Castiglione, Giuseppe Cattaneo, Alfredo De Santis
Dipartimento di Informatica, Università degli Studi di Salerno
[email protected], [email protected], [email protected], [email protected]

Ugo Fiore
Centro per i Servizi Informativi, Università di Napoli Federico II
[email protected]

Francesco Palmieri
Dipartimento di Ingegneria Industriale e dell'Informazione, Seconda Università degli Studi di Napoli
[email protected]
Abstract—The growing intelligence and popularity of smartphones and the advances in Mobile Ubiquitous Computing have resulted in a rapid proliferation of data-sharing applications. Instances of these applications include pervasive social networking, games, file sharing, and so on. In such scenarios, users are usually involved in selecting the peers with whom communication should take place, continuously facing trust issues. Unfortunately, providing trust support in a pervasive world is challenging due to peer mobility and the lack of central control. We propose a novel approach that establishes trust by leveraging users' profiles: humans today produce rich strings of unique data twenty-four hours a day. This information enables a task-aware trust model, namely a finer-grained model in which users are classified as trusted or not depending on the intended business activity. However, simply collecting users' interests may be insufficient to provide a reasonable trust management system. In order to enable the system to recognize malicious users, we include a recommendation subsystem based on the Wilson score confidence interval. It has been designed to be lightweight, minimizing battery depletion, and it also protects user privacy. To make our approach fully deployable, it supports two modalities: a TPM-based one and a TPM-less one. The former gives more security guarantees and ensures a fully distributed approach. The latter requires a Trusted Authority to prevent feedbacks from being tampered with, and is no longer fully distributed.

Index Terms—Pervasive Computing; Recommendation System; Mobile Sensing; Trusted Computing; Data Mining; Reputation System; Smartphones; Mobile Security.
I. INTRODUCTION

Security in ubiquitous environments is a fast-growing field that spans several areas, including risk management [17], access management [4], [18], and privacy [9]. This paper stems from the observation that our interests and habits have a deep impact on our behavior: our hobbies are tightly connected with the places we visit every day, the searches we carry out on the Web, and so on. As a consequence, we produce rich strings of unique data twenty-four hours a day. Until today, these data have been used for convenience, not for security. For example, while surfing the Internet our browsing habits are ceaselessly tracked by third parties for user profiling, social network connection suggestions, and targeted advertising.

Corresponding author: Aniello Castiglione, [email protected], Via Ponte don Melillo, I-84084 Fisciano (SA), ITALY. Phone: +39 089 969594.
We envision a new class of security-related applications for smartphones that monitor multiple dimensions of human behavior to mitigate security issues present in several fields (e.g., access control and trust management). An important enabler of this vision is the new trend in computer technologies. Indeed, in real-world scenarios, computing is so ubiquitous that it is moving beyond PCs to everyday devices such as smartphones. This is due to the advances in smartphones, which are equipped with powerful embedded technologies and sensors, including accelerometers, GPS, microphones, and cameras. Seminal work on this topic has been done by Tang et al. [22]. They exploited data mining to provide user authentication. To this end, they performed classification on application history and GPS data, which can be collected easily without user awareness while giving, at the same time, a good characterization of users' habits. We claim that there are still technical issues to be solved to make our vision a reality. One of the major problems is securing sensed data from being tampered with. Trusted hardware such as a Trusted Platform Module (TPM) can be a solution. Unfortunately, this approach is not widely available on smartphones, and it is not 100% secure. However, eliminating all possible vulnerabilities is beyond the scope of this paper, and most of them are unlikely to disappear anytime soon. Our work addresses the aforementioned issues and paves the way for a trust management model that exploits the link between trust and user behavior. Thus, we present FeelTrust, a prototype implementation for smartphones that helps people in the first stage, i.e., deciding whether or not an interaction with a stranger is desirable. Indeed, with the proliferation of data-sharing applications such as social networking or file sharing apps, users are usually involved in selecting the peers with whom communication should take place, continuously facing trust issues. In order to mitigate the risks of interacting with malicious peers, FeelTrust continuously monitors users' behavior and incorporates users' feedback to increase awareness of the trust level of the peers with whom communication takes place. Moreover, to make FeelTrust deployable on a wide range of devices, two modalities are supported: a TPM-based one and a TPM-less one. The former gives more security guarantees
and ensures a fully distributed approach. The latter requires a Trusted Authority to prevent feedbacks from being tampered with. Notice that this approach is no longer fully tamper-proof, hence a user can potentially modify his/her profile. Finally, privacy is also addressed, since it is a key element for user acceptance of these new technologies.

The rest of the paper is organized as follows. Section II gives some technical details needed to understand the proposed trust model, and Section III presents FeelTrust's design guidelines. Sections IV and V discuss the FeelTrust architecture and the detailed implementation, together with a use case scenario, respectively. Section VI discusses some experimental results, while Section VII presents related work. Final remarks and future work close the paper in Section VIII.

II. TECHNICAL BACKGROUND

In this section, we provide background information on technical aspects that are necessary to the understanding of FeelTrust. Hence, we first introduce some fundamental concepts about Trust Management, and then Trusted Platform Modules are described.

A. Trust Management

Security mechanisms are intended to protect users against malicious parties. Therefore, they typically protect resources or interactions from malicious users, usually by enforcing authentication mechanisms. This approach is not well suited for cases in which mutual trust is not based on identity, but rather on the mutual knowledge of the behavioral patterns of the other parties. In such scenarios, participating peers want to relax the usual checks on authentication and establish a communication based on reciprocal trust. Trust alone, however, may be a weak indicator of a good match, because of the intrinsic nature of interactions, which usually involve anonymous users interacting with each other. This issue, typical of trust-based communications, can be mitigated by including a reputation management system, which certifies the quality of the user profile used to leverage the trust level of that user. Several works in the literature [19], [27] agree on the fact that trust and reputation are closely related concepts and that they share several common properties, including:

• Context specific - Trust and reputation both depend on a given context. For example, Johanna trusts Mike as her lawyer, but she does not trust Mike as a mechanic who can fix her car. Thus, in the context of seeing Mike as a lawyer, he is trustworthy. But in the context of fixing a car, Mike is untrustworthy.

• Multi-faceted - Even in the same context, there is a need to differentiate trust levels (or reputations) depending on different aspects of the capability of a given peer. For example, a customer might evaluate an online seller from several aspects, such as the quality and price of goods. For each aspect, he/she develops a trust level. The combination of all these scores represents the overall trust in the given peer.
• Dynamic - Both trust and reputation increase or decrease with further experience. In addition, they decay with time.

Despite these common properties, trust and reputation cannot be pinpointed to the same category. A non-negligible difference is that trust is active, in the sense that it is a node's belief in the trust characteristics of a peer. Reputation, instead, is passive, meaning that it is the perception that peers have about a certain node.
B. Trusted Platform Module

The Trusted Platform Module (TPM) was originally designed by the Trusted Computing Group, which published the official TPM specifications [24]. The TPM is a relatively inexpensive hardware device that provides several tamper-proof mechanisms to secure input and output, including protected storage capabilities, creation of trusted identities, and integrity measurement. For this purpose, each TPM chip has embedded memory and logic. In particular, it comes with an Endorsement Key (EK) that uniquely identifies an individual device. The EK is an RSA key pair burned into the device by the smartphone manufacturer. This key is kept secure inside the TPM; thus, any data signed via the EK must have originated from the device that contains the TPM itself. A TPM can also generate new public-private key pairs, called Attestation Identity Keys (AIKs), to provide anonymous quoting. An AIK becomes trustable only after a trusted third-party privacy Certificate Authority (privacy CA) generates a certificate for the public half of the AIK. TPM devices are available in many off-the-shelf PCs and laptops, and there is an industry push to make them available on smartphones.
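To make the EK/AIK trust chain concrete, the following Python sketch simulates it with ordinary RSA keys from the `cryptography` package. It is a simplified illustration of the certification and quoting steps only, not the real TPM command interface; all key objects, the measurement value, and the variable names are hypothetical.

```python
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes

PSS = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                  salt_length=padding.PSS.MAX_LENGTH)

# EK: burned into the device; AIK: freshly generated for anonymous quoting.
ek = rsa.generate_private_key(public_exponent=65537, key_size=2048)
aik = rsa.generate_private_key(public_exponent=65537, key_size=2048)
aik_pub_bytes = aik.public_key().public_numbers().n.to_bytes(256, "big")

# The device proves AIK ownership to the privacy CA by signing the request
# with its EK (the real protocol is more involved than this).
request_sig = ek.sign(aik_pub_bytes, PSS, hashes.SHA256())

# The privacy CA certifies the public half of the AIK after checking the EK,
# so verifiers can trust AIK signatures without learning the EK identity.
ca = rsa.generate_private_key(public_exponent=65537, key_size=2048)
ek.public_key().verify(request_sig, aik_pub_bytes, PSS, hashes.SHA256())
aik_cert_sig = ca.sign(aik_pub_bytes, PSS, hashes.SHA256())

# A quote over some measurement is signed with the AIK, never the EK.
measurement = b"profile-statistics-hash"
quote_sig = aik.sign(measurement, PSS, hashes.SHA256())

# Any peer holding the CA public key can verify the whole chain
# (verify() raises InvalidSignature on failure).
ca.public_key().verify(aik_cert_sig, aik_pub_bytes, PSS, hashes.SHA256())
aik.public_key().verify(quote_sig, measurement, PSS, hashes.SHA256())
```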
III. DESIGN CONSIDERATIONS

While designing distributed trust systems, it is first necessary to identify desirable properties. These include user anonymity, accountability, and lightweight operation (both in terms of required storage and of scalability). Another important property is that trust should be a function of reputation information, mimicking social trust between humans, which depends on relationship development. Hence, a reputation system should support both types of recommendations (good and bad ones), incorporating the three classical dimensions of computational trust (context, subjectiveness, and time). Finally, it should have a trust metric that is expressive, yet tractable. We derived our design guidelines bearing in mind these properties and the major threats to reputation-based systems. Hence, in the following, we discuss the countermeasures that FeelTrust adopts to offer a smaller surface to these threats.

A. Protecting Device Owner Privacy

Smartphones are tightly integrated with the everyday life of their users. Thus, properly dealing with privacy issues is of paramount importance. Moreover, potential privacy risks could be a barrier to the wide adoption of services based on mobile sensing. Recall that privacy means control over sensitive personal data: the user should be able to decide who can read, modify, and distribute his/her information. This is especially true for our system, since it relies on highly sensitive data that, if spoofed, would allow a complete reconstruction of the user profile. As argued in [25], the location where all these data are stored is a crucial point. Certainly, a centralized approach takes control away from the user, thus negatively impacting privacy. Therefore, our approach relies on a decentralized system with local storage of the sensed data.

B. Bad Mouthing Attack

The Bad Mouthing Attack has been widely discussed, since it is the most straightforward attack. It consists of a malicious party providing dishonest recommendations to frame good parties and/or boost the trust values of malicious peers [2]. In order to make our system safe from this attack, we evaluated two possible countermeasures. First, the recommendation trust can be treated as an additional dimension in the malicious entity detection process. As a result, if a node has low recommendation trust, its recommendations will have minor influence on good nodes' decision-making, and it might be detected as malicious and expelled from the network. Second, it may happen that an attacker behaves well for a certain period of time to obtain a disproportionately high reputation and then provides dishonest recommendations. To discourage and easily detect this kind of attack, misbehaving users with high reputation should be punished with an increased weight of negative feedback. Despite these alternatives, our system exploits the Wilson score confidence interval [28], which itself provides the probability of the user's reliability. As a consequence, neither of the previously described countermeasures to the Bad Mouthing Attack is adopted, since they may even be self-defeating.

C. Sybil Attack

As long as a system makes allowances for recommendations, a malicious peer may seek to subvert the reputation system by creating a large number of pseudonymous entities and using them to gain a disproportionately large influence. For example, if a party has suffered a significant loss of reputation, it might try to change identity to cut ties with the past and start from a fresh profile. This attack is referred to as the Sybil Attack [3]. In order to limit the scope of the Sybil Attack, we assume that trusted privacy CAs will only certify a small number of AIKs for each EK. In addition, to obtain a stronger reputation mechanism, it would help to penalize newcomers, thus discouraging identity changes, as shown by Zacharia et al. [29]. However, this would make it difficult to distinguish between good and bad newcomers, thus discouraging new users from accepting the system. Since user acceptance is a significant factor in the success of a new reputation system, FeelTrust does not apply this mechanism.

D. Tampering and Man in the Middle

Tampering and Man in the Middle (MiTM) are likely to be the most common attacks. The tampering attack consists of a malicious peer changing his/her own reputation, while a MiTM attack occurs
when a malicious peer tries to intercept the messages from a benevolent peer to the requestor and rewrite them with bad services, thus decreasing the reputation of the benevolent peer. Such a participant could even maliciously modify the recommendations given by an honest peer, in order to benefit his/her own interests [10]. Both of the presented attacks are addressed by the usage of TPM hardware. Indeed, the TPM is strongly protected against any remote attempt to compromise its functioning, and it enables an SSL-like mechanism to secure communications using TPM key pairs. Nevertheless, the TPM is not 100% secure against physical attacks. Indeed, physical access to a TPM chip, along with the right tools, can allow an attacker to compromise its secrets [11]. Considering that, we believe this is probably the weakest point of our system.

E. Denial of Service

Centralized systems are typically vulnerable to Denial of Service (DoS) attacks [25], while distributed calculation and dissemination algorithms are often less vulnerable if enough redundancy is employed, so that the misbehavior or loss of a few participants will not affect the operation of the system as a whole. The system we propose is fully decentralized, so this kind of attack is unlikely to be carried out.

F. Summary of Design Goals

We now summarize the requirements of our system. Some of them are strictly related to reputation systems and have already been discussed and motivated in [25]. FeelTrust requirements are:

• it must be lightweight, protect user anonymity, and be fully distributed;
• it must provide information that allows users to distinguish between trustworthy and untrustworthy users, and this information must be presented in a user-friendly fashion;
• users must not be able to fake their reputation values;
• both positive and negative ratings should be supported;
• it must ensure data integrity, in the sense that users must not be able to get rid of a negative reputation;
• it should not be possible to defame someone without proof;
• it must minimize bandwidth demand, evolve (social) trust as humans do, and achieve user acceptance.

IV. FEELTRUST ARCHITECTURAL DESIGN

In this section, we describe the FeelTrust architecture and then present the operational phases of the FeelTrust system, as shown in Figure 1 and discussed below. FeelTrust consists of four trusted components:

• Monitor Behavior - module that monitors and collects sensor data.
• Manage Feedbacks - module that manages feedbacks received from other users.
• Model Quantified-Trust - module that merges behavior and feedbacks to provide an overall level of trust.
• Secure Communication Module - module that lets users communicate.

Fig. 1. FeelTrust Architecture
A. Monitor Behavior

The FeelTrust application automatically infers behavioral patterns. Rather than tracking a single behavioral dimension, FeelTrust monitors multiple dimensions, thus obtaining a finer-grained view of the user's overall interests. A first challenge that FeelTrust tries to address concerns energy consumption. Monitoring users' behavior continuously may require multiple sensors to be triggered frequently, thus quickly depleting the battery. In order to save energy, FeelTrust does not exploit sensors like GPS or the accelerometer, because of their rather high power consumption, but relies only on collecting search history and installed applications. Despite these design choices, the modular approach of our proposal allows other sensors to be included, to better model behavioral features that can be captured only by specific sensors. Observe that our approach lets us model well both the context-specific and the multi-faceted properties mentioned in Section II.

B. Manage Feedbacks

Simply collecting users' interests may be insufficient to provide a reasonable trust management system. As a consequence, FeelTrust is equipped with a recommendation subsystem that works as follows. Each FeelTrust client maintains the total number of received votes, both up and down. The proposed model excludes the possibility of providing ratings with graded levels (e.g., bad, average, or good), since we believe that allowing ratings to be expressed with only two values is more robust. In other words, we avoid the ambiguity of quantifying the real value of intermediate ranks, which would impact the overall usability. Just evaluating the total score as either the difference or the average between positive and negative scores may not be sufficient. Indeed, it is required to balance the proportion of positive ratings with the uncertainty due to a small number of observations. To do this, we calculate the score as either the lower or the upper bound of the Wilson score confidence interval [28] for a Bernoulli parameter. This solution considers only positive and negative ratings, and defines the score as the lower/upper bound on the proportion of positive ratings as:

$$\mathrm{score} = \frac{\hat{p} + \frac{z_{\alpha/2}^{2}}{2n} \pm z_{\alpha/2}\sqrt{\frac{\hat{p}(1-\hat{p})}{n} + \frac{z_{\alpha/2}^{2}}{4n^{2}}}}{1 + \frac{z_{\alpha/2}^{2}}{n}}$$
where $\hat{p}$ is the observed fraction of positive ratings, $z_{\alpha/2}$ is the $(1-\alpha/2)$ quantile of the standard normal distribution, $\alpha$ is the significance level (i.e., $1-\alpha$ is the confidence level), $n$ is the total number of ratings, and plus/minus selects the upper/lower bound, respectively. Setting the confidence level at 95% (i.e., $\alpha = 0.05$), the results of the above formula can be interpreted as follows: with 95% probability, the real user reputation lies between the lower and the upper bound. As more votes are taken into consideration, the difference between these two bounds decreases, giving a better estimate of the user's trustability. Presenting two trust levels (lower and upper bound) for each peer may be confusing to users. We solve this problem by introducing an experimental threshold γ: if the difference between the upper and lower bounds is less than γ and the number of positive votes is greater than the number of negative ones, then the upper bound is used; otherwise, the lower bound is chosen. From our studies, we found that γ = 0.07 gives a good approximation. Let us stress that different applications built on our approach may define different confidence levels, depending on the context in which they are applied.
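As a concrete illustration, the following Python sketch implements the Wilson bounds and the γ-based selection rule exactly as described above. It is our own minimal rendering, not the FeelTrust source code, and the function names are hypothetical.

```python
import math

Z_95 = 1.959964  # z_{alpha/2} for a 95% confidence level (alpha = 0.05)
GAMMA = 0.07     # experimental threshold reported above

def wilson_bounds(pos, neg, z=Z_95):
    """Lower/upper Wilson bounds on the proportion of positive ratings."""
    n = pos + neg
    if n == 0:
        return 0.0, 1.0  # no ratings yet: maximal uncertainty
    p_hat = pos / n
    center = p_hat + z * z / (2 * n)
    margin = z * math.sqrt(p_hat * (1 - p_hat) / n + z * z / (4 * n * n))
    denom = 1 + z * z / n
    return (center - margin) / denom, (center + margin) / denom

def reputation_score(pos, neg, gamma=GAMMA):
    """Upper bound only when the interval is narrower than gamma and
    positives outnumber negatives; the cautious lower bound otherwise."""
    lower, upper = wilson_bounds(pos, neg)
    return upper if (upper - lower) < gamma and pos > neg else lower
```

For instance, `reputation_score(3, 0)` returns the lower bound (about 0.44), because three votes still leave the interval far wider than γ; this matches the cautious behavior discussed in Section VI.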
C. Model Quantified-Trust

FeelTrust makes users aware of the current trust level of the other users with whom they are trying to communicate. FeelTrust quantifies the collected information (as described in the previous sections) and presents it in two fashions: i) the sensed level, i.e., the trust level extracted from the monitored behavioral dimensions; ii) the recommendation level, i.e., the level constructed by the recommendation subsystem. In particular, each FeelTrust client offers users the overall trust level, presenting both the sensed and the recommendation level. Thus, a personalized estimate of a user's reputation can be used by other users to make a more informed decision on whether or not to accept an interaction.

D. Secure Communication Module

A high-level view of the Secure Communication Module is shown in Figure 2. First, the user who wants to start a communication selects the preferred topic and searches for online users (1). Then, FeelTrust presents to this user a list of all available users with their trust levels. After evaluating which user he/she deems the most trustable, the user asks FeelTrust to start a communication session (2). Finally, FeelTrust asks the selected user whether he/she is willing to accept the connection request; the requestor's trust level is shown to this user to enable a more conscious choice. If the request is accepted, the interaction can take place (3).

Fig. 2. Secure Communication Module: a high-level view
This remarkable ease of use, which makes FeelTrust valuable even to non-tech-savvy users, requires a fairly complex protocol, as detailed in the following. In order to make FeelTrust deployable on a wide range of devices, it supports two modalities: a TPM-based one and a TPM-less one. The former gives more security guarantees and ensures a fully distributed approach; users can choose the modality that offers the best trade-off between security and deployability of FeelTrust. The involved actors are: the Searcher Node (SN), the Target Node (TN), and the Trusted Authority (TA). The TA is needed in the TPM-less approach to guarantee that feedbacks have not been tampered with. The TPM-less modality uses public-key cryptography to thwart the forging of fake feedbacks; however, it assumes that the software can be tampered with, hence a user can potentially modify his/her profile (i.e., a user can claim to be an expert on any topic, because the tampered software can forge fake statistics). The main drawback of this approach is that it makes FeelTrust no longer completely distributed. The communication protocol of FeelTrust is based on two distinct phases: a Handshake Process, which establishes a connection between an SN and a TN, and a Feedback Assignment Process, which reliably manages feedbacks.

1) Handshake Process: The Handshake Process, in turn, is split into three phases: the Discovery phase, the Secure Channel phase, and the Profile Checking phase. In both cases, with or without TPM, during the Discovery phase the SN, which wants to start a communication, searches for available TNs using a discovery protocol (e.g., Bluetooth, SSDP [6], or JINI [26]). The Secure Channel phase depends on the availability of the TPM: if the TPM is enabled, the parties establish a secure communication channel using TLS for TPM [13]; otherwise, the secure channel between the SN and the TN is created using the standard TLS handshake protocol, thus relying on software-generated keys. This design ensures that all communications between the SN and the TN are secure and protected from eavesdropping, thus enhancing the privacy-preserving properties of FeelTrust. The Profile Checking phase is the one that actually establishes whether the TN is compatible with the profile query of the SN.
The SN requests the profile from the TN and checks whether it is compatible with the profile query required by the SN user. In addition, the TN sends its feedback score. The SN user can check the TN profile and its feedback score; if they satisfy his/her requirements, the SN sends a connection request to the TN, which can accept it. At this point, the real application-dependent interaction between the users starts.

If the TPM is enabled, it guarantees that both the FeelTrust software and the feedbacks have not been tampered with, so no additional checking is required. In this case, the Profile Checking phase is the following: first, the SN sends a HelloReq message specifying the topic previously chosen by the SN. Each available TN responds with a HelloResp message containing its reputation information. In particular, this message piggybacks both the sensed level for the received topic and the reputation, together with a timestamp. On the reception of all these packets, the user can choose the desired TN. Then a SynReq message is instantiated by the SN; this packet has the same format as the HelloResp. Finally, on receiving the SynReq message, the TN may respond with either a SynAck or a SynNack message, depending on whether he/she is willing to accept the request or not.

If the TPM is not enabled, FeelTrust requires a TA to be available, to check that feedback scores have not been maliciously modified. In particular, the feedback score of each user has to be signed by the TA, whose public key is locally available to all participating nodes; thus, the SN can check that a feedback is genuine by verifying the signature of the TA. However, this approach has a drawback: a malicious TN can report an old feedback that, for instance, has a higher score than the current real one. An SN can check whether this is the case by asking the TA for the current score of the TN; of course, this check is not possible if the TA is not available (e.g., due to patchy connectivity to the Internet). In the following we delve into finer details. First, the SN sends a HelloReq message specifying the topic previously chosen by the SN. Each available TN responds with a HelloResp message containing its reputation information, as in the previously described case. Notice that, in this case, the reputation given by feedbacks was previously signed by the TA. On the reception of all these packets, the user can choose the desired TN. Before sending the SynReq message, the SN needs to verify the received feedback. As a consequence, it sends a VerifyReq message to the TA, including the received feedback in the request. The TA verifies the coherence of the received feedback and sends back to the SN a VerifyResp message, which indicates the truthfulness of the feedback itself. In case of a positive VerifyResp, a SynReq message is instantiated by the SN; this packet has the same format as the HelloResp. Finally, on receiving the SynReq message, the TN verifies the SN reputation as done by the SN, and responds with either a SynAck or a SynNack message, depending on whether he/she is willing to accept the request or not.
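A compact way to see the TPM-less Profile Checking exchange is the Python sketch below. The message classes mirror the HelloReq/HelloResp names above; everything else (field names, the example values, and the HMAC standing in for the TA's public-key signature, chosen only to keep the sketch short and runnable) is our own hypothetical scaffolding, with the discovery and TLS layers abstracted away.

```python
import hashlib
import hmac
import time
from dataclasses import dataclass

TA_KEY = b"trusted-authority-secret"  # stands in for the TA's signing key

def ta_sign(score: float) -> bytes:
    return hmac.new(TA_KEY, repr(score).encode(), hashlib.sha256).digest()

def ta_verify(score: float, sig: bytes) -> bool:
    return hmac.compare_digest(ta_sign(score), sig)

@dataclass
class HelloReq:          # SN -> TN: topic chosen by the SN user
    topic: str

@dataclass
class HelloResp:         # TN -> SN: reputation info for that topic
    sensed_level: float      # level from the monitored behavioral dimensions
    feedback_score: float    # recommendation level, pre-signed by the TA
    ta_signature: bytes
    timestamp: float

# --- TPM-less Profile Checking phase, transport abstracted away ---
req = HelloReq("photography")
resp = HelloResp(sensed_level=0.8, feedback_score=0.72,
                 ta_signature=ta_sign(0.72), timestamp=time.time())

# VerifyReq/VerifyResp round trip: the SN asks the TA whether the reported
# feedback is genuine (and, if the TA is reachable, whether it is current).
assert ta_verify(resp.feedback_score, resp.ta_signature)
# If the SN user accepts, a SynReq (same format as HelloResp) follows,
# answered by SynAck or SynNack after the TN runs the symmetric check.
```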
2) Feedback Assignment Process: The Feedback Assignment Process also depends on the availability of the TPM chip. Indeed, when the TPM is enabled, the SN and the TN can simply send feedbacks to each other, since the TPM protects the application from being altered. Moreover, on receiving the HelloResp, the SN stores the received feedback together with the ID TN and the date, to avoid a ping-pong attack. In other words, it may happen that two users agree to vote each other positively, in order to obtain disproportionately high reputation levels. By saving all this information, each time a new feedback is available the system is able to distinguish among several copies of the same feedback; in this situation, the copy is simply discarded. Otherwise, if the TPM is not available, the Feedback Assignment Process involves the following steps: first, the SN produces a feedback, which is a tuple of the form (ID SN, ID TN, feedback). The feedback is encrypted with the public key of the TA, signed with the SN's private key, and then sent to the TN. In this way, the TN cannot distinguish between positive and negative feedbacks, and thus it is not able to keep only the positive ones. The TN then signs the feedback and registers it with the TA, which accepts and saves the feedback only if the ID TN and the TN's signature match. Finally, the TA signs the feedback and sends it to the TN, which will use it for the next interactions with other SNs.
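A minimal sketch of the TPM-less feedback tuple handling follows. The paper does not fix concrete primitives, so RSA-OAEP encryption toward the TA and RSA-PSS signatures (via the `cryptography` package) are our assumptions, and the TN's counter-signature step is omitted for brevity; identifiers and values are illustrative.

```python
import json
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes

OAEP = padding.OAEP(mgf=padding.MGF1(hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)
PSS = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                  salt_length=padding.PSS.MAX_LENGTH)

ta_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
sn_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

# SN side: build the (ID SN, ID TN, feedback) tuple, encrypt it for the TA
# (so the TN cannot tell positive from negative votes apart), and sign it.
tuple_bytes = json.dumps({"id_sn": "SN-42", "id_tn": "TN-7",
                          "feedback": -1}).encode()
ciphertext = ta_key.public_key().encrypt(tuple_bytes, OAEP)
sn_signature = sn_key.sign(ciphertext, PSS, hashes.SHA256())

# TA side (after the TN forwards and counter-signs): check the SN's
# signature, then decrypt and store the vote if the IDs are coherent.
sn_key.public_key().verify(sn_signature, ciphertext, PSS, hashes.SHA256())
recovered = json.loads(ta_key.decrypt(ciphertext, OAEP))
assert recovered["id_tn"] == "TN-7"
```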
V. PROOF OF CONCEPT

In this section, we first present a reference scenario, which will be used to exemplify and motivate our approach. Then we propose some trust assumptions, together with a proof-of-concept implementation.

A. Reference Scenario

In a pervasive world, whenever a change of context occurs (e.g., the user enters a different location), new possibilities of interaction become available. It is not realistic to assume that interactions always take place between known entities, or that a trust relationship has been preconfigured by an administrator between every pair of parties. However, in most real-world scenarios, the risk of interacting with untrusted entities must be avoided. In particular, the scenario we refer to is the one presented by Moloney and Ginzboorg [12]. In their work, they point out a gap in the current research on pervasive networks. A pervasive network is a network in which communications take place via short-range technologies. Given the unfeasibility in this context of both a centralized authority and stable Wi-Fi connections, [12] highlights the need for a distributed recommendation system to mitigate the intrinsic risk of interactions, hence fostering pervasive networks. We consider pervasive networks for several reasons: i) they are well suited to data sharing, because they are free to use and have higher data throughput rates than other networks (e.g., cellular networks); ii) offering trustworthiness in such a context is a real-world issue, not just research. Moloney and Ginzboorg also highlighted the necessity of some kind of automation in establishing trust relationships. FeelTrust can work accordingly by simply defining a certain threshold β: if the reputation of the user with whom communication should take place is greater than this threshold, the first connection can be carried out automatically.

B. Trust Assumptions

FeelTrust relies on the TPM for the authenticity of sensed data. An attack against the TPM may compromise either the private key of a TPM's EK or the private key of a TPM's AIK; thus, an attacker may be able to arbitrarily generate TPM quotes. Only if a privacy CA becomes compromised may an attacker be able to successfully masquerade as a large number of devices without compromising a large number of TPMs. By design, FeelTrust is built on the Android OS. As a consequence, FeelTrust inherits Android's security model, thus relying on Android's security mechanisms to thwart attacks against the platform. Stock Android firmware typically does not give users root access, although some users will re-flash their device to gain it. Even if attackers modify the firmware to allow users to execute code as root, FeelTrust assumes that this can be detected through the TPM. Let us stress that eliminating all possible software vulnerabilities is out of the scope of this paper. Sure enough, attacks that exploit specific implementation vulnerabilities of a secure platform such as Android are serious problems that are unlikely to disappear anytime soon. Other side-channel attacks that take advantage of physical access are also beyond the scope of this work.

C. Implementation

In this subsection, we describe our prototype implementation of FeelTrust for Nexus S smartphones and Android 4. To the best of our knowledge, the Nexus S does not include a TPM chip; thus, we emulated it by porting a popular open-source TPM emulator [21] to Android. The FeelTrust implementation is based on the architecture discussed in Section IV and follows the scenario presented above. The software components (Figure 3) installed on the smartphone include: i) the Sensing Module, which is responsible for collecting data, classification, and managing reputation levels; ii) the Graphical User Interface (GUI), which lets users access all FeelTrust features, including searching for available users, checking trust levels, and monitoring the handshake process; iii) the Communication Infrastructure, which lets users communicate. In particular, the Sensing Module can be broken down into the following components: Reputation Manager, Reputation Classifier, Feedback Manager, and Sensing Controller. The Reputation Manager is the main component of the Sensing Module. It manages both users' feedbacks and sensed data, which are provided by the Feedback Manager (FM) and the Reputation Classifier (RC), respectively. It acts as a kind of supervisor, in the sense that, every time new data become available, it properly invokes the FM and RC components, thus keeping the user's profile up to date. The Reputation Classifier computes the sensed trust level. It has two main functions. The first is to classify the data sensed by the Sensing Controller; it uses an internal taxonomy of possible interests and stores classification results for future computations. The second is to compare different users' profiles using previously computed interests. The Feedback Manager component simply computes feedbacks as described in Section IV and stores them in the Local Storage. The Sensing Controller component is responsible for orchestrating the underlying sensing components. It monitors sensed data and stores them until a new reputation check is made. To this end, this component periodically sends new available data to the Reputation Manager. In particular, these updates are sent either at regular time intervals or when a sufficiently large chunk of data is available. Another component of the FeelTrust implementation is the Communication Infrastructure. Its main components are the Handshake Manager (Handshake Request/Response Manager) and the Reputation Exchange, which together carry out the Handshake Process as described in Section IV.
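The division of labor inside the Sensing Module can be outlined as follows; these are hypothetical Python stand-ins for the structure described above, not the actual Android implementation.

```python
class ReputationClassifier:
    """Maps sensed data onto an internal taxonomy of interests."""
    def classify(self, chunk):
        # placeholder taxonomy lookup: observed keyword -> interest category
        taxonomy = {"camera": "photography", "gpx": "hiking"}
        return {taxonomy[w] for w in chunk if w in taxonomy}

class FeedbackManager:
    """Keeps the up/down vote counters consumed by the Wilson score."""
    def __init__(self):
        self.pos, self.neg = 0, 0
    def store(self, vote_is_positive):
        if vote_is_positive:
            self.pos += 1
        else:
            self.neg += 1

class ReputationManager:
    """Supervisor: invoked whenever new data or feedback becomes available."""
    def __init__(self):
        self.rc, self.fm = ReputationClassifier(), FeedbackManager()
        self.profile = set()
    def on_new_data(self, chunk):       # called by the Sensing Controller
        self.profile |= self.rc.classify(chunk)
    def on_new_feedback(self, positive):
        self.fm.store(positive)
```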
Fig. 3. FeelTrust components

Fig. 4. Summary of collected data
VI. EXPERIMENTAL RESULTS

To show the validity of the Wilson score compared with other naive, yet widely used, scores, we present in this section some experimental results. In particular, we collected representative real-world feedback scores from ebay.com. To better highlight the features of the Wilson score, we collected three meaningful types of feedback profiles: few/medium/many positive ratings with no or few negative ratings; a balanced number of positive and negative ratings; and a higher number of negative ratings with respect to the positive ones. For each of these types we collected both positive and negative values. Note that eBay also considers neutral ratings, which are mapped as no rating in our system. As shown in Figure 4, we ran our reputation system on the collected data. The first and most valuable observation about the results concerns the evolution of ratings through time and experience. This evidence can be found by observing both values in Figure 4. As an example, consider user U6: whatever
the number of positive feedbacks, if there are no negative feedbacks, eBay considers the user fully trustable. In contrast, our system takes a more cautious approach, declaring the user fully trustable only when he/she has a long, positive history (i.e., U3). This is a favourable feature: indeed, the probability of a user misbehaving decreases as the number of positive interactions experienced by other users in the network grows. In a nutshell, as the number of positive ratings increases (i.e., users U2, U4, U7, and U8), the user obtains a more stable positive valuation, and the Wilson score and the eBay vote become aligned. In general, with respect to eBay, our approach underestimates user trustability. Consider, for example, user U5, which is scored as 33% trustable by eBay, compared with only 12% given by Wilson. This is also a positive aspect, since the uncertainty is very high in situations like this one, in which the number of negative scores is greater than that of positive scores and the system has collected only a few ratings. As in the previously analyzed case, as more scores are collected, the Wilson and eBay scores tend to become aligned. We were not able to find users with a very high number of negative ratings and few positive votes. This is an uncommon situation on eBay, where a user with a very low profile is incentivized to start with a new one.
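Feeding illustrative counts into the earlier `wilson_bounds` sketch reproduces the flavor of the U5 case. The counts below are our own choice, picked only because they yield the 33% vs. 12% gap reported above; they are not U5's actual numbers.

```python
# 3 positive and 6 negative ratings: 33% positive in a naive average,
# but a far more cautious 12% Wilson lower bound on so little evidence.
lower, upper = wilson_bounds(3, 6)
print(f"naive: {3 / 9:.0%}  wilson: [{lower:.0%}, {upper:.0%}]")
# -> naive: 33%  wilson: [12%, 65%]
```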
VII. RELATED WORK

Many works in the literature discuss the decentralized trust problem and propose solutions. Abdul-Rahman and Hailes [1] were the first to propose the use of a recommendation system to manage context-dependent and subjective trust. Their work suffers from the lack of a process for trust evolution. Subsequent research has shown that the Bayesian approach provides a statistically sound basis for computing reputation scores. Indeed, several works in the literature propose binomial Bayesian reputation systems [15], [8], which allow ratings to be expressed with two values, as either positive or negative feedback. In order to provide a wider range of ratings to users, Jøsang and Haller propose in [7] a different approach, based on the Dirichlet distribution. More explicitly, users are allowed to rate other peers with any level from a set of predefined rating levels.

Concerning the correlation between trust and similarity, some recommendation systems based on trust have already been proposed. In particular, Olsson proposes in his work [16] an approach for decentralized social filtering. It consists of several types of filtering (content-based, collaborative, and social) that, when merged, express the overall trustability level. As mentioned by Olsson, the main problem of this approach is how the system can protect users' privacy. In another approach incorporating trust models into online recommender systems, recommendations are synthesized based on feedback from trusted peers rather than from the most similar ones [14]. Finally, Ziegler and Golbeck [31] discuss the dependencies between trust and user similarity, and provide empirical evidence to support their thesis. In contrast, our work spans these topics, involving both decentralized trust based on a probabilistic approach and the correlation between trust and similarity. In particular, to the best of our knowledge, FeelTrust is a novel work in the field of pervasive computing that exploits both sensing and a reputation system.

VIII. CONCLUSION AND FUTURE WORK

We presented the design and implementation of FeelTrust, an application for smartphones that automatically monitors a user's overall trustability level. To this end, FeelTrust classifies users as trusted or not depending on their interests and pairs this result with feedbacks from an embedded reputation system. The FeelTrust implementation demonstrates the feasibility of security tasks using off-the-shelf smartphones. As future work, we plan to conduct a large-scale deployment of FeelTrust, both to validate our proposal and to better understand how different users, with different interests, can benefit from this new approach. Moreover, particular attention will be paid to how FeelTrust can be easily integrated into context-specific applications. Finally, we plan to evaluate which recommendation subsystem is the most suitable in terms of performance, model fit to the data, and resistance against the common attacks to which recommendation systems are usually vulnerable.

REFERENCES

[1] Abdul-Rahman, A., Hailes, S., "Using Recommendations for Managing Trust in Distributed Systems". In Proceedings of the IEEE Malaysia International Conference on Communication, (1997).
[2] Dellarocas, C., "Mechanisms for coping with unfair ratings and discriminatory behavior in online reputation reporting systems". In Proceedings of ICIS, (2000).
[3] Douceur, J., "The Sybil Attack". In Proc. International Workshop on Peer-to-Peer Systems, (2002), pp. 251–260.
[4] Ficco, M., D'Arienzo, M., D'Angelo, G., "A bluetooth infrastructure for automatic services access in ubiquitous and nomadic computing environments". In Albert Y. Zomaya and Sherali Zeadally, editors, Proceedings of the 5th ACM International Workshop on Mobility Management and Wireless Access, pp. 17–24, ACM, 2007.
[5] Gilbert, P., Cox, L. P., Jung, J., Wetherall, D., "Toward Trustworthy Mobile Sensing". In HotMobile, (2010), pp. 31–36.
[6] Goland, Y. Y., Cai, T., Leach, P., Gu, Y., "Simple Service Discovery Protocol/1.0", (1999).
[7] Jøsang, A., Haller, J., "Dirichlet reputation systems". In Proceedings of the 2nd International Conference on Availability, Reliability and Security (ARES), (2007), pp. 112–119.
[8] Jøsang, A., Ismail, R., "The Beta Reputation System". In Proceedings of the 15th Bled Electronic Commerce Conference, June 2002.
[9] Langheinrich, M., "A Privacy Awareness System for Ubiquitous Computing Environments". In Lecture Notes in Computer Science, Vol. 2498, Springer-Verlag, (2002), pp. 315–320.
[10] Mármol, F. G., Pérez, G. M., "Security threats scenarios in trust and reputation models for distributed systems". Elsevier Computers & Security, 28 (7), (2009), pp. 545–556.
[11] MITRE, Open Vulnerability and Assessment Language, Trusted Platform Module to Enhance OVAL Driven Assessments. http://oval.mitre.org/language/about/docs/OVAL_and_TPM_White_Paper.pdf
[12] Moloney, S., Ginzboorg, P., "Security for Interactions in Pervasive Networks: Applicability of Recommendation Systems". Lecture Notes in Computer Science, Vol. 3313, (2004), pp. 95–106.
[13] Latze, C., Ultes-Nitsche, U., Baumgartner, F., "Transport Layer Security (TLS) Extensions for the Trusted Platform Module (TPM)", (2010).
[14] Montaner, M., López, B., de la Rosa, J. L., "Opinion based filtering through trust". In Sascha Ossowski, Onn Shehory (Eds.), Proceedings of the Sixth International Workshop on Cooperative Information Agents, LNAI, Vol. 2446, Springer-Verlag, (2002), pp. 164–178.
[15] Mui, L., Mohtashemi, M., Ang, C., Szolovits, P., Halberstadt, A., "Ratings in Distributed Systems: A Bayesian Approach". In Proceedings of the 11th Workshop on Information Technologies and Systems, New Orleans, Louisiana, USA, December 2001.
[16] Olsson, T., "Decentralized social filtering based on trust". In Working Notes of the AAAI-98 Recommender Systems Workshop, (1998).
[17] Palmieri, F., Fiore, U., Castiglione, A., "Automatic security assessment for next generation wireless mobile networks". IOS Press, Mobile Information Systems, 7 (3), (2011), pp. 217–239.
[18] Palmieri, F., Fiore, U., "Audit-Based Access Control in Nomadic Wireless Environments". In Lecture Notes in Computer Science, Vol. 3982, Springer-Verlag, (2006), pp. 537–545.
[19] Sabater, J., Sierra, C., "Regret: a reputation model for gregarious societies". In 4th Workshop on Deception, Fraud and Trust in Agent Societies, 2001.
[20] Saha, D., Mukherjee, A., "Pervasive computing: a paradigm for the 21st century". Computer, 36 (3), (2003), pp. 25–31.
[21] Strasser, M., Stamer, H., The TPM Emulator. http://tpm-emulator.berlios.de/
[22] Tang, Y., Hidenori, N., Urano, Y., "User authentication on smart phones using a data mining method". In Information Society (i-Society), 2010 International Conference on, pp. 173–178.
[23] Trusted Computing Group - Trusted Platform Module - Specifications. http://www.trustedcomputinggroup.org/developers/trusted_platform_module/specifications/
[24] Trusted Platform Module. http://www.trustedcomputinggroup.org/developers/trusted_platform_module
[25] Voss, M., "Privacy Preserving Reputation Systems". In Proceedings of the 19th IFIP International Information Security Conference (SEC2004), Toulouse, France, Kluwer Academic Publishers, 2004.
[26] Waldo, J., "The Jini Architecture for Network-centric Computing". Communications of the ACM, 42 (7), pp. 76–82, July 1999.
[27] Wang, Y., Vassileva, J., "Trust and Reputation Model in Peer-to-Peer Networks". In Proc. of the Third IEEE International Conference on Peer-to-Peer Computing, September 1–3, 2003, Linköping, Sweden.
[28] Wilson, E. B., "Probable inference, the law of succession, and statistical inference". Journal of the American Statistical Association, 22 (1927), pp. 209–212.
[29] Zacharia, G., Moukas, A., Maes, P., "Collaborative Reputation Mechanisms in Electronic Marketplaces". In Proceedings of the 32nd Hawaii International Conference on System Sciences, IEEE, 1999.
[30] Zhang, X., Acıiçmez, O., Seifert, J.-P., "Building Efficient Integrity Measurement and Attestation for Mobile Phone Platforms". In Lecture Notes of the Institute for Computer Sciences, Social Informatics and Telecommunications Engineering, Vol. 17, (2009), pp. 71–82.
[31] Ziegler, C.-N., Golbeck, J., "Investigating interactions of trust and interest similarity". Decision Support Systems, 43 (2), (2007), pp. 460–475.