Efficient Distributed Event Processing using Subscription Summaries in Large Scale Publish/Subscribe Systems

Peter Triantafillou
Department of Computer Engineering and Informatics
University of Patras
Rio Patras, 26500, Greece
[email protected]

Andreas Economides
Department of Electronic and Computer Engineering
Technical University of Crete
Chania, 73100, Greece
[email protected]

Abstract

A key issue when designing and implementing large-scale publish/subscribe systems is how to efficiently propagate user subscriptions among the brokers of the system. In this paper we contribute the notion of broker subscription summaries and accompanying distributed, scalable algorithms for subscription summary propagation and for event filtering and routing. In addition, we present a performance analysis quantifying the associated benefits. Our results show that the proposed mechanism (i) introduces significant performance gains in terms of saved network bandwidth (up to orders of magnitude), required storage space, and processing capacity at each broker, and (ii) is highly scalable, with the bandwidth required to propagate subscriptions increasing only slightly even at very large scales.

Keywords: event-based systems, notification services, publish/subscribe, subscription summaries, distributed event processing.

Technical areas: distributed data management.

1. Introduction

Traditionally, users relied on search facilities to locate distributed information of interest. The publish/subscribe model has recently been receiving increasing attention as a means of developing large-scale information retrieval and dissemination systems that enable personalized data delivery. In this model a user declares his interests and receives the appropriate information/events as the events matching these interests take place. Such a system gives users the ability to receive information dynamically, at the time it becomes available. Publish/Subscribe (pub/sub) systems therefore connect producers (information providers) with consumers (interested users), delivering personalized information according to the consumers' interests.

2. Problem Definition and Background

2.1 System Architecture

A pub/sub system comprises three main elements: a consumer, who submits his interests (subscriptions) to the system; a provider, who publishes events; and the pub/sub infrastructure, which is responsible for (i) matching each event against all related subscriptions and (ii) delivering the matching events to the corresponding consumers. The architecture of a basic pub/sub system consists of:

1. One or more Event Sources (ES) / Producers. An Event Source produces events in response to changes in a real-world variable that it monitors.

2. An Event Brokering System (EBS). It consists of one or more brokers. Events are published to the Event Brokering System, which matches them against the set of subscriptions submitted by users (consumers) of the system.

3. One or more Event Displayers (ED) / Consumers. If a user's subscription matches the event, the event is forwarded to the Event Displayer for that user. The Event Displayer is responsible for alerting the user.

A detailed presentation of this model can be found in [13]. A minimal, illustrative sketch of how these three roles interact is given below.
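The following sketch is ours and is not taken from any of the systems cited in this paper; a single in-process broker stands in for the Event Brokering System, the matching predicate is left abstract, and all class and variable names are illustrative assumptions.

```python
class EventDisplayer:
    """Consumer-side endpoint (ED): alerts the user when a matching event arrives."""
    def __init__(self, user):
        self.user = user

    def deliver(self, event):
        print(f"notify {self.user}: {event}")

class Broker:
    """A single broker standing in for the Event Brokering System (EBS)."""
    def __init__(self):
        self.subscriptions = []          # list of (predicate, displayer) pairs

    def subscribe(self, predicate, displayer):
        self.subscriptions.append((predicate, displayer))

    def publish(self, event):
        # Match the event against every subscription and forward it only
        # to the Event Displayers of interested users.
        for predicate, displayer in self.subscriptions:
            if predicate(event):
                displayer.deliver(event)

class EventSource:
    """Producer (ES): publishes observed events to the EBS."""
    def __init__(self, broker):
        self.broker = broker

    def emit(self, event):
        self.broker.publish(event)

broker = Broker()
broker.subscribe(lambda e: e.get("symbol") == "OTE" and e.get("price", 0.0) < 8.70,
                 EventDisplayer("alice"))
EventSource(broker).emit({"symbol": "OTE", "price": 8.40})   # alice is notified
```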


2.2 Related Work

The first pub/sub systems were based on the concept of groups (also known as channel-based systems) or subjects (a.k.a. topic-based systems). Channel-based systems [14, 21] categorize events into pre-defined groups; users subscribe to the groups of interest and receive all events published to these groups. In subject-based systems [8, 16] each event is enhanced with a tag describing its subject. Subscribers can declare their interests in event subjects flexibly using string patterns, e.g. all events with a subject starting with "stock". To overcome the limitations on subscription declarations, the content-based model [3, 6] has recently emerged, in which subscribers use flexible query languages to declare their interests with respect to the contents of the events. For example, such a query could be "give me the price of stock A when the price of stock B is less than X". A slightly different model is the content-based with patterns model [7, 15], which offers extra functionality for expressing user interests. The implementation of these models can become complicated, especially during the matching of events with subscriptions. A large body of research and commercial work has been developed in this area. Examples of this research activity are Gryphon [1, 2], Siena [7], Jedi [8], Le Subscribe [9], Ready [12], and Elvin [19]. Corresponding commercial implementations are the CORBA Event Service [14], the CORBA Notification Service [15], iBus [20], Jini [21], Tibco [22], and Vitria [23]. We now provide a brief introduction to Bloom filters, upon which our mechanism is based.

2.3 Bloom Filters

In [4] Bloom introduced a method according to which a vector V containing m bits, initially all set to 0, is used to compact the information in a set A = {α1, α2, ..., αn} by hashing each value into V. In general, k independent hash functions h1, h2, ..., hk are applied to each element of A, producing k values, each ranging from 1 to m, and the corresponding bits of vector V are set. Obviously, a specific bit can be set to 1 many times. MD5 [17] can be used to produce the values for the hash functions. To check whether an element b belongs to set A, the same k hash functions are applied to b and the bits of V at positions h1(b), h2(b), ..., hk(b) are checked. If at least one of these bits is 0, then b does not belong to A. Otherwise, it is conjectured that b belongs to A, although this may be wrong (a "false positive"). By tuning k and m we control the probability of false positives, which is given by P_FP = (1 - e^(-kn/m))^k, where n is the number of stored values, m is the size of the bitmap, and k is the number of hash functions.

[Figure 1: A Bloom filter with 4 hash functions. An element a is hashed by h1, h2, h3, h4 to positions p1, p2, p3, p4 of the m-bit vector V, and the corresponding bits are set to 1.]
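The membership test just described can be made concrete with a short sketch. The following is a minimal, illustrative Bloom filter in Python: the MD5-derived hash positions follow the suggestion above, while the class name, the salting scheme, and the parameter values are our own assumptions, not part of any system described in this paper.

```python
import hashlib
import math

class BloomFilter:
    """Minimal Bloom filter sketch: an m-bit vector probed by k MD5-derived hash positions."""

    def __init__(self, m, k):
        self.m = m                  # number of bits in the vector V
        self.k = k                  # number of hash functions
        self.bits = [0] * m         # bit vector, initially all 0

    def _positions(self, item):
        # Derive k positions in [0, m) from MD5 digests of the item, salted with
        # the hash-function index (a common way to obtain k "independent"
        # functions from a single digest algorithm).
        for i in range(self.k):
            digest = hashlib.md5(f"{i}:{item}".encode()).hexdigest()
            yield int(digest, 16) % self.m

    def add(self, item):
        for p in self._positions(item):
            self.bits[p] = 1

    def __contains__(self, item):
        # If any probed bit is 0, the item was definitely never added;
        # otherwise it is probably present (false positives are possible).
        return all(self.bits[p] for p in self._positions(item))

def false_positive_rate(n, m, k):
    """P_FP = (1 - e^(-kn/m))^k for n stored values."""
    return (1.0 - math.exp(-k * n / m)) ** k

if __name__ == "__main__":
    bf = BloomFilter(m=1024, k=4)
    for name in ["exchange", "symbol", "price"]:
        bf.add(name)
    print("price" in bf)                        # True
    print("volume" in bf)                       # False (or, rarely, a false positive)
    print(false_positive_rate(n=3, m=1024, k=4))
```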


In our approach, we use Bloom filters to summarize the names of the subscription attributes and the values of string attributes. Changes in the set of subscriptions must be supported (because existing subscriptions can be updated or deleted). This can be done by keeping, for each location l in the bit vector, a count c(l) of the number of times that the bit has been set to 1. Initially all counts are set to 0. For every element a that is inserted or deleted, the counts c(h1(a)), c(h2(a)), ..., c(hk(a)) are incremented or decremented accordingly. Bloom filters summarize large sets of information with little storage and a controllable number of false positives; in fact, Bloom filters have been employed for summarizing web cache contents [10]. The size of the bitmap implementing the filter is typically equal to the product of n with a value called the load factor, used to decrease the probability of false positives.
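As an illustration of the counter-based deletion scheme just described, the following sketch extends the earlier Bloom filter with per-position counters. Again, the names are ours and this is only a sketch of the idea, not the paper's implementation.

```python
import hashlib

class CountingBloomFilter:
    """Bloom filter with a counter c(l) per bit position, so that elements
    can be removed when subscriptions are updated or deleted."""

    def __init__(self, m, k):
        self.m = m
        self.k = k
        self.counts = [0] * m       # c(l) for every location l, initially 0

    def _positions(self, item):
        for i in range(self.k):
            digest = hashlib.md5(f"{i}:{item}".encode()).hexdigest()
            yield int(digest, 16) % self.m

    def add(self, item):
        for p in self._positions(item):
            self.counts[p] += 1     # increment c(h_i(item))

    def remove(self, item):
        # Assumes the caller only removes items that were previously added.
        for p in self._positions(item):
            self.counts[p] -= 1     # decrement c(h_i(item))

    def __contains__(self, item):
        # A position counts as "set" whenever its counter is non-zero.
        return all(self.counts[p] > 0 for p in self._positions(item))
```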


2.4 The Problem, Motivations, and Contributions

A key problem when designing and implementing large-scale publish/subscribe systems is the efficient propagation of subscriptions among the brokers of the system. Brokers require this information in order to forward incoming events only to interested users, filtering out unrelated events, which saves significant overheads (i.e., network bandwidth and processing time). The key motivation for our work is that, in order for a publish/subscribe system to scale to very large numbers of brokers, events, and subscriptions, it is imperative to develop data structures and algorithms that make subscription propagation and event matching/filtering significantly more efficient.

We have presented the notion of per-broker subscription summaries, a mechanism for compacting subscription information. The subscription summaries are partly based on Bloom filters [4]. Bloom filters have been straightforwardly applied for similar purposes, storing either string or numeric values and testing whether a given value matches a value stored in the filter (e.g. [10]). However, it is not straightforward to see how operators other than equality can be supported using Bloom filters, which makes their application in publish/subscribe systems a challenge. Our summarization mechanism supports event/subscription schemata that are rich with respect to the attribute types they include (i.e., numeric types, ranges, strings) and powerful with respect to the operators on these attributes (i.e., <, >, =, prefix, suffix, string containment, etc.). We also developed the accompanying event matching algorithms for subscription summaries.

We contribute the notion of multi-broker subscription summaries and develop distributed algorithms for propagating multi-broker summaries among brokers and for filtering and routing events to the interested brokers. In addition, we contribute extensions that allow for the dynamic fine-tuning of the size of the summaries, enabling adaptable and even higher performance. Finally, we present a performance analysis of the above contributions. As a result of the contributions in this paper, the performance of a pub/sub system is greatly improved: our performance study measured orders-of-magnitude improvements in network bandwidth requirements, as well as considerably better performance in terms of required storage space and processing capacity at each broker (as exemplified by the complexity analysis for the event-subscription matching algorithm).

3. Per-broker Subscription Summaries

In this section we summarize the basic set of data structures for representing the summarized subscriptions received at a broker. It is this compacted, summarized, per-broker subscription information that is propagated to the other brokers. The latter brokers use it for filtering incoming events and forwarding them only to brokers that have users with subscriptions for these events. We also describe the algorithms that process incoming events and filter them at each broker. (A more detailed presentation of the data structures, along with the matching algorithm, can be found in [18].)

3.1 Event and Subscription Types

Our approach borrows the event and subscription schemata developed by Siena [7].

Event Schema. The event schema of this model is an untyped set of typed attributes. Each attribute consists of a type, a name and a value. The type of an attribute belongs to a predefined set of primitive data types commonly found in most programming languages. The attribute's name is a simple string, while the value can be in any range defined by the corresponding type. The whole type-name-value structure for all attributes constitutes the event itself.


Subscription Schema. The data structures compact the subscription information, and the associated algorithms allow for expressing a rich set of subscriptions, containing all interesting subscription-attribute data types (such as integers, strings, etc.) and all interesting operators (=, <, >, ranges, prefix, suffix, containment, etc.). All the attribute constraints of a subscription are interpreted conjunctively: an event matches a subscription if and only if all of the subscription's attribute constraints are satisfied. The same subscription can have two or more constraints on the same attribute; in this case all of them must be satisfied for a successful match. In general, an event can have more attributes than those mentioned in the subscription. Figure 2 shows an example of an event and a matching subscription; an illustrative matching sketch follows the figure.

Event:
  Type      Name      Value
  string    exchange  = NYSE
  string    symbol    = OTE
  date      when      = Jul 1 12:05:25 EET 2002
  float     price     = 8.40
  integer   volume    = 132700
  float     high      = 8.80
  float     low       = 8.22

Subscription:
  Type      Name      Value
  string    exchange  N*SE
  string    symbol    = OTE
  float     price     < 8.70
  float     price     > 8.30

Figure 2: An event and a subscription example. The event satisfies the subscription.
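To make the conjunctive matching semantics concrete, here is a small, illustrative sketch of the check applied to the event and subscription of Figure 2. The representation (tuples of attribute, operator, operand) and the helper names are ours, chosen only for illustration; the paper's actual matching algorithm operates on the subscription summaries described next.

```python
import fnmatch

# An event: attribute name -> value (attribute types elided for brevity).
event = {
    "exchange": "NYSE",
    "symbol": "OTE",
    "when": "Jul 1 12:05:25 EET 2002",
    "price": 8.40,
    "volume": 132700,
    "high": 8.80,
    "low": 8.22,
}

# A subscription: a list of (attribute, operator, operand) constraints.
# All constraints are interpreted conjunctively; the same attribute may
# appear more than once (e.g. price < 8.70 AND price > 8.30).
subscription = [
    ("exchange", "pattern", "N*SE"),
    ("symbol", "=", "OTE"),
    ("price", "<", 8.70),
    ("price", ">", 8.30),
]

def satisfies(value, op, operand):
    if op == "=":
        return value == operand
    if op == "<":
        return value < operand
    if op == ">":
        return value > operand
    if op == "pattern":                 # wildcard string match, e.g. "N*SE"
        return fnmatch.fnmatch(str(value), operand)
    raise ValueError(f"unsupported operator: {op}")

def matches(event, subscription):
    # An event matches iff every constraint is satisfied; the event may
    # carry extra attributes that the subscription does not mention.
    return all(
        attr in event and satisfies(event[attr], op, operand)
        for attr, op, operand in subscription
    )

print(matches(event, subscription))     # True for the Figure 2 example
```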

3.2 Data Structures and Operators

Per-broker subscription summaries consist of four data structures, which keep the compacted, summarized subscription information. We assume that: (i) a named attribute cannot have two different data types; (ii) the number of attributes supported in the whole system is predefined, as is the specification (name and type) of these attributes; (iii) the set of supported attributes is ordered and known to each broker; (iv) string attributes can have at most one "*" operator.

1. Subscription Attribute Summary (SAS). The SAS holds information about all the attribute names that appear in at least one subscription received by a broker. It is a Bloom filter whose size is the number of attributes supported by the system (nt) multiplied by a load factor lf (lf > 1). The SAS summarizes information about the attributes of interest for a specific broker.


The structure looks like Figure 1: in the example of Figure 2, each of the subscription's attribute names plays the role of element 'a' and is hashed to different positions of the bit vector.

2. Attribute Association List (AAL). AALs store information about the attributes that are jointly contained in some subscription. An AAL is created for each distinct first attribute of all the subscriptions received by a broker. It is implemented as an array of bits (initially all 0) with a constant number of columns (nt) and a variable number of rows (ndsf). Columns represent the ordered set of supported attributes (one column per attribute); the rows represent the unique sets of the other attributes that follow the specific attribute. (An illustrative sketch of the SAS and AAL appears after Figure 3.)

AAL for the attribute "exchange". The columns correspond to the ordered attribute ids of the system (1 = exchange, 2 = symbol, 3 = price, 4 = volume); the row shown is the one produced by the subscription of Figure 2:

    attribute id:               1   2   3   4
    subscription of Figure 2:   1   1   1   0

Figure 3: An Attribute Association List example.
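As an illustrative sketch (ours, not the authors' implementation), the SAS and AAL described above might be represented as follows. The class names and the fixed global attribute ordering are assumptions; in the paper the SAS is a Bloom filter of nt * lf bits, whereas a plain set is used here only to keep the sketch short and self-contained.

```python
# Attribute ordering is global and known to every broker (assumption (iii) above).
SUPPORTED_ATTRIBUTES = ["exchange", "symbol", "price", "volume"]
NT = len(SUPPORTED_ATTRIBUTES)

class PerBrokerSummary:
    def __init__(self):
        # SAS: in the paper, a Bloom filter of nt * lf bits over attribute names;
        # a plain set stands in for it here.
        self.sas = set()
        # AALs: for each distinct first attribute, the set of distinct bit-rows
        # (one bit per supported attribute) describing which attributes co-occur.
        self.aal = {}

    def add_subscription(self, attribute_names):
        """attribute_names: the ordered attribute names used by one subscription."""
        present = set(attribute_names)
        self.sas.update(present)
        first = attribute_names[0]
        row = tuple(1 if a in present else 0 for a in SUPPORTED_ATTRIBUTES)
        self.aal.setdefault(first, set()).add(row)

summary = PerBrokerSummary()
summary.add_subscription(["exchange", "symbol", "price"])   # the subscription of Figure 2
print(summary.aal["exchange"])                              # {(1, 1, 1, 0)}, as in Figure 3
```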

3. Arithmetic Attribute Constraint Summary (AACS). The AACS (Figure 4) holds information about the constraints on each different arithmetic attribute of a subscription. The AACS is needed because Bloom filters cannot capture the meaning of operators other than equality on the stored arithmetic values. An AACS consists of two arrays. The first (AACSSR) is an array with two columns and a variable number (ndsr) of rows; each row represents one of the non-overlapping sub-ranges of values specified in subscriptions for the specific attribute. The second array (AACSE) is used when an arithmetic constraint in a subscription has an equality operator for a value that is not in the existing sub-ranges; it has a single column and a variable number of rows (nde), with the same meaning as in the first array. (An illustrative sketch of the AACS sub-range idea appears at the end of this section.)


4. String Attribute Constraint Summary (SACS). The SACS holds information about the constraints on subscriptions' string attributes. For each different string attribute that appears in at least one subscription, a broker implements a SACS structure using three bit vectors, SACSL, SACSR, and SACSX, as Bloom filters. The size (in bits) of each of these three vectors is equal to the number of different values (ndv) that the specific string attribute can take, multiplied by a load factor lf. The reason for requiring three bit vectors is to ensure accurate matching even when the subscriptions' constraints for string attributes may include all operators on strings (prefix ">*", suffix "*
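The paper's AACS maintains non-overlapping sub-ranges plus an equality list per arithmetic attribute. The following is only a rough, illustrative sketch of that idea under our own simplifying assumptions: closed intervals (open/closed bounds elided), a naive merge of overlapping sub-ranges, and names of our choosing.

```python
import bisect

class ArithmeticConstraintSummary:
    """Rough sketch of an AACS-like structure for one arithmetic attribute:
    a list of non-overlapping sub-ranges (AACSSR) plus a list of equality
    values that fall outside every sub-range (AACSE)."""

    def __init__(self):
        self.sub_ranges = []     # sorted, disjoint (low, high) pairs
        self.equalities = []     # values constrained with '=' outside all sub-ranges

    def add_range(self, low, high):
        # Insert a new sub-range, merging it with any overlapping ones so
        # that the stored sub-ranges stay non-overlapping.
        merged = [low, high]
        kept = []
        for lo, hi in self.sub_ranges:
            if hi < merged[0] or lo > merged[1]:
                kept.append((lo, hi))
            else:
                merged[0] = min(merged[0], lo)
                merged[1] = max(merged[1], hi)
        kept.append(tuple(merged))
        self.sub_ranges = sorted(kept)

    def add_equality(self, value):
        if not self.covers(value):
            bisect.insort(self.equalities, value)

    def covers(self, value):
        # True if some subscription constraint on this attribute could match the value.
        return any(lo <= value <= hi for lo, hi in self.sub_ranges) \
            or value in self.equalities

acs = ArithmeticConstraintSummary()
acs.add_range(8.30, 8.70)        # price > 8.30 and price < 8.70 from Figure 2
print(acs.covers(8.40))          # True: an event with price = 8.40 is of interest
print(acs.covers(9.00))          # False
```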