7th International Symposium on High-Level Parallel Programming and Applications (HLPP 2014) Amsterdam, Netherlands July 3-4, 2014
Pool evolution: a domain specific parallel pattern

Marco Aldinucci · Sonia Campa · Marco Danelutto · Peter Kilpatrick · Massimo Torquati
Abstract We introduce a new parallel pattern derived from a specific application domain and show how it turns out to have application beyond its domain of origin. The pool evolution pattern models the parallel evolution of a population subject to mutations and evolving in such a way that a given fitness function is optimised. The pattern has been demonstrated to be suitable for capturing and modelling the parallel patterns underpinning various evolutionary algorithms, as well as other parallel patterns typical of symbolic computation. In this paper we introduce the pattern, developed in the framework of the ParaPhrase EU-funded FP7 project, discuss its implementation on modern multi/many core architectures, and finally present experimental results obtained with FastFlow and Erlang implementations to assess its feasibility and scalability.

Keywords Parallel design patterns · algorithmic skeletons · multi/many core architectures · evolutionary computing · FastFlow
M. Aldinucci, Dept. of Computer Science, Univ. of Torino. E-mail: [email protected]
S. Campa, M. Danelutto, M. Torquati, Dept. of Computer Science, Univ. of Pisa. E-mail: {campa,marcod,torquati}@di.unipi.it
P. Kilpatrick, Dept. of Computer Science, Queen's Univ. Belfast. E-mail: [email protected]

1 Introduction

Design patterns were originally proposed as a tool to support the development of sequential object oriented software [12] and have proven to be a very effective programming methodology, greatly improving the time-to-deploy and maintainability of complex applications.
Later, the pattern concept migrated to the parallel programming world, where it has been seen to be a realistic instrument with which to attack the programmability issue of modern parallel architectures [4]. According to Mattson, Sanders and Massingill [14]:

  . . . a design pattern describes a good solution to a recurring problem in a particular context. The pattern follows a prescribed format that includes the pattern name, a description of the context, the forces (goals and constraints), and the solution.

In their book, the authors identify a number of patterns, organised in four design spaces: the "finding concurrency", "algorithm structure", "support structure" and "implementation mechanisms" spaces. Each space includes several patterns aimed at capturing and modelling parallelism exploitation features typical of the abstraction level represented in that design space. Proceeding from the top down, the finding concurrency space hosts patterns identifying the available concurrency; the algorithm structure space hosts patterns modelling different kinds of algorithm; and the support structure and implementation mechanisms spaces provide patterns modelling the typical implementation mechanisms (high and low level) used to construct parallel computations.

Design patterns, as well as parallel design patterns, are described by suitable sections of text, and programmers wishing to use the patterns to implement their applications can follow the pattern recipe, but they must write all the code needed to implement the patterns on their own and, very often, from scratch.

In the ParaPhrase project [16], we devised a slightly different parallel application development methodology. Parallel design patterns are used to define the correct parallel structure of an application. Then, the actual implementation of the "patterned" application uses a library of composable algorithmic skeletons, as suggested by M. Cole, who, in his "skeleton manifesto" [7], observes how patterns may be used to abstract computation and interaction structures in parallel applications:

  Skeletal programming proposes that such patterns be abstracted and provided as a programmer's toolkit, with specifications which transcend architectural variations but implementations which recognise these to enhance performance.

Thus within ParaPhrase we use the algorithmic skeletons provided by the FastFlow programming framework [3, 8, 11], and by the skel Erlang skeleton library [5, 10], to implement, alone or in suitable composition, the parallel patterns deemed appropriate by the application programmer.

The parallel design patterns identified in Mattson's book are quite generic. They include general patterns such as divide&conquer (algorithm space) and master/slave (implementation structure space), to mention just two well-known examples. Parallel patterns identified by other authors [15] are also generic/general purpose. However, application programmers tend to identify as "patterns" computation structures very close to their application domain.
For example, programmers of evolutionary algorithms readily recognise as a pattern the (parallel) computation of the possible evolution of a gene population, that is, the application of some "evolution function" to all the genes of a given population. Numerical application programmers, however, will recognise the same parallel computation schema as a map pattern, that is, a pattern processing an input collection (a vector or a matrix) by applying the same function f to each element x_i of the collection and returning the collection of the f(x_i).

We are therefore faced with two distinct but opposite forces:

– domain specific patterns may significantly improve the productivity of programmers in the specific domain; but
– general purpose patterns may be more easily implemented, optimised and eventually adapted (via suitable functional and non-functional parameters) to implement domain specific parallel computations.

If a domain specific pattern is identified that may be generalised to a non-domain specific context and still proves to be useful in a number of different applications, that pattern becomes a worthwhile candidate to extend our parallel pattern set.

In this paper we introduce a new parallel pattern that originated in a domain specific context (evolutionary computing) but has been demonstrated to be more general and useful in a number of different applications, both from the evolutionary computing and the symbolic computing domains. The main contributions of the paper may be summarised as follows:

– Definition of a new, domain specific, parallel pattern pool evolution capturing the iterative evolution of a population. The pattern logically belongs to the "algorithm structure" design space, according to the layering of Mattson's book.
– Identification of a list of applications, from different domains, whose parallel behaviour may be perfectly modelled via the new pattern.
– Implementation of the pattern as a new algorithmic skeleton, such that the application programmer may implement a pool evolution patterned application by just providing the functional parameters (business logic code of the evolutionary algorithm) to a pool evolution skeleton object. Both FastFlow and Erlang implementations have been developed.
– Experimental results assessing the scalability and efficiency of both the FastFlow and Erlang skeletons implementing the new parallel pattern.

The rest of this paper is organised as follows: Sec. 2 introduces the new pattern and lists different applications whose parallel structure may be suitably modelled using the pattern. Sec. 3 outlines possible implementation strategies for an algorithmic skeleton implementing the pattern and then describes the actual FastFlow and Erlang skeleton implementations. Sec. 4 presents some preliminary experimental validation of the FastFlow and Erlang implementations of the new pattern. Finally, Sec. 5 outlines related work and Sec. 6 draws conclusions.
Name: Pool evolution pattern

Problem: The pattern models the evolution of a population. In the pattern, a "candidate selection" function (s) selects a subset of objects belonging to an unstructured object pool (P). The selected objects are processed by means of an "evolution" function (e). The evolution function may produce any number of new/modified objects out of the input one. The set of objects computed by the evolution function on the selected objects is filtered through a "filter" function (f) and eventually inserted into the object pool. At any insertion/extraction into/from the object pool, a "termination" function (t) is evaluated on the object pool to determine whether the evolution process should be stopped or continued for further iterations. A pool evolution pattern therefore computes P as the result of the following algorithm:

1: while not(t(P)) do
2:   N = e(s(P))
3:   P = P ∪ f(N, P)
4: end while

Forces: The selection function may be implemented in a data parallel way, by partitioning the object pool and applying selection in parallel on the different partitions. The evolution function step is clearly an embarrassingly parallel one, as is the filter process. The termination process may be implemented as a reduce pattern, but in some cases (e.g. when the "counting" termination function is used) it may be implemented as a plain sequential process. Candidate partitioning into small blocks may improve load balancing. Job stealing may be used to improve load balancing in the last stage of the computation. A mapreduce implementation of the selection, evolution and filtering stages may be considered to improve parallelism exploitation when large populations are evolved.

Table 1 Pool evolution pattern
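To make the functional semantics concrete, the following is a minimal sequential C++ sketch of the algorithm in Tab. 1 (illustrative code only: pool_evolution and its std::function parameters are names introduced here, not part of the skeleton interface discussed in Sec. 3):

#include <functional>
#include <vector>

// Minimal sequential sketch of the pool evolution semantics (Table 1).
// T is the type of the individuals in the pool.
template <typename T>
std::vector<T> pool_evolution(
    std::vector<T> P,                                             // object pool
    std::function<bool(const std::vector<T>&)> t,                 // termination
    std::function<std::vector<T>(const std::vector<T>&)> s,       // selection
    std::function<std::vector<T>(const std::vector<T>&)> e,       // evolution
    std::function<std::vector<T>(const std::vector<T>&,
                                 const std::vector<T>&)> f)       // filter
{
    while (!t(P)) {
        std::vector<T> N = e(s(P));                // N = e(s(P))
        std::vector<T> F = f(N, P);                // filter the new individuals
        // P = P ∪ f(N, P): the append below realises the union, assuming the
        // filter returns only individuals not already present in P
        P.insert(P.end(), F.begin(), F.end());
    }
    return P;
}

Each of the three calls in the loop body (selection, evolution, filter) is a candidate for parallel execution, as discussed under Forces.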
2 Pool evolution pattern

In this section we first describe the new parallel pattern and then present different patterns, from the same or from other application domains, that may be implemented as specialisations of the new pattern.
2.1 Pattern definition

In Tab. 1 we provide a concise definition of the pool evolution pattern, in the style of Mattson's book. We deliberately omit the Solution and Example sections, as pattern implementation will be discussed thoroughly in Sec. 3 and sample usage of the pattern will be discussed later in this section and in Sec. 4.

From an algorithmic skeleton perspective, the pool evolution pattern may be described in terms of its functional semantics (what it computes) and its parallel semantics (how the results are computed in parallel). The functional semantics is that defined by the while loop in the Problem part of Table 1. We now consider the parallel semantics in more detail.

In principle, the computation of a pool evolution pattern is an iterative process. Iteration i builds on the results given by iteration i − 1, and so iteration computations must be serialised.
Fig. 1 Alternative implementations of the pool evolution pattern: single population (left) vs. multi population (right)
Also, each iteration is built of different stages (selection, evolution, filtering) that should be executed sequentially, as each stage takes as input the output of the previous stage. However, the different stages of a single iteration may be computed in parallel, as suggested by the Forces in Tab. 1. In particular:

– The selection process is usually based on some function selecting the "best" individuals as candidates to be submitted to the evolution process. This obviously makes the selection process a good candidate for application of a mapreduce pattern: first map the function estimating how good an individual is, then filter (reduce) the better candidates.
– The evolution process is embarrassingly parallel: each individual may be "evolved" independently of the other selected individuals.
– The filtering process usually evaluates some fitness function on the evolved individuals. Finally, the new individuals with the "best" fitness values are selected to be kept in the next generation. In some cases they are added to the current population, retaining in the population the original individuals that led to the evolution of the new ones. More commonly, the new individuals replace the ones originating the evolution. In either case, the filtering process may be another candidate for application of a mapreduce pattern, with the fitness function being applied in the map and the filtering being applied in the reduce.

Assuming availability of the usual map and mapreduce patterns in the implementation structure design space (more precisely, assuming the corresponding algorithmic skeletons are available), the complete iteration process may be structured as the pipeline

  pipe(mapreduce(f_sel, ⊕_maxk), map(f_evol), mapreduce(f_fitness, ⊕_maxk))

Fig. 1 (left) outlines the kind of computation performed, with circles representing individuals in the population transformed by the different map and mapreduce phases. This raises the opportunity to group the computations eventually leading to the new population differently, by:
1. splitting the population into disjoint groups G_1, . . . , G_g;
2. within each group selecting individuals, evolving them, computing fitness and filtering new individuals; and
3. putting back the selected individuals from the different groups into the population and evaluating the termination condition,

as sketched in Fig. 1 (right). This makes sense from the pure parallelism exploitation viewpoint; however, it also slightly changes the evolution process semantics (single population vs. multiple population algorithms [2]), as the reduce steps will probably lead to different results, given that the population is split into groups. In the definition of the pattern this is not actually relevant, as the Solution text only suggests possible solutions and does not impose any parallel implementation schema. When moving to the algorithmic skeleton implementation of the pattern, this possibility should be taken into account, and the user (application programmer) may conveniently be provided with a boolean skeleton parameter enabling/disabling population decomposition. The boolean parameter gives the application programmer the possibility (and duty) to evaluate the trade-off between the parallelism exploited and the kind of evolutionary algorithm computed.

2.2 Pattern usage examples

Having defined the pool evolution pattern, we now describe a) more patterns that may be implemented in terms of the pool evolution pattern, and b) applications from different domains that may be implemented using the pattern. The goal is to assess whether the new pool evolution pattern is, in fact, a more general pattern, not specifically bound to the particular domain where it originated. If this is the case, then it is of course desirable that efficient implementations of the pattern can be obtained via specific (compositions of) algorithmic skeletons, as this broadens the reach of the pattern.

2.2.1 More patterns

Table 2 includes a number of patterns drawn from two distinct application domains: evolutionary computing and symbolic computing. These patterns may be "reduced" to the pool evolution pattern, that is, for each of them a pool evolution pattern instance exists computing the same (parallel) computation.

The orbit pattern comes from the symbolic computing community [13, 19–21]. It computes the transitive closure of a set through a set of generators. From the pool evolution perspective, the selection function is the identity (all the items in the current set are selected), the evolution function is a map pattern over the generator set items (for each individual p_i compute the set gen_1(p_i), gen_2(p_i), . . . , gen_g(p_i)) and finally the filter function checks that the new individuals do not already belong to the population, as illustrated by the sketch below.
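For instance, a purely sequential C++ sketch of this reduction (illustrative code, not taken from the ParaPhrase implementations; orbit, Item and Gen are names introduced here) is:

#include <set>
#include <vector>

using Item = long;
using Gen  = Item (*)(Item);

// Transitive closure of the pool P under the generators `gens`, phrased as
// the pool evolution loop: selection = identity, evolution = apply all the
// generators, filter = drop items already in the pool, termination = no new
// item was produced in the last iteration.
std::vector<Item> orbit(std::vector<Item> P, const std::vector<Gen>& gens) {
    std::set<Item> known(P.begin(), P.end());
    bool grew = true;
    while (grew) {                        // t(P): stop when the pool is stable
        grew = false;
        std::vector<Item> N;              // N = e(s(P)), with s = identity
        for (Item p : P)
            for (Gen g : gens) N.push_back(g(p));
        for (Item n : N)                  // f(N, P): keep genuinely new items
            if (known.insert(n).second) { P.push_back(n); grew = true; }
    }
    return P;
}

A production version would keep a frontier of the items added in the last iteration instead of re-scanning the whole pool; the point here is only the correspondence with the s/e/f/t structure of Tab. 1.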
Symbolic computing domain

Name: Orbit
Problem: The orbit pattern comes from the symbolic computing domain and models the iterative construction of a set of items starting from an initial set (S) of items, using generator functions from a generator function set (G = {g_1, . . . , g_k}). At each stage of the iterative process, the set of generators is applied to all the elements in the current set. The resulting set of new items is then added to the original set, taking care to avoid duplicates. The orbit pattern therefore computes the transitive closure of a set according to a set of generator functions.

Evolutionary computing domain

Name: Genetic algorithm pattern
Problem: The genetic algorithm pattern describes an iterative behaviour in which, at each iteration step, a set of items (the individuals belonging to a population) evolves. The size of the population could change or could be statically defined. How the items change depends on the genetic operators that the pattern applies (mutation and crossover, for instance), and is thus a matter of application specificity.

Name: Global single population pattern
Problem: The global single population genetic pattern is a domain-specific instance of the genetic algorithm pattern (see above) where the evolution is a process involving the whole population in each generation. In fact, the population is seen as a single entity over which individuals evolve on the basis of a set of genetic operators. The population size tends to be statically defined, and so does not change as the computation proceeds. The result of the global single population genetic pattern may be defined in terms of the algorithm computed by the pool evolution pattern (see Tab. 1).

Name: Multiagent system pattern
Problem: The multiagent system pattern models the evolution of a multiagent system. A multiagent system can be described as a set of autonomous, independent and decentralised agents which represent, through their activities, the evolution of a virtual (discrete or continuous) environment. Agents can be provided with different degrees of "intelligence", depending on the application domain and the problem to solve, but, generally speaking, they all act independently, they may go through significant inactivity periods, they do not necessarily share information, and their behaviour is generally highly influenced by the environment. A multiagent system can be seen as a set of agents A_1, . . . , A_n, each executing one or more jobs j_1, . . . , j_m. Jobs could be provided with a weight, and agents can have a limit on the workload assigned (a_1, . . . , a_n). A pattern for a multiagent system is that which assigns jobs to agents (the so-called Job Assignment Problem) so as to obtain the maximum utility in the minimum overall completion time, i.e. to maximise

  min_{a ∈ {A_1, . . . , A_n}} Σ_i (M_i / t_i) U_i

where M_i is the load of each job, t_i is the execution time of the i-th job imposed by agent a, and U_i is the utility gained from the job being executed by agent a.

Name: Concurrent memetization pattern
Problem: This pattern is also used in evolutionary computation, in which iterative progress processing, such as growth or development in a population, is performed. With respect to other patterns in the family of genetic algorithm patterns, here the population is selected, during the iterative process, using suitable search operators in order to achieve the desired goal. The pattern involves continuous optimisation and combinatorial optimisation phases. It may be useful for implementing Lamarckian or Baldwinian memetic variation operators [6]. The procedure starts with a certain individual i_init and a set of mutation operators (M) available. Then, according to the parameters, a series of mutations m ∈ M and evaluations f of new individuals are performed in parallel. The best solution obtained, i, becomes the new starting point for another phase of the memetization. The best individual after the assumed number of phases is returned.

Table 2 Specific domain patterns suitable for reduction to the pool evolution pattern
The other patterns all come from the evolutionary computing community. The genetic algorithm pattern maps one-to-one onto the pool evolution pattern, as does the global single population pattern. Indeed, they can be understood as the patterns that generated, through generalisation and abstraction, the pool evolution pattern. The multiagent system pattern is somewhat more interesting. In terms of the pool evolution pattern, the selection function selects all agents that have an event to process in their input event queue (a message, a synchronisation request, etc.); the evolution function updates the agent's internal state based on the input event accepted; and the filter function is the identity function (all transformed individuals/agents are put back into the original population, replacing their previous (state) instances). The "evolution" of an agent, however, may generate events directed to other agents. These events should be directed to the correct agent queues during the filtering (update) function, which makes the filtering/termination test slightly more complex (see the sketch below). The concurrent memetization pattern may be reduced to the pool evolution pattern with a process similar to that used to reduce the orbit pattern to the pool evolution one.
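A minimal sketch of this reduction (illustrative code: the Agent and Event types, the state update and the emission rule are all assumptions introduced here) could instantiate the pool evolution functions as follows; the event routing in the update step is the part that goes beyond the basic pattern:

#include <deque>
#include <vector>

struct Event { int target; int payload; };     // hypothetical event type
struct Agent {
    int id;
    int state = 0;
    std::deque<Event>  inbox;                  // pending events
    std::vector<Event> outbox;                 // events produced by last step
};

// selection: agents with at least one pending event
std::vector<Agent*> select_ready(std::vector<Agent>& pool) {
    std::vector<Agent*> ready;
    for (auto& a : pool)
        if (!a.inbox.empty()) ready.push_back(&a);
    return ready;
}

// evolution: consume one event and update the agent state (may emit events)
void evolve(Agent& a) {
    Event e = a.inbox.front(); a.inbox.pop_front();
    a.state += e.payload;                      // hypothetical state update
    if (a.state % 2 == 0)                      // hypothetical emission rule
        a.outbox.push_back({a.id + 1, a.state});
}

// filter/update: identity on the agents, but route the emitted events to
// their target inboxes; termination = no pending event anywhere
// (agent ids are assumed to coincide with pool indices, modulo the size)
bool route_and_check_done(std::vector<Agent>& pool) {
    for (auto& a : pool) {
        for (const Event& e : a.outbox)
            pool[e.target % pool.size()].inbox.push_back(e);
        a.outbox.clear();
    }
    for (const auto& a : pool)
        if (!a.inbox.empty()) return false;
    return true;
}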
2.2.2 More applications

As far as applications (rather than patterns) are concerned, we consider a few representative applications whose parallel behaviour may be readily modelled by the pool evolution pattern.
Strings of a given length generated by a grammar. This is clearly an instance of the orbit pattern (and thus, transitively, of the pool evolution pattern). The productions of the grammar are used as generators and the filtering function (with no fitness function included) simply keeps those items a) not already belonging to the current population and b) not longer than the given length. Termination is determined by an empty set to be added to the population after an iteration.
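As a concrete illustration (a toy grammar, S -> aSb | ab, assumed here purely for the example), the productions can act as generators rewriting the leftmost nonterminal, with the filter enforcing exactly the two conditions above:

#include <set>
#include <string>
#include <vector>

// Toy grammar S -> aSb | ab, phrased as an orbit/pool-evolution instance:
// generators rewrite the leftmost 'S'; the filter drops duplicates and
// strings longer than maxlen; termination: an iteration adds nothing new.
std::vector<std::string> derive(std::size_t maxlen) {
    std::vector<std::string> pool{"S"};
    std::set<std::string> known(pool.begin(), pool.end());
    bool grew = true;
    while (grew) {
        grew = false;
        std::vector<std::string> N;
        for (const auto& w : pool) {
            auto pos = w.find('S');
            if (pos == std::string::npos) continue;   // no nonterminal left
            for (const char* rhs : {"aSb", "ab"})     // the two productions
                N.push_back(std::string(w).replace(pos, 1, rhs));
        }
        for (const auto& n : N)
            if (n.size() <= maxlen && known.insert(n).second) {
                pool.push_back(n);
                grew = true;
            }
    }
    return pool;   // sentential forms; terminal strings contain no 'S'
}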
Brute force sudoku solver. This is a plain pool evolution pattern instance. The population initially contains just the empty board. The evolution function generates boards with possible assignments of an empty cell of the original board, and the filter function sends back to the population those boards adhering to the sudoku rules. A more efficient variant is that where the evolution function picks an empty cell and generates only those configurations filling the cell with legal values. In this case the filter function is the identity function, and the filtering activity itself is moved from filter (sequential execution in the implementation schema P1 in Sec. 3 below) to evolution (computed in parallel).
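A possible sketch of the evolution function for the more efficient variant follows (illustrative code: the flat 81-cell board representation and the function names are assumptions, not the paper's implementation):

#include <array>
#include <vector>

using Board = std::array<int, 81>;   // 9x9 board, 0 = empty cell

// true iff placing v at (row, col) violates no sudoku rule
bool legal(const Board& b, int row, int col, int v) {
    for (int i = 0; i < 9; ++i) {
        if (b[row * 9 + i] == v || b[i * 9 + col] == v) return false;
        int r = (row / 3) * 3 + i / 3, c = (col / 3) * 3 + i % 3;
        if (b[r * 9 + c] == v) return false;          // same 3x3 box
    }
    return true;
}

// evolution: pick the first empty cell and generate one board per legal
// value; a full board (or a dead end) evolves into nothing
std::vector<Board> evolve(const Board& b) {
    std::vector<Board> next;
    for (int i = 0; i < 81; ++i) {
        if (b[i] != 0) continue;
        for (int v = 1; v <= 9; ++v)
            if (legal(b, i / 9, i % 9, v)) {
                Board nb = b; nb[i] = v;
                next.push_back(nb);
            }
        break;                       // only the first empty cell is expanded
    }
    return next;
}

Since every generated board is legal by construction, the only filtering left is insertion into the pool, matching the identity filter described above.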
Function minimum in an interval. This is a plain genetic pattern with a population made of random points in the interval, a fitness function computing the function on the given point of the interval, and the evolution pattern(s) generating new random points, or new points close to the "best" (i.e. minimum) elements in the population.

Finding a function approximating a given set of ⟨point, value⟩ pairs. A population of random functions is evolved by selecting those giving the best approximation of the function value on the known points and applying different random mutations, including a kind of crossover, to obtain new functions. The fitness function measures the distance of the function from the target points. Here the selection function requires evaluation of the distance of the computed points from the target values for all the functions in the population, and a reduction to determine the functions with minimal distance, similar to the function used to select the best mutated individuals as candidates to be inserted in the new population.
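For the first of these two examples, the map (fitness evaluation) plus reduce (keep the k best) structure of the selection phase can be made explicit with a small sketch (illustrative code: the target function f, the parameter k and the Gaussian perturbation are assumptions):

#include <algorithm>
#include <cmath>
#include <random>
#include <vector>

double f(double x) { return x * x + std::sin(5 * x); }   // assumed target

// selection: map the fitness over the population, then reduce to the k
// best points (the partial sort plays the role of the ⊕_maxk reduction);
// assumes k <= pop.size()
std::vector<double> select_best(std::vector<double> pop, std::size_t k) {
    std::partial_sort(pop.begin(), pop.begin() + k, pop.end(),
                      [](double a, double b) { return f(a) < f(b); });
    pop.resize(k);
    return pop;
}

// evolution: perturb each selected point, staying within [lo, hi]
std::vector<double> evolve(const std::vector<double>& sel,
                           double lo, double hi, std::mt19937& rng) {
    std::normal_distribution<double> step(0.0, (hi - lo) / 100.0);
    std::vector<double> out;
    for (double x : sel)
        out.push_back(std::clamp(x + step(rng), lo, hi));
    return out;
}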
3 Skeleton implementation

The parallel implementation of the pool evolution pattern may employ parallelism at different levels and with different granularities. We consider three possibilities, namely:

P1 parallel computation (map pattern) of the evolution function over the selected individuals, with sequential computation of the other phases (selection, filtering and termination);
P2 parallel computation of all phases (as outlined at the end of Sec. 2): mapreduce for the selection and filter phases and map for the evolution phase;
P3 splitting the population into sub-populations and mapping the whole computation relative to one iteration onto the sub-populations, merging the updates after the termination of the sub-computations (map of filter(evolve()) over sub-partitions, then "reduce" filtered individuals for inclusion in the pool population).

The three alternatives use different grains of parallelism (P1 and P2 process individuals in parallel, while P3 processes partitions of the population), and two of them (P1 and P2), while working at the same granularity, use different extents of parallelism (P1 has a greater serial fraction than P2).

In accordance with the ParaPhrase methodology, which provides both C++/FastFlow and Erlang pattern implementations, we implemented two versions of the pool pattern: a FastFlow [11] version and an Erlang version. The FastFlow version is built on top of a task-farm-with-feedback core skeleton, suitably customised to take into account the features of the pool as implemented according to schema P2.
The Erlang version, instead, is built on top of the skel Erlang skeleton library [5, 10] and, in particular, uses a map skeleton instance to implement in parallel the evolve function over the selected population items (schema P1).

The Erlang version is very compact and the actual code corresponds one-to-one to the pseudo-code given in Tab. 1 describing the pattern's functional semantics. The code implementing the pool pattern on top of skel is:
pool(Termination, Selection, Evolution, Filter) ->
    fun(Set) ->
        case (Termination(Set)) of
            true  -> Set;
            false ->
                {Selected, Rest} = Selection(Set),
                Evolved  = skel:do([{map, [{seq, fun ?MODULE:Evolution/1}]}],
                                   [Selected]),
                Filtered = Filter(Evolved),
                Newset   = union(Rest, Filtered),
                (pool(Termination, Selection, Evolution, Filter))(Newset)
        end
    end.
where the skel:do([{map,[{seq,fun ?MODULE:Evolution/1}]}], [Selected]) is functionally equivalent to lists:map(fun ?MODULE:Evolution/1, Selected), but is computed in parallel using the map skeleton of the skel library.

In the FastFlow implementation, by default only the evolution phase is computed in parallel. However, it is also possible to configure the pool implementation to compute the selection and the filtering map-reduce phases in parallel as well. In contrast, the termination phase is always computed sequentially in the current implementation. Both the map and map-reduce phases have been implemented using the ParallelForReduce high-level pattern [8] already available in the FastFlow framework. The ParallelForReduce pattern allows efficient parallelisation of parallel loops and parallel loops with reduction variables. It is implemented using the task-farm-with-feedback core skeleton of the framework. In the FastFlow task-farm-with-feedback skeleton, an emitter thread schedules tasks (either appearing on an input stream or generated from an in-memory data structure) to a pool of worker threads. The workers compute the task results and deliver them back to the emitter. The emitter scheduling policy may be customised by the user.

Fig. 2 sketches both the concurrency structure of the possible parallel patterns implementing the pool evolution and the concrete skeleton currently implementing the pool pattern in FastFlow. This quite simple but effective parallel structure is provided to the parallel application programmer through an interface similar to that used for the other FastFlow high-level patterns (see [17]), supporting all the parameters needed to specialise the pool evolution pattern by means of user (application programmer) supplied business code and non-functional parameters. In particular, the pool evolution pattern interface has been designed as follows:
Fig. 2 Algorithm, concurrent activity graphs and the FastFlow parallel skeleton of the pool evolution pattern. Each circle represents a thread, arrows represent communication channels, which are implemented through FastFlow lock-free buffers.
template <typename T, typename env_t>
class poolEvolution : public ff_node {
public:
    /* selection_t   : takes the population and returns a sub-population
     * evolution_t   : works on the single element
     * filtering_t   : takes the population produced at the previous step and
     *                 the output of the evolution phase, and produces a new
     *                 population
     */
    typedef void (*selection_t)(ParallelForReduce<T>&, std::vector<T>&,
                                std::vector<T>&, env_t&);
    typedef const T& (*evolution_t)(T&);
    typedef void (*filtering_t)(ParallelForReduce<T>&, std::vector<T>&,
                                std::vector<T>&, env_t&);
    typedef bool (*termination_t)(const std::vector<T>&, env_t&);
protected:
    env_t env;
    ParallelForReduce<T> mapreduce;
    ...
public:
    /* constructor: to be used in non-streaming applications */
    poolEvolution(size_t maxp,          /* maximum parallelism degree in all phases */
                  std::vector<T> &pop,  /* the initial population    */
                  selection_t sel,      /* the selection function    */
                  evolution_t evol,     /* the evolution function    */
                  filtering_t fil,      /* the filter function       */
                  termination_t ter,    /* the termination function  */
                  const env_t &E = env_t()); /* user's environment   */
    /* constructor: to be used in streaming applications */
    poolEvolution(size_t maxp,          /* maximum parallelism degree in all phases */
                  selection_t sel,      /* the selection function    */
                  evolution_t evol,     /* the evolution function    */
                  filtering_t fil,      /* the filter function       */
                  termination_t term,   /* the termination function  */
                  const env_t &E = env_t()); /* user's environment   */

    /* changing the parallelism degree of the evolution phase */
    void setParEvolution(size_t pardegree);
    const env_t& getEnv() const { return env; }
    ...
};
The pattern has two constructors: one to support standalone execution of the pattern, where execution processes only the input population specified as a parameter; and another to support execution of the pattern over population items appearing on the pattern's input stream.

To exemplify the user's (application programmer's) perspective of the pool evolution pattern, we now discuss the kind of code needed to program a simple application exploiting parallelism via the pool evolution pattern in FastFlow. In particular, we consider an application modelling a population evolution where:

– each individual of the population is tagged as selected or not-selected by means of some criterion;
– individuals are evaluated in parallel and those exhibiting a very good fitness can generate new individuals and/or mutate; and
– the new or mutated individuals are indiscriminately added to the original population.

The following outline code captures this scenario:
#include <ff/poolEvolution.hpp>   // FastFlow pool pattern header (reconstructed)

/*---------------- genetic-specific code ----------------*/
struct Env_t {                    /* the user's environment (reconstructed) */
    size_t num_generations;
};

bool termination(const std::vector<Individual> &P, Env_t &E) {
    return E.num_generations >= MAXGENERATION;
}

void selection(ParallelForReduce<Individual> &mapreduce,
               std::vector<Individual> &P,
               std::vector<Individual> &newP, Env_t &E) {
    // implementation of selection: P --> newP
}

const Individual& evolution(Individual &t) {
    // implement the evolution changes in t
}

void filter(ParallelForReduce<Individual> &mapreduce,
            std::vector<Individual> &P,
            std::vector<Individual> &newP, Env_t &E) {
    // filter individuals to be added to the current population
    newP += P;
}
/*------------- end of genetic-specific code ------------*/

int main(int argc, char *argv[]) {
    std::vector<Individual> initialP = ....;
    Env_t num_generations{0};     /* my simple environment */
    /* instantiate the pattern with a maximum parallelism degree of 48 */
    poolEvolution<Individual, Env_t> pool(48, initialP, selection, evolution,
                                          filter, termination, num_generations);
    if (pool.run_and_wait_end()   /* the source listing is truncated here */