Calculating Linear Time Algorithms for Solving Maximum Weightsum Problems

Isao Sasano, Zhenjiang Hu, Masato Takeichi
Department of Information Engineering, The University of Tokyo
({sasano, hu, [email protected]})

Mizuhito Ogawa
NTT Communication Science Laboratories, NTT, Japan
([email protected])
Abstract

In this paper, we propose a new method to derive practical linear time algorithms for maximum weightsum problems. A maximum weightsum problem is specified as follows: given recursive data x, find an optimal subset of elements of x which not only satisfies a certain property p but also maximizes the sum of the weights of the elements of the subset. The key point of our approach is to describe the property p as a functional program. This enables us to use program transformation techniques. Based on this approach, we present the optimization theorem, with which we construct a systematic framework to calculate efficient linear time algorithms for maximum weightsum problems on recursive data structures. We demonstrate the effectiveness of our approach through several interesting and non-trivial examples, which would be difficult to solve by known approaches.

Keywords: Programming Pearl, Program Transformation, Catamorphism, Mutumorphisms, Linear Time Graph Algorithm, Fusion, Tupling, Optimization Problems, Dynamic Programming.

1 Introduction

Consider the following optimization problems, the first of which appeared as an exercise in [CLR90] and was discussed in [BdM96].

Professor McKenzie is consulting for the president of the A.-B Corporation. The company has a hierarchical structure; that is, the supervisor relation forms a tree rooted at the president. The personnel office has ranked each employee with a conviviality rating, which is a real number, positive or negative.

Party Planning Problem. The president wants to have a company party. In order to make the party fun for all attendees, the president does not want both an employee and his or her direct supervisor to attend. The problem is to design a linear algorithm to make up the guest list. The goal should be to maximize the sum of the conviviality ratings of the guests.

Group Organizing Problem. The president wants to organize a project group in which the existing supervisor relation can take effect; that is, each group member (except the group leader) has his or her direct or indirect supervisor in the group. The problem is to design a linear algorithm to make the member list. Again, the goal should be to maximize the sum of the conviviality ratings of the members.

Supervisor Chaining Problem. The president wants to award a chain of supervisors whose sum of conviviality is maximum. The problem is to design a linear algorithm to compute the supervisor list.
These problems, though looking a bit different, can be formulated in the following uniform way: given a tree t with nodes associated with weights, we are interested in finding a set of nodes vs which satisfies a certain property p(t, vs) and maximizes the sum of the node weights, i.e.,

  spec t = maximize weightsum [vs | vs ← nodes t, p(t, vs)]

where nodes generates all sets of nodes of a tree, and maximize is defined by

  maximize f [a]     = a
  maximize f (a : x) = let a' = maximize f x
                       in if f a > f a' then a else a'
The difference lies in the definition of the property p, which varies from problem to problem. The properties for the party planning problem, the group organizing problem, and the supervisor chaining problem, denoted by p_pp, p_go and p_sc respectively, are as follows.

p_pp(t, vs): for any two different v1 and v2 in vs, v1 is not the parent or a child of v2 in t.

p_go(t, vs): for any two different v1 and v2 in vs, v1 is an ancestor or a descendant of v2, or a common ancestor of v1 and v2 exists in vs.

p_sc(t, vs): for any two different v1 and v2 in vs, v1 is an ancestor or a descendant of v2, and the vertices on the path between v1 and v2 are all included in vs.
We shall refer to these problems as maximum weightsum problems, the main subject of this paper. In general, a maximum weightsum problem is to find a set of elements of a data structure such that the weightsum of the elements is maximum. The maximum weightsum problems are of great importance, because they encompass a very large class of optimization problems [BLW87, BPT92] (see Section 6 for some concrete examples). However, solving maximum weightsum problems needs much insightful analysis, and a formal derivation of linear time algorithms is often difficult. There are basically two kinds of approaches.
The Algebraic Approach.
Based on the algebraic laws of programs [Bir89, BdM96], one may try to calculate efficient solutions to the problems by program transformation. For instance, Bird successfully derived a linear algorithm to solve the maximum segment sum problem [Bir89], which is a maximum weightsum problem on lists. Bird and de Moor [BdM96] demonstrated the derivation of a linear functional program for the party planning problem based on a relational calculus of high abstraction. However, the success of the derivation usually depends on a powerful calculation theorem, and involves careful and insightful justification to meet the conditions of the theorem, which would be rather difficult for (even experienced) functional programmers to mimic when solving different but similar problems.

The Constructive Approach from Predicates.
It has been shown for decades [AP84, Wim87, BLW87, BPT92], though it is little known in the functional programming community, that if specified by so-called regular predicates [BPT92], maximum weightsum problems on graphs are linear time solvable on decomposable graphs, and that such linear time algorithms can be automatically derived from specifications of the graph problems. Though more systematic than the algebraic approach, the derived linear time algorithms suffer from an unbearably huge table (see Section 7 for details). For instance, the algorithm for solving the party planning problem would need to construct a table of more than 2^(2^142) entries [BPT92], which prevents the derived algorithms from practical use.

In this paper, we propose a new approach to deriving linear time algorithms for the maximum weightsum problems over data structures such as lists, trees, and decomposable graphs. It is more systematic and simpler, integrating the advantages of the above two approaches: applying the first, algebraic approach to make the second approach practical. More concretely, we formalize the property description p by a function instead of a logic predicate, which has been used so far, and we remedy the serious problem of the impractically huge table of the second approach via program calculation. Our main contributions can be summarized as follows.

- We propose a calculation framework together with a new calculation theorem, namely the optimization theorem, which can be applied to derive practical linear time algorithms to solve the maximum weightsum problems in a more systematic and simpler manner than the existing algebraic approach.

- We are the first to successfully apply the algebraic approach to solving the huge-size table problem appearing in the derivation of linear time algorithms on decomposable graphs [AP84, Wim87, BLW87, BPT92], which has not been well solved so far, either by table compression [BLW87] or by dynamic table management [APT98]. This should be significant in making the theoretically interesting linear time graph algorithms practically useful.

As demonstrated in Section 6, the derivation of linear algorithms for many interesting non-trivial optimization problems turns out to be straightforward, but would be hard in other approaches. The Haskell code for solving all the problems in this paper is available at http://www.ipl.t.u-tokyo.ac.jp/~sasano/pub/mws.html.

The organization of this paper is as follows. In Section 2, we informally convey our idea using a list version of the party planning problem. After briefly reviewing the notational conventions and some basic concepts of program calculation in Section 3, we formally define maximum weightsum problems in Section 4, and propose our calculation framework and our main theorem in Section 5. We highlight the features of our approach with several examples in Section 6. Finally, we discuss related work in Section 7 and make concluding remarks in Section 8.

2 A Tour

Before formally addressing our approach, we briefly explain our idea through the list version of the party planning problem, giving the flavor of our calculation procedure.

2.1 Specification

The list version of the party planning problem, which will be called the maximum independent-sublist sum problem, is to compute vs, the set of elements of a non-empty list xs such that no two elements of vs are adjacent in xs. Certainly, it is one of the maximum weightsum problems on lists, which can be naturally specified by

  mis : [α] → [α]
  mis xs = maximize weightsum [vs | vs ← subs xs, p_mis(xs, vs)]

where subs enumerates all sublists (not necessarily contiguous) of a list. What is left is to give a definition for p_mis, which differs from problem to problem. Noticing that vs is a sublist of xs, we can specify p_mis in two steps: first marking the elements of xs which belong to vs, and then defining the property on the marked xs:

  p_mis(xs, vs) = p (marking xs vs)

where p checks that all pairs of marked elements are not adjacent in the marked xs:

  p : [α*] → Bool
  p [x]      = True
  p (x : xs) = if marked x then not (marked (hd xs)) ∧ p xs
               else p xs

So much for our specification of the problem. It is worth noting that our specification is almost as natural as one using monadic second order logic as in [BPT92]; the sharp difference is the use of a recursive function instead of a predicate to specify p. This enables us to employ functional program calculation in the later derivation.
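For concreteness, this exponential specification can be run as an ordinary Haskell program. The sketch below is only illustrative: it assumes that an element is identified with its integer weight, and it enumerates all markings of xs directly rather than enumerating sublists vs and re-marking them afterwards (the two formulations describe the same search space).

  import Data.List (maximumBy)
  import Data.Ord  (comparing)

  type Weight = Int
  type Elem   = Weight          -- an element is identified with its weight (assumption)
  type MElem  = (Elem, Bool)    -- an element paired with its mark

  -- all 2^n ways of marking the elements of a list
  markings :: [Elem] -> [[MElem]]
  markings []       = [[]]
  markings (x : xs) = [ (x, b) : ys | b <- [False, True], ys <- markings xs ]

  weightsum :: [MElem] -> Weight
  weightsum ms = sum [ w | (w, True) <- ms ]

  -- the "independent" property: no two marked elements are adjacent
  p :: [MElem] -> Bool
  p [_]           = True
  p ((_, m) : xs) = (not m || not (snd (head xs))) && p xs

  -- the specification itself (for non-empty input lists): exponentially many
  -- candidates, of which we keep one with maximum weightsum
  misSpec :: [Elem] -> [MElem]
  misSpec xs = maximumBy (comparing weightsum) (filter p (markings xs))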
2.2 Derivation

The derivation is based on our optimization theorem in Section 5, which says that if the property description p can be defined as mutumorphisms [Fok89, Jeu93, HITT97] together with other property descriptions, then a linear time algorithm to solve the maximum weightsum problem with respect to p can be derived automatically. Therefore the derivation of a linear time algorithm for the maximum independent-sublist sum problem reduces to the derivation of mutumorphisms for the p defined above. More precisely, we hope to transform p into the following form

  p (x : xs) = ⊕ x (p xs, p1 xs, ..., pn xs)

where ⊕ denotes a function, and the pi's are auxiliary property descriptions defined in a similar fashion to p:

  pi : [α*] → Bool
  pi (x : xs) = ⊕i x (p xs, p1 xs, ..., pn xs).

Returning to the p for the maximum independent-sublist sum problem, matching its definition body with the required form suggests introducing the following auxiliary property description:

  p1 : [α*] → Bool
  p1 [x]      = not (marked x)
  p1 (x : xs) = not (marked x)

With p1, p can be defined by

  p [x]      = True
  p (x : xs) = if marked x then p1 xs ∧ p xs else p xs

Now applying our general optimization theorem for solving maximum weightsum problems soon yields the linear time algorithm in Figure 1.

  mis :: [Elem] -> (Weight,[MElem])
  mis xs = let opts = mis' xs
           in getmax [ (w,cand) | (c,w,cand) <- opts, accept c ]

  mark :: Elem -> MElem
  mark x = (x,True)

  unmark :: Elem -> MElem
  unmark x = (x,False)

  getmax :: Ord w => [(w,a)] -> (w,a)
  getmax [] = error "No solution."
  getmax xs = foldr1 f xs
    where f (w1,cand1) (w2,cand2) = if w1 > w2 then (w1,cand1) else (w2,cand2)

  eachmax :: (Ord w, Eq c) => [(c,w,a)] -> [(c,w,a)]
  eachmax xs = foldr f [] xs
    where f (c,w,cand) [] = [(c,w,cand)]
          f (c,w,cand) ((c',w',cand') : opts) =
            if c == c'
              then if w > w' then (c,w,cand) : opts else (c',w',cand') : opts
              else (c',w',cand') : f (c,w,cand) opts

  table :: Bool -> Class -> Class
  table True  0 = 0
  table True  1 = 0
  table True  2 = 0
  table True  3 = 2
  table False 0 = 1
  table False 1 = 1
  table False 2 = 3
  table False 3 = 3

Figure 1: Linear-time Algorithm for Maximum Independent-Sublist Sum Problem
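The remaining pieces of the program in Figure 1 (the type synonyms, the accepting predicate, and the auxiliary function mis') follow the scheme of the generic opt function of Section 5. The sketch below, which uses eachmax and table from the figure, shows one possible rendering; the concrete types Elem and Weight, the representation of Class as Int, and the names accept and weight are choices made here for illustration.

  type Elem   = Int            -- assumed: an element is just its weight
  type Weight = Int
  type MElem  = (Elem, Bool)   -- an element paired with its mark
  type Class  = Int            -- classes 0..3, as explained below

  weight :: Elem -> Weight
  weight = id

  -- a class is accepting when the property p holds for it (classes 2 and 3)
  accept :: Class -> Bool
  accept c = c == 2 || c == 3

  -- mis' keeps, for each class, one best candidate together with its weightsum;
  -- consing a (possibly marked) element onto a candidate moves it to the class
  -- given by table.
  mis' :: [Elem] -> [(Class, Weight, [MElem])]
  mis' [x]      = eachmax [ (if b then 2 else 3,
                             if b then weight x else 0,
                             [(x, b)])
                          | b <- [True, False] ]
  mis' (x : xs) = eachmax [ (table b c,
                             (if b then weight x else 0) + w,
                             (x, b) : cand)
                          | b <- [True, False]
                          , (c, w, cand) <- mis' xs ]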
The algorithm is coded in Haskell. The main function mis takes a list as its argument and returns a pair whose first component is the maximum weightsum and whose second component is the input list with the selected elements marked with True. For example, mis [1..4] returns

  (6,[(1,False),(2,True),(3,False),(4,True)])

Notice that our initial specification only returns [2,4]. The correctness of the algorithm follows from our optimization theorem (see Section 5). The linearity of mis comes from the following observations:
- The argument to eachmax has at most 8 elements, and so does the second argument of the f used to define eachmax. So eachmax costs O(1) time. Therefore the auxiliary function mis' is a linear time program.

- The opts in the body of mis has at most 4 elements, so it costs constant time to produce the final result after computing mis' xs.

The function mis' computes one optimal solution for each class, where the classes correspond to the elements of the range of the function h defined as follows.

  h : [α*] → (Bool, Bool)
  h x = (p x, p1 x)

Class 0 corresponds to (False, False), Class 1 to (False, True), Class 2 to (True, False), and Class 3 to (True, True). These classes can be interpreted as follows.

- Class 0 means that p does not hold and p1 does not hold: the head of the list is marked and the set of marked elements is not independent in the list.

- Class 1 means that p does not hold and p1 holds: the head of the list is not marked and the set of marked elements is not independent in the list.

- Class 2 means that p holds and p1 does not hold: the head of the list is marked and the set of marked elements is independent in the list.

- Class 3 means that p holds and p1 holds: the head of the list is not marked and the set of marked elements is independent in the list.

From the function h, we can automatically derive the definition of table. See Section 5 for details.
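The derivation of table can itself be phrased as a tiny program: a class encodes the pair (p xs, p1 xs), and consing an element with mark b onto xs transforms that pair according to the defining equations of p and p1. A sketch of this idea, with the Int encoding of classes assumed as in Figure 1:

  type Class = Int   -- 0..3, encoding the pair (p xs, p1 xs)

  encode :: (Bool, Bool) -> Class
  encode (False, False) = 0
  encode (False, True ) = 1
  encode (True , False) = 2
  encode (True , True ) = 3

  decode :: Class -> (Bool, Bool)
  decode 0 = (False, False)
  decode 1 = (False, True )
  decode 2 = (True , False)
  decode 3 = (True , True )
  decode _ = error "not a class"

  -- step b (p xs, p1 xs) gives (p (x:xs), p1 (x:xs)) when the new head has mark b,
  -- read off directly from the mutumorphic definitions of p and p1
  step :: Bool -> (Bool, Bool) -> (Bool, Bool)
  step b (pXs, p1Xs) = (if b then p1Xs && pXs else pXs, not b)

  table :: Bool -> Class -> Class
  table b c = encode (step b (decode c))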
2.3 Remarks

Two remarks are worth making. First, our derivation is surprisingly simple and powerful. It is just a derivation of mutumorphisms which consist of several property descriptions. This derivation is nothing very special, in contrast to other program calculation methods where a wealth of calculation techniques have been developed [Jeu93, BdM96]. Second, the optimization theorem plays a significant role in our derivation. It not only provides a friendly interface for people to derive linear time algorithms, but is also suitable for automatic implementation. This theorem, which will be discussed in detail in Section 5, is effective because it protects people from needing extensive knowledge of the tupling transformation and the dynamic programming technique.

3 Preliminaries

In this section, we briefly review the notational conventions and some basic concepts of program calculation as in [Bir87, MFP91, BdM96].

3.1 Recursive Data Type

To simplify our presentation, the recursive data structures we consider in this paper are neither mutual nor non-polynomial; mutually recursive structures could be turned into single ones by tupling, while non-polynomial structures could be flattened by fusion. To be precise, the recursive data types in this paper are defined in the following form:

  D α = C1 α11 ... α1m1 D1 ... Dn1
      | ...
      | Ci αi1 ... αimi D1 ... Dni
      | ...
      | Ck αk1 ... αkmk D1 ... Dnk

where each αij denotes α and each Di denotes D α. The Ci's are called data constructors; each may have a bounded number of elements of type α and a bounded number of recursive components. Though seemingly restricted, such data types are powerful enough to cover commonly used data types such as lists, binary trees, rooted trees [BLW87], and series-parallel graphs [TNS82]. Moreover, other data types like the rose trees defined by

  RTree α = Node α [RTree α]

can easily be converted into the above form, as will be demonstrated in Section 5.

3.2 Catamorphism

Catamorphisms, one of the most important concepts in program calculation [MFP91, SF93, BdM96], form a class of important recursive functions over a given data type. They are the functions that promote through the data constructors. For example, for the type of lists, given e and ⊕, there exists a unique catamorphism, say cata, satisfying the following equations.

  cata [ ]      = e
  cata (x : xs) = x ⊕ (cata xs)

In essence, cata is a relabeling: it replaces every occurrence of [ ] with e and every occurrence of (:) with ⊕ in the cons list. Because of the uniqueness property of catamorphisms (i.e., for this example e and ⊕ uniquely determine a catamorphism over cons lists), we usually denote this catamorphism by cata = ([e, ⊕]).

Definition 1 (Catamorphism) A catamorphism over a recursive data type D α is written f = ([φ1, ..., φk])_Dα and denotes the unique function f satisfying

  f (Ci e1 ... emi x1 ... xni) = φi e1 ... emi (f x1) ... (f xni)    (i = 1, ..., k)

When no ambiguity arises, we usually omit the subscript D α in ([φ1, ..., φk])_Dα. □

Catamorphisms play an important role in program transformation (program calculation), as they satisfy a number of nice calculational properties, among which the fusion theorem is of greatest importance:

Theorem 1 (Fusion) If, for i = 1, ..., k,

  f (φi e1 ... emi r1 ... rni) = ψi e1 ... emi (f r1) ... (f rni)

then

  f ∘ ([φ1, ..., φk])_Dα = ([ψ1, ..., ψk])_Dα.    □

The fusion theorem gives the condition that has to be satisfied in order to promote (fuse) a function into a catamorphism to obtain a new catamorphism. It actually provides a constructive but powerful mechanism to derive a "bigger" catamorphism from a program written in a compositional style, a typical style of functional programming.
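On cons lists a catamorphism is simply Haskell's foldr, and the fusion theorem can be checked on a small example of our own (not part of the development above), fusing a post-processing function into a sum:

  -- cata [] = e ; cata (x:xs) = x ⊕ cata xs   is   foldr (⊕) e
  sumList :: [Int] -> Int
  sumList = foldr (+) 0              -- the catamorphism ([0, (+)])

  double :: Int -> Int
  double r = 2 * r

  -- Fusion: double (x + r) = 2*x + double r, and double 0 = 0, so
  --   double . sumList = foldr (\x r -> 2*x + r) 0
  doubledSum :: [Int] -> Int
  doubledSum = foldr (\x r -> 2 * x + r) 0

  -- e.g. double (sumList [1,2,3]) == doubledSum [1,2,3] == 12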
3.3 Mutumorphisms

Mutumorphisms, generalizations of catamorphisms to mutually defined functions, are defined as follows [Fok89, Fok92, HITT97].

Definition 2 (Mutumorphisms) Mutumorphisms over a recursive data type D α are functions f1, f2, ..., fn which are defined in the following mutually recursive form. For each i ∈ {1, 2, ..., n},

  fi (C1 e1 ... em1 x1 ... xn1) = φi1 e1 ... em1 (h x1) ... (h xn1)
  fi (C2 e1 ... em2 x1 ... xn2) = φi2 e1 ... em2 (h x1) ... (h xn2)
  ...
  fi (Ck e1 ... emk x1 ... xnk) = φik e1 ... emk (h x1) ... (h xnk)

where h = f1 △ f2 △ ... △ fn. □

Here, f1 △ f2 △ ... △ fn denotes the function defined by

  (f1 △ f2 △ ... △ fn) x = (f1 x, f2 x, ..., fn x).

It is known that mutumorphisms can be transformed into a single catamorphism by the tupling transformation [Fok89, Fok92, HITT97].

Theorem 2 (Mutu Tupling) If f1, f2, ..., fn are mutumorphisms as in Definition 2, then

  f1 △ f2 △ ... △ fn = ([ψ1, ψ2, ..., ψk])_Dα

where ψi = φ1i △ ... △ φni (i = 1, ..., k). □
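As a small illustration of Theorem 2 on cons lists (our own example, not from the development above): the predicate steep genuinely depends on sumL, so the two form mutumorphisms, and tupling them gives a single one-pass catamorphism.

  -- steep and sumL are mutumorphisms: steep refers to sumL
  steep :: [Int] -> Bool
  steep []       = True
  steep (x : xs) = x > sumL xs && steep xs     -- quadratic if run as written

  sumL :: [Int] -> Int
  sumL []       = 0
  sumL (x : xs) = x + sumL xs

  -- Theorem 2: steep △ sumL is a single catamorphism (one linear traversal)
  steepSum :: [Int] -> (Bool, Int)
  steepSum = foldr (\x (st, s) -> (x > s && st, x + s)) (True, 0)

  -- steep xs == fst (steepSum xs); for example steep [10,5,3,1] == True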
4 The Maximum Weightsum Problems

Before studying our approach formally, we need to be more precise about the maximum weightsum problems, and clarify the limitations of the existing solutions.

4.1 A Formal Definition

A maximum weightsum problem can be rephrased as follows. Given data x, the task is to find a way to mark some elements of x such that the marked data, say x*, meet a certain property, say p, and the sum of the weights of the marked elements is maximum. Here "maximum" means that no other marking of x satisfying p can produce a larger weightsum. A straightforward solution is as follows.

  mws : (D α* → Bool) → D α → D α*
  mws p x = maxweight [x* | x* ← gen x, p x*]

We use gen to generate all possible markings of the input data x, and from those which satisfy the property p we use maxweight to select a marked one whose weightsum of marked elements is maximum. Before defining gen and maxweight, we should explain some of our notation for marking. For a data type of α, we use α* to extend α with marking information. It can be defined more concretely by α* = (α, Bool), where the boolean value indicates whether the element of type α is marked or not. Accordingly, we use a*, b*, ..., x* to denote variables of type α* or of type D α* (i.e., D(α*)). The function gen exhaustively enumerates all possible ways of marking or unmarking every element, and can be recursively defined by

  gen : D α → [D α*]
  gen (Ci ei1 ... eimi x1 ... xni)
    = [ Ci e*i1 ... e*imi x*1 ... x*ni
      | e*i1 ← [mark ei1, unmark ei1],
        ...
        e*imi ← [mark eimi, unmark eimi],
        x*1 ← gen x1,
        ...
        x*ni ← gen xni ]                         (i = 1, ..., k)

where mark and unmark are functions for marking and unmarking:

  mark x   = (x, True)
  unmark x = (x, False).

Then, the function maxweight selects, from a list of marked data, one which has the maximum weightsum.

  maxweight : [D α*] → D α*
  maxweight [x*]      = x*
  maxweight (x* : xs) = let y* = maxweight xs
                        in if wsum x* > wsum y* then x* else y*

where wsum computes the sum of the weights of the marked elements in a data structure:

  wsum : D α* → Weight
  wsum (Ci e*i1 ... e*imi x*1 ... x*ni)
    = (if marked e*i1 then weight e*i1 else 0) + ...
      + (if marked e*imi then weight e*imi else 0)
      + wsum x*1 + ... + wsum x*ni               (i = 1, ..., k)

Here, marked takes an element e* as its argument and checks whether e* is marked or not.

  marked : α* → Bool
  marked (e, m) = m

So much for the specification of mws, with which we can define various concrete maximum weightsum problems. For example, the maximum independent-sublist sum problem in Section 2 can be specified by mis = mws p, where p is the description of the "independent" property as in Section 2. Obviously, the above mws is an exponential algorithm, too expensive to be used in practice.
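Specialized to cons lists, mws can be transcribed almost literally into Haskell. The sketch below (with integer weights assumed, and an element identified with its weight) is executable but clearly exponential; it is the starting point that the optimization theorem improves on.

  type Weight = Int
  type Elem   = Weight
  type MElem  = (Elem, Bool)

  mark, unmark :: Elem -> MElem
  mark   x = (x, True)
  unmark x = (x, False)

  marked :: MElem -> Bool
  marked (_, m) = m

  weight :: MElem -> Weight
  weight (w, _) = w

  -- gen: all 2^n markings of the input list
  gen :: [Elem] -> [[MElem]]
  gen []       = [[]]
  gen (x : xs) = [ x' : ys | x' <- [mark x, unmark x], ys <- gen xs ]

  wsum :: [MElem] -> Weight
  wsum ms = sum [ weight m | m <- ms, marked m ]

  maxweight :: [[MElem]] -> [MElem]
  maxweight [x]      = x
  maxweight (x : xs) = let y = maxweight xs in if wsum x > wsum y then x else y

  mws :: ([MElem] -> Bool) -> [Elem] -> [MElem]
  mws p x = maxweight [ x' | x' <- gen x, p x' ]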
4.2 Limitation of Borie's Approach

The maximum weightsum problems have many instances in graph algorithms. Bern et al. showed that many graph problems, such as the minimum vertex cover problem, the maximum independent set problem, the maximum matching problem, the traveling salesman problem, etc., can be described by mws, and more interestingly that they can be solved in linear time on decomposable graphs [BLW87]. Based on this work and Courcelle's work [Cou90b], Borie et al. proposed a method to automatically construct linear time algorithms to solve the maximum weightsum problems [BPT92]. The main idea there is to restrict the property p to be described in terms of a small canonical set of primitive predicates (such as the incidence predicate Inc(v, e)), which are combined by logical operators (∧, ∨, and ¬) and (either first or second order) quantifiers (∀ and ∃). For example, the property p for the maximum independent set problem is described by

  p           = ∀v1 ∀v2. ¬ Adj(v1, v2)
  Adj(v1, v2) = (∃e1) (Inc(v1, e1) ∧ Inc(v2, e1)) ∧ ¬(v1 = v2).

With this restriction, one is able to construct a linear time algorithm for mws p from the predicative structure of p. Though attractive in theory, the linear time algorithms need to create a huge table, which prevents them from being used in practice [BLW87, BPT92, APT98]. For instance, the size of the created table for the maximum independent set problem is more than 2^(2^142). In contrast, by our proposed functional approach this number can be reduced to 8 (see Figure 1).

5 The Optimization Theorem

This section focuses on a formal study of our functional approach to solving the maximum weightsum problems, where our main theorem, the optimization theorem, plays a significant role. It not only clarifies a sufficient condition for the existence of linear time algorithms, but also gives a calculation rule for the construction of such algorithms. As will be seen later, the key points of our calculation are to take notice of the functional (rather than predicative) structure of property descriptions, and to make good use of program transformations such as fusion and tupling [HITT97, Fok92, Fok89].

5.1 Optimization Theorem

To solve the maximum weightsum problems, we shall begin by giving a sufficient condition under which efficient linear time algorithms are guaranteed to exist (Lemma 3). Then we use mutumorphisms to formalize the class of problems we can solve in linear time (Lemma 4). Our optimization theorem summarizes the two lemmas.

The following lemma gives a sufficient condition with which one can judge whether a maximum weightsum problem can be solved in linear time.

Lemma 3 (Optimization Lemma) Let spec be defined by

  spec : D α → D α*
  spec = mws (accept ∘ ([φ1, ..., φk])).

If the domain of the predicate accept is finite, then spec can be solved in linear time. □

This lemma was inspired by Bern et al.'s work on decomposable graphs [BLW87], where a similar condition on decomposable graphs was given together with an informal study of the linear time algorithm. We generalize the idea from decomposable graphs¹ to generic recursive data types. Moreover, we give a concrete linear time algorithm using the optimization function opt in Figure 2. The relation between opt and spec is that for any input x,

  wsum (spec x) = wsum (snd (opt accept φ1 ... φk x)),

that is, opt accept φ1 ... φk computes a solution to the maximum weightsum problem spec. This relation can be proved by induction on the structure of x, which is omitted here. Instead, we give an intuitive explanation. We compute the maximum weightsum in two steps. First we use candidates to compute one maximum weightsum for each class², and then we apply getmax to select an optimal solution among the maximum weightsums of all accepting classes. Note that the functions getmax and eachmax in Figure 2 are the same as those in Figure 1.

  opt accept φ1 ... φk x = let cand = candidates φ1 ... φk x
                           in getmax [ (w, r*) | (c, w, r*) ← cand, accept c ]

  candidates φ1 ... φk = ([ψ1, ..., ψk])_Dα
    where
      ψi e1 ... emi cand1 ... candni
        = eachmax [ ( φi e*1 ... e*mi c1 ... cni,
                      (if marked e*1 then weight e*1 else 0) + ...
                        + (if marked e*mi then weight e*mi else 0)
                        + w1 + ... + wni,
                      Ci e*1 ... e*mi r*1 ... r*ni )
                  | e*1 ← [mark e1, unmark e1], ..., e*mi ← [mark emi, unmark emi],
                    (c1, w1, r*1) ← cand1, ..., (cni, wni, r*ni) ← candni ]
        (i = 1, ..., k)

Figure 2: Optimization Function opt

¹ Decomposable graphs do not allow the addition of a new element when gluing smaller graphs. In other words, they impose a restriction on D α where, for each data constructor Ci, either mi = 0 or ni = 0.
² From the condition that the domain of accept is finite, we know the range of the catamorphism ([φ1, ..., φk]) is finite. This catamorphism projects the input to this finite range, whose elements are called classes [BLW87], as seen in Section 2.

Next, we briefly explain why the function opt is linear. The important observation is that the number of elements in cand is bounded by the number of classes (i.e., the number of elements in the domain of accept, or equivalently in the range of the catamorphism), because eachmax returns a list whose length is bounded by the number of classes. Therefore, if candidates can be computed in linear time, then so can opt. To prove that candidates is a linear time algorithm, it suffices to show that ψi e1 ... emi cand1 ... candni can be computed in constant time. Given that φi can be computed in O(1) time, because the size of its arguments is independent of the input, the construction of each triple (c, w, r*) (i.e., each element consumed later by eachmax) can be computed in O(1) time: c is obtained in O(1) time by φi, w by (mi + ni) applications of +, and r* by merely combining e*1 ... e*mi and r*1 ... r*ni. Here mi and ni are bounded by M and N respectively, where

  M = max { mi | 1 ≤ i ≤ k }
  N = max { ni | 1 ≤ i ≤ k }.

Moreover, exactly 2^mi × |cand1| × ... × |candni| triples (c, w, r*) are generated, and this number is bounded by the constant 2^M × C^N, where C is the number of classes. Therefore, the complexity of ψi e1 ... emi cand1 ... candni is O(1). Though M and N occur in the exponents, for many well-used data types, such as lists and binary trees, M and N are small (perhaps one or two).

In our optimization lemma, we give the optimization function opt to compute an optimal solution. It is worth noting that we can also do recognition and enumeration in linear time, by slightly changing the definition of opt. Recognition means recognizing whether a solution satisfying the property description exists or not, and enumeration means counting the number of solutions satisfying the property description [BPT92].
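To make the remark on recognition and enumeration concrete, here is a sketch of our own (for cons lists and the independent-sublist property of Section 2): instead of one best candidate per class we keep a count of solutions per class, and the same bounded-table argument keeps the computation linear.

  import qualified Data.Map as Map

  type Class = Int   -- 0..3, the classes of Section 2

  -- class transition when consing an element with mark b, as in Figure 1
  table :: Bool -> Class -> Class
  table True  0 = 0
  table True  1 = 0
  table True  2 = 0
  table True  3 = 2
  table False 0 = 1
  table False 1 = 1
  table False 2 = 3
  table False 3 = 3

  -- number of markings of a non-empty list that fall into each class;
  -- the map never has more than four entries, so each step is O(1)
  counts :: [a] -> Map.Map Class Integer
  counts [_]      = Map.fromListWith (+) [ (2, 1), (3, 1) ]
  counts (_ : xs) = Map.fromListWith (+)
                      [ (table b c, n)
                      | b <- [True, False]
                      , (c, n) <- Map.toList (counts xs) ]

  -- enumeration: number of markings that land in an accepting class
  enumerate :: [a] -> Integer
  enumerate xs = sum [ n | (c, n) <- Map.toList (counts xs), c == 2 || c == 3 ]

  -- recognition: does any accepting marking exist?
  recognize :: [a] -> Bool
  recognize = (> 0) . enumerate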
To apply the optimization lemma, we need to represent the property description p as accept ∘ ([φ1, ..., φk]), where the domain of accept is finite. The following lemma gives a method to transform the property description p into this form, when p is defined using mutumorphisms.

Lemma 4 (Decomposition Lemma) If the property description p0 : D α* → Bool can be defined as mutumorphisms together with other property descriptions pi : D α* → Bool for i = 1, ..., n, then p0 can be decomposed into

  p0 = accept ∘ ([φ1, ..., φk]),

where the domain of accept is finite.

Proof. Suppose that the mutumorphisms p0, p1, ..., pn are defined as follows. For each i ∈ {0, 1, ..., n},

  pi (C1 e1 ... em1 x1 ... xn1) = φi1 e1 ... em1 (h x1) ... (h xn1)
  pi (C2 e1 ... em2 x1 ... xn2) = φi2 e1 ... em2 (h x1) ... (h xn2)
  ...
  pi (Ck e1 ... emk x1 ... xnk) = φik e1 ... emk (h x1) ... (h xnk)

where h = p0 △ p1 △ ... △ pn. By the mutu tupling theorem, we have

  p0 △ p1 △ ... △ pn = ([φ1, ..., φk])_Dα    (1)

where φi = φ0i △ φ1i △ ... △ φni (i = 1, ..., k). By defining

  accept (x0, x1, ..., xn) = x0,

we obtain

  p0 = accept ∘ ([φ1, ..., φk])_Dα.

Next, we show that the domain of accept, i.e., the range of ([φ1, ..., φk])_Dα, is finite. From equation (1), we know that p0, p1, ..., pn are functions whose range has the two elements True and False. So the range of ([φ1, ..., φk])_Dα has at most 2^(n+1) elements, which is finite. This means that the domain of accept is finite. □

Combining the above two lemmas, we obtain the optimization theorem.

Theorem 5 (Optimization Theorem) The maximum weightsum problem specified by

  spec : D α → D α*
  spec = mws p0

can be solved in linear time if the property description p0 : D α* → Bool can be defined as mutumorphisms together with other property descriptions pi : D α* → Bool for i = 1, ..., n. □

This theorem provides a friendly interface for the derivation of efficient linear time algorithms to solve the maximum weightsum problems. When the property description is defined as mutumorphisms consisting of n property descriptions, the derived catamorphism has a range with at most 2^n elements. In many cases, a property description p can easily be described as mutumorphisms consisting of a small number (perhaps two to four) of property descriptions. For example, the party planning problem is described with mutumorphisms consisting of only two property descriptions. This means that the obtained linear time algorithms are practical.

To see how the theorem works, recall the maximum independent-sublist sum problem in Section 2. The property description p is defined with p1 in the following mutumorphic form:

  p [x]       = True
  p (x : xs)  = if marked x then p1 xs ∧ p xs else p xs
  p1 [x]      = not (marked x)
  p1 (x : xs) = not (marked x)

It follows from the optimization theorem that an efficient linear time algorithm to solve the maximum independent-sublist sum problem can be derived automatically.

5.2 Calculation Strategy

In this section, we show how to derive practical linear time algorithms to solve the maximum weightsum problems based on the optimization theorem. An intuitive tour has been given in Section 2.

Specification. Define a data structure R to capture the structure of the inputs, and concisely specify the property description for the maximum weightsum problem using a function p on R.

Step 1. If R is not a recursive data structure as defined in Section 3, find a recursive data structure D which corresponds to R, and then transform the property description p on R into p′ on D. If the data structure R is a recursive data structure as defined in Section 3, then do nothing; that is, let p′ = p and D = R.

Step 2. Derive p′ in terms of mutumorphisms which consist of several property descriptions p′0, p′1, ..., p′n on D, using fusion and tupling transformations.

Step 3. Apply our optimization theorem to get a linear time algorithm to solve the maximum weightsum problem.

We demonstrate these steps on the party planning problem.

The Party Planning Problem

Specification. The inputs of the party planning problem are trees, so we can use the following recursive data structure.

  Org α ::= Leader α [Org α]

The specification of the problem is

  pp : Org α → Org α*
  pp = mws p

where the property description p of the party planning problem can be written as follows.

  p : Org α* → Bool
  p (Leader v [ ])      = True
  p (Leader v (t : ts)) = not (bothmarked v (getLeader t)) ∧ p t ∧ p (Leader v ts)

If the tree contains only a single node, then the property is satisfied. Otherwise we check its root v against its children one by one to guarantee that v and a child are never both marked, and then check the other parts of the tree recursively. Here bothmarked v1 v2 checks whether both v1 and v2 are marked, and getLeader returns the root of the organization tree:

  bothmarked : α* → α* → Bool
  bothmarked v1 v2 = marked v1 ∧ marked v2

  getLeader : Org α* → α*
  getLeader (Leader v ts) = v

Notice that our initial specification is rather straightforward.

Step 1. The data type Org α is a non-polynomial data type whose nodes can have arbitrarily many children. In fact, we can represent it by a binary tree structure called a rooted tree [BLW87], defined as follows.

  RTree α ::= Root α | Join (RTree α) (RTree α)

The relation between these two data types can be captured by the following functions.

  r2o : RTree α → Org α
  r2o (Root v)     = Leader v [ ]
  r2o (Join t1 t2) = let Leader v ts = r2o t2 in Leader v ((r2o t1) : ts)

  o2r : Org α → RTree α
  o2r (Leader v [ ])      = Root v
  o2r (Leader v (t : ts)) = Join (o2r t) (o2r (Leader v ts))

These two functions convert between the data types in linear time. Next, we transform the property description p on Org α into p′ on RTree α. Let p′ and getLeader′ be the functions on RTree α which correspond to p and getLeader on Org α. These functions should satisfy the following equations.

  p′ : RTree α* → Bool
  p′ t = p (r2o t)

  getLeader′ : RTree α* → α*
  getLeader′ t = getLeader (r2o t)

A simple fusion calculation yields:

  p′ (Root v)     = True
  p′ (Join t1 t2) = not (marked (getLeader′ t1) ∧ marked (getLeader′ t2)) ∧ p′ t1 ∧ p′ t2

  getLeader′ (Root v)     = v
  getLeader′ (Join t1 t2) = getLeader′ t2

Step 2. In order to represent p′ using mutumorphisms, define lm′ by

  lm′ : RTree α* → Bool
  lm′ = marked ∘ getLeader′

and by a simple fusion calculation we get

  lm′ (Root v)     = marked v
  lm′ (Join t1 t2) = lm′ t2

Now we can represent p′ using mutumorphisms which consist of the following two property descriptions.

  p′ (Root v)      = True
  p′ (Join t1 t2)  = not (lm′ t1 ∧ lm′ t2) ∧ p′ t1 ∧ p′ t2

  lm′ (Root v)     = marked v
  lm′ (Join t1 t2) = lm′ t2

Step 3. By applying our optimization theorem, we get the following linear time algorithm.

  pp = r2o ∘ snd ∘ (opt accept φ1 φ2) ∘ o2r
  where accept (x0, x1)      = x0
        φ1 v                 = (True, marked v)
        φ2 (a1, b1) (a2, b2) = (not (b1 ∧ b2) ∧ a1 ∧ a2, b2)

So much for the derivation. Note that we can go further and compute the φi statically, storing them in a table, though this is not necessary. To do so, define four classes by

  c0 = (False, False)
  c1 = (False, True)
  c2 = (True, False)
  c3 = (True, True)

and we can simplify the functions accept, φ1, and φ2 to the following.

  accept c  = (c == c2) ∨ (c == c3)
  φ1 v      = if marked v then c3 else c2
  φ2 c  c0  = c0
  φ2 c  c1  = c1
  φ2 c0 c2  = c0
  φ2 c1 c2  = c0
  φ2 c2 c2  = c2
  φ2 c3 c2  = c2
  φ2 c2 c3  = c3
  φ2 c  c3  = c1

Notice that φ2 behaves like a table.
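The derived algorithm can be transcribed into Haskell along the lines of Figure 1. The following sketch runs in linear time on the rooted-tree representation; the representation of ratings as Int and the inlining of φ1 and φ2 into candidates are choices made here for illustration, and composing with o2r and r2o as above gives pp on Org.

  import Data.List (maximumBy)
  import Data.Ord  (comparing)

  type Weight = Int
  type Emp    = Weight               -- an employee is identified with its rating here
  type MEmp   = (Emp, Bool)          -- an employee paired with its mark
  type Class  = (Bool, Bool)         -- the pair (p', lm') of the derivation above

  data RTree a = Root a | Join (RTree a) (RTree a)

  -- one best candidate per class, following the generic candidates of Figure 2
  candidates :: RTree Emp -> [(Class, Weight, RTree MEmp)]
  candidates (Root v) =
    eachmax [ ((True, b), if b then v else 0, Root (v, b)) | b <- [True, False] ]
  candidates (Join t1 t2) =
    eachmax [ ((not (b1 && b2) && a1 && a2, b2), w1 + w2, Join r1 r2)
            | ((a1, b1), w1, r1) <- candidates t1
            , ((a2, b2), w2, r2) <- candidates t2 ]

  eachmax :: (Eq c, Ord w) => [(c, w, r)] -> [(c, w, r)]
  eachmax = foldr f []
    where f (c, w, r) [] = [(c, w, r)]
          f (c, w, r) ((c', w', r') : opts)
            | c == c'   = if w > w' then (c, w, r) : opts else (c', w', r') : opts
            | otherwise = (c', w', r') : f (c, w, r) opts

  -- the party planning algorithm on the rooted-tree representation
  ppR :: RTree Emp -> (Weight, RTree MEmp)
  ppR t = maximumBy (comparing fst)
            [ (w, r) | ((a, _), w, r) <- candidates t, a ]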
6 Features

In this section, we would like to highlight several important features of our calculational approach for solving the maximum weightsum problems, namely simplicity, generality, and flexibility.

6.1 Simplicity

First, as seen in the derivation of a linear time algorithm for the party planning problem as well as that in Section 2, our calculation is surprisingly simple. With the optimization theorem, all we need to do is to derive a mutumorphic form of the property description. Fortunately, there are a lot of handy calculation strategies [Jeu93, BdM96], such as generalization of subexpressions to functions and fusion transformation, to derive mutumorphisms [HITT97]. In the following, we illustrate this simplicity further by solving the other two problems from the introduction, whose linear algorithms are indeed not so straightforward even for an experienced functional programmer.

Group Organizing Problem

The inputs of the group organizing problem are trees, and we use the same data structure Org α as for the party planning problem. The property p on the marked tree for this problem is that for any two marked nodes, one of their ancestor nodes must be marked. This can be described recursively by

  p : Org α* → Bool
  p (Leader v [ ])      = True
  p (Leader v (t : ts)) = if marked v then True
                          else if nm t then p (Leader v ts)
                          else p t ∧ nm (Leader v ts)

The first equation reads that a single node tree always satisfies p. The second equation reads that for a tree with root v, if the root is marked, then any way of marking the nodes of its subtrees is acceptable. Otherwise, we scrutinize the subtrees to guarantee that at most a single subtree has marked nodes. The function nm t checks that the tree t has no marked nodes, and is defined as follows.

  nm (Leader v [ ])      = not (marked v)
  nm (Leader v (t : ts)) = nm t ∧ nm (Leader v ts)

Following similar thoughts as in Section 5, we can turn these functions into the following mutumorphisms on RTree α (together with lm′ as defined before), and thus obtain a linear time algorithm for the group organizing problem.

  p′ : RTree α* → Bool
  p′ (Root v)     = True
  p′ (Join t1 t2) = if lm′ t2 then True
                    else if nm′ t1 then p′ t2
                    else p′ t1 ∧ nm′ t2

  nm′ (Root v)     = not (marked v)
  nm′ (Join t1 t2) = nm′ t1 ∧ nm′ t2

Supervisor Chaining Problem

Similarly, for the supervisor chaining problem, we can specify the property that the marked nodes form a path chain as follows.

  p : Org α* → Bool
  p (Leader v [ ])      = True
  p (Leader v (t : ts)) = if marked v
                          then if nm t then p (Leader v ts)
                               else marked (getLeader t) ∧ p t ∧ nm (Leader v ts)
                          else if nm t then p (Leader v ts)
                               else p t ∧ nm (Leader v ts)

This property p can be expressed using the following mutumorphisms.

  p′ : RTree α* → Bool
  p′ (Root v)     = True
  p′ (Join t1 t2) = if lm′ t2
                    then if nm′ t1 then p′ t2
                         else lm′ t1 ∧ p′ t1 ∧ nm′ t2
                    else if nm′ t1 then p′ t2
                         else p′ t1 ∧ nm′ t2

Now we can use our optimization theorem to obtain a linear time algorithm which solves the supervisor chaining problem, as we did for the party planning problem.

6.2 Generality

Our approach is general (polytypic) enough to deal with maximum weightsum problems on generic data structures, not being restricted to lists or trees. As an example, we shall derive a linear time algorithm to solve the maximum two disjoint paths problem on series-parallel graphs. This problem was discussed in [TNS82], but as far as we know no practical linear algorithm has been given so far. The series-parallel graph is defined as follows.

  SPG ::= Base Vert Vert Edge
        | Series SPG SPG
        | Parallel SPG SPG

Here, Vert represents the type of vertices, and Edge represents the type of edges. Every graph has a single source and a single sink. The two composing data constructors are Series and Parallel: Series g1 g2 only makes sense when the sink of g1 is the source of g2, and Parallel g1 g2 only makes sense when g1 and g2 share the source and sink. Figures 3, 4, and 5 illustrate the meaning of the constructors.

  [Figure 3: Base graph]
  [Figure 4: Series operation]
  [Figure 5: Parallel operation]

For example, the graph in Figure 4 is represented by

  g = Series (Base u w e1) (Base w v e2).

The maximum two disjoint paths problem is defined as follows: given two vertices s and t, find two disjoint paths between s and t such that the sum of the edge weights is maximum. Since two disjoint paths between the vertices s and t can be seen as a cycle containing the vertices s and t, we can specify the property p of the problem by

  p : Vert → Vert → SPG → Bool
  p s t g = cycle g ∧ th s g ∧ th t g.

Here, cycle judges whether the marked edges in the graph form a cycle, and th v judges whether there exists a marked edge incident to the vertex v. They can be naturally defined as follows.

  cycle (Base v1 v2 e)    = marked e
  cycle (Series g1 g2)    = (cycle g1 ∧ nm g2) ∨ (nm g1 ∧ cycle g2)
  cycle (Parallel g1 g2)  = (cycle g1 ∧ nm g2) ∨ (nm g1 ∧ cycle g2) ∨ (span g1 ∧ span g2)

  th v g = anyMarked (inc v g)

We use span to judge whether the marked edges in the graph span between the source and the sink of the graph, nm to judge whether there are no marked edges in the graph, anyMarked to take a list of edges es and judge whether there exists a marked edge in es, and inc v g to gather all the edges of g which are incident to the vertex v.

  span (Base v1 v2 e)   = marked e
  span (Series g1 g2)   = span g1 ∧ span g2
  span (Parallel g1 g2) = (span g1 ∧ nm g2) ∨ (nm g1 ∧ span g2)

  nm (Base v1 v2 e)   = not (marked e)
  nm (Series g1 g2)   = nm g1 ∧ nm g2
  nm (Parallel g1 g2) = nm g1 ∧ nm g2

  inc v (Base v1 v2 e)   = if v == v1 ∨ v == v2 then [e] else [ ]
  inc v (Series g1 g2)   = inc v g1 ++ inc v g2
  inc v (Parallel g1 g2) = inc v g1 ++ inc v g2

By fusion calculation, we can get the following efficient recursive definition for th.

  th v (Base v1 v2 e)   = if v == v1 ∨ v == v2 then marked e else False
  th v (Series g1 g2)   = th v g1 ∨ th v g2
  th v (Parallel g1 g2) = th v g1 ∨ th v g2

Now the property description p s t is represented as mutumorphisms with the functions cycle, span, nm, th s, and th t. It follows from our optimization theorem that a practical linear time algorithm can be obtained.
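The tupled traversal behind this claim can be sketched directly in Haskell. The representations of Vert and Edge below, and the renaming of cycle and span to avoid Prelude clashes, are choices made here for illustration; the five predicates are computed in a single pass, exactly as the mutu tupling theorem prescribes.

  type Vert = Int
  type Edge = (Int, Bool)            -- an edge is its weight paired with its mark

  data SPG = Base Vert Vert Edge
           | Series SPG SPG
           | Parallel SPG SPG

  markedE :: Edge -> Bool
  markedE = snd

  -- (cycle, span, nm, th s, th t) computed together in one traversal
  props :: Vert -> Vert -> SPG -> (Bool, Bool, Bool, Bool, Bool)
  props s t = go
    where
      inc v v1 v2 e = (v == v1 || v == v2) && markedE e
      go (Base v1 v2 e) =
        (markedE e, markedE e, not (markedE e), inc s v1 v2 e, inc t v1 v2 e)
      go (Series g1 g2) =
        let (c1, sp1, n1, ths1, tht1) = go g1
            (c2, sp2, n2, ths2, tht2) = go g2
        in ( (c1 && n2) || (n1 && c2)
           , sp1 && sp2
           , n1 && n2
           , ths1 || ths2
           , tht1 || tht2 )
      go (Parallel g1 g2) =
        let (c1, sp1, n1, ths1, tht1) = go g1
            (c2, sp2, n2, ths2, tht2) = go g2
        in ( (c1 && n2) || (n1 && c2) || (sp1 && sp2)
           , (sp1 && n2) || (n1 && sp2)
           , n1 && n2
           , ths1 || ths2
           , tht1 || tht2 )

  -- the two-disjoint-paths property between s and t
  prop :: Vert -> Vert -> SPG -> Bool
  prop s t g = let (c, _, _, ths, tht) = props s t g in c && ths && tht

Keeping one best-weight candidate per class of this finite range, as in Figure 2, then gives the linear algorithm.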
6.3 Flexibility

Finally, we explain the flexibility of our derivation in coping with modifications of the specification of problems. We choose as our example the maximum segment sum problem, which is to compute the maximum of the sums of all segments (contiguous sublists) of a list. This is a quite well-known problem in the program calculation community [Bir89]. In the following, we will not only give a new solution to it, but also demonstrate that we can solve a set of related problems in a straightforward way, which would not be so simple by existing approaches.

The maximum segment sum problem is actually a maximum weightsum problem where the property is that all marked elements in a list should be adjacent (connected). This property can be specified as follows.

  conn [x]      = True
  conn (x : xs) = if marked x
                  then nm xs ∨ (marked (hd xs) ∧ conn xs)
                  else conn xs

If the list contains only a single element, then the property is satisfied. Otherwise, if the head is marked, then the remaining list should either have no marked elements or have connected marked elements starting with a marked head. If the head is not marked, then we check the remaining list recursively. As before, the function nm judges whether all the elements are unmarked.

  nm [x]      = not (marked x)
  nm (x : xs) = not (marked x) ∧ nm xs

The purpose is to derive a mutumorphic form for conn. Matching the definition of conn against the required form suggests generalizing the subexpression marked (hd xs) to a function, say mh:

  mh xs = marked (hd xs)

and it is easy to derive from the above definition that

  mh [x]      = marked x
  mh (x : xs) = marked x.

So we have successfully obtained the following mutumorphic form for conn (mutually defined with nm and mh):

  conn [x]      = True
  conn (x : xs) = if marked x
                  then nm xs ∨ (mh xs ∧ conn xs)
                  else conn xs

and application of the optimization theorem soon gives us a linear time algorithm. Although our linear algorithm may use a few more operations than that in [Bir89], our derivation is much easier.
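The tupling step for this instance can be written out as a small sketch (with list elements represented as weight/mark pairs, an assumption made here): conn, nm and mh are computed together in one traversal, and feeding this three-bit state through the candidate machinery of Figure 2 yields the linear maximum segment sum program.

  type MElem = (Int, Bool)           -- weight paired with mark

  markedM :: MElem -> Bool
  markedM = snd

  -- conn, nm and mh computed together in one traversal of a non-empty list
  connNmMh :: [MElem] -> (Bool, Bool, Bool)
  connNmMh [x]      = (True, not (markedM x), markedM x)
  connNmMh (x : xs) =
    let (conn, nm, mh) = connNmMh xs
        m              = markedM x
    in ( if m then nm || (mh && conn) else conn   -- conn (x:xs)
       , not m && nm                              -- nm   (x:xs)
       , m )                                      -- mh   (x:xs)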
Now consider an extension of the maximum segment sum problem where we are only interested in those segments containing only even numbers. The property for this extended problem is

  p xs = conn xs ∧ evens xs

where evens can be naturally defined by

  evens [x]      = even (weight x)
  evens (x : xs) = if marked x then even (weight x) ∧ evens xs else evens xs

It is easy to see that p can be defined as mutumorphisms with conn, evens, nm and mh, and we thus obtain a linear algorithm for solving this extended problem.

Similarly, we can consider many related segment problems along this line. For example, we may consider maximum sums of segments that have an even number of elements, by defining the following property

  p xs = conn xs ∧ even (mnum xs),

where mnum xs is the number of marked elements in xs. We can derive a linear algorithm by introducing a new function to generalize even (mnum xs),

  em xs = even (mnum xs)

deriving a recursive form for em, and proceeding with a calculation similar to the one above. It should be noted that deriving a linear time algorithm for this problem requires some creativity with the approaches in [Bir89, Jeu93], because the property p is not prefix-closed, which makes filter promotion difficult. Finally, we mention that the maximum segment sum problem can be generalized to trees, where a segment of a list is generalized to a set of connected nodes in a tree. It should be easy for the reader to derive a linear time algorithm to solve this generalized problem.

7 Related Work

Since the mid-1980s, there have been several powerful dynamic-programming approaches by which many NP-complete problems are reduced to linear time for families of recursively constructed graphs. The pioneering work is that on series-parallel graphs [TNS82]. The backbone concept is tree-width [RS86], which was also independently discovered as partial k-trees [Arn87] and is also discussed in terms of separators [Tho90a] and cliques [KT90]. The set of graphs with tree-width at most k is equal to the set of k-terminal graphs constructed by algebraic composition rules [ACPS93]. NP-complete problems on graphs, such as the Hamilton path problem, then become linear in the size of the graph (though exponential and beyond in the tree width) by a divide-and-conquer strategy following this algebraic construction. Much work has been done [Bod93, Arn95], but most of it is on individual graph problems.

Courcelle showed that whether a (hyper)graph with bounded tree width satisfies a closed monadic second order (MSOL, for short) formula of graphs is solvable in linear time [Cou90a]. Borie et al. further showed that the recognition, enumeration, and optimization problems specified by an extension of MSOL formulas are solvable in linear time [BPT92]. This graph variant of MSOL uses Inc(v, e), which means a vertex v is incident to an edge e, instead of the successor function of ordinary MSOL [Tho90b]. Though linear, these methods are still impractical, since they require the manipulation of huge tables. This arises from two reasons:

- The construction of tables reflects the decomposition of the primitive predicates into an accept function and a catamorphism whose range is finite (as in the proof of Lemma 4). However, their method starts from fixed basic predicates as primitives, whereas our method freely adapts the primitives as long as they are in the form of mutumorphisms.

- Their construction of tables causes an exponential blow-up at each occurrence of a quantifier. Our method replaces the occurrences of quantifiers with recursions, which are computationally more efficient.

Aspvall et al. proposed a method to reduce the run-time memory consumption by reducing the number of simultaneously needed tables [APT98]. Our work reduces the size of the tables directly, complementing their work.

Bird calculated a linear time algorithm for solving the maximum segment sum problem on lists [Bir89], which is a sort of maximum weightsum problem. Bird and de Moor demonstrated the derivation of a linear functional program for the party planning problem based on a relational calculus of high abstraction [BdM96]. They studied optimization problems in a more general way, and their algebraic approach can be applied to a much wider scope of problems [BdM96]. However, the developed theory is rather difficult for non-experts to use in deriving algorithms, and some problems treated in this paper could not easily be dealt with by their approach. In contrast, we focus on a useful class of optimization problems, the maximum weightsum problems, and propose a very simple calculational way to derive efficient linear algorithms. One significant reason for achieving this simplicity in derivation is that we recognize the importance of the finiteness of the range of a catamorphism, as in the optimization lemma, which has received little attention in [BdM96].

8 Conclusions

In this paper, we have presented a new method of deriving practical linear time algorithms for maximum weightsum problems on recursive data structures. From a specification represented as a functional program, our method obtains practical linear time algorithms by deriving a catamorphism whose range size is small. Though our method does not guarantee that the derived catamorphism has a range with the smallest number of elements, the range consists of about 10 or 20 elements in many cases. This means our derived linear time algorithms are practical in many cases. Furthermore, our method enables a more systematic, natural, and simpler derivation of linear time algorithms, and can flexibly cope with modifications of the specification.

Though currently we focus mainly on well-used recursive data structures such as lists and trees, our method should be able to deal with graphs of bounded tree width, because a class of graphs with bounded tree width can be represented as a recursive data structure in the sense of Section 3. For instance, our next target problems are linear time control flow analyses for structured procedural programs, for which the control flow graphs are known to have bounded tree width [Tho98].

Of course, our method has its limitations; the essential one is bounded tree width. For example, the two-dimensional maximum segment sum problem cannot be dealt with by our method, because a two-dimensional matrix corresponds to a grid graph, which has unbounded tree width.
Acknowledgment

This paper owes much to thoughtful and helpful discussions with the Tokyo CACA members. Thanks to Jeremy Gibbons for his comments on an earlier version of this paper.

References

[ACPS93] S. Arnborg, B. Courcelle, A. Proskurowski, and D. Seese. An algebraic theory of graph reduction. Journal of the Association for Computing Machinery, 40(5):1134-1164, 1993.

[AP84] A. Arnborg and A. Proskurowski. Linear time algorithms for NP-hard problems on graphs embedded in k-trees. Technical Report TRITA-NA-8404, Royal Institute of Technology, Sweden, 1984.

[APT98] B. Aspvall, A. Proskurowski, and J. A. Telle. Memory requirements for table computations in partial k-tree algorithms. In S. Arnborg and L. Ivansson, editors, Algorithm Theory - SWAT'98, pages 222-233, 1998. Lecture Notes in Computer Science, Vol. 1432, Springer-Verlag.

[Arn87] S. Arnborg. Complexity of finding embeddings in a k-tree. SIAM Journal on Algebraic and Discrete Methods, 8:277-287, 1987.

[Arn95] S. Arnborg. Decomposable structures, Boolean function representations, and optimization. In J. Wiedermann and P. Hajek, editors, Mathematical Foundations of Computer Science 1995, pages 21-36, 1995. Lecture Notes in Computer Science, Vol. 969, Springer-Verlag.

[BdM96] Richard Bird and Oege de Moor. Algebra of Programming. Prentice Hall, 1996.

[Bir87] Richard Bird. An introduction to the theory of lists. In Manfred Broy, editor, Logic of Programming and Calculi of Discrete Design, NATO ASI Series 36, pages 5-42. Springer-Verlag, 1987.

[Bir89] R. S. Bird. Algebraic identities for program calculation. The Computer Journal, 32(2):122-126, 1989.

[BLW87] M. W. Bern, E. L. Lawler, and A. L. Wong. Linear-time computation of optimal subgraphs of decomposable graphs. Journal of Algorithms, 8:216-235, 1987.

[Bod93] H. L. Bodlaender. A tourist guide through treewidth. Acta Cybernetica, 11:1-21, 1993.

[BPT92] Richard B. Borie, R. Gary Parker, and Craig A. Tovey. Automatic generation of linear-time algorithms from predicate calculus descriptions of problems on recursively constructed graph families. Algorithmica, 7:555-581, 1992.

[CLR90] T. H. Cormen, C. E. Leiserson, and R. L. Rivest. Introduction to Algorithms. The MIT Electrical Engineering and Computer Science Series. MIT Press, 1990.

[Cou90a] B. Courcelle. Graph rewriting: An algebraic and logic approach. In J. van Leeuwen, editor, Handbook of Theoretical Computer Science, volume B, chapter 5, pages 194-242. Elsevier Science Publishers, 1990.

[Cou90b] Bruno Courcelle. The monadic second-order logic of graphs. I. Recognizable sets of finite graphs. Information and Computation, 85(1):12-75, March 1990.

[Fok89] M. M. Fokkinga. Tupling and mutumorphisms. The Squiggolist, 1(4), 1989.

[Fok92] M. M. Fokkinga. Law and Order in Algorithmics. PhD thesis, University of Twente, Dept. INF, Enschede, The Netherlands, 1992.

[HITT97] Zhenjiang Hu, Hideya Iwasaki, Masato Takeichi, and Akihiko Takano. Tupling calculation eliminates multiple data traversals. In Proceedings of the 1997 ACM SIGPLAN International Conference on Functional Programming, pages 164-175, Amsterdam, The Netherlands, June 1997.

[Jeu93] J. Jeuring. Theories for Algorithm Calculation. PhD thesis, Faculty of Science, Utrecht University, 1993.

[KT90] I. Kriz and R. Thomas. Clique-sums, tree-decompositions and compactness. Discrete Mathematics, 81:177-185, 1990.

[MFP91] E. Meijer, M. Fokkinga, and R. Paterson. Functional programming with bananas, lenses, envelopes and barbed wire. In Proceedings of the Conference on Functional Programming Languages and Computer Architecture (LNCS 523), pages 124-144, Cambridge, Massachusetts, August 1991.

[RS86] N. Robertson and P. D. Seymour. Graph minors II. Algorithmic aspects of tree-width. Journal of Algorithms, 7:309-322, 1986.

[SF93] T. Sheard and L. Fegaras. A fold for all seasons. In Conference on Functional Programming Languages and Computer Architecture, pages 233-242, 1993.

[Tho90a] R. Thomas. A Menger-like property of tree-width: The finite case. Journal of Combinatorial Theory, Series B, 48:67-76, 1990.

[Tho90b] W. Thomas. Automata on infinite objects. In J. van Leeuwen, editor, Handbook of Theoretical Computer Science, volume B, chapter 4, pages 133-192. Elsevier Science Publishers, 1990.

[Tho98] Mikkel Thorup. All structured programs have small tree width and good register allocation. Information and Computation, 142:159-181, 1998.

[TNS82] K. Takamizawa, T. Nishizeki, and N. Saito. Linear-time computability of combinatorial problems on series-parallel graphs. Journal of the Association for Computing Machinery, 29:623-641, 1982.

[Wim87] T. V. Wimer. Linear Algorithms on k-terminal Graphs. PhD thesis, Clemson University, 1987. Report No. URI-030.