inf"); else Console.Write(sPath[j].distance); string parent = vertexList[sPath[j].parentVert]. label; Console.Write("(" + parent + ") "); } } class chapter16 { static void Main() { Graph theGraph = new Graph(); theGraph.AddVertex("A"); theGraph.AddVertex("B"); theGraph.AddVertex("C"); theGraph.AddVertex("D"); theGraph.AddVertex("E"); theGraph.AddVertex("F"); theGraph.AddVertex("G"); theGraph.AddEdge(0, 1, 2); theGraph.AddEdge(0, 3, 1); theGraph.AddEdge(1, 3, 3); theGraph.AddEdge(1, 4, 10); theGraph.AddEdge(2, 5, 5); theGraph.AddEdge(2, 0, 4); theGraph.AddEdge(3, 2, 2); theGraph.AddEdge(3, 5, 8); theGraph.AddEdge(3, 4, 2); theGraph.AddEdge(3, 6, 4); theGraph.AddEdge(4, 6, 6); theGraph.AddEdge(6, 5, 1); Console.WriteLine();
Console.WriteLine("Shortest paths:"); Console.WriteLine(); theGraph.Path(); Console.WriteLine(); } }
The output from this program is a listing that shows, for each vertex, the shortest distance from the starting vertex A together with that vertex's parent on the path.
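As an independent cross-check of those distances, here is a small standalone sketch. It is our own code, not the book's Graph class, and its output format only approximates what the Path method prints; it runs Dijkstra's algorithm over the same edge list:

// A standalone cross-check (ours, not the book's Graph class) that runs
// Dijkstra's algorithm over the same edge list and prints each vertex's
// shortest distance from A together with its parent on that path.
using System;

class DijkstraCheck {

   static void Main() {
      string[] labels = {"A", "B", "C", "D", "E", "F", "G"};
      // (from, to, weight) triples copied from the AddEdge calls above
      int[,] edges = {{0,1,2}, {0,3,1}, {1,3,3}, {1,4,10}, {2,5,5}, {2,0,4},
                      {3,2,2}, {3,5,8}, {3,4,2}, {3,6,4}, {4,6,6}, {6,5,1}};

      int n = labels.Length;
      int[] dist = new int[n];
      int[] parent = new int[n];
      bool[] done = new bool[n];
      for (int i = 0; i < n; i++)
         dist[i] = int.MaxValue;
      dist[0] = 0;                          // start at vertex A

      for (int pass = 0; pass < n; pass++) {
         // pick the unvisited vertex with the smallest tentative distance
         int u = -1;
         for (int i = 0; i < n; i++)
            if (!done[i] && (u == -1 || dist[i] < dist[u]))
               u = i;
         if (dist[u] == int.MaxValue)
            break;                          // remaining vertices are unreachable
         done[u] = true;

         // relax every edge leaving u
         for (int e = 0; e < edges.GetLength(0); e++) {
            if (edges[e,0] != u)
               continue;
            int v = edges[e,1], w = edges[e,2];
            if (dist[u] + w < dist[v]) {
               dist[v] = dist[u] + w;
               parent[v] = u;
            }
         }
      }

      for (int i = 1; i < n; i++)
         Console.Write(labels[i] + "=" + dist[i] + "(" + labels[parent[i]] + ") ");
      Console.WriteLine();
   }
}

Worked by hand or with this sketch, the shortest distances from A come out as B = 2, C = 3, D = 1, E = 3, F = 6, and G = 5, with D and G serving as the key intermediate vertices.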
SUMMARY

Graphs are one of the most important data structures used in computer science. Graphs are used regularly to model everything from electrical circuits to university course schedules to truck and airline routes. Graphs are made up of vertices that are connected by edges. Graphs can be searched in several ways; the two most common are depth-first search and breadth-first search.

Another important algorithm performed on a graph is determining the minimum spanning tree, which is the minimum set of edges necessary to connect all the vertices in the graph.

The edges of a graph can have weights, or costs. When working with weighted graphs, an important operation is determining the shortest path from a starting vertex to the other vertices in the graph. This chapter looked at one algorithm for computing shortest paths, Dijkstra's algorithm.

Weiss (1999) contains a more technical discussion of the graph algorithms covered in this chapter, whereas LaFore (1998) contains very good practical explanations of all the algorithms we covered here.
EXERCISES

1. Build a weighted graph that models a section of your home state. Use Dijkstra's algorithm to determine the shortest path from a starting vertex to the last vertex.
2. Take the weights off the graph in Exercise 1 and build a minimum spanning tree.

3. Still using the graph from Exercise 1, write a Windows application that allows the user to search for a vertex in the graph using either a depth-first search or a breadth-first search.

4. Using the Timing class, determine which of the searches implemented in Exercise 3 is more efficient.
CHAPTER 17
Advanced Algorithms
In this chapter, we look at two advanced topics: dynamic programming and greedy algorithms. Dynamic programming is a technique that is often considered the reverse of recursion: a recursive solution starts at the top and breaks the problem down into smaller and smaller subproblems until the complete problem is solved, whereas a dynamic programming solution starts at the bottom, solving small problems and combining their results to form an overall solution to the big problem. A greedy algorithm is an algorithm that looks for "good solutions" as it works toward the complete solution. These good solutions, called local optima, will hopefully lead to the correct final solution, called the global optimum. The term "greedy" comes from the fact that these algorithms take whatever solution looks best at the time. Greedy algorithms are often used when it is almost impossible to find a complete solution, owing to time and/or space considerations, yet a suboptimal solution is acceptable.
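Although the greedy algorithms themselves are covered later in the chapter, the idea is easy to see in miniature. The sketch below is ours, not the book's: it makes change with U.S. coin denominations by always taking the largest coin that still fits. For these denominations the local choice happens to give the globally optimal (fewest-coin) answer; with other denomination sets the same greedy rule can miss the global optimum, which is why greedy results are described only as "good solutions."

// A minimal sketch (ours, not the book's) of the greedy idea: make change with
// as few coins as possible by always taking the largest coin that still fits.
// With U.S. denominations this local choice also happens to be globally optimal.
using System;

class GreedyChange {

   static void Main() {
      int[] coins = {25, 10, 5, 1};   // denominations, largest first
      int amount = 63;                // amount of change to make

      Console.Write(amount + " cents = ");
      foreach (int coin in coins)
         while (amount >= coin) {     // greedy step: take the best-looking coin now
            Console.Write(coin + " ");
            amount -= coin;
         }
      Console.WriteLine();            // prints: 63 cents = 25 25 10 1 1 1
   }
}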
DYNAMIC PROGRAMMING

Recursive solutions to problems are often elegant but inefficient. The C# compiler, like other language compilers, will not translate the recursive code into efficient machine code, resulting in an inefficient, though elegant, computer program.
Many programming problems that have recursive solutions can be rewritten using the techniques of dynamic programming. A dynamic programming solution builds a table, usually using an array, which holds the results of the different subsolutions. Finally, when the algorithm is complete, the solution is found in a distinct spot in the table.
A Dynamic Programming Example: Computing Fibonacci Numbers

The Fibonacci numbers can be described by the following sequence:

0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, . . .

There is a simple recursive function you can use to generate any specific number in this sequence. Here is the code for the function:

static long recurFib(int n) {
   if (n < 2)
      return n;
   else
      return recurFib(n - 1) + recurFib(n - 2);
}
And here is a program that uses the function:

static void Main() {
   int num = 5;
   long fibNumber = recurFib(num);
   Console.Write(fibNumber);
}
The problem with this function is that it is extremely inefficient. We can see exactly how inefficient this recursion is by examining the tree in Figure 17.1. The problem with the recursive solution is that too many values are recomputed during a recursive call. If the compiler could keep track of the values that are already computed, the function would not be nearly so inefficient. We can design an algorithm using dynamic programming techniques that is much more efficient than the recursive algorithm.
FIGURE 17.1. Tree generated from Recursive Fibonacci Computation.
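The tree makes the redundancy plain: the same recurFib calls appear over and over. One way to "keep track of the values that are already computed" is memoization, the top-down variant of dynamic programming. The following sketch is ours, not the book's; it caches each result in an array the first time it is computed:

// A sketch (ours, not the book's) of memoization, the top-down variant of
// dynamic programming: cache each recurFib result so that no Fibonacci value
// is ever computed more than once.
using System;

class MemoFib {

   static long[] memo;   // memo[i] holds fib(i) once computed; -1 means not yet

   static long memoFib(int n) {
      if (n < 2)
         return n;
      if (memo[n] != -1)
         return memo[n];                          // already computed: just look it up
      memo[n] = memoFib(n - 1) + memoFib(n - 2);  // compute once, remember forever
      return memo[n];
   }

   static void Main() {
      int num = 40;
      memo = new long[num + 1];
      for (int i = 0; i <= num; i++)
         memo[i] = -1;
      Console.Write(memoFib(num));   // 102334155, reached with only about num additions
   }
}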
An algorithm designed using dynamic programming techniques starts by solving the simplest subproblem it can solve, using that solution to solve more complex subproblems until the complete problem is solved. The solutions to each subproblem are typically stored in an array for easy access. We can easily comprehend the essence of dynamic programming by examining the dynamic programming algorithm for computing a Fibonacci number. Here's the code:

static long iterFib(int n) {
   if ((n == 1) || (n == 2))
      return 1;
   else {
      long[] val = new long[n + 1];
      val[1] = 1;
      val[2] = 1;
      for(int i = 3; i <= n; i++)
         val[i] = val[i - 1] + val[i - 2];
      return val[n];
   }
}
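A quick check, not part of the book's listing, that the iterative version agrees with the recursive one (it assumes iterFib is defined in the same class):

// Quick check (ours): the iterative version reproduces the same sequence.
static void Main() {
   for (int n = 1; n <= 10; n++)
      Console.Write(iterFib(n) + " ");   // prints: 1 1 2 3 5 8 13 21 34 55
   Console.WriteLine();
   Console.WriteLine(iterFib(46));       // 1836311903; recurFib(46) would recompute
                                         // the same values billions of times
}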