Lecture 13: The Knapsack Problem

Outline of this Lecture

Introduction of the 0-1 Knapsack Problem.

A dynamic programming solution to this problem.

1

0-1 Knapsack Problem

Informal Description: We have computed n data files that we want to store, and we have W bytes of storage available. File i has size w_i bytes and takes v_i minutes to recompute. We want to avoid as much recomputing as possible, so we want to find a subset of files to store such that

the files have combined size at most W;

the total computing time of the stored files is as large as possible.

We cannot store parts of files; it is the whole file or nothing.

How should we select the files?

2

0-1 Knapsack Problem

Formal Description: Given two n-tuples of positive numbers

⟨v_1, v_2, ..., v_n⟩ and ⟨w_1, w_2, ..., w_n⟩,

and W > 0, we wish to determine the subset T ⊆ {1, 2, ..., n} (of files to store) that maximizes

sum_{i in T} v_i

subject to

sum_{i in T} w_i ≤ W.

Remark: This is an optimization problem.

Brute force: Try all 2^n possible subsets T.

Question: Any solution better than the brute-force?

3
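As a concrete sketch of the brute-force approach (a Python illustration, not part of the lecture; the instance data below is a hypothetical example), we can enumerate all 2^n subsets T and keep the best feasible one:

```python
from itertools import combinations

def knapsack_brute_force(values, weights, W):
    """Try all 2^n subsets T of {0, ..., n-1}; keep the best one of size at most W."""
    n = len(values)
    best_value, best_subset = 0, ()
    for r in range(n + 1):
        for T in combinations(range(n), r):
            size = sum(weights[i] for i in T)
            if size <= W:  # feasible: combined size at most W
                value = sum(values[i] for i in T)
                if value > best_value:
                    best_value, best_subset = value, T
    return best_value, best_subset

# Hypothetical instance: computing times, file sizes, storage capacity.
print(knapsack_brute_force([60, 100, 120], [10, 20, 30], 50))  # (220, (1, 2))
```

The double loop visits every subset exactly once, so the running time is Θ(2^n) times the cost of summing a subset; this is the exponential cost the rest of the lecture improves on.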

Recall of the Divide-and-Conquer

1. Partition the problem into subproblems.

2. Solve the subproblems.

3. Combine the solutions to solve the original one.

Remark: If the subproblems are not independent, i.e., if subproblems share sub-subproblems, then a divide-and-conquer algorithm repeatedly solves the common sub-subproblems. Thus, it does more work than necessary!

Question: Any better solution? Yes: dynamic programming (DP)!

4

The Idea of Dynamic Programming

Dynamic programming is a method for solving optimization problems.

The idea: Compute the solutions to the sub-subproblems once and store the solutions in a table, so that they can be reused (repeatedly) later.

Remark: We trade space for time.

5

The Idea of Developing a DP Algorithm

Step 1: Structure: Characterize the structure of an optimal solution.

– Decompose the problem into smaller problems, and find a relation between the structure of the optimal solution of the original problem and the solutions of the smaller problems.

Step 2: Principle of Optimality: Recursively define the value of an optimal solution.

– Express the solution of the original problem in terms of optimal solutions for smaller problems.

6

The Idea of Developing a DP Algorithm

Step 3: Bottom-up computation: Compute the value of an optimal solution in a bottom-up fashion by using a table structure.

Step 4: Construction of optimal solution: Construct an optimal solution from computed information.

Steps 3 and 4 may often be combined.

7

Remarks on the Dynamic Programming Approach

Steps 1-3 form the basis of a dynamic-programming solution to a problem.

Step 4 can be omitted if only the value of an optimal solution is required.

8

Developing a DP Algorithm for Knapsack

Step 1: Decompose the problem into smaller problems.



We construct an array V[0..n, 0..W].

For 1 ≤ i ≤ n and 0 ≤ w ≤ W, the entry V[i, w] will store the maximum (combined) computing time of any subset of files {1, 2, ..., i} of (combined) size at most w.

If we can compute all the entries of this array, then the array entry V[n, W] will contain the maximum computing time of files that can fit into the storage, that is, the solution to our problem.

9

Developing a DP Algorithm for Knapsack Step 2: Recursively define the value of an optimal solution in terms of solutions to smaller problems.

Initial Settings: Set

V[0, w] = 0 for 0 ≤ w ≤ W (no files considered yet),
V[i, w] = -∞ for w < 0 (illegal: no solution).

Recursive Step: Use

V[i, w] = max( V[i-1, w], v_i + V[i-1, w - w_i] )

for 1 ≤ i ≤ n and 0 ≤ w ≤ W: either file i is not stored (first term), or it is stored and the remaining capacity is w - w_i (second term).

Note that if w < w_i, then V[i-1, w - w_i] = -∞, so V[i, w] = V[i-1, w]; the recursive step is correct in any case.

11
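The recursive definition of V[i, w] can be written down almost verbatim as a memoized function (a Python sketch, not from the lecture; the function and variable names are my own, and the -∞ value handles the illegal case w - w_i < 0):

```python
from functools import lru_cache

def knapsack_recursive(values, weights, W):
    """Evaluate V(i, w) top-down, caching each entry so it is computed only once."""
    @lru_cache(maxsize=None)
    def V(i, w):
        if w < 0:
            return float('-inf')  # illegal: no subset has negative size
        if i == 0:
            return 0              # no files considered yet
        # File i is either not stored, or stored with capacity w - w_i left
        # (values/weights are 0-indexed, so file i lives at index i - 1).
        return max(V(i - 1, w), values[i - 1] + V(i - 1, w - weights[i - 1]))

    return V(len(values), W)

print(knapsack_recursive([60, 100, 120], [10, 20, 30], 50))  # 220
```

The cache is exactly the table V[0..n, 0..W] from Step 1, filled on demand rather than bottom-up as in Step 3.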

Developing a DP Algorithm for Knapsack

Step 3: Bottom-up computing of V[i, w] (using iteration, not recursion).

Bottom:

V[0, w] = 0 for all 0 ≤ w ≤ W.

Bottom-up computation: Computing the table using

V[i, w] = max( V[i-1, w], v_i + V[i-1, w - w_i] )

row by row.
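The bottom-up computation above can be sketched as follows (a Python illustration with assumed names, not the lecture's own code; the explicit size check weights[i-1] ≤ w stands in for the -∞ entries of the recursive definition):

```python
def knapsack_bottom_up(values, weights, W):
    """Fill the table V[0..n][0..W] row by row; V[n][W] is the answer."""
    n = len(values)
    # Row 0 is the bottom: V[0][w] = 0 for all 0 <= w <= W.
    V = [[0] * (W + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for w in range(W + 1):
            V[i][w] = V[i - 1][w]            # file i not stored
            if weights[i - 1] <= w:          # storing file i is legal
                V[i][w] = max(V[i][w],
                              values[i - 1] + V[i - 1][w - weights[i - 1]])
    return V[n][W]

print(knapsack_bottom_up([60, 100, 120], [10, 20, 30], 50))  # 220
```

Each of the (n+1)(W+1) entries is computed in constant time, so the table is filled in O(nW) time and space, in contrast to the Θ(2^n) brute force.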