Multiprocessor scheduling

In computer science, multiprocessor scheduling is an optimization problem involving the scheduling of computational tasks in a multiprocessor environment.[1] The problem statement is: "Given a set J of jobs where job ji has length li and a number of processors m, what is the minimum possible time required to schedule all jobs in J on m processors such that none overlap?".[2] The problem is often called the minimum makespan problem: the makespan of a schedule is defined as the time it takes the system to complete all processes, and the goal is to find a schedule that minimizes the makespan. The problem has many variants.
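
To make the definition concrete, the following sketch (in Python; the job lengths and assignments are illustrative, not from the source) computes the makespan of a given assignment of jobs to processors. The optimization problem asks for the assignment that minimizes this value.

    def makespan(job_lengths, assignment, m):
        """job_lengths[i] is the length of job i; assignment[i] is its processor (0..m-1)."""
        finish = [0] * m
        for length, proc in zip(job_lengths, assignment):
            finish[proc] += length   # jobs on the same processor run back to back
        return max(finish)           # the system is done when the last processor is

    jobs = [3, 3, 2, 2, 2]
    print(makespan(jobs, [0, 1, 0, 1, 0], m=2))  # 7: processor 0 runs 3+2+2, processor 1 runs 3+2
    print(makespan(jobs, [0, 0, 1, 1, 1], m=2))  # 6: the optimum for this instance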

Algorithms

In the simplest case, the processors are identical (i.e., they have the same speed) and the jobs are independent (i.e., no job requires the output of another job). In this case, the multiprocessor scheduling problem is a variant of the multiway number partitioning problem: in both problems, the goal is to partition numbers into subsets with nearly equal sums. Multiprocessor scheduling on identical machines corresponds to the variant in which the objective is to minimize the largest subset sum, which is exactly the makespan. Many algorithms for multiway number partitioning, both exact and approximate, can be used to attain this objective.
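
As an illustration of this correspondence, the following brute-force sketch (feasible only for tiny instances; the exact and approximate partitioning algorithms mentioned above are what one would use in practice) treats the job lengths as numbers to be partitioned into m subsets and searches for the assignment whose largest subset sum, i.e. the makespan, is smallest.

    from itertools import product

    def min_largest_sum(numbers, m):
        best = None
        # Try every way of assigning each number to one of the m subsets.
        for assignment in product(range(m), repeat=len(numbers)):
            sums = [0] * m
            for x, part in zip(numbers, assignment):
                sums[part] += x
            if best is None or max(sums) < best:
                best = max(sums)
        return best

    print(min_largest_sum([3, 3, 2, 2, 2], m=2))  # 6, the minimum possible makespan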

A more complex case is when the processors have different speeds: when job i is scheduled to processor j, it takes size(i)/speed(j) time to complete. This case requires a more careful analysis.

  • The simple algorithm called greedy number partitioning, or the LPT algorithm (Longest Processing Time), sorts the jobs by their length, longest first, and then assigns each job to the processor on which it would finish earliest given the jobs already assigned (for identical processors, this is simply the processor with the smallest load so far). While originally developed for identical processors, it has good asymptotic convergence properties even with different speeds;[3] a sketch of this rule appears after this list.
  • Hochbaum and Shmoys,[4] who developed a PTAS for identical processors, extended their PTAS to handle processors with different speeds.
  • Epstein and Sgall[5] generalized this PTAS to handle more general objective functions. Let si (for i between 1 and m) be the completion time of processor i in a given schedule, so that the makespan is max(si). Instead of minimizing the objective function max(si), one can minimize the objective function max(f(si)), where f is any fixed function. Similarly, one can minimize the objective function sum(f(si)).
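
The sketch below, referred to in the first item above, shows the LPT rule adapted in the natural way to processors with different speeds: each job, longest first, is placed on the processor on which it would finish earliest. The sizes and speeds are made-up values, and ties are broken arbitrarily.

    def lpt_schedule(sizes, speeds):
        finish = [0.0] * len(speeds)      # current completion time of each processor
        assignment = {}
        for job in sorted(range(len(sizes)), key=lambda i: sizes[i], reverse=True):
            # completion time of this job on each processor, given the jobs already placed
            candidate = [finish[j] + sizes[job] / speeds[j] for j in range(len(speeds))]
            best = min(range(len(speeds)), key=lambda j: candidate[j])
            finish[best] = candidate[best]
            assignment[job] = best
        return assignment, max(finish)    # the schedule and its makespan

    print(lpt_schedule(sizes=[7, 5, 4, 3, 2], speeds=[2.0, 1.0]))
    # ({0: 0, 1: 1, 2: 0, 3: 0, 4: 1}, 7.0)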

The most complex case is when the jobs are dependent. For example, consider reading user credentials from a console, then using them to authenticate, and then, if authentication succeeds, displaying some data on the console. Each task can start only after the previous one has finished, so an ordering exists between the tasks. Such precedence constraints can be modelled as a partial order on the set of tasks, or equivalently as a directed acyclic graph in which an edge from one task to another means that the first must complete before the second can start. These constraints add further complication to the multiprocessor scheduling problem.
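
The console example can be written down as such a partial order. The sketch below (the task names are illustrative) records, for each task, the tasks it depends on, and uses Python's standard graphlib module to produce an order in which every task appears after all of its predecessors; any valid schedule must respect such an order.

    from graphlib import TopologicalSorter   # standard library, Python 3.9+

    # For each task, the tasks that must finish before it can start.
    depends_on = {
        "read_credentials": [],
        "authenticate":     ["read_credentials"],
        "display_data":     ["authenticate"],
    }

    print(list(TopologicalSorter(depends_on).static_order()))
    # ['read_credentials', 'authenticate', 'display_data']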

Static versus Dynamic

Multiprocessor scheduling algorithms are either static or dynamic. A scheduling algorithm is static if the decisions about which computational tasks are allocated to which processors are made before running the program. It is dynamic if these decisions are made at run time. For static scheduling algorithms, a typical approach is to rank the tasks according to their precedence relationships and use a list scheduling technique to schedule them onto the processors.[6]
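
The following is a minimal sketch of this approach under simplifying assumptions (identical processors, known task lengths, and a priority given simply by a task's depth in the precedence graph); practical static schedulers, such as those surveyed in [6], use more refined priorities, for example critical-path length. All names and numbers here are illustrative.

    def list_schedule(lengths, preds, m):
        # Rank tasks by depth in the precedence DAG (tasks with no predecessors first).
        depth = {}
        def rank(t):
            if t not in depth:
                depth[t] = 1 + max((rank(p) for p in preds[t]), default=0)
            return depth[t]
        order = sorted(lengths, key=rank)

        proc_free = [0] * m          # time at which each processor becomes idle
        finish = {}                  # finish time of each scheduled task
        schedule = {}
        for t in order:
            ready = max((finish[p] for p in preds[t]), default=0)  # all predecessors done
            j = min(range(m), key=lambda i: max(proc_free[i], ready))
            start = max(proc_free[j], ready)
            finish[t] = start + lengths[t]
            proc_free[j] = finish[t]
            schedule[t] = (j, start)              # (processor, start time)
        return schedule, max(finish.values())     # the schedule and its makespan

    lengths = {"a": 2, "b": 3, "c": 2, "d": 1}
    preds   = {"a": [], "b": [], "c": ["a"], "d": ["a", "b"]}
    print(list_schedule(lengths, preds, m=2))
    # ({'a': (0, 0), 'b': (1, 0), 'c': (0, 2), 'd': (1, 3)}, 4)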

See also

  • Job shop scheduling, a similar problem for scheduling jobs on machines. Some variants of multiprocessor scheduling and job shop scheduling are equivalent problems.

References

  1. Crescenzi, Pierluigi; Kann, Viggo (eds.). A Compendium of NP Optimization Problems.
  2. Garey, Michael R.; Johnson, David S. (1979). Computers and Intractability: A Guide to the Theory of NP-Completeness. W. H. Freeman and Company. p. 238. ISBN 978-0716710448. https://archive.org/details/computersintract0000gare. 
  3. Frenk, J. B. G.; Rinnooy Kan, A. H. G. (1987-05-01). "The Asymptotic Optimality of the LPT Rule". Mathematics of Operations Research 12 (2): 241–254. doi:10.1287/moor.12.2.241. ISSN 0364-765X. https://pubsonline.informs.org/doi/abs/10.1287/moor.12.2.241. 
  4. Hochbaum, Dorit S.; Shmoys, David B. (1988-06-01). "A Polynomial Approximation Scheme for Scheduling on Uniform Processors: Using the Dual Approximation Approach". SIAM Journal on Computing 17 (3): 539–551. doi:10.1137/0217033. ISSN 0097-5397. https://epubs.siam.org/doi/abs/10.1137/0217033. 
  5. Epstein, Leah; Sgall, Jiri (2004-05-01). "Approximation Schemes for Scheduling on Uniformly Related and Identical Parallel Machines". Algorithmica 39 (1): 43–57. doi:10.1007/s00453-003-1077-7. ISSN 1432-0541. https://doi.org/10.1007/s00453-003-1077-7. 
  6. Kwok, Yu-Kwong; Ahmad, Ishfaq (1999-12-01). "Static scheduling algorithms for allocating directed task graphs to multiprocessors". ACM Computing Surveys 31 (4): 406–471. doi:10.1145/344588.344618. ISSN 0360-0300.