
Scheduling Data Intensive Particle Physics Analysis Jobs on Clusters of PCs

S. Ponce, R.D. Hersch

International Journal of Computational Science and Engineering (IJCSE)

Scheduling policies are proposed for parallelizing data intensive particle physics analysis applications on computer clusters. Particle physics analysis jobs require the analysis of tens of thousands of particle collision events, each event typically requiring 200 ms of processing time and 600 KB of data. Many jobs are launched concurrently by a large number of physicists. At first sight, particle physics jobs seem easy to parallelize, since particle collision events can be processed independently of one another. However, since large amounts of data need to be accessed, the real challenge lies in making efficient use of the underlying computing resources. We propose several job parallelization and scheduling policies aimed at reducing job processing times and at increasing the sustainable load of a cluster server. The complexity of each policy is analysed as a measure of the scalability of the system.
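
As a back-of-the-envelope illustration of why data access dominates, the Python sketch below estimates the sequential processing time, the total data volume, and the I/O rate needed to keep a single processing stream busy. The job size of 50,000 events is an assumed, illustrative value; only the per-event figures come from the abstract above.

    # Illustrative workload estimate for a single analysis job.
    EVENTS_PER_JOB = 50_000      # "tens of thousands" of events (assumed value)
    CPU_PER_EVENT_S = 0.2        # ~200 ms of processing per event
    DATA_PER_EVENT_KB = 600      # ~600 KB of data per event

    total_cpu_s = EVENTS_PER_JOB * CPU_PER_EVENT_S
    total_data_gb = EVENTS_PER_JOB * DATA_PER_EVENT_KB / 1e6

    print(f"sequential CPU time : {total_cpu_s / 3600:.1f} h")    # ~2.8 h
    print(f"data to read        : {total_data_gb:.1f} GB")        # ~30 GB
    print(f"I/O rate per stream : "
          f"{DATA_PER_EVENT_KB / 1024 / CPU_PER_EVENT_S:.1f} MB/s")  # ~2.9 MB/s

Sustaining roughly 3 MB/s per concurrently running stream, for many jobs at once, is what makes data placement and caching the central scheduling concern.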

Since particle collision events are usually reused by several jobs, cache-based job splitting strategies considerably increase cluster utilisation and reduce job processing times. Compared with straightforward job scheduling on a processing farm, cache-based first-in first-out (FIFO) job splitting reduces average response times by an order of magnitude and cuts job waiting times in the system's queues from hours to minutes.
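
A minimal sketch of the idea, with hypothetical names and a simplified placement rule rather than the paper's actual implementation: jobs are taken in FIFO order, and each job is split into per-node subjobs according to which node's disk cache already holds the requested events; events cached nowhere are spread round-robin.

    from collections import defaultdict, deque

    def split_job(event_ids, cache_map, nodes):
        """Split a job's event list into per-node subjobs.
        cache_map maps event_id -> node whose disk cache holds it.
        Uncached events are assigned round-robin (illustrative policy)."""
        subjobs = defaultdict(list)
        rr = 0
        for ev in event_ids:
            node = cache_map.get(ev)
            if node is None:
                node = nodes[rr % len(nodes)]
                rr += 1
            subjobs[node].append(ev)
        return subjobs

    def schedule_fifo(job_queue, cache_map, nodes):
        """Dispatch jobs in arrival order; each subjob is queued on the
        node that already caches its events."""
        node_queues = {n: deque() for n in nodes}
        while job_queue:
            job_id, event_ids = job_queue.popleft()
            for node, events in split_job(event_ids, cache_map, nodes).items():
                node_queues[node].append((job_id, events))
        return node_queues

    # Example: two jobs sharing events 3 and 4.
    jobs = deque([("job1", [1, 2, 3, 4]), ("job2", [3, 4, 5, 6])])
    cache = {1: "nodeA", 2: "nodeA", 3: "nodeB"}
    print(schedule_fifo(jobs, cache, ["nodeA", "nodeB"]))

The point of the split is data locality: shared events are processed where they are already cached, so repeated accesses to tertiary storage are avoided.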

By scheduling jobs out of order, according to the availability of their collision events in the node disk caches, response times are further reduced, especially at high loads. In the delayed scheduling policy, job requests are accumulated during a time period, divided into subjob requests according to a parameterizable subjob size, and scheduled at the beginning of the next time period according to the availability of their data segments within the node disk caches. Delayed scheduling sustains a load close to the maximal theoretically sustainable load of a cluster, but at the cost of longer average response times. We also propose an adaptive delay scheduling approach, where the scheduling delay is adapted to the current load. This latter approach sustains very high loads and offers low response times at normal loads. We analyse the benefits of pipelining computation with accesses to tertiary storage and to the local disk caches. Pipelining tends to increase job throughput and allows the system to sustain higher loads.
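
The sketch below shows one possible form of the delayed policy, under stated assumptions: hypothetical names, a simple "send the segment to the node caching most of its events" placement rule, and a default subjob size of 512 events, none of which are taken from the paper. All requests accumulated during the last period are cut into fixed-size segments and placed at once at the start of the next period. An adaptive variant would simply recompute the length of the accumulation period from the measured load before each round.

    from collections import Counter, defaultdict

    def delayed_schedule(pending_jobs, cache_map, nodes, subjob_size=512):
        """Schedule all jobs accumulated during the last period at once.
        Each job's event list is cut into segments of subjob_size events
        (a tunable parameter); each segment goes to the node caching the
        largest share of its events, or round-robin if none are cached."""
        node_queues = defaultdict(list)
        rr = 0
        for job_id, event_ids in pending_jobs:
            for i in range(0, len(event_ids), subjob_size):
                segment = event_ids[i:i + subjob_size]
                # Count how many of the segment's events each node caches.
                hits = Counter(cache_map[ev] for ev in segment if ev in cache_map)
                if hits:
                    target = hits.most_common(1)[0][0]
                else:
                    target = nodes[rr % len(nodes)]
                    rr += 1
                node_queues[target].append((job_id, segment))
        return node_queues

Because placement decisions are made for a whole batch of requests rather than one job at a time, the scheduler can pack subjobs onto the caches that serve them best, which is what lets the policy approach the cluster's maximal sustainable load.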

Finally, we analyse the complexity of the different scheduling algorithms in terms of both space and time. The system is highly scalable and supports clusters of up to several tens of thousands of nodes.

Download the full paper: PDF (268 KB)

