Fault-tolerant Parallel Framework

Dynamic Parallel Schedules

The project aims at providing a framework for the development of applications running on high-performance servers made of PCs connected through a commodity network (Gigabit Ethernet). A first-generation framework was developed and used for a variety of applications between 1997 and 2000. A second-generation framework called Dynamic Parallel Schedules (DPS) is now operational. This framework relies on the creation of a parallel program execution graph, on the deployment of this graph onto computing nodes and threads, and on its execution, all carried out at run-time. The DPS framework runs on multiple platforms (Linux, Unix, Windows) and is available under the GPL license.
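As a rough illustration of the idea (this is not the actual DPS API), the following C++ sketch builds a small execution graph at run-time and pushes a data object through its operations; the deployment onto multiple nodes and threads is omitted:

    #include <functional>
    #include <iostream>
    #include <map>
    #include <string>
    #include <vector>

    // Hypothetical flow-graph node: a user-specified operation plus the
    // edges along which its output data objects travel.
    struct Node {
        std::function<int(int)> op;
        std::vector<std::string> successors;
    };

    int main() {
        // The graph is assembled at run-time, not at compile time.
        std::map<std::string, Node> graph;
        graph["read"]    = { [](int x) { return x; },     {"process"} };
        graph["process"] = { [](int x) { return x * x; }, {"write"}   };
        graph["write"]   = { [](int x) { return x; },     {}          };

        // Push one data object through the graph.
        int data = 7;
        std::string current = "read";
        for (;;) {
            data = graph[current].op(data);
            if (graph[current].successors.empty()) break;
            current = graph[current].successors.front();
        }
        std::cout << "result: " << data << "\n";  // prints: result: 49
        return 0;
    }

In the real framework, each operation executes in a thread on some computing node, and the graph structure may depend on run-time parameters such as the number of available nodes.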

Fault-Tolerant Dynamic Parallel Schedules

We are now developing a new version of the DPS framework which supports fault tolerance. This new version will be extremely useful for running parallel programs on clusters of hundreds of computing nodes: one or several nodes may crash or become unavailable, and the parallel program execution will nevertheless automatically recover and continue. For compute-intensive applications, fault tolerance adds an execution time overhead of only a few percent. Program recovery in case of node crashes is also relatively fast, thanks to periodic checkpointing between executing nodes and backup nodes.
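As a minimal sketch of the recovery principle, assuming a hypothetical application state and checkpoint interval (the actual DPS mechanisms are more elaborate), consider:

    #include <iostream>

    // Hypothetical application state; in practice this would be the
    // state of the operations running on a computing node.
    struct State { long iteration = 0; double accum = 0.0; };

    int main() {
        const long kCheckpointInterval = 1000;  // assumed interval
        State live, backup;                     // backup models the backup node
        bool crashed = false;

        for (long i = 1; i <= 10000; ++i) {
            live.iteration = i;
            live.accum += 1.0 / i;              // stand-in for real work
            if (i % kCheckpointInterval == 0)
                backup = live;                  // periodic checkpoint

            if (i == 5500 && !crashed) {        // simulate a node crash
                crashed = true;
                live = backup;                  // restore last checkpoint
                i = live.iteration;             // redo work done since then
            }
        }
        std::cout << "done at iteration " << live.iteration << "\n";
        return 0;
    }

Only the work performed since the last checkpoint has to be redone, which is why recovery remains fast when checkpoints are frequent enough.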

3D Volume Image Storage (2002)

Large 3D volumes need to be segmented into smaller subvolumes, both to ensure efficient access to the data and to allow the data to be stored on multiple disks. This is especially useful for large medical 3D volume data sets such as CT, MRI, or cryosection volumes. We propose a striped file library which distributes the subvolumes onto multiple disks (multiple subfiles). This library is available with complete source code.
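The sketch below illustrates one plausible striping scheme, assuming round-robin distribution of linearized subvolume indices (the library's actual layout may differ):

    #include <cstdint>
    #include <iostream>

    struct VolumeLayout {
        int64_t bx, by, bz;  // number of subvolumes along x, y and z
        int     disks;       // number of subfiles (one per disk)
    };

    // Linearize the subvolume coordinates, then stripe round-robin.
    int diskOf(const VolumeLayout& v, int64_t x, int64_t y, int64_t z) {
        int64_t index = (z * v.by + y) * v.bx + x;
        return static_cast<int>(index % v.disks);
    }

    // Byte offset of the subvolume within its subfile.
    int64_t offsetInSubfile(const VolumeLayout& v, int64_t x, int64_t y,
                            int64_t z, int64_t subvolumeBytes) {
        int64_t index = (z * v.by + y) * v.bx + x;
        return (index / v.disks) * subvolumeBytes;
    }

    int main() {
        // Example: a 2048x1216x1888 volume cut into 32x32x16 subvolumes,
        // striped over 60 disks (the Visible Human configuration below).
        VolumeLayout v{64, 38, 118, 60};
        std::cout << "disk "    << diskOf(v, 10, 5, 3)
                  << ", offset " << offsetInSubfile(v, 10, 5, 3, 32*32*16*3)
                  << "\n";
        return 0;
    }

Because consecutive subvolume indices land on different disks, neighbouring subvolumes can be fetched from several disks in parallel.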

Computer-Aided Parallelization Tool (1998)

This first-generation Computer-Aided Parallelization Tool (CAP) simplifies the development of parallel applications running on clusters of PCs. CAP enables programmers to specify the parallel program behavior by appropriate parallel constructs. Within these constructs, individual operations are user-specified methods with input and output data objects. The parallel constructs specify the flow of data objects between operations. Operations are mapped to the available threads at compile time. A second-generation parallelization framework relying on dynamic schedules is now available (see above).
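The following sketch, written in plain C++ rather than CAP's actual syntax, conveys the kind of split-process-merge pattern such constructs express: a split creates tokens (input data objects), each operation transforms its token, and a merge combines the outputs. The names and the fixed four-thread mapping are illustrative assumptions:

    #include <numeric>
    #include <thread>
    #include <vector>

    // A data object flowing between operations.
    struct Token { int begin, end; long partial = 0; };

    // User-specified operation: consumes an input token and produces an
    // output token (here, a partial sum over [begin, end)).
    Token process(Token t) {
        for (int i = t.begin; i < t.end; ++i) t.partial += i;
        return t;
    }

    int main() {
        // Split: divide [0, 1000) into four tokens.
        std::vector<Token> tokens;
        for (int i = 0; i < 4; ++i) tokens.push_back({i * 250, (i + 1) * 250});

        // Parallel construct: here a hard-coded one-thread-per-token mapping.
        std::vector<std::thread> workers;
        for (auto& t : tokens)
            workers.emplace_back([&t] { t = process(t); });
        for (auto& w : workers) w.join();

        // Merge: combine the output data objects (total is 499500).
        long total = std::accumulate(
            tokens.begin(), tokens.end(), 0L,
            [](long s, const Token& t) { return s + t.partial; });
        return total == 499500 ? 0 : 1;
    }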

Parallel File System Components (1998)

Parallel file system components enable developers to create programs which access in parallel disks located on the same PC or on different PCs. For high performance, programmers may use CAP to combine pipelined parallel file accesses with processing operations. For example, a parallel program running on one of the slave nodes may read several image parts from its local disks and at the same time process the previously read parts. When the program runs on many processors, high I/O and processing throughputs may be achieved simultaneously.
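A minimal double-buffering sketch of this overlap, with stand-in read and processing functions (readPart and processPart are illustrative, not part of the actual components):

    #include <future>
    #include <iostream>
    #include <numeric>
    #include <vector>

    // Stand-ins for a real disk read and a real processing operation.
    std::vector<char> readPart(int part) {
        return std::vector<char>(1024, static_cast<char>(part));
    }
    long processPart(const std::vector<char>& data) {
        return std::accumulate(data.begin(), data.end(), 0L);
    }

    int main() {
        const int nParts = 8;
        long total = 0;
        // Start reading part 0; then always read part p+1 in the
        // background while part p is being processed.
        auto pending = std::async(std::launch::async, readPart, 0);
        for (int p = 0; p < nParts; ++p) {
            std::vector<char> current = pending.get();  // wait for part p
            if (p + 1 < nParts)
                pending = std::async(std::launch::async, readPart, p + 1);
            total += processPart(current);              // overlaps the read
        }
        std::cout << "total: " << total << "\n";
        return 0;
    }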

Application: Extracting Slices from the Visible Human (1998)

One of the project's most spectacular results is the creation of a pipelined parallel program enabling Internet users to extract oblique slices and curved surfaces from the National Library of Medicine's Visible Human, a volume of 2048x1216x1888 colour pixels (~13 GB) with a resolution of 3 pixels/mm along the horizontal axes and 1 pixel/mm along the vertical axis. This volume is striped onto the disks as volume elements (extents) of size 32x32x16.

When a user asks for a slice having a given orientation and position, a request is sent to the master server PC, which forwards it to the storage and processing server PCs. These in turn split the global slice access request into approximately 450 extent access and slice part extraction requests (depending on the actual size and orientation of the requested slice). In a 5-PC (8 processors), 60-disk configuration, a global disk throughput of 108 MB/s is achieved and is overlapped with the processing operations required to extract and project the desired slice parts into display space. The server's master PC assembles the slice parts, compresses the resulting image (JPEG), and sends it to the Web client. Other applications have also benefited from the first-generation parallel schedule framework.
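As a rough illustration of how a single slice request fans out into hundreds of extent accesses, the sketch below samples an arbitrary, assumed slice plane and collects the 32x32x16 extents it crosses; the real server performs an exact intersection and distributes the resulting requests over the storage nodes:

    #include <iostream>
    #include <set>
    #include <tuple>

    struct Vec { double x, y, z; };

    int main() {
        // Assumed example slice: a point on the plane plus two axis vectors.
        Vec origin{512, 300, 900};
        Vec u{1, 0, 0}, v{0, 0.8, 0.6};
        const int W = 512, H = 512;     // requested slice size in pixels

        // Sample the plane and record which extents are touched.
        std::set<std::tuple<int, int, int>> extents;
        for (int j = 0; j < H; j += 8)  // coarse sampling keeps this short
            for (int i = 0; i < W; i += 8) {
                double x = origin.x + i * u.x + j * v.x;
                double y = origin.y + i * u.y + j * v.y;
                double z = origin.z + i * u.z + j * v.z;
                extents.insert(std::make_tuple(
                    int(x) / 32, int(y) / 32, int(z) / 16));
            }
        std::cout << extents.size() << " extent accesses\n";
        return 0;
    }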



<basile.schaeli@epfl(add: .ch)>
Last modified: 2007/09/26 21:22:12