TY - JOUR
T1 - Engineering parallel symbolic programs in GpH
AU - Loidl, Hans Wolfgang
AU - Trinder, Philip W.
AU - Hammond, Kevin
AU - Junaidu, Sahalu B.
AU - Morgan, Richard G.
AU - Peyton Jones, Simon L.
PY - 1999/10
Y1 - 1999/10
AB - We investigate the claim that functional languages offer low-cost parallelism in the context of symbolic programs on modest parallel architectures. In our investigation we present the first comparative study of the construction of large applications in a parallel functional language, in our case in Glasgow Parallel Haskell (GpH). The applications cover a range of application areas, use several parallel programming paradigms, and are measured on two very different parallel architectures. On the applications level the most significant result is that we are able to achieve modest wall-clock speedups (between factors of 2 and 10) over the optimised sequential versions for all but one of the programs. Speedups are obtained even for programs that were not written with the intention of being parallelised. These gains are achieved with relatively small programmer effort. One reason for the relative ease of parallelisation is the use of evaluation strategies, a new parallel programming technique that separates the algorithm from the co-ordination of parallel behaviour. On the language level we show that the combination of lazy and parallel evaluation is useful for achieving a high level of abstraction. In particular we can describe top-level parallelism, and also preserve module abstraction by describing parallelism over the data structures provided at the module interface ('data-oriented parallelism'). Furthermore, we find that the determinism of the language is helpful, as is the largely implicit nature of parallelism in GpH.
UR - http://www.scopus.com/inward/record.url?scp=0033335139&partnerID=8YFLogxK
U2 - 10.1002/(SICI)1096-9128(199910)11:12<701::AID-CPE443>3.0.CO;2-P
DO - 10.1002/(SICI)1096-9128(199910)11:12<701::AID-CPE443>3.0.CO;2-P
M3 - Article
SN - 1040-3108
VL - 11
SP - 701
EP - 752
JO - Concurrency: Practice and Experience
JF - Concurrency: Practice and Experience
IS - 12
ER -
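
As a rough illustration of the evaluation strategies mentioned in the abstract: the following minimal sketch is not code from the paper; it uses the modern Control.Parallel.Strategies module from the 'parallel' package (a descendant of the GpH strategies library) and an invented 'squares' example. It shows the separation the abstract describes: the algorithm is an ordinary function, and the parallel coordination is attached afterwards with 'using'.

import Control.Parallel.Strategies (using, parList, rdeepseq)

-- Algorithm: a plain sequential computation over a list.
squares :: [Int] -> [Int]
squares = map (\x -> x * x)

-- Coordination: the same algorithm with an evaluation strategy attached,
-- sparking each list element in parallel and evaluating it fully.
squaresPar :: [Int] -> [Int]
squaresPar xs = squares xs `using` parList rdeepseq

main :: IO ()
main = print (sum (squaresPar [1 .. 100000]))

Compiled with GHC's -threaded flag and run with +RTS -N, the strategy adds parallelism without changing the result, which reflects both the determinism and the largely implicit parallelism noted in the abstract.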