Parallelising large irregular programs: An experience with Naira

Sahalu B. Junaidu, Phil W. Trinder

Research output: Contribution to journal › Article

Abstract

Naira is a compiler for Haskell, written in Glasgow parallel Haskell. It exhibits modest, but irregular, parallelism that is determined by properties of the program being compiled, e.g. the complexity of the types and of the pattern matching. We report four experiments into Naira's parallel behaviour using a set of realistic inputs: namely the 18 Haskell modules of Naira itself. The issues investigated are: Does increasing input size improve sequential efficiency and speedup? To what extent do high communications latencies reduce average parallelism and speedup? Does migrating running threads between processors improve average parallelism and speedup at all latencies? © 2002 Published by Elsevier Science Inc.
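
For readers unfamiliar with Glasgow parallel Haskell (GpH), the language Naira is written in, the minimal sketch below illustrates the style of parallelism the abstract refers to. It is not Naira's actual code: Module, TypedModule, typeCheck and compileAll are hypothetical placeholders, and only the standard Control.Parallel.Strategies API is assumed. In this style a program sparks one lightweight thread per unit of work, so the parallelism obtained is irregular because it depends on how much work each unit (here, each module) actually contains.

-- Hypothetical sketch of GpH-style parallelism (not taken from Naira).
-- parMap sparks one lightweight thread per module; the runtime decides how
-- many sparks actually run in parallel, so observed parallelism varies with
-- the work each module contains.
import Control.Parallel.Strategies (parMap, rdeepseq)

type Module      = String   -- placeholder for a parsed source module
type TypedModule = String   -- placeholder for the type-checked result

-- Placeholder analysis whose cost would vary with the module's types
-- and pattern matching, as described in the abstract.
typeCheck :: Module -> TypedModule
typeCheck m = "checked:" ++ m

-- Evaluate every module's result fully (rdeepseq), each in its own spark.
compileAll :: [Module] -> [TypedModule]
compileAll = parMap rdeepseq typeCheck

main :: IO ()
main = mapM_ putStrLn (compileAll ["M1.hs", "M2.hs", "M3.hs"])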

Original language: English
Pages (from-to): 229-240
Number of pages: 12
Journal: Information Sciences
Volume: 140
Issue number: 3-4
DOI: 10.1016/S0020-0255(01)00173-6
Publication status: Published - Feb 2002


Cite this

Junaidu, Sahalu B.; Trinder, Phil W. / Parallelising large irregular programs: An experience with Naira. In: Information Sciences. 2002; Vol. 140, No. 3-4. pp. 229-240. DOI: 10.1016/S0020-0255(01)00173-6
