Abstract
Most implementations of machine learning algorithms are based on special-purpose frameworks such as TensorFlow or PyTorch. While these frameworks are convenient to use, they introduce a multi-million-line code dependency that one has to trust, understand and potentially modify. As an alternative, this paper investigates a direct implementation of a state-of-the-art Convolutional Neural Network (CNN) in an array language. While our implementation requires 150 lines of code to define the special-purpose operators needed for CNNs, which frameworks such as TensorFlow and PyTorch provide out of the box, it outperforms these frameworks by factors of 2 and 3 on a simple example network, running on fixed hardware: a 64-core GPU-accelerated machine. The resulting specification is written in a rank-polymorphic data-parallel style, and it can be immediately leveraged by optimising compilers. Indeed, array languages make neural networks fast.
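To give a flavour of the kind of special-purpose CNN operator the abstract refers to, here is a minimal sketch in NumPy (not the paper's actual array-language code) of a 2D convolution written in a data-parallel style, operating on whole windows rather than explicit nested loops; the name `conv2d` is an illustrative assumption:

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2D cross-correlation of `image` with `kernel`,
    expressed data-parallel over all sliding windows at once
    (illustrative sketch, not the paper's implementation)."""
    kh, kw = kernel.shape
    # Gather every kh-by-kw window of the image in one shape-level step:
    windows = np.lib.stride_tricks.sliding_window_view(image, (kh, kw))
    # Contract each window with the kernel; 'ijkl,kl->ij' sums over
    # the window axes, leaving one output element per window position.
    return np.einsum('ijkl,kl->ij', windows, kernel)
```

For a 4x4 input and a 2x2 kernel this yields a 3x3 output, one element per window position; the whole computation is specified at the level of array shapes, which is the style an optimising array compiler can exploit.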
Original language | English |
---|---|
Title of host publication | ARRAY 2021: Proceedings of the 7th ACM SIGPLAN International Workshop on Libraries, Languages and Compilers for Array Programming |
Editors | Tze Meng Low, Jeremy Gibbons |
Publisher | Association for Computing Machinery |
Pages | 39-50 |
Number of pages | 12 |
ISBN (Electronic) | 9781450384667 |
DOIs | |
Publication status | Published - 17 Jun 2021 |
Event | 7th ACM SIGPLAN International Workshop on Libraries, Languages, and Compilers for Array Programming 2021 - Virtual, Online, Canada. Duration: 21 Jun 2021 → …
Conference
Conference | 7th ACM SIGPLAN International Workshop on Libraries, Languages, and Compilers for Array Programming 2021 |
---|---|
Abbreviated title | ARRAY 2021 |
Country/Territory | Canada |
City | Virtual, Online |
Period | 21/06/21 → … |
Keywords
- array language
- machine learning
ASJC Scopus subject areas
- Theoretical Computer Science
- Computational Theory and Mathematics