Algebraic Programming

Auto-parallelising and auto-optimising frameworks for algebraically expressed codes

Algebraic Programming (ALP) encompasses research in various software technologies that allow programming with explicit algebraic structures, combined with their automated parallelisation and optimisation.

Our view

Our view is that algebraic structures and properties determine the type of optimisations applicable to programs. Instead of software frameworks and programming models attempting to infer which optimisations are applicable, we instead submit that users should annotate programs explicitly with the algebraic knowledge they quite often already assume while coding. Optimising software frameworks may then immediately exploit this knowledge.

In contrast, when translating algorithms to code using contemporary tools, algebraic information is lost, only for compilers and other tools to re-discover it during optimisation passes. Instead of such a potentially lossy and often suboptimal bottom-up approach, ALP allows for an optimal top-down processing of programs.

We pursue this direction at various levels of the software stack: from compilers and IRs to low-level communication layers, DSLs, programming models, and template libraries. Our projects relate to ongoing, open research, each summarised below. We warmly invite everyone interested to download, try, use, and contribute to the projects collected here.


We present the main ALP project as a C++ realisation of the above vision. This project is a direct evolution of ALP/GraphBLAS that we have been developing since 2016. Whereas ALP/GraphBLAS focuses on generalised sparse linear algebra, we are currently investigating ALP for generalised dense linear algebraic programming. To have ALP generate efficient dense codes, we believe that novel compiler technologies such as MLIR are pivotal; see also our MLIR Linnea project.

Finally, in this day and age where multi-core programming has become mandatory even for driving commodity devices, auto-parallelisation and the integration of optimised parallel software into a wide diversity of software stacks is required. The LPF project addresses some of the barriers associated with resolving this critical challenge.

In summary, we currently maintain the following projects:

  • C++ ALP for Algebraic Programming in C++;
  • MLIR Linnea for high-level linear algebraic compiler transformations; and
  • Lightweight Parallel Foundations for high-performance and predictable distributed-memory communications used for auto-parallelisation.


The repositories maintained here are part of our research activities and, as such, are provided on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied, including, without limitation, any warranties or conditions of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE. See also the licenses associated with each individual repository found under this page.

C++ ALP, and ALP/GraphBLAS

The ALP project provides an auto-parallelising and auto-optimising C++17 Template Library that allows expressing sequential code with explicit linear algebraic annotations. The syntax follows closely that of the Standard Template Library (STL). ALP programmers need not be aware of any deep parallelisation or optimisation concerns: you write the maths, and the system does the rest!

Part of ALP is our hybrid shared- and distributed-memory parallel ALP/GraphBLAS. An earlier version was presented in detail in 2020, though its specification and implementation have evolved since. We are preparing an updated paper and associated version 1.0 of ALP/GraphBLAS.

ALP has several unique features that were initially introduced and evolved as part of ALP/GraphBLAS. These include:

  • algebraic type traits, which allow the framework to inspect, at compile time, whether e.g. operators have certain algebraic properties, and to adapt optimisation passes accordingly. For example, if a series of commutative operator applications is requested, then a framework could reorder that series in order to improve cache hit rates. Algebraic type traits may also be inspected by user codes and libraries implemented on top of ALP.
  • user processes, providing STL-compatible I/O through iterators, and providing both sequential and parallel I/O for speeding up workloads that interact with distributed data. Through the ALP Launcher abstraction, which encapsulates a group of user processes that collaboratively execute ALP codes, programs are furthermore easily integrated into auxiliary parallel contexts. This enables, for example, workers in a Spark cluster to temporarily band together and execute a parallel ALP/GraphBLAS program.
  • performance semantics. Implementations may differ in the performance semantics they define, but must specify clear costings for each ALP primitive. Cost dimensions include work, data movement, and memory usage. Performance semantics help gauge the scalability characteristics of executing ALP workloads on different datasets and different scales of compute systems, help algorithm designers decide on the best possible forms of an ALP program, and help systems select the best possible implementations for given workloads.

See the ALP project page for more details and how to get started!

MLIR Linnea

The MLIR Linnea dialect aims to enable the representation of high-level linear algebraic concepts within MLIR. This includes computations on structured matrices and vectors, where information on the symmetry of a matrix may, for example, drive the selection of a matrix decomposition algorithm or drive the optimised synthesis of the linear algebraic operations it involves.

The design of the MLIR Linnea dialect builds on previous research by Barthels et al. and the corresponding Linnea tool. The dialect operates on two-dimensional tensors as a custom type, and introduces new attributes and operations for reasoning over this matrix type. Properties are inferred and propagated through a symbolic engine. The proposed dialect sits one level of abstraction above Linalg (to which we lower) and additionally relies on the built-in MLIR tensor type. MLIR Linnea optimisations include a subset of those in the generalised matrix chain algorithm, while added functionalities include semiring annotations.

For more details and current progress, see the MLIR Linnea project page!

Lightweight Parallel Foundations

We are pleased to release the Lightweight Parallel Foundations (LPF) to the public. This communication layer is designed both to implement immortal algorithms and to enable their wide use. The LPF is not intended as a replacement for other high-performance communication libraries such as MPI or BSPlib, nor is it intended as a replacement for frameworks like Hadoop, Spark, or Giraph.

Instead, the LPF allows for the implementation of proven-optimal algorithms in a way that they can be reused from any other third-party parallel application, no matter which (parallel) framework that application is composed in. This works towards realising the vision of immortal algorithms: a small set of often-used and often performance-critical algorithms are ideally designed once, proven optimal once, implemented once, and reused forever.

The LPF enables the distributed-memory auto-parallelisation that users of the C++ ALP framework enjoy, and also enables its integration with arbitrary auxiliary application frameworks.

For more details, visit the project page or the corresponding short arXiv paper.

Feedback, bug reports, and contributions

Feel free to make use of the issue or discussion boards on the individual project pages; we encourage and look forward to free and open interactions. Our researchers are making use of the exact same system and are looking forward to interacting with you!

Albert-Jan N. Yzelman, Huawei Technologies Switzerland AG