Overview
GRIN is a compiler framework and an intermediate representation. It is short for Graph Reduction Intermediate Notation. GRIN could significantly improve the tooling and performance of functional programs, reduce the size of their executables, and enable functional technologies to target new platforms like WebAssembly.
Functional languages are compiled in three stages:
- Language frontend
- High-level optimizer (functional)
- Low-level optimizer (imperative)
While LLVM handles the last stage well, GRIN as a functional optimizer captures the original language’s semantics and can perform transformations that are infeasible at the LLVM level.
Currently the following language frontends are under development:
- Haskell
  The Haskell language evolves with the Glasgow Haskell Compiler. GHC development usually focuses on language features and high-level optimization, while the machine code generator gets less attention. GHC/GRIN combines GHC’s Haskell language frontend with the GRIN optimizer. It is a work in progress; check its current status.
- Idris
  Adding the GRIN optimizer to the Idris compiler pipeline will make programs faster and smaller. Idris/GRIN can compile many programs, but the runtime is a work in progress.
- Agda
  Plugging the GRIN optimizer in after the Agda frontend is on our roadmap, but currently Agda/GRIN is only an initial code stub.
GRIN aims to bring the benefits of whole program optimization to a wide range of functional programming languages.
Benefits For Programmers
GRIN helps to improve the industrial presence of Haskell.
Tooling
Good tooling is essential for industrial software development.
In order to get anywhere near feature parity with the tools of mainstream programming languages, we need to be able to inspect the whole program at once.
This can help with all stages of development: immediate feedback while typing, visual debugging and profiling.
With such runtime tooling it would be possible to show memory structures, debug laziness and visualize unevaluated expressions.
Having access to the whole program could improve the code editor experience too.
It would be possible to highlight optimization effects on source code, e.g. dead code/data, linear variable usage, laziness, strictness, tail call, unboxing, stack/heap allocation.
It seems feasible to implement these features using the Language Server Protocol and GRIN.
Smaller Executables
Whole program analysis helps the compiler to remove dead code and dead data fields more effectively. For example, it can remove unused type class instances. This results in much smaller executables. It also cuts down the number of external libraries and symbols referenced by the program binary.
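As a rough illustration (plain Haskell with names invented for this sketch, not GRIN output): only the Circle instance below is reachable from main, so a whole-program optimizer may drop the Square instance together with everything only it references.

    -- Hypothetical example: no Square value is ever created in this program,
    -- so whole-program dead code elimination can drop its instance and
    -- the helper it pulls in.
    module Main where

    class Area a where
      area :: a -> Double

    newtype Circle = Circle Double
    newtype Square = Square Double

    instance Area Circle where
      area (Circle r) = pi * r * r

    instance Area Square where
      area (Square s) = heavyFormat (s * s)   -- dead code path

    -- stands in for code that would drag in extra library dependencies
    heavyFormat :: Double -> Double
    heavyFormat x = x + 0

    main :: IO ()
    main = print (area (Circle 1.0))

An incremental compiler has to keep both instances, because some not-yet-compiled module might still use Square; a whole-program compiler knows that none does.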
Better Performance
Whole program optimization can remove lots of redundant computation, e.g. unnecessary laziness and redundant memory operations. These simplifications often make further optimizations possible. GRIN represents memory operations and laziness explicitly. This allows aggressive memory layout optimizations, e.g. unboxing, which turns heap values into stack or register values. GRIN also eliminates indirect function calls, which enables LLVM to perform more optimizations.
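A rough Haskell-level intuition (illustrative only, not an actual GRIN transformation): when the optimizer can prove that the accumulator of a fold is always demanded, it can evaluate it strictly and keep it in a register instead of allocating a thunk per step, roughly the difference between sumLazy and sumStrict below.

    module Main where

    import Data.List (foldl')

    -- Lazy left fold: builds a chain of (+) thunks on the heap for the accumulator.
    sumLazy :: [Int] -> Int
    sumLazy = foldl (+) 0

    -- Strict left fold: the accumulator is forced at every step and can stay in a register.
    sumStrict :: [Int] -> Int
    sumStrict = foldl' (+) 0

    main :: IO ()
    main = print (sumLazy [1 .. 1000], sumStrict [1 .. 1000])

GRIN performs this kind of reasoning on its own IR, where heap operations such as store and fetch, and the forcing of suspended computations, are explicit instructions.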
New Platforms
GRIN uses LLVM for machine code generation. LLVM provides robust tooling and supports all mainstream platforms. With this design choice the main platforms, i.e. x64, ARM and WebAssembly, can be supported easily, covering desktop, mobile and web.
Benefits For Researchers
GRIN provides a framework for functional language experimentation.
Analysis Framework
Whole program compilation makes it easy to observe and analyse programs. GHC/GRIN allows researchers to experiment with real-world functional programs. The GHC/GRIN compiler pipeline can serialize both the STG level and the GRIN level intermediate representation (IR) for the whole program. With this framework it is easy to convert large Haskell programs to a research IR. We also plan to support all GHC primitive operations in the GRIN interpreter and GRIN native code generator.
Related Work
The GRIN Project aims to utilize the most recent results of compiler research, especially pointer analysis and whole program optimization.
Whole program compilers
- A modern look at GRIN, an optimizing functional language back end
  Our recent paper gives an overview of GRIN-related projects and other whole program compilers, e.g. Boquist’s GRIN, UHC, JHC, LHC, HRC and MLton.
- Intel Labs Haskell Research Compiler / Measuring the Haskell Gap
  HRC is a research compiler that showed the performance effect of whole program optimization on Haskell.
- MLton
  MLton is the leading industrial-strength whole program compiler for Standard ML. It demonstrated that whole program compilation is feasible with low compile times.
Program analysis
- Souffle datalog compiler
  Souffle synthesizes a native parallel C++ program from a logic specification. It is used to implement points-to, control flow and other analyses efficiently (a toy points-to sketch in Haskell follows this list).
- P4F: Pushdown Control-Flow Analysis for Free
  P4F is an advanced control flow analysis. It can boost the effectiveness of the optimizer by providing sophisticated control flow information.
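To give a flavour of such analyses, here is a minimal inclusion-based (Andersen-style) points-to fixpoint in Haskell. The two constraint forms, the names and the naive solver are all invented for this sketch; real analyses, whether Souffle-generated or GRIN’s own heap-points-to, also handle loads, stores and function calls, and use much faster solving strategies.

    module PointsTo where

    import qualified Data.Map.Strict as M
    import qualified Data.Set as S

    type Var = String
    type Loc = String

    data Constraint
      = AddrOf Var Loc   -- v may point to the heap location l
      | Copy   Var Var   -- whatever w may point to, v may point to as well
      deriving (Show)

    type PointsTo = M.Map Var (S.Set Loc)

    -- Apply every constraint once to the current solution.
    step :: [Constraint] -> PointsTo -> PointsTo
    step cs pts = foldr apply pts cs
      where
        apply (AddrOf v l) acc = M.insertWith S.union v (S.singleton l) acc
        apply (Copy v w)   acc =
          M.insertWith S.union v (M.findWithDefault S.empty w acc) acc

    -- Iterate to a fixpoint; the domain is finite and only grows, so this terminates.
    solve :: [Constraint] -> PointsTo
    solve cs = go M.empty
      where
        go pts = let pts' = step cs pts
                 in if pts' == pts then pts else go pts'

    -- x = &a; y = x; z = y  ==>  z may point to a.
    example :: PointsTo
    example = solve [AddrOf "x" "a", Copy "y" "x", Copy "z" "y"]

In Datalog the same two rules take a couple of lines, and Souffle compiles them into an efficient parallel C++ solver, which is what makes this style of analysis practical for whole programs.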
Vectorisation
- ISPC: Intel SPMD Program Compiler
  Single Program Multiple Data (SPMD) is the programming model used by GPUs. ISPC implements the SPMD model on CPU SIMD vector instructions such as SSE and AVX. It shows that interprocedural data flow vectorisation can be much more performant than loop vectorisation.
- FLRC: Automatic SIMD Vectorization for Haskell
  The Intel Labs Haskell Research Compiler used a SIMD vectorisation optimization that was specially designed for pure functional languages.
Memory management
- ASAP Memory Management
  ASAP (As Static As Possible) describes a compile-time automatic memory management system based on whole program analysis. It essentially generates a specialized garbage collector for each compiled program. With ASAP it seems possible to run Haskell programs without a run-time garbage collector.
- Gibbon / Compiling Tree Transforms to Operate on Packed Representations
  Gibbon is a research compiler that experiments with packed memory representations. It compiles functional programs to work with a pointerless data representation, which reduces cache misses and improves runtime performance. This technique essentially turns data (de)serialization into a raw memory copy (a small packed-tree sketch in Haskell follows this list).
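As a tiny Haskell sketch of the packed idea (the tag encoding and names are made up here, not Gibbon’s actual format): a tree is flattened into a preorder buffer of tags and values, and the consumer folds over that buffer in a single left-to-right scan instead of chasing pointers.

    module Packed where

    data Tree = Leaf Int | Node Tree Tree

    -- 0 tags a Leaf (followed by its value), 1 tags a Node (followed by both subtrees).
    pack :: Tree -> [Int]
    pack (Leaf n)   = [0, n]
    pack (Node l r) = 1 : pack l ++ pack r

    -- Sum a packed tree by scanning the buffer once;
    -- returns the sum and the not-yet-consumed rest of the buffer.
    sumPacked :: [Int] -> (Int, [Int])
    sumPacked (0 : n : rest) = (n, rest)
    sumPacked (1 : rest)     = let (sl, rest')  = sumPacked rest
                                   (sr, rest'') = sumPacked rest'
                               in (sl + sr, rest'')
    sumPacked _              = error "malformed buffer"

    example :: Int
    example = fst (sumPacked (pack (Node (Leaf 1) (Node (Leaf 2) (Leaf 3)))))
    -- example == 6

In a real packed representation the buffer is a contiguous byte array, so writing a tree to disk or to the network becomes a plain memory copy rather than a pointer-chasing serialization pass.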
Support
The project is supported by these awesome backers.
If you’d like to join them, please consider becoming a backer or sponsor on Patreon.
Ask Us
Please ask if you have any questions (e.g. about code, design, research, support, etc.).
Email: [email protected]
Twitter: @csaba_hruska
FAQ
What is the difference between GHC and GRIN?
GHC is an incremental compiler and therefore cannot perform whole-program optimization, i.e. optimization across compilation units (with some exceptions at the highest level, e.g. rewrite rules).
GRIN is a whole-program optimizer, which goes all the way down to the level of primitive memory access operations.
Why don’t you improve GHC instead of GRIN?
Whole-program optimization is a fundamentally different design decision that cannot be easily retrofitted into GHC proper.
Instead, we try to reuse as many parts as possible, e.g. the GHC frontend.
Can you reuse the GHC runtime for GHC/GRIN?
No, because the GHC runtime is built for the STG memory model.
In contrast, there’s no uniform memory representation in GRIN.
Patreon Sponsors
Sam Griffin
Timothy Klim