A very (!) fast BrainFuck interpreter in Python

Here is a BrainFuck example:

+++++ +++++             initialize counter (cell #0) to 10
[                       use loop to set the next four cells to 70/100/30/10
> +++++ ++              add  7 to cell #1
> +++++ +++++           add 10 to cell #2
> +++                   add  3 to cell #3
> +                     add  1 to cell #4
<<<< -                  decrement counter (cell #0)
]
> ++ .                  print 'H'
> + .                   print 'e'
+++++ ++ .              print 'l'
.                       print 'l'
+++ .                   print 'o'
> ++ .                  print ' '
<< +++++ +++++ +++++ .  print 'W'
> .                     print 'o'
+++ .                   print 'r'
----- - .               print 'l'
----- --- .             print 'd'
> + .                   print '!'
> .                     print '\n'
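For reference, the core of such an interpreter fits in a few lines of plain Python. This is a minimal sketch (input `,` handling elided), not the repo's actual bf.py:

```python
def brainfuck(program):
    """Minimal BrainFuck interpreter: returns the program's output as a string."""
    # Pre-compute matching brackets so loop jumps are O(1).
    stack, jump = [], {}
    for i, c in enumerate(program):
        if c == '[':
            stack.append(i)
        elif c == ']':
            j = stack.pop()
            jump[i], jump[j] = j, i

    tape, ptr, pc, out = [0] * 30000, 0, 0, []
    while pc < len(program):
        c = program[pc]
        if c == '+':
            tape[ptr] = (tape[ptr] + 1) % 256
        elif c == '-':
            tape[ptr] = (tape[ptr] - 1) % 256
        elif c == '>':
            ptr += 1
        elif c == '<':
            ptr -= 1
        elif c == '.':
            out.append(chr(tape[ptr]))
        elif c == '[' and tape[ptr] == 0:
            pc = jump[pc]   # skip the loop body
        elif c == ']' and tape[ptr] != 0:
            pc = jump[pc]   # jump back to the loop start
        pc += 1
    return ''.join(out)

# The Hello World program above, stripped of its comments:
HELLO = ('++++++++++[>+++++++>++++++++++>+++>+<<<<-]'
         '>++.>+.+++++++..+++.>++.<<+++++++++++++++'
         '.>.+++.------.--------.>+.>.')
```

Any non-command character is simply ignored, which is why BrainFuck programs can carry free-form comments between the ops.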

How to use the interpreter:

python2 ./bf.py hello.bf
Hello World!

Speeding things up

With Pypy

If you try to run a long BrainFuck program like mandel.b, you will realize our interpreter is pretty slow.

python2 ./bf.py examples/mandel.b
# wait 1h45

A first simple way of speeding things up is to use Pypy instead of CPython.

PYPY_VERSION="pypy2.7-v7.3.9"
wget "https://downloads.python.org/pypy/${PYPY_VERSION}-linux64.tar.bz2"
tar -xjf "${PYPY_VERSION}-linux64.tar.bz2"
mv "${PYPY_VERSION}-linux64" pypy
# Only 1m30 now!
./pypy/bin/pypy ./bf.py ./examples/mandel.b

With a JIT

The interpreter is actually written in RPython, so it can be statically compiled using the Pypy toolchain. Download the latest source of Pypy and uncompress it in a pypy-src folder. Note that you could also install rpython from PyPI.

wget "https://downloads.python.org/pypy/${PYPY_VERSION}-src.tar.bz2"
tar -xjf "${PYPY_VERSION}-src.tar.bz2"
mv "${PYPY_VERSION}-src" pypy-src

Then you can build an executable binary bf-c from the Python script bf.py:

# The compilation will take about 20s
python2 pypy-src/rpython/bin/rpython bf.py
# Mandelbrot now completes in 32s
./bf-c examples/mandel.b
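What makes bf.py translatable is that it sticks to the RPython subset and exposes the entry-point hook the toolchain looks for. Schematically (names follow the RPython convention; the repo's bf.py may differ in detail, and the interpreter body is elided here):

```python
import os

def entry_point(argv):
    # After translation, argv is the C-level argument list.
    if len(argv) < 2:
        os.write(2, b"usage: bf-c program.bf\n")
        return 1
    # ... read argv[1] and run the interpreter (elided) ...
    return 0

def target(driver, args):
    # The hook the RPython toolchain calls: it returns the function
    # that becomes the compiled binary's main().
    return entry_point, None
```

The same script still runs under plain CPython, which is why `python2 ./bf.py hello.bf` works before any translation.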

You can rebuild bf-c with --opt=jit to add a JIT to your BrainFuck interpreter:

# The compilation will take about 7m (you can speed this up by using Pypy)
python2 pypy-src/rpython/bin/rpython --opt=jit bf.py
# Mandelbrot now completes in about 5 seconds(!)
./bf-c examples/mandel.b
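The change --opt=jit exploits is a JitDriver hint placed in the interpreter's main loop, which tells the tracing JIT which variables identify a position in the user program (greens) and which are ordinary runtime state (reds). The pattern below follows the tutorial credited at the end of this README; it is an illustrative sketch (bracket handling elided), not the repo's exact code:

```python
try:
    from rpython.rlib.jit import JitDriver
except ImportError:
    # Stub so the same file also runs untranslated under plain CPython.
    class JitDriver(object):
        def __init__(self, **kwargs): pass
        def jit_merge_point(self, **kwargs): pass

# greens: values that pin down a point in the BrainFuck program
# reds:   everything else that mutates while interpreting
jitdriver = JitDriver(greens=['pc', 'program'], reds=['tape', 'ptr'])

def mainloop(program):
    tape, ptr, pc, out = [0] * 30000, 0, 0, []
    while pc < len(program):
        jitdriver.jit_merge_point(pc=pc, program=program,
                                  tape=tape, ptr=ptr)
        op = program[pc]
        if op == '+':
            tape[ptr] += 1
        elif op == '-':
            tape[ptr] -= 1
        elif op == '>':
            ptr += 1
        elif op == '<':
            ptr -= 1
        elif op == '.':
            out.append(chr(tape[ptr]))
        # '[' / ']' handling elided for brevity
        pc += 1
    return ''.join(out)
```

With this hint, hot BrainFuck loops get traced and compiled to machine code at runtime, which is where the 32s-to-5s jump comes from.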

Let's compare with a C implementation

I also looked for a fast BrainFuck interpreter written in C. After compiling it with gcc -O3 (gcc 6.2), mandel.b takes about 5 seconds to run, the same order of magnitude as the JIT version (without -O3, it takes 10 seconds).

gcc -O3 ./resources/bff4.c -o bff4
# About 5s
./bff4 < examples/mandel.b

Let's compile the BrainFuck directly

To round out these numbers, I finally tested a BrainFuck-to-C translator, then compiled the resulting C version of the mandel.b program. With -O3, the compiled mandel.b runs in a bit less than 1 second (without -O3, it takes 15 seconds).

gcc resources/brainfucc.c -o brainfucc
./brainfucc < examples/mandel.b > mandel.c
gcc -O3 mandel.c -o mandel
# 950ms
./mandel
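The translation itself is almost mechanical: each BrainFuck op maps to one C statement over a tape and a pointer. A toy version of such a translator, written here in Python for illustration (resources/brainfucc.c is a C program and differs in detail), could look like:

```python
# One C statement per BrainFuck op, over a tape `tape` and pointer `p`.
C_OPS = {
    '>': '++p;',           '<': '--p;',
    '+': '++*p;',          '-': '--*p;',
    '.': 'putchar(*p);',   ',': '*p = getchar();',
    '[': 'while (*p) {',   ']': '}',
}

def bf_to_c(program):
    """Translate a BrainFuck program into a standalone C source string."""
    lines = ['#include <stdio.h>',
             'static unsigned char tape[30000];',
             'int main(void) {',
             '    unsigned char *p = tape;']
    for c in program:
        if c in C_OPS:
            lines.append('    ' + C_OPS[c])
    lines.append('    return 0;')
    lines.append('}')
    return '\n'.join(lines)
```

Because the loops become native C `while` loops, gcc -O3 can then optimize them aggressively, which explains why this approach beats every interpreter above.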

Summary

Here is a summary of the speed gains I could observe on Ubuntu 16.10 (Core i7, 8 GB of RAM), running mandel.b:

  • the initial bf.py with CPython (2.7): about 1h45 (baseline)
  • the initial bf.py with Pypy (5.6.0): 1m30s (70x)
  • the bf-c without JIT: 32s (200x)
  • the bf-c with JIT: 5 seconds (1250x)
  • the bff4 C implementation: 5 seconds with -O3, 10 seconds without
  • the mandel binary built when compiling mandel.b directly: 1 second with -O3, 15 seconds without

The JIT addition contains code from this amazing tutorial on JITs.

If the BrainFuck interpreter bf.py is a bit hairy to look at, check out the step_by_step folder: it starts from the simplest interpreter, then improves it a bit, then restricts it to RPython code only, then adds the JIT-specific code, and finishes with some final optimizations.