Wednesday, September 07, 2011

Parallelism & Performance: Why O-notation matters

UPDATE: lolplz found a bug in the way the suffix map was built in the original post. I have updated this post with that bug fixed; however, the content of the post has not changed significantly.

Aleksandar Prokopec presented a very interesting talk about Scala Parallel Collections at Scala Days 2011. I watched it today and it gave me some food for thought.

He opens his talk with an intriguing code snippet:

for {
  s <- surnames
  n <- names
  if s endsWith n
} yield (n , s)
Given what presumably are two sequences of strings, this code builds all pairs (name, surname) where the surname ends with the name.

The algorithm used is brute force; its order is roughly O(N^2) (it's basically a filtered Cartesian product). He then goes on to show that by just using parallel collections:

for {
  s <- surnames.par
  n <- names.par
  if s endsWith n
} yield (n , s)
He can leverage all the cores and cut the runtime by a factor of two or four, depending on the number of cores available on the machine where it's running. For example, the non-parallel version runs in 1040ms, while the parallel one runs in 575ms with two cores and in 305ms with four. That is indeed very impressive for such a minor change.

What concerns me is that, no matter how many cores you add, the problem as presented is still O(N^2). It is true that some problems can only be sped up by throwing more hardware at them, but most of the time, using the right data representation can yield even bigger gains.

Using this problem as an example, we can build a slightly more complex implementation that is hopefully a lot faster. The approach I'm taking is to build a suffix map for the surnames. There are more memory-efficient data structures for this, but for simplicity I'll use a Map:

val suffixMap = collection.mutable.Map[String, List[String]]().withDefaultValue(Nil)
for (s <- surnames; i <- 0 until s.length; suffix = s.substring(i)) 
    suffixMap(suffix) = s :: suffixMap(suffix)
Having built the suffix map, we can naively rewrite the loop to use it (instead of the Cartesian product):
for {
  n <- names
  s <- suffixMap(n)
} yield (n , s)
In theory, this loop is roughly O(N) (assuming the map is a HashMap), since it now does a constant number of operations for each name it processes, rather than processing all the names for each surname (I'm ignoring the fact that the map returns lists).
Note: The algorithmic order does not change much if we take the suffix map construction into account. Assume we have S surnames and N names: building the suffix map is O(S) and building the pairs is O(N), so the total order of the algorithm is O(N+S). If we assume that N is approximately equal to S, the order is O(N+N), which is the same as O(2N), which simplifies to O(N).
So, let's see if this holds up in real life. For this I wrote the following Scala script that runs a few passes of several different implementations:
#!/bin/sh
exec scala -deprecation -savecompiled "$0" "$@"
!#
def benchmark[T](name: String)(body: =>T) {
    val start = System.currentTimeMillis()
    val result = body
    val end = System.currentTimeMillis()
    println(name + ": " + (end - start))
    result
}

val surnames = (1 to 10000).map("Name" + _)
val names    = (1 to 10000).map("Name" + _)

val suffixMap = collection.mutable.Map[String, List[String]]().withDefaultValue(Nil)
for (s <- surnames; i <- 0 until s.length; suffix = s.substring(i)) 
    suffixMap(suffix) = s :: suffixMap(suffix)

for( i <- 1 to 5 ) {
    println("Run #" + i)
    benchmark("Brute force") {
        for {
            s <- surnames
            n <- names
            if s endsWith n
        } yield (n , s)
    }

    benchmark("Parallel") {
        for {
            s <- surnames.par
            n <- names.par
            if s endsWith n
        } yield (n , s)
    }

    benchmark("Smart") {
        val suffixMap = collection.mutable.Map[String, List[String]]().withDefaultValue(Nil)
        for (s <- surnames; i <- 0 until s.length; suffix = s.substring(i)) 
            suffixMap(suffix) = s :: suffixMap(suffix)
        for {
            n <- names
            s <- suffixMap(n)
        } yield (n , s)
    }

    benchmark("Smart (amortized)") {
        for {
            n <- names
            s <- suffixMap(n)
        } yield (n , s)
    }
}
There are four implementations:
  • Brute Force: the original implementation
  • Parallel: same as before, but using parallel collections.
  • Smart: using the suffix map (and measuring the map construction)
  • Smart (amortized): same as before, but with the suffix map cost amortized.

Benchmark Results

Running this script on a four-core machine, I get the following results:
Run #1
Brute force: 2158
Parallel: 1355
Smart: 153
Smart (amortized): 27
Run #2
Brute force: 1985
Parallel: 899
Smart: 82
Smart (amortized): 7
Run #3
Brute force: 1947
Parallel: 716
Smart: 69
Smart (amortized): 5
Run #4
Brute force: 1932
Parallel: 714
Smart: 67
Smart (amortized): 6
Run #5
Brute force: 1933
Parallel: 713
Smart: 68
Smart (amortized): 5
As expected, the parallel version runs significantly faster (about 2.7 times as fast as the naive one on this machine), but the implementation using the "Smart" approach runs nearly 30 times faster than the naive one. If we are able to amortize the cost of building the suffix map, the speed-up is even more staggering, at a whopping 380 times faster (although amortizing is not always possible)!

What we can conclude from this, paraphrasing Fred Brooks1, is that when it comes to performance there is no silver bullet. The basics matter, maybe now more than ever.

Analyzing algorithmic order in its most basic form (making huge approximations) is a very practical tool for solving hard performance problems. Moreover, the simple approach used here is itself parallelizable, so for a large enough problem it might gain additional speedup from parallel collections.

Don't get me wrong, I love Scala parallel collections, and parallel algorithms are increasingly important, but they are by no means a magical solution that can be applied to any problem (and I think Aleksandar would agree with me on this one).

Footnotes

1 misquoting is probably more accurate.

Tuesday, May 17, 2011

File locks in bash

For quite a while I've been looking for a portable utility that mimics Procmail's "lockfile" command. I didn't need all of its functionality, just locking a single file with support for a retry limit and a sleep parameter.

I finally implemented one using Bash's "noclobber" option. I don't know if it will work correctly on NFS, but it should work fine on most filesystems. Hopefully it will be useful to some of you.

#!/bin/bash
set -e
declare SCRIPT_NAME="$(basename $0)"

function usage {
 echo "Usage: $SCRIPT_NAME [options] <lock file>"
 echo "Options"
 echo "       -r, --retries"
 echo "           limit the number of retries before giving up the lock."
 echo "       -s, --sleeptime, -<seconds>"
 echo "           number of seconds between each attempt. Defaults to 8 seconds."
 exit 1
}

#Check that at least one argument is provided
if [ $# -lt 1 ]; then usage; fi

declare RETRIES=-1
declare SLEEPTIME=8 #in seconds
#Parse options
for arg; do
 case "$arg" in
  -r|--retries) shift; 
   if [ $# -lt 2 ]; then usage; fi; 
   RETRIES="$1"; shift 
   echo "$RETRIES" | egrep -q '^-?[0-9]+$' || usage #check that it's a number
   ;;
  -s|--sleeptime) shift; 
   if [ $# -lt 2 ]; then usage; fi; 
   SLEEPTIME="$1"; shift 
   echo "$SLEEPTIME" | egrep -q '^[0-9]+$' || usage #check that it's a number
   ;;
  --) shift ; break ;;
  -[[:digit:]]*) 
   if [ $# -lt 1 ]; then usage; fi; 
   SLEEPTIME=${1:1}; shift
   echo "$SLEEPTIME" | egrep -q '^[0-9]+$' || usage #check that it's a number
   ;;
  --*) usage;; #fail on other options
 esac
done

#Check that only one argument is left
if [ $# -ne 1 ]; then usage; fi

declare lockfile="$1"
for (( i=0; $RETRIES < 0 || i < $RETRIES; i++ )); do
 if ( set -o noclobber; echo "$$" > "$lockfile") 2> /dev/null; 
 then
  exit 0
 fi
 #Wait a bit
 sleep $SLEEPTIME
done

#Failed
cat "$lockfile"
exit 1

Thursday, April 14, 2011

Content-aware image resizing

Today I'm going to discuss a technique called Seam Carving, originally presented at SIGGRAPH 2007. The algorithm is fairly simple at its core but produces impressive results.

We will start from this image:

And remove 200 pixels from its width, turning it into this one:

Note that the image wasn't just resized: most of the detail is still there. The size reduction is rather aggressive, so there are some artifacts, but the results are quite good.

The algorithm works by repeatedly finding vertical seams of pixels and removing them. It chooses which seam to remove by finding the one with the minimal amount of energy.

The whole algorithm revolves around an energy function. In this case I'm using a function suggested in the original paper, based on the luminance of the image: we compute the vertical and horizontal derivatives of the image, take the absolute value of each, and add both. The derivative is approximated by a simple subtraction.

The following code computes the energy of the image. The intensities image is basically the grayscale version of the image, normalized between 0 and 1.
private static FloatImage computeEnergy(FloatImage intensities) {
        int w = intensities.getWidth(), h = intensities.getHeight();
        final FloatImage energy = FloatImage.createSameSize(intensities);
        for(int x = 0; x < w-1; x++) {
            for(int y = 0; y < h-1; y++) {
                //I'm approximating the derivatives by subtraction
                float e = abs(intensities.get(x,y)-intensities.get(x+1,y))
                        + abs(intensities.get(x,y)-intensities.get(x,y+1));
                energy.set(x,y, e);
            }
        }
        return energy;
    }

After applying this function to our image, we get the following:

You can observe that the edges are highlighted (i.e. have more energy). That is caused by our choice of energy function: since we're taking the derivatives and adding their absolute values, abrupt changes in luminance (i.e. edges) are highlighted.

The next step is where things start to get interesting. To find the minimal energy seam, we build an image with the accumulated minimal energy. We do so by computing an image where the value of each pixel is the minimum of the three pixels above it, plus the energy of that pixel:


We do so with the following code:

final FloatImage energy = computeEnergy(intensities);

    final FloatImage minima = FloatImage.createSameSize(energy);
    //First row is equal to the energy
    for(int x = 0; x < w; x++) {
        minima.set(x,0, energy.get(x,0));
    }

    //I assume that the rightmost pixel column in the energy image is garbage
    for(int y = 1; y < h; y++) {
        minima.set(0,y, energy.get(0,y) + min(minima.get(0, y - 1),
                minima.get(1, y - 1)));

        for(int x = 1; x < w-2; x++) {
            final float sum = energy.get(x,y) + min(min(minima.get(x - 1, y - 1),
                    minima.get(x, y - 1)),minima.get(x + 1, y - 1));
            minima.set(x,y, sum);
        }
        minima.set(w-2,y, energy.get(w-2,y) + min(minima.get(w-2, y - 1),
                minima.get(w-3, y - 1)));
    }

Once we do this, the last row contains the accumulated energy of every potential minimal seam.


With this, we search the last row for the entry with the minimum total value:

//We find the minimum seam
    float minSum = Float.MAX_VALUE;
    int seamTip = -1;
    for(int x = 1; x < w-1; x++) {
        final float v = minima.get(x, h-1);
        if(v < minSum) {
            minSum=v;
            seamTip=x;
        }
    }

And backtrace the seam:

//Backtrace the seam
    final int[] seam = new int[h];
    seam[h-1]=seamTip;
    for(int x = seamTip, y = h-1; y > 0; y--) {
        float left = x>0?minima.get(x-1, y-1):Float.MAX_VALUE;
        float up = minima.get(x, y-1);
        float right = x+1<w?minima.get(x+1, y-1):Float.MAX_VALUE;
        if(left < up && left < right) x=x-1;
        else if(right < up && right < left) x= x+1;
        seam[y-1]=x;
    }

Having found the minimum energy seam, all that is left to do is remove it.
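The removal itself is just a row-by-row copy that skips the seam pixel; this is the deleteVerticalSeam method from the full listing below:

    private static BufferedImage deleteVerticalSeam(final BufferedImage input, final int[] seam) {
        int w = input.getWidth(), h = input.getHeight();
        final BufferedImage out = new BufferedImage(w-1,h, BufferedImage.TYPE_INT_ARGB);

        for(int y = 0; y < h; y++) {
            //Copy the pixels to the left of the seam as-is
            for(int x = 0; x < seam[y]; x++) {
                    out.setRGB(x,y,input.getRGB(x, y));
            }
            //Shift the pixels to the right of the seam one position to the left
            for(int x = seam[y]+1; x < w; x++) {
                    out.setRGB(x-1,y,input.getRGB(x, y));
            }
        }
        return out;
    }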

If we repeat this process several times, removing one seam at a time, we end up with a smaller image. Check the following video to see this algorithm in action:


If you want to reduce an image vertically, you have to find horizontal seams. If you want to reduce it in both directions, you have to find which seam has the least energy (either the vertical or the horizontal one) and remove that one.

This implementation is quick & dirty and very simplistic. Many optimizations can be made to make it faster, and it is also quite incomplete. By priming the energy image, you can influence the algorithm to avoid distorting certain objects in the image, or to particularly pick one; a sketch of the first case follows.
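A minimal sketch of such priming, assuming the FloatImage class from this post (the method name and the flat penalty constant are my own):

    private static void protectRegion(FloatImage energy, int x0, int y0, int x1, int y1) {
        //Boost the energy inside the region so that minimal seams avoid crossing it
        for (int y = y0; y < y1; y++) {
            for (int x = x0; x < x1; x++) {
                energy.set(x, y, energy.get(x, y) + 1000f); //a large constant penalty
            }
        }
    }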

It is also possible to use this technique to enlarge an image (although I haven't implemented it), and by combining both methods one can selectively remove objects from an image.

The full source code for this demo follows. Have fun!

import javax.imageio.ImageIO;
import java.io.File;
import java.io.IOException;
import java.awt.image.BufferedImage;
import java.awt.*;
import static java.lang.Math.abs;
import static java.lang.Math.min;

public class SeamCarving
{
    public static void main(String[] args) throws IOException {
        final BufferedImage input = ImageIO.read(new File(args[0]));


        final BufferedImage[] toPaint = new BufferedImage[]{input};
        final Frame frame = new Frame("Seams") {

            @Override
            public void update(Graphics g) {
                final BufferedImage im = toPaint[0];
                if (im != null) {
                    g.clearRect(0,0,getWidth(), getHeight());
                    g.drawImage(im,0,0,this);
                }
            }
        };
        frame.setSize(input.getWidth(), input.getHeight());
        frame.setVisible(true);

        BufferedImage out = input;
        for(int i = 0; i < 200; i++) {
            out = deleteVerticalSeam(out);
            toPaint[0]=out;
            frame.repaint();
        }
    }

    private static BufferedImage deleteVerticalSeam(BufferedImage input) {
        return deleteVerticalSeam(input, findVerticalSeam(input));
    }

    private static BufferedImage deleteVerticalSeam(final BufferedImage input, final int[] seam) {
        int w = input.getWidth(), h = input.getHeight();
        final BufferedImage out = new BufferedImage(w-1,h, BufferedImage.TYPE_INT_ARGB);

        for(int y = 0; y < h; y++) {
            for(int x = 0; x < seam[y]; x++) {
                    out.setRGB(x,y,input.getRGB(x, y));
            }
            for(int x = seam[y]+1; x < w; x++) {
                    out.setRGB(x-1,y,input.getRGB(x, y));
            }
        }
        return out;
    }

    private static int[] findVerticalSeam(BufferedImage input) {
        final int w = input.getWidth(), h = input.getHeight();
        final FloatImage intensities = FloatImage.fromBufferedImage(input);
        final FloatImage energy = computeEnergy(intensities);

        final FloatImage minima = FloatImage.createSameSize(energy);
        //First row is equal to the energy
        for(int x = 0; x < w; x++) {
            minima.set(x,0, energy.get(x,0));
        }

        //I assume that the rightmost pixel column in the energy image is garbage
        for(int y = 1; y < h; y++) {
            minima.set(0,y, energy.get(0,y) + min(minima.get(0, y - 1),
                    minima.get(1, y - 1)));

            for(int x = 1; x < w-2; x++) {
                final float sum = energy.get(x,y) + min(min(minima.get(x - 1, y - 1),
                        minima.get(x, y - 1)),minima.get(x + 1, y - 1));
                minima.set(x,y, sum);
            }
            minima.set(w-2,y, energy.get(w-2,y) + min(minima.get(w-2, y - 1),minima.get(w-3, y - 1)));
        }

        //We find the minimum seam
        float minSum = Float.MAX_VALUE;
        int seamTip = -1;
        for(int x = 1; x < w-1; x++) {
            final float v = minima.get(x, h-1);
            if(v < minSum) {
                minSum=v;
                seamTip=x;
            }
        }

        //Backtrace the seam
        final int[] seam = new int[h];
        seam[h-1]=seamTip;
        for(int x = seamTip, y = h-1; y > 0; y--) {
            float left = x>0?minima.get(x-1, y-1):Float.MAX_VALUE;
            float up = minima.get(x, y-1);
            float right = x+1<w?minima.get(x+1, y-1):Float.MAX_VALUE;
            if(left < up && left < right) x=x-1;
            else if(right < up && right < left) x= x+1;
            seam[y-1]=x;
        }

        return seam;
    }

    private static FloatImage computeEnergy(FloatImage intensities) {
        int w = intensities.getWidth(), h = intensities.getHeight();
        final FloatImage energy = FloatImage.createSameSize(intensities);
        for(int x = 0; x < w-1; x++) {
            for(int y = 0; y < h-1; y++) {
                //I'm approximating the derivatives by subtraction
                float e = abs(intensities.get(x,y)-intensities.get(x+1,y))
                        + abs(intensities.get(x,y)-intensities.get(x,y+1));
                energy.set(x,y, e);
            }
        }
        return energy;
    }
}

import java.awt.image.BufferedImage;

public final class FloatImage {
    private final int width;
    private final int height;
    private final float[] data;

    public FloatImage(int width, int height) {
        this.width = width;
        this.height = height;
        this.data = new float[width*height];
    }

    public int getWidth() {
        return width;
    }

    public int getHeight() {
        return height;
    }

    public float get(final int x, final int y) {
        if(x < 0 || x >= width) throw new IllegalArgumentException("x: " + x);
        if(y < 0 || y >= height) throw new IllegalArgumentException("y: " + y);
        return data[x+y*width];
    }

    public void set(final int x, final int y, float value) {
        if(x < 0 || x >= width) throw new IllegalArgumentException("x: " + x);
        if(y < 0 || y >= height) throw new IllegalArgumentException("y: " + y);
        data[x+y*width] = value;
    }

    public static FloatImage createSameSize(final BufferedImage sample) {
        return new FloatImage(sample.getWidth(), sample.getHeight());
    }

    public static FloatImage createSameSize(final FloatImage sample) {
        return new FloatImage(sample.getWidth(), sample.getHeight());
    }

    public static FloatImage fromBufferedImage(final BufferedImage src) {
        final int width = src.getWidth();
        final int height = src.getHeight();
        final FloatImage result = new FloatImage(width, height);
        for(int x = 0; x < width; x++) {
            for(int y = 0; y < height; y++) {
                final int argb = src.getRGB(x, y);
                int r = (argb >>> 16) & 0xFF;
                int g = (argb >>> 8) & 0xFF;
                int b = argb & 0xFF;
                result.set(x,y, (r*0.3f+g*0.59f+b*0.11f)/255);
            }
        }
        return result;
    }
    public BufferedImage toBufferedImage(float scale) {
        final BufferedImage result = new BufferedImage(width, height, BufferedImage.TYPE_INT_ARGB);
        for(int x = 0; x < width; x++) {
            for(int y = 0; y < height; y++) {
                final int intensity = ((int) (get(x, y) * scale)) & 0xFF;
                result.setRGB(x,y,0xFF000000 | intensity | intensity << 8 | intensity << 16);
            }
        }
        return result;
    }
}

Friday, April 08, 2011

Nerding with the Y-combinator

What follows is a pointless exercise. Hereby I present to you the Y-combinator in Java, with generics:
public class Combinators {
    interface F<A,B> {
        B apply(A x);
    }
    //Used for proper type checking
    private static interface FF<A, B> extends F<FF<A, B>, F<A, B>> {}

    //The Y-combinator
    public static <A, B> F<A, B> Y(final F<F<A, B>,F<A, B>> f) {
        return U(new FF<A, B>() {
            public F<A, B> apply(final FF<A, B> x) {
                return f.apply(new F<A, B>() {
                    public B apply(A y) {
                        return U(x).apply(y);
                    }
                });
            }
        });
    }

    //The U-combinator
    private static <A,B> F<A, B> U(FF<A, B> a) {
        return a.apply(a);
    }

    static F<F<Integer, Integer>, F<Integer, Integer>> factorialGenerator() {
        return new F<F<Integer, Integer>, F<Integer, Integer>>() {
            public F<Integer, Integer> apply(final F<Integer, Integer> fact) {
                return new F<Integer, Integer>() {
                    public Integer apply(Integer n) {
                        return n == 0 ? 1 : n * fact.apply(n-1);
                    }
                };
            }
        };
    }

    public static void main(String[] args) {
        F<Integer, Integer> fact = Y(factorialGenerator());
        System.out.println(fact.apply(6));
    }
}
Having the Y-combinator implemented in Java serves no actual purpose (Java supports recursion), but it was interesting to see whether it could be done with proper generics.
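The same generator pattern works for other recursive functions; as a second toy example (my own addition, not part of the original class), a Fibonacci generator would look like this:

    static F<F<Integer, Integer>, F<Integer, Integer>> fibGenerator() {
        return new F<F<Integer, Integer>, F<Integer, Integer>>() {
            public F<Integer, Integer> apply(final F<Integer, Integer> fib) {
                return new F<Integer, Integer>() {
                    public Integer apply(Integer n) {
                        //'fib' is the open-recursive self reference supplied by Y
                        return n <= 1 ? n : fib.apply(n - 1) + fib.apply(n - 2);
                    }
                };
            }
        };
    }

Y(fibGenerator()).apply(10) then evaluates to 55.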

Monday, April 04, 2011

Paper programming

When I was a kid, all I wanted was a computer. Finally, when I was twelve, I made a bargain with my dad: I would give up the graduation trip in exchange for a Commodore 64 (graduation trips are customary in Argentina when you finish primary and secondary school).


We bought a "Segundamano" (lit. second hand) magazine and found a used one for U$S 200. My dad contacted the seller and we went to pick it up.

You have to keep in mind that this was 1989 and the social and economic landscape in Argentina was a mess. That year the inflation rate was over 3000% (it is not a typo), and those 200 dollars were a lot of money, so my dad really made an effort.

If you are still paying attention, you might have noticed that I never mentioned a Datasette nor a disk drive. All I got was a Commodore 64, plain and simple. But this was not going to stop me.

After we got it, my dad was concerned that I might get too obsessed with the computer, so he placed some additional constraints on when I could use it (the fact that we had only one TV might have also been a factor in his decision). I was allowed to use it only on Saturday mornings before noon.

In retrospect, I think that these two factors made me a better programmer.

At that time I had some experience programming, mostly in Logo: a Spanish-language Logo dialect that we used at school, where we had a Commodore 128 laboratory (about eight machines with disk drives and monitors). I started learning Logo when I was 8; by the time I was twelve I could also program a bit of BASIC, but not much, since literature was scarce.

One great thing about the Commodore 64 was that it came with BASIC, but most importantly, it came with a manual! The Commodore's BASIC manual was my first programming book.

What happened was that I was forced to work as if I had punched cards. I would spend most of my spare time during the week writing programs in a notebook I had. Thinking about ways to solve problems and reading and re-reading the C64 manual.

On Saturday mornings I would wake up at 6 am, hook up the computer to the TV and start typing whatever program I was working on that week. I would run it, debug it and improve it, and around noon my mom would start reminding me that time was up. So at that point I began listing the BASIC source code and copying it back into my notebook.

It was during this time that I rediscovered things like an optimized Bubble-sort, although I didn't know its name then (and I wouldn't learn it for many more years). I still vividly remember the moment. It was one afternoon that I was trying to figure out a way to sort an array, so I started playing with a deck of cards. I finally figured out that I could do several passes of comparison and exchanges on adjacent cards. And if I did exactly as many passes as the number of elements the array would be sorted. I also noticed that the largest element would be at the end of the array after the first pass, and the second largest would be in place after the second pass and so on. This allowed me to save about half the number of comparisons.

The most valuable thing that I learned during this time is that when it comes to programming, thinking hard about the problem at hand before doing anything pays off, big time!

I eventually saved enough for a Datasette (it cost 40 dollars; my dad paid half of it) and I was able to build much larger programs.

The biggest one I wrote was an interpreter with a multitasking runtime. It was very basic: I had no notion of a lexer, let alone a parser. Commands were composed of a single letter (the first one on the line) and several arguments. You could declare procedures with names and call them. It was like pidgin BASIC (since my exposure was to BASIC), but the procedure structure resembled Logo.

Programs were stored in memory in several arrays. There was a main one that contained statements, and a couple to hold procedure names and offsets into the main array.

The runtime divided the screen into 4 areas (this was a 40x25 text screen, so not much room was left for each), and each window could run a separate program. The runtime would do a round robin over the running programs, executing a single statement of each on each pass. For this it had to keep a separate program counter and variables for each. I even planned to add windowing calls to create custom windows, but I never got to finish it.

It was at this time that I also got interested in electronics, so I built a few contraptions controlled by the C64, but that's a tale for another post.

Wednesday, March 30, 2011

Matching Regular Expressions using its Derivatives

Introduction

Regular expressions are expressions that describe a set of strings over a particular alphabet. We will begin with a crash course on simple regular expressions. You can assume that we're talking about text and characters but in fact this can be generalized to any (finite) alphabet.

The definition of regular expressions is quite simple1; there are three basic (i.e. terminal) regular expressions:
  • The null expression (denoted as: ∅) which never matches anything
  • The empty expression, that only matches the empty string (I will use ε to represent this expression since it's customary2)
  • The character literal expression (usually called 'c'), that matches a single character

These three basic building blocks can be combined using some operators to form more complex expressions:
  • sequencing of regular expressions matches two regular expressions in sequence
  • alternation matches one of the two sub-expressions (usually represented by a '|' symbol)
  • the repetition operator (aka. Kleene's star) matches zero or more repetitions of the specified subexpression

Some examples will make this clearer:
  • The expression 'a' will only match the character 'a'. Similarly 'b' will only match 'b'. If we combine them by sequencing 'ab' will match 'ab'.
  • The expression 'a|b' will match either 'a' or 'b'.
  • If we combine sequencing with alternation, as in '(a|b)(a|b)' (the parentheses are just for clarity), it will match: 'aa', 'ab', 'ba' or 'bb'.
  • Kleene's star, as mentioned before, matches zero or more of the preceding subexpression. So the expression 'a*' will match: '', 'a', 'aa', 'aaa', 'aaaa', ...
  • We can do more complex combinations, such as 'ab*(c|ε)' that will match things like: 'a', 'ab', 'ac', 'abc', 'abb', 'abbc', ... that is any string starting with an 'a' followed by zero or more 'b''s and optionally ending in a 'c'.

Typical implementations of regular expression matchers convert the regular expression to an NFA or a DFA (which are kinds of finite state machines).

Anyway, a few weeks ago I ran into a post about using the derivative of a regular expression for matching.

It is quite an intriguing concept and worth exploring. The original post gives an implementation in Scheme3, but leaves out some details that make it a bit tricky to implement. I'll try to walk you through the concept, up to a working implementation in Java.

Derivative of a Regular Expression

So, first question: What's the derivative of a regular expression?

The derivative of a regular expression with respect to a character 'c' computes a new regular expression that matches what the original expression would match, assuming it had just matched the character 'c'.

As usual, some examples will (hopefully) help clarify things:
  • The expression 'foo' derived with respect to 'f' yields the expression: 'oo' (which is what's left to match).
  • The expression 'ab|ba' derived with respect to 'a', yields the expression: 'b'
    Similarly, the expression 'ab|ba' derived with respect to 'b', yields the expression: 'a'
  • The expression '(ab|ba)*' derived with respect to 'a', yields the expression: 'b(ab|ba)*'
As we explore this notion, we will build a RegEx class. The skeleton of this class looks like this:
public abstract class RegEx {
    public abstract RegEx derive(char c);
    public abstract RegEx simplify();
//...
    public static final RegEx unmatchable = new RegEx() { /* ... */ }
    public static final RegEx empty = new RegEx() { /* ... */ }
}
It includes constants for the unmatchable (null) and empty expressions, and derive and simplify methods which we will cover in detail (but not just now).

Before we go into detail about the rules of regular expression derivation, let's take a small (but necessary) detour and cover some details that will help us get to a working implementation.

The formalization of the derivative of a regular expression depends on a set of simplifying constructors that are necessary for a correct implementation. We will define these a bit more formally and build skeletons of their implementations as we go.

Let's begin with the sequencing operation; we define the following constructor (ignore the spacing):
seq( ∅, _ ) = ∅
seq( _, ∅ ) = ∅
seq( ε, r2 ) = r2
seq( r1, ε ) = r1
seq( r1, r2 ) = r1 r2
The first two definitions state that a sequence of the null expression (∅, which is unmatchable) with any other expression is the same as the null expression itself (i.e. it will not match anything).

The third and fourth definitions state that a sequence of the empty expression (ε, which matches only the empty string) with any other expression is the same as just the other expression (the empty expression is the identity with respect to the sequence operator).

The fifth and last definition just builds a regular sequence.

With this, we can draft a first implementation of a sequence constructor (in gang-of-four parlance, a factory method):
    public RegEx seq(final RegEx r2) {
        final RegEx r1 = this;
        if(r1 == unmatchable || r2 == unmatchable) return unmatchable;
        if(r1 == empty) return r2;
        if(r2 == empty) return r1;
        return new RegEx() {
             // ....
        };
    }
I'm leaving out the details of the returned RegEx for the time being; we will come back to them soon enough.

The alternation operator also has a simplifying constructor, analogous to the sequence operator's:
alt( ε, _  ) = ε
alt(  _, ε ) = ε
alt( ∅, r2 ) = r2
alt( r1, ∅ ) = r1
alt( r1, r2 ) = r1 | r2
If you look closely, the first two definitions are rather odd. They basically reduce an alternation with the empty expression to the empty expression (ε). This is because the simplifying constructors are used as part of a simplification function that reduces a regular expression to the empty expression if it matches the empty expression. We'll see how this works with the rest of it in a while.

The third and fourth definitions are fairly logical: an alternation with the unmatchable expression is the same as the other alternative (the unmatchable expression is the identity with respect to the alternation operator).

The last one is the constructor.

Taking these details into account, we can build two factory methods, one internal and one external:
    private RegEx alt0(final RegEx r2) {
        final RegEx r1 = this;
        if(r1 == empty || r2 == empty) return empty;
        return alt(r2);
    }

    public RegEx alt(final RegEx r2) {
        final RegEx r1 = this;
        if(r1 == unmatchable) return r2;
        if(r2 == unmatchable) return r1;
        return new RegEx() {
             //.....
        };
    }
The internal one, alt0, includes the first two simplification rules; the public one is user-facing. That is, it has to let you build something like 'ab*(c|ε)'.

Finally, the repetition operator (Kleene's star) has the following simplification rules:
rep( ∅ ) = ε
rep( ε ) = ε
rep( re ) = re*
The first definition states that a repetition of the unmatchable expression still matches the empty string.

The second definition states that a repetition of the empty expression is the same as matching the empty expression.

And as usual, the last one is the constructor for all other cases.

A skeleton for the rep constructor is rather simple:
    public RegEx rep() {
        final RegEx re = this;
        if(re == unmatchable || re == empty) return empty;
        return new RegEx() {
             // ....
        };
    }

Simplify & Derive

As hinted at earlier, derivation relies on a simplification function. This function reduces a regular expression to the empty regular expression (ε, epsilon) if it matches the empty string, or to the unmatchable expression (∅) if it does not.

The simplification function is defined as follows:

s(∅) = ∅
s(ε) = ε
s(c) = ∅
s(re1 re2) = seq(s(re1), s(re2))
s(re1 | re2) = alt(s(re1), s(re2))
s(re*) = ε
Note that this function depends on the simplifying constructors we described earlier on.

Suppose that we want to check whether the expression 'ab*(c|ε)' matches the empty string. If we do all the substitutions:

  1. seq(s(ab*),s(c|ε))
  2. seq(s(seq(s(a), s(b*))),s(alt(s(c), s(ε))))
  3. seq(s(seq(∅, s(ε))),s(alt(∅, ε)))
  4. seq(s(seq(∅, ε)),s(ε))
  5. seq(s(∅),ε)
  6. seq(∅,ε)

We get the null/unmatchable expression as a result. This means that the expression 'ab*(c|ε)' does not match the empty string.

If, on the other hand, we apply the reduction to 'a*|b':

  1. alt(s(a*), s(b))
  2. alt(ε, ∅)
  3. ε
We get the empty expression, hence the regular expression 'a*|b' will match the empty string.

The derivation function, given a regular expression and a character 'x', derives a new regular expression as if 'x' had just been matched.

Derivation is defined by the following set of rules:

D( ∅, _ ) = ∅
D( ε, _ ) = ∅
D( c, x ) = if c == x then ε else ∅
D(re1 re2, x) = alt(
                     seq( s(re1) , D(re2, x) ),
                     seq( D(re1, x), re2 )
                )
D(re1 | re2, x)  = alt( D(re1, x) , D(re2, x) )
D(re*, x)        = seq( D(re, x)  , rep(re) )
The first two definitions give the derivative of the unmatchable and empty expressions with respect to any character, which is the unmatchable expression.

The third definition states that a character matcher (for example 'a') derived with respect to the same character yields the empty expression; derived with respect to any other character it yields the unmatchable expression.

The fourth rule is a bit more involved, but trust me, it works.

The fifth rule states that the derivative of an alternation is the alternation of the derivatives (suitably simplified).

And the last one describes how to derive a repetition. For example, D('(ba)*', 'b') yields 'a(ba)*'.

We now have enough information to implement the simplify and derive methods; the complete implementation is given in the Implementation section below.

Matching

If you haven't figured it out by now, matching works by walking the string character by character and successively deriving the regular expression, until we either run out of characters, at which point we simplify the derived expression and check whether it matches the empty string, or we end up with the unmatchable expression, at which point it is impossible for the rest of the string to match.

An iterative implementation of a match method is as follows:

    public boolean matches(final String text) {
        RegEx d = this;
        String s = text;
        //The 'unmatchable' test is not strictly necessary, but avoids unnecessary derivations
        while(!s.isEmpty() && d != unmatchable) {
            d = d.derive(s.charAt(0));
            s = s.substring(1);
        }
        return d.simplify() == empty;
    }
If we match 'ab*(c|ε)' against the text "abbc", we get the following successive derivatives:
  1. after 'a': b*(c|ε) , rest: "bbc"
  2. after 'b': b*(c|ε) , rest: "bc"
  3. after 'b': b*(c|ε) , rest: "c"
  4. after 'c': ε , rest: ""
And since the last derivative simplifies to the empty expression, we have a match.

One interesting property of this matching strategy is that it is fairly easy to implement a non-blocking matcher, that is, one that matches incrementally as characters are received.
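A minimal sketch of such an incremental matcher, built on top of the RegEx class shown below (the class and method names here are my own):

    final class IncrementalMatcher {
        private RegEx current;

        IncrementalMatcher(RegEx re) { this.current = re; }

        //Feed one character as it arrives
        public void feed(char c) { current = current.derive(c); }

        //True if the input seen so far is a complete match
        public boolean matchesSoFar() { return current.simplify() == RegEx.empty; }

        //True if no continuation of the input can ever match
        public boolean failed() { return current == RegEx.unmatchable; }
    }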

Implementation

The following is the complete class with all methods implemented. I provide a basic implementation of the toString method (which is nice for debugging) and a helper text method, which is a shortcut to build an expression matching a sequence of characters. This class is fairly easy to modify to match over a different alphabet, such as arbitrary objects and Iterables instead of Strings (it can easily be generified).
public abstract class RegEx {
    public abstract RegEx derive(char c);
    public abstract RegEx simplify();

    public RegEx seq(final RegEx r2) {
        final RegEx r1 = this;
        if(r1 == unmatchable || r2 == unmatchable) return unmatchable;
        if(r1 == empty) return r2;
        if(r2 == empty) return r1;
        return new RegEx() {
            @Override
            public RegEx derive(char c) {
                return r1.simplify().seq(r2.derive(c))
                        .alt0(r1.derive(c).seq(r2));
            }

            @Override
            public RegEx simplify() {
                return r1.simplify().seq(r2.simplify());
            }

            @Override
            public String toString() {
                return r1 + "" + r2;
            }
        };
    }

    private RegEx alt0(final RegEx r2) {
        final RegEx r1 = this;
        if(r1 == empty || r2 == empty) return empty;
        return alt(r2);
    }

    public RegEx alt(final RegEx r2) {
        final RegEx r1 = this;
        if(r1 == unmatchable) return r2;
        if(r2 == unmatchable) return r1;
        return new RegEx() {
            @Override
            public RegEx derive(char c) {
                return r1.derive(c).alt0(r2.derive(c));
            }

            @Override
            public RegEx simplify() {
                return r1.simplify().alt0(r2.simplify());
            }

            @Override
            public String toString() {
                return "(" + r1 + "|" + r2 + ")";
            }
        };
    }

    public RegEx rep() {
        final RegEx re = this;
        if(re == unmatchable || re == empty) return empty;
        return new RegEx() {
            @Override
            public RegEx derive(char c) {
                return re.derive(c).seq(re.rep());
            }

            @Override
            public RegEx simplify() {
                return empty;
            }

            @Override
            public String toString() {
                String s = re.toString();
                return s.startsWith("(")
                        ? s + "*"
                        :"(" + s + ")*";
            }

        };
    }
    
    public static RegEx character(final char exp) {
        return new RegEx() {
            @Override
            public RegEx derive(char c) {
                return exp == c?empty:unmatchable;
            }

            @Override
            public RegEx simplify() {
                return unmatchable;
            }

            @Override
            public String toString() {
                return ""+ exp;
            }
        };
    }

    public static RegEx text(final String text) {
        RegEx result;
        if(text.isEmpty()) {
            result = empty;
        } else {
            result = character(text.charAt(0));
            for (int i = 1; i < text.length(); i++) {
                result = result.seq(character(text.charAt(i)));
            }
        }
        return result;
    }


    public boolean matches(final String text) {
        RegEx d = this;
        String s = text;
        //The 'unmatchable' test is not strictly necessary, but avoids unnecessary derivations
        while(!s.isEmpty() && d != unmatchable) {
            d = d.derive(s.charAt(0));
            s = s.substring(1);
        }
        return d.simplify() == empty;
    }

    private static class ConstantRegEx extends RegEx {
        private final String name;
        ConstantRegEx(String name) {
            this.name = name;
        }

        @Override
        public RegEx derive(char c) {
            return unmatchable;
        }

        @Override
        public RegEx simplify() {
            return this;
        }

        @Override
        public String toString() {
            return name;
        }
    }

    public static final RegEx unmatchable = new ConstantRegEx("<null>");
    public static final RegEx empty = new ConstantRegEx("<empty>");

    public static void main(String[] args) {
        final RegEx regEx = character('a')
                             .seq(character('b').rep())
                             .seq(character('c').alt(empty));
        if(regEx.matches("abbc")) {
            System.out.println("Matches!!!");
        }
    }
}

Disclaimer: Any bugs/misconceptions regarding this are my errors, so take everything with a grain of salt. Feel free to use the code portrayed here for any purpose whatsoever, if you do something cool with it I'd like to know, but no pressure.

Footnotes

  1. Sometimes the simpler something is, the harder it is to understand. See lambda calculus for example.
  2. I will not use ε (epsilon) to also represent the empty string since I think it is confusing, even though it is also customary.
  3. I think that the Scheme implementation in that article won't work if you use the repetition operator, but I haven't tested it. It might just as well be that my Scheme-foo is a bit rusty.

Monday, March 14, 2011

Pratt Parsers

Some time ago I came across Pratt parsers. I had never seen them before, and I found them quite elegant.

They were first described by Vaughan Pratt in the 1973 paper "Top down operator precedence". From a theoretical perspective they are not particularly interesting, but from an engineering point of view they are fantastic.

Let's start with a real-world example. This is the grammar from the expression language for my Performance Invariants agent:
/* omitted */
import static performance.compiler.TokenType.*;

public final class SimpleGrammar
    extends Grammar<TokenType> {
    private SimpleGrammar() {
        infix(LAND, 30);
        infix(LOR, 30);

        infix(LT, 40);
        infix(GT, 40);
        infix(LE, 40);
        infix(GE, 40);
        infix(EQ, 40);
        infix(NEQ, 40);

        infix(PLUS, 50);
        infix(MINUS, 50);

        infix(MUL, 60);
        infix(DIV, 60);

        unary(MINUS, 70);
        unary(NOT, 70);

        infix(DOT, 80);

        clarifying(LPAREN, RPAREN, 0);
        delimited(DOLLAR_LCURLY, RCURLY, 70);

        literal(INT_LITERAL);
        literal(LONG_LITERAL);
        literal(FLOAT_LITERAL);
        literal(DOUBLE_LITERAL);
        literal(ID);
        literal(THIS);
        literal(STATIC);
    }

    public static Expr<TokenType> parse(final String text) throws ParseException {
        final Lexer<TokenType> lexer = new JavaLexer(text, 0 , text.length());
        final PrattParser<TokenType> prattParser = new PrattParser<TokenType>(INSTANCE, lexer);
        final Expr<TokenType> expr = prattParser.parseExpression(0);
        if(prattParser.current().getType() != EOF) {
            throw new ParseException("Unexpected token: " + prattParser.current());
        }
        return expr;
    }

    private static final SimpleGrammar INSTANCE = new SimpleGrammar();
}

Pretty, isn't it?

The number represents a precedence; for infix operators this is quite obvious (it's basically a precedence table), but for clarifying and delimited expressions it sets the lower bound for the subexpression. In the grammar above, the delimited expression only accepts dot expressions and literals; parentheses, on the other hand, accept anything.
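To get a feel for how the numbers play out, here is a small usage sketch (my own example; it assumes Expr has a toString that shows the tree structure):

    final Expr<TokenType> e = SimpleGrammar.parse("a + b * c"); //throws ParseException
    //MUL (stickiness 60) binds tighter than PLUS (50), so this parses as a + (b * c).
    //Operators of equal stickiness associate to the left, because parseExpression
    //uses a strict '<' in its loop condition.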

So, how does the parser work? The PrattParser itself is rather elegant also:
/* omitted */
public final class PrattParser<T> {
    private final Grammar<T> grammar;
    private final Lexer<T> lexer;
    private Token<T> current;

    public PrattParser(Grammar<T> grammar, Lexer<T> lexer)
            throws ParseException
    {
        this.grammar = grammar;
        this.lexer = lexer;
        current = lexer.next();
    }

    public Expr<T> parseExpression(int stickiness) throws ParseException {
        Token<T> token = consume();
        final PrefixParser<T> prefix = grammar.getPrefixParser(token);
        if(prefix == null) {
            throw new ParseException("Unexpected token: " + token);
        }
        Expr<T> left = prefix.parse(this, token);

        while (stickiness < grammar.getStickiness(current())) {
            token = consume();

            final InfixParser<T> infix = grammar.getInfixParser(token);
            left = infix.parse(this, left, token);
        }

        return left;
    }

    public Token<T> current() {
        return current;
    }

    public Token<T> consume() throws ParseException {
        Token<T> result = current;
        current = lexer.next();
        return result;
    }
}

All the magic happens in the parseExpression method.

Given the current token, it fetches an appropriate prefix parser. Prefix parsers recognize simple expressions (such as literals, unary operators, delimited expressions, etc.). Then it processes infix parsers according to precedence (stickiness).

Pratt parsers are a variation of recursive descent parsers. The parseExpression method represents a generalized rule in the grammar.

At this point you're thinking there must be more to this. The trick must be in the Grammar class:
/* omitted */
public class Grammar<T> {
    private Map<T, PrefixParser<T>> prefixParsers = new HashMap<T, PrefixParser<T>>();
    private Map<T, InfixParser<T>>  infixParsers = new HashMap<T, InfixParser<T>>();

    PrefixParser<T> getPrefixParser(Token<T> token) {
        return prefixParsers.get(token.getType());
    }

    int getStickiness(Token<T> token) {
        final InfixParser infixParser = getInfixParser(token);
        return infixParser == null?Integer.MIN_VALUE:infixParser.getStickiness();
    }

    InfixParser<T> getInfixParser(Token<T> token) {
        return infixParsers.get(token.getType());
    }

    protected void infix(T ttype, int stickiness)
    {
        infix(ttype, new InfixParser<T>(stickiness));
    }

    protected void infix(T ttype, InfixParser<T> value) {
        infixParsers.put(ttype, value);
    }

    protected void unary(T ttype, int stickiness)
    {
        prefixParsers.put(ttype, new UnaryParser<T>(stickiness));
    }
    protected void literal(T ttype)
    {
        prefix(ttype, new LiteralParser<T>());
    }

    protected void prefix(T ttype, PrefixParser<T> value) {
        prefixParsers.put(ttype, value);
    }

    protected void delimited(T left, T right, int subExpStickiness) {
        prefixParsers.put(left, new DelimitedParser<T>(right, subExpStickiness, true));
    }

    protected void clarifying(T left, T right, int subExpStickiness) {
        prefixParsers.put(left, new DelimitedParser<T>(right, subExpStickiness, false));
    }
}

Nope. Just a couple of maps and some factory methods.

Even the infix and prefix parsers are rather simple:
public class InfixParser<T> {
    private final int stickiness;
    protected InfixParser(int stickiness) {
        this.stickiness = stickiness;
    }

    public Expr<T> parse(PrattParser<T> prattParser, Expr<T> left, Token<T> token)
            throws ParseException {
        return new BinaryExpr<T>(token, left, prattParser.parseExpression(getStickiness()));
    }

    protected int getStickiness() {
        return stickiness;
    }
}

class LiteralParser<T>
        extends PrefixParser<T> {
    public Expr<T> parse(PrattParser<T> prattParser, Token<T> token)
            throws ParseException {
        return new ConstantExpr<T>(token);
    }
}

class UnaryParser<T>
    extends PrefixParser<T> {
    private final int stickiness;

    public UnaryParser(int stickiness) {
        this.stickiness = stickiness;
    }

    public Expr<T> parse(PrattParser<T> prattParser, Token<T> token)
            throws ParseException {
        return new UnaryExpr<T>(token, prattParser.parseExpression(stickiness));
    }
}

The infix and prefix parsers just build an AST node, recursively parsing sub-expressions if necessary. If you want to see how delimited expressions work, you can browse the code on github.

These parsers have several interesting characteristics. One of them is that the grammar can be modified at runtime (even though it's not shown here) by adding/removing parsers, even while parsing. You can also easily add conditional grammars for sub-languages (think embedded SQL, for example).
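For instance, since the registration methods are protected, a subclass could expose them to add operators on the fly (a sketch of my own, not from the project):

    final class ExtensibleGrammar extends Grammar<TokenType> {
        //Registers a new infix operator at runtime, even between parses
        public void addInfix(TokenType ttype, int stickiness) {
            infix(ttype, stickiness);
        }
    }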

The code shown here only supports an LL(1) grammar (if I'm not mistaken), but adding additional lookahead should allow for LL(k) grammars.

Another interesting fact is that the way the parser is extended (by adding infix/prefix parsers) naturally yields grammars without left recursion.

One thing to note is that in my simple expression language, I'm not syntactically restricting the types of sub-expressions that infix operators receive, so that has to be checked in a later stage.

The only downside I can think of (besides the LL(k)-ness) is that these parsers are heavily geared towards expressions (everything is an expression), but with some creativity statements could be added. For example, you could treat the semicolon in Java/C/C++/etc. as an infix operator, as the sketch below suggests.
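A sketch of that idea (my own illustration; the SEMICOLON and ASSIGN token types are hypothetical, the rest are from the grammar above):

    final class StatementGrammar extends Grammar<TokenType> {
        StatementGrammar() {
            infix(SEMICOLON, 10); //the weakest operator: it sequences whole "statements"
            infix(ASSIGN, 20);
            infix(PLUS, 50);
            literal(ID);
            literal(INT_LITERAL);
        }
    }

With this, "a = 1 ; b = 2" parses as a left-leaning tree of ';' nodes, each subtree being one statement.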

Feel free to take all this code as yours for any purpose whatsoever. Happy hacking!

Friday, February 25, 2011

Performance Invariants (Part II)

A few days ago I wrote a post about performance invariants. The basic idea behind them is that there should be an easy way to declare performance constraints at the source code level, and that you should be able to check them every time you run your unit tests. To make a long story short, I have been a busy little bee for the last few days and managed to build a reasonable proof of concept.

Let's start with a simple example:
import performance.annotation.Expect;
...
class Test {
    @Expect("InputStream.read == 0")
    static void process(List<String> list) {
        //...
    }
}
What we're asserting here is that the number of calls to methods named read, defined in classes named InputStream, must be exactly zero.

If we want to exclude basically all IO, we can change the expectation to:
import performance.annotation.Expect;
...
class Test {
    @Expect("InputStream.read == 0 && OutputStream.write == 0")
    static void process(List<String> list) {
        //...
    }
}
Note that these are checked even for code that is called indirectly by the method process.

If we add an innocent-looking println:
    @Expect("InputStream.read == 0 && OutputStream.write == 0")
    static void process(List<String> list) {
        System.out.println("Hi!");
        //...
    }

And run it with the agent enabled by using:
~>java -javaagent:./performance-1.0-SNAPSHOT-jar-with-dependencies.jar \
       -Xbootclasspath/a:./performance-1.0-SNAPSHOT-jar-with-dependencies.jar Test
You should get something like the following output:
Hi!
Exception in thread "main" java.lang.AssertionError: Method 'Test.process' did not fulfil: InputStream.read == 0 && OutputStream.write == 0
         Matched: [#OutputStream.write=7, #InputStream.read=0]
         Dynamic: []
        at performance.runtime.PerformanceExpectation.validate(PerformanceExpectation.java:69)
        at performance.runtime.ThreadHelper.endExpectation(ThreadHelper.java:52)
        at performance.runtime.Helper.endExpectation(Helper.java:61)
        at Test.process(Test.java:17)
        at Test.main(Test.java:39)
This is witchcraft, I say! ... well, kind of.

Let's stop for a moment and consider what's going on here. Notice the first line of the output: it contains the text "Hi!" that we printed. This happens because the check is performed after the method process finishes. In the "Matched" line you can see how many times each method matched during the execution of the process method. Ignore the "Dynamic" list for just a second.

Let's try something a bit more interesting:
    class Customer { /*... */}
    //...
    @Expect("Statement.executeUpdate < ${customers.size}")
    void storeCustomers(List<Customer> customers) {
        //...
    }
Note the ${customers.size} in the expression; what this intuitively means is that we want to take the size of the list as an upper bound. It's like the poor programmer's big-O notation. If we run this, but execute two updates for each customer (instead of one, as asserted), we get:
Exception in thread "main" java.lang.AssertionError: Method 'Test.storeCustomers' did not fulfil: Statement.executeUpdate < ${customers.size}
         Matched: [#Statement.executeUpdate=50]
         Dynamic: [customers.size=25.0]
        at performance.runtime.PerformanceExpectation.validate(PerformanceExpectation.java:69)
        at performance.runtime.ThreadHelper.endExpectation(ThreadHelper.java:52)
        at performance.runtime.Helper.endExpectation(Helper.java:61)
        at Test.storeCustomers(Test.java:19)
        at Test.main(Test.java:42)
Check the "Dynamic" line: this time it contains the length of the list. In general, expressions of the form ${a.b.c.d} are called dynamic values. They refer to arguments, instance variables or static variables. For example:
  • ${static.CONSTANT} refers to a variable named CONSTANT in the current class.
  • ${this.instance} refers to a variable named 'instance' in the current object (only valid for instance methods).
  • ${n} refers to an argument named 'n' (this only works if the class has debug information)
  • ${3} refers to the fourth argument from the left (zero based indexing)
All dynamic values MUST yield a numeric value; otherwise a failure is reported at runtime. Currently the library will complain if any dynamic value is null.
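For instance, a hypothetical combination of these forms (the class and field names here are mine, not from the library):

import performance.annotation.Expect;
//...
class Repository {
    static final int MAX_QUERIES = 100;
    private java.util.List<java.sql.Connection> pool;

    @Expect("Statement.executeQuery <= ${static.MAX_QUERIES} && Connection.close <= ${this.pool.size}")
    void refresh() {
        //...
    }
}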

Although this is an early implementation, it is enough to start writing performance invariants that can be checked every time you run your unit tests.
Enough for today; in a follow-up post I'll go into the internals of the agent. If you want to browse the source code or try it out, grab a copy from github.

Tuesday, February 22, 2011

Performance Invariants

UPDATE: A newer post on this subject can be found here.

Let's start with a problem: how do you write unit tests that test for performance?

It might seem simple, but consider that:
  • Tests must be stable across hardware/software configurations
  • Machine workload should not affect results (at least in normal situations)

A friend of mine (Fernando Rodriguez-Olivera if you must know) thought of the following (among many other things):
For each test run, record interesting metrics, such as:
  • specific method calls
  • number of queries executed
  • number of I/O operations
  • etc.
And after the test run, assert that these values are under a certain threshold; if they're not, fail the test.
He even implemented a proof of concept using BeanShell that recorded these stats to a file during the test run and checked the constraints after the fact.

Yesterday I was going over these ideas while preparing a presentation on code quality and something just clicked: annotate methods with performance invariants.
The concept is similar to pre/post conditions: each annotation is basically a post condition on the method call, stating which performance "promises" the method makes.

For example you should be able to do something like:
@Ensure("queryCount <= 1")
public CustomerInfo loadCustomerInfo() {...}
Or maybe something like this:
@Ensure("count(java.lang.Comparable.compareTo) < ceil(log(this.customers.size()))")
public CustomerInfo findById(String id) {...}

These promises are enabled only during testing, since checking them might be a bit expensive for a production system.

As this is quite recent, I don't have anything working (yet), but I think it's worth exploring.
If I manage to find the time to build it, I'll post updates here.

Monday, February 21, 2011

gluUnProject for iPhone/iOS

I had a hard time finding a suitable implementation of gluUnProject for an iPhone project I was working on, so I decided to port the original implementation by SGI.

This is the header file (I called it "project.h"):
#ifndef __GLU_PROJECT_IOS
#define __GLU_PROJECT_IOS

#include <OpenGLES/ES1/gl.h>
#include <OpenGLES/ES1/glext.h>

void
gluPerspective(GLfloat fovy, GLfloat aspect, GLfloat zNear, GLfloat zFar);

void
gluLookAt(GLfloat eyex, GLfloat eyey, GLfloat eyez, GLfloat centerx,
    GLfloat centery, GLfloat centerz, GLfloat upx, GLfloat upy,
    GLfloat upz);

GLint
gluProject(GLfloat objx, GLfloat objy, GLfloat objz, 
     const GLfloat modelMatrix[16], 
     const GLfloat projMatrix[16],
     const GLint viewport[4],
     GLfloat *winx, GLfloat *winy, GLfloat *winz);

GLint
gluUnProject(GLfloat winx, GLfloat winy, GLfloat winz,
    const GLfloat modelMatrix[16], 
    const GLfloat projMatrix[16],
    const GLint viewport[4],
    GLfloat *objx, GLfloat *objy, GLfloat *objz);


GLint
gluUnProject4(GLfloat winx, GLfloat winy, GLfloat winz, GLfloat clipw,
     const GLfloat modelMatrix[16], 
     const GLfloat projMatrix[16],
     const GLint viewport[4],
     GLclampf nearVal, GLclampf farVal,      
     GLfloat *objx, GLfloat *objy, GLfloat *objz,
     GLfloat *objw);

void
gluPickMatrix(GLfloat x, GLfloat y, GLfloat deltax, GLfloat deltay,
     GLint viewport[4]);

#endif
And the source code:
#include "project.h"
#include <math.h>


/*
** Make m an identity matrix
*/
static void __gluMakeIdentityf(GLfloat m[16])
{
    m[0+4*0] = 1; m[0+4*1] = 0; m[0+4*2] = 0; m[0+4*3] = 0;
    m[1+4*0] = 0; m[1+4*1] = 1; m[1+4*2] = 0; m[1+4*3] = 0;
    m[2+4*0] = 0; m[2+4*1] = 0; m[2+4*2] = 1; m[2+4*3] = 0;
    m[3+4*0] = 0; m[3+4*1] = 0; m[3+4*2] = 0; m[3+4*3] = 1;
}

#define __glPi 3.14159265358979323846

void
gluPerspective(GLfloat fovy, GLfloat aspect, GLfloat zNear, GLfloat zFar)
{
    GLfloat m[4][4];
    float sine, cotangent, deltaZ;
    float radians = fovy / 2 * __glPi / 180;

    deltaZ = zFar - zNear;
    sine = sin(radians);
    if ((deltaZ == 0) || (sine == 0) || (aspect == 0)) {
        return;
    }
    cotangent = cos(radians) / sine;

    __gluMakeIdentityf(&m[0][0]);
    m[0][0] = cotangent / aspect;
    m[1][1] = cotangent;
    m[2][2] = -(zFar + zNear) / deltaZ;
    m[2][3] = -1;
    m[3][2] = -2 * zNear * zFar / deltaZ;
    m[3][3] = 0;
    glMultMatrixf(&m[0][0]);
}

static void normalize(float v[3])
{
    float r;

    r = sqrt( v[0]*v[0] + v[1]*v[1] + v[2]*v[2] );
    if (r == 0.0) return;

    v[0] /= r;
    v[1] /= r;
    v[2] /= r;
}

static void cross(float v1[3], float v2[3], float result[3])
{
    result[0] = v1[1]*v2[2] - v1[2]*v2[1];
    result[1] = v1[2]*v2[0] - v1[0]*v2[2];
    result[2] = v1[0]*v2[1] - v1[1]*v2[0];
}

void
gluLookAt(GLfloat eyex, GLfloat eyey, GLfloat eyez, GLfloat centerx,
   GLfloat centery, GLfloat centerz, GLfloat upx, GLfloat upy,
   GLfloat upz)
{
    float forward[3], side[3], up[3];
    GLfloat m[4][4];

    forward[0] = centerx - eyex;
    forward[1] = centery - eyey;
    forward[2] = centerz - eyez;

    up[0] = upx;
    up[1] = upy;
    up[2] = upz;

    normalize(forward);

    /* Side = forward x up */
    cross(forward, up, side);
    normalize(side);

    /* Recompute up as: up = side x forward */
    cross(side, forward, up);

    __gluMakeIdentityf(&m[0][0]);
    m[0][0] = side[0];
    m[1][0] = side[1];
    m[2][0] = side[2];

    m[0][1] = up[0];
    m[1][1] = up[1];
    m[2][1] = up[2];

    m[0][2] = -forward[0];
    m[1][2] = -forward[1];
    m[2][2] = -forward[2];

    glMultMatrixf(&m[0][0]);
    glTranslatef(-eyex, -eyey, -eyez);
}

/*
** out = matrix * in (matrix stored in OpenGL column-major order)
*/
static void __gluMultMatrixVecf(const GLfloat matrix[16], const GLfloat in[4],
        GLfloat out[4])
{
    int i;

    for (i=0; i<4; i++) {
        out[i] =
            in[0] * matrix[0*4+i] +
            in[1] * matrix[1*4+i] +
            in[2] * matrix[2*4+i] +
            in[3] * matrix[3*4+i];
    }
}

/*
** Invert 4x4 matrix.
** Contributed by David Moore (See Mesa bug #6748)
*/
static int __gluInvertMatrixf(const GLfloat m[16], GLfloat invOut[16])
{
    float inv[16], det;
    int i;

    inv[0] =   m[5]*m[10]*m[15] - m[5]*m[11]*m[14] - m[9]*m[6]*m[15]
             + m[9]*m[7]*m[14] + m[13]*m[6]*m[11] - m[13]*m[7]*m[10];
    inv[4] =  -m[4]*m[10]*m[15] + m[4]*m[11]*m[14] + m[8]*m[6]*m[15]
             - m[8]*m[7]*m[14] - m[12]*m[6]*m[11] + m[12]*m[7]*m[10];
    inv[8] =   m[4]*m[9]*m[15] - m[4]*m[11]*m[13] - m[8]*m[5]*m[15]
             + m[8]*m[7]*m[13] + m[12]*m[5]*m[11] - m[12]*m[7]*m[9];
    inv[12] = -m[4]*m[9]*m[14] + m[4]*m[10]*m[13] + m[8]*m[5]*m[14]
             - m[8]*m[6]*m[13] - m[12]*m[5]*m[10] + m[12]*m[6]*m[9];
    inv[1] =  -m[1]*m[10]*m[15] + m[1]*m[11]*m[14] + m[9]*m[2]*m[15]
             - m[9]*m[3]*m[14] - m[13]*m[2]*m[11] + m[13]*m[3]*m[10];
    inv[5] =   m[0]*m[10]*m[15] - m[0]*m[11]*m[14] - m[8]*m[2]*m[15]
             + m[8]*m[3]*m[14] + m[12]*m[2]*m[11] - m[12]*m[3]*m[10];
    inv[9] =  -m[0]*m[9]*m[15] + m[0]*m[11]*m[13] + m[8]*m[1]*m[15]
             - m[8]*m[3]*m[13] - m[12]*m[1]*m[11] + m[12]*m[3]*m[9];
    inv[13] =  m[0]*m[9]*m[14] - m[0]*m[10]*m[13] - m[8]*m[1]*m[14]
             + m[8]*m[2]*m[13] + m[12]*m[1]*m[10] - m[12]*m[2]*m[9];
    inv[2] =   m[1]*m[6]*m[15] - m[1]*m[7]*m[14] - m[5]*m[2]*m[15]
             + m[5]*m[3]*m[14] + m[13]*m[2]*m[7] - m[13]*m[3]*m[6];
    inv[6] =  -m[0]*m[6]*m[15] + m[0]*m[7]*m[14] + m[4]*m[2]*m[15]
             - m[4]*m[3]*m[14] - m[12]*m[2]*m[7] + m[12]*m[3]*m[6];
    inv[10] =  m[0]*m[5]*m[15] - m[0]*m[7]*m[13] - m[4]*m[1]*m[15]
             + m[4]*m[3]*m[13] + m[12]*m[1]*m[7] - m[12]*m[3]*m[5];
    inv[14] = -m[0]*m[5]*m[14] + m[0]*m[6]*m[13] + m[4]*m[1]*m[14]
             - m[4]*m[2]*m[13] - m[12]*m[1]*m[6] + m[12]*m[2]*m[5];
    inv[3] =  -m[1]*m[6]*m[11] + m[1]*m[7]*m[10] + m[5]*m[2]*m[11]
             - m[5]*m[3]*m[10] - m[9]*m[2]*m[7] + m[9]*m[3]*m[6];
    inv[7] =   m[0]*m[6]*m[11] - m[0]*m[7]*m[10] - m[4]*m[2]*m[11]
             + m[4]*m[3]*m[10] + m[8]*m[2]*m[7] - m[8]*m[3]*m[6];
    inv[11] = -m[0]*m[5]*m[11] + m[0]*m[7]*m[9] + m[4]*m[1]*m[11]
             - m[4]*m[3]*m[9] - m[8]*m[1]*m[7] + m[8]*m[3]*m[5];
    inv[15] =  m[0]*m[5]*m[10] - m[0]*m[6]*m[9] - m[4]*m[1]*m[10]
             + m[4]*m[2]*m[9] + m[8]*m[1]*m[6] - m[8]*m[2]*m[5];

    det = m[0]*inv[0] + m[1]*inv[4] + m[2]*inv[8] + m[3]*inv[12];
    if (det == 0)
        return GL_FALSE;

    det = 1.0 / det;

    for (i = 0; i < 16; i++)
        invOut[i] = inv[i] * det;

    return GL_TRUE;
}

/*
** Multiply two 4x4 matrices stored in OpenGL column-major order
*/
static void __gluMultMatricesf(const GLfloat a[16], const GLfloat b[16],
    GLfloat r[16])
{
    int i, j;

    for (i = 0; i < 4; i++) {
        for (j = 0; j < 4; j++) {
            r[i*4+j] =
                a[i*4+0]*b[0*4+j] +
                a[i*4+1]*b[1*4+j] +
                a[i*4+2]*b[2*4+j] +
                a[i*4+3]*b[3*4+j];
        }
    }
}

GLint
gluProject(GLfloat objx, GLfloat objy, GLfloat objz,
    const GLfloat modelMatrix[16],
    const GLfloat projMatrix[16],
    const GLint viewport[4],
    GLfloat *winx, GLfloat *winy, GLfloat *winz)
{
    float in[4];
    float out[4];

    in[0]=objx;
    in[1]=objy;
    in[2]=objz;
    in[3]=1.0;
    __gluMultMatrixVecf(modelMatrix, in, out);
    __gluMultMatrixVecf(projMatrix, out, in);
    if (in[3] == 0.0) return(GL_FALSE);
    in[0] /= in[3];
    in[1] /= in[3];
    in[2] /= in[3];
    /* Map x, y and z to range 0-1 */
    in[0] = in[0] * 0.5 + 0.5;
    in[1] = in[1] * 0.5 + 0.5;
    in[2] = in[2] * 0.5 + 0.5;

    /* Map x,y to viewport */
    in[0] = in[0] * viewport[2] + viewport[0];
    in[1] = in[1] * viewport[3] + viewport[1];

    *winx=in[0];
    *winy=in[1];
    *winz=in[2];
    return(GL_TRUE);
}

GLint
gluUnProject(GLfloat winx, GLfloat winy, GLfloat winz,
    const GLfloat modelMatrix[16],
    const GLfloat projMatrix[16],
    const GLint viewport[4],
    GLfloat *objx, GLfloat *objy, GLfloat *objz)
{
    float finalMatrix[16];
    float in[4];
    float out[4];

    __gluMultMatricesf(modelMatrix, projMatrix, finalMatrix);
    if (!__gluInvertMatrixf(finalMatrix, finalMatrix)) return(GL_FALSE);

    in[0]=winx;
    in[1]=winy;
    in[2]=winz;
    in[3]=1.0;

    /* Map x and y from window coordinates */
    in[0] = (in[0] - viewport[0]) / viewport[2];
    in[1] = (in[1] - viewport[1]) / viewport[3];

    /* Map to range -1 to 1 */
    in[0] = in[0] * 2 - 1;
    in[1] = in[1] * 2 - 1;
    in[2] = in[2] * 2 - 1;

    __gluMultMatrixVecf(finalMatrix, in, out);
    if (out[3] == 0.0) return(GL_FALSE);
    out[0] /= out[3];
    out[1] /= out[3];
    out[2] /= out[3];
    *objx = out[0];
    *objy = out[1];
    *objz = out[2];
    return(GL_TRUE);
}

GLint
gluUnProject4(GLfloat winx, GLfloat winy, GLfloat winz, GLfloat clipw,
       const GLfloat modelMatrix[16], 
       const GLfloat projMatrix[16],
       const GLint viewport[4],
       GLclampf nearVal, GLclampf farVal,      
       GLfloat *objx, GLfloat *objy, GLfloat *objz,
       GLfloat *objw)
{
    float finalMatrix[16];
    float in[4];
    float out[4];

    __gluMultMatricesf(modelMatrix, projMatrix, finalMatrix);
    if (!__gluInvertMatrixf(finalMatrix, finalMatrix)) return(GL_FALSE);

    in[0]=winx;
    in[1]=winy;
    in[2]=winz;
    in[3]=clipw;

    /* Map x and y from window coordinates */
    in[0] = (in[0] - viewport[0]) / viewport[2];
    in[1] = (in[1] - viewport[1]) / viewport[3];
    in[2] = (in[2] - nearVal) / (farVal - nearVal);

    /* Map to range -1 to 1 */
    in[0] = in[0] * 2 - 1;
    in[1] = in[1] * 2 - 1;
    in[2] = in[2] * 2 - 1;

    __gluMultMatrixVecf(finalMatrix, in, out);
    if (out[3] == 0.0) return(GL_FALSE);
    *objx = out[0];
    *objy = out[1];
    *objz = out[2];
    *objw = out[3];
    return(GL_TRUE);
}

void
gluPickMatrix(GLfloat x, GLfloat y, GLfloat deltax, GLfloat deltay,
    GLint viewport[4])
{
    if (deltax <= 0 || deltay <= 0) {
        return;
    }

    /* Translate and scale the picked region to the entire window */
    glTranslatef((viewport[2] - 2 * (x - viewport[0])) / deltax,
                 (viewport[3] - 2 * (y - viewport[1])) / deltay, 0);
    glScalef(viewport[2] / deltax, viewport[3] / deltay, 1.0);
}
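Note that, unlike the original GLU, everything here takes GLfloat instead of GLdouble, since OpenGL ES 1.x has no double-precision entry points. As a quick usage sketch (not part of the port; pickRay and its coordinate conventions are mine), this is how one might turn a touch point into a pick ray by reading back the current matrices and unprojecting at the near and far planes:
#include "project.h"

/* Usage sketch: build a pick ray from a touch point. x and y must already
   be in OpenGL window coordinates, i.e. origin at the bottom-left corner
   of the view. Returns GL_TRUE on success. */
GLint pickRay(GLfloat x, GLfloat y, GLfloat nearPt[3], GLfloat farPt[3])
{
    GLfloat modelview[16], projection[16];
    GLint viewport[4];

    /* Read back the current fixed-function state (ES1 only). */
    glGetFloatv(GL_MODELVIEW_MATRIX, modelview);
    glGetFloatv(GL_PROJECTION_MATRIX, projection);
    glGetIntegerv(GL_VIEWPORT, viewport);

    /* Unproject at winz = 0 (near plane) and winz = 1 (far plane). */
    if (!gluUnProject(x, y, 0.0f, modelview, projection, viewport,
                      &nearPt[0], &nearPt[1], &nearPt[2]))
        return GL_FALSE;
    return gluUnProject(x, y, 1.0f, modelview, projection, viewport,
                        &farPt[0], &farPt[1], &farPt[2]);
}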

I'm thinking of porting the entire GLUT library to the iPhone and sharing it on GitHub. Anyone interested?