NewRPL breezed through its first job.

It generated 3 tables of 1008 numbers each, at 2016-digit precision. Computing each number took about 3,000 iterations of a series expansion, with real multiplications, divisions and powers, all executed at 2016-digit precision. All this number crunching took only 5 seconds per table on a laptop (that's roughly 9 million floating-point operations at 2016 digits in 5 seconds). Of course, all this was on a PC, not on the real hardware.
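The post doesn't say which function the tables hold or which series was used, so here is only a rough sketch of the flavor of the per-entry work: a few thousand terms of a series, with every operation carried out at ~2016 digits. It uses arctangent's Taylor series in Python's `decimal` module purely as a stand-in; the function, argument and names are illustrative, not the actual newRPL table contents.

```python
# Illustrative only: arctan's Taylor series as a stand-in for "a series
# expansion evaluated over ~3,000 iterations at 2016-digit precision".
from decimal import Decimal, getcontext

def atan_series(x: Decimal, digits: int = 2016, max_terms: int = 3000) -> Decimal:
    getcontext().prec = digits + 10          # a few guard digits
    x2 = x * x
    term = x                                 # holds x^(2n+1), starting at n = 0
    total = Decimal(0)
    sign = 1
    for n in range(max_terms):
        total += sign * term / (2 * n + 1)   # one multiply, one divide, one add per term
        term *= x2
        sign = -sign
    getcontext().prec = digits
    return +total                            # unary + rounds to the target precision

if __name__ == "__main__":
    print(atan_series(Decimal("0.125")))
```

With roughly three high-precision operations per term, 1008 entries times ~3,000 terms lands in the same ballpark as the ~9 million operations per table mentioned above.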

This job couldn't be done on a 50g due to the size of the tables: each table takes 880 kbytes, so all three together are about 2.6 MBytes. How will the calculator handle these tables? A new compression algorithm was written from scratch. It compressed each table to between 84 and 88 kbytes, a compression factor of roughly 10:1. Not bad, considering the algorithm has to be able to extract each individual number from the data and decompress it on the fly, as fast as possible, unlike normal compression algorithms that need to decompress the entire "blob" to access the data within.

The algorithm was inspired by LZ4 (but ended up quite different), and on the PC it was able to decompress all 1008 numbers in a table, 1000 times over, in less than 3 seconds. In terms of throughput, that's equivalent to about 900 MBytes/second.
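The actual format isn't described in the post, but the key property (pulling any single number out of the compressed table without expanding the whole blob) comes down to per-entry compression plus an offset index. Below is a minimal sketch of that layout; `zlib` stands in for the real LZ4-inspired coder, and the function names are hypothetical, not newRPL's API.

```python
# Hypothetical sketch of a random-access compressed table.  zlib is only a
# placeholder coder; the point is the layout: per-entry compression plus an
# offset index, so any one entry can be decoded on its own.
import struct
import zlib

def pack_table(entries: list[bytes]) -> bytes:
    """Compress each entry independently and prepend an offset index."""
    blobs = [zlib.compress(e) for e in entries]
    offsets = [0]
    for b in blobs:
        offsets.append(offsets[-1] + len(b))        # count+1 offsets in total
    header = struct.pack("<I", len(blobs))
    header += b"".join(struct.pack("<I", o) for o in offsets)
    return header + b"".join(blobs)

def unpack_entry(packed: bytes, i: int) -> bytes:
    """Random access: locate and decompress only entry i."""
    (count,) = struct.unpack_from("<I", packed, 0)
    start, end = struct.unpack_from("<II", packed, 4 + 4 * i)
    data_base = 4 + 4 * (count + 1)                 # end of the offset index
    return zlib.decompress(packed[data_base + start : data_base + end])

if __name__ == "__main__":
    # 1008 fake "numbers" of ~870 bytes each, standing in for 2016-digit reals
    table = [(b"%04d" % n) * 218 for n in range(1008)]
    packed = pack_table(table)
    assert unpack_entry(packed, 500) == table[500]
```

With an index like this, fetching entry i costs one lookup plus one small decode, which is what makes "decompress on the fly, as fast as possible" practical.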

Now we are ready to implement the decimal CORDIC method... but that's a different story.