LZ4 binding for Pharo

Hi guys. In the last few days I wrote a Pharo binding for the LZ4 compressor (thanks to Camillo Bruni for pointing it out), and so I wanted to share it. The main goal of LZ4 is to be really fast at compressing and decompressing, not to obtain the best compression ratio possible.

The main reason I wrote this binding is for the Fuel serializer, with the idea of compressing/decompressing the serialization (a ByteArray) of a graph. Hopefully, with a little overhead (for compression and decompression), we gain a lot when writing to the stream (mostly with files and the network). However, the binding is not coupled to Fuel at all.
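If it works out, using it together with Fuel could look something like this sketch (the selectors `LZ4 compress:` and `LZ4 uncompress:` and the variable `someGraph` are assumptions for illustration, not the binding's confirmed API):

```smalltalk
"Hypothetical usage sketch -- the LZ4 selectors here are assumed, not confirmed."
| bytes compressed restored |
bytes := FLSerializer serializeToByteArray: someGraph.  "Fuel serialization of a graph"
compressed := LZ4 compress: bytes.                      "assumed selector"
restored := LZ4 uncompress: compressed.                 "assumed selector"
self assert: restored = bytes.                          "roundtrip must be lossless"
```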

I have documented all the steps to install and run LZ4 in Pharo here. If you give it a try, please let me know whether it worked or whether you ran into problems.

I would also like to run some more benchmarks, because so far I have only done a few. So if you have benchmarks to share with me, please do.
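For a rough starting point, a micro-benchmark in Pharo could look like this (again, `LZ4 compress:` is an assumed selector; the data and chunk of zeros are just an easy best case, so real data will behave differently):

```smalltalk
"Rough timing sketch -- LZ4 compress: is an assumed selector, not the confirmed API."
| data time |
data := ByteArray new: 10 * 1024 * 1024.    "10 MB of zeros: a best case for any compressor"
time := [ LZ4 compress: data ] timeToRun.   "elapsed wall-clock time for one compression"
Transcript show: 'LZ4 compressed 10 MB in ', time printString; cr.
```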

So far LZ4 does not provide a streaming-like API. Camillo and I tried to build a streaming API in Pharo (like ZLibWriteStream, GZipWriteStream, etc.), but the results were not good enough, so we are still analyzing this.

Ahhh yes, for the binding I use NativeBoost FFI, so I guess I will write a post soon explaining how to wrap a very simple library with NB.
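As a teaser, wrapping a C function with NativeBoost mostly means writing a method whose body declares the C signature. A sketch for LZ4's classic `LZ4_compress` entry point might look like this (the class, selector, and module name are illustrative assumptions, not the binding's real code):

```smalltalk
"Illustrative NativeBoost wrapper sketch -- class, selector, and module name are assumptions."
LZ4 class >> primCompress: src into: dst size: isize
	"Calls int LZ4_compress(const char* source, char* dest, int isize);
	 answers the number of bytes written into dst."
	<primitive: #primitiveNativeCall module: #NativeBoostPlugin>
	^ self nbCall: #( int LZ4_compress ( ByteArray src, ByteArray dst, int isize ) )
		module: 'liblz4'
```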

See you,


3 thoughts on “LZ4 binding for Pharo”

  1. With the speed of LZ4, it doesn’t sound like streaming would be much of a benefit to total transfer time, so I assume it would be to reduce memory overhead?
    (Where total transfer time = encoding time + transfer time + decoding time, and streaming would help you overlap encoding/decoding with transfer.)

    At least if http://pokecraft.first-world.info/wiki/Quick_Benchmark:_Gzip_vs_Bzip2_vs_LZMA_vs_XZ_vs_LZ4_vs_LZO is anything to go by 🙂


    1. Hi Henry. Yes, indeed, it was to reduce memory overhead in case we want to compress a really large graph. The idea was to write/read in chunks… and yes, the overlap you mention. This is a good idea because the compression ratio of LZ4 doesn’t change much if we take chunks (of, say, 1 MB), because the algorithm doesn’t take the whole ByteArray into account (I guess that’s one of the reasons why it is so fast).
      BTW, thanks for the benchmark link… it clearly demonstrates the speed of LZ4 🙂
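The chunked approach described above could be sketched roughly like this (`LZ4 compress:`, `someGraph`, and the 1 MB chunk size are illustrative assumptions):

```smalltalk
"Sketch of chunk-by-chunk compression -- LZ4 compress: is an assumed selector."
| bytes chunkSize chunks |
bytes := FLSerializer serializeToByteArray: someGraph.  "someGraph is a placeholder"
chunkSize := 1024 * 1024.                               "1 MB chunks"
chunks := OrderedCollection new.
1 to: bytes size by: chunkSize do: [ :start |
	| stop |
	stop := start + chunkSize - 1 min: bytes size.
	chunks add: (LZ4 compress: (bytes copyFrom: start to: stop)) ].
"Each compressed chunk can be written to the file or socket as soon as it is ready,
 so the whole compressed output never has to sit in memory at once."
```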

