Back to <a href="https://www.gaia-gis.it/fossil/librasterlite2/wiki?name=rasterlite2-doc">RasterLite2 doc index</a><hr><br>
<h1>RasterLite2 reference Benchmarks (2019 update)</h1>
<h2>Intended scopes</h2>
In recent years new and innovative <a href="https://en.wikipedia.org/wiki/Lossless_compression">lossless compression algorithms</a> have been developed.<br>
This benchmark is intended to check, by practical testing, how these new compression methods actually perform under the most common conditions.<br>
More specifically, it compares the relative performance of new and older lossless compression methods.
<h2>The contenders</h2>
The following <b><i>general purpose</i></b> lossless compression methods will be systematically compared:
<ul>
<li><b>DEFLATE</b>: (aka <b>Zip</b>)<br>
<a href="https://en.wikipedia.org/wiki/DEFLATE">This</a> is the most classic and almost universally adopted lossless compression method.<br>
It was initially introduced about 30 years ago (in <b>1991</b>), so it can be considered the venerable dean of them all.</li>
<li><b>LZMA</b>: (aka <b>7-Zip</b>)<br>
<a href="https://en.wikipedia.org/wiki/Lempel%E2%80%93Ziv%E2%80%93Markov_chain_algorithm">This</a> is a well known and widely adopted lossless compression method.<br>
It's younger than DEFLATE, having been introduced about 20 years ago (in <b>1998</b>). LZMA takes an extreme approach to lossless compression.<br> It's usually able to achieve really impressive compression ratios (far better than DEFLATE), but at the cost of severely sacrificing compression speed; LZMA can easily be painfully slow.</li>
<li><b>LZ4</b><br>
<a href="https://en.wikipedia.org/wiki/LZ4_(compression_algorithm)">This</a> is a more modern algorithm having been introduced less than 10 years ago (in <b>2011</b>), so it's diffusion and adoption is still rather limited.<br>
LZ4 too is an extremist interpretation of lossless compression, but it goes exactly in the opposite direction of LZMA.<br>
It's strongly optimized so to be extremely fast, but at the cost of sacrificing the compression ratios.</li>
<li><b>ZSTD</b> (aka <b>Zstandard</b>)<br>
<a href="https://en.wikipedia.org/wiki/Zstandard">This</a> is a very recently introduced algorithm (<b>2015</b>), and it's adoption is still rather limited.<br>
Curiously enough, both LZ4 and ZSTD are developed and maintained by the same author (Yann Collet).<br>
ZSTD is a well balanced algorithm pretending to be a most modern replacement for DEFLATE, being able to be faster and/or to achieve better compression ratios.<br>
Just few technical details about the most relevant innovations introduced by ZSTD:
<ul>
<li>The old DEFLATE was designed to require a very limited amount of memory, and this somewhat impaired its efficiency.<br>
Modern hardware can easily support plenty of memory, so ZSTD borrows a few ideas from LZMA about less constrained, more efficient memory usage.<br>
More specifically, DEFLATE is based on a moving data window of only <b>32KB</b>; both LZMA and ZSTD adopt a more generous moving window of <b>1MB</b>.</li>
<li>Both DEFLATE and ZSTD adopt the classic <a href="https://en.wikipedia.org/wiki/Huffman_coding">Huffman coding</a> for reducing the information entropy.<br>
But ZSTD can also support a further advanced mechanism based on <a href="https://en.wikipedia.org/wiki/Asymmetric_numeral_systems#tANS">Finite State Entropy</a>, a very recent and much faster technique.</li>
</ul></li>
</ul>
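<br>
As a concrete illustration, the following minimal Python sketch compresses the same buffer with all four contenders. It assumes the third-party <b>zstandard</b> and <b>lz4</b> packages are installed (<b>zlib</b> and <b>lzma</b> ship with the standard library), and the sample file name is purely hypothetical:
<pre>
import zlib, lzma
import zstandard   # third-party: pip install zstandard
import lz4.frame   # third-party: pip install lz4

data = open("sample.csv", "rb").read()   # hypothetical sample file

# DEFLATE (zlib container), default level 6
deflated = zlib.compress(data, 6)

# LZMA, default preset: strong ratios, slow
lzma_blob = lzma.compress(data)

# ZSTD, level 3 is the library default; higher levels trade speed for ratio
zstd_blob = zstandard.ZstdCompressor(level=3).compress(data)

# LZ4 frame format: optimized for raw speed
lz4_blob = lz4.frame.compress(data)

for name, blob in [("DEFLATE", deflated), ("LZMA", lzma_blob),
                   ("ZSTD", zstd_blob), ("LZ4", lz4_blob)]:
    print(name, len(data) / len(blob))   # compression ratio
</pre>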
<br>
Whenever possible and appropriate, the following lossless compression methods specifically intended for <b><i>images / rasters</i></b> will be tested as well:
<ul>
<li><b>PNG</b><br>
<a href="https://en.wikipedia.org/wiki/Portable_Network_Graphics">This</a> is a very popular format supporting RGB and Grayscale images (with or without Alpha transparencies).<br>
PNG fully depends on DEFLATE for data compression.</li>
<li><b>CharLS</b><br>
This is an image format (RGB and Grayscale) with a rather limited diffusion, but quite popular for storing medical imagery.<br>
CharLS is based on <a href="https://en.wikipedia.org/wiki/Lossless_JPEG">Lossless JPEG</a>, a genuinely lossless image compression scheme
not to be confused with plain JPEG (which is the most classic example of <a href="https://en.wikipedia.org/wiki/Lossy_compression">lossy compression</a>).</li>
<li><b>Jpeg2000</b><br>
<a href="https://en.wikipedia.org/wiki/JPEG_2000">This</a> is intended to be a more advanced replacement for JPEG, but it's not yet so widely supported as its ancestor.<br>
Jpeg2000 is an inherently <b>lossy compression</b>, but under special settings it can effectively support a genuine <b>lossless compression</b> mode.</li>
<li><b>WebP</b><br>
<a href="https://en.wikipedia.org/wiki/WebP">This</a> too is an innovative image format pretending to be a better replacement for JPEG.<br>
WebP images are expected to support the same visual quality of JPEG but requiring a significantly reduced storage space.<br>
Exactly as Jpeg2000, WebP too is an inherently <b>lossy compression</b>, but under special settings it can effectively support a genuine <b>lossless compression</b> mode.</li>
</ul>
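<br>
By way of example, the lossless modes mentioned above can be requested from Python through the Pillow imaging library. This is only a sketch, assuming an RGB input image and a Pillow build with WebP and OpenJPEG (Jpeg2000) support:
<pre>
from PIL import Image   # third-party: pip install Pillow

img = Image.open("sample.tif")   # hypothetical RGB input

# plain JPEG: always lossy, whatever the quality setting
img.save("photo.jpg", quality=90)

# WebP: lossy by default, genuinely lossless when explicitly requested
img.save("photo_ll.webp", lossless=True)

# Jpeg2000: irreversible=False selects the lossless (reversible) transform
img.save("photo_ll.jp2", irreversible=False)
</pre>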
<br>
<hr>
<h1>Testing generic datasets</h1>
We'll start by testing several generic datasets, so as to stress all compression methods under the most common conditions.<br>
The same dataset will be compressed and then decompressed using each method, so as to gather information about:
<ul>
<li>the <b>size</b> of the resulting compressed file.<br>
The ratio between the uncompressed and compressed sizes (uncompressed divided by compressed, so higher is better) is the <b>compression ratio</b>.</li>
<li>the <b>time</b> required to <b>compress</b> the original dataset.</li>
<li>the <b>time</b> required to <b>decompress</b> the compressed file so to recover the initial uncompressed dataset.</li>
</ul>
<br>
<b>Note</b>: compressing is a much harder operation than decompressing, and will always require more time.<br>
The speed differences between the various compression algorithms are strong and well marked when compressing, but the differences in decompression speed (although less impressive) are also worth evaluating carefully.
<ul>
<li>for any compression algorithm, being slow (or even very slow) when compressing can easily be considered a trivial and forgivable issue.<br>
Compression usually happens only once in the lifetime of a compressed dataset, and there are many ways to minimize the adverse effects of intrinsic slowness.<br>
You could e.g. compress your files in batch mode, maybe during off-peak hours, and in such a scenario reaching stronger compression ratios could easily justify a longer processing time.<br>
Or alternatively you could enable (if possible) a multithreaded compression approach (parallel processing), so as to significantly reduce the required time.</li>
<li>being slow when decompressing is a much more serious issue, because decompression will happen more frequently; very frequently in some specific scenarios.<br>
So a certain degree of slowness in decompression could easily become a serious bottleneck, severely limiting the overall performance of your system.</li>
</ul>
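<br>
A minimal Python sketch of the kind of measurement harness behind the following tables (the file name and the choice of zlib are purely illustrative; the actual tests were run with the native command line tools of each library):
<pre>
import time, zlib

def benchmark(name, compress, decompress, data):
    t0 = time.perf_counter()
    blob = compress(data)
    t1 = time.perf_counter()
    restored = decompress(blob)
    t2 = time.perf_counter()
    assert restored == data          # lossless round trip
    print("%s: ratio %.2f, compress %.3f sec, decompress %.3f sec"
          % (name, len(data) / len(blob), t1 - t0, t2 - t1))

data = open("gtfs.tar", "rb").read()   # hypothetical sample
benchmark("DEFLATE", lambda d: zlib.compress(d, 6), zlib.decompress, data)
</pre>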
<h3>test #1 - compressing many CSV files</h3>
<table cellspacing="6" cellpadding="8" border="1" bgcolor="#ffffe0">
<tr><th bgcolor="#d0ff90">Uncompressed Size</th><th bgcolor="#d0ff90">Algorithm</th><th bgcolor="#d0ff90">Compressed Size</th>
<th bgcolor="#d0ff90">Compression Ratio</th><th bgcolor="#d0ff90">Compression Time</th><th bgcolor="#d0ff90">Decompression Time</th></tr>
<tr>
<th rowspan="4" align="center">0.97 GB</td>
<td align="center">LZ4</td><td align="right">289 MB</td><td align="center">3.46</td><td align="right">6.550 sec</td><td align="right">2.256 sec</td>
</tr>
<tr bgcolor="#dfddc0">
<td align="center">DEFLATE</td><td align="right">155 MB</td><td align="center">6.44</td><td align="right">33.079 sec</td><td align="right">2.159 sec</td>
</tr>
<tr>
<td align="center">ZSTD</td><td align="right">110 MB</td><td align="center">9.09</td><td align="right">2.924 sec</td><td align="right">1.313 sec</td>
</tr>
<tr bgcolor="#dfddc0">
<td align="center">LZMA</td><td align="right">47 MB</td><td align="center">21.42</td><td align="right">1220.329 sec</td><td align="right">10.179 sec</td>
</tr>
</table>
<b>Quick assessment:</b>
<ul>
<li>The sample was a tarball containing a whole <a href="https://en.wikipedia.org/wiki/General_Transit_Feed_Specification">GTFS</a> dataset.</li>
<li>Text files are usually expected to be highly compressible (so many repetitions of the same words and values), and this test confirms the expectations.</li>
<li><b>LZ4</b> is very fast both when compressing and decompressing, but the compression ratio is rather disappointing.</li>
<li><b>DEFLATE</b> is a well balanced compromise between speed and effectiveness.<br>
It scores a decent compression ratio and it's fast enough both when compressing and decompressing.</li>
<li><b>ZSTD</b> clearly wins this first match hands down; it's impressively fast (in both directions) and it scores a very good compression ratio.</li>
<li><b>LZMA</b> scores a really impressive compression ratio, but it's deadly slow when compressing (almost 40 times slower than DEFLATE).<br>
But what's really bad is that it's slow even when decompressing (about 5 times slower than DEFLATE).</li>
</ul>
<br><br>
<h3>test #2 - compressing a SQLite database file</h3>
<table cellspacing="6" cellpadding="8" border="1" bgcolor="#ffffe0">
<tr><th bgcolor="#d0ff90">Uncompressed Size</th><th bgcolor="#d0ff90">Algorithm</th><th bgcolor="#d0ff90">Compressed Size</th>
<th bgcolor="#d0ff90">Compression Ratio</th><th bgcolor="#d0ff90">Compression Time</th><th bgcolor="#d0ff90">Decompression Time</th></tr>
<tr>
<th rowspan="4" align="center">1.13 GB</td>
<td align="center">LZ4</td><td align="right">508 MB</td><td align="center">2.29</td><td align="right">10.333 sec</td><td align="right">2.123 sec</td>
</tr>
<tr bgcolor="#dfddc0">
<td align="center">DEFLATE</td><td align="right">323 MB</td><td align="center">3.60</td><td align="right">54.343 sec</td><td align="right">3.173 sec</td>
</tr>
<tr>
<td align="center">ZSTD</td><td align="right">219 MB</td><td align="center">5.31</td><td align="right">4.331 sec</td><td align="right">1.522 sec</td>
</tr>
<tr bgcolor="#dfddc0">
<td align="center">LZMA</td><td align="right">82 MB</td><td align="center">14.26</td><td align="right">646.670 sec</td><td align="right">17.930 sec</td>
</tr>
</table>
<b>Quick assessment:</b>
<ul>
<li>The sample was a SQLite/SpatiaLite database containing the same GTFS dataset used in the previous test.</li>
<li>Databases are usually expected to be strongly compressible (so many repetitions of ZERO, SPACE and NULL values), and this test confirms the expectations.</li>
<li><b>LZ4</b> proves again to be very fast but not very effective.</li>
<li><b>DEFLATE</b> proves to be still valid despite its venerable age.</li>
<li><b>ZSTD</b> is once more the winner of this test, being both fast and effective.</li>
<li><b>LZMA</b> proves to be unbeatable at reaching very high compression ratios, but unfortunately it also confirms its barely tolerable slowness.</li>
</ul>
<br><br>
<h3>test #3 - compressing many Shapefiles</h3>
<table cellspacing="6" cellpadding="8" border="1" bgcolor="#ffffe0">
<tr><th bgcolor="#d0ff90">Uncompressed Size</th><th bgcolor="#d0ff90">Algorithm</th><th bgcolor="#d0ff90">Compressed Size</th>
<th bgcolor="#d0ff90">Compression Ratio</th><th bgcolor="#d0ff90">Compression Time</th><th bgcolor="#d0ff90">Decompression Time</th></tr>
<tr>
<th rowspan="4" align="center">1.19 GB</td>
<td align="center">LZ4</td><td align="right">0.99 GB</td><td align="center">1.20</td><td align="right">6.413 sec</td><td align="right">0.893 sec</td>
</tr>
<tr bgcolor="#dfddc0">
<td align="center">DEFLATE</td><td align="right">870 MB</td><td align="center">1.40</td><td align="right">48.004 sec</td><td align="right">4.553 sec</td>
</tr>
<tr>
<td align="center">ZSTD</td><td align="right">880 MB</td><td align="center">1.39</td><td align="right">5.416 sec</td><td align="right">1.292 sec</td>
</tr>
<tr bgcolor="#dfddc0">
<td align="center">LZMA</td><td align="right">682 MB</td><td align="center">1.79</td><td align="right">740.077 sec</td><td align="right">45.624 sec</td>
</tr>
</table>
<b>Quick assessment:</b>
<ul>
<li>The sample was a tarball containing several Shapefiles (Road Network and Administrative Boundaries of Tuscany).</li>
<li>Shapefiles contain plenty of raw binary data, and consequently are rather hard to compress strongly.<br>
This fully explains why in this specific test the compression ratios are always rather modest.</li>
<li><b>LZ4</b> proves again to be very fast but not very effective.</li>
<li><b>DEFLATE</b> proves to be still valid despite its venerable age.</li>
<li><b>ZSTD</b> is once more the winner of this test, being noticeably faster than DEFLATE.<br>
But it's worth noting that in this specific test it's unable to reach a better compression ratio than DEFLATE.</li>
<li><b>LZMA</b> proves to be unbeatable at reaching very high compression ratios, but unfortunately it also confirms its barely tolerable slowness.</li>
</ul>
<br><br>
<h3>test #4 - compressing a Landsat 8 scene (satellite imagery)</h3>
<table cellspacing="6" cellpadding="8" border="1" bgcolor="#ffffe0">
<tr><th bgcolor="#d0ff90">Uncompressed Size</th><th bgcolor="#d0ff90">Algorithm</th><th bgcolor="#d0ff90">Compressed Size</th>
<th bgcolor="#d0ff90">Compression Ratio</th><th bgcolor="#d0ff90">Compression Time</th><th bgcolor="#d0ff90">Decompression Time</th></tr><tr>
<th rowspan="4" align="center">1.78 GB</td>
<td align="center">LZ4</td><td align="right">1.07 GB</td><td align="center">1.65</td><td align="right">5.104 sec</td><td align="right">1.285 sec</td>
</tr>
<tr bgcolor="#dfddc0">
<td align="center">DEFLATE</td><td align="right">928 MB</td><td align="center">1.97</td><td align="right">56.643 sec</td><td align="right">7.176 sec</td>
</tr>
<tr>
<td align="center">ZSTD</td><td align="right">929 MB</td><td align="center">1.96</td><td align="right">7.261 sec</td><td align="right">2.329 sec</td>
</tr>
<tr bgcolor="#dfddc0">
<td align="center">LZMA</td><td align="right">798 MB</td><td align="center">2.29</td><td align="right">957.182 sec</td><td align="right">95.288 sec</td>
</tr>
</table>
<b>Quick assessment:</b>
<ul>
<li>The sample was a tarball containing a Landsat 8 scene.</li>
<li>Satellite imagery contains plenty of raw binary data, and consequently is rather hard to compress strongly.<br>
This fully explains why in this specific test the compression ratios are always rather modest.</li>
<li><b>LZ4</b> proves again to be very fast but not very effective.</li>
<li><b>DEFLATE</b> proves to be still valid despite its venerable age.</li>
<li><b>ZSTD</b> is once more the winner of this test, being noticeably faster than DEFLATE.<br>
But it's worth noting that in this specific test it's unable to reach a better compression ratio than DEFLATE.</li>
<li><b>LZMA</b> proves to be unbeatable at reaching very high compression ratios, but unfortunately it also confirms its barely tolerable slowness.</li>
</ul>
<br><br>
<b>Final assessment (and lessons learned)</b>
<ul>
<li>The intrinsic efficiency of every lossless compression algorithm strongly depends on the internal data distribution of the sample.
<ul>
<li>samples presenting a very regular and easily predictable internal distribution have a <b>low information entropy</b>, and can be strongly compressed.<br>
A typical example: text files written in some language based on the Latin alphabet.</li>
<li>samples presenting an irregular and random internal distribution have a <b>high information entropy</b>, and can be only moderately compressed.<br>
A typical example: any kind of binary file.<br>
<b>Note</b>: a binary file presenting a perfectly random internal distribution of values cannot be compressed at all, even in principle.</li>
</ul></li>
<li>any lossless compression strategy implies a trade-off between speed and compression ratio:
<ul>
<li>you can optimize for speed, but in this case you necessarily sacrifice the compression ratio.<br>
(this is the choice adopted by LZ4).</li>
<li>at the opposite side of the spectrum you can optimize for high compression ratios, but in this case you necessarily sacrifice speed.<br>
(this is the choice adopted by LZMA).</li>
<li>the wisest approach falls somewhere in the middle: a well balanced mix (a reasonable compromise) between speed and compression ratio.<br>
(this is the choice of both DEFLATE and ZSTD; both the entropy effect and this trade-off are illustrated by the sketch after this list).</li>
</ul></li>
<li>the very recently introduced ZSTD is clearly a superior alternative to the old DEFLATE:
<ul>
<li>ZSTD is always noticeably faster than DEFLATE, both when compressing and decompressing.</li>
<li>ZSTD is not always able to reach better compression ratios than DEFLATE (it depends on the sample's information entropy).<br>
In many common cases ZSTD can easily outperform DEFLATE's compression ratios.<br>
When it can't, it still achieves (more or less) the same compression ratios as DEFLATE, but in less time.</li>
</ul></li>
<li>LZ4 is not really interesting (at least for general-purpose use).
It's surely very fast, but not impressively faster than ZSTD.<br>
And its compression ratios are always too mild to be really appealing.</li>
<li>LZMA has no alternatives when very strong compression ratios are an absolute must.<br>
But its terrible slowness (both when compressing and decompressing) must always be taken very seriously into account, because it could easily become a severe bottleneck.</li>
<li>DEFLATE isn't dead at all; despite its rather venerable age it still proves to be an honest performer.<br>
And considering its almost universal and pervasive adoption, it will surely survive for many long years to come.</li>
</ul>
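<br>
Both effects described above are easy to reproduce with a small Python sketch based on the standard zlib module (the buffer sizes are arbitrary):
<pre>
import os, time, zlib

SIZE = 16 * 1024 * 1024
low_entropy = b"abcdefgh" * (SIZE // 8)   # highly repetitive: compresses well
high_entropy = os.urandom(SIZE)           # random bytes: essentially incompressible

for label, data in [("low entropy", low_entropy), ("high entropy", high_entropy)]:
    for level in (1, 6, 9):               # fast ... default ... best ratio
        t0 = time.perf_counter()
        blob = zlib.compress(data, level)
        elapsed = time.perf_counter() - t0
        print("%s, level %d: ratio %.2f in %.3f sec"
              % (label, level, len(data) / len(blob), elapsed))
</pre>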
<br>
<hr>
<h1>Testing Raster Coverages</h1>
This second group of tests will be more specifically focused on directly comparing the various lossless compression methods as implemented by RasterLite2 for encoding and decoding Raster Coverage Tiles.
<ul>
<li>Several distinct RasterLite2 databases will be created and fully populated by importing the same sample, but applying a different compression method to each database.</li>
<li>The <b>compression ratios</b> will then be computed from the sizes of the <u>uncompressed</u> database (method <b>NONE</b>) and of every other database based on the same sample.</li>
<li>The <b>compression time</b> will be the time (as reported by <b>rl2tool</b>) required for creating and fully populating each database.</li>
<li>The <b>decompression time</b> will be the time (as reported by the <b>spatialite CLI</b>) required for executing an SQL script containing 256 <b>SELECT RL2_GetMapImageFromRaster()</b> statements (a sketch of such a timing harness follows this list).<br>
All requested images will be 1000x1000 pixels at full resolution, centered on different locations and adopting various SLD/SE styles.<br>
This is assumed to be a realistic and significant evaluation, because it basically corresponds to the typical workload of a hypothetical WMS server.</li>
<li><b>Note</b>: the measured timings will not directly correspond to the intrinsic speed of each compression method.<br>
There are obviously several disturbing factors (mainly due to I/O operations) to be taken into account.<br>
However the operational sequence is strictly the same for all tests based on the same sample, so the only factor accounting for different timings is the compression method itself.</li>
</ul>
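<br>
A minimal Python sketch of such a timing harness (the database and script file names are purely illustrative, and the <b>spatialite</b> CLI is assumed to be available on the PATH):
<pre>
import subprocess, time

# map_requests.sql is assumed to contain the 256
# SELECT RL2_GetMapImageFromRaster(...) statements described above
t0 = time.perf_counter()
with open("map_requests.sql", "rb") as script:
    subprocess.run(["spatialite", "-silent", "coverage_zstd.sqlite"],
                   stdin=script, stdout=subprocess.DEVNULL, check=True)
print("decompression time: %.1f sec" % (time.perf_counter() - t0))
</pre>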
<br>
<h3>Test #5 - Grayscale Raster Coverage</h3>
<table cellspacing="6" cellpadding="8" border="1" bgcolor="#ffffe0">
<tr><th bgcolor="#d0ff90">Compression Method</th><th bgcolor="#d0ff90">DB Size</th><th bgcolor="#d0ff90">Compression Ratio</th><th bgcolor="#d0ff90">Compression Time</th><th bgcolor="#d0ff90">Decompression Time</th></tr>
<tr>
<td align="center"><b>NONE</b> <i>no compression</i></td><td align="right">481 MB</td><td align="center">1.00</td><td align="right">54sec</td><td align="right">1min 44sec</td>
</tr>
<tr bgcolor="#dfddc0">
<td align="center"><b>LZ4</b> <i>very fast compression</i></td><td align="right">416 MB</td><td align="center">1.16</td><td align="right">59sec</td><td align="right">1min 48sec</td>
</tr>
<tr>
<td align="center"><b>DEFLATE</b> <i>zip compression</i></td><td align="right">349 MB</td><td align="center">1.38</td><td align="right">1min 5sec</td><td align="right">1min 44sec</td>
</tr>
<tr bgcolor="#dfddc0">
<td align="center"><b>ZSTD</b> <i>fast compression</i></td><td align="right">346 MB</td><td align="center">1.39</td><td align="right">1min 0sec</td><td align="right">1min 54sec</td>
</tr>
<tr>
<td align="center"><b>LZMA</b> <i>7-zip compression</i></td><td align="right">345 MB</td><td align="center">1.40</td><td align="right">3min 2sec</td><td align="right">2min 3sec</td>
</tr>
<tr bgcolor="#dfddc0">
<td align="center"><b>PNG</b> <i>lossless image format</i></td><td align="right">346 MB</td><td align="center">1.39</td><td align="right">1min 8sec</td><td align="right">1min 41sec</td>
</tr>
<tr>
<td align="center"><b>LL_WEBP</b> <i>lossless WEbP</i></td><td align="right">320 MB</td><td align="center">1.50</td><td align="right">4min 27sec</td><td align="right">2min 02sec</td>
</tr>
<tr bgcolor="#dfddc0">
<td align="center"><b>LL_JP2</b> <i>lossless Jpeg2000</i></td><td align="right">323 MB</td><td align="center">1.49</td><td align="right">4min 26sec</td><td align="right">2min 21sec</td>
</tr>
<tr>
<td align="center"><b>CHARLS</b> <i>lossless JPEG</i></td><td align="right">339 MB</td><td align="center">1.42</td><td align="right">2min 38sec</td><td align="right">2min 6sec</td>
</tr>
</table>
<b>Quick assessment:</b>
<ul>
<li>this test was based on a sample of 25 B&W TIFF+TFW Sections (forming a 5x5 square) centered around the city of Florence.<br>
The original dataset is the Orthophoto imagery (year 1978; scale 1:10000) published by <a href="http://www502.regione.toscana.it/geoscopio/cartoteca.html">Tuscany</a></li>
<li>as we were expecting from our previous tests, lossless compression can hardly reach strong compression ratios when applied to photographic images.</li>
<li>in this specific test DEFLATE, ZSTD and PNG score more or less equivalent compression ratios, and they mark very similar compression and decompression timings.<br>
It's worth noting that DEFLATE, ZSTD and PNG require more or less the same decompression time as NONE (uncompressed), so they don't cause any rendering bottleneck.</li>
<li>as we were expecting, LZ4 is fast but unable to reach a decent compression ratio.</li>
<li>LZMA proves to be very slow both when compressing and decompressing.</li>
<li>The real disappointment comes from LL_WEBP, LL_JP2 and CHARLS.<br>
These algorithms are specifically designed for compressing photographic imagery, but they are unable to outperform the generic multipurpose compression algorithms.<br>
They score marginally better compression ratios, but they are deadly slow.
The game is not worth the candle.</li>
</ul>
<br>
<br>
<h3>Test #6 - RGB Raster Coverage</h3>
<table cellspacing="6" cellpadding="8" border="1" bgcolor="#ffffe0">
<tr><th bgcolor="#d0ff90">Compression Method</th><th bgcolor="#d0ff90">DB Size</th><th bgcolor="#d0ff90">Compression Ratio</th><th bgcolor="#d0ff90">Compression Time</th><th bgcolor="#d0ff90">Decompression Time</th></tr>
<tr>
<td align="center"><b>NONE</b> <i>no compression</i></td><td align="right">1.51 GB</td><td align="center">1.00</td><td align="right">1min 17sec</td><td align="right">1min 51sec</td>
</tr>
<tr bgcolor="#dfddc0">
<td align="center"><b>LZ4</b> <i>very fast compression</i></td><td align="right">1.21 GB</td><td align="center">1.25</td><td align="right">1min 31sec</td><td align="right">1min 47sec</td>
</tr>
<tr>
<td align="center"><b>DEFLATE</b> <i>zip compression</i></td><td align="right">800 MB</td><td align="center">1.94</td><td align="right">1min 56sec</td><td align="right">1min 40sec</td>
</tr>
<tr bgcolor="#dfddc0">
<td align="center"><b>ZSTD</b> <i>fast compression</i></td><td align="right">816 MB</td><td align="center">1.90</td><td align="right">1min 29sec</td><td align="right">1min 37sec</td>
</tr>
<tr>
<td align="center"><b>LZMA</b> <i>7-zip compression</i></td><td align="right">710 MB</td><td align="center">2.18</td><td align="right">7min 23sec</td><td align="right">2min 11sec</td>
</tr>
<tr bgcolor="#dfddc0">
<td align="center"><b>PNG</b> <i>lossless image format</i></td><td align="right">830 MB</td><td align="center">1.86</td><td align="right">2min 29sec</td><td align="right">1min 49sec</td>
</tr>
<tr>
<td align="center"><b>LL_WEBP</b> <i>lossless WEbP</i></td><td align="right">525 MB</td><td align="center">2.95</td><td align="right">7min 18sec</td><td align="right">1min 48sec</td>
</tr>
<tr bgcolor="#dfddc0">
<td align="center"><b>LL_JP2</b> <i>lossless Jpeg2000</i></td><td align="right">802 MB</td><td align="center">1.92</td><td align="right">11min 31sec</td><td align="right">3min 16sec</td>
</tr>
<tr>
<td align="center"><b>CHARLS</b> <i>lossless JPEG</i></td><td align="right">912 MB</td><td align="center">1.70</td><td align="right">7min 54sec</td><td align="right">2min 47sec</td>
</tr>
</table>
<b>Quick assessment:</b>
<ul>
<li>this test was based on a sample of 9 RGB TIFF+TFW Sections (forming a 3x3 square) centered around the town of San Giovanni Valdarno.<br>
The original dataset is exactly the same one we'll see in the following test, but with the Near Infrared spectral band completely removed.</li>
<li>this test simply confirms the general pattern we've already seen for Grayscale.</li>
<li>the only exception is LL_WEBP, which in this case scores the best compression ratio of them all, and marks a fairly good decompression time.</li>
</ul>
<br>
<br>
<h3>Test #7 - Multispectral (4-bands) Raster Coverage</h3>
<table cellspacing="6" cellpadding="8" border="1" bgcolor="#ffffe0">
<tr><th bgcolor="#d0ff90">Compression Method</th><th bgcolor="#d0ff90">DB Size</th><th bgcolor="#d0ff90">Compression Ratio</th><th bgcolor="#d0ff90">Compression Time</th><th bgcolor="#d0ff90">Decompression Time</th></tr>
<tr>
<td align="center"><b>NONE</b> <i>no compression</i></td><td align="right">2.01 GB</td><td align="center">1.00</td><td align="right">3min 18sec</td><td align="right">1min 55sec</td>
</tr>
<tr bgcolor="#dfddc0">
<td align="center"><b>LZ4</b> <i>very fast compression</i></td><td align="right">1.61 GB</td><td align="center">1.24</td><td align="right">3min 41sec</td><td align="right">1min 48sec</td>
</tr>
<tr>
<td align="center"><b>DEFLATE</b> <i>zip compression</i></td><td align="right">1.02 GB</td><td align="center">1.97</td><td align="right">5min 5sec</td><td align="right">1min 42sec</td>
</tr>
<tr bgcolor="#dfddc0">
<td align="center"><b>ZSTD</b> <i>fast compression</i></td><td align="right">1.07 GB</td><td align="center">1.87</td><td align="right">3min 35sec</td><td align="right">1min 46sec</td>
</tr>
<tr>
<td align="center"><b>LZMA</b> <i>7-zip compression</i></td><td align="right">882 MB</td><td align="center">2.34</td><td align="right">11min 7sec</td><td align="right">2min 15sec</td>
</tr>
<tr bgcolor="#dfddc0">
<td align="center"><b>PNG</b> <i>lossless image format</i></td><td align="right">1.08 GB</td><td align="center">1.85</td><td align="right">4min 43sec</td><td align="right">1min 47sec</td>
</tr>
<tr>
<td align="center"><b>LL_WEBP</b> <i>lossless WEbP</i></td><td align="right">758 MB</td><td align="center">2.72</td><td align="right">9min 36sec</td><td align="right">1min 51sec</td>
</tr>
<tr bgcolor="#dfddc0">
<td align="center"><b>LL_JP2</b> <i>lossless Jpeg2000</i></td><td align="right">1.05 GB</td><td align="center">1.92</td><td align="right">16min 23sec</td><td align="right">3min 53sec</td>
</tr>
</table>
<b>Quick assessment:</b>
<ul>
<li>this test was based on a sample of 9 4-bands (RGB+NearInfrared) TIFF+TFW Sections (forming a 3x3 square) centered around the town of San Giovanni Valdarno.<br>
The original dataset is the Orthophoto imagery (year 2013; scale 1:2000) published by <a href="http://www502.regione.toscana.it/geoscopio/cartoteca.html">Tuscany</a></li>
<li>this test simply confirms the general pattern we've already seen for Grayscale and RGB.</li>
<li>in this case too, LL_WEBP scores the best compression ratio of them all, and marks a fairly good decompression time.</li>
</ul>
<br>
<br>
<h3>Test #8 - Datagrid Raster Coverage (ASCII Grid - floating point single precision)</h3>
<table cellspacing="6" cellpadding="8" border="1" bgcolor="#ffffe0">
<tr><th bgcolor="#d0ff90">Compression Method</th><th bgcolor="#d0ff90">DB Size</th><th bgcolor="#d0ff90">Compression Ratio</th><th bgcolor="#d0ff90">Compression Time</th><th bgcolor="#d0ff90">Decompression Time</th></tr>
<tr>
<td align="center"><b>NONE</b> <i>no compression</i></td><td align="right">2.01 GB</td><td align="center">1.00</td><td align="right">6min 30sec</td><td align="right">2min 6sec</td>
</tr>
<tr bgcolor="#dfddc0">
<td align="center"><b>LZ4</b> <i>very fast compression</i></td><td align="right">845 MB</td><td align="center">2.45</td><td align="right">6min 36sec</td><td align="right">2min 9sec</td>
</tr>
<tr>
<td align="center"><b>DEFLATE</b> <i>zip compression</i></td><td align="right">623 MB</td><td align="center">3.32</td><td align="right">7min 2sec</td><td align="right">2min 6sec</td>
</tr>
<tr bgcolor="#dfddc0">
<td align="center"><b>ZSTD</b> <i>fast compression</i></td><td align="right">614 MB</td><td align="center">3.36</td><td align="right">6min 26sec</td><td align="right">1min 55sec</td>
</tr>
<tr>
<td align="center"><b>LZMA</b> <i>7-zip compression</i></td><td align="right">513 MB</td><td align="center">4.03</td><td align="right">11min 20sec</td><td align="right">3min 5sec</td>
</tr>
</table>
<b>Quick assessment:</b>
<ul>
<li>this test was based on a huge ASCII Grid (DTM, 10m x 10m cell size).<br>
The original dataset is the Orographic DTM 10x10 published by <a href="http://www502.regione.toscana.it/geoscopio/cartoteca.html">Tuscany</a></li>
<li>this specific test highlights a slight superiority of ZSTD over DEFLATE; it scores a better compression ratio and is faster both when compressing and decompressing.</li>
<li>LZ4 proves again to be fast but unable to score a good compression ratio.</li>
<li>LZMA again scores impressive compression ratios, but at the cost of a barely tolerable slowness.</li>
</ul>
<br>
<br>
<h3>Test #9 - Datagrid Raster Coverage (TIFF - INT16)</h3>
<table cellspacing="6" cellpadding="8" border="1" bgcolor="#ffffe0">
<tr><th bgcolor="#d0ff90">Compression Method</th><th bgcolor="#d0ff90">DB Size</th><th bgcolor="#d0ff90">Compression Ratio</th><th bgcolor="#d0ff90">Compression Time</th><th bgcolor="#d0ff90">Decompression Time</th></tr>
<tr>
<td align="center"><b>NONE</b> <i>no compression</i></td><td align="right">480 MB</td><td align="center">1.00</td><td align="right">17sec</td><td align="right">1min 39sec</td>
</tr>
<tr bgcolor="#dfddc0">
<td align="center"><b>LZ4</b> <i>very fast compression</i></td><td align="right">317 MB</td><td align="center">1.51</td><td align="right">21sec</td><td align="right">1min 48sec</td>
</tr>
<tr>
<td align="center"><b>DEFLATE</b> <i>zip compression</i></td><td align="right">205 MB</td><td align="center">2.34</td><td align="right">28sec</td><td align="right">1min 39sec</td>
</tr>
<tr bgcolor="#dfddc0">
<td align="center"><b>ZSTD</b> <i>fast compression</i></td><td align="right">207 MB</td><td align="center">2.32</td><td align="right">20sec</td><td align="right">1min 42sec</td>
</tr>
<tr>
<td align="center"><b>LZMA</b> <i>7-zip compression</i></td><td align="right">168 MB</td><td align="center">2.86</td><td align="right">2min 0sec</td><td align="right">2min 3sec</td>
</tr>
</table>
<b>Quick assessment:</b>
<ul>
<li>this test was based on the very popular ETOPO1 global relief model of Earth's surface published by <a href="https://www.ngdc.noaa.gov/mgg/global/global.html">NOAA</a></li>
<li>this specific test fails to show any superiority of ZSTD over DEFLATE; they are substantially on par.</li>
<li>LZ4 proves again to be fast but unable to score a good compression ratio.</li>
<li>LZMA again scores impressive compression ratios, but at the cost of a barely tolerable slowness.</li>
</ul>
<br>
<br>
<hr><br>
Back to <a href="https://www.gaia-gis.it/fossil/librasterlite2/wiki?name=rasterlite2-doc">RasterLite2 doc index</a>