grinder:workspace zeph$ python try.py
135464 / 1 120412 / 1 105361 / 1 90309 / 1 75258 / 1 60206 / 1 45155 / 1 30103 / 1 15052 / 1 iterations:9
reconstructing . . . . . . . . . . reconstructed_size:150516
basically… I'd been meaning to make a demo of this for a long time… by keeping the result of a division, plus the modulo (the remainder), we can compress a file infinite times…
    res = src
    while res > pip:
        res = src // pip
        mod = src % pip
        saving.append(mod)
        src = res
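The loop above, made runnable as a small function (a sketch in Python 3; `src`, `pip`, and `saving` are the names from the snippet, and `//` is integer division so no precision is lost):

```python
def compress(src, pip):
    """Repeatedly divide src by pip, saving every remainder.

    Returns (last_quotient, remainders). Each step is reversible
    because src == quotient * pip + remainder.
    """
    saving = []
    res = src
    while res > pip:
        res = src // pip   # quotient carried into the next round
        mod = src % pip    # remainder that must be kept to undo this step
        saving.append(mod)
        src = res
    return res, saving
```

Undoing one step is just `back = back * pip + mod`, applied to the saved remainders in reverse order.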
every file is a string of ones and zeros, right? so it is a number
we can recursively apply this algorithm till we reach the size we want
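The "a file is a number" step is concrete in Python: a byte string is one big base-256 integer (a sketch; the one catch is that you must remember the original length, or leading zero bytes are lost on the way back):

```python
def file_as_number(data: bytes) -> int:
    # interpret the raw bytes as a single base-256 number
    return int.from_bytes(data, "big")

def number_as_file(n: int, length: int) -> bytes:
    # length restores any leading zero bytes the integer representation dropped
    return n.to_bytes(length, "big")
```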
it is extremely time-consuming, so better use it only for log files and ISO images
the target is to use a divisor almost as big as the "number" to divide… this is easy to achieve using a standard math series like Fibonacci… the 1*10^6-th Fibonacci number is pretty big… the 1*10^9-th HUGE!
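A sketch of picking a divisor from the Fibonacci series (`fib_at_least` is my name for the helper; note that the 1*10^9-th Fibonacci number has on the order of 2*10^8 digits, so just generating it is already a serious computation):

```python
def fib_at_least(n: int) -> int:
    """Return the first Fibonacci number >= n, built iteratively."""
    a, b = 0, 1
    while b < n:
        a, b = b, a + b
    return b
```

For example, `fib_at_least(10**6)` returns 1346269, the first Fibonacci number past one million.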
at the end you only need to keep this string "1*10^9" (6 digits!) + the number of iterations (1 or 2 digits?) + the last result of the division and the last modulo (remainder)
    back = res
    while saving:
        mod = saving.pop()
        back = back * pip + mod
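The reconstruction as a terminating function (a sketch; `reconstruct` is my name, the loop stops once `saving` is empty rather than running forever, and it pops the list in place):

```python
def reconstruct(res, saving, pip):
    """Undo the division loop: fold the saved remainders back in."""
    back = res
    while saving:
        mod = saving.pop()       # last remainder saved is the first undone
        back = back * pip + mod
    return back
```

For example, `reconstruct(123, [789, 456], 1000)` returns `123456789`.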
in the sample I went down from 150k to 15k… not bad, eh?
(ok, I used an array as a supporting data structure, but the point of the tryout is to show that it is possible)
I can already imagine ISPs storing their logs, with an FPGA implementation of this trick…