Elias omega coding
Elias ω coding or Elias omega coding is a universal code encoding the positive integers developed by Peter Elias. Like Elias gamma coding and Elias delta coding, it works by prefixing the positive integer with a representation of its order of magnitude in a universal code. Unlike those other two codes, however, Elias omega recursively encodes that prefix; for this reason, Elias omega codes are sometimes known as recursive Elias codes.
Omega coding is used in applications where the largest encoded value is not known ahead of time, or to compress data in which small values are much more frequent than large values.
To encode a positive integer N:
- Place a "0" at the end of the code.
- If N = 1, stop; encoding is complete.
- Prepend the binary representation of N to the beginning of the code. This will be at least two bits, the first bit of which is a 1.
- Let N equal the number of bits just prepended, minus one.
- Return to Step 2 to prepend the encoding of the new N.
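A minimal sketch of these steps in C++ follows; the function name omegaEncode and the string-of-bits representation are illustrative choices for this example, not part of the article's later example code. The code is built as a string of '0' and '1' characters, with each new group prepended in front of what has been built so far.

#include <cstdint>
#include <string>

// Illustrative sketch of the encoding steps above: build the code as a
// string of '0'/'1' characters. The name omegaEncode and the string
// representation are assumptions made for this example.
std::string omegaEncode(uint64_t n)
{
    std::string code = "0";                      // step 1: the trailing "0"
    while (n > 1)                                // step 2: stop once n == 1
    {
        std::string group;                       // binary of n, MSB first
        for (uint64_t t = n; t > 0; t >>= 1)
            group.insert(group.begin(), (t & 1) ? '1' : '0');
        code.insert(0, group);                   // step 3: prepend the group
        n = group.length() - 1;                  // step 4: n = bits written - 1
    }                                            // step 5: loop back to step 2
    return code;
}

For example, omegaEncode(17) produces "10100100010", matching the entry for 17 in the table below.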
To decode an Elias omega-encoded positive integer:
- Start with a variable N, set to a value of 1.
- If the next bit is a "0" then stop. The decoded number is N.
- If the next bit is a "1" then read it plus N more bits, and use that binary number as the new value of N. Go back to Step 2.
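A matching decoding sketch, under the same illustrative assumptions (a string of '0'/'1' characters, with a cursor pos that is advanced past the bits consumed):

#include <cstdint>
#include <string>

// Illustrative sketch of the decoding steps above. The name omegaDecode,
// the bit-string input, and the pos cursor are assumptions for this example.
uint64_t omegaDecode(const std::string& bits, size_t& pos)
{
    uint64_t n = 1;                              // step 1: N starts at 1
    while (bits[pos] == '1')                     // step 2: a "0" ends the code
    {
        uint64_t value = 1;                      // the leading "1" just seen
        for (size_t i = 0; i < n; ++i)           // step 3: read N more bits
            value = (value << 1) | (bits[++pos] == '1');
        n = value;                               // this becomes the new N
        ++pos;                                   // advance to the next group
    }
    ++pos;                                       // consume the terminating "0"
    return n;
}

Starting with pos = 0, decoding "10100100010" returns 17 and leaves pos just past the terminating 0.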
Examples
Omega codes can be thought of as a number of "groups". A group is either a single 0 bit, which terminates the code, or two or more bits beginning with 1, which is followed by another group.
The first few codes are shown below. Included is the so-called implied distribution, describing the distribution of values for which this coding yields a minimum-size code; see Relationship of universal codes to practical compression for details.
Value | Code | Implied probability |
---|---|---|
1 | 0 | 1/2 |
2 | 10 0 | 1/8 |
3 | 11 0 | 1/8 |
4 | 10 100 0 | 1/64 |
5 | 10 101 0 | 1/64 |
6 | 10 110 0 | 1/64 |
7 | 10 111 0 | 1/64 |
8 | 11 1000 0 | 1/128 |
9 | 11 1001 0 | 1/128 |
10 | 11 1010 0 | 1/128 |
11 | 11 1011 0 | 1/128 |
12 | 11 1100 0 | 1/128 |
13 | 11 1101 0 | 1/128 |
14 | 11 1110 0 | 1/128 |
15 | 11 1111 0 | 1/128 |
16 | 10 100 10000 0 | 1/2048 |
17 | 10 100 10001 0 | 1/2048 |
... | ||
100 | 10 110 1100100 0 | 1/8192 |
1000 | 11 1001 1111101000 0 | 1/131,072 |
10,000 | 11 1101 10011100010000 0 | 1/2,097,152 |
100,000 | 10 100 10000 11000011010100000 0 | 1/268,435,456 |
1,000,000 | 10 100 10011 11110100001001000000 0 | 1/2,147,483,648 |
The encoding for 1 googol, 10^100, is 11 1000 101001100 (15 bits of length header) followed by the 333-bit binary representation of 1 googol, which is 10010 01001001 10101101 00100101 10010100 11000011 01111100 11101011 00001011 00100111 10000100 11000100 11001110 00001011 11110011 10001010 11001110 01000000 10001110 00100001 00011010 01111100 10101010 10110010 01000011 00001000 10101000 00101110 10001111 00010000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 and a trailing 0, for a total of 349 bits.
A googol to the hundredth power (10^10,000) is a 33,220-bit binary number. Its omega encoding is 33,243 bits long: 11 1111 1000000111000011 (22 bits), followed by the 33,220 bits of the value, and a trailing 0. Under Elias delta coding, the same number is 33,250 bits long: 000000000000000 1000000111000100 (31 bits) followed by 33,219 bits of the value. The omega and delta codes are, respectively, about 0.07% and 0.09% longer than the ordinary 33,220-bit binary representation of the number.
Code length
For the encoding of a positive integer $N$, the number of bits needed, $B(N)$, is given recursively by

$$B(N) = \begin{cases} 1 & N = 1, \\ \lfloor \log_2 N \rfloor + 1 + B(\lfloor \log_2 N \rfloor) & N \ge 2. \end{cases}$$

That is, the length of the Elias omega code for the integer $N$ is

$$B(N) = 1 + (\lfloor \log_2 N \rfloor + 1) + (\lfloor \log_2 \lfloor \log_2 N \rfloor \rfloor + 1) + \cdots,$$

where the number of terms in the sum is bounded above by the binary iterated logarithm $\log_2^* N$. To be precise, let $\lambda(N) = \lfloor \log_2 N \rfloor$ and let $\lambda^{(i)}$ denote $\lambda$ iterated $i$ times. We have $\lambda^{(k)}(N) = 1$ for some $k \le \log_2^* N$, and the length of the code is

$$B(N) = 1 + \sum_{i=1}^{k} \bigl( \lambda^{(i)}(N) + 1 \bigr).$$

Since $\lambda(N) \le \log_2 N$, we have

$$B(N) \le 1 + \log_2^* N + \sum_{i=1}^{k} \log_2^{(i)} N.$$

Since the iterated logarithm $\log_2^* N$ grows more slowly than $\log_2^{(i)} N$ for any fixed $i$, the asymptotic growth rate is

$$B(N) \approx \log_2 N + \log_2 \log_2 N + \log_2 \log_2 \log_2 N + \cdots,$$

where the sum terminates when it drops below one.
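As a quick check of this recursion, the following sketch (the helper name omegaLength is an assumption for this example, not from the article's example code) computes B(N) directly and reproduces the lengths implied by the table above, e.g. 11 bits for 16 and 31 bits for 1,000,000:

#include <cstdint>
#include <iostream>

// Illustrative helper: compute B(n) from the recursion
// B(1) = 1, B(n) = B(floor(log2 n)) + floor(log2 n) + 1 for n >= 2.
uint64_t omegaLength(uint64_t n)
{
    uint64_t bits = 1;                           // the terminating "0"
    while (n > 1)
    {
        uint64_t lg = 0;                         // lg = floor(log2 n)
        for (uint64_t t = n; t > 1; t >>= 1)
            ++lg;
        bits += lg + 1;                          // this group has lg + 1 bits
        n = lg;                                  // recurse on floor(log2 n)
    }
    return bits;
}

int main()
{
    std::cout << omegaLength(16) << ' ' << omegaLength(1000000) << '\n';  // prints "11 31"
    return 0;
}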
Asymptotic optimality
Elias omega coding is an asymptotically optimal prefix code.[1]
Proof sketch. A prefix code must satisfy the Kraft inequality. For the Elias omega coding, the Kraft inequality states

$$\sum_{N=1}^{\infty} 2^{-B(N)} \approx \sum_{N=1}^{\infty} \frac{1}{2\,N \,\log_2 N \,\log_2 \log_2 N \cdots} \le 1,$$

where each denominator contains the iterated logarithms of $N$ for as long as they remain at least one. Now, the summation is asymptotically the same as an integral, giving us

$$\int^{\infty} \frac{dx}{2\,x \,\log_2 x \,\log_2 \log_2 x \cdots}.$$

If the denominator terminates at some $\log_2^{(k)} x$, then the integral diverges as $\log_2^{(k+1)} x$. However, if the denominator terminates at some $\bigl(\log_2^{(k)} x\bigr)^{1+\varepsilon}$ with $\varepsilon > 0$, then the integral converges as $\bigl(\log_2^{(k)} x\bigr)^{-\varepsilon}$. The Elias omega code is on the edge between diverging and converging.
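The divergence and convergence claims follow from the standard antiderivatives, sketched here with natural logarithms since the change of base only introduces constant factors:

$$\frac{d}{dx}\,\ln^{(k+1)}(x) = \frac{1}{x \,\ln x \,\ln\ln x \cdots \ln^{(k)}(x)}, \qquad \frac{d}{dx}\left(-\frac{1}{\varepsilon}\bigl(\ln^{(k)}(x)\bigr)^{-\varepsilon}\right) = \frac{1}{x \,\ln x \cdots \ln^{(k-1)}(x)\,\bigl(\ln^{(k)}(x)\bigr)^{1+\varepsilon}},$$

so the first family of integrals grows without bound while the second remains bounded.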
Example code
Encoding
void eliasOmegaEncode(char* source, char* dest)
{
    IntReader intreader(source);
    BitWriter bitwriter(dest);
    while (intreader.hasLeft())
    {
        int num = intreader.getInt();
        BitStack bits;
        while (num > 1) {
            int len = 0;
            for (int temp = num; temp > 0; temp >>= 1) // calculate 1 + floor(log2(num))
                len++;
            for (int i = 0; i < len; i++)
                bits.pushBit((num >> i) & 1);
            num = len - 1;
        }
        while (bits.length() > 0)
            bitwriter.putBit(bits.popBit());
        bitwriter.putBit(false); // write one zero
    }
    bitwriter.close();
    intreader.close();
}
Decoding
void eliasOmegaDecode(char* source, char* dest) {
    BitReader bitreader(source);
    IntWriter intwriter(dest);
    while (bitreader.hasLeft())
    {
        int num = 1;
        while (bitreader.inputBit()) // potentially dangerous with malformed files.
        {
            int len = num;
            num = 1;
            for (int i = 0; i < len; i++)
            {
                num <<= 1;
                if (bitreader.inputBit())
                    num |= 1;
            }
        }
        intwriter.putInt(num); // write out the value
    }
    bitreader.close();
    intwriter.close();
}
Generalizations
Elias omega coding does not encode zero or negative integers. One way to encode all non-negative integers is to add 1 before encoding and then subtract 1 after decoding, or use the very similar Levenshtein coding. One way to encode all integers is to set up a bijection, mapping all integers (0, 1, -1, 2, -2, 3, -3, ...) to strictly positive integers (1, 2, 3, 4, 5, 6, 7, ...) before encoding.
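One such bijection can be sketched as follows; the function names toPositive and fromPositive are illustrative, not from the article:

#include <cstdint>

// Illustrative sketch of the bijection described above: map the sequence
// 0, 1, -1, 2, -2, 3, -3, ... onto 1, 2, 3, 4, 5, 6, 7, ...
uint64_t toPositive(int64_t n)
{
    return n > 0 ? 2 * (uint64_t)n               // positive n maps to an even value
                 : 2 * (uint64_t)(-n) + 1;       // zero or negative n maps to an odd value
}

int64_t fromPositive(uint64_t m)                 // inverse of toPositive
{
    return (m % 2 == 0) ? (int64_t)(m / 2) : -(int64_t)(m / 2);
}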
References
1. Elias, P. (March 1975). "Universal codeword sets and representations of the integers". IEEE Transactions on Information Theory. 21 (2): 194–203. doi:10.1109/TIT.1975.1055349. ISSN 0018-9448.
Further reading
- Elias, Peter (March 1975). "Universal codeword sets and representations of the integers". IEEE Transactions on Information Theory. 21 (2): 194–203. doi:10.1109/tit.1975.1055349.
- Fenwick, Peter (2003). "Universal Codes". In Sayood, Khalid (ed.). Lossless Compression Handbook. New York, NY, USA: Academic Press. pp. 55–78. doi:10.1016/B978-012620861-0/50004-8. ISBN 978-0123907547.