              r          2       111     8 * 2 = 16     3 * 2 =  6
              c          1      1100     8 * 1 =  8     4 * 1 =  4
              d          1      1101     8 * 1 =  8     4 * 1 =  4
          all others     0               ----------     ----------
                                totals:          88             23
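The totals can be checked in a few lines. This is a sketch assuming the codes for "a" and "b" from the earlier rows of the table are 0 and 10; the remaining codes are as listed above.

```python
# Reproduce the totals from the table: "abracadabra" is 11 characters at
# 8 bits each uncompressed, versus the variable-length Huffman codes.
# The codes for "a" (0) and "b" (10) are assumed from the earlier rows.
codes = {"a": "0", "b": "10", "r": "111", "c": "1100", "d": "1101"}
text = "abracadabra"

plain_bits = len(text) * 8                          # 8 bits per character
squeezed_bits = sum(len(codes[ch]) for ch in text)  # variable-length codes

print(plain_bits, squeezed_bits)  # 88 23
```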
We could represent this information as a binary tree: |
                                  c   d
                                   \ /
                        a     b     |
                         \     \    |
                  root ---+-----+---+---- r
          (upward branches are the left branches; continuing
          to the right along the trunk is the right branch)
To get the Huffman code we code a 0 bit each time we |
traverse a branch to the left, and a 1 bit each time we |
traverse a branch to the right. Thus the codes are generated |
as in the table above and every character gets a unique |
code. The decompressor simply starts at the root, reads the |
squeezed file one bit at a time, and moves through the tree |
until it reaches a terminal node and then sends the |
character in that position to the output file. |
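The decode walk just described can be sketched in a few lines. This is an illustration, not ARC's actual code; the nested tuples hand-encode the example tree (a=0, b=10, r=111, c=1100, d=1101).

```python
# Start at the root, take the left branch on a 0 bit and the right branch
# on a 1 bit, and emit a character whenever a terminal node is reached.
tree = ("a", ("b", (("c", "d"), "r")))  # (left, right) pairs; str = terminal

def decode(bits, tree):
    out, node = [], tree
    for bit in bits:
        node = node[int(bit)]        # 0 -> left branch, 1 -> right branch
        if isinstance(node, str):    # terminal node reached: emit it
            out.append(node)
            node = tree              # restart at the root
    return "".join(out)

# "abracadabra" squeezed with the codes above:
bits = "0" "10" "111" "0" "1100" "0" "1101" "0" "10" "111" "0"
print(decode(bits, tree))  # abracadabra
```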
The most frequently occurring characters are kept |
closest to the root and thus have shorter codes. Those with |
lower frequencies of occurrence are kept further away and |
get longer codes. The result is often a file that is |
significantly shorter than the original. |
When all bytes occur with about the same frequency, |
as in machine language program files, then all the codes |
are about the same length and not much is gained. In fact, |
since the decoding information (the tree) must be
included in the output file, the result can often be |
longer, particularly on short files. |
ARC VERSION 2.20 PAGE - 32 |
For those of you who are interested in statistics, we
have included a small utility program with ARC that analyzes |
the frequency distribution of the bytes in a file and |
graphically displays the results. On the top portion of the |
screen you will see the frequency distribution of the bytes |
in the file. On the bottom portion is a bar graph |
representing the lengths of the Huffman codes generated by |
the squeeze algorithm. A Huffman code can be anywhere from 0
to 24 bits in length. Each bit in the Huffman code is |
represented by two pixels on the graphics screen. To run the |
utility you must have ARC in memory and type: |
a:analyze [d:]filename |
The program will then read through 'd:filename' and |
display a frequency distribution for the file. |
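The byte-frequency half of what the utility displays can be sketched with a simple counter. This is illustrative only; it is not the analyze utility itself, and frequency_distribution is a name chosen here for the sketch.

```python
# Count how often each of the 256 possible byte values occurs in a file's
# data, producing the kind of distribution the top of the screen shows.
from collections import Counter

def frequency_distribution(data):
    counts = Counter(data)                        # byte value -> occurrences
    return [counts.get(b, 0) for b in range(256)]

dist = frequency_distribution(b"abracadabra")
print(dist[ord("a")], dist[ord("b")])  # 5 2
```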
But how do we come up with the best tree to use to |
generate the Huffman codes? If you sit down and think about |
it you will realize that even if only a dozen or so bytes |
are used, the number of possible trees is quite large. |
Huffman squeezing gets its name from the man who came
up with a solution to this problem. We actually tried to |
figure it out for ourselves and ended up utterly confused |
until we came across Huffman's article(2). Huffman makes it |
look simple. |
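In modern terms, Huffman's method is a greedy merge: put every character on a heap keyed by frequency, then repeatedly combine the two lowest-frequency nodes until a single tree remains. A minimal sketch (not ARC's implementation; tie-breaking can produce codes that differ from the table above while still totalling the same number of bits):

```python
# Build a Huffman tree by repeatedly merging the two lowest-frequency
# nodes, then read off the codes: 0 for each left branch, 1 for each right.
import heapq
from itertools import count

def huffman_codes(freqs):
    tie = count()  # tie-breaker so heapq never compares tree tuples
    heap = [(f, next(tie), ch) for ch, f in freqs.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        f1, _, left = heapq.heappop(heap)   # two lowest frequencies...
        f2, _, right = heapq.heappop(heap)  # ...form a partial tree
        heapq.heappush(heap, (f1 + f2, next(tie), (left, right)))
    codes = {}
    def walk(node, prefix):
        if isinstance(node, str):
            codes[node] = prefix or "0"     # lone-symbol edge case
        else:
            walk(node[0], prefix + "0")     # left branch codes a 0 bit
            walk(node[1], prefix + "1")     # right branch codes a 1 bit
    walk(heap[0][2], "")
    return codes

freqs = {"a": 5, "b": 2, "r": 2, "c": 1, "d": 1}
codes = huffman_codes(freqs)
print(sum(freqs[ch] * len(codes[ch]) for ch in freqs))  # 23
```

Applied to "abracadabra" this always totals 23 bits, matching the table, even when the individual codes come out differently.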
Let's go back to our previous example of "abracadabra".
We start out with the following frequency distribution: |
          character   frequency
          ---------   ---------
              a           5
              b           2
              r           2
              c           1
              d           1
We start by picking off the two lowest frequencies and |
forming a partial tree with them. In this case "c" and "d". |
This gives us a new table: |
____________________ |
2. Huffman, D.A., "A METHOD FOR THE CONSTRUCTION OF MINIMUM-REDUNDANCY
CODES", Proc. IRE, 40(9), 1098-1101 (1952)
          character   frequency