Aims of the project
- Generate Huffman codewords using Huffman coding and transmit them to the decoder.
- Compare and analyse the quality of 7 fixed linear Differential Pulse Code Modulation (DPCM) predictors to find out which one achieves the best compression ratio.
- Compare the decompressed image against the original image to verify that our result is a lossless compression.
As we know, there is strong correlation between spatially adjacent pixels. The purpose of predictive coding is to remove the redundancy between consecutive pixels so that only the residual between the actual and predicted values (i.e. only the new information) needs to be encoded. In other words, a pixel is coded as the difference between its actual value and a predicted value computed from previously decoded values. As a result, the compression ratio depends on the variance of the image, the quantization level of the difference values and the quality of the prediction.
After predictive coding, we can begin to compress the original file by applying Differential Pulse Code Modulation (DPCM) encoding and building a Huffman codebook before transmitting the image. The decoder first uses the Huffman codebook to decode the Huffman data, and then decodes the DPCM values to reconstruct the image.
Differential Pulse Code Modulation (DPCM) Encoding
Encode the original Lena512.pgm image to DPCM values using 7 fixed linear predictor methods, where A, B and C denote the left, top and top-left neighbouring pixels of the current pixel:
- A
- (A + C)/2
- (A + B + C)/3
- A + B - C
- A + (B - C)/2
- B + (A - C)/2
- (A + B)/2
Using the 7 formulas above, we compute the difference between the predicted value and the current pixel. The resulting predictor difference table is passed to the next step, the entropy encoder.
The results show that the DPCM predictor B + (A - C)/2 achieves the best compression ratio.
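As a rough sketch of this stage, the seven predictors and the residual computation can be written as follows. This is a minimal illustration, not the report's code: integer division and raw (unpredicted) border pixels are assumptions, since the report does not state its rounding rule or border handling.

```python
import numpy as np

# The seven fixed linear predictors examined in the report.
# A = left neighbour, B = neighbour above, C = top-left neighbour.
PREDICTORS = [
    lambda A, B, C: A,                   # 1
    lambda A, B, C: (A + C) // 2,        # 2
    lambda A, B, C: (A + B + C) // 3,    # 3
    lambda A, B, C: A + B - C,           # 4
    lambda A, B, C: A + (B - C) // 2,    # 5
    lambda A, B, C: B + (A - C) // 2,    # 6
    lambda A, B, C: (A + B) // 2,        # 7
]

def dpcm_residuals(img, predict):
    """Difference image for one predictor; the first row and first
    column are kept unpredicted (an assumed border convention)."""
    img = img.astype(np.int32)
    res = img.copy()
    for y in range(1, img.shape[0]):
        for x in range(1, img.shape[1]):
            res[y, x] = img[y, x] - predict(img[y, x - 1],
                                            img[y - 1, x],
                                            img[y - 1, x - 1])
    return res
```

On a perfectly planar ramp image, predictor 4 (A + B - C) predicts every interior pixel exactly, so its residuals collapse to zero; this is the kind of behaviour that makes the three-neighbour predictors compress better.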
Entropy encoder (Huffman coding):
The following steps show how the Huffman code is generated:
- Step 1: Retrieve the output of the DPCM encoding.
- Step 2: Compute the occurrence probability of each value produced by the DPCM encoding and sort the values in descending order of probability.
- Step 3: Assign a Huffman codeword to each value according to its computed probability.
The probabilities of each pair of child nodes are added to form their parent node. This merging of probabilities continues until the root, with a final probability of 1.0, is reached as shown above. The Huffman coding table is thus formed.
A bit is assigned to every branch: the '0' bit to every left sub-tree and the '1' bit to every right sub-tree of each node.
Average length per symbol = 1×0.6 + 2×0.2 + 3×0.1 + 3×0.1 = 1.6 bits/symbol
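The worked example above can be reproduced with a small Huffman construction. This sketch computes only the codeword lengths (which is all the average-length formula needs), by counting how many merges each symbol participates in:

```python
import heapq

def huffman_code_lengths(probs):
    """Build a Huffman tree over the given probabilities and return
    the codeword length assigned to each symbol (index-aligned)."""
    # Heap entries: (probability, unique tie-breaker, member symbol indices)
    heap = [(p, i, [i]) for i, p in enumerate(probs)]
    heapq.heapify(heap)
    lengths = [0] * len(probs)
    counter = len(probs)
    while len(heap) > 1:
        p1, _, s1 = heapq.heappop(heap)
        p2, _, s2 = heapq.heappop(heap)
        for s in s1 + s2:          # every merge adds one bit for its members
            lengths[s] += 1
        heapq.heappush(heap, (p1 + p2, counter, s1 + s2))
        counter += 1
    return lengths

probs = [0.6, 0.2, 0.1, 0.1]       # the worked example from the text
lengths = huffman_code_lengths(probs)
avg = sum(p * l for p, l in zip(probs, lengths))
```

For the probabilities 0.6, 0.2, 0.1, 0.1 this yields lengths 1, 2, 3, 3 and an average of 1.6 bits/symbol, matching the figure above.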
In our experiment, since the optimal predictor is B + (A - C)/2, the codebook is generated for this predictor. The DPCM values range over [-75, 87]. The table below shows the first 8 of the 162 DPCM values in the Huffman codebook.
RESULT DISCUSSION: (Optimal predictor B + (A - C)/2, encoding part)
Average bits/pixel = Compressed size (bits) / (columns × rows) = Compressed size / (512 × 512)
After generating the 162 codewords, the compressed image occupies 902428 bits. The derived average is about 3.442 bits/pixel.
Compression ratio = Original image size / Compressed image size
Compression ratio = 8 bits / (Average bits/pixel) (image depth: 8 bits per pixel)
The original image (Lena512.pgm) has a fixed depth of 8 bits/pixel. Therefore, the compression ratio achieved with the Huffman codes is 8/3.442 = 2.3239.
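The two formulas can be checked directly against the figures quoted above:

```python
compressed_bits = 902428            # total compressed size reported above
pixels = 512 * 512                  # Lena512.pgm resolution

avg_bpp = compressed_bits / pixels  # average bits per pixel
ratio = 8 / avg_bpp                 # the original depth is 8 bits/pixel
```

This gives about 3.442 bits/pixel and a compression ratio of about 2.3239, consistent with the reported results.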
To write the file in binary format, every 8 adjacent bits are read and converted to an integer; these integers are then written to the file for later Huffman decoding.
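The byte-packing step can be sketched as follows. Zero-padding the final partial byte on the right is an assumption here; the report does not say how it handles a bit count that is not a multiple of 8.

```python
def pack_bits(bitstring):
    """Group a string of '0'/'1' characters into bytes (integers 0-255).
    The last group is zero-padded on the right if needed (assumption)."""
    padded = bitstring + '0' * (-len(bitstring) % 8)
    return bytes(int(padded[i:i + 8], 2) for i in range(0, len(padded), 8))

def unpack_bits(data, nbits):
    """Inverse: recover the first nbits bits from the packed bytes."""
    return ''.join(format(byte, '08b') for byte in data)[:nbits]
```

For instance, the 24-bit stream 110111001101110011011101 quoted later in the report packs into the three integers 220, 220, 221.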
Entropy decoder (Huffman decoding):
The Huffman decoder uses the compressed image to regenerate the DPCM table.
- Step 1: The integers in the compressed file are read and converted back into a sequence of binary bits.
- Step 2: Bits are read and accumulated until the accumulated pattern matches a codeword in the codebook; since Huffman codes are prefix-free, the first match is unambiguous. The corresponding DPCM value is then retrieved for that bit pattern.
- Step 3: Step 2 is repeated until the bit sequence is exhausted. The table of values generated in this way is the DPCM table; this is called the DPCM decoding process.
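The decoding loop in Step 2 and Step 3 can be sketched as below. The codebook used in the example is a small illustrative one, not the report's 162-entry table:

```python
def huffman_decode(bits, codebook):
    """Decode a bit string with a codeword->value codebook.
    Bits are accumulated until they match a codeword; the prefix-free
    property of Huffman codes guarantees the first match is correct."""
    values, current = [], ''
    for bit in bits:
        current += bit
        if current in codebook:
            values.append(codebook[current])
            current = ''
    return values
```

For example, with the hypothetical codebook {'0': 5, '10': -3, '11': 7}, the stream '010110' splits as 0|10|11|0 and decodes to the values 5, -3, 7, 5.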
After converting the integers back into a sequence of bits, the encoded bit stream of the first few pixels is: 110111001101110011011101
According to the Huffman lookup table, the corresponding DPCM values are: 87 87 86
After DPCM decoding, the reconstructed image is retrieved.
From the images above, it can be concluded that the compression is lossless: there is no difference between the original and reconstructed images, either visually in the images shown or in the DPCM values encoded and decoded.
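The lossless property can also be checked mechanically with a DPCM round trip. This is a sketch using the report's optimal predictor B + (A - C)/2; integer division and raw first-row/first-column borders are assumptions, as before:

```python
import numpy as np

def predict(A, B, C):
    # The report's optimal predictor, with integer division assumed.
    return B + (A - C) // 2

def dpcm_encode(img):
    """Residuals for the optimal predictor; the first row and column
    are stored unpredicted (assumed border convention)."""
    src = img.astype(np.int32)
    out = src.copy()
    for y in range(1, src.shape[0]):
        for x in range(1, src.shape[1]):
            out[y, x] = src[y, x] - predict(src[y, x - 1],
                                            src[y - 1, x],
                                            src[y - 1, x - 1])
    return out

def dpcm_decode(out):
    """Invert the encoding by adding the prediction back, computed
    from the already-reconstructed neighbours."""
    img = out.copy()
    for y in range(1, out.shape[0]):
        for x in range(1, out.shape[1]):
            img[y, x] = out[y, x] + predict(img[y, x - 1],
                                            img[y - 1, x],
                                            img[y - 1, x - 1])
    return img.astype(np.uint8)
```

Because the decoder adds back exactly the prediction the encoder subtracted, computed from identical neighbour values, the reconstruction is bit-exact for any input image.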
Analysing Different Predictors for Lena.pgm
From the diagram above, we can see that the compression ratio obtained using a single neighbouring pixel (Predictor 1: A) for prediction is lower than that obtained using two or three neighbouring pixels, and predictors that use 2 neighbouring pixels achieve a lower compression ratio than those that use 3.
Analysing Different Predictors for other images:
To further evaluate the performance of the predictors, 4 different images were chosen and the predictors' performance was measured on each.
From the diagrams above, the compression ratio of methods 3 to 6 ((A + B + C)/3, A + B - C, A + (B - C)/2 and B + (A - C)/2) is higher than that of the rest. Since there is a strong correlation between adjacent pixels, any predictor that can exploit this correlation produces a good compression ratio.
Consequently, the methods that use only 1 or 2 neighbouring pixels cannot exploit this correlation well.
The first method uses predictor A (only 1 pixel). Its encoded values depend only on the previous value in the same row, so the values in the first column cannot be predicted; in other words, this predictor does not exploit the correlation between neighbouring pixels appropriately.
On the other hand, images whose intensities are concentrated in the centre of the range have a lower compression ratio than those that spread over the entire white-grey-black scale.
Since the experiments were conducted only on .pgm (greyscale) images, the compression ratio of colour images (RGB, YCbCr, HSV colour spaces) could not be determined.
From all the experiments and diagrams above, we can assert that there is no single predictor that achieves the best compression ratio for every image; different images need different predictors to achieve better compression results.
In general, since there is strong correlation between spatially adjacent pixels, any predictor that can exploit this correlation produces a good compression ratio. In our experiment, compression methods 3 to 6 produce better compression than the rest.
In practice, static Huffman coding is popular and easy to implement, but it does not achieve the theoretically optimal number of bits to encode symbols because of one constraint: a Huffman codeword must be an integer number of bits long. For example, if the probability of a symbol is 0.9, the optimal codeword length would be about 0.15 bits, but in Huffman coding it is 1 bit. Furthermore, if the symbol probabilities are unknown or not stable (the source changes), dynamic Huffman coding should be chosen, but its implementation is quite complicated.
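The 0.15-bit figure is the information content of the symbol, and the gap between the source entropy and the Huffman average length quantifies the integer-length overhead. A quick check with an illustrative skewed source (the three-symbol probabilities below are an example, not from the report):

```python
import math

p = 0.9
optimal_bits = -math.log2(p)          # ideal codeword length, about 0.152 bits

probs = [0.9, 0.05, 0.05]             # an illustrative skewed source
entropy = -sum(q * math.log2(q) for q in probs)
# Any 3-symbol Huffman code assigns lengths 1, 2, 2 -- whole bits each.
huffman_avg = 0.9 * 1 + 0.05 * 2 + 0.05 * 2
```

Here the entropy is about 0.57 bits/symbol while Huffman spends 1.1 bits/symbol, illustrating why near-deterministic sources favour arithmetic coding.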
On the other hand, to achieve non-integer-length coding with probabilities derived in real time, arithmetic coding is a good alternative. However, arithmetic coding is slow in practice due to its many multiplications (and, in some cases, divisions).