Using the model produced by the learning mechanism and the binary file (.dat) produced by the compression mechanism, the image sequence originally fed to the compression mechanism is restored. Keyframes are passed through the model to reproduce the inference results of the compression mechanism. Density-based Spatial Decoding and Partitioned Entropy Decoding are then applied in the reverse order of the compression mechanism to recover the original differences. Because error-bounded quantization is lossy, it has no counterpart in the decompression mechanism. Finally, the inference results and the differences are added together to restore the original images, which are then output.

- **Developed by:** Mina
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** Amarjit Singh
### Model Sources [optional]
- **Repository:** https://github.com/mina98/TEZip-Libtorch-Main.git
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]

## Uses

The original source code is available at https://github.com/mina98/TEZip-Libtorch-Main.git.

### Setting Up the Environment

Before running the model, set up the necessary environment variables:

```
export PATH=/home/mwahba/cuda-12.1/bin${PATH:+:${PATH}}
export LD_LIBRARY_PATH=/home/mwahba/cuda-12.1/lib64${LD_LIBRARY_PATH:+:${LD_LIBRARY_PATH}}
```

### Creating Training Data

To create training data, use the following command:

```
./build/main train_data_create train_data_dir val_data_dir(optional) save_dir
```

Replace:

- `train_data_dir` with the path to your training data,
- `val_data_dir` (optional) with the path to your validation data,
- `save_dir` with the directory where the processed data should be saved.

Run example:

```
./build/main train_data_create 2011_09_26/2011_09_26_drive_0027_sync/ data
```

## How to Get Started with the Model
```
./build/main tezip --learn model/ data/ --verbose --model convlstm
```

### Compressing Data

To compress data using the trained model, use the following command:

```
./build/main compress model_dir data_dir save_dir --preprocess preprocess_level --window window_size --verbose --mode mode_type --model model_name --bound bound_value
```

Replace:

- `model_dir` with the path to the trained model,
- `data_dir` with the directory containing the data to be compressed,
- `save_dir` with the directory where the compressed data will be saved,
- `preprocess_level` with the preprocessing level (e.g., 3),
- `window_size` with the window size for processing (e.g., 5),
- `mode_type` with the compression mode (e.g., pwrel),
- `model_name` with the model type (e.g., convlstm or prednet),
- `bound_value` with the compression bound (e.g., 0.000001).

Run example:

```
./build/main tezip --compress model_convlstm 2011_09_26/2011_09_26_drive_0027_sync/image_02 comp_0.1/ --preprocess 3 --window 5 --verbose --mode pwrel --model convlstm --bound 0.1
```

### Decompressing Data

To decompress data using the trained model, use the following command:

```
./build/main decompress model_dir compressed_data_dir save_dir --verbose --model model_name
```

Replace:

- `model_dir` with the path to the trained model,
- `compressed_data_dir` with the directory containing the compressed data,
- `save_dir` with the directory where the decompressed data will be saved,
- `model_name` with the model type (e.g., convlstm).

The `--verbose` flag enables detailed logging of the decompression process.

Run example:

```
./build/main tezip --uncompress model_convlstm comp_0.1/ decomp_0.1 --verbose --model convlstm
```

Below is a breakdown of the project's file structure:

- `compress.cpp`: Implements the compression functionality.
- `decompress.cpp`: Implements the decompression functionality.
- `the_data.cpp`: Generates sequences for LibTorch models.
- `conv_lstm_cell.cpp`: Defines the ConvLSTM cell used in the ConvLSTM model.
- `convlstm.cpp`: Defines the ConvLSTM layer used in the ConvLSTM model.
- `seq2seq.cpp`: Creates an instance of a ConvLSTM model.
- `train_convlstm.cpp`: Contains the training function for the ConvLSTM model.
- `convlstmcell.cpp`: Implements the ConvLSTM cell for PredNet.
- `prednet.cpp`: Implements the PredNet definition.
- `tezip.cpp`: Acts as the master module, integrating training, compression, and decompression.
- `train_data_create.cpp`: Generates hickle data for training.
- `train.cpp`: Contains the training function for the PredNet model.
- `manual_data_loader.cpp`: Simulates the data loader functionality.
- `main.cpp`: Serves as the master file to run both the train_data_create and tezip modules.

This structure outlines the responsibilities of each file in the project.

## Evaluation

![image/png](https://cdn-uploads.huggingface.co/production/uploads/66c5ec58c9ca376ae97fc21d/ygHSu9MW81uNamiqi9hJq.png)

<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics

- Compression ratio
- Compression/decompression time

#### Summary
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->