The original source code is available at https://github.com/mina98/TEZip-Libtorch-Main.git
# Neural Compression Model

## Model Description

This model implements neural compression using ConvLSTM and PredNet architectures for efficient video/image data compression and decompression.

## Environment Setup

### Prerequisites

- CUDA 12.1
- LibTorch
- C++ build environment

### Environment Variables

Set up the following environment variables before running:

```bash
export PATH=/home/mwahba/cuda-12.1/bin${PATH:+:${PATH}}
export LD_LIBRARY_PATH=/home/mwahba/cuda-12.1/lib64${LD_LIBRARY_PATH:+:${LD_LIBRARY_PATH}}
```
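
As a quick sanity check after setting these (a sketch only; the paths are site-specific, so point `CUDA_HOME` at your own CUDA install), you can confirm the toolchain is visible:

```shell
# Self-contained sanity check: set the variables and verify nvcc resolves.
CUDA_HOME=/home/mwahba/cuda-12.1            # adjust to your install
export PATH=${CUDA_HOME}/bin${PATH:+:${PATH}}
export LD_LIBRARY_PATH=${CUDA_HOME}/lib64${LD_LIBRARY_PATH:+:${LD_LIBRARY_PATH}}

if command -v nvcc >/dev/null 2>&1; then
    nvcc --version                          # prints the toolkit release
else
    echo "nvcc not on PATH - check CUDA_HOME"
fi
```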

## Usage

### 1. Creating Training Data

**Command:**

```bash
./build/main train_data_create train_data_dir val_data_dir(optional) save_dir
```

**Parameters:**

- `train_data_dir`: Path to your training data
- `val_data_dir`: (Optional) Path to your validation data
- `save_dir`: Directory where the processed data should be saved

**Example:**

```bash
./build/main train_data_create 2011_09_26/2011_09_26_drive_0027_sync/ data
```

### 2. Training the Model

**Command:**

```bash
./build/main train_model model_dir data_dir save_dir --verbose --model model_name
```

**Parameters:**

- `model_dir`: Path to the directory containing the model
- `data_dir`: Dataset path
- `save_dir`: Directory where the trained model should be saved
- `model_name`: Model architecture (e.g., `convlstm`, `prednet`)
- `--verbose`: Enables detailed logging

**Example:**

```bash
./build/main tezip --learn model/ data/ --verbose --model convlstm
```

### 3. Data Compression

**Command:**

```bash
./build/main compress model_dir data_dir save_dir --preprocess preprocess_level --window window_size --verbose --mode mode_type --model model_name --bound bound_value
```

**Parameters:**

- `model_dir`: Path to the trained model
- `data_dir`: Directory containing the data to be compressed
- `save_dir`: Directory where the compressed data will be saved
- `preprocess_level`: Preprocessing level (e.g., `3`)
- `window_size`: Window size for processing (e.g., `5`)
- `mode_type`: Compression mode (e.g., `pwrel`)
- `model_name`: Model type (`convlstm`, `prednet`)
- `bound_value`: Compression bound (e.g., `0.000001`)

**Example:**

```bash
./build/main tezip --compress model_convlstm 2011_09_26/2011_09_26_drive_0027_sync/image_02 comp_0.1/ --preprocess 3 --window 5 --verbose --mode pwrel --model convlstm --bound 0.1
```

### 4. Data Decompression

**Command:**

```bash
./build/main decompress model_dir compressed_data_dir save_dir --verbose --model model_name
```

**Parameters:**

- `model_dir`: Path to the trained model
- `compressed_data_dir`: Directory containing the compressed data
- `save_dir`: Directory where the decompressed data will be saved
- `model_name`: Model type (e.g., `convlstm`)
- `--verbose`: Enables detailed logging

**Example:**

```bash
./build/main tezip --uncompress model_convlstm comp_0.1/ decomp_0.1 --verbose --model convlstm
```

## Model Architecture

### Supported Models

- **ConvLSTM**: Convolutional LSTM for spatiotemporal sequence modeling
- **PredNet**: Predictive coding network for video prediction

### Key Features

- Neural compression and decompression
- Support for video/image sequences
- Configurable preprocessing levels
- Adjustable compression bounds
- Window-based processing

## File Structure

### Core Components

- **`main.cpp`**: Master file for running the `train_data_create` and `tezip` modules
- **`tezip.cpp`**: Master module integrating training, compression, and decompression
- **`train_data_create.cpp`**: Generates hickle data for training

### Model Implementation

- **`convlstm.cpp`**: ConvLSTM layer implementation
- **`conv_lstm_cell.cpp`**: ConvLSTM cell definition
- **`seq2seq.cpp`**: ConvLSTM model instance creation
- **`train_convlstm.cpp`**: ConvLSTM training function
- **`prednet.cpp`**: PredNet model implementation
- **`convlstmcell.cpp`**: ConvLSTM cell for PredNet
- **`train.cpp`**: PredNet training function

### Data Processing

- **`the_data.cpp`**: Sequence generation for LibTorch models
- **`manual_data_loader.cpp`**: Data loader functionality simulation
- **`compress.cpp`**: Compression functionality implementation
- **`decompress.cpp`**: Decompression functionality implementation

## Technical Requirements

### Dependencies

- CUDA 12.1
- LibTorch
- C++ compiler with C++14 support or higher
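
The README does not include build instructions. As a hedged sketch only (it assumes the repository builds with CMake, the standard route for LibTorch projects; the actual build files and source list may differ), a minimal `CMakeLists.txt` for the `main` binary would look like:

```cmake
# Hypothetical CMakeLists.txt sketch - the real project layout may differ.
cmake_minimum_required(VERSION 3.18)
project(tezip_libtorch)

# Locate LibTorch; configure with:
#   cmake -DCMAKE_PREFIX_PATH=/path/to/libtorch ..
find_package(Torch REQUIRED)

add_executable(main main.cpp)   # add the remaining .cpp sources as needed
target_link_libraries(main "${TORCH_LIBRARIES}")
set_property(TARGET main PROPERTY CXX_STANDARD 14)
```

Built from a `build/` subdirectory this way, the binary lands at `./build/main`, matching the commands above.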

### Hardware Requirements

- NVIDIA GPU with CUDA support
- Sufficient GPU memory for model training and inference

## Performance

### Compression Efficiency

- Configurable compression bounds (e.g., 0.1, 0.000001)
- Adaptive preprocessing levels
- Window-based processing for memory efficiency

### Processing Modes

- **pwrel**: Pixel-wise relative compression mode
- Additional modes may be available depending on implementation
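
To make the relative bound concrete (the exact criterion is an assumption here, since this README does not define it): a common reading of `pwrel` is that each reconstructed value must stay within `bound * |original|` of the original. A minimal sketch with hypothetical pixel values:

```shell
# Illustrative pwrel-style check. Assumption: the criterion is
# |original - reconstructed| <= bound * |original|.
bound=0.1          # relative error bound, as in --bound 0.1
orig=200           # hypothetical original pixel value
recon=185          # hypothetical reconstructed pixel value

if awk -v o="$orig" -v r="$recon" -v b="$bound" \
    'BEGIN { d = (o > r) ? o - r : r - o; exit !(d <= b * (o < 0 ? -o : o)) }'
then
    result="within bound"
else
    result="bound violated"
fi
echo "$result"     # here: |200-185| = 15 <= 0.1 * 200 = 20
```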
## Evaluation