Update README.md

README.md (changed)

**Removed content:**
| User | Overall Score |
|:---:|:---:|
| Loupe (ours) | **0.846** |
| Rank 2 | 0.8161 |
| Rank 3 | 0.8151 |
| Rank 4 | 0.815 |
| Rank 5 | 0.815 |
### 1. Create environment

```bash
conda create -y -n loupe python=3.11
conda activate loupe
pip install -r requirements.txt
mkdir -p ./pretrained_weights/PE-Core-L14-336
```
### 2. Prepare pretrained weights

Download [Perception Encoder](https://github.com/facebookresearch/perception_models) following their original instructions, and place `PE-Core-L14-336.pt` at `./pretrained_weights/PE-Core-L14-336`. This can be done with `huggingface-cli`:

```bash
export HF_ENDPOINT=https://hf-mirror.com
huggingface-cli download facebook/PE-Core-L14-336 PE-Core-L14-336.pt --local-dir ./pretrained_weights/PE-Core-L14-336
```
### 3. Prepare datasets

Download the dataset to any location of your choice. Then use the [`dataset_preprocess.ipynb`](./dataset_preprocess.ipynb) notebook to preprocess the dataset. This converts the dataset into a directly loadable `DatasetDict` and saves it in `parquet` format.

After preprocessing, you will obtain a dataset with three splits: `train`, `valid`, and `test`. Each item in these splits has the following structure:

```python
{
    "image": "path/to/image",  # loaded as an actual PIL.Image.Image object
    "mask": "path/to/mask",    # set to None for real images without masks
    "name": "basename_of_image.png"
}
```
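The item structure can be sanity-checked with a plain dictionary; the values below are made-up placeholders (in the actual splits, `image` and, when present, `mask` are loaded as `PIL.Image.Image` objects):

```python
# Made-up placeholder item mirroring the split structure described above.
fake_item = {"image": "path/to/image", "mask": None, "name": "real_0001.png"}

def is_real(item):
    # Real (non-forged) images carry no forgery mask, so "mask" is None.
    return item["mask"] is None

print(is_real(fake_item))  # a real image: no forgery mask
```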
## How to train
```bash
python src/train.py stage=cls
```

During training, two directories will be automatically created:

* `./results/checkpoints` — contains the DeepSpeed-format checkpoint with the highest AUC on the validation set (when using the default training strategy, which can be configured in `./configs/base.yaml`).
* `./results/{stage.name}` — contains logs in TensorBoard format. You can monitor the training progress by running:

```bash
tensorboard --logdir=./results/cls
# or `tensorboard --logdir=./results/seg`, etc.
```
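As a rough illustration of the checkpoint-selection behavior, "keep the checkpoint with the highest validation AUC" amounts to picking the best entry of a metric history; the AUC values below are invented, and the real selection is handled by the training framework according to `./configs/base.yaml`:

```python
# Invented (epoch, val_auc) history; the best epoch is the one whose
# checkpoint would be kept under the default strategy.
def best_checkpoint(history):
    return max(history, key=lambda pair: pair[1])

history = [(1, 0.79), (2, 0.83), (3, 0.81)]
print(best_checkpoint(history))  # → (2, 0.83)
```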
The third stage is optional; it jointly trains the backbone, classifier head, and segmentation head. By default, a portion of the validation set is used as training data, while the remainder is reserved for validation. I use the validation set as an extra training set because the test set used in the competition is slightly out-of-distribution (OOD), and I found that continuing to train on the original training set results in overfitting. However, if you prefer to train the whole network from scratch directly on the training set, you can do so by:

```bash
```
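The validation-set carving described above can be sketched as follows; the 80/20 ratio, the seeded shuffle, and the file names are assumptions for illustration, not the repo's actual configuration:

```python
import random

# Hypothetical sketch: reuse part of the validation split as extra training
# data and hold out the remainder for validation.
def split_valid(names, train_frac=0.8, seed=0):
    rng = random.Random(seed)
    shuffled = list(names)
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * train_frac)
    return shuffled[:cut], shuffled[cut:]

extra_train, held_out = split_valid([f"img_{i}.png" for i in range(10)])
print(len(extra_train), len(held_out))  # → 8 2
```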
All training configurations can be adjusted within the `configs/` directory. Detailed comments are provided to facilitate quick and clear configuration.
## How to test or predict

By default, testing is performed on the full validation set. This means it is not suitable for evaluating a Loupe model trained in the third stage, since that stage trains on the validation set itself (see above). Alternatively, if you are willing to make a slight modification to the [data loading process](./src/data_module.py) so that Loupe trains on the training set instead, this limitation can be avoided.

To evaluate a trained model, you can run:
```bash
python src/infer.py stage=test ckpt.checkpoint_paths=["checkpoints/cls/model.safetensors","checkpoints/seg/model.safetensors"]
```
The `ckpt.checkpoint_paths` configuration is defined under `configs/ckpt`. It is a list that specifies the checkpoints to load sequentially during execution.
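What "loaded sequentially" implies can be sketched with plain dictionaries standing in for safetensors files; this is a simplification, and the repo's actual loading code may handle overlapping keys differently:

```python
# Hypothetical sketch: apply checkpoint state dicts in the order given by
# ckpt.checkpoint_paths, with later checkpoints overriding shared keys.
def load_sequentially(state_dicts):
    merged = {}
    for sd in state_dicts:
        merged.update(sd)
    return merged

cls_ckpt = {"backbone.w": "from_cls", "cls_head.w": "from_cls"}
seg_ckpt = {"backbone.w": "from_seg", "seg_head.w": "from_seg"}
merged = load_sequentially([cls_ckpt, seg_ckpt])
print(merged["backbone.w"])  # → from_seg (the later checkpoint wins)
```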
The prediction step is essentially the same as the test step; you only need to add one parameter to specify the output directory for predictions. For example:
```bash
python src/infer.py stage=test \
    ckpt.checkpoint_paths=["checkpoints/cls/model.safetensors","checkpoints/seg/model.safetensors"] \
    stage.pred_output_dir=./pred_outputs
```
The classification predictions will be saved in `./pred_outputs/predictions.txt`, and the mask outputs will be stored in `./pred_outputs/masks`. For more details on available parameters, please refer to `configs/stage/test.yaml`.
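For illustration, per-image scores could be collected into a text file like this; the actual line format of `predictions.txt` is produced by `src/infer.py` and may differ:

```python
import io

# Hypothetical "name score" line format for a predictions file; the real
# layout is defined by the repo, not shown here.
def write_predictions(pairs, fh):
    for name, score in pairs:
        fh.write(f"{name} {score:.4f}\n")

buf = io.StringIO()
write_predictions([("a.png", 0.91), ("b.png", 0.12)], buf)
print(buf.getvalue(), end="")
```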
## Code reading guides

Nobody seems to care about this work, so this section is left blank.
**Added content:**

---
title: Loupe
emoji: 📊
colorFrom: yellow
colorTo: green
sdk: gradio
sdk_version: 5.34.0
python_version: 3.11
app_file: app.py
pinned: false
---

# {{Project Name}} Demo
[](https://huggingface.co/spaces/{{your-username}}/{{your-space-name}})

{{Short project description explaining what this Demo does and what it is for}}
## Features

- Feature 1
- Feature 2
- Feature 3
## Quick Start

1. Click the "Open in Spaces" button above
2. Wait for the app to finish loading
3. Follow the on-screen instructions to use the Demo
## Run Locally

To run this Demo locally:

```bash
git clone https://huggingface.co/spaces/xxwyyds/loupe
cd loupe
pip install -r requirements.txt
python app.py
```