---
base_model: DCAMA
language: en
license: mit
tags:
  - few-shot segmentation
  - distillation
  - image-segmentation
name: DistillFSS-DCAMA
library: pytorch
ArXiv: '2512.05613'
repo_url: https://github.com/pasqualedem/DistillFSS
paper_url: https://arxiv.org/abs/2512.05613
parameters: |
  dataloader:
    num_workers: 0
  dataset:
    datasets:
      test_weedmap:
        prompt_images: 5
        test_root: data/weedmap/0_rotations_processed_003_test/RedEdge/003
        train_root: data/weedmap/0_rotations_processed_003_test/RedEdge/000
    preprocess:
      image_size: 384
      mean:
      - 0.485
      - 0.456
      - 0.406
      std:
      - 0.229
      - 0.224
      - 0.225
  model:
    name: distillator
    params:
      student:
        name: conv_distillator
        num_classes: 2
      teacher:
        backbone: swin
        backbone_checkpoint: checkpoints/swin_base_patch4_window12_384.pth
        concat_support: false
        image_size: 384
        model_checkpoint: checkpoints/swin_fold0_pascal_modcross_soft.pt
        name: dcama
  push_to_hub:
    repo_name: pasqualedem/DistillFSS_WeedMap_DCAMA_5shot
  refinement:
    hot_parameters:
    - model.conv1
    - model.conv2
    - model.conv3
    - model.mixer1
    - model.mixer2
    - model.mixer3
    - student
    iterations_is_num_classes: false
    loss:
      name: refine_distill
    lr: 0.001
    max_iterations: 500
    subsample: 1
    substitutor: paired
  test:
    prompt_to_use: null
  tracker:
    cache_dir: tmp
    group: WeedMap
    log_frequency: 1
    project: FSSWeed
    tags:
    - WeedMap
    - Distill
    test_image_log_frequency: 10
    tmp_dir: tmp
    train_image_log_frequency: 25
repo_id: pasqualedem/DistillFSS_WeedMap_DCAMA_5shot
---

DistillFSS-DCAMA is a distilled version of the DCAMA model, specialized for a specific downstream segmentation task. The DistillFSS framework distills large few-shot segmentation models into smaller, more efficient ones while maintaining or even improving their performance on the target task.
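The actual distillation objective lives in the repository (the config names a `refine_distill` loss). As a generic illustration of the idea only, not the DistillFSS implementation, a student can be trained to match the teacher's temperature-softened outputs with a KL-style term:

```python
import math

def softmax(logits, temperature=1.0):
    """Temperature-softened softmax over a list of logits."""
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def kl_divergence(p, q, eps=1e-12):
    """KL(p || q): how far the student distribution q is from the teacher p."""
    return sum(pi * math.log((pi + eps) / (qi + eps)) for pi, qi in zip(p, q))

# Hypothetical per-pixel logits for the two classes in this config
# (num_classes: 2), from teacher and student respectively.
teacher_logits = [2.0, 0.5]
student_logits = [1.5, 1.0]

T = 2.0  # a higher temperature exposes more of the teacher's soft structure
loss = kl_divergence(softmax(teacher_logits, T), softmax(student_logits, T))
```

A training loop would minimize such a term (often combined with a supervised loss) over the student's trainable parameters; the exact formulation used by DistillFSS is defined in the repository.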

How to use this model:

1. Clone the repository:

   ```bash
   git clone https://github.com/pasqualedem/DistillFSS.git
   ```

2. Install the required dependencies as specified in the repository.

3. Load the model:

   ```python
   from distillfss.models.dcama.distillator import DistilledDCAMA

   model = DistilledDCAMA.from_pretrained("pasqualedem/DistillFSS_WeedMap_DCAMA_5shot")
   ```
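The `preprocess` section of the config fixes an input size of 384 and ImageNet mean/std, i.e. each channel of a `[0, 1]`-scaled image is normalized as `(x - mean) / std`. A minimal sketch of that arithmetic in plain Python (the repository presumably applies it through its own transforms; `normalize_pixel` below is illustrative only):

```python
def normalize_pixel(rgb, mean=(0.485, 0.456, 0.406), std=(0.229, 0.224, 0.225)):
    """Normalize one RGB pixel already scaled to [0, 1], channel-wise."""
    return tuple((c - m) / s for c, m, s in zip(rgb, mean, std))

# A mid-gray pixel after scaling raw [0, 255] values to [0, 1]:
pixel = (128 / 255, 128 / 255, 128 / 255)
normalized = normalize_pixel(pixel)
```

In practice this would be applied per pixel to a 384x384 image tensor before the forward pass.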

YAML configuration:

```yaml
dataloader:
  num_workers: 0
dataset:
  datasets:
    test_weedmap:
      prompt_images: 5
      test_root: data/weedmap/0_rotations_processed_003_test/RedEdge/003
      train_root: data/weedmap/0_rotations_processed_003_test/RedEdge/000
  preprocess:
    image_size: 384
    mean:
    - 0.485
    - 0.456
    - 0.406
    std:
    - 0.229
    - 0.224
    - 0.225
model:
  name: distillator
  params:
    student:
      name: conv_distillator
      num_classes: 2
    teacher:
      backbone: swin
      backbone_checkpoint: checkpoints/swin_base_patch4_window12_384.pth
      concat_support: false
      image_size: 384
      model_checkpoint: checkpoints/swin_fold0_pascal_modcross_soft.pt
      name: dcama
push_to_hub:
  repo_name: pasqualedem/DistillFSS_WeedMap_DCAMA_5shot
refinement:
  hot_parameters:
  - model.conv1
  - model.conv2
  - model.conv3
  - model.mixer1
  - model.mixer2
  - model.mixer3
  - student
  iterations_is_num_classes: false
  loss:
    name: refine_distill
  lr: 0.001
  max_iterations: 500
  subsample: 1
  substitutor: paired
test:
  prompt_to_use: null
tracker:
  cache_dir: tmp
  group: WeedMap
  log_frequency: 1
  project: FSSWeed
  tags:
  - WeedMap
  - Distill
  test_image_log_frequency: 10
  tmp_dir: tmp
  train_image_log_frequency: 25
```
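In the `refinement` section, `hot_parameters` reads like a list of parameter-name prefixes kept trainable during refinement while everything else stays frozen; that interpretation is an assumption, not something the card confirms. A plain-Python sketch of such prefix matching (`is_hot` is a hypothetical helper, not part of the repository):

```python
# Prefixes taken from refinement.hot_parameters in the config above.
HOT_PREFIXES = [
    "model.conv1", "model.conv2", "model.conv3",
    "model.mixer1", "model.mixer2", "model.mixer3",
    "student",
]

def is_hot(param_name, prefixes=HOT_PREFIXES):
    """True if a parameter name matches, or is nested under, a 'hot' prefix."""
    return any(param_name == p or param_name.startswith(p + ".") for p in prefixes)

# Example parameter names one might see in a model's state dict:
names = ["model.conv1.weight", "model.backbone.layer1.weight", "student.head.bias"]
trainable = [n for n in names if is_hot(n)]
```

Under this reading, only the listed convolution/mixer submodules and the student are updated at `lr: 0.001` for up to `max_iterations: 500` steps.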