CIDAS/clipseg-rd64-refined

Image Segmentation · Transformers · PyTorch · Safetensors · clipseg · vision

Instructions for using CIDAS/clipseg-rd64-refined with libraries, inference providers, notebooks, and local apps.

  • Libraries: Transformers

    How to use CIDAS/clipseg-rd64-refined with Transformers:

    # Option 1: use a pipeline as a high-level helper
    from transformers import pipeline

    pipe = pipeline("image-segmentation", model="CIDAS/clipseg-rd64-refined")

    # Option 2: load the processor and model directly
    from transformers import AutoProcessor, CLIPSegForImageSegmentation

    processor = AutoProcessor.from_pretrained("CIDAS/clipseg-rd64-refined")
    model = CLIPSegForImageSegmentation.from_pretrained("CIDAS/clipseg-rd64-refined")
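Once the processor and model are loaded, CLIPSeg segments an image against free-form text prompts: the processor pairs each prompt with a copy of the image, and the model returns one logit map per prompt. A minimal sketch of that inference loop follows; the blank test image and the prompt strings are illustrative stand-ins, not from the model card, and the first run downloads the model weights from the Hub.

```python
import torch
from PIL import Image
from transformers import AutoProcessor, CLIPSegForImageSegmentation

processor = AutoProcessor.from_pretrained("CIDAS/clipseg-rd64-refined")
model = CLIPSegForImageSegmentation.from_pretrained("CIDAS/clipseg-rd64-refined")

# Illustrative input: a blank image standing in for a real photo.
image = Image.new("RGB", (352, 352), "white")
prompts = ["a cat", "a remote control"]  # example prompts, one mask each

# The processor expects one image per text prompt, hence the repetition.
inputs = processor(text=prompts, images=[image] * len(prompts),
                   padding=True, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # one 2-D logit map per prompt

# Sigmoid turns logits into per-pixel probabilities; threshold for masks.
masks = torch.sigmoid(logits) > 0.5
print(logits.shape, masks.shape)
```

The logit maps come out at the processor's working resolution rather than the original image size, so for display they are typically resized back to the source image before thresholding.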
  • Notebooks: Google Colab, Kaggle
Community discussions (9)

📋 Documentation Enhancement Suggestion (#10, opened 3 months ago by CroviaTrust)

📋 Documentation Enhancement Suggestion (#9, opened 3 months ago by CroviaTrust)

Adding `safetensors` variant of this model (#5, opened almost 3 years ago by SFconvertbot)

What about tensorflow version? (#4, opened almost 3 years ago by a1b1)

How do you get this to work with higher resolution images? Would it require retraining? (#3, opened about 3 years ago by wtl890103123)

Can you provide an example script for this task (#2, opened about 3 years ago by MonsterMMORPG)

Nice work :) (#1, opened over 3 years ago by Bailey24)