Instructions for using q-future/one-align with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
- Transformers
How to use q-future/one-align with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("zero-shot-image-classification", model="q-future/one-align", trust_remote_code=True)
pipe(
    "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/hub/parrots.png",
    candidate_labels=["animals", "humans", "landscape"],
)
```

```python
# Load model directly
from transformers import AutoModel

model = AutoModel.from_pretrained("q-future/one-align", trust_remote_code=True, dtype="auto")
```

A sketch of the same pipeline run on a local image follows the notebook links below.
- Notebooks
- Google Colab
- Kaggle
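For local inference, here is a minimal sketch of the same pipeline applied to a file on disk instead of a URL. The file name `photo.jpg`, the GPU device index, and the quality-oriented candidate labels are illustrative assumptions, not part of the model card.

```python
# A minimal local-inference sketch (assumptions: "photo.jpg" exists on disk
# and a GPU is available at index 0; drop the device argument to run on CPU).
from PIL import Image
from transformers import pipeline

pipe = pipeline(
    "zero-shot-image-classification",
    model="q-future/one-align",
    trust_remote_code=True,  # the repo ships custom modeling code
    device=0,
)

image = Image.open("photo.jpg")
# candidate_labels are free-form strings; these quality-style labels are
# illustrative and can be swapped for any label set.
result = pipe(image, candidate_labels=["high quality", "low quality"])
print(result)  # e.g. [{"label": "high quality", "score": ...}, ...]
```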
Upload visual_encoder.py with huggingface_hub
visual_encoder.py (+1, -0)

```diff
@@ -690,6 +690,7 @@ class MplugOwlVisualAbstractorLayer(nn.Module):
 
 
 class MplugOwlVisualAbstractorEncoder(nn.Module):
+    _no_split_modules = ["MplugOwlVisualAbstractorLayer"]
     def __init__(self, config):
         super().__init__()
         self.config = config
```
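For context on what this one-line change does: `_no_split_modules` is a hint read by Accelerate's big-model inference when a checkpoint is loaded with `device_map="auto"`. Modules listed there are never sharded across devices, so each `MplugOwlVisualAbstractorLayer` is placed whole on a single GPU or on CPU, which avoids cross-device tensor errors inside a layer. A hedged sketch of a load that relies on it (the `device_map` and `dtype` arguments are illustrative, not part of this commit):

```python
# Sketch: loading across multiple devices. With device_map="auto", Accelerate
# partitions the model but keeps every module named in _no_split_modules
# (now including MplugOwlVisualAbstractorLayer) intact on one device.
from transformers import AutoModel

model = AutoModel.from_pretrained(
    "q-future/one-align",
    trust_remote_code=True,
    dtype="auto",
    device_map="auto",  # requires the `accelerate` package to be installed
)
print(model.hf_device_map)  # inspect how modules were placed across devices
```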