---
title: Replicate Inference Provider Examples
emoji: 🚀
colorFrom: blue
colorTo: green
sdk: static
pinned: true
---

# Replicate Inference Provider Examples

Use Replicate through Hugging Face's standard `InferenceClient` by setting `provider="replicate"`. These examples authenticate with your `HF_TOKEN` and use models available through Hugging Face Inference Providers.
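Before running the examples, install the `huggingface_hub` package and export a Hugging Face access token as `HF_TOKEN` (the token value below is a placeholder, not a real token):

```shell
# Install (or upgrade) the Hugging Face Hub client library
pip install --upgrade huggingface_hub

# Export your access token so the examples can read it from the environment
export HF_TOKEN=hf_your_token_here  # placeholder: create a token at hf.co/settings/tokens
```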

## Image generation

```python
import os
from huggingface_hub import InferenceClient

client = InferenceClient(
    provider="replicate",
    api_key=os.environ["HF_TOKEN"],
)

image = client.text_to_image(
    "A cinematic photo of an astronaut riding a horse",
    model="Tongyi-MAI/Z-Image-Turbo",
)

image.save("replicate-astronaut.png")
```
32
+
33
+ ## Image editing
34
+
35
+ ```python
36
+ import os
37
+ from huggingface_hub import InferenceClient
38
+
39
+ client = InferenceClient(
40
+ provider="replicate",
41
+ api_key=os.environ["HF_TOKEN"],
42
+ )
43
+
44
+ with open("cat.png", "rb") as image_file:
45
+ input_image = image_file.read()
46
+
47
+ image = client.image_to_image(
48
+ input_image,
49
+ prompt="Turn the cat into a tiger.",
50
+ model="black-forest-labs/FLUX.2-dev",
51
+ )
52
+
53
+ image.save("replicate-tiger.png")
54
+ ```

## Video generation

```python
import os
from huggingface_hub import InferenceClient

client = InferenceClient(
    provider="replicate",
    api_key=os.environ["HF_TOKEN"],
)

video = client.text_to_video(
    "A young man walking on the street",
    model="Wan-AI/Wan2.2-T2V-A14B",
)

# text_to_video returns the generated video as raw bytes; write them to a file
with open("replicate-video.mp4", "wb") as f:
    f.write(video)
```

## Speech recognition

```python
import os
from huggingface_hub import InferenceClient

client = InferenceClient(
    provider="replicate",
    api_key=os.environ["HF_TOKEN"],
)

output = client.automatic_speech_recognition(
    "sample1.flac",
    model="openai/whisper-large-v3",
)

# The transcription is available on the output's `text` attribute
print(output.text)
```
89
+
90
+ ## More resources
91
+
92
+ - [Replicate org on Hugging Face](https://huggingface.co/replicate)
93
+ - [Run with Replicate collection](https://huggingface.co/collections/replicate/run-with-replicate-6a04d0792d027edbf66c7155)
94
+ - [Replicate provider docs](https://huggingface.co/docs/inference-providers/providers/replicate)
95
+ - [Supported Replicate models](https://huggingface.co/models?inference_provider=replicate&sort=trending)