# Pipelines [[pipelines]]

Pipelines are a great and easy way to use models for inference. They abstract away most of the complex code in the library, offering a simple API dedicated to several tasks, including Named Entity Recognition, Masked Language Modeling, Sentiment Analysis, Feature Extraction, and Question Answering. See the task summary for usage examples.

The pipeline abstractions come in two flavors:
- The [`pipeline`], the most powerful object, which encapsulates all the other pipelines.
- Task-specific pipelines, available for audio, computer vision, natural language processing, and multimodal tasks.
## The pipeline abstraction [[the-pipeline-abstraction]]

The pipeline abstraction is a wrapper around all the other available pipelines. It is instantiated like any other pipeline and provides additional convenience features.
Simple call on a single item:

```python
>>> pipe = pipeline("text-classification")
>>> pipe("This restaurant is awesome")
[{'label': 'POSITIVE', 'score': 0.9998743534088135}]
```
If you want to use a specific model from the Hub, you can omit the task name as long as the model on the Hub already defines it:

```python
>>> pipe = pipeline(model="FacebookAI/roberta-large-mnli")
>>> pipe("This restaurant is awesome")
[{'label': 'NEUTRAL', 'score': 0.7313136458396912}]
```
To run the pipeline on several items, pass a list:

```python
>>> pipe = pipeline("text-classification")
>>> pipe(["This restaurant is awesome", "This restaurant is awful"])
[{'label': 'POSITIVE', 'score': 0.9998743534088135},
 {'label': 'NEGATIVE', 'score': 0.9996669292449951}]
```
To iterate over a full dataset, it is recommended to use a dataset directly. This means you don't need to allocate the whole dataset in memory at once, and you don't need to implement batching yourself. This should work just as fast as custom loops on GPU; if it doesn't, please file an issue.
```python
import datasets
from transformers import pipeline
from transformers.pipelines.pt_utils import KeyDataset
from tqdm.auto import tqdm

pipe = pipeline("automatic-speech-recognition", model="facebook/wav2vec2-base-960h", device=0)
dataset = datasets.load_dataset("superb", name="asr", split="test")

# KeyDataset (only *pt*) will simply return the item at the given key from the dict
# returned by the dataset, since we're not interested in the *target* part here.
# For sentence pair inputs, use KeyPairDataset.
for out in tqdm(pipe(KeyDataset(dataset, "file"))):
    print(out)
    # {"text": "NUMBER TEN FRESH NELLY IS WAITING ON YOU GOOD NIGHT HUSBAND"}
    # {"text": ....}
    # ....
```
For ease of use, a generator is also possible:
```python
from transformers import pipeline

pipe = pipeline("text-classification")


def data():
    while True:
        # The data could come from a dataset, a database, a queue,
        # or an HTTP request in a server.
        # Caveat: because this is iterative, you cannot use `num_workers > 1`
        # to preprocess the data with multiple threads. You can still have one
        # thread doing the preprocessing while the main thread runs the big inference.
        yield "This is a test"


for out in pipe(data()):
    print(out)
    # {'label': 'POSITIVE', 'score': ...}
    # ....
```
[[autodoc]] pipeline
## Pipeline batching [[pipeline-batching]]

All pipelines can use batching. This works whenever the pipeline uses its streaming ability, i.e. when passing a list, a Dataset, or a generator.
```python
from transformers import pipeline
from transformers.pipelines.pt_utils import KeyDataset
import datasets

dataset = datasets.load_dataset("imdb", name="plain_text", split="unsupervised")
pipe = pipeline("text-classification", device=0)
for out in pipe(KeyDataset(dataset, "text"), batch_size=8, truncation="only_first"):
    print(out)
    # [{'label': 'POSITIVE', 'score': 0.9998743534088135}]
    # Exactly the same output as before, but the contents are passed to the model in batches.
```
However, batching is not automatically a performance win. Depending on the hardware, the data, and the model, it can be a 10x speedup or a 5x slowdown.

Example where batching is mostly a speedup:
```python
from transformers import pipeline
from torch.utils.data import Dataset
from tqdm.auto import tqdm

pipe = pipeline("text-classification", device=0)


class MyDataset(Dataset):
    def __len__(self):
        return 5000

    def __getitem__(self, i):
        return "This is a test"


dataset = MyDataset()

for batch_size in [1, 8, 64, 256]:
    print("-" * 30)
    print(f"Streaming batch_size={batch_size}")
    for out in tqdm(pipe(dataset, batch_size=batch_size), total=len(dataset)):
        pass
```
```
# On GTX 970
------------------------------
Streaming no batching
100%|██████████████████████████████████████████████████████████████████████| 5000/5000 [00:26<00:00, 187.52it/s]
------------------------------
Streaming batch_size=8
100%|█████████████████████████████████████████████████████████████████████| 5000/5000 [00:04<00:00, 1205.95it/s]
------------------------------
Streaming batch_size=64
100%|█████████████████████████████████████████████████████████████████████| 5000/5000 [00:02<00:00, 2478.24it/s]
------------------------------
Streaming batch_size=256
100%|█████████████████████████████████████████████████████████████████████| 5000/5000 [00:01<00:00, 2554.43it/s]
(diminishing returns, saturated the GPU)
```
Example where batching is mostly a slowdown:
```python
class MyDataset(Dataset):
    def __len__(self):
        return 5000

    def __getitem__(self, i):
        if i % 64 == 0:
            n = 100
        else:
            n = 1
        return "This is a test" * n
```
This dataset contains occasional very long sentences compared to the others. In that case, the whole batch needs to be padded to 400 tokens, so the batch becomes [64, 400] instead of [64, 4], leading to a severe slowdown. Even worse, on bigger batches the program simply crashes.
```
------------------------------
Streaming no batching
100%|█████████████████████████████████████████████████████████████████████| 1000/1000 [00:05<00:00, 183.69it/s]
------------------------------
Streaming batch_size=8
100%|█████████████████████████████████████████████████████████████████████| 1000/1000 [00:03<00:00, 265.74it/s]
------------------------------
Streaming batch_size=64
100%|██████████████████████████████████████████████████████████████████████| 1000/1000 [00:26<00:00, 37.80it/s]
------------------------------
Streaming batch_size=256
  0%|                                                                                | 0/1000 [00:00<?, ?it/s]
Traceback (most recent call last):
  File "/home/nicolas/src/transformers/test.py", line 42, in <module>
    for out in tqdm(pipe(dataset, batch_size=256), total=len(dataset)):
....
    q = q / math.sqrt(dim_per_head)  # (bs, n_heads, q_length, dim_per_head)
RuntimeError: CUDA out of memory. Tried to allocate 376.00 MiB (GPU 0; 3.95 GiB total capacity; 1.72 GiB already allocated; 354.88 MiB free; 2.46 GiB reserved in total by PyTorch)
```
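The padding overhead behind this slowdown is easy to quantify with back-of-the-envelope arithmetic (plain Python, using the illustrative lengths from the example above):

```python
# One ~400-token item per batch of 64 forces every row to be padded to 400.
batch_size = 64
long_len, short_len = 400, 4

useful_tokens = long_len + (batch_size - 1) * short_len  # tokens that carry content
padded_tokens = batch_size * long_len                    # tokens the model actually processes

print(useful_tokens)                         # 652
print(padded_tokens)                         # 25600
print(round(padded_tokens / useful_tokens))  # ~39x more compute than needed
```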
There is no general solution to this problem, and your mileage may vary depending on your use case. Rules of thumb for users:

- Measure performance on your hardware with your actual workload. Measure, measure, and keep measuring: real numbers are the only way to go.
- If you are latency constrained (a live product doing inference), don't batch.
- If you are using CPU, don't batch.
- If you care about throughput on GPU (running your model over a bunch of static data):
  - If you have no clue about the length of the sequences ("natural" data), don't batch by default. Measure, try adding batching tentatively, and add OOM checks so you can recover when it fails (and it will fail at some point if you don't control the sequence length).
  - If your sequence lengths are very regular, batching is more likely to be very interesting; measure and push it until you hit OOM.
  - The larger your GPU, the more likely batching is to pay off.
- As soon as you enable batching, make sure you can handle OOMs nicely.
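One simple way to "handle OOMs nicely" is to back off and retry with a smaller batch. Below is a minimal sketch of that idea; `infer_with_fallback` and `run_batch` are placeholder names, and with PyTorch on GPU you would catch `torch.cuda.OutOfMemoryError` rather than `MemoryError`:

```python
def infer_with_fallback(run_batch, items, batch_size, min_batch_size=1):
    """Run `run_batch` over `items`, halving the batch size whenever it runs out of memory."""
    results = []
    i = 0
    while i < len(items):
        try:
            results.extend(run_batch(items[i : i + batch_size]))
            i += batch_size
        except MemoryError:  # on GPU: torch.cuda.OutOfMemoryError
            if batch_size <= min_batch_size:
                raise  # even the minimum batch does not fit; give up
            batch_size //= 2  # back off and retry the same slice
    return results
```

You could wrap a pipeline call as `infer_with_fallback(lambda batch: pipe(batch), texts, batch_size=64)`.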
## Pipeline chunk batching [[pipeline-chunk-batching]]

The zero-shot classification and question answering pipelines are slightly special, in the sense that a single input might yield multiple forward passes of the model. This can cause issues when using the `batch_size` argument as-is.

To work around this, both of these pipelines operate as chunk pipelines. In short, this:
```python
preprocessed = pipe.preprocess(inputs)
model_outputs = pipe.forward(preprocessed)
outputs = pipe.postprocess(model_outputs)
```
now becomes, internally:
```python
all_model_outputs = []
for preprocessed in pipe.preprocess(inputs):
    model_outputs = pipe.forward(preprocessed)
    all_model_outputs.append(model_outputs)
outputs = pipe.postprocess(all_model_outputs)
```
This should be very transparent to your code, because the pipelines are used in the same way. Since the pipeline handles the batching automatically, you don't need to worry about how many forward passes your inputs trigger, and you can optimize `batch_size` independently of the inputs. The caveats from the previous section still apply.
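The two control flows above can be illustrated with a toy stand-in (plain Python; these functions are not the real chunk pipeline API, just a sketch of the shape of the computation):

```python
def preprocess(text, chunk_size=5):
    # A chunk pipeline's preprocess acts as a generator: one input, several chunks.
    words = text.split()
    for i in range(0, len(words), chunk_size):
        yield words[i : i + chunk_size]


def forward(chunk):
    # Stand-in for one model forward pass over one chunk.
    return len(chunk)


def postprocess(all_model_outputs):
    # Aggregate the per-chunk results back into one answer.
    return sum(all_model_outputs)


all_model_outputs = [forward(chunk) for chunk in preprocess("one two three four five six seven")]
print(postprocess(all_model_outputs))  # 7: one logical input, two forward passes
```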
## Pipeline FP16 inference [[pipeline-fp16-inference]]

Models can be run in FP16, which can be significantly faster on GPU while saving memory. Most models will not suffer noticeable performance degradation from this, and the larger the model, the less likely it is to. To enable FP16 inference, simply pass `dtype=torch.float16` or `dtype='float16'` to the pipeline constructor. Note that this only works for models with a PyTorch backend; your inputs will be converted to FP16 internally.
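The memory saving itself is easy to verify independently of any model, since a float16 value occupies 2 bytes instead of 4 (numpy is used here purely for illustration; inside a pipeline, the conversion happens on the model's PyTorch tensors):

```python
import numpy as np

fp32 = np.zeros(1_000_000, dtype=np.float32)
fp16 = fp32.astype(np.float16)

print(fp32.nbytes)  # 4000000 bytes
print(fp16.nbytes)  # 2000000 bytes -> half the memory for the same number of values
```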
## Pipeline custom code [[pipeline-custom-code]]

If you want to override a specific pipeline, please start by filing an issue for your task at hand. The goal of the pipelines is to support most use cases, so the transformers team could consider adding support for it.

If you simply want to experiment, you can subclass a pipeline:
```python
class MyPipeline(TextClassificationPipeline):
    def postprocess(self, model_outputs, **kwargs):
        # Your custom post-processing code goes here
        scores = scores * 100
        # And here


my_pipeline = MyPipeline(model=model, tokenizer=tokenizer, ...)
# or if you use the *pipeline* function, then:
my_pipeline = pipeline(model="xxxx", pipeline_class=MyPipeline)
```
That should enable you to apply all the custom code you want.
## Implementing a pipeline [[implementing-a-pipeline]]

## Audio [[audio]]

Pipelines available for audio tasks include the following.
### AudioClassificationPipeline [[transformers.AudioClassificationPipeline]]

[[autodoc]] AudioClassificationPipeline
    - __call__
    - all

### AutomaticSpeechRecognitionPipeline [[transformers.AutomaticSpeechRecognitionPipeline]]

[[autodoc]] AutomaticSpeechRecognitionPipeline
    - __call__
    - all

### TextToAudioPipeline [[transformers.TextToAudioPipeline]]

[[autodoc]] TextToAudioPipeline
    - __call__
    - all

### ZeroShotAudioClassificationPipeline [[transformers.ZeroShotAudioClassificationPipeline]]

[[autodoc]] ZeroShotAudioClassificationPipeline
    - __call__
    - all
## Computer vision [[computer-vision]]

Pipelines available for computer vision tasks include the following.
### DepthEstimationPipeline [[transformers.DepthEstimationPipeline]]

[[autodoc]] DepthEstimationPipeline
    - __call__
    - all

### ImageClassificationPipeline [[transformers.ImageClassificationPipeline]]

[[autodoc]] ImageClassificationPipeline
    - __call__
    - all

### ImageSegmentationPipeline [[transformers.ImageSegmentationPipeline]]

[[autodoc]] ImageSegmentationPipeline
    - __call__
    - all

### ImageToImagePipeline [[transformers.ImageToImagePipeline]]

[[autodoc]] ImageToImagePipeline
    - __call__
    - all

### ObjectDetectionPipeline [[transformers.ObjectDetectionPipeline]]

[[autodoc]] ObjectDetectionPipeline
    - __call__
    - all

### VideoClassificationPipeline [[transformers.VideoClassificationPipeline]]

[[autodoc]] VideoClassificationPipeline
    - __call__
    - all

### ZeroShotImageClassificationPipeline [[transformers.ZeroShotImageClassificationPipeline]]

[[autodoc]] ZeroShotImageClassificationPipeline
    - __call__
    - all

### ZeroShotObjectDetectionPipeline [[transformers.ZeroShotObjectDetectionPipeline]]

[[autodoc]] ZeroShotObjectDetectionPipeline
    - __call__
    - all
## Natural language processing [[natural-language-processing]]

Pipelines available for natural language processing tasks include the following.
### FillMaskPipeline [[transformers.FillMaskPipeline]]

[[autodoc]] FillMaskPipeline
    - __call__
    - all

### QuestionAnsweringPipeline [[transformers.QuestionAnsweringPipeline]]

[[autodoc]] QuestionAnsweringPipeline
    - __call__
    - all

### SummarizationPipeline [[transformers.SummarizationPipeline]]

[[autodoc]] SummarizationPipeline
    - __call__
    - all

### TableQuestionAnsweringPipeline [[transformers.TableQuestionAnsweringPipeline]]

[[autodoc]] TableQuestionAnsweringPipeline
    - __call__

### TextClassificationPipeline [[transformers.TextClassificationPipeline]]

[[autodoc]] TextClassificationPipeline
    - __call__
    - all

### TextGenerationPipeline [[transformers.TextGenerationPipeline]]

[[autodoc]] TextGenerationPipeline
    - __call__
    - all

### Text2TextGenerationPipeline [[transformers.Text2TextGenerationPipeline]]

[[autodoc]] Text2TextGenerationPipeline
    - __call__
    - all

### TokenClassificationPipeline [[transformers.TokenClassificationPipeline]]

[[autodoc]] TokenClassificationPipeline
    - __call__
    - all

### TranslationPipeline [[transformers.TranslationPipeline]]

[[autodoc]] TranslationPipeline
    - __call__
    - all

### ZeroShotClassificationPipeline [[transformers.ZeroShotClassificationPipeline]]

[[autodoc]] ZeroShotClassificationPipeline
    - __call__
    - all
## Multimodal [[multimodal]]

Pipelines available for multimodal tasks include the following.
### DocumentQuestionAnsweringPipeline [[transformers.DocumentQuestionAnsweringPipeline]]

[[autodoc]] DocumentQuestionAnsweringPipeline
    - __call__
    - all

### FeatureExtractionPipeline [[transformers.FeatureExtractionPipeline]]

[[autodoc]] FeatureExtractionPipeline
    - __call__
    - all

### ImageFeatureExtractionPipeline [[transformers.ImageFeatureExtractionPipeline]]

[[autodoc]] ImageFeatureExtractionPipeline
    - __call__
    - all

### ImageToTextPipeline [[transformers.ImageToTextPipeline]]

[[autodoc]] ImageToTextPipeline
    - __call__
    - all

### ImageTextToTextPipeline [[transformers.ImageTextToTextPipeline]]

[[autodoc]] ImageTextToTextPipeline
    - __call__
    - all

### MaskGenerationPipeline [[transformers.MaskGenerationPipeline]]

[[autodoc]] MaskGenerationPipeline
    - __call__
    - all

### VisualQuestionAnsweringPipeline [[transformers.VisualQuestionAnsweringPipeline]]

[[autodoc]] VisualQuestionAnsweringPipeline
    - __call__
    - all
## Parent class: `Pipeline` [[transformers.Pipeline]]

[[autodoc]] Pipeline