| <link rel="modulepreload" href="/docs/transformers/main/ko/_app/immutable/chunks/HfOption.be649c8b.js"><!-- HEAD_svelte-u9bgzb_START --><meta name="hf:doc:metadata" content="{"title":"이미지 프로세서(Image processor)","local":"image-processors","sections":[{"title":"이미지 프로세서 클래스(Image processor classes)","local":"image-processor-classes","sections":[],"depth":2},{"title":"빠른 이미지 프로세서(Fast image processors)","local":"fast-image-processors","sections":[],"depth":2},{"title":"전처리(Preprocess)","local":"preprocess","sections":[{"title":"패딩(Padding)","local":"padding","sections":[],"depth":3}],"depth":2}],"depth":1}"><!-- HEAD_svelte-u9bgzb_END --> <p></p> <div class="items-center shrink-0 min-w-[100px] max-sm:min-w-[50px] justify-end ml-auto flex" style="float: right; margin-left: 10px; display: inline-flex; position: relative; z-index: 10;"><div class="inline-flex rounded-md max-sm:rounded-sm"><button class="inline-flex items-center gap-1 h-7 max-sm:h-7 px-2 max-sm:px-1.5 text-sm font-medium text-gray-800 border border-r-0 rounded-l-md max-sm:rounded-l-sm border-gray-200 bg-white hover:shadow-inner dark:border-gray-850 dark:bg-gray-950 dark:text-gray-200 dark:hover:bg-gray-800" aria-live="polite"><span class="inline-flex items-center justify-center rounded-md p-0.5 max-sm:p-0 hover:text-gray-800 dark:hover:text-gray-200"><svg class="sm:size-3.5 size-3" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg></span> <span>Copy page</span></button> <button class="inline-flex items-center justify-center w-6 max-sm:w-5 h-7 max-sm:h-7 disabled:pointer-events-none text-sm text-gray-500 
hover:text-gray-700 dark:hover:text-white rounded-r-md max-sm:rounded-r-sm border border-l transition border-gray-200 bg-white hover:shadow-inner dark:border-gray-850 dark:bg-gray-950 dark:text-gray-200 dark:hover:bg-gray-800" aria-haspopup="menu" aria-expanded="false" aria-label="Open copy menu"><svg class="transition-transform text-gray-400 overflow-visible sm:size-3.5 size-3 rotate-0" width="1em" height="1em" viewBox="0 0 12 7" fill="none" xmlns="http://www.w3.org/2000/svg"><path d="M1 1L6 6L11 1" stroke="currentColor"></path></svg></button></div> </div> <h1 class="relative group"><a id="image-processors" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#image-processors"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span>이미지 프로세서(Image processor)</span></h1> <p data-svelte-h="svelte-rfz9lx">이미지 프로세서는 이미지를 픽셀 값, 즉 이미지의 색상과 크기를 나타내는 텐서로 변환합니다. 이 픽셀 값은 비전 모델의 입력으로 사용됩니다. 이때 사전 학습된 모델이 새로운 이미지를 올바르게 인식하려면 입력되는 이미지의 형식이 학습 당시 사용했던 데이터와 똑같아야 합니다. 
이미지 프로세서는 다음과 같은 작업을 통해 이미지 형식을 통일시켜주는 역할을 합니다.</p> <ul data-svelte-h="svelte-1ogz8y1"><li>이미지 크기를 조절하는 <code>center_crop()</code></li> <li>픽셀 값을 정규화하는 <code>normalize()</code> 또는 크기를 재조정하는 <code>rescale()</code></li></ul> <p data-svelte-h="svelte-1chrb1w">Hugging Face <a href="https://hf.co" rel="nofollow">Hub</a>나 로컬 디렉토리에 있는 비전 모델에서 이미지 프로세서의 설정(이미지 크기, 정규화 및 리사이즈 여부 등)을 불러오려면 <a href="/docs/transformers/main/ko/internal/image_processing_utils#transformers.ImageProcessingMixin.from_pretrained">from_pretrained()</a>를 사용하세요. 각 사전 학습된 모델의 설정은 <a href="https://huggingface.co/google/vit-base-patch16-224/blob/main/preprocessor_config.json" rel="nofollow">preprocessor_config.json</a> 파일에 저장되어 있습니다.</p> <div class="code-block relative "><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class="language-py "><!-- HTML_TAG_START --><span class="hljs-keyword">from</span> transformers <span 
class="hljs-keyword">import</span> AutoImageProcessor
| image_processor = AutoImageProcessor.from_pretrained(<span class="hljs-string">"google/vit-base-patch16-224"</span>)<!-- HTML_TAG_END --></pre></div> <p data-svelte-h="svelte-1d126w2">이미지를 이미지 프로세서에 전달하여 픽셀 값으로 변환하고, <code>return_tensors="pt"</code> 를 설정하여 PyTorch 텐서를 반환받으세요. 이미지가 텐서로 어떻게 보이는지 궁금하다면 입력값을 한번 출력해보시는걸 추천합니다!</p> <div class="code-block relative "><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class="language-py "><!-- HTML_TAG_START --><span class="hljs-keyword">from</span> PIL <span class="hljs-keyword">import</span> Image | |
<span class="hljs-keyword">import</span> requests
url = <span class="hljs-string">"https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/image_processor_example.png"</span>
image = Image.<span class="hljs-built_in">open</span>(requests.get(url, stream=<span class="hljs-literal">True</span>).raw).convert(<span class="hljs-string">"RGB"</span>)
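<span class="hljs-comment"># (참고) 불러온 이미지의 크기를 확인해볼 수 있습니다. PIL의 size는 (너비, 높이) 순서입니다.</span>
<span class="hljs-built_in">print</span>(image.size)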
| inputs = image_processor(image, return_tensors=<span class="hljs-string">"pt"</span>)<!-- HTML_TAG_END --></pre></div> <p data-svelte-h="svelte-kp4eny">이 가이드에서는 이미지 프로세서 클래스와 비전 모델을 위한 이미지 전처리 방법에 대해 다룰 예정입니다.</p> <h2 class="relative group"><a id="image-processor-classes" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#image-processor-classes"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span>이미지 프로세서 클래스(Image processor classes)</span></h2> <p data-svelte-h="svelte-1v3rxk4">이미지 프로세서들은 <code>center_crop()</code>, <code>normalize()</code>, <code>rescale()</code> 함수를 제공하는 <code>BaseImageProcessor</code> 클래스를 상속받습니다. 이미지 프로세서에는 두 가지 종류가 있습니다.</p> <ul data-svelte-h="svelte-yt78we"><li><code>BaseImageProcessor</code>는 파이썬 기반 구현체입니다.</li> <li><code>BaseImageProcessorFast</code>는 더 빠른 <a href="https://pytorch.org/vision/stable/index.html" rel="nofollow">torchvision-backed</a> 버전입니다. <a href="https://pytorch.org/docs/stable/tensors.html" rel="nofollow">torch.Tensor</a>입력의 배치 처리 시 최대 33배 더 빠를 수 있습니다. 
<code>BaseImageProcessorFast</code>는 현재 모든 비전 모델에서 사용할 수 있는 것은 아니기 때문에 모델의 API 문서를 참조하여 지원 여부를 확인해 주세요.</li></ul> <p data-svelte-h="svelte-evsudc">각 이미지 프로세서는 이미지 프로세서를 불러오고 저장하기 위한 <a href="/docs/transformers/main/ko/internal/image_processing_utils#transformers.ImageProcessingMixin.from_pretrained">from_pretrained()</a>와 <a href="/docs/transformers/main/ko/internal/image_processing_utils#transformers.ImageProcessingMixin.save_pretrained">save_pretrained()</a> 메소드를 제공하는 <a href="/docs/transformers/main/ko/internal/image_processing_utils#transformers.ImageProcessingMixin">ImageProcessingMixin</a> 클래스를 상속받아 기능을 확장시킵니다.</p> <p data-svelte-h="svelte-1myfk86">이미지 프로세서를 불러오는 방법은 <a href="/docs/transformers/main/ko/model_doc/auto#transformers.AutoImageProcessor">AutoImageProcessor</a>를 사용하거나 모델별 이미지 프로세서를 사용하는 방식 두 가지가 있습니다.</p> <div class="flex space-x-2 items-center my-1.5 mr-8 h-7 !pl-0 -mx-3 md:mx-0"><div class="flex items-center border rounded-lg px-1.5 py-1 leading-none select-none text-smd border-gray-800 bg-black dark:bg-gray-700 text-white">AutoImageProcessor </div><div class="flex items-center border rounded-lg px-1.5 py-1 leading-none select-none text-smd text-gray-500 cursor-pointer opacity-90 hover:text-gray-700 dark:hover:text-gray-200 hover:shadow-sm">model-specific image processor </div></div> <div class="language-select"><p data-svelte-h="svelte-dn2uum"><a href="./model_doc/auto">AutoClass</a> API는 이미지 프로세서가 어떤 모델과 연관되어 있는지 직접 지정하지 않고도 편리하게 불러올 수 있는 방법을 제공합니다.</p> <p data-svelte-h="svelte-1r9vr1g"><a href="/docs/transformers/main/ko/model_doc/auto#transformers.AutoImageProcessor.from_pretrained">from_pretrained()</a>를 사용해 이미지 프로세서를 불러옵니다. 
만약 빠른 프로세서를 사용하고 싶다면 <code>use_fast=True</code>를 추가하세요.</p> <div class="code-block relative "><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class="language-py "><!-- HTML_TAG_START --><span class="hljs-keyword">from</span> transformers <span class="hljs-keyword">import</span> AutoImageProcessor | |
| image_processor = AutoImageProcessor.from_pretrained(<span class="hljs-string">"google/vit-base-patch16-224"</span>, use_fast=<span class="hljs-literal">True</span>)<!-- HTML_TAG_END --></pre></div> </div> <h2 class="relative group"><a id="fast-image-processors" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#fast-image-processors"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span>빠른 이미지 프로세서(Fast image processors)</span></h2> <p data-svelte-h="svelte-9z0kd4"><code>BaseImageProcessorFast</code>는 <a href="https://pytorch.org/vision/stable/index.html" rel="nofollow">torchvision</a>을 기반으로 하며, 특히 GPU에서 처리할 때 속도가 훨씬 빠릅니다. 이 클래스는 기존 <code>BaseImageProcessor</code>와 완전히 동일하게 설계되었기 때문에, 모델이 지원한다면 별도 수정 없이 바로 교체해서 사용할 수 있습니다. 
<a href="https://pytorch.org/get-started/locally/#mac-installation" rel="nofollow">torchvision</a>을 설치한 뒤 <code>use_fast</code> 파라미터를 <code>True</code>로 지정해주시면 됩니다.</p> <div class="code-block relative "><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class="language-py "><!-- HTML_TAG_START --><span class="hljs-keyword">from</span> transformers <span class="hljs-keyword">import</span> AutoImageProcessor | |
| processor = AutoImageProcessor.from_pretrained(<span class="hljs-string">"facebook/detr-resnet-50"</span>, use_fast=<span class="hljs-literal">True</span>)<!-- HTML_TAG_END --></pre></div> <p data-svelte-h="svelte-1jk5j7c"><code>device</code> 파라미터를 사용해 어느 장치에서 처리할지 지정할 수 있습니다. 만약 입력값이 텐서(tensor)라면 그 텐서와 동일한 장치에서, 그렇지 않은 경우에는 기본적으로 CPU에서 처리됩니다. 아래는 빠른 프로세서를 GPU에서 사용하도록 설정하는 예제입니다.</p> <div class="code-block relative "><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class="language-py "><!-- HTML_TAG_START --><span class="hljs-keyword">from</span> torchvision.io <span class="hljs-keyword">import</span> read_image | |
<span class="hljs-keyword">from</span> transformers <span class="hljs-keyword">import</span> DetrImageProcessorFast
images = read_image(<span class="hljs-string">"image.jpg"</span>)
processor = DetrImageProcessorFast.from_pretrained(<span class="hljs-string">"facebook/detr-resnet-50"</span>)
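<span class="hljs-comment"># device="cuda"는 CUDA GPU가 있을 때 동작합니다. GPU가 없다면 device="cpu"로 지정하세요.</span>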
| images_processed = processor(images, return_tensors=<span class="hljs-string">"pt"</span>, device=<span class="hljs-string">"cuda"</span>)<!-- HTML_TAG_END --></pre></div> <details data-svelte-h="svelte-1j86zhg"><summary>Benchmarks</summary> <p>이 벤치마크는 NVIDIA A10G Tensor Core GPU가 장착된 <a href="https://aws.amazon.com/ec2/instance-types/g5/" rel="nofollow">AWS EC2 g5.2xlarge</a> 인스턴스에서 측정된 결과입니다.</p> <div class="flex"><img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/benchmark_results_full_pipeline_detr_fast_padded.png"></div> <div class="flex"><img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/benchmark_results_full_pipeline_detr_fast_batched_compiled.png"></div> <div class="flex"><img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/benchmark_results_full_pipeline_rt_detr_fast_single.png"></div> <div class="flex"><img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/benchmark_results_full_pipeline_rt_detr_fast_batched.png"></div></details> <h2 class="relative group"><a id="preprocess" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#preprocess"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 
11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span>전처리(Preprocess)</span></h2> <p data-svelte-h="svelte-sgfohg">Transformers의 비전 모델은 입력값으로 PyTorch 텐서 형태의 픽셀 값을 받습니다. 이미지 프로세서는 이미지를 바로 이 픽셀 값 텐서(배치 크기, 채널 수, 높이, 너비)로 변환하는 역할을 합니다. 이 과정에서 모델이 요구하는 크기로 이미지를 조절하고, 픽셀 값 또한 모델 기준에 맞춰 정규화하거나 재조정합니다.</p> <p data-svelte-h="svelte-19l2d06">이러한 이미지 전처리는 이미지 증강과는 다른 개념입니다. 이미지 증강은 학습 데이터를 늘리거나 과적합을 막기 위해 이미지에 의도적인 변화(밝기, 색상, 회전 등)를 주는 기술입니다. 반면, 이미지 전처리는 이미지를 사전 학습된 모델이 요구하는 입력 형식에 정확히 맞춰주는 작업에만 집중합니다.</p> <p data-svelte-h="svelte-1swau77">일반적으로 모델 성능을 높이기 위해, 이미지는 보통 증강 과정을 거친 뒤 전처리되어 모델에 입력됩니다. 이때 증강 작업은 <a href="https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/image_classification_albumentations.ipynb" rel="nofollow">Albumentations</a>, <a href="https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/image_classification_kornia.ipynb" rel="nofollow">Kornia</a>) 와 같은 라이브러리를 사용할 수 있으며, 이후 전처리 단계에서 이미지 프로세서를 사용하면 됩니다.</p> <p data-svelte-h="svelte-1g6uev0">이번 가이드에서는 이미지 증강을 위해 torchvision의 <a href="https://pytorch.org/vision/stable/transforms.html" rel="nofollow">transforms</a> 모듈을 사용하겠습니다.</p> <p data-svelte-h="svelte-rjitkw">우선 <a href="https://hf.co/datasets/food101" rel="nofollow">food101</a> 데이터셋의 일부만 샘플로 불러와서 시작하겠습니다.</p> <div class="code-block relative "><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path 
d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class="language-py "><!-- HTML_TAG_START --><span class="hljs-keyword">from</span> datasets <span class="hljs-keyword">import</span> load_dataset
| dataset = load_dataset(<span class="hljs-string">"ethz/food101"</span>, split=<span class="hljs-string">"train[:100]"</span>)<!-- HTML_TAG_END --></pre></div> <p data-svelte-h="svelte-h3p25i"><a href="https://pytorch.org/vision/stable/transforms.html" rel="nofollow">transforms</a> 모듈의 <a href="https://pytorch.org/vision/master/generated/torchvision.transforms.Compose.html" rel="nofollow">Compose</a>API는 여러 변환을 하나로 묶어주는 역할을 합니다. 여기서는 이미지를 무작위로 자르고 리사이즈하는 <a href="https://pytorch.org/vision/main/generated/torchvision.transforms.RandomResizedCrop.html" rel="nofollow">RandomResizedCrop</a>과 색상을 무작위로 바꾸는 <a href="https://pytorch.org/vision/main/generated/torchvision.transforms.ColorJitter.html" rel="nofollow">ColorJitter</a>를 함께 사용해보겠습니다.</p> <p data-svelte-h="svelte-1nm1ghv">이때 잘라낼 이미지의 크기는 이미지 프로세서에서 가져올 수 있습니다. 모델에 따라 정확한 높이와 너비가 필요할 때도 있고, 가장 짧은 변 <code>shortest_edge</code> 값만 필요할 때도 있습니다.</p> <div class="code-block relative "><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: 
transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class="language-py "><!-- HTML_TAG_START --><span class="hljs-keyword">from</span> torchvision.transforms <span class="hljs-keyword">import</span> RandomResizedCrop, ColorJitter, Compose
size = (
    image_processor.size[<span class="hljs-string">"shortest_edge"</span>]
    <span class="hljs-keyword">if</span> <span class="hljs-string">"shortest_edge"</span> <span class="hljs-keyword">in</span> image_processor.size
    <span class="hljs-keyword">else</span> (image_processor.size[<span class="hljs-string">"height"</span>], image_processor.size[<span class="hljs-string">"width"</span>])
)
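<span class="hljs-comment"># size는 모델 설정에 따라 int(shortest_edge) 하나이거나 (height, width) 튜플이 되며,</span>
<span class="hljs-comment"># RandomResizedCrop은 두 형태의 입력을 모두 허용합니다.</span>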
| _transforms = Compose([RandomResizedCrop(size), ColorJitter(brightness=<span class="hljs-number">0.5</span>, hue=<span class="hljs-number">0.5</span>)])<!-- HTML_TAG_END --></pre></div> <p data-svelte-h="svelte-1cjstvw">준비된 변환값 들을 이미지에 적용하고, RGB 형식으로 바꿔줍니다. 그 다음, 이렇게 증강된 이미지를 이미지 프로세서에 넣어 픽셀 값을 반환합니다.</p> <p data-svelte-h="svelte-fu6ad">여기서 <code>do_resize</code>파라미터를 <code>False</code>로 설정한 이유는, 앞선 증강 단계에서 <a href="https://pytorch.org/vision/main/generated/torchvision.transforms.RandomResizedCrop.html" rel="nofollow">RandomResizedCrop</a>을 통해 이미 이미지 크기를 조절했기 때문입니다. 만약 증강 과정을 생략한다면, 이미지 프로세서는 <code>image_mean</code>과 <code>image_std</code>값(전처리기 설정 파일에 저장됨)을 사용해 자동으로 리사이즈와 정규화를 수행하게 됩니다.</p> <div class="code-block relative "><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class="language-py "><!-- HTML_TAG_START --><span class="hljs-keyword">def</span> <span class="hljs-title 
function_">transforms</span>(<span class="hljs-params">examples</span>):
    images = [_transforms(img.convert(<span class="hljs-string">"RGB"</span>)) <span class="hljs-keyword">for</span> img <span class="hljs-keyword">in</span> examples[<span class="hljs-string">"image"</span>]]
    examples[<span class="hljs-string">"pixel_values"</span>] = image_processor(images, do_resize=<span class="hljs-literal">False</span>, return_tensors=<span class="hljs-string">"pt"</span>)[<span class="hljs-string">"pixel_values"</span>]
    <span class="hljs-keyword">return</span> examples<!-- HTML_TAG_END --></pre></div> <p data-svelte-h="svelte-kanfya"><code>set_transform</code>을 사용하면 결합된 증강 및 전처리 기능을 전체 데이터셋에 실시간으로 적용할 수 있습니다.</p> <div class="code-block relative "><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class="language-py "><!-- HTML_TAG_START -->dataset.set_transform(transforms)<!-- HTML_TAG_END --></pre></div> <p data-svelte-h="svelte-11at5lj">이제 처리된 픽셀 값을 다시 이미지로 변환하여 증강 및 전처리 결과가 어떻게 나왔는지 직접 확인해 봅시다.</p> <div class="code-block relative "><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" 
height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class="language-py "><!-- HTML_TAG_START --><span class="hljs-keyword">import</span> numpy <span class="hljs-keyword">as</span> np | |
| <span class="hljs-keyword">import</span> matplotlib.pyplot <span class="hljs-keyword">as</span> plt | |
| img = dataset[<span class="hljs-number">0</span>][<span class="hljs-string">"pixel_values"</span>] | |
| plt.imshow(img.permute(<span class="hljs-number">1</span>, <span class="hljs-number">2</span>, <span class="hljs-number">0</span>))<!-- HTML_TAG_END --></pre></div> <div class="flex gap-4" data-svelte-h="svelte-1r7hnw2"><div><img class="rounded-xl" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/vision-preprocess-tutorial.png"> <figcaption class="mt-2 text-center text-sm text-gray-500">Before</figcaption></div> <div><img class="rounded-xl" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/preprocessed_image.png"> <figcaption class="mt-2 text-center text-sm text-gray-500">After</figcaption></div></div> <p data-svelte-h="svelte-1bcmtuw">Image processors are not limited to preprocessing: for vision tasks such as object detection and segmentation, they also postprocess model outputs into meaningful predictions such as bounding boxes or segmentation maps.</p> <h3 class="relative group"><a id="padding" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#padding"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span>Padding</span></h3> <p data-svelte-h="svelte-1nmwbx0">Some models, such as <a href="./model_doc/detr">DETR</a>, apply <a href="https://paperswithcode.com/method/image-scale-augmentation" 
rel="nofollow">scale augmentation</a> during training, so the images within a batch can have different sizes. As you know, images of different sizes cannot be stacked into a single batch.</p> <p data-svelte-h="svelte-1peq0l1">To solve this, pad the images with the special padding value <code>0</code> so they all share the same size. Apply the padding with the <a href="https://github.com/huggingface/transformers/blob/9578c2597e2d88b6f0b304b5a05864fd613ddcc1/src/transformers/models/detr/image_processing_detr.py#L1151" rel="nofollow">pad</a> method, and define a custom <code>collate</code> function to batch the uniformly sized images together.</p> <div class="code-block relative "><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class="language-py "><!-- HTML_TAG_START --><span class="hljs-keyword">def</span> <span class="hljs-title function_">collate_fn</span>(<span class="hljs-params">batch</span>): | |
| pixel_values = [item[<span class="hljs-string">"pixel_values"</span>] <span class="hljs-keyword">for</span> item <span class="hljs-keyword">in</span> batch] | |
| encoding = image_processor.pad(pixel_values, return_tensors=<span class="hljs-string">"pt"</span>) | |
| labels = [item[<span class="hljs-string">"labels"</span>] <span class="hljs-keyword">for</span> item <span class="hljs-keyword">in</span> batch] | |
| batch = {} | |
| batch[<span class="hljs-string">"pixel_values"</span>] = encoding[<span class="hljs-string">"pixel_values"</span>] | |
| batch[<span class="hljs-string">"pixel_mask"</span>] = encoding[<span class="hljs-string">"pixel_mask"</span>] | |
| batch[<span class="hljs-string">"labels"</span>] = labels | |
| <span class="hljs-keyword">return</span> batch<!-- HTML_TAG_END --></pre></div> <p></p> | |
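To make the padded batch concrete, here is a minimal pure-PyTorch sketch of the kind of output `image_processor.pad(..., return_tensors="pt")` hands to the collate function above: differently sized images are zero-padded to a common size and a `pixel_mask` marks which pixels are real. The function name `pad_pixel_values` is a hypothetical stand-in, not the library implementation.

```python
import torch

def pad_pixel_values(pixel_values):
    """Zero-pad a list of CHW tensors to the largest height/width in the
    list and build a matching pixel_mask (1 = real pixel, 0 = padding)."""
    max_h = max(t.shape[1] for t in pixel_values)
    max_w = max(t.shape[2] for t in pixel_values)
    padded, masks = [], []
    for t in pixel_values:
        c, h, w = t.shape
        canvas = torch.zeros(c, max_h, max_w, dtype=t.dtype)
        canvas[:, :h, :w] = t  # place the real image in the top-left corner
        mask = torch.zeros(max_h, max_w, dtype=torch.long)
        mask[:h, :w] = 1       # mark the real pixels
        padded.append(canvas)
        masks.append(mask)
    return {"pixel_values": torch.stack(padded), "pixel_mask": torch.stack(masks)}

small = torch.ones(3, 2, 2)  # 3-channel 2x2 "image"
large = torch.ones(3, 3, 4)  # 3-channel 3x4 "image"
batch = pad_pixel_values([small, large])
print(batch["pixel_values"].shape)  # torch.Size([2, 3, 3, 4])
print(batch["pixel_mask"][0].sum().item())  # 4 real pixels in the small image
```

Because every tensor now shares the shape `(channels, max_h, max_w)`, the batch stacks cleanly, and the model can use `pixel_mask` to ignore the padded regions.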
| <script> | |
| { | |
| __sveltekit_1vzb9oe = { | |
| assets: "/docs/transformers/main/ko", | |
| base: "/docs/transformers/main/ko", | |
| env: {} | |
| }; | |
| const element = document.currentScript.parentElement; | |
| const data = [null,null]; | |
| Promise.all([ | |
| import("/docs/transformers/main/ko/_app/immutable/entry/start.3df1c19e.js"), | |
| import("/docs/transformers/main/ko/_app/immutable/entry/app.5226fb4b.js") | |
| ]).then(([kit, app]) => { | |
| kit.start(app, element, { | |
| node_ids: [0, 23], | |
| data, | |
| form: null, | |
| error: null | |
| }); | |
| }); | |
| } | |
| </script> | |