<meta charset="utf-8" /><meta http-equiv="content-security-policy" content=""><meta name="hf:doc:metadata" content="{&quot;local&quot;:&quot;diffusers&quot;,&quot;sections&quot;:[{&quot;local&quot;:&quot;supported-pipelines&quot;,&quot;title&quot;:&quot;Supported pipelines&quot;}],&quot;title&quot;:&quot;Diffusers&quot;}" data-svelte="svelte-1phssyn">
<link rel="modulepreload" href="/docs/diffusers/v0.21.0/ko/_app/assets/pages/__layout.svelte-hf-doc-builder.css">
<link rel="modulepreload" href="/docs/diffusers/v0.21.0/ko/_app/start-hf-doc-builder.js">
<link rel="modulepreload" href="/docs/diffusers/v0.21.0/ko/_app/chunks/vendor-hf-doc-builder.js">
<link rel="modulepreload" href="/docs/diffusers/v0.21.0/ko/_app/chunks/paths-hf-doc-builder.js">
<link rel="modulepreload" href="/docs/diffusers/v0.21.0/ko/_app/pages/__layout.svelte-hf-doc-builder.js">
<link rel="modulepreload" href="/docs/diffusers/v0.21.0/ko/_app/pages/index.mdx-hf-doc-builder.js">
<link rel="modulepreload" href="/docs/diffusers/v0.21.0/ko/_app/chunks/IconCopyLink-hf-doc-builder.js">
<p align="center"><br>
<img src="https://raw.githubusercontent.com/huggingface/diffusers/77aadfee6a891ab9fcfb780f87c693f7a5beeb8e/docs/source/imgs/diffusers_library.jpg" width="400">
<br></p>
<h1 class="relative group"><a id="diffusers" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#diffusers"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a>
<span>Diffusers
</span></h1>
<p>🤗 Diffusers is a library of state-of-the-art pretrained diffusion models for generating images, audio, and even 3D structures of molecules. Whether you are looking for a simple inference solution or want to train your own diffusion model, 🤗 Diffusers is a modular toolbox that supports both. The library is designed with a focus on <a href="conceptual/philosophy#usability-over-performance">usability over performance</a>, <a href="conceptual/philosophy#simple-over-easy">simple over easy</a>, and <a href="conceptual/philosophy#tweakable-contributorfriendly-over-abstraction">customizability over abstraction</a>.</p>
<p>The library has three main components:</p>
<ul><li>State-of-the-art <a href="api/pipelines/overview">diffusion pipelines</a> that can run inference with just a few lines of code.</li>
<li>Interchangeable <a href="api/schedulers/overview">noise schedulers</a> for balancing trade-offs between generation speed and quality.</li>
<li>Pretrained <a href="api/models">models</a> that can be used as building blocks and combined with schedulers to create your own end-to-end diffusion systems.</li></ul>
<div class="mt-10"><div class="w-full flex flex-col space-y-4 md:space-y-0 md:grid md:grid-cols-2 md:gap-y-4 md:gap-x-5"><a class="!no-underline border dark:border-gray-700 p-5 rounded-lg shadow hover:shadow-lg" href="./tutorials/tutorial_overview"><div class="w-full text-center bg-gradient-to-br from-blue-400 to-blue-500 rounded-lg py-1.5 font-semibold mb-5 text-white text-lg leading-relaxed">Tutorials</div>
<p class="text-gray-700">Learn the basic skills you need to generate outputs, build your own diffusion system, and train a diffusion model. We recommend starting here if you are using 🤗 Diffusers for the first time!</p></a>
<a class="!no-underline border dark:border-gray-700 p-5 rounded-lg shadow hover:shadow-lg" href="./using-diffusers/loading_overview"><div class="w-full text-center bg-gradient-to-br from-indigo-400 to-indigo-500 rounded-lg py-1.5 font-semibold mb-5 text-white text-lg leading-relaxed">How-to guides</div>
<p class="text-gray-700">νŒŒμ΄ν”„λΌμΈ, λͺ¨λΈ, μŠ€μΌ€μ€„λŸ¬λ₯Ό λ‘œλ“œν•˜λŠ” 데 도움이 λ˜λŠ” μ‹€μš©μ μΈ κ°€μ΄λ“œμž…λ‹ˆλ‹€. λ˜ν•œ νŠΉμ • μž‘μ—…μ— νŒŒμ΄ν”„λΌμΈμ„ μ‚¬μš©ν•˜κ³ , 좜λ ₯ 생성 방식을 μ œμ–΄ν•˜κ³ , μΆ”λ‘  속도에 맞게 μ΅œμ ν™”ν•˜κ³ , λ‹€μ–‘ν•œ ν•™μŠ΅ 기법을 μ‚¬μš©ν•˜λŠ” 방법도 배울 수 μžˆμŠ΅λ‹ˆλ‹€.</p></a>
<a class="!no-underline border dark:border-gray-700 p-5 rounded-lg shadow hover:shadow-lg" href="./conceptual/philosophy"><div class="w-full text-center bg-gradient-to-br from-pink-400 to-pink-500 rounded-lg py-1.5 font-semibold mb-5 text-white text-lg leading-relaxed">Conceptual guides</div>
<p class="text-gray-700">λΌμ΄λΈŒλŸ¬λ¦¬κ°€ μ™œ 이런 λ°©μ‹μœΌλ‘œ μ„€κ³„λ˜μ—ˆλŠ”μ§€ μ΄ν•΄ν•˜κ³ , 라이브러리 μ΄μš©μ— λŒ€ν•œ 윀리적 κ°€μ΄λ“œλΌμΈκ³Ό μ•ˆμ „ κ΅¬ν˜„μ— λŒ€ν•΄ μžμ„Ένžˆ μ•Œμ•„λ³΄μ„Έμš”.</p></a>
<a class="!no-underline border dark:border-gray-700 p-5 rounded-lg shadow hover:shadow-lg" href="./api/models"><div class="w-full text-center bg-gradient-to-br from-purple-400 to-purple-500 rounded-lg py-1.5 font-semibold mb-5 text-white text-lg leading-relaxed">Reference</div>
<p class="text-gray-700">Technical descriptions of how 🤗 Diffusers classes and methods work.</p></a></div></div>
<h2 class="relative group"><a id="supported-pipelines" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#supported-pipelines"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a>
<span>Supported pipelines
</span></h2>
<table><thead><tr><th>Pipeline</th>
<th>Paper/Repository</th>
<th align="center">Tasks</th></tr></thead>
<tbody><tr><td><a href="./api/pipelines/alt_diffusion">alt_diffusion</a></td>
<td><a href="https://arxiv.org/abs/2211.06679" rel="nofollow">AltCLIP: Altering the Language Encoder in CLIP for Extended Language Capabilities</a></td>
<td align="center">Image-to-Image Text-Guided Generation</td></tr>
<tr><td><a href="./api/pipelines/audio_diffusion">audio_diffusion</a></td>
<td><a href="https://github.com/teticio/audio-diffusion.git" rel="nofollow">Audio Diffusion</a></td>
<td align="center">Unconditional Audio Generation</td></tr>
<tr><td><a href="./api/pipelines/stable_diffusion/controlnet">controlnet</a></td>
<td><a href="https://arxiv.org/abs/2302.05543" rel="nofollow">Adding Conditional Control to Text-to-Image Diffusion Models</a></td>
<td align="center">Image-to-Image Text-Guided Generation</td></tr>
<tr><td><a href="./api/pipelines/cycle_diffusion">cycle_diffusion</a></td>
<td><a href="https://arxiv.org/abs/2210.05559" rel="nofollow">Unifying Diffusion Models’ Latent Space, with Applications to CycleDiffusion and Guidance</a></td>
<td align="center">Image-to-Image Text-Guided Generation</td></tr>
<tr><td><a href="./api/pipelines/dance_diffusion">dance_diffusion</a></td>
<td><a href="https://github.com/williamberman/diffusers.git" rel="nofollow">Dance Diffusion</a></td>
<td align="center">Unconditional Audio Generation</td></tr>
<tr><td><a href="./api/pipelines/ddpm">ddpm</a></td>
<td><a href="https://arxiv.org/abs/2006.11239" rel="nofollow">Denoising Diffusion Probabilistic Models</a></td>
<td align="center">Unconditional Image Generation</td></tr>
<tr><td><a href="./api/pipelines/ddim">ddim</a></td>
<td><a href="https://arxiv.org/abs/2010.02502" rel="nofollow">Denoising Diffusion Implicit Models</a></td>
<td align="center">Unconditional Image Generation</td></tr>
<tr><td><a href="./api/pipelines/if">if</a></td>
<td><a href="./api/pipelines/if"><strong>IF</strong></a></td>
<td align="center">Image Generation</td></tr>
<tr><td><a href="./api/pipelines/if">if_img2img</a></td>
<td><a href="./api/pipelines/if"><strong>IF</strong></a></td>
<td align="center">Image-to-Image Generation</td></tr>
<tr><td><a href="./api/pipelines/if">if_inpainting</a></td>
<td><a href="./api/pipelines/if"><strong>IF</strong></a></td>
<td align="center">Image-to-Image Generation</td></tr>
<tr><td><a href="./api/pipelines/latent_diffusion">latent_diffusion</a></td>
<td><a href="https://arxiv.org/abs/2112.10752" rel="nofollow">High-Resolution Image Synthesis with Latent Diffusion Models</a></td>
<td align="center">Text-to-Image Generation</td></tr>
<tr><td><a href="./api/pipelines/latent_diffusion">latent_diffusion</a></td>
<td><a href="https://arxiv.org/abs/2112.10752" rel="nofollow">High-Resolution Image Synthesis with Latent Diffusion Models</a></td>
<td align="center">Super Resolution Image-to-Image</td></tr>
<tr><td><a href="./api/pipelines/latent_diffusion_uncond">latent_diffusion_uncond</a></td>
<td><a href="https://arxiv.org/abs/2112.10752" rel="nofollow">High-Resolution Image Synthesis with Latent Diffusion Models</a></td>
<td align="center">Unconditional Image Generation</td></tr>
<tr><td><a href="./api/pipelines/paint_by_example">paint_by_example</a></td>
<td><a href="https://arxiv.org/abs/2211.13227" rel="nofollow">Paint by Example: Exemplar-based Image Editing with Diffusion Models</a></td>
<td align="center">Image-Guided Image Inpainting</td></tr>
<tr><td><a href="./api/pipelines/pndm">pndm</a></td>
<td><a href="https://arxiv.org/abs/2202.09778" rel="nofollow">Pseudo Numerical Methods for Diffusion Models on Manifolds</a></td>
<td align="center">Unconditional Image Generation</td></tr>
<tr><td><a href="./api/pipelines/score_sde_ve">score_sde_ve</a></td>
<td><a href="https://openreview.net/forum?id=PxTIG12RRHS" rel="nofollow">Score-Based Generative Modeling through Stochastic Differential Equations</a></td>
<td align="center">Unconditional Image Generation</td></tr>
<tr><td><a href="./api/pipelines/score_sde_vp">score_sde_vp</a></td>
<td><a href="https://openreview.net/forum?id=PxTIG12RRHS" rel="nofollow">Score-Based Generative Modeling through Stochastic Differential Equations</a></td>
<td align="center">Unconditional Image Generation</td></tr>
<tr><td><a href="./api/pipelines/semantic_stable_diffusion">semantic_stable_diffusion</a></td>
<td><a href="https://arxiv.org/abs/2301.12247" rel="nofollow">Semantic Guidance</a></td>
<td align="center">Text-Guided Generation</td></tr>
<tr><td><a href="./api/pipelines/stable_diffusion/text2img">stable_diffusion_text2img</a></td>
<td><a href="https://stability.ai/blog/stable-diffusion-public-release" rel="nofollow">Stable Diffusion</a></td>
<td align="center">Text-to-Image Generation</td></tr>
<tr><td><a href="./api/pipelines/stable_diffusion/img2img">stable_diffusion_img2img</a></td>
<td><a href="https://stability.ai/blog/stable-diffusion-public-release" rel="nofollow">Stable Diffusion</a></td>
<td align="center">Image-to-Image Text-Guided Generation</td></tr>
<tr><td><a href="./api/pipelines/stable_diffusion/inpaint">stable_diffusion_inpaint</a></td>
<td><a href="https://stability.ai/blog/stable-diffusion-public-release" rel="nofollow">Stable Diffusion</a></td>
<td align="center">Text-Guided Image Inpainting</td></tr>
<tr><td><a href="./api/pipelines/stable_diffusion/panorama">stable_diffusion_panorama</a></td>
<td><a href="https://multidiffusion.github.io/" rel="nofollow">MultiDiffusion</a></td>
<td align="center">Text-to-Panorama Generation</td></tr>
<tr><td><a href="./api/pipelines/stable_diffusion/pix2pix">stable_diffusion_pix2pix</a></td>
<td><a href="https://arxiv.org/abs/2211.09800" rel="nofollow">InstructPix2Pix: Learning to Follow Image Editing Instructions</a></td>
<td align="center">Text-Guided Image Editing</td></tr>
<tr><td><a href="./api/pipelines/stable_diffusion/pix2pix_zero">stable_diffusion_pix2pix_zero</a></td>
<td><a href="https://pix2pixzero.github.io/" rel="nofollow">Zero-shot Image-to-Image Translation</a></td>
<td align="center">Text-Guided Image Editing</td></tr>
<tr><td><a href="./api/pipelines/stable_diffusion/attend_and_excite">stable_diffusion_attend_and_excite</a></td>
<td><a href="https://arxiv.org/abs/2301.13826" rel="nofollow">Attend-and-Excite: Attention-Based Semantic Guidance for Text-to-Image Diffusion Models</a></td>
<td align="center">Text-to-Image Generation</td></tr>
<tr><td><a href="./api/pipelines/stable_diffusion/self_attention_guidance">stable_diffusion_self_attention_guidance</a></td>
<td><a href="https://arxiv.org/abs/2210.00939" rel="nofollow">Improving Sample Quality of Diffusion Models Using Self-Attention Guidance</a></td>
<td align="center">Text-to-Image Generation, Unconditional Image Generation</td></tr>
<tr><td><a href="./api/pipelines/stable_diffusion/image_variation">stable_diffusion_image_variation</a></td>
<td><a href="https://github.com/LambdaLabsML/lambda-diffusers#stable-diffusion-image-variations" rel="nofollow">Stable Diffusion Image Variations</a></td>
<td align="center">Image-to-Image Generation</td></tr>
<tr><td><a href="./api/pipelines/stable_diffusion/latent_upscale">stable_diffusion_latent_upscale</a></td>
<td><a href="https://twitter.com/StabilityAI/status/1590531958815064065" rel="nofollow">Stable Diffusion Latent Upscaler</a></td>
<td align="center">Text-Guided Super Resolution Image-to-Image</td></tr>
<tr><td><a href="./api/pipelines/stable_diffusion/model_editing">stable_diffusion_model_editing</a></td>
<td><a href="https://time-diffusion.github.io/" rel="nofollow">Editing Implicit Assumptions in Text-to-Image Diffusion Models</a></td>
<td align="center">Text-to-Image Model Editing</td></tr>
<tr><td><a href="./api/pipelines/stable_diffusion_2">stable_diffusion_2</a></td>
<td><a href="https://stability.ai/blog/stable-diffusion-v2-release" rel="nofollow">Stable Diffusion 2</a></td>
<td align="center">Text-to-Image Generation</td></tr>
<tr><td><a href="./api/pipelines/stable_diffusion_2">stable_diffusion_2</a></td>
<td><a href="https://stability.ai/blog/stable-diffusion-v2-release" rel="nofollow">Stable Diffusion 2</a></td>
<td align="center">Text-Guided Image Inpainting</td></tr>
<tr><td><a href="./api/pipelines/stable_diffusion_2">stable_diffusion_2</a></td>
<td><a href="https://github.com/Stability-AI/stablediffusion#depth-conditional-stable-diffusion" rel="nofollow">Depth-Conditional Stable Diffusion</a></td>
<td align="center">Depth-to-Image Generation</td></tr>
<tr><td><a href="./api/pipelines/stable_diffusion_2">stable_diffusion_2</a></td>
<td><a href="https://stability.ai/blog/stable-diffusion-v2-release" rel="nofollow">Stable Diffusion 2</a></td>
<td align="center">Text-Guided Super Resolution Image-to-Image</td></tr>
<tr><td><a href="./api/pipelines/stable_diffusion_safe">stable_diffusion_safe</a></td>
<td><a href="https://arxiv.org/abs/2211.05105" rel="nofollow">Safe Stable Diffusion</a></td>
<td align="center">Text-Guided Generation</td></tr>
<tr><td><a href="./api/pipelines/stable_unclip">stable_unclip</a></td>
<td>Stable unCLIP</td>
<td align="center">Text-to-Image Generation</td></tr>
<tr><td><a href="./api/pipelines/stable_unclip">stable_unclip</a></td>
<td>Stable unCLIP</td>
<td align="center">Image-to-Image Text-Guided Generation</td></tr>
<tr><td><a href="./api/pipelines/stochastic_karras_ve">stochastic_karras_ve</a></td>
<td><a href="https://arxiv.org/abs/2206.00364" rel="nofollow">Elucidating the Design Space of Diffusion-Based Generative Models</a></td>
<td align="center">Unconditional Image Generation</td></tr>
<tr><td><a href="./api/pipelines/text_to_video">text_to_video_sd</a></td>
<td><a href="https://modelscope.cn/models/damo/text-to-video-synthesis/summary" rel="nofollow">Modelscope’s Text-to-video-synthesis Model in Open Domain</a></td>
<td align="center">Text-to-Video Generation</td></tr>
<tr><td><a href="./api/pipelines/unclip">unclip</a></td>
<td><a href="https://arxiv.org/abs/2204.06125" rel="nofollow">Hierarchical Text-Conditional Image Generation with CLIP Latents</a>(implementation by <a href="https://github.com/kakaobrain/karlo" rel="nofollow">kakaobrain</a>)</td>
<td align="center">Text-to-Image Generation</td></tr>
<tr><td><a href="./api/pipelines/versatile_diffusion">versatile_diffusion</a></td>
<td><a href="https://arxiv.org/abs/2211.08332" rel="nofollow">Versatile Diffusion: Text, Images and Variations All in One Diffusion Model</a></td>
<td align="center">Text-to-Image Generation</td></tr>
<tr><td><a href="./api/pipelines/versatile_diffusion">versatile_diffusion</a></td>
<td><a href="https://arxiv.org/abs/2211.08332" rel="nofollow">Versatile Diffusion: Text, Images and Variations All in One Diffusion Model</a></td>
<td align="center">Image Variations Generation</td></tr>
<tr><td><a href="./api/pipelines/versatile_diffusion">versatile_diffusion</a></td>
<td><a href="https://arxiv.org/abs/2211.08332" rel="nofollow">Versatile Diffusion: Text, Images and Variations All in One Diffusion Model</a></td>
<td align="center">Dual Image and Text Guided Generation</td></tr>
<tr><td><a href="./api/pipelines/vq_diffusion">vq_diffusion</a></td>
<td><a href="https://arxiv.org/abs/2111.14822" rel="nofollow">Vector Quantized Diffusion Model for Text-to-Image Synthesis</a></td>
<td align="center">Text-to-Image Generation</td></tr></tbody></table>
<script type="module" data-hydrate="1022j38">
import { start } from "/docs/diffusers/v0.21.0/ko/_app/start-hf-doc-builder.js";
start({
target: document.querySelector('[data-hydrate="1022j38"]').parentNode,
paths: {"base":"/docs/diffusers/v0.21.0/ko","assets":"/docs/diffusers/v0.21.0/ko"},
session: {},
route: false,
spa: false,
trailing_slash: "never",
hydrate: {
status: 200,
error: null,
nodes: [
import("/docs/diffusers/v0.21.0/ko/_app/pages/__layout.svelte-hf-doc-builder.js"),
import("/docs/diffusers/v0.21.0/ko/_app/pages/index.mdx-hf-doc-builder.js")
],
params: {}
}
});
</script>
