| text | id | metadata | __index_level_0__ |
|---|---|---|---|
[
{
"question": "Which of the following best describes a Large Language Model (LLM)?",
"answer_a": "A model specializing in language recognition",
"answer_b": "A massive neural network that understands and generates human language",
"answer_c": "A model exclusively used for language ... | agents-course/quiz/data/unit_1.json/0 | {
"file_path": "agents-course/quiz/data/unit_1.json",
"repo_id": "agents-course",
"token_count": 154
} | 0 |
# Build Your Own Pokémon Battle Agent
Now that you’ve explored the potential and limitations of Agentic AI in games, it’s time to get hands-on. In this section, you’ll **build your very own AI Agent to battle in Pokémon-style turn-based combat**, using everything you’ve learned throughout the course.
We’ll break the ... | agents-course/units/en/bonus-unit3/building_your_pokemon_agent.mdx/0 | {
"file_path": "agents-course/units/en/bonus-unit3/building_your_pokemon_agent.mdx",
"repo_id": "agents-course",
"token_count": 5276
} | 1 |
# Introduction to Agents
<img src="https://huggingface.co/datasets/agents-course/course-images/resolve/main/en/unit1/thumbnail.jpg" alt="Thumbnail"/>
Welcome to this first unit, where **you'll build a solid foundation in the fundamentals of AI Agents** including:
- **Understanding Agents**
- What is an Agent, an... | agents-course/units/en/unit1/introduction.mdx/0 | {
"file_path": "agents-course/units/en/unit1/introduction.mdx",
"repo_id": "agents-course",
"token_count": 530
} | 2 |
# Test Your Understanding of LangGraph
Let's test your understanding of `LangGraph` with a quick quiz! This will help reinforce the key concepts we've covered so far.
This is an optional quiz and it's not graded.
### Q1: What is the primary purpose of LangGraph?
Which statement best describes what LangGraph is desig... | agents-course/units/en/unit2/langgraph/quiz1.mdx/0 | {
"file_path": "agents-course/units/en/unit2/langgraph/quiz1.mdx",
"repo_id": "agents-course",
"token_count": 1169
} | 3 |
<CourseFloatingBanner
classNames="absolute z-10 right-0 top-0"
notebooks={[
{label: "Google Colab", value: "https://colab.research.google.com/#fileId=https://huggingface.co/agents-course/notebooks/blob/main/unit2/smolagents/multiagent_notebook.ipynb"},
]}
askForHelpUrl="http://hf.co/join/discord" />
# Multi-A... | agents-course/units/en/unit2/smolagents/multi_agent_systems.mdx/0 | {
"file_path": "agents-course/units/en/unit2/smolagents/multi_agent_systems.mdx",
"repo_id": "agents-course",
"token_count": 9133
} | 4 |
# Conclusion
**Congratulations on finishing the Agents Course!**
Through perseverance and dedication, you’ve built a solid foundation in the world of AI Agents.
But finishing this course is **not the end of your journey**. It’s just the beginning: don’t hesitate to explore the next section where we share curated re... | agents-course/units/en/unit4/conclusion.mdx/0 | {
"file_path": "agents-course/units/en/unit4/conclusion.mdx",
"repo_id": "agents-course",
"token_count": 142
} | 5 |
# From LLMs to AI Agents
We learned in the [first unit](https://huggingface.co/learn/agents-course/unit1/introduction) of the course that AI Agents are capable of planning and making decisions.
And while LLMs have enabled more natural interactions with NPCs, Agentic AI goes a step further by allow... | agents-course/units/es/bonus-unit3/from-llm-to-agents.mdx/0 | {
"file_path": "agents-course/units/es/bonus-unit3/from-llm-to-agents.mdx",
"repo_id": "agents-course",
"token_count": 1073
} | 6 |
# Observe: Integrating Feedback to Reflect and Adapt
Observations are **how an Agent perceives the consequences of its actions**.
They provide crucial information that feeds the Agent's thought process and guides future actions.
They are **signals from the environment**, whether data from an API, m... | agents-course/units/es/unit1/observations.mdx/0 | {
"file_path": "agents-course/units/es/unit1/observations.mdx",
"repo_id": "agents-course",
"token_count": 1208
} | 7 |
# Table of Contents
This LlamaIndex framework is part of unit 2 of the course. You can access unit 2 on LlamaIndex on hf.co/learn <a href="https://hf.co/learn/agents-course/unit2/llama-index/introduction">here</a>
| Title | Description |
| --- | --- |
| [Introduction](introduction.mdx) | In... | agents-course/units/es/unit2/llama-index/README.md/0 | {
"file_path": "agents-course/units/es/unit2/llama-index/README.md",
"repo_id": "agents-course",
"token_count": 382
} | 8 |
# Small Quiz (ungraded) [[quiz2]]
It's time to test your understanding of the *Code Agents*, *Tool Calling Agents*, and *Tools* sections. This quiz is optional and not graded.
---
### Q1: What is the key difference between creating a tool with the `@tool` decorator versu... | agents-course/units/es/unit2/smolagents/quiz2.mdx/0 | {
"file_path": "agents-course/units/es/unit2/smolagents/quiz2.mdx",
"repo_id": "agents-course",
"token_count": 2768
} | 9 |
# Claim Your Certificate 🎓
If you scored **above 30%, congratulations! 👏 You are now eligible to claim your official certificate.**
Follow the steps below to receive it:
1. Visit the [certificate page](https://huggingface.co/spaces/agents-course/Unit4-Final-Certificate).
2. **Si... | agents-course/units/es/unit4/get-your-certificate.mdx/0 | {
"file_path": "agents-course/units/es/unit4/get-your-certificate.mdx",
"repo_id": "agents-course",
"token_count": 387
} | 10 |
# Introduction
<img src="https://huggingface.co/datasets/agents-course/course-images/resolve/main/en/bonus-unit3/pokemon_thumbnail.png" alt="Bonus Unit 3 AI in Games"/>
🎶 I wanna be the very best... 🎶
Welcome to this **bonus unit**, where you'll explore the exciting intersection of **agents and game... | agents-course/units/fr/bonus-unit3/introduction.mdx/0 | {
"file_path": "agents-course/units/fr/bonus-unit3/introduction.mdx",
"repo_id": "agents-course",
"token_count": 757
} | 11 |
# Quick Quiz 1 [[quiz1]]
---
### Q1: What is an agent?
Which of the following best describes an agent in AI?
<Question
choices={[
{
text: "A system that only processes static text and never interacts with its environment.",
explain: "An agent must be able to take an acti... | agents-course/units/fr/unit1/quiz1.mdx/0 | {
"file_path": "agents-course/units/fr/unit1/quiz1.mdx",
"repo_id": "agents-course",
"token_count": 2303
} | 12 |
# Using Agents in LlamaIndex
Remember Alfred, our helpful butler agent from earlier? Well, he's about to get an upgrade!
Now that we understand the tools available in LlamaIndex, we can give him new capabilities to serve us better.
But before we continue... | agents-course/units/fr/unit2/llama-index/agents.mdx/0 | {
"file_path": "agents-course/units/fr/unit2/llama-index/agents.mdx",
"repo_id": "agents-course",
"token_count": 2938
} | 13 |
<CourseFloatingBanner
classNames="absolute z-10 right-0 top-0"
notebooks={[
{label: "Google Colab", value: "https://colab.research.google.com/#fileId=https://huggingface.co/agents-course/notebooks/blob/main/fr/unit2/smolagents/retrieval_agents.ipynb"},
]}
askForHelpUrl="http://hf.co/join/discord" />
# Build... | agents-course/units/fr/unit2/smolagents/retrieval_agents.mdx/0 | {
"file_path": "agents-course/units/fr/unit2/smolagents/retrieval_agents.mdx",
"repo_id": "agents-course",
"token_count": 3755
} | 14 |
# Introduction to the Final Unit [[introduction]]
<img src="https://huggingface.co/datasets/agents-course/course-images/resolve/main/en/unit4/thumbnail.jpg" alt="AI Agents Course thumbnail" width="100%"/>
Welcome to the final unit of the course! 🎉
So far, you have **gained solid knowledge of the ... | agents-course/units/fr/unit4/introduction.mdx/0 | {
"file_path": "agents-course/units/fr/unit4/introduction.mdx",
"repo_id": "agents-course",
"token_count": 648
} | 15 |
# Self-Check! (Updated) [[quiz2]]
What?! Another quiz? We know... 😅 But don't worry! This quiz is here to help you **solidify the key concepts you've just learned**.
This quiz covers large language models (LLMs), the message system, and tools: the essential building blocks for understanding and constructing AI agents.
### Q1: Which of the following best describes an AI tool? [[q1-which-of-the-following-best-describes-an-ai-tool]]
<Question
ch... | agents-course/units/ko/unit1/quiz2.mdx/0 | {
"file_path": "agents-course/units/ko/unit1/quiz2.mdx",
"repo_id": "agents-course",
"token_count": 2638
} | 16 |
# What is an LLM?
<img src="https://huggingface.co/datasets/agents-course/course-images/resolve/main/en/unit1/whiteboard-check-1.jpg" alt="Unit 1 planning"/>
In the previous section, we learned that every agent needs an **AI Model at its core**, and that LLMs are the most common type of AI model used... | agents-course/units/ru-RU/unit1/what-are-llms.mdx/0 | {
"file_path": "agents-course/units/ru-RU/unit1/what-are-llms.mdx",
"repo_id": "agents-course",
"token_count": 11630
} | 17 |
# Introduction to Agents
<img src="https://huggingface.co/datasets/agents-course/course-images/resolve/main/en/unit1/thumbnail.jpg" alt="Thumbnail"/>
Welcome to the first unit, where **you'll build a solid foundation in the fundamentals of AI agents**, including:
- **Understanding Agents**
- What an Agent is and how it ... | agents-course/units/vi/unit1/introduction.mdx/0 | {
"file_path": "agents-course/units/vi/unit1/introduction.mdx",
"repo_id": "agents-course",
"token_count": 1317
} | 18 |
# Introduction

Welcome to this first **bonus unit**, where you'll learn how to **fine-tune a Large Language Model (LLM) for function calling**.
In the world of LLMs, function calling is quickly becoming a *must-know* technique.
The idea is that, rather than relying solely on prompt-based approaches as we did in Unit 1, function calling trains your model during the training phase to **take actions and interpret observation*... | agents-course/units/zh-CN/bonus-unit1/introduction.mdx/0 | {
"file_path": "agents-course/units/zh-CN/bonus-unit1/introduction.mdx",
"repo_id": "agents-course",
"token_count": 1875
} | 19 |
# Unit 1 Quiz
<img src="https://huggingface.co/datasets/agents-course/course-images/resolve/main/en/unit1/whiteboard-unit1sub4DONE.jpg" alt="Unit 1 planning"/>
Congratulations on finishing Unit 1! Let's test your understanding of the key concepts you've learned so far.
Once you pass the quiz, continue to the next section to claim your certificate.
Good luck!
## Quiz
This is an interactive quiz hosted in a Hugging Face Space on the Hub. You'll answer a series of multiple-choice questions testing your understanding of the key concepts from this unit. After finishing the quiz... | agents-course/units/zh-CN/unit1/final-quiz.mdx/0 | {
"file_path": "agents-course/units/zh-CN/unit1/final-quiz.mdx",
"repo_id": "agents-course",
"token_count": 958
} | 20 |
# Welcome to the World of `LangGraph`
<img src="https://huggingface.co/datasets/agents-course/course-images/resolve/main/en/unit2/LangGraph/LangGraph.png" alt="Unit 2.3 thumbnail"/>
Welcome to the next stop on your learning journey! In this chapter, you'll learn how to build applications with the [`LangGraph`](https://github.com/langchain-ai/langgraph) framework, which helps you structure and orchestrate complex LLM workflows.
`LangGraph` is a framework that, by providing tools for **control** over agent flows, hel... | agents-course/units/zh-CN/unit2/langgraph/introduction.mdx/0 | {
"file_path": "agents-course/units/zh-CN/unit2/langgraph/introduction.mdx",
"repo_id": "agents-course",
"token_count": 983
} | 21 |
# Introduction to `smolagents`
<img src="https://huggingface.co/datasets/agents-course/course-images/resolve/main/en/unit2/smolagents/thumbnail.jpg" alt="Unit 2.1 Thumbnail"/>
Welcome to this module, where you'll learn **how to build effective agents with the [`smolagents`](https://github.com/huggingface/smolagents) library**, a lightweight framework for creating powerful AI agents.
`smolagents` is a Hugging Face... | agents-course/units/zh-CN/unit2/smolagents/introduction.mdx/0 | {
"file_path": "agents-course/units/zh-CN/unit2/smolagents/introduction.mdx",
"repo_id": "agents-course",
"token_count": 3994
} | 22 |
# What Now? Which Topics Should I Learn Next?
Agentic AI is a rapidly evolving field, and understanding its foundational protocols is essential for building intelligent autonomous systems.
Two important standards you should be familiar with are:
- **Model Context Protocol (MCP)**
- **Agent-to-Agent Protocol (A2A)**
## 🔌 Model Context Protocol (MCP)
Anthropic's **Model Context Protocol (MCP)** is an open standard that lets AI models securely and seamlessly **connect to external tools, data sources, and applications**, making agents more capable and autonomous.
Think of MCP as a **universal adapter**, much like a USB-C port, that lets AI models plug into all kinds of digital environments **without needing a custom integration for each one**.
MCP is rapidly gaining industry traction, ... | agents-course/units/zh-CN/unit4/additional-readings.mdx/0 | {
"file_path": "agents-course/units/zh-CN/unit4/additional-readings.mdx",
"repo_id": "agents-course",
"token_count": 962
} | 23 |
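The MCP row above describes the protocol at a high level; concretely, MCP messages are framed as JSON-RPC 2.0. Below is a minimal, hypothetical sketch of a request asking an MCP server to list the tools it exposes. The `tools/list` method name follows the public MCP spec; everything else (the id, the empty params) is illustrative.

```python
import json

# Hypothetical sketch of an MCP "list your tools" request.
# MCP uses JSON-RPC 2.0 framing; the method name is from the MCP spec.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/list",  # assumed: ask the server which tools it exposes
    "params": {},
}
print(json.dumps(request, indent=2))
```

A real client would send this over the MCP transport (stdio or HTTP) and read a JSON-RPC response containing the tool descriptions.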
# Porting a custom kernel
| candle/candle-book/src/cuda/porting.md/0 | {
"file_path": "candle/candle-book/src/cuda/porting.md",
"repo_id": "candle",
"token_count": 7
} | 24 |
//! A simplified example, in Rust, of training a neural network and then using it, based on the Candle framework by Hugging Face.
//! Author: Evgeny Igumnov 2023 igumnovnsk@gmail.com
//! This program implements a neural network to predict the winner of the second round of elections based on the results of the first round... | candle/candle-book/src/simplified.rs/0 | {
"file_path": "candle/candle-book/src/simplified.rs",
"repo_id": "candle",
"token_count": 2903
} | 25 |
use crate::benchmarks::{BenchDevice, BenchDeviceHandler};
use candle_core::{
quantized::{self, GgmlDType, QMatMul},
Device, Module, Tensor,
};
use criterion::{black_box, criterion_group, Criterion, Throughput};
use std::time::Instant;
fn run(matmul: &QMatMul, x: &Tensor) {
matmul.forward(x).unwrap();
}
fn... | candle/candle-core/benches/benchmarks/qmatmul.rs/0 | {
"file_path": "candle/candle-core/benches/benchmarks/qmatmul.rs",
"repo_id": "candle",
"token_count": 1085
} | 26 |
pub trait VecOps: num_traits::NumAssign + Copy {
fn min(self, rhs: Self) -> Self;
fn max(self, rhs: Self) -> Self;
/// Dot-product of two vectors.
///
/// # Safety
///
/// The length of `lhs` and `rhs` have to be at least `len`. `res` has to point to a valid
/// element.
#[inline(al... | candle/candle-core/src/cpu/kernels.rs/0 | {
"file_path": "candle/candle-core/src/cpu/kernels.rs",
"repo_id": "candle",
"token_count": 2456
} | 27 |
#![allow(dead_code)]
use crate::op::{BinaryOpT, CmpOp, ReduceOp, UnaryOpT};
use crate::{CpuStorage, DType, Error, Layout, Result, Shape};
#[derive(Debug, Clone)]
pub struct MetalDevice;
#[derive(Debug)]
pub struct MetalStorage;
#[derive(thiserror::Error, Debug)]
pub enum MetalError {
#[error("{0}")]
Message(... | candle/candle-core/src/dummy_metal_backend.rs/0 | {
"file_path": "candle/candle-core/src/dummy_metal_backend.rs",
"repo_id": "candle",
"token_count": 3182
} | 28 |
//! Support for the [GGUF file format](https://github.com/philpax/ggml/blob/gguf-spec/docs/gguf.md).
//!
use super::{GgmlDType, QTensor};
use crate::{Context, Device, Result};
use byteorder::{LittleEndian, ReadBytesExt, WriteBytesExt};
use std::collections::HashMap;
pub const DEFAULT_ALIGNMENT: u64 = 32;
#[derive(De... | candle/candle-core/src/quantized/gguf_file.rs/0 | {
"file_path": "candle/candle-core/src/quantized/gguf_file.rs",
"repo_id": "candle",
"token_count": 9550
} | 29 |
use crate::{Result, Tensor};
#[macro_export]
macro_rules! test_device {
// TODO: Switch to generating the two last arguments automatically once concat_idents is
// stable. https://github.com/rust-lang/rust/issues/29599
($fn_name: ident, $test_cpu: ident, $test_cuda: ident, $test_metal: ident) => {
... | candle/candle-core/src/test_utils.rs/0 | {
"file_path": "candle/candle-core/src/test_utils.rs",
"repo_id": "candle",
"token_count": 1110
} | 30 |
use candle_core::{DType, Result, Tensor};
struct TmpFile(std::path::PathBuf);
impl TmpFile {
fn create(base: &str) -> TmpFile {
let filename = std::env::temp_dir().join(format!(
"candle-{}-{}-{:?}",
base,
std::process::id(),
std::thread::current().id(),
... | candle/candle-core/tests/serialization_tests.rs/0 | {
"file_path": "candle/candle-core/tests/serialization_tests.rs",
"repo_id": "candle",
"token_count": 981
} | 31 |
use candle::Tensor;
pub struct Dataset {
pub train_images: Tensor,
pub train_labels: Tensor,
pub test_images: Tensor,
pub test_labels: Tensor,
pub labels: usize,
}
pub mod cifar;
pub mod fashion_mnist;
pub mod mnist;
| candle/candle-datasets/src/vision/mod.rs/0 | {
"file_path": "candle/candle-datasets/src/vision/mod.rs",
"repo_id": "candle",
"token_count": 100
} | 32 |
# candle-chinese-clip
Contrastive Language-Image Pre-Training (CLIP) is an architecture trained on
pairs of images and related texts. This variant was trained on Chinese text instead of English.
## Running on CPU
```bash
$ cargo run --example chinese_clip --release -- --images "candle-examples/examples/stable-diffusion... | candle/candle-examples/examples/chinese_clip/README.md/0 | {
"file_path": "candle/candle-examples/examples/chinese_clip/README.md",
"repo_id": "candle",
"token_count": 1129
} | 33 |
pub const LAYERNORM_KERNELS: &str = include_str!(concat!(env!("OUT_DIR"), "/layernorm_kernels.ptx"));
| candle/candle-examples/examples/custom-ops/cuda_kernels.rs/0 | {
"file_path": "candle/candle-examples/examples/custom-ops/cuda_kernels.rs",
"repo_id": "candle",
"token_count": 44
} | 34 |
#[cfg(feature = "mkl")]
extern crate intel_mkl_src;
#[cfg(feature = "accelerate")]
extern crate accelerate_src;
use candle_transformers::models::distilbert::{
Config, DistilBertForMaskedLM, DistilBertModel, DTYPE,
};
use anyhow::{Context, Error as E, Result};
use candle::{Device, Tensor};
use candle_nn::VarBuilde... | candle/candle-examples/examples/distilbert/main.rs/0 | {
"file_path": "candle/candle-examples/examples/distilbert/main.rs",
"repo_id": "candle",
"token_count": 4559
} | 35 |
#[cfg(feature = "mkl")]
extern crate intel_mkl_src;
#[cfg(feature = "accelerate")]
extern crate accelerate_src;
use candle_transformers::models::jina_bert::{BertModel, Config, PositionEmbeddingType};
use anyhow::Error as E;
use candle::{DType, Module, Tensor};
use candle_nn::VarBuilder;
use clap::Parser;
#[derive(P... | candle/candle-examples/examples/jina-bert/main.rs/0 | {
"file_path": "candle/candle-examples/examples/jina-bert/main.rs",
"repo_id": "candle",
"token_count": 3414
} | 36 |
# candle-mobileclip
MobileCLIP is a family of efficient CLIP-like models that use FastViT-based image encoders.
See [MobileCLIP: Fast Image-Text Models through Multi-Modal Reinforced Training](https://arxiv.org/abs/2311.17049)
## Running an example on CPU
```
$ cargo run --example mobileclip --release -- --images "c... | candle/candle-examples/examples/mobileclip/README.md/0 | {
"file_path": "candle/candle-examples/examples/mobileclip/README.md",
"repo_id": "candle",
"token_count": 379
} | 37 |
#[cfg(feature = "mkl")]
extern crate intel_mkl_src;
#[cfg(feature = "accelerate")]
extern crate accelerate_src;
use anyhow::{Error as E, Result};
use clap::{Parser, ValueEnum};
use candle_transformers::models::olmo::{Config, Model as OLMo};
use candle_transformers::models::olmo2::{Config as Config2, Model as OLMo2};... | candle/candle-examples/examples/olmo/main.rs/0 | {
"file_path": "candle/candle-examples/examples/olmo/main.rs",
"repo_id": "candle",
"token_count": 4321
} | 38 |
#[cfg(feature = "mkl")]
extern crate intel_mkl_src;
#[cfg(feature = "accelerate")]
extern crate accelerate_src;
use anyhow::{Error as E, Result};
use clap::Parser;
use candle_transformers::models::pixtral::{vision_model, Config, Model};
use candle::{DType, Device, Module, Tensor};
use candle_examples::token_output_... | candle/candle-examples/examples/pixtral/main.rs/0 | {
"file_path": "candle/candle-examples/examples/pixtral/main.rs",
"repo_id": "candle",
"token_count": 5495
} | 39 |
# candle-recurrent-gemma
This example corresponds to the 2B base version of the RecurrentGemma model; see the
[Hugging Face model card](https://huggingface.co/google/recurrentgemma-2b).
```bash
cargo run --features cuda -r --example recurrent-gemma -- \
--prompt "Write me a poem about Machine Learning."
```
| candle/candle-examples/examples/recurrent-gemma/README.md/0 | {
"file_path": "candle/candle-examples/examples/recurrent-gemma/README.md",
"repo_id": "candle",
"token_count": 101
} | 40 |
#[cfg(feature = "mkl")]
extern crate intel_mkl_src;
#[cfg(feature = "accelerate")]
extern crate accelerate_src;
use candle::{DType, IndexOp, D};
use candle_nn::{Module, VarBuilder};
use candle_transformers::models::resnet;
use clap::{Parser, ValueEnum};
#[derive(Clone, Copy, Debug, ValueEnum)]
enum Which {
#[val... | candle/candle-examples/examples/resnet/main.rs/0 | {
"file_path": "candle/candle-examples/examples/resnet/main.rs",
"repo_id": "candle",
"token_count": 1288
} | 41 |
#[cfg(feature = "mkl")]
extern crate intel_mkl_src;
#[cfg(feature = "accelerate")]
extern crate accelerate_src;
use anyhow::Result;
use candle::{DType, IndexOp, Tensor};
use candle_nn::VarBuilder;
use candle_transformers::models::snac::{Config, Model};
use clap::{Parser, ValueEnum};
use hf_hub::api::sync::Api;
mod a... | candle/candle-examples/examples/snac/main.rs/0 | {
"file_path": "candle/candle-examples/examples/snac/main.rs",
"repo_id": "candle",
"token_count": 3485
} | 42 |
# candle-stella-en-v5: Implementation of the [stella_en_1.5B_v5](https://huggingface.co/dunzhang/stella_en_1.5B_v5) embedding model
As of 7 Oct 2024, *stella_en_1.5B_v5* is one of the top-ranking models for `retrieval` and `reranking` tasks on the [MTEB](https://huggingface.co/spaces/mteb/leaderboard) leaderboard.
[Model car... | candle/candle-examples/examples/stella-en-v5/README.md/0 | {
"file_path": "candle/candle-examples/examples/stella-en-v5/README.md",
"repo_id": "candle",
"token_count": 1149
} | 43 |
# candle-yi
Candle implementations of the Yi family of bilingual (English and Chinese) LLMs.
## Running an example
```bash
$ cargo run --example yi -- --prompt "Here is a test sentence"
> python
> print("Hello World")
>
```
| candle/candle-examples/examples/yi/README.md/0 | {
"file_path": "candle/candle-examples/examples/yi/README.md",
"repo_id": "candle",
"token_count": 73
} | 44 |
// Copied from https://github.com/ruuda/bs1770/blob/master/src/lib.rs
// BS1770 -- Loudness analysis library conforming to ITU-R BS.1770
// Copyright 2020 Ruud van Asseldonk
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// A copy ... | candle/candle-examples/src/bs1770.rs/0 | {
"file_path": "candle/candle-examples/src/bs1770.rs",
"repo_id": "candle",
"token_count": 7220
} | 45 |
/******************************************************************************
* Copyright (c) 2023, Tri Dao.
******************************************************************************/
#pragma once
// #include <c10/cuda/CUDAException.h> // For C10_CUDA_CHECK and C10_CUDA_KERNEL_LAUNCH_CHECK
#include "error.h... | candle/candle-flash-attn/kernels/flash_fwd_launch_template.h/0 | {
"file_path": "candle/candle-flash-attn/kernels/flash_fwd_launch_template.h",
"repo_id": "candle",
"token_count": 10705
} | 46 |
# candle-kernels
This crate contains CUDA kernels used from candle. Some of these implementations
come from the [dfdx crate](https://github.com/coreylowman/dfdx).
| candle/candle-kernels/README.md/0 | {
"file_path": "candle/candle-kernels/README.md",
"repo_id": "candle",
"token_count": 45
} | 47 |
#include "cuda_utils.cuh"
#include<stdint.h>
#define WHERE_OP(TYPENAME, ID_TYPENAME, FN_NAME) \
extern "C" __global__ void FN_NAME( \
const size_t numel, \
const size_t num_dims, \
const size_t *info, \
const ID_TYPENAME *ids, \
const TYPENAME *t, \
const TYPENAME *f, \
TYPENAME *out \
) ... | candle/candle-kernels/src/ternary.cu/0 | {
"file_path": "candle/candle-kernels/src/ternary.cu",
"repo_id": "candle",
"token_count": 1345
} | 48 |
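The `WHERE_OP` macro in the ternary.cu row above generates CUDA kernels for an elementwise select. Its semantics can be sketched in plain Python (function name hypothetical, used only for illustration): pick from `t` where the corresponding id is truthy, otherwise from `f`.

```python
def where_op(ids, t, f):
    # Elementwise select: t[i] where ids[i] is truthy, otherwise f[i].
    return [ti if i else fi for i, ti, fi in zip(ids, t, f)]

print(where_op([1, 0, 1], [10, 20, 30], [-1, -2, -3]))  # → [10, -2, 30]
```

The CUDA version does the same per output element, with the `info` array handling strided (non-contiguous) layouts.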
#include <metal_stdlib>
#include <metal_integer>
#include <metal_atomic>
using namespace metal;
// Constants
// 2^32 and 1/2^32. Useful for converting between float and uint.
static constexpr constant ulong UNIF01_NORM32 = 4294967296;
static constexpr constant float UNIF01_INV32 = 2.328306436538696289e-10;
// 2 * pi
... | candle/candle-metal-kernels/src/random.metal/0 | {
"file_path": "candle/candle-metal-kernels/src/random.metal",
"repo_id": "candle",
"token_count": 3671
} | 49 |
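The two constants in the random.metal row above encode 2^32 and its reciprocal, used to map 32-bit random integers into the interval [0, 1). A quick arithmetic check (plain Python, independent of Metal) confirms the values and the range guarantee:

```python
# The constants from the Metal source, checked against 2**32.
UNIF01_NORM32 = 4294967296
UNIF01_INV32 = 2.328306436538696289e-10

assert UNIF01_NORM32 == 2**32
assert abs(UNIF01_INV32 - 1 / 2**32) < 1e-25

# Converting the largest 32-bit uint stays strictly below 1.0.
u = 0xFFFFFFFF
x = u * UNIF01_INV32
assert 0.0 <= x < 1.0
```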
mod benchmarks;
use criterion::criterion_main;
criterion_main!(
benchmarks::softmax::benches,
benchmarks::layer_norm::benches,
benchmarks::conv::benches
);
| candle/candle-nn/benches/bench_main.rs/0 | {
"file_path": "candle/candle-nn/benches/bench_main.rs",
"repo_id": "candle",
"token_count": 58
} | 50 |
//! Layer Normalization.
//!
//! This layer applies Layer Normalization over a mini-batch of inputs as described in [`Layer
//! Normalization`]. The input is expected to have three dimensions: a batch dimension, a length,
//! and a hidden size, the normalization is applied over the last dimension.
//!
//! # Example
//!... | candle/candle-nn/src/layer_norm.rs/0 | {
"file_path": "candle/candle-nn/src/layer_norm.rs",
"repo_id": "candle",
"token_count": 2656
} | 51 |
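The layer_norm row above describes normalization applied over the last (hidden) dimension. A dependency-free Python sketch of that core computation follows; the eps value is assumed, and the learned scale/shift parameters of a real LayerNorm are omitted.

```python
def layer_norm(x, eps=1e-5):
    # Normalize a 1-D hidden vector to zero mean and (near) unit variance.
    mean = sum(x) / len(x)
    var = sum((v - mean) ** 2 for v in x) / len(x)
    return [(v - mean) / (var + eps) ** 0.5 for v in x]

out = layer_norm([1.0, 2.0, 3.0])  # ≈ [-1.2247, 0.0, 1.2247]
```

In the candle layer, this runs per position over the hidden dimension of a (batch, length, hidden) tensor.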
#[cfg(feature = "mkl")]
extern crate intel_mkl_src;
#[cfg(feature = "accelerate")]
extern crate accelerate_src;
use candle::test_utils::to_vec0_round;
use candle::{Device, Result, Tensor};
/* Equivalent python code:
import torch
import torch.nn.functional as F
input = torch.tensor([
[ 1.1050, 0.3013, -1.5394, -... | candle/candle-nn/tests/loss.rs/0 | {
"file_path": "candle/candle-nn/tests/loss.rs",
"repo_id": "candle",
"token_count": 1344
} | 52 |
from .module import Module
from typing import Optional, Tuple, Any
from candle import Tensor
import candle
class Embedding(Module):
"""A simple lookup table that stores embeddings of a fixed dictionary and size.
This module is often used to store word embeddings and retrieve them using indices.
The input... | candle/candle-pyo3/py_src/candle/nn/sparse.py/0 | {
"file_path": "candle/candle-pyo3/py_src/candle/nn/sparse.py",
"repo_id": "candle",
"token_count": 590
} | 53 |
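The `Embedding` docstring above describes a lookup table indexed by token ids. Here is a toy sketch of that lookup in plain Python — not the candle API; the table contents are arbitrary random values for illustration.

```python
import random

vocab_size, dim = 5, 3
# One row of `dim` floats per vocabulary entry.
table = [[random.random() for _ in range(dim)] for _ in range(vocab_size)]

def embed(indices):
    # Gather one embedding row per input index.
    return [table[i] for i in indices]

vectors = embed([0, 2, 2])
assert vectors[1] == vectors[2]  # identical ids map to the same row
```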
//! Implementation of BLIP text encoder/decoder.
//!
//! - 📝 [Paper](https://arxiv.org/abs/2201.12086): "BLIP: Bootstrapping Language-Image Pre-training for Unified Vision-Language Understanding and Generation"
//!
//! - ⚡ [Interactive Wasm Example](https://huggingface.co/spaces/radames/Candle-BLIP-Image-Captioning)
//... | candle/candle-transformers/src/models/blip_text.rs/0 | {
"file_path": "candle/candle-transformers/src/models/blip_text.rs",
"repo_id": "candle",
"token_count": 7345
} | 54 |
//! Implementation of the Depth Anything V2 model.
//!
//! See:
//! - ["Depth Anything: Unleashing the Power of Large-Scale Unlabeled Data"](https://github.com/LiheYoung/Depth-Anything)
//!
use std::sync::Arc;
use candle::D::Minus1;
use candle::{Module, Result, Tensor};
use candle_nn::ops::Identity;
use candle... | candle/candle-transformers/src/models/depth_anything_v2.rs/0 | {
"file_path": "candle/candle-transformers/src/models/depth_anything_v2.rs",
"repo_id": "candle",
"token_count": 9268
} | 55 |
//! MetaVoice Studio ML Models
//!
//! See MetaVoice's TTS and voice cloning models:
//! - [Github](https://github.com/metavoiceio/metavoice-src)
//! - [Website](https://studio.metavoice.ai/)
use candle::{DType, Device, Error as E, IndexOp, Module, Result, Tensor, D};
use candle_nn::{embedding, linear_b, rms_norm, Emb... | candle/candle-transformers/src/models/metavoice.rs/0 | {
"file_path": "candle/candle-transformers/src/models/metavoice.rs",
"repo_id": "candle",
"token_count": 21765
} | 56 |
//! # MobileNet-v4
//!
//! MobileNet-v4 inference implementation based on timm.
//!
//! ## Paper
//!
//! ["MobileNetV4 - Universal Models for the Mobile Ecosystem"](https://arxiv.org/abs/2404.10518)
//!
//! ## References
//!
//! - [PyTorch Implementation](https://github.com/huggingface/pytorch-image-models/blob/main/ti... | candle/candle-transformers/src/models/mobilenetv4.rs/0 | {
"file_path": "candle/candle-transformers/src/models/mobilenetv4.rs",
"repo_id": "candle",
"token_count": 16908
} | 57 |
//! Microsoft Phi model implementation
//!
//! The Phi series are decoder-only transformers designed for code and language tasks.
//!
//! Key characteristics:
//! - Decoder-only transformer architecture
//! - RoPE embeddings
//! - Layer normalization
//! - QK normalization
//!
//! - ⚡ [Interactive Wasm Example](https:/... | candle/candle-transformers/src/models/phi.rs/0 | {
"file_path": "candle/candle-transformers/src/models/phi.rs",
"repo_id": "candle",
"token_count": 6213
} | 58 |
//! Phi3 model implementation with quantization support.
//!
//! Phi3 is a language model intended for research purposes.
//! This implementation provides quantization for reduced memory usage.
//!
//! Key characteristics:
//! - Multi-head attention
//! - RMSNorm for layer normalization
//! - Rotary positional embeddin... | candle/candle-transformers/src/models/quantized_phi3.rs/0 | {
"file_path": "candle/candle-transformers/src/models/quantized_phi3.rs",
"repo_id": "candle",
"token_count": 6108
} | 59 |
//! RWKV v6 model implementation.
//!
//! The [RWKV model](https://wiki.rwkv.com/) is a recurrent neural network model
//! with performance on par with transformer architectures. Several variants are
//! available, candle implements the v5 and v6 versions and can be used with
//! Eagle 7B([blog post](https://blog.rwkv.... | candle/candle-transformers/src/models/rwkv_v6.rs/0 | {
"file_path": "candle/candle-transformers/src/models/rwkv_v6.rs",
"repo_id": "candle",
"token_count": 6204
} | 60 |
//! Ancestral sampling with Euler method steps.
//!
//! Based on the original [`k-diffusion` implementation by Katherine Crowson]( https://github.com/crowsonkb/k-diffusion/blob/481677d114f6ea445aa009cf5bd7a9cdee909e47/k_diffusion/sampling.py#L72).
//!
use super::{
schedulers::{
betas_for_alpha_bar, BetaSche... | candle/candle-transformers/src/models/stable_diffusion/euler_ancestral_discrete.rs/0 | {
"file_path": "candle/candle-transformers/src/models/stable_diffusion/euler_ancestral_discrete.rs",
"repo_id": "candle",
"token_count": 4097
} | 61 |
use candle::{DType, Device, Error, Tensor};
use crate::models::whisper::audio::{log_mel_spectrogram_, Float};
pub fn pcm_to_mel<T: Float>(samples: &[T], filters: &[T]) -> Vec<T> {
log_mel_spectrogram_(
samples,
filters,
super::N_FFT,
super::HOP_LENGTH,
super::N_MELS,
... | candle/candle-transformers/src/models/voxtral/audio.rs/0 | {
"file_path": "candle/candle-transformers/src/models/voxtral/audio.rs",
"repo_id": "candle",
"token_count": 1051
} | 62 |
use crate::models::with_tracing::{linear, Linear};
use candle::{DType, Module, Result, Tensor};
use candle_nn::{
embedding, layer_norm, ops::softmax_last_dim, Activation, Embedding, LayerNorm, VarBuilder,
};
#[derive(Debug, Clone, serde::Deserialize)]
pub struct Config {
pub hidden_size: usize,
pub layer_n... | candle/candle-transformers/src/models/xlm_roberta.rs/0 | {
"file_path": "candle/candle-transformers/src/models/xlm_roberta.rs",
"repo_id": "candle",
"token_count": 8889
} | 63 |
use candle_transformers::models::bert;
use wasm_bindgen::prelude::*;
pub use bert::{BertModel, Config, DTYPE};
pub use tokenizers::{PaddingParams, Tokenizer};
#[wasm_bindgen]
extern "C" {
// Use `js_namespace` here to bind `console.log(..)` instead of just
// `log(..)`
#[wasm_bindgen(js_namespace = consol... | candle/candle-wasm-examples/bert/src/lib.rs/0 | {
"file_path": "candle/candle-wasm-examples/bert/src/lib.rs",
"repo_id": "candle",
"token_count": 226
} | 64 |
use crate::console_log;
use crate::worker::{ModelData, Worker, WorkerInput, WorkerOutput};
use std::str::FromStr;
use wasm_bindgen::prelude::*;
use wasm_bindgen_futures::JsFuture;
use yew::{html, Component, Context, Html};
use yew_agent::{Bridge, Bridged};
async fn fetch_url(url: &str) -> Result<Vec<u8>, JsValue> {
... | candle/candle-wasm-examples/llama2-c/src/app.rs/0 | {
"file_path": "candle/candle-wasm-examples/llama2-c/src/app.rs",
"repo_id": "candle",
"token_count": 5448
} | 65 |
// Load the Candle WASM module
let init, ModelEncoder;
async function fetchArrayBuffer(url) {
const cacheName = "t5-candle-cache";
const cache = await caches.open(cacheName);
const cachedResponse = await cache.match(url);
if (cachedResponse) {
const data = await cachedResponse.arrayBuffer();
ret... | candle/candle-wasm-examples/t5/T5ModelEncoderWorker.js/0 | {
"file_path": "candle/candle-wasm-examples/t5/T5ModelEncoderWorker.js",
"repo_id": "candle",
"token_count": 873
} | 66 |
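The `fetchArrayBuffer` helper in the row above follows a cache-then-network pattern using the browser Cache API. The same idea in plain Python (purely illustrative: the cache is a dict and the network fetch is a stub, not real I/O):

```python
cache = {}

def fetch(url):
    # Hypothetical network fetch, stubbed for illustration.
    return b"payload for " + url.encode()

def fetch_cached(url):
    # Return a cached copy when present; otherwise fetch and store it.
    if url in cache:
        return cache[url]
    data = fetch(url)
    cache[url] = data
    return data

a = fetch_cached("https://example.com/model.bin")
b = fetch_cached("https://example.com/model.bin")
assert a is b  # second call served from the cache
```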
use candle_wasm_example_whisper::worker::{Decoder as D, ModelData};
use wasm_bindgen::prelude::*;
#[wasm_bindgen]
pub struct Decoder {
decoder: D,
}
#[wasm_bindgen]
impl Decoder {
#[wasm_bindgen(constructor)]
#[allow(clippy::too_many_arguments)]
pub fn new(
weights: Vec<u8>,
tokenizer:... | candle/candle-wasm-examples/whisper/src/bin/m.rs/0 | {
"file_path": "candle/candle-wasm-examples/whisper/src/bin/m.rs",
"repo_id": "candle",
"token_count": 694
} | 67 |
mod app;
pub mod coco_classes;
pub mod model;
pub mod worker;
pub use app::App;
pub use worker::Worker;
| candle/candle-wasm-examples/yolo/src/lib.rs/0 | {
"file_path": "candle/candle-wasm-examples/yolo/src/lib.rs",
"repo_id": "candle",
"token_count": 37
} | 68 |
module.exports = {
root: true,
parser: "@typescript-eslint/parser",
extends: [
"eslint:recommended",
"plugin:@typescript-eslint/recommended",
"plugin:svelte/recommended",
"prettier",
],
plugins: ["@typescript-eslint"],
ignorePatterns: ["*.cjs"],
overrides: [
{
files: ["*.svelte"],
parser: "svelte... | chat-ui/.eslintrc.cjs/0 | {
"file_path": "chat-ui/.eslintrc.cjs",
"repo_id": "chat-ui",
"token_count": 420
} | 69 |
{
"editor.formatOnSave": true,
"editor.defaultFormatter": "esbenp.prettier-vscode",
"editor.codeActionsOnSave": {
"source.fixAll": "explicit"
},
"eslint.validate": ["javascript", "svelte"],
"[svelte]": {
"editor.defaultFormatter": "esbenp.prettier-vscode"
},
"[typescript]": {
"editor.defaultFormatter": "e... | chat-ui/.vscode/settings.json/0 | {
"file_path": "chat-ui/.vscode/settings.json",
"repo_id": "chat-ui",
"token_count": 153
} | 70 |
{{- if and .Values.serviceAccount.enabled .Values.serviceAccount.create }}
apiVersion: v1
kind: ServiceAccount
automountServiceAccountToken: {{ .Values.serviceAccount.automountServiceAccountToken }}
metadata:
name: "{{ .Values.serviceAccount.name | default (include "name" .) }}"
namespace: {{ .Release.Namespace }}
... | chat-ui/chart/templates/service-account.yaml/0 | {
"file_path": "chat-ui/chart/templates/service-account.yaml",
"repo_id": "chat-ui",
"token_count": 154
} | 71 |
# Llama.cpp
| Feature | Available |
| --------------------------- | --------- |
| [Tools](../tools) | No |
| [Multimodal](../multimodal) | No |
Chat UI supports the llama.cpp API server directly without the need for an adapter. You can do this using the `llamacpp` endpoint ... | chat-ui/docs/source/configuration/models/providers/llamacpp.md/0 | {
"file_path": "chat-ui/docs/source/configuration/models/providers/llamacpp.md",
"repo_id": "chat-ui",
"token_count": 1026
} | 72 |
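The llama.cpp excerpt above is cut off before the configuration itself. As a hedged sketch only — the key names (`type`, `baseURL`) are assumed from chat-ui's provider-doc conventions and are not confirmed by this excerpt — an `.env.local` entry pointing Chat UI at a local llama.cpp API server might look like:

```env
# Hypothetical MODELS entry for a llama.cpp server on the default port.
MODELS=`[
  {
    "name": "Local llama.cpp model",
    "endpoints": [
      {
        "type": "llamacpp",
        "baseURL": "http://localhost:8080"
      }
    ]
  }
]`
```

Treat this as a shape illustration, not the authoritative schema; the repository's own `docs/source/configuration/models/providers/llamacpp.md` is the source of truth.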
ENV_LOCAL_PATH=/app/.env.local
if test -z "${DOTENV_LOCAL}" ; then
if ! test -f "${ENV_LOCAL_PATH}" ; then
echo "DOTENV_LOCAL was not found in the ENV variables and .env.local is not set using a bind volume. Make sure to set environment variables properly. "
fi;
else
echo "DOTENV_LOCAL was found in... | chat-ui/entrypoint.sh/0 | {
"file_path": "chat-ui/entrypoint.sh",
"repo_id": "chat-ui",
"token_count": 266
} | 73 |
import type { App } from "$api";
import { base } from "$app/paths";
import { treaty, type Treaty } from "@elysiajs/eden";
import { browser } from "$app/environment";
import superjson from "superjson";
import ObjectId from "bson-objectid";
superjson.registerCustom<ObjectId, string>(
{
isApplicable: (value): value is... | chat-ui/src/lib/APIClient.ts/0 | {
"file_path": "chat-ui/src/lib/APIClient.ts",
"repo_id": "chat-ui",
"token_count": 717
} | 74 |
<script lang="ts">
import { createEventDispatcher, onDestroy, onMount } from "svelte";
import { cubicOut } from "svelte/easing";
import { fade, fly } from "svelte/transition";
import Portal from "./Portal.svelte";
import { browser } from "$app/environment";
import CarbonClose from "~icons/carbon/close";
interfa... | chat-ui/src/lib/components/Modal.svelte/0 | {
"file_path": "chat-ui/src/lib/components/Modal.svelte",
"repo_id": "chat-ui",
"token_count": 822
} | 75 |
<script lang="ts">
import type { Model } from "$lib/types/Model";
import { getTokenizer } from "$lib/utils/getTokenizer";
import type { PreTrainedTokenizer } from "@huggingface/transformers";
import { untrack } from "svelte";
interface Props {
classNames?: string;
prompt?: string;
modelTokenizer: Exclude<Mo... | chat-ui/src/lib/components/TokensCounter.svelte/0 | {
"file_path": "chat-ui/src/lib/components/TokensCounter.svelte",
"repo_id": "chat-ui",
"token_count": 449
} | 76 |
<script lang="ts">
import { invalidateAll } from "$app/navigation";
import { page } from "$app/state";
import { base } from "$app/paths";
import type { Model } from "$lib/types/Model";
interface Props {
models: Model[];
currentModel: Model;
}
let { models, currentModel }: Props = $props();
let selectedMo... | chat-ui/src/lib/components/chat/ModelSwitch.svelte/0 | {
"file_path": "chat-ui/src/lib/components/chat/ModelSwitch.svelte",
"repo_id": "chat-ui",
"token_count": 640
} | 77 |
<script lang="ts">
import { usePublicConfig } from "$lib/utils/PublicConfig.svelte";
const publicConfig = usePublicConfig();
interface Props {
classNames?: string;
}
let { classNames = "" }: Props = $props();
</script>
{#if publicConfig.PUBLIC_APP_ASSETS === "chatui"}
<svg
height="30"
width="30"
viewB... | chat-ui/src/lib/components/icons/Logo.svelte/0 | {
"file_path": "chat-ui/src/lib/components/icons/Logo.svelte",
"repo_id": "chat-ui",
"token_count": 538
} | 78 |
import type { Migration } from ".";
import { collections } from "$lib/server/database";
import { ObjectId } from "mongodb";
const resetTools: Migration = {
_id: new ObjectId("000000000000000000000007"),
name: "Reset tools to empty",
up: async () => {
const { settings } = collections;
await settings.updateMany(... | chat-ui/src/lib/migrations/routines/07-reset-tools-in-settings.ts/0 | {
"file_path": "chat-ui/src/lib/migrations/routines/07-reset-tools-in-settings.ts",
"repo_id": "chat-ui",
"token_count": 133
} | 79 |
import {
Issuer,
type BaseClient,
type UserinfoResponse,
type TokenSet,
custom,
} from "openid-client";
import { addHours, addWeeks } from "date-fns";
import { config } from "$lib/server/config";
import { sha256 } from "$lib/utils/sha256";
import { z } from "zod";
import { dev } from "$app/environment";
import typ... | chat-ui/src/lib/server/auth.ts/0 | {
"file_path": "chat-ui/src/lib/server/auth.ts",
"repo_id": "chat-ui",
"token_count": 3197
} | 80 |
import type { MessageFile } from "$lib/types/Message";
import { z } from "zod";
export interface FileProcessorOptions<TMimeType extends string = string> {
supportedMimeTypes: TMimeType[];
maxSizeInMB: number;
}
export type ImageProcessor<TMimeType extends string = string> = (file: MessageFile) => Promise<{
file: B... | chat-ui/src/lib/server/endpoints/document.ts/0 | {
"file_path": "chat-ui/src/lib/server/endpoints/document.ts",
"repo_id": "chat-ui",
"token_count": 706
} | 81 |
import { error } from "@sveltejs/kit";
import { collections } from "$lib/server/database";
import type { Conversation } from "$lib/types/Conversation";
import type { SharedConversation } from "$lib/types/SharedConversation";
import type { MessageFile } from "$lib/types/Message";
export async function downloadFile(
sh... | chat-ui/src/lib/server/files/downloadFile.ts/0 | {
"file_path": "chat-ui/src/lib/server/files/downloadFile.ts",
"repo_id": "chat-ui",
"token_count": 397
} | 82 |
import { collectDefaultMetrics, Registry, Counter, Summary } from "prom-client";
import express from "express";
import { logger } from "$lib/server/logger";
import { config } from "$lib/server/config";
import type { Model } from "$lib/types/Model";
import { onExit } from "./exitHandler";
import { promisify } from "util... | chat-ui/src/lib/server/metrics.ts/0 | {
"file_path": "chat-ui/src/lib/server/metrics.ts",
"repo_id": "chat-ui",
"token_count": 2366
} | 83 |
import { config } from "$lib/server/config";
import { Client } from "@gradio/client";
import { SignJWT } from "jose";
import JSON5 from "json5";
import {
MessageToolUpdateType,
MessageUpdateType,
type MessageToolUpdate,
} from "$lib/types/MessageUpdate";
import { logger } from "$lib/server/logger";
export async func... | chat-ui/src/lib/server/tools/utils.ts/0 | {
"file_path": "chat-ui/src/lib/server/tools/utils.ts",
"repo_id": "chat-ui",
"token_count": 1175
} | 84 |
import type { WebSearchScrapedSource, WebSearchSource } from "$lib/types/WebSearch";
import type { MessageWebSearchUpdate } from "$lib/types/MessageUpdate";
import { withPage } from "./playwright";
import { spatialParser } from "./parser";
import { htmlToMarkdownTree } from "../markdown/tree";
import { timeout } from ... | chat-ui/src/lib/server/websearch/scrape/scrape.ts/0 | {
"file_path": "chat-ui/src/lib/server/websearch/scrape/scrape.ts",
"repo_id": "chat-ui",
"token_count": 863
} | 85 |
import { writable } from "svelte/store";
export const isAborted = writable<boolean>(false);
| chat-ui/src/lib/stores/isAborted.ts/0 | {
"file_path": "chat-ui/src/lib/stores/isAborted.ts",
"repo_id": "chat-ui",
"token_count": 30
} | 86 |
import type { WebSearchSource } from "$lib/types/WebSearch";
import type { ToolCall, ToolResult } from "$lib/types/Tool";
export type MessageUpdate =
| MessageStatusUpdate
| MessageTitleUpdate
| MessageToolUpdate
| MessageWebSearchUpdate
| MessageStreamUpdate
| MessageFileUpdate
| MessageFinalAnswerUpdate
| Me... | chat-ui/src/lib/types/MessageUpdate.ts/0 | {
"file_path": "chat-ui/src/lib/types/MessageUpdate.ts",
"repo_id": "chat-ui",
"token_count": 1093
} | 87 |
import type { env as publicEnv } from "$env/dynamic/public";
import { page } from "$app/state";
import { base } from "$app/paths";
import type { Transporter } from "@sveltejs/kit";
import { getContext } from "svelte";
type PublicConfigKey = keyof typeof publicEnv;
class PublicConfigManager {
#configStore = $state<R... | chat-ui/src/lib/utils/PublicConfig.svelte.ts/0 | {
"file_path": "chat-ui/src/lib/utils/PublicConfig.svelte.ts",
"repo_id": "chat-ui",
"token_count": 691
} | 88 |
type Gen<T, TReturn> = AsyncGenerator<T, TReturn, undefined>;
type GenPromiseMap<T, TReturn> = Map<
Gen<T, TReturn>,
Promise<{ gen: Gen<T, TReturn> } & IteratorResult<T, TReturn>>
>;
/** Merges multiple async generators into a single async generator that yields values from all of them in parallel. */
export async f... | chat-ui/src/lib/utils/mergeAsyncGenerators.ts/0 | {
"file_path": "chat-ui/src/lib/utils/mergeAsyncGenerators.ts",
"repo_id": "chat-ui",
"token_count": 407
} | 89 |
import { collections } from "$lib/server/database";
import { ObjectId } from "mongodb";
import { describe, expect, it } from "vitest";
import { insertLegacyConversation, insertSideBranchesConversation } from "./treeHelpers.spec";
import { addChildren } from "./addChildren";
import type { Message } from "$lib/types/Mes... | chat-ui/src/lib/utils/tree/addChildren.spec.ts/0 | {
"file_path": "chat-ui/src/lib/utils/tree/addChildren.spec.ts",
"repo_id": "chat-ui",
"token_count": 1301
} | 90 |
import { UrlDependency } from "$lib/types/UrlDependency";
import type { ConvSidebar } from "$lib/types/ConvSidebar";
import { useAPIClient, handleResponse } from "$lib/APIClient";
import { getConfigManager } from "$lib/utils/PublicConfig.svelte";
export const load = async ({ depends, fetch }) => {
depends(UrlDependen... | chat-ui/src/routes/+layout.ts/0 | {
"file_path": "chat-ui/src/routes/+layout.ts",
"repo_id": "chat-ui",
"token_count": 1058
} | 91 |
import { config } from "$lib/server/config";
import { collections } from "$lib/server/database.js";
import { toolFromConfigs } from "$lib/server/tools/index.js";
import { ReviewStatus } from "$lib/types/Review";
import type { CommunityToolDB } from "$lib/types/Tool.js";
import { ObjectId } from "mongodb";
import { edit... | chat-ui/src/routes/api/tools/[toolId]/+server.ts/0 | {
"file_path": "chat-ui/src/routes/api/tools/[toolId]/+server.ts",
"repo_id": "chat-ui",
"token_count": 1425
} | 92 |
import { useAPIClient, handleResponse } from "$lib/APIClient";
import { UrlDependency } from "$lib/types/UrlDependency";
import { redirect } from "@sveltejs/kit";
export const load = async ({ params, depends, fetch }) => {
depends(UrlDependency.Conversation);
const client = useAPIClient({ fetch });
try {
return... | chat-ui/src/routes/conversation/[id]/+page.ts/0 | {
"file_path": "chat-ui/src/routes/conversation/[id]/+page.ts",
"repo_id": "chat-ui",
"token_count": 147
} | 93 |
import ModelThumbnail from "./ModelThumbnail.svelte";
import { redirect, type RequestHandler } from "@sveltejs/kit";
import { Resvg } from "@resvg/resvg-js";
import satori from "satori";
import { html } from "satori-html";
import InterRegular from "$lib/server/fonts/Inter-Regular.ttf";
import InterBold from "$lib/ser... | chat-ui/src/routes/models/[...model]/thumbnail.png/+server.ts/0 | {
"file_path": "chat-ui/src/routes/models/[...model]/thumbnail.png/+server.ts",
"repo_id": "chat-ui",
"token_count": 516
} | 94 |
<script lang="ts">
import { base } from "$app/paths";
import { afterNavigate, goto } from "$app/navigation";
import { useSettingsStore } from "$lib/stores/settings";
import CarbonCheckmark from "~icons/carbon/checkmark";
import Modal from "$lib/components/Modal.svelte";
interface Props {
children?: import("sv... | chat-ui/src/routes/settings/+layout.svelte/0 | {
"file_path": "chat-ui/src/routes/settings/+layout.svelte",
"repo_id": "chat-ui",
"token_count": 433
} | 95 |
{
"license": "Apache-2.0",
"creators": [
{
"affiliation": "Hugging Face",
"name": "Quentin Lhoest"
},
{
"orcid": "0000-0003-1727-1045",
"affiliation": "Hugging Face",
"name": "Albert Villanova del Moral"
},
{
... | datasets/.zenodo.json/0 | {
"file_path": "datasets/.zenodo.json",
"repo_id": "datasets",
"token_count": 1953
} | 96 |
# Differences between Dataset and IterableDataset
There are two types of dataset objects, a [`Dataset`] and an [`IterableDataset`].
Whichever type of dataset you choose to use or create depends on the size of the dataset.
In general, an [`IterableDataset`] is ideal for big datasets (think hundreds of GBs!) due to its ... | datasets/docs/source/about_mapstyle_vs_iterable.mdx/0 | {
"file_path": "datasets/docs/source/about_mapstyle_vs_iterable.mdx",
"repo_id": "datasets",
"token_count": 3723
} | 97 |
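The excerpt above contrasts map-style (`Dataset`) and iterable (`IterableDataset`) objects. As an illustration only — plain-Python stand-ins, not the real `datasets` classes — the access-pattern difference it describes is:

```python
# Sketch of the access-pattern difference: map-style gives random access
# and a known length; iterable-style yields rows lazily, one at a time.

class MapStyle:
    """Map-style stand-in: indexable with a known length."""
    def __init__(self, rows):
        self.rows = rows
    def __len__(self):
        return len(self.rows)
    def __getitem__(self, idx):
        return self.rows[idx]

def iterable_style(row_source):
    """Iterable-style stand-in: rows are produced lazily from a source."""
    for row in row_source():
        yield row

ds = MapStyle([{"text": "a"}, {"text": "b"}, {"text": "c"}])
print(len(ds), ds[1]["text"])  # random access by index works

stream = iterable_style(lambda: ({"text": t} for t in "abc"))
print(next(iter(stream))["text"])  # only sequential, streaming access
```

This is why the excerpt recommends the iterable form for very large datasets: nothing here requires the full row list to fit in memory at once.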
# Create an image dataset
There are two methods for creating and sharing an image dataset. This guide will show you how to:
* Create an image dataset from local files in python with [`Dataset.push_to_hub`]. This is an easy way that requires only a few steps in python.
* Create an image dataset with `ImageFolder` and... | datasets/docs/source/image_dataset.mdx/0 | {
"file_path": "datasets/docs/source/image_dataset.mdx",
"repo_id": "datasets",
"token_count": 2592
} | 98 |
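The image-dataset excerpt mentions the `ImageFolder` route. As a standard-library sketch of the split/label directory layout such folder-based loaders conventionally infer class labels from — the actual `load_dataset("imagefolder", ...)` call is not exercised here:

```python
import pathlib
import tempfile

# Build a split/label/file tree; folder-based loaders typically take the
# class label from the parent directory name ("cat", "dog").
root = pathlib.Path(tempfile.mkdtemp())
for split in ("train", "test"):
    for label in ("cat", "dog"):
        d = root / split / label
        d.mkdir(parents=True)
        (d / "example.png").write_bytes(b"\x89PNG\r\n\x1a\n")  # placeholder bytes

layout = sorted(p.relative_to(root).as_posix() for p in root.rglob("*.png"))
print(layout)
```

The listing shows one image per split/label pair; real datasets would of course hold many files per class directory.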
# Utilities
## Configure logging
🤗 Datasets strives to be transparent and explicit about how it works, but this can be quite verbose at times. We have included a series of logging methods which allow you to easily adjust the level of verbosity of the entire library. Currently the default verbosity of the library is ... | datasets/docs/source/package_reference/utilities.mdx/0 | {
"file_path": "datasets/docs/source/package_reference/utilities.mdx",
"repo_id": "datasets",
"token_count": 725
} | 99 |
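The utilities excerpt describes library-wide verbosity control. The same idea, sketched with the standard `logging` module — the real helpers live under `datasets.logging` (e.g. `set_verbosity_error()`) and are not called here:

```python
import logging

logger = logging.getLogger("demo-library")
logger.setLevel(logging.WARNING)  # starting point: warnings and above

def set_verbosity(level: int) -> None:
    """Analogue of a library-wide set_verbosity_* helper."""
    logger.setLevel(level)

set_verbosity(logging.ERROR)  # quieter: only errors pass the filter
quiet = logger.isEnabledFor(logging.WARNING)  # False at this level

set_verbosity(logging.DEBUG)  # most verbose setting
chatty = logger.isEnabledFor(logging.INFO)  # True at this level

print(quiet, chatty)
```

The single module-level logger mirrors how a library can expose one knob that adjusts the verbosity of everything it emits.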