Apply for community grant: Personal project (gpu and storage)
GPU + Storage Grant Request for Educational Space: LLM Kitchen
Hi Hugging Face team,
I'm BC, a student and open-source contributor passionate about making AI education more transparent, hands-on, and joyful. My mission is simple: AI for everyone, and for everyone to truly understand how LLMs work, from the inside out.
I'm writing to request a physical GPU grant (A10G, T4, or L4) along with persistent storage to support my personal project, LLM Kitchen.
LLM Kitchen is a curriculum-rich Space that helps beginners train and publish small language models from scratch. It's built around playful metaphors, like Auto-Seasoning™ for hyperparameter tuning and Inference Kitchen for testing outputs, to make model training feel intuitive and creative. The interface guides users through architecture selection, hyperparameter configuration, backend training, inference testing, and optional publishing to the Hub. It's designed to be friendly for students, educators, and curious newcomers alike.
This is a personal project by me (BC), not affiliated with SmilyAI. While I'm incredibly grateful for the ZeroGPU grant that powers another Space I maintain, ZeroGPU is too limited for this kind of educational workflow.
I understand Hugging Face doesn't typically grant GPUs for training, but I want to clarify that LLM Kitchen is not a production trainer. It's an educational sandbox designed to help learners understand how language models work. All training is ephemeral unless published, capped in size, and protected by a 48-hour timeout. The goal isn't optimization; it's transparency, reproducibility, and curriculum-style learning. I've built in safeguards and playful scaffolding to make this a responsible, beginner-friendly teaching tool.
Why ZeroGPU Falls Short for LLM Kitchen
- 48-hour timeout: To conserve resources, I've implemented a 48-hour training limit, but ZeroGPU's shorter execution windows still interrupt curriculum-style workflows.
- No wasted Hub resources: The Space stores nothing on the Hub unless the user explicitly chooses to publish. Without persistent storage, all training is ephemeral.
- Model size constraints: Even with models capped at 4-8 layers and small batch sizes, backend execution is essential for curriculum ramping, norm tracking, and diagnostic probes, none of which are feasible under ZeroGPU's constraints.
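The caps mentioned above could be enforced with a small guard before any training run starts. Here is a minimal sketch; the limit values and names (`MAX_LAYERS`, `validate_config`, and so on) are illustrative assumptions, not the Space's actual code:

```python
import time

# Illustrative limits (assumed values, not the Space's real configuration)
MAX_LAYERS = 8                 # models capped at 4-8 transformer layers
MAX_BATCH_SIZE = 16            # small batches to conserve GPU memory
MAX_WALL_SECONDS = 48 * 3600   # the 48-hour training timeout

def validate_config(n_layers: int, batch_size: int) -> None:
    """Reject configurations that exceed the educational caps."""
    if n_layers > MAX_LAYERS:
        raise ValueError(f"n_layers={n_layers} exceeds cap of {MAX_LAYERS}")
    if batch_size > MAX_BATCH_SIZE:
        raise ValueError(f"batch_size={batch_size} exceeds cap of {MAX_BATCH_SIZE}")

def within_time_budget(start_time: float) -> bool:
    """True while the run is still inside the 48-hour window."""
    return (time.time() - start_time) < MAX_WALL_SECONDS
```

The training loop would call `within_time_budget` between steps and stop cleanly when the window closes, so no run can outlive the timeout.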
Why This Grant Matters
With a physical GPU and storage, I could:
- Run persistent backend training with curriculum-style logging, norm evolution tracking, and latent reasoning probes
- Let users publish trained models directly to the Hugging Face Hub with custom model cards
- Expand to support word problems, multi-step reasoning, and CSV trace logging for early/mid/late inference chains
- Avoid accidental quota drain and timeouts, making the Space stable for classrooms and community demos
- Build reproducible workflows that show how architecture, hyperparameters, and training dynamics affect model behavior
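One of the diagnostics listed above, norm evolution tracking, can be sketched in a few lines. This is a pure-Python illustration of the idea (real training would read norms from a framework's tensors; the helper names here are hypothetical):

```python
import math

def l2_norm(weights):
    """L2 norm of a flat list of weights."""
    return math.sqrt(sum(w * w for w in weights))

def track_norms(history, step, layer_weights):
    """Append per-layer L2 norms for this step to a running log.

    layer_weights: dict mapping layer name -> flat list of weights.
    """
    history.append({
        "step": step,
        "norms": {name: l2_norm(w) for name, w in layer_weights.items()},
    })
    return history

# Toy example: weights scale up each step, so the logged norms grow linearly
history = []
for step in range(3):
    scale = step + 1
    track_norms(history, step, {"layer0": [0.1 * scale, 0.2 * scale]})
```

Logging a record like this every step is what makes the norm-evolution plots and CSV traces reproducible for learners.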
Impact
LLM Kitchen is not just a tool; it's a gateway to understanding AI. With Hugging Face's support, it could:
- Reach hundreds of students in classrooms and online workshops within the first year
- Serve as a hands-on lab for educators introducing AI concepts in an accessible, playful way
- Empower self-learners worldwide to go beyond "using" AI and instead understand its inner workings
- Foster a community of curious builders who share their trained models, insights, and experiments openly
By making the inner workings of LLMs visible and approachable, we can empower every learner, in every corner of the world, to not just use AI but to understand it deeply.
Technical Readiness
LLM Kitchen is already fully scaffolded and operational in a limited form:
- UI & UX: Complete, with guided steps for architecture selection, hyperparameter tuning, backend training, inference testing, and publishing
- Backend logic: Implemented with curriculum ramping, norm tracking, and diagnostic probes ready to run on a physical GPU
- Publishing flow: Integrated with Hugging Face Hub for optional model uploads and custom model card generation
- Resource safeguards: Model size caps, batch size limits, and a 48-hour timeout to ensure responsible usage
- Educational content: Playful metaphors, inline explanations, and visual feedback designed for beginners and classrooms
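The publishing flow generates a model card before any optional upload. A simplified sketch of that generation step is below; the tags, fields, and function name are illustrative assumptions, not the Space's exact card template:

```python
def make_model_card(model_name, n_layers, epochs, dataset="(user-provided)"):
    """Render a minimal Markdown model card for a published student model."""
    return "\n".join([
        "---",
        "tags:",
        "- llm-kitchen",
        "- educational",
        "---",
        f"# {model_name}",
        "",
        f"Small language model trained in LLM Kitchen ({n_layers} layers, {epochs} epochs).",
        f"Dataset: {dataset}",
        "Trained for educational purposes; not intended for production use.",
    ])

card = make_model_card("tiny-chef-4l", n_layers=4, epochs=2)
```

In the real Space this card would accompany the weights in the upload to the Hub, so every published model documents how it was trained.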
With hardware in place, I can immediately enable persistent training, richer diagnostics, and reproducible workflows, with no major development delays required.
Thank you for considering this request, and for all the encouragement and tools you provide to the open-source community. I'm happy to share walkthroughs, logs, or demo links if helpful.
Warmly,
BC
Hi HF Team, a quick clarification to avoid confusion:
I submitted two requests:
❌ First request: framed as a "training platform" → rightly declined (I understand, that's not what grants are for!)
✅ Second request: reframed as an "educational demo" → lightweight, 1-2 epoch fine-tuning for architecture/hyperparameter intuition; explicitly NOT production training.
This second request is my official, revised submission, aligned with Spaces like dreambooth-training and clip-prefix-training.
No need to consider the first; this is the one I'd love your feedback on.
Thank you!
– BC