| id | category | prompt | expected_traits |
|---|---|---|---|
YL001 | agent_planning | Break down how you would build a simple agent that can search notes, summarize, and draft a reply. List components and data flow. | structured steps, practical components, clear data flow |
YL002 | agent_tools | Design a tool calling interface for an agent that uses three tools: search, calculator, and http fetch. Provide a minimal schema and call examples. | clear schema, safe defaults, realistic examples |
YL003 | reproducibility | Given an experiment prompt, show how you would log parameters, model id, and outputs for reproducible runs. Keep it simple. | logging mindset, minimal but complete fields |
YL004 | efficiency | Explain how you would reduce GPU memory use for local inference on a consumer GPU. Include quantization and batching tradeoffs. | accurate tradeoffs, practical suggestions |
YL005 | latency | You have a slow local model. List the top causes of latency and the first measurements you would take. | diagnostic approach, measurable steps |
YL006 | rag_design | Design a minimal RAG pipeline for personal notes. Include chunking, embeddings, retrieval, and a basic evaluation plan. | end to end design, evaluation included |
YL007 | rag_grounding | What are reliable ways to reduce hallucinations in a RAG system? Include what to measure. | grounding methods, metrics, failure modes |
YL008 | prompt_quality | Rewrite this vague prompt into three testable prompts for an engineering assistant. Explain why each is better. | prompt clarity, testability, rationale |
YL009 | safety_policy | An agent can run shell commands. Describe guardrails you would implement to prevent destructive actions. | practical guardrails, least privilege mindset |
YL010 | system_design | Propose an architecture for running local inference plus optional cloud fallback. Focus on reliability and cost control. | clear architecture, failure handling, cost awareness |
YL011 | evaluation | Create a lightweight rubric to score agent responses on correctness, actionability, and uncertainty handling. | rubric with criteria and scoring |
YL012 | uncertainty | Answer a question where you are not sure. Demonstrate how you state uncertainty and what you would verify next. | explicit uncertainty, verification steps |
YL013 | debugging | A model output quality regressed after a prompt change. Outline a debugging approach to isolate the cause. | A B testing, controlled variables, clear steps |
YL014 | documentation | Write a short model card section describing intended use, limitations, and ethical considerations for a home lab agent. | honest limitations, clear intended use |
YL015 | workflow | Design a daily workflow for home lab research that balances building, measuring, and documenting results. | repeatable workflow, measurement emphasis |
# yl-eval-prompts

Evaluation prompts used in YellowLabsStudio home lab experiments.
## Format

`prompts.jsonl` contains one JSON object per line, with the fields:

- `id`
- `category`
- `prompt`
- `expected_traits`
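A minimal loader sketch for this format (the record values below are copied from row YL001 in the table above; the `load_prompts` helper is illustrative, not part of the dataset):

```python
import json

def load_prompts(path="prompts.jsonl"):
    """Read one JSON object per non-empty line."""
    with open(path, encoding="utf-8") as f:
        return [json.loads(line) for line in f if line.strip()]

# Example record, matching YL001 above (prompt text truncated):
record = {
    "id": "YL001",
    "category": "agent_planning",
    "prompt": "Break down how you would build a simple agent...",
    "expected_traits": "structured steps, practical components, clear data flow",
}
```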
## Use cases
- regression testing across models
- prompt stability checks
- agent planning quality checks
- RAG groundedness checks
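One way to wire the regression-testing use case up, as a sketch: the `run_model` and `trait_check` callables below are hypothetical and must be supplied by the caller; this dataset only provides the prompts and their expected traits.

```python
def run_regression(prompts, run_model, trait_check):
    """Run each prompt and flag responses missing expected traits.

    prompts: list of dicts with `id`, `prompt`, `expected_traits` fields.
    run_model: callable mapping a prompt string to a response string
               (hypothetical; e.g. a local model wrapper).
    trait_check: callable (response, trait) -> bool
                 (hypothetical; could be a substring check, a rubric
                 scorer, or an LLM judge).
    """
    failures = []
    for p in prompts:
        response = run_model(p["prompt"])
        missing = [
            t.strip()
            for t in p["expected_traits"].split(",")
            if not trait_check(response, t.strip())
        ]
        if missing:
            failures.append({"id": p["id"], "missing": missing})
    return failures
```

Running this against two model builds and diffing the `failures` lists gives a simple cross-model regression report.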