Mituvinci committed on
Commit 724cc79 · 1 Parent(s): f7cda5c

Clarify purpose: LLM self-examination simulation, not user quiz

Files changed (1): README.md (+7 −5)
README.md CHANGED
@@ -12,17 +12,19 @@ private: true
 
 # Adaptive Study Agent
 
-An autonomous self-directed learning agent built with **LangGraph** and **Claude (Anthropic)**. Upload any PDF or TXT document — the agent ingests it, generates comprehension questions via RAG, answers them using ChromaDB retrieval, self-evaluates each answer, and loops back to re-study weak areas until a mastery threshold is reached.
+An **LLM self-examination simulation** built with **LangGraph** and **Claude (Anthropic)**. The agent reads any document you provide, then runs a fully autonomous study loop — the LLM generates its own comprehension questions, retrieves context from ChromaDB to answer them, and evaluates its own answers. The user does not answer any questions. The purpose is to **probe where the LLM's understanding of the document breaks down** — which topics it answers confidently versus where it scores low and needs to re-read.
 
-Built as a portfolio project demonstrating stateful agentic workflows with LangGraph's conditional branching, RAG pipelines with OpenAI embeddings + ChromaDB, and multi-step reasoning with Claude Sonnet.
+The output is a structured session report revealing the LLM's weak areas within your document. This is useful for identifying conceptually dense or underrepresented sections in any text.
+
+This project can be applied to **any domain** — machine learning papers, medical literature, legal documents, textbooks — anything in PDF or TXT format.
 
 ---
 
-## Motivation and Conceptual Link to MOSAIC
+## Research Connection
 
-MOSAIC (a separate research project) tests whether 12 specialist agents sharing a vector database improves rare-condition classification -- collective knowledge at scale. This project is the single-agent version of the same question: can one agent use retrieval to improve its own understanding iteratively? The feedback loop here is what Phase 1C of MOSAIC implements collectively across 12 agents.
+This is a standalone example project inspired by ongoing research on multi-agent knowledge systems. The core idea — using retrieval-augmented self-evaluation to surface knowledge gaps — is the single-agent version of a feedback mechanism explored at scale in that research. There is no shared infrastructure or data pipeline between the two.
 
-The connection is conceptual and motivational. There is no shared infrastructure, codebase, or data pipeline between this project and MOSAIC.
+---
 
 ---
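
The study loop this README describes (generate questions, answer via retrieval, self-grade, branch on a mastery threshold) can be sketched in plain Python with the LLM and ChromaDB calls stubbed out. Everything below — the function names, the threshold value, the scoring scheme — is an illustrative assumption for this sketch, not the project's actual API.

```python
# Minimal sketch of the re-study loop, with the LLM and ChromaDB
# calls replaced by stubs. All names here are assumptions, not the
# repository's real interfaces.

from dataclasses import dataclass, field

MASTERY_THRESHOLD = 0.8   # assumed value; the real threshold may differ
MAX_ROUNDS = 5            # safety cap so the loop always terminates

@dataclass
class StudyState:
    questions: list = field(default_factory=list)
    scores: dict = field(default_factory=dict)
    rounds: int = 0

def generate_questions(state: StudyState) -> StudyState:
    # Stand-in for the LLM generating comprehension questions via RAG.
    state.questions = ["q1", "q2", "q3"]
    return state

def answer_and_grade(state: StudyState) -> StudyState:
    # Stand-in for ChromaDB retrieval + self-evaluation of each answer.
    # Scores improve each round here, just to show the loop converging.
    for q in state.questions:
        state.scores[q] = min(1.0, 0.4 + 0.2 * state.rounds)
    state.rounds += 1
    return state

def mastery_reached(state: StudyState) -> bool:
    # The conditional branch: keep re-studying weak areas until the
    # average score crosses the threshold (or the round cap is hit).
    avg = sum(state.scores.values()) / len(state.scores)
    return avg >= MASTERY_THRESHOLD or state.rounds >= MAX_ROUNDS

def run_session() -> StudyState:
    state = generate_questions(StudyState())
    while True:
        state = answer_and_grade(state)
        if mastery_reached(state):
            return state

session = run_session()
```

In the real agent, `answer_and_grade` would call Claude with chunks retrieved from ChromaDB, and the `mastery_reached` branch would be a LangGraph conditional edge rather than a `while` loop.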