MohitGupta41 committed
Commit 9bcb07b · 1 Parent(s): 0b6a8eb

Increased Context Window and improved prompt

Files changed (1):
1. Data/profile_data.md +3 -3
Data/profile_data.md CHANGED
@@ -2463,7 +2463,7 @@ A **voice-first personal AI** that lets users talk to “Mohit’s AI voice twin

### Project Overview

- **M.A.R.S.H.A.L.** delivers a polished, browser-based assistant that speaks and listens in real time. Users ask questions via mic (or text), the frontend relays queries to a FastAPI backend, and the backend generates **strictly grounded** answers from a curated profile markdown. The UI provides **provider/model controls**, **API key inputs**, **TTS tuning**, and a floating card of **links extracted** from the most recent answer. Designed for demos, portfolios, and hands-free Q\&A, it keeps responses concise, first-person, and date-aware.
+ **MARSHAL** delivers a polished, browser-based assistant that speaks and listens in real time. Users ask questions via mic (or text), the frontend relays queries to a FastAPI backend, and the backend generates **strictly grounded** answers from a curated profile markdown. The UI provides **provider/model controls**, **API key inputs**, **TTS tuning**, and a floating card of **links extracted** from the most recent answer. Designed for demos, portfolios, and hands-free Q\&A, it keeps responses concise, first-person, and date-aware.

---

@@ -2525,7 +2525,7 @@ A **voice-first personal AI** that lets users talk to “Mohit’s AI voice twin

### Results & Impact

- M.A.R.S.H.A.L. gives portfolio visitors and recruiters a **hands-free, humanized** way to explore Mohit’s work. The system proves out a **production-lean agentic pattern**—voice capture → grounded LLM → friendly speech output—while keeping security conscious (no hardcoded keys) and deployment-ready (static SPA + serverless API).
+ MARSHAL gives portfolio visitors and recruiters a **hands-free, humanized** way to explore Mohit’s work. The system proves out a **production-lean agentic pattern**—voice capture → grounded LLM → friendly speech output—while keeping security conscious (no hardcoded keys) and deployment-ready (static SPA + serverless API).

---

@@ -2543,7 +2543,7 @@ M.A.R.S.H.A.L. gives portfolio visitors and recruiters a **hands-free, humanized

### Project's Detailed Readme
#### M.A.R.S.H.A.L (Mohit’s AgenticAI Representation System for Humanized Assistance and Legacy)
- ##### M.A.R.S.H.A.L. Frontend Readme
+ ##### MARSHAL Frontend Readme

A sleek, voice-driven React frontend that connects to your **Voice Twin Backend** (FastAPI) and lets users talk to “Mohit’s AI voice twin”. It supports **speech-to-text**, **text-to-speech**, and **link extraction**, and it can talk to either **Gemini** or **Hugging Face** models via your backend.
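
The diff above describes answers that are **strictly grounded** in a curated profile markdown, concise, first-person, and date-aware. As a rough sketch of that idea (not the project's actual backend code — the function name and prompt wording here are hypothetical), the FastAPI side might assemble such a prompt like this:

```python
from datetime import date

def build_grounded_prompt(profile_md: str, question: str) -> str:
    """Constrain the model to answer only from the profile text,
    in a concise, first-person, date-aware voice. Illustrative only."""
    return (
        f"Today's date is {date.today().isoformat()}.\n"
        "You are Mohit's AI voice twin. Answer in the first person, "
        "concisely, and ONLY using the profile below. If the answer "
        "is not in the profile, say you don't know.\n\n"
        "--- PROFILE ---\n"
        f"{profile_md}\n"
        "--- QUESTION ---\n"
        f"{question}"
    )

prompt = build_grounded_prompt("## Skills\n- Python", "What are your skills?")
```

The key design point is that the entire profile travels with every request, which is why the commit message mentions increasing the context window: a larger window lets more of `profile_data.md` fit into the prompt.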
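
The UI's floating card of **links extracted** from the most recent answer can be sketched as a small post-processing step. This is an illustrative helper, not the project's real extractor:

```python
import re

# Match http(s) URLs, stopping at whitespace, closing brackets, or quotes.
URL_RE = re.compile(r"https?://[^\s)\]>\"']+")

def extract_links(answer: str) -> list[str]:
    """Return unique URLs from an answer, in order of first appearance."""
    seen, links = set(), []
    for url in URL_RE.findall(answer):
        url = url.rstrip(".,;")  # trim trailing sentence punctuation
        if url not in seen:
            seen.add(url)
            links.append(url)
    return links

links = extract_links("See https://github.com/example and (https://example.com).")
```

The frontend would re-run this on each new answer and re-render the links card from the result.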
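
The readme notes the assistant can talk to either **Gemini** or **Hugging Face** models via the backend, selected through the UI's provider controls. One common shape for that switch is a dispatch table; the provider keys and stand-in calls below are assumptions for illustration, not the project's actual API client code:

```python
from typing import Callable

def call_gemini(prompt: str, api_key: str) -> str:
    # Stand-in for a real Gemini API call.
    return f"[gemini] {prompt[:20]}"

def call_hf(prompt: str, api_key: str) -> str:
    # Stand-in for a real Hugging Face Inference API call.
    return f"[hf] {prompt[:20]}"

PROVIDERS: dict[str, Callable[[str, str], str]] = {
    "gemini": call_gemini,
    "huggingface": call_hf,
}

def answer(provider: str, prompt: str, api_key: str) -> str:
    """Route a query to the selected provider; reject unknown names."""
    try:
        return PROVIDERS[provider](prompt, api_key)
    except KeyError:
        raise ValueError(f"unknown provider: {provider}") from None

out = answer("gemini", "What projects has Mohit built?", "demo-key")
```

Because the API key arrives as a parameter (from the UI's key inputs), nothing is hardcoded server-side, which matches the "no hardcoded keys" claim in the Results & Impact section.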