Feat: Enable native parallelize=True flag for efficient batch persona generation ba1c332 verified AUXteam committed 5 days ago
Fix: ensure sampled_characteristics is initialized to prevent UnboundLocalError during LLM retries ba7285a verified AUXteam committed 5 days ago
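A sketch of what this class of fix looks like: the variable is bound before the retry loop, so an exception on the first attempt cannot leave `sampled_characteristics` unbound when it is referenced afterwards. Function and argument names here are illustrative, not the actual code from the commit.

```python
import random

def generate_persona(spec, max_retries=3):
    # Initialized up front: prevents UnboundLocalError if every
    # attempt inside the loop raises before assignment.
    sampled_characteristics = {}
    for attempt in range(max_retries):
        try:
            # stand-in for the real characteristic sampling + LLM call
            sampled_characteristics = {"age": random.randint(18, 80), "spec": spec}
            break
        except Exception:
            continue  # retry; on exhaustion we fall through with the default
    return sampled_characteristics
```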
Feat: Implement dynamic validation and pipeline logic from IPYNB 9c10f0a verified AUXteam committed 5 days ago
Feat: Adapt tinytroupe_manager to use new persona_pipeline logic 4ba3568 verified AUXteam committed 5 days ago
Fix: Fallback to sequential persona generation with delay to prevent Google 429 Too Many Requests errors 43cee92 verified AUXteam committed 6 days ago
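The fallback above can be sketched as a sequential loop with a sleep between calls, so the endpoint's rate limiter is not tripped. `generate_one` and the delay value are placeholders for the real per-persona LLM call and whatever interval the commit actually uses.

```python
import time

def generate_batch_sequential(specs, generate_one, delay_s=2.0):
    """Rate-limit-friendly fallback: one persona at a time, with a
    pause between requests to avoid HTTP 429 from the provider."""
    results = []
    for i, spec in enumerate(specs):
        if i > 0:
            time.sleep(delay_s)  # spacing between consecutive requests
        results.append(generate_one(spec))
    return results
```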
Fix: remove trailing slash from google genai proxy base_url e50e09e verified AUXteam committed 9 days ago
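The trailing-slash fix is likely a one-line normalization: the client library joins request paths onto `base_url` itself, and a trailing slash produces double-slash URLs that some proxies reject. A minimal sketch:

```python
def normalize_base_url(url: str) -> str:
    # "https://host/v1/" + "/chat/completions" would yield "//" in the path;
    # stripping the trailing slash avoids that.
    return url.rstrip("/")
```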
Fix: brutally enforce base_url for Google GenAI in all OpenAI wrappers 9164da9 verified AUXteam committed 9 days ago
Fix: allow token counting logic to intercept gemini model gracefully d04ec3a verified AUXteam committed 9 days ago
Feat: Set Google Gemini GenerativeLanguage API endpoint as default d8a0541 verified AUXteam committed 9 days ago
Fix: Strip empty `stop` parameter for Google Gemini fallback c8a667b verified AUXteam committed 9 days ago
Fix: Strip unsupported payload parameters for Google Gemini OpenAI-compatible proxy 20d0ebf verified AUXteam committed 9 days ago
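These two payload fixes can be sketched together: drop parameters the Gemini OpenAI-compatible proxy rejects, and remove `stop` when it is empty or `None`. The set of stripped keys below is an assumption for illustration, not the commit's exact list.

```python
# Hypothetical deny-list; the real commit may strip different keys.
GEMINI_UNSUPPORTED = {"logit_bias", "presence_penalty", "frequency_penalty"}

def sanitize_payload(payload: dict) -> dict:
    """Remove request parameters the Gemini proxy rejects, plus
    an empty `stop` sequence (an empty list also triggers errors)."""
    clean = {k: v for k, v in payload.items() if k not in GEMINI_UNSUPPORTED}
    if not clean.get("stop"):       # [] or None: remove it entirely
        clean.pop("stop", None)
    return clean
```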
Feat: Fallback to Google gemini-3-flash-preview on primary LLM proxy error d0fe591 verified AUXteam committed 9 days ago
Fix: Fallback handling for empty JSON extraction in backend manager ddb005f verified AUXteam committed 10 days ago
Fix: Add missing dependency pypandoc for TinyTroupe ResultsExtractor 3ad83e6 verified AUXteam committed 10 days ago
Refactor: Use TinyPersonFactory and ResultsExtractor for production LLM stability 6183e7b verified AUXteam committed 10 days ago
Fix: combine system messages to satisfy strict LLM chatml parser 8e7cffc verified AUXteam committed 11 days ago
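Strict ChatML parsers accept at most one system turn, typically first. A plausible sketch of the merge (the joining separator is an assumption):

```python
def combine_system_messages(messages):
    """Merge all system messages into a single leading system message,
    preserving the order of the remaining turns."""
    system_parts = [m["content"] for m in messages if m["role"] == "system"]
    rest = [m for m in messages if m["role"] != "system"]
    if system_parts:
        merged = {"role": "system", "content": "\n\n".join(system_parts)}
        return [merged] + rest
    return rest
```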
Fix: allow token counting logic to intercept alias models gracefully 10e7104 verified AUXteam committed 11 days ago
Fix: enforce system message to be the first in openai_utils requests fc4e65c verified AUXteam committed 11 days ago
Fix: Add local file persistence to JobRegistry to prevent 404s after space restart b92124b verified AUXteam committed 11 days ago
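The persistence fix presumably writes job state to disk on every update and reloads it on startup, so status lookups survive a Space redeploy instead of returning 404. The path and schema below are illustrative:

```python
import json
import os

class JobRegistry:
    """File-backed job state (sketch): jobs written through to a JSON
    file so they survive a process restart."""

    def __init__(self, path):
        self.path = path
        self.jobs = {}
        if os.path.exists(path):          # reload state left by a previous run
            with open(path) as f:
                self.jobs = json.load(f)

    def set(self, job_id, status):
        self.jobs[job_id] = status
        with open(self.path, "w") as f:   # persist on every update
            json.dump(self.jobs, f)
```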
Fix: revert MODEL to gpt-4o to resolve alias-large unavailability d833191 verified AUXteam committed 11 days ago
Fix: Replace deprecated alias-fast with alias-large in fallback cascade a8301f1 verified AUXteam committed 11 days ago
Fix: map base_url explicitly to Helmholtz endpoint in OpenAI client constructor aa7bdaa verified AUXteam committed 12 days ago
Fix: strictly map OPENAI_API_KEY to environment in config 982c368 verified AUXteam committed 12 days ago
Fix: Explicitly provide dummy token to OpenAI client constructor if environment vars not mapped early enough b93426f verified AUXteam committed 12 days ago
Fix: map BLABLADOR_API_KEY to OPENAI_API_KEY in environment for TinyTroupe client validation 46026e0 verified AUXteam committed 12 days ago
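TinyTroupe's OpenAI client validates `OPENAI_API_KEY` from the environment, so the provider-specific key has to be copied over before the client is constructed. The environment variable names match the commit message; the function itself is a sketch:

```python
import os

def map_provider_key():
    # Copy the provider key into the name TinyTroupe's client checks,
    # without clobbering an explicitly set OPENAI_API_KEY.
    if "OPENAI_API_KEY" not in os.environ and "BLABLADOR_API_KEY" in os.environ:
        os.environ["OPENAI_API_KEY"] = os.environ["BLABLADOR_API_KEY"]
```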
Fix: Append uuid to mock persona names to prevent TinyTroupe naming collision d9ec5e4 verified AUXteam committed 12 days ago
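TinyTroupe raises if two agents share a name within one simulation; a short uuid suffix keeps repeated mock names distinct. A minimal sketch (suffix length is an assumption):

```python
import uuid

def unique_persona_name(base_name: str) -> str:
    # Short random suffix: avoids agent-name collisions when the
    # same mock persona is instantiated more than once.
    return f"{base_name}-{uuid.uuid4().hex[:8]}"
```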
Feat: Enable real LLM querying inside TinyTroupe simulation loops 6a2a662 verified AUXteam committed 13 days ago
Fix SyntaxWarning for invalid escape sequence in llm.py 9bc10b9 verified AUXteam committed 13 days ago
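This warning comes from a plain string literal containing a regex escape such as `"\s+"`, which recent Python versions flag as an invalid escape sequence; the standard fix is a raw string. The specific pattern below is illustrative:

```python
import re

# Raw string: the backslash reaches the regex engine verbatim,
# so no SyntaxWarning is emitted at compile time.
WHITESPACE = re.compile(r"\s+")

def collapse_ws(text: str) -> str:
    return WHITESPACE.sub(" ", text).strip()
```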
Fix: bypass strict git clone check to prevent startup log errors b7a0f4f verified AUXteam committed 13 days ago
Fix: Add missing dependencies (chevron, textdistance, llama-index) to requirements.txt 6247416 verified AUXteam committed 13 days ago