Add context-length error detection and a code-reduction strategy in prompt construction for LLM inference. Enhance logging and return values to indicate code-truncation status.
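The change described above could look roughly like the following sketch. All names here (`is_context_length_error`, `build_prompt`, the error-message patterns, and the 8000-character budget) are hypothetical illustrations, not the actual implementation; real providers phrase context-length errors differently, so the pattern list would need to match the provider in use.

```python
import re

# Hypothetical patterns; adjust to the actual provider's error wording.
CONTEXT_LENGTH_PATTERNS = [
    r"context[_ ]length[_ ]exceeded",
    r"maximum context length",
    r"too many tokens",
]


def is_context_length_error(message: str) -> bool:
    """Detect a context-length failure from a provider error message."""
    lowered = message.lower()
    return any(re.search(p, lowered) for p in CONTEXT_LENGTH_PATTERNS)


def build_prompt(instructions: str, code: str, max_code_chars: int = 8000):
    """Build a prompt, reducing the code section if it exceeds the budget.

    Returns (prompt, truncated) so the caller can log truncation status
    and surface it in the return value.
    """
    truncated = len(code) > max_code_chars
    if truncated:
        # Keep the head and tail of the code; drop the middle, which is
        # usually the least relevant part for the model.
        keep = max_code_chars // 2
        code = code[:keep] + "\n# ... [code truncated] ...\n" + code[-keep:]
    return f"{instructions}\n\n```\n{code}\n```", truncated
```

On a context-length error, the caller could retry with a smaller `max_code_chars` and log that the code was reduced.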
Add functions for redacting secrets in error messages and safely storing model configuration in session state to enhance security and prevent sensitive information exposure.
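A minimal sketch of the redaction and session-state changes, under stated assumptions: the secret patterns, the `SENSITIVE_KEYS` set, and the function names are illustrative only, and `session_state` is treated as a plain dict (as in Streamlit-style apps) rather than any specific framework object.

```python
import re

# Hypothetical patterns; extend to cover the key formats actually in use.
SECRET_PATTERNS = [
    re.compile(r"sk-[A-Za-z0-9]{20,}"),           # OpenAI-style API keys
    re.compile(r"(?i)(api[_-]?key\s*[=:]\s*)\S+"),  # key=value leaks
]

# Config fields that must never be persisted in session state.
SENSITIVE_KEYS = {"api_key", "token", "secret"}


def redact_secrets(message: str) -> str:
    """Replace likely secrets in an error message before logging or display."""
    for pattern in SECRET_PATTERNS:
        # Keep the key name (capture group) when present; redact the value.
        message = pattern.sub(
            lambda m: (m.group(1) if m.lastindex else "") + "[REDACTED]",
            message,
        )
    return message


def store_model_config(session_state: dict, config: dict) -> None:
    """Store model configuration in session state with secret fields stripped."""
    session_state["model_config"] = {
        k: v for k, v in config.items() if k.lower() not in SENSITIVE_KEYS
    }
```

Error handlers would pass any exception text through `redact_secrets` before logging, so a raised provider error that echoes the request headers cannot leak a key.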