AGI / cpp / request_parsing.h
Dmitry Beresnev
Refactor the C++ LLM manager into modular components, move Python modules under python/, and keep the current control-plane behavior intact. The C++ server now has clearer separation for config, model lifecycle, runtime services, request parsing, HTTP helpers, and server routing, while the Docker build/runtime paths were updated to compile multiple C++ files and load Python code from the new package folder.
332826f
#pragma once

#include <optional>
#include <string>

#include "llm_manager_types.h"

// Estimates the token count of a chat completion payload against the
// configured limits. Returns std::nullopt and sets `error` on failure.
std::optional<TokenEstimate> estimate_chat_tokens(
    const json &payload,
    const LimitsConfig &limits,
    std::string &error);

// Returns true if the request payload asks for a streamed response.
bool request_stream_enabled(const json &payload);