---
title: TestOfLocalGradioRAG_byCodeLlama70B
emoji: 📚
colorFrom: green
colorTo: purple
sdk: gradio
sdk_version: 4.16.0
app_file: app.py
pinned: false
license: other
---

# TestOfLocalGradioRAG_byCodeLlama70B

This is part of an ongoing research project in which code-assist LLMs are orchestrated by one or more human coders to construct fundamentals and improvements in LLM-related architecture.

This Gradio app was initially drafted using CodeLlama-70B. It retrieves specific data chunks from tabular CSV data using RAG embeddings, splices those chunks into the user prompt and the master prompt, feeds the combined prompt to a Mistral model pulled from Hugging Face and run locally, and returns the response to the user via the Gradio GUI.

To use this app, run the entry point declared in `app_file` above:

```
python app.py
```

Check out the Gradio configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
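The retrieval-and-splice step described above (embed the CSV rows, score them against the user's query, and insert the top matches between the master prompt and the user prompt) can be sketched as follows. This is a minimal illustration, not the app's actual code: the bag-of-words embedding stands in for a real embedding model, and the CSV columns, prompt layout, and `top_k` value are assumptions.

```python
import csv
import io
import math
from collections import Counter

def embed(text):
    # Toy bag-of-words "embedding"; the real app would call an embedding model.
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse term-count vectors.
    dot = sum(a[t] * b[t] for t in a if t in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(csv_text, query, top_k=2):
    # Flatten each CSV row into one text chunk, score every chunk
    # against the query, and return the top_k best matches.
    rows = list(csv.DictReader(io.StringIO(csv_text)))
    chunks = [" ".join(f"{k}: {v}" for k, v in row.items()) for row in rows]
    q = embed(query)
    return sorted(chunks, key=lambda c: cosine(embed(c), q), reverse=True)[:top_k]

def build_prompt(master_prompt, user_prompt, chunks):
    # Splice the retrieved chunks between the master and user prompts,
    # producing the single string handed to the local Mistral model.
    context = "\n".join(chunks)
    return f"{master_prompt}\n\nContext:\n{context}\n\nUser: {user_prompt}"
```

In the real app, the string returned by `build_prompt` would be passed to the locally loaded Mistral model, and the model's reply would be shown in the Gradio GUI.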