---
title: TestOfLocalGradioRAG_byCodeLlama70B
emoji: 📚
colorFrom: green
colorTo: purple
sdk: gradio
sdk_version: 4.16.0
app_file: app.py
pinned: false
license: other
---

# Part of an ongoing research project in which code-assist LLMs are orchestrated by one or more human coders to build fundamentals and improvements in LLM-related architecture.

This is a Gradio app, initially drafted using CodeLlama-70B. It retrieves specific data chunks from tabular CSV data using RAG embeddings, combines them with the user prompt and the master prompt, feeds the result to a Mistral model downloaded from Hugging Face and run locally, and returns the response to the user via the Gradio GUI.
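The retrieve-and-augment step described above can be sketched as follows. This is a minimal illustration, not the app's actual code: it uses a toy bag-of-words similarity in place of a real embedding model, and the function names (`embed`, `retrieve`, `build_prompt`) and the sample CSV are hypothetical.

```python
import csv
import io
import math
from collections import Counter

def embed(text):
    # Toy bag-of-words "embedding"; a real app would call a
    # sentence-embedding model here instead.
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse term-count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(chunks, query, k=2):
    # Rank chunks by similarity to the query and keep the top k.
    q = embed(query)
    ranked = sorted(chunks, key=lambda c: cosine(embed(c), q), reverse=True)
    return ranked[:k]

def build_prompt(master, chunks, user):
    # Splice retrieved context between the master prompt and the user prompt,
    # producing the final string to send to the local model.
    context = "\n".join(chunks)
    return f"{master}\n\nContext:\n{context}\n\nUser: {user}"

# Hypothetical tabular data, flattened row-by-row into text chunks.
CSV = "topic,fact\npython,Python uses indentation\nrust,Rust has ownership\n"
chunks = [" ".join(row.values()) for row in csv.DictReader(io.StringIO(CSV))]

top = retrieve(chunks, "tell me about python indentation", k=1)
prompt = build_prompt("Answer using the context.", top,
                      "tell me about python indentation")
```

In the real app, `prompt` would then be passed to the locally loaded Mistral model, and the model's reply displayed in the Gradio interface.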

To use this app, simply run the following command:


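The command itself was omitted above. Assuming the standard entry point declared in the configuration (`app_file: app.py`, Gradio SDK), launching the app locally is typically:

```shell
pip install gradio        # plus the app's other dependencies
python app.py             # serves the Gradio GUI locally
```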
Check out the Gradio configuration reference at https://huggingface.co/docs/hub/spaces-config-reference