BishanSingh246 committed
Commit 2ddc3f6 · 1 Parent(s): aa59bce

Add application file

Files changed (1)
  1. README.md +12 -41
README.md CHANGED
@@ -1,41 +1,12 @@
- # auto_evaluator
-
- ### Aim
- In this lesson we create an app that takes some data, generates questions and answers from that data, then tests those answers with an LLM and grades them.
-
- ### Context
-
- Document question-answering is a popular LLM use case. LangChain makes it easy to assemble LLM components (e.g., models and retrievers) into chains that support question-answering: input documents are split into chunks and stored in a retriever, relevant chunks are retrieved given a user question and passed to an LLM for synthesis into an answer.
-
- ### Challenge
-
- The quality of QA systems can vary considerably; for example, we have seen cases of hallucination and poor answer quality due to specific parameter settings. But it is not always obvious how to (1) evaluate answer quality in a systematic way and (2) use this evaluation to guide improved QA chain settings (e.g., chunk size) or components (e.g., model or retriever choice).
-
- ### App overview
-
- This app aims to address the above limitations. Recent work from Anthropic has used model-written evaluation sets. OpenAI and others have shown that model-graded evaluation is an effective way to evaluate models. This app combines both of these ideas into a single workspace, auto-generating a QA test set and auto-grading the result of the specified QA chain.
-
- ![image](https://github.com/DeepakJaiz/auto_evaluator/assets/120568685/d9a2aaac-e8c1-4836-b6d8-7121ad797ccf)
-
- ### Reference
- https://github.com/langchain-ai/auto-evaluator/blob/main/README.md
-
- ### Prerequisites
- `Python installed on your system; you can follow this guide (https://www.python.org)`<br/>
- `An OpenAI API key; to get one, follow this link (https://platform.openai.com/account/api-keys)`<br/>
- `A Hugging Face account (https://huggingface.co/login)`<br/>
-
- ### Modules required
- All the requirements are defined in the requirements.txt file.
- ### How to run
- 1. Log in to Hugging Face.
- 2. Go to the Spaces tab and click "Create new Space".
- 3. Enter a name for your Space, select "Streamlit" as the Space SDK, and make your Space public or private as you prefer.
- 4. Click "Create Space".
- 5. Upload all the above files to your Space.
- 6. View your application by clicking the "App" button.
-
- ### How to use the app
- 1. In the left sidebar, enter your OpenAI API key.
- 2. Upload your source data in PDF or text format.
- 3. Upload your test question-answer set in JSON format just below it.
 
+ ---
+ title: Test
+ emoji: 🐢
+ colorFrom: gray
+ colorTo: yellow
+ sdk: streamlit
+ sdk_version: 1.21.0
+ app_file: app.py
+ pinned: false
+ ---
+
+ Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
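The added hunk is standard Hugging Face Spaces YAML frontmatter, and the removed README described creating the Space by hand through the web UI. As a minimal sketch of the same workflow done programmatically, assuming `huggingface_hub` is installed and an `HF_TOKEN` is configured (the `your-username/test-space` repo id below is hypothetical):

```python
"""Sketch: create a Streamlit Space matching the frontmatter above."""

# Spaces configuration header, copied from the diff's added hunk.
FRONTMATTER = """---
title: Test
emoji: 🐢
colorFrom: gray
colorTo: yellow
sdk: streamlit
sdk_version: 1.21.0
app_file: app.py
pinned: false
---
"""


def readme_with_header(body: str) -> str:
    """Prepend the Spaces config header to a README body."""
    return FRONTMATTER + "\n" + body


def create_space(repo_id: str = "your-username/test-space") -> None:
    """Create the Space and push a README with the config header.

    Requires network access and an authenticated huggingface_hub
    session; the repo id is a placeholder, not from the source.
    """
    from huggingface_hub import HfApi  # imported lazily; needs huggingface_hub installed

    api = HfApi()
    api.create_repo(repo_id=repo_id, repo_type="space", space_sdk="streamlit")
    api.upload_file(
        path_or_fileobj=readme_with_header("Auto-evaluator demo").encode(),
        path_in_repo="README.md",
        repo_id=repo_id,
        repo_type="space",
    )
```

Note that when a README with this header is pushed, the Hub reads `sdk` and `app_file` from the frontmatter, which is why the commit replaces the prose README with the config block.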