SujanMidatani committed on
Commit
4bc1c81
·
1 Parent(s): dd50846

create app.py

Files changed (1)
  1. app.py +55 -0
app.py ADDED
@@ -0,0 +1,55 @@
+ import gradio as gr
+ import guidance
+ from dotenv import load_dotenv
+
+ # Load the OpenAI API key from a local .env file.
+ load_dotenv()
+
+ def indiQuesGrade(question, answer, role, exp):
+     # Guidance's legacy LLM wrapper around an OpenAI chat model.
+     evaluatorModel = guidance.llms.OpenAI('gpt-3.5-turbo')
+     evaluationSys = guidance('''
+     {{#system~}}
+     You are an expert system for evaluating the answer provided by an interviewee in an interview.
+     Based on the question and answer, together with the role applied for and the years of experience, you grade the answer on appropriate grading measures.
+     You are very skilled at grading answers accurately and justifiably.
+     {{~/system}}
+     {{#user~}}
+     You are provided with the interview question, the job role the interviewee applied to, and the years of experience they have in it.
+     Generate suitable grading measures for the question and grade the answer according to them.
+     The question asked is as follows:
+
+     {{question}}
+
+     The role applied to is as follows:
+
+     {{role}}
+
+     The years of experience is as follows:
+
+     {{experience}}
+
+     Now generate the grading measures according to the above question, role, and experience.
+     Do not output the measures yet.
+     {{~/user}}
+     {{#assistant~}}
+     {{gen 'grading_measures' temperature=0.7 max_tokens=150}}
+     {{~/assistant}}
+     {{#user~}}
+     Here's the answer provided by the interviewee:
+
+     {{answer}}
+
+     Now evaluate the answer against the generated grading measures.
+     Output the evaluation in JSON format with each grading measure as a key and a dictionary of score and reason as its value.
+     The score is a numerical grade of the answer against that measure, and the reason explains why that grade was given.
+     {{~/user}}
+     {{#assistant~}}
+     {{gen 'evaluation' temperature=0.5 max_tokens=1500}}
+     {{~/assistant}}
+     ''', llm=evaluatorModel)
+     # Execute the program with the inputs; returning the bare program object
+     # would never run the model, so extract the generated evaluation instead.
+     result = evaluationSys(question=question, answer=answer, role=role, experience=exp)
+     return result['evaluation']
+
+ k = gr.Interface(
+     fn=indiQuesGrade,
+     inputs=['text', 'text', 'text', 'text'],
+     outputs=['json'],
+ )
+ k.launch()
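The prompt asks the model to emit its evaluation as JSON keyed by grading measure, each value a dictionary with `score` and `reason`. A minimal sketch of consuming that payload downstream — the `parse_evaluation` helper and the sample payload are hypothetical, not part of this commit:

```python
import json

def parse_evaluation(raw: str) -> dict:
    """Turn the model's JSON evaluation into {measure: (score, reason)}."""
    data = json.loads(raw)
    return {measure: (v["score"], v["reason"]) for measure, v in data.items()}

# Hypothetical model output matching the format the prompt requests.
sample = '{"Clarity": {"score": 8, "reason": "Well structured answer."}}'
parsed = parse_evaluation(sample)
```

A guard like this is useful because the prompt only requests JSON; a real deployment would also want to handle a `json.JSONDecodeError` when the model strays from the format.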