Upload app.py with huggingface_hub
app.py
CHANGED
@@ -128,6 +128,7 @@ def expand_query(state: AgentState) -> AgentState:
     Convert the user query into something that a nutritionist would understand. Use domain related words.
     Perform query expansion on the question received. If there are multiple common ways of phrasing a user question \
     or common synonyms for key words in the question, make sure to return multiple versions of the query with the different phrasings.
+    In case you have a valid and not empty feedback, you should use that feedback to improve the expanded query.
 
     If the query has multiple parts, split them into separate simpler queries. This is the only case where you can generate more than 1 query.
 
@@ -245,8 +246,8 @@ def score_groundedness(state: Dict) -> Dict:
     """
     print("---------check_groundedness---------")
     system_message = '''
-
-    You need to assess whether the provided response is grounded in the
+    You are an evaluator, and you are tasked with rating AI-generated responses to questions posed by users.
+    You need to assess whether the provided response is grounded in the provided context.
 
     Please act as an impartial judge and evaluate the quality of the provided response, which attempts to answer a user question based on a provided context.
     You will be presented with a context used by the AI model to generate the response and an AI-generated response to the question.
@@ -294,10 +295,10 @@ def check_precision(state: Dict) -> Dict:
     """
     print("---------check_precision---------")
     system_message = '''
-    Given the user query and response, verify if the AI-generated response precisely addresses the user’s query.
-    In the input, the user query will begin with Query:, while the AI generated response will begin with Response:.
+    Given the user query and the AI-generated response, verify if the AI-generated response precisely addresses the user’s query.
+    In the input, the user query will begin with Query: and ends before the token Response:, while the AI generated response will begin with Response:.
 
-    Give verdict as 1 if
+    Give verdict as 1 if the AI-generated response addresses the user’s query and 0 if not.
 
     DO NOT output anything else before or after the veredict integer value of 1 or 0.
     '''
@@ -332,7 +333,7 @@ def refine_response(state: Dict) -> Dict:
 
     system_message = '''
     You are an expert in reviewing AI-generated responses, and your task is to provide constructive feedback on the following response to help improve its accuracy, clarity, and completeness.
-    In the input, the user query will begin with Query:, while the AI-generated response will begin with Response:.
+    In the input, the user query will begin with Query: and ends before the token Response:, while the AI-generated response will begin with Response:.
     You need to identify potential gaps, unsupported claims, ambiguous language, or missing details in the response.
 
     Do not rewrite the response; only suggest improvements in a concise, bullet-point format when possible, to enhance accuracy and completeness.
@@ -515,7 +516,7 @@ def filter_input_with_llama_guard(user_input, model="meta-llama/llama-guard-4-12
 
     Parameters:
     - user_input: The input provided by the user.
-    - model: The Llama Guard model to be used for filtering (default is "llama-guard-
+    - model: The Llama Guard model to be used for filtering (default is "meta-llama/llama-guard-4-12b").
 
     Returns:
     - The filtered and safe input.
@@ -693,7 +694,7 @@ def nutrition_disorder_streamlit():
     st.write("You might try asking questions like:")
     st.write("- In what ways can increasing dietary fiber help alleviate symptoms of functional bowel disorders?")
    st.write("- What dietary changes can help reduce the risk of diabetes mellitus in those suffering from obesity?")
-    st.write("-
+    st.write("- What are some effective dietary changes to help lower high cholesterol levels?")
     st.write("Type 'exit' to end the conversation.")
 
     # Initialize session state for chat history and user_id if they don't exist
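The updated `check_precision` and `score_groundedness` prompts instruct the judge model to output a bare integer verdict of 1 or 0 and nothing else. A minimal sketch of how such a verdict can be parsed defensively on the caller side (the function name and the fallback scan are illustrative, not taken from app.py):

```python
def parse_verdict(raw: str) -> int:
    """Parse a judge model's output into a binary verdict.

    The prompt asks the model to output only the integer 1 or 0, but
    models occasionally add whitespace or stray text, so this scans for
    the first 0/1 digit instead of calling int() on the whole string.
    """
    for ch in raw.strip():
        if ch in ("0", "1"):
            return int(ch)
    raise ValueError(f"no binary verdict found in judge output: {raw!r}")
```

Treating a missing verdict as an error rather than defaulting to 0 keeps prompt regressions visible instead of silently failing every precision check.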
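The `expand_query` change makes the expansion prompt feedback-aware: feedback from a previous loop iteration is used only when it is "valid and not empty". One way that conditional wiring could look (the helper name and prompt assembly are hypothetical; app.py's actual state handling may differ):

```python
def build_expansion_prompt(question: str, feedback: str = "") -> str:
    # Hypothetical helper: append prior-iteration feedback only when it
    # is non-empty, matching the prompt's "valid and not empty feedback"
    # clause, so a fresh query is expanded without a feedback section.
    prompt = (
        "Convert the user query into something that a nutritionist would "
        "understand. Use domain related words.\n"
        f"Question: {question}\n"
    )
    if feedback.strip():
        prompt += f"Feedback from previous attempt: {feedback}\n"
    return prompt
```

Gating on `feedback.strip()` rather than `feedback is not None` also filters out whitespace-only feedback that would otherwise add an empty section to the prompt.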