Update app.py
app.py
CHANGED
@@ -202,7 +202,7 @@ Create a template specification based on the following instructions:
 INSTRUCTIONS:
 {instructions}
 
-{"DOCUMENT CONTENT (EXCERPT):" + document_content
+{"DOCUMENT CONTENT (EXCERPT):" + document_content + "..." if document_content else "NO DOCUMENTS PROVIDED"}
 
 Generate a JSON template specification with the following structure:
 {{
@@ -243,7 +243,7 @@ If document content was provided, design the template to effectively use that in
     response = client.chat.completions.create(
         model=st.session_state.model,
         messages=[{"role": "user", "content": prompt}],
-
+        max_completion_tokens=4096,
         temperature=0.7,
     )
 
@@ -320,7 +320,7 @@ OUTPUT VARIABLES:
 {output_vars_text}
 
 {"KNOWLEDGE BASE AVAILABLE:" if knowledge_base else "NO KNOWLEDGE BASE AVAILABLE."}
-{knowledge_base
+{knowledge_base if knowledge_base else ""}
 
 Current prompt template:
 {template_spec["prompt"]}
@@ -340,7 +340,7 @@ Return ONLY the revised prompt template text, with no additional explanations.
     response = client.chat.completions.create(
         model=st.session_state.model,
         messages=[{"role": "user", "content": prompt}],
-
+        max_completion_tokens=4096,
         temperature=0.7,
     )
 
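The prompt-string changes in this commit rely on Python's conditional-expression precedence inside f-strings: in `{a + b if c else d}`, the ternary covers the whole concatenation, i.e. `(a + b) if c else d`. A minimal sketch of that pattern, using hypothetical document content:

```python
# Sketch of the conditional expression used in the prompt f-strings (hypothetical values).
# In f"{a + b if c else d}", the ternary spans the whole concatenation: (a + b) if c else d.
document_content = "Quarterly report text"
fragment = f'{"DOCUMENT CONTENT (EXCERPT):" + document_content + "..." if document_content else "NO DOCUMENTS PROVIDED"}'
print(fragment)  # DOCUMENT CONTENT (EXCERPT):Quarterly report text...

document_content = ""  # falsy: the else-branch placeholder is emitted instead
fragment = f'{"DOCUMENT CONTENT (EXCERPT):" + document_content + "..." if document_content else "NO DOCUMENTS PROVIDED"}'
print(fragment)  # NO DOCUMENTS PROVIDED
```

The same precedence explains why the unguarded old form (`{knowledge_base` with no fallback) needed the explicit `{knowledge_base if knowledge_base else ""}` rewrite: without the ternary, an empty knowledge base would still be interpolated as-is.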