Spaces: Running on Zero

Update app.py
app.py CHANGED

@@ -16,6 +16,10 @@ DESCRIPTION = """\
 # ORLM LLaMA-3-8B
 
 Hello! I'm ORLM-LLaMA-3-8B, here to automate your optimization modeling tasks! Check our [repo](https://github.com/Cardinal-Operations/ORLM) and [paper](https://arxiv.org/abs/2405.17743)!
+
+Please note that solution generation may be terminated if it exceeds 100 seconds. We strongly recommend running the demo locally using our [sample script](https://github.com/Cardinal-Operations/ORLM/blob/master/scripts/inference.py) for a smoother experience.
+
+If the demo successfully generates a code solution, execute it in your Python environment with `coptpy` installed to obtain the final optimal value for your task.
 """
 
 MAX_MAX_NEW_TOKENS = 4096
@@ -43,7 +47,7 @@ Below is an operations research question. Build a mathematical model and corresp
 # Response:
 """
 
-@spaces.GPU(duration=
+@spaces.GPU(duration=100)
 def generate(
     message: str,
     chat_history: list[tuple[str, str]],
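The key change is the `@spaces.GPU(duration=100)` decorator, which requests a ZeroGPU allocation budgeted at 100 seconds per call, matching the new warning in `DESCRIPTION`. A minimal sketch of the duration-budget idea, using a hypothetical pure-Python stand-in named `gpu` (the real `spaces.GPU` additionally handles GPU scheduling and release on Hugging Face Spaces, which this sketch does not attempt):

```python
import functools
import time


def gpu(duration: int = 60):
    """Hypothetical stand-in for spaces.GPU: flags calls that run past `duration` seconds."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            start = time.monotonic()
            result = fn(*args, **kwargs)
            elapsed = time.monotonic() - start
            if elapsed > duration:
                # The real ZeroGPU runtime would terminate the call instead.
                raise TimeoutError(f"{fn.__name__} exceeded its {duration}s budget")
            return result
        return wrapper
    return decorator


@gpu(duration=100)
def generate(message: str, chat_history: list[tuple[str, str]]) -> str:
    # Placeholder for the model call performed in app.py.
    return f"echo: {message}"
```

Unlike this after-the-fact check, the actual ZeroGPU runtime enforces the budget while the call is running, which is why a long solution generation can be cut off mid-stream.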