update readme
README.md CHANGED

@@ -15,4 +15,8 @@ If you are evaluating an AI product or building your own, then you can use these
 
 ADA benchmark was created because of severe short-comings found in closed-domain assistants in the wild. We found that apart from few basic canned questions or workflows,
 the assistant was struggling to do anything exploratory or meaningful. This was found to be because the assistant is not connected
-to sufficient data and is unable to perform complex or sequential operations over that data.
+to sufficient data and is unable to perform complex or sequential operations over that data.
+
+## Compare
+
+If you want to measure your AI product or technique for these questions, you can find the sample schemas and data here: https://github.com/hasura/agentic-data-access-benchmark