tirumaraiselvan committed
Commit b29cbaa · verified · 1 Parent(s): 5e03583

Update README.md

Files changed (1):
  1. README.md +13 -9
README.md CHANGED
@@ -10,16 +10,20 @@ pretty_name: Agentic data access benchmark
 
  # Agentic Data Access benchmark (ADA benchmark)
 
- Agentic Data Access benchmark is a set of real-world questions over a few "closed domains" which can benefit tremendously from AI assistants/agents.
- Closed domains are domains where data is not implicitly available in the LLM because it resides in secure or private systems, e.g. enterprise databases.
- If you are evaluating an AI product or building your own assistant over closed domains, then you can use the nature of the questions here to
- qualitatively measure the capabilities of your assistants/agents.
+ Agentic Data Access benchmark is a set of real-world questions over a few "closed domains" that illustrate the evaluation of closed-domain AI assistants/agents.
+ Closed domains are domains where data is not implicitly available in the LLM because it resides in secure or private systems, e.g. enterprise databases, SaaS applications, etc.,
+ and AI solutions require mechanisms to connect an LLM to such data. If you are evaluating an AI product or building your own AI architecture over closed domains, then you can use
+ these questions (or the nature of these questions) to understand the capabilities of your system and qualitatively measure the performance of your assistants/agents.
 
- The ADA benchmark was created because of severe shortcomings found in closed-domain assistants in the wild.
- We found that apart from a few basic canned questions or workflows, the assistants were struggling to do anything new.
- This was found to be because the assistant is not connected to sufficient data and is unable to perform complex or sequential operations over that data.
- We call the ability of an AI system to agentically use and operate on data agentic data access.
+ The ADA benchmark was created because of severe shortcomings found in closed-domain assistants in the wild. We found that apart from a few basic canned questions or workflows,
+ the assistants struggled to do anything new. This was because the assistant is not connected
+ to sufficient data and is unable to perform complex or sequential operations over that data.
+ We call the ability of an AI system, given a description of the data, to agentically use and operate on that data agentic data access.
+
+ <p align="center">
+   <img src="https://cdn-uploads.huggingface.co/production/uploads/671495038fa7609ad14e0e34/jk-XVmeiNPM_MUEiwJBV6.png" alt="Agentic data access" width="600">
+ </p>
 
  ## Learn more
 
- Learn more about agentic data access and benchmark here: https://github.com/hasura/agentic-data-access-benchmark
+ Learn more about agentic data access and the benchmark here: https://github.com/hasura/agentic-data-access-benchmark
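
The questions are intended to be run against your own closed-domain assistant and reviewed qualitatively. Below is a minimal sketch of such a loop; the `questions.csv` file name, the `question` column, and the `ask_assistant` callable are hypothetical placeholders, not part of the benchmark, so adapt them to the actual dataset files and to your own agent.

```python
# Minimal sketch (not an official harness): run each benchmark question through
# a closed-domain assistant and print the pairs for qualitative review.
# "questions.csv", the "question" column, and ask_assistant() are hypothetical
# placeholders -- adapt them to the actual dataset files and to your own agent.
import csv


def ask_assistant(question: str) -> str:
    """Placeholder for a call into your closed-domain assistant/agent."""
    raise NotImplementedError("wire this up to your assistant")


with open("questions.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        question = row["question"]
        answer = ask_assistant(question)
        # Qualitative check: did the assistant reach the right closed-domain
        # data and compose the complex or sequential operations the question needs?
        print(f"Q: {question}\nA: {answer}\n")
```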