AI & ML interests
Data Valuation, AI Governance
Overview of the YDC_AIGOV Platform
The YDC_AIGOV platform employs multiple AI agents to support Shadow AI Governance of applications with embedded AI.
Problem We Are Trying to Solve
The primary challenge tackled by YDC_AIGOV is a lack of transparency and accessibility in AI policies associated with various applications. Key issues include:
- Difficulty in Locating AI Policy URLs: Users often struggle to find the privacy policies relevant to the applications they use (AI policies are often embedded with company privacy policies).
- Understanding AI Model Integrations: Many users are unaware if the applications they engage with contain embedded AI models and if their data is utilized in training these models.
- Clarifying Opt-Out Options: There is often confusion regarding whether users have the option to opt-out of having their data used for AI training purposes.
- Streamlining Compliance: Organizations face challenges in ensuring compliance with data governance regulations due to the complex nature of policy trails.
The YDC_AIGOV agents streamline this process, making it easier for both users and organizations to navigate and understand the policies surrounding AI technologies.
Agent Architecture
The following agents are used:
Fuzzy Matching Agent
Transforms the entered application name into the correct app name by correcting spelling errors, removing special characters, and addressing other inconsistencies.
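The normalization step above can be sketched with a simple fuzzy match. This is a minimal illustration using Python's standard library; the catalog of known application names and the similarity cutoff are assumptions for the example, not the platform's actual matching logic.

```python
import re
from difflib import get_close_matches
from typing import Optional

# Hypothetical catalog of known application names (illustrative only).
KNOWN_APPS = ["Slack", "Notion", "Zoom", "Salesforce", "Grammarly"]

def normalize_app_name(raw_name: str) -> Optional[str]:
    """Clean the entered name and fuzzy-match it against known app names."""
    # Remove special characters and collapse surrounding whitespace.
    cleaned = re.sub(r"[^A-Za-z0-9 ]+", "", raw_name).strip()
    # Tolerate spelling errors by taking the closest known name, if any.
    matches = get_close_matches(cleaned, KNOWN_APPS, n=1, cutoff=0.6)
    return matches[0] if matches else None
```

For example, `normalize_app_name("sl@ck!")` resolves to `"Slack"`, while an unrecognizable input returns `None` so downstream agents are not fed a bad match.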
Embedded AI Detector Agent
Searches the internet and fetches relevant data to determine whether the application uses AI.
Policy Finder Agent
A CrewAI agent that finds the application's privacy policy URL.
Policy Analyzer Agent
Provides answers to the following questions:
- Is user data used for AI/ML training?
- Is there an opt-out option for AI/ML training?
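The two questions above can be illustrated with a keyword heuristic. Note the actual agent uses an LLM over the full policy text; this regex sketch only shows the shape of the answers being produced, and the patterns are assumptions for illustration.

```python
import re

def analyze_policy(policy_text: str) -> dict:
    """Answer the two governance questions with simple keyword heuristics.

    Illustrative only: the real Policy Analyzer Agent interprets the policy
    with an LLM rather than fixed patterns.
    """
    text = policy_text.lower()
    # Look for language about training or improving AI/ML models.
    uses_for_training = bool(
        re.search(r"(train|improve)\s+(our\s+)?(ai|machine[- ]learning|models?)", text)
    )
    # Look for any mention of an opt-out mechanism.
    opt_out = bool(re.search(r"opt[- ]?out", text))
    return {
        "data_used_for_ai_training": uses_for_training,
        "opt_out_available": opt_out,
    }
```

A policy stating "We may use your content to train our AI models. You can opt out at any time." would yield `True` for both questions.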
Policy Decision Agent
Makes final determinations on AI policies based on extracted insights from privacy policy text.
Risk Classifier Agent
Classifies an input application as High Risk, Prohibited, or Other, along with the rationale for the classification.
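The tiering logic can be sketched as a keyword lookup. This is a minimal sketch: the keyword lists and tier rules here are illustrative assumptions, whereas the actual agent derives its classification and rationale from the extracted policy insights.

```python
from typing import Tuple

# Illustrative indicator keywords (assumptions, not the agent's real criteria).
PROHIBITED_KEYWORDS = ("social scoring", "subliminal manipulation")
HIGH_RISK_KEYWORDS = ("biometric", "hiring", "credit scoring")

def classify_risk(description: str) -> Tuple[str, str]:
    """Return a (tier, rationale) pair for an application description."""
    text = description.lower()
    # Prohibited indicators take precedence over high-risk ones.
    for kw in PROHIBITED_KEYWORDS:
        if kw in text:
            return "Prohibited", f"Description mentions '{kw}'."
    for kw in HIGH_RISK_KEYWORDS:
        if kw in text:
            return "High Risk", f"Description mentions '{kw}'."
    return "Other", "No prohibited or high-risk indicators found."
```

Checking prohibited indicators first reflects that the most severe tier should win when multiple indicators appear.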
The processed data is then sent to AI governance platforms like Collibra and ServiceNow, where further workflows are executed.
The following agents are used for creating applications, AI use cases, and assessments:
A. createApp_Agent: Creates applications in the platforms from the CSV input using their REST APIs.
B. createAIUseCases_Agent: If embedded AI is “Yes,” the corresponding AI use cases are auto-created.
C. conductAIRiskAssessment_Agent: Risk assessments are performed automatically for AI use cases, particularly if Embedded AI is “Yes” and Data specifically excluded from AI training is “No”.
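The CSV-to-API step can be sketched as payload construction. The field names and payload structure below are illustrative assumptions, not the actual Collibra or ServiceNow schemas, and the sketch stops short of sending the request.

```python
import csv
import io
from typing import List

def build_create_app_payloads(csv_text: str) -> List[dict]:
    """Turn CSV rows into JSON-ready payloads for a governance platform's REST API.

    Column and field names here (app_name, embedded_ai, data_excluded) are
    hypothetical; the real agents map to the target platform's schema.
    """
    payloads = []
    for row in csv.DictReader(io.StringIO(csv_text)):
        payloads.append({
            "name": row["app_name"],
            "embedded_ai": row["embedded_ai"] == "Yes",
            "data_excluded_from_training": row["data_excluded"] == "Yes",
        })
    return payloads
```

Downstream, the use-case and assessment agents would filter these payloads, e.g. auto-creating a use case only when `embedded_ai` is true.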
Risks
Hallucinations: AI agents might generate incorrect or fabricated information when the required data is not available or poorly structured.
Incomplete Data: Web scraping might miss or misinterpret content due to website restrictions or dynamic loading.
Bias and Misclassification: AI may misclassify the presence of opt-out options or embedded AI due to ambiguous language.
Agent-Specific Risks:
Fuzzy Matching Agent
Incorrect or incomplete app name matches due to imperfect algorithms and variations in spelling or formatting.
Embedded AI Detector Agent
Fetches irrelevant or outdated information.
Policy Finder Agent
Fails to find the policy URL, or retrieves an incorrect one.
Policy Analyzer Agent
Misinterprets policy text due to ambiguous language.
Policy Decision Agent
Hallucinates answers when information is incomplete or ambiguous.
EU AI Act Classifier Agent
Often targets specific keywords from the description, which may give incorrect responses in some cases.
Risk Dimension Agent
Fails to determine the correct risk dimension, leading to misclassified assessments.
Risk Dimension AI UseCase Agent
Retrieves outdated or misinterpreted AI risk and Responsible AI policy data.
Risk Dimension AI Vendor Agent
Extracts biased, incomplete, or misleading vendor-related AI risk information.
Accuracy Metrics
Check out the detailed blog for in-depth information.
You can learn more about us here.
