---
title: CyberLegalAIendpoint
sdk: docker
app_port: 8000
---
# CyberLegal AI - LangGraph Agent

An advanced cyber-legal assistant powered by LangGraph + LightRAG + GPT-5-Nano, specializing in European regulations.
## Architecture

```
┌─────────────────┐     ┌──────────────────┐     ┌─────────────────┐
│   Client API    │────▶│ LangGraph Agent  │────▶│ LightRAG Server │
│   (Port 8000)   │     │ (Orchestration)  │     │   (Port 9621)   │
└─────────────────┘     └──────────────────┘     └─────────────────┘
                                 │
                                 ▼
                         ┌──────────────┐
                         │  GPT-5-Nano  │
                         │ (Reasoning)  │
                         └──────────────┘
```
## Quick Start

### Using Docker Compose (Recommended)

**1. Environment setup**

```bash
# Copy and configure environment
cp .env.example .env
# Edit .env with your API keys:
# OPENAI_API_KEY=your_openai_key
# LIGHTRAG_API_KEY=your_lightrag_key (optional)
```

**2. Deploy**

```bash
docker-compose up -d
```

**3. Verify deployment**

```bash
curl http://localhost:8000/health
```
### Using Docker Directly

```bash
# Build the image
docker build -t cyberlegal-ai .

# Run the container
docker run -d \
  --name cyberlegal-ai \
  -p 8000:8000 \
  -e OPENAI_API_KEY=your_key \
  -v $(pwd)/rag_storage:/app/rag_storage \
  cyberlegal-ai
```
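For reference, the two-service layout described in this README (API published, LightRAG internal) could be expressed in Compose roughly as follows. This is a hypothetical sketch, not the project's actual `docker-compose.yml`; the service names and the LightRAG image are placeholders.

```yaml
# Hypothetical sketch of the architecture above, not the real compose file.
services:
  cyberlegal-api:
    build: .
    ports:
      - "8000:8000"          # only the API is published to the host
    environment:
      - OPENAI_API_KEY=${OPENAI_API_KEY}
      - LIGHTRAG_HOST=lightrag
      - LIGHTRAG_PORT=9621
    volumes:
      - ./rag_storage:/app/rag_storage
    depends_on:
      - lightrag

  lightrag:
    image: lightrag:latest   # placeholder: substitute your LightRAG image
    expose:
      - "9621"               # internal only; not reachable from outside
```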
## API Usage

### Base URL

```
http://localhost:8000
```

### Endpoints

**Chat with Assistant**

```bash
curl -X POST "http://localhost:8000/chat" \
  -H "Content-Type: application/json" \
  -d '{
    "message": "What are the main obligations under GDPR?",
    "role": "client",
    "jurisdiction": "EU",
    "conversationHistory": []
  }'
```

**Health Check**

```bash
curl http://localhost:8000/health
```

**API Info**

```bash
curl http://localhost:8000/
```
## Request Format

```jsonc
{
  "message": "User's legal question",
  "role": "client" | "lawyer",
  "jurisdiction": "EU" | "France" | "Germany" | "Italy" | "Spain" | "Romania" | "Netherlands" | "Belgium",
  "conversationHistory": [
    {"role": "user|assistant", "content": "Previous message"}
  ]
}
```
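The schema above can be validated client-side before sending. The sketch below mirrors the documented field names and allowed values; the `ChatRequest` class itself is illustrative, not the server's actual model.

```python
# Illustrative request validation mirroring the README's schema.
# Field names come from the documented request format; the class is a sketch.
from dataclasses import dataclass, field

ROLES = {"client", "lawyer"}
JURISDICTIONS = {"EU", "France", "Germany", "Italy", "Spain",
                 "Romania", "Netherlands", "Belgium"}

@dataclass
class ChatRequest:
    message: str
    role: str = "client"
    jurisdiction: str = "EU"
    conversationHistory: list = field(default_factory=list)

    def __post_init__(self):
        if not self.message.strip():
            raise ValueError("message must be non-empty")
        if self.role not in ROLES:
            raise ValueError(f"role must be one of {sorted(ROLES)}")
        if self.jurisdiction not in JURISDICTIONS:
            raise ValueError(f"unknown jurisdiction: {self.jurisdiction}")

req = ChatRequest(message="What are GDPR penalties?", jurisdiction="France")
```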
## Response Format

```json
{
  "response": "Detailed legal answer with references",
  "confidence": 0.85,
  "processing_time": 2.34,
  "references": ["gdpr_2022_2555.txt", "nis2_2022_2555.txt"],
  "timestamp": "2025-01-15T10:30:00Z",
  "error": null
}
```
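A caller would typically check the `error` field first and then decide how much to trust the answer from `confidence`. A minimal handling sketch (the 0.7 threshold is an arbitrary example, not part of the API):

```python
# Sketch of client-side handling of the documented response fields.
def summarize(resp: dict) -> str:
    if resp.get("error"):
        return f"API error: {resp['error']}"
    note = "" if resp.get("confidence", 0) >= 0.7 else " (low confidence)"
    refs = ", ".join(resp.get("references", []))
    return f"{resp['response']}{note}\nSources: {refs}"

example = {
    "response": "Detailed legal answer with references",
    "confidence": 0.85,
    "processing_time": 2.34,
    "references": ["gdpr_2022_2555.txt", "nis2_2022_2555.txt"],
    "timestamp": "2025-01-15T10:30:00Z",
    "error": None,
}
print(summarize(example))
```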
## Expertise Areas

- GDPR (General Data Protection Regulation)
- NIS2 (Network and Information Systems Directive 2)
- DORA (Digital Operational Resilience Act)
- CRA (Cyber Resilience Act)
- eIDAS 2.0 (Electronic Identification, Authentication and Trust Services)
- Romanian Civil Code provisions
## Workflow

1. **User Query** → API receives request with role/jurisdiction context
2. **LightRAG Retrieval** → Searches legal documents for relevant information
3. **LangGraph Processing** → Orchestrates the workflow through nodes:
   - Query validation
   - LightRAG integration
   - Context enhancement with GPT-5-Nano
   - Response formatting
4. **Enhanced Response** → Returns structured answer with confidence score
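The node sequence above can be sketched as a plain-Python pipeline. The real project wires these steps as LangGraph nodes in `langraph_agent.py`; the function names and bodies here are illustrative stand-ins.

```python
# Illustrative pipeline mirroring the workflow steps above.
def validate_query(state: dict) -> dict:
    assert state["message"].strip(), "empty query"
    return state

def retrieve_context(state: dict) -> dict:
    # In the real system this queries the LightRAG server on port 9621.
    state["context"] = f"[documents relevant to: {state['message']}]"
    return state

def enhance_with_llm(state: dict) -> dict:
    # In the real system this calls GPT-5-Nano with the retrieved context.
    state["draft"] = f"Answer based on {state['context']}"
    return state

def format_response(state: dict) -> dict:
    state["response"] = {"response": state["draft"], "confidence": 0.85}
    return state

PIPELINE = [validate_query, retrieve_context, enhance_with_llm, format_response]

def run(message: str, role: str = "client", jurisdiction: str = "EU") -> dict:
    state = {"message": message, "role": role, "jurisdiction": jurisdiction}
    for node in PIPELINE:
        state = node(state)
    return state["response"]
```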
## Development

### Local Development

```bash
# Install dependencies
pip install -r requirements.txt

# Start LightRAG server (required)
lightrag-server --host 127.0.0.1 --port 9621

# Start the API
python agent_api.py
```

### Environment Variables

```bash
OPENAI_API_KEY=your_openai_api_key
LIGHTRAG_API_KEY=your_lightrag_api_key
LIGHTRAG_HOST=127.0.0.1
LIGHTRAG_PORT=9621
API_PORT=8000
```
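These variables can be read with the defaults documented above. The `Settings` class below is an illustrative sketch, not the project's actual configuration code:

```python
# Sketch of loading the environment variables above with their defaults.
import os

class Settings:
    OPENAI_API_KEY = os.getenv("OPENAI_API_KEY", "")
    LIGHTRAG_API_KEY = os.getenv("LIGHTRAG_API_KEY", "")  # optional
    LIGHTRAG_HOST = os.getenv("LIGHTRAG_HOST", "127.0.0.1")
    LIGHTRAG_PORT = int(os.getenv("LIGHTRAG_PORT", "9621"))
    API_PORT = int(os.getenv("API_PORT", "8000"))

    @classmethod
    def lightrag_url(cls) -> str:
        return f"http://{cls.LIGHTRAG_HOST}:{cls.LIGHTRAG_PORT}"
```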
## Project Structure

```
CyberlegalAI/
├── agent_api.py          # FastAPI server
├── langraph_agent.py     # Main LangGraph workflow
├── agent_state.py        # State management
├── prompts.py            # System prompts
├── utils.py              # LightRAG integration
├── requirements.txt      # Python dependencies
├── Dockerfile            # Container configuration
├── docker-compose.yml    # Orchestration
├── rag_storage/          # LightRAG data persistence
└── .env                  # Environment variables
```
## Configuration

### Port Management

- Port 8000: API (exposed externally)
- Port 9621: LightRAG (internal only, for security)

### Security Features

- LightRAG server not exposed externally
- API key authentication support
- Non-root container execution
- Health checks and monitoring
## Monitoring

### Health Checks

```bash
# Container health
docker ps

# Service health
curl http://localhost:8000/health

# Logs
docker logs cyberlegal-ai
```

### Performance Metrics

The API returns:

- Processing time per request
- Confidence scores
- Referenced documents
- Error tracking
## Error Handling

The API gracefully handles:

- LightRAG server unavailability
- OpenAI API errors
- Invalid request format
- Network timeouts
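On the client side, transient failures such as timeouts can be absorbed with a simple retry loop. A stdlib-only sketch (the retry count and backoff delays are arbitrary examples, not API requirements):

```python
# Client-side retry sketch for the failure modes listed above.
import json
import time
import urllib.error
import urllib.request

def chat_with_retry(url, payload, retries=3, timeout=30):
    data = json.dumps(payload).encode()
    req = urllib.request.Request(
        url, data=data, headers={"Content-Type": "application/json"}
    )
    for attempt in range(retries):
        try:
            with urllib.request.urlopen(req, timeout=timeout) as resp:
                return json.loads(resp.read())
        except (urllib.error.URLError, TimeoutError):
            if attempt == retries - 1:
                raise
            time.sleep(2 ** attempt)  # exponential backoff: 1s, 2s, 4s, ...
```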
## API Examples

### Client Role Example

```json
{
  "message": "What should my small business do to comply with GDPR?",
  "role": "client",
  "jurisdiction": "France"
}
```

### Lawyer Role Example

```json
{
  "message": "Analyze the legal implications of NIS2 for financial institutions",
  "role": "lawyer",
  "jurisdiction": "EU"
}
```

### Comparison Query

```json
{
  "message": "Compare incident reporting requirements between NIS2 and DORA",
  "role": "client",
  "jurisdiction": "EU"
}
```
## Integration Examples

### Python Client

```python
import requests

response = requests.post("http://localhost:8000/chat", json={
    "message": "What are GDPR penalties?",
    "role": "client",
    "jurisdiction": "EU",
    "conversationHistory": []
})
result = response.json()
print(result["response"])
```

### JavaScript Client

```javascript
const response = await fetch('http://localhost:8000/chat', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({
    message: 'GDPR requirements',
    role: 'client',
    jurisdiction: 'EU',
    conversationHistory: []
  })
});
const result = await response.json();
console.log(result.response);
```
## Troubleshooting

### Common Issues

**LightRAG connection failed**

- Verify the LightRAG server is running on port 9621
- Check container logs:

```bash
docker logs cyberlegal-ai
```

**OpenAI API errors**

- Verify `OPENAI_API_KEY` is set correctly
- Check API key permissions and quota

**Slow responses**

- Monitor `processing_time` in the API response
- Check LightRAG document indexing

### Debug Mode

Enable debug logging:

```bash
docker-compose logs -f cyberlegal-api
```
## License

This project provides general legal information and is not a substitute for professional legal advice.
## Updates

The system automatically:

- Retrieves the latest regulatory documents
- Updates its knowledge base through LightRAG
- Maintains conversation context
- Provides confidence scoring