---
title: Enflow Api
emoji: 📉
colorFrom: blue
colorTo: gray
sdk: docker
pinned: false
---

Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
# Enflow Backend
This is the backend for the Enflow application, which allows law enforcement agencies to build, maintain, and use specific workflows that automate tasks based on officers' daily logs.
## Deployment Status
The backend is currently deployed and running at:
- API Endpoint: https://huggingface.co/spaces/droov/enflow-api
## Key Features
- **Intelligent Document Processing**: Extract text from PDF logs using OCR and analyze with GPT-4o-mini
- **Activity Classification**: Automatically categorize activities into defined workflows
- **Markdown Template System**: Create and fill form templates using extracted data
- **Department Management**: Organize users by departments with hierarchical access control
- **PDF Generation**: Convert filled markdown templates to professionally formatted PDFs
## Environment Setup

The backend requires several environment variables to function properly. These can be set in a `.env` file in the root of the backend directory.
### Environment Variables

Copy the `env.example` file to `.env` and fill in the required values:

```bash
cp env.example .env
```
Then edit the .env file with your actual credentials:
- `MONGO_URI`: MongoDB connection string
- `JWT_SECRET`: Secret key for JWT token generation
- `OPENAI_API_KEY`: OpenAI API key for LLM processing (required)
- `REDIS_HOST`: Hostname or IP address of your Redis server
- `REDIS_PORT`: Redis port (default: 6379)
- `REDIS_PASSWORD`: Password for Redis authentication
- `FLASK_ENV`: Set to "development" or "production"
Alternatively, you can set `REDIS_URL` directly:

```
REDIS_URL=redis://:{password}@{host}:{port}/0
```
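The mapping between the two forms can be sketched in Python as follows; the helper name is illustrative, not part of the actual codebase:

```python
import os

def build_redis_url(host: str, port: int, password: str, db: int = 0) -> str:
    """Assemble a redis:// URL in the format shown above."""
    return f"redis://:{password}@{host}:{port}/{db}"

# Prefer an explicit REDIS_URL; otherwise compose it from the parts.
redis_url = os.environ.get("REDIS_URL") or build_redis_url(
    os.environ.get("REDIS_HOST", "localhost"),
    int(os.environ.get("REDIS_PORT", 6379)),
    os.environ.get("REDIS_PASSWORD", ""),
)
```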
### Important Security Notes
- Never commit the `.env` file to version control
- Do not expose these credentials in client-side code
- For production, use environment variables provided by your hosting platform rather than a `.env` file
## Running the Application
After setting up the environment variables, you can run the application:
```bash
# Install dependencies
pip install -r requirements.txt

# Run the Flask application
python app.py

# In a separate terminal, run the Celery worker for background tasks
celery -A utils.celery_tasks.celery_app worker --loglevel=info
```
## Document Processing Pipeline
The application processes documents in the following steps:
1. **PDF Text Extraction**: OCR processes PDF files to extract text
2. **Activity Extraction**: LLM analyzes log text to identify individual activities
3. **Workflow Classification**: Activities are matched to appropriate workflows
4. **Data Extraction**: Required fields are extracted from activities based on workflow requirements
5. **Form Generation**: Markdown templates are filled with extracted data
6. **PDF Generation**: Filled markdown is rendered as HTML and converted to PDF
This pipeline can run either synchronously or asynchronously using Celery tasks.
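The six steps above can be sketched as a chain of composable functions. Every function body below is a stub with a hypothetical name; the real implementations live in the `utils/` package:

```python
def extract_text(pdf_bytes):          # 1. PDF Text Extraction (OCR)
    return "log text"

def extract_activities(text):         # 2. Activity Extraction (LLM)
    return ["activity"]

def classify_workflows(activities):   # 3. Workflow Classification
    return [("activity", "workflow")]

def extract_fields(matches):          # 4. Data Extraction
    return [{"field": "value"}]

def fill_templates(records):          # 5. Form Generation (markdown)
    return ["# filled form"]

def render_pdfs(markdown_docs):       # 6. PDF Generation (markdown -> HTML -> PDF)
    return [b"%PDF..."]

def process_log(pdf_bytes):
    """Run the pipeline synchronously; the async variant wraps each
    step in a Celery task and chains them the same way."""
    text = extract_text(pdf_bytes)
    activities = extract_activities(text)
    matches = classify_workflows(activities)
    records = extract_fields(matches)
    forms = fill_templates(records)
    return render_pdfs(forms)
```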
## API Documentation
The API endpoints are organized by resource type:
- `/api/auth`: Authentication endpoints
- `/api/departments`: Department management
- `/api/workflows`: Workflow management
- `/api/logs`: Log upload and management
- `/api/incidents`: Incident management
For detailed API documentation, see the API_DOCUMENTATION.md file.
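As a rough sketch, resource prefixes like these are typically wired up as Flask blueprints; the blueprint and route names below are illustrative guesses, not taken from the actual `routes/` package:

```python
from flask import Flask, Blueprint

# Hypothetical auth blueprint; the real one lives in routes/.
auth_bp = Blueprint("auth", __name__)

@auth_bp.route("/login", methods=["POST"])
def login():
    # Placeholder response; the real endpoint validates credentials
    # and returns a signed JWT.
    return {"token": "..."}

app = Flask(__name__)
app.register_blueprint(auth_bp, url_prefix="/api/auth")
# ...and likewise for /api/departments, /api/workflows, /api/logs, /api/incidents.
```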
## Setup Instructions
### Prerequisites
- Python 3.10 or newer
- Docker and Docker Compose (optional, for containerized deployment)
- MongoDB (we're using MongoDB Atlas in the current setup)
- Redis server (for Celery task queue)
### Environment Setup
1. Clone the repository
2. Run the setup script to create the `.env` file with the required environment variables:

   ```bash
   python setup_env.py
   ```

   Or manually create a `.env` file in the backend directory with the following variables:

   ```
   MONGO_URI=your_mongodb_connection_string
   JWT_SECRET=your_jwt_secret
   OPENAI_API_KEY=your_openai_api_key
   REDIS_HOST=your_redis_host
   REDIS_PORT=your_redis_port
   REDIS_PASSWORD=your_redis_password
   FLASK_ENV=development
   ```
### Local Development
1. Create and activate a virtual environment:

   ```bash
   python -m venv venv
   source venv/bin/activate  # On Windows: venv\Scripts\activate
   ```

2. Install dependencies:

   ```bash
   pip install -r requirements.txt
   ```

3. Run the application:

   ```bash
   python app.py
   ```

The API will be available at http://localhost:5000
### Docker Deployment
Build and start the containers:

```bash
docker-compose up -d
```

The API will be available at http://localhost:5000
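For orientation, a Compose file for this kind of stack might look roughly like the sketch below; the service names and Redis image are assumptions, so consult the repository's actual `docker-compose.yml`:

```yaml
services:
  api:
    build: .
    ports:
      - "5000:5000"
    env_file: .env
    depends_on:
      - redis
  worker:
    build: .
    command: celery -A utils.celery_tasks.celery_app worker --loglevel=info
    env_file: .env
    depends_on:
      - redis
  redis:
    image: redis:7-alpine
```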
### HuggingFace Deployment
The backend is already deployed to HuggingFace Spaces at: https://huggingface.co/spaces/droov/enflow-api
For detailed deployment instructions, see the hugginface_setup.md file.
Key points for HuggingFace deployment:
- Set your Space to use the Docker SDK
- Add all environment variables in the Space settings
- Make sure the Redis configuration is properly set up
- The API will be available at your HuggingFace Space URL
## Project Structure
- `app.py` - Main Flask application
- `db.py` - Database connection and utilities
- `models/` - Data models
- `controllers/` - Controller functions
- `routes/` - API route definitions
- `utils/` - Utility functions and middleware
  - `celery_tasks.py` - Celery task definitions
  - `pdf_utils.py` - PDF processing and NLP utilities
- `celery_config.py` - Celery configuration for task management
- `setup_env.py` - Script to set up environment variables
- `test_department.py` - Test script for department creation
- `test_auth.py` - Test script for authentication
- `test_hf_deployment.py` - Test script for HuggingFace deployment
- `Dockerfile` - Docker configuration for containerized deployment
- `docker-compose.yml` - Docker Compose configuration for local development
- `API_DOCUMENTATION.md` - Detailed API documentation
- `hugginface_setup.md` - Guide for deploying to HuggingFace Spaces