---
title: PostGen - LinkedIn Content Scheduler
emoji: πŸš€
colorFrom: blue
colorTo: indigo
sdk: docker
pinned: false
---
# PostGen - AI-Powered LinkedIn Content Scheduler
PostGen is a comprehensive LinkedIn content scheduling application that integrates with Canva and LinkedIn APIs to automate content creation and posting. The app uses GPT for AI-generated content and Canva brand templates for consistent visual design.
## Features
- **Agentic AI System**: Multi-step AI planning that analyzes assets, extracts insights, and generates context-aware content
- **Document Parsing**: Automatic OCR analysis of uploaded documents using integrated OCR API
- **AI Content Generation**: Uses GPT with extracted asset insights to generate engaging, authentic LinkedIn posts
- **Canva Integration**: Access and apply Canva brand templates using the Autofill API
- **LinkedIn Scheduling**: Schedule and publish posts directly to LinkedIn
- **Asset Repository**: Upload and organize marketing materials by product categories with automatic content extraction
- **Smart Scheduler**: Agentic AI automatically generates content schedules based on date ranges, products, and post types
- **Product Categories**: Support for OCR, P2P, and O2C products with sub-categories
## Tech Stack
- **Frontend**: React, React Router, Tailwind CSS, Framer Motion, shadcn/ui
- **Backend**: FastAPI, Python
- **AI**: OpenAI GPT (model configurable via the `OPENAI_MODEL` environment variable, e.g. `gpt-4o`)
- **APIs**: Canva Connect API, LinkedIn API
## Setup Instructions
### Prerequisites
1. **Canva Account**:
- Canva Teams account (free trial available)
- Canva Connect API integration created
- Autofill API access registered
2. **LinkedIn Account**:
- LinkedIn Developer account
- LinkedIn App created with appropriate permissions
3. **OpenAI Account**:
- OpenAI API key with access to GPT models
### Environment Variables
Create a `.env` file in the backend directory with the following variables:
```env
# OpenAI
OPENAI_API_KEY=your_openai_api_key
OPENAI_MODEL=gpt-4o
# OCR API (for document parsing and asset analysis)
OCR_API_URL=https://seth0330-ezofisocr.hf.space
OCR_API_KEY=your_ocr_api_key
# Canva (optional - can be passed via API)
CANVA_ACCESS_TOKEN=your_canva_access_token
# LinkedIn (optional - can be passed via API)
LINKEDIN_ACCESS_TOKEN=your_linkedin_access_token
LINKEDIN_PERSON_URN=your_linkedin_person_urn
# Database (CockroachDB connection string)
# Format: postgresql://username:password@host:port/database?sslmode=verify-full
DATABASE_URL=postgresql://seth:YOUR_PASSWORD@ezofis-11210.jxf.gcp-us-east1.cockroachlabs.cloud:26257/defaultdb?sslmode=verify-full
```
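The variables above are read by the backend at startup. As a minimal sketch of how they map to settings (the helper name and defaults here are illustrative, not the app's actual code):

```python
import os

def load_settings(env=os.environ):
    """Collect the environment variables listed above into one dict.

    DATABASE_URL may be absent; the app then falls back to mock data.
    The gpt-4o default mirrors the sample .env, not a hard requirement.
    """
    return {
        "openai_api_key": env.get("OPENAI_API_KEY"),
        "openai_model": env.get("OPENAI_MODEL", "gpt-4o"),
        "ocr_api_url": env.get("OCR_API_URL"),
        "database_url": env.get("DATABASE_URL"),  # None => mock data
    }
```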
### Local Development
1. **Install Frontend Dependencies**:
```bash
cd frontend
npm install
```
2. **Install Backend Dependencies**:
```bash
cd backend
pip install -r requirements.txt
```
**Note for PDF Viewer**: The PDF viewer requires `poppler-utils` to be installed on your system:
- **macOS**: `brew install poppler`
- **Ubuntu/Debian**: `sudo apt-get install poppler-utils`
- **Windows**: Download from [poppler-windows](https://github.com/oschwartz10612/poppler-windows/releases) and add to PATH
3. **Run Frontend** (development mode):
```bash
cd frontend
npm run dev
```
4. **Run Backend**:
```bash
cd backend
uvicorn app.main:app --reload --port 8000
```
### Building for Production
1. **Build Frontend**:
```bash
cd frontend
npm run build
```
2. **Build Docker Image**:
```bash
docker build -t postgen .
```
3. **Run Docker Container**:
```bash
docker run -p 7860:7860 --env-file .env postgen
```
## Deployment to HuggingFace Spaces
### Step 1: Prepare Your Repository
1. Ensure all code is committed and pushed to your Git repository
2. Make sure the `Dockerfile` is in the root directory
3. Ensure `README.md` has the HuggingFace Spaces configuration at the top
### Step 2: Create a HuggingFace Space
1. Go to [HuggingFace Spaces](https://huggingface.co/spaces)
2. Click "Create new Space"
3. Choose:
- **SDK**: Docker
- **Name**: postgen (or your preferred name)
- **Visibility**: Public or Private
### Step 3: Configure Environment Variables
1. In your HuggingFace Space settings, go to "Variables and secrets"
2. Add the following secrets:
- `OPENAI_API_KEY`: Your OpenAI API key
- `OPENAI_MODEL`: gpt-4o (or your preferred model)
- `DATABASE_URL`: Your CockroachDB connection string (format: `postgresql://username:password@host:port/database?sslmode=verify-full`)
- `CANVA_ACCESS_TOKEN`: (Optional, can be set per user)
- `LINKEDIN_ACCESS_TOKEN`: (Optional, can be set per user)
**Note**: The app runs on dummy/mock data when `DATABASE_URL` is not set; features switch over to the database as their database integration is completed.
### Step 4: Connect Your Repository
1. In Space settings, connect your Git repository
2. Or push your code directly to the HuggingFace Space repository
### Step 5: Deploy
1. HuggingFace will automatically build and deploy your Docker image
2. Monitor the build logs in the Space interface
3. Once deployed, your app will be available at: `https://huggingface.co/spaces/your-username/postgen`
## API Integration Guide
### Canva Integration
1. **Get Access Token**:
- Create a Canva Connect API integration
- Complete OAuth flow to get access token
- Token should have scopes: `design:content:write`, `design:content:read`, `brandtemplate:content:read`, `brandtemplate:meta:read`
2. **Using Brand Templates**:
- Call `/api/canva/brand-templates` to get available templates
- Call `/api/canva/brand-templates/{id}/dataset` to get template structure
- Call `/api/canva/autofill` to create a design from template
- Poll `/api/canva/autofill/{job_id}` to check status
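The polling step above can be sketched as a small loop. This is a hedged example: the `get_status` callable stands in for a GET to `/api/canva/autofill/{job_id}`, and the `"success"`/`"failed"` terminal states are assumptions about the job payload, not documented values.

```python
import time

def poll_until_done(get_status, interval=2.0, timeout=60.0):
    """Poll a status callable until the autofill job reaches a terminal state.

    get_status: zero-arg callable returning the parsed job-status JSON
    (a stand-in for GET /api/canva/autofill/{job_id}).
    """
    deadline = time.time() + timeout
    while time.time() < deadline:
        status = get_status()
        # Terminal states are assumed; adjust to the actual job schema.
        if status.get("status") in ("success", "failed"):
            return status
        time.sleep(interval)
    raise TimeoutError("autofill job did not finish in time")
```

In real use you would pass a lambda that performs the HTTP GET with your HTTP client of choice.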
### LinkedIn Integration
1. **Get Access Token**:
- Create a LinkedIn App
- Request permissions: `w_member_social`, `r_liteprofile`
- Complete OAuth flow to get access token
2. **Posting to LinkedIn**:
- Call `/api/linkedin/post` with your access token and post content
- Media can be included via `media_uris` parameter
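A sketch of the request body for `/api/linkedin/post`; the `media_uris` parameter comes from the text above, while the `text` field name is an assumption about the endpoint's schema:

```python
def build_post_request(text, media_uris=None):
    """Build the JSON body for POST /api/linkedin/post (field names assumed)."""
    body = {"text": text}
    if media_uris:
        # media_uris is optional per the endpoint description above
        body["media_uris"] = list(media_uris)
    return body
```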
### AI Content Generation
1. **Generate Content**:
- Call `/api/ai/generate-content` with:
- `product_category`: 'ocr', 'p2p', or 'o2c'
- `post_type`: 'carousel', 'cover_content', 'content_only', or 'webinar'
- `context`: Optional additional context
- `assets`: Optional list of asset IDs
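The parameters above are enumerated, so a client can validate them before calling the endpoint. A minimal sketch (the parameter names and allowed values come from the list above; wrapping them in a flat JSON body is an assumption):

```python
VALID_CATEGORIES = {"ocr", "p2p", "o2c"}
VALID_POST_TYPES = {"carousel", "cover_content", "content_only", "webinar"}

def build_generate_request(product_category, post_type, context=None, assets=None):
    """Build and validate the body for POST /api/ai/generate-content."""
    if product_category not in VALID_CATEGORIES:
        raise ValueError(f"unknown product_category: {product_category!r}")
    if post_type not in VALID_POST_TYPES:
        raise ValueError(f"unknown post_type: {post_type!r}")
    body = {"product_category": product_category, "post_type": post_type}
    if context:
        body["context"] = context
    if assets:
        body["assets"] = list(assets)
    return body
```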
## Project Structure
```
PostGen/
β”œβ”€β”€ frontend/
β”‚   β”œβ”€β”€ src/
β”‚   β”‚   β”œβ”€β”€ components/
β”‚   β”‚   β”‚   β”œβ”€β”€ ui/              # shadcn/ui components
β”‚   β”‚   β”‚   └── Layout.jsx       # Main layout component
β”‚   β”‚   β”œβ”€β”€ pages/
β”‚   β”‚   β”‚   β”œβ”€β”€ Dashboard.jsx
β”‚   β”‚   β”‚   β”œβ”€β”€ Repository.jsx
β”‚   β”‚   β”‚   β”œβ”€β”€ Scheduler.jsx
β”‚   β”‚   β”‚   β”œβ”€β”€ PostEditor.jsx
β”‚   β”‚   β”‚   └── Integrations.jsx
β”‚   β”‚   β”œβ”€β”€ App.jsx
β”‚   β”‚   β”œβ”€β”€ main.jsx
β”‚   β”‚   └── utils.js
β”‚   β”œβ”€β”€ package.json
β”‚   └── vite.config.js
β”œβ”€β”€ backend/
β”‚   β”œβ”€β”€ app/
β”‚   β”‚   β”œβ”€β”€ services/
β”‚   β”‚   β”‚   β”œβ”€β”€ canva_service.py
β”‚   β”‚   β”‚   β”œβ”€β”€ linkedin_service.py
β”‚   β”‚   β”‚   └── ai_service.py
β”‚   β”‚   β”œβ”€β”€ models.py
β”‚   β”‚   β”œβ”€β”€ schemas.py
β”‚   β”‚   └── main.py
β”‚   └── requirements.txt
β”œβ”€β”€ Dockerfile
└── README.md
```
## Database Setup (CockroachDB)
### Setting Up CockroachDB Connection
1. **Get Your Connection String**:
- From your CockroachDB dashboard, copy the connection string
- Format: `postgresql://username:password@host:port/database?sslmode=require`
- Example: `postgresql://seth:YOUR_PASSWORD@ezofis-11210.jxf.gcp-us-east1.cockroachlabs.cloud:26257/defaultdb?sslmode=require`
- **Note**: The app defaults to `sslmode=require`, which encrypts the connection without needing a local CA certificate file (unlike `verify-full`, it does not verify the server's identity)
- If you supply `sslmode=verify-full` and the certificate file is not available, the app automatically falls back to `require`
2. **Add to HuggingFace Spaces**:
- Go to your Space settings β†’ "Variables and secrets"
- Add `DATABASE_URL` with your connection string
3. **Database Tables**:
- Tables are automatically created on first startup
- The app will initialize: `users`, `integrations`, `assets`, `posts`, `campaigns`
4. **Dummy Data**:
- The app currently uses dummy/mock data for all endpoints
- As features are implemented, they will gradually use the database
- Dummy data will be removed feature by feature as database integration is completed
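The `verify-full` to `require` fallback described above can be sketched as a small URL rewrite. This is an illustrative implementation of that behavior, not the app's actual code:

```python
from urllib.parse import parse_qs, urlencode, urlsplit, urlunsplit

def relax_sslmode(database_url):
    """Rewrite sslmode=verify-full to sslmode=require in a connection string.

    Mirrors the fallback described above for environments where the CA
    certificate file is unavailable; all other query parameters are kept.
    """
    parts = urlsplit(database_url)
    query = parse_qs(parts.query)
    if query.get("sslmode") == ["verify-full"]:
        query["sslmode"] = ["require"]
    return urlunsplit(parts._replace(query=urlencode(query, doseq=True)))
```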
## Next Steps After Deployment
1. **Connect Integrations**:
- Go to the Integrations page
- Connect your Canva account
- Connect your LinkedIn account
2. **Upload Assets**:
- Go to Repository page
- Upload marketing materials, screenshots, and documents
- Classify them by product category
3. **Create Campaign**:
- Go to Scheduler page
- Click "Campaign Settings"
- Configure date range, products, post types, and frequency
- Click "Generate Schedule"
4. **Review and Schedule**:
- Review AI-generated posts
- Edit content if needed
- Confirm and schedule posts
## Troubleshooting
### Build Issues
- **Frontend build fails**: Check Node.js version (requires 18+)
- **Backend import errors**: Ensure all Python packages are installed
- **Docker build fails**: Check Dockerfile syntax and paths
### Runtime Issues
- **API errors**: Verify environment variables are set correctly
- **Canva API errors**: Check access token and scopes
- **LinkedIn API errors**: Verify OAuth permissions
- **AI generation fails**: Check OpenAI API key and quota
### Common Issues
1. **CORS errors**: cross-origin requests are already allowed by the backend's CORS middleware
2. **Port conflicts**: HuggingFace Spaces uses port 7860
3. **File uploads**: Ensure uploads directory has write permissions
## Support
For issues or questions:
- Check the API documentation in the code
- Review Canva API docs: https://www.canva.dev/docs/connect/
- Review LinkedIn API docs: https://docs.microsoft.com/en-us/linkedin/
## License
This project is private and proprietary.