---
title: AI Literature Review System
emoji: 📚
colorFrom: blue
colorTo: green
sdk: gradio
sdk_version: 5.9.1
app_file: app.py
pinned: false
license: mit
---

# 📚 AI Literature Review System

An AI-powered literature review application that uses multiple specialized reviewers to analyze research papers. Built with Gradio and powered by OpenAI-compatible APIs.

## 🌟 Features

- **Multi-Agent Review System**: three specialized AI reviewers with different perspectives:
  - **Experimentalist**: focuses on methodology and experimental rigor
  - **Impactist**: evaluates potential impact and field significance
  - **Novelty Seeker**: assesses originality and innovation
- **Comprehensive Analysis**: reviews papers across multiple dimensions:
  - Originality & Novelty
  - Technical Quality & Soundness
  - Clarity & Presentation
  - Significance & Impact
  - Contribution to the Field
- **PDF Upload**: simply upload your research paper and get instant feedback
- **MarkItDown Integration**: advanced PDF text extraction for better accuracy
- **Semantic Scholar Integration**: find and compare related papers
- **Detailed Feedback**: receive strengths, weaknesses, questions, and actionable suggestions
- **Academic-Standard Scoring**: based on top-tier conference review standards (NeurIPS-style)
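The related-papers lookup can be sketched with the public Semantic Scholar Graph API. The `find_related` helper, its field list, and defaults below are illustrative assumptions, not the app's actual code:

```python
import requests

def find_related(query, limit=5, api_key=None):
    """Search Semantic Scholar for papers related to `query`.

    Works without a key; passing one via the x-api-key header
    raises the rate limit.
    """
    headers = {"x-api-key": api_key} if api_key else {}
    resp = requests.get(
        "https://api.semanticscholar.org/graph/v1/paper/search",
        params={"query": query, "limit": limit,
                "fields": "title,year,abstract,url"},
        headers=headers,
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json().get("data", [])
```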

## 🚀 Quick Start

### Local Installation

1. Clone the repository:

   ```bash
   git clone https://huggingface.co/spaces/syaikhipin/PaperReview
   cd PaperReview
   ```

2. Install dependencies:

   ```bash
   pip install -r requirements.txt
   ```

3. Set up environment variables:

   ```bash
   cp .env.example .env
   # Edit .env with your API credentials
   ```

4. Run the application:

   ```bash
   python app.py
   ```

### Hugging Face Spaces

Simply visit the Space and configure your API settings in the UI, or set secrets in the Space settings:

- `OPENAI_API_KEY`: your API key
- `OPENAI_BASE_URL`: API endpoint (optional)
- `MODEL_NAME`: model identifier (optional)

## 📖 How to Use

1. **Configure API**: enter your API credentials in the UI or use environment variables
2. **Upload Paper**: upload your research paper in PDF format
3. **Enable Semantic Scholar** (optional): search for related papers
4. **Review**: click "Review Paper" and wait for the sequential analysis (3-6 minutes)
5. **Analyze Results**: review the detailed feedback from all three reviewers

## 📊 Understanding the Scores

The system provides scores on a 1-10 scale:

- **9-10**: Award Quality / Strong Accept
- **7-8**: Accept
- **5-6**: Borderline
- **3-4**: Borderline Reject
- **1-2**: Reject

Each reviewer evaluates:

- **Soundness** (1-4): technical quality
- **Presentation** (1-4): writing quality
- **Contribution** (1-4): overall contribution
- **Originality** (1-4): novelty of ideas
- **Quality** (1-4): research quality
- **Clarity** (1-4): clarity of presentation
- **Significance** (1-4): importance of results
- **Confidence** (1-5): reviewer confidence level
- **Overall** (1-10): overall assessment
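As a rough illustration of how per-dimension scores could roll up into the 1-10 overall rating, here is a minimal sketch. The equal weighting and linear rescaling are assumptions for illustration; the app's actual weighted aggregation may differ:

```python
# Dimension names mirror the README; each is scored 1-4 per reviewer.
DIMENSIONS = ["soundness", "presentation", "contribution", "originality",
              "quality", "clarity", "significance"]

def overall_score(reviews):
    """Average each reviewer's mean dimension score, rescaled to 1-10.

    `reviews` is a list of dicts mapping dimension name -> score (1-4).
    Equal weights are assumed here for simplicity.
    """
    per_reviewer = []
    for r in reviews:
        mean_1_to_4 = sum(r[d] for d in DIMENSIONS) / len(DIMENSIONS)
        # Map the 1-4 range linearly onto 1-10.
        per_reviewer.append(1 + (mean_1_to_4 - 1) * 9 / 3)
    return round(sum(per_reviewer) / len(per_reviewer), 1)
```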

πŸ—οΈ Project Structure

aireviewer/
β”œβ”€β”€ app.py              # Main Gradio application
β”œβ”€β”€ agents.py           # Multi-agent review system
β”œβ”€β”€ requirements.txt    # Python dependencies
β”œβ”€β”€ README.md          # This file
β”œβ”€β”€ .env.example       # Environment variables template
└── .gitignore         # Git ignore rules

πŸ› οΈ Technical Details

Multi-Agent Architecture

The system implements three specialized reviewer agents, each with a distinct persona:

1. **Experimentalist Reviewer**
   - Emphasizes experimental design and methodology
   - Looks for rigorous evaluation and clear insights
   - Questions reproducibility and statistical significance
2. **Impact-Focused Reviewer**
   - Evaluates potential field impact
   - Assesses practical applications
   - Considers broader implications
3. **Novelty-Focused Reviewer**
   - Seeks original contributions
   - Evaluates creative approaches
   - Identifies incremental vs. breakthrough work
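The three personas could be encoded as system prompts along these lines. The wording below is illustrative, not the app's actual prompts (those live in `agents.py`):

```python
# Illustrative reviewer personas as system prompts; one LLM call is
# made per persona, each producing an independent review.
PERSONAS = {
    "Experimentalist": (
        "You are a rigorous experimental reviewer. Scrutinize the "
        "methodology, statistical significance, and reproducibility."
    ),
    "Impactist": (
        "You evaluate potential impact: practical applications, field "
        "significance, and broader implications."
    ),
    "Novelty Seeker": (
        "You assess originality: distinguish incremental improvements "
        "from genuinely novel, breakthrough contributions."
    ),
}
```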

### Review Process

1. **PDF Text Extraction**: uses MarkItDown for high-quality text extraction
2. **Sequential Multi-Agent Review**: each agent independently evaluates the paper, one at a time
   - Rate-limited to 1 request per second to avoid API concurrency issues
   - Sequential processing ensures consistent quality and respects API limits
3. **Scoring Aggregation**: weighted scoring across multiple criteria
4. **Feedback Generation**: structured feedback with JSON parsing
5. **Related Papers**: Semantic Scholar API integration with rate limiting (1 req/sec)
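The sequential, rate-limited review loop can be sketched roughly as follows. The function name, prompt wording, and persona handling are illustrative assumptions rather than the app's exact code; `client` stands for any OpenAI-compatible client:

```python
import json
import time

def run_reviews(client, model, paper_text, personas):
    """Send the paper to each reviewer persona one at a time,
    sleeping 1 s between calls to respect the 1 request/second limit."""
    results = []
    for name, system_prompt in personas.items():
        resp = client.chat.completions.create(
            model=model,
            messages=[
                {"role": "system", "content": system_prompt},
                {"role": "user",
                 "content": f"Review this paper and reply in JSON:\n\n{paper_text}"},
            ],
        )
        # Feedback generation relies on the model replying in JSON.
        results.append({"reviewer": name,
                        "review": json.loads(resp.choices[0].message.content)})
        time.sleep(1)  # simple client-side rate limiting
    return results
```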

### API Compatibility

The system uses OpenAI-compatible APIs, supporting:

- OpenAI (GPT-4, GPT-3.5, etc.)
- Azure OpenAI
- Custom endpoints (LocalAI, vLLM, LiteLLM, etc.)
- Any OpenAI-compatible inference server

## 🔒 Privacy & Security

- API keys can be provided via the UI or environment variables
- Uploaded PDFs are processed in memory and not permanently stored
- All processing happens on your infrastructure
- Semantic Scholar searches are anonymous

## 🌐 Deployment

### Hugging Face Spaces

This app is deployed on Hugging Face Spaces. To deploy your own:

1. Create a new Space on Hugging Face
2. Select "Gradio" as the SDK
3. Upload all files from this repository
4. Configure secrets in the Space settings:
   - `OPENAI_API_KEY` (required)
   - `OPENAI_BASE_URL` (optional, defaults to OpenAI)
   - `MODEL_NAME` (optional, defaults to gpt-3.5-turbo)
   - `SEMANTIC_SCHOLAR_API_KEY` (optional but recommended for better rate limits)

### Docker

```dockerfile
FROM python:3.10-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY . .
CMD ["python", "app.py"]
```

πŸ“ Environment Variables

# Required
OPENAI_API_KEY=your-api-key-here

# Optional
OPENAI_BASE_URL=https://api.openai.com/v1
MODEL_NAME=gpt-4
SEMANTIC_SCHOLAR_API_KEY=your-semantic-scholar-key
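Reading these variables in `app.py` with the documented defaults might look like the sketch below (it assumes the `.env` file has already been loaded, e.g. via `python-dotenv`):

```python
import os

# Required; None here means the user must supply a key via the UI.
api_key = os.getenv("OPENAI_API_KEY")

# Optional, with the defaults described in this README.
base_url = os.getenv("OPENAI_BASE_URL", "https://api.openai.com/v1")
model = os.getenv("MODEL_NAME", "gpt-3.5-turbo")
s2_key = os.getenv("SEMANTIC_SCHOLAR_API_KEY")  # optional
```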

**Note:** All API keys should be stored as secrets in the Hugging Face Spaces settings or in a local `.env` file (never commit `.env` to git).

## ⚠️ Notes

- Reviews are generated sequentially (one at a time) with 1-second delays between API calls
- **Rate limiting**: both the LLM and Semantic Scholar APIs are limited to 1 request/second to avoid concurrency issues
- **Processing time**: 3-6 minutes, depending on paper length and API response time
- Ensure your PDF contains extractable text (not scanned images)
- **Semantic Scholar API**: using an API key provides higher rate limits (recommended)
- The system provides automated feedback; human review is still recommended

## 🤝 Contributing

Contributions are welcome! Feel free to:

- Report bugs
- Suggest new features
- Submit pull requests
- Improve documentation

## 📄 License

This project is open source and available under the MIT License.

πŸ™ Acknowledgments

  • Based on multi-agent research frameworks
  • Inspired by academic peer review processes
  • Uses MarkItDown for PDF processing
  • Integrates with Semantic Scholar API
  • Built with Gradio for easy deployment

## 📧 Contact

For questions or feedback, please open an issue on the repository.


**Live Demo:** https://huggingface.co/spaces/syaikhipin/PaperReview