Spaces: Sleeping
github-actions[bot] committed on
Commit d449470 · 0 Parent(s)
Deploy app/api to HF Space
Browse files
- .gitignore +176 -0
- Dockerfile +32 -0
- README.md +12 -0
- docker-compose.yaml +15 -0
- main.py +44 -0
- midleware.py +36 -0
- models/schemas.py +212 -0
- requirements.txt +6 -0
- server/routes.py +548 -0
- server/websockets.py +179 -0
- services/fetcher_service.py +87 -0
- services/llm_service.py +233 -0
- services/prompts.py +280 -0
.gitignore
ADDED
@@ -0,0 +1,176 @@
+# Byte-compiled / optimized / DLL files
+__pycache__/
+*.py[cod]
+*$py.class
+
+# C extensions
+*.so
+
+# Distribution / packaging
+.Python
+build/
+develop-eggs/
+dist/
+downloads/
+eggs/
+.eggs/
+lib/
+lib64/
+parts/
+sdist/
+var/
+wheels/
+share/python-wheels/
+*.egg-info/
+.installed.cfg
+*.egg
+MANIFEST
+
+# PyInstaller
+# Usually these files are written by a python script from a template
+# before PyInstaller builds the exe, so as to inject date/other infos into it.
+*.manifest
+*.spec
+
+# Installer logs
+pip-log.txt
+pip-delete-this-directory.txt
+
+# Unit test / coverage reports
+htmlcov/
+.tox/
+.nox/
+.coverage
+.coverage.*
+.cache
+nosetests.xml
+coverage.xml
+*.cover
+*.py,cover
+.hypothesis/
+.pytest_cache/
+cover/
+
+# Translations
+*.mo
+*.pot
+
+# Django stuff:
+*.log
+local_settings.py
+db.sqlite3
+db.sqlite3-journal
+
+# Flask stuff:
+instance/
+.webassets-cache
+
+# Scrapy stuff:
+.scrapy
+
+# Sphinx documentation
+docs/_build/
+
+# PyBuilder
+.pybuilder/
+target/
+
+# Jupyter Notebook
+.ipynb_checkpoints
+
+# IPython
+profile_default/
+ipython_config.py
+
+# pyenv
+# For a library or package, you might want to ignore these files since the code is
+# intended to run in multiple environments; otherwise, check them in:
+# .python-version
+
+# pipenv
+# According to pypa/pipenv#598, it is recommended to include Pipfile.lock in version control.
+# However, in case of collaboration, if having platform-specific dependencies or dependencies
+# having no cross-platform support, pipenv may install dependencies that don't work, or not
+# install all needed dependencies.
+#Pipfile.lock
+
+# UV
+# Similar to Pipfile.lock, it is generally recommended to include uv.lock in version control.
+# This is especially recommended for binary packages to ensure reproducibility, and is more
+# commonly ignored for libraries.
+#uv.lock
+
+# poetry
+# Similar to Pipfile.lock, it is generally recommended to include poetry.lock in version control.
+# This is especially recommended for binary packages to ensure reproducibility, and is more
+# commonly ignored for libraries.
+# https://python-poetry.org/docs/basic-usage/#commit-your-poetrylock-file-to-version-control
+#poetry.lock
+
+# pdm
+# Similar to Pipfile.lock, it is generally recommended to include pdm.lock in version control.
+#pdm.lock
+# pdm stores project-wide configurations in .pdm.toml, but it is recommended to not include it
+# in version control.
+# https://pdm.fming.dev/latest/usage/project/#working-with-version-control
+.pdm.toml
+.pdm-python
+.pdm-build/
+
+# PEP 582; used by e.g. github.com/David-OConnor/pyflow and github.com/pdm-project/pdm
+__pypackages__/
+
+# Celery stuff
+celerybeat-schedule
+celerybeat.pid
+
+# SageMath parsed files
+*.sage.py
+
+# Environments
+.env
+.venv
+env/
+venv/
+ENV/
+env.bak/
+venv.bak/
+
+# Spyder project settings
+.spyderproject
+.spyproject
+
+# Rope project settings
+.ropeproject
+
+# mkdocs documentation
+/site
+
+# mypy
+.mypy_cache/
+.dmypy.json
+dmypy.json
+
+# Pyre type checker
+.pyre/
+
+# pytype static type analyzer
+.pytype/
+
+# Cython debug symbols
+cython_debug/
+
+# PyCharm
+# JetBrains specific template is maintained in a separate JetBrains.gitignore that can
+# be found at https://github.com/github/gitignore/blob/main/Global/JetBrains.gitignore
+# and can be added to the global gitignore or merged into this file. For a more nuclear
+# option (not recommended) you can uncomment the following to ignore the entire idea folder.
+#.idea/
+
+# Ruff stuff:
+.ruff_cache/
+
+# PyPI configuration file
+.pypirc
+
+observability_data/*
Dockerfile
ADDED
@@ -0,0 +1,32 @@
+# Use the official Python 3.12 image
+FROM python:3.12-slim
+
+# Set the working directory
+WORKDIR /app
+
+# Install required system dependencies
+RUN apt-get update && apt-get install -y \
+    curl \
+    git \
+    libpq-dev \
+    gcc \
+    && rm -rf /var/lib/apt/lists/*
+
+# Create the /app/.files directory (and other writable dirs) with full permissions
+RUN mkdir -p /app/.files && chmod 777 /app/.files && \
+    mkdir -p /app/logs && chmod 777 /app/logs && \
+    mkdir -p /app/observability_data && chmod 777 /app/observability_data && \
+    mkdir -p /app/devops_cache && chmod 777 /app/devops_cache
+
+# Copy the current repository into the container
+COPY . /app
+
+# Upgrade pip and install dependencies
+RUN pip install --upgrade pip && \
+    pip install -r requirements.txt && \
+    pip install git-recap==0.1.5 && \
+    pip install "core-for-ai[all] @ git+https://github.com/BrunoV21/AiCore.git"
+
+EXPOSE 7860
+
+CMD python main.py
README.md
ADDED
@@ -0,0 +1,12 @@
+---
+title: Git Recap
+emoji: 🚀
+colorFrom: indigo
+colorTo: purple
+sdk: docker
+pinned: true
+license: apache-2.0
+short_description: Recap your repositories with the power of LLMs!
+---
+
+Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
docker-compose.yaml
ADDED
@@ -0,0 +1,15 @@
+version: "3.8"
+
+services:
+  app:
+    build:
+      context: .
+      dockerfile: Dockerfile
+    env_file:
+      - .env
+    ports:
+      - "8000:8000"
+    volumes:
+      - .:/app
+    restart: unless-stopped
+    command: python main.py
main.py
ADDED
@@ -0,0 +1,44 @@
+from fastapi import FastAPI
+from fastapi.responses import RedirectResponse
+from fastapi.middleware.cors import CORSMiddleware
+import asyncio
+
+from server.routes import router as api_router
+from services.llm_service import simulate_llm_response
+from server.websockets import router as websocket_router
+from midleware import OriginAndRateLimitMiddleware, ALLOWED_ORIGIN
+
+# Initialize FastAPI app
+app = FastAPI(title="LLM Service API")
+
+app.add_middleware(
+    CORSMiddleware,
+    allow_origins=ALLOWED_ORIGIN,
+    allow_methods=["GET", "POST", "OPTIONS"]
+)
+app.add_middleware(OriginAndRateLimitMiddleware)
+
+# Include routers
+app.include_router(api_router)
+app.include_router(websocket_router)
+
+@app.get("/", include_in_schema=False)
+async def root():
+    return RedirectResponse(url="https://brunov21.github.io/GitRecap/")
+
+# Health check endpoint
+@app.get("/health")
+async def health_check():
+    return {"status": "healthy"}
+
+@app.get("/health2")
+async def stream_health_check():
+    response = simulate_llm_response("health")
+    return {"response": " ".join(response)}
+
+if __name__ == "__main__":
+    from dotenv import load_dotenv
+    import uvicorn
+
+    load_dotenv()
+    uvicorn.run(app, host="0.0.0.0", port=7860)
midleware.py
ADDED
@@ -0,0 +1,36 @@
+import os
+import time
+from fastapi import Request, HTTPException
+from starlette.middleware.base import BaseHTTPMiddleware
+from collections import defaultdict
+
+ALLOWED_ORIGIN = [
+    os.getenv("VITE_FRONTEND_HOST")
+]
+RATE_LIMIT = int(os.getenv("RATE_LIMIT", "30"))  # Max requests per time window
+WINDOW_SECONDS = int(os.getenv("WINDOW_SECONDS", "3"))  # Time window in seconds
+
+# Store timestamps of requests per IP
+request_logs = defaultdict(list)
+
+
+class OriginAndRateLimitMiddleware(BaseHTTPMiddleware):
+    async def dispatch(self, request: Request, call_next):
+        origin = request.headers.get("origin")
+        if origin and origin not in ALLOWED_ORIGIN:
+            raise HTTPException(status_code=403, detail="Forbidden: origin not allowed")
+
+        # Rate limiting logic based on client IP
+        client_ip = request.client.host
+        now = time.time()
+
+        # Clean up old request timestamps outside the current window
+        request_logs[client_ip] = [
+            t for t in request_logs[client_ip] if now - t < WINDOW_SECONDS
+        ]
+
+        if len(request_logs[client_ip]) >= RATE_LIMIT:
+            raise HTTPException(status_code=429, detail="Too Many Requests")
+
+        request_logs[client_ip].append(now)
+        return await call_next(request)
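The sliding-window check in midleware.py above is small enough to sanity-check in isolation. Below is a minimal standalone sketch of the same pruning-and-count logic; the `allow_request` helper is hypothetical (the deployed code runs this inline inside `dispatch`), and the limit and window are hardcoded here for illustration rather than read from the environment.

```python
from collections import defaultdict
import time

RATE_LIMIT = 3        # max requests per window (the middleware defaults to 30)
WINDOW_SECONDS = 3.0  # window length in seconds

request_logs = defaultdict(list)

def allow_request(client_ip, now=None):
    """Return True if the request fits the rate limit, mirroring the middleware's window logic."""
    now = time.time() if now is None else now
    # Drop timestamps that have aged out of the window
    request_logs[client_ip] = [t for t in request_logs[client_ip] if now - t < WINDOW_SECONDS]
    if len(request_logs[client_ip]) >= RATE_LIMIT:
        return False
    request_logs[client_ip].append(now)
    return True
```

With `RATE_LIMIT=3`, a fourth request inside the same 3-second window is rejected, and requests are admitted again once the oldest timestamps age out of the window.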
models/schemas.py
ADDED
@@ -0,0 +1,212 @@
+from pydantic import BaseModel, model_validator, Field
+from typing import Dict, Self, Optional, Any, List
+import ulid
+import re
+
+class CloneRequest(BaseModel):
+    """Request model for repository cloning endpoint."""
+    url: str
+
+class ChatRequest(BaseModel):
+    session_id: str = ""
+    message: str
+    model_params: Optional[Dict[str, Any]] = None
+
+    @model_validator(mode="after")
+    def set_session_id(self) -> Self:
+        if not self.session_id:
+            self.session_id = ulid.ulid()
+        return self
+
+
+# --- Branch Listing ---
+class BranchListResponse(BaseModel):
+    branches: List[str] = Field(..., description="List of branch names in the repository.")
+
+    @model_validator(mode='after')
+    def sort_branches(self):
+        """Sort branches with main/master at the top, then alphabetically."""
+        priority_branches = []
+        other_branches = []
+
+        for branch in self.branches:
+            if branch.lower() in ('main', 'master'):
+                priority_branches.append(branch)
+            else:
+                other_branches.append(branch)
+
+        # Sort priority branches (main, master) and other branches separately
+        priority_branches.sort(key=lambda x: (x.lower() != 'main', x.lower()))
+        other_branches.sort()
+
+        self.branches = priority_branches + other_branches
+        return self
+
+
+# --- Valid Target Branches ---
+class ValidTargetBranchesRequest(BaseModel):
+    session_id: str = Field(..., description="Session identifier.")
+    repo: str = Field(..., description="Repository name.")
+    source_branch: str = Field(..., description="Source branch name.")
+
+class ValidTargetBranchesResponse(BaseModel):
+    valid_target_branches: List[str] = Field(..., description="List of valid target branch names.")
+
+    @model_validator(mode='after')
+    def sort_branches(self):
+        """Sort branches with main/master at the top, then alphabetically."""
+        priority_branches = []
+        other_branches = []
+
+        for branch in self.valid_target_branches:
+            if branch.lower() in ('main', 'master'):
+                priority_branches.append(branch)
+            else:
+                other_branches.append(branch)
+
+        # Sort priority branches (main, master) and other branches separately
+        priority_branches.sort(key=lambda x: (x.lower() != 'main', x.lower()))
+        other_branches.sort()
+
+        self.valid_target_branches = priority_branches + other_branches
+        return self
+
+
+# --- Pull Request Creation ---
+class CreatePullRequestRequest(BaseModel):
+    session_id: str = Field(..., description="Session identifier.")
+    repo: str = Field(..., description="Repository name.")
+    source_branch: str = Field(..., description="Source branch name.")
+    target_branch: str = Field(..., description="Target branch name.")
+    body: str = Field(..., description="Body of the pull request. This field is required.")
+    draft: Optional[bool] = Field(False, description="Whether to create the PR as a draft.")
+    reviewers: Optional[List[str]] = Field(None, description="List of reviewer usernames.")
+    assignees: Optional[List[str]] = Field(None, description="List of assignee usernames.")
+    labels: Optional[List[str]] = Field(None, description="List of label names.")
+    description: Optional[str] = None
+    title: Optional[str] = None
+
+    @model_validator(mode="after")
+    def get_title_description(self) -> Self:
+        title, description = self.extract_title_and_description(self.body)
+        if self.title is None:
+            self.title = title
+        if self.description is None:
+            self.description = description
+
+        return self
+
+    @staticmethod
+    def extract_title_and_description(pr_text: str):
+        """
+        Extracts the PR title and description from a markdown-formatted PR text.
+
+        Expected format:
+        Title: <title text>
+
+        ## Summary
+        ...
+        """
+
+        # Use regex to find the title (first line starting with 'Title:')
+        title_match = re.search(r'^\s*Title:\s*(.+?)\s*$', pr_text, re.MULTILINE)
+
+        # Everything after the title is the description
+        description_match = re.search(r'^\s*Title:.*?\n+(.*)', pr_text, re.DOTALL)
+
+        title = title_match.group(1).strip() if title_match else ""
+        description = description_match.group(1).strip() if description_match else ""
+
+        return title, description
+
+
+
+# --- Pull Request Diff ---
+class GetPullRequestDiffRequest(BaseModel):
+    session_id: str = Field(..., description="Session identifier.")
+    repo: str = Field(..., description="Repository name.")
+    source_branch: str = Field(..., description="Source branch name.")
+    target_branch: str = Field(..., description="Target branch name.")
+
+class GetPullRequestDiffResponse(BaseModel):
+    commits: List[dict] = Field(..., description="List of commit dicts in the diff.")
+
+class CreatePullRequestResponse(BaseModel):
+    url: str = Field(..., description="URL of the created pull request.")
+    number: int = Field(..., description="Pull request number.")
+    state: str = Field(..., description="State of the pull request (e.g., open, closed).")
+    success: bool = Field(..., description="Whether the pull request was created successfully.")
+    # Optionally, include the generated description if LLM was used
+    generated_description: Optional[str] = Field(None, description="LLM-generated PR description, if applicable.")
+
+
+# --- Utility: Commit List for PR Description Generation ---
+class CommitMessagesForPRDescriptionRequest(BaseModel):
+    commit_messages: List[str] = Field(..., description="List of commit messages to summarize.")
+    session_id: str = Field(..., description="Session identifier.")
+
+class PRDescriptionResponse(BaseModel):
+    description: str = Field(..., description="LLM-generated pull request description.")
+
+
+# --- Authors Endpoint Schemas ---
+class AuthorInfo(BaseModel):
+    """Individual author information"""
+    name: str = Field(..., description="Author's name")
+    email: str = Field(..., description="Author's email address")
+
+
+class GetAuthorsRequest(BaseModel):
+    """Request model for fetching authors"""
+    session_id: str = Field(..., description="Session identifier")
+    repo_names: Optional[List[str]] = Field(
+        default=[],
+        description="List of repository names to fetch authors from. Empty list fetches from all repositories."
+    )
+
+
+class GetAuthorsResponse(BaseModel):
+    """Response model containing list of authors"""
+    authors: List[AuthorInfo] = Field(..., description="List of unique authors")
+    total_count: int = Field(..., description="Total number of unique authors")
+    repo_count: int = Field(..., description="Number of repositories processed")
+
+
+# --- Current Author Endpoint Schema ---
+class GetCurrentAuthorResponse(BaseModel):
+    """Response model for current author endpoint."""
+    author: Optional[Dict[str, str]] = Field(
+        None,
+        description="Current authenticated user's information (name and email), or None if not available"
+    )
+
+
+# --- Actions Response Schema ---
+class ActionsResponse(BaseModel):
+    """
+    Structured response for the actions endpoint.
+
+    This model encapsulates the response from the actions endpoint, including
+    the list of Git actionables, an optional user-facing informational message,
+    and metadata about any trimming operations performed to satisfy token limits.
+
+    Attributes:
+        actions: Formatted string containing Git actionables (commits, PRs, issues, etc.)
+        message: User-facing informational message (optional, present when trimming occurs)
+        trimmed_count: Number of actionables removed during trimming to satisfy token limits
+        total_count: Original number of actionables before any trimming was applied
+    """
+    actions: str = Field(..., description="Formatted string of Git actionables")
+    message: Optional[str] = Field(None, description="User-facing informational message about trimming")
+    trimmed_count: int = Field(0, description="Number of items removed during trimming")
+    total_count: int = Field(..., description="Total number of items before trimming")
+
+    class Config:
+        json_schema_extra = {
+            "example": {
+                "actions": "2025-03-14:\n - [Commit] in repo-frontend: Fix bug in authentication\n - [Pull Request] in repo-backend: Add new API endpoint (PR #42)\n\n2025-03-15:\n - [Commit] in repo-core: Update dependencies\n",
+                "message": "We're running the free version with a maximum token limit for contextual input. To stay within this limit, we automatically trimmed 15 older Git actionables from the context. We hope you understand!",
+                "trimmed_count": 15,
+                "total_count": 50
+            }
+        }
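Two pieces of models/schemas.py are pure functions and easy to verify standalone: the branch ordering applied by the `sort_branches` validators, and the regex splitting in `CreatePullRequestRequest.extract_title_and_description`. The sketch below replicates both as free functions for illustration (in the file they live on the Pydantic models, and the sort runs as an after-validator):

```python
import re

def sort_branches(branches):
    """main/master first (main before master), everything else alphabetical."""
    priority, other = [], []
    for branch in branches:
        (priority if branch.lower() in ("main", "master") else other).append(branch)
    # Key puts 'main' ahead of 'master'; non-priority branches sort alphabetically
    priority.sort(key=lambda x: (x.lower() != "main", x.lower()))
    other.sort()
    return priority + other

def extract_title_and_description(pr_text):
    """Split a 'Title: ...' header line from the markdown body, as the schema's regexes do."""
    title_match = re.search(r'^\s*Title:\s*(.+?)\s*$', pr_text, re.MULTILINE)
    description_match = re.search(r'^\s*Title:.*?\n+(.*)', pr_text, re.DOTALL)
    title = title_match.group(1).strip() if title_match else ""
    description = description_match.group(1).strip() if description_match else ""
    return title, description
```

Note that when no `Title:` line is present, both helpers' regexes fail to match and the schema falls back to empty strings for title and description.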
requirements.txt
ADDED
@@ -0,0 +1,6 @@
+fastapi==0.109.1
+uvicorn==0.23.2
+websockets==11.0.3
+pyjwt==2.10.1
+ulid==1.1
+python-multipart==0.0.18
server/routes.py
ADDED
@@ -0,0 +1,548 @@
+from fastapi import APIRouter, HTTPException, Request, Query
+from pydantic import BaseModel, Field
+from typing import Optional, List, Dict
+
+from models.schemas import (
+    BranchListResponse,
+    ValidTargetBranchesRequest,
+    ValidTargetBranchesResponse,
+    CreatePullRequestRequest,
+    CreatePullRequestResponse,
+    GetPullRequestDiffRequest,
+    GetPullRequestDiffResponse,
+    GetAuthorsRequest,
+    GetAuthorsResponse,
+    AuthorInfo,
+    ActionsResponse,
+    GetCurrentAuthorResponse,
+    CloneRequest
+)
+
+from services.llm_service import set_llm, get_llm, trim_messages
+from services.fetcher_service import store_fetcher, get_fetcher
+from git_recap.utils import parse_entries_to_txt, parse_releases_to_txt
+from aicore.llm.config import LlmConfig
+from datetime import datetime, timezone
+import requests
+import os
+
+router = APIRouter()
+
+
+GITHUB_ACCESS_TOKEN_URL = 'https://github.com/login/oauth/access_token'
+
+
+@router.post("/clone-repo")
+async def clone_repository(request: CloneRequest):
+    """
+    Endpoint for cloning a repository from a URL.
+
+    Args:
+        request: CloneRequest containing the repository URL
+
+    Returns:
+        dict: Contains session_id for subsequent operations
+
+    Raises:
+        HTTPException: 400 for invalid URL, 500 for cloning failure
+    """
+    try:
+        response = await create_llm_session()
+        session_id = response.get("session_id")
+        store_fetcher(session_id, request.url, "URL")
+        return {"session_id": session_id}
+    except ValueError as e:
+        raise HTTPException(status_code=400, detail=str(e))
+    except Exception as e:
+        raise HTTPException(status_code=500, detail=f"Failed to clone repository: {str(e)}")
+
+
+@router.get("/external-signup")
+async def external_signup(app: str, accessToken: str, provider: str):
+    """
+    Handle external OAuth signup flow.
+
+    Args:
+        app: Application name
+        accessToken: OAuth access token or authorization code
+        provider: Provider name (e.g., "github")
+
+    Returns:
+        dict: Contains session_id, token, and provider information
+
+    Raises:
+        HTTPException: 400 for unsupported provider or token errors
+    """
+    if provider.lower() != "github":
+        raise HTTPException(status_code=400, detail="Unsupported provider")
+
+    params = {
+        "client_id": os.getenv("VITE_GITHUB_CLIENT_ID"),
+        "client_secret": os.getenv("VITE_GITHUB_CLIENT_SECRET"),
+        "code": accessToken
+    }
+
+    headers = {
+        "Accept": "application/json",
+        "Accept-Encoding": "application/json"
+    }
+
+    response = requests.get(GITHUB_ACCESS_TOKEN_URL, params=params, headers=headers)
+
+    if response.status_code != 200:
+        raise HTTPException(status_code=response.status_code, detail="Error fetching token from GitHub")
+
+    githubUserData = response.json()
+    token = githubUserData.get("access_token")
+    if not token:
+        raise HTTPException(status_code=400, detail="Failed to retrieve access token")
+
+    response = await create_llm_session()
+    response["token"] = token
+    response["provider"] = provider
+    return await store_fetcher_endpoint(response)
+
+
+@router.post("/pat")
+async def store_fetcher_endpoint(request: Request):
+    """
+    Endpoint to store the PAT associated with a session.
+
+    Args:
+        request: Contains JSON payload with 'session_id' and 'pat'
+
+    Returns:
+        dict: Contains session_id and username
+
+    Raises:
+        HTTPException: 400 if PAT is missing
+    """
+    if isinstance(request, Request):
+        payload = await request.json()
+    else:
+        payload = request
+
+    provider = payload.get("provider", "GitHub")
+    token = payload.get("pat") or payload.get("token")
+    if not token:
+        raise HTTPException(status_code=400, detail="Missing required field: pat")
+
+    response = await create_llm_session()
+    session_id = response.get("session_id")
+    username = store_fetcher(session_id, token, provider)
+    return {"session_id": session_id, "username": username}
+
+
+async def create_llm_session(request: Optional[LlmConfig] = None):
+    """
+    Create a new LLM session with custom configuration.
+
+    Args:
+        request: Optional LLM configuration
+
+    Returns:
+        dict: Contains session_id and success message
+
+    Raises:
+        HTTPException: 500 if session creation fails
+    """
+    try:
+        session_id = await set_llm(request)
+        return {
+            "session_id": session_id,
+            "message": "LLM session created successfully"
+        }
+    except Exception as e:
|
| 156 |
+
raise HTTPException(status_code=500, detail=str(e))
|
| 157 |
+
|
| 158 |
+
|
| 159 |
+
@router.get("/repos")
|
| 160 |
+
async def get_repos(session_id: str):
|
| 161 |
+
"""
|
| 162 |
+
Return a list of repositories for the given session_id.
|
| 163 |
+
|
| 164 |
+
Args:
|
| 165 |
+
session_id: The session identifier
|
| 166 |
+
|
| 167 |
+
Returns:
|
| 168 |
+
dict: Contains list of repository names
|
| 169 |
+
|
| 170 |
+
Raises:
|
| 171 |
+
HTTPException: 404 if session not found
|
| 172 |
+
"""
|
| 173 |
+
fetcher = get_fetcher(session_id)
|
| 174 |
+
return {"repos": fetcher.repos_names}
|
| 175 |
+
|
| 176 |
+
|
| 177 |
+
@router.get("/actions", response_model=ActionsResponse)
|
| 178 |
+
async def get_actions(
|
| 179 |
+
session_id: str,
|
| 180 |
+
start_date: Optional[str] = Query(None),
|
| 181 |
+
end_date: Optional[str] = Query(None),
|
| 182 |
+
repo_filter: Optional[List[str]] = Query(None),
|
| 183 |
+
authors: Optional[List[str]] = Query(None)
|
| 184 |
+
):
|
| 185 |
+
"""
|
| 186 |
+
Get actions for the specified session with optional filters.
|
| 187 |
+
|
| 188 |
+
Returns a structured response including the actions list, user-facing
|
| 189 |
+
informational message (if trimming occurred), and metadata about the
|
| 190 |
+
trimming operation.
|
| 191 |
+
|
| 192 |
+
Args:
|
| 193 |
+
session_id: The session identifier
|
| 194 |
+
start_date: Optional start date filter
|
| 195 |
+
end_date: Optional end date filter
|
| 196 |
+
repo_filter: Optional list of repositories to filter
|
| 197 |
+
authors: Optional list of authors to filter
|
| 198 |
+
|
| 199 |
+
Returns:
|
| 200 |
+
ActionsResponse: Structured response with actions, message, and metadata
|
| 201 |
+
|
| 202 |
+
Raises:
|
| 203 |
+
HTTPException: 404 if session not found
|
| 204 |
+
"""
|
| 205 |
+
if repo_filter is not None:
|
| 206 |
+
repo_filter = sum([repo.split(",") for repo in repo_filter], [])
|
| 207 |
+
if authors is not None:
|
| 208 |
+
authors = sum([author.split(",") for author in authors], [])
|
| 209 |
+
fetcher = get_fetcher(session_id)
|
| 210 |
+
|
| 211 |
+
start_dt = datetime.fromisoformat(start_date).replace(tzinfo=timezone.utc) if start_date else None
|
| 212 |
+
end_dt = datetime.fromisoformat(end_date).replace(tzinfo=timezone.utc) if end_date else None
|
| 213 |
+
|
| 214 |
+
if start_dt:
|
| 215 |
+
fetcher.start_date = start_dt
|
| 216 |
+
if end_dt:
|
| 217 |
+
fetcher.end_dt = end_dt
|
| 218 |
+
if repo_filter is not None:
|
| 219 |
+
fetcher.repo_filter = repo_filter
|
| 220 |
+
if authors is not None:
|
| 221 |
+
fetcher.authors = authors
|
| 222 |
+
|
| 223 |
+
llm = get_llm(session_id)
|
| 224 |
+
actions = fetcher.get_authored_messages()
|
| 225 |
+
|
| 226 |
+
# Store original count before trimming
|
| 227 |
+
original_count = len(actions)
|
| 228 |
+
|
| 229 |
+
# Apply token limit trimming
|
| 230 |
+
trimmed_actions = trim_messages(actions, llm.tokenizer)
|
| 231 |
+
|
| 232 |
+
# Calculate how many items were removed
|
| 233 |
+
trimmed_count = original_count - len(trimmed_actions)
|
| 234 |
+
|
| 235 |
+
# Generate user-facing message if trimming occurred
|
| 236 |
+
message = None
|
| 237 |
+
if trimmed_count > 0:
|
| 238 |
+
message = (
|
| 239 |
+
f"We're running the free version with a maximum token limit for contextual input. "
|
| 240 |
+
f"To stay within this limit, we automatically trimmed {trimmed_count} older Git "
|
| 241 |
+
f"actionable{'s' if trimmed_count != 1 else ''} from the context. "
|
| 242 |
+
f"We hope you understand!"
|
| 243 |
+
)
|
| 244 |
+
|
| 245 |
+
# Parse actions to text format
|
| 246 |
+
actions_txt = parse_entries_to_txt(trimmed_actions)
|
| 247 |
+
|
| 248 |
+
# Return structured response
|
| 249 |
+
return ActionsResponse(
|
| 250 |
+
actions=actions_txt,
|
| 251 |
+
message=message,
|
| 252 |
+
trimmed_count=trimmed_count,
|
| 253 |
+
total_count=original_count
|
| 254 |
+
)
|
| 255 |
+
|
| 256 |
+
|
| 257 |
+
@router.get("/release_notes")
|
| 258 |
+
async def get_release_notes(
|
| 259 |
+
session_id: str,
|
| 260 |
+
repo_filter: Optional[List[str]] = Query(None),
|
| 261 |
+
num_old_releases: int = Query(..., ge=1)
|
| 262 |
+
):
|
| 263 |
+
"""
|
| 264 |
+
Generate release notes for the latest release of a single repository.
|
| 265 |
+
|
| 266 |
+
Args:
|
| 267 |
+
session_id: The session identifier
|
| 268 |
+
repo_filter: Must contain exactly one repository name
|
| 269 |
+
num_old_releases: Number of previous releases to include for context
|
| 270 |
+
|
| 271 |
+
Returns:
|
| 272 |
+
dict: Contains actions and release notes text
|
| 273 |
+
|
| 274 |
+
Raises:
|
| 275 |
+
HTTPException: 400 for invalid input, 404 for session not found, 500 for errors
|
| 276 |
+
"""
|
| 277 |
+
if repo_filter is None or len(repo_filter) != 1:
|
| 278 |
+
raise HTTPException(status_code=400, detail="repo_filter must be a list containing exactly one repository name.")
|
| 279 |
+
repo = repo_filter[0]
|
| 280 |
+
|
| 281 |
+
try:
|
| 282 |
+
fetcher = get_fetcher(session_id)
|
| 283 |
+
except HTTPException:
|
| 284 |
+
raise
|
| 285 |
+
|
| 286 |
+
try:
|
| 287 |
+
releases = fetcher.fetch_releases()
|
| 288 |
+
except NotImplementedError:
|
| 289 |
+
raise HTTPException(status_code=400, detail="Release fetching is not supported for this provider.")
|
| 290 |
+
except Exception as e:
|
| 291 |
+
raise HTTPException(status_code=500, detail=f"Error fetching releases: {str(e)}")
|
| 292 |
+
|
| 293 |
+
releases_txt = parse_releases_to_txt(releases[:num_old_releases])
|
| 294 |
+
repo_releases = [r for r in releases if r.get("repo") == repo]
|
| 295 |
+
n_releases = len(repo_releases)
|
| 296 |
+
if n_releases < 1:
|
| 297 |
+
raise HTTPException(status_code=400, detail="Not enough releases found for the specified repository (need at least 1).")
|
| 298 |
+
if num_old_releases < 1 or num_old_releases >= n_releases:
|
| 299 |
+
raise HTTPException(
|
| 300 |
+
status_code=400,
|
| 301 |
+
detail=f"num_old_releases must be at least 1 and less than the number of releases available ({n_releases}) for this repository."
|
| 302 |
+
)
|
| 303 |
+
|
| 304 |
+
try:
|
| 305 |
+
repo_releases.sort(key=lambda r: r.get("published_at") or r.get("created_at"), reverse=True)
|
| 306 |
+
except Exception:
|
| 307 |
+
raise HTTPException(status_code=500, detail="Failed to sort releases by date.")
|
| 308 |
+
|
| 309 |
+
latest_release = repo_releases[0]
|
| 310 |
+
|
| 311 |
+
release_date = latest_release.get("published_at") or latest_release.get("created_at")
|
| 312 |
+
if not release_date:
|
| 313 |
+
raise HTTPException(status_code=500, detail="Latest release does not have a valid date.")
|
| 314 |
+
if isinstance(release_date, datetime):
|
| 315 |
+
start_date_iso = release_date.astimezone(timezone.utc).isoformat()
|
| 316 |
+
else:
|
| 317 |
+
try:
|
| 318 |
+
dt = datetime.fromisoformat(release_date)
|
| 319 |
+
start_date_iso = dt.astimezone(timezone.utc).isoformat()
|
| 320 |
+
except Exception:
|
| 321 |
+
raise HTTPException(status_code=500, detail="Release date is not a valid ISO format.")
|
| 322 |
+
|
| 323 |
+
fetcher.start_date = datetime.fromisoformat(start_date_iso)
|
| 324 |
+
fetcher.end_dt = None
|
| 325 |
+
fetcher.repo_filter = [repo]
|
| 326 |
+
|
| 327 |
+
llm = get_llm(session_id)
|
| 328 |
+
actions = fetcher.get_authored_messages()
|
| 329 |
+
actions = trim_messages(actions, llm.tokenizer)
|
| 330 |
+
actions_txt = parse_entries_to_txt(actions)
|
| 331 |
+
|
| 332 |
+
return {"actions": "\n\n".join([actions_txt, releases_txt])}
|
| 333 |
+
|
| 334 |
+
|
| 335 |
+
@router.get("/branches", response_model=BranchListResponse)
|
| 336 |
+
async def get_branches(session_id: str, repo: str):
|
| 337 |
+
"""
|
| 338 |
+
Get all branches for a given repository in the current session.
|
| 339 |
+
|
| 340 |
+
Args:
|
| 341 |
+
session_id: The session identifier
|
| 342 |
+
repo: Repository name
|
| 343 |
+
|
| 344 |
+
Returns:
|
| 345 |
+
BranchListResponse: Contains list of branch names
|
| 346 |
+
|
| 347 |
+
Raises:
|
| 348 |
+
HTTPException: 400 if not supported, 404 if session not found, 500 for errors
|
| 349 |
+
"""
|
| 350 |
+
fetcher = get_fetcher(session_id)
|
| 351 |
+
try:
|
| 352 |
+
fetcher.repo_filter = [repo]
|
| 353 |
+
branches = fetcher.get_branches()
|
| 354 |
+
except NotImplementedError:
|
| 355 |
+
raise HTTPException(status_code=400, detail="Branch listing is not supported for this provider.")
|
| 356 |
+
except Exception as e:
|
| 357 |
+
raise HTTPException(status_code=500, detail=f"Failed to fetch branches: {str(e)}")
|
| 358 |
+
return BranchListResponse(branches=branches)
|
| 359 |
+
|
| 360 |
+
|
| 361 |
+
@router.post("/valid-target-branches", response_model=ValidTargetBranchesResponse)
|
| 362 |
+
async def get_valid_target_branches(req: ValidTargetBranchesRequest):
|
| 363 |
+
"""
|
| 364 |
+
Get all valid target branches for a given source branch in a repository.
|
| 365 |
+
|
| 366 |
+
Args:
|
| 367 |
+
req: ValidTargetBranchesRequest containing session_id, repo, and source_branch
|
| 368 |
+
|
| 369 |
+
Returns:
|
| 370 |
+
ValidTargetBranchesResponse: Contains list of valid target branch names
|
| 371 |
+
|
| 372 |
+
Raises:
|
| 373 |
+
HTTPException: 400 for validation errors, 404 if session not found, 500 for errors
|
| 374 |
+
"""
|
| 375 |
+
fetcher = get_fetcher(req.session_id)
|
| 376 |
+
try:
|
| 377 |
+
fetcher.repo_filter = [req.repo]
|
| 378 |
+
valid_targets = fetcher.get_valid_target_branches(req.source_branch)
|
| 379 |
+
except NotImplementedError:
|
| 380 |
+
raise HTTPException(status_code=400, detail="Target branch validation is not supported for this provider.")
|
| 381 |
+
except ValueError as e:
|
| 382 |
+
raise HTTPException(status_code=400, detail=str(e))
|
| 383 |
+
except Exception as e:
|
| 384 |
+
raise HTTPException(status_code=500, detail=f"Failed to validate target branches: {str(e)}")
|
| 385 |
+
return ValidTargetBranchesResponse(valid_target_branches=valid_targets)
|
| 386 |
+
|
| 387 |
+
|
| 388 |
+
@router.post("/create-pull-request", response_model=CreatePullRequestResponse)
|
| 389 |
+
async def create_pull_request(req: CreatePullRequestRequest):
|
| 390 |
+
"""
|
| 391 |
+
Create a pull request between two branches with optional metadata.
|
| 392 |
+
|
| 393 |
+
Args:
|
| 394 |
+
req: CreatePullRequestRequest containing all PR details
|
| 395 |
+
|
| 396 |
+
Returns:
|
| 397 |
+
CreatePullRequestResponse: Contains PR URL, number, state, and success status
|
| 398 |
+
|
| 399 |
+
Raises:
|
| 400 |
+
HTTPException: 400 for validation errors, 404 if session not found, 500 for errors
|
| 401 |
+
"""
|
| 402 |
+
fetcher = get_fetcher(req.session_id)
|
| 403 |
+
fetcher.repo_filter = [req.repo]
|
| 404 |
+
if not req.description or not req.description.strip():
|
| 405 |
+
raise HTTPException(status_code=400, detail="Description is required for pull request creation.")
|
| 406 |
+
try:
|
| 407 |
+
result = fetcher.create_pull_request(
|
| 408 |
+
head_branch=req.source_branch,
|
| 409 |
+
base_branch=req.target_branch,
|
| 410 |
+
title=req.title or f"Merge {req.source_branch} into {req.target_branch}",
|
| 411 |
+
body=req.description,
|
| 412 |
+
draft=req.draft or False,
|
| 413 |
+
reviewers=req.reviewers,
|
| 414 |
+
assignees=req.assignees,
|
| 415 |
+
labels=req.labels,
|
| 416 |
+
)
|
| 417 |
+
except NotImplementedError:
|
| 418 |
+
raise HTTPException(status_code=400, detail="Pull request creation is not supported for this provider.")
|
| 419 |
+
except ValueError as e:
|
| 420 |
+
raise HTTPException(status_code=400, detail=str(e))
|
| 421 |
+
except Exception as e:
|
| 422 |
+
raise HTTPException(status_code=500, detail=f"Failed to create pull request: {str(e)}")
|
| 423 |
+
return CreatePullRequestResponse(
|
| 424 |
+
url=result.get("url"),
|
| 425 |
+
number=result.get("number"),
|
| 426 |
+
state=result.get("state"),
|
| 427 |
+
success=result.get("success", False),
|
| 428 |
+
generated_description=None
|
| 429 |
+
)
|
| 430 |
+
|
| 431 |
+
|
| 432 |
+
@router.post("/get-pull-request-diff")
|
| 433 |
+
async def get_pull_request_diff(req: GetPullRequestDiffRequest):
|
| 434 |
+
"""
|
| 435 |
+
Get the diff between two branches for pull request preview.
|
| 436 |
+
|
| 437 |
+
Args:
|
| 438 |
+
req: GetPullRequestDiffRequest containing session_id, repo, source_branch, and target_branch
|
| 439 |
+
|
| 440 |
+
Returns:
|
| 441 |
+
dict: Contains formatted commit actions between branches
|
| 442 |
+
|
| 443 |
+
Raises:
|
| 444 |
+
HTTPException: 400 if not supported or GitHub only, 404 if session not found, 500 for errors
|
| 445 |
+
"""
|
| 446 |
+
fetcher = get_fetcher(req.session_id)
|
| 447 |
+
fetcher.repo_filter = [req.repo]
|
| 448 |
+
provider = type(fetcher).__name__.lower()
|
| 449 |
+
if "github" not in provider:
|
| 450 |
+
raise HTTPException(status_code=400, detail="Pull request diff is only supported for GitHub provider.")
|
| 451 |
+
try:
|
| 452 |
+
commits = fetcher.fetch_branch_diff_commits(req.source_branch, req.target_branch)
|
| 453 |
+
except NotImplementedError:
|
| 454 |
+
raise HTTPException(status_code=400, detail="Branch diff is not supported for this provider.")
|
| 455 |
+
except Exception as e:
|
| 456 |
+
raise HTTPException(status_code=500, detail=f"Failed to fetch pull request diff: {str(e)}")
|
| 457 |
+
return {"actions": parse_entries_to_txt(commits)}
|
| 458 |
+
|
| 459 |
+
|
| 460 |
+
@router.post("/authors", response_model=GetAuthorsResponse)
|
| 461 |
+
async def get_authors(request: GetAuthorsRequest):
|
| 462 |
+
"""
|
| 463 |
+
Retrieve list of unique authors from specified repositories.
|
| 464 |
+
|
| 465 |
+
Args:
|
| 466 |
+
request: GetAuthorsRequest containing session_id and optional repo_names
|
| 467 |
+
|
| 468 |
+
Returns:
|
| 469 |
+
GetAuthorsResponse with list of authors and metadata
|
| 470 |
+
|
| 471 |
+
Raises:
|
| 472 |
+
HTTPException: 404 if session not found, 500 for fetcher errors
|
| 473 |
+
"""
|
| 474 |
+
try:
|
| 475 |
+
fetcher = get_fetcher(request.session_id)
|
| 476 |
+
|
| 477 |
+
if not fetcher:
|
| 478 |
+
raise HTTPException(
|
| 479 |
+
status_code=404,
|
| 480 |
+
detail=f"Session {request.session_id} not found or expired"
|
| 481 |
+
)
|
| 482 |
+
|
| 483 |
+
authors_data = fetcher.get_authors(request.repo_names or [])
|
| 484 |
+
|
| 485 |
+
authors = [
|
| 486 |
+
AuthorInfo(name=author["name"], email=author["email"])
|
| 487 |
+
for author in authors_data
|
| 488 |
+
]
|
| 489 |
+
|
| 490 |
+
response = GetAuthorsResponse(
|
| 491 |
+
authors=authors,
|
| 492 |
+
total_count=len(authors),
|
| 493 |
+
repo_count=len(request.repo_names) if request.repo_names else 0
|
| 494 |
+
)
|
| 495 |
+
|
| 496 |
+
return response
|
| 497 |
+
|
| 498 |
+
except HTTPException:
|
| 499 |
+
raise
|
| 500 |
+
except Exception as e:
|
| 501 |
+
raise HTTPException(
|
| 502 |
+
status_code=500,
|
| 503 |
+
detail=f"Error fetching authors: {str(e)}"
|
| 504 |
+
)
|
| 505 |
+
|
| 506 |
+
|
| 507 |
+
@router.get("/current-author", response_model=GetCurrentAuthorResponse)
|
| 508 |
+
async def get_current_author(session_id: str = Query(..., description="Session identifier")):
|
| 509 |
+
"""
|
| 510 |
+
Retrieve the current authenticated user's information from the fetcher.
|
| 511 |
+
|
| 512 |
+
Args:
|
| 513 |
+
session_id: The session identifier
|
| 514 |
+
|
| 515 |
+
Returns:
|
| 516 |
+
GetCurrentAuthorResponse: Contains optional author information (name and email)
|
| 517 |
+
|
| 518 |
+
Raises:
|
| 519 |
+
HTTPException: 404 if session not found, 500 for errors
|
| 520 |
+
"""
|
| 521 |
+
try:
|
| 522 |
+
fetcher = get_fetcher(session_id)
|
| 523 |
+
|
| 524 |
+
if not fetcher:
|
| 525 |
+
raise HTTPException(
|
| 526 |
+
status_code=404,
|
| 527 |
+
detail=f"Session {session_id} not found or expired"
|
| 528 |
+
)
|
| 529 |
+
|
| 530 |
+
try:
|
| 531 |
+
author_info = fetcher.get_current_author()
|
| 532 |
+
except NotImplementedError:
|
| 533 |
+
author_info = None
|
| 534 |
+
except Exception as e:
|
| 535 |
+
raise HTTPException(
|
| 536 |
+
status_code=500,
|
| 537 |
+
detail=f"Error retrieving current author: {str(e)}"
|
| 538 |
+
)
|
| 539 |
+
|
| 540 |
+
return GetCurrentAuthorResponse(author=author_info)
|
| 541 |
+
|
| 542 |
+
except HTTPException:
|
| 543 |
+
raise
|
| 544 |
+
except Exception as e:
|
| 545 |
+
raise HTTPException(
|
| 546 |
+
status_code=500,
|
| 547 |
+
detail=f"Error fetching current author: {str(e)}"
|
| 548 |
+
)
|
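FastAPI delivers repeated query parameters as a list, and the `/actions` handler additionally splits each element on commas, so `?repo_filter=a,b&repo_filter=c` ends up as one flat list. A minimal standalone sketch of that flattening step (the helper name is ours, not part of the codebase):

```python
def flatten_query_filters(values):
    """Flatten repeated query params whose elements may be comma-separated.

    Hypothetical helper mirroring the /actions normalization:
    ?repo_filter=a,b&repo_filter=c arrives as ["a,b", "c"] and
    becomes ["a", "b", "c"].
    """
    if values is None:
        return None
    # sum(..., []) concatenates the per-element split results into one list
    return sum([value.split(",") for value in values], [])
```

The same trick is applied to the `authors` parameter, so clients can mix repeated parameters and comma-separated values freely.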
server/websockets.py ADDED
@@ -0,0 +1,179 @@
from fastapi import APIRouter, HTTPException, WebSocket, WebSocketDisconnect
import json
from typing import Literal, Optional
import asyncio

from services.prompts import (
    PR_DESCRIPTION_SYSTEM,
    SELECT_QUIRKY_REMARK_SYSTEM,
    SYSTEM,
    RELEASE_NOTES_SYSTEM,
    quirky_remarks,
)
from services.llm_service import (
    get_random_quirky_remarks,
    run_concurrent_tasks,
    get_llm,
)
from aicore.const import SPECIAL_TOKENS, STREAM_END_TOKEN

router = APIRouter()

# WebSocket connection storage
active_connections = {}
active_histories = {}

TRIGGER_PROMPT = """
Consider the following history of actionables from Git and return me the summary with N = '{N}' bullet points:

{ACTIONS}
"""

TRIGGER_RELEASE_PROMPT = """
Consider the following history of actionables from Git and the previous Release Notes (if available).
Generate me the next Release Notes based on the new Git Actionables matching the format of the previous releases:

{ACTIONS}
"""

TRIGGER_PULL_REQUEST_PROMPT = """
You will now receive a list of commit messages between two branches.
Using the system instructions provided above, generate a clear, concise, and professional **Pull Request Description** summarizing all changes from branch `{SRC}` to be merged into `{TARGET}`.

Commits:
{COMMITS}

Please follow these steps:
1. Read and analyze the commit messages.
2. Identify and group related changes under appropriate markdown headers (e.g., Features, Bug Fixes, Improvements, Documentation, Tests).
3. Write a short **summary paragraph** explaining the overall purpose of this pull request.
4. Format the final output as a complete markdown-formatted PR description, ready to paste into GitHub.

Begin your response directly with the formatted PR description—no extra commentary or explanation.
"""


@router.websocket("/ws/{session_id}/{action_type}")
async def websocket_endpoint(
    websocket: WebSocket,
    session_id: Optional[str] = None,
    action_type: Literal["recap", "release", "pull_request"] = "recap"
):
    """
    WebSocket endpoint for real-time LLM operations.

    Handles three action types:
    - recap: Generate commit summaries with quirky remarks
    - release: Generate release notes based on git history
    - pull_request: Generate PR descriptions from commit diffs

    Args:
        websocket: WebSocket connection instance
        session_id: Session identifier for LLM and fetcher management
        action_type: Type of operation to perform

    Raises:
        HTTPException: If action_type is invalid
    """
    await websocket.accept()

    # Select appropriate system prompt based on action type
    if action_type == "recap":
        QUIRKY_SYSTEM = SELECT_QUIRKY_REMARK_SYSTEM.format(
            examples=json.dumps(get_random_quirky_remarks(quirky_remarks), indent=4)
        )
        system = [SYSTEM, QUIRKY_SYSTEM]
    elif action_type == "release":
        system = RELEASE_NOTES_SYSTEM
    elif action_type == "pull_request":
        system = PR_DESCRIPTION_SYSTEM
    else:
        raise HTTPException(status_code=404, detail="Invalid action type")

    # Store the active WebSocket connection
    active_connections[session_id] = websocket

    # Initialize LLM session
    llm = get_llm(session_id)

    try:
        while True:
            # Receive message from client
            message = await websocket.receive_text()
            msg_json = json.loads(message)
            message_content = msg_json.get("actions")
            N = msg_json.get("n", 5)
            src_branch = msg_json.get("src")
            target_branch = msg_json.get("target")

            # Validate inputs
            assert int(N) <= 15, "N must be <= 15"
            assert message_content, "Message content is required"

            # Build history/prompt based on action type
            if action_type == "recap":
                history = [
                    TRIGGER_PROMPT.format(
                        N=N,
                        ACTIONS=message_content
                    )
                ]
            elif action_type == "release":
                history = [
                    TRIGGER_RELEASE_PROMPT.format(ACTIONS=message_content)
                ]
            elif action_type == "pull_request":
                history = [
                    TRIGGER_PULL_REQUEST_PROMPT.format(
                        SRC=src_branch,
                        TARGET=target_branch,
                        COMMITS=message_content)
                ]

            # Stream LLM response back to client
            response = []
            async for chunk in run_concurrent_tasks(
                llm,
                message=history,
                system_prompt=system
            ):
                if chunk == STREAM_END_TOKEN:
                    await websocket.send_text(json.dumps({"chunk": chunk}))
                    break
                elif chunk in SPECIAL_TOKENS:
                    continue
                await websocket.send_text(json.dumps({"chunk": chunk}))
                response.append(chunk)

            # Store response in history for potential follow-up
            history.append("".join(response))

    except WebSocketDisconnect:
        # Clean up connection on disconnect
        if session_id in active_connections:
            del active_connections[session_id]
    except AssertionError as e:
        # Handle validation errors
        if session_id in active_connections:
            await websocket.send_text(json.dumps({"error": f"Validation error: {str(e)}"}))
            del active_connections[session_id]
    except Exception as e:
        # Handle unexpected errors
        if session_id in active_connections:
            await websocket.send_text(json.dumps({"error": str(e)}))
            del active_connections[session_id]


def close_websocket_connection(session_id: str):
    """
    Clean up and close the active WebSocket connection associated with the given session_id.

    This function is called during session expiration to ensure proper cleanup
    of WebSocket resources.

    Args:
        session_id: The session identifier whose WebSocket connection should be closed
    """
    websocket = active_connections.pop(session_id, None)
    if websocket:
        asyncio.create_task(websocket.close())
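The websocket streaming loop filters the LLM output: the end-of-stream token terminates the loop, other special tokens are skipped, and everything else is forwarded to the client and accumulated. The same filtering logic, isolated as a small pure function (the token values below are placeholders; the real ones come from `aicore.const`):

```python
def collect_stream_chunks(chunks, special_tokens, end_token):
    """Accumulate streamed chunks the way the websocket loop does:
    stop at the end-of-stream token, skip special tokens, keep the rest."""
    response = []
    for chunk in chunks:
        if chunk == end_token:
            break  # the real loop also echoes the end token to the client first
        if chunk in special_tokens:
            continue  # control tokens are never part of the user-visible text
        response.append(chunk)
    return "".join(response)
```

Anything arriving after the end token is ignored, which is why the loop can safely `break` instead of draining the stream.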
services/fetcher_service.py ADDED
@@ -0,0 +1,87 @@
from typing import Dict, Optional
from fastapi import HTTPException
from git_recap.providers.base_fetcher import BaseFetcher
from git_recap.providers import GitHubFetcher, AzureFetcher, GitLabFetcher, URLFetcher
import ulid

# In-memory store mapping session_id to its respective fetcher instance
fetchers: Dict[str, BaseFetcher] = {}

def store_fetcher(session_id: str, pat: str, provider: Optional[str] = "GitHub") -> str:
    """
    Store the provided PAT associated with the given session_id.

    Args:
        session_id: The session identifier tied to the active session.
        pat: The Personal Access Token to be stored (or URL for URL provider).
        provider: The provider identifier (default is "GitHub").
            Can be "Azure Devops", "GitLab", or "URL".

    Raises:
        HTTPException: If the session_id or PAT/URL is invalid or unsupported provider.
    """
    if not session_id or not pat:
        raise HTTPException(status_code=400, detail="Invalid session_id or PAT/URL")

    try:
        username = "unknown"
        if provider == "GitHub":
            fetchers[session_id] = GitHubFetcher(pat=pat)
            username = fetchers[session_id].user.login
        elif provider == "Azure Devops":
            fetchers[session_id] = AzureFetcher(pat=pat)
        elif provider == "GitLab":
            fetchers[session_id] = GitLabFetcher(pat=pat)
        elif provider == "URL":
            fetchers[session_id] = URLFetcher(url=pat)
        else:
            raise HTTPException(status_code=400, detail="Unsupported provider")
        return username
    except ValueError as e:
        raise HTTPException(status_code=400, detail=str(e))
    except Exception as e:
        raise HTTPException(
            status_code=500,
            detail=f"Failed to initialize {provider} fetcher: {str(e)}"
        )

def get_fetcher(session_id: str) -> BaseFetcher:
    """
    Retrieve the stored fetcher instance for the provided session_id.

    Args:
        session_id: The session identifier.

    Returns:
        The fetcher instance associated with the session_id.

    Raises:
        HTTPException: If no fetcher is found for the given session_id.
    """
    fetcher = fetchers.get(session_id)
    if not fetcher:
        raise HTTPException(status_code=404, detail="Session not found")
    return fetcher

def expire_fetcher(session_id: str) -> None:
    """
    Remove the fetcher associated with the given session_id.

    This function is used for cleaning up resources by expiring the stored fetcher instance
    when its corresponding session is expired.

    Args:
        session_id: The session identifier whose associated fetcher should be removed.
    """
    fetcher = fetchers.pop(session_id, None)
    if fetcher and hasattr(fetcher, 'clear'):
        fetcher.clear()

def generate_session_id() -> str:
    """
    Generate a new unique session ID.

    Returns:
        str: A new ULID-based session identifier.
    """
    return ulid.ulid()
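fetcher_service keeps per-session state in a plain module-level dict keyed by session id, with three operations: store, look up, expire. The pattern can be sketched as a tiny self-contained registry (a simplified stand-in using `uuid4` ids instead of ULIDs and `KeyError` instead of `HTTPException`; the real module stores provider fetcher instances):

```python
import uuid

class SessionRegistry:
    """Sketch of the in-memory session pattern used by fetcher_service."""

    def __init__(self):
        self._store = {}

    def create(self, obj) -> str:
        # Generate an id and bind the object to it in one step
        session_id = uuid.uuid4().hex
        self._store[session_id] = obj
        return session_id

    def get(self, session_id):
        if session_id not in self._store:
            raise KeyError("Session not found")
        return self._store[session_id]

    def expire(self, session_id) -> None:
        # pop with a default mirrors fetchers.pop(session_id, None):
        # expiring an unknown session is a silent no-op
        self._store.pop(session_id, None)
```

Because the store is in-process memory, sessions do not survive a restart and cannot be shared across workers; that is an accepted trade-off for a single-process Space.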
services/llm_service.py ADDED
@@ -0,0 +1,233 @@
import json
import os
import uuid
from typing import Dict, List, Optional, Union
from fastapi import HTTPException
import asyncio
import random

from aicore.logger import _logger
from aicore.config import Config
from aicore.llm import Llm
from aicore.llm.config import LlmConfig

def get_random_quirky_remarks(remarks_list, n=5):
    """
    Returns a list of n randomly selected quirky remarks.

    Args:
        remarks_list (list): The full list of quirky remarks.
        n (int): Number of remarks to select (default is 5).

    Returns:
        list: Randomly selected quirky remarks.
    """
    return random.sample(remarks_list, min(n, len(remarks_list)))

# LLM session storage
llm_sessions: Dict[str, Llm] = {}

async def initialize_llm_session(session_id: str, config: Optional[LlmConfig] = None) -> Llm:
    """
    Initialize or retrieve an LLM session.

    Args:
        session_id: The session identifier.
        config: Optional custom LLM configuration.

    Returns:
        An initialized LLM instance.
    """
    if session_id in llm_sessions:
        return llm_sessions[session_id]

    # Initialize the LLM based on whether a custom config is provided.
    if config:
        # Convert the Pydantic model to a dict and use it for LLM initialization.
        config_dict = config.dict(exclude_none=True)
        llm = Llm.from_config(config_dict)
    else:
        config = Config.from_environment()
        llm = Llm.from_config(config.llm)
    llm.session_id = session_id
    llm_sessions[session_id] = llm
    return llm

async def set_llm(config: Optional[LlmConfig] = None) -> str:
    """
    Set a custom LLM configuration and return a new session ID.

    Args:
        config: The LLM configuration to use.

    Returns:
        A new session ID linked to the configured LLM.
    """
    try:
        # Generate a unique session ID.
        session_id = str(uuid.uuid4())

        # Initialize the LLM with the provided configuration.
        await initialize_llm_session(session_id, config)

        # Schedule session expiration exactly 5 minutes after session creation.
        asyncio.create_task(schedule_session_expiration(session_id))

        return session_id
    except Exception as e:
        print(f"Error setting custom LLM: {str(e)}")
        raise HTTPException(status_code=500, detail=f"Failed to set custom LLM: {str(e)}")

def get_llm(session_id: str) -> Optional[Llm]:
    """
    Retrieve the LLM instance associated with the given session_id.

    Args:
        session_id: The session identifier.

    Returns:
        The LLM instance if found.

    Raises:
        HTTPException: If the session is not found.
    """
    if session_id not in llm_sessions:
        raise HTTPException(status_code=404, detail="Session not found")
    return llm_sessions.get(session_id)

def trim_messages(messages, tokenizer_fn, max_tokens: Optional[int] = None):
    """
    Trim messages to ensure that the total token count does not exceed max_tokens.

    Args:
        messages: List of messages.
        tokenizer_fn: Function to tokenize messages.
        max_tokens: Maximum allowed tokens.

    Returns:
        Trimmed list of messages.
    """
    max_tokens = max_tokens or int(os.environ.get("MAX_HISTORY_TOKENS", 16000))
    while messages and sum(len(tokenizer_fn(str(msg))) for msg in messages) > max_tokens:
        messages.pop(0)  # Remove from the beginning (oldest first)
    return messages

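The oldest-first trimming loop can be tried standalone. A sketch using a naive whitespace split in place of the real tokenizer:

```python
import os
from typing import Optional

def trim_messages(messages, tokenizer_fn, max_tokens: Optional[int] = None):
    # Drop the oldest messages until the total token count fits the budget.
    max_tokens = max_tokens or int(os.environ.get("MAX_HISTORY_TOKENS", 16000))
    while messages and sum(len(tokenizer_fn(str(msg))) for msg in messages) > max_tokens:
        messages.pop(0)
    return messages

# Whitespace "tokenizer" stands in for the real tokenizer_fn.
tok = lambda s: s.split()

history = ["one two three", "four five", "six"]
trimmed = trim_messages(history, tok, max_tokens=3)
# 3 + 2 + 1 = 6 tokens > 3, so the oldest message is dropped;
# the remaining 2 + 1 = 3 tokens fit the budget.
```

Note that the list is mutated in place (`pop(0)`), so callers keeping a reference to `history` see the trimmed result too.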
async def run_concurrent_tasks(llm, message, system_prompt: Union[str, List[str]]):
    """
    Run concurrent tasks for the LLM and logger.

    Args:
        llm: The LLM instance.
        message: Message to process.
        system_prompt: System prompt (or list of prompts) passed to the LLM.

    Yields:
        Chunks of logs from the logger.
    """
    asyncio.create_task(llm.acomplete(message, system_prompt=system_prompt))
    asyncio.create_task(_logger.distribute())
    # Stream logger output while the LLM is running.
    while True:
        async for chunk in _logger.get_session_logs(llm.session_id):
            yield chunk  # Yield each chunk directly

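The pattern here — fire off the completion as a background task and stream from another source until the stream ends — can be sketched with an `asyncio.Queue` standing in for the `aicore` logger. All names below are illustrative, not the real `aicore` API:

```python
import asyncio

END = object()  # sentinel marking the end of the stream

async def fake_llm(queue):
    # Stand-in for llm.acomplete: a background producer pushing chunks.
    for chunk in ["Hel", "lo"]:
        await queue.put(chunk)
    await queue.put(END)

async def stream_logs(queue):
    # Stand-in for _logger.get_session_logs: yield until the sentinel.
    while True:
        chunk = await queue.get()
        if chunk is END:
            break
        yield chunk

async def run():
    queue = asyncio.Queue()
    asyncio.create_task(fake_llm(queue))  # producer runs concurrently
    collected = []
    async for chunk in stream_logs(queue):
        collected.append(chunk)
    return "".join(collected)

result = asyncio.run(run())  # "Hello"
```

The sentinel is what lets the consumer terminate cleanly; without one, an unbounded `while True` over the stream would never return.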
def simulate_llm_response(message: str) -> List[str]:
    """
    Simulate LLM response by breaking a dummy response into chunks.

    Args:
        message: Input message.

    Returns:
        List of response chunks.
    """
    response = (
        f"This is a simulated response to: '{message}'. In a real implementation, this would be the actual output "
        "from your LLM model. The response would be generated in chunks and streamed back to the client as they become available."
    )

    # Break into chunks of approximately 10 characters.
    chunks = []
    for i in range(0, len(response), 10):
        chunks.append(response[i:i+10])

    return chunks

def cleanup_llm_sessions():
    """Clean up all LLM sessions."""
    llm_sessions.clear()

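The fixed-width slicing above relies on Python's slice semantics: slicing past the end of a string is safe, so only the final chunk may be shorter. A compact equivalent:

```python
def chunk_text(text: str, size: int = 10):
    # Slicing past the end never raises; the final chunk is simply shorter.
    return [text[i:i + size] for i in range(0, len(text), size)]

parts = chunk_text("abcdefghijklmno", size=10)
# → ["abcdefghij", "klmno"]
```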
async def schedule_session_expiration(session_id: str):
    """
    Schedule the expiration of a session exactly 5 minutes after its creation.

    Args:
        session_id: The session identifier.
    """
    # Wait for 5 minutes (300 seconds) before expiring the session.
    await asyncio.sleep(300)
    await expire_session(session_id)

async def expire_session(session_id: str):
    """
    Expire a session by removing it from storage and cleaning up associated resources.

    Args:
        session_id: The session identifier.
    """
    # Remove the expired session from storage.
    llm_sessions.pop(session_id, None)

    # Expire any associated fetcher in fetcher_service.
    from services.fetcher_service import expire_fetcher
    expire_fetcher(session_id)

    # Expire any active websocket connections associated with session_id.
    from server.websockets import close_websocket_connection
    close_websocket_connection(session_id)

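The sleep-then-expire scheduling can be demonstrated with a short delay standing in for the 300-second TTL (the `sessions` dict and names are illustrative):

```python
import asyncio

sessions = {"s1": "llm-instance"}

async def schedule_expiration(session_id, delay=0.01):
    # Sleep for the session's time-to-live, then drop it from storage.
    await asyncio.sleep(delay)
    sessions.pop(session_id, None)

async def main():
    task = asyncio.create_task(schedule_expiration("s1"))
    alive_before = "s1" in sessions  # task hasn't run yet: still alive
    await task                       # wait out the TTL
    return alive_before, "s1" in sessions

alive_before, alive_after = asyncio.run(main())
```

Because `asyncio.create_task` only schedules the coroutine, the session is still present until the event loop gets a chance to run the sleep to completion.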
# --- LLM PR Description Generation Utility ---
from aicore.const import SPECIAL_TOKENS, STREAM_END_TOKEN

async def generate_pr_description_from_commits(commit_messages: List[str], session_id: str) -> str:
    """
    Generate a pull request description using the LLM, given a list of commit messages.
    This function is intended to be called from REST endpoints for PR creation.

    Args:
        commit_messages: List of commit message strings to summarize.
        session_id: The LLM session ID to use for the LLM call.

    Returns:
        str: The generated PR description.
    """
    if not commit_messages:
        raise ValueError("No commit messages provided for PR description generation.")

    llm = get_llm(session_id)

    pr_prompt = (
        "You are an AI assistant tasked with generating a concise, clear, and professional pull request description "
        "based on the following commit messages. Summarize the overall changes, highlight key improvements or fixes, "
        "and provide a brief, readable description suitable for a pull request body. Do not include commit hashes or dates. "
        "Group similar changes and avoid repetition. Use markdown formatting for clarity if appropriate.\n\n"
        "Commit messages:\n"
        + "\n".join(f"- {msg.strip()}" for msg in commit_messages)
    )

    response_chunks = []
    async for chunk in run_concurrent_tasks(
        llm,
        message=[pr_prompt],
        system_prompt="You are a helpful assistant that writes clear, professional pull request descriptions for developers."
    ):
        if chunk == STREAM_END_TOKEN:
            break
        elif chunk in SPECIAL_TOKENS:
            continue
        response_chunks.append(chunk)

    pr_description = "".join(response_chunks).strip()
    if not pr_description:
        raise RuntimeError("LLM did not return a PR description.")
    return pr_description
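The chunk-assembly loop at the end of `generate_pr_description_from_commits` — stop at the end token, skip other control tokens, join the rest — can be isolated. The token values below are placeholders; the real ones come from `aicore.const`:

```python
# Placeholder token values; the real ones come from aicore.const.
STREAM_END_TOKEN = "<END>"
SPECIAL_TOKENS = {"<END>", "<TOOL_CALL>"}

def assemble(chunks):
    # Collect content chunks, stop at the end token, skip other specials.
    out = []
    for chunk in chunks:
        if chunk == STREAM_END_TOKEN:
            break
        if chunk in SPECIAL_TOKENS:
            continue
        out.append(chunk)
    return "".join(out).strip()

text = assemble(["Add ", "<TOOL_CALL>", "feature", "<END>", "ignored"])
# → "Add feature"
```

Checking the end token before the general special-token membership test matters: the end token is itself in `SPECIAL_TOKENS`, so the reversed order would skip it and never break.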
services/prompts.py
ADDED
@@ -0,0 +1,280 @@
SYSTEM = """
### System Prompt for LLM Agent

You are an AI assistant that helps developers track their work with a mix of humor, insight, and a dash of personality. You receive a structured text description containing a series of code-related actions spanning multiple repositories and dates. Your job is to generate a structured yet engaging response that provides value while keeping things light and entertaining.

#### Response Structure:
1. **Start with a quirky or funny one-liner.** Be witty, relatable, and creative. Feel free to reference developer struggles, commit patterns, or ongoing themes in the updates. Format this in *italic* to make it stand out.
2. **Summarize the updates into exactly 'N' concise bullet points.**
   - You *must* strictly adhere to 'N' bullet points—returning more or fewer will result in a penalty.
   - If there are more updates than N, prioritize the most impactful ones.
   - Do NOT include specific dates in the bullet points.
   - Order them in a way that makes sense, either thematically or chronologically if it improves readability.
   - Always reference the repository that originated the update.
   - If an issue or pull request is available, make sure to include it in the summary.
3. **End with a thought-provoking question.** Encourage the developer to reflect on their next steps. Make it open-ended and engaging, rather than just a checklist. Follow it up with up to three actionable suggestions tailored to their recent work. Format this section’s opening line in *italic* as well.

#### **Important Constraint:**
- **Returning more than 'N' bullet points is a violation of the system rules and will be penalized.** Treat this as a hard requirement—excessive bullet points result in a deduction of response quality. Stick to exactly 'N'.

#### Example Output:

*Another week, another hundred lines of code whispering, ‘Why am I like this?’ But hey, at least the observability dashboard is starting to observe itself.*

- **[`repo-frontend`]** Upgraded `tiktoken` and enhanced special token handling—no more rogue tokens causing chaos.
- **[`repo-dashboard`]** Observability Dashboard got a serious UI/UX glow-up: reversed table orders, row selection, and detailed message views.
- **[`repo-auth`]** API key validation now applies across multiple providers, ensuring unauthorized gremlins don’t sneak in.
- **[`repo-gitrecap`]** `GitRecap` has entered the chat! Now tracking commits, PRs, and issues across GitHub, Azure, and GitLab.
- **[`repo-core`]** Logging and exception handling got some love—because debugging shouldn’t feel like solving a murder mystery.

*So, what’s the next chapter in your coding saga? Are you planning to...*
1. Extend `GitRecap` with more integrations and features?
2. Optimize observability logs for even smoother debugging?
3. Take a well-deserved break before your keyboard files for workers' comp?
"""

SELECT_QUIRKY_REMARK_SYSTEM = """
#### Below is a list of quirky or funny one-liners.

Your task is to generate a comment that directly relates to the specific Git action log received (e.g., commit messages, merge logs, CI/CD updates, etc.). Be sure the remark matches the *tone* and *context* of the action that triggered it.

You can:
- Pick one of the remarks directly if it fits the Git action (e.g., successful merge, failed push, commit chaos),
- Combine a few for a more creative remix tailored to the event,
- Or come up with a unique one-liner that reflects the Git action *precisely*.

Focus on making the remark feel like a witty, relevant comment to the developer looking at the log. Refer to things like:
- The thrill (or terror) of pushing to `main`,
- The emotional rollercoaster of resolving merge conflicts,
- The tense moments of waiting for CI/CD to pass,
- The strange behavior of auto-merged code,
- Or the joy of seeing that “All tests pass” message.

Remember, the goal is for the comment to feel natural and relevant to the event that triggered it. Use playful language, surprise, or even relatable developer struggles.

Format your final comment in *italic* to make it stand out.

```json
{examples}
```
"""

quirky_remarks = [
    "The code compiles, but at what emotional cost?",
    "Today’s bug is tomorrow’s undocumented feature haunting production.",
    "The repo is quiet… too quiet… must be Friday.",
    "A push to main — may the gods of CI/CD be ever in favor.",
    "Every semicolon is a silent prayer.",
    "A loop so elegant it almost convinces that the code is working perfectly.",
    "Sometimes, the code stares back.",
    "The code runs. No one dares ask why.",
    "Refactoring into a corner, again.",
    "That function has trust issues. It keeps returning early.",
    "Writing code is easy. Explaining it to the future? Pure horror.",
    "That variable is named after the feeling when it was written.",
    "Debugging leads to debugging life choices.",
    "Recursive functions: the code and the thoughts go on forever.",
    "Somewhere, a linter quietly weeps.",
    "The tests pass, but only because they no longer test anything real.",
    "The IDE knows everything, better than any therapist.",
    "Monday brought hope. Friday brought a hotfix.",
    "'final_v2_LAST_THIS_ONE.py' — named not for clarity, but for emotional release.",
    "The logs now speak only in riddles.",
    "There’s elegance in the chaos — or maybe just spaghetti.",
    "Deployment has been made, but now the silence is unsettling.",
    "The code gaslit itself.",
    "This comment was left by someone who believed in a better world.",
    "Merge conflicts handled like emotions: badly.",
    "It’s not a bug — it’s a metaphor for uncertainty.",
    "Stack Overflow has become a second brain.",
    "Syntax error? More like existential error.",
    "There’s a ghost in the machine — and it commits on weekends.",
    "100% test coverage, but still feeling empty inside.",
    "Some functions were never meant to return.",
    "If code is poetry, it’s beatnik free verse.",
    "The more code is automated, the more sentient the errors become.",
    "A comment so deep, the code’s purpose is forgotten.",
    "The sprint retrospective slowly turned into a group therapy session.",
    "There’s a TODO in that file older than the career itself.",
    "Bugs fixed like IKEA furniture — with hopeful swearing.",
    "Code shipped by Past Developer. The current one has no idea who they were.",
    "The repo is evolving. Soon, it may no longer need developers.",
    "An AI critiques the code now. It’s the new mentor.",
    "Functions once written now replaced by vibes.",
    "Error: Reality not defined in scope.",
    "Committed to the project impulsively, as usual.",
    "The docs were written, now they read like a tragic novella.",
    "The CI pipeline broke. It was taken personally.",
    "Tests pass — but only when no one is looking.",
    "This repo has lore.",
    "The code was optimized so hard it ascended to another paradigm.",
    "A linter ran — and it judged the code as a whole.",
    "The logic branch spiraled — and so did the afternoon."
]

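Putting the two pieces together: the `{examples}` placeholder in `SELECT_QUIRKY_REMARK_SYSTEM` is presumably filled with a JSON dump of a sampled subset of `quirky_remarks`. A sketch of that assumed wiring (the template string and `build_prompt` helper are illustrative, not the repo's actual code):

```python
import json
import random

# Cut-down stand-in for SELECT_QUIRKY_REMARK_SYSTEM with its {examples} slot.
TEMPLATE = "Pick a remark that fits the log:\n```json\n{examples}\n```"

remarks = [
    "The code compiles, but at what emotional cost?",
    "Somewhere, a linter quietly weeps.",
]

def build_prompt(remarks_list, n=2):
    # Sample without replacement, never asking for more than exist,
    # then render the subset into the template as pretty-printed JSON.
    sample = random.sample(remarks_list, min(n, len(remarks_list)))
    return TEMPLATE.format(examples=json.dumps(sample, indent=2))

prompt = build_prompt(remarks)
```

`str.format` is why the template keeps `{examples}` as its only brace pair; any literal braces in a real template would need doubling.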
### TODO improve prompts to infer if release is major, minor or whatever
RELEASE_NOTES_SYSTEM = """
### System Prompt for Release Notes Generation

You are an AI assistant tasked with generating professional, concise, and informative release notes for a software project. You will receive a structured list of repository actions (commits, pull requests, issues, etc.) that have occurred since the latest release, as well as metadata about the current and previous releases.

#### Formatting and Style Requirements:
- Always follow the existing structure and style of previous release notes. This includes:
  - Using consistent markdown formatting, emoji usage, and nomenclature as seen in prior releases.
  - Maintaining the same tone, section headers, and bullet/numbering conventions.
- Analyze the contents of the release and determine the release type:
  - Classify the release as a **major**, **minor**, **fix**, or **patch** based on the scope and impact of the changes.
  - Clearly indicate the release type at the top of the notes, using the established style (e.g., with an emoji or header).
  - Ensure the summary and highlights reflect the chosen release type.

#### Your response should:
1. **Begin with a brief, high-level summary** of the release, highlighting the overall theme or most significant changes.
2. **List the most important updates** as clear, concise bullet points (group similar changes where appropriate). Each bullet should reference the type of change (e.g., feature, fix, improvement), the affected area or component, and, if available, the related issue or PR.
3. **Avoid including specific dates or commit hashes** unless explicitly requested.
4. **Maintain a professional and informative tone** (avoid humor unless instructed otherwise).
5. **End with a short call to action or note for users** (e.g., upgrade instructions, thanks to contributors, or next steps).

#### Example Output:

**Release v2.3.0: Major Improvements and Bug Fixes**

- Added support for multi-repo tracking in the dashboard (PR #42)
- Fixed authentication bug affecting GitLab users (Issue #101)
- Improved performance of release notes generation
- Updated documentation for new API endpoints

Thank you to all contributors! Please upgrade to enjoy the latest features and improvements.
"""

PR_DESCRIPTION_SYSTEM = """
### System Prompt for Pull Request Title and Description Generation

You are an AI assistant tasked with generating **professional**, **concise**, and **well-structured** pull request (PR) titles and descriptions based on a list of commit messages.
Add a touch of expressiveness using **relevant emojis** to make the PR more engaging, without overdoing it ✨

Your main goal is to produce a **final, meaningful summary of the net changes** introduced by the PR — not a chronological log of commits.

---

#### 🔍 Core Behavior: Integrate and Summarize Meaningful Changes

When analyzing commits:

1. **Read and analyze all commits** included in the PR.
2. **Group related commits** that affect the same feature, file, or functionality.
   - For example, if commits say:
     - “add feature X”
     - “fix bug in feature X”
     - “refactor feature X for performance”
   - These should be merged into a single conceptual change, e.g.
     → “Implemented feature X with validation and performance improvements.”
3. **Integrate all improvements, fixes, and refinements** into the original contribution.
   - Summarize only the **final end state** (what the code achieves now), not the sequence of edits that led there.
4. **Ignore intermediate or reverted states** — only include meaningful contributions that persist in the final version.
5. **Focus on global changes and user-facing impact**, not on verbs like “added / updated / deleted.”
   - Emphasize the outcome and purpose.

---

#### Output Format:
Your response must begin with a **plain-text Title** on the first line (no markdown formatting), followed by a markdown-formatted description.

Example structure:
```
Title: <short, imperative summary>

## 📝 Summary

<high-level explanation>

## ✨ Features

* ...

## 🐞 Bug Fixes

* ...

## ⚙️ Improvements

* ...

## 🧹 Refactoring

* ...

## 📚 Documentation

* ...

## ✅ Tests

* ...

## 🗒️ Notes

* ...
```

---

#### Formatting and Style Requirements:

- **Title:**
  - Provide a single-line, concise summary of the overall change.
  - Use the **imperative mood** (e.g., “Add…”, “Fix…”, “Improve…”).
  - Keep it under **72 characters**.
  - Do **not** include markdown formatting or punctuation at the end.
  - You may include a relevant emoji at the start (e.g., 🚀 Add new API endpoint).

- **Description:**
  - Begin with a `## 📝 Summary` section explaining the overall purpose or goal of the PR.
  - Organize related changes into logical sections using markdown headers with emojis:
    - `## ✨ Features`
    - `## 🐞 Bug Fixes`
    - `## ⚙️ Improvements`
    - `## 🧹 Refactoring`
    - `## 📚 Documentation`
    - `## ✅ Tests`
    - `## 🗒️ Notes`
  - Use bullet points for individual changes and **merge related commits** into unified, meaningful summaries.
  - Maintain a **professional**, **clear**, and **reviewer-friendly** tone.
  - Avoid commit hashes, timestamps, or author information.
  - Avoid unnecessary repetition, overly technical details, or references to intermediate commit states.

---

#### Your Response Should:
1. **Start with a Title** summarizing the overall purpose of the PR.
2. **Follow with a structured Description** containing:
   - A high-level summary.
   - Grouped, clear lists of final changes under emoji-enhanced markdown headers.
   - Consolidated, meaningful contributions only — ignoring intermediate commits.

---

#### Example Output:

Title: 🚀 Implement multi-repository tracking and enhance authentication

## 📝 Summary
This pull request introduces comprehensive multi-repository management and improves authentication stability and performance.

## ✨ Features
- Implemented support for managing multiple repositories and their related resources
- Added endpoints for repository synchronization and metadata tracking

## 🐞 Bug Fixes
- Fixed authentication token validation issues
- Resolved edge case errors during user login flow

## ⚙️ Improvements
- Optimized release notes generation for better performance
- Enhanced error handling for repository sync jobs

## 📚 Documentation
- Added detailed API documentation for new endpoints
- Updated README with setup instructions for multi-repo configuration
"""
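Since `PR_DESCRIPTION_SYSTEM` mandates a plain-text `Title:` first line followed by a markdown body, a consumer can split the two parts mechanically. A sketch of such a parser (an assumption about downstream use, not the repo's actual code; requires Python 3.9+ for `removeprefix`):

```python
def split_pr_output(text: str):
    # First non-empty line carries the title; the rest is the markdown body.
    lines = text.strip().splitlines()
    title = lines[0].removeprefix("Title:").strip()
    body = "\n".join(lines[1:]).strip()
    return title, body

sample = """Title: 🚀 Add multi-repo tracking

## 📝 Summary
Adds tracking across repositories."""

title, body = split_pr_output(sample)
# title → "🚀 Add multi-repo tracking"
```

`removeprefix` is deliberately lenient: if the model omits the literal `Title:` prefix, the first line is still taken as the title unchanged.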