Initial website
I want to build a website, "Ask Paul Graham". I will be hosting huggingface spaces or github pages or heroku (whichever is easier / cheaper). There should be a text box for "Ask Paul Graham about: ... " — it sends the prompt "Write a Paul Graham essay about X" to some LLM, and streams the response interactively below it. Style it a bit like Paul Graham's website (v minimal). Every prompt response should be saved in a database (supabase?) or pushed to github. If a prompt already exists, fetch from saved things. Also have a bulleted list of all existing saved things, either ordered by time submitted or ordered by # views (update database or w/e on this) or ordered by alphabetical. Default order of recency. Prompts are limited to 70 characters; the backend should truncate after 70 characters and the textbox should prevent extra typing. How would you design this?
https://g.co/gemini/share/ea8f6e1cc364
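The 70-character limit in the spec has two halves: the textbox blocks typing past 70 characters, and the backend truncates defensively anyway. Since the truncated prompt also serves as the cache key for saved essays, a minimal sketch of that contract (mirroring the `truncate_prompt` helper in `main.py`):

```python
# Sketch of the 70-character prompt contract: the backend truncates
# before using the prompt as a cache key, so a prompt and any
# overtyped variant of it map to the same saved essay.

MAX_PROMPT_LEN = 70

def truncate_prompt(text: str, max_length: int = MAX_PROMPT_LEN) -> str:
    """Truncate text to at most max_length characters."""
    return text[:max_length]

long_prompt = "x" * 100
key = truncate_prompt(long_prompt)
assert len(key) == 70
# Two submissions that differ only past character 70 share one cache entry:
assert truncate_prompt("x" * 80) == truncate_prompt("x" * 90)
```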
- Dockerfile +34 -5
- main.py +350 -0
- requirements.txt +8 -6
- static/script.js +188 -0
- static/style.css +37 -0
- templates/index.html +83 -0
Dockerfile:

# Use an official Python runtime as a parent image
FROM python:3.9-slim

# Set the working directory in the container
WORKDIR /code

# Copy the requirements file into the container at /code
COPY ./requirements.txt /code/requirements.txt

# Install any needed packages specified in requirements.txt
# --no-cache-dir: Disables the cache to keep image size down
# --upgrade pip: Ensures pip is up-to-date
# -r requirements.txt: Installs packages from the requirements file
RUN pip install --no-cache-dir --upgrade pip -r requirements.txt

# Copy the rest of the application code (the 'app' directory contents)
# into the container at /code/app
# Adjust if your Python code is not in an 'app' subfolder
COPY ./app /code/app
# If main.py is at the root with static/ and templates/:
# COPY ./main.py /code/main.py
# COPY ./static /code/static
# COPY ./templates /code/templates

# Make port 8000 available to the world outside this container.
# Hugging Face Spaces expects the app to listen on port 7860 by default,
# but Docker apps often use 8000. We can map this in the Space config
# if needed, or change the port here and in the CMD. Using 8000 for now.
EXPOSE 8000

# Define environment variables (optional, can be set in HF Secrets)
# ENV NAME World

# Run the FastAPI application with uvicorn when the container launches.
# --host 0.0.0.0: makes the server accessible externally
# --port 8000: the port the server will listen on
# app.main:app: tells uvicorn where to find the FastAPI app instance
# (in main.py inside the 'app' directory, the instance named 'app').
# Adjust 'app.main:app' if your file/instance names are different.
CMD ["uvicorn", "app.main:app", "--host", "0.0.0.0", "--port", "8000"]
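The port comments in the Dockerfile matter on Hugging Face Spaces: the Docker SDK assumes port 7860 unless the Space metadata says otherwise. A sketch of the README front matter that would point the Space at 8000 instead (`app_port` is the relevant key; the title here is a placeholder):

```yaml
---
title: Ask Paul Graham
sdk: docker
app_port: 8000
---
```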
main.py:

import os
import asyncio
from fastapi import FastAPI, Request, HTTPException, Form
from fastapi.responses import HTMLResponse, StreamingResponse, JSONResponse
from fastapi.staticfiles import StaticFiles
from fastapi.templating import Jinja2Templates
from supabase import create_client, Client

# Removed httpx import as we'll use the anthropic client directly
from dotenv import load_dotenv
import json
from pydantic import BaseModel  # For potential request body validation if not using Forms

# --- Import Anthropic ---
from anthropic import AsyncAnthropic, APIError  # Anthropic client and specific errors

# Load environment variables from .env file for local development
load_dotenv()

# --- Configuration ---
SUPABASE_URL = os.getenv("SUPABASE_URL")
SUPABASE_SERVICE_KEY = os.getenv("SUPABASE_SERVICE_KEY")  # Use service key for backend operations
# ANTHROPIC_API_KEY is the standard env var, but we use LLM_API_KEY from previous steps
ANTHROPIC_API_KEY = os.getenv("LLM_API_KEY")
# LLM_API_ENDPOINT is not needed when using the official client library

# --- Initialize Supabase Client ---
try:
    if SUPABASE_URL and SUPABASE_SERVICE_KEY:
        supabase: Client = create_client(SUPABASE_URL, SUPABASE_SERVICE_KEY)
    else:
        print("Warning: Supabase URL or Key not found in environment variables.")
        supabase = None
except Exception as e:
    print(f"Error initializing Supabase client: {e}")
    supabase = None  # Set to None to indicate failure

# --- Initialize Anthropic Client ---
try:
    if ANTHROPIC_API_KEY:
        # Use the API key from the environment variable LLM_API_KEY
        anthropic_client = AsyncAnthropic(api_key=ANTHROPIC_API_KEY)
    else:
        print("Warning: Anthropic API Key (LLM_API_KEY) not found.")
        anthropic_client = None
except Exception as e:
    print(f"Error initializing Anthropic client: {e}")
    anthropic_client = None

# --- Initialize FastAPI App ---
app = FastAPI()

# --- Mount Static Files (CSS, JS) ---
script_dir = os.path.dirname(__file__)
static_dir = os.path.join(script_dir, "static")
if not os.path.exists(static_dir):
    os.makedirs(static_dir)  # Create static dir if it doesn't exist
app.mount("/static", StaticFiles(directory=static_dir), name="static")

# --- Configure Templates ---
templates_dir = os.path.join(script_dir, "templates")
if not os.path.exists(templates_dir):
    os.makedirs(templates_dir)  # Create templates dir if it doesn't exist
templates = Jinja2Templates(directory=templates_dir)


# --- Helper Functions ---
def truncate_prompt(text: str, max_length: int = 70) -> str:
    """Truncates text to a maximum length."""
    return text[:max_length]


async def get_llm_stream(prompt: str):
    """
    Gets a streaming response from the Anthropic Claude API.
    Yields raw text chunks (or an SSE-formatted error event on failure).
    """
    if not anthropic_client:
        print("Anthropic client not initialized.")
        yield f"data: {json.dumps({'error': 'LLM service not configured.'})}\n\n"
        return

    full_llm_prompt = f"Write a Paul Graham essay about {prompt}"
    system_prompt = "You are an AI assistant that writes essays in the style of Paul Graham. Focus on insights about startups, technology, programming, and contrarian thinking. Be concise and clear."
    # Using Claude 3.5 Sonnet model ID
    model_name = "claude-3-5-sonnet-20240620"
    max_tokens_to_sample = 2048  # Adjust as needed

    print(f"Sending to Claude ({model_name}): {full_llm_prompt}")  # For debugging

    try:
        # Use the Messages streaming API
        async with anthropic_client.messages.stream(
            model=model_name,
            max_tokens=max_tokens_to_sample,
            system=system_prompt,
            messages=[{"role": "user", "content": full_llm_prompt}],
        ) as stream:
            # Iterate through the stream events asynchronously
            async for event in stream:
                # Check for text delta events
                if (
                    event.type == "content_block_delta"
                    and event.delta.type == "text_delta"
                ):
                    yield event.delta.text  # Yield the raw text chunk

    except APIError as e:
        print(f"Anthropic API Error: {e}")
        yield f"data: {json.dumps({'error': f'LLM API Error: {e.status_code} - {e.message}'})}\n\n"
    except Exception as e:
        print(f"Error calling Anthropic API: {e}")
        yield f"data: {json.dumps({'error': f'Failed to get response from LLM: {e}'})}\n\n"


async def stream_and_save_essay(truncated_prompt: str):
    """
    Streams response from LLM, yields chunks for the client (as SSE events),
    and saves the full response to Supabase upon completion.
    """
    full_response = ""
    error_occurred = False
    try:
        async for chunk in get_llm_stream(truncated_prompt):
            # Check if the chunk indicates an error (yielded by get_llm_stream)
            if isinstance(chunk, str) and chunk.startswith('data: {"error":'):
                yield chunk  # Propagate error SSE event to client
                print(f"LLM Stream Error reported: {chunk}")
                error_occurred = True
                # Don't break here; let get_llm_stream finish if it yields more details
                continue  # Skip processing this chunk as text

            # Accumulate response and yield chunk to client
            full_response += chunk
            # Format as Server-Sent Event (SSE)
            yield f"data: {json.dumps({'text': chunk})}\n\n"
            await asyncio.sleep(0.01)  # Small delay to allow client processing

        # If an error was yielded during the stream, don't save or send 'end'
        if error_occurred:
            print("Skipping save due to previous stream error.")
            return

        # --- Save to Supabase after successful streaming ---
        if supabase and full_response:
            try:
                # Check again before inserting, in case of race condition
                check_resp = (
                    supabase.table("essays")
                    .select("id")
                    .eq("prompt", truncated_prompt)
                    .limit(1)
                    .execute()
                )
                if not check_resp.data:
                    supabase.table("essays").insert(
                        {
                            "prompt": truncated_prompt,
                            "response": full_response,
                            "view_count": 1,  # Initial view count
                        }
                    ).execute()
                    print(f"Saved new essay for prompt: {truncated_prompt}")
                else:
                    print(
                        f"Essay for '{truncated_prompt}' already exists (checked before insert). Not saving again."
                    )

            except Exception as e:
                # Handle potential unique constraint violation if a race condition occurs.
                # The check above makes this less likely, but keep handling just in case.
                if "duplicate key value violates unique constraint" in str(e):
                    print(f"Race condition? Essay for '{truncated_prompt}' already exists.")
                else:
                    print(f"Error saving essay to Supabase: {e}")
                    # Optionally yield an error message back to client if critical
                    yield f"data: {json.dumps({'error': 'Failed to save essay.'})}\n\n"
                    error_occurred = True  # Mark error occurred

        # Signal stream end only if no errors occurred
        if not error_occurred:
            yield f"data: {json.dumps({'end': True})}\n\n"

    except Exception as e:
        print(f"Error during streaming/saving: {e}")
        # Send error as SSE data event
        yield f"data: {json.dumps({'error': f'An error occurred: {e}'})}\n\n"


# --- API Endpoints ---


@app.get("/", response_class=HTMLResponse)
async def read_root(request: Request):
    """Serves the main HTML page."""
    if not os.path.exists(os.path.join(templates_dir, "index.html")):
        raise HTTPException(status_code=404, detail="index.html not found")
    return templates.TemplateResponse("index.html", {"request": request})


@app.post("/ask")
async def ask_paul_graham(prompt: str = Form(...)):
    """
    Handles prompt submission, checks cache, calls LLM, streams response.
    Uses Server-Sent Events (SSE).
    """
    if not prompt:
        raise HTTPException(status_code=400, detail="Prompt cannot be empty.")
    if not supabase:
        # Return SSE error if DB is down
        async def db_error_stream():
            yield f"data: {json.dumps({'error': 'Database connection not available.'})}\n\n"

        return StreamingResponse(
            db_error_stream(), media_type="text/event-stream", status_code=503
        )
    if not anthropic_client:
        # Return SSE error if LLM client isn't configured
        async def llm_error_stream():
            yield f"data: {json.dumps({'error': 'LLM service not configured.'})}\n\n"

        return StreamingResponse(
            llm_error_stream(), media_type="text/event-stream", status_code=503
        )

    truncated = truncate_prompt(prompt)

    try:
        # Check if essay already exists
        response = (
            supabase.table("essays")
            .select("response, view_count")
            .eq("prompt", truncated)
            .limit(1)
            .execute()
        )
        existing_essay = response.data

        if existing_essay:
            print(f"Cache hit for prompt: {truncated}")
            essay_data = existing_essay[0]
            saved_response = essay_data["response"]
            current_views = essay_data["view_count"]

            # Increment view count
            try:
                supabase.table("essays").update({"view_count": current_views + 1}).eq(
                    "prompt", truncated
                ).execute()
            except Exception as e:
                print(f"Error updating view count: {e}")  # Log error but continue

            # Stream the cached response chunk by chunk as SSE
            async def stream_cached():
                # Split into smaller chunks for smoother streaming simulation
                chunk_size = 20  # Number of characters per chunk
                for i in range(0, len(saved_response), chunk_size):
                    chunk = saved_response[i : i + chunk_size]
                    yield f"data: {json.dumps({'text': chunk})}\n\n"
                    await asyncio.sleep(0.01)  # Simulate streaming delay
                yield f"data: {json.dumps({'end': True})}\n\n"  # Signal end

            return StreamingResponse(stream_cached(), media_type="text/event-stream")

        else:
            print(f"Cache miss for prompt: {truncated}. Calling LLM.")
            # Stream from LLM and save upon completion
            return StreamingResponse(
                stream_and_save_essay(truncated), media_type="text/event-stream"
            )

    except Exception as e:
        print(f"Error in /ask endpoint: {e}")

        # Return error as SSE event
        async def error_stream():
            yield f"data: {json.dumps({'error': f'Server error: {e}'})}\n\n"

        return StreamingResponse(
            error_stream(), media_type="text/event-stream", status_code=500
        )


@app.get("/essays", response_class=JSONResponse)
async def get_essays(sort_by: str = "time", order: str = "desc"):
    """Fetches the list of saved essay prompts based on sorting preferences."""
    if not supabase:
        return JSONResponse(
            content={"error": "Database connection not available."}, status_code=503
        )

    valid_sort_by = {"time": "created_at", "views": "view_count", "alpha": "prompt"}
    sort_column = valid_sort_by.get(sort_by, "created_at")
    descending = order != "asc"  # Default to descending (most recent first)

    try:
        response = (
            supabase.table("essays")
            .select("prompt, created_at, view_count")
            .order(sort_column, desc=descending)
            .execute()
        )

        # Ensure timestamps are ISO format strings for JSON serialization
        essays_data = []
        if response.data:
            for row in response.data:
                created = row.get("created_at")
                # Supabase typically returns timestamps as ISO strings already;
                # only convert if we somehow got a datetime object
                if created is not None and not isinstance(created, str):
                    row["created_at"] = created.isoformat()
                essays_data.append(row)
            return JSONResponse(content=essays_data)
        else:
            return JSONResponse(content=[])  # Return empty list if no essays

    except Exception as e:
        print(f"Error fetching essays: {e}")
        return JSONResponse(
            content={"error": f"Failed to fetch essays: {e}"}, status_code=500
        )


# --- Optional: Add simple health check ---
@app.get("/health")
async def health_check():
    db_status = "ok" if supabase else "unavailable"
    llm_status = "ok" if anthropic_client else "unavailable"
    return {"status": "ok", "database": db_status, "llm_service": llm_status}


# Note: For Hugging Face Spaces deployment using the Dockerfile,
# the CMD instruction will run uvicorn. No need for a __main__ block.
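The wire format between `/ask` and the browser is plain SSE: each event is a `data:` line carrying a small JSON object with one of `text`, `error`, or `end`. A minimal sketch of framing and parsing that format, independent of FastAPI:

```python
import json

def sse_event(payload: dict) -> str:
    """Frame a JSON payload as a single Server-Sent Event."""
    return f"data: {json.dumps(payload)}\n\n"

def parse_sse(raw: str) -> list:
    """Recover the JSON payloads from a block of SSE text."""
    events = []
    for line in raw.split("\n"):
        if line.startswith("data:"):
            events.append(json.loads(line[len("data:"):].strip()))
    return events

# A stream of two text chunks followed by the end-of-stream signal:
stream = (
    sse_event({"text": "Startups are "})
    + sse_event({"text": "hard."})
    + sse_event({"end": True})
)
events = parse_sse(stream)
essay = "".join(e["text"] for e in events if "text" in e)
assert essay == "Startups are hard."
assert events[-1] == {"end": True}
```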
requirements.txt:

fastapi>=0.100.0
uvicorn[standard]>=0.20.0
supabase>=1.0.0,<2.0.0
# httpx is usually installed as a dependency of fastapi or supabase, but good to have explicitly if needed elsewhere
httpx>=0.24.0
python-dotenv>=1.0.0
jinja2>=3.0.0  # For templating
anthropic>=0.20.0  # Anthropic client library
static/script.js:

// static/script.js

document.addEventListener('DOMContentLoaded', () => {
    const promptForm = document.getElementById('prompt-form');
    const promptInput = document.getElementById('prompt-input');
    const submitButton = document.getElementById('submit-button');
    const responseOutput = document.getElementById('response-output');
    const loadingIndicator = document.getElementById('loading-indicator');
    const errorMessage = document.getElementById('error-message');
    const essaysList = document.getElementById('essays-list');
    const sortButtons = document.querySelectorAll('.sort-button');
    const charCount = document.getElementById('char-count');

    let currentSort = { field: 'time', order: 'desc' }; // Default sort

    // --- Character Counter ---
    promptInput.addEventListener('input', () => {
        const count = promptInput.value.length;
        charCount.textContent = `${count} / 70 characters`;
    });

    // --- Fetch and Render Essays ---
    async function fetchEssays(sortBy = 'time', order = 'desc') {
        essaysList.innerHTML = '<li class="text-gray-400">Loading essays...</li>'; // Show loading state
        try {
            const response = await fetch(`/essays?sort_by=${sortBy}&order=${order}`);
            if (!response.ok) {
                throw new Error(`HTTP error! status: ${response.status}`);
            }
            const essays = await response.json();

            essaysList.innerHTML = ''; // Clear list

            if (essays.length === 0) {
                essaysList.innerHTML = '<li class="text-gray-500">No essays found yet.</li>';
            } else {
                essays.forEach(essay => {
                    const li = document.createElement('li');
                    // Simple display: just the prompt text
                    li.textContent = essay.prompt;
                    // Optional: add view count or date, e.g.
                    // li.textContent += ` (Views: ${essay.view_count}, Created: ${new Date(essay.created_at).toLocaleDateString()})`;
                    essaysList.appendChild(li);
                });
            }
        } catch (error) {
            console.error('Error fetching essays:', error);
            essaysList.innerHTML = '<li class="text-red-500">Failed to load essays.</li>';
        }
    }

    // --- Handle Form Submission ---
    promptForm.addEventListener('submit', async (event) => {
        event.preventDefault(); // Prevent default form submission
        const prompt = promptInput.value.trim();

        if (!prompt) return; // Do nothing if prompt is empty

        // UI updates: disable input/button, show loading, clear previous results
        promptInput.disabled = true;
        submitButton.disabled = true;
        submitButton.textContent = 'Generating...';
        loadingIndicator.classList.remove('hidden');
        responseOutput.innerHTML = ''; // Clear previous output
        errorMessage.classList.add('hidden'); // Hide previous errors

        try {
            // EventSource only supports GET requests, and our route is POST,
            // so use fetch with a streamed response body for the POST + SSE
            // combination instead.
            const response = await fetch('/ask', {
                method: 'POST',
                headers: {
                    'Content-Type': 'application/x-www-form-urlencoded', // FastAPI Form expects this
                },
                body: `prompt=${encodeURIComponent(prompt)}`
            });

            if (!response.ok) {
                // Try to read error message from backend if available
                let errorData = { message: `HTTP error! status: ${response.status}` };
                try {
                    errorData = await response.json();
                } catch (e) { /* Ignore if response is not JSON */ }
                throw new Error(errorData.message || `HTTP error! status: ${response.status}`);
            }

            // Check if response is event-stream (SSE)
            if (response.headers.get('content-type')?.includes('text/event-stream')) {
                // Handle SSE stream with fetch
                const reader = response.body.getReader();
                const decoder = new TextDecoder();

                reader.read().then(function processText({ done, value }) {
                    if (done) {
                        console.log("Stream complete");
                        // Re-fetch essays list to include the new one (if generated)
                        fetchEssays(currentSort.field, currentSort.order);
                        return;
                    }

                    const chunk = decoder.decode(value, { stream: true });
                    // Process potential multiple events in a single chunk
                    const lines = chunk.split('\n');
                    lines.forEach(line => {
                        if (line.startsWith('data:')) {
                            try {
                                const data = JSON.parse(line.substring(5).trim());
                                if (data.text) {
                                    responseOutput.innerHTML += data.text; // Append text chunk
                                } else if (data.error) {
                                    console.error("SSE Error:", data.error);
                                    errorMessage.textContent = `Error: ${data.error}`;
                                    errorMessage.classList.remove('hidden');
                                    reader.cancel(); // Close the stream reader on error
                                } else if (data.end) {
                                    console.log("SSE Stream ended by server.");
                                    reader.cancel(); // Close the reader
                                    fetchEssays(currentSort.field, currentSort.order);
                                }
                            } catch (e) {
                                console.error("Error parsing SSE data:", e, "Line:", line);
                            }
                        }
                    });

                    // Continue reading
                    reader.read().then(processText);
                }).catch(error => {
                    console.error("Stream reading error:", error);
                    errorMessage.textContent = `Stream reading error: ${error.message}`;
                    errorMessage.classList.remove('hidden');
                });

            } else {
                // Handle non-streaming response (shouldn't happen with current backend logic)
                const text = await response.text();
                responseOutput.textContent = text;
                fetchEssays(currentSort.field, currentSort.order); // Update list
            }

        } catch (error) {
            console.error('Error submitting prompt:', error);
            errorMessage.textContent = `Error: ${error.message}`;
            errorMessage.classList.remove('hidden');
        } finally {
            // Re-enable form elements. Note: this runs as soon as the fetch
            // resolves (i.e. when streaming starts), not when the stream ends;
            // for stricter behavior, move the re-enabling into the stream
            // completion handlers (done/error/end) above.
            promptInput.disabled = false;
            submitButton.disabled = false;
            submitButton.textContent = 'Generate Essay';
            loadingIndicator.classList.add('hidden');
        }
    });


    // --- Handle Sorting ---
    sortButtons.forEach(button => {
        button.addEventListener('click', () => {
            const sortBy = button.dataset.sort;
            const order = button.dataset.order;

            // Update current sort state
            currentSort = { field: sortBy, order: order };

            // Update active button style
            sortButtons.forEach(btn => btn.classList.remove('active'));
            button.classList.add('active');

            // Fetch essays with new sorting
            fetchEssays(sortBy, order);
        });
    });

    // --- Initial Load ---
    fetchEssays(); // Load essays on page load with default sort
    promptInput.dispatchEvent(new Event('input')); // Initialize char count
});
static/style.css:

/* static/style.css */

/* Add any custom styles here if needed, complementing Tailwind */

/* Style for the active sort button */
.sort-button.active {
    font-weight: bold;
    text-decoration: none;
    color: #374151; /* gray-700 */
    cursor: default;
}

/* Style for list items in the essay list for better readability */
#essays-list li {
    padding: 2px 0;
}

/* You might want to style the prose output further if needed */
#response-output {
    /* Example: add a light background or border */
    /* background-color: #f9fafb; */
    /* border-left: 3px solid #fb923c; */ /* orange-400 */
    /* padding-left: 1rem; */
    /* margin-top: 0.5rem; */
}

/* Ensure the loading indicator is centered or styled appropriately if used more prominently */
#loading-indicator {
    /* Add styles if needed */
}

/* Ensure the error message stands out */
#error-message {
    /* Tailwind classes handle most of this, but you can override here */
}
@@ -0,0 +1,83 @@
<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <meta name="viewport" content="width=device-width, initial-scale=1.0">
    <title>Ask Paul Graham</title>
    <script src="https://cdn.tailwindcss.com"></script>
    <link rel="stylesheet" href="/static/style.css">
    <script>
        // Minimal Tailwind config (optional, can customize fonts etc.)
        tailwind.config = {
            theme: {
                extend: {
                    fontFamily: {
                        // Match PG's site or use a clean default
                        sans: ['Inter', 'sans-serif'],
                        // serif: ['Georgia', 'serif'], // Example if using serif
                    },
                }
            }
        }
    </script>
</head>
<body class="bg-white text-gray-800 font-sans antialiased">
    <div class="container mx-auto px-4 py-8 max-w-3xl">

        <header class="mb-8">
            <h1 class="text-3xl font-semibold text-gray-900">Ask Paul Graham</h1>
        </header>

        <section class="mb-8 p-4 border border-gray-200 rounded-md bg-gray-50">
            <form id="prompt-form">
                <label for="prompt-input" class="block text-sm font-medium text-gray-700 mb-1">
                    Ask Paul Graham about:
                </label>
                <div class="flex items-center space-x-2">
                    <input
                        type="text"
                        id="prompt-input"
                        name="prompt"
                        maxlength="70"
                        required
                        placeholder="e.g., starting a startup, Lisp, Y Combinator..."
                        class="flex-grow px-3 py-2 border border-gray-300 rounded-md shadow-sm focus:outline-none focus:ring-orange-500 focus:border-orange-500 sm:text-sm"
                    >
                    <button
                        type="submit"
                        id="submit-button"
                        class="inline-flex items-center px-4 py-2 border border-transparent text-sm font-medium rounded-md shadow-sm text-white bg-orange-600 hover:bg-orange-700 focus:outline-none focus:ring-2 focus:ring-offset-2 focus:ring-orange-500 disabled:opacity-50 disabled:cursor-not-allowed"
                    >
                        Generate Essay
                    </button>
                </div>
                <p id="char-count" class="text-xs text-gray-500 mt-1 text-right">0 / 70 characters</p>
            </form>
        </section>

        <section id="response-section" class="mb-8 min-h-[100px]">
            <h2 class="text-xl font-semibold text-gray-800 mb-2">Essay:</h2>
            <div id="loading-indicator" class="hidden text-gray-500">Generating...</div>
            <div id="error-message" class="hidden text-red-600 bg-red-100 border border-red-300 p-3 rounded-md"></div>
            <div id="response-output" class="prose prose-sm sm:prose lg:prose-lg xl:prose-xl max-w-none text-gray-700 leading-relaxed">
            </div>
        </section>

        <section>
            <h2 class="text-xl font-semibold text-gray-800 mb-3">Past Essays</h2>
            <div class="mb-3 text-sm">
                Sort by:
                <button data-sort="time" data-order="desc" class="sort-button text-orange-600 hover:underline focus:outline-none active">Recent</button> |
                <button data-sort="views" data-order="desc" class="sort-button text-orange-600 hover:underline focus:outline-none">Popular</button> |
                <button data-sort="alpha" data-order="asc" class="sort-button text-orange-600 hover:underline focus:outline-none">A-Z</button>
            </div>
            <ul id="essays-list" class="list-disc pl-5 space-y-1 text-gray-700">
                <li class="text-gray-400">Loading essays...</li>
            </ul>
        </section>

    </div>

    <script src="/static/script.js"></script>
</body>
</html>
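The form's `maxlength="70"` stops extra typing, and the script dispatches an `input` event on load to initialize the `0 / 70 characters` counter shown in `#char-count`. A hedged sketch of the counter logic that handler would implement — the helper name and the clip-before-counting behavior are assumptions about `script.js` code not shown in this chunk:

```javascript
// Sketch only: produces the text the 'input' handler would write into
// the #char-count element. Helper name and clipping behavior are
// assumptions, not taken verbatim from static/script.js.
function charCountLabel(value, maxLen = 70) {
  // Mirror maxlength="70": clip anything past the limit before counting,
  // since pasted text can momentarily exceed it in some browsers.
  const clipped = value.slice(0, maxLen);
  return `${clipped.length} / ${maxLen} characters`;
}
```

Clipping client-side keeps the counter honest, but the backend still needs its own 70-character truncation, since direct API calls bypass the form entirely.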