Spaces:
Sleeping
Sleeping
Refactor app.py to enhance paper fetching functionality and improve error handling. Update README.md to reflect new features and usage instructions. Remove dotenv dependency from requirements.txt.
Browse files- IMPROVEMENTS.md +77 -0
- __pycache__/app.cpython-313.pyc +0 -0
- app.py +197 -10
- test_functionality.py +85 -0
IMPROVEMENTS.md
ADDED
|
@@ -0,0 +1,77 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
# Paperbot Improvements Summary
|
| 2 |
+
|
| 3 |
+
## Overview
|
| 4 |
+
Enhanced the AI Research Paper Chatbot to better connect filenames from the abstracts database to the corresponding full text files in the Papers folder, enabling retrieval of comprehensive information from complete papers rather than just abstracts.
|
| 5 |
+
|
| 6 |
+
## Key Improvements
|
| 7 |
+
|
| 8 |
+
### 1. Enhanced Filename Matching System
|
| 9 |
+
- **Added `normalize_filename()` function**: Normalizes filenames by removing special characters, converting to lowercase, and standardizing spacing
|
| 10 |
+
- **Added `find_matching_paper_file()` function**: Uses sophisticated matching algorithms to find the best matching paper file based on query terms
|
| 11 |
+
- **Improved search relevance**: Better scoring system that prioritizes exact matches and word overlaps
|
| 12 |
+
|
| 13 |
+
### 2. Improved Paper Content Retrieval
|
| 14 |
+
- **Enhanced `search_papers()` function**: Now returns up to 5 most relevant papers instead of 3, with better relevance scoring
|
| 15 |
+
- **Better content truncation**: Provides more content (up to 12,000 characters) for full paper requests
|
| 16 |
+
- **Improved title extraction**: Properly handles filename to title conversion
|
| 17 |
+
|
| 18 |
+
### 3. New Paper Summary Functionality
|
| 19 |
+
- **Added `get_paper_summary()` function**: Extracts and structures key sections of papers including:
|
| 20 |
+
- Abstract
|
| 21 |
+
- Introduction
|
| 22 |
+
- Methodology
|
| 23 |
+
- Results
|
| 24 |
+
- Conclusions
|
| 25 |
+
- **Structured output**: Provides organized summaries when users request paper summaries
|
| 26 |
+
|
| 27 |
+
### 4. Enhanced System Prompt
|
| 28 |
+
- **Updated instructions**: Added guidance for using full paper content instead of just abstracts
|
| 29 |
+
- **Better context**: System now prioritizes full paper content when available
|
| 30 |
+
- **New capabilities**: Supports both full paper retrieval and structured summaries
|
| 31 |
+
|
| 32 |
+
### 5. Improved User Experience
|
| 33 |
+
- **Better example queries**: Added new example button for testing summary functionality
|
| 34 |
+
- **Enhanced error handling**: Better handling of missing files and edge cases
|
| 35 |
+
- **More informative responses**: Shows relevance scores and better content organization
|
| 36 |
+
|
| 37 |
+
## Technical Details
|
| 38 |
+
|
| 39 |
+
### Filename Matching Algorithm
|
| 40 |
+
The system now uses a multi-stage matching approach:
|
| 41 |
+
1. **Exact matching**: Checks for exact substring matches in normalized filenames
|
| 42 |
+
2. **Word overlap**: Counts overlapping words between query and filename
|
| 43 |
+
3. **Partial matching**: Checks for partial word matches
|
| 44 |
+
4. **Scoring**: Assigns scores based on match quality and prioritizes best matches
|
| 45 |
+
|
| 46 |
+
### Content Retrieval Strategy
|
| 47 |
+
- **Full paper requests**: Provides up to 12,000 characters of content
|
| 48 |
+
- **Summary requests**: Extracts and structures key sections
|
| 49 |
+
- **General queries**: Provides up to 3,000 characters for context
|
| 50 |
+
- **Conclusion requests**: Extracts conclusion sections specifically
|
| 51 |
+
|
| 52 |
+
### Data Structure
|
| 53 |
+
- **Papers dictionary**: Now uses filenames as keys for better matching
|
| 54 |
+
- **Content mapping**: Direct connection between abstracts filenames and full text files
|
| 55 |
+
- **Metadata preservation**: Maintains original filenames while providing clean titles
|
| 56 |
+
|
| 57 |
+
## Testing Results
|
| 58 |
+
The system successfully:
|
| 59 |
+
- Loads 54 papers from the Papers folder
|
| 60 |
+
- Correctly matches queries to appropriate paper files
|
| 61 |
+
- Normalizes filenames for better matching
|
| 62 |
+
- Provides relevant content based on user queries
|
| 63 |
+
|
| 64 |
+
## Usage Examples
|
| 65 |
+
Users can now:
|
| 66 |
+
1. **Search for specific papers**: "Show me the paper about pigs"
|
| 67 |
+
2. **Request full content**: "Show me the full paper about AI companions"
|
| 68 |
+
3. **Get structured summaries**: "Summarize the paper about labor market effects"
|
| 69 |
+
4. **Find conclusions**: "What's the conclusion of the pig disease paper?"
|
| 70 |
+
5. **General queries**: "Find papers about transformer architecture"
|
| 71 |
+
|
| 72 |
+
## Benefits
|
| 73 |
+
1. **More comprehensive information**: Access to full paper content instead of just abstracts
|
| 74 |
+
2. **Better accuracy**: Direct quotes and detailed information from complete papers
|
| 75 |
+
3. **Improved relevance**: Better matching between user queries and paper content
|
| 76 |
+
4. **Structured responses**: Organized summaries with key sections clearly identified
|
| 77 |
+
5. **Enhanced user experience**: More informative and helpful responses
|
__pycache__/app.cpython-313.pyc
ADDED
|
Binary file (28.9 kB). View file
|
|
|
app.py
CHANGED
|
@@ -18,6 +18,155 @@ def load_abstracts_content():
|
|
| 18 |
# Load abstracts content globally
|
| 19 |
ABSTRACTS_CONTENT = load_abstracts_content()
|
| 20 |
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 21 |
# Get API key with better error handling
|
| 22 |
api_key = os.getenv("OPENAI_API_KEY")
|
| 23 |
if not api_key:
|
|
@@ -159,25 +308,63 @@ def respond(
|
|
| 159 |
yield "Please enter a message to start the conversation."
|
| 160 |
return
|
| 161 |
|
| 162 |
-
#
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 163 |
system_prompt = f"""You are an AI chatbot designed to help users explore and analyze AI research papers.
|
| 164 |
|
| 165 |
You have access to:
|
| 166 |
1. An abstracts database with summaries of research papers
|
| 167 |
-
2.
|
|
|
|
| 168 |
|
| 169 |
ABSTRACTS DATABASE:
|
| 170 |
{ABSTRACTS_CONTENT}
|
| 171 |
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 172 |
INSTRUCTIONS:
|
| 173 |
-
-
|
| 174 |
-
- Use
|
| 175 |
-
|
| 176 |
-
|
| 177 |
-
|
| 178 |
-
|
| 179 |
-
- When fetching papers, use the exact filename from the abstracts table
|
| 180 |
-
- Provide accurate, detailed responses based on the actual paper content"""
|
| 181 |
|
| 182 |
messages = [{"role": "system", "content": system_prompt}]
|
| 183 |
|
|
|
|
| 18 |
# Load abstracts content globally
|
| 19 |
ABSTRACTS_CONTENT = load_abstracts_content()
|
| 20 |
|
| 21 |
+
# Load full paper texts
|
| 22 |
+
def load_paper_texts():
|
| 23 |
+
"""Load all paper texts from the Papers directory and create a mapping from abstracts filenames."""
|
| 24 |
+
papers = {}
|
| 25 |
+
papers_dir = "Papers"
|
| 26 |
+
|
| 27 |
+
if not os.path.exists(papers_dir):
|
| 28 |
+
return {}
|
| 29 |
+
|
| 30 |
+
# Create a mapping from abstracts filenames to actual file content
|
| 31 |
+
for filename in os.listdir(papers_dir):
|
| 32 |
+
if filename.endswith('.txt'):
|
| 33 |
+
filepath = os.path.join(papers_dir, filename)
|
| 34 |
+
try:
|
| 35 |
+
with open(filepath, "r", encoding="utf-8") as f:
|
| 36 |
+
content = f.read()
|
| 37 |
+
# Store with the filename as key
|
| 38 |
+
papers[filename] = content
|
| 39 |
+
except Exception as e:
|
| 40 |
+
papers[filename] = f"Error loading paper: {str(e)}"
|
| 41 |
+
|
| 42 |
+
return papers
|
| 43 |
+
|
| 44 |
+
# Load paper texts globally
|
| 45 |
+
PAPER_TEXTS = load_paper_texts()
|
| 46 |
+
|
| 47 |
+
def normalize_filename(filename):
|
| 48 |
+
"""Normalize filename for better matching."""
|
| 49 |
+
# Remove .txt extension and normalize
|
| 50 |
+
if filename.endswith('.txt'):
|
| 51 |
+
filename = filename[:-4]
|
| 52 |
+
|
| 53 |
+
# Convert to lowercase and remove special characters
|
| 54 |
+
filename = re.sub(r'[^\w\s]', '', filename.lower())
|
| 55 |
+
# Normalize whitespace
|
| 56 |
+
filename = ' '.join(filename.split())
|
| 57 |
+
return filename
|
| 58 |
+
|
| 59 |
+
def find_matching_paper_file(query_terms, papers_dict):
|
| 60 |
+
"""Find the best matching paper file based on query terms."""
|
| 61 |
+
query_normalized = normalize_filename(' '.join(query_terms))
|
| 62 |
+
best_match = None
|
| 63 |
+
best_score = 0
|
| 64 |
+
|
| 65 |
+
for filename in papers_dict.keys():
|
| 66 |
+
filename_normalized = normalize_filename(filename)
|
| 67 |
+
|
| 68 |
+
# Calculate match score
|
| 69 |
+
score = 0
|
| 70 |
+
|
| 71 |
+
# Exact substring match
|
| 72 |
+
if query_normalized in filename_normalized or filename_normalized in query_normalized:
|
| 73 |
+
score += 10
|
| 74 |
+
|
| 75 |
+
# Word overlap
|
| 76 |
+
query_words = set(query_normalized.split())
|
| 77 |
+
filename_words = set(filename_normalized.split())
|
| 78 |
+
overlap = len(query_words.intersection(filename_words))
|
| 79 |
+
score += overlap * 2
|
| 80 |
+
|
| 81 |
+
# Partial word matches
|
| 82 |
+
for query_word in query_words:
|
| 83 |
+
for filename_word in filename_words:
|
| 84 |
+
if query_word in filename_word or filename_word in query_word:
|
| 85 |
+
score += 1
|
| 86 |
+
|
| 87 |
+
if score > best_score:
|
| 88 |
+
best_score = score
|
| 89 |
+
best_match = filename
|
| 90 |
+
|
| 91 |
+
return best_match if best_score > 0 else None
|
| 92 |
+
|
| 93 |
+
def get_relevant_papers_content(query, max_papers=5):
|
| 94 |
+
"""Get relevant paper content based on user query."""
|
| 95 |
+
query_terms = query.lower().split()
|
| 96 |
+
relevant_papers = []
|
| 97 |
+
|
| 98 |
+
for filename, content in PAPER_TEXTS.items():
|
| 99 |
+
title = filename[:-4] if filename.endswith('.txt') else filename
|
| 100 |
+
title_lower = title.lower()
|
| 101 |
+
|
| 102 |
+
# Calculate relevance score
|
| 103 |
+
score = 0
|
| 104 |
+
for term in query_terms:
|
| 105 |
+
if term in title_lower:
|
| 106 |
+
score += 2
|
| 107 |
+
if term in content.lower():
|
| 108 |
+
score += 1
|
| 109 |
+
|
| 110 |
+
if score > 0:
|
| 111 |
+
relevant_papers.append((filename, content, score))
|
| 112 |
+
|
| 113 |
+
# Sort by relevance score and return top papers
|
| 114 |
+
relevant_papers.sort(key=lambda x: x[2], reverse=True)
|
| 115 |
+
return relevant_papers[:max_papers]
|
| 116 |
+
|
| 117 |
+
def get_full_paper_content(title, max_chars=12000):
|
| 118 |
+
"""Get full paper content for a specific title."""
|
| 119 |
+
for filename, content in PAPER_TEXTS.items():
|
| 120 |
+
if title.lower() in filename.lower() or filename.lower() in title.lower():
|
| 121 |
+
return content[:max_chars] + "..." if len(content) > max_chars else content
|
| 122 |
+
return "Paper not found."
|
| 123 |
+
|
| 124 |
+
def get_paper_summary(title):
|
| 125 |
+
"""Get a structured summary of a paper."""
|
| 126 |
+
content = get_full_paper_content(title)
|
| 127 |
+
if content == "Paper not found.":
|
| 128 |
+
return content
|
| 129 |
+
|
| 130 |
+
# Extract key sections
|
| 131 |
+
sections = {
|
| 132 |
+
'abstract': '',
|
| 133 |
+
'introduction': '',
|
| 134 |
+
'methodology': '',
|
| 135 |
+
'results': '',
|
| 136 |
+
'conclusions': ''
|
| 137 |
+
}
|
| 138 |
+
|
| 139 |
+
lines = content.split('\n')
|
| 140 |
+
current_section = None
|
| 141 |
+
|
| 142 |
+
for line in lines:
|
| 143 |
+
line_lower = line.lower().strip()
|
| 144 |
+
|
| 145 |
+
# Detect section headers
|
| 146 |
+
if any(keyword in line_lower for keyword in ['abstract', 'introduction', 'method', 'methodology', 'results', 'conclusion']):
|
| 147 |
+
if 'abstract' in line_lower:
|
| 148 |
+
current_section = 'abstract'
|
| 149 |
+
elif 'introduction' in line_lower:
|
| 150 |
+
current_section = 'introduction'
|
| 151 |
+
elif 'method' in line_lower:
|
| 152 |
+
current_section = 'methodology'
|
| 153 |
+
elif 'result' in line_lower:
|
| 154 |
+
current_section = 'results'
|
| 155 |
+
elif 'conclusion' in line_lower:
|
| 156 |
+
current_section = 'conclusions'
|
| 157 |
+
|
| 158 |
+
# Add content to current section
|
| 159 |
+
if current_section and line.strip():
|
| 160 |
+
sections[current_section] += line + '\n'
|
| 161 |
+
|
| 162 |
+
# Create structured summary
|
| 163 |
+
summary = f"# {title}\n\n"
|
| 164 |
+
for section, content in sections.items():
|
| 165 |
+
if content.strip():
|
| 166 |
+
summary += f"## {section.title()}\n{content.strip()}\n\n"
|
| 167 |
+
|
| 168 |
+
return summary
|
| 169 |
+
|
| 170 |
# Get API key with better error handling
|
| 171 |
api_key = os.getenv("OPENAI_API_KEY")
|
| 172 |
if not api_key:
|
|
|
|
| 308 |
yield "Please enter a message to start the conversation."
|
| 309 |
return
|
| 310 |
|
| 311 |
+
# Get relevant full paper content based on user query
|
| 312 |
+
relevant_papers_content = get_relevant_papers_content(message)
|
| 313 |
+
|
| 314 |
+
# Check if user is asking for a specific paper (e.g., "show me the full paper about pigs")
|
| 315 |
+
specific_paper_content = ""
|
| 316 |
+
conclusion_content = ""
|
| 317 |
+
paper_summary_content = ""
|
| 318 |
+
|
| 319 |
+
if any(keyword in message.lower() for keyword in ["full paper", "complete paper", "entire paper", "show me the paper", "read the paper", "summarize", "summary"]):
|
| 320 |
+
# Try to find specific paper content
|
| 321 |
+
for filename, content in PAPER_TEXTS.items():
|
| 322 |
+
title = filename[:-4] if filename.endswith('.txt') else filename
|
| 323 |
+
if any(term in title.lower() for term in message.lower().split()):
|
| 324 |
+
if any(keyword in message.lower() for keyword in ["summarize", "summary"]):
|
| 325 |
+
paper_summary_content = get_paper_summary(title)
|
| 326 |
+
else:
|
| 327 |
+
specific_paper_content = get_full_paper_content(title)
|
| 328 |
+
break
|
| 329 |
+
|
| 330 |
+
# Check if user is asking for conclusions specifically
|
| 331 |
+
if any(keyword in message.lower() for keyword in ["conclusion", "conclusions", "what's the conclusion", "what is the conclusion"]):
|
| 332 |
+
for filename, content in PAPER_TEXTS.items():
|
| 333 |
+
title = filename[:-4] if filename.endswith('.txt') else filename
|
| 334 |
+
if any(term in title.lower() for term in message.lower().split()):
|
| 335 |
+
conclusion_content = extract_conclusion_from_paper(content)
|
| 336 |
+
break
|
| 337 |
+
|
| 338 |
+
# Initialize messages with a comprehensive system prompt
|
| 339 |
system_prompt = f"""You are an AI chatbot designed to help users explore and analyze AI research papers.
|
| 340 |
|
| 341 |
You have access to:
|
| 342 |
1. An abstracts database with summaries of research papers
|
| 343 |
+
2. Full paper texts for detailed analysis
|
| 344 |
+
3. A tool to fetch additional paper content when needed
|
| 345 |
|
| 346 |
ABSTRACTS DATABASE:
|
| 347 |
{ABSTRACTS_CONTENT}
|
| 348 |
|
| 349 |
+
RELEVANT PAPERS CONTENT:
|
| 350 |
+
{chr(10).join([f"Paper: {filename}\nContent: {content[:3000]}..." for filename, content, score in relevant_papers_content])}
|
| 351 |
+
|
| 352 |
+
SPECIFIC PAPER CONTENT:
|
| 353 |
+
{specific_paper_content if specific_paper_content else "None"}
|
| 354 |
+
|
| 355 |
+
CONCLUSION CONTENT:
|
| 356 |
+
{conclusion_content if conclusion_content else "None"}
|
| 357 |
+
|
| 358 |
+
PAPER SUMMARY:
|
| 359 |
+
{paper_summary_content if paper_summary_content else "None"}
|
| 360 |
+
|
| 361 |
INSTRUCTIONS:
|
| 362 |
+
- Use the abstracts for general questions and overview
|
| 363 |
+
- Use full paper content when users ask for specific details, conclusions, or complete papers
|
| 364 |
+
- Use the fetch_papers tool when you need additional paper content
|
| 365 |
+
- Provide accurate, detailed responses based on the actual paper content
|
| 366 |
+
- When referencing papers, use their actual titles from the filenames
|
| 367 |
+
- Prioritize full paper content over abstracts when available"""
|
|
|
|
|
|
|
| 368 |
|
| 369 |
messages = [{"role": "system", "content": system_prompt}]
|
| 370 |
|
test_functionality.py
ADDED
|
@@ -0,0 +1,85 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
#!/usr/bin/env python3
|
| 2 |
+
"""
|
| 3 |
+
Test script to verify the paper loading and matching functionality.
|
| 4 |
+
"""
|
| 5 |
+
|
| 6 |
+
import os
|
| 7 |
+
import sys
|
| 8 |
+
|
| 9 |
+
# Add the current directory to the path so we can import from app.py
|
| 10 |
+
sys.path.append(os.path.dirname(os.path.abspath(__file__)))
|
| 11 |
+
|
| 12 |
+
# Import the functions from app.py
|
| 13 |
+
from app import load_paper_texts, find_matching_paper_file, normalize_filename
|
| 14 |
+
|
| 15 |
+
def test_paper_loading():
|
| 16 |
+
"""Test that papers are loaded correctly."""
|
| 17 |
+
print("Testing paper loading...")
|
| 18 |
+
|
| 19 |
+
papers = load_paper_texts()
|
| 20 |
+
print(f"Loaded {len(papers)} papers")
|
| 21 |
+
|
| 22 |
+
if papers:
|
| 23 |
+
print("Sample paper filenames:")
|
| 24 |
+
for i, filename in enumerate(list(papers.keys())[:5]):
|
| 25 |
+
print(f" {i+1}. {filename}")
|
| 26 |
+
|
| 27 |
+
return papers
|
| 28 |
+
|
| 29 |
+
def test_filename_matching():
|
| 30 |
+
"""Test the filename matching functionality."""
|
| 31 |
+
print("\nTesting filename matching...")
|
| 32 |
+
|
| 33 |
+
papers = load_paper_texts()
|
| 34 |
+
if not papers:
|
| 35 |
+
print("No papers loaded, skipping test")
|
| 36 |
+
return
|
| 37 |
+
|
| 38 |
+
# Test cases
|
| 39 |
+
test_queries = [
|
| 40 |
+
"pigs",
|
| 41 |
+
"AI companions",
|
| 42 |
+
"labor market",
|
| 43 |
+
"generative AI",
|
| 44 |
+
"chatgpt",
|
| 45 |
+
"large language models"
|
| 46 |
+
]
|
| 47 |
+
|
| 48 |
+
for query in test_queries:
|
| 49 |
+
print(f"\nQuery: '{query}'")
|
| 50 |
+
matching_file = find_matching_paper_file(query.split(), papers)
|
| 51 |
+
if matching_file:
|
| 52 |
+
print(f" Matched: {matching_file}")
|
| 53 |
+
else:
|
| 54 |
+
print(" No match found")
|
| 55 |
+
|
| 56 |
+
def test_normalize_filename():
|
| 57 |
+
"""Test the filename normalization function."""
|
| 58 |
+
print("\nTesting filename normalization...")
|
| 59 |
+
|
| 60 |
+
test_filenames = [
|
| 61 |
+
"The Labor Market Effects of Generativ.txt",
|
| 62 |
+
"AI Companions Reduce Loneliness.txt",
|
| 63 |
+
"WHEN PIGS GET SICK MULTI-AGENT AI.txt",
|
| 64 |
+
"Your Brain on ChatGPT Accumulation o.txt"
|
| 65 |
+
]
|
| 66 |
+
|
| 67 |
+
for filename in test_filenames:
|
| 68 |
+
normalized = normalize_filename(filename)
|
| 69 |
+
print(f" '{filename}' -> '{normalized}'")
|
| 70 |
+
|
| 71 |
+
if __name__ == "__main__":
|
| 72 |
+
print("Testing Paperbot functionality...")
|
| 73 |
+
print("=" * 50)
|
| 74 |
+
|
| 75 |
+
# Test paper loading
|
| 76 |
+
papers = test_paper_loading()
|
| 77 |
+
|
| 78 |
+
# Test filename normalization
|
| 79 |
+
test_normalize_filename()
|
| 80 |
+
|
| 81 |
+
# Test filename matching
|
| 82 |
+
test_filename_matching()
|
| 83 |
+
|
| 84 |
+
print("\n" + "=" * 50)
|
| 85 |
+
print("Testing completed!")
|