Tantawi65 committed on
Commit 57cb63a · 1 Parent(s): b590102

Updated Lab Report Analysis AI with Gradio interface and Google AI Studio integration

Files changed (6):
  1. Dockerfile +19 -14
  2. README.md +55 -239
  3. app.py +174 -5
  4. lab_analyzer.py +52 -38
  5. main.py +12 -20
  6. requirements.txt +6 -9
Dockerfile CHANGED
@@ -1,23 +1,28 @@
- FROM python:3.9

  # Set working directory
- WORKDIR /code

- # Copy requirements first for better caching
- COPY ./requirements.txt /code/requirements.txt

- # Install dependencies
- RUN pip install --no-cache-dir --upgrade -r /code/requirements.txt

- # Copy application code
- COPY . /code

- # Expose port 7860 (required by Hugging Face Spaces)
  EXPOSE 7860

- # Set environment variables
- ENV PYTHONPATH=/code
- ENV HUGGINGFACE_API_KEY=${HUGGINGFACE_API_KEY}

- # Command to run the application
- CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "7860"]

+ # Use Python 3.11 slim image
+ FROM python:3.11-slim

  # Set working directory
+ WORKDIR /app

+ # Install system dependencies
+ RUN apt-get update && apt-get install -y \
+     build-essential \
+     curl \
+     software-properties-common \
+     && rm -rf /var/lib/apt/lists/*

+ # Copy requirements and install Python dependencies
+ COPY requirements.txt .
+ RUN pip install --no-cache-dir -r requirements.txt

+ # Copy application files
+ COPY . .

+ # Expose port
  EXPOSE 7860
+ # Health check (Gradio serves a 200 at the root path)
+ HEALTHCHECK CMD curl --fail http://localhost:7860/ || exit 1

+ # Run the application
+ CMD ["python", "app.py"]
README.md CHANGED
@@ -1,272 +1,88 @@
  ---
- title: GP-Tea Lab Analysis
- emoji: 🧪
  colorFrom: blue
  colorTo: green
- sdk: docker
- app_port: 7860
  pinned: false
  license: apache-2.0
  ---

- # GP-Tea Lab Analysis Service

- A FastAPI-based web service for analyzing lab report images using AI. This service accepts lab report images and provides structured medical analysis with key findings, interpretations, and health insights.

- ## Features

- - 🖼️ Image upload support (JPG, PNG, BMP, TIFF, WEBP)
- - 🔍 AI-powered lab report analysis
- - 📊 Structured response with summary, key findings, and interpretation
- - 🌐 RESTful API with automatic documentation
- - 🧪 Built-in test client and web interface
- - Async processing for better performance

- ## Project Structure

- ```
- Lab_analysis/
- ├── main.py                  # FastAPI application
- ├── lab_analyzer.py          # Core analysis logic
- ├── models.py                # Pydantic models
- ├── test_client.py           # API test client
- ├── index.html               # Web interface
- ├── requirements.txt         # Dependencies
- ├── Lab_report_analysis.py   # Original script
- └── README.md                # This file
- ```

- ## Installation

- 1. **Clone or navigate to the project directory:**

- ```bash
- cd "e:\E-JUST Assignments\Projects\HealthCare\Lab_analysis"
- ```

- 2. **Create a virtual environment (recommended):**

- ```bash
- python -m venv venv
- venv\Scripts\activate  # On Windows
- ```

- 3. **Install dependencies:**
- ```bash
- pip install -r requirements.txt
- ```

- ## Running the API

- ### Method 1: Using Python directly

- ```bash
- python main.py
- ```

- ### Method 2: Using Uvicorn

- ```bash
- uvicorn main:app --host 0.0.0.0 --port 8000 --reload
- ```

- The API will be available at:

- - **API Endpoints**: http://localhost:8000
- - **Interactive Docs**: http://localhost:8000/docs
- - **ReDoc**: http://localhost:8000/redoc

- ## API Endpoints

- ### Health Check

- - **GET** `/health`
- - Returns service status

- ### Analyze Lab Report (File Upload)

- - **POST** `/analyze`
- - Upload an image file for analysis
- - Accepts: `multipart/form-data` with `file` field

- ### Analyze Lab Report (Base64)

- - **POST** `/analyze-base64`
- - Send base64 encoded image for analysis
- - Accepts: JSON with `image` field containing base64 string

- ## Usage Examples

- ### Using cURL

- 1. **Health check:**

  ```bash
- curl http://localhost:8000/health
  ```

- 2. **Analyze image file:**

  ```bash
- curl -X POST "http://localhost:8000/analyze" \
-   -H "accept: application/json" \
-   -H "Content-Type: multipart/form-data" \
-   -F "file=@your_lab_report.jpg"
  ```

- 3. **Analyze base64 image:**
  ```bash
- curl -X POST "http://localhost:8000/analyze-base64" \
-   -H "Content-Type: application/json" \
-   -d '{"image": "your_base64_encoded_image_here"}'
  ```

- ### Using Python Test Client

- ```python
- from test_client import LabReportAPIClient
-
- client = LabReportAPIClient()
-
- # Health check
- health = client.health_check()
- print(health)
-
- # Analyze image
- result = client.analyze_image_file("path/to/your/lab_report.jpg")
- print(result)
  ```

- ### Using the Web Interface

- 1. Start the API server
- 2. Open `index.html` in your web browser
- 3. Drag and drop or select a lab report image
- 4. Click "Analyze Report" to get results

- ## Response Format

- Successful analysis returns:

- ```json
- {
-   "success": true,
-   "filename": "lab_report.jpg",
-   "analysis": {
-     "error": false,
-     "summary": "Brief summary of the lab report",
-     "key_findings": ["Finding 1", "Finding 2", "Finding 3"],
-     "interpretation": "Medical interpretation",
-     "note": "Disclaimer about medical advice",
-     "raw_response": "Complete AI response"
-   }
- }
  ```

- ## Configuration

- ### Environment Variables

- You can set these environment variables to customize the behavior:

- - `API_HOST`: Host to bind to (default: "0.0.0.0")
- - `API_PORT`: Port to bind to (default: 8000)
- - `HF_API_KEY`: Hugging Face API key (currently hardcoded in `lab_analyzer.py`)

- ### Updating API Key

- To use your own Hugging Face API key, modify the `lab_analyzer.py` file:

- ```python
- self.client = InferenceClient(
-     provider="nebius",
-     api_key="your_api_key_here",  # Replace with your API key
- )
- ```

- ## Development

- ### Running in Development Mode

- ```bash
- uvicorn main:app --reload --host 0.0.0.0 --port 8000
- ```

- ### Testing

- Run the test client:

- ```bash
- python test_client.py
- ```

- ### Adding New Features

- 1. Add new endpoints to `main.py`
- 2. Update models in `models.py` if needed
- 3. Extend the analyzer in `lab_analyzer.py`
- 4. Update documentation

- ## Deployment

- ### Docker (Optional)

- Create a `Dockerfile`:

- ```dockerfile
- FROM python:3.9-slim
-
- WORKDIR /app
- COPY requirements.txt .
- RUN pip install -r requirements.txt
-
- COPY . .
-
- EXPOSE 8000
- CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "8000"]
- ```

- Build and run:

- ```bash
- docker build -t lab-analysis-api .
- docker run -p 8000:8000 lab-analysis-api
- ```

- ### Production Considerations

- - Use environment variables for API keys
- - Set up proper CORS origins
- - Add rate limiting
- - Use HTTPS
- - Add authentication if needed
- - Set up logging and monitoring

- ## Troubleshooting

- ### Common Issues

- 1. **Import errors**: Make sure all dependencies are installed
- 2. **Port conflicts**: Change the port in the uvicorn command
- 3. **API key issues**: Verify your Hugging Face API key is valid
- 4. **Image format errors**: Ensure images are in supported formats

- ### Logs

- The application logs important events. Check console output for debugging information.

- ## License

- This project is for educational purposes. Please ensure you have proper licenses for any AI models used.

- ## Contributing

- 1. Fork the repository
- 2. Create a feature branch
- 3. Make your changes
- 4. Test thoroughly
- 5. Submit a pull request

- ---

- **Note**: This analysis is for educational purposes only and should not replace professional medical advice.

  ---
+ title: Lab Report Analysis AI
+ emoji: 🏥
  colorFrom: blue
  colorTo: green
+ sdk: gradio
+ sdk_version: 4.44.0
+ app_file: app.py
  pinned: false
  license: apache-2.0
+ short_description: AI-powered lab report analysis using Google AI Studio
  ---

+ # 🏥 Lab Report Analysis AI

+ An intelligent lab report analysis system that uses Google AI Studio's Gemini model to analyze medical lab reports and provide structured interpretations.

+ ## Features

+ - 📸 **Image Analysis**: Upload lab report images in various formats (JPG, PNG, TIFF, etc.)
+ - 🤖 **AI-Powered**: Uses Google's Gemini 2.0 Flash model for accurate analysis
+ - 📊 **Structured Output**: Provides organized summary, key findings, and interpretations
+ - ⚡ **Fast Processing**: Quick analysis with real-time results
+ - 🔒 **Secure**: Images are processed securely and not stored
+ - 🌐 **Web Interface**: Easy-to-use Gradio interface

+ ## 🚀 How to Use

+ 1. **Upload Image**: Click on the upload area and select your lab report image
+ 2. **Analyze**: Click the "Analyze Report" button
+ 3. **Review Results**: Get structured analysis with:
+    - Summary of the report
+    - Key findings and abnormal values
+    - Medical interpretation
+    - Important disclaimers

+ ## 🛠️ Technology Stack

+ - **Frontend**: Gradio for interactive web interface
+ - **Backend**: FastAPI for robust API handling
+ - **AI Model**: Google AI Studio (Gemini 2.0 Flash)
+ - **Image Processing**: PIL for image handling
+ - **Deployment**: Docker containerization for Hugging Face Spaces

+ ## ⚠️ Important Disclaimer

+ **This tool is for educational and informational purposes only. It should not be used as a substitute for professional medical advice, diagnosis, or treatment. Always consult with qualified healthcare professionals for medical concerns.**

+ ## 🏗️ Local Development

+ ### Prerequisites

+ - Python 3.11+
+ - Google AI Studio API key

+ ### Installation

+ 1. Clone the repository
+ 2. Install dependencies:
  ```bash
+    pip install -r requirements.txt
  ```
+ 3. Set your API key:
  ```bash
+    export GOOGLE_AI_STUDIO_API_KEY=your_api_key_here
  ```
+ 4. Run the application:
  ```bash
+    python app.py
  ```

+ ## 📁 Project Structure

  ```
+ ├── app.py             # Main Gradio application
+ ├── main.py            # FastAPI server
+ ├── lab_analyzer.py    # Core analysis logic
+ ├── models.py          # Data models
+ ├── requirements.txt   # Python dependencies
+ ├── Dockerfile         # Container configuration
+ └── README.md          # Documentation
  ```

+ ## 🙏 Acknowledgments

+ - Google AI Studio for providing the Gemini API
+ - Hugging Face for Spaces hosting
+ - Gradio for the web interface framework
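Besides the Gradio UI, the FastAPI backend in `main.py` exposes a `/analyze-base64` endpoint that accepts a JSON body with a base64-encoded `image` field. A minimal sketch of preparing such a payload (the helper name is illustrative; the JPEG/base64 conversion mirrors what `app.py` does before calling the analyzer):

```python
import base64
import io

from PIL import Image


def image_to_base64_jpeg(image: Image.Image) -> str:
    # Serialize the image as JPEG, then base64-encode it for the JSON body.
    # JPEG cannot store an alpha channel, so convert to RGB first.
    buffer = io.BytesIO()
    image.convert("RGB").save(buffer, format="JPEG")
    return base64.b64encode(buffer.getvalue()).decode("utf-8")


# Stand-in for a real lab report scan; POST this dict to /analyze-base64.
report = Image.new("RGB", (640, 480), "white")
payload = {"image": image_to_base64_jpeg(report)}
```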
app.py CHANGED
@@ -1,8 +1,177 @@
  import os
- import uvicorn
- from main import app

  if __name__ == "__main__":
-     # Hugging Face Spaces uses port 7860
-     port = int(os.environ.get("PORT", 7860))
-     uvicorn.run(app, host="0.0.0.0", port=port)

+ import gradio as gr
+ import base64
+ import asyncio
  import os
+ from lab_analyzer import LabReportAnalyzer
+ from PIL import Image
+ import io
+ import logging

+ # Configure logging
+ logging.basicConfig(level=logging.INFO)
+ logger = logging.getLogger(__name__)
+
+ # Initialize the lab analyzer
+ analyzer = LabReportAnalyzer()
+
+ async def analyze_lab_report(image):
+     """
+     Analyze a lab report image using the LabReportAnalyzer
+
+     Args:
+         image: PIL Image from Gradio interface
+
+     Returns:
+         Formatted analysis results
+     """
+     try:
+         if image is None:
+             return "❌ Please upload an image first."
+
+         # Convert PIL image to base64 (force RGB first: JPEG has no alpha channel,
+         # so saving an RGBA PNG upload directly would raise OSError)
+         buffer = io.BytesIO()
+         image.convert("RGB").save(buffer, format="JPEG")
+         image_b64 = base64.b64encode(buffer.getvalue()).decode("utf-8")
+
+         # Analyze the lab report
+         analysis_result = await analyzer.analyze_report(image_b64)
+
+         if analysis_result.get("error", False):
+             return f"❌ Error: {analysis_result.get('message', 'Unknown error occurred')}"
+
+         # Format the results for display
+         formatted_result = f"""
+ ## 📊 Lab Report Analysis Results
+
+ ### 📋 Summary
+ {analysis_result.get('summary', 'No summary available')}
+
+ ### 🔍 Key Findings
+ """
+
+         key_findings = analysis_result.get('key_findings', [])
+         if key_findings:
+             for finding in key_findings:
+                 formatted_result += f"• {finding}\n"
+         else:
+             formatted_result += "• No specific findings identified\n"
+
+         formatted_result += f"""
+ ### 💡 Interpretation
+ {analysis_result.get('interpretation', 'No interpretation available')}
+
+ ### ⚠️ Important Note
+ {analysis_result.get('note', 'This analysis is for educational purposes only and should not replace professional medical advice.')}
+
+ ---
+ *Analysis powered by Google AI Studio (Gemini 2.0 Flash)*
+ """
+
+         return formatted_result
+
+     except Exception as e:
+         logger.error(f"Error in analyze_lab_report: {str(e)}")
+         return f"❌ Analysis failed: {str(e)}"
+
+ def analyze_wrapper(image):
+     """Wrapper function to run async analysis in Gradio"""
+     return asyncio.run(analyze_lab_report(image))
+
+ # Create Gradio interface
+ with gr.Blocks(
+     theme=gr.themes.Soft(),
+     title="Lab Report Analysis AI",
+     css="""
+     .gradio-container {
+         max-width: 1200px !important;
+     }
+     .analysis-output {
+         font-family: 'Georgia', serif;
+         line-height: 1.6;
+     }
+     """
+ ) as demo:
+
+     gr.Markdown("""
+     # 🏥 Lab Report Analysis AI
+
+     Upload a lab report image and get an AI-powered analysis with key findings and interpretations.
+
+     **Features:**
+     - 📸 Image-to-text analysis using advanced AI
+     - 🔍 Structured medical report interpretation
+     - ⚡ Fast and accurate results
+     - 🔒 Secure processing (images are not stored)
+
+     **Supported formats:** JPG, JPEG, PNG, BMP, TIFF, WEBP
+     """)
+
+     with gr.Row():
+         with gr.Column(scale=1):
+             gr.Markdown("### 📤 Upload Lab Report")
+             image_input = gr.Image(
+                 type="pil",
+                 label="Lab Report Image",
+                 height=400
+             )
+
+             analyze_btn = gr.Button(
+                 "🔬 Analyze Report",
+                 variant="primary",
+                 size="lg"
+             )
+
+             gr.Markdown("""
+             ### 📝 Instructions:
+             1. Upload a clear image of your lab report
+             2. Click "Analyze Report" button
+             3. Wait for AI analysis results
+             4. Review the structured findings
+
+             **Note:** This tool is for educational purposes only.
+             Always consult healthcare professionals for medical advice.
+             """)
+
+         with gr.Column(scale=2):
+             gr.Markdown("### 📊 Analysis Results")
+             analysis_output = gr.Markdown(
+                 "Upload an image and click 'Analyze Report' to see results here.",
+                 elem_classes=["analysis-output"]
+             )
+
+     # Event handlers
+     analyze_btn.click(
+         fn=analyze_wrapper,
+         inputs=[image_input],
+         outputs=[analysis_output],
+         show_progress=True
+     )
+
+     # Example images section
+     gr.Markdown("""
+     ---
+     ### 🎯 Example Lab Reports
+     Try uploading sample lab reports to see how the analysis works!
+     """)
+
+     # Footer
+     gr.Markdown("""
+     ---
+     **Powered by:**
+     - 🤖 Google AI Studio (Gemini 2.0 Flash)
+     - ⚡ FastAPI Backend
+     - 🎨 Gradio Interface
+     - 🐳 Docker Containerization
+
+     **Privacy:** Your images are processed securely and not stored permanently.
+     """)

+ # Launch the app
  if __name__ == "__main__":
+     # For Hugging Face Spaces, use the default port 7860
+     demo.launch(
+         server_name="0.0.0.0",
+         server_port=7860,
+         share=False,
+         show_error=True
+     )
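The `analyze_wrapper` pattern in the new `app.py` bridges Gradio's synchronous event handlers into the async analyzer: `asyncio.run()` spins up a fresh event loop per call and drives the coroutine to completion. A self-contained sketch of the same bridge, with the network-bound analyzer stubbed out (the stub names are illustrative):

```python
import asyncio


async def fake_analyze(image_b64: str) -> dict:
    # Stand-in for LabReportAnalyzer.analyze_report: yield control once,
    # as a real HTTP round-trip to the model API would.
    await asyncio.sleep(0)
    return {"error": False, "summary": f"analyzed {len(image_b64)} chars of input"}


def analyze_wrapper(image_b64: str) -> dict:
    # Gradio calls handlers synchronously; asyncio.run() runs the coroutine
    # on its own event loop and returns its result.
    return asyncio.run(fake_analyze(image_b64))


result = analyze_wrapper("abc123")  # -> {'error': False, 'summary': 'analyzed 6 chars of input'}
```

This works because Gradio invokes the handler on a worker thread with no running event loop; inside an already-async context, `asyncio.run()` would raise and you would `await` the coroutine directly instead.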
lab_analyzer.py CHANGED
@@ -1,26 +1,22 @@
  import base64
- import os
- from huggingface_hub import InferenceClient
  import asyncio
  from typing import Dict, Any
  import logging
- from dotenv import load_dotenv
-
- # Load environment variables from .env file
- load_dotenv()

  logger = logging.getLogger(__name__)

  class LabReportAnalyzer:
-     """Lab Report Analysis service using Hugging Face Inference Client"""

      def __init__(self):
-         """Initialize the analyzer with Hugging Face client"""
-         self.client = InferenceClient(
-             token=os.getenv("HUGGINGFACE_API_KEY", "your-api-key-here"),
-         )
-         self.model = "google/gemma-3-27b-it"

      async def analyze_report(self, image_b64: str) -> Dict[str, Any]:
          """
@@ -45,7 +41,7 @@ class LabReportAnalyzer:
          )

          # Extract and parse the response
-         analysis_text = completion.choices[0].message.content.strip()

          # Parse the structured response
          parsed_result = self._parse_analysis_result(analysis_text)
@@ -61,24 +57,45 @@ class LabReportAnalyzer:
          }

      def _run_inference(self, image_b64: str, prompt: str):
-         """Run the Hugging Face inference synchronously"""
-         return self.client.chat.completions.create(
-             model=self.model,
-             messages=[
                  {
-                     "role": "user",
-                     "content": [
-                         {"type": "text", "text": prompt},
                          {
-                             "type": "image_url",
-                             "image_url": {
-                                 "url": f"data:image/jpeg;base64,{image_b64}"
                              }
                          }
                      ]
                  }
              ],
-         )

      def _get_analysis_prompt(self) -> str:
          """Get the structured analysis prompt"""
@@ -141,27 +158,24 @@ Keep it short, clear, and professional — like a medical summary written for qu
          if not line:
              continue

-         # Identify sections (handle both plain text and markdown formats)
-         if line.startswith('Summary:') or line.startswith('**Summary:**'):
              current_section = 'summary'
-             result['summary'] = line.replace('**Summary:**', '').replace('Summary:', '').strip()
-         elif line.startswith('Key Findings:') or line.startswith('**Key Findings:**'):
              current_section = 'key_findings'
-         elif line.startswith('Interpretation:') or line.startswith('**Interpretation:**'):
              current_section = 'interpretation'
-             result['interpretation'] = line.replace('**Interpretation:**', '').replace('Interpretation:', '').strip()
-         elif line.startswith('Note:') or line.startswith('**Note:**'):
              current_section = 'note'
-             result['note'] = line.replace('**Note:**', '').replace('Note:', '').strip()
          else:
              # Continue previous section
              if current_section == 'summary' and not result['summary']:
                  result['summary'] = line
-             elif current_section == 'key_findings' and (line.startswith(('•', '-', '*')) or line.strip().startswith('*')):
-                 # Handle both regular bullets and markdown-style bullets
-                 clean_line = line.lstrip('•-* ').strip()
-                 if clean_line:
-                     result['key_findings'].append(clean_line)
              elif current_section == 'interpretation' and not result['interpretation']:
                  result['interpretation'] = line
              elif current_section == 'note' and not result['note']:

  import base64
+ import requests
  import asyncio
  from typing import Dict, Any
  import logging
+ import json
+ import os

  logger = logging.getLogger(__name__)

  class LabReportAnalyzer:
+     """Lab Report Analysis service using Google AI Studio API"""

      def __init__(self):
+         """Initialize the analyzer with Google AI Studio API"""
+         # Read the API key from the environment; never commit a real key
+         self.api_key = os.getenv("GOOGLE_AI_STUDIO_API_KEY", "your-api-key-here")
+         self.base_url = "https://generativelanguage.googleapis.com/v1beta/models"
+         self.model = "gemini-2.0-flash"  # Using Gemini 2.0 Flash for vision capabilities

      async def analyze_report(self, image_b64: str) -> Dict[str, Any]:
          """

          )

          # Extract and parse the response
+         analysis_text = completion.get('candidates', [{}])[0].get('content', {}).get('parts', [{}])[0].get('text', '').strip()

          # Parse the structured response
          parsed_result = self._parse_analysis_result(analysis_text)

          }

      def _run_inference(self, image_b64: str, prompt: str):
+         """Run the Google AI Studio API inference synchronously"""
+         url = f"{self.base_url}/{self.model}:generateContent"
+
+         headers = {
+             "Content-Type": "application/json",
+         }
+
+         params = {
+             "key": self.api_key
+         }
+
+         payload = {
+             "contents": [
                  {
+                     "parts": [
+                         {"text": prompt},
                          {
+                             "inline_data": {
+                                 "mime_type": "image/jpeg",
+                                 "data": image_b64
                              }
                          }
                      ]
                  }
              ],
+             "generationConfig": {
+                 "temperature": 0.1,
+                 "topP": 0.8,
+                 "topK": 10,
+                 "maxOutputTokens": 2048,
+             }
+         }
+
+         response = requests.post(url, headers=headers, params=params, json=payload, timeout=30)
+
+         if response.status_code == 200:
+             return response.json()
+         else:
+             raise Exception(f"API request failed with status {response.status_code}: {response.text}")

      def _get_analysis_prompt(self) -> str:
          """Get the structured analysis prompt"""

          if not line:
              continue

+         # Identify sections
+         if line.startswith('Summary:'):
              current_section = 'summary'
+             result['summary'] = line.replace('Summary:', '').strip()
+         elif line.startswith('Key Findings:'):
              current_section = 'key_findings'
+         elif line.startswith('Interpretation:'):
              current_section = 'interpretation'
+             result['interpretation'] = line.replace('Interpretation:', '').strip()
+         elif line.startswith('Note:'):
              current_section = 'note'
+             result['note'] = line.replace('Note:', '').strip()
          else:
              # Continue previous section
              if current_section == 'summary' and not result['summary']:
                  result['summary'] = line
+             elif current_section == 'key_findings' and line.startswith(('•', '-', '*')):
+                 result['key_findings'].append(line.lstrip('•-* '))
              elif current_section == 'interpretation' and not result['interpretation']:
                  result['interpretation'] = line
              elif current_section == 'note' and not result['note']:
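The parsing hunks above walk the model's plain-text reply line by line, switching sections on the `Summary:` / `Key Findings:` / `Interpretation:` / `Note:` labels. A standalone sketch of that state machine run on a sample reply (continuation handling is simplified relative to `_parse_analysis_result`):

```python
def parse_sections(text: str) -> dict:
    # Minimal section parser mirroring lab_analyzer's approach: a label line
    # switches the current section; bullet lines accumulate under Key Findings.
    result = {"summary": "", "key_findings": [], "interpretation": "", "note": ""}
    current = None
    for line in text.splitlines():
        line = line.strip()
        if not line:
            continue
        if line.startswith("Summary:"):
            current = "summary"
            result["summary"] = line[len("Summary:"):].strip()
        elif line.startswith("Key Findings:"):
            current = "key_findings"
        elif line.startswith("Interpretation:"):
            current = "interpretation"
            result["interpretation"] = line[len("Interpretation:"):].strip()
        elif line.startswith("Note:"):
            current = "note"
            result["note"] = line[len("Note:"):].strip()
        elif current == "key_findings" and line.startswith(("•", "-", "*")):
            result["key_findings"].append(line.lstrip("•-* "))
    return result


sample = (
    "Summary: Routine CBC, values largely within range\n"
    "Key Findings:\n"
    "- Hemoglobin 13.5 g/dL (normal)\n"
    "- WBC 11.2 x10^9/L (slightly elevated)\n"
    "Interpretation: Mild leukocytosis; correlate clinically.\n"
    "Note: Educational use only."
)
parsed = parse_sections(sample)
```

Note that a label-prefix parser like this is sensitive to formatting drift in the model output (for example `**Summary:**` in Markdown), which is exactly why the removed version also matched the bolded variants.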
main.py CHANGED
@@ -6,7 +6,6 @@ import io
  from PIL import Image
  from lab_analyzer import LabReportAnalyzer
  import logging
- import os

  # Configure logging
  logging.basicConfig(level=logging.INFO)
@@ -53,32 +52,27 @@ async def analyze_lab_report(file: UploadFile = File(...)):
          JSON response with analysis results
      """
      try:
-         # Read file contents first
          contents = await file.read()
          if len(contents) == 0:
              raise HTTPException(status_code=400, detail="Empty file uploaded")

-         # Validate image can be opened (more reliable than content_type)
          try:
              image = Image.open(io.BytesIO(contents))
              image.verify()  # Verify it's a valid image
-
-             # Reset file pointer for re-reading
-             contents_for_analysis = contents
-
          except Exception as e:
-             # Check if content type suggests it might be an image
-             allowed_types = ['image/', 'application/octet-stream']
-             if file.content_type and not any(t in file.content_type for t in allowed_types):
-                 raise HTTPException(
-                     status_code=400,
-                     detail=f"File must be an image (jpg, jpeg, png, bmp, tiff, webp). Received: {file.content_type}"
-                 )
-             else:
-                 raise HTTPException(status_code=400, detail=f"Invalid image file: {str(e)}")

          # Convert to base64 for analysis
-         image_b64 = base64.b64encode(contents_for_analysis).decode("utf-8")

          # Analyze the lab report
          logger.info(f"Analyzing lab report: {file.filename}")
@@ -159,6 +153,4 @@ async def analyze_lab_base64_api(data: dict):

  if __name__ == "__main__":
      import uvicorn
-     # Use port 7860 for Hugging Face Spaces, fallback to 8000 for local development
-     port = int(os.getenv("PORT", 7860))
-     uvicorn.run("main:app", host="0.0.0.0", port=port, reload=False)

  from PIL import Image
  from lab_analyzer import LabReportAnalyzer
  import logging

  # Configure logging
  logging.basicConfig(level=logging.INFO)

          JSON response with analysis results
      """
      try:
+         # Validate file type
+         if not file.content_type.startswith('image/'):
+             raise HTTPException(
+                 status_code=400,
+                 detail="File must be an image (jpg, jpeg, png, bmp, tiff, webp)"
+             )
+
+         # Read and validate image
          contents = await file.read()
          if len(contents) == 0:
              raise HTTPException(status_code=400, detail="Empty file uploaded")

+         # Validate image can be opened
          try:
              image = Image.open(io.BytesIO(contents))
              image.verify()  # Verify it's a valid image
          except Exception as e:
+             raise HTTPException(status_code=400, detail=f"Invalid image file: {str(e)}")

          # Convert to base64 for analysis
+         image_b64 = base64.b64encode(contents).decode("utf-8")

          # Analyze the lab report
          logger.info(f"Analyzing lab report: {file.filename}")

  if __name__ == "__main__":
      import uvicorn
+     uvicorn.run("main:app", host="0.0.0.0", port=8000, reload=True)
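The upload path above now rejects non-image content types up front and then lets Pillow's `verify()` confirm the bytes actually decode as an image. The same validation step in isolation (helper name is illustrative; `verify()` raises on corrupt or non-image data without fully decoding the pixels):

```python
import io

from PIL import Image


def is_valid_image(contents: bytes) -> bool:
    # Mirror main.py's check: Pillow raises on truncated or non-image bytes.
    try:
        Image.open(io.BytesIO(contents)).verify()
        return True
    except Exception:
        return False


# A real PNG passes; arbitrary bytes do not.
buf = io.BytesIO()
Image.new("RGB", (8, 8), "blue").save(buf, format="PNG")
good = is_valid_image(buf.getvalue())
bad = is_valid_image(b"definitely not an image")
```

One caveat: `verify()` invalidates the `Image` object, so code that later needs the pixels must reopen the image from the original bytes, as `main.py` does by keeping `contents` around.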
 
 
requirements.txt CHANGED
@@ -1,12 +1,9 @@
- fastapi==0.104.1
- uvicorn[standard]==0.24.0
- pydantic==2.5.0
- python-multipart==0.0.6
- requests==2.31.0
- python-dotenv==1.0.0
- Pillow==10.0.0
- huggingface-hub==0.24.0
  fastapi>=0.104.0
  uvicorn>=0.24.0
  python-multipart>=0.0.6
- pydantic>=2.4.0

+ gradio>=4.0.0
+ Pillow>=10.0.0
+ numpy>=1.26.0
+ requests>=2.31.0
  fastapi>=0.104.0
  uvicorn>=0.24.0
  python-multipart>=0.0.6
+ pydantic>=2.4.0
+ aiofiles>=23.1.0