jmisak committed on
Commit
1424dcc
·
verified ·
1 Parent(s): 4858e1f

Update README.md

Files changed (1)
  1. README.md +170 -170
README.md CHANGED
@@ -1,170 +1,170 @@
- ---
- title: ConversAI - Qualitative Research Assistant
- emoji: 🔬
- colorFrom: blue
- colorTo: purple
- sdk: gradio
- sdk_version: 5.45.0
- app_file: app.py
- pinned: false
- license: mit
- ---
-
- # ConversAI - AI-Powered Qualitative Research Assistant
-
- Battle the blank page, reach global audiences, and uncover insights with AI assistance.
-
- ---
-
- > **✨ UPDATED (Nov 2025):** Now uses **local transformers** with **Google Flan-T5** models - Fast, reliable, and **completely FREE**! No API dependencies, runs directly on HuggingFace Spaces.
-
- ---
-
- ## 🌟 Features
-
- ### 📝 Survey Generation
- - Generate professional surveys from simple outlines
- - Follow industry best practices automatically
- - Choose from qualitative, quantitative, or mixed methods
- - Customize number of questions and target audience
-
- ### 🌍 Survey Translation
- - Translate surveys to 18+ languages
- - Maintain cultural appropriateness and meaning
- - Reach global audiences effortlessly
- - Batch translation support
-
- ### 📊 Data Analysis
- - AI-assisted thematic analysis
- - Sentiment analysis and emotional insights
- - Automatic pattern and trend detection
- - Generate actionable insights and recommendations
- - Export detailed analysis reports
-
- ## 🚀 Quick Start
-
- **On HuggingFace Spaces:** Works immediately with zero configuration! Uses the free HF Inference API.
-
- **Workflow:**
- 1. **Generate a Survey**: Start with an outline or topic description
- 2. **Translate**: Select target languages to reach global audiences
- 3. **Collect Responses**: Use the generated survey with your participants
- 4. **Analyze**: Upload responses to uncover key findings and trends
-
- ## 🔧 Configuration
-
- ### Default: Local Transformers (Completely FREE!)
-
- **✨ Zero configuration needed!** ConversAI works out-of-the-box on HuggingFace Spaces using local model loading.
-
- **Default Model:** google/flan-t5-large
- - ✅ **100% Free** - No API keys, no costs, ever
- - ✅ **Good quality** - 1.2GB model, excellent at following instructions
- - ✅ **Fast after loading** - Typically 3-8 seconds per request after initial load
- - ✅ **No API dependencies** - Runs entirely on your Space's compute
- - ✅ **Private** - All processing happens locally, nothing sent to external APIs
- - ✅ **Reliable** - Google's instruction-tuned model, battle-tested
-
- **Setup for HuggingFace Spaces:**
- - Just deploy - models download automatically on first run
- - **No API keys or tokens required!**
- - Models are cached after first download for faster subsequent loads
-
- ### Alternative Free Models
-
- You can try different free models by setting the `LLM_MODEL` environment variable:
-
- **Recommended Free Models (Local Transformers):**
-
- | Model | Best For | Speed | Quality | Model Size |
- |-------|----------|-------|---------|------------|
- | **google/flan-t5-base** | Testing - fastest | ⚡⚡⚡ Very Fast | ⭐⭐ Basic | 250MB |
- | **google/flan-t5-large** (default) | **Recommended** - balanced | ⚡⚡ Fast | ⭐⭐⭐ Good | 1.2GB |
- | **google/flan-t5-xl** | Better quality | ⚡ Medium | ⭐⭐⭐⭐ Excellent | 3GB |
- | **google/flan-t5-xxl** | Maximum quality | ⚡ Slower | ⭐⭐⭐⭐⭐ Best | 11GB |
-
- **Note:** Flan-T5 models are Google's instruction-tuned models, specifically designed for following instructions. They run locally with transformers library.
-
- **To change model:**
- ```bash
- # In Space Settings → Variables
- LLM_MODEL=google/flan-t5-large # Better quality
-
- # Or for maximum quality (requires more memory)
- LLM_MODEL=google/flan-t5-xl
- ```
-
- **Why Local Transformers?**
- - ✅ **No API dependencies** - runs entirely on your Space
- - ✅ **No 404 errors** - no network issues
- - ✅ **Fast after loading** - models cached in memory
- - ✅ **Instruction-tuned** - designed for following prompts
- - ✅ **Privacy** - all processing happens locally
-
- ### Tips for Best Performance with Local Models
-
- 1. **Default model (flan-t5-large) is recommended** - Good balance of quality and speed
- 2. **First load takes time** - Model downloads and loads (~2-3 minutes for large)
- 3. **Subsequent requests are fast** - Model stays in memory (3-8 seconds)
- 4. **For simple testing** - Use flan-t5-base (faster loading)
- 5. **For best quality** - Use flan-t5-xl or xxl (requires more memory)
- 6. **Keep prompts clear** - Simpler outlines work better with smaller models
-
- ## 📦 Installation
-
- ```bash
- # Install dependencies
- pip install -r requirements.txt
-
- # Check environment setup (optional but recommended)
- python check_env.py
-
- # Run the app
- python app.py
- ```
-
- ## 🏗️ Architecture
-
- ConversAI is built with a modular architecture:
-
- - **llm_backend.py** - Unified LLM interface supporting multiple providers
- - **survey_generator.py** - AI-powered survey generation
- - **survey_translator.py** - Multi-language translation engine
- - **data_analyzer.py** - Qualitative data analysis and insights
- - **app.py** - Gradio-based web interface
- - **export_utils.py** - Export to JSON, CSV, Markdown
-
- ## 📄 Data Privacy
-
- - All processing is done through your configured LLM provider
- - No data is stored permanently by this application
- - Survey data and responses remain in your control
- - Suitable for sensitive research projects
-
- ## 🤝 Contributing
-
- Contributions are welcome! This is a production-grade application designed for real-world qualitative research.
-
- ## 📝 License
-
- MIT License - Feel free to use for research and commercial purposes.
-
- ---
-
- ## 📚 Documentation
-
- **New to ConversAI?** Start with **[USER_GUIDE.md](USER_GUIDE.md)** for a complete walkthrough.
-
- **Quick Links:**
- - 📖 [Complete User Guide](USER_GUIDE.md) - How to use ConversAI (START HERE)
- - ⚡ [Quick Start for HF Spaces](QUICK_START_HF_SPACES.md) - 5-minute deployment
- - 🔧 [Troubleshooting](TROUBLESHOOTING.md) - Common issues and solutions
- - 🆓 [Free Models Guide](FREE_MODELS.md) - Best free models to use
-
- **Diagnostic Tools:**
- - Run `python check_env.py` - Check your environment setup
- - Run `python test_hf_backend.py` - Test HuggingFace connection
-
- ---
-
- Built with ❤️ using Gradio and state-of-the-art open-source LLMs
 
+ ---
+ title: ProjectEcho - Qualitative Research Assistant
+ emoji: 🔬
+ colorFrom: blue
+ colorTo: purple
+ sdk: gradio
+ sdk_version: 5.49.1
+ app_file: app.py
+ pinned: false
+ license: mit
+ ---
+
+ # ConversAI - AI-Powered Qualitative Research Assistant
+
+ Battle the blank page, reach global audiences, and uncover insights with AI assistance.
+
+ ---
+
+ > **✨ UPDATED (Nov 2025):** Now uses **local transformers** with **Google Flan-T5** models - fast, reliable, and **completely FREE**! No API dependencies; runs directly on HuggingFace Spaces.
+
+ ---
+
+ ## 🌟 Features
+
+ ### 📝 Survey Generation
+ - Generate professional surveys from simple outlines
+ - Follow industry best practices automatically
+ - Choose from qualitative, quantitative, or mixed methods
+ - Customize the number of questions and the target audience
+
+ ### 🌍 Survey Translation
+ - Translate surveys into 18+ languages
+ - Maintain cultural appropriateness and meaning
+ - Reach global audiences effortlessly
+ - Batch translation support
+
+ ### 📊 Data Analysis
+ - AI-assisted thematic analysis
+ - Sentiment analysis and emotional insights
+ - Automatic pattern and trend detection
+ - Generate actionable insights and recommendations
+ - Export detailed analysis reports
+
+ ## 🚀 Quick Start
+
+ **On HuggingFace Spaces:** Works immediately with zero configuration! The local Flan-T5 backend needs no API keys or external services.
+
+ **Workflow:**
+ 1. **Generate a Survey**: Start with an outline or topic description
+ 2. **Translate**: Select target languages to reach global audiences
+ 3. **Collect Responses**: Use the generated survey with your participants
+ 4. **Analyze**: Upload responses to uncover key findings and trends
+
+ ## 🔧 Configuration
+
+ ### Default: Local Transformers (Completely FREE!)
+
+ **✨ Zero configuration needed!** ConversAI works out of the box on HuggingFace Spaces using local model loading.
+
+ **Default Model:** google/flan-t5-large
+ - ✅ **100% Free** - No API keys, no costs, ever
+ - ✅ **Good quality** - 1.2GB model, excellent at following instructions
+ - ✅ **Fast after loading** - Typically 3-8 seconds per request after the initial load
+ - ✅ **No API dependencies** - Runs entirely on your Space's compute
+ - ✅ **Private** - All processing happens locally; nothing is sent to external APIs
+ - ✅ **Reliable** - Google's battle-tested, instruction-tuned model
+
+ **Setup for HuggingFace Spaces:**
+ - Just deploy - models download automatically on first run
+ - **No API keys or tokens required!**
+ - Models are cached after the first download for faster subsequent loads
+
+ ### Alternative Free Models
+
+ You can try different free models by setting the `LLM_MODEL` environment variable:
+
+ **Recommended Free Models (Local Transformers):**
+
+ | Model | Best For | Speed | Quality | Model Size |
+ |-------|----------|-------|---------|------------|
+ | **google/flan-t5-base** | Testing - fastest | ⚡⚡⚡ Very Fast | ⭐⭐ Basic | 250MB |
+ | **google/flan-t5-large** (default) | **Recommended** - balanced | ⚡⚡ Fast | ⭐⭐⭐ Good | 1.2GB |
+ | **google/flan-t5-xl** | Better quality | ⚡ Medium | ⭐⭐⭐⭐ Excellent | 3GB |
+ | **google/flan-t5-xxl** | Maximum quality | ⚡ Slower | ⭐⭐⭐⭐⭐ Best | 11GB |
+
+ **Note:** Flan-T5 models are Google's instruction-tuned models, designed specifically for following instructions. They run locally via the `transformers` library.
+
+ **To change the model:**
+ ```bash
+ # In Space Settings → Variables
+ LLM_MODEL=google/flan-t5-large  # Default - balanced quality and speed
+
+ # Or, for maximum quality (requires more memory)
+ LLM_MODEL=google/flan-t5-xl
+ ```
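As a sketch of how a backend might pick up this variable: the `LLM_MODEL` name and the `google/flan-t5-large` default come from the docs above, but the loading code below is an illustrative assumption, not necessarily what `llm_backend.py` actually does. Note the first call downloads the model weights.

```python
import os
from transformers import pipeline

# Read the model name from the LLM_MODEL environment variable documented
# above, falling back to the documented default.
model_name = os.environ.get("LLM_MODEL", "google/flan-t5-large")

# Flan-T5 is an encoder-decoder model, so it uses the
# text2text-generation pipeline task.
generator = pipeline("text2text-generation", model=model_name)

# Hypothetical prompt, just to show the call shape.
prompt = "Write one open-ended survey question about remote work."
result = generator(prompt, max_new_tokens=64)
print(result[0]["generated_text"])
```

Because the variable is read at startup, changing it in Space Settings and restarting the Space is enough to switch models.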
+
+ **Why Local Transformers?**
+ - ✅ **No API dependencies** - runs entirely on your Space
+ - ✅ **No 404 errors** - no flaky calls to remote endpoints
+ - ✅ **Fast after loading** - models are cached in memory
+ - ✅ **Instruction-tuned** - designed for following prompts
+ - ✅ **Privacy** - all processing happens locally
+
+ ### Tips for Best Performance with Local Models
+
+ 1. **The default model (flan-t5-large) is recommended** - Good balance of quality and speed
+ 2. **First load takes time** - The model downloads and loads (~2-3 minutes for large)
+ 3. **Subsequent requests are fast** - The model stays in memory (3-8 seconds per request)
+ 4. **For simple testing** - Use flan-t5-base (faster loading)
+ 5. **For best quality** - Use flan-t5-xl or xxl (requires more memory)
+ 6. **Keep prompts clear** - Simpler outlines work better with smaller models
+
+ ## 📦 Installation
+
+ ```bash
+ # Install dependencies
+ pip install -r requirements.txt
+
+ # Check environment setup (optional but recommended)
+ python check_env.py
+
+ # Run the app
+ python app.py
+ ```
+
+ ## 🏗️ Architecture
+
+ ConversAI is built with a modular architecture:
+
+ - **llm_backend.py** - Unified LLM interface supporting multiple providers
+ - **survey_generator.py** - AI-powered survey generation
+ - **survey_translator.py** - Multi-language translation engine
+ - **data_analyzer.py** - Qualitative data analysis and insights
+ - **app.py** - Gradio-based web interface
+ - **export_utils.py** - Export to JSON, CSV, and Markdown
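The export step at the end of that list can be pictured with a minimal sketch. The `analysis` dict and file names here are made up for illustration; this is not `export_utils.py`'s real interface, only the three output formats it targets.

```python
import csv
import json

# Hypothetical analysis result; the structure actually produced by
# data_analyzer.py is an assumption here.
analysis = {"themes": "flexibility; isolation", "sentiment": "mixed"}

# JSON export
with open("analysis.json", "w", encoding="utf-8") as f:
    json.dump(analysis, f, indent=2)

# CSV export: one field/value row per entry
with open("analysis.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.writer(f)
    writer.writerow(["field", "value"])
    for field, value in analysis.items():
        writer.writerow([field, value])

# Markdown export: a heading plus one bullet per entry
lines = ["# Analysis Report"] + [f"- **{k}**: {v}" for k, v in analysis.items()]
with open("analysis.md", "w", encoding="utf-8") as f:
    f.write("\n".join(lines))
```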
+
+ ## 📄 Data Privacy
+
+ - All processing is done by your configured LLM backend (local Flan-T5 by default)
+ - No data is stored permanently by this application
+ - Survey data and responses remain under your control
+ - Suitable for sensitive research projects
+
+ ## 🤝 Contributing
+
+ Contributions are welcome! This is a production-grade application designed for real-world qualitative research.
+
+ ## 📝 License
+
+ MIT License - feel free to use it for research and commercial purposes.
+
+ ---
+
+ ## 📚 Documentation
+
+ **New to ConversAI?** Start with **[USER_GUIDE.md](USER_GUIDE.md)** for a complete walkthrough.
+
+ **Quick Links:**
+ - 📖 [Complete User Guide](USER_GUIDE.md) - How to use ConversAI (START HERE)
+ - ⚡ [Quick Start for HF Spaces](QUICK_START_HF_SPACES.md) - 5-minute deployment
+ - 🔧 [Troubleshooting](TROUBLESHOOTING.md) - Common issues and solutions
+ - 🆓 [Free Models Guide](FREE_MODELS.md) - The best free models to use
+
+ **Diagnostic Tools:**
+ - Run `python check_env.py` - Check your environment setup
+ - Run `python test_hf_backend.py` - Test the HuggingFace connection
+
+ ---
+
+ Built with ❤️ using Gradio and state-of-the-art open-source LLMs