Starfish55 commited on
Commit
23c518c
Β·
verified Β·
1 Parent(s): d72d967

Upload 8 files

Browse files
Files changed (5) hide show
  1. FINGPT_MIGRATION.md +92 -0
  2. UPLOAD_GUIDE.md +103 -0
  3. app.py +95 -79
  4. requirements.txt +5 -1
  5. test_fingpt_integration.py +77 -0
FINGPT_MIGRATION.md ADDED
@@ -0,0 +1,92 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ # FinGPT Migration Guide
2
+
3
+ ## Overview
4
+ This document describes the migration from Google Gemini 2.5 Pro to FinGPT model for the FinRobot Forecaster application.
5
+
6
+ ## Changes Made
7
+
8
+ ### 1. Dependencies Updated
9
+ - **Removed**: `google-generativeai>=0.3.0`
10
+ - **Added**:
11
+ - `transformers>=4.30.0`
12
+ - `torch>=2.0.0`
13
+ - `accelerate>=0.20.0`
14
+ - `peft>=0.4.0`
15
+ - `bitsandbytes>=0.39.0`
16
+
17
+ ### 2. Model Configuration
18
+ - **Old**: `GEMINI_MODEL = "gemini-2.5-pro"`
19
+ - **New**: `FINGPT_MODEL_NAME = "Starfish55/fingpt-complete"`
20
+
21
+ ### 3. API Key Changes
22
+ - **Old**: `GOOGLE_API_KEYS` environment variable
23
+ - **New**: `HF_TOKEN` environment variable for Hugging Face access
24
+
25
+ ### 4. Model Loading
26
+ - **Old**: Google Generative AI client initialization
27
+ - **New**: Hugging Face Transformers with BitsAndBytesConfig for memory efficiency
28
+
29
+ ### 5. Inference Function
30
+ - **Old**: `genai.GenerativeModel.generate_content()`
31
+ - **New**: `model.generate()` with custom tokenization and decoding
32
+
33
+ ## Environment Variables Required
34
+
35
+ ### For Hugging Face Spaces:
36
+ ```bash
37
+ HF_TOKEN=your_hugging_face_token_here
38
+ FINNHUB_KEYS=your_finnhub_api_keys_here
39
+ RAPIDAPI_KEYS=your_rapidapi_keys_here
40
+ ```
41
+
42
+ ## Model Features
43
+
44
+ ### FinGPT Advantages:
45
+ 1. **Specialized for Finance**: Trained specifically on financial data
46
+ 2. **Open Source**: No API rate limits or costs
47
+ 3. **Memory Efficient**: Uses 4-bit quantization
48
+ 4. **Real-time Updates**: Can be fine-tuned with latest financial data
49
+
50
+ ### Performance Considerations:
51
+ - **Memory Usage**: ~4-8GB GPU memory (with quantization)
52
+ - **Inference Speed**: Slower than API calls but more reliable
53
+ - **Model Size**: ~7B parameters (much smaller than Gemini)
54
+
55
+ ## Testing
56
+
57
+ Run the test script to verify the integration:
58
+ ```bash
59
+ python test_fingpt_integration.py
60
+ ```
61
+
62
+ ## Deployment Notes
63
+
64
+ 1. **Hugging Face Spaces**: Ensure `HF_TOKEN` is set in secrets
65
+ 2. **GPU Requirements**: Requires GPU with at least 8GB VRAM
66
+ 3. **Model Loading**: First run may take longer due to model download
67
+ 4. **Fallback**: App will use mock responses if model fails to load
68
+
69
+ ## Troubleshooting
70
+
71
+ ### Common Issues:
72
+ 1. **Out of Memory**: Reduce `MAX_LENGTH` or use CPU inference
73
+ 2. **Model Loading Fails**: Check `HF_TOKEN` and internet connection
74
+ 3. **Slow Inference**: Consider using smaller model or CPU inference
75
+
76
+ ### Debug Mode:
77
+ Set `debug=True` in the app launch to see detailed error messages.
78
+
79
+ ## Migration Benefits
80
+
81
+ 1. **Cost Effective**: No API costs for inference
82
+ 2. **Privacy**: Data stays local, no external API calls
83
+ 3. **Reliability**: No rate limits or API downtime
84
+ 4. **Customization**: Can fine-tune for specific financial tasks
85
+ 5. **Transparency**: Full control over model behavior
86
+
87
+ ## Next Steps
88
+
89
+ 1. Test the application with real financial data
90
+ 2. Fine-tune the model if needed for specific use cases
91
+ 3. Monitor performance and adjust parameters
92
+ 4. Consider implementing model caching for faster startup
UPLOAD_GUIDE.md ADDED
@@ -0,0 +1,103 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ # HΖ°α»›ng dαΊ«n Upload lΓͺn Hugging Face Space
2
+
3
+ ## πŸ“‹ CΓ‘c file cαΊ§n thiαΊΏt Δ‘Γ£ được tαΊ‘o
4
+
5
+ βœ… **app.py** - File chΓ­nh cα»§a α»©ng dα»₯ng
6
+ βœ… **README.md** - MΓ΄ tαΊ£ Space vα»›i metadata Hugging Face
7
+ βœ… **requirements.txt** - Dependencies cαΊ§n thiαΊΏt
8
+ βœ… **config.json** - CαΊ₯u hΓ¬nh Space
9
+ βœ… **.gitignore** - LoαΊ‘i trα»« file khΓ΄ng cαΊ§n thiαΊΏt
10
+
11
+ ## πŸš€ CΓ‘ch upload lΓͺn Hugging Face Space
12
+
13
+ ### PhΖ°Ζ‘ng phΓ‘p 1: Sα»­ dα»₯ng Git (KhuyαΊΏn nghα»‹)
14
+
15
+ 1. **TαΊ‘o Space mα»›i trΓͺn Hugging Face:**
16
+ - Truy cαΊ­p: https://huggingface.co/new-space
17
+ - Đặt tΓͺn: `your-username/finhigh` (thay your-username bαΊ±ng tΓͺn cα»§a bαΊ‘n)
18
+ - Chọn SDK: Gradio
19
+ - Chọn License: MIT
20
+ - TαΊ‘o Space
21
+
22
+ 2. **Clone Space về mÑy:**
23
+ ```bash
24
+ git clone https://huggingface.co/spaces/your-username/finhigh
25
+ cd finhigh
26
+ ```
27
+
28
+ 3. **Copy files vΓ o Space:**
29
+ ```bash
30
+ # Copy tαΊ₯t cαΊ£ files tα»« FinHigh_HuggingFace_Upload/
31
+ cp -r ../FinHigh_HuggingFace_Upload/* .
32
+ ```
33
+
34
+ 4. **Commit vΓ  push:**
35
+ ```bash
36
+ git add .
37
+ git commit -m "Initial commit: FinHigh Stock Prediction App"
38
+ git push
39
+ ```
40
+
41
+ ### PhΖ°Ζ‘ng phΓ‘p 2: Upload trα»±c tiαΊΏp
42
+
43
+ 1. **TαΊ‘o Space mα»›i trΓͺn Hugging Face**
44
+ 2. **Upload tα»«ng file:**
45
+ - Upload `app.py` lΓ m file chΓ­nh
46
+ - Upload `requirements.txt`
47
+ - Upload `README.md`
48
+ - Upload `config.json` (nαΊΏu cαΊ§n)
49
+
50
+ ## βš™οΈ ThiαΊΏt lαΊ­p API Keys
51
+
52
+ Sau khi upload, cαΊ§n thiαΊΏt lαΊ­p cΓ‘c API keys trong Settings cα»§a Space:
53
+
54
+ 1. **VΓ o Settings cα»§a Space**
55
+ 2. **ThΓͺm cΓ‘c Secrets:**
56
+ - `FINNHUB_KEYS`: API keys cho Finnhub (phΓ’n cΓ‘ch bαΊ±ng dΓ²ng mα»›i)
57
+ - `RAPIDAPI_KEYS`: API keys cho RapidAPI (phΓ’n cΓ‘ch bαΊ±ng dΓ²ng mα»›i)
58
+ - `GOOGLE_API_KEYS`: API keys cho Google Generative AI (phΓ’n cΓ‘ch bαΊ±ng dΓ²ng mα»›i)
59
+
60
+ ## πŸ”§ CαΊ₯u hΓ¬nh Space
61
+
62
+ - **Hardware**: CPU (Δ‘Γ£ cαΊ₯u hΓ¬nh trong config.json)
63
+ - **Memory**: 2GB
64
+ - **Disk**: 10GB
65
+ - **SDK**: Gradio 4.44.0
66
+
67
+ ## πŸ“Š Kiểm tra sau khi upload
68
+
69
+ 1. **Space sαΊ½ tα»± Δ‘α»™ng build** sau khi push code
70
+ 2. **Kiểm tra logs** nαΊΏu cΓ³ lα»—i
71
+ 3. **Test α»©ng dα»₯ng** vα»›i cΓ‘c vΓ­ dα»₯ cΓ³ sαΊ΅n
72
+ 4. **Kiểm tra API keys** hoαΊ‘t Δ‘α»™ng Δ‘ΓΊng
73
+
74
+ ## πŸ› Troubleshooting
75
+
76
+ ### Lα»—i thường gαΊ·p:
77
+
78
+ 1. **Import Error**: Kiểm tra requirements.txt
79
+ 2. **API Key Error**: Kiểm tra Secrets trong Settings
80
+ 3. **Memory Error**: Tăng memory limit trong config.json
81
+ 4. **Timeout Error**: Tối ưu code hoặc tăng timeout
82
+
83
+ ### GiαΊ£i phΓ‘p:
84
+
85
+ - Xem logs trong tab "Logs" cα»§a Space
86
+ - Kiểm tra file requirements.txt
87
+ - Đảm bαΊ£o API keys Δ‘ΓΊng format
88
+ - Test local trΖ°α»›c khi upload
89
+
90
+ ## πŸ“ LΖ°u Γ½ quan trọng
91
+
92
+ - βœ… Code Δ‘Γ£ được tα»‘i Ζ°u cho Hugging Face
93
+ - βœ… Dependencies Δ‘Γ£ được pin version
94
+ - βœ… README cΓ³ metadata Δ‘ΓΊng format
95
+ - βœ… Config.json Δ‘Γ£ cαΊ₯u hΓ¬nh sαΊ΅n
96
+ - ⚠️ Cần thiết lập API keys sau khi upload
97
+ - ⚠️ Test kα»Ή trΖ°α»›c khi public
98
+
99
+ ## πŸ”— LiΓͺn kαΊΏt hα»―u Γ­ch
100
+
101
+ - [Hugging Face Spaces Documentation](https://huggingface.co/docs/hub/spaces)
102
+ - [Gradio Documentation](https://gradio.app/docs/)
103
+ - [API Keys Guide](https://huggingface.co/docs/hub/spaces-sdks-docker-advanced#secrets)
app.py CHANGED
@@ -7,24 +7,27 @@ from datetime import date, datetime, timedelta
7
  import gradio as gr
8
  import pandas as pd
9
  import finnhub
10
- import google.generativeai as genai
11
  from io import StringIO
12
  import requests
13
  from requests.adapters import HTTPAdapter
14
  from urllib3.util.retry import Retry
 
 
 
15
 
16
- # Suppress Google Cloud warnings
17
- os.environ['GRPC_VERBOSITY'] = 'ERROR'
18
- os.environ['GRPC_TRACE'] = ''
19
-
20
- # Suppress other warnings
21
  import warnings
22
  warnings.filterwarnings('ignore', category=UserWarning)
23
  warnings.filterwarnings('ignore', category=FutureWarning)
 
24
 
25
  # ---------- CαΊ€U HÌNH ---------------------------------------------------------
26
 
27
- GEMINI_MODEL = "gemini-2.5-pro"
 
 
 
 
28
 
29
  # RapidAPI Configuration
30
  RAPIDAPI_HOST = "alpha-vantage.p.rapidapi.com"
@@ -43,27 +46,21 @@ if RAPIDAPI_KEYS_RAW:
43
  else:
44
  RAPIDAPI_KEYS = []
45
 
46
- # Load Google API keys from single secret (multiple keys separated by newlines)
47
- GOOGLE_API_KEYS_RAW = os.getenv("GOOGLE_API_KEYS", "")
48
- if GOOGLE_API_KEYS_RAW:
49
- GOOGLE_API_KEYS = [key.strip() for key in GOOGLE_API_KEYS_RAW.split('\n') if key.strip()]
50
- else:
51
- GOOGLE_API_KEYS = []
52
 
53
  # Filter out empty keys
54
  FINNHUB_KEYS = [key for key in FINNHUB_KEYS if key.strip()]
55
- GOOGLE_API_KEYS = [key for key in GOOGLE_API_KEYS if key.strip()]
56
 
57
  # Validate that we have at least one key for each service
58
  if not FINNHUB_KEYS:
59
  print("⚠️ Warning: No Finnhub API keys found in secrets")
60
  if not RAPIDAPI_KEYS:
61
  print("⚠️ Warning: No RapidAPI keys found in secrets")
62
- if not GOOGLE_API_KEYS:
63
- print("⚠️ Warning: No Google API keys found in secrets")
64
-
65
- # Chọn ngαΊ«u nhiΓͺn mα»™t khΓ³a API để bαΊ―t Δ‘αΊ§u (if available)
66
- GOOGLE_API_KEY = random.choice(GOOGLE_API_KEYS) if GOOGLE_API_KEYS else None
67
 
68
  print("=" * 50)
69
  print("πŸš€ FinRobot Forecaster Starting Up...")
@@ -76,20 +73,48 @@ if RAPIDAPI_KEYS:
76
  print(f"πŸ“ˆ RapidAPI Alpha Vantage: {RAPIDAPI_HOST} ({len(RAPIDAPI_KEYS)} keys loaded)")
77
  else:
78
  print("πŸ“ˆ RapidAPI Alpha Vantage: Not configured")
79
- if GOOGLE_API_KEYS:
80
- print(f"πŸ€– Google Gemini API: {len(GOOGLE_API_KEYS)} keys loaded")
81
  else:
82
- print("πŸ€– Google Gemini API: Not configured")
83
  print("βœ… Application started successfully!")
84
  print("=" * 50)
85
 
86
- # CαΊ₯u hΓ¬nh Google Generative AI (if keys available)
87
- if GOOGLE_API_KEYS:
88
- # Configure with first key for initial setup
89
- genai.configure(api_key=GOOGLE_API_KEYS[0])
90
- print(f"βœ… Google AI configured with {len(GOOGLE_API_KEYS)} keys")
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
91
  else:
92
- print("⚠️ Google AI not configured - will use mock responses")
 
 
93
 
94
  # CαΊ₯u hΓ¬nh Finnhub client (if keys available)
95
  if FINNHUB_KEYS:
@@ -413,62 +438,53 @@ def make_prompt(symbol: str, df: pd.DataFrame, curday: str, use_basics=False) ->
413
  # ---------- LLM CALL -------------------------------------------------------
414
 
415
  def chat_completion(prompt: str,
416
- model: str = GEMINI_MODEL,
417
- temperature: float = 0.2,
418
  stream: bool = False,
419
  symbol: str = "STOCK") -> str:
420
- # Check if Google API keys are configured
421
- if not GOOGLE_API_KEYS:
422
- print(f"⚠️ Google API not configured, using mock response for {symbol}")
423
  return create_mock_ai_response(symbol)
424
 
425
- # Thα»­ vα»›i tαΊ₯t cαΊ£ cΓ‘c Google API keys
426
- for api_key in GOOGLE_API_KEYS:
427
- try:
428
- # CαΊ₯u hΓ¬nh lαΊ‘i vα»›i key hiện tαΊ‘i
429
- genai.configure(api_key=api_key)
430
-
431
- # TαΊ‘o instance cα»§a model
432
- model_instance = genai.GenerativeModel(
433
- model_name=model,
434
- generation_config={
435
- "max_output_tokens": 6144,
436
- "temperature": temperature,
437
- "top_p": 0.9,
438
- "top_k": 40,
439
- }
 
 
 
 
 
440
  )
441
-
442
- # Kết hợp system prompt và user prompt
443
- full_prompt = f"{SYSTEM_PROMPT}\n\n{prompt}"
444
-
445
- if stream:
446
- response = model_instance.generate_content(full_prompt, stream=True)
447
- collected = []
448
- for chunk in response:
449
- if chunk.text:
450
- print(chunk.text, end="", flush=True)
451
- collected.append(chunk.text)
452
- print()
453
- return "".join(collected)
454
- else:
455
- response = model_instance.generate_content(full_prompt)
456
- return response.text
457
-
458
- except Exception as e:
459
- print(f"Error with Google API key {api_key[:10]}...: {e}")
460
- if "quota" in str(e).lower() or "limit" in str(e).lower():
461
- print(f"Quota/limit hit with key {api_key[:10]}..., trying next key")
462
- continue
463
- # NαΊΏu khΓ΄ng phαΊ£i lα»—i quota, thα»­ key tiαΊΏp theo
464
- continue
465
-
466
- # Fallback: TαΊ‘o mock AI response khi tαΊ₯t cαΊ£ Google API keys đều fail
467
- print("⚠️ All Google API keys failed, using mock AI response for demonstration...")
468
- return create_mock_ai_response(symbol)
469
 
470
  def create_mock_ai_response(symbol: str) -> str:
471
- """TαΊ‘o mock AI response khi Google API khΓ΄ng hoαΊ‘t Δ‘α»™ng"""
472
  return f"""
473
  [Positive Developments]
474
  β€’ Strong market position and brand recognition for {symbol}
@@ -582,9 +598,9 @@ def create_interface():
582
  gr.Markdown("""
583
  # πŸ€– FinRobot Forecaster
584
 
585
- **AI-powered stock market analysis and prediction using advanced language models**
586
 
587
- This application analyzes stock market data, company news, and financial metrics to provide comprehensive market insights and predictions.
588
 
589
  ⚠️ **Note**: Free API keys have daily rate limits. If you encounter errors, the app will use mock data for demonstration purposes.
590
  """)
 
7
  import gradio as gr
8
  import pandas as pd
9
  import finnhub
 
10
  from io import StringIO
11
  import requests
12
  from requests.adapters import HTTPAdapter
13
  from urllib3.util.retry import Retry
14
+ import torch
15
+ from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig
16
+ from peft import PeftModel
17
 
18
+ # Suppress warnings
 
 
 
 
19
  import warnings
20
  warnings.filterwarnings('ignore', category=UserWarning)
21
  warnings.filterwarnings('ignore', category=FutureWarning)
22
+ warnings.filterwarnings('ignore', category=DeprecationWarning)
23
 
24
  # ---------- CαΊ€U HÌNH ---------------------------------------------------------
25
 
26
+ # FinGPT Model Configuration
27
+ FINGPT_MODEL_NAME = "Starfish55/fingpt-complete"
28
+ FINGPT_BASE_MODEL = "microsoft/DialoGPT-medium" # Base model for FinGPT
29
+ MAX_LENGTH = 1024
30
+ TEMPERATURE = 0.7
31
 
32
  # RapidAPI Configuration
33
  RAPIDAPI_HOST = "alpha-vantage.p.rapidapi.com"
 
46
  else:
47
  RAPIDAPI_KEYS = []
48
 
49
+ # Load Hugging Face API token for FinGPT model access
50
+ HF_TOKEN = os.getenv("HF_TOKEN", "")
51
+ if not HF_TOKEN:
52
+ print("⚠️ Warning: No Hugging Face token found in secrets")
 
 
53
 
54
  # Filter out empty keys
55
  FINNHUB_KEYS = [key for key in FINNHUB_KEYS if key.strip()]
 
56
 
57
  # Validate that we have at least one key for each service
58
  if not FINNHUB_KEYS:
59
  print("⚠️ Warning: No Finnhub API keys found in secrets")
60
  if not RAPIDAPI_KEYS:
61
  print("⚠️ Warning: No RapidAPI keys found in secrets")
62
+ if not HF_TOKEN:
63
+ print("⚠️ Warning: No Hugging Face token found in secrets")
 
 
 
64
 
65
  print("=" * 50)
66
  print("πŸš€ FinRobot Forecaster Starting Up...")
 
73
  print(f"πŸ“ˆ RapidAPI Alpha Vantage: {RAPIDAPI_HOST} ({len(RAPIDAPI_KEYS)} keys loaded)")
74
  else:
75
  print("πŸ“ˆ RapidAPI Alpha Vantage: Not configured")
76
+ if HF_TOKEN:
77
+ print(f"πŸ€– FinGPT Model: {FINGPT_MODEL_NAME} loaded")
78
  else:
79
+ print("πŸ€– FinGPT Model: Not configured")
80
  print("βœ… Application started successfully!")
81
  print("=" * 50)
82
 
83
+ # Initialize FinGPT model (if token available)
84
+ if HF_TOKEN:
85
+ try:
86
+ # Configure BitsAndBytesConfig for memory efficiency
87
+ bnb_config = BitsAndBytesConfig(
88
+ load_in_4bit=True,
89
+ bnb_4bit_use_double_quant=True,
90
+ bnb_4bit_quant_type="nf4",
91
+ bnb_4bit_compute_dtype=torch.bfloat16
92
+ )
93
+
94
+ # Load tokenizer and model
95
+ tokenizer = AutoTokenizer.from_pretrained(
96
+ FINGPT_MODEL_NAME,
97
+ token=HF_TOKEN,
98
+ trust_remote_code=True
99
+ )
100
+
101
+ model = AutoModelForCausalLM.from_pretrained(
102
+ FINGPT_MODEL_NAME,
103
+ token=HF_TOKEN,
104
+ quantization_config=bnb_config,
105
+ device_map="auto",
106
+ trust_remote_code=True
107
+ )
108
+
109
+ print(f"βœ… FinGPT model {FINGPT_MODEL_NAME} loaded successfully")
110
+ except Exception as e:
111
+ print(f"❌ Error loading FinGPT model: {e}")
112
+ model = None
113
+ tokenizer = None
114
  else:
115
+ print("⚠️ FinGPT not configured - will use mock responses")
116
+ model = None
117
+ tokenizer = None
118
 
119
  # CαΊ₯u hΓ¬nh Finnhub client (if keys available)
120
  if FINNHUB_KEYS:
 
438
  # ---------- LLM CALL -------------------------------------------------------
439
 
440
  def chat_completion(prompt: str,
441
+ model_name: str = FINGPT_MODEL_NAME,
442
+ temperature: float = 0.7,
443
  stream: bool = False,
444
  symbol: str = "STOCK") -> str:
445
+ # Check if FinGPT model is configured
446
+ if model is None or tokenizer is None:
447
+ print(f"⚠️ FinGPT model not configured, using mock response for {symbol}")
448
  return create_mock_ai_response(symbol)
449
 
450
+ try:
451
+ # Kết hợp system prompt và user prompt
452
+ full_prompt = f"{SYSTEM_PROMPT}\n\n{prompt}"
453
+
454
+ # Tokenize input
455
+ inputs = tokenizer.encode(full_prompt, return_tensors="pt", max_length=MAX_LENGTH, truncation=True)
456
+
457
+ # Generate response
458
+ with torch.no_grad():
459
+ outputs = model.generate(
460
+ inputs,
461
+ max_length=inputs.shape[1] + 512, # Generate up to 512 new tokens
462
+ temperature=temperature,
463
+ do_sample=True,
464
+ top_p=0.9,
465
+ top_k=50,
466
+ pad_token_id=tokenizer.eos_token_id,
467
+ eos_token_id=tokenizer.eos_token_id,
468
+ no_repeat_ngram_size=2,
469
+ early_stopping=True
470
  )
471
+
472
+ # Decode response
473
+ response = tokenizer.decode(outputs[0][inputs.shape[1]:], skip_special_tokens=True)
474
+
475
+ # Clean up response
476
+ response = response.strip()
477
+ if not response:
478
+ return create_mock_ai_response(symbol)
479
+
480
+ return response
481
+
482
+ except Exception as e:
483
+ print(f"Error with FinGPT model: {e}")
484
+ return create_mock_ai_response(symbol)
 
 
 
 
 
 
 
 
 
 
 
 
 
 
485
 
486
  def create_mock_ai_response(symbol: str) -> str:
487
+ """TαΊ‘o mock AI response khi FinGPT model khΓ΄ng hoαΊ‘t Δ‘α»™ng"""
488
  return f"""
489
  [Positive Developments]
490
  β€’ Strong market position and brand recognition for {symbol}
 
598
  gr.Markdown("""
599
  # πŸ€– FinRobot Forecaster
600
 
601
+ **AI-powered stock market analysis and prediction using FinGPT financial language model**
602
 
603
+ This application analyzes stock market data, company news, and financial metrics using the specialized FinGPT model to provide comprehensive market insights and predictions.
604
 
605
  ⚠️ **Note**: Free API keys have daily rate limits. If you encounter errors, the app will use mock data for demonstration purposes.
606
  """)
requirements.txt CHANGED
@@ -1,10 +1,14 @@
1
  gradio==4.44.0
2
  pandas>=1.5.0
3
  finnhub-python>=2.4.0
4
- google-generativeai>=0.3.0
5
  requests>=2.28.0
6
  urllib3>=1.26.0
7
  numpy>=1.21.0
8
  matplotlib>=3.5.0
9
  plotly>=5.0.0
10
  yfinance>=0.2.0
 
 
 
 
 
 
1
  gradio==4.44.0
2
  pandas>=1.5.0
3
  finnhub-python>=2.4.0
 
4
  requests>=2.28.0
5
  urllib3>=1.26.0
6
  numpy>=1.21.0
7
  matplotlib>=3.5.0
8
  plotly>=5.0.0
9
  yfinance>=0.2.0
10
+ transformers>=4.30.0
11
+ torch>=2.0.0
12
+ accelerate>=0.20.0
13
+ peft>=0.4.0
14
+ bitsandbytes>=0.39.0
test_fingpt_integration.py ADDED
@@ -0,0 +1,77 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ #!/usr/bin/env python3
2
+ """
3
+ Test script to verify FinGPT integration
4
+ """
5
+
6
+ import os
7
+ import sys
8
+
9
+ def test_imports():
10
+ """Test if all required imports work"""
11
+ try:
12
+ import torch
13
+ from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig
14
+ from peft import PeftModel
15
+ print("βœ… All FinGPT imports successful")
16
+ return True
17
+ except ImportError as e:
18
+ print(f"❌ Import error: {e}")
19
+ return False
20
+
21
+ def test_model_config():
22
+ """Test model configuration"""
23
+ try:
24
+ # Test if we can access the model configuration
25
+ FINGPT_MODEL_NAME = "Starfish55/fingpt-complete"
26
+ print(f"βœ… Model configuration: {FINGPT_MODEL_NAME}")
27
+ return True
28
+ except Exception as e:
29
+ print(f"❌ Model configuration error: {e}")
30
+ return False
31
+
32
+ def test_app_import():
33
+ """Test if the main app can be imported"""
34
+ try:
35
+ # Add current directory to path
36
+ sys.path.insert(0, os.path.dirname(os.path.abspath(__file__)))
37
+
38
+ # Import the main app
39
+ import app
40
+ print("βœ… App import successful")
41
+ return True
42
+ except Exception as e:
43
+ print(f"❌ App import error: {e}")
44
+ return False
45
+
46
+ def main():
47
+ """Run all tests"""
48
+ print("πŸ§ͺ Testing FinGPT Integration...")
49
+ print("=" * 50)
50
+
51
+ tests = [
52
+ test_imports,
53
+ test_model_config,
54
+ test_app_import
55
+ ]
56
+
57
+ passed = 0
58
+ total = len(tests)
59
+
60
+ for test in tests:
61
+ if test():
62
+ passed += 1
63
+ print()
64
+
65
+ print("=" * 50)
66
+ print(f"πŸ“Š Test Results: {passed}/{total} tests passed")
67
+
68
+ if passed == total:
69
+ print("πŸŽ‰ All tests passed! FinGPT integration is ready.")
70
+ else:
71
+ print("⚠️ Some tests failed. Please check the errors above.")
72
+
73
+ return passed == total
74
+
75
+ if __name__ == "__main__":
76
+ success = main()
77
+ sys.exit(0 if success else 1)