webdpro committed
Commit fd54039 · verified · 1 Parent(s): 9a5f1dc

Upload README.md with huggingface_hub

Files changed (1):
  1. README.md +33 -31
README.md CHANGED
@@ -1,32 +1,39 @@
  ---
- license: apache-2.0
  tags:
- - finance
- - tax
- - banking
- - investment
- - indian-finance
- - gst
- - conversational
  ---

- # 🏦 Finxan

- Finxan is an AI assistant specialized in Indian finance, covering GST, tax compliance, B2B invoicing, banking, and investments.

- ## 💬 What can Finxan do?

- - 🧾 Calculate GST and verify tax compliance
- - 🏦 Create and manage B2B invoices
- - 📈 Answer investment and banking queries
- - ✅ Check financial document compliance

- ## 🚀 Try it
  ```python
  from peft import PeftModel
  from transformers import AutoModelForCausalLM, AutoTokenizer
  import torch

  base = AutoModelForCausalLM.from_pretrained(
      "mistralai/Mistral-7B-v0.1",
      torch_dtype=torch.float16,
@@ -35,21 +42,16 @@ base = AutoModelForCausalLM.from_pretrained(
  model = PeftModel.from_pretrained(base, "webdpro/finxan")
  tokenizer = AutoTokenizer.from_pretrained("webdpro/finxan")

- def ask_finxan(instruction, input_text=""):
-     prompt = f"""### Instruction:\n{instruction}\n\n### Input:\n{input_text}\n\n### Response:"""
-     inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
-     outputs = model.generate(**inputs, max_new_tokens=150, temperature=0.7, do_sample=True)
-     full = tokenizer.decode(outputs[0], skip_special_tokens=True)
-     return full.split("### Response:")[-1].strip()

- print(ask_finxan("What is your name?"))
- print(ask_finxan("Calculate GST", "Invoice Rs 50000, GST 18%"))
- ```

- ## 📌 Example

- **You:** What is your name?
- **Finxan:** I am Finxan, your AI finance assistant. I can help you with GST, invoicing, tax compliance, and more.
-
- **You:** Calculate GST for invoice Rs 75000 at 18%
- **Finxan:** GST amount: Rs 13500. Total payable: Rs 88500.
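The GST figures quoted in the example replies can be checked with a few lines of plain arithmetic. A minimal sketch (the helper name is hypothetical, not part of the commit):

```python
# Hypothetical helper mirroring the GST arithmetic quoted in the
# example replies above: GST = invoice * rate / 100.
def gst_breakdown(invoice_amount, gst_rate_percent):
    """Return (gst_amount, total_payable) for an invoice and GST rate."""
    gst = invoice_amount * gst_rate_percent / 100
    return gst, invoice_amount + gst

print(gst_breakdown(75000, 18))  # (13500.0, 88500.0)
print(gst_breakdown(50000, 18))  # (9000.0, 59000.0)
```

Both quoted answers (Rs 13500 on Rs 75000, and the Rs 50000 invoice from the earlier snippet) are consistent with this formula.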
 
  ---
+ base_model: mistralai/Mistral-7B-v0.1
+ library_name: peft
  tags:
+ - finance
+ - tax
+ - banking
+ - investment
+ - qlora
+ - mistral
+ - indian-finance
+ - gst
+ license: apache-2.0
  ---

+ # Finxan: Finance Domain LLM

+ Fine-tuned **Mistral-7B-v0.1** using QLoRA (4-bit) on Indian finance domain examples:
+ - 🧾 GST / tax compliance & calculations
+ - 🏦 B2B invoicing & banking
+ - 📈 Investment advice

+ **Training details:**
+ - Hardware: Google Colab T4 GPU (free)
+ - Method: QLoRA 4-bit, LoRA r=16
+ - Epochs: 3
+ - Dataset: 110 finance instruction examples

+ ## How to use

  ```python
  from peft import PeftModel
  from transformers import AutoModelForCausalLM, AutoTokenizer
  import torch

+ # Load base + adapter
  base = AutoModelForCausalLM.from_pretrained(
      "mistralai/Mistral-7B-v0.1",
      torch_dtype=torch.float16,

  model = PeftModel.from_pretrained(base, "webdpro/finxan")
  tokenizer = AutoTokenizer.from_pretrained("webdpro/finxan")

+ # Run inference
+ prompt = """### Instruction:
+ Calculate GST
+
+ ### Input:
+ Invoice amount Rs 50000, GST rate 18%
+
+ ### Response:"""

+ inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
+ outputs = model.generate(**inputs, max_new_tokens=100, temperature=0.7, do_sample=True)
+ print(tokenizer.decode(outputs[0], skip_special_tokens=True))
+ ```
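
The removed `ask_finxan` helper and the new inline prompt use the same Alpaca-style template, and both rely on splitting the decoded output at `### Response:` to recover the answer. That string handling can be exercised on its own, with no model or GPU (a sketch with assumed helper names, not part of the commit):

```python
# Hypothetical helpers reproducing the Alpaca-style prompt template and
# response extraction used in both versions of this README.
def build_prompt(instruction, input_text=""):
    return (
        f"### Instruction:\n{instruction}\n\n"
        f"### Input:\n{input_text}\n\n"
        "### Response:"
    )

def extract_response(decoded):
    # Generation echoes the prompt, so keep only the text after the marker.
    return decoded.split("### Response:")[-1].strip()

prompt = build_prompt("Calculate GST", "Invoice Rs 50000, GST 18%")
# Simulate a decoded generation: prompt echo followed by the answer.
decoded = prompt + " GST amount: Rs 9000. Total payable: Rs 59000."
print(extract_response(decoded))  # GST amount: Rs 9000. Total payable: Rs 59000.
```

Because `split(...)[-1]` takes the last segment, the extraction stays correct even though the marker also appears inside the echoed prompt.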