webdpro committed on
Commit df3b954 · verified · 1 Parent(s): fd2c5ff

Upload README.md with huggingface_hub

Files changed (1):
  1. README.md +42 -33

README.md CHANGED
@@ -1,39 +1,33 @@
  ---
- base_model: mistralai/Mistral-7B-v0.1
- library_name: peft
- tags:
- - finance
- - tax
- - banking
- - investment
- - qlora
- - mistral
- - indian-finance
- - gst
  license: apache-2.0
  ---

- # Finxan — Finance Domain LLM

- Fine-tuned **Mistral-7B-v0.1** using QLoRA (4-bit) on Indian finance domain examples:
- - 🧾 GST / Tax compliance & calculations
- - 🏦 B2B Invoicing & banking
- - 📈 Investment advice

- **Training details:**
- - Hardware: Google Colab T4 GPU (free)
- - Method: QLoRA 4-bit, LoRA r=16
- - Epochs: 3
- - Dataset: 110 finance instruction examples

- ## How to use

  ```python
  from peft import PeftModel
  from transformers import AutoModelForCausalLM, AutoTokenizer
  import torch

- # Load base + adapter
  base = AutoModelForCausalLM.from_pretrained(
      "mistralai/Mistral-7B-v0.1",
      torch_dtype=torch.float16,
@@ -42,16 +36,31 @@ base = AutoModelForCausalLM.from_pretrained(
  model = PeftModel.from_pretrained(base, "webdpro/finxan")
  tokenizer = AutoTokenizer.from_pretrained("webdpro/finxan")

- # Run inference
- prompt = """### Instruction:
- Calculate GST
-
- ### Input:
- Invoice amount Rs 50000, GST rate 18%
-
- ### Response:"""
-
- inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
- outputs = model.generate(**inputs, max_new_tokens=100, temperature=0.7, do_sample=True)
- print(tokenizer.decode(outputs[0], skip_special_tokens=True))
- ```
 
 
 
  ---
  license: apache-2.0
+ tags:
+ - finance
+ - tax
+ - banking
+ - investment
+ - indian-finance
+ - gst
+ - conversational
  ---

+ # 🏦 Finxan

+ Finxan is an AI assistant specialized in Indian finance — GST, tax compliance, B2B invoicing, banking, and investments.

+ ## 💬 What Finxan can do

+ - 🧾 Calculate GST and verify tax compliance
+ - 🏦 Create and manage B2B invoices
+ - 📈 Answer investment and banking queries
+ - ✅ Check financial document compliance
+ - 🤖 Answer questions about itself

+ ## 🚀 Quick Start
  ```python
  from peft import PeftModel
  from transformers import AutoModelForCausalLM, AutoTokenizer
  import torch

  base = AutoModelForCausalLM.from_pretrained(
      "mistralai/Mistral-7B-v0.1",
      torch_dtype=torch.float16,

  model = PeftModel.from_pretrained(base, "webdpro/finxan")
  tokenizer = AutoTokenizer.from_pretrained("webdpro/finxan")

+ def ask_finxan(instruction, input_text=""):
+     prompt = f"### Instruction:\n{instruction}\n\n### Input:\n{input_text}\n\n### Response:\n"
+     inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
+     outputs = model.generate(
+         **inputs,
+         max_new_tokens=80,
+         temperature=0.3,
+         do_sample=True,
+         repetition_penalty=1.3,
+         pad_token_id=tokenizer.eos_token_id
+     )
+     full = tokenizer.decode(outputs[0], skip_special_tokens=True)
+     return full.split("### Response:")[-1].strip()

+ # Try it
+ print(ask_finxan("What is your name?"))
+ print(ask_finxan("Calculate GST", "Invoice Rs 50000, GST 18%"))
+ print(ask_finxan("Who built you?"))
+ ```

+ ## 📌 Example Outputs

+ | Question | Answer |
+ |----------|--------|
+ | What is your name? | I am Finxan, your AI finance assistant |
+ | Calculate GST for Rs 50000 at 18% | GST: Rs 9000. Total: Rs 59000 |
+ | Create a B2B invoice | Invoice created with GST applied |
+ | Check GST compliance | GST valid. Compliant. |
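The GST figures in the example table above can be sanity-checked with plain arithmetic, independent of the model. The `gst_breakdown` helper below is a hypothetical standalone sketch, not part of the Finxan repository:

```python
def gst_breakdown(invoice_amount: float, gst_rate_percent: float) -> dict:
    """Return the GST amount and total payable for an invoice."""
    gst = invoice_amount * gst_rate_percent / 100
    return {"gst": gst, "total": invoice_amount + gst}

# Matches the table row: Rs 50000 at 18% GST
print(gst_breakdown(50_000, 18))  # {'gst': 9000.0, 'total': 59000.0}
```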