AI & ML interests

AI models for Sales, CRM and Consultancy.

ritvik77 
published a Space 3 months ago
ritvik77 
posted an update 6 months ago
ritvik77 
posted an update 8 months ago
Advanced Quantization Techniques vs. Parameter Scaling: The Efficiency Frontier in Modern AI. What do you think the future is?
ritvik77 
posted an update 9 months ago
Hi 🤗HF Community,

I would be incredibly grateful for an opportunity to contribute, in any capacity, and learn alongside researchers here. Is there any possibility I could collaborate with or assist in any of your research work?

I’m happy to support ongoing projects, contribute to data analysis, code, documentation, or anything that adds value.

Thank you for your time and consideration!

Warm regards,
Ritvik Gaur
ritvik77 
posted an update 9 months ago
ritvik77/ContributionChartHuggingFace
It's Ready!

One feature Hugging Face could really benefit from is a contribution heatmap: a visual dashboard that tracks user engagement and contributions across models, datasets, and Spaces over the year, similar to GitHub's contribution graph. Guess what: Clem Delangue mentioned the idea of building it on the HF API, and we made it for everyone to use.

If you are a Hugging Face user, add this Space to your collection and it will give you stats about your contributions and commits, much like GitHub. It's still a prototype, and I'm still working on it as a product feature.
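At its core, a contribution heatmap is just commit timestamps bucketed by day. A minimal sketch of that aggregation step (the function and variable names are my own illustration, not the Space's actual code, which would first fetch commit history via the Hugging Face Hub API):

```python
from collections import Counter
from datetime import date

def contribution_heatmap(commit_dates):
    """Count contributions per calendar day, GitHub-graph style.

    commit_dates: iterable of datetime.date objects, e.g. parsed from
    a user's commit history across models, datasets, and Spaces.
    """
    return Counter(commit_dates)

# Two commits on Jan 6, one on Jan 7:
counts = contribution_heatmap([
    date(2025, 1, 6), date(2025, 1, 6), date(2025, 1, 7),
])
```

Rendering is then just mapping each day's count to a color intensity in a calendar grid.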
ritvik77 
posted an update 9 months ago
Does anyone remember Wile E. Coyote from the Looney Tunes show? He did it again, but this time by fooling a Tesla! This shows the difference between LiDAR and cameras.

Tesla Autopilot Fails Wile E. Coyote Test, Drives Itself Into Picture of a Road.
Original video: https://lnkd.in/g4Qi8fd4
ritvik77 
posted an update 9 months ago
Big companies are now training huge AI models with tons of data and billions of parameters, and the future seems to be about quantization: making those models smaller by turning big numbers into simpler ones, like going from 32-bit to 8-bit while losing less than ±0.01% accuracy. There should be a standard unit of measurement for the ratio of model-size reduction to accuracy lost.
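One rough illustration of what such a measurement could look like (the metric, its name, and the numbers below are hypothetical, not an established standard):

```python
def efficiency_score(size_full_gb, size_quant_gb, acc_full, acc_quant):
    """Hypothetical 'compression-per-accuracy-point' metric:
    the size-reduction factor divided by percentage points of
    accuracy lost. Higher is better."""
    compression = size_full_gb / size_quant_gb
    acc_drop = max(acc_full - acc_quant, 1e-9)  # avoid divide-by-zero
    return compression / acc_drop

# e.g. a 28GB fp32 model quantized to 7GB int8,
# with benchmark accuracy dropping from 85.0% to 84.5%:
score = efficiency_score(28.0, 7.0, 85.0, 84.5)  # 4x smaller / 0.5 points lost
```

A shared metric like this would make quantization methods directly comparable across papers, the same way perplexity or top-1 accuracy are today.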

What do you all think about this?
ritvik77 
posted an update 10 months ago
Try it out: ritvik77/Medical_Doctor_AI_LoRA-Mistral-7B-Instruct_FullModel

🩺 Medical Diagnosis AI Model - Powered by Mistral-7B & LoRA 🚀
🔹 Model Overview:
Base Model: Mistral-7B (7.7 billion parameters)
Fine-Tuning Method: LoRA (Low-Rank Adaptation)
Quantization: bnb_4bit (reduces memory footprint while retaining performance)
🔹 Parameter Details:
Original Mistral-7B Parameters: 7.7 billion
LoRA Fine-Tuned Parameters: 4.48% of total model parameters (~340 million)
Final Merged Model Size (bnb_4bit Quantized): ~4.5GB

This can help you build an AI agent for healthcare. If you need it to produce JSON function/tool-calling output, you can fine-tune it again on a medical function-calling dataset.
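For intuition on why LoRA fine-tunes only a few percent of the weights, here is a back-of-the-envelope sketch (the matrix size and rank below are illustrative, not the exact configuration of this model):

```python
def lora_param_count(d_in, d_out, rank):
    """Trainable parameters LoRA adds to one weight matrix: instead of
    updating the full d_in x d_out matrix, it learns two low-rank
    factors, A (d_in x rank) and B (rank x d_out)."""
    return rank * (d_in + d_out)

# An illustrative 4096x4096 attention projection at rank 64:
adapter = lora_param_count(4096, 4096, 64)   # ~0.5M trainable params
full = 4096 * 4096                           # ~16.8M params in the frozen matrix
fraction = adapter / full
```

Summed over the targeted layers, these small factors are what produce a trainable fraction on the order of a few percent, and a merged checkpoint that still quantizes down to a few gigabytes with bnb_4bit.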

ritvik77 
posted an update 10 months ago
Hey Community,

ritvik77/FineTune_LoRA__AgentToolCall_Mistral-7B_Transformer

Fine-tuned Mistral-7B-Instruct-v0.3 with LoRA on a Salesforce function-calling dataset (~60K samples) for an AI finance agent; deployed both the full-scale LLM (14.5GB) via the Hugging Face Transformers library and the LoRA adapter (PEFT, 715MB) for scalability.
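Function-calling fine-tunes like this train on samples that pair a user query with a structured tool call. A minimal sketch of one such sample (the field names are illustrative, not the actual Salesforce dataset schema):

```python
import json

def format_tool_call_sample(user_query, tool_name, arguments):
    """Build one chat-style training sample where the assistant's turn
    is a structured tool call rather than free text."""
    return {
        "messages": [
            {"role": "user", "content": user_query},
            {
                "role": "assistant",
                "tool_call": {
                    "name": tool_name,
                    # Arguments serialized as JSON, as most schemas expect
                    "arguments": json.dumps(arguments),
                },
            },
        ]
    }

sample = format_tool_call_sample(
    "What's AAPL's closing price today?",
    "get_stock_price",
    {"ticker": "AAPL"},
)
```

During fine-tuning, the model learns to emit the assistant turn's JSON given the user turn, which is what lets the deployed agent call finance APIs reliably.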

Please let me know of any improvements it needs; your ideas and feedback are always welcome, as I am still new to this and still learning.