Nahara Dataset Model

  • Developed by: Redeemer Salami Okekale, BMS
  • License: apache-2.0
  • Finetuned from model: unsloth/meta-llama-3.1-8b-bnb-4bit
  • Training Loss: 1.181600

Model Description: The nahara-dataset-model is a fine-tuned version of Meta's Llama 3.1 8B model, quantized to 4-bit precision to reduce both memory usage and computational cost. It was fine-tuned on the Nahara dataset of medical and clinical data, reaching a final training loss of 1.1816.

  • Model Type: Transformer-based Language Model
  • Size: 8 billion parameters
  • Precision: 4-bit quantization via bitsandbytes (bnb), improving memory efficiency and making the model suitable for resource-constrained environments.
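To see why 4-bit quantization matters for an 8-billion-parameter model, a back-of-envelope calculation of weight memory at different precisions (weights only; activations and KV cache add more):

```python
# Rough weight-only memory footprint for a model at a given precision.
def weight_memory_gb(n_params: float, bits_per_param: float) -> float:
    return n_params * bits_per_param / 8 / 1e9  # decimal GB

n = 8e9  # 8 billion parameters
print(f"fp16 : {weight_memory_gb(n, 16):.1f} GB")  # 16.0 GB
print(f"4-bit: {weight_memory_gb(n, 4):.1f} GB")   # 4.0 GB
```

At 4 bits per weight the model fits comfortably on a single consumer GPU, which is the point of the bnb-4bit base checkpoint.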

Intended Use: This model is intended as an adaptable AI copilot for medical professionals, providing real-time recommendations and decision support. It can assist with:

  • Medical diagnostics and treatment suggestions
  • Summarization of clinical data
  • Generation of medical reports and documentation
  • Assistance with medical coding and research data preparation
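For tasks like the above, prompts would typically follow the base model's chat template. Assuming the fine-tune keeps the standard Llama 3.1 instruct format (an assumption; check the repository's tokenizer configuration), a summarization prompt might be built like this:

```python
# Hypothetical prompt builder using the Llama 3.1 chat template.
# Whether this fine-tune was trained on this exact template is an assumption.
def build_prompt(system: str, user: str) -> str:
    return (
        "<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n\n"
        f"{system}<|eot_id|><|start_header_id|>user<|end_header_id|>\n\n"
        f"{user}<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n"
    )

prompt = build_prompt(
    "You are a clinical documentation assistant. Do not replace "
    "professional medical judgment.",
    "Summarize: 54-year-old male, BP 150/95, HbA1c 8.2%, on metformin.",
)
```

The resulting string is what would be tokenized and passed to the model for generation.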

Performance:

  • Training Loss: 1.181600
  • Fine-tuning Data: Medical and clinical datasets, augmented to compensate for sparsity and variability so the model generalizes across healthcare contexts.

Applications: The nahara-dataset-model is suited for:

  • Clinical decision support systems
  • AI copilots for medical professionals
  • Research data analysis and augmentation
  • Medical record summarization and automated report generation

Limitations and Considerations:

  • The model is trained on medical data but may not encompass all clinical expertise nuances. It should be used to augment decision-making, not replace professional judgment.
  • Ethical considerations, including data privacy and bias in healthcare applications, must be carefully observed.
  • While efficiency is boosted by quantizing to 4-bit, there may be trade-offs in performance for complex tasks compared to higher precision models.
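The quantization trade-off can be illustrated with a toy example. The sketch below uses simple uniform absmax quantization, which is not the NF4 scheme bitsandbytes actually uses, but it shows how mapping weights onto 16 levels introduces a small reconstruction error:

```python
# Toy uniform 4-bit quantization (absmax scaling); real NF4 quantization
# in bitsandbytes uses a non-uniform codebook and is more accurate.
def quantize_4bit(xs):
    scale = max(abs(x) for x in xs) / 7  # signed 4-bit range: -8..7
    q = [max(-8, min(7, round(x / scale))) for x in xs]
    return [v * scale for v in q], scale

weights = [0.12, -0.53, 0.98, -0.07, 0.41]
deq, scale = quantize_4bit(weights)
err = max(abs(a - b) for a, b in zip(weights, deq))
```

Each weight is recovered only to within about half a quantization step; summed over billions of parameters, this is the source of the accuracy gap on complex tasks.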

Future Improvements: The model will undergo further optimization and refinement in Phase 2, including expanding the dataset, improving real-world adaptability, and fine-tuning the AI copilot for specific medical specializations.

Contributors:

  • Emmanuel Akomanin Asiamah, PhD
  • Elli Banini
  • Felix Coker
  • Philip Attram, BMS
  • Schandorf Osam-Frimpong, MD
  • Daniel Mawuenyega Gohoho
  • Vitus Amenorpe
  • Aaron Kofi Gayi
  • Julius Richard Ogbey
  • Cherryln Asiwome Ahiable
  • Ama Quashie
  • Andrew Kojo Mensah-Onumah
  • Edith Zikpi
  • Azumah Benson, MD