Abrar21 commited on
Commit 09236e6 · verified · 1 Parent(s): 127b601

Update README.md


description: |
**EYE-Llama (GQA)** is a large language model fine-tuned for answering medical questions related to eye health.
It has been trained on a wide variety of ophthalmology-related text and aims to provide accurate and informative responses
for clinical queries, including diagnoses, treatments, and general eye care.

The model is based on the **LLaMA architecture**, known for its efficiency and effectiveness in large-scale language generation tasks.
It can handle text-based queries in natural language and respond with expert-level insights, especially in the domain of ocular diseases,
treatments, and eye health recommendations.

**Use cases:**
- Answering medical questions about eye diseases (e.g., glaucoma, diabetic retinopathy, cataracts).
- Providing general information and treatment options for various eye conditions.
- Assisting healthcare professionals and researchers with quick, reliable access to domain-specific knowledge.
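Since the metadata below declares `library_name: transformers`, the question-answering use case above can be sketched with the standard text-generation pipeline. The repository id and the prompt template here are assumptions for illustration; the card does not specify either.

```python
# Sketch of querying the model via the transformers text-generation pipeline.
# The model id below is hypothetical -- substitute the actual repository id.

def build_prompt(question: str) -> str:
    """Wrap a clinical question in a simple instruction template.

    Illustrative only; the card does not document an official prompt format.
    """
    return f"Question: {question}\nAnswer:"

if __name__ == "__main__":
    from transformers import pipeline  # requires `pip install transformers`

    generator = pipeline(
        "text-generation",
        model="Abrar21/EYE-Llama-GQA",  # hypothetical id, replace as needed
    )
    prompt = build_prompt("What are the early symptoms of glaucoma?")
    result = generator(prompt, max_new_tokens=128, do_sample=False)
    print(result[0]["generated_text"])
```

As with any output of the model, generated answers should be verified against trusted clinical sources rather than used for diagnosis.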

**Training:**
This model was fine-tuned on a combination of publicly available ophthalmology literature, research papers, and medical records, making it well-suited to answer a wide range of queries related to eye health. The model was trained with a focus on text-based generation, where it can understand the context of the query and provide relevant responses.

**Limitations:**
- The model should not be used as a substitute for professional medical advice or diagnosis.
- It may not be up-to-date with the most recent clinical practices or treatments, and users should verify medical information from trusted sources.
- While the model performs well in medical question answering, it is still prone to errors and should be used as a supplementary tool rather than a primary decision-making resource.

**License:**
The model is available for research and educational purposes under the MIT License.

Files changed (1)
  1. README.md +4 -0
README.md CHANGED
@@ -12,4 +12,8 @@ language:
  - en
  base_model:
  - HuggingFaceH4/tiny-random-LlamaForCausalLM
+ datasets:
+ - QIAIUNCC/EYE-QA-PLUS
+ new_version: HuggingFaceM4/tiny-random-LlamaForCausalLM
+ library_name: transformers
  ---
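After this commit, the portion of the README front matter visible in the diff reads as follows (lines above line 12 are not shown in the hunk and are omitted here):

```yaml
language:
- en
base_model:
- HuggingFaceH4/tiny-random-LlamaForCausalLM
datasets:
- QIAIUNCC/EYE-QA-PLUS
new_version: HuggingFaceM4/tiny-random-LlamaForCausalLM
library_name: transformers
```

Note that `new_version` points at `HuggingFaceM4/tiny-random-LlamaForCausalLM` while `base_model` lists `HuggingFaceH4/...`; both values are reproduced exactly as they appear in the diff.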