Model Card for IslamQA/multilingual-e5-large-instruct-finetuned
An embedding model optimized for retrieving passages that answer questions about Islam. The passages are inherently multilingual, as they contain quotes from the Quran and Hadith. They often include preambles like "Bismillah" in various languages and follow a specific writing style.
Model Details
Model Sources
- https://islamqa.info/
- https://islamweb.net/
- https://hadithanswers.com/
- https://askimam.org/
- https://sorularlaislamiyet.com/
Uses
- embedding
- retrieval
- islam
- multilingual
- q&a
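Since the base model is an instruct-style E5 embedder, queries are usually wrapped in a one-line task instruction before encoding, while passages are embedded as-is. A minimal sketch of that formatting, assuming the base model's `Instruct: ... Query: ...` convention carries over to this fine-tune (the task description below is illustrative, not taken from this card):

```python
def get_detailed_instruct(task_description: str, query: str) -> str:
    # E5-instruct convention: task description first, query on a new line.
    return f"Instruct: {task_description}\nQuery: {query}"

# Hypothetical task description for this retrieval setting.
task = "Given a question about Islam, retrieve passages that answer the question"
formatted = get_detailed_instruct(task, "What is the meaning of Bismillah?")
print(formatted)
```

The formatted string is what gets tokenized and encoded; candidate passages skip the instruction prefix.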
```python
from transformers import AutoModel, AutoTokenizer
from peft import PeftModel

# Load the base model and tokenizer
base_model_name = "intfloat/multilingual-e5-large-instruct"
tokenizer = AutoTokenizer.from_pretrained(base_model_name)
base_model = AutoModel.from_pretrained(base_model_name)

# Load the LoRA adapter directly
adapter_repo = "IslamQA/multilingual-e5-large-instruct-finetuned"
model = PeftModel.from_pretrained(base_model, adapter_repo)
```
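The adapter-wrapped `AutoModel` returns token-level hidden states, so a pooling step is needed to get one vector per text. A sketch of the average pooling used by the E5 family (an assumption carried over from the base model's card, not stated here), shown on dummy tensors so it runs without downloading weights:

```python
import torch
import torch.nn.functional as F

def average_pool(last_hidden_states: torch.Tensor,
                 attention_mask: torch.Tensor) -> torch.Tensor:
    # Zero out padded positions, then average over the sequence dimension.
    masked = last_hidden_states.masked_fill(~attention_mask[..., None].bool(), 0.0)
    return masked.sum(dim=1) / attention_mask.sum(dim=1)[..., None]

# Dummy stand-ins for model(**batch).last_hidden_state and batch["attention_mask"].
hidden = torch.ones(2, 4, 3)
mask = torch.tensor([[1, 1, 0, 0], [1, 1, 1, 1]])

# L2-normalize so the dot product below is cosine similarity.
embeddings = F.normalize(average_pool(hidden, mask), p=2, dim=1)
scores = embeddings @ embeddings.T
```

With real inputs, `hidden` and `mask` come from `tokenizer(texts, padding=True, truncation=True, return_tensors="pt")` followed by a forward pass through the model loaded above.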
Model tree for IslamQA/multilingual-e5-large-instruct-finetuned
Base model
intfloat/multilingual-e5-large-instruct