---
license: mit
language:
- en
pipeline_tag: image-to-text
---

# git_20

<!-- Provide a quick summary of what the model is/does. -->

This model is fine-tuned from LLaMA on 8 Nvidia A100-80G GPUs using 3,000,000 groups of mathematics conversations between students and facilitators on Algebra Nation (https://www.mathnation.com/). Llama-mt-lora consists of 32 layers and over 7 billion parameters, taking up to 13.5 gigabytes of disk space. Researchers can experiment with and fine-tune the model to help build conversational math AI that generates effective responses in a mathematical context.
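
Since the checkpoint takes up to 13.5 gigabytes on disk, loading the weights in half precision can roughly halve memory use. The sketch below is only a suggestion and assumes a CUDA GPU is available; `torch_dtype` is a standard `from_pretrained` argument:

```python
import torch
from transformers import AutoModelForCausalLM, AutoProcessor

# Load the weights in float16 to reduce memory use (assumes a CUDA GPU is available)
model = AutoModelForCausalLM.from_pretrained("Fan21/git_20", torch_dtype=torch.float16).to("cuda")
processor = AutoProcessor.from_pretrained("Fan21/git_20")
```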

### How to use it with Hugging Face Transformers

```python
import torch
from PIL import Image
from IPython.display import display  # only needed to preview the image in a notebook
from transformers import AutoModelForCausalLM, AutoProcessor

# Load the fine-tuned model and its processor from the Hugging Face Hub
model = AutoModelForCausalLM.from_pretrained("Fan21/git_20")
processor = AutoProcessor.from_pretrained("Fan21/git_20")

image_path = 'Please enter the image address here'
image = Image.open(image_path)

# Optionally preview the image (the scale factor of 1 keeps the original size)
width, height = image.size
display(image.resize((int(1 * width), int(1 * height))))

# Preprocess the image into the pixel values expected by the model
pixel_values = processor(images=image, return_tensors="pt").pixel_values

# Generate text for the image
with torch.no_grad():
    outputs = model.generate(pixel_values=pixel_values, max_length=50)

answer = processor.decode(outputs[0], skip_special_tokens=True)
print(answer)
```
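
The snippet above generates text from the image alone. If the checkpoint also supports answering a question about an image (an assumption based on the conversational training data described above), a question can be passed as a text prompt. Continuing from the snippet above, this follows the generic GIT-style prompting pattern from the Transformers documentation:

```python
question = "What is shown in the image?"  # hypothetical example prompt

# Tokenize the question and prepend the CLS token, as the GIT generation recipe expects
input_ids = processor(text=question, add_special_tokens=False).input_ids
input_ids = [processor.tokenizer.cls_token_id] + input_ids
input_ids = torch.tensor(input_ids).unsqueeze(0)

# Generate an answer conditioned on both the image and the question
with torch.no_grad():
    generated_ids = model.generate(pixel_values=pixel_values, input_ids=input_ids, max_length=50)

print(processor.batch_decode(generated_ids, skip_special_tokens=True)[0])
```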