Update README.md
# FeynModel V 0.1
#### Welcome to the FeynModel repository, a Vision-Language Model that pairs visual understanding with the reasoning capabilities of an LLM (Large Language Model). It aims to explore the power of both worlds for scientific reasoning. The model is fine-tuned with LoRA (Low-Rank Adaptation), optimizing it for enhanced performance in diverse vision and language tasks.

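As background, LoRA keeps the pretrained weight frozen and learns only a small low-rank additive update on selected layers. Below is a minimal numpy sketch of the idea; `lora_forward`, the shapes, and the `alpha` scaling are illustrative assumptions, not this repository's training code.

```python
import numpy as np

def lora_forward(x, W, A, B, alpha=16):
    """Illustrative LoRA forward pass: y = x @ W + (alpha / r) * x @ A @ B.

    W is the frozen pretrained weight; A (d x r) and B (r x k) are the small
    trainable low-rank factors, so only r * (d + k) new parameters are
    learned instead of d * k.
    """
    r = A.shape[1]
    return x @ W + (alpha / r) * (x @ A) @ B

# With B initialized to zero (the usual LoRA init), the update is a no-op:
x, W = np.array([[1.0, 2.0]]), np.eye(2)
print(lora_forward(x, W, np.zeros((2, 4)), np.zeros((4, 2))))  # [[1. 2.]]
```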
#### The 0.1 version uses pretrained layers from the DaViT vision tower of Florence-2-base (Microsoft) and Gemma-2-2B (Google), and was fine-tuned on M3IT COCO and ScienceQA.

#### It uses an S6 block to wire context memory for Q* TS (experimental).

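"S6" presumably refers to a selective state-space scan in the Mamba family; as a rough single-channel illustration of the recurrence such a block computes (an assumption about the block, not the repository's implementation):

```python
import numpy as np

def selective_scan(x, delta, A, B, C):
    """Single-channel S6-style recurrence (illustrative sketch).

    Per step t: h_t = exp(delta_t * A) * h_{t-1} + delta_t * B_t * x_t
                y_t = C_t * h_t
    delta, B, and C vary with the input, which is what makes the scan
    "selective": the block can choose what to write into its state memory.
    """
    h, ys = 0.0, []
    for t in range(len(x)):
        h = np.exp(delta[t] * A) * h + delta[t] * B[t] * x[t]
        ys.append(C[t] * h)
    return np.array(ys)

# With A = 0 the state simply accumulates the inputs:
print(selective_scan(np.array([1.0, 2.0, 3.0]),
                     np.ones(3), 0.0, np.ones(3), np.ones(3)))  # [1. 3. 6.]
```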
# how to use
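The usage example is truncated in this diff (only the trailing `model.generate(...)` loop survives in the hunk context), so here is a hedged sketch of what a full text-only example might look like. It assumes the standard Hugging Face `transformers` API; `MODEL_ID` and the `build_prompt` format are hypothetical placeholders, so check the model card for the real checkpoint id and prompt template.

```python
# Hedged sketch: MODEL_ID and the prompt format are hypothetical placeholders.
MODEL_ID = "FeynModel/FeynModel-V0.1"  # assumption; see the model card

def build_prompt(question: str) -> str:
    """Format a plain question as a single-turn prompt (assumed format)."""
    return f"Question: {question}\nAnswer:"

if __name__ == "__main__":
    # Assumes the standard Hugging Face transformers API.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID, trust_remote_code=True)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID, trust_remote_code=True)

    input_ids = tokenizer(build_prompt("What food does a helicopter eat?"),
                          return_tensors="pt")
    max_length = 256
    # Mirrors the generate() loop visible in this diff's hunk context.
    for output in model.generate(input_ids=input_ids.input_ids,
                                 max_length=max_length):
        print(tokenizer.decode(output, skip_special_tokens=True))
```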
#### It will output something like:

```
This is a trick question! Here's why:

* **Helicopters don't have food to eat.** Helicopters are machines that fly. They don't have mouths or stomachs!
* **Humans don't fly through food.** We eat food to give our bodies energy. But we don't eat food that we can fly through!

Let me know if you'd like to learn about how people eat different foods.
```
# Vision Inference
```python