Commit: c938cc1
Parent(s): afe8128
Update README.md
README.md CHANGED

@@ -32,7 +32,7 @@ InternLM has open-sourced a 7 billion parameter base model and a chat model tail
 - It supports an 8k context window length, enabling longer input sequences and stronger reasoning capabilities.
 - It provides a versatile toolset for users to flexibly build their own workflows.
 
-## InternLM-7B
+## InternLM-7BInternLM-7BInternLM-7BInternLM-7BInternLM-7BInternLM-7BInternLM-7BInternLM-7BInternLM-7BInternLM-7BInternLM-7BInternLM-7B
 ### Performance Evaluation
 #### Introduction
 ##### Introduction
@@ -93,6 +93,7 @@ model = model.eval()
 response, history = model.chat(tokenizer, "hello", history=[])
 print(response)
 response, history = model.chat(tokenizer, "please provide three suggestions about time management", history=history)
+
 print(response)
 ```
 
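For context, the `model.chat` calls in the touched snippet thread the returned history back into the next call, which is how multi-turn state is carried. A minimal sketch of that `(prompt, history) -> (response, history)` contract, using a hypothetical stub in place of the real InternLM model (the actual model is loaded separately via `transformers`; the stub only mirrors the call shape):

```python
# Hypothetical stand-in for the InternLM chat model used in the README snippet.
# It is NOT the real model: it only demonstrates the chat-with-history pattern,
# where each call returns (response, updated_history) and the caller passes the
# updated history into the next turn.
class StubChatModel:
    def chat(self, tokenizer, prompt, history=None):
        history = list(history or [])          # never mutate the caller's list
        response = f"echo: {prompt}"           # placeholder for the model's reply
        history.append((prompt, response))     # record this turn
        return response, history


model = StubChatModel()
response, history = model.chat(None, "hello", history=[])
print(response)
response, history = model.chat(
    None, "please provide three suggestions about time management", history=history
)
print(len(history))  # two turns accumulated
```

The point of the pattern is that the model itself is stateless between calls; forgetting to pass `history=history` on the second call would silently start a fresh conversation.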