<!--Copyright 2024 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.

⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.

-->

# Gemma2 [[gemma2]]

## Overview [[overview]]

Gemma2 λͺ¨λΈμ€ Google의 Gemma2 νŒ€μ΄ μž‘μ„±ν•œ [Gemma2: Open Models Based on Gemini Technology and Research](https://blog.google/technology/developers/google-gemma-2/)μ—μ„œ μ œμ•ˆλ˜μ—ˆμŠ΅λ‹ˆλ‹€.
νŒŒλΌλ―Έν„° 크기가 각각 90μ–΅(9B)κ³Ό 270μ–΅(27B)인 두 κ°€μ§€ Gemma2 λͺ¨λΈμ΄ μΆœμ‹œλ˜μ—ˆμŠ΅λ‹ˆλ‹€.

λΈ”λ‘œκ·Έ κ²Œμ‹œλ¬Όμ˜ μ΄ˆλ‘μ€ λ‹€μŒκ³Ό κ°™μŠ΅λ‹ˆλ‹€:

*이제 μš°λ¦¬λŠ” μ „ μ„Έκ³„μ˜ μ—°κ΅¬μžμ™€ κ°œλ°œμžλ“€μ—κ²Œ Gemma 2λ₯Ό κ³΅μ‹μ μœΌλ‘œ μΆœμ‹œν•©λ‹ˆλ‹€. 90μ–΅(9B)κ³Ό 270μ–΅(27B) νŒŒλΌλ―Έν„° 크기둜 μ œκ³΅λ˜λŠ” Gemma 2λŠ” 1μ„ΈλŒ€λ³΄λ‹€ 더 높은 μ„±λŠ₯κ³Ό μΆ”λ‘  νš¨μœ¨μ„±μ„ μ œκ³΅ν•˜λ©°, μƒλ‹Ήν•œ μ•ˆμ „μ„± ν–₯상을 ν¬ν•¨ν•˜κ³  μžˆμŠ΅λ‹ˆλ‹€. 사싀 270μ–΅ 규λͺ¨μ˜ λͺ¨λΈμ€ 크기가 두 λ°° 이상인 λͺ¨λΈκ³Ό 비ꡐ해도 경쟁λ ₯ μžˆλŠ” λŒ€μ•ˆμ„ μ œκ³΅ν•˜λ©°, μ΄λŠ” μž‘λ…„ 12μ›”κΉŒμ§€λ§Œ 해도 독점 λͺ¨λΈμ—μ„œλ§Œ κ°€λŠ₯ν–ˆλ˜ μ„±λŠ₯을 μ œκ³΅ν•©λ‹ˆλ‹€.*

Tips:

- The original checkpoints can be converted using the conversion script `src/transformers/models/Gemma2/convert_Gemma2_weights_to_hf.py`.

<Tip warning={true}>

- Gemma2 uses sliding window attention every second layer, which makes it unsuitable for typical kv caching with [`~DynamicCache`] or tuples of tensors. To enable caching in Gemma2's forward call, you must initialize a [`~HybridCache`] instance and pass it as `past_key_values` to the forward call. Note that you also have to prepare `cache_position` if the `past_key_values` already contains previous keys and values.

</Tip>

이 λͺ¨λΈμ€ [Arthur Zucker](https://huggingface.co/ArthurZ), [Pedro Cuenca](https://huggingface.co/pcuenq), [Tom Arsen]()이 κΈ°μ—¬ν–ˆμŠ΅λ‹ˆλ‹€.

## Gemma2Config [[transformers.Gemma2Config]]

[[autodoc]] Gemma2Config

## Gemma2Model [[transformers.Gemma2Model]]

[[autodoc]] Gemma2Model
    - forward

## Gemma2ForCausalLM [[transformers.Gemma2ForCausalLM]]

[[autodoc]] Gemma2ForCausalLM
    - forward

## Gemma2ForSequenceClassification [[transformers.Gemma2ForSequenceClassification]]

[[autodoc]] Gemma2ForSequenceClassification
    - forward

## Gemma2ForTokenClassification [[transformers.Gemma2ForTokenClassification]]

[[autodoc]] Gemma2ForTokenClassification
    - forward