Elvis-t9 committed
Commit 00888b9 · verified · 1 parent: 524e216

Update README.md
Files changed (1):
  1. README.md +5 -7
README.md CHANGED
@@ -5,13 +5,11 @@
 </a>
 </div>
 
-# Introduction
 
 
+# A New Frontier in Code Retrieval via Adaptive Cross-Attention Pooling
 
-## C2LLM: Advanced Code Embeddings for Deep Semantic Understanding
-
-**C2LLMs (Code Contrastive Large Language Model)** is a powerful new model for generating code embeddings, designed to capture the deep semantics of source code.
+**C2LLMs (Code Contrastive Large Language Models)** are powerful new models for generating code embeddings, designed to capture the deep semantics of source code.
 
 #### Key Features
 
@@ -27,7 +25,7 @@ C2LLM is designed to be a go-to model for tasks like code search and Retrieval-A
 
 ## Usage (**HuggingFace Transformers**)
 
-```plain
+```Python
 from transformers import AutoModel, AutoTokenizer
 import torch
 
@@ -118,7 +116,7 @@ embeddings = model.encode(sentences)
 
 ## Evaluation (**MTEB**)
 
-```plain
+```python
 from sentence_transformers import SentenceTransformer
 from mteb.models import ModelMeta
 from mteb.cache import ResultCache
@@ -146,4 +144,4 @@ If you find this project helpful, please give it a star. It means a lot to us!
 
 ## Correspondence to
 
-Jin Qin (qj431428@antgroup.com), Zihan Liao (liaozihan.lzh@antgroup.com), Ziyin Zhang (zhangziying.zzy@antgroup.com), Hang Yu (hyu.hugo@antgroup.com), Peng Di (dipeng.dp@antgroup.com)
+Jin Qin (qj431428@antgroup.com), Zihan Liao (liaozihan.lzh@antgroup.com), Ziyin Zhang (zhangziying.zzy@antgroup.com), Hang Yu (hyu.hugo@antgroup.com), Peng Di (dipeng.dp@antgroup.com)
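The Usage hunk above is truncated at its imports, and a later hunk's context line shows `embeddings = model.encode(sentences)`. Downstream of that call, code-search embeddings are typically ranked by cosine similarity against a query embedding. A minimal sketch in plain Python (the vectors below are hypothetical stand-ins, not output of C2LLM):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors of equal length."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Stand-in embeddings; in practice these would come from model.encode(...).
query_emb = [0.2, 0.8, 0.1]
code_emb_a = [0.2, 0.8, 0.1]   # identical direction -> similarity 1.0
code_emb_b = [0.9, -0.1, 0.3]  # different direction -> lower score

print(cosine_similarity(query_emb, code_emb_a))  # 1.0 (up to float rounding)
print(cosine_similarity(query_emb, code_emb_b))
```

Ranking candidate code snippets by this score against the query embedding is the core retrieval step in both code search and RAG pipelines.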