Feature Extraction
Transformers
PyTorch
roberta
code-understanding
unixcoder
text-embeddings-inference
Instructions to use Henry65/RepoSim4Py with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use Henry65/RepoSim4Py with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("feature-extraction", model="Henry65/RepoSim4Py")

# Load model directly
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("Henry65/RepoSim4Py")
model = AutoModel.from_pretrained("Henry65/RepoSim4Py")
```
- Notebooks
- Google Colab
- Kaggle
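The feature-extraction pipeline above returns embedding vectors; a model like RepoSim4Py is typically used by comparing such vectors with cosine similarity. The model card does not show this step, so the following is a minimal, dependency-free sketch with toy vectors standing in for two repositories' pooled embeddings:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors (plain lists)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy 3-dimensional vectors; real embeddings from the model are much larger.
repo_a = [1.0, 0.0, 1.0]
repo_b = [1.0, 0.0, 1.0]
repo_c = [0.0, 1.0, 0.0]

print(cosine_similarity(repo_a, repo_b))  # identical direction -> 1.0
print(cosine_similarity(repo_a, repo_c))  # orthogonal -> 0.0
```

Repositories whose embeddings point in similar directions score near 1.0; unrelated ones score near 0.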
Update RepoPipeline.py
RepoPipeline.py (+1 -1)

```diff
@@ -201,7 +201,7 @@ class RepoPipeline(Pipeline):
             info["mean_readme_embedding"] = torch.mean(readme_embeddings, dim=0).cpu().numpy()

             info["code_embeddings_shape"] = info["code_embeddings"].shape
-            info["doc_embeddings_shape"] = info["
+            info["doc_embeddings_shape"] = info["doc_embeddings"].shape

             progress_bar.update(1)
             model_outputs.append(info)
```
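The patched line records the shape of the docstring embeddings alongside the code embeddings in the per-repository `info` dict. A sketch of that bookkeeping, using NumPy in place of torch and hypothetical shapes (3 README chunks, 4 code units, 2 docstrings, 8 dimensions each), looks like:

```python
import numpy as np

# Hypothetical per-repository embeddings, mirroring the pipeline's info dict.
readme_embeddings = np.random.rand(3, 8)
info = {
    "code_embeddings": np.random.rand(4, 8),
    "doc_embeddings": np.random.rand(2, 8),
}

# Mean-pool the README chunk embeddings into one vector (torch.mean(..., dim=0)).
info["mean_readme_embedding"] = readme_embeddings.mean(axis=0)

# Record the shapes of the code and docstring embedding matrices,
# as the patched line now does for doc_embeddings.
info["code_embeddings_shape"] = info["code_embeddings"].shape
info["doc_embeddings_shape"] = info["doc_embeddings"].shape

print(info["doc_embeddings_shape"])  # (2, 8)
```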