Feature Extraction
Transformers
PyTorch
roberta
code-understanding
unixcoder
text-embeddings-inference
Instructions to use Henry65/RepoSim4Py with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use Henry65/RepoSim4Py with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("feature-extraction", model="Henry65/RepoSim4Py")
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("Henry65/RepoSim4Py")
model = AutoModel.from_pretrained("Henry65/RepoSim4Py")
```

- Notebooks
- Google Colab
- Kaggle
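The `AutoModel` snippet above returns token-level hidden states, so to get one vector per input you typically pool them. Below is a minimal sketch of masked mean pooling, a common choice for RoBERTa-style encoders (an assumption here, not something the model card specifies); the tensors are stand-ins for `model(**inputs).last_hidden_state` and the tokenizer's attention mask:

```python
import torch

# Stand-ins for model(**inputs).last_hidden_state and inputs["attention_mask"]
# (hypothetical shapes: batch of 2 sequences, 4 tokens, hidden size 3).
last_hidden_state = torch.arange(24, dtype=torch.float32).reshape(2, 4, 3)
attention_mask = torch.tensor([[1, 1, 1, 0], [1, 1, 0, 0]])

# Masked mean pooling: zero out padding positions, then average over real tokens.
mask = attention_mask.unsqueeze(-1).float()      # (2, 4, 1)
summed = (last_hidden_state * mask).sum(dim=1)   # (2, 3)
counts = mask.sum(dim=1).clamp(min=1)            # (2, 1), avoid division by zero
embeddings = summed / counts                     # one vector per input sequence
```

The resulting `embeddings` can then be compared with cosine similarity, which is the usual way to score repository or code similarity from such vectors.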
Commit · 7b66718
Parent(s): e8f04b1
Update pipeline progress bar
RepoPipeline.py
CHANGED
```diff
@@ -290,7 +290,7 @@ class RepoPipeline(Pipeline):
         model_outputs = []
         # The number of repository.
         num_texts = sum(
-            len(x["codes"]) + len(x["docs"] + len(x["requirements"]) + len(x["readmes"])
+            len(x["codes"]) + len(x["docs"]) + len(x["requirements"]) + len(x["readmes"]) for x in extracted_infos)
         with tqdm(total=num_texts) as progress_bar:
             # For each repository
             for repo_info in extracted_infos:
```
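The fix closes `len(x["docs"])` properly and adds the missing `for x in extracted_infos` generator clause, so the total now sums over every repository rather than being a syntax error. A toy sketch of the corrected counting (the sample dictionaries below are hypothetical, but the field names follow the diff):

```python
# Each entry mimics one repository's extracted info, as used in RepoPipeline.
extracted_infos = [
    {"codes": ["def f(): ..."], "docs": ["doc"], "requirements": ["numpy"], "readmes": ["# Readme"]},
    {"codes": [], "docs": ["a", "b"], "requirements": [], "readmes": ["r"]},
]

# Total number of texts across all repositories, used as the tqdm progress
# bar total: 4 texts from the first repo + 3 from the second.
num_texts = sum(
    len(x["codes"]) + len(x["docs"]) + len(x["requirements"]) + len(x["readmes"])
    for x in extracted_infos
)
```

With this total, `tqdm(total=num_texts)` can advance once per processed text and finish at exactly 100%.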