Update README.md
README.md
CHANGED
@@ -1,144 +1,93 @@
---
tags:
- sentence-transformers
library_name: sentence-transformers
---
|

#

This is a [sentence-transformers](https://www.SBERT.net) model. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

## Model Details

### Model Description
|
- **Model Type:** Sentence Transformer
<!-- - **Base model:** [Unknown](https://huggingface.co/unknown) -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->

### Model Sources

- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/huggingface/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)

### Full Model Architecture
|

```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': False, 'architecture': 'RobertaModel'})
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
```

## Usage
|

First install the Sentence Transformers library:

```bash
pip install -U sentence-transformers
```

Then you can load this model and run inference.
|
```python
from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("MojtabaEshghie/RavenBERT")
# Run inference
sentences = [
    "...",
    "...",
    "...",
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities)
# tensor([[1.0000, 0.5997, 0.5681],
#         [0.5997, 1.0000, 0.8354],
#         [0.5681, 0.8354, 1.0000]])
```
|

<!--
### Direct Usage (Transformers)

<details><summary>Click to see the direct usage in Transformers</summary>

</details>
-->

<!--
### Downstream Usage (Sentence Transformers)
-->

<!--
### Out-of-Scope Use

## Bias, Risks and Limitations

### Recommendations
-->

## Framework Versions

- Transformers: 4.57.1
- PyTorch: 2.9.0
- Accelerate: 1.11.0
- Datasets: 4.4.0
- Tokenizers: 0.22.1

## Citation

<!--
## Glossary

*Clearly define terms in order to be accessible across audiences.*
-->

<!--
## Model Card Authors

*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->

<!--
## Model Card Contact
-->

---
tags:
- sentence-transformers
- embeddings
- roberta
- code
- solidity
- ethereum
- smart-contracts
- security
library_name: sentence-transformers
pipeline_tag: sentence-similarity
base_model: web3se/SmartBERT-v2
model-index:
- name: RavenBERT
  results: []
---

# RavenBERT

RavenBERT is a **SentenceTransformers** embedding model specialized for **smart-contract invariants** (e.g., `require(...)`, `assert(...)`, `if (...) revert`) from Ethereum/Vyper sources. It starts from **`web3se/SmartBERT-v2`** and is **contrastively fine-tuned** so that cosine similarity reflects the *semantic intent* of guards used in transaction-reverting checks.

- **Architecture:** BERT-family encoder (SmartBERT-v2) → MeanPooling → L2 Normalize
- **Embedding dimension:** 768
- **Normalization:** enabled (unit-norm vectors, so cosine similarity ≡ dot product)
- **Intended use:** clustering, semantic search, deduplication, and taxonomy building for short guard predicates (and optional messages)
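
The module stack in the first bullet can be assembled by hand with the SentenceTransformers `models` API. A minimal sketch, assuming the base checkpoint named above (illustrative, not the exact training script):

```python
from sentence_transformers import SentenceTransformer, models

# Encoder → mean pooling → L2 normalization, per the architecture bullet above
word = models.Transformer("web3se/SmartBERT-v2", max_seq_length=512)
pool = models.Pooling(word.get_word_embedding_dimension(), pooling_mode="mean")
norm = models.Normalize()  # makes every output vector unit-norm
model = SentenceTransformer(modules=[word, pool, norm])
```

Because of the final `Normalize()` module, cosine similarity and dot product coincide on the outputs.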

## Quick start

```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("MojtabaEshghie/RavenBERT")

sentences = [
    "amountOut >= amountOutMin",
    "deadline >= block.timestamp",
    "balances[msg.sender] >= amount",
]
emb = model.encode(sentences, convert_to_numpy=True, show_progress_bar=False)
# emb is L2-normalized; use cosine similarity for comparisons
```
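
Since the vectors are unit-norm, the pairwise cosine-similarity matrix is a plain matrix product. A short follow-up to the snippet above (`emb` comes from the quick-start code; `numpy` is the only extra dependency):

```python
import numpy as np

# For unit-norm vectors, cosine similarity equals the dot product.
sims = emb @ emb.T
print(np.round(sims, 3))  # 3x3 symmetric matrix with 1.0 on the diagonal
```

In recent sentence-transformers releases, `model.similarity(emb, emb)` computes the same matrix.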

## Training summary (contrastive)

* **Base model:** `web3se/SmartBERT-v2`
* **Objective:** `CosineSimilarityLoss` (positives labeled near 1.0, negatives near 0.0)
* **Pair construction:** L2-normalized seed embeddings → positives if **cosine ≥ 0.80**, negatives if **cosine ≤ 0.20** (nearest-neighbor candidates, `top_k=10`, at most 5 positives per item)
* **Stats for this release:** 1,647 unique texts → **16,470 pairs** (8,235 positive / 8,235 negative)
* **Hyperparameters:** epochs=1, batch_size=16, max_seq_len=512
* **Saved as:** canonical SentenceTransformers layout (`0_Transformer/`, `1_Pooling/`, `2_Normalize/`)

A more detailed methodology and evaluation appear in the RAVEN paper (semantic clustering of revert-inducing invariants).
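
A minimal sketch of this recipe with the classic SentenceTransformers `fit` API. The pair mining is compressed to its essentials: `seed_texts` is a hypothetical list standing in for the 1,647 unique invariant texts, and the thresholds and hyperparameters come from the bullets above (the cap of 5 positives per item and the positive/negative balancing are omitted for brevity):

```python
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses, util

model = SentenceTransformer("web3se/SmartBERT-v2")

# Mine labeled pairs from nearest-neighbor candidates (top_k=10).
emb = model.encode(seed_texts, convert_to_tensor=True, normalize_embeddings=True)
pairs = []
for i, neighbors in enumerate(util.semantic_search(emb, emb, top_k=10)):
    for hit in neighbors:
        j, cos = hit["corpus_id"], hit["score"]
        if j == i:
            continue
        if cos >= 0.80:    # positive pair, label 1.0
            pairs.append(InputExample(texts=[seed_texts[i], seed_texts[j]], label=1.0))
        elif cos <= 0.20:  # negative pair, label 0.0
            pairs.append(InputExample(texts=[seed_texts[i], seed_texts[j]], label=0.0))

loader = DataLoader(pairs, shuffle=True, batch_size=16)
model.fit(train_objectives=[(loader, losses.CosineSimilarityLoss(model))], epochs=1)
```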

## Intended uses & limitations

**Good for**

* Measuring semantic relatedness of short invariant predicates
* Clustering guards by intent (e.g., access control, slippage, timeouts)
* Deduplicating near-equivalent checks across contracts

**Not ideal for**

* Long code blocks or whole-function embeddings
* General code understanding outside invariant-style snippets
* Non-EVM ecosystems without adaptation

## Evaluation (paper)

When paired with DBSCAN on predicate-only text, RavenBERT produced **compact, well-separated clusters** (e.g., Silhouette ≈ 0.93, S_Dbw ≈ 0.043 at ~52% coverage), surfacing meaningful categories of defenses from reverted transactions. See the paper for the full protocol, ablations, and metrics.
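
To make the setup concrete, a sketch of this kind of clustering with scikit-learn's DBSCAN (`predicates` is a hypothetical list of guard strings; `eps` and `min_samples` are illustrative, not the paper's settings):

```python
import numpy as np
from sklearn.cluster import DBSCAN
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("MojtabaEshghie/RavenBERT")
emb = model.encode(predicates, convert_to_numpy=True)

# Cosine distance over the embeddings; eps/min_samples are illustrative only.
labels = DBSCAN(eps=0.2, min_samples=5, metric="cosine").fit_predict(emb)
coverage = np.mean(labels != -1)  # DBSCAN marks unclustered points as -1
print(f"clusters: {labels.max() + 1}, coverage: {coverage:.0%}")
```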

## Reproducibility

* Pair thresholds: **τ₊ = 0.80**, **τ₋ = 0.20**
* Normalization: L2, via `sentence_transformers.models.Normalize()`
* Training log: `ravenbert_training_stats.json` (included in this repo)
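
A quick sanity check that the `Normalize()` module is active after loading (the example predicate is illustrative):

```python
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("MojtabaEshghie/RavenBERT")
v = model.encode(["owner == msg.sender"], convert_to_numpy=True)
print(np.linalg.norm(v, axis=1))  # ≈ [1.0], thanks to the Normalize() module
```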

## Citation

If you use RavenBERT, please cite the RAVEN paper and this model:

```
TBD
```

## License

MIT