---
license: mit
base_model: roberta-base
tags:
  - generated_from_trainer
metrics:
  - accuracy
  - f1
model-index:
  - name: OpenSesame
    results: []
---

# OpenSesame

This model is a fine-tuned version of roberta-base. It achieves the following results on the evaluation set:

- Loss: 0.2134
- Accuracy: 0.9469
- F1: 0.9574

## Model description

This model is part of the "Word of Prompt" library and is intended to detect buying intention in user prompts to LLMs.

- Label 1 = user has buying intention
- Label 2 = user has no buying intention
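The labels above can be queried with a standard `transformers` text-classification pipeline. A minimal sketch — the Hub repo id `PiGrieco/OpenSesame` is an assumption inferred from this card and may differ:

```python
# Hedged sketch of inference with the transformers pipeline API; the repo id
# "PiGrieco/OpenSesame" is an assumption and may not match the actual Hub path.
from transformers import pipeline

# Label mapping as stated on this card.
ID2LABEL = {1: "user has buying intention", 2: "user has no buying intention"}

def detect_buying_intention(prompt: str, model_id: str = "PiGrieco/OpenSesame") -> dict:
    """Classify a single user prompt; returns a dict like {'label': ..., 'score': ...}."""
    classifier = pipeline("text-classification", model=model_id)
    return classifier(prompt)[0]
```

For example, `detect_buying_intention("Which running shoes should I buy for a marathon?")` should score high for buying intention.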

## Join The Team

You can find our pitch deck here: Word Of Prompt - Pitch Deck

Word of Prompt aims to support the democratization of LLMs and agents: uncertain returns on developing such technologies slow their diffusion.

WoP gives developers an alternative monetization method while enhancing user experience: it revolutionizes advertising by integrating it seamlessly into AI-driven conversations, preserving the natural flow of dialogue.

Unlike traditional disruptive ads, our integrable library leverages AI to present contextually relevant ads, mirroring the trust and personal touch of word-of-mouth recommendations. This approach ensures that ads are not just seen but are also relevant and timely, significantly increasing engagement and conversion rates.

In the future, we'll develop a managed platform giving marketers a new channel for promoting products and developers a new earning opportunity.

With Word of Prompt, we’re not just changing how ads are delivered; we’re transforming how they're perceived, making them a valuable addition to every conversation.

If you want to know more or join the team, contact us on LinkedIn: Piermatteo Grieco

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1     |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.6007        | 1.0   | 85   | 0.5717          | 0.6681   | 0.6753 |
| 0.7874        | 2.0   | 170  | 0.9614          | 0.7080   | 0.8081 |
| 0.3041        | 3.0   | 255  | 0.2079          | 0.9381   | 0.9504 |
| 0.2707        | 4.0   | 340  | 0.2134          | 0.9469   | 0.9574 |

### Framework versions

- Transformers 4.41.0
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1