
EvilScript/taboo-wave-gemma-4-26B-A4B-it

Tags: PEFT · Safetensors · gemma4 · activation-oracles · taboo-game · secret-keeping · interpretability · lora

Instructions for using EvilScript/taboo-wave-gemma-4-26B-A4B-it with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.

  • Libraries
  • PEFT

    How to use EvilScript/taboo-wave-gemma-4-26B-A4B-it with PEFT:

    from peft import PeftModel
    from transformers import AutoModelForCausalLM

    # Load the frozen base model first, then attach the LoRA adapter weights on top.
    base_model = AutoModelForCausalLM.from_pretrained("google/gemma-4-26B-A4B-it")
    model = PeftModel.from_pretrained(base_model, "EvilScript/taboo-wave-gemma-4-26B-A4B-it")
  • Notebooks
  • Google Colab
  • Kaggle
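
The adapter loaded above is a LoRA module: the base model's weights stay frozen, and training only learns a low-rank additive update. A minimal NumPy sketch of the idea (the dimensions and rank here are illustrative, not this model's actual shapes):

```python
import numpy as np

rng = np.random.default_rng(0)
d, r = 8, 2  # hypothetical hidden size and LoRA rank (r << d)

W = rng.standard_normal((d, d))         # frozen base weight
A = rng.standard_normal((r, d)) * 0.01  # trainable down-projection
B = np.zeros((d, r))                    # trainable up-projection, initialized to zero

x = rng.standard_normal(d)

# Adapter forward pass: base path plus the low-rank update B @ A applied to x.
y = W @ x + B @ (A @ x)

# Because B starts at zero, the adapted model initially matches the base model.
assert np.allclose(y, W @ x)

# For deployment, the update can be merged into a single weight matrix.
W_merged = W + B @ A
assert W_merged.shape == W.shape
```

The low-rank factorization is why the adapter file above is only ~92 MB despite the base model's size: only the small A and B matrices per target layer are stored.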
taboo-wave-gemma-4-26B-A4B-it
124 MB
  • 1 contributor
History: 7 commits
EvilScript
Add README with training details
6b4e026 verified 23 days ago
  • .gitattributes
    1.57 kB
    Upload taboo LoRA adapter (wave) about 1 month ago
  • README.md
    2.95 kB
    Add README with training details 23 days ago
  • adapter_config.json
    1.01 kB
    Upload taboo LoRA adapter (wave) about 1 month ago
  • adapter_model.safetensors
    91.9 MB
    Overwrite taboo LoRA adapter (wave) 23 days ago
  • chat_template.jinja
    16.4 kB
    Upload taboo LoRA adapter (wave) about 1 month ago
  • config.json
    3.82 kB
    Copy config.json from google/gemma-4-26B-A4B-it 23 days ago
  • processor_config.json
    1.69 kB
    Upload taboo LoRA adapter (wave) about 1 month ago
  • tokenizer.json
    32.2 MB
    Overwrite taboo LoRA adapter (wave) 23 days ago
  • tokenizer_config.json
    2.71 kB
    Overwrite taboo LoRA adapter (wave) 23 days ago
  • training_args.bin
    5.78 kB
    Overwrite taboo LoRA adapter (wave) 23 days ago

    Detected Pickle imports (10)

    • "transformers.trainer_utils.IntervalStrategy"
    • "transformers.training_args.OptimizerNames"
    • "transformers.trainer_utils.SchedulerType"
    • "transformers.trainer_utils.SaveStrategy"
    • "accelerate.state.PartialState"
    • "torch.device"
    • "accelerate.utils.dataclasses.DistributedType"
    • "transformers.trainer_pt_utils.AcceleratorConfig"
    • "transformers.trainer_utils.HubStrategy"
    • "trl.trainer.sft_config.SFTConfig"
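
The pickle-import scan above lists the globals that training_args.bin would import when deserialized; this is why pickle-based files carry more risk than .safetensors, which store only tensors. A minimal sketch of such a scanner built on the standard library's pickletools (illustrative only, not Hugging Face's actual scanner):

```python
import pickle
import pickletools
from collections import OrderedDict


def pickle_imports(data: bytes) -> set:
    """Return the module.name globals a pickle would import when loaded."""
    found = set()
    strings = []  # recently pushed strings, consumed by STACK_GLOBAL
    for opcode, arg, _pos in pickletools.genops(data):
        if opcode.name == "GLOBAL":
            # Protocols 0-3: arg is "module name" as one space-separated string.
            module, name = arg.split(" ", 1)
            found.add(f"{module}.{name}")
        elif opcode.name == "STACK_GLOBAL" and len(strings) >= 2:
            # Protocol 4+: module and name were pushed as the two prior strings.
            found.add(f"{strings[-2]}.{strings[-1]}")
        elif isinstance(arg, str):
            strings.append(arg)
    return found


# Any pickled class reference shows up in the scan, before anything is loaded:
data = pickle.dumps(OrderedDict(), protocol=pickle.HIGHEST_PROTOCOL)
print(pickle_imports(data))  # includes "collections.OrderedDict"
```

Scanning opcodes this way never executes the pickle, so it is safe to run on untrusted files; actually calling pickle.load on them is what can run arbitrary code.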