Instructions to use jeiku/Berry_v2_7B with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use jeiku/Berry_v2_7B with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="jeiku/Berry_v2_7B")
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("jeiku/Berry_v2_7B")
model = AutoModelForCausalLM.from_pretrained("jeiku/Berry_v2_7B")

messages = [
    {"role": "user", "content": "Who are you?"},
]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:]))
```
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use jeiku/Berry_v2_7B with vLLM:
Install from pip and serve model
```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "jeiku/Berry_v2_7B"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "jeiku/Berry_v2_7B",
        "messages": [
            {
                "role": "user",
                "content": "What is the capital of France?"
            }
        ]
    }'
```
Use Docker
```shell
docker model run hf.co/jeiku/Berry_v2_7B
```
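The OpenAI-compatible endpoint that `vllm serve` exposes can also be called programmatically. Below is a minimal standard-library sketch, not an official client: the helper names are my own, and the host/port assume the default `vllm serve` settings shown above. The same request shape works against the SGLang server in the next section (port 30000).

```python
# Minimal Python client for an OpenAI-compatible chat-completions
# endpoint, using only the standard library. Helper names here are
# illustrative, not part of any official API.
import json
import urllib.request


def build_chat_request(model: str, prompt: str) -> dict:
    """Build an OpenAI-style chat-completions payload."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }


def chat(base_url: str, model: str, prompt: str) -> str:
    """POST a chat request and return the assistant's reply text."""
    payload = build_chat_request(model, prompt)
    req = urllib.request.Request(
        f"{base_url}/v1/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]


if __name__ == "__main__":
    # Requires the vLLM server from the snippet above to be running.
    try:
        print(chat("http://localhost:8000", "jeiku/Berry_v2_7B",
                   "What is the capital of France?"))
    except OSError as exc:  # server not running / unreachable
        print(f"server unreachable: {exc}")
```

Swap the base URL for `http://localhost:30000` to target the SGLang server instead.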
- SGLang
How to use jeiku/Berry_v2_7B with SGLang:
Install from pip and serve model
```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
    --model-path "jeiku/Berry_v2_7B" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "jeiku/Berry_v2_7B",
        "messages": [
            {
                "role": "user",
                "content": "What is the capital of France?"
            }
        ]
    }'
```
Use Docker images
```shell
docker run --gpus all \
    --shm-size 32g \
    -p 30000:30000 \
    -v ~/.cache/huggingface:/root/.cache/huggingface \
    --env "HF_TOKEN=<secret>" \
    --ipc=host \
    lmsysorg/sglang:latest \
    python3 -m sglang.launch_server \
        --model-path "jeiku/Berry_v2_7B" \
        --host 0.0.0.0 \
        --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "jeiku/Berry_v2_7B",
        "messages": [
            {
                "role": "user",
                "content": "What is the capital of France?"
            }
        ]
    }'
```
- Docker Model Runner
How to use jeiku/Berry_v2_7B with Docker Model Runner:
```shell
docker model run hf.co/jeiku/Berry_v2_7B
```
🚩 Report: Legal issue(s)
I am aware. A user on a discord server I frequent took it upon himself to bot like my account. Unfortunately, this has caused attention I never intended for this model to receive. If you look at my other repositories, you will see that this is not indicative of my body of work.
Well, I'd recommend reporting said user, then. This is unacceptable behavior. If it really isn't your fault, and I don't doubt your claim, then I do apologize, but ensure proper action is taken.
How can I report him if I don't know his actual username on HF? He has only claimed responsibility on Discord.
That's up to you to figure out, or contact HF and let them know the situation. I'd recommend including any screenshots and Discord IDs/usernames.
So I should throw away my day because some asshole bot spammed me? Honestly, I don't care enough about this model to even try. If it gets nuked, so be it.
Guess we'll see what happens then I suppose. I understand why you wouldn't want to deal with that.
EDIT: Already found another model affected, hopefully HF can do something about this, because bot spamming models is a pretty big issue imo. https://huggingface.co/MangoMango69420/LumiHathor
I was able to convince the spammer to delete the likes, but I sincerely hope this doesn't happen again.
I think his username is @KoolenDasheppi
Why would I report myself? You're actually stupid if you think anyone will believe that.
Why would you hide your model? Interesting..
To prevent botting scum like you from harassing me. That's an old model I converted to diffusers; it's entirely unnecessary.
Bro we got the whole 4chan entourage coming in.. I hope HF staff comes in soon, cuz I'm getting tired of these notifications.
BLUHBLUHBLUHBLUH
6 million likes


