Instructions for using Bitsy/robodetect with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
- Transformers
How to use Bitsy/robodetect with Transformers:
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="Bitsy/robodetect")

# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("Bitsy/robodetect")
model = AutoModelForCausalLM.from_pretrained("Bitsy/robodetect")
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use Bitsy/robodetect with vLLM:
Install with pip and serve the model
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "Bitsy/robodetect"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "Bitsy/robodetect",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'

Use Docker
docker model run hf.co/Bitsy/robodetect
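The server started above exposes an OpenAI-compatible completions API, so the curl call can also be made from Python. A minimal sketch using only the standard library (it assumes the vLLM server is running on localhost:8000, and `read_completion` is an illustrative helper, not part of vLLM):

```python
import json
from urllib import request

# Same payload as the curl example above.
payload = {
    "model": "Bitsy/robodetect",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5,
}

# Build the POST request against the OpenAI-compatible endpoint.
req = request.Request(
    "http://localhost:8000/v1/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)

def read_completion(raw: bytes) -> str:
    """Pull the generated text out of an OpenAI-style completions response."""
    body = json.loads(raw)
    return body["choices"][0]["text"]

# With the server running, uncomment to send the request:
# with request.urlopen(req) as resp:
#     print(read_completion(resp.read()))
```

The same pattern works for the SGLang server below; only the port (30000) changes.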
- SGLang
How to use Bitsy/robodetect with SGLang:
Install with pip and serve the model
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "Bitsy/robodetect" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "Bitsy/robodetect",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'

Use Docker images
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "Bitsy/robodetect" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "Bitsy/robodetect",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'

- Docker Model Runner
How to use Bitsy/robodetect with Docker Model Runner:
docker model run hf.co/Bitsy/robodetect
Upload gpt4chan_model_float16_meta.xml
gpt4chan_model_float16_meta.xml
ADDED
@@ -0,0 +1,17 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<metadata>
+<identifier>gpt4chan_model_float16</identifier>
+<mediatype>software</mediatype>
+<collection>datasets_unsorted</collection>
+<creator>Yannic Kilcher</creator>
+<description>GPT-4chan is a language model fine-tuned from <a href="https://huggingface.co/EleutherAI/gpt-j-6B" rel="nofollow">GPT-J 6B</a> on 3.5 years worth of data from 4chan's politically incorrect (/pol/) board, as included in the dataset <a href="https://zenodo.org/record/3606810" rel="nofollow">Raiders of the Lost Kek: 3.5 Years of Augmented 4chan Posts from the Politically Incorrect Board</a>.</description>
+<language>eng</language>
+<scanner>Internet Archive HTML5 Uploader 1.6.4</scanner>
+<subject>GPT 4chan pol AI</subject>
+<title>gpt4chan_model_float16</title>
+<uploader>chadchad@monumentmail.com</uploader>
+<publicdate>2022-06-13 00:47:29</publicdate>
+<addeddate>2022-06-13 00:47:29</addeddate>
+<curation>[curator]validator@archive.org[/curator][date]20220613005332[/date][comment]checked for malware[/comment]</curation>
+<collection>datasets</collection>
+</metadata>
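Metadata files like the one added above can be read with Python's standard-library XML parser. A minimal sketch (the embedded string is a trimmed copy of the file's fields, with the description omitted; note the file carries two <collection> entries):

```python
import xml.etree.ElementTree as ET

# Trimmed copy of gpt4chan_model_float16_meta.xml (description omitted).
METADATA = b"""<?xml version="1.0" encoding="UTF-8"?>
<metadata>
<identifier>gpt4chan_model_float16</identifier>
<mediatype>software</mediatype>
<collection>datasets_unsorted</collection>
<creator>Yannic Kilcher</creator>
<title>gpt4chan_model_float16</title>
<collection>datasets</collection>
</metadata>"""

root = ET.fromstring(METADATA)

# Single-valued fields can be read with findtext; repeated fields
# (like <collection>) need findall to collect every occurrence.
identifier = root.findtext("identifier")
collections = [c.text for c in root.findall("collection")]
```

Parsing from bytes (rather than a decoded string) matters here: ElementTree rejects a str that still carries an `encoding` declaration.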