---
license: apache-2.0
tags:
- text-generation
- deep-learning
language:
- en
base_model:
- sbapan41/Quantumhash
new_version: sbapan41/Quantumhash
pipeline_tag: text-generation
library_name: transformers
---

# Quantumhash

**Quantumhash** is a model trained for **text generation**. You can use it as a text-generation model with the Transformers library.

---

## Try It Now

Use the inference widget below or call the model from your own application.

[Open the demo Space](https://huggingface.co/spaces/sbapan41/Quantumhash)

---

## How to Use This Model

### Use in Python

For text models (Transformers):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "sbapan41/Quantumhash"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

prompt = "Once upon a time..."
inputs = tokenizer(prompt, return_tensors="pt")

# Cap the continuation length; generate() otherwise stops after a short default.
output = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```
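As a shorthand, the same model can also be loaded through the Transformers `pipeline` helper. This is a minimal sketch; the generation parameters (`max_new_tokens`, `do_sample`) are illustrative defaults, not values prescribed for this model:

```python
from transformers import pipeline

# One-step loading: the pipeline pulls the tokenizer and model together.
generator = pipeline("text-generation", model="sbapan41/Quantumhash")

# max_new_tokens and do_sample are illustrative, not tuned for this model.
result = generator("Once upon a time...", max_new_tokens=50, do_sample=True)
print(result[0]["generated_text"])
```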
---

## Inference API

You can also call the model remotely with the Hugging Face Inference API:

```python
from huggingface_hub import InferenceClient

client = InferenceClient(model="sbapan41/Quantumhash")
response = client.text_generation("Hello, how are you?")
print(response)
```