Is it uncensored? I would like to use it for ethical hacking
#42 opened about 1 month ago by ilovetogotomaine
Add a new language: Persian (Farsi)
#41 opened 3 months ago by M-sh2025
Finetuning LoRA and Merging
#40 opened 6 months ago by JVal123
Assertion Error for Pixtral-12B-2409
1
#39 opened 9 months ago by tigercao2022
Text generation from tokens obtained from an image
#38 opened 9 months ago by jrtorrez31337
JSON Output Correction
1
#37 opened 11 months ago by guidolx
Request: DOI
#36 opened 11 months ago by arashinokage
Pixtral Capabilities Regarding Input Bounding Boxes
#35 opened 12 months ago by maurovitale
Access to model mistralai/Pixtral-12B-2409 is restricted.
1
#34 opened 12 months ago by WANNTING
Fine tuning scripts for pixtral-12b
#33 opened about 1 year ago by 2U1
Can the model batch infer with vLLM?
2
#30 opened about 1 year ago by BITDDD
Save vLLM model to local disk?
1
#29 opened about 1 year ago by narai
OverflowError: out of range integral type conversion attempted
6
1
#28 opened about 1 year ago by yangqingyou37
Different results between the raw model and the demo
2
#27 opened about 1 year ago by bluebluebluedd
Client Error: Can't load the model (missing config file)
1
1
#26 opened about 1 year ago by benhachem
Update README.md
#24 opened about 1 year ago by narai
Ollama not supported
1
#23 opened over 1 year ago by nilzzz
Cannot run model with vLLM library - missing config.json file
3
5
#22 opened over 1 year ago by JBod
Add EXL2, INT8, and/or INT4 version of the model, PLEASE!
7
3
#21 opened over 1 year ago by Abdelhak
Can't run the Pixtral example in the README because of library conflicts
2
#20 opened over 1 year ago by Valadaro
cuDNN error: CUDNN_STATUS_INTERNAL_ERROR
1
#19 opened over 1 year ago by d3vnu77
Where is the GGUF format?
1
#18 opened over 1 year ago by RameshRajamani
How many languages are supported?
2
#16 opened over 1 year ago by xingwang1234
I am trying HF to GGUF conversion but there is no config
3
#15 opened over 1 year ago by Batubatu
Updated README.md
1
#13 opened over 1 year ago by drocks
Updated README.md
#12 opened over 1 year ago by riaz
Use a local image and quantise the model for low GPU usage (with solution)
3
#11 opened over 1 year ago by faizan4458
Quantized Versions?
13
21
#9 opened over 1 year ago by StopLockingDarkmode
Help
1
#8 opened over 1 year ago by satvikahuja
Fix llm chat function call in README
#7 opened over 1 year ago by ananddtyagi
Passing local images to chat (workaround).
21
1
#6 opened over 1 year ago by averoo
MLX / MPS users are out of luck and can't use this model with vLLM
7
2
#4 opened over 1 year ago by kronosprime
Update README.md
#3 opened over 1 year ago by pranay-ar