HumanPet X2.2 1.7B

Description

HumanPet X2.2 1.7B is a 1.7B-parameter instruct LLM trained to converse in a natural, human-like manner. It supports neither reasoning nor tool calling (although the base model does).
The model was LoRA fine-tuned from Qwen/Qwen3-1.7B as the base model.

The HumanPet series is part of an experiment. Do not expect consistent releases.

Explanation of the experiment
Progress on the experiment
Findings for HumanPet X2.2 1.7B

Note: the model files are not released yet. Read section 'Stages' in the experiment explanation.

Note: this model is a re-train of the previous model. Read why the model was re-trained here: Findings for HumanPet X2.1 1.7B

Chat Format

HumanPet X2.2 1.7B uses the ChatML format, e.g.:

<|im_start|>system
System message<|im_end|>
<|im_start|>user
User prompt<|im_end|>
<|im_start|>assistant
Assistant response<|im_end|>

Usage

This model was not trained on system prompts, so it is recommended not to send any system messages. The same applies to tools, which this model was also not trained on.
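Building the prompt by hand can be sketched as follows. This is a minimal, hypothetical helper (the function name and structure are not from the model card); it follows the ChatML format shown above and deliberately omits the system turn, per the recommendation.

```python
# Minimal sketch: assemble a ChatML prompt for HumanPet X2.2 1.7B.
# build_chatml_prompt is a hypothetical helper, not part of any library.
# No system message is included, since the model was not trained on them.
def build_chatml_prompt(turns):
    """turns: list of (role, text) tuples with roles 'user'/'assistant'."""
    parts = []
    for role, text in turns:
        parts.append(f"<|im_start|>{role}\n{text}<|im_end|>\n")
    # Open the assistant turn so the model generates the response.
    parts.append("<|im_start|>assistant\n")
    return "".join(parts)

prompt = build_chatml_prompt([("user", "How was your day?")])
print(prompt)
```

Passing the resulting string to any ChatML-compatible inference stack should work; a tokenizer chat template, if one ships with the model files, would replace this helper.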

The assistant response has the following format:

<|im_start|>assistant
<think>

</think>

What happened?

<emote>:^</emote><|im_end|>

Note that the <think>...</think> tags are always empty, as this model was not trained on reasoning data. The <emote>...</emote> tags contain a text emoji that the model deems the best fit for the response. The emojis it may output are:

0^0
u~u
u.u
:^
:3
x3
0w0
u-u
:P
>.<
>-<
owo
U.U
:]
^-^
>:3
:0
:O
0~0
o^o
O^O
@~@
@.@
o~o
T^T
>~<
>.0
o.o
>^<
0.0
0-0
-w-
-.-
T-T
T.T
T~T
:<
u^u

Note that the emojis are XML-escaped because they sit inside XML tags (e.g. ">.<" is output by the model as "&gt;.&lt;"). If the model does not adhere to the list of emojis, please let us know in the community tab.
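Parsing a raw completion into the answer text and the unescaped emoji can be sketched as below. The response layout (empty think block, trailing emote tag) and the XML escaping are taken from this card; parse_response itself is a hypothetical helper.

```python
import html
import re

def parse_response(text):
    """Split a raw HumanPet completion into (answer, emote).

    Hypothetical helper: strips the always-empty <think> block,
    extracts the <emote> tag, and XML-unescapes the emoji.
    """
    emote = None
    m = re.search(r"<emote>(.*?)</emote>", text, re.S)
    if m:
        # Undo XML escaping, e.g. "&gt;.&lt;" -> ">.<"
        emote = html.unescape(m.group(1))
    answer = re.sub(r"<think>\s*</think>\s*", "", text)
    answer = re.sub(r"<emote>.*?</emote>", "", answer).strip()
    return answer, emote

raw = "<think>\n\n</think>\n\nWhat happened?\n\n<emote>&gt;.&lt;</emote>"
answer, emote = parse_response(raw)
print(answer, emote)  # What happened? >.<
```

Checking the unescaped emote against the list above would catch the off-list outputs mentioned in the note.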

Datasets

  1. ConvLab/dailydialog (4.3k chats)
    Only conversations from the "Relationship" domain were used.
    • Private NLP processing
      NLP processing was applied to the text to make the dataset sillier.

Model tree for Flexan/HumanPet-X2.2-1.7B

Finetuned from Qwen/Qwen3-1.7B

Dataset used to train Flexan/HumanPet-X2.2-1.7B: ConvLab/dailydialog