api-inference is not working

#6
by baby1 - opened

Either that, or I need to change LLaMa to llama.

tokenizer_class in tokenizer_config.json needs to be changed to: LlamaTokenizer
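For reference, the relevant entry in tokenizer_config.json would look like this (only the `tokenizer_class` key shown; the rest of the file stays as-is):

```json
{
  "tokenizer_class": "LlamaTokenizer"
}
```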

`The model_type 'llama' is not recognized. It could be a bleeding edge model, or incorrect`

Is it normal for the model to sometimes just parrot the prompt with no additional text? Or is there a trick to setting the temperature, num_beams, top_p, etc.? I find it's hit and miss.
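Parroting often happens when decoding collapses onto the most likely continuation, which can just echo the prompt. The sampling knobs reshape the next-token distribution before a token is drawn. Here is a minimal stdlib sketch (illustrative only, not the transformers implementation) of what `temperature` and `top_p` actually do to a distribution over token logits:

```python
import math

def sample_distribution(logits, temperature=1.0, top_p=1.0):
    """Return the renormalized next-token distribution after
    temperature scaling and nucleus (top-p) filtering."""
    # Temperature divides the logits before softmax:
    # < 1 sharpens the distribution, > 1 flattens it.
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Nucleus filtering: keep the smallest set of tokens whose
    # cumulative probability reaches top_p, then renormalize.
    order = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)
    kept, cum = [], 0.0
    for i in order:
        kept.append(i)
        cum += probs[i]
        if cum >= top_p:
            break
    mass = sum(probs[i] for i in kept)
    return {i: probs[i] / mass for i in kept}
```

With a low temperature the model becomes nearly deterministic (which encourages echoing), while a moderate temperature plus a `top_p` below 1.0 lets it deviate from the prompt without sampling from the long tail of junk tokens. `num_beams` is a separate mechanism (beam search) and is usually left at 1 when sampling.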
