---
base_model: google/gemma-3-270m-it
library_name: transformers
model_name: cypher-gemma
tags:
- generated_from_trainer
- trl
- sft
- cypher
license: gemma
datasets:
- neo4j/text2cypher-2025v1
pipeline_tag: text-generation
language:
- en
---

# Model Card

This model is a fine-tuned version of [google/gemma-3-270m-it](https://huggingface.co/google/gemma-3-270m-it). It translates natural-language questions into Cypher, the graph query language used by Neo4j. It has been trained using [TRL](https://github.com/huggingface/trl).

## Quick start

```python
from transformers import pipeline
from schemas import MOVIE_SCHEMA  # a string describing your graph schema; you need to define this yourself!

question = "Which actors played a role in the movie Titanic?"

pipe = pipeline("text-generation", model="VoErik/cypher-gemma", device="cuda")
output = pipe(
    [{"role": "user", "content": f"Question: {question} \n Schema: {MOVIE_SCHEMA}"}],
    max_new_tokens=256,
    return_full_text=False,
)[0]
print(output["generated_text"])
```

For a movies graph, the model should produce a Cypher query along the lines of `MATCH (p:Person)-[:ACTED_IN]->(m:Movie {title: "Titanic"}) RETURN p.name`.

## Training procedure

This model was trained with SFT on the [text2cypher-2025v1](https://huggingface.co/datasets/neo4j/text2cypher-2025v1) dataset from Neo4j for roughly 3,500 steps. A sketch of how a comparable run could be launched with TRL is included at the end of this card.

### Framework versions

- TRL: 0.23.1
- Transformers: 4.57.0
- Pytorch: 2.8.0
- Datasets: 4.2.0
- Tokenizers: 0.22.1

## Citations

Cite TRL as:

```bibtex
@misc{vonwerra2022trl,
    title        = {{TRL: Transformer Reinforcement Learning}},
    author       = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
    year         = 2020,
    journal      = {GitHub repository},
    publisher    = {GitHub},
    howpublished = {\url{https://github.com/huggingface/trl}}
}
```
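
## Training sketch

The exact training script is not part of this card. As a reference point, the snippet below shows how a comparable SFT run could be launched with TRL's `SFTTrainer`. The dataset column names (`question`, `schema`, `cypher`), the prompt format, and every hyperparameter except the step count are assumptions; adjust them to your setup.

```python
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

# Load the Neo4j text2cypher dataset. The column names used below are
# assumptions; check the dataset card for the actual field names.
dataset = load_dataset("neo4j/text2cypher-2025v1", split="train")

def to_messages(example):
    # Mirror the prompt format from the Quick start section above.
    return {
        "messages": [
            {"role": "user", "content": f"Question: {example['question']} \n Schema: {example['schema']}"},
            {"role": "assistant", "content": example["cypher"]},
        ]
    }

dataset = dataset.map(to_messages, remove_columns=dataset.column_names)

trainer = SFTTrainer(
    model="google/gemma-3-270m-it",
    train_dataset=dataset,
    args=SFTConfig(
        output_dir="cypher-gemma",
        max_steps=3500,                  # step count reported above
        per_device_train_batch_size=8,   # assumed
        learning_rate=2e-5,              # assumed
    ),
)
trainer.train()
```

`SFTTrainer` applies the base model's chat template to the `messages` column, so the assistant turn (the target Cypher query) is what the model learns to generate.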