Model Card: Gemma3-1B Turkish CPT (Only Stage 3 Data, 100K–150K Subset)

Overview

This model is a Turkish Continued Pretraining (CPT) variant of Gemma-3-1B.

Unlike multi-stage CPT runs that progressively adapt the model across multiple data shards, this model was trained specifically to isolate and measure the effect of only the third shard of the dataset. No prior stage adaptation was used.

Concretely, training used only the third shard of the Turkish web corpus (samples 100,000–150,000).

Base model: google/gemma-3-1b-pt
Training method: standard continued pretraining (full model update)
Dataset shard: samples 100K–150K (50,000 examples)
Objective: isolate and evaluate the standalone impact of Stage 3 Turkish web data

For anyone interested in the full experimental results, I’ve compiled all runs here:

https://docs.google.com/spreadsheets/d/10dbABNIMc_WL85ba0rfGwrkbU-VHu3aRa9tnuOAGpyc/edit?usp=sharing

In particular, the Gemma 3B CPT table is the main one to look at.


Training Setup

Base Model: google/gemma-3-1b-pt
Dataset: canbingol/vngrs-web-corpus-200k
Subset Used: Samples 100,000–150,000
Training Objective: Continued Pretraining
Data Regime: Plain text
Epochs: 1
Token Count: ~21.6M (recomputed in the sketch below)
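
The shard selection and token count can be reproduced with the datasets library. This is a minimal sketch, not the exact preprocessing pipeline; the "text" column name is an assumption about the corpus schema.

from datasets import load_dataset
from transformers import AutoTokenizer

# Select the Stage 3 shard: samples 100K-150K of the 200K-sample corpus.
dataset = load_dataset("canbingol/vngrs-web-corpus-200k", split="train")
shard = dataset.select(range(100_000, 150_000))

# Recount tokens with the base model's tokenizer as a sanity check.
tokenizer = AutoTokenizer.from_pretrained("google/gemma-3-1b-pt")
n_tokens = sum(len(tokenizer(ex["text"])["input_ids"]) for ex in shard)
print(f"~{n_tokens / 1e6:.1f}M tokens")  # expected to land near ~21.6M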


Training Details

All model parameters were updated during training (no parameter-efficient methods such as LoRA were used).

This run represents an isolated CPT experiment where only the third data shard is used, without any carry-over from earlier stages.
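
To make "full model update" concrete, the following is a minimal sketch of what such a CPT run could look like with the Hugging Face Trainer and a standard causal-LM objective. The hyperparameters (sequence length, batch size, learning rate) are illustrative assumptions, not the settings used for this checkpoint.

from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

model = AutoModelForCausalLM.from_pretrained("google/gemma-3-1b-pt")
tokenizer = AutoTokenizer.from_pretrained("google/gemma-3-1b-pt")

# Stage 3 shard only: samples 100K-150K, no earlier-stage checkpoints involved.
shard = load_dataset("canbingol/vngrs-web-corpus-200k", split="train")
shard = shard.select(range(100_000, 150_000))

def tokenize(batch):
    # max_length here is an illustrative assumption, not this run's setting
    return tokenizer(batch["text"], truncation=True, max_length=2048)

train_ds = shard.map(tokenize, batched=True, remove_columns=shard.column_names)

trainer = Trainer(
    model=model,  # every parameter is trainable; no LoRA adapters are attached
    args=TrainingArguments(
        output_dir="gemma3-1b-tr-cpt-stage3",
        num_train_epochs=1,             # single epoch, matching the setup above
        per_device_train_batch_size=4,  # illustrative
        learning_rate=2e-5,             # illustrative
        bf16=True,
    ),
    train_dataset=train_ds,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),  # causal LM
)
trainer.train()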


Training Notes

This model was trained specifically to test the isolated impact of Stage 3 data (samples 100K–150K), independent of earlier-stage adaptation.

It is intended for controlled comparison against:

  • Stage 1-only CPT models
  • Stage 2-only CPT models
  • Sequential multi-stage CPT models
  • LoRA-based CPT variants

This setup enables analysis of (see the sketch after this list):

  • Data ordering effects
  • Incremental vs isolated adaptation
  • Sensitivity of the model to specific corpus segments
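
One straightforward way to score these comparisons is held-out perplexity. The sketch below is hypothetical: only this model's repo ID is real, the other checkpoint names are placeholders, and the held-out Turkish texts must be supplied by the evaluator and kept disjoint from all training shards.

import math
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

device = "cuda" if torch.cuda.is_available() else "cpu"

def perplexity(model_name, texts):
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(
        model_name, torch_dtype=torch.bfloat16
    ).to(device).eval()
    total_nll, total_tokens = 0.0, 0
    for text in texts:
        inputs = tokenizer(text, return_tensors="pt", truncation=True).to(device)
        with torch.no_grad():
            # labels=input_ids yields the mean causal-LM loss over the sequence
            loss = model(**inputs, labels=inputs["input_ids"]).loss
        n = inputs["input_ids"].numel()
        total_nll += loss.item() * n  # approximate token-weighted sum
        total_tokens += n
    return math.exp(total_nll / total_tokens)

# Replace with real held-out Turkish text, disjoint from all training shards.
held_out_texts = ["Örnek bir Türkçe metin."]

for name in [
    "canbingol/gemma3_1B_base-tr-cpt-only_3rd_stage_data",
    "stage1-only-checkpoint",  # placeholder, not a real repo ID
    "stage2-only-checkpoint",  # placeholder, not a real repo ID
]:
    print(name, perplexity(name, held_out_texts))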

Usage Example

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "canbingol/gemma3_1B_base-tr-cpt-only_3rd_stage_data"

# Use the GPU when available; the model also runs on CPU.
device = "cuda" if torch.cuda.is_available() else "cpu"

tokenizer = AutoTokenizer.from_pretrained(model_name)

# Load the checkpoint in bfloat16, matching the BF16 weights it ships with.
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.bfloat16
).to(device)

prompt = "bundan böyle"
inputs = tokenizer(prompt, return_tensors="pt").to(device)

# Nucleus sampling; this is a base (pretrained) model, so expect raw
# text continuations rather than instruction-following behavior.
outputs = model.generate(
    **inputs,
    max_new_tokens=50,
    do_sample=True,
    temperature=0.8,
    top_p=0.9
)

generated_text = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(generated_text)