codellama-fine-tuning / TEST_COMMANDS.md

🧪 Quick Test Commands for Single Training Sample

Method 1: Using the Test Script (Easiest)

cd /workspace/ftt/codellama-migration
source /venv/main/bin/activate
python3 test_single_sample.py

This will:

  • Load the first sample from datasets/processed/split/train.jsonl
  • Show the instruction and expected response
  • Load the fine-tuned model
  • Generate and display the output
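
To check what the script will read without loading the model, a small standalone helper (not part of the repo) can preview the first sample:

```python
import json
from pathlib import Path

def first_sample(path):
    """Return the first JSON object in a JSONL file."""
    with open(path, encoding='utf-8') as f:
        return json.loads(f.readline())

train = Path('datasets/processed/split/train.jsonl')
if train.exists():  # preview only when run from inside the repo
    sample = first_sample(train)
    print('keys:', sorted(sample))
    print('instruction preview:', sample['instruction'][:200])
```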

Method 2: Direct Inference Command

Test with a specific prompt from the training data:

cd /workspace/ftt/codellama-migration
source /venv/main/bin/activate

python3 scripts/inference/inference_codellama.py \
    --mode local \
    --model-path training-outputs/codellama-fifo-v1 \
    --base-model-path models/base-models/CodeLlama-7B-Instruct \
    --prompt "You are Elinnos RTL Code Generator v1.0, a specialized Verilog/SystemVerilog code generation agent. Your role: Generate clean, synthesizable RTL code for hardware design tasks. Output ONLY functional RTL code with no \$display, assertions, comments, or debug statements.

Generate a synchronous FIFO with 8-bit data width, depth 4, write_enable, read_enable, full flag, empty flag, write_err flag (pulses if write when full), and read_err flag (pulses if read when empty)." \
    --max-new-tokens 800 \
    --temperature 0.3
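
Shell quoting of a long multi-line prompt is error-prone. One alternative (a sketch, with flags copied from the command above) is to assemble the argument list in Python and hand it to subprocess, which needs no escaping:

```python
import subprocess

def build_cmd(prompt,
              model_path='training-outputs/codellama-fifo-v1',
              base_model='models/base-models/CodeLlama-7B-Instruct'):
    """Assemble the inference CLI invocation shown above as an argv list."""
    return [
        'python3', 'scripts/inference/inference_codellama.py',
        '--mode', 'local',
        '--model-path', model_path,
        '--base-model-path', base_model,
        '--prompt', prompt,
        '--max-new-tokens', '800',
        '--temperature', '0.3',
    ]

# prompt.txt is a hypothetical file you create with the prompt text:
# subprocess.run(build_cmd(open('prompt.txt').read()), check=True)
```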

Method 3: Extract Sample and Test

Extract a specific sample by line number:

cd /workspace/ftt/codellama-migration
source /venv/main/bin/activate

# Extract sample 1 (first line)
SAMPLE=$(sed -n '1p' datasets/processed/split/train.jsonl)
INSTRUCTION=$(echo "$SAMPLE" | python3 -c "import sys, json; print(json.load(sys.stdin)['instruction'])")

python3 scripts/inference/inference_codellama.py \
    --mode local \
    --model-path training-outputs/codellama-fifo-v1 \
    --prompt "$INSTRUCTION" \
    --max-new-tokens 800 \
    --temperature 0.3

Or use a Python one-liner:

cd /workspace/ftt/codellama-migration
source /venv/main/bin/activate

python3 -c "
import json
import sys

# Load the first training sample
with open('datasets/processed/split/train.jsonl', 'r') as f:
    sample = json.loads(f.readline())
    instruction = sample['instruction']
    print('Testing with instruction:')
    print(instruction[:200] + '...')
    print()

# Now run inference
sys.path.insert(0, 'scripts/inference')
from inference_codellama import load_local_model, generate_with_local_model

model, tokenizer = load_local_model(
    'training-outputs/codellama-fifo-v1',
    'models/base-models/CodeLlama-7B-Instruct'
)

response = generate_with_local_model(
    model, tokenizer, instruction,
    max_new_tokens=800, temperature=0.3, stream=False
)

print('=' * 80)
print('GENERATED OUTPUT:')
print('=' * 80)
print(response)
"

Method 4: Interactive Mode

Test interactively with your own prompts:

cd /workspace/ftt/codellama-migration
source /venv/main/bin/activate

python3 scripts/inference/inference_codellama.py \
    --mode local \
    --model-path training-outputs/codellama-fifo-v1

Then enter your prompt at the interactive console.
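
Under the hood, interactive mode amounts to a read-generate-print loop. A minimal standalone sketch of that pattern (the `generate` callable stands in for the model call; this is an assumption, not the script's actual structure):

```python
def repl(generate, read=input, write=print):
    """Read prompts until an empty line, printing generated output for each."""
    while True:
        prompt = read('prompt> ').strip()
        if not prompt:  # empty line exits the loop
            break
        write(generate(prompt))
```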


Method 5: Test Specific Sample Number

To test sample N from the training data:

cd /workspace/ftt/codellama-migration
source /venv/main/bin/activate

# Test sample 2 (change N=2 to any sample number)
N=2
INSTRUCTION=$(sed -n "${N}p" datasets/processed/split/train.jsonl | python3 -c "import sys, json; print(json.load(sys.stdin)['instruction'])")

python3 scripts/inference/inference_codellama.py \
    --mode local \
    --model-path training-outputs/codellama-fifo-v1 \
    --prompt "$INSTRUCTION" \
    --max-new-tokens 800 \
    --temperature 0.3
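
The `sed`/`python3` pipeline above can also be done entirely in Python. A small standalone helper (not part of the repo) that returns sample N, 1-indexed to match `sed -n "${N}p"`:

```python
import json

def load_sample(path, n):
    """Return the n-th (1-indexed) JSON object from a JSONL file."""
    with open(path, encoding='utf-8') as f:
        for i, line in enumerate(f, start=1):
            if i == n:
                return json.loads(line)
    raise IndexError(f'{path} has fewer than {n} lines')
```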

Quick Reference

Model Path: training-outputs/codellama-fifo-v1
Base Model: models/base-models/CodeLlama-7B-Instruct
Training Data: datasets/processed/split/train.jsonl
Test Data: datasets/processed/split/test.jsonl

Recommended Parameters:

  • --max-new-tokens 800 (headroom for longer code)
  • --temperature 0.3 (low randomness, suitable for code generation)
  • --temperature 0.1 (near-deterministic; try this if the model returns prose instead of code)
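
Why lower temperature is more deterministic: logits are divided by the temperature before the softmax, so small values sharpen the distribution toward the top token. A toy illustration (standalone, not repo code):

```python
import math

def softmax_with_temperature(logits, temperature):
    """Temperature-scaled softmax over a list of logits."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.5]
for t in (1.0, 0.3, 0.1):
    probs = softmax_with_temperature(logits, t)
    print(t, [round(p, 3) for p in probs])
```

At temperature 0.1 nearly all probability mass lands on the highest-logit token, which is why it behaves almost greedily.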