Commit History

fix: call model.forward() directly instead of @torch.no_grad encode()
14d8d6b

Michał Paliński committed on

custom training loop — bypass SentenceTransformerTrainer
f2411ad

Michał Paliński committed on

disable gradient checkpointing (NVEmbedModel doesn't support it)
baa5ebd

Michał Paliński committed on

eval + LoRA training + eval in one pipeline
d381447

Michał Paliński committed on

eval-only: use AutoModel directly, no peft/trainer deps
9c1ece1

Michał Paliński committed on

faiss-gpu -> faiss-cpu (search is trivial, avoids CUDA build issues)
a9d11dd

Michał Paliński committed on

add faiss-gpu for evaluation
82e03f5

Michał Paliński committed on

Add eval+train pipeline with benchmark data
c1ca06d

Michał Paliński committed on

pin transformers==4.42.4 for NV-Embed-v2 compat
43e82f0

Michał Paliński committed on

fix: total_mem -> total_memory
4e84cd1

Michał Paliński committed on

NV-Embed-v2 ESCO fine-tuning via LoRA
81a59c5

Michał Paliński committed on

initial commit
fbd5dd5
verified

mpalinski committed on