Exploring the intersection of local AI inference, fine-tuning, and applied research. Currently running LLMs locally on an AMD RX 9070 XT build, focusing on 32B-parameter models at Q4 quantization for research workflows. Interested in LoRA fine-tuning of small models (7B/8B) on domain-specific academic content.