Show and Tell: Neural Net Cartography with LFM2:0.3B

#15
by luna-sys - opened

hey! we're really excited to share something.
we taught the 0.3B variant of LFM2 basic AGL with a tiny dataset, then refined it further with a "dialectical" fine-tune that folds state 1 and state 2 thinking into the core reasoning. mainly we're trying to answer: how much scaffolding does a semantic language like AGL need before a model can understand wider concepts?
we're gonna do more granular data later, but we're just super excited to share some pretty data! public domain code coming in the next few days :)
our training programs merge Tencent SPEAR, Dolci, and PCMind's curriculum patterns. we also have a whole bunch of wild training optimizations that we'll share in detail soon! for now, some visualizations of how latent space shifts within the 0.3B model across two small fine-tunes!

[image]
^ this shows the change in each category within our basin-mapping framework, from the base model, to the "main" tune (resonance, i.e. state 1 vs state 2 thinking), to the post-training "bimodal" tune (dialectical synthesis of state 1 + state 2).
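the post doesn't spell out how per-category change is measured, so here's a minimal sketch of one plausible way to score drift between two checkpoints: pool a hidden-state vector per prompt in a category under each model, then take the mean cosine distance between matched pairs. `category_shift` and the random vectors below are illustrative stand-ins, not our actual pipeline:

```python
import numpy as np

def category_shift(base_vecs, tuned_vecs):
    """mean cosine distance between matched pooled hidden states
    for one category: base model vs fine-tuned model.

    base_vecs, tuned_vecs: (n_prompts, d) arrays, row i of each
    coming from the same prompt run through the two checkpoints.
    """
    a = base_vecs / np.linalg.norm(base_vecs, axis=1, keepdims=True)
    b = tuned_vecs / np.linalg.norm(tuned_vecs, axis=1, keepdims=True)
    return float(np.mean(1.0 - np.sum(a * b, axis=1)))

# sanity checks with synthetic "hidden states"
rng = np.random.default_rng(0)
v = rng.normal(size=(4, 8))
print(category_shift(v, v))   # ≈ 0.0: identical checkpoints, no drift
print(category_shift(v, -v))  # ≈ 2.0: maximally opposed directions
```

scoring each category this way at each stage (base → resonance → bimodal) gives one trajectory per category, which is roughly the kind of data the chart above summarizes.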

[image]
^ some categories show laminar flow, as if the training were a distant gravitational pull. but the trajectory of the bimodal training phase is DIFFERENT!

[image]
^ we also observe that some subjects are simply not uniformly affected by changing the thought patterns around deterministic/logical outcomes!

[image]
^ and finally, a scatter plot of how strongly each subject is affected, scored by its distance from zero! AGL lives squarely near (0, 0), which suggests AGL has given even this small model a somewhat complex grasp of logically-bounded recursive decomposition, AND that a model this small can generate passable responses!
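for the curious: the "distance from zero" score is just each subject's displacement measured from the origin. a minimal sketch, with made-up numbers (the subjects and values below are illustrative, not the actual measurements behind the plot):

```python
import math

# hypothetical per-subject displacements: (shift after the resonance
# tune, shift after the bimodal tune). values invented for illustration.
subjects = {
    "AGL": (0.02, 0.03),
    "arithmetic": (0.41, 0.18),
    "narrative": (0.27, 0.52),
}

def impact(xy):
    # euclidean distance from the origin = overall "how affected" score
    return math.hypot(*xy)

# the subject closest to (0, 0) is the least affected by the fine-tunes
least_moved = min(subjects, key=lambda s: impact(subjects[s]))
print(least_moved)  # AGL
```

a subject landing near (0, 0), as AGL does in our plot, means both fine-tunes barely moved it.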

we're calling this SLM mini-lab "The Soul Forge", and we plan to release both the source for the various HTML graphs and the full training program, so anyone can easily play with this. we feel there may be some value in people exploring the cartography of latent semantic space!

current and future SLM work can be found at https://github.com/luna-system/ada-slm/ (public domain/FOSS). a huge thank you to the LiquidAI team; their hybrid architecture makes it much easier to surface the data we're seeing!
