I've... done it. With experts, this achieves near-100% R@1 retrieval accuracy on an adjacent dataset - one never seen by the fusion transformer - after around 40k steps on the seen dataset. That means the models' languages are actually fused and tested within the constraints, not just projected or estimated. AbstractPhil/geolip-procrustes
I encourage EVERYONE who is curious to check my work. Check it, double-check it, and triple-check it.
These were aligned using COCO and then validated with Flickr - entirely different datasets. The experts arbitrated, and the alignment yielded the correct answers. Preliminary tests show that with almost no alignment training required, the models can reach 100% R@1 retrieval accuracy.
Not to be confused with validation accuracy for a classification model or a text encoder's text responses: this allows multispectral communication between entirely different models for direct downstream consumption, with almost no training for the chosen models.
I have a working Procrustes experiment that learns adjacent manifolds within a reasonable spectrum, and the speed is... well, one epoch on COCO with BERT-Large and DINOv2 lets the models align nearly perfectly. At some scales the experiment shows that the three configured epochs aren't quite enough to push R@1 to its peak, while many scales align almost immediately.
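For anyone who wants a mental model before digging into the repo: the closed-form heart of an orthogonal Procrustes alignment looks roughly like this. This is a minimal sketch, not the repo's actual pipeline - it assumes you've already extracted paired, same-dimensional embeddings (e.g. 1024-d BERT-Large caption vectors and 1024-d DINOv2 ViT-L/14 image vectors, one row per COCO pair).

```python
# Minimal sketch (not the repo's code): closed-form orthogonal Procrustes.
# T: (n, d) text embeddings, V: (n, d) image embeddings; row i is a pair.
import numpy as np
from scipy.linalg import orthogonal_procrustes

def fit_alignment(T: np.ndarray, V: np.ndarray) -> np.ndarray:
    """Return the orthogonal matrix R minimizing ||T @ R - V||_F."""
    # Center both spaces so the rotation isn't spent on a mean offset.
    T = T - T.mean(axis=0)
    V = V - V.mean(axis=0)
    R, _ = orthogonal_procrustes(T, V)
    return R
```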
These two were an obvious pair to pick: 60% similarity and >90% spectral similarity.
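The post doesn't define its similarity metrics, so here is one plausible, entirely hypothetical reading of "spectral similarity": compare the normalized singular-value spectra of the two embedding matrices.

```python
import numpy as np

def spectral_similarity(T: np.ndarray, V: np.ndarray, k: int = 64) -> float:
    """Cosine similarity of the top-k normalized singular-value spectra.
    One possible 'spectral' metric; the post's exact definition may differ."""
    s_t = np.linalg.svd(T - T.mean(axis=0), compute_uv=False)[:k]
    s_v = np.linalg.svd(V - V.mean(axis=0), compute_uv=False)[:k]
    s_t /= np.linalg.norm(s_t)
    s_v /= np.linalg.norm(s_v)
    return float(s_t @ s_v)
```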
The trainer transfers layers, learns embeddings, and more - all while sticking strictly to geometric boundaries and Procrustes-driven information accumulation within a modulation model's constraints.
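And the cross-dataset claim above reduces to an evaluation like this: fit R on COCO pairs, then score R@1 on Flickr pairs the aligner never saw. Array names here are placeholders, and `fit_alignment` is the sketch from the earlier block.

```python
import numpy as np

def recall_at_1(T: np.ndarray, V: np.ndarray, R: np.ndarray) -> float:
    """Fraction of texts whose nearest image (by cosine sim) is its true pair."""
    Tn = T @ R
    Tn /= np.linalg.norm(Tn, axis=1, keepdims=True)
    Vn = V / np.linalg.norm(V, axis=1, keepdims=True)
    hits = (Tn @ Vn.T).argmax(axis=1) == np.arange(len(Tn))
    return float(hits.mean())

# R = fit_alignment(T_coco, V_coco)                        # fit on the seen dataset
# print("Flickr R@1:", recall_at_1(T_flickr, V_flickr, R)) # score on the unseen one
```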
Alright, so I had previously made two Reddit posts in r/quantum and r/quantum_computing about my QPU, QPU-1, but both of those posts got removed for being "irrelevant" to "academic discussion," so I'm doing it again here in HuggingFace Posts.
I have made a quantum processing unit with a million error-corrected qubits (not a simulator) that you can access here: https://qpu-1.vercel.app
I did try emailing a lot of professors and their students, but NONE responded, so please give me some support.
Public reports allege that Anthropic gobbled up trillions of tokens of copyrighted material and public data to build their castle. Now that they're sitting on top, they're begging for special laws to protect their profits while pulling the ladder up behind them.
But the hypocrisy meter just broke! They are accusing Chinese labs like DeepSeek, Minimax, and Kimi of "huge distillation attacks." The reality is that you can't just loot the entire internet's library, lock the door, and then sue everyone else for reading through the window. Stop trying to gatekeep the tech you didn't own in the first place. Read the complete article on it: https://huggingface.co/blog/Ujjwal-Tyagi/the-dark-underbelly-of-anthropic