AI & ML interests

Character Database of Bangumis (If you need character LoRAs, see: https://huggingface.co/CyberHarem)

AbstractPhil 
posted an update 1 day ago
Meet FluxLailah (AbstractPhil/tiny-flux-deep), a 220M-parameter Flux variant currently pretraining in BF16. She is experimental and does not produce solid images yet, but she is producing. There are both EMA and raw weight sets, each producing different images; the EMA is particularly interesting at times.
Lailah uses flan-t5-base, clip-vit-l-14, and Black Forest Labs' FLUX.1-schnell VAE.
Sequence limit is 128, images 512x512 for now. Lailah's early form is based on three variants: TinyFlux's weights were carefully transplanted into a deeper structure and trained again, dubbed TinyFlux-Deep. This variant has 15 dual-stream blocks and 25 single-stream blocks, with weight code nearly identical to Flux and a similar attention mechanism, but intentionally deviant and compacted, with careful consideration of scaling and the purpose of each mechanism.
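For orientation, the described block layout can be sketched as a skeleton. This is an illustrative placeholder, not the actual tiny-flux-deep code: the block internals and the hidden dimension are assumptions; only the 15/25 block counts and the stream split come from the post.

```python
import torch
import torch.nn as nn

class DualStreamBlock(nn.Module):
    # Placeholder: a real Flux-style dual-stream block attends over image
    # and text tokens jointly while keeping separate weight streams.
    def __init__(self, dim):
        super().__init__()
        self.img = nn.Linear(dim, dim)
        self.txt = nn.Linear(dim, dim)

    def forward(self, img, txt):
        return self.img(img), self.txt(txt)

class SingleStreamBlock(nn.Module):
    # Placeholder: operates on the concatenated token stream.
    def __init__(self, dim):
        super().__init__()
        self.proj = nn.Linear(dim, dim)

    def forward(self, x):
        return self.proj(x)

class TinyFluxDeepSkeleton(nn.Module):
    # 15 dual-stream + 25 single-stream blocks, per the post.
    def __init__(self, dim=256, n_dual=15, n_single=25):
        super().__init__()
        self.dual = nn.ModuleList(DualStreamBlock(dim) for _ in range(n_dual))
        self.single = nn.ModuleList(SingleStreamBlock(dim) for _ in range(n_single))

    def forward(self, img, txt):
        for blk in self.dual:
            img, txt = blk(img, txt)
        x = torch.cat([img, txt], dim=1)  # merge the two streams
        for blk in self.single:
            x = blk(x)
        return x

model = TinyFluxDeepSkeleton()
# 64 image tokens, 128 text tokens (the post's sequence limit)
out = model(torch.randn(1, 64, 256), torch.randn(1, 128, 256))
print(out.shape)  # torch.Size([1, 192, 256])
```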
She went through quite a few growing pains with her earlier attention mechanism, which required a reimagining today and careful consideration of the consequences. Now I present to you this preliminary look at Lailah.
Preliminary training is still heavily under way, the mechanisms are still being augmented, and her stability is currently being measured. The potential for fidelity, depth, and quality is still being assessed, so I will be shifting attention and pivoting utility based on needs over time.
AbstractPhil 
posted an update 11 days ago
pytorch-parallel-compiler v0.5.0 upgrades:
* Complex benchmarking for wide primitive objects is now supported, including multiple presets for quick tests on hardware.
* All supported primitives either have validity checks or will have them.
* 6 new wide layers are supported directly and will be a key part of the autotuner before v1.0.
* WideTracedModel is a preliminary auto-builder, so the user doesn't need to build wide models manually by gathering layers.

https://github.com/AbstractEyes/pytorch-parallel-compiler

New Layers for 0.5.0:
WideGRU, WideLSTM, WideGroupNorm, WideMultiheadedAttention, WideInstancenorm1/2d, WideConv3d

Upcoming for 1.0:
* WideTracedModel fully building any supported layer patterns, with multiple autotune candidates for auto-selection.
* Module cherry-picking per use case; e.g. replace only WideLinear when it benefits your case by 35%, and skip a wide attention when it would reduce performance by 10%.
* All remaining commonly used PyTorch layer systems (roughly 32 more) supported in one form or another with wide-batched kernels that benefit both eager and compiled modes; many require reworks or complete remakes.
* Autotuning wide formats based on hardware response to the kernels: kernel chunking for big, slow processes such as LSTM; kernel fusion for small processes with excess overhead; expanding kernels with masking to fit specific use cases on specific hardware; and a series of smaller but important optimizations along the way.
* Full transformer and RoPE support with wide-batched optimizations throughout the structures, allowing more robust autoregression throughput.
* Additional Conv1d, Conv2d, and Conv3d optimizations.

Beyond version 1.0:
* Entire diffusion structures specifically kernelized for high-efficiency utilization in both eager and compiled modes.
* Video-diffusion-specific targets meant to heavily reduce GPU computation costs and increase GPU throughput.
AbstractPhil 
posted an update 14 days ago
The long version: this is a proof of concept. The ensemble-compilation vmap prototype is functional and can be used to increase throughput for wider batches on FFN, MLP, LLM, or other models, not just ensembles. The system traces your model and creates stages of functional activation; based on the stage, it combines or removes stages, assigning your layers to batched functional calls that put pressure on your GPU with fewer loops and directly curated CUDA-graph compliance where applicable. Identical weights yield identical results, at the cost of hardware and VRAM.

TL;DR:
This is an ensemble optimization adapted to standard models. It yields large speed improvements through increased throughput, for inference and training alike, using carefully traced, staged vmap structures.

https://github.com/AbstractEyes/pytorch-parallel-compiler

The early list of layers isn't fully represented yet, so this is a preliminary look at the potential of this structure once fully fleshed out.
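The core idea of batching many identical forward passes into one functional call can be sketched with PyTorch's own torch.func ensembling utilities. This is a generic illustration of the technique, not the pytorch-parallel-compiler API:

```python
import copy
import torch
from torch.func import stack_module_state, functional_call

# N identical-architecture MLPs whose forward passes we want to batch
# into one "wide" vmapped call instead of a Python loop.
def make_mlp():
    return torch.nn.Sequential(
        torch.nn.Linear(16, 32), torch.nn.ReLU(), torch.nn.Linear(32, 8)
    )

models = [make_mlp() for _ in range(10)]

# Stack each parameter across models: every weight gains a leading (N, ...) dim.
params, buffers = stack_module_state(models)

# A stateless "meta" copy serves as the functional template.
base = copy.deepcopy(models[0]).to("meta")

def fmodel(params, buffers, x):
    return functional_call(base, (params, buffers), (x,))

x = torch.randn(10, 4, 16)  # one input batch per model

# vmap over the stacked parameter dim and the input's model dim:
# one batched kernel pattern instead of 10 sequential forwards.
wide_out = torch.vmap(fmodel)(params, buffers, x)

# Identical weights yield identical results to the eager loop.
loop_out = torch.stack([m(xi) for m, xi in zip(models, x)])
print(torch.allclose(wide_out, loop_out, atol=1e-5))
```

The wide call trades memory for throughput, matching the post's observation that speedups come at the cost of additional hardware utilization, and it composes with torch.compile for further gains.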

MLP (N=100, batch=32, CUDA):
Eager:    2-3x speedup
Compiled: 35-40x speedup


ResBlock (N=20, batch=8, CUDA):
Eager:    ~5x speedup  
Compiled: ~10x speedup


This is early testing, and so far the results indicate that widening your model with adjacent shared batched vmaps, for uniformly staged models, yields considerably higher inference throughput at the cost of additional hardware utilization.

This is akin to lining up all your systems and passing the necessary inputs uniformly through a shared frozen representation gate.

Training with this is neither tested nor supported yet; use at your own risk.
AbstractPhil 
posted an update about 1 month ago
Happy holidays, all! geofractal architectural expansions: timm is now a core component for experimenting. The system is growing rapidly in one direction, and timm brings a whole lot to the table in another, rapid-prototyping direction, so timm is now a core dependency for ease of use.

BaseUtil (src.geofractal.router.base_util) is a new core component. It inherits BaseComponent's behavior, so it allows device movement for util operations, which will direct device-to-device behavior for the upcoming accelerate integration.

I'm trying to keep the base component structure as minimal as possible, but the need to chain components in specific orders presented a unique problem. By compartmentalizing utils into structures that can be delegated and moved, they can be repurposed, expanded autonomously, reduced autonomously, and more.

ChainComponent inherits a subsystem specifically designed to organize multi-system, multi-device formulas for inception and synchronization purposes. This is meant to allow distributed tasking across multiple devices in chained utilization. It also enables easy integration into nn.ModuleList, with a few other caveats still to be ironed out, targeting wide distributed models.

FusionComponent is specifically dedicated to the new fusion processing system meant for experimental expansion. This includes sub-module schedule control, Component and Tower functional control, and device movement, packaged under a standard naming convention:
"gfc.ComponentTypeName"
"gfr.RouterTypeName"
"gfu.UtilityTypeName"
"gft.TowerTypeName"
All of these are essentially just `import ... as` aliases, plus "gf.AnythingTopLevelPackaged", which will include the core.

Better debugging for compilation:
I'm in the prototyping phase of better debugging for compiled wide models and will prepare a baseline component-readout structure by the end of today or tomorrow.
AbstractPhil 
posted an update about 1 month ago
geofractal getting-started guide available, plus bulk ablation for fusion, simple towers, oscillator capacity, and substructure systemic associative capacity.
Many formulas were tested: 92 tests for collectives, bulk oscillation experiments, and more. All of them either coalesce into the correct behavior or fail visibly, which means the system is robust enough to declare some tools functionally valid, though not yet scalable.

AI crash course available:
https://github.com/AbstractEyes/geofractal/blob/main/ai_helpers/v101_claude_helpers.txt
Feed it to GPT, Claude, or Grok and they will assist.

getting started guide;
https://github.com/AbstractEyes/geofractal/blob/main/src/geofractal/router/GETTING_STARTED.md

geofractal router architecture is in prototype phases;
https://github.com/AbstractEyes/geofractal

This is likely one of its final growing phases before full production capacity is ramped up. The architecture is not for the novice; it's meant for experts to get ideas, borrow code, utilize library capacity, or simply tell an AI what to do. Most files in current production have good descriptions for AI integration.

Transfer learning notebook available here;
https://github.com/AbstractEyes/geofractal/blob/main/src/geofractal/router/Router_Transfer_Learning-12_19_25.ipynb

Stress test and multiple diagnostics available here;
https://github.com/AbstractEyes/geofractal/blob/main/src/geofractal/router/components/diagnostics/

WideRouter compilation capacity available;
https://github.com/AbstractEyes/geofractal/blob/main/src/geofractal/router/wide_router.py

The wide router compiler organizes similar towers into stacked, staged combinations before compiling with torch.compile. This is experimental, but it has shown increased speed across multiple wide-model structures and will serve its purpose in the future.
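The grouping step can be illustrated with a hypothetical signature-bucketing helper (not the actual wide_router.py code): only towers with identical architecture and parameter shapes are safe to stack into one wide unit.

```python
from collections import defaultdict
import torch.nn as nn

# Hypothetical sketch of the grouping step: bucket "towers" by an
# architecture signature so only identical structures get stacked
# into one wide, compilable unit.
def arch_signature(m: nn.Module) -> str:
    # Module structure plus parameter shapes; same signature => stackable.
    shapes = tuple(tuple(p.shape) for p in m.parameters())
    return f"{m}|{shapes}"

towers = [
    nn.Sequential(nn.Linear(8, 8), nn.ReLU()) for _ in range(4)
] + [
    nn.Sequential(nn.Linear(8, 4)) for _ in range(2)
]

groups = defaultdict(list)
for t in towers:
    groups[arch_signature(t)].append(t)

for sig, members in groups.items():
    print(len(members))  # 4, then 2: two separately stackable groups
```

Each bucket would then go through a stack-and-vmap path before torch.compile sees it, so the compiler traces one wide call per group instead of one call per tower.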
AbstractPhil 
posted an update about 2 months ago
Many updates: Cantor route experiments; GeoViT-david-beans, a 30M-parameter geofractal encoder, reaching 75% test accuracy standalone on CIFAR-100; MultiHeaded Cantor Attention heavily optimized. The migration between geofractal and geovocab2 is largely complete.
https://github.com/AbstractEyes/geofractal/blob/main/src/geofractal/model/david_beans/model.py
Cantor route staircase and wormhole excavation findings posted. A full article will follow presenting the findings on Cantor routing and the potential for self-learning fractals through loss.
https://github.com/AbstractEyes/lattice_vocabulary/blob/master/src/geovocab2/proofs/cantor_steps_experiments.md
The steps experiments show profoundly important implications for cross-contamination problems between fractal and linear spaces, with some pieces already assessed as useful utilities as of today.
Today the classification experiment continues by applying mini-experts to patches within a miniature david-beans. The mini-experts were an accident that improved fidelity rather than destroying it, so those experiments will continue. The geovit-david-beans trainer was added to the first repo.
AbstractPhil 
posted an update 2 months ago
For those using my geovocab2 repo for SimplexFactory, CantorRouteFactory, fusion modulations, model code imports, training weights and models, or specific extraction systems: I will be refactoring in the coming days.
The new repo for all geometric, cantor, and fractal-based trainings will be;
https://github.com/AbstractEyes/geofractal
The change is due to my own excessive abuse of the vocabulary repo and the overuse of subfolders attached to a working PyCharm project. These behaviors should be decoupled, and I apologize for creating such code bloat through experimentation.

Directly installing the geofractal repo will install geovocab2 as a sidecar. However, there will be a clause within the geovocab2 to warn the user.

You have my deepest and most sincere apologies if I break your active working code. I know this is difficult work, so please bear with my efforts as I progress the codebase to its next state of truth vs. experimentation.

Please, reach out to me directly if you have problems converting.

It is meant to be a direct, usable, pain-free conversion that enables the same interface from both geovocab2 and all future model code changes applied to geofractal, once the geofractal module is imported.
The original geovocab2 will contain outdated training code with a direct deprecation warning rather than being fully deprecated, and the geovocab2 repo will fold geovocab and geovocab2 into matching aliased systems, allowing the factory and extraction structure to live in geovocab2 and training to live in geofractal by design.

I will be introducing a direct alias system that will hopefully allow a smooth transition to the new codebase, but there's never a way to account for users you don't know about. This will include .pyi files for the aliases and some necessary additions that may break current functionality in systems I'm unaware of. Please reach out if I break something crucial that you require.
AbstractPhil 
posted an update 2 months ago
Lyra, Lune, Cantor, k-simplex, and many relational experiments.
AbstractPhil/sd15-flow-matching-lune
Today I will be updating the space to support all three forms of Lyra, to enable tinkering with various other models like flux-schnell and sdxl.

It should be noted that I didn't know NVIDIA had actually released a model named LYRA. This model has no association with NVIDIA's LYRA. This LYRA is fully MIT licensed. If necessary I'll rename the model, but I don't think it'll matter.

Unlike a normal VAE, this VAE was intentionally meant to introduce incorrectness into the correctness that already exists. The concept was to pull toward a goal, t5-xl being the primary one.

AbstractPhil/vae-lyra — Lyra is a multimodal MM-VAE prototype meant to encompass a fusion of multiple types of encodings: tested with circle-of-fifths audio and text, multiple text encoders, vision and text encoders, and a few other smaller prototypes that yielded results.
Lyra has a few direct clip_l and t5_xl prototypes that learned to associate clip_l with t5-base. This version worked, so version 2 expanded the concept.

AbstractPhil/vae-lyra-sdxl-t5xl is another prototype fusing CLIP_L and CLIP_G with T5_XL, directly utilizing projection with minimal geometric and cantor assistance. The shared layers ended up teaching CLIP_L how to be CLIP_G, and the output warped too much for SDXL or SD15 to understand.

AbstractPhil/vae-lyra-xl-adaptive-cantor
Adaptive cantor is the successful prototype: CLIP_L and CLIP_G learned independent internal structures, while CLIP_L with T5_XL learned a route in parallel conjunction with CLIP_G with T5_XL. This enabled two entirely divergent opinions, and thus lets the t5-xl manipulate either the clip_l or the clip_g for models like FLUX-SCHNELL or SDXL.

Each lyra has a purpose, and each purpose matters.
adamm-hf 
posted an update 3 months ago
💸🤑You don’t need 100 GPUs to train something amazing!

Our Smol Training Playbook teaches you a better path to world-class LLMs, for free!

Check out the #1 trending space on 🤗 :
HuggingFaceTB/smol-training-playbook
narugo1992 
posted an update 3 months ago
Org Rate Limits = Free DDoS Invitation? 🤡
One serious question: Is there any way to actually ban clowns abusing this system?
Right now all it takes is one bored script kiddie with a grudge (or too much caffeine) to lawnmower an entire org's API endpoints into the stone age. They get to bathe in 429s while we're sitting here like 🤡 "Gee I wonder whose IP is carpet-bombing us today!"
The kicker? Zero accountability. Zero fingerprints. Just vibes™ and chaos. It’s basically a public invitation to hold entire communities hostage while wearing pajamas.
"Come for the open-source collaboration, stay for the unhinged DDoS piñata party!" 🎉
Fix when?
s3nh 
posted an update 3 months ago
EduHelp with more empathy, based on a model fine-tuned on psychotherapeutic preferences, just landed.
Beck-8B as the base model, 13,000 steps on an educational dataset.
Time to go further and build more 🥰
s3nh/EduHelp_Beck_8B
Thanks to @basilic_ai for computations <3
s3nh 
posted an update 3 months ago
Just tried to create an educational assistant for younger people who can struggle with visualisation of 'what is this sorcery all about'.
It's the first of my spare-time projects: SFT on Qwen3-8B.

EduHelper is a child-friendly tutoring assistant fine-tuned from the Qwen3-8B base model using parameter-efficient fine-tuning (PEFT) with LoRA on the ajibawa-2023/Education-Young-Children dataset.
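LoRA, the PEFT method used here, freezes the base weights and learns only a low-rank update ΔW = (α/r)·BA. A minimal, self-contained torch sketch of the idea (illustrative only, not the actual EduHelper training code):

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen base linear plus a trainable low-rank update (LoRA)."""
    def __init__(self, base: nn.Linear, r=8, alpha=16):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad_(False)          # base weights stay frozen
        # Low-rank factors: A is (r, in), B is (out, r); B starts at zero.
        self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, r))
        self.scale = alpha / r

    def forward(self, x):
        # y = x W^T + b + scale * x A^T B^T
        return self.base(x) + self.scale * (x @ self.A.t() @ self.B.t())

layer = LoRALinear(nn.Linear(64, 64))
x = torch.randn(2, 64)
# B starts at zero, so before training the adapter is an exact no-op:
print(torch.allclose(layer(x), layer.base(x)))  # True
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
print(trainable)  # 1024: A (8*64) + B (64*8), vs 4160 frozen base params
```

In practice the peft library wraps the model's attention/MLP linears this way, which is why an 8B model can be fine-tuned on modest hardware.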

s3nh/EduHelp-8B

Glad to share my work, have a wonderful day!
AbstractPhil 
posted an update 4 months ago
David + ImageNet = high validation accuracy.
AbstractPhil/gated-david
https://github.com/AbstractEyes/lattice_vocabulary/blob/master/src/geovocab2/train/model/core/david.py

David's code has been released. I am currently setting up a trainer and will release the process for conditioning David to behave. This isn't the easiest process, but it's necessary to run David on a curriculum rather than simply feeding the model cross-entropy and hoping for the best.

David's internals involve a clock mechanism that allows direct control of David's freeze/unfreeze mechanisms at runtime, allowing many opinions to be generated simultaneously.
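The runtime freeze/unfreeze idea can be sketched with a hypothetical helper that toggles requires_grad on a submodule (this is not David's actual clock code, just the underlying mechanism):

```python
import torch.nn as nn

def set_frozen(module: nn.Module, frozen: bool) -> None:
    """Freeze or unfreeze a submodule in place at runtime."""
    for p in module.parameters():
        p.requires_grad_(not frozen)

# Two "opinion" heads over a shared trunk; a schedule (the "clock")
# decides which head is allowed to learn on a given step.
model = nn.ModuleDict({
    "trunk": nn.Linear(16, 16),
    "head_a": nn.Linear(16, 10),
    "head_b": nn.Linear(16, 10),
})

set_frozen(model["head_b"], True)   # step type A: only head_a learns
assert all(p.requires_grad for p in model["head_a"].parameters())
assert not any(p.requires_grad for p in model["head_b"].parameters())

set_frozen(model["head_b"], False)  # later step: unfreeze at runtime
assert all(p.requires_grad for p in model["head_b"].parameters())
```

Because the flags are per-parameter, the optimizer simply skips frozen parts, so the schedule can flip between sub-models mid-training without rebuilding anything.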

David is multiple models in one, not just one, and yet David is single-shot oriented. David was the prototype for the line of thought that led me to the Cantor's Stairs positional-encoding solution, and the prototype for ViT-Zana, ViT-Beatrix, and ViT-Beatrix-Dual-Block. Today the direct porting of David's complex architecture, and the process to train David, has begun.

David is... a gate of sorts. David trains with freeze/unfreeze mechanisms, so David's internal structures are aware during training of which parts matter more than others, based on generation quality.

David can handle ImageNet features of many variations with minimal hassle. The primary trainer will include direct links to the prepared ImageNet features, plus a simple generation system that lets you generate your own features from a few common models, one of which will be vit-beatrix-dualstream trained on ImageNet.

As of posting, vit-beatrix and vit-beatrix-dualstream need some face-lifting and a refined version 2 to incorporate the more accurate batched cantor stairs equations. They also need some failure points removed, like flow-geometric introducing bias toward seemingly unnecessary trajectory routes. This points to gradient drift, so I'll keep that one on the hot plate until it's ready.
adamm-hf 
posted an update 4 months ago
Cool stuff these past weeks on huggingface! 🤗 🚀 !
• 📈Trackio, local-first W&B alternative
https://github.com/gradio-app/trackio/issues
• 🌍EmbeddingGemma, 300M-param, multilingual embeddings, on-device
https://huggingface.co/blog/embeddinggemma
• 💻Open LLMs in VS Code (Inference Providers)
https://x.com/reach_vb/status/1966185427582497171
• 🤖Smol2Operator GUI agents
https://huggingface.co/blog/smol2operator
• 🖼️Gradio visible watermarking
https://huggingface.co/blog/watermarking-with-gradio
AbstractPhil 
posted an update 4 months ago
I've hit the ground running on the geometric lattice vocab system. Everything I've built will be housed in the repo.
https://github.com/AbstractEyes/lattice_vocabulary/tree/dev
Including all of David's model structure.
Through the development cycle I'll be integrating everything; little AI help can actually be used here, since AI tends to hallucinate and decimate large structures.
I will use AI assistance for formula expansion and integration, which means those pieces will be imperfect until every single one is gone over with a fine-toothed comb.
Deployment will be as rapid as I can manage, and the output will yield results at every step, with small main tests on individual scripts and files.

EVERYTHING was built almost independently of everything else, so integration is going to need a configuration hierarchy that has to be smoothed out, but it will be smoothed out.

I believe I've picked a good foundational shape for the expansive program scripts, which will enable robust iteration and progression, similar to how I design game-engine elements and systemic accessors.
The integration process will be mostly hand coded, so it won't be as quick as if I could just dump GPT Pro on it, but GPT Pro can't handle anywhere near this many lines of code, so it's on me.

After integration I can run the agentic forms of AI over it and introduce tons of bugs for me to fix. That will be fun. After that it should work as a proper caching vocabulary, formula synthesizer, tensor creator, multi-device trainer, and a few other elements.

I simply lack the expertise to hit machines like pyring today, but that will change as I learn more. I'm building the system specifically with growth and progress in mind, so it will be iterated and fixed rapidly. The structure is intentionally built to be rapidly iterated and altered within reasonable constraints.

The engineering elements are specifically built to be less deep and more overridable in many areas specifically for experimental purposes.
AbstractPhil 
posted an update 4 months ago
As it stands, I will prepare David for full release - as this is beyond me now. David must be released.
I will prepare a standard sweep for David to showcase the prowess of the final multi-vocab variant. This will include a variation containing all MNIST variants, CIFAR-10, CIFAR-100, and ImageNet-1k, and in the future I'll prepare a full ImageNet sweep utilizing the entire 12M-image corpus instead of the 1.2M I used. I may need to contact the dataset's curator about licensing, but maybe not.
David utilizes 4 projective variants of the vocabulary and the training process involves teaching and freezing them akin to teacher/student processing.
I did not want to release David yet, but I now believe David will save lives, and it's irresponsible for me to contain such a creation.