Commit History

All commits by Nekochu, newest first.

9d2d424  add full README with API docs, MCP, CLI, architecture
2bd2612  allow custom LoRA values in dropdown (API clients)
d2ae079  log ace-server restart, show output for adapter debugging
b23b6b8  fix: forward adapter to synth request, default LM to 1.7B
e62602f  fix: adapter saved to clean dir, LM dropdown no 'Default', on-demand download
5fe3c53  copy train_engine.py into Docker image
a07b39d  Side-Step training engine, tested locally on CPU
5e95353  switch back to 1.7B LM (fastest at 269s, 0.6B was 936s)
88b9223  swap LM to 0.6B Q8_0 for speed test
b14d3e8  swap LM 4B->1.7B Q8_0 for faster CPU inference
5c2e4e7  add LoRA adapter dropdown to inference UI
882ed5c  default mp3, remove format selector, increase LM timeout to 900s
153f929  remove accelerate (causes meta tensors), clean up patches
9ed24c7  fix meta tensor crash: force low_cpu_mem_usage=False and float32 for CPU
13f9406  disable flash_sdp on CPU, force attn_implementation=sdpa for training
6cee8bd  add granular logging + full stderr to diagnose preprocessing hang
560b5e0  fix: use float32 not bfloat16 for CPU training (bf16 deadlocks on CPU)
e69e9ec  redirect training subprocess stderr to log file for debugging
88ca206  add einops + vector_quantize_pytorch for model loading
a4a86a8  run training as detached subprocess to survive Gradio session timeout
a0e1f4c  use bfloat16 precision for training to halve RAM usage
c2cb0b9  stop ace-server during training to free RAM, restart after, add log visibility
c37e80e  pre-download training checkpoints at build time
f59d542  pin diffusers==0.30.3 for torch 2.4.x compat
148cd6b  pin torchaudio==2.4.0 (before torchcodec default backend)
9822bee  force uninstall torchcodec (torchaudio dependency, broken on Ubuntu 22.04)
afccbc0  monkey-patch torchaudio.load to use soundfile backend
63d7cb8  remove torchcodec (needs FFmpeg 5.x), keep soundfile backend
f1f383b  add torchcodec + ffmpeg for torchaudio audio loading
dd7a793  add soundfile + libsndfile1 for torchaudio backend
eb1b926  add all training deps: diffusers lightning numpy tensorboard
9dc4031  add torchaudio dep for training preprocess
d89a224  add loguru dep for training preprocess
eb0e7f8  install gradio[mcp] for mcp_server support
0f0b7b0  add git to runtime image for ace-step source clone
625132a  add LoRA training, fix css kwarg
72e4b69  full app: CLI + Gradio + training placeholder + MCP
2dc2899  rewrite app.py for ace-server HTTP API, no torch
567a93e  copy all shared libs including versioned symlinks
fd22b19  add pkg-config for BLAS detection
fccaf48  switch to acestep.cpp GGUF: XL turbo Q4_K_M + 4B LM Q5_K_M
fe00878  clickable title
1e6f1c3  slim title
4b376ab  compact grid UI, 4B default, show training model
562fa54  force float32 on training model for CPU
dece91f  fix: LoRA rank param is 'r' not 'rank'
a6b0e73  fix training: use LoRAConfigV2 instead of Union type alias
2e461fa  add torchcodec for training audio loading
dae27d4  step 7: real app.py
2da29c2  step 6: +numba (all deps)