Request: Add non-quantized Preprocessor + Encoder as .mlpackage
#2
by h-pal - opened
Summary
Request to add non-quantized CoreML models (Preprocessor + Encoder) as .mlpackage in the mlpackages/ directory.
Background
The mlpackages/ folder currently contains:
- MelEncoder.mlpackage — an int8-quantized fused encoder (commit: "int-8 model")
- Decoder.mlpackage and JointDecision.mlpackage — already available
The repository also includes:
- Preprocessor.mlmodelc
- Encoder.mlmodelc
These are the non-quantized compiled models used by FluidAudio’s Swift SDK and achieve ~2.1% WER, but they are not available as .mlpackage.
Problem
I’m using coremltools from Python, which can load .mlpackage bundles but cannot load compiled .mlmodelc directories.
This makes it impossible to use the non-quantized encoder pipeline outside of Swift.
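To illustrate the constraint, here is a minimal sketch. The helper function and the file paths are hypothetical, written only to mirror the repo layout described above; the underlying fact is that coremltools' `MLModel` opens .mlpackage (and legacy .mlmodel) bundles, while compiled .mlmodelc directories are only readable by the on-device Core ML runtime:

```python
from pathlib import Path

def loadable_by_coremltools(path: str) -> bool:
    """Illustrative check: coremltools can open .mlpackage and
    legacy .mlmodel bundles, but not compiled .mlmodelc
    directories (those are consumed by the Core ML runtime,
    e.g. from Swift)."""
    return Path(path).suffix in {".mlpackage", ".mlmodel"}

# The already-published decoder is reachable from Python...
print(loadable_by_coremltools("mlpackages/Decoder.mlpackage"))  # True
# ...but the non-quantized compiled encoder is not:
print(loadable_by_coremltools("Encoder.mlmodelc"))              # False
```

In the real workflow the load call would be `coremltools.models.MLModel("mlpackages/Decoder.mlpackage")`, which raises an error if pointed at a .mlmodelc path.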
Request
Please add the following to mlpackages/:
- Preprocessor.mlpackage (non-quantized)
- Encoder.mlpackage (non-quantized)
Notes
- Decoder.mlpackage and JointDecision.mlpackage are already available and working
- The only missing pieces are the non-quantized preprocessor + encoder pipeline
Why This Helps
- Enables Python usage via coremltools
- Provides access to the higher-accuracy non-quantized models
- Aligns model availability across Swift and Python workflows
P.S. - This is opened by me, a human, not an OpenClaw clanker
@h-pal we have the conversion scripts if you want the original model format as .mlpackage: https://github.com/FluidInference/mobius
alexwengg changed discussion status to closed