---
tags:
  - text-generation
  - masked-language-modeling
  - browser-compatible
  - micro-model
  - social-media
  - linkspreed
  - Web4
datasets:
  - custom
metrics:
  - custom
library_name: transformers
base_model: google/gemma-3-270m
---

# LS-W4-270M-Micro-T1

## Model Description

LS-W4-270M-Micro-T1 is the first model in the Web4 Localized Services (W4-LS) series, designed for highly efficient, on-device text generation. As a Micro Language Model (Micro-LM), it has a compact architecture totaling 540 million parameters (2 × 270 million).

This model is a Masked Language Model (MLM) specialized in generating social media captions. It prioritizes inference speed and minimal resource usage, making it ideal for client-side execution.

## Key Features 🚀

- **Base Architecture:** Built on top of Gemma 3 270M.
- **Micro-LM Architecture:** Optimized for low-latency performance on consumer devices.
- **Social Media Specialization:** Trained to generate engaging and contextually relevant social media captions.
- **Serverless Operation:** A core innovation of this model is its ability to run entirely locally, in a web browser or on a client device, without requiring a server. This ensures full privacy and offline functionality.

## How to Use: Serverless Deployment

The model is designed exclusively for serverless environments and cannot be executed through hosted Hugging Face inference endpoints.

### Client-Side/On-Device Deployment Files

To run this model locally in a browser or on a device, you need the client-side deployment files. The required `.task` and `.tflite` files can be downloaded at:

https://ai.web4.one
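
The model card does not prescribe a specific runtime, but `.task` bundles are the format consumed by MediaPipe's LLM Inference API, which runs fully in the browser via WebAssembly. Below is a minimal TypeScript sketch under that assumption; the bundle path, file name, and generation parameters are illustrative placeholders, not values shipped with this model.

```typescript
// Minimal browser-side loading sketch (assumes: npm install @mediapipe/tasks-genai,
// and that the downloaded .task file is a MediaPipe LLM Inference bundle).
import { FilesetResolver, LlmInference } from '@mediapipe/tasks-genai';

async function generateCaption(prompt: string): Promise<string> {
  // Fetch the WASM runtime that backs MediaPipe's GenAI tasks.
  const genaiFileset = await FilesetResolver.forGenAiTasks(
    'https://cdn.jsdelivr.net/npm/@mediapipe/tasks-genai/wasm'
  );

  // Load the model bundle. The path below is hypothetical; point it at the
  // .task file downloaded from https://ai.web4.one.
  const llm = await LlmInference.createFromOptions(genaiFileset, {
    baseOptions: {
      modelAssetPath: '/models/ls-w4-270m-micro-t1.task', // placeholder path
    },
    maxTokens: 256,   // example decoding settings, not tuned for this model
    temperature: 0.8,
    topK: 40,
  });

  // Inference runs entirely in the browser; no server round-trip.
  return llm.generateResponse(prompt);
}

generateCaption('Write an upbeat caption for a photo of a beach sunset.')
  .then((caption) => console.log(caption));
```

The accompanying `.tflite` file targets TensorFlow Lite runtimes for native on-device integration; the exact wiring there depends on the host app's tooling.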

## Model Details

- **Model Name:** LS-W4-270M-Micro-T1
- **Model Type:** Masked Language Model (MLM)
- **Parameters:** 540 million (2 × 270 million)
- **Base Model:** Gemma 3 270M
- **Primary Task:** Social media caption generation (serverless/local inference)
- **License:** Same license as the base model, Gemma 3 270M

## Training Details 🛠️

The model was fine-tuned specifically for the task of social media caption generation.
- **Training Data Size:** Over 50,000 examples were used for fine-tuning.
- **Training Hardware:** Fine-tuning was performed on an NVIDIA T4 GPU with 12 GB of RAM.