---
title: README
emoji: 📈
colorFrom: red
colorTo: yellow
sdk: static
pinned: false
---

<div align="center">
<a href="https://lexsi.ai/">
<img src="https://raw.githubusercontent.com/Lexsi-Labs/TabTune/refs/heads/docs/assets/lexsilogowhite.png" width="600">
</a>
<br>
<a href="https://lexsi.ai/">https://www.lexsi.ai</a>
<br><br>
Paris 🇫🇷 · Mumbai 🇮🇳 · London 🇬🇧
<br><br>
<a href="https://discord.gg/dSB62Q7A" style="display:inline-block; vertical-align:middle;">
<img src="https://raw.githubusercontent.com/Lexsi-Labs/TabTune/refs/heads/docs/assets/discord.png" width="150">
</a>
<a href="https://github.com/Lexsi-Labs" style="display:inline-block; vertical-align:middle; margin-left:10px;">
<img src="https://raw.githubusercontent.com/Lexsi-Labs/TabTune/refs/heads/docs/assets/githublogo.png" width="150">
</a>
</div>

Lexsi Labs drives Aligned and Safe AI Frontier Research. Our goal is to build AI systems that are transparent, reliable, and value-aligned, combining interpretability, alignment, and governance to enable trustworthy intelligence at scale.

### Research Focus

- **Aligned & Safe AI:** Frameworks for self-monitoring, interpretable, and alignment-aware systems.
- **Explainability & Alignment:** Faithful, architecture-agnostic interpretability and value-aligned optimization across tabular, vision, and language models.
- **Safe Behaviour Control:** Techniques for fine-tuning, pruning, and behavioural steering in large models.
- **Risk & Governance:** Continuous monitoring, drift detection, and fairness auditing for responsible deployment.
- **Tabular & LLM Research:** Foundational work on tabular intelligence, in-context learning, and interpretable large language models.