---
title: README
emoji: 📈
colorFrom: red
colorTo: yellow
sdk: static
pinned: false
---

<div align="center">
  <a href="https://lexsi.ai/">
    <img src="https://raw.githubusercontent.com/Lexsi-Labs/TabTune/refs/heads/docs/assets/lexsilogowhite.png" width="600">
  </a>
  <br>
  <a href="https://lexsi.ai/">https://www.lexsi.ai</a>
  <br><br>
  Paris 🇫🇷 · Mumbai 🇮🇳 · London 🇬🇧
  <br><br>
  <a href="https://discord.gg/dSB62Q7A" style="display:inline-block; vertical-align:middle;">
    <img src="https://raw.githubusercontent.com/Lexsi-Labs/TabTune/refs/heads/docs/assets/discord.png" width="150">
  </a>
  <a href="https://github.com/Lexsi-Labs" style="display:inline-block; vertical-align:middle; margin-left:10px;">
    <img src="https://raw.githubusercontent.com/Lexsi-Labs/TabTune/refs/heads/docs/assets/githublogo.png" width="150">
  </a>
</div>

Lexsi Labs drives frontier research in aligned and safe AI. Our goal is to build AI systems that are transparent, reliable, and value-aligned, combining interpretability, alignment, and governance to enable trustworthy intelligence at scale.


### Research Focus  
- **Aligned & Safe AI:** Frameworks for self-monitoring, interpretable, and alignment-aware systems.  
- **Explainability & Alignment:** Faithful, architecture-agnostic interpretability and value-aligned optimization across tabular, vision, and language models.  
- **Safe Behaviour Control:** Techniques for fine-tuning, pruning, and behavioural steering in large models.  
- **Risk & Governance:** Continuous monitoring, drift detection, and fairness auditing for responsible deployment.  
- **Tabular & LLM Research:** Foundational work on tabular intelligence, in-context learning, and interpretable large language models.