---
title: README
emoji: 📈
colorFrom: red
colorTo: yellow
sdk: static
pinned: false
---

https://www.lexsi.ai

Paris 🇫🇷 · Mumbai 🇮🇳 · London 🇬🇧

Lexsi Labs drives frontier research in aligned and safe AI. Our goal is to build AI systems that are transparent, reliable, and value-aligned, combining interpretability, alignment, and governance to enable trustworthy intelligence at scale.

## Research Focus

- **Aligned & Safe AI:** Frameworks for self-monitoring, interpretable, and alignment-aware systems.
- **Explainability & Alignment:** Faithful, architecture-agnostic interpretability and value-aligned optimization across tabular, vision, and language models.
- **Safe Behaviour Control:** Techniques for fine-tuning, pruning, and behavioural steering in large models.
- **Risk & Governance:** Continuous monitoring, drift detection, and fairness auditing for responsible deployment.
- **Tabular & LLM Research:** Foundational work on tabular intelligence, in-context learning, and interpretable large language models.