---
layout: default
title: CLaRa Documentation
---
# CLaRa Documentation
Welcome to the CLaRa documentation! This site provides comprehensive guides and references for using CLaRa.
## What is CLaRa?
**CLaRa** (Continuous Latent Reasoning) is a unified framework for retrieval-augmented generation that performs embedding-based compression and joint optimization in a shared continuous space.
[Paper](https://arxiv.org/abs/XXXX.XXXXX) · [License](../LICENSE) · [clara-base](https://huggingface.co/your-org/clara-base) · [clara-instruct](https://huggingface.co/your-org/clara-instruct) · [clara-e](https://huggingface.co/your-org/clara-e)
## Documentation
- **[Getting Started](./getting_started.md)** - Installation and quick start guide
- **[Training Guide](./training.md)** - Detailed instructions for all three training stages, including the expected data formats
- **[Inference Guide](./inference.md)** - How to use CLaRa models for inference
## Quick Links
- **GitHub Repository**: [github.com/apple/ml-CLaRa](https://github.com/apple/ml-CLaRa)
- **Main README**: [../README.md](../README.md)
- **Model Checkpoints**: [Hugging Face](https://huggingface.co/your-org/clara-base) (Coming Soon)
## Overview
CLaRa uses a three-stage training approach:
1. **Stage 1: Compression Pretraining** - Learn effective document compression
2. **Stage 2: Compression Instruction Tuning** - Adapt for downstream QA tasks
3. **Stage 3: End-to-End Fine-tuning (CLaRa)** - Joint retrieval and generation optimization
For more details, see the [Training Guide](./training.md).
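To give a feel for the "shared continuous space" idea, here is a toy sketch (not CLaRa's actual implementation, and independent of its real API): each document is represented by a compressed latent vector, and a query embedding selects the most relevant document by cosine similarity before generation would be conditioned on it.

```python
import math
import random

def cosine_sim(a, b):
    """Cosine similarity between two vectors given as lists of floats."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

random.seed(0)

# Four documents, each compressed to an 8-dimensional latent vector
# (random placeholders here; CLaRa learns these during training).
doc_latents = [[random.gauss(0, 1) for _ in range(8)] for _ in range(4)]

# A query whose embedding lies close to document 2 in the shared space.
query = [x + 0.01 * random.gauss(0, 1) for x in doc_latents[2]]

# Retrieval = nearest latent by cosine similarity.
best = max(range(len(doc_latents)), key=lambda i: cosine_sim(query, doc_latents[i]))
print(best)  # → 2
```

In CLaRa itself the compressor, retriever, and generator are optimized jointly (Stage 3), so the latent space is shaped end-to-end rather than fixed as in this sketch.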
## Citation
If you use CLaRa in your research, please cite:
```bibtex
@article{clara2024,
  title={CLaRa: Unified Retrieval-Augmented Generation with Compression},
  author={[Authors]},
  journal={[Journal]},
  year={2024},
  eprint={XXXX.XXXXX},
  archivePrefix={arXiv},
  primaryClass={cs.CL},
  url={https://arxiv.org/abs/XXXX.XXXXX}
}
```