---
title: README
emoji: π
colorFrom: green
colorTo: pink
sdk: static
pinned: false
short_description: TNG on huggingface
---

# TNG Technology Consulting GmbH

We solve hard IT problems.

## Latest Research
Check out our latest research:
- **DeepSeek-TNG-R1T2-Chimera**
  - [Announcement on X](https://x.com/tngtech/status/1940531045432283412)
  - [Model Card](https://huggingface.co/tngtech/DeepSeek-TNG-R1T2-Chimera)
- **DeepSeek-R1T-Chimera**
  - [arXiv: Assembly of Experts](https://arxiv.org/abs/2506.14794)
  - [Announcement on X](https://x.com/tngtech/status/1916284566127444468)
  - [Model Card](https://huggingface.co/tngtech/DeepSeek-R1T-Chimera)
- **Mixture of Tunable Experts**
  - [arXiv: Mixture of Tunable Experts](https://arxiv.org/abs/2502.11096)
  - [Blog: Mixture of Tunable Experts](https://huggingface.co/blog/rbrt/mixture-of-tunable-experts)

## Blog
Read our latest blog posts:

- [Prefill and Decode for Concurrent Requests - Optimizing LLM Performance](https://huggingface.co/blog/tngtech/llm-performance-prefill-decode-concurrent-requests)
- [Finetuning olmOCR to be a faithful OCR-Engine](https://huggingface.co/blog/tngtech/finetuning-olmocr-to-be-a-faithful-ocr-engine)
- [Efficient Request Queueing - Optimizing LLM Performance](https://huggingface.co/blog/tngtech/llm-performance-request-queueing)

## Follow us

[TNG on GitHub](https://github.com/TNG), [on X](https://x.com/tngtech), [on LinkedIn](https://www.linkedin.com/company/96020)