Update README.md

README.md CHANGED
@@ -11,14 +11,23 @@ pinned: false
 
 **An open AI community for all**
 
-
-
+Hi there 👋 -- we are building a collection of shareable building blocks (from silicon to open models) empowering humanity to own its own AI. We bring together open-source hackers in the fields of computer architecture, ASIC design, advanced systems, and neural network compilers who are bold enough to think that you don't have to work for a mega-corporation to build your own computers for AI. Our mission is to bring the spirit of the [Homebrew Computer Club](https://en.wikipedia.org/wiki/Homebrew_Computer_Club) back into vogue. The easiest place to find us is on our [Discord](https://discord.com/invite/WNKvkefkUs) or at [FOSDEM](https://fosdem.org/2025/schedule/track/ai/).
 ## 🎯 Our Mission
 
 AI Foundry is the open-source community platform stewarded by [Ainekko](https://www.ainekko.io), bringing open-source principles all the way down to silicon. We're building the foundation for flexible, software-defined AI systems—from lightweight edge devices to high-performance inference platforms.
 
 Our long-term vision is to help the industry evolve where model training and deployment are open, transparent, and accessible to everyone. We support practitioners who want to benefit from AI in their work and life, particularly in model fine-tuning and deployment.
 
+We believe AGI can only be achieved through [cat super intelligence](https://en.wikipedia.org/wiki/Accelerando#Characters) and we grew up playing the [worst Atari game ever created](https://en.wikipedia.org/wiki/E.T._the_Extra-Terrestrial_(video_game)).
+
+We don't attempt to reinvent the wheel and try to work directly with as many upstream communities as possible. We are eternally grateful for [ggml/llama.cpp](https://ggml.ai/), [tinygrad](https://tinygrad.org/), [gcc](https://github.com/riscv-collab/riscv-gnu-toolchain), [llvm](https://llvm.org/), and [RISC-V](https://riscv.org/) (just to name a few), and we're not shy about using those building blocks in our end-to-end designs.
+
+When we can't find appropriate building blocks in the open, we're not shy about building them ourselves from scratch. That's why we're going all the way down to the chip-design level and creating the world's first fully open-source manycore hardware architecture, scalable from a few dozen processing units to 4096. We also have opinions about the software side of AI inference servers and are trying to change the state of the art there as well.
+
+Oh, and mark our words: [Transputers](https://tu-dresden.de/ing/informatik/ti/vlsi/ressourcen/dateien/dateien_studium/dateien_lehstuhlseminar/vortraege_lehrstuhlseminar/folder-2013-04-11-7748162390/20130612_Transputer-Architecture_Handout_UM.pdf?lang=en) are due for a huge comeback in the AI-centric world.
+
+
+
 ## 🌟 Core Values
 
 - **Openness** - All our projects are Apache 2.0 licensed
@@ -39,6 +48,22 @@ Just as Linux opened up operating systems and Kubernetes made cloud infrastructure
 
 ## 📦 Featured Projects
 
+### ET
+
+ET is an open-source manycore platform for parallel
+computing acceleration. It is built on the legacy of the Esperanto
+Technologies ET-SOC1 chip.
+
+To a first approximation, the ET Platform is a RISC-V manycore architecture.
+
+The ETSOC1 contains 1088 compute cores (called _minions_). Each minion has
+two `rv64imfc` RISC-V HARTs with vendor-specific vector and tensor extensions.
+
+There's an extra RISC-V core on board, called the Service Processor, that is used
+for chip bring-up and runtime management.
+
+For a full understanding of the ETSOC1 architecture, check the [ETSOC1 Programmer's Reference Manual](https://github.com/aifoundry-org/et-man/blob/main/ET%20Programmer's%20Reference%20Manual.pdf).
+
 ### Llamagator
 Test prompts against multiple LLMs or versions, observe relative performance, and assess reliability through multiple runs. Supports both local and API-based LLM access.
 
@@ -59,26 +84,18 @@ Techniques and implementations for efficient model compression and deployment.
 
 AI Foundry is built for:
 
+- Chip designers looking into manycore architectures
 - AI engineers and embedded system builders
 - Open-source developers and contributors
 - Researchers exploring novel architectures
 - Startups building edge AI products
 - Anyone interested in accessible, composable AI infrastructure
 
-## 📚 Get Started
-
-Visit [aifoundry.org](https://aifoundry.org) to:
-- Download our latest releases
-- Access documentation and tutorials
-- Join community discussions
-- Contribute to ongoing projects
-- Attend upcoming events
-
 ## 🌍 Community Events
 
 We host regular events including:
-- **AI Plumbers Conference** - Technical deep-dives and hands-on sessions
-- **Community Calls** - Regular sync-ups and roadmap discussions
+- **AI Plumbers Conference** - [Technical deep-dives and hands-on sessions](https://blog.aifoundry.org/)
+- **Community Calls** - [Regular sync-ups and roadmap discussions](https://discord.gg/WNKvkefkUs)
 
 ## 💡 Why AI Foundry?
 
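The new ET section quotes concrete figures (1088 minions, two `rv64imfc` HARTs each). A minimal back-of-the-envelope sketch of those numbers — the figures are taken from the README text above, not from a vendor datasheet, and the ISA-string decoding follows the standard RISC-V naming convention:

```python
# Scale of the ETSOC1 as described in the README text (assumed figures).
MINIONS = 1088         # compute cores ("minions")
HARTS_PER_MINION = 2   # rv64imfc HARTs per minion

total_harts = MINIONS * HARTS_PER_MINION
print(f"total HARTs: {total_harts}")  # total HARTs: 2176

# Decode the base ISA string per the RISC-V naming convention:
# "rv" + XLEN + single-letter standard extensions (i, m, f, c, ...).
isa = "rv64imfc"
xlen = int(isa[2:4])        # register width in bits -> 64
extensions = list(isa[4:])  # ['i', 'm', 'f', 'c']
print(xlen, extensions)
```

So the chip exposes 2176 general-purpose HARTs, each a 64-bit core with the base integer, multiply/divide, single-precision float, and compressed-instruction extensions, before counting the vendor-specific vector/tensor units.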