Anima-Core committed · verified
Commit 78a65d7 · 1 Parent(s): b918741

Update README.md

Files changed (1): README.md (+83 −7)
---
title: README
emoji: 😻
colorFrom: purple
colorTo: pink
sdk: static
pinned: false
license: mit
short_description: High-efficiency LLM acceleration engine
---

<h1 align="center">Anima Core Inc.</h1>
<h3 align="center">High-efficiency LLM acceleration for real-world AI inference</h3>

<p align="center">
<b>7–11× faster attention on NVIDIA H100 NVL</b><br>
<b>~90% lower energy per token</b><br>
<b>Software-only, no custom hardware required</b>
</p>

---

### 🚀 AN1 Acceleration Engine
The AN1 Engine provides drop-in accelerated attention and matrix operations for PyTorch LLM inference.
It achieves speedups competitive with dedicated AI hardware while running on standard NVIDIA GPUs.
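Since this card doesn't show the integration API, here is a hypothetical sketch of what "drop-in" attention replacement typically looks like: a reference implementation and an accelerated substitute share the same signature and output contract, so call sites don't change. The `an1_attention` name is an assumption for illustration, not the actual AN1 API (it delegates to the reference here; the real engine would dispatch to a CUDA kernel).

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def reference_attention(q, k, v):
    """Plain scaled dot-product attention (the baseline being replaced)."""
    scale = 1.0 / np.sqrt(q.shape[-1])
    return softmax(q @ k.transpose(0, 2, 1) * scale) @ v

def an1_attention(q, k, v):
    """Stand-in for an accelerated kernel: same signature, same outputs,
    so callers swap implementations without touching their code.
    (Hypothetical name; delegates to the reference in this sketch.)"""
    return reference_attention(q, k, v)

# "Drop-in" means the call site is identical for both implementations:
rng = np.random.default_rng(0)
q, k, v = (rng.standard_normal((2, 128, 64)) for _ in range(3))
baseline = reference_attention(q, k, v)
accelerated = an1_attention(q, k, v)
assert np.allclose(baseline, accelerated)
```
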
| Feature | Value |
|--------:|:------|
| Acceleration | 7.21× to 11.3× faster |
| Energy Savings | ~90% lower joules/token |
| Hardware | H100 NVL, A100, L40S, more |
| Integration | PyTorch (vLLM & TensorRT adapters coming) |
| Availability | Production pilots by request |

---

### 📈 Benchmarked on NVIDIA H100 NVL

```text
Baseline PyTorch (fp16, 2048 seq):
  Latency:     11.63 ms
  Tokens/sec:  1.41M
  TFLOPs:      47.27

AN1 Accelerated (fp16, 2048 seq):
  Latency:     1.36 ms
  Tokens/sec:  12.04M
  TFLOPs:      404.01

Speedup:         8.55×
Energy Savings:  ~90%
```
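The latency, throughput, and TFLOPs figures quoted above are mutually consistent, each implying roughly the same ratio between the two runs; a few lines of arithmetic on the quoted numbers make this explicit:

```python
# Figures quoted in the benchmark block above (H100 NVL, fp16, 2048 seq).
baseline = {"latency_ms": 11.63, "tokens_per_s": 1.41e6,  "tflops": 47.27}
an1      = {"latency_ms": 1.36,  "tokens_per_s": 12.04e6, "tflops": 404.01}

speedup_latency = baseline["latency_ms"] / an1["latency_ms"]
speedup_tokens  = an1["tokens_per_s"] / baseline["tokens_per_s"]
speedup_tflops  = an1["tflops"] / baseline["tflops"]

# All three ratios land at roughly 8.5x.
print(f"{speedup_latency:.2f}x  {speedup_tokens:.2f}x  {speedup_tflops:.2f}x")

# "~90% lower energy per token" means the accelerated run uses about
# one tenth of the baseline's joules per token:
energy_ratio = 1 - 0.90
print(f"accelerated joules/token ~ {energy_ratio:.2f}x baseline")
```
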
---

### 🔗 Public Repository
👉 https://github.com/Anima-Core/an1-engine

---

### 🧪 Pilot Access (Private GPU Backend)
The CUDA backend (`an1_core_gpu`) remains proprietary.
To request access for benchmarking or integration:

📩 **pilot@animacore.ai**

Please include:
- Name
- Organization
- GPU hardware available

---

### 🧠 Research Interests
- Efficient LLM inference
- Software-based structured reuse for GPU compute
- Symbolic and neuro-symbolic acceleration
- Meaning-based computational models

---

### 🌱 About Anima Core
Anima Core builds AI systems focused on secure, ethical, and computationally efficient machine intelligence.
We believe performance and responsibility belong together.

🌐 https://www.animacore.ai