TeleAI-AI-Flow committed · Commit 21aeb55 · verified · Parent(s): 1b18856

Upload Leaderboard.vue

Files changed (1): src/views/Leaderboard.vue (+2 −4)

src/views/Leaderboard.vue CHANGED
@@ -51,10 +51,6 @@ const lastSelectedDataNameChart = ref('')
 
 // header markdown content
 const headerMarkdown = ref(`
-<p align="center">
-🏆 <a href="https://huggingface.co/spaces/TeleAI-AI-Flow/InformationCapacityLeaderboard"> Leaderboard</a> &nbsp&nbsp | &nbsp&nbsp
-🖥️ <a href="https://github.com/TeleAI-AI-Flow/InformationCapacity">GitHub</a> &nbsp&nbsp | &nbsp&nbsp 🤗 <a href="https://huggingface.co/datasets/TeleAI-AI-Flow/InformationCapacity">Hugging Face</a>&nbsp&nbsp | &nbsp&nbsp 📑&nbsp <a href="https://www.arxiv.org/abs/2511.08066">Paper</a>
-</p>
 
 **Information Capacity** evaluates an LLM's **efficiency** based on text compression performance relative to computational complexity, harnessing the inherent correlation between **compression** and **intelligence**.
 Larger models can predict the next token more accurately, leading to higher compression gains but at increased computational costs.
@@ -62,7 +58,9 @@ Consequently, a series of models with varying sizes exhibits **consistent** information capacity
 It also facilitates dynamic routing of different-sized models for efficient handling of tasks with varying difficulties, which is especially relevant to the device-edge-cloud infrastructure detailed in the **AI Flow** framework.
 With the rapid evolution of edge intelligence, we believe that this hierarchical network will replace the mainstream cloud-centric computing scheme in the near future.
 
+
 If you want to add your evaluation results to the leaderboard, please submit a PR at [our GitHub repo](https://github.com/TeleAI-AI-Flow/InformationCapacity).
+
 `)
 
 const title = 'Information Capacity Leaderboard'
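The header text describes information capacity as compression performance weighed against computational complexity. A minimal sketch of such a ratio, in the same JavaScript as the component under change — the function name, inputs, and the simple bits-saved-per-FLOP formula are illustrative assumptions, not the paper's exact definition:

```javascript
// Hypothetical sketch: compression gain relative to compute cost.
// baselineBits: bits to encode the text with a naive baseline coder
// modelBits: bits to encode the text using the LLM's next-token predictions
// flopsPerToken, tokens: a rough measure of inference compute
function informationCapacity(baselineBits, modelBits, flopsPerToken, tokens) {
  const compressionGain = baselineBits - modelBits; // bits saved by the model
  const compute = flopsPerToken * tokens;           // total compute spent
  return compressionGain / compute;                 // gain per unit compute
}
```

Under this toy definition, a larger model that saves more bits but burns proportionally more FLOPs can score the same as a smaller one — which is the "consistent information capacity across model sizes" observation the header paragraph makes.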
 
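The dynamic routing mentioned in the header — sending easy tasks to small device models and hard ones to larger edge or cloud models — could be sketched roughly as below; the tier names and difficulty thresholds are hypothetical, not part of the AI Flow framework's actual API:

```javascript
// Hypothetical difficulty-based router over a device-edge-cloud hierarchy.
// Tiers are ordered smallest-first so the cheapest capable model wins.
const tiers = [
  { name: 'device-small', maxDifficulty: 0.3 }, // on-device model
  { name: 'edge-medium', maxDifficulty: 0.7 },  // edge-server model
  { name: 'cloud-large', maxDifficulty: 1.0 },  // cloud model
];

function routeTask(difficulty) {
  // Pick the first (smallest) tier whose capability covers the task;
  // fall back to the largest model for out-of-range difficulty scores.
  const tier = tiers.find(t => difficulty <= t.maxDifficulty);
  return tier ? tier.name : 'cloud-large';
}
```

The design choice here is "smallest capable model first", which is what makes a consistent per-model efficiency metric useful: it gives the router a principled way to rank models of different sizes.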