Update README.md

README.md CHANGED
@@ -161,11 +161,11 @@ The following performance benchmarks were conducted with [vLLM](https://docs.vll
 <th>Model</th>
 <th>Average Cost Reduction</th>
 <th>Latency (s)</th>
-<th>
-<th>Latency (s)th>
-<th>
+<th>Queries Per Dollar</th>
+<th>Latency (s)</th>
+<th>Queries Per Dollar</th>
 <th>Latency (s)</th>
-<th>
+<th>Queries Per Dollar</th>
 </tr>
 </thead>
 <tbody style="text-align: center">
@@ -265,7 +265,9 @@ The following performance benchmarks were conducted with [vLLM](https://docs.vll
 </tbody>
 </table>
 
+**Use case profiles: Image Size (WxH) / prompt tokens / generation tokens
 
+**QPD: Queries per dollar, based on on-demand cost at [Lambda Labs](https://lambdalabs.com/service/gpu-cloud) (observed on 2/18/2025).
 
 ### Multi-stream asynchronous performance (measured with vLLM version 0.7.2)
 
@@ -284,11 +286,11 @@ The following performance benchmarks were conducted with [vLLM](https://docs.vll
 <th>Model</th>
 <th>Average Cost Reduction</th>
 <th>Maximum throughput (QPS)</th>
-<th>
+<th>Queries Per Dollar</th>
 <th>Maximum throughput (QPS)</th>
-<th>
+<th>Queries Per Dollar</th>
 <th>Maximum throughput (QPS)</th>
-<th>
+<th>Queries Per Dollar</th>
 </tr>
 </thead>
 <tbody style="text-align: center">
@@ -386,4 +388,10 @@ The following performance benchmarks were conducted with [vLLM](https://docs.vll
 <td>6777</td>
 </tr>
 </tbody>
-</table>
+</table>
+
+**Use case profiles: Image Size (WxH) / prompt tokens / generation tokens
+
+**QPS: Queries per second.
+
+**QPD: Queries per dollar, based on on-demand cost at [Lambda Labs](https://lambdalabs.com/service/gpu-cloud) (observed on 2/18/2025).
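The new "Queries Per Dollar" column is defined only informally in the footnotes (queries per dollar at the quoted on-demand GPU price). As a minimal sketch of how such a figure could be derived, assuming QPD comes from measured throughput and the hourly instance price (the exact formula, function names, and example numbers below are illustrative, not taken from the README):

```python
# Hypothetical derivation of the "Queries Per Dollar" (QPD) metric.
# Assumption: QPD = queries served per hour / on-demand hourly cost (USD).

def queries_per_dollar(qps: float, hourly_cost_usd: float) -> float:
    """Queries served per dollar at a sustained throughput of `qps`."""
    queries_per_hour = qps * 3600.0
    return queries_per_hour / hourly_cost_usd

def single_stream_qpd(latency_s: float, hourly_cost_usd: float) -> float:
    """Single-stream variant: one query completes every `latency_s` seconds."""
    return queries_per_dollar(1.0 / latency_s, hourly_cost_usd)

# Made-up numbers (not from the benchmark tables):
# 2.0 QPS on a $2.49/hr instance -> 7200 queries/hr / $2.49 ~= 2892 queries/$.
print(round(queries_per_dollar(2.0, 2.49)))
```

Under this assumption, a cheaper GPU with the same throughput (or the same GPU with lower latency in the single-stream case) raises QPD proportionally, which is why the quantized models' cost-reduction factors track the QPD column.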