---
title: LLM Evaluation Framework
emoji: 🤖
colorFrom: blue
colorTo: green
sdk: streamlit
sdk_version: 1.28.0
app_file: app.py
pinned: false
license: mit
---

# LLM Quantitative Evaluation Framework

A comprehensive tool for comparing and evaluating Large Language Models based on multiple quantitative criteria.

## Features

- **Multi-criteria evaluation**: Performance, cost, speed, reliability, compliance, and integration
- **Interactive weights**: Adjust importance of each factor based on your use case
- **Usage scenario modeling**: Input your specific requirements for accurate cost analysis
- **Visual comparisons**: Charts and graphs for easy model comparison
- **Transparent methodology**: Clear scoring algorithms and explanations
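
The weighted multi-criteria idea above can be sketched as follows. This is an illustrative example, not the app's actual code: the criterion names mirror the feature list, but the `weighted_score` function and the sample numbers are assumptions.

```python
# Hypothetical sketch of weighted multi-criteria scoring (illustrative only,
# not the app's real implementation): each model gets a 0-1 score per
# criterion, and the final rank is the weight-normalized sum.

CRITERIA = ["performance", "cost", "speed",
            "reliability", "compliance", "integration"]

def weighted_score(scores: dict, weights: dict) -> float:
    """Combine per-criterion scores (0-1) using user-supplied weights."""
    total_weight = sum(weights[c] for c in CRITERIA)
    return sum(scores[c] * weights[c] for c in CRITERIA) / total_weight

# Equal importance by default; bump a weight to reflect your priorities.
weights = {c: 1.0 for c in CRITERIA}
weights["cost"] = 2.0  # e.g. cost matters twice as much

model_scores = {"performance": 0.9, "cost": 0.4, "speed": 0.7,
                "reliability": 0.8, "compliance": 1.0, "integration": 0.6}

print(round(weighted_score(model_scores, weights), 3))  # → 0.686
```

Normalizing by the total weight keeps the result in the 0-1 range regardless of how the sidebar sliders are set.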

## How to Use

1. Adjust the evaluation criteria weights in the sidebar based on your priorities
2. Configure your usage scenario (monthly requests, token usage)
3. Review the ranked results and detailed analysis
4. Use the insights to make informed LLM selection decisions
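
The usage-scenario step (monthly requests plus token counts) feeds the cost analysis. A minimal sketch of that calculation, with made-up per-1K-token prices and a hypothetical `monthly_cost` helper that is not part of the app:

```python
def monthly_cost(requests_per_month: int, in_tokens: int, out_tokens: int,
                 price_in_per_1k: float, price_out_per_1k: float) -> float:
    """Estimate monthly spend from a usage scenario (prices per 1K tokens)."""
    per_request = (in_tokens / 1000) * price_in_per_1k \
                + (out_tokens / 1000) * price_out_per_1k
    return requests_per_month * per_request

# Illustrative numbers only: 100K requests/month,
# 500 input / 200 output tokens per request.
print(f"${monthly_cost(100_000, 500, 200, 0.01, 0.03):,.2f}")  # → $1,100.00
```

Because input and output tokens are usually priced differently, splitting them in the scenario makes the comparison between models noticeably more accurate.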

Built with Streamlit and deployed on Hugging Face Spaces.