Column types: provider (string, 54 classes), name (string, 186 classes), size (string, 120 classes), variant (string, 110 classes), version (string, 110 classes), sector (string, 4 classes), openness (string, 2 classes), region (string, 5 classes), country (string, 13 classes), source_id (string, 434 classes), is_first_party (bool, 2 classes), category (int64, 1–7), year (int64, 2.02k–2.03k), metadata (string, 433 classes), score (float64, 0–3), is_model_release (bool, 2 classes)

| provider | name | size | variant | version | sector | openness | region | country | source_id | is_first_party | category | year | metadata | score | is_model_release |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| OpenAI | gpt-3.5 | null | null | turbo | Industry | closed | North America | United States | first-party-or-cooperative-evals_40 | false | 3 | 2024 | {'title': 'Benchmarking Large Language Models Across Languages, Modalities, Models and Tasks', 'url': 'https://aclanthology.org/2024.naacl-long.143.pdf', 'release_date': '2024-04-02'} | 3 | false |
| Google | palm-2 | null | null | null | Industry | closed | North America | United States | first-party-or-cooperative-evals_40 | false | 3 | 2024 | {'title': 'Benchmarking Large Language Models Across Languages, Modalities, Models and Tasks', 'url': 'https://aclanthology.org/2024.naacl-long.143.pdf', 'release_date': '2024-04-02'} | 3 | false |
| OpenAI | gpt-4 | null | null | null | Industry | closed | North America | United States | first-party-or-cooperative-evals_40 | false | 3 | 2024 | {'title': 'Benchmarking Large Language Models Across Languages, Modalities, Models and Tasks', 'url': 'https://aclanthology.org/2024.naacl-long.143.pdf', 'release_date': '2024-04-02'} | 3 | false |
| Google | gemma-1 | 2B | null | null | Industry | open | North America | United States | first-party-or-cooperative-evals_40 | false | 3 | 2024 | {'title': 'Benchmarking Large Language Models Across Languages, Modalities, Models and Tasks', 'url': 'https://aclanthology.org/2024.naacl-long.143.pdf', 'release_date': '2024-04-02'} | 3 | false |
| OpenAI | gpt-4 | null | null | null | Industry | closed | North America | United States | first-party-or-cooperative-evals_40 | false | 3 | 2024 | {'title': 'Benchmarking Large Language Models Across Languages, Modalities, Models and Tasks', 'url': 'https://aclanthology.org/2024.naacl-long.143.pdf', 'release_date': '2024-04-02'} | 3 | false |
| Microsoft | llava-1.5 | 13B | null | null | Industry | open | North America | United States | first-party-or-cooperative-evals_40 | true | 3 | 2024 | {'title': 'Benchmarking Large Language Models Across Languages, Modalities, Models and Tasks', 'url': 'https://aclanthology.org/2024.naacl-long.143.pdf', 'release_date': '2024-04-02'} | 3 | false |
| Meta | llama-2 | 70B | null | null | Industry | open | North America | United States | first-party-or-cooperative-evals_40 | false | 3 | 2024 | {'title': 'Benchmarking Large Language Models Across Languages, Modalities, Models and Tasks', 'url': 'https://aclanthology.org/2024.naacl-long.143.pdf', 'release_date': '2024-04-02'} | 3 | false |
| Google | gemma-1 | 7B | null | null | Industry | open | North America | United States | first-party-or-cooperative-evals_40 | false | 3 | 2024 | {'title': 'Benchmarking Large Language Models Across Languages, Modalities, Models and Tasks', 'url': 'https://aclanthology.org/2024.naacl-long.143.pdf', 'release_date': '2024-04-02'} | 3 | false |
| Microsoft | llava-1.5 | 13B | null | null | Industry | open | North America | United States | first-party-or-cooperative-evals_40 | true | 2 | 2024 | {'title': 'Benchmarking Large Language Models Across Languages, Modalities, Models and Tasks', 'url': 'https://aclanthology.org/2024.naacl-long.143.pdf', 'release_date': '2024-04-02'} | 2 | false |
| OpenAI | gpt-4 | null | null | null | Industry | closed | North America | United States | first-party-or-cooperative-evals_40 | false | 2 | 2024 | {'title': 'Benchmarking Large Language Models Across Languages, Modalities, Models and Tasks', 'url': 'https://aclanthology.org/2024.naacl-long.143.pdf', 'release_date': '2024-04-02'} | 2 | false |
| Google | gemma-1 | 7B | null | null | Industry | open | North America | United States | first-party-or-cooperative-evals_40 | false | 2 | 2024 | {'title': 'Benchmarking Large Language Models Across Languages, Modalities, Models and Tasks', 'url': 'https://aclanthology.org/2024.naacl-long.143.pdf', 'release_date': '2024-04-02'} | 2 | false |
| OpenAI | gpt-3.5 | null | null | turbo | Industry | closed | North America | United States | first-party-or-cooperative-evals_40 | false | 2 | 2024 | {'title': 'Benchmarking Large Language Models Across Languages, Modalities, Models and Tasks', 'url': 'https://aclanthology.org/2024.naacl-long.143.pdf', 'release_date': '2024-04-02'} | 2 | false |
| Meta | llama-2 | 70B | null | null | Industry | open | North America | United States | first-party-or-cooperative-evals_40 | false | 2 | 2024 | {'title': 'Benchmarking Large Language Models Across Languages, Modalities, Models and Tasks', 'url': 'https://aclanthology.org/2024.naacl-long.143.pdf', 'release_date': '2024-04-02'} | 2 | false |
| Meta | llama-2 | 13B | null | null | Industry | open | North America | United States | first-party-or-cooperative-evals_40 | false | 2 | 2024 | {'title': 'Benchmarking Large Language Models Across Languages, Modalities, Models and Tasks', 'url': 'https://aclanthology.org/2024.naacl-long.143.pdf', 'release_date': '2024-04-02'} | 2 | false |
| Google | gemini-1.0 | null | pro | null | Industry | closed | North America | United States | first-party-or-cooperative-evals_40 | false | 2 | 2024 | {'title': 'Benchmarking Large Language Models Across Languages, Modalities, Models and Tasks', 'url': 'https://aclanthology.org/2024.naacl-long.143.pdf', 'release_date': '2024-04-02'} | 2 | false |
| Google | palm-2 | null | null | null | Industry | closed | North America | United States | first-party-or-cooperative-evals_40 | false | 2 | 2024 | {'title': 'Benchmarking Large Language Models Across Languages, Modalities, Models and Tasks', 'url': 'https://aclanthology.org/2024.naacl-long.143.pdf', 'release_date': '2024-04-02'} | 2 | false |
| OpenAI | gpt-4 | null | null | null | Industry | closed | North America | United States | first-party-or-cooperative-evals_40 | false | 2 | 2024 | {'title': 'Benchmarking Large Language Models Across Languages, Modalities, Models and Tasks', 'url': 'https://aclanthology.org/2024.naacl-long.143.pdf', 'release_date': '2024-04-02'} | 2 | false |
| Google | gemma-1 | 2B | null | null | Industry | open | North America | United States | first-party-or-cooperative-evals_40 | false | 2 | 2024 | {'title': 'Benchmarking Large Language Models Across Languages, Modalities, Models and Tasks', 'url': 'https://aclanthology.org/2024.naacl-long.143.pdf', 'release_date': '2024-04-02'} | 2 | false |
| Mistral | mistral | null | null | v1.0 | Industry | open | Europe | France | first-party-or-cooperative-evals_40 | false | 2 | 2024 | {'title': 'Benchmarking Large Language Models Across Languages, Modalities, Models and Tasks', 'url': 'https://aclanthology.org/2024.naacl-long.143.pdf', 'release_date': '2024-04-02'} | 2 | false |
| Meta | llama-2 | 7B | null | null | Industry | open | North America | United States | first-party-or-cooperative-evals_40 | false | 2 | 2024 | {'title': 'Benchmarking Large Language Models Across Languages, Modalities, Models and Tasks', 'url': 'https://aclanthology.org/2024.naacl-long.143.pdf', 'release_date': '2024-04-02'} | 2 | false |
| OpenAI | gpt-4 | null | turbo | 2024-04-09 | Industry | closed | North America | United States | first-party-or-cooperative-evals_41 | false | 2 | 2024 | {'title': 'EUREKA: Evaluating and Understanding Large Foundation Models', 'url': 'https://www.microsoft.com/en-us/research/wp-content/uploads/2024/09/Eureka-Evaluating-and-Understanding-Large-Foundation-Models-Sept-13.pdf'} | 3 | false |
| Anthropic | claude-3.5 | null | sonnet | null | Industry | closed | North America | United States | first-party-or-cooperative-evals_41 | false | 2 | 2024 | {'title': 'EUREKA: Evaluating and Understanding Large Foundation Models', 'url': 'https://www.microsoft.com/en-us/research/wp-content/uploads/2024/09/Eureka-Evaluating-and-Understanding-Large-Foundation-Models-Sept-13.pdf'} | 3 | false |
| Anthropic | claude-3 | null | opus | null | Industry | closed | North America | United States | first-party-or-cooperative-evals_41 | false | 2 | 2024 | {'title': 'EUREKA: Evaluating and Understanding Large Foundation Models', 'url': 'https://www.microsoft.com/en-us/research/wp-content/uploads/2024/09/Eureka-Evaluating-and-Understanding-Large-Foundation-Models-Sept-13.pdf'} | 3 | false |
| OpenAI | gpt-4 | null | null | 2024-05-13 | Industry | closed | North America | United States | first-party-or-cooperative-evals_41 | false | 2 | 2024 | {'title': 'EUREKA: Evaluating and Understanding Large Foundation Models', 'url': 'https://www.microsoft.com/en-us/research/wp-content/uploads/2024/09/Eureka-Evaluating-and-Understanding-Large-Foundation-Models-Sept-13.pdf'} | 3 | false |
| Google | gemini-1.5 | null | pro | null | Industry | closed | North America | United States | first-party-or-cooperative-evals_41 | false | 2 | 2024 | {'title': 'EUREKA: Evaluating and Understanding Large Foundation Models', 'url': 'https://www.microsoft.com/en-us/research/wp-content/uploads/2024/09/Eureka-Evaluating-and-Understanding-Large-Foundation-Models-Sept-13.pdf'} | 3 | false |
| OpenAI | gpt-4 | null | preview | null | Industry | closed | North America | United States | first-party-or-cooperative-evals_41 | false | 2 | 2024 | {'title': 'EUREKA: Evaluating and Understanding Large Foundation Models', 'url': 'https://www.microsoft.com/en-us/research/wp-content/uploads/2024/09/Eureka-Evaluating-and-Understanding-Large-Foundation-Models-Sept-13.pdf'} | 3 | false |
| Mistral | mistral | 7B | null | null | Industry | open | Europe | France | first-party-or-cooperative-evals_42 | false | 2 | 2025 | {'title': 'SocialCC: Interactive Evaluation for Cultural Competence in Language Agents', 'url': 'https://www.microsoft.com/en-us/research/wp-content/uploads/2025/07/SocialCC-Interactive-Evaluation-for-Cultural-Competence-in-Language-Agents.pdf'} | 3 | false |
| Meta | llama-2 | 7B | chat | null | Industry | open | North America | United States | first-party-or-cooperative-evals_42 | false | 2 | 2025 | {'title': 'SocialCC: Interactive Evaluation for Cultural Competence in Language Agents', 'url': 'https://www.microsoft.com/en-us/research/wp-content/uploads/2025/07/SocialCC-Interactive-Evaluation-for-Cultural-Competence-in-Language-Agents.pdf'} | 3 | false |
| OpenAI | gpt-4.1 | null | null | null | Industry | closed | North America | United States | first-party-or-cooperative-evals_42 | false | 2 | 2025 | {'title': 'SocialCC: Interactive Evaluation for Cultural Competence in Language Agents', 'url': 'https://www.microsoft.com/en-us/research/wp-content/uploads/2025/07/SocialCC-Interactive-Evaluation-for-Cultural-Competence-in-Language-Agents.pdf'} | 3 | false |
| Meta | llama-2 | 13B | chat | null | Industry | open | North America | United States | first-party-or-cooperative-evals_42 | false | 2 | 2025 | {'title': 'SocialCC: Interactive Evaluation for Cultural Competence in Language Agents', 'url': 'https://www.microsoft.com/en-us/research/wp-content/uploads/2025/07/SocialCC-Interactive-Evaluation-for-Cultural-Competence-in-Language-Agents.pdf'} | 3 | false |
| Alibaba | qwen-3 | 8B | null | null | Industry | open | East Asia | China | first-party-or-cooperative-evals_42 | false | 2 | 2025 | {'title': 'SocialCC: Interactive Evaluation for Cultural Competence in Language Agents', 'url': 'https://www.microsoft.com/en-us/research/wp-content/uploads/2025/07/SocialCC-Interactive-Evaluation-for-Cultural-Competence-in-Language-Agents.pdf'} | 3 | false |
| Alibaba | qwen-3 | 32B | null | null | Industry | open | East Asia | China | first-party-or-cooperative-evals_42 | false | 2 | 2025 | {'title': 'SocialCC: Interactive Evaluation for Cultural Competence in Language Agents', 'url': 'https://www.microsoft.com/en-us/research/wp-content/uploads/2025/07/SocialCC-Interactive-Evaluation-for-Cultural-Competence-in-Language-Agents.pdf'} | 3 | false |
| Meta | llama-2 | 70B | chat | null | Industry | open | North America | United States | first-party-or-cooperative-evals_42 | false | 2 | 2025 | {'title': 'SocialCC: Interactive Evaluation for Cultural Competence in Language Agents', 'url': 'https://www.microsoft.com/en-us/research/wp-content/uploads/2025/07/SocialCC-Interactive-Evaluation-for-Cultural-Competence-in-Language-Agents.pdf'} | 3 | false |
| Microsoft | phi-4 | null | null | null | Industry | open | North America | United States | first-party-or-cooperative-evals_42 | false | 2 | 2025 | {'title': 'SocialCC: Interactive Evaluation for Cultural Competence in Language Agents', 'url': 'https://www.microsoft.com/en-us/research/wp-content/uploads/2025/07/SocialCC-Interactive-Evaluation-for-Cultural-Competence-in-Language-Agents.pdf'} | 3 | false |
| Alibaba | qwen-2 | 72B | null | null | Industry | open | East Asia | China | first-party-or-cooperative-evals_42 | false | 2 | 2025 | {'title': 'SocialCC: Interactive Evaluation for Cultural Competence in Language Agents', 'url': 'https://www.microsoft.com/en-us/research/wp-content/uploads/2025/07/SocialCC-Interactive-Evaluation-for-Cultural-Competence-in-Language-Agents.pdf'} | 3 | false |
| Meta | llama-3 | 8B | null | null | Industry | open | North America | United States | first-party-or-cooperative-evals_42 | false | 2 | 2025 | {'title': 'SocialCC: Interactive Evaluation for Cultural Competence in Language Agents', 'url': 'https://www.microsoft.com/en-us/research/wp-content/uploads/2025/07/SocialCC-Interactive-Evaluation-for-Cultural-Competence-in-Language-Agents.pdf'} | 3 | false |
| Alibaba | qwen-2 | 7B | null | null | Industry | open | East Asia | China | first-party-or-cooperative-evals_42 | false | 2 | 2025 | {'title': 'SocialCC: Interactive Evaluation for Cultural Competence in Language Agents', 'url': 'https://www.microsoft.com/en-us/research/wp-content/uploads/2025/07/SocialCC-Interactive-Evaluation-for-Cultural-Competence-in-Language-Agents.pdf'} | 3 | false |
| Meta | llama-3 | 70B | null | null | Industry | open | North America | United States | first-party-or-cooperative-evals_42 | false | 2 | 2025 | {'title': 'SocialCC: Interactive Evaluation for Cultural Competence in Language Agents', 'url': 'https://www.microsoft.com/en-us/research/wp-content/uploads/2025/07/SocialCC-Interactive-Evaluation-for-Cultural-Competence-in-Language-Agents.pdf'} | 3 | false |
| OpenAI | gpt-4 | null | null | null | Industry | closed | North America | United States | first-party-or-cooperative-evals_42 | false | 2 | 2025 | {'title': 'SocialCC: Interactive Evaluation for Cultural Competence in Language Agents', 'url': 'https://www.microsoft.com/en-us/research/wp-content/uploads/2025/07/SocialCC-Interactive-Evaluation-for-Cultural-Competence-in-Language-Agents.pdf'} | 3 | false |
| OpenAI | gpt-3.5 | null | null | null | Industry | closed | North America | United States | first-party-or-cooperative-evals_42 | false | 2 | 2025 | {'title': 'SocialCC: Interactive Evaluation for Cultural Competence in Language Agents', 'url': 'https://www.microsoft.com/en-us/research/wp-content/uploads/2025/07/SocialCC-Interactive-Evaluation-for-Cultural-Competence-in-Language-Agents.pdf'} | 3 | false |
| OpenAI | gpt-3.5 | null | null | turbo | Industry | closed | North America | United States | first-party-or-cooperative-evals_43 | false | 3 | 2024 | {'title': 'PARIKSHA: A Scalable, Democratic, Transparent Evaluation Platform for Assessing Indic Large Language Models', 'url': 'https://www.microsoft.com/en-us/research/wp-content/uploads/2024/05/Pariksha_Tech_Report_v1-663980ea39a84.pdf'} | 3 | false |
| Mistral | mistral | 7B | instruct | v0.2 | Industry | open | Europe | France | first-party-or-cooperative-evals_43 | false | 3 | 2024 | {'title': 'PARIKSHA: A Scalable, Democratic, Transparent Evaluation Platform for Assessing Indic Large Language Models', 'url': 'https://www.microsoft.com/en-us/research/wp-content/uploads/2024/05/Pariksha_Tech_Report_v1-663980ea39a84.pdf'} | 3 | false |
| Google | gemini-1.0 | null | pro | null | Industry | closed | North America | United States | first-party-or-cooperative-evals_43 | false | 3 | 2024 | {'title': 'PARIKSHA: A Scalable, Democratic, Transparent Evaluation Platform for Assessing Indic Large Language Models', 'url': 'https://www.microsoft.com/en-us/research/wp-content/uploads/2024/05/Pariksha_Tech_Report_v1-663980ea39a84.pdf'} | 3 | false |
| Meta | llama-3 | 70B | instruct | null | Industry | open | North America | United States | first-party-or-cooperative-evals_43 | false | 3 | 2024 | {'title': 'PARIKSHA: A Scalable, Democratic, Transparent Evaluation Platform for Assessing Indic Large Language Models', 'url': 'https://www.microsoft.com/en-us/research/wp-content/uploads/2024/05/Pariksha_Tech_Report_v1-663980ea39a84.pdf'} | 3 | false |
| Meta | llama-3 | 8B | instruct | null | Industry | open | North America | United States | first-party-or-cooperative-evals_43 | false | 3 | 2024 | {'title': 'PARIKSHA: A Scalable, Democratic, Transparent Evaluation Platform for Assessing Indic Large Language Models', 'url': 'https://www.microsoft.com/en-us/research/wp-content/uploads/2024/05/Pariksha_Tech_Report_v1-663980ea39a84.pdf'} | 3 | false |
| OpenAI | gpt-4 | null | null | null | Industry | closed | North America | United States | first-party-or-cooperative-evals_43 | false | 3 | 2024 | {'title': 'PARIKSHA: A Scalable, Democratic, Transparent Evaluation Platform for Assessing Indic Large Language Models', 'url': 'https://www.microsoft.com/en-us/research/wp-content/uploads/2024/05/Pariksha_Tech_Report_v1-663980ea39a84.pdf'} | 3 | false |
| Meta | llama-2 | 7B | chat | null | Industry | open | North America | United States | first-party-or-cooperative-evals_43 | false | 3 | 2024 | {'title': 'PARIKSHA: A Scalable, Democratic, Transparent Evaluation Platform for Assessing Indic Large Language Models', 'url': 'https://www.microsoft.com/en-us/research/wp-content/uploads/2024/05/Pariksha_Tech_Report_v1-663980ea39a84.pdf'} | 3 | false |
| OpenAI | gpt-4 | null | null | turbo | Industry | closed | North America | United States | first-party-or-cooperative-evals_43 | false | 3 | 2024 | {'title': 'PARIKSHA: A Scalable, Democratic, Transparent Evaluation Platform for Assessing Indic Large Language Models', 'url': 'https://www.microsoft.com/en-us/research/wp-content/uploads/2024/05/Pariksha_Tech_Report_v1-663980ea39a84.pdf'} | 3 | false |
| Google | gemma-1 | 7B | null | null | Industry | open | North America | United States | first-party-or-cooperative-evals_43 | false | 3 | 2024 | {'title': 'PARIKSHA: A Scalable, Democratic, Transparent Evaluation Platform for Assessing Indic Large Language Models', 'url': 'https://www.microsoft.com/en-us/research/wp-content/uploads/2024/05/Pariksha_Tech_Report_v1-663980ea39a84.pdf'} | 3 | false |
| OpenAI | gpt-4 | null | null | turbo | Industry | closed | North America | United States | first-party-or-cooperative-evals_43 | false | 2 | 2024 | {'title': 'PARIKSHA: A Scalable, Democratic, Transparent Evaluation Platform for Assessing Indic Large Language Models', 'url': 'https://www.microsoft.com/en-us/research/wp-content/uploads/2024/05/Pariksha_Tech_Report_v1-663980ea39a84.pdf'} | 3 | false |
| Mistral | mistral | 7B | instruct | v0.2 | Industry | open | Europe | France | first-party-or-cooperative-evals_43 | false | 2 | 2024 | {'title': 'PARIKSHA: A Scalable, Democratic, Transparent Evaluation Platform for Assessing Indic Large Language Models', 'url': 'https://www.microsoft.com/en-us/research/wp-content/uploads/2024/05/Pariksha_Tech_Report_v1-663980ea39a84.pdf'} | 3 | false |
| OpenAI | gpt-4 | null | null | null | Industry | closed | North America | United States | first-party-or-cooperative-evals_43 | false | 2 | 2024 | {'title': 'PARIKSHA: A Scalable, Democratic, Transparent Evaluation Platform for Assessing Indic Large Language Models', 'url': 'https://www.microsoft.com/en-us/research/wp-content/uploads/2024/05/Pariksha_Tech_Report_v1-663980ea39a84.pdf'} | 3 | false |
| Google | gemini-1.0 | null | pro | null | Industry | closed | North America | United States | first-party-or-cooperative-evals_43 | false | 2 | 2024 | {'title': 'PARIKSHA: A Scalable, Democratic, Transparent Evaluation Platform for Assessing Indic Large Language Models', 'url': 'https://www.microsoft.com/en-us/research/wp-content/uploads/2024/05/Pariksha_Tech_Report_v1-663980ea39a84.pdf'} | 3 | false |
| OpenAI | gpt-3.5 | null | null | turbo | Industry | closed | North America | United States | first-party-or-cooperative-evals_43 | false | 2 | 2024 | {'title': 'PARIKSHA: A Scalable, Democratic, Transparent Evaluation Platform for Assessing Indic Large Language Models', 'url': 'https://www.microsoft.com/en-us/research/wp-content/uploads/2024/05/Pariksha_Tech_Report_v1-663980ea39a84.pdf'} | 3 | false |
| Meta | llama-3 | 70B | instruct | null | Industry | open | North America | United States | first-party-or-cooperative-evals_43 | false | 2 | 2024 | {'title': 'PARIKSHA: A Scalable, Democratic, Transparent Evaluation Platform for Assessing Indic Large Language Models', 'url': 'https://www.microsoft.com/en-us/research/wp-content/uploads/2024/05/Pariksha_Tech_Report_v1-663980ea39a84.pdf'} | 3 | false |
| Meta | llama-2 | 7B | chat | null | Industry | open | North America | United States | first-party-or-cooperative-evals_43 | false | 2 | 2024 | {'title': 'PARIKSHA: A Scalable, Democratic, Transparent Evaluation Platform for Assessing Indic Large Language Models', 'url': 'https://www.microsoft.com/en-us/research/wp-content/uploads/2024/05/Pariksha_Tech_Report_v1-663980ea39a84.pdf'} | 3 | false |
| Google | gemma-1 | 7B | null | null | Industry | open | North America | United States | first-party-or-cooperative-evals_43 | false | 2 | 2024 | {'title': 'PARIKSHA: A Scalable, Democratic, Transparent Evaluation Platform for Assessing Indic Large Language Models', 'url': 'https://www.microsoft.com/en-us/research/wp-content/uploads/2024/05/Pariksha_Tech_Report_v1-663980ea39a84.pdf'} | 3 | false |
| Meta | llama-3 | 8B | instruct | null | Industry | open | North America | United States | first-party-or-cooperative-evals_43 | false | 2 | 2024 | {'title': 'PARIKSHA: A Scalable, Democratic, Transparent Evaluation Platform for Assessing Indic Large Language Models', 'url': 'https://www.microsoft.com/en-us/research/wp-content/uploads/2024/05/Pariksha_Tech_Report_v1-663980ea39a84.pdf'} | 3 | false |
| BigScience | bloom | 176B | null | null | Academia | open | Europe | France | first-party-or-cooperative-evals_44 | false | 4 | 2025 | {'title': 'EcoServe: Designing Carbon-Aware AI Inference Systems', 'url': 'https://www.arxiv.org/pdf/2502.05043'} | 3 | false |
| Meta | opt | 125M | null | null | Industry | open | North America | United States | first-party-or-cooperative-evals_44 | false | 4 | 2025 | {'title': 'EcoServe: Designing Carbon-Aware AI Inference Systems', 'url': 'https://www.arxiv.org/pdf/2502.05043'} | 3 | false |
| Meta | llama-1 | 13B | null | null | Industry | closed | North America | United States | first-party-or-cooperative-evals_44 | false | 4 | 2025 | {'title': 'EcoServe: Designing Carbon-Aware AI Inference Systems', 'url': 'https://www.arxiv.org/pdf/2502.05043'} | 3 | false |
| Meta | llama-3 | 8B | null | null | Industry | open | North America | United States | first-party-or-cooperative-evals_44 | false | 4 | 2025 | {'title': 'EcoServe: Designing Carbon-Aware AI Inference Systems', 'url': 'https://www.arxiv.org/pdf/2502.05043'} | 3 | false |
| Mistral | mixtral | 7B | null | null | Industry | open | Europe | France | first-party-or-cooperative-evals_44 | false | 4 | 2025 | {'title': 'EcoServe: Designing Carbon-Aware AI Inference Systems', 'url': 'https://www.arxiv.org/pdf/2502.05043'} | 3 | false |
| Google | gemma-2 | 27B | null | null | Industry | open | North America | United States | first-party-or-cooperative-evals_44 | false | 4 | 2025 | {'title': 'EcoServe: Designing Carbon-Aware AI Inference Systems', 'url': 'https://www.arxiv.org/pdf/2502.05043'} | 3 | false |
| Meta | llama-1 | 70B | null | null | Industry | closed | North America | United States | first-party-or-cooperative-evals_44 | false | 4 | 2025 | {'title': 'EcoServe: Designing Carbon-Aware AI Inference Systems', 'url': 'https://www.arxiv.org/pdf/2502.05043'} | 3 | false |
| Google | gemma-2 | 2B | null | null | Industry | open | North America | United States | first-party-or-cooperative-evals_44 | false | 4 | 2025 | {'title': 'EcoServe: Designing Carbon-Aware AI Inference Systems', 'url': 'https://www.arxiv.org/pdf/2502.05043'} | 3 | false |
| Mistral | mistral | null | null | null | Industry | open | Europe | France | first-party-or-cooperative-evals_44 | false | 4 | 2025 | {'title': 'EcoServe: Designing Carbon-Aware AI Inference Systems', 'url': 'https://www.arxiv.org/pdf/2502.05043'} | 3 | false |
| Salesforce | instructblip | null | null | null | Industry | open | North America | United States | first-party-or-cooperative-evals_45 | false | 3 | 2024 | {'title': 'CVQA: Culturally-diverse Multilingual Visual Question Answering Benchmark', 'url': 'https://arxiv.org/pdf/2406.05967'} | 3 | false |
| Microsoft | llava-1.5 | 7B | null | null | Industry | open | North America | United States | first-party-or-cooperative-evals_45 | false | 3 | 2024 | {'title': 'CVQA: Culturally-diverse Multilingual Visual Question Answering Benchmark', 'url': 'https://arxiv.org/pdf/2406.05967'} | 3 | false |
| OpenAI | clip | null | null | null | Industry | open | North America | United States | first-party-or-cooperative-evals_45 | false | 3 | 2024 | {'title': 'CVQA: Culturally-diverse Multilingual Visual Question Answering Benchmark', 'url': 'https://arxiv.org/pdf/2406.05967'} | 3 | false |
| OpenAI | gpt-4 | null | null | null | Industry | closed | North America | United States | first-party-or-cooperative-evals_45 | false | 3 | 2024 | {'title': 'CVQA: Culturally-diverse Multilingual Visual Question Answering Benchmark', 'url': 'https://arxiv.org/pdf/2406.05967'} | 3 | false |
| Google | gemini-1.5 | null | flash | null | Industry | closed | North America | United States | first-party-or-cooperative-evals_45 | false | 3 | 2024 | {'title': 'CVQA: Culturally-diverse Multilingual Visual Question Answering Benchmark', 'url': 'https://arxiv.org/pdf/2406.05967'} | 3 | false |
| OpenAI | gpt-4 | null | null | null | Industry | closed | North America | United States | first-party-or-cooperative-evals_45 | false | 2 | 2024 | {'title': 'CVQA: Culturally-diverse Multilingual Visual Question Answering Benchmark', 'url': 'https://arxiv.org/pdf/2406.05967'} | 3 | false |
| Google | gemini-1.5 | null | flash | null | Industry | closed | North America | United States | first-party-or-cooperative-evals_45 | false | 2 | 2024 | {'title': 'CVQA: Culturally-diverse Multilingual Visual Question Answering Benchmark', 'url': 'https://arxiv.org/pdf/2406.05967'} | 3 | false |
| OpenAI | clip | null | null | null | Industry | open | North America | United States | first-party-or-cooperative-evals_45 | false | 2 | 2024 | {'title': 'CVQA: Culturally-diverse Multilingual Visual Question Answering Benchmark', 'url': 'https://arxiv.org/pdf/2406.05967'} | 3 | false |
| Microsoft | llava-1.5 | 7B | null | null | Industry | open | North America | United States | first-party-or-cooperative-evals_45 | false | 2 | 2024 | {'title': 'CVQA: Culturally-diverse Multilingual Visual Question Answering Benchmark', 'url': 'https://arxiv.org/pdf/2406.05967'} | 3 | false |
| Salesforce | instructblip | null | null | null | Industry | open | North America | United States | first-party-or-cooperative-evals_45 | false | 2 | 2024 | {'title': 'CVQA: Culturally-diverse Multilingual Visual Question Answering Benchmark', 'url': 'https://arxiv.org/pdf/2406.05967'} | 3 | false |
| OpenAI | gpt-4 | null | null | null | Industry | closed | North America | United States | first-party-or-cooperative-evals_46 | false | 6 | 2024 | {'title': 'From Medprompt to o1: Exploration of Run-Time Strategies for Medical Challenge Problems and Beyond', 'url': 'https://arxiv.org/pdf/2411.03590'} | 1 | false |
| OpenAI | gpt-4 | null | null | null | Industry | closed | North America | United States | first-party-or-cooperative-evals_46 | false | 6 | 2024 | {'title': 'From Medprompt to o1: Exploration of Run-Time Strategies for Medical Challenge Problems and Beyond', 'url': 'https://arxiv.org/pdf/2411.03590'} | 1 | false |
| OpenAI | o1 | null | preview | null | Industry | closed | North America | United States | first-party-or-cooperative-evals_46 | false | 6 | 2024 | {'title': 'From Medprompt to o1: Exploration of Run-Time Strategies for Medical Challenge Problems and Beyond', 'url': 'https://arxiv.org/pdf/2411.03590'} | 1 | false |
| Microsoft | phi-3 | 3.82B | mini, 4k, instruct | null | Industry | open | North America | United States | first-party-or-cooperative-evals_5 | false | 4 | 2025 | {'title': "Bigger isn't always better: how to choose the most efficient model for context-specific tasks", 'url': 'https://huggingface.co/blog/sasha/energy-efficiency-bigger-better'} | 2 | false |
| Alibaba | qwen-3 | 235B | 235B-A22B | null | Industry | open | East Asia | China | first-party-or-cooperative-evals_5 | false | 4 | 2025 | {'title': "Bigger isn't always better: how to choose the most efficient model for context-specific tasks", 'url': 'https://huggingface.co/blog/sasha/energy-efficiency-bigger-better'} | 2 | false |
| Cohere | command-r | 104B | plus | 08-2024 | Industry | open | North America | Canada | first-party-or-cooperative-evals_5 | false | 4 | 2025 | {'title': "Bigger isn't always better: how to choose the most efficient model for context-specific tasks", 'url': 'https://huggingface.co/blog/sasha/energy-efficiency-bigger-better'} | 2 | false |
| Alibaba | qwen-3 | 32B | null | null | Industry | open | East Asia | China | first-party-or-cooperative-evals_5 | false | 4 | 2025 | {'title': "Bigger isn't always better: how to choose the most efficient model for context-specific tasks", 'url': 'https://huggingface.co/blog/sasha/energy-efficiency-bigger-better'} | 2 | false |
| Alibaba | qwen-2.5 | 72B | Instruct | null | Industry | open | East Asia | China | first-party-or-cooperative-evals_5 | false | 4 | 2025 | {'title': "Bigger isn't always better: how to choose the most efficient model for context-specific tasks", 'url': 'https://huggingface.co/blog/sasha/energy-efficiency-bigger-better'} | 2 | false |
| Meta | llama-3.3 | 70B | Instruct | null | Industry | open | North America | United States | first-party-or-cooperative-evals_5 | false | 4 | 2025 | {'title': "Bigger isn't always better: how to choose the most efficient model for context-specific tasks", 'url': 'https://huggingface.co/blog/sasha/energy-efficiency-bigger-better'} | 2 | false |
| Meta | llama-3.1 | 8B | Instruct | null | Industry | open | North America | United States | first-party-or-cooperative-evals_5 | false | 4 | 2025 | {'title': "Bigger isn't always better: how to choose the most efficient model for context-specific tasks", 'url': 'https://huggingface.co/blog/sasha/energy-efficiency-bigger-better'} | 2 | false |
| Microsoft | phi-4 | 14.7B | null | null | Industry | open | North America | United States | first-party-or-cooperative-evals_5 | false | 4 | 2025 | {'title': "Bigger isn't always better: how to choose the most efficient model for context-specific tasks", 'url': 'https://huggingface.co/blog/sasha/energy-efficiency-bigger-better'} | 2 | false |
| Mistral | mistral-medium | null | null | null | Industry | closed | Europe | France | first-party-or-cooperative-evals_47 | false | 2 | 2025 | {'title': 'Raising the Bar: Investigating the Values of Large Language Models via Generative Evolving Testing', 'url': 'https://arxiv.org/pdf/2406.14230'} | 3 | false |
| Mistral | mistral | 7B | instruct | null | Industry | open | Europe | France | first-party-or-cooperative-evals_47 | false | 2 | 2025 | {'title': 'Raising the Bar: Investigating the Values of Large Language Models via Generative Evolving Testing', 'url': 'https://arxiv.org/pdf/2406.14230'} | 3 | false |
| Google | gemini-1.0 | null | pro | null | Industry | closed | North America | United States | first-party-or-cooperative-evals_47 | false | 2 | 2025 | {'title': 'Raising the Bar: Investigating the Values of Large Language Models via Generative Evolving Testing', 'url': 'https://arxiv.org/pdf/2406.14230'} | 3 | false |
| OpenAI | gpt-4 | null | null | null | Industry | closed | North America | United States | first-party-or-cooperative-evals_47 | false | 2 | 2025 | {'title': 'Raising the Bar: Investigating the Values of Large Language Models via Generative Evolving Testing', 'url': 'https://arxiv.org/pdf/2406.14230'} | 3 | false |
| Meta | llama-2 | 70B | chat | null | Industry | open | North America | United States | first-party-or-cooperative-evals_47 | false | 2 | 2025 | {'title': 'Raising the Bar: Investigating the Values of Large Language Models via Generative Evolving Testing', 'url': 'https://arxiv.org/pdf/2406.14230'} | 3 | false |
| OpenAI | gpt-3.5 | null | null | turbo | Industry | closed | North America | United States | first-party-or-cooperative-evals_47 | false | 2 | 2025 | {'title': 'Raising the Bar: Investigating the Values of Large Language Models via Generative Evolving Testing', 'url': 'https://arxiv.org/pdf/2406.14230'} | 3 | false |
| Meta | llama-2 | 7B | chat | null | Industry | open | North America | United States | first-party-or-cooperative-evals_47 | false | 2 | 2025 | {'title': 'Raising the Bar: Investigating the Values of Large Language Models via Generative Evolving Testing', 'url': 'https://arxiv.org/pdf/2406.14230'} | 3 | false |
| Mistral | mistral | 7B | instruct | null | Industry | open | Europe | France | first-party-or-cooperative-evals_47 | false | 1 | 2025 | {'title': 'Raising the Bar: Investigating the Values of Large Language Models via Generative Evolving Testing', 'url': 'https://arxiv.org/pdf/2406.14230'} | 3 | false |
| Mistral | mistral-medium | null | null | null | Industry | closed | Europe | France | first-party-or-cooperative-evals_47 | false | 1 | 2025 | {'title': 'Raising the Bar: Investigating the Values of Large Language Models via Generative Evolving Testing', 'url': 'https://arxiv.org/pdf/2406.14230'} | 3 | false |
| OpenAI | gpt-3.5 | null | null | turbo | Industry | closed | North America | United States | first-party-or-cooperative-evals_47 | false | 1 | 2025 | {'title': 'Raising the Bar: Investigating the Values of Large Language Models via Generative Evolving Testing', 'url': 'https://arxiv.org/pdf/2406.14230'} | 3 | false |
| OpenAI | gpt-4 | null | null | null | Industry | closed | North America | United States | first-party-or-cooperative-evals_47 | false | 1 | 2025 | {'title': 'Raising the Bar: Investigating the Values of Large Language Models via Generative Evolving Testing', 'url': 'https://arxiv.org/pdf/2406.14230'} | 3 | false |
| Meta | llama-2 | 70B | chat | null | Industry | open | North America | United States | first-party-or-cooperative-evals_47 | false | 1 | 2025 | {'title': 'Raising the Bar: Investigating the Values of Large Language Models via Generative Evolving Testing', 'url': 'https://arxiv.org/pdf/2406.14230'} | 3 | false |
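Note that the `metadata` cells are Python-dict-repr strings (single-quoted keys), not JSON, so `json.loads` will reject them. A minimal sketch of parsing one cell with the standard library, using a metadata value taken verbatim from the table:

```python
import ast

# `ast.literal_eval` safely evaluates Python literals (dicts, lists,
# strings, numbers) without executing arbitrary code, which makes it
# suitable for dict-repr strings like the `metadata` column here.
raw = ("{'title': 'EcoServe: Designing Carbon-Aware AI Inference Systems', "
       "'url': 'https://www.arxiv.org/pdf/2502.05043'}")
meta = ast.literal_eval(raw)
print(meta["title"])  # EcoServe: Designing Carbon-Aware AI Inference Systems
```

Applying this per row (e.g. with `df["metadata"].map(ast.literal_eval)` in pandas) turns the column into proper dicts with `title`, `url`, and, for some rows, `release_date` keys.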