shreyanshpadarha committed on
Commit e48d39a · verified · 1 Parent(s): c83b5c8

Update README.md

Files changed (1)
  1. README.md +23 -2
README.md CHANGED
@@ -1,10 +1,31 @@
 ---
 title: README
-emoji: 🦀
+emoji:
 colorFrom: blue
 colorTo: indigo
 sdk: static
 pinned: false
 ---

-Edit this `README.md` markdown file to author your organization card.
+# Oxford Reasoning and Machine Learning Lab
+
+We conduct research on **AI evaluation, safety, and human–AI interaction** to advance understanding of how large language models reason, solve complex problems, and collaborate with humans.
+
+Our work combines **theoretical rigour** with **empirical investigation** to study how large language models process information, perform tasks, and behave in real-world settings.
+
+## Research Areas
+
+### Benchmarks and Evaluation
+We study the science of LLM evaluation, using **systematic reviews**, **benchmark analysis**, and **statistical modelling** to examine the validity of existing evaluation practices. We develop new benchmarks and evaluation frameworks to test the limits of LLM reasoning, especially in **adversarial**, **interactive**, and **low-resource language** settings.
+
+### Agentic AI for Science
+We build agentic AI systems that automate and augment key stages of the scientific process, including **literature discovery**, **evidence synthesis**, **hypothesis generation**, and **decision support**. A central focus is developing agents that are **reliable**, **transparent**, and **grounded in domain expertise** for real-world scientific and policy applications.
+
+### AI Safety
+We investigate the risks that advanced AI systems may pose to individuals and society. Our work spans the spectrum of harms, from **bias and toxicity in language models** to **misalignment in agentic systems**, alongside technical methods for mitigation and research on AI governance.
+
+### Human–AI Interaction
+We conduct large-scale empirical studies of how people use and respond to AI systems in decision-making contexts. This includes our landmark study of **1,300 participants** examining the use of LLMs in **medical self-diagnosis** and healthcare-related decision support.
+
+## Links
+- Website: [oxrml.com](https://oxrml.com/)