
UCR AI Safety Group

University
https://erfanshayegani.github.io/
Erf_Shayegani
erfanshayegani

AI & ML interests

AI Safety


Erfan-Shayegani authored a paper 5 months ago

Just Do It!? Computer-Use Agents Exhibit Blind Goal-Directedness
Paper • 2510.01670 • Published Oct 2, 2025

Erfan-Shayegani authored 2 papers 10 months ago

Misaligned Roles, Misplaced Images: Structural Input Perturbations Expose Multimodal Alignment Blind Spots
Paper • 2504.03735 • Published Apr 1, 2025

Unfair Alignment: Examining Safety Alignment Across Vision Encoder Layers in Vision-Language Models
Paper • 2411.04291 • Published Nov 6, 2024

Erfan-Shayegani authored a paper over 1 year ago

Cross-Modal Safety Alignment: Is textual unlearning all you need?
Paper • 2406.02575 • Published May 27, 2024

Erfan-Shayegani authored 2 papers almost 2 years ago

Survey of Vulnerabilities in Large Language Models Revealed by Adversarial Attacks
Paper • 2310.10844 • Published Oct 16, 2023

Jailbreak in pieces: Compositional Adversarial Attacks on Multi-Modal Language Models
Paper • 2307.14539 • Published Jul 26, 2023