shoaibfd26 committed on
Commit f010d8e · verified · 1 Parent(s): 6282cd6

Update app.py

Files changed (1):
  app.py (+13, −11)
app.py CHANGED
@@ -36,23 +36,25 @@ def app(text, model):
 interface = gr.Interface(
     fn=app,
     inputs=[
-        gr.Textbox(label="Input Text", placeholder="Enter a sentence to visualize attention."),
+        gr.Textbox(label="Input Text", placeholder="Enter a sentence"),
         gr.Dropdown(
-            label="Choose a Transformer Model",
+            label="Model",
             choices=["bert-base-uncased", "distilbert-base-uncased", "roberta-base"],
             value=DEFAULT_MODEL
         )
     ],
     outputs=gr.Plot(label="Attention Map"),
-    title="🔍 Transformer Attention Visualizer",
-    description="Visualize how transformer models focus on different parts of input text using attention maps.",
-    article="""
-    ## How It Works
-    This tool uses Hugging Face transformer models to extract **self-attention scores** from the last layer of the model.
-    - 🧠 The attention map shows how each token attends to every other token.
-    - 📊 You’re seeing Layer `-1` (last) and Head `0` for simplicity.
-    - 🧪 Try different models and sentences to explore how they understand context differently.
-    This is especially helpful for researchers, students, and NLP practitioners interested in interpretability.
+    title="Transformer Attention Visualizer",
+    description="""
+    Understand how transformer models interpret text through self-attention.
+
+    🧠 This tool extracts attention weights from the **last layer** and **first attention head** of popular transformer models.
+
+    🔍 The attention map shows how each token focuses on others during processing.
+
+    📚 Try different models and sentences to compare how they handle language and context.
+
+    Ideal for NLP learners, researchers, and anyone curious about how transformers "pay attention".
     """
 )
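The attention weights this app plots come from scaled dot-product self-attention: each row of the map is a softmax over query–key similarities, so it sums to 1. A minimal NumPy sketch of that computation (illustrative only; the app itself presumably reads the weights from the model's attention outputs rather than recomputing them, and the `attention_weights` helper here is hypothetical):

```python
import numpy as np

def attention_weights(q, k):
    """Scaled dot-product attention weights: softmax(Q K^T / sqrt(d)), row-wise."""
    d = q.shape[-1]
    scores = q @ k.T / np.sqrt(d)
    # Numerically stable softmax over each row.
    e = np.exp(scores - scores.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

rng = np.random.default_rng(0)
x = rng.standard_normal((4, 8))   # 4 toy "tokens" with 8-dim embeddings
w = attention_weights(x, x)       # self-attention: queries and keys from the same sequence
print(w.shape)                    # (4, 4): row i shows how token i attends to every token
print(np.allclose(w.sum(axis=-1), 1.0))  # True: each row is a probability distribution
```

The `[-1][0]`-style indexing described in the UI ("last layer", "head 0") would then select one such seq × seq matrix out of the per-layer, per-head attention tensors a transformer returns.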