css = """#myProgress { width: 100%; background-color: var(--block-border-color); border-radius: 2px; } #myBar { width: 0%; height: 30px; background-color: var(--block-title-background-fill); border-radius: 2px; } #progressText { position: absolute; top: 50%; left: 50%; transform: translate(-50%, -50%); color: var(--block-title-text-color); font-weight: regular; font-size: 14px; } h1, h2, h3, h4 { padding: var(--block-title-padding); color: var(--block-title-text-color); border: solid var(--block-title-border-width) var(--block-title-border-color); border-radius: var(--block-title-radius); background: var(--block-title-background-fill); width: fit-content; display: inline-block; } h4 { margin: 0px; color: var(--block-title-background-fill); background: var(--block-title-text-color); } #instructions { max-width: 980px; align-self: center; } .content-box { border-color: var(--block-border-color); border-radius: var(--block-radius); background: var(--block-background-fill); padding: var(--block-label-padding); } """ js_progress_bar = """ function move(start, end, total_duration, current_index, n_ann, total_ann) { var elem = document.getElementById("myBar"); elem.style.width = (n_ann / total_ann) * 100 + "%"; const index = current_index + 1; progressText.innerText = `${index} / ${total_ann} (Completed: ${n_ann})`; const waveform = document.querySelector('#audio_to_annotate #waveform div'); if (!waveform) return; const shadowRoot = waveform.shadowRoot; if (!shadowRoot) return; const canvases = shadowRoot.querySelector('.wrapper'); if (!canvases) return; const leftOffsetPct = start / total_duration; const widthPct = (end - start) / total_duration; // Ensure there is a single style element we can update let style = shadowRoot.querySelector('style[data-overlay-style="true"]'); if (!style) { style = document.createElement('style'); style.setAttribute('data-overlay-style', 'true'); shadowRoot.appendChild(style); } // Function to (re)compute and apply the rule const 
applyOverlayRule = () => { const w = canvases.offsetWidth || 0; style.textContent = ` .wrapper { position: relative; } .wrapper::after { content: ''; position: absolute; top: 0; left: ${w * leftOffsetPct}px; width: ${w * widthPct}px; height: 100%; background-color: blue; z-index: 999; opacity: 0.5; pointer-events: none; } `; }; // Apply once now applyOverlayRule(); // Attach a single ResizeObserver (memoized on the element) if (!canvases.__overlayResizeObserver) { const ro = new ResizeObserver(() => { applyOverlayRule(); }); ro.observe(canvases); canvases.__overlayResizeObserver = ro; // Optional: also respond to window resizes (covers zoom/scrollbar layout edge cases) const onWinResize = () => applyOverlayRule(); window.addEventListener('resize', onWinResize); canvases.__overlayWinResizeCleanup = () => window.removeEventListener('resize', onWinResize); } // Optional: cleanup helper you can call when tearing down the UI canvases.__cleanupOverlay = () => { if (canvases.__overlayResizeObserver) { canvases.__overlayResizeObserver.disconnect(); delete canvases.__overlayResizeObserver; } if (canvases.__overlayWinResizeCleanup) { canvases.__overlayWinResizeCleanup(); delete canvases.__overlayWinResizeCleanup; } }; } """ intro_html = """

Emotionality in Speech

Spoken language communicates more than just words. Speakers use tone, pitch, and other nonverbal cues to express emotions. In emotional speech, these cues can strengthen or even contradict the meaning of the words—for example, irony can make a positive phrase sound sarcastic. For this research, we will focus on three basic emotions (Happiness, Sadness, and Anger) plus Neutral.

This may seem like a small set, but it’s a great starting point for analyzing emotions in such a large collection—303 hours of interviews! (That’s almost 13 days of nonstop listening! 😮)

The ACT-UP Oral History Project

You will be annotating short audio clips extracted from the ACT UP (AIDS Coalition to Unleash Power) Oral History Project, developed by Sarah Schulman and Jim Hubbard. This archive features interviews with individuals who were part of ACT UP during the late 1980s and early 1990s, amidst the AIDS epidemic. In each video, the subjects talk about their lives before the epidemic, how they were affected by AIDS, and their work in ACT UP.

What will you be annotating?

You will annotate one emotion per short audio clip, based on the following criteria:

If you're uncertain about which emotion you are hearing, open the sidebar by clicking the arrow in the upper left corner. There, you'll find a list of major emotions grouped under each category!
""" examples_explanation = """

Audio examples

Let's look at examples of the four emotions you will annotate. Note that all these examples use the same sentence and are acted out, which makes the emotionality in speech more apparent. In a real-world setting, emotionality is more complex, so you will find a list of additional emotions within each of the three emotion categories (Happy, Sad, and Angry) to assist you during annotation.

""" side_bar_html = """

Major subclasses

🙂

Happiness

Affection, Goodwill, Joy, Satisfaction, Zest, Acceptance, Pride, Hope, Excitement, Relief, Passion, Caring

🙁

Sadness

Suffering, Regret, Displeasure, Embarrassment, Sympathy, Depression

😡

Anger

Irritability, Torment, Jealousy, Disgust, Rage, Frustration

""" start_annotating = """

How to use the annotation interface

  1. Open the sidebar by clicking the arrow in the upper left corner.
  2. Enter the participant ID you received via email.
  3. Click "Let's go!" — this will lock your participant ID.
  4. You’ll be directed to the annotation interface. The task will resume where you left off (on the last example you annotated), or start from the first audio if this is your first session.

Note: You can click on any part of the audio to start playing from that point. Please avoid clicking on the audio while it is playing (pause it first). This will not affect the program, but it will help us understand how you interact with the interface.

Below you can find an overview of the annotation interface.

"""