<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="UTF-8">
  <meta name="viewport" content="width=device-width, initial-scale=1.0">
  <title>Attention is All You Need - Presentation</title>
  <script src="https://cdn.tailwindcss.com"></script>
  <link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/font-awesome/6.4.0/css/all.min.css">
  <style>
    .slide {
      display: none;
      animation: fadeIn 0.5s ease-in-out;
    }
    .slide.active {
      display: block;
    }
    @keyframes fadeIn {
      from { opacity: 0; }
      to { opacity: 1; }
    }
    .code-block {
      font-family: 'Courier New', monospace;
      background-color: #2d3748;
      color: #f7fafc;
      padding: 1rem;
      border-radius: 0.5rem;
      overflow-x: auto;
    }
    .attention-formula {
      font-size: 1.5rem;
      color: #4fd1c5;
      text-align: center;
      margin: 1rem 0;
    }
  </style>
</head>
<body class="bg-gray-100 font-sans">
  <div class="container mx-auto px-4 py-8 max-w-6xl">
    <!-- Title Slide -->
    <div class="slide active bg-white rounded-xl shadow-2xl p-8 mb-8">
      <div class="flex flex-col items-center justify-center h-full">
        <div class="text-5xl font-bold text-indigo-700 mb-6 text-center">
          <i class="fas fa-brain mr-4"></i>Attention is All You Need
        </div>
        <div class="text-2xl text-gray-600 mb-8 text-center">
          A revolution in sequence processing
        </div>
        <div class="text-xl text-gray-500 mb-12 text-center">
          Ashish Vaswani, Noam Shazeer, Niki Parmar, et al. (2017)
        </div>
        <div class="flex space-x-4">
          <div class="bg-indigo-100 text-indigo-800 px-4 py-2 rounded-full">
            <i class="fas fa-project-diagram mr-2"></i>Transformer
          </div>
          <div class="bg-blue-100 text-blue-800 px-4 py-2 rounded-full">
            <i class="fas fa-cogs mr-2"></i>Self-Attention
          </div>
          <div class="bg-purple-100 text-purple-800 px-4 py-2 rounded-full">
            <i class="fas fa-language mr-2"></i>NLP
          </div>
        </div>
      </div>
    </div>
    <!-- Problem Statement -->
    <div class="slide bg-white rounded-xl shadow-2xl p-8 mb-8">
      <h2 class="text-3xl font-bold text-indigo-700 mb-6">
        <i class="fas fa-question-circle mr-4"></i>The problem with existing approaches
      </h2>
      <div class="grid grid-cols-1 md:grid-cols-2 gap-6">
        <div class="bg-red-50 p-6 rounded-lg border-l-4 border-red-500">
          <h3 class="text-xl font-semibold text-red-700 mb-3">RNN/LSTM</h3>
          <ul class="list-disc pl-5 text-gray-700 space-y-2">
            <li>Strictly sequential processing</li>
            <li>Struggles with long-range dependencies</li>
            <li>Hard to parallelize</li>
            <li>Computationally expensive</li>
          </ul>
        </div>
        <div class="bg-blue-50 p-6 rounded-lg border-l-4 border-blue-500">
          <h3 class="text-xl font-semibold text-blue-700 mb-3">CNNs for sequences</h3>
          <ul class="list-disc pl-5 text-gray-700 space-y-2">
            <li>Limited receptive field</li>
            <li>Many stacked layers needed to capture global dependencies</li>
            <li>The number of operations relating two positions grows with their distance</li>
          </ul>
        </div>
      </div>
      <div class="mt-8 bg-yellow-50 p-6 rounded-lg border-l-4 border-yellow-500">
        <h3 class="text-xl font-semibold text-yellow-700 mb-3">The solution: an attention mechanism</h3>
        <p class="text-gray-700">
          It lets the model access any part of the input sequence directly,
          regardless of distance, with a constant number of operations.
        </p>
      </div>
    </div>
    <!-- Key Innovations -->
    <div class="slide bg-white rounded-xl shadow-2xl p-8 mb-8">
      <h2 class="text-3xl font-bold text-indigo-700 mb-6">
        <i class="fas fa-lightbulb mr-4"></i>Key innovations
      </h2>
      <div class="grid grid-cols-1 md:grid-cols-3 gap-6">
        <div class="bg-green-50 p-6 rounded-lg transform transition hover:scale-105">
          <div class="text-4xl text-green-600 mb-4 text-center">
            <i class="fas fa-arrows-alt-h"></i>
          </div>
          <h3 class="text-xl font-semibold text-green-700 mb-3 text-center">Self-Attention</h3>
          <p class="text-gray-700">
            Direct connections between all elements of the sequence, regardless of distance
          </p>
        </div>
        <div class="bg-purple-50 p-6 rounded-lg transform transition hover:scale-105">
          <div class="text-4xl text-purple-600 mb-4 text-center">
            <i class="fas fa-layer-group"></i>
          </div>
          <h3 class="text-xl font-semibold text-purple-700 mb-3 text-center">Multi-Head Attention</h3>
          <p class="text-gray-700">
            Several attention mechanisms running in parallel, each learning a different kind of dependency
          </p>
        </div>
        <div class="bg-indigo-50 p-6 rounded-lg transform transition hover:scale-105">
          <div class="text-4xl text-indigo-600 mb-4 text-center">
            <i class="fas fa-bolt"></i>
          </div>
          <h3 class="text-xl font-semibold text-indigo-700 mb-3 text-center">Positional Encoding</h3>
          <p class="text-gray-700">
            Order information is injected through dedicated position embeddings
          </p>
        </div>
      </div>
    </div>
    <!-- Transformer Architecture -->
    <div class="slide bg-white rounded-xl shadow-2xl p-8 mb-8">
      <h2 class="text-3xl font-bold text-indigo-700 mb-6">
        <i class="fas fa-project-diagram mr-4"></i>The Transformer architecture
      </h2>
      <div class="flex justify-center mb-8">
        <img src="https://miro.medium.com/v2/resize:fit:1400/1*BHzGVskWGS_3jEcYYiLmiQ.png"
             alt="Transformer Architecture"
             class="rounded-lg shadow-md max-w-full h-auto">
      </div>
      <div class="grid grid-cols-1 md:grid-cols-2 gap-6">
        <div>
          <h3 class="text-xl font-semibold text-blue-600 mb-3">Encoder</h3>
          <ul class="list-disc pl-5 text-gray-700 space-y-2">
            <li>6 identical layers</li>
            <li>Each layer contains:
              <ul class="list-circle pl-5 mt-2">
                <li>Multi-head self-attention</li>
                <li>Position-wise feed-forward network</li>
              </ul>
            </li>
            <li>Residual connections and layer normalization (see the sketch below)</li>
          </ul>
        </div>
        <div>
          <h3 class="text-xl font-semibold text-purple-600 mb-3">Decoder</h3>
          <ul class="list-disc pl-5 text-gray-700 space-y-2">
            <li>6 identical layers</li>
            <li>In addition to the encoder sublayers, each layer contains:
              <ul class="list-circle pl-5 mt-2">
                <li>Masked multi-head attention</li>
                <li>Multi-head attention over the encoder output</li>
              </ul>
            </li>
            <li>Residual connections and layer normalization as well</li>
          </ul>
        </div>
      </div>
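      <div class="mt-6">
        <h3 class="text-xl font-semibold text-green-600 mb-3">Sketch: one encoder layer</h3>
        <p class="text-gray-700 mb-3">
          A minimal sketch of the residual + layer-normalization pattern described above,
          using the paper's post-norm form LayerNorm(x + Sublayer(x)). It relies on PyTorch's
          built-in nn.MultiheadAttention rather than the paper's own code; the class name is
          illustrative, while the defaults follow the base model's hyperparameters.
        </p>
        <div class="code-block">
          <pre>import torch
import torch.nn as nn

class EncoderLayer(nn.Module):
    def __init__(self, d_model=512, h=8, d_ff=2048, dropout=0.1):
        super().__init__()
        self.self_attn = nn.MultiheadAttention(d_model, h, dropout=dropout,
                                               batch_first=True)
        self.ff = nn.Sequential(nn.Linear(d_model, d_ff), nn.ReLU(),
                                nn.Linear(d_ff, d_model))
        self.norm1 = nn.LayerNorm(d_model)
        self.norm2 = nn.LayerNorm(d_model)
        self.dropout = nn.Dropout(dropout)

    def forward(self, x, key_padding_mask=None):
        # Sublayer 1: multi-head self-attention, residual connection, layer norm
        attn_out, _ = self.self_attn(x, x, x, key_padding_mask=key_padding_mask)
        x = self.norm1(x + self.dropout(attn_out))
        # Sublayer 2: position-wise feed-forward, residual connection, layer norm
        x = self.norm2(x + self.dropout(self.ff(x)))
        return x</pre>
        </div>
      </div>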
    </div>
    <!-- Scaled Dot-Product Attention -->
    <div class="slide bg-white rounded-xl shadow-2xl p-8 mb-8">
      <h2 class="text-3xl font-bold text-indigo-700 mb-6">
        <i class="fas fa-atom mr-4"></i>Scaled Dot-Product Attention
      </h2>
      <div class="attention-formula">
        Attention(Q, K, V) = softmax(QK<sup>T</sup>/√d<sub>k</sub>)V
      </div>
      <div class="grid grid-cols-1 md:grid-cols-2 gap-8">
        <div>
          <h3 class="text-xl font-semibold text-blue-600 mb-3">Components</h3>
          <ul class="list-disc pl-5 text-gray-700 space-y-2">
            <li><strong>Q (Query)</strong> - what we are looking for</li>
            <li><strong>K (Key)</strong> - what each position offers for matching</li>
            <li><strong>V (Value)</strong> - the actual information that gets aggregated</li>
            <li><strong>d<sub>k</sub></strong> - dimensionality of the keys (used in the scaling factor)</li>
          </ul>
          <div class="mt-6 bg-blue-50 p-4 rounded-lg">
            <h4 class="font-semibold text-blue-700 mb-2">Why the scaling?</h4>
            <p class="text-gray-700">
              For large d<sub>k</sub> the dot products grow large in magnitude,
              pushing the softmax into regions with extremely small gradients.
            </p>
          </div>
        </div>
        <div>
          <h3 class="text-xl font-semibold text-purple-600 mb-3">Python implementation</h3>
          <div class="code-block">
            <pre>import math
import torch
import torch.nn.functional as F

def attention(query, key, value, mask=None):
    "Scaled dot-product attention: softmax(QK^T / sqrt(d_k)) V."
    d_k = query.size(-1)
    scores = torch.matmul(query, key.transpose(-2, -1)) / math.sqrt(d_k)
    if mask is not None:
        # Masked positions get a large negative score -> ~0 after softmax
        scores = scores.masked_fill(mask == 0, -1e9)
    p_attn = F.softmax(scores, dim=-1)
    return torch.matmul(p_attn, value), p_attn</pre>
          </div>
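          <p class="text-gray-700 mt-4 mb-2">
            A quick shape check for the function above; the tensor sizes are illustrative,
            not taken from the paper.
          </p>
          <div class="code-block">
            <pre>q = k = v = torch.rand(2, 10, 64)   # (batch, seq_len, d_k)
out, p_attn = attention(q, k, v)
print(out.shape, p_attn.shape)      # torch.Size([2, 10, 64]) torch.Size([2, 10, 10])</pre>
          </div>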
        </div>
      </div>
    </div>
    <!-- Multi-Head Attention -->
    <div class="slide bg-white rounded-xl shadow-2xl p-8 mb-8">
      <h2 class="text-3xl font-bold text-indigo-700 mb-6">
        <i class="fas fa-people-arrows mr-4"></i>Multi-Head Attention
      </h2>
      <div class="flex justify-center mb-8">
        <img src="https://miro.medium.com/v2/resize:fit:1400/1*_92bns4XYKhZ6Yx1NY6R2A.png"
             alt="Multi-Head Attention"
             class="rounded-lg shadow-md max-w-full h-auto" style="max-height: 200px;">
      </div>
      <div class="grid grid-cols-1 md:grid-cols-2 gap-8">
        <div>
          <h3 class="text-xl font-semibold text-blue-600 mb-3">Concept</h3>
          <ul class="list-disc pl-5 text-gray-700 space-y-2">
            <li>Several attention heads (8 in the paper)</li>
            <li>Each head learns a different aspect of the dependencies</li>
            <li>Linear projections of Q, K, V before the attention</li>
            <li>The head outputs are concatenated and projected back</li>
          </ul>
          <div class="mt-6 bg-yellow-50 p-4 rounded-lg">
            <h4 class="font-semibold text-yellow-700 mb-2">Benefits</h4>
            <p class="text-gray-700">
              Allows the model to jointly attend to information from different
              representation subspaces at different positions.
            </p>
          </div>
        </div>
        <div>
          <h3 class="text-xl font-semibold text-purple-600 mb-3">Implementation</h3>
          <div class="code-block">
            <pre>import torch.nn as nn

class MultiHeadedAttention(nn.Module):
    def __init__(self, h, d_model):
        super().__init__()
        self.d_k = d_model // h
        self.h = h
        # 4 linear layers: projections for Q, K, V plus the output projection
        self.linears = clones(nn.Linear(d_model, d_model), 4)

    def forward(self, query, key, value, mask=None):
        nbatches = query.size(0)
        # 1) Project and reshape into h heads: (batch, h, seq_len, d_k)
        query, key, value = [
            lin(x).view(nbatches, -1, self.h, self.d_k).transpose(1, 2)
            for lin, x in zip(self.linears, (query, key, value))
        ]
        # 2) Apply attention to all heads in parallel
        x, self.attn = attention(query, key, value, mask)
        # 3) Concatenate the heads and apply the final linear projection
        x = x.transpose(1, 2).contiguous() \
             .view(nbatches, -1, self.h * self.d_k)
        return self.linears[-1](x)</pre>
          </div>
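          <p class="text-gray-700 mt-4 mb-2">
            The clones helper used above is not defined on this slide; below is the usual
            definition (as in The Annotated Transformer, which this code follows), plus an
            illustrative shape check.
          </p>
          <div class="code-block">
            <pre>import copy
import torch

def clones(module, N):
    "Produce N identical, independently parameterized copies of a module."
    return nn.ModuleList([copy.deepcopy(module) for _ in range(N)])

mha = MultiHeadedAttention(h=8, d_model=512)
x = torch.rand(2, 10, 512)      # (batch, seq_len, d_model)
print(mha(x, x, x).shape)       # torch.Size([2, 10, 512])</pre>
          </div>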
        </div>
      </div>
    </div>
    <!-- Positional Encoding -->
    <div class="slide bg-white rounded-xl shadow-2xl p-8 mb-8">
      <h2 class="text-3xl font-bold text-indigo-700 mb-6">
        <i class="fas fa-map-marked-alt mr-4"></i>Positional Encoding
      </h2>
      <div class="grid grid-cols-1 md:grid-cols-2 gap-8">
        <div>
          <h3 class="text-xl font-semibold text-blue-600 mb-3">Why is it needed?</h3>
          <p class="text-gray-700 mb-4">
            Since the Transformer has no recurrence and no convolutions, it needs an
            explicit representation of the order of elements in the sequence.
          </p>
          <div class="bg-green-50 p-4 rounded-lg mb-4">
            <h4 class="font-semibold text-green-700 mb-2">Formula</h4>
            <div class="text-center">
              PE<sub>(pos,2i)</sub> = sin(pos/10000<sup>2i/d<sub>model</sub></sup>)<br>
              PE<sub>(pos,2i+1)</sub> = cos(pos/10000<sup>2i/d<sub>model</sub></sup>)
            </div>
          </div>
          <ul class="list-disc pl-5 text-gray-700 space-y-2">
            <li>pos - position in the sequence</li>
            <li>i - index of the embedding dimension</li>
            <li>d<sub>model</sub> - dimensionality of the embeddings</li>
          </ul>
        </div>
        <div>
          <h3 class="text-xl font-semibold text-purple-600 mb-3">Visualization</h3>
          <div class="flex justify-center">
            <img src="https://jalammar.github.io/images/t/transformer_positional_encoding_example.png"
                 alt="Positional Encoding"
                 class="rounded-lg shadow-md max-w-full h-auto">
          </div>
          <p class="text-gray-700 mt-4 text-center">
            Sinusoids of different frequencies create a unique pattern for every position
          </p>
        </div>
      </div>
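      <div class="mt-6">
        <h3 class="text-xl font-semibold text-green-600 mb-3">Sketch: sinusoidal encoding in PyTorch</h3>
        <p class="text-gray-700 mb-3">
          A minimal sketch of the formula above, close to the version in The Annotated Transformer.
          The module name and the max_len default are illustrative; d_model is assumed to be even.
        </p>
        <div class="code-block">
          <pre>import math
import torch
import torch.nn as nn

class PositionalEncoding(nn.Module):
    def __init__(self, d_model, max_len=5000):
        super().__init__()
        pe = torch.zeros(max_len, d_model)
        position = torch.arange(0, max_len, dtype=torch.float).unsqueeze(1)
        # 1 / 10000^(2i / d_model), computed in log space for numerical stability
        div_term = torch.exp(torch.arange(0, d_model, 2).float()
                             * (-math.log(10000.0) / d_model))
        pe[:, 0::2] = torch.sin(position * div_term)  # even dimensions
        pe[:, 1::2] = torch.cos(position * div_term)  # odd dimensions
        self.register_buffer("pe", pe.unsqueeze(0))   # (1, max_len, d_model)

    def forward(self, x):
        # x: (batch, seq_len, d_model) -- add the position pattern to the embeddings
        return x + self.pe[:, : x.size(1)]</pre>
        </div>
      </div>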
    </div>
    <!-- Results and Impact -->
    <div class="slide bg-white rounded-xl shadow-2xl p-8 mb-8">
      <h2 class="text-3xl font-bold text-indigo-700 mb-6">
        <i class="fas fa-chart-line mr-4"></i>Results and impact
      </h2>
      <div class="grid grid-cols-1 md:grid-cols-2 gap-8">
        <div>
          <h3 class="text-xl font-semibold text-blue-600 mb-3">Performance</h3>
          <div class="bg-white p-4 rounded-lg border border-gray-200 shadow-sm mb-4">
            <h4 class="font-semibold text-gray-700 mb-2">WMT 2014 English-to-German</h4>
            <div class="flex items-center">
              <div class="w-3/4 bg-gray-200 rounded-full h-4">
                <div class="bg-blue-600 h-4 rounded-full" style="width: 75%"></div>
              </div>
              <span class="ml-2 font-bold">28.4 BLEU (Transformer)</span>
            </div>
            <div class="flex items-center mt-2">
              <div class="w-3/4 bg-gray-200 rounded-full h-4">
                <div class="bg-blue-400 h-4 rounded-full" style="width: 65%"></div>
              </div>
              <span class="ml-2">25.8 BLEU (Previous best)</span>
            </div>
          </div>
          <div class="bg-white p-4 rounded-lg border border-gray-200 shadow-sm">
            <h4 class="font-semibold text-gray-700 mb-2">WMT 2014 English-to-French</h4>
            <div class="flex items-center">
              <div class="w-3/4 bg-gray-200 rounded-full h-4">
                <div class="bg-green-600 h-4 rounded-full" style="width: 85%"></div>
              </div>
              <span class="ml-2 font-bold">41.8 BLEU (Transformer)</span>
            </div>
            <div class="flex items-center mt-2">
              <div class="w-3/4 bg-gray-200 rounded-full h-4">
                <div class="bg-green-400 h-4 rounded-full" style="width: 80%"></div>
              </div>
              <span class="ml-2">40.4 BLEU (Previous best)</span>
            </div>
          </div>
          <div class="mt-6 bg-blue-50 p-4 rounded-lg">
            <h4 class="font-semibold text-blue-700 mb-2">Computational efficiency</h4>
            <p class="text-gray-700">
              Roughly 3-10x faster to train than recurrent models on GPU/TPU thanks to full parallelization across sequence positions
            </p>
          </div>
        </div>
        <div>
          <h3 class="text-xl font-semibold text-purple-600 mb-3">Impact on NLP</h3>
          <div class="space-y-4">
            <div class="flex items-start">
              <div class="bg-purple-100 p-3 rounded-full mr-4">
                <i class="fas fa-robot text-purple-600"></i>
              </div>
              <div>
                <h4 class="font-semibold text-gray-800">BERT (2018)</h4>
                <p class="text-gray-600">Bidirectional Encoder Representations from Transformers</p>
              </div>
            </div>
            <div class="flex items-start">
              <div class="bg-green-100 p-3 rounded-full mr-4">
                <i class="fas fa-language text-green-600"></i>
              </div>
              <div>
                <h4 class="font-semibold text-gray-800">GPT series (2018-2020)</h4>
                <p class="text-gray-600">Generative Pre-trained Transformer (GPT-1, GPT-2, GPT-3)</p>
              </div>
            </div>
            <div class="flex items-start">
              <div class="bg-red-100 p-3 rounded-full mr-4">
                <i class="fas fa-exchange-alt text-red-600"></i>
              </div>
              <div>
                <h4 class="font-semibold text-gray-800">T5 (2019)</h4>
                <p class="text-gray-600">Text-to-Text Transfer Transformer</p>
              </div>
            </div>
            <div class="flex items-start">
              <div class="bg-yellow-100 p-3 rounded-full mr-4">
                <i class="fas fa-comments text-yellow-600"></i>
              </div>
              <div>
                <h4 class="font-semibold text-gray-800">ChatGPT (2022)</h4>
                <p class="text-gray-600">Built on the Transformer architecture</p>
              </div>
            </div>
          </div>
          <div class="mt-6 bg-purple-50 p-4 rounded-lg">
            <h4 class="font-semibold text-purple-700 mb-2">Beyond NLP</h4>
            <p class="text-gray-700">
              Transformers are successfully applied in computer vision (ViT), audio processing, bioinformatics, and other domains
            </p>
          </div>
        </div>
      </div>
    </div>
    <!-- Conclusion -->
    <div class="slide bg-white rounded-xl shadow-2xl p-8 mb-8">
      <div class="flex flex-col items-center justify-center h-full">
        <div class="text-4xl font-bold text-indigo-700 mb-6 text-center">
          <i class="fas fa-graduation-cap mr-4"></i>Conclusions
        </div>
        <div class="w-full max-w-2xl space-y-6">
          <div class="flex items-start bg-indigo-50 p-4 rounded-lg">
            <div class="bg-indigo-100 p-3 rounded-full mr-4">
              <i class="fas fa-check text-indigo-600"></i>
            </div>
            <div>
              <h3 class="font-semibold text-indigo-800">A new paradigm</h3>
              <p class="text-gray-700">
                The Transformer introduced a fully attention-based architecture, dispensing with recurrent and convolutional layers
              </p>
            </div>
          </div>
          <div class="flex items-start bg-green-50 p-4 rounded-lg">
            <div class="bg-green-100 p-3 rounded-full mr-4">
              <i class="fas fa-bolt text-green-600"></i>
            </div>
            <div>
              <h3 class="font-semibold text-green-800">Computational efficiency</h3>
              <p class="text-gray-700">
                A fully parallelizable architecture: faster training and better quality on long sequences
              </p>
            </div>
          </div>
          <div class="flex items-start bg-purple-50 p-4 rounded-lg">
            <div class="bg-purple-100 p-3 rounded-full mr-4">
              <i class="fas fa-project-diagram text-purple-600"></i>
            </div>
            <div>
              <h3 class="font-semibold text-purple-800">Flexibility</h3>
              <p class="text-gray-700">
                The architecture adapts easily to a wide range of tasks (translation, classification, generation)
              </p>
            </div>
          </div>
          <div class="flex items-start bg-yellow-50 p-4 rounded-lg">
            <div class="bg-yellow-100 p-3 rounded-full mr-4">
              <i class="fas fa-star text-yellow-600"></i>
            </div>
            <div>
              <h3 class="font-semibold text-yellow-800">The future</h3>
              <p class="text-gray-700">
                The Transformer has become the foundation of most modern NLP models and is finding applications in other fields
              </p>
            </div>
          </div>
        </div>
      </div>
    </div>
    <!-- Navigation -->
    <div class="flex justify-between mt-8">
      <button id="prevBtn" class="bg-indigo-600 hover:bg-indigo-700 text-white font-bold py-3 px-6 rounded-full transition transform hover:scale-105">
        <i class="fas fa-arrow-left mr-2"></i>Back
      </button>
      <div class="flex space-x-2">
        <button class="page-indicator w-3 h-3 rounded-full bg-indigo-300"></button>
        <button class="page-indicator w-3 h-3 rounded-full bg-indigo-300"></button>
        <button class="page-indicator w-3 h-3 rounded-full bg-indigo-300"></button>
        <button class="page-indicator w-3 h-3 rounded-full bg-indigo-300"></button>
        <button class="page-indicator w-3 h-3 rounded-full bg-indigo-300"></button>
        <button class="page-indicator w-3 h-3 rounded-full bg-indigo-300"></button>
        <button class="page-indicator w-3 h-3 rounded-full bg-indigo-300"></button>
        <button class="page-indicator w-3 h-3 rounded-full bg-indigo-300"></button>
        <button class="page-indicator w-3 h-3 rounded-full bg-indigo-300"></button>
      </div>
      <button id="nextBtn" class="bg-indigo-600 hover:bg-indigo-700 text-white font-bold py-3 px-6 rounded-full transition transform hover:scale-105">
        Next<i class="fas fa-arrow-right ml-2"></i>
      </button>
    </div>
  </div>
  <script>
    document.addEventListener('DOMContentLoaded', function() {
      const slides = document.querySelectorAll('.slide');
      const prevBtn = document.getElementById('prevBtn');
      const nextBtn = document.getElementById('nextBtn');
      const pageIndicators = document.querySelectorAll('.page-indicator');
      let currentSlide = 0;

      // Initialize first slide and indicator
      slides[0].classList.add('active');
      pageIndicators[0].classList.remove('bg-indigo-300');
      pageIndicators[0].classList.add('bg-indigo-600');

      function showSlide(index) {
        // Hide all slides
        slides.forEach(slide => slide.classList.remove('active'));
        // Show current slide
        slides[index].classList.add('active');
        // Update indicators
        pageIndicators.forEach(indicator => {
          indicator.classList.remove('bg-indigo-600');
          indicator.classList.add('bg-indigo-300');
        });
        pageIndicators[index].classList.remove('bg-indigo-300');
        pageIndicators[index].classList.add('bg-indigo-600');
        // Update button states
        prevBtn.disabled = index === 0;
        nextBtn.disabled = index === slides.length - 1;
      }

      // Next slide
      nextBtn.addEventListener('click', function() {
        if (currentSlide < slides.length - 1) {
          currentSlide++;
          showSlide(currentSlide);
        }
      });

      // Previous slide
      prevBtn.addEventListener('click', function() {
        if (currentSlide > 0) {
          currentSlide--;
          showSlide(currentSlide);
        }
      });

      // Keyboard navigation
      document.addEventListener('keydown', function(e) {
        if (e.key === 'ArrowRight' || e.key === ' ') {
          if (currentSlide < slides.length - 1) {
            currentSlide++;
            showSlide(currentSlide);
          }
        } else if (e.key === 'ArrowLeft') {
          if (currentSlide > 0) {
            currentSlide--;
            showSlide(currentSlide);
          }
        }
      });

      // Click on indicator
      pageIndicators.forEach((indicator, index) => {
        indicator.addEventListener('click', function() {
          currentSlide = index;
          showSlide(currentSlide);
        });
      });
    });
  </script>
</body>
</html>