Update README.md
README.md CHANGED

@@ -32,58 +32,46 @@ This is a Vision Transformer (ViT) model fine-tuned using Low-Rank Adaptation (L
2. Start frequency (Hz)
3. End frequency (Hz)

- [old lines 35–40: content not captured in this extract]
- <div style="background: #f8f9fa; border-radius: 8px; padding: 20px; margin-bottom: 20px; border-left: 4px solid #
- <h2 style="margin-top: 0;"
<div style="display: flex; flex-wrap: wrap; gap: 15px;">
<div style="flex: 1; min-width: 250px; background: white; border-radius: 8px; padding: 15px; box-shadow: 0 2px 4px rgba(0,0,0,0.1);">
- <h4 style="margin-top: 0;">
- <p><a href="https://
</div>
<div style="flex: 1; min-width: 250px; background: white; border-radius: 8px; padding: 15px; box-shadow: 0 2px 4px rgba(0,0,0,0.1);">
- <h4 style="margin-top: 0;">
- <p><a href="https://
</div>
</div>
</div>

- <div style="background: #f8f9fa; border-radius: 8px; padding: 20px;
- <h2 style="margin-top: 0;">🗂 Dataset Structure</h2>
- <div style="background: white; border-radius: 8px; padding: 15px; box-shadow: 0 2px 4px rgba(0,0,0,0.1); margin-bottom: 15px;">
- <h4 style="margin-top: 0;">Content Includes:</h4>
- <ul>
- <li>Spectrogram images</li>
- <li>Corresponding labels including chirp parameters and locations</li>
- <li>Metadata about generation parameters</li>
- </ul>
- </div>
-
- <div style="background: white; border-radius: 8px; padding: 15px; box-shadow: 0 2px 4px rgba(0,0,0,0.1);">
- <h4 style="margin-top: 0;">Data Collection and Processing</h4>
- <p>Data was synthetically generated using the provided Python package, with parameters randomly sampled from physiologically relevant ranges.</p>
- <p><strong>Repository:</strong> <a href="https://github.com/nbahador/chirp_spectrogram_generator/tree/main">GitHub Package</a></p>
- </div>
- </div>
-
- <div style="background: #f8f9fa; border-radius: 8px; padding: 20px; margin-bottom: 20px; border-left: 4px solid #673ab7;">
<h2 style="margin-top: 0;">📄 Citation</h2>
<div style="background: white; border-radius: 8px; padding: 15px; box-shadow: 0 2px 4px rgba(0,0,0,0.1);">
<p>Bahador, N., & Lankarany, M. (2025). Chirp localization via fine-tuned transformer model: A proof-of-concept study. arXiv preprint arXiv:2503.22713. <a href="https://arxiv.org/pdf/2503.22713">[PDF]</a></p>
</div>
- </div>
-
- <div style="background: #f8f9fa; border-radius: 8px; padding: 20px; border-left: 4px solid #00bcd4;">
- <h2 style="margin-top: 0;">ℹ️ More Information</h2>
- <p>For more information and generation code, visit the <a href="https://github.com/nbahador/Train_Spectrogram_Transformer">GitHub repository</a>.</p>
-
- <div style="margin-top: 15px; padding-top: 15px; border-top: 1px solid #e0e0e0;">
- <h4 style="margin-bottom: 5px;">Card Author</h4>
- <p><a href="https://www.linkedin.com/in/nooshin-bahador-30348950/">Nooshin Bahador</a></p>
- </div>
</div>
2. Start frequency (Hz)
3. End frequency (Hz)

+ <div style="background: #f8f9fa; border-radius: 8px; padding: 20px; margin-bottom: 20px; border-left: 4px solid #4285f4;">
+ <h2 style="margin-top: 0;">🔧 Fine-Tuning Details</h2>
+ <div style="background: white; border-radius: 8px; padding: 15px; box-shadow: 0 2px 4px rgba(0,0,0,0.1);">
+ <ul>
+ <li><strong>Framework:</strong> PyTorch</li>
+ <li><strong>Architecture:</strong> Pre-trained Vision Transformer (ViT)</li>
+ <li><strong>Adaptation Method:</strong> LoRA (Low-Rank Adaptation)</li>
+ <li><strong>Task:</strong> Regression on time-frequency representations</li>
+ <li><strong>Training Protocol:</strong> Automatic Mixed Precision (AMP), early stopping, learning-rate scheduling</li>
+ <li><strong>Output:</strong> Quantitative predictions + optional natural language descriptions</li>
+ </ul>
+ </div>
+ </div>
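The added Fine-Tuning Details list can be made concrete with a small sketch of the LoRA update itself. This is an editor's illustration in plain NumPy rather than the repository's PyTorch code; the hidden size (768), rank (8), and scaling factor (alpha = 16) are assumptions for the example, not the model's actual hyperparameters.

```python
import numpy as np

def lora_forward(x, W, A, B, alpha=16):
    """Forward pass through a frozen linear layer with a LoRA update.

    W : (d_out, d_in) frozen pre-trained weight (never trained)
    A : (r, d_in)     trainable down-projection, r << min(d_out, d_in)
    B : (d_out, r)    trainable up-projection, zero-initialized
    Effective weight is W + (alpha / r) * B @ A.
    """
    r = A.shape[0]
    return x @ (W + (alpha / r) * (B @ A)).T

rng = np.random.default_rng(0)
d_in, d_out, r = 768, 768, 8               # ViT-Base hidden size; rank is an assumption
W = rng.standard_normal((d_out, d_in)) * 0.02
A = rng.standard_normal((r, d_in)) * 0.02  # Gaussian init, as in the LoRA paper
B = np.zeros((d_out, r))                   # zero init: adapter starts as a no-op

x = rng.standard_normal((1, d_in))
# With B = 0 the adapted layer matches the frozen layer exactly.
assert np.allclose(lora_forward(x, W, A, B), x @ W.T)

# Trainable-parameter comparison: full fine-tuning vs. LoRA adapters.
full, lora = W.size, A.size + B.size
print(f"trainable params: full={full}, lora={lora} ({100 * lora / full:.1f}%)")
```

The adapter trains only about 2% of the layer's parameters here, which is the point of using LoRA on a pre-trained ViT.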
+ <div style="background: #f8f9fa; border-radius: 8px; padding: 20px; margin-bottom: 20px; border-left: 4px solid #34a853;">
+ <h2 style="margin-top: 0;">📦 Resources</h2>
<div style="display: flex; flex-wrap: wrap; gap: 15px;">
<div style="flex: 1; min-width: 250px; background: white; border-radius: 8px; padding: 15px; box-shadow: 0 2px 4px rgba(0,0,0,0.1);">
+ <h4 style="margin-top: 0;">Trained Model</h4>
+ <p><a href="https://huggingface.co/nubahador/Fine_Tuned_Transformer_Model_for_Chirp_Localization/tree/main">HuggingFace Model Hub</a></p>
</div>
<div style="flex: 1; min-width: 250px; background: white; border-radius: 8px; padding: 15px; box-shadow: 0 2px 4px rgba(0,0,0,0.1);">
+ <h4 style="margin-top: 0;">Spectrogram Dataset</h4>
+ <p><a href="https://huggingface.co/datasets/nubahador/ChirpLoc100K___A_Synthetic_Spectrogram_Dataset_for_Chirp_Localization/tree/main">HuggingFace Dataset Hub</a></p>
+ </div>
+ <div style="flex: 1; min-width: 250px; background: white; border-radius: 8px; padding: 15px; box-shadow: 0 2px 4px rgba(0,0,0,0.1);">
+ <h4 style="margin-top: 0;">PyTorch Implementation</h4>
+ <p><a href="https://github.com/nbahador/Train_Spectrogram_Transformer">GitHub Repository</a></p>
+ </div>
+ <div style="flex: 1; min-width: 250px; background: white; border-radius: 8px; padding: 15px; box-shadow: 0 2px 4px rgba(0,0,0,0.1);">
+ <h4 style="margin-top: 0;">Chirp Generator</h4>
+ <p><a href="https://github.com/nbahador/chirp_spectrogram_generator">GitHub Package</a></p>
</div>
</div>
</div>
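The start and end frequencies listed at the top of the hunk are the regression targets the card describes. A minimal NumPy sketch of how a labeled linear chirp can be synthesized follows; the function and label names are illustrative and do not mirror the chirp_spectrogram_generator package's API.

```python
import numpy as np

def make_linear_chirp(f_start, f_end, duration=1.0, fs=8000):
    """Synthesize a linear chirp sweeping f_start -> f_end Hz over `duration` seconds.

    Instantaneous frequency: f(t) = f_start + (f_end - f_start) * t / duration,
    so the phase is 2*pi * (f_start*t + (f_end - f_start) * t**2 / (2*duration)).
    Returns the waveform plus the label pair used as regression targets.
    """
    t = np.arange(int(duration * fs)) / fs
    phase = 2 * np.pi * (f_start * t + (f_end - f_start) * t**2 / (2 * duration))
    labels = {"start_freq_hz": f_start, "end_freq_hz": f_end}
    return np.sin(phase), labels

signal, labels = make_linear_chirp(f_start=100.0, f_end=400.0)

# Sanity check: a sweep averaging 250 Hz over 1 s should cross zero ~500 times.
crossings = int(np.sum(np.diff(np.sign(signal)) != 0))
print(labels, crossings)
```

A spectrogram of such a waveform (e.g., via a short-time Fourier transform) paired with its label dictionary is the kind of (image, target) sample the dataset provides.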
+ <div style="background: #f8f9fa; border-radius: 8px; padding: 20px; border-left: 4px solid #ea4335;">
<h2 style="margin-top: 0;">📄 Citation</h2>
<div style="background: white; border-radius: 8px; padding: 15px; box-shadow: 0 2px 4px rgba(0,0,0,0.1);">
+ <p>If you use this model in your research, please cite:</p>
<p>Bahador, N., & Lankarany, M. (2025). Chirp localization via fine-tuned transformer model: A proof-of-concept study. arXiv preprint arXiv:2503.22713. <a href="https://arxiv.org/pdf/2503.22713">[PDF]</a></p>
</div>
</div>