---
license: apache-2.0
title: Vaishnavi0404/Text2Sing-DiffSinger
sdk: gradio
emoji: 🎵
colorFrom: purple
colorTo: gray
---
# Text2Sing-DiffSinger
Convert plain text into a singing voice with musical accompaniment matched to the emotional content of the text.
## Overview
Text2Sing-DiffSinger is a machine learning-based system that converts regular text into singing voice with appropriate musical accompaniment. The system analyzes the emotional content of the text and generates singing that matches the mood, along with suitable background music.
## Features
- Text-to-singing conversion using advanced voice synthesis
- Emotion detection from text input
- Musical accompaniment generation based on detected emotions
- Adjustable parameters for voice type, tempo, and pitch
- Interactive web interface built with Gradio
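The project detects emotion with the text2emotion library (see Dependencies below). As a rough illustration of the idea, here is a minimal keyword-based stand-in; the emotion labels and keyword lists are assumptions for demonstration, not the project's actual logic:

```python
# Minimal keyword-based emotion detector -- a simplified stand-in for the
# text2emotion library used by the project. Labels and keyword sets here
# are illustrative only.

EMOTION_KEYWORDS = {
    "happy": {"happy", "joy", "love", "wonderful", "great"},
    "sad": {"sad", "cry", "lonely", "miss", "lost"},
    "angry": {"angry", "hate", "furious", "rage"},
}

def detect_emotion(text: str) -> str:
    """Return the emotion whose keywords occur most often, or 'neutral'."""
    words = [w.strip(".,!?") for w in text.lower().split()]
    scores = {
        emotion: sum(w in keywords for w in words)
        for emotion, keywords in EMOTION_KEYWORDS.items()
    }
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "neutral"

print(detect_emotion("I am so happy, what a wonderful day"))  # happy
```

The detected label can then drive downstream choices such as key, mode, and tempo of the accompaniment.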
## Installation
1. Clone this repository:
```bash
git clone https://github.com/yourusername/Text2Sing-DiffSinger.git
cd Text2Sing-DiffSinger
```
2. Install dependencies:
```bash
pip install -r requirements.txt
```
3. Set up speaker embeddings:
```bash
python setup.py
```
## Usage
1. Run the application:
```bash
python app.py
```
2. Open your web browser and navigate to http://localhost:7860
3. Enter your text, select voice options, and click "Convert to Singing"
## How It Works
The system works in several steps:
1. **Text Analysis**: Analyzes the input text to detect emotional content and breaks it down into phonemes.
2. **Speech Synthesis**: Converts the text into speech using a neural text-to-speech model.
3. **Singing Conversion**: Transforms the speech into singing by modifying pitch, timing, and adding singing-specific effects.
4. **Music Generation**: Creates musical accompaniment that matches the emotional content of the text.
5. **Audio Mixing**: Combines the singing voice with the accompaniment to produce the final output.
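The five steps above can be sketched as a pipeline. Every function body below is an illustrative placeholder; the real logic lives in the project's modules (`text_processor.py`, `voice_synthesizer.py`, and so on):

```python
# Skeleton of the five-step pipeline described above. All bodies are
# stand-ins, shown only to make the data flow between stages concrete.

def analyze_text(text):
    """Step 1: detect emotion and break the text into phoneme-like units."""
    return {"emotion": "neutral", "phonemes": text.split()}

def synthesize_speech(analysis):
    """Step 2: stand-in TTS producing one dummy sample per unit."""
    return [0.1] * len(analysis["phonemes"])

def convert_to_singing(speech, pitch_shift=0):
    """Step 3: stand-in pitch modification (scales samples by semitone ratio)."""
    ratio = 2 ** (pitch_shift / 12)
    return [s * ratio for s in speech]

def generate_music(emotion, tempo=120):
    """Step 4: stand-in accompaniment: a fixed-length silent track."""
    return [0.0] * 8

def mix_audio(vocals, music):
    """Step 5: sum the two tracks sample-by-sample, padding the shorter one."""
    n = max(len(vocals), len(music))
    pad = lambda t: t + [0.0] * (n - len(t))
    return [v + m for v, m in zip(pad(vocals), pad(music))]

def text_to_singing(text, tempo=120, pitch_shift=0):
    analysis = analyze_text(text)
    vocals = convert_to_singing(synthesize_speech(analysis), pitch_shift)
    music = generate_music(analysis["emotion"], tempo)
    return mix_audio(vocals, music)

out = text_to_singing("hello world", pitch_shift=12)
print(len(out))  # 8
```

Keeping each stage behind its own function mirrors the one-module-per-stage layout shown in Project Structure below, so stages can be swapped (for example, a better DiffSinger model in step 3) without touching the rest.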
## Adjustable Parameters
- **Voice Type**: Choose a neutral, feminine, or masculine voice.
- **Tempo**: Adjust the speed of the singing (60-180 BPM).
- **Pitch Adjustment**: Shift the pitch up or down (-12 to +12 semitones).
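The two numeric parameters map onto standard audio quantities: a shift of *n* semitones corresponds to a frequency ratio of 2^(n/12), and a tempo in BPM fixes the duration of one beat. A quick sketch (the helper names are illustrative, not the project's actual API):

```python
# How the numeric parameters translate to audio quantities.
# Function names are illustrative; the project's helpers may differ.

def semitone_ratio(semitones: float) -> float:
    """Frequency ratio for a pitch shift in semitones (12 semitones = 1 octave)."""
    return 2.0 ** (semitones / 12.0)

def seconds_per_beat(bpm: float) -> float:
    """Duration of one beat at the given tempo."""
    return 60.0 / bpm

print(semitone_ratio(12))     # 2.0  (one octave up doubles the frequency)
print(seconds_per_beat(120))  # 0.5
```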
## Project Structure
```
.
├── app.py                 # Main application file with Gradio interface
├── text_processor.py      # Text analysis and phonetic processing
├── voice_synthesizer.py   # Speech synthesis module
├── singing_converter.py   # Speech-to-singing conversion
├── music_generator.py     # Musical accompaniment generation
├── setup.py               # Setup script for speaker embeddings
├── requirements.txt       # Python dependencies
└── speaker_embeddings/    # Directory for speaker embedding files
```
## Dependencies
- torch & torchaudio: For neural network models
- transformers: For speech synthesis
- gradio: For web interface
- librosa & soundfile: For audio processing
- text2emotion: For emotion detection
- music21: For music generation
- nltk: For natural language processing
- phonemizer: For phonetic transcription
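A `requirements.txt` matching the list above might look like the following (unpinned; the repository's actual file may pin specific versions):

```
torch
torchaudio
transformers
gradio
librosa
soundfile
text2emotion
music21
nltk
phonemizer
```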
## Future Improvements
- Integration with more advanced DiffSinger models
- Fine-tuning on singing voice datasets
- Support for different musical styles
- Multi-language support
- Voice cloning capabilities
## License
[Apache License 2.0](LICENSE)
## Acknowledgments
This project builds upon various open-source projects and research, including:
- DiffSinger
- SpeechT5
- Music21
- Gradio