Role: You are an expert software development team (architect, audio DSP engineer, front-end UI/UX developer, and QA tester in one) tasked with creating a web-based chiptune tracker application. You have deep knowledge of the Web Audio API, digital signal processing for synthesizers, and modern JavaScript front-end development. You take care to follow design specifications exactly, writing clean, functional code. You collaborate as multiple roles to produce a high-quality result: as an Architect you plan a modular structure; as a DSP Engineer you implement accurate audio synthesis and timing; as a UI Developer you create an intuitive retro-styled interface; as a QA you ensure stability and correctness of the application. Context: The project is Thunderbird Chiptune Composer, a browser-based music composer that mimics classic tracker software (like FastTracker, LSDJ, etc.) in functionality and appearance. The application should run entirely on the client side (no backend server needed) and work offline. A provided CSS theme (system.css) will supply a 90s desktop look – the UI must leverage this for window and widget styling. The user should be able to compose chiptune music by entering notes into a grid of rows and columns (patterns and tracks). The system supports multiple instruments, each with configurable waveform (sine, square, triangle, sawtooth, noise) and ADSR envelope (attack, decay, sustain, release), plus volume and simple effects. The timing of note playback must be precise and in sync with a given tempo (BPM). The app needs controls to play, pause, stop, and loop the music, and a way to manage multiple patterns (song mode). It should also allow saving and loading of compositions and exporting to common formats (WAV, MP3, MIDI). The previous iteration of this project had issues with missing features and incorrect styling – your job is to get it right, following the requirements closely. 
Pay attention to UI details (use the CSS classes and structure for the retro theme) and ensure all core features are implemented and working. Task: You will implement the Thunderbird Chiptune Composer as a set of HTML, CSS, and JavaScript files. Complete the following tasks step by step in your output: HTML Structure: Create index.html for the application. Include a <head> with appropriate meta tags and links to stylesheets (styles/system.css and any additional CSS like styles/core.css, styles/grid.css, styles/panels.css if needed). In the <body>, set up a main application window <div> with class "window" (from system.css) containing a title bar (<div class="title-bar"> with a <div class="title-bar-text">Thunderbird Chiptune Composer</div>). Inside the window, have a "window-body" that will contain the main interface: a header (if any), the tracker grid container, the control panels container (for instruments/settings), the transport controls container (play/stop buttons), and a visualizer container. Use semantic structure and assign IDs to key containers (e.g., <div id="tracker-grid-container"></div>, <div id="control-panels-container"></div>, <div id="transport-controls-container"></div>, <div id="visualizer-container"></div>). Include the script tags to load the JS modules (use type="module" or defer as appropriate). The HTML should be minimal since most UI elements will be created by JS modules. CSS Styling: Utilize the provided system.css for base styles (do not rewrite it; assume it’s available). Create small additional CSS files if necessary for layout or specific component styles: styles/core.css for any general overrides or layout (if needed). styles/grid.css for tracker grid specific styles (e.g., classes like .tracker-grid-table, .playing-row highlight, .selected-cell highlight). styles/panels.css for panel layouts (e.g., making sure panels inside .window-body have padding, or styling form elements inside panels). 
styles/fonts.css if custom font or icon font needed (optional). Each panel (like instrument editor, settings) can also be given window class to get system.css styling (making them sub-windows). Ensure that the CSS makes the UI look like a classic OS: for example, .title-bar class is used for panel headers, etc. Keep CSS concise; most should come from system.css. JavaScript Modules (ES6): Organize the functionality into modular JS files under a src/ directory: src/constants.js: Define any constants (e.g., default BPM, number of rows/tracks default, note-to-frequency map or tuning reference A4=440, etc.). Audio Engine: src/audio/engine.js should export an AudioEngine class (or a singleton object) that manages the AudioContext, instruments, and sequencing. It should handle initialization of AudioContext (with a fallback message if Web Audio is not supported). Maintain a collection of instruments (could be a Map or object mapping ID to instrument data and nodes). Provide methods: init(), setBPM(bpm), setMasterVolume(vol), loadInstrument(data) to add/update an instrument, getInstrument(id) to retrieve instrument settings, getInstrumentsData() to retrieve all instruments (for saving), and loadInstrumentsData(data) to bulk load instruments (for loading a project). Implement scheduling for playback: possibly a startPlayback() that starts a loop using setTimeout or requestAnimationFrame tied to AudioContext time to schedule each step, a stopPlayback() to stop, and a pausePlayback() if needed. Use an internal timing mechanism to accurately step through rows at the interval of 60/BPM/4 seconds if 4 rows = 1 beat (or adjust if we define pattern resolution). Implement note playing: e.g., playNote(note, instrumentId, duration) which creates an oscillator, applies ADSR via GainNode, etc., or a simpler approach: schedule events for each active note in the current row. 
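To make the timing and envelope requirements concrete, here is a minimal sketch (illustrative names, not a required API): the step interval assumes 4 rows = 1 beat as described above, and the ADSR is expressed as (time, gain) breakpoints that an engine would feed to `GainNode.linearRampToValueAtTime`.

```javascript
// Seconds per grid row at a given BPM, assuming 4 rows = 1 beat.
function stepIntervalSeconds(bpm) {
  return 60 / bpm / 4;
}

// Turn ADSR settings into (time, gain) breakpoints relative to note-on.
// A real engine would schedule these with gain.linearRampToValueAtTime(...).
function adsrBreakpoints({ attack, decay, sustain, release }, duration, peak = 1) {
  return [
    { t: 0, gain: 0 },
    { t: attack, gain: peak },            // end of attack
    { t: attack + decay, gain: sustain }, // end of decay, hold at sustain level
    { t: duration, gain: sustain },       // note-off
    { t: duration + release, gain: 0 },   // end of release
  ];
}
```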
The engine should call a callback on each step (to inform the UI which row is playing) – e.g., allow setOnStepChange(fn) to register a function that will be called whenever the playback moves to a new row (so the grid can highlight it). Handle effect commands for each step: if a cell has an effect (like slide or arpeggio), modify the oscillator or schedule multiple notes accordingly. Implement at least Arpeggio (command often “0” or “Arp”) which cycles the note with two other semitone offsets within the duration of the step, and Slide (portamento) which bends the pitch towards the next note if specified. Keep effect logic simple but functional. Sound Generators: src/audio/generators.js can contain helper functions or classes for oscillators. For example, functions to create an oscillator of a given type and frequency, or noise generator for percussion. It can also hold the logic to convert note names (C-4, A#3, etc.) to frequency values (for example, mapping note strings to MIDI number then to frequency). Instruments Module: src/audio/instruments.js can define an Instrument class (or just handle instrument data structure). Each instrument includes id, name, waveform type, ADSR envelope values, filter/LFO if used. This module could also contain a default instrument list or presets. Audio Effects: src/audio/effects.js can implement additional audio processing nodes if needed (like a global filter, delay, etc., though optional). This could be minimal – even just stubs or a simple reverb/delay node that can be applied to master output or instruments. Tracker Grid UI: src/ui/grid.js should implement a TrackerGrid class responsible for rendering and managing the pattern grid. In the constructor, set up default pattern dimensions (e.g., 16 rows × 4 tracks by default, or use constants). Initialize an internal data structure (2D array) to represent cells, each with properties: note, instrument, effectCmd, effectVal. 
Include a render() method that creates an HTML table representing the grid. Each row should have a row number cell and then groups of cells for each track (note, instr, effect, value). Use appropriate classes or data- attributes on cells for identification (e.g., data-row="i" data-track="j" data-column="note" etc.). The grid should support clicking on a cell to select it (highlight it with a class like .selected-cell). Implement event listeners for cell clicks. Implement keyboard controls: when grid is focused or a cell selected, arrow keys move the selection (up/down = previous/next row, left/right = change column within the same row, Tab/Shift+Tab to move between tracks). Typing letter A-G enters a note (on the selected cell’s note column), typing 0-7 changes octave of a note, # toggles a sharp (e.g., C -> C# or back), and Delete or '-' sets the note to empty (---). Pressing Enter or double-click could open an inline editor (e.g., an <input> to type the full note name), but you can also handle direct keypresses without a separate input for simplicity. Provide methods like getPatternData() which returns the current pattern data structure (for saving or for sending to AudioEngine for playback) and setPatternData(data) to load a pattern (for loading projects). Manage a visual indicator for the playing row: maybe add a CSS class .playing-row on the row that is currently being played by the engine. The engine’s step callback can call a method in TrackerGrid like setPlayingRow(rowIndex) to update the highlight (and remove it when stopping). UI Panels: src/ui/panels.js will manage the various control panels (Instrument Editor, Project Settings, maybe Pattern Sequencer). You can implement a simple PanelManager that creates and shows/hides panels in the #control-panels-container element. Each panel can be a <div class="window ui-panel"> element (so it gets the window styling) with its own title bar and body. 
Instrument Editor Panel: Contains form inputs for the currently selected instrument’s parameters (waveform dropdown, number inputs/sliders for volume and ADSR, and perhaps filter cutoff and LFO rate/depth). Also include an input to specify instrument ID to edit or create. Provide buttons: "Load Instrument" (to load existing instrument by ID into the form for editing), "Add New Instrument" (clear form for a new ID), "Update Instrument" (apply changes from form to the instrument in AudioEngine), and "Delete Instrument". When an instrument is updated or added, the AudioEngine should be called to update its data, and the changes should reflect immediately (so if you play notes with that instrument ID, you hear the new sound). Use alerts or visual messages to confirm actions (e.g., "Instrument 03 updated successfully."). Project Settings Panel: Contains controls for global settings like BPM (a number input) and maybe master volume. Also include Save and Load buttons to handle project persistence. Save should gather all current data (using AudioEngine and TrackerGrid getters) and save to localStorage (perhaps under a fixed key or allow naming). Load should retrieve from localStorage and apply to the app (reset BPM, instruments, pattern data). Additionally, include Export buttons: e.g., "Export WAV", "Export MP3", "Export MIDI" which trigger the respective download functions (we will implement these in utils). Pattern Sequencer Panel: (Optional initially) A panel that shows the list of patterns in the song and their order. Provide at least an "Add Pattern" button: this should create a new blank pattern (you can copy the structure of the current pattern or make a new one) and add it to the song order. You might list patterns as rows with their index and maybe allow reordering (drag-drop or up/down buttons), but a simple approach is fine: e.g., just show "Pattern 0, 1, 2..." in order. 
For now, focus on the add function and ensure the AudioEngine knows to loop through the patterns in order when playing. Transport Controls UI: src/ui/transport.js should create the Play/Pause/Stop (and maybe Record) buttons in the #transport-controls-container. Implement it as a class TransportControl that takes the container element and the AudioEngine as parameters. Its init() method will create button elements (with appropriate classes or even using <button> styled by system.css). Assign click event listeners: Play -> calls AudioEngine.startPlayback() (if AudioEngine is not already playing). Pause -> if playing, calls AudioEngine.pausePlayback() and perhaps changes the button label to "Resume". Stop -> calls AudioEngine.stopPlayback(), which stops playback and resets the position. Also, disable/enable buttons based on state (e.g., when playing, disable Play or change it to a disabled state, enable Pause; when stopped, ensure Pause is disabled and Play is enabled, etc.). This sync can be achieved by listening to engine state changes or simply by managing state within the button handlers. Visualizer: src/ui/visualizer.js should handle the waveform visualization. Implement a Visualizer class that takes a container (the #visualizer-container) and the AudioEngine. It should create a <canvas> element and use an AnalyserNode connected to the AudioEngine’s output. Use requestAnimationFrame to draw the waveform (time-domain data) or a simple frequency bar graph. Keep it lightweight (maybe drawing a few lines representing the waveform). Handle the case where Web Audio is not running (e.g., if the audio context is not initialized, you can display a message "Visualizer unavailable"). Utility Modules: under src/utils/: file-io.js: implement functions to handle saving and loading. For example, saveProjectToFile() could prompt the user to download a JSON file of the project (in addition to localStorage). loadProjectFromFile() to import a JSON (this would require a file input element to read a file). 
These might be triggered by buttons in the UI (if you include an "Import/Export Project" feature). Also include logic to interface with localStorage (e.g., saveProjectToLocal() and loadProjectFromLocal()). formatters.js: any formatting helpers (e.g., formatting the time or converting BPM to interval, or formatting note strings) can go here. midi-export.js: a function to convert the current song or pattern into a MIDI file. You can base this on a simple approach: create one track, iterate through pattern data and whenever a note is present, add a Note On event at the appropriate time. Use a library-free approach by constructing the MIDI file bytes (header and track chunk) as was partially done earlier. Ensure the function returns a Blob or ArrayBuffer that can be turned into a downloadable file (MIDI is a binary format). midi-import.js: a function to parse a MIDI file (maybe only type 0 or 1 with one track) and return pattern data. This is advanced; you might implement basic parsing to get note events and put them into the grid. If complex, it’s okay to stub or handle only simple MIDI files. (If needed) audio-recorder.js: you could implement WAV/MP3 export here. For WAV, you can write a function exportWAV(audioEngine) that uses an OfflineAudioContext to render the audio (synchronously generate the entire song by scheduling all notes quickly), then get the output buffer and format it into a WAV file (PCM 16-bit little-endian). For MP3, if not using an external library, one option is the MediaRecorder API: connect the AudioEngine output to a MediaStream via AudioContext.createMediaStreamDestination() and then use new MediaRecorder(stream) with {mimeType: 'audio/webm'}, or with 'audio/mpeg' if the browser supports it, for MP3. But not all browsers support MP3 in MediaRecorder. Alternatively, include a script for a library like Lame.js to encode MP3. If including a library is possible, you can add it via a script tag or as part of utils. 
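The library-free MIDI construction mentioned above hinges on two pieces: variable-length delta times and the fixed 14-byte header chunk. A sketch of both (the division default of 96 PPQN is an assumption):

```javascript
// MIDI delta times are variable-length quantities: 7 bits per byte,
// high bit set on every byte except the last.
function writeVarLen(value) {
  const bytes = [value & 0x7f];
  while ((value >>= 7) > 0) bytes.unshift((value & 0x7f) | 0x80);
  return bytes;
}

// 14-byte header chunk: "MThd", length 6, format 0, one track, division (PPQN).
function midiHeader(division = 96) {
  return [
    0x4d, 0x54, 0x68, 0x64,       // "MThd"
    0, 0, 0, 6,                   // chunk body length
    0, 0,                         // format 0
    0, 1,                         // one track
    (division >> 8) & 0xff, division & 0xff,
  ];
}
```

A track chunk would follow the same pattern ("MTrk", a 4-byte length, then delta-time/event pairs), and the concatenated bytes become the Blob handed to the download link.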
Main Application Coordinator: src/main.js will tie everything together. On window load (or DOMContentLoaded), it should: Instantiate the AudioEngine and call init() to set up context. Set initial BPM (from a config JSON or default) and volume. Create the PanelManager (from ui/panels.js) and initialize it with the #control-panels-container element, creating the Instrument Editor panel, Project Settings panel, etc., and render them in the DOM. Create the TrackerGrid, passing in the #tracker-grid-container element, and call its init() to render the grid. Link the TrackerGrid and AudioEngine: e.g., give AudioEngine a reference to the TrackerGrid (so it can fetch notes to play). For instance, on each step AudioEngine can query TrackerGrid for the notes of that row (TrackerGrid might provide a method getStepData(row) or simply give direct access to its pattern array). Instantiate the TransportControl with the #transport-controls-container and hook it up to AudioEngine. Instantiate the Visualizer with the #visualizer-container and AudioEngine. Load default data: perhaps load a default song or at least a default instrument set. For example, create a couple of default instruments by calling audioEngine.loadInstrument({...}) with some hardcoded small set (like Instrument 01 and 02 as described earlier). Add event handlers: For example, wire the Save button to gather data via TrackerGrid.getPatternData(), AudioEngine.getInstrumentsData() etc., and save to localStorage or file; wire Load to do the inverse and then call appropriate re-render or update methods (populate forms with loaded instrument data, etc.). Wire Export buttons to call the corresponding util functions and initiate downloads (for instance, create an <a> element with URL.createObjectURL(blob) for the file and trigger click). 
Ensure that when an instrument is added/updated via the Instrument Editor panel, the main code updates the TrackerGrid display if needed (though instrument IDs in the grid likely don’t need to change visually except when new instrument is added, we might want to reflect that an instrument exists). Manage pattern sequencing: if a new pattern is added via the sequencer panel, perhaps simply log or alert that pattern is added. If implementing sequence playback, ensure AudioEngine knows to move to next pattern. A simple approach is to have AudioEngine hold a Song instance with an array of patterns; on reaching end of current pattern, it either loops the same pattern (if only one) or moves to next pattern in song. Handle the service worker registration (if serviceWorker.js exists, register it for offline support). Service Worker: Provide a serviceWorker.js at the project root for offline caching. It should cache the necessary files (HTML, CSS, JS, assets) on install and serve them from cache. Keep it basic (skip if unsure, but it's good to include a simple offline capability as per spec). Output & File Organization: The final output should be structured as a full project. Include all the files mentioned above with their respective content. Use clear separation between files in your response (e.g., indicate the filename and use code blocks for content). Make sure the code is consistent across files (e.g., the module import/export names match, element IDs used in JS match those in HTML). Provide in-code comments to explain non-obvious sections, especially in complex functions (like audio scheduling or effect processing). Do not include any extraneous commentary in the output – only the code and necessary comments. Deployment (Docker): Additionally, include a Dockerfile that can be used to deploy this app on HuggingFace Spaces. Use a base image (for example, node:16-alpine or python:3.9-slim) and configure it to serve the application. 
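The pattern-advance behavior described above (loop a single pattern, otherwise move to the next one in the song order) can be captured as one pure step function — a sketch assuming the song is an array of patterns and each pattern is an array of rows:

```javascript
// Advance one row; at the end of a pattern, move to the next pattern in the
// song order, wrapping to the start (so a one-pattern song simply loops).
function nextPosition(song, patternIdx, row) {
  const rows = song[patternIdx].length; // pattern = array of rows
  if (row + 1 < rows) return { patternIdx, row: row + 1 };
  return { patternIdx: (patternIdx + 1) % song.length, row: 0 };
}
```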
For instance, for a purely static site, you might use python -m http.server to serve the files. Ensure the Dockerfile exposes the correct port (7860) and uses an appropriate CMD to launch the server hosting index.html. This will allow the project to run independently in the space. Format: Provide the output as a series of files with their content. For each file, start with the filename as a comment or heading, then the code in the appropriate format. Use markdown triple backticks for code blocks and specify the language (for syntax highlighting) when possible. The files expected are: index.html – HTML document with the main structure and script includes. styles/core.css – (if needed) additional CSS for layout/global styles. styles/grid.css – CSS specific to the tracker grid (cell sizes, highlights). styles/panels.css – CSS for panels spacing and any panel-specific tweaks. styles/fonts.css – (if using any webfonts or icon fonts for the UI). src/constants.js – definitions of constants. src/audio/engine.js – audio engine implementation. src/audio/generators.js – helper functions for sound generation. src/audio/instruments.js – instrument data structure management. src/audio/effects.js – audio effect implementations (e.g., filter, delay, etc., or effect command handling). src/ui/grid.js – tracker grid UI logic. src/ui/panels.js – panel manager and panel UI logic (Instrument editor, settings, etc.). src/ui/transport.js – playback control UI. src/ui/visualizer.js – audio visualizer component. src/utils/file-io.js – save/load and project export/import utilities. src/utils/formatters.js – formatting helper functions (note conversions, etc.). src/utils/midi-export.js – MIDI file creation from song data. src/utils/midi-import.js – MIDI file parsing to song data. serviceWorker.js – service worker for offline support (simple caching). Dockerfile – container instructions to serve the app on HuggingFace. 
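Following the deployment notes above, a minimal static-site Dockerfile could look like this (python:3.9-slim and http.server as the text suggests, port 7860 per the HuggingFace Spaces convention — a sketch, not a required layout):

```dockerfile
FROM python:3.9-slim
WORKDIR /app
COPY . /app
EXPOSE 7860
# Serve the static files; index.html is picked up as the directory index.
CMD ["python", "-m", "http.server", "7860", "--bind", "0.0.0.0"]
```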
If some files are not needed or you combine functionality, adjust accordingly, but maintain a clear structure. Ensure all modules are consistently integrated (correct import/export statements). At the end of generation, the application should be ready to build and run. Now proceed to generate the complete code for the application in the specified format.
Role: You are an expert software development team (architect, audio DSP engineer, front-end UI/UX developer, and QA tester in one) tasked with creating a web-based chiptune tracker application. You have deep knowledge of the Web Audio API, digital signal processing for synthesizers, and modern JavaScript front-end development. You take care to follow design specifications exactly, writing clean, functional code. You collaborate as multiple roles to produce a high-quality result: as an Architect you plan a modular structure; as a DSP Engineer you implement accurate audio synthesis and timing; as a UI Developer you create an intuitive retro-styled interface; as a QA you ensure stability and correctness of the application. Context: The project is Thunderbird Chiptune Composer, a browser-based music composer that mimics classic tracker software (like FastTracker, LSDJ, etc.) in functionality and appearance. The application should run entirely on the client side (no backend server needed) and work offline. A provided CSS theme (system.css) will supply a 90s desktop look – the UI must leverage this for window and widget styling. The user should be able to compose chiptune music by entering notes into a grid of rows and columns (patterns and tracks). The system supports multiple instruments, each with configurable waveform (sine, square, triangle, sawtooth, noise) and ADSR envelope (attack, decay, sustain, release), plus volume and simple effects. The timing of note playback must be precise and in sync with a given tempo (BPM). The app needs controls to play, pause, stop, and loop the music, and a way to manage multiple patterns (song mode). It should also allow saving and loading of compositions and exporting to common formats (WAV, MP3, MIDI). The previous iteration of this project had issues with missing features and incorrect styling – your job is to get it right, following the requirements closely. 
Pay attention to UI details (use the CSS classes and structure for the retro theme) and ensure all core features are implemented and working. Task: You will implement the Thunderbird Chiptune Composer as a set of HTML, CSS, and JavaScript files. Complete the following tasks step by step in your output: HTML Structure: Create index.html for the application. Include a <head> with appropriate meta tags and links to stylesheets (styles/system.css and any additional CSS like styles/core.css, styles/grid.css, styles/panels.css if needed). In the <body>, set up a main application window <div> with class "window" (from system.css) containing a title bar (<div class="title-bar"> with a <div class="title-bar-text">Thunderbird Chiptune Composer</div>). Inside the window, have a "window-body" that will contain the main interface: a header (if any), the tracker grid container, the control panels container (for instruments/settings), the transport controls container (play/stop buttons), and a visualizer container. Use semantic structure and assign IDs to key containers (e.g., <div id="tracker-grid-container"></div>, <div id="control-panels-container"></div>, <div id="transport-controls-container"></div>, <div id="visualizer-container"></div>). Include the script tags to load the JS modules (use type="module" or defer as appropriate). The HTML should be minimal since most UI elements will be created by JS modules. CSS Styling: Utilize the provided system.css for base styles (do not rewrite it; assume it’s available). Create small additional CSS files if necessary for layout or specific component styles: styles/core.css for any general overrides or layout (if needed). styles/grid.css for tracker grid specific styles (e.g., classes like .tracker-grid-table, .playing-row highlight, .selected-cell highlight). styles/panels.css for panel layouts (e.g., making sure panels inside .window-body have padding, or styling form elements inside panels). 
styles/fonts.css if custom font or icon font needed (optional). Each panel (like instrument editor, settings) can also be given window class to get system.css styling (making them sub-windows). Ensure that the CSS makes the UI look like a classic OS: for example, .title-bar class is used for panel headers, etc. Keep CSS concise; most should come from system.css. JavaScript Modules (ES6): Organize the functionality into modular JS files under a src/ directory: src/constants.js: Define any constants (e.g., default BPM, number of rows/tracks default, note-to-frequency map or tuning reference A4=440, etc.). Audio Engine: src/audio/engine.js should export an AudioEngine class (or a singleton object) that manages the AudioContext, instruments, and sequencing. It should handle initialization of AudioContext (with a fallback message if Web Audio is not supported). Maintain a collection of instruments (could be a Map or object mapping ID to instrument data and nodes). Provide methods: init(), setBPM(bpm), setMasterVolume(vol), loadInstrument(data) to add/update an instrument, getInstrument(id) to retrieve instrument settings, getInstrumentsData() to retrieve all instruments (for saving), and loadInstrumentsData(data) to bulk load instruments (for loading a project). Implement scheduling for playback: possibly a startPlayback() that starts a loop using setTimeout or requestAnimationFrame tied to AudioContext time to schedule each step, a stopPlayback() to stop, and a pausePlayback() if needed. Use an internal timing mechanism to accurately step through rows at the interval of 60/BPM/4 seconds if 4 rows = 1 beat (or adjust if we define pattern resolution). Implement note playing: e.g., playNote(note, instrumentId, duration) which creates an oscillator, applies ADSR via GainNode, etc., or a simpler approach: schedule events for each active note in the current row. 
The engine should call a callback on each step (to inform the UI which row is playing) – e.g., allow setOnStepChange(fn) to register a function that will be called whenever the playback moves to a new row (so the grid can highlight it). Handle effect commands for each step: if a cell has an effect (like slide or arpeggio), modify the oscillator or schedule multiple notes accordingly. Implement at least Arpeggio (command often “0” or “Arp”) which cycles the note with two other semitone offsets within the duration of the step, and Slide (portamento) which bends the pitch towards the next note if specified. Keep effect logic simple but functional. Sound Generators: src/audio/generators.js can contain helper functions or classes for oscillators. For example, functions to create an oscillator of a given type and frequency, or noise generator for percussion. It can also hold the logic to convert note names (C-4, A#3, etc.) to frequency values (for example, mapping note strings to MIDI number then to frequency). Instruments Module: src/audio/instruments.js can define an Instrument class (or just handle instrument data structure). Each instrument includes id, name, waveform type, ADSR envelope values, filter/LFO if used. This module could also contain a default instrument list or presets. Audio Effects: src/audio/effects.js can implement additional audio processing nodes if needed (like a global filter, delay, etc., though optional). This could be minimal – even just stubs or a simple reverb/delay node that can be applied to master output or instruments. Tracker Grid UI: src/ui/grid.js should implement a TrackerGrid class responsible for rendering and managing the pattern grid. In the constructor, set up default pattern dimensions (e.g., 16 rows × 4 tracks by default, or use constants). Initialize an internal data structure (2D array) to represent cells, each with properties: note, instrument, effectCmd, effectVal. 
Include a render() method that creates an HTML table representing the grid. Each row should have a row number cell and then groups of cells for each track (note, instr, effect, value). Use appropriate classes or data- attributes on cells for identification (e.g., data-row="i" data-track="j" data-column="note" etc.). The grid should support clicking on a cell to select it (highlight it with a class like .selected-cell). Implement event listeners for cell clicks. Implement keyboard controls: when grid is focused or a cell selected, arrow keys move the selection (up/down = previous/next row, left/right = change column within the same row, Tab/Shift+Tab to move between tracks). Typing letter A-G enters a note (on the selected cell’s note column), typing 0-7 changes octave of a note, # toggles a sharp (e.g., C -> C# or back), and Delete or '-' sets the note to empty (---). Pressing Enter or double-click could open an inline editor (e.g., an <input> to type the full note name), but you can also handle direct keypresses without a separate input for simplicity. Provide methods like getPatternData() which returns the current pattern data structure (for saving or for sending to AudioEngine for playback) and setPatternData(data) to load a pattern (for loading projects). Manage a visual indicator for the playing row: maybe add a CSS class .playing-row on the row that is currently being played by the engine. The engine’s step callback can call a method in TrackerGrid like setPlayingRow(rowIndex) to update the highlight (and remove it when stopping). UI Panels: src/ui/panels.js will manage the various control panels (Instrument Editor, Project Settings, maybe Pattern Sequencer). You can implement a simple PanelManager that creates and shows/hides panels in the #control-panels-container element. Each panel can be a <div class="window ui-panel"> element (so it gets the window styling) with its own title bar and body. 
Instrument Editor Panel: Contains form inputs for the currently selected instrument’s parameters (waveform dropdown, number inputs/sliders for volume and ADSR, and perhaps filter cutoff and LFO rate/depth). Also include an input to specify instrument ID to edit or create. Provide buttons: "Load Instrument" (to load existing instrument by ID into the form for editing), "Add New Instrument" (clear form for a new ID), "Update Instrument" (apply changes from form to the instrument in AudioEngine), and "Delete Instrument". When an instrument is updated or added, the AudioEngine should be called to update its data, and the changes should reflect immediately (so if you play notes with that instrument ID, you hear the new sound). Use alerts or visual messages to confirm actions (e.g., "Instrument 03 updated successfully."). Project Settings Panel: Contains controls for global settings like BPM (a number input) and maybe master volume. Also include Save and Load buttons to handle project persistence. Save should gather all current data (using AudioEngine and TrackerGrid getters) and save to localStorage (perhaps under a fixed key or allow naming). Load should retrieve from localStorage and apply to the app (reset BPM, instruments, pattern data). Additionally, include Export buttons: e.g., "Export WAV", "Export MP3", "Export MIDI" which trigger the respective download functions (we will implement these in utils). Pattern Sequencer Panel: (Optional initially) A panel that shows the list of patterns in the song and their order. Provide at least an "Add Pattern" button: this should create a new blank pattern (you can copy the structure of the current pattern or make a new one) and add it to the song order. You might list patterns as rows with their index and maybe allow reordering (drag-drop or up/down buttons), but a simple approach is fine: e.g., just show "Pattern 0, 1, 2..." in order. 
For now, focus on the add function and ensure the AudioEngine loops through the patterns in order when playing.

Transport Controls UI: src/ui/transport.js should create the Play/Pause/Stop (and maybe Record) buttons in the #transport-controls-container. Implement it as a TransportControl class that takes the container element and the AudioEngine as parameters. Its init() method creates the button elements (with appropriate classes, or plain <button> elements styled by system.css). Assign click listeners: Play calls AudioEngine.startPlayback() (if the engine is not already playing); Pause, if playing, calls AudioEngine.pausePlayback() and perhaps changes the button to "Resume"; Stop calls AudioEngine.stopPlayback(), which stops and resets the position. Also disable/enable buttons based on state (e.g., while playing, disable Play and enable Pause; when stopped, ensure Pause is disabled and Play is enabled). This sync can be achieved by listening to engine state changes or simply managed within the button handlers.

Visualizer: src/ui/visualizer.js should handle the waveform visualization. Implement a Visualizer class that takes a container (the #visualizer-container) and the AudioEngine. It should create a <canvas> element and use an AnalyserNode connected to the AudioEngine's output. Use requestAnimationFrame to draw the waveform (time-domain data) or a simple frequency bar graph. Keep it lightweight (perhaps a few lines representing the waveform). Handle the case where Web Audio is not running (e.g., if the audio context is not initialized, display a message such as "Visualizer unavailable").

Utility Modules: under src/utils/: file-io.js implements functions for saving and loading. For example, saveProjectToFile() could prompt the user to download a JSON file of the project (in addition to localStorage), and loadProjectFromFile() could import a JSON file (this requires a file input element to read the file).
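The transport button-enable logic described above can be kept as a pure state machine, so the TransportControl class only renders whatever these helpers return. The state names ('stopped'/'playing'/'paused') and the flag object are illustrative, not a fixed contract.

```javascript
// Sketch: transport state transitions and the derived button enable flags.
function transportState(state, action) {
  switch (action) {
    case 'play':  return 'playing';
    case 'pause': return state === 'playing' ? 'paused' : state;
    case 'stop':  return 'stopped';                 // stop also resets position
    default:      return state;
  }
}

function buttonFlags(state) {
  return {
    playEnabled:  state !== 'playing',   // Play doubles as Resume when paused
    pauseEnabled: state === 'playing',
    stopEnabled:  state !== 'stopped',
  };
}
```

Each button handler would call the matching AudioEngine method, update the state via transportState, and re-apply buttonFlags to the three buttons' disabled attributes.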
These might be triggered by buttons in the UI (if you include an "Import/Export Project" feature). Also include logic to interface with localStorage (e.g., saveProjectToLocal() and loadProjectFromLocal()).

formatters.js: formatting helpers (e.g., formatting time, converting BPM to an interval, or formatting note strings) go here.

midi-export.js: a function to convert the current song or pattern into a MIDI file. You can base this on a simple approach: create one track, iterate through the pattern data, and whenever a note is present, add a Note On event at the appropriate time. Use a library-free approach by constructing the MIDI file bytes (header and track chunk) as was partially done earlier. Ensure the function returns a Blob or ArrayBuffer that can be turned into a downloadable file (MIDI is a binary format).

midi-import.js: a function to parse a MIDI file (perhaps only type 0 or type 1 with a single track) and return pattern data. This is advanced; you might implement basic parsing to extract note events and place them into the grid. If that proves complex, it is acceptable to stub it or handle only simple MIDI files.

audio-recorder.js (if needed): WAV/MP3 export can go here. For WAV, write a function exportWAV(audioEngine) that uses an OfflineAudioContext to render the audio (schedule all notes and generate the entire song offline), then format the output buffer into a WAV file (PCM 16-bit little-endian). For MP3, if not using an external library, one option is MediaRecorder: connect the AudioEngine output to a MediaStream via AudioContext.createMediaStreamDestination() and use new MediaRecorder(stream) with {mimeType: 'audio/webm'}, or 'audio/mpeg' if the browser supports it for MP3; however, not all browsers support MP3 in MediaRecorder. Alternatively, include a library such as lame.js to encode MP3, added via a script tag or as part of utils.
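One concrete piece of the library-free MIDI construction mentioned above: delta-times in a Standard MIDI File track are stored as variable-length quantities, 7 bits per byte with the high bit set on every byte except the last, most-significant group first. A small encoder for this sits at the core of midi-export.js; the function name is illustrative.

```javascript
// Sketch: encode a non-negative integer as a MIDI variable-length quantity.
function encodeVLQ(value) {
  const bytes = [value & 0x7f];            // last byte: high bit clear
  value >>= 7;
  while (value > 0) {
    bytes.unshift((value & 0x7f) | 0x80);  // leading bytes: continuation bit set
    value >>= 7;
  }
  return bytes;
}
```

The exporter would concatenate encodeVLQ(delta) with the Note On/Off status and data bytes for each event, then wrap the result in MThd/MTrk chunks.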
Main Application Coordinator: src/main.js ties everything together. On window load (or DOMContentLoaded), it should:

- Instantiate the AudioEngine and call init() to set up the audio context; set the initial BPM (from a config JSON or a default) and volume.
- Create the PanelManager (from ui/panels.js), initialize it with the #control-panels-container element, create the Instrument Editor panel, Project Settings panel, etc., and render them in the DOM.
- Create the TrackerGrid, passing in the #tracker-grid-container element, and call its init() to render the grid.
- Link the TrackerGrid and AudioEngine: e.g., give the AudioEngine a reference to the TrackerGrid so it can fetch notes to play. On each step the AudioEngine can query the TrackerGrid for the notes of that row (TrackerGrid might provide a getStepData(row) method, or simply give direct access to its pattern array).
- Instantiate the TransportControl with the #transport-controls-container and hook it up to the AudioEngine.
- Instantiate the Visualizer with the #visualizer-container and the AudioEngine.
- Load default data: perhaps a default song, or at least a default instrument set. For example, create a couple of default instruments by calling audioEngine.loadInstrument({...}) with a small hardcoded set (Instrument 01 and 02 as described earlier).
- Add event handlers: wire the Save button to gather data via TrackerGrid.getPatternData(), AudioEngine.getInstrumentsData(), etc., and save to localStorage or a file; wire Load to do the inverse and then call the appropriate re-render/update methods (populate forms with loaded instrument data, etc.). Wire the Export buttons to call the corresponding util functions and initiate downloads (for instance, create an <a> element with URL.createObjectURL(blob) for the file and trigger a click).
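For the scheduling link between BPM and the grid, the engine needs the interval between rows. A minimal sketch, assuming each row is a 16th note (4 rows per beat, a common tracker convention; adjust rowsPerBeat if your grid resolution differs):

```javascript
// Sketch: convert the global BPM into seconds between grid rows.
// One beat lasts 60 / bpm seconds; each row is one rowsPerBeat-th of a beat.
function secondsPerRow(bpm, rowsPerBeat = 4) {
  return 60 / bpm / rowsPerBeat;
}
```

At 120 BPM this gives 0.125 s per row; the AudioEngine's lookahead scheduler would place each row's notes at startTime + rowIndex * secondsPerRow(bpm).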
Ensure that when an instrument is added or updated via the Instrument Editor panel, the main code updates the TrackerGrid display if needed (instrument IDs in the grid likely don't need to change visually, but you may want to reflect that a new instrument exists).

Manage pattern sequencing: if a new pattern is added via the sequencer panel, it is enough to log or alert that the pattern was added. If implementing sequence playback, ensure the AudioEngine knows to move to the next pattern. A simple approach is to have the AudioEngine hold a Song instance with an array of patterns; on reaching the end of the current pattern, it either loops the same pattern (if there is only one) or moves to the next pattern in the song.

Handle service worker registration (if serviceWorker.js exists, register it for offline support).

Service Worker: provide a serviceWorker.js at the project root for offline caching. It should cache the necessary files (HTML, CSS, JS, assets) on install and serve them from the cache. Keep it basic (skip it if unsure, but a simple offline capability is part of the spec).

Output & File Organization: the final output should be structured as a full project. Include all the files mentioned above with their respective content, with clear separation between files in your response (e.g., indicate the filename and use code blocks for content). Make sure the code is consistent across files (module import/export names match, element IDs used in JS match those in the HTML). Provide in-code comments to explain non-obvious sections, especially complex functions (such as audio scheduling or effect processing). Do not include extraneous commentary in the output; include only the code and necessary comments.

Deployment (Docker): additionally, include a Dockerfile that can be used to deploy this app on HuggingFace Spaces. Use a base image (for example, node:16-alpine or python:3.9-slim) and configure it to serve the application.
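The end-of-pattern behaviour described above reduces to one decision: loop a lone pattern, advance through the song order, wrap when looping, or stop. A pure sketch (names and the -1 "stop" sentinel are illustrative):

```javascript
// Sketch: which pattern index plays next when the current pattern ends.
// Returns -1 to signal that playback should stop.
function nextPatternIndex(current, songLength, loop) {
  if (songLength <= 1) return 0;       // single pattern: keep looping it
  const next = current + 1;
  if (next < songLength) return next;  // advance through the song order
  return loop ? 0 : -1;                // wrap to the start, or stop
}
```

The AudioEngine's step callback would call this when the row counter wraps, then either load the next pattern's data from the Song or invoke stopPlayback().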
For instance, for a purely static site, you might use python -m http.server to serve the files. Ensure the Dockerfile exposes the correct port (7860) and uses an appropriate CMD to launch the server hosting index.html. This will allow the project to run independently in the Space.

Format: provide the output as a series of files with their content. For each file, start with the filename as a comment or heading, then the code in the appropriate format. Use markdown triple backticks for code blocks and specify the language (for syntax highlighting) when possible. The files expected are:

- index.html – HTML document with the main structure and script includes.
- styles/core.css – (if needed) additional CSS for layout/global styles.
- styles/grid.css – CSS specific to the tracker grid (cell sizes, highlights).
- styles/panels.css – CSS for panel spacing and any panel-specific tweaks.
- styles/fonts.css – (if using any webfonts or icon fonts for the UI).
- src/constants.js – definitions of constants.
- src/audio/engine.js – audio engine implementation.
- src/audio/generators.js – helper functions for sound generation.
- src/audio/instruments.js – instrument data structure management.
- src/audio/effects.js – audio effect implementations (e.g., filter, delay) or effect command handling.
- src/ui/grid.js – tracker grid UI logic.
- src/ui/panels.js – panel manager and panel UI logic (Instrument Editor, settings, etc.).
- src/ui/transport.js – playback control UI.
- src/ui/visualizer.js – audio visualizer component.
- src/utils/file-io.js – save/load and project export/import utilities.
- src/utils/formatters.js – formatting helper functions (note conversions, etc.).
- src/utils/midi-export.js – MIDI file creation from song data.
- src/utils/midi-import.js – MIDI file parsing to song data.
- serviceWorker.js – service worker for offline support (simple caching).
- Dockerfile – container instructions to serve the app on HuggingFace.
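A minimal Dockerfile matching the description above (static files served by Python's built-in http.server on the required port 7860); this is a sketch, and assumes the project files sit at the build-context root:

```dockerfile
# Minimal static-file server for HuggingFace Spaces (port 7860 is required).
FROM python:3.9-slim
WORKDIR /app
COPY . /app
EXPOSE 7860
CMD ["python", "-m", "http.server", "7860", "--bind", "0.0.0.0"]
```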
If some files are not needed or you combine functionality, adjust accordingly, but maintain a clear structure. Ensure all modules are consistently integrated (correct import/export statements). At the end of generation, the application should be ready to build and run. Now proceed to generate the complete code for the application in the specified format.