Abstract
Recent advancements in artificial intelligence (AI) have significantly transformed the landscape of music generation, enabling context-sensitive and emotionally expressive soundtracks for diverse media applications such as film, gaming, and therapeutic environments. However, existing AI models still struggle to maintain melodic coherence, thematic continuity, and emotional depth, qualities essential for professional soundtrack production.
This research addresses these limitations by fine-tuning **MusicGen**, a transformer-based generative AI model, to create EchoScript, an optimized variant tailored for cinematic soundtrack composition through script-driven conditioning. A curated dataset enriched with detailed metadata, including genre, mood, instrumentation, tempo, and narrative context, guided the fine-tuning process.
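As a rough illustration of script-driven conditioning, the sketch below shows how per-clip metadata of the kind listed above could be flattened into text prompts for fine-tuning a text-to-music model such as MusicGen. The `ClipMetadata` record and `build_prompt` helper are hypothetical and do not reflect the authors' actual data pipeline or prompt template.

```python
# Hypothetical sketch: turning per-clip metadata into text conditioning prompts.
# Field names mirror the metadata categories named in the abstract; the prompt
# template itself is illustrative only.

from dataclasses import dataclass

@dataclass
class ClipMetadata:
    genre: str
    mood: str
    instrumentation: list[str]
    tempo_bpm: int
    narrative_context: str  # short description of the scene or script context

def build_prompt(meta: ClipMetadata) -> str:
    """Flatten structured metadata into a single text description."""
    instruments = ", ".join(meta.instrumentation)
    return (
        f"{meta.genre} soundtrack, {meta.mood} mood, "
        f"featuring {instruments}, around {meta.tempo_bpm} BPM. "
        f"Scene: {meta.narrative_context}"
    )

example = ClipMetadata(
    genre="orchestral",
    mood="tense",
    instrumentation=["strings", "low brass", "taiko drums"],
    tempo_bpm=90,
    narrative_context="a chase through a rain-soaked city at night",
)
print(build_prompt(example))
```

Pairing each audio clip with such a prompt is one straightforward way to expose genre, mood, instrumentation, tempo, and narrative context to a text-conditioned generator during fine-tuning.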
Evaluation results demonstrate substantial improvements over the baseline model. EchoScript achieved a lower Fréchet Audio Distance (FAD) score (4.3738 vs. 4.5492) and outperformed the baseline in structured listening tests, with participants consistently preferring EchoScript for musical quality and narrative alignment.
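For context, Fréchet Audio Distance compares Gaussian statistics fitted to embeddings of reference and generated audio. The minimal sketch below implements only the distance computation and assumes the embeddings have already been extracted with an audio encoder such as VGGish; it is illustrative and is not the evaluation code used in the study.

```python
# Minimal sketch of the Fréchet distance used by FAD, assuming `ref_emb` and
# `gen_emb` are (N, D) arrays of audio embeddings from the reference and
# generated sets (the embedding model itself is out of scope here).

import numpy as np
from scipy import linalg

def frechet_audio_distance(ref_emb: np.ndarray, gen_emb: np.ndarray) -> float:
    mu_r, mu_g = ref_emb.mean(axis=0), gen_emb.mean(axis=0)
    sigma_r = np.cov(ref_emb, rowvar=False)
    sigma_g = np.cov(gen_emb, rowvar=False)

    # ||mu_r - mu_g||^2 + Tr(Sigma_r + Sigma_g - 2 (Sigma_r Sigma_g)^{1/2})
    diff = mu_r - mu_g
    covmean, _ = linalg.sqrtm(sigma_r @ sigma_g, disp=False)
    if np.iscomplexobj(covmean):  # numerical noise can introduce tiny imaginary parts
        covmean = covmean.real
    return float(diff @ diff + np.trace(sigma_r + sigma_g - 2.0 * covmean))
```

Lower values indicate that the generated set's embedding distribution lies closer to the reference distribution, which is why EchoScript's 4.3738 versus the baseline's 4.5492 counts as an improvement.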
Beyond these empirical findings, the study critically examines technical constraints and outlines key future directions, including symbolic-audio integration, enhanced audio mixing, and the development of standardised evaluation metrics. Collectively, these contributions advance the pursuit of AI-generated music that closely approximates human-level expressiveness and narrative coherence, offering meaningful benefits for creative industries reliant on adaptive and emotionally resonant soundtracks.
Original language | English
---|---
Publication status | Accepted/In press - 21 May 2025
Event | Association for the Advancement of Artificial Intelligence Symposium on Human-AI Collaboration 2025: Exploring diversity of human cognitive abilities and varied AI models for hybrid intelligent systems, Heriot-Watt Campus, Dubai, United Arab Emirates. Duration: 20 May 2025 → 22 May 2025
Conference
Conference | Association for the Advancement of Artificial Intelligence Symposium on Human-AI Collaboration 2025
---|---
Abbreviated title | AAAI SuS 2025 |
Country/Territory | United Arab Emirates |
City | Dubai |
Period | 20/05/25 → 22/05/25 |