EchoScript: Enhancing AI Music Generation for Cinematic Scoring via Script-Aware Fine-Tuning

Mohammad Kasra Sartaee, Kayvan Karim

Research output: Contribution to conference › Paper › peer-review

Abstract

Recent advancements in artificial intelligence (AI) have significantly transformed the landscape of music generation, enabling context-sensitive and emotionally expressive soundtracks for diverse media applications such as film, gaming, and therapeutic environments. However, existing AI models continue to face persistent challenges in maintaining melodic coherence, thematic continuity, and emotional depth—qualities essential for professional soundtrack production.
This research addresses these limitations by fine-tuning MusicGen, a transformer-based generative AI model, to create EchoScript—an optimized variant specifically tailored for cinematic soundtrack composition through script-driven conditioning. A curated dataset enriched with detailed metadata, including genre, mood, instrumentation, tempo, and narrative context, was employed to guide the fine-tuning process.
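As an illustration of this kind of script-driven conditioning, the sketch below flattens per-clip metadata (genre, mood, instrumentation, tempo, narrative context) into a text prompt and generates audio with MusicGen through the audiocraft API. The prompt template, field names, and checkpoint are illustrative assumptions, not the pipeline or fine-tuned weights used in the paper.

    # Sketch: metadata-to-prompt conditioning for MusicGen (illustrative schema only).
    from audiocraft.models import MusicGen
    from audiocraft.data.audio import audio_write

    def build_prompt(meta: dict) -> str:
        """Flatten curated metadata into a single conditioning string."""
        return (
            f"{meta['genre']} score, {meta['mood']} mood, "
            f"instrumentation: {meta['instrumentation']}, tempo: {meta['tempo']} BPM. "
            f"Scene: {meta['narrative_context']}"
        )

    meta = {
        "genre": "orchestral",
        "mood": "tense",
        "instrumentation": "strings, low brass, timpani",
        "tempo": 90,
        "narrative_context": "the protagonist searches an abandoned station at night",
    }

    # The public baseline checkpoint stands in here for the fine-tuned EchoScript weights.
    model = MusicGen.get_pretrained("facebook/musicgen-small")
    model.set_generation_params(duration=15)

    wav = model.generate([build_prompt(meta)])  # tensor of shape (batch, channels, samples)
    audio_write("echoscript_demo", wav[0].cpu(), model.sample_rate, strategy="loudness")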
Evaluation results demonstrate substantial improvements over the baseline model. EchoScript achieved a lower Fréchet Audio Distance (FAD) score (4.3738 vs. 4.5492) and outperformed the baseline in structured listening tests, with participants consistently preferring EchoScript for musical quality and narrative alignment.
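For context, Fréchet Audio Distance compares the Gaussian statistics of embedding distributions extracted from reference and generated audio. A minimal computation is sketched below, assuming embeddings (e.g., VGGish features) have already been extracted per clip; the paper's exact embedding model and evaluation set are not specified here.

    # Sketch: Fréchet Audio Distance from embedding statistics.
    import numpy as np
    from scipy import linalg

    def frechet_audio_distance(emb_ref: np.ndarray, emb_gen: np.ndarray) -> float:
        """FAD between reference and generated embeddings, each of shape (n_clips, dim)."""
        mu_r, mu_g = emb_ref.mean(axis=0), emb_gen.mean(axis=0)
        cov_r = np.cov(emb_ref, rowvar=False)
        cov_g = np.cov(emb_gen, rowvar=False)
        covmean, _ = linalg.sqrtm(cov_r @ cov_g, disp=False)
        if np.iscomplexobj(covmean):
            covmean = covmean.real  # discard tiny imaginary parts from numerical error
        diff = mu_r - mu_g
        return float(diff @ diff + np.trace(cov_r + cov_g - 2.0 * covmean))

    # Toy usage with random embeddings; real use would pass features of actual audio clips.
    rng = np.random.default_rng(0)
    print(frechet_audio_distance(rng.normal(size=(200, 128)), rng.normal(size=(200, 128))))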
Beyond these empirical findings, the study critically examines technical constraints and outlines key future directions, including symbolic-audio integration, enhanced audio mixing, and the development of standardised evaluation metrics. Collectively, these contributions advance the pursuit of AI-generated music that closely approximates human-level expressiveness and narrative coherence, offering meaningful benefits for creative industries reliant on adaptive and emotionally resonant soundtracks.
Original language: English
Publication status: Accepted/In press - 21 May 2025
Event: Association for the Advancement of Artificial Intelligence Symposium on Human-AI Collaboration 2025: Exploring diversity of human cognitive abilities and varied AI models for hybrid intelligent systems - Heriot-Watt Campus, Dubai, United Arab Emirates
Duration: 20 May 2025 – 22 May 2025

Conference

Conference: Association for the Advancement of Artificial Intelligence Symposium on Human-AI Collaboration 2025
Abbreviated title: AAAI SuS 2025
Country/Territory: United Arab Emirates
City: Dubai
Period: 20/05/25 – 22/05/25

