Keynote Presentations

Pre-Conference (Virtual)

  • Pre-Conference Keynote 1 (Virtual): Eduardo Miranda, University of Plymouth

    The Advent of Quantum Computer Music
    Monday, October 28, 10am PDT

    Quantum computing technology is developing at a fast pace. The impact of quantum computing on the music industry is inevitable. The emerging field of Quantum Computer Music investigates and develops applications and methods to process music using quantum computing technology. This talk will discuss examples of approaches to leverage quantum computing to learn, process and generate music. The methods discussed range from rendering music using data from physical quantum mechanical systems and quantum mechanical simulations to computational quantum algorithms to generate music, including quantum AI. The ambition to develop techniques to encode audio quantumly for making sound synthesisers and audio signal processing systems is also discussed.

    Biography
Eduardo Reck Miranda is a classically trained composer and computer scientist. He has composed for renowned ensembles such as the BBC Concert Orchestra, Scottish Chamber Orchestra and London Sinfonietta. He is a Professor of Computer Music at the University of Plymouth, UK, and works with Moth, a quantum technology company building the next era of music, gaming and the arts. Prof Miranda has published over 100 research papers in learned journals and 16 books. He is world-renowned for his groundbreaking work in AI and music, and is a pioneer of quantum computing with a focus on creativity and music composition. His latest book, Quantum Computer Music, comprising a collection of chapters by leading practitioners in the field, was published in 2022 by Springer Nature.

  • Pre-Conference Keynote 2 (Virtual): Emilia Gómez, European Commission's Joint Research Centre

    Wednesday, October 30, 1am PDT

    Abstract
    This talk focuses on audio-based music information retrieval (MIR) and reflects on the origins of the field, its different eras, and recent developments. I will first focus on the paradigm shift from knowledge-driven to data-driven algorithmic design, made possible by recent advances in machine learning. I will then discuss the challenges the MIR field currently addresses and the research challenges ahead, notably the social and ethical impact of MIR algorithmic systems.

    Biography
    Dr. Emilia Gómez (MSc. Telecommunication Engineering, PhD in Computer Science, Full professor accreditation) is a senior scientist at the European Commission’s Joint Research Centre, where she leads the Human Behaviour and Machine Intelligence (HUMAINT) team that provides scientific support to EU AI policies as part of the European Centre for Algorithmic Transparency, notably the AI Act and the Digital Services Act. She is also a guest professor in Music Technology at Universitat Pompeu Fabra in Barcelona, Spain.

    Dr Gómez has long academic experience in the field of Music Information Retrieval, where she has contributed to different approaches for music content description, notably pitch-content description. Starting from the music domain, she now studies the impact of AI on human behaviour, notably how AI affects jobs, decisions, fundamental rights and children. She was the first female president of ISMIR, is currently a member of the OECD One AI expert group and an ELLIS (European Laboratory for Learning and Intelligent Systems) fellow, and her work has been recognized through citations and honors such as EUWomen4Future, the Red Cross Award to Humanitarian Technologies and ICREA Academia.

Main Conference

  • Main Conference Keynote 1: Ed Newton-Rex, Fairly Trained

    Monday, November 11, 1pm PST

    Ed Newton-Rex is the founder of Fairly Trained, a non-profit that certifies generative AI companies for fair training data practices. He is also a Visiting Scholar at Stanford University.

    In 2010, Ed founded Jukedeck, one of the first AI music generation startups. Jukedeck let video creators generate music for their videos, and was used to create more than a million pieces of music. It was acquired by ByteDance in 2019. At ByteDance, Ed led the AI Music lab, then led Product for TikTok in Europe.

    In 2022 Ed joined Stability AI, the company behind Stable Diffusion, to lead their Audio team. His team launched Stable Audio, Stability’s music generation product, which was named one of TIME Magazine’s best inventions of the year in 2023. He resigned from Stability in November 2023 due to the company’s policy of training AI models on copyrighted work without consent, and in 2024 founded Fairly Trained. He is a published composer of choral music.

  • Main Conference Keynote 2: Elizabeth Moody

    Tuesday, November 12, 5pm PST

    Elizabeth Moody, partner and chair of Granderson Des Rochers, LLP's New Media Group, is a pioneer in the digital media world. Moody has been spearheading digital music and video initiatives since the post-Napster era, both as outside counsel and as a business executive in-house at companies such as YouTube and Pandora. Today, Moody remains positioned at the intersection of technology and music rights and continues to advise her technology and rightsholder clients on new and innovative business models and licensing deals.

    Moody is at the forefront of the developing issues and opportunities that AI presents to the music and entertainment industries. She counsels several prominent generative voice and audio AI companies; advises the non-profit Fairly Trained, which certifies AI companies that train on fairly acquired, licensed or owned data; and advises Audioshake, an AI-based stem separation tool used today by record labels, movie studios, and entertainment companies to ease production and marketing.
     
    She is also keyed into the gaming and web 3.0 worlds. She is partnerships counsel for the gaming company Roblox and works closely with Wave XR, a virtual reality concerts start-up that works with artists to create unique live performances as avatar versions of themselves in imaginative digital landscapes. She also developed and continues to grow Styngr's efforts to power music in video games and online gaming experiences.
     
    Along with gaming and the metaverse, she is passionate about the opportunities web 3.0 will bring to the music community and creators. She represents Audius, the blockchain-based music streaming service, in its efforts to help creators and their fans connect more authentically through a decentralized network, as well as Revelator, an all-in-one music platform providing digital distribution, analytics, and web 3.0 services to artists, record labels and publishers. She also advises Copyright Delta, which provides data connections between rights holders and AI tech platforms.

    Moody is excited to bring opportunities to the music industry by forging deals with industries outside of music, including at the intersection of music and fitness. She represents connected fitness, yoga, pilates, mindfulness, cycling, and dance services, helping them integrate music into their offerings. She has worked closely with Hydrow, the successful Peloton-style live-reality connected rowing experience, since its launch in 2019. She believes that VR plays an important role in fitness and works with Litesport and FitXR to ensure they have access to top-notch music experiences. She has also been working in the medical and wellness space, exploring licensing structures for using music in the treatment of pain, dementia, and mental illness through her work with MediMusic and her advisory participation on the board of Music Health.

  • Main Conference Keynote 3: Douglas Eck

    Wednesday, November 13, 1:15pm PST

    Doug is a Senior Research Director at Google, and leads research efforts at Google DeepMind in Generative Media, including image, video, 3D, music and audio generation. His own research lies at the intersection of machine learning and human-computer interaction (HCI). In 2015, Doug created Magenta, an ongoing research project exploring the role of AI in art and music creation. Before joining Google in 2010, Doug did research in music perception, aspects of music performance, machine learning for large audio datasets and music recommendation. He completed his PhD in Computer Science and Cognitive Science at Indiana University in 2000 and went on to a postdoctoral fellowship with Juergen Schmidhuber at IDSIA in Lugano, Switzerland. From 2003 to 2010, Doug was a faculty member in Computer Science in the University of Montreal machine learning group (now the Mila machine learning lab), where he became Associate Professor. For more information, see http://g.co/research/douglaseck.