Professor, Music Therapy Program, Temple University, USA
Editorial Board, Journal of Music Therapy
Board member of the International Association of Music and Medicine
Wendy L. Magee PhD is Professor of Music Therapy at Temple University, Philadelphia. She has more than 35 years of experience in neurological rehabilitation as a music therapy clinician, researcher, manager and trainer, working in Australia, England, and Ireland before moving to the USA in 2011. Throughout this period, her clinical work with people with complex needs incorporated technology within music therapy practice, positioning her to research the clinical applications of music technology with support from the Leverhulme Foundation, the American Music Therapy Association and, most recently, the GRAMMY Foundation. Her seminal work Music Technology in Therapeutic and Health Settings, published by Jessica Kingsley in 2013, explored practice, research and emerging theory on the applications of wide-ranging technologies in clinical settings. Her current research collaborations span Europe, South America, North America and Asia. She is a former Chairperson of the British Society for Music Therapy, a board member of the International Association of Music and Medicine, and serves on the editorial boards of several music therapy journals.
Title: Access, agency and aesthetics: developments in music therapy and technology
Technology in its broadest sense has long afforded access to music-making in situations where traditional instruments limit participation. Whether adaptive or digital, music technology can enhance an individual's agency in creativity. Beyond these physical advantages, the recent widespread adoption of virtual platforms has illustrated the affordances that technology may offer from geographical and economic perspectives as well. However, many challenges persist from aesthetic and sensory perspectives. This presentation will share the latest global developments in technological applications in music therapy, considering cultural, aesthetic and ethical dimensions.
Professor of Music Technology at McGill University, Canada
Title: Five Decades of Computer Music Interfaces: from ICMC & CMJ to NIME
Marcelo M. Wanderley is a Professor of Music Technology at McGill University, Canada. His research interests include the design and evaluation of digital musical instruments and the analysis of performer movements. He co-edited the electronic book “Trends in Gestural Control of Music” in 2000, co-authored the textbook “New Digital Musical Instruments: Control and Interaction Beyond the Keyboard” in 2006, and chaired the 2003 International Conference on New Interfaces for Musical Expression (NIME03). He is a member of the Computer Music Journal’s Editorial Advisory Board and a senior member of the ACM and the IEEE.
Computer music interfaces, also known as musical interfaces or gestural controllers, have been a topic of interest since the beginning of academic gatherings and journals on computer music, including the International Computer Music Conference (ICMC) and the Computer Music Journal (CMJ). This interest builds upon earlier developments in electronic music instruments, which have been proposed for more than a century (Chadabe, 1997). The establishment of the International Conference on New Interfaces for Musical Expression in 2002, following the original NIME workshop in 2001, has propelled the number of academic publications in the field from a few hundred to thousands. In this talk, I will discuss research on musical interfaces from the late 1970s until today, pointing out various trends of greater or lesser longevity that permeate the development of musical interfaces. Examples include work on interface design and evaluation, mapping, and haptic feedback, as well as tools to help the community navigate the works published since then.
Full professor of Artificial Intelligence and Digital Forensics at the Institute of Computing,
University of Campinas (Unicamp), Brazil
Anderson Rocha is a full professor of Artificial Intelligence and Digital Forensics at the Institute of Computing, University of Campinas (Unicamp), Brazil. He is the Director of the Artificial Intelligence Lab, Recod.ai, and the Institute Director for the 2019-2023 term. He has actively served as an editor of important international journals, including the IEEE Transactions on Information Forensics and Security (T.IFS), the Elsevier Journal of Visual Communication and Image Representation (JVCI), IEEE Signal Processing Letters (SPL), and the IEEE Security & Privacy Magazine. He is an elected affiliate of the Brazilian Academy of Sciences (ABC) and the Brazilian Academy of Forensic Sciences. He is a two-term elected member of the IEEE Information Forensics and Security Technical Committee (IFS-TC) and was its chair for the 2019-2020 term. He is a Microsoft Research Faculty Fellow and a Google Research Faculty Fellow, important academic recognitions bestowed on researchers by Microsoft Research and Google, respectively. In addition, in 2016, he was awarded the Tan Chin Tuan (TCT) Fellowship, a recognition promoted by the Tan Chin Tuan Foundation in Singapore. Finally, he is ranked in the top 2% of the most influential scientists worldwide, according to recent studies from Research.com and Stanford/PLOS ONE. In 2023, he was selected as a LinkedIn Top Voice in Artificial Intelligence for continuously raising awareness of AI and its potential impacts on society at large.
Title: How to Live with Synthetic Realities: ChatGPT, Midjourney, Dall-E2, StableDiffusion, and others
In this talk, we will discuss how our society is living through what is being referred to as Synthetic Realities. We will touch upon important technologies shaping these new realities, such as ChatGPT, Midjourney, Dall-E2, StableDiffusion, and others. More importantly, we will discuss what kinds of telltales can be explored to expose such creations or forgeries, and pinpoint the research challenges ahead and the implications of such fakes for society at large.
Head of the Sound Music Movement Interaction team at IRCAM in Paris,
Co-founder of the International Conference on Movement and Computing
Frédéric Bevilacqua is the head of the Sound Music Movement Interaction team at IRCAM in Paris (part of the joint research lab on Science & Technology for Music and Sound between IRCAM, CNRS and Sorbonne Université). His research concerns the modelling and design of interaction between human movement and sound, and the development of gesture-based digital musical instruments. Applications range from artistic creation and education to health. Recent projects have concerned learning processes in movement-based interaction and collective musical interactions. He has been a keynote or invited speaker at several international conferences, such as ACM TEI'13. As the coordinator of the "Interlude project", he was awarded the 1st Prize of the Guthman Musical Instrument Competition (Georgia Tech) in 2011. He is a co-founder of the International Conference on Movement and Computing.
Title: Sound-Music-Movement Interaction: from listening to performing, from general public to musician, from solo to collective experiences
Embodied music interaction is a broad framework that describes the role of our movement and body in listening to, playing, or more generally experiencing music. Within this framework, the use of interactive technologies enables opportunities for designing a large variety of systems and instruments. In this talk, I will describe the approaches, design methodologies, and machine-learning-based computational models we have developed over the years to explore different types of movement-based music interaction. Interestingly, this allows us to consider different perspectives in embodied music interaction, such as listening versus performing, beginner versus virtuoso, and solo versus collective. In particular, I will describe issues in learning and appropriating interactive technologies, in applications ranging from artistic creation to health.