This is a list of Project Guides and their areas of interest for the 2022 WiMIR Virtual Workshop, which will take place as an online-only satellite event of ISMIR 2022.
The Workshop will take place on Monday 28 and Tuesday 29 November – please sign up by using this form.
We know that timezones for this are complicated, so we’ve made a Google Calendar with all the events on it – visit this link to add them to your calendar.
This year’s workshop is organized by Courtney Reed (Queen Mary University), Iris Yuping Ren (Utrecht University), Kitty Zhengshan Shi (Stanford University), and Jordan B.L. Smith (TikTok).
November 28
Vinod Vidwans: AI-Raga: A Mélange of the Musical and the Computational
This event will take place at 1600, GMT+5.5
AI-Raga is an artificially intelligent system that generates a musical composition in a given Raga without any human intervention. Close scrutiny of the treatises on Indian music shows that a computational perspective is inherently embedded in the ancient theories of music in India. It is an intellectual delight to decode the computational concepts from the vocabulary of Indian music, and it gives a sense of fructification and fulfilment to see these concepts successfully encoded in the AI-Raga system. In the vocabulary of Indian music, certain concepts and principles, viz. swara (a musical note), shruti (a microtone), Shadja-Panchama Bhava (the rule of fifths), and Shadja-Madhyama Bhava (the rule of fourths), have been rigorously theorized and well established. This paper and presentation attempt to unravel the computational facets of these fundamental concepts and principles. It shows that a computational orientation of Indian music provides strong foundations for artificial creations and lends legitimacy to the aesthetic experience evoked. Based on the computational interpretation of these principles from the treatises, viz. the Natyashastra and the Sangeet Ratnakara, the author has developed the aforementioned creative AI system, which is yielding promising results. The latter part of the presentation involves a demonstration of the AI-Raga system. It is hoped that the audience will gain many new insights into the role of Artificial Intelligence (AI) in the future of Indian music through this presentation.
Bio: Dr. Vinod Vidwans is a professor in the Department of Design, Art and Performances at FLAME University, Pune, India. Before joining FLAME University, he was a Senior Designer/Professor at the National Institute of Design (NID), Ahmedabad, India, where he also headed the Departments of New Media and Software User Interface Design. He has been a visiting faculty member and a resource person at many prestigious institutes in India. He is an inheritor of the Indic knowledge tradition and is currently engaged in research on various Indic themes. He has designed and developed Artificial Intelligence (AI) systems for Indian music called ‘AI-Raga’ and ‘AI-Tala’. The AI-Raga system generates a Bandish (a musical composition) in a given Raga on its own, without any human assistance, and renders it in traditional classical Indian style. The system generates a Bandish based on the Vadi, Samvadi, Aroha and Avroha of the Raga; it does not have any database of Ragas. The AI-Tala system generates a Tabla performance in a given Tala.
Marcelo Queiroz: What do we look for in a research project?
This event will take place at 0900, GMT-3
The idea of this collaborative workshop is to brainstorm on the things that motivate participants to propose or join research projects, including goals (both personal and collective), frames of reference (theories, perspectives, aspirations), tools (models, algorithms, datasets), research practices (teamworking, interdisciplinarity, ethics), attitudes (the good, the bad and the ugly), among others that may be proposed on the spot. I’ll make a small introduction and share a personal perspective to get the ball rolling, and then open the floor for a horizontal debate.
Bio: Marcelo Queiroz is an Associate Professor in the Computer Science Department and vice-coordinator of the Sonology Research Center at the University of São Paulo. His background includes a BSc in Computer Science, a BA in Music Composition, an MSc in Applied Mathematics, and PhD and Habilitation degrees in Computer Science. He was a visiting scholar at the Universities of Coimbra (Portugal), Maynooth (Ireland) and Thessaloniki (Greece). His research interests include sound/music/speech processing, computational acoustics, network and mobile music, and, more generally, computational techniques for music analysis/composition/performance/improvisation.
Kate Helsen: En-chanted: how medieval notation and Gregorian chant get digital
This event will take place at 1000, GMT-5
Taking a ‘Digital Humanities’ approach to medieval chant means getting to tackle various interesting questions. First, there is the musical notation itself, which is unlike modern notation and requires specialized document analysis to be developed for digital images of these manuscripts. Second, there is the enormous repertoire, mostly uncatalogued, which must be encoded in standard, efficient ways that allow for large-scale comparative analysis and the discovery of local musical traditions, or even just small, characteristic ‘riffs’. Third, we are looking at the implications of digital chant projects for our ability to know what the music sounded like. In addition, chant melodies offer computer scientists the opportunity to work with various machine learning algorithms and to think about these melodies as ‘strings’, analogous to language or even genetic strands of information. The results are as new as they are exciting: cross-disciplinary inspiration mixed with musically rewarding outcomes.
Bio: Kate Helsen is an Assistant Professor in the Department of Music Research and Composition at the Don Wright Faculty of Music at the University of Western Ontario. Her specialization in plainchant, and Early Music more broadly, has led to her involvement in several international projects that dismantle the traditional boundaries of music and technology and have resulted in publications in Plainsong and Medieval Music, Acta Musicologica, Empirical Musicology Review and Early Music. Her research interests include early notations, the structural analysis of chant, melodic encodings, and obsessing over gorgeous manuscripts. Kate has sung professionally with the Tafelmusik Chamber Choir in Toronto since 2000.

Sakinat Oluwabukonla Folorunso: ORIN: A Nigerian music benchmark dataset for MIR tasks and research
This event will take place at 1730, GMT+1
Music is often seen as the only truly universal language. Nigerian music is typically used for relaxation, health therapy, ceremonies, war, work, etc. Music Information Retrieval (MIR) is the field of extracting high-level information, such as genre, artist identity, or instrumentation, from music. My talk will present a new music dataset: ORIN, a Nigerian music benchmark dataset for MIR tasks and research. ORIN is the first Nigerian music dataset, and it consists of three categories of music: traditional, English-Contemporary, and NaijArtist. The motivation is that, to date, only a few research works have been done on Nigerian songs; ORIN is the first publicly available, robust benchmark dataset of Nigerian music to be openly shared in a Findable, Accessible, Interoperable and Reusable (FAIR) way, intended to accelerate MIR tasks, AI models, and machine learning analysis, and to teach Nigerian cultural values to African students and students in the diaspora. First, I will talk about the ORIN dataset and the initial results that I obtained. I will also talk about feature importance and the use of SHAP (an explainable AI tool) in my work. I will then discuss the challenges faced and some open issues.
Bio: Sakinat Folorunso is a computer science lecturer in the Department of Mathematical Sciences, Computer Science unit, at Olabisi Onabanjo University (OOU), Nigeria. She is the team lead for the Artificial Intelligence Research Group (ArIRSG) at OOU and the lead organizer for IndabaX, Nigeria. Her research focuses on music information retrieval, computer vision, machine learning, and Responsible Artificial Intelligence. More information about her research can be found on her home page.
November 29
Keunwoo Choi: How to get rejected from ISMIR and ICASSP quite a few times and still feel OK.
This event will take place at 1530, GMT+9
This is yet another talk about failures; this time, in academia. I will talk about what I wanted, what I thought, what I got wrong, and so on, during my preparation for, and time in, a PhD program. My submissions were rejected four times in a row within two years from ISMIR and ICASSP. It was frustrating, but there were some lessons I could (try to) squeeze out of those moments, both at the time and retrospectively. Let’s talk about them; maybe there is something in common between us.
Bio: AI Research Director at Gaudio Lab. Previously a research scientist at TikTok and Spotify. PhD from Queen Mary University of London; Master’s and Bachelor’s from Seoul National University. Topics: music classification and recommendation, transcription, source separation, spatial audio, and audio synthesis.
Shantala Hegde: Neurocognitive deficits and how to repair them: the role of music in neuropsychological rehabilitation
This event will take place at 1400, GMT+5.5
Deficits in neurocognitive functions such as attention, memory, executive functions, language, and emotional and social functions form the sequelae of neurological, neurosurgical and neuropsychiatric conditions. These deficits are debilitating and determine functional recovery. Neuropsychological rehabilitation is an evidence-based treatment whose main goal is to enable patients to achieve their optimum level of functioning and overall well-being. Interventions to remediate cognitive deficits have often employed drill-based exercises: paper-and-pencil or computer-based tasks that focus on direct training of cognitive functions. Over the past decade and a half, there have been major advances in neuropsychological rehabilitation, and newer methods are being used to provide a holistic approach to intervention. Music-based interventions are one such newer addition, with a paradigm shift from a social science approach to a neuroscience model. Music shares neural networks with other neurocognitive as well as motor functions. Music-based interventions have been used to improve gait, neurocognitive (including socio-cognitive) functioning, speech, and emotional domains of functioning. The need for further well-controlled trials is imperative, and future research in the field of music and neuroscience has a crucial role to play in contributing to a better understanding of brain-behaviour relationships and clinical recovery.
Bio: Dr. Shantala Hegde is an Additional Professor and Consultant at the Neuropsychology Unit, Department of Clinical Psychology, and Consultant to the Department of Neurorehabilitation, National Institute of Mental Health and Neuro Sciences (NIMHANS), Bengaluru. She is an Intermediate Fellow of the prestigious Wellcome Trust UK-DBT India Alliance and is a mentee of Dr. Gottfried Schlaug, Professor of Neurology and Biomedical Engineering, University of Massachusetts Chan Medical School – Baystate Medical Center, and Biomedical Engineering – IALS, UMass Amherst. Her other mentor as part of this fellowship is Dr. Matcheri S. Keshavan, Stanley Cobb Professor and Academic Head of Psychiatry, Beth Israel Deaconess Medical Center and Massachusetts Mental Health Center, Harvard Medical School. She is the first clinical psychologist in the country to receive this prestigious fellowship. She is the faculty-in-charge of the Music Cognition Lab at NIMHANS.

Fabien Gouyon and friends: A Day in the Life of a Music Track
This event will take place at 1700, GMT+0
The plan for this session is to interact with the audience on the many different research areas that are relevant to music streaming (and digital audio entertainment generally). We’ll organize the session as the imaginary trip of a music track, from its release into our catalog at SiriusXM/Pandora to its recommendation to listeners. Along the way, this should give you a tour of the types of scientific problems we face in an industrial context, and how we tackle them collaboratively.
Bio: Fabien Gouyon, Chun Guo, Sergio Oramas, Matt McCallum, Elaine Mao, and Matthew Davies are some of the ML scientists at SiriusXM/Pandora, working from the United States and Europe on Music Information Retrieval, Recommender Systems, and Natural Language Processing. Applications of our research range from music content understanding and personalized algorithmic radio programming to search & voice interaction, and more. Fabien is a former ISMIR President and now heads the SiriusXM/Pandora Europe Science team. After an internship, Chun has worked at Pandora since 2017 on many aspects of our products, ranging from search and voice to algorithmic radio programming. Sergio also interned and then joined to build voice interaction features; he now works on music understanding and multimodal embeddings. Matt worked at Gracenote and Serato before joining in 2019; he currently works on machine listening. Elaine joined from Rdio in 2016 and works on Pandora homepage personalization. Matthew joined SiriusXM/Pandora after 15 years in academia.
Audrey Laplante: The discoverability of local content on global music streaming platforms
This event will take place at 1330, GMT-5
Algorithms are not neutral. Research has shown that they can be biased and produce results that are unfair or discriminatory in terms of gender, race or ethnicity, among other characteristics. On music streaming platforms, algorithms are used to classify music, determine how search results are ranked, create personalized playlists, and recommend playlists, songs or artists to a user. Could these algorithms be biased? In this workshop, we will look at the impact of algorithms on the discoverability of local content on global music streaming platforms. We will present an overview of what we know so far on this topic. We will then discuss how we could measure the discoverability of this content, given the opacity of the algorithms and the lack of access to user data. We will also discuss how we could study and compare the challenges these biases pose for artists, music labels, and/or end-users from different cultures.
Bio: Audrey Laplante is an associate professor at the Université de Montréal’s School of Library and Information Science. She is a member of the Centre for Interdisciplinary Research in Music Media and Technology (CIRMMT). She received a PhD (2008) and a Master’s (2001) in information science from McGill University and Université de Montréal, respectively, and an undergraduate degree in piano performance from Université de Montréal (1999). Her research interests include user studies, information practices, music information retrieval and discovery systems, and social media.