New-to-ISMIR paper mentoring program

We are excited to announce that the New-to-ISMIR paper mentoring program will again be active for this year’s conference as part of the Diversity & Inclusion (D&I) initiatives!

The New-to-ISMIR paper mentoring program is designed for members new to the ISMIR conference (early-stage researchers in MIR, or researchers from allied fields who wish to consider submitting their work to an ISMIR conference) to share their advanced-stage, work-in-progress ISMIR paper drafts with senior members of the ISMIR community (mentors) and obtain focused reviews and constructive feedback. The program supplements the generic submission guidelines. It runs in 2023 in close alignment with the ISMIR 2023 paper submission deadlines.

The topics of the mentored papers are expected to be in alignment with those of ISMIR conferences. We strongly encourage authors to revise the mentored papers based on the feedback and submit them to the upcoming ISMIR 2023 conference. All papers submitted to ISMIR 2023, whether mentored or not, will go through the regular scientific review process without any exceptions.

How do I apply?

If you are interested and eligible, please apply to the program by Mar 2, 2023, using the form: https://forms.gle/LdH7HtfQ6tLuNzJcA (please fill in one form for each paper).

FAQ

Additional details about the program, topics, eligibility, expectations, timeline, and FAQ are available at https://ismir2023.ismir.net/diversity/mentoring or you can contact the organizer at ismir2023-diversity@ismir.net if you have any further questions. 

WiMIR Workshop 2022 Project Guides

This is a list of Project Guides and their areas of interest for the 2022 WiMIR Virtual Workshop, which will take place as an online-only satellite event of ISMIR2022.

The Workshop will take place on Monday 28 and Tuesday 29 November – please sign up by using this form.

We know that timezones for this are complicated, so we’ve made a Google Calendar with all the events on it – visit this link to add them to your calendar.

This year’s workshop is organized by Courtney Reed (Queen Mary University), Iris Yuping Ren (Utrecht University), Kitty Zhengshan Shi (Stanford University), and Jordan B.L. Smith (TikTok).

November 28

Vinod Vidwans: AI-Raga: A Mélange of the Musical and the Computational

This event will take place at 1600, GMT+5.5

AI-Raga is an artificially intelligent system that generates a musical composition in a given Raga without any human intervention. A close scrutiny of the treatises on Indian music shows that a computational perspective is inherently embedded in the ancient theories of music in India. It is an intellectual delight to decode the computational concepts from the vocabulary of Indian music, and it gives a sense of fructification and fulfilment to see these concepts successfully encoded in the AI-Raga system. In the vocabulary of Indian music, certain concepts and principles, viz. swara (a musical note), shruti (a microtone), Shadja-Panchama Bhava (the rule of fifths), and Shadja-Madhyama Bhava (the rule of fourths), have been rigorously theorized and well established. This paper and presentation attempt to unravel the computational facets of these fundamental concepts and principles, showing that a computational orientation of Indian music provides strong foundations for artificial creations and lends legitimacy to the aesthetic experience evoked. Based on the computational interpretation of these principles from the treatises, viz. the Natyashastra and the Sangeet Ratnakara, the author has developed the aforementioned creative AI system, which is yielding promising results. The latter part of the presentation involves a demonstration of the AI-Raga system. It is hoped that the audience will gain many new insights into the role of Artificial Intelligence (AI) in the future of Indian music through this presentation.
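
As a purely illustrative aside (not Prof. Vidwans’ system), the two relations named above have simple frequency-ratio readings; the sketch below derives Panchama (3:2) and Madhyama (4:3) from a hypothetical tonic and folds the results into a single octave.

```python
# Illustrative only: derive Panchama (Pa) and Madhyama (Ma) from a
# hypothetical tonic (Sa) using the 3:2 and 4:3 ratios named above.
def fold_to_octave(freq: float, tonic: float) -> float:
    """Fold a frequency into the octave [tonic, 2 * tonic)."""
    while freq >= 2 * tonic:
        freq /= 2
    while freq < tonic:
        freq *= 2
    return freq

SA = 240.0                            # assumed tonic frequency, in Hz
PA = fold_to_octave(SA * 3 / 2, SA)   # Shadja-Panchama Bhava (the fifth)
MA = fold_to_octave(SA * 4 / 3, SA)   # Shadja-Madhyama Bhava (the fourth)
print(f"Sa = {SA:.1f} Hz, Ma = {MA:.1f} Hz, Pa = {PA:.1f} Hz")
# Chaining these two relations and folding into one octave is one way
# interpreters reconstruct finer shruti divisions from the treatises.
```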

Bio: Dr. Vinod Vidwans is a professor in the Department of Design, Art and Performances at FLAME University, Pune, India. Before joining FLAME University, he was a Senior Designer/Professor at the National Institute of Design (NID), Ahmedabad, India, where he also headed the Departments of New Media and Software User Interface Design. He has been visiting faculty and a resource person at many prestigious institutes in India. He is an inheritor of the Indic knowledge tradition and is currently engaged in research on various Indic themes. He has designed and developed Artificial Intelligence (AI) systems for Indian music called ‘AI-Raga’ and ‘AI-Tala’. The AI-Raga system generates a Bandish (a musical composition) on its own, without any human assistance, in a given Raga and renders it in traditional classical Indian style. It generates a Bandish based on the Vadi, Samvadi, Aroha and Avroha of the Raga and does not rely on any database of Ragas. The AI-Tala system generates a Tabla performance in a given Tala.

Marcelo Queiroz: What do we look for in a research project?

This event will take place at 0900, GMT-3

The idea of this collaborative workshop is to brainstorm on the things that motivate participants to propose or join research projects, including goals (both personal and collective), frames of reference (theories, perspectives, aspirations), tools (models, algorithms, datasets), research practices (teamwork, interdisciplinarity, ethics), attitudes (the good, the bad and the ugly), and others that may be proposed on the spot. I’ll give a short introduction and share a personal perspective to get the ball rolling, and then open the floor for a horizontal debate.

Bio: Marcelo Queiroz is an Associate Professor in the Computer Science Department and vice-coordinator of the Sonology Research Center at the University of São Paulo. His background includes a BSc in Computer Science, a BA in Music Composition, an MSc in Applied Mathematics, and PhD and Habilitation degrees in Computer Science. He was a visiting scholar at the Universities of Coimbra (Portugal), Maynooth (Ireland) and Thessaloniki (Greece). His research interests include sound/music/speech processing, computational acoustics, network and mobile music, and, more generally, computational techniques for music analysis/composition/performance/improvisation.

Kate Helsen: En-chanted: how medieval notation and Gregorian chant gets digital

This event will take place at 1000, GMT-5

Taking a ‘Digital Humanities’ approach to medieval chant means getting to tackle various interesting questions. First, there is the musical notation itself, which is unlike modern notation and requires specialized document analysis to be developed for digital images of these manuscripts. Second, there is the enormous repertoire, mostly uncatalogued, which must be encoded in standard, efficient ways that allow for large-scale comparative analysis and the discovery of local musical traditions, or even just small, characteristic ‘riffs’. Third, we are looking at the implications of digital chant projects for our ability to know what this music sounded like. In addition, chant melodies offer computer scientists the opportunity to work with various machine learning algorithms and to think about these melodies as ‘strings’, analogous to language or even genetic strands of information. The results are as new as they are exciting: cross-disciplinary inspiration mixed with musically rewarding findings.
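
To make the ‘melodies as strings’ idea concrete, here is a small, self-contained sketch comparing two pitch strings with Levenshtein (edit) distance; the letter encoding and the melodies are invented for illustration (loosely in the spirit of letter-based chant encodings such as Volpiano), not drawn from any real corpus.

```python
# Compare two chant-like melodies encoded as pitch-letter strings
# using classic dynamic-programming Levenshtein (edit) distance.
def edit_distance(a: str, b: str) -> int:
    dp = list(range(len(b) + 1))      # distances for the empty prefix of a
    for i, ca in enumerate(a, 1):
        prev, dp[0] = dp[0], i        # prev holds the diagonal dp[i-1][j-1]
        for j, cb in enumerate(b, 1):
            prev, dp[j] = dp[j], min(
                dp[j] + 1,            # deletion
                dp[j - 1] + 1,        # insertion
                prev + (ca != cb),    # substitution (free if chars match)
            )
    return dp[len(b)]

melody_a = "fgfedfgh"   # hypothetical pitch strings
melody_b = "fgfdefgah"
print(edit_distance(melody_a, melody_b))  # small distance = similar melodies
```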

Bio: Kate Helsen is an Assistant Professor in the Department of Music Research and Composition at the Don Wright Faculty of Music at the University of Western Ontario. Her specialization in plainchant, and Early Music more broadly, has led to her involvement in several international projects that dismantle the traditional boundaries of music and technology and have resulted in publications in Plainsong and Medieval Music, Acta Musicologica, Empirical Musicology Review, and Early Music. Her research interests include early notations, the structural analysis of chant, melodic encodings, and obsessing over gorgeous manuscripts. Kate has sung professionally with the Tafelmusik Chamber Choir in Toronto since 2000.

Sakinat Oluwabukonla Folorunso: ORIN: A Nigerian music benchmark dataset for MIR tasks and research.

This event will take place at 1730, GMT+1

Music is often seen as the only truly universal language. Nigerian music is typically used for relaxation, health therapy, ceremonies, war, work, etc. Music Information Retrieval (MIR) is the field of extracting high-level information, such as genre, artist identity, or instrumentation, from music. My talk presents a new music dataset: ORIN, a Nigerian music benchmark dataset for MIR tasks and research. ORIN is the first Nigerian music dataset and consists of three categories of music: Traditional, English-Contemporary, and NaijArtist. The motivation is that, to date, only a few research works have been done on Nigerian songs. ORIN is the first publicly available, robust benchmark dataset of Nigerian music to be openly shared in a Findable, Accessible, Interoperable and Reusable (FAIR) way; it can be used to accelerate MIR tasks, AI models, and machine learning analysis, and to teach Nigerian cultural values to African students and students in the diaspora. Firstly, I will talk about the ORIN dataset and the initial results that I got. I will also talk about feature importance and the use of SHAP (an XAI tool) in my work. I will then discuss the challenges faced and some open issues.
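
As a rough illustration of the SHAP-based feature-importance analysis mentioned above, the sketch below trains a tree-based classifier on a random placeholder feature matrix (standing in for ORIN, which is not loaded here) and computes global importance from mean absolute SHAP values.

```python
# A rough sketch of SHAP feature importance for a music classifier.
# X and y are random placeholders, NOT the ORIN dataset.
import numpy as np
import shap
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 12))             # placeholder audio features
y = (X[:, 0] + X[:, 3] > 0).astype(int)    # placeholder genre labels

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
explainer = shap.TreeExplainer(model)
vals = np.abs(np.asarray(explainer.shap_values(X)))
# Output shape varies across shap versions; average over every axis
# except the feature axis (the one of size 12).
feat_axis = list(vals.shape).index(X.shape[1])
importance = vals.mean(axis=tuple(ax for ax in range(vals.ndim) if ax != feat_axis))
print("Mean |SHAP| per feature:", np.round(importance, 3))
```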

Bio: Sakinat Folorunso is a computer science lecturer in the Department of Mathematical Sciences, Computer Science unit, at Olabisi Onabanjo University (OOU), Nigeria. She is the team lead for the Artificial Intelligence Research Group (ArIRSG) at OOU and the lead organizer for IndabaX, Nigeria. Her research focuses on music information retrieval, computer vision, machine learning, and Responsible Artificial Intelligence. More information about her research can be found on her home page.

November 29

Keunwoo Choi: How to get rejected from ISMIR and ICASSP for quite a few times and still feel OK.

This event will take place at 1530, GMT+9

This is yet another talk about failures; this time, in academia. I will talk about what I wanted, what I thought, and what I thought wrong during my preparation for, and attendance of, a PhD program. My submissions were rejected four times in a row within two years from ISMIR and ICASSP. It was frustrating, but there were some lessons I could (try to) squeeze out of those moments, both at the time and retrospectively. Let’s talk about them; maybe there is something in common between us.

Bio: AI Research Director of Gaudio Lab. Research scientist at TikTok and Spotify, previously. PhD from Queen Mary University of London, Master and Bachelor from Seoul National University. Topics: music classification and recommendation, transcription, source separation, spatial audio, and audio synthesis.

Shantala Hegde: Neurocognitive deficits and how to repair them? Role of Music in Neuropsychological rehabilitation

This event will take place at 1400, GMT+5.5

Deficits in neurocognitive functions such as attention, memory, executive functions, language, and emotional and social functions form the sequelae of neurological, neurosurgical and neuropsychiatric conditions. These deficits are debilitating and determine functional recovery. Neuropsychological rehabilitation is an evidence-based treatment whose main goal is to enable patients to achieve their optimum level of functioning and overall well-being. Interventions to remediate cognitive deficits have often employed drill-based exercises, paper-and-pencil or computer-based tasks that focus on direct training of cognitive functions. Over the past decade and a half, there have been major advances in neuropsychological rehabilitation, and newer methods are being used to provide holistic intervention. Music-based interventions are one such addition, marking a paradigm shift from a social-science approach to a neuroscience model. Music shares neural networks with other neurocognitive as well as motor functions. Music-based interventions have been used to improve gait, neurocognitive (including socio-cognitive) functioning, speech, and emotional domains of functioning. The need for further well-controlled trials is imperative, and future research in the field of music and neuroscience has a crucial role to play in contributing to a better understanding of brain–behaviour relationships and clinical recovery.

Bio: Dr. Shantala Hegde is an Additional Professor and Consultant at the Neuropsychology Unit, Department of Clinical Psychology, and Consultant to the Department of Neurorehabilitation, National Institute of Mental Health and Neuro Sciences (NIMHANS), Bengaluru. She is an Intermediate Fellow of the prestigious Wellcome Trust UK-DBT India Alliance and is a mentee of Dr. Gottfried Schlaug, Professor of Neurology and Biomedical Engineering, University of Massachusetts Chan Medical School – Baystate Medical Center and Biomedical Engineering – IALS / UMass Amherst. Her other mentor as part of this fellowship is Dr. Matcheri S. Keshavan, Stanley Cobb Professor and Academic Head of Psychiatry, Beth Israel Deaconess Medical Center and Massachusetts Mental Health Center, Harvard Medical School. She is the first clinical psychologist to receive this prestigious fellowship in the country. She is the faculty-in-charge of the Music Cognition Lab at NIMHANS.

Fabien Gouyon and friends: A Day in the Life of a Music Track

This event will take place at 1700, GMT+0

The plan for this session is to interact with the audience on the many different research areas that are relevant to music streaming (and digital audio entertainment generally). We’ll organize the session as the imaginary trip of a music track from its release in our catalog at SiriusXM/Pandora to its recommendation to listeners. In parallel, this should provide you with a tour of the type of scientific problems we face in an industrial context, and of how we tackle them collaboratively.

Bio: Fabien Gouyon, Chun Guo, Sergio Oramas, Matt McCallum, Elaine Mao, and Matthew Davies are some of the ML scientists at SiriusXM/Pandora, working from the United States and Europe on Music Information Retrieval, Recommender Systems, and Natural Language Processing. Applications of our research range from music content understanding and personalized algorithmic radio programming to search & voice interaction, and more. Fabien is a former ISMIR President, now heading the SiriusXM/Pandora Europe Science team. After an internship, Chun has worked at Pandora since 2017 on many aspects of our products, ranging from search and voice to algorithmic radio programming. Sergio also interned and then joined to build voice interaction features; he now works on music understanding and multimodal embeddings. Matt worked at Gracenote and Serato before joining in 2019; he currently works on machine listening. Elaine joined from Rdio in 2016 and works on Pandora homepage personalization. Matthew joined SiriusXM/Pandora after 15 years in academia.

Audrey Laplante: The discoverability of local content on global music streaming platforms

This event will take place at 1330, GMT-5

Algorithms are not neutral. Research has shown that they can be biased and produce results that are unfair or discriminatory in terms of gender, race or ethnicity, among other characteristics. On music streaming platforms, algorithms are used to classify music, determine how search results are ranked, create personalized playlists, and recommend playlists, songs or artists to a user. Could these algorithms be biased? In this workshop, we will look at the impact of algorithms on the discoverability of local content on global music streaming platforms. We will present an overview of what we know so far on this topic. We will then discuss how we could measure the discoverability of this content given the opacity of algorithms and the inaccessibility of user data. We will also discuss how we could study and compare the challenges these biases pose for artists, music labels, and/or end-users from different cultures.

Bio: Audrey Laplante is an associate professor at the Université de Montréal’s School of Library and Information Science. She is a member of the Centre for Interdisciplinary Research in Music Media and Technology (CIRMMT). She received a PhD (2008) and a Master’s (2001) in information science from McGill University and Université de Montréal, respectively, and an undergraduate degree in piano performance at Université de Montréal (1999). Her research interests include user studies, information practices, music information retrieval and discovery systems, and social media.

WiMIR Workshop 2022

We’re very pleased to say that we’ll be doing the fifth annual WiMIR Workshop this year. It will take place virtually across two dates, Monday 28 and Tuesday 29 November as a satellite event of ISMIR 2022.

The WiMIR Workshop will be two days of talks by eminent researchers in the WiMIR community, as well as time for networking, socializing, and discussion. The Workshop is, as ever, free and open to ALL members of the MIR community.  

We hope you can join us. We will have more information about the Workshop for you soon, and registration for it will be free and will open in late October.

WiMIR mentoring 2022 signups are now open!

Signups are now open for the seventh round of the Women in Music Information Retrieval (WiMIR) mentoring program, to run in 2022! We kindly invite previous and new mentors and mentees to sign up for the upcoming round.

The WiMIR mentoring program connects women students, postdocs, early-stage researchers, industry employees, and faculty to more senior women and men in MIR who are dedicated to increasing opportunities for women in the field. Mentors will share their experiences and offer guidance to support mentees in achieving and exceeding their goals and aspirations. The program also offers all mentors the option to pair up with a peer mentor for discussing relevant topics with a professional at a similar stage of their career. By connecting individuals of different backgrounds and expertise, this program strengthens networks within the MIR community, both in academia and industry.

Mentee eligibility

  • You identify as a woman, trans, or non-binary person, at any career stage in MIR.
  • This year we are also starting to expand the mentorship program to other underrepresented groups, pending the availability of mentors. If you are from a group that is underrepresented either in the ISMIR community at large (e.g., the Global South) or in the MIR community in your own country (e.g., an ethnic or racial group), we encourage community members of all genders to fill out this form if you are interested in receiving mentorship.
  • Sign up to GET a mentor: https://tinyurl.com/wimir-mentee-signup  

Mentor eligibility

Sign up by December 17, 2021. Mentoring to begin in February 2022.

Questions? Email: wimir-mentoring@ismir.net  

Sincerely,
WiMIR Mentoring Program Team 2022

  • Johanna Devaney, Brooklyn College/Graduate Center (CUNY), United States
  • Lamtharn “Hanoi” Hantrakul, TikTok/ByteDance, Thailand
  • Michael Mandel, Reality Labs and Brooklyn College/Graduate Center (CUNY), United States
  • Francesca Ronchini, Inria, France

WiMIR Workshop 2021: A Virtual Success!

We’re pleased to say, again, that our virtual, distributed-in-time-and-space WiMIR Workshop 2021 went very well!

We had just under 200 signups this year, and did a two-day intensive across timezones from Tokyo to San Francisco. We were delighted to have speakers from Shanghai, Singapore, Denmark, Paris, and more!

We did longer talks this year, and used Discord as our socializing and networking space – we also resumed our discussions from Delft, covering Work/Life Balance, Burnout, Applying to Postdocs & Grad School, Picking an Advisor, Starting an MIR Business, and Interviewing.

Thanks to everyone who joined us! Special thanks to our volunteers:  Rachel Bittner, Jan Van Balen, Elio Quinton, Becky Liang, Meinard Mueller, Elena Georgieva – and extra-special thanks to our project guides: Dorien Herremans, Kat Agres, Tian Cheng, Stefania Serafin, Oriol Nieto, Emma Frid, Nick Bryan, Lamtharn “Hanoi” Hantrakul, Jason Hockman, Jake Drysdale, Olumide Okubadejo, and Cory McKay.

We’ll see you next year!

The WiMIR Workshop Organizers,

  • Courtney Reed, Queen Mary, UK
  • Kitty Shi, Stanford, USA
  • Jordan Smith, TikTok, UK
  • Thor Kell, Spotify, USA
  • Blair Kaneshiro, Stanford, USA

Sharing science: the MIP-frontiers video communication project

Originally published at https://mip-frontiers.eu/ on July 28, 2021 by Giorgia Cantisani.

Sharing science

Sharing your research with the rest of the world can be very challenging. Sometimes you may need to target a broader audience than simply the colleagues in your particular research field. Colleagues in other communities or disciplines are already less likely to read about your work. When it comes to sharing your research with the general public, things become even more difficult. 

There are several reasons why we should all aim to disseminate our research beyond our universities and scientific communities. For instance, it might be essential to explain your research to a general audience because it is carried out thanks to public funding. In such a case, it is a social duty to inform citizens about your findings and make your research comprehensible. It’s a virtuous circle that produces culture and participation and, in return, can pay for new investments in research.

Another reason is to attract the next generation to science and to your specific research field. This aspect is often underrated because it has no immediate economic or social return, but it is critical in the long term. Undergraduate students can orient their education choices accordingly, become our future colleagues, and enlarge our research community. It’s vital, then, to let them know that your research exists and might be interesting for them. This would also increase diversity in the community and reach all those students for whom computer science is not among the options because of societal, demographic, or socioeconomic factors.

In this context, it is still tough for scientists to engage the uninitiated with very specific topics that seem to have almost no connection with their everyday lives. However, many different techniques, tools, and languages have been studied and gradually refined over time. With the increasing amount of information available online, it is becoming more and more important to be concise and to capture the audience’s attention from the very beginning. Video might be one of the ways to go.

Number of views per day (log) vs. video length (min). Plot courtesy of “Communicating Science With YouTube Videos: How Nine Factors Relate to and Affect Video Views” by Velho et al. (https://www.frontiersin.org/articles/10.3389/fcomm.2020.567606/full).

Videos about science

Videos about science have become more and more popular over the last decade, as they are a low-barrier medium for communicating ideas efficiently and effectively. Short videos of 3 to 5 minutes are ideal because they are long enough to explain a concept and sufficiently short for viewers to decide if they are interested. We have all learned about the advantages and disadvantages of this medium during the last year of the pandemic. The format of conferences has changed, and video abstracts are now standard. However, video abstracts are intended for peers, not for a broader audience. When disseminating science, complex concepts should be made accessible to the largest audience possible. In such cases, motion graphics and animated storytelling can be a possible solution. Through the process of abstraction in an animated representation, we can effectively simplify the concept we want to transmit. The style, colour palette, transitions, and aesthetic and functional choices can all work together to convey the main message.

Examples of scientific dissemination projects. Image courtesy of Scienseed.

Now, I can’t say this process of abstraction is easy. It takes time, many iterations over the script, and many drafts before coming up with something good. You have to learn to work with visual designers who do not know anything about your research. We experienced this while working on the MIP-frontiers video communication project, which is meant to attract young researchers to our field. It’s very hard to simplify and abstract the things you work on every day; it feels like sacrificing many details that are essential to you for the sake of simplicity. Because of that, you always have to keep in mind who your target audience is. In the specific case of this video, there was an additional problem: we needed to cover as many areas of music information processing (MIP) as possible, which was quite hard. The trick we found was to trace the history of a song that an imaginary inhabitant of the future is listening to. We managed to derive a circular story following the song from composition to recording and from distribution to the listening experience. The music is therefore the backbone of the video, and its choice was crucial.

Making-of

When preparing a motion graphic, you need to provide the visual designers with a script (a description of the scenes), the voiceover (the text that an actor reads, describing each scene), and the background music. With those three elements, the visual designers build an animation, on which you can then give feedback and adapt the voiceover and the music again. This process is repeated until convergence, when everyone is happy with the result.

Draft of the initial script of the MIP-frontiers animation.

In our case, an additional difficulty was that the music wasn’t just some “background” music. It was, on the contrary, the absolute protagonist, the element that contributes most to conveying the main message. The music evolves throughout the video and changes according to the MIR application we wanted to illustrate. All of this requires a non-negligible effort of synchronization and composition.

Regarding the voiceover, we quickly realized how few words can fit in a 3-minute video. More importantly, we learned how hard it can be to summarize the vast diversity of research in our community. Moreover, there are synchronization constraints that impose a fixed number of words in which to express complex concepts. In the end, we reached a compromise, trying to represent a selection of MIP applications as comprehensively as possible.

Once the voiceover, the animation and the music are done, creating the final video is still not trivial. In addition to the temporal synchronization of events, automation of the volume of the various instruments and of the voice is necessary. This operation is always needed in video production, and the role of a sound engineer is essential for an optimal result. Especially in this work, where the music and its evolving parts are the protagonists, this professional figure had a particularly central role in gluing all the components together.

Special thanks

In general, it was a great experience! I learned a lot, and I spent some time doing something that is not strictly related to my research but that is a fundamental part of a scientist’s job. We really thank Mandela (music), Scienseed (animation), and Alberto Di Carlo (sound engineer) for their great work!

Mandela is an Italian instrumental jazz band from Vicenza. The sound of the band is characterized by a fusion of jazz idioms, rock, world music, psychedelia, and funk. Over the years, the band has performed in several festivals and venues and released three full-length albums, all available on the major streaming services. Their latest release was presented at the Rimusicazioni festival (Bolzano, Italy) and consists of an original soundtrack for “Grass: A Nation’s Battle for Life”, one of the earliest documentaries ever produced (1925).

For this video, the track Simple from the album Mandela s.t. was used. The song was remixed and remastered by Alberto Di Carlo.

Scienseed is a multifunctional agency for the dissemination of scientific findings. Its founding goal is to promote public engagement with science through all the tools available in the era of IT. We specialize in translating scientific data into accessible products and activities, aimed at either the scientific community (peers) or the general public (society). We provide support to academic laboratories, research institutes, universities and private institutions to raise public awareness and increase the impact of their contributions to science.

Giorgia Cantisani is a PhD student at Télécom Paris in France within the Audio Data Analysis and Signal Processing (ADASP) team and the European training network MIP-Frontiers. Her research interests range from music information retrieval (MIR) to neuroscience. In particular, she is interested in the analysis of brain responses to music and how these can be used to guide and inform music source separation.

Diversity and Inclusion Initiatives at ISMIR 2021

Originally published at https://ismir2021.ismir.net/blog/diversity_inclusion/ on September 5, 2021 by Blair Kaneshiro, Jordan B. L. Smith, Jin Ha Lee, and Alexander Lerch.

The 22nd International Society for Music Information Retrieval Conference (ISMIR2021) is excited to announce a number of Diversity & Inclusion (D&I) initiatives for this year’s conference. These initiatives are aimed toward ensuring a positive and supportive conference environment while also supporting a diverse range of presenters and attendees across backgrounds, career stages, and MIR research areas. This year’s efforts, facilitated by the online format of ISMIR2021, expand upon the numerous Women in Music Information Retrieval (WiMIR) initiatives that have taken place at ISMIR conferences over the past decade, and represent a broadening of how the MIR community views and supports D&I in the field.

This year’s D&I efforts are led by ISMIR2021 D&I Chairs Blair Kaneshiro (Stanford University, US) and Jordan B. L. Smith (ByteDance/TikTok, UK) in close collaboration with the ISMIR2021 General Chairs Jin Ha Lee (University of Washington, US) and Alexander Lerch (Georgia Institute of Technology, US), but it is the efforts of all the ISMIR2021 Organizers that make these initiatives possible. In this blog post we summarize the various D&I initiatives planned for ISMIR2021.

Code of Conduct

Since 2018, a Code of Conduct has accompanied the ISMIR conference. It is prepared by the conference organizers in conjunction with the ISMIR Board and is intended to ensure that the conference environment — whether in person or virtual — is a safe and inclusive space for all participants. In 2020 the Code of Conduct was updated for the first ever virtual ISMIR. All ISMIR2021 participants agree, at time of registration, to adhere to this year’s Code of Conduct.

Registration fees and financial support

To maximize the accessibility of the ISMIR2021 conference for students, the student registration fee is only $15 USD at the early-bird rate (until September 30) and $25 USD thereafter; and the student tutorial fee is just $5. These low rates are subsidized by generous ISMIR2021 sponsor contributions and full (non-student) registrations.

In addition to reduced registration fees, ISMIR2021 offers registration waivers and childcare support to a broad range of attendees. The ISMIR conference has a long history of offering student travel grants to cover conference registration and lodging. Since 2016, the conference has offered WiMIR grants as well, which provide financial assistance to women of any career stage to attend the conference. The ISMIR2019 conference expanded financial support opportunities once more to include Community grants for former and prospective MIR community members, and the ISMIR2020 conference offered childcare grants as well as Black in MIR registration waivers for the first time.

This year, the low cost of attending the conference, combined with generous sponsor support, enables the ISMIR2021 conference to once again provide a wide range of grants. Registration grants are available to students, unaffiliated attendees (anyone who has no professional affiliation that will cover the registration fee), and attendees who self-identify with a broad range of D&I categories, including Black in MIR, attendees from low- or middle-income countries, “New to ISMIR” presenters, Queer in MIR, and WiMIR. In addition, any attendee is eligible to apply for a childcare grant.

Details about ISMIR2021 grant eligibility and the application process are available here: https://bit.ly/ismir2021grants

D&I blog posts

This year’s D&I initiatives also seek to address the “hidden curriculum” of navigating academia as well as STEM research. To this end, the ISMIR2021 organizers are authoring a number of blog posts on such topics as preparing a successful ISMIR submission and reviewing ISMIR papers, as well as reposting relevant content from the WiMIR blog. Upcoming blog posts will include an introduction to ISMIR2021 Newcomer Initiatives and community advice on navigating the conference. Visit the ISMIR2021 blog page (maintained by Qhansa Bayu (Social Media Chair; Telkom University, ID) and Ashvala Vinay (Website Chair; Georgia Institute of Technology, US)) to stay up to date!

Special call for papers on “Cultural Diversity in MIR”

This year, the conference organizers wanted to promote the cultural diversity of the ISMIR community and its research. To this end, the ISMIR2021 Call for Papers included a special call for papers on “Cultural Diversity in MIR”. This year’s Scientific Chairs — Zhiyao Duan (University of Rochester, US), Juhan Nam (KAIST, KR), Preeti Rao (IIT Bombay, IN), and Peter van Kranenburg (Meertens Institute, NL) — organized the track with a focus on non-Western music and cross-cultural studies. Submissions to this track underwent the same review process as papers in the main track, with specially selected meta-reviewers.

In all, 44 papers were submitted to this track, of which 11 were accepted and verified by the Scientific Chairs to match the call. Accepted papers in this track will be presented in the same format as other accepted papers, with recognition on the conference website and the institution of a special paper award for the themed track.

Special Late-Breaking/Demo (LBD) “New to ISMIR” track

The Late-Breaking/Demo (LBD) session, involving short-format papers which undergo light peer review, has long served as a venue for new or junior researchers to gain a foothold in the MIR community. The ISMIR2021 LBD chairs — Li Su (Academia Sinica, TW), Chih-Wei Wu (Netflix, US), and Siddharth Gururani (Electronic Arts, US) — present a new special track called “New to ISMIR”. In this track, first-time ISMIR attendees, students, WiMIR community members, and underrepresented minorities have the opportunity to receive extra mentoring on their LBD submissions. Presenters in this track are strongly encouraged to apply for a registration waiver as well. More information can be found in the full Call for LBDs.

Newcomer Initiatives

Navigating a new conference can be challenging in the best of times, but is especially difficult in the virtual format. The ISMIR2021 Newcomer Initiatives Chairs — Nick Gang (Apple, US) and Elona Shatri (Queen Mary University of London, UK) — are organizing a number of initiatives to help newcomers navigate the conference; meet other attendees; and establish social and professional connections to help them achieve their academic, research, and career aims during and beyond the conference. Announcements on these initiatives are coming soon!

WiMIR Sponsorship

Since 2016, industry sponsors have contributed specifically to WiMIR, typically by funding WiMIR travel grants and/or hosting WiMIR-themed receptions during the conference. These initiatives increase access to the ISMIR conference for women of all career stages, and also provide a designated setting for women and other attendees to network during the conference.

We express our sincere thanks to this year’s WiMIR sponsors, whose contributions support the various D&I initiatives described here, and to the ISMIR2021 Sponsorship Chairs: Sertan Şentürk (Kobalt Music, UK), Alia Morsi (Universitat Pompeu Fabra, ES), and Lamtharn “Hanoi” Hantrakul (ByteDance/TikTok, CN).

If your company would like to participate as a WiMIR sponsor, more information can be found on the ISMIR2021 Call for Sponsors page.

WiMIR Plenary Session

WiMIR began meeting informally at ISMIR conferences starting at ISMIR2011. After a few years of ad-hoc meetings organized by interested attendees, a WiMIR plenary session was incorporated into the main ISMIR conference program starting in 2015. Since then, WiMIR sessions have included presentations on the WiMIR Mentoring Program as well as community and invited keynote presentations. This year’s WiMIR plenary session will include an invited keynote speaker to be announced soon!

WiMIR Meetup Sessions

While the virtual conference format has posed challenges when it comes to offering the range and serendipity of interactions experienced in person, it also offers opportunities to try out new formats to bring attendees together — not only for formal research presentations, but also for informal discussions. Last year, the ISMIR2020 conference for the first time included WiMIR-themed meetup sessions throughout the conference. These sessions, centered around the theme of “Notable Women in MIR”, gave conference attendees the chance to meet informally with women in the field and discuss topics ranging from career paths to technical details of their research.

This year, the ISMIR2021 conference is expanding upon the format of these sessions to include a range of underrepresented communities in MIR. More information on these special meetup sessions will be announced in coming months.

WiMIR Workshop

The WiMIR Workshop has taken place annually since 2018 as a satellite event of the ISMIR conference. The goal of the WiMIR Workshop is to provide a venue for mentorship, networking, and collaboration among women and allies in the ISMIR community. The Workshop took place as an in-person one-day event in 2018 and again in 2019, and migrated to a virtual format in 2020 due to the COVID-19 pandemic.

In 2021, the WiMIR 4th Annual Workshop will take place virtually on Friday, October 29 and Saturday, October 30. It is a free event open to all members of the MIR community, and will include events for all time zones. The speaker lineup and schedule will be announced soon!

Other ISMIR Community D&I Initiatives

The D&I initiatives of ISMIR2021 are part of a larger ecosystem of ISMIR community initiatives. For more information on these initiatives, and to stay up to date on what is happening with the ISMIR2021 conference and the community at large, visit the following resources:

See you at ISMIR2021 in November!

Blair Kaneshiro (Stanford University, USA) and Jordan B. L. Smith (ByteDance, UK) are the Diversity & Inclusion Chairs of the ISMIR2021 Conference. Jin Ha Lee (University of Washington, USA) and Alexander Lerch (Georgia Institute of Technology, USA) are the General Chairs of the ISMIR2021 Conference. ISMIR2021 will take place as a virtual conference from November 4-8, 2021.

WiMIR Workshop 2021 Project Guides

This is a list of Project Guides and their areas of interest for the 2021 WiMIR Virtual Workshop, which will take place as an online-only satellite event of ISMIR2021.

The Workshop will take place on Friday, October 29 and Saturday, October 30 – please sign up by using this form: https://forms.gle/GHjqwaHWBciX9tuT7

We know that timezones for this are complicated, so we’ve made a Google Calendar with all the events on it – visit this link to add them to your calendar

This year’s Workshop is organized by Courtney Reed (Queen Mary University), Kitty Shi (Stanford University), Jordan B. L. Smith (ByteDance), Thor Kell (Spotify), and Blair Kaneshiro (Stanford University).

October 29

Dorien Herremans: Music Generation – from musical dice games to controllable AI models

This event will take place at 1500, GMT+8

In this fireside chat, Dorien will give a brief overview of the history of music generation systems, with a focus on the current challenges in the field, followed by an open discussion and Ask-Me-Anything (AMA) session. Prof. Herremans’ recent work has focused on creating controllable music generation systems using deep learning technologies. One challenge in particular – generating music with steerable emotion – has been central in her research. When it comes to affect and emotion, computer models still do not compare to humans. Using affective computing techniques and deep learning, Dorien’s team has built models that learn to predict perceived emotion from music. These models are then used to generate new fragments in a controllable manner, so that users can steer the desired arousal/valence level or tension in newly generated music. Other challenges tackled by Dorien’s team include ensuring repeated themes in generated music, automatic music transcription, and novel music representations, including nnAudio, a PyTorch library for GPU-accelerated audio processing.
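
As a small usage sketch of the nnAudio library mentioned above (the import path follows the classic `Spectrogram` module; newer releases also expose these layers under `nnAudio.features`, so the exact API may differ by version):

```python
# A version-dependent sketch of nnAudio's GPU-friendly spectrogram
# layers (classic API; newer releases use `from nnAudio import features`).
import torch
from nnAudio import Spectrogram

stft_layer = Spectrogram.STFT(n_fft=2048, hop_length=512, sr=22050)
audio = torch.randn(1, 3 * 22050)   # dummy 3-second waveform, batch of 1
spec = stft_layer(audio)            # runs on GPU if moved with .to("cuda")
print(spec.shape)
```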

Dorien Herremans is an Assistant Professor at Singapore University of Technology and Design, where she is also Director of Game Lab. Before joining SUTD, she was a Marie Sklodowska-Curie Postdoctoral Fellow at the Centre for Digital Music at Queen Mary University of London, where she worked on the project: “MorpheuS: Hybrid Machine Learning – Optimization techniques To Generate Structured Music Through Morphing And Fusion”. She received her Ph.D. in Applied Economics on the topic of Computer Generation and Classification of Music through Operations Research Methods, and graduated as a Business Engineer in Management Information Systems at the University of Antwerp in 2005. After that, she worked as a consultant and was an IT lecturer at the Les Roches University in Bluche, Switzerland. Dr. Herremans’ research interests include AI for novel applications in music and audio.

Kat Agres: Music, Brains, and Computers, Oh My!

This event will take place at 1600, GMT+8

In an informal, ask-me-anything chat, Kat will discuss her career path through cognitive science to computational approaches to music cognition, to her current research in music, computing and health.

Kat Agres is an Assistant Professor at the Yong Siew Toh Conservatory of Music (YSTCM) at the National University of Singapore (NUS), and teaches classes at YSTCM, Yale-NUS, and the NUS YLL School of Medicine. She was previously a Research Scientist III and founder of the Music Cognition group at the Institute of High Performance Computing, A*STAR. Kat received her PhD in Psychology (with a graduate minor in Cognitive Science) from Cornell University in 2013, and holds a bachelor’s degree in Cognitive Psychology and Cello Performance from Carnegie Mellon University. Her postdoctoral research was conducted at Queen Mary University of London, in the areas of Music Cognition and Computational Creativity. She has received numerous grants to support her research, including Fellowships from the National Institute of Health (NIH) and the National Institute of Mental Health (NIMH) in the US, postdoctoral funding from the European Commission’s Future and Emerging Technologies (FET) program, and grants from various funding agencies in Singapore. Kat’s research explores a wide range of topics, including music technology for healthcare and well-being, music perception and cognition, computational modeling of learning and memory, statistical learning, automatic music generation and computational creativity. She has presented her work in over fifteen countries across four continents, and remains an active cellist in Singapore.

Tian Cheng: Beat Tracking with Sequence Models

This event will take place at 1830, GMT+9

Beat tracking is an important MIR task with a long history. It provides basic metrical information and is the foundation of synchronization-based applications. In this talk, I will summarize common choices for building a beat tracking model, based on research on related topics (beat, downbeat, and tempo). I will also compare simple sequence models for beat tracking. In the last part, I will give some examples to show how beat tracking is used in real-world applications.
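
For orientation, here is a minimal example of classic dynamic-programming beat tracking with librosa; it is illustrative only and does not implement the sequence models compared in the talk (the bundled example clip is an arbitrary choice).

```python
# Minimal classic beat tracking with librosa (Ellis-style dynamic
# programming), shown for orientation; not the sequence models above.
import librosa

y, sr = librosa.load(librosa.ex("trumpet"))           # any audio file works
onset_env = librosa.onset.onset_strength(y=y, sr=sr)  # "beat salience" curve
tempo, beats = librosa.beat.beat_track(onset_envelope=onset_env, sr=sr)
beat_times = librosa.frames_to_time(beats, sr=sr)
print(f"estimated tempo: {float(tempo):.1f} BPM")
print("first beat times (s):", beat_times[:8].round(2))
```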

Tian Cheng is a researcher in the Media Interaction Group at the National Institute of Advanced Industrial Science and Technology (AIST), Japan. From 2016 to 2018, she was a postdoctoral researcher in the same group. Her research interests include beat tracking and music structure analysis. Her work provides basic music content estimations to support applications for music editing and creation. She received her PhD from Queen Mary University of London in 2016; her dissertation focused on using music acoustics for piano transcription.

Stefania Serafin: Sonic Interactions for All

This event will take place at 1800, GMT+2

In this workshop I will introduce our recent work on using novel technologies and sonic interaction design to help hearing-impaired users and individuals with limited mobility enjoy music. The talk will present the technologies we develop in the Multisensory Experience Lab at Aalborg University in Copenhagen, such as VR, AR, novel interfaces, and haptic devices, as well as how these technologies can be used to help populations in need.

Stefania Serafin is Professor of Sonic Interaction Design at Aalborg University in Copenhagen. She received a Ph.D. in Computer-Based Music Theory and Acoustics from Stanford University. She is the president of the Sound and Music Computing Association and principal investigator of the Nordic Sound and Music Computing Network. Her research interests include sonic interaction design, sound for VR and AR, and multisensory processing.

Oriol Nieto: Overview, Challenges, and Applications of Audio-based Music Structure Analysis

This event will take place at 1000, GMT-7

The task of audio-based music structure analysis aims at identifying the different parts of a given music signal and labeling them accordingly (e.g., verse, chorus). The automatic approach to this problem can help several applications such as intra- and inter-track navigation, section-aware automatic DJ-ing, section-based music recommendation, etc. This is a fundamental MIR task that has significantly advanced over the past two decades, yet still poses several interesting research challenges. In this talk I will give an overview of the task, discuss its open challenges, and explore the potential applications, some of which have been employed at Adobe Research to help our users have better creative experiences.
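
As a hedged, minimal illustration of the task (not of any specific system discussed in the talk), the sketch below estimates section boundaries by agglomeratively clustering chroma frames with librosa; the fixed section count k is an assumption that real systems would estimate, and real systems also use much richer features.

```python
# A minimal sketch of structure segmentation: agglomerative clustering
# of chroma features into k contiguous sections, using librosa.
import librosa
import numpy as np

y, sr = librosa.load(librosa.ex("nutcracker"))
chroma = librosa.feature.chroma_cqt(y=y, sr=sr)
k = 8                                           # assumed number of sections
bounds = librosa.segment.agglomerative(chroma, k)
bound_times = librosa.frames_to_time(bounds, sr=sr)
print("estimated section boundaries (s):", np.round(bound_times, 2))
```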

Oriol Nieto (he/him or they/them) is a Senior Audio Research Engineer at Adobe Research in San Francisco. He is a former Staff Scientist in the Radio and Music Informatics team at Pandora, and holds a PhD from the Music and Audio Research Laboratory of New York University. His research focuses on topics such as music information retrieval, large-scale recommendation systems, music generation, and machine learning on audio, with special emphasis on deep architectures. His PhD thesis is about trying to better teach computers to “understand” the structure of music. Oriol develops open-source Python packages, plays guitar, violin, and cajón, and sings (and screams) in their spare time.

Emma Frid: Music Technology for Health

This event will take place at 2000, GMT+2

There is a growing interest in sound and music technologies designed to promote health, well-being, and inclusion, with many multidisciplinary research teams aiming to bridge the fields of accessibility, music therapy, universal design, and music technology. This talk will explore some of these topics through examples from two projects within my postdoctoral work at IRCAM/KTH: Accessible Digital Musical Instruments – Multimodal Feedback and Artificial Intelligence for Improved Musical Frontiers for People with Disabilities, focused on the design and customization of Digital Musical Instruments (DMIs) to promote access to music-making; and COSMOS (Computational Shaping and Modeling of Musical Structures), focused on the use of data science, optimization, and citizen science to study musical structures as they are created in music performances and in unusual sources such as heart signals. 

Emma Frid is a postdoctoral researcher at the Sciences et technologies de la musique et du son (STMS) Laboratory at the Institut de Recherche et Coordination Acoustique/Musique (IRCAM) in Paris, where she works in the COSMOS project under a Swedish Research Council International Postdoctoral Grant hosted by the Sound and Music Computing Group at KTH Royal Institute of Technology. She holds a PhD in Sound and Music Computing from the Division of Media Technology and Interaction Design at KTH, and a Master of Science in Engineering in Media Technology from the same university. Her PhD thesis focused on how Sonic Interaction Design can be used to promote inclusion and diversity in music-making. Emma’s research is centered on multimodal sound and music interfaces designed to promote health and inclusion, predominantly through work on Accessible Digital Musical Instruments (ADMIs).

Nick Bryan: Learning to Control Signal Processing Algorithms with Deep Learning

This event will take place at 1230, GMT-7

Expertly designed signal processing algorithms have been ubiquitous for decades and helped create the foundation of countless industries and areas of research (e.g. music information retrieval, audio fx, voice processing). In the last decade, however, expertly designed signal processing algorithms have been rapidly replaced with data-driven neural networks, posing the question — is signal processing still useful? And if so, how? In this talk, I will attempt to address these questions and provide an overview of how we can combine both disciplines, using neural networks to control (or optimize) existing signal processing algorithms from data and perform a variety of tasks such as guitar distortion modeling, automatic removal of breaths and pops from voice recordings, automatic music mastering, acoustic echo cancellation, and automatic voice production. I will then discuss open research questions and future research directions with a focus on music applications.
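
As a toy sketch of the general idea, assuming nothing beyond PyTorch: plain gradient descent (standing in for a neural controller) tunes the coefficient of a classic one-pole lowpass filter so that its output matches a reference, a miniature of the pattern described in the talk, where real systems control far richer processors.

```python
# Toy differentiable DSP: recover the coefficient of a one-pole lowpass
# by gradient descent, with PyTorch supplying the gradients.
import torch

def one_pole_lowpass(x: torch.Tensor, a: torch.Tensor) -> torch.Tensor:
    """y[n] = (1 - a) * x[n] + a * y[n - 1], with coefficient a in (0, 1)."""
    prev = torch.zeros(())
    outs = []
    for n in range(x.shape[0]):
        prev = (1 - a) * x[n] + a * prev
        outs.append(prev)
    return torch.stack(outs)

torch.manual_seed(0)
x = torch.randn(256)                                  # dummy input signal
with torch.no_grad():
    target = one_pole_lowpass(x, torch.tensor(0.7))   # "reference device"

raw = torch.tensor(0.0, requires_grad=True)           # unconstrained parameter
opt = torch.optim.Adam([raw], lr=0.05)
for _ in range(200):
    a = torch.sigmoid(raw)                            # constrain to (0, 1)
    loss = torch.mean((one_pole_lowpass(x, a) - target) ** 2)
    opt.zero_grad()
    loss.backward()
    opt.step()
print(f"recovered a = {torch.sigmoid(raw).item():.3f} (true value 0.7)")
```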

Nicholas J. Bryan is a senior research scientist at Adobe Research, interested in (neural) audio and music signal processing, analysis, and synthesis. Nick received his PhD and MA from CCRMA, Stanford University, and an MS in Electrical Engineering, also from Stanford, as well as a Bachelor of Music and a BS in Electrical Engineering, summa cum laude, from the University of Miami, FL. Before Adobe, Nick was a senior audio algorithm engineer at Apple, where he worked on voice processing algorithms for 4.5 years.

October 30

Lamtharn “Hanoi” Hantrakul: Transcultural Machine Learning in Music and Technology

This event will take place at 1600, GMT+8

Transcultural Technologies empower cultural pluralism at every phase of engineering and design. We often think of technology as a neutral tool, but technology is always created and optimized within the cultural scope of its inventors. This cultural mismatch is most apparent when tools are used across a range of contrasting traditions. Music and Art from different cultures, and the people who create and breathe these mediums, are an uncompromising sandbox in which to interrogate these limitations and develop breakthroughs that empower a plurality of cultures. In this talk, we will take a deep dive into tangible audio technologies incubated in musical traditions from Southeast Asia, South Asia, South America and beyond.

Hanoi is a Bangkok-born Shanghai-based Cultural Technologist, Research Scientist and Composer. As an AI researcher, Hanoi focuses on audio ML that is inclusive of musical traditions from around the world. At Google AI, he co-authored the breakthrough Differentiable Digital Signal Processing (DDSP) library with the Magenta team and led its deployment across two Google projects: Tone Transfer and Sounds of India.  At TikTok, he continues to develop AI tools that empower music making across borders and skill levels. As a Cultural Technologist, Hanoi has won international acclaim for his transcultural fiddle “Fidular” (Core77, A’), which has been displayed in museums and exhibitions in the US, EU and Asia. He is fluent in French, Thai, English and is working on his Mandarin.

Jason Hockman + Jake Drysdale: Give the Drummer Some

This event will take place at 1000, GMT+1

In the late 1980s, popular electronic music (EM) emerged at the critical intersection between affordable computer technology and the consumer market, and has since grown to become one of the most popular genres in the world. Ubiquitous within EM creation, digital sampling has facilitated the incorporation of professional-quality recorded performances into productions; among the most frequently sampled recordings used in EM are short percussion solos from funk and jazz performances—or breakbeats. While these samples add an essential energetic edge to productions, they are generally used without consent or recognition. Thus, there is an urgency for the ethical redistribution of cultural value to account for the influence of a previous generation of artists. This workshop will present an overview of the topic of breakbeats and their relation to modern music genres, as well as current approaches to breakbeat analysis, synthesis and transformative effects developed in the SoMA Group at Birmingham City University.

Jake Drysdale is currently a PhD student in the Sound and Music Analysis (SoMA) Group at Birmingham City University, where he specialises in neural audio synthesis and structural analysis in electronic music genres. Jake leverages his perspective as a professional electronic music producer and DJ in the development of intelligent music production tools that break down boundaries imposed by current technology.

Jason Hockman is an associate professor of audio engineering at Birmingham City University. He is a member of the Digital Media Technology Laboratory (DMTLab), in which he leads the Sound and Music (SoMA) Group for computational analysis of sound and music and digital audio processing. Jason conducts research in music informatics, machine listening and computational musicology, with a focus on rhythm and metre detection, music transcription, and content-based audio effects. As an electronic musician, he has had several critically-acclaimed releases on established international record labels, including his own Detuned Transmissions imprint.

Olumide Okubadejo: Ask Me Anything

This event will take place at 1800, GMT+2

Bring your industry questions for Spotify’s Olumide Okubadejo in this informal discussion, covering moving research into production, industry scale vs. academic scale, and moving between the two worlds.

Olumide Okubadejo was born and raised in Nigeria. The musical nature of the family he was born into and the streets he was raised in instilled in him, early on, a penchant for music. This appreciation for music led him to play the drums in his local community by the age of 9; he went on to learn the piano and the guitar by the age of 15. He studied for his undergraduate degree at FUTMinna, Nigeria, earning a Bachelor of Engineering degree in Electrical and Computer Engineering. He proceeded to the University of Southampton, where he earned a master’s degree in Artificial Intelligence, and then to France, where he earned a PhD. Since then he has focused his research on machine learning for sound and music. These days, at Spotify, he researches and focuses on assisted music creation using machine learning.

Cory McKay: What can MIR teach us about music? What can music teach us in MIR?

This event will take place at 1300, GMT-4

Part of MIR’s richness is that it brings together experts in diverse fields, from both academia and industry, and gets us to think about music together. However, there is perhaps an increasing tendency to segment MIR into discrete, narrowly defined problems, and to attempt to address them largely by grinding huge, noisy datasets, often with the goal of eventually accomplishing something with commercial applications. While all of this is certainly valuable, and much good has come from it, there has been an accompanying movement away from introspective thought about music, from investigating fundamental questions about what music is intrinsically, and about how and why people create, consume and are changed by it. The goals of this workshop are to discuss how we can use the diverse expertise of the MIR community to do better in addressing foundational music research, and how we can reinforce and expand collaborations with other research communities.

Cory McKay is a professor of music and humanities at Marianopolis College and a member of the Centre for Interdisciplinary Research in Music Media and Technology in Montréal, Canada. His multidisciplinary background in information science, jazz, physics and sound recording has helped him publish research in a diverse range of music-related fields, including multimodal work involving symbolic music representations, audio, text and mined cultural data. He received his Ph.D., M.A. and B.Sc. from McGill University, completed a second bachelor’s degree at the University of Guelph, and did a postdoc at the University of Waikato. He is the primary designer of the jMIR software framework for performing multimodal music information retrieval research, which includes the jSymbolic framework for extracting musical features from digital scores, and also serves as music director of the Marianopolis Laptop Computer Orchestra (MLOrk).

WiMIR Workshop 2021

We’re very pleased to say that we’ll be doing the fourth annual WiMIR Workshop around ISMIR 2021 this year, on Friday, October 29 and Saturday, October 30.

Like last year, the event will be 100% virtual, mostly on Zoom! We’ll again offer programming across time zones and regions to make it easier for community members around the world to attend.  The Workshop is, as ever, free and open to ALL members of the MIR community.  

We hope you can join us – we’ll add more information about signups and our invited presenters soon!

Looking Back on WiMIR@ISMIR2020

As the ISMIR community organizes and prepares submissions for the ISMIR 2021 conference (to take place virtually November 8-12), let’s take a moment to reflect on the WiMIR events from last year’s conference! ISMIR 2020 was held October 11-15, 2020, as the first virtual ISMIR conference, with unprecedented challenges and opportunities. Slack and Zoom were used as the main platforms, which enabled the conference to designate channels for each presentation, poster and social space. With the support of WiMIR sponsors, substantial grants were given to underrepresented researchers, including women.

The ISMIR 2020 WiMIR events were organized by Dr. Claire Arthur (Georgia Institute of Technology) and Dr. Katherine Kinnaird (Smith College). A variety of WiMIR events took place during the conference, through which the ISMIR community showed support, shared ideas, and learned through thought-provoking sessions.

WiMIR Keynote

Dr. Johanna Devaney, from Brooklyn College and the Graduate Center, CUNY, gave an insightful keynote on our current comprehension and analysis of musical performance. The keynote, titled Performance Matters: Beyond the current conception of musical performance in MIR, was presented on October 13th.

WiMIR keynote video    

WiMIR keynote slides

Abstract: This talk will reflect on what we can observe about musical performance in the audio signal, and on where MIR techniques have succeeded and failed in enhancing our understanding of musical performance. Since its foundation, ISMIR has showcased a range of approaches for studying musical performance. Some of these have been explicit approaches for studying expressive performance, while others implicitly analyze performance along with other aspects of the musical audio. Building on my own work developing tools for analyzing musical performance, I will consider not only the assumptions that underlie the questions we ask about performance but also what we learn, and what we miss, in our current approaches to summarizing performance-related information from audio signals. I will also reflect on a number of related questions: What do we gain by summarizing over large corpora versus close reading of a select number of recordings, and what do we lose? What can we learn from generative techniques, such as those applied in style transfer? And finally, how can we integrate these disparate approaches in order to better understand the role of performance in our conception of musical style?
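As one concrete, hedged illustration of the kind of performance information that can be read from an audio signal, the sketch below uses librosa’s pYIN implementation to track fundamental frequency over time, one of the basic parameters from which timing and intonation in a performance can be studied. The filename is a placeholder, and this is not Dr. Devaney’s own tooling.

```python
import librosa
import numpy as np

# Load a recorded performance (placeholder filename).
y, sr = librosa.load("performance.wav")

# Probabilistic YIN: per-frame f0 estimates plus voicing decisions.
f0, voiced_flag, voiced_probs = librosa.pyin(
    y, fmin=librosa.note_to_hz("C2"), fmax=librosa.note_to_hz("C7"), sr=sr
)

# One crude summary: mean f0 over the frames judged to be voiced.
print("Mean f0 (Hz):", np.nanmean(f0[voiced_flag]))
```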

Johanna Devaney is an Assistant Professor at Brooklyn College and the CUNY Graduate Center. At Brooklyn College she teaches primarily in the Music Technology and Sonic Arts areas and at the Graduate Center she is appointed to the Music and the Data Analysis and Visualization programs. Previously, she was an Assistant Professor of Music Theory and Cognition at Ohio State University and a postdoctoral scholar at the Center for New Music and Audio Technologies (CNMAT) at the University of California at Berkeley. Johanna completed her PhD in music technology at the Schulich School of Music of McGill University. She also holds an MPhil degree in music theory from Columbia University and an MA in composition from York University in Toronto.

Johanna’s research focuses on interdisciplinary approaches to the study of musical performance. Primarily, she examines the ways in which recorded performances can be used to study performance practice and develops computational tools to facilitate this. Her work draws on the disciplines of music, computer science, and psychology, and has been funded by the Social Sciences and Humanities Research Council of Canada (SSHRC), the Google Faculty Research Awards program and the National Endowment for the Humanities (NEH) Digital Humanities program.  

Twitter: Johanna Devaney (@jcdevaney)

“Notable Women in MIR” Meetups

This year’s WiMIR programming also included a series of meet-up sessions, each of which was an informal Q&A-type drop-in event akin to an “office hour”. In these sessions, participants had the opportunity to talk with the following notable women in the field.

Dr. Amélie Anglade is a freelance Music Information Retrieval and Machine Learning / Artificial Intelligence Consultant based in Berlin, Germany. She carried out a PhD at Queen Mary University of London (2014) on knowledge representation of musical harmony and on modelling genre, composer and musical style using machine learning techniques and logic programming. After being employed as the first MIR Engineer at SoundCloud (2011-2013) and working for a couple of other music tech startups, she has, since 2014, offered freelance MIR and ML/AI services to startups, larger companies and institutions, in Berlin and remotely. Her projects range from building search and recommendation engines to supporting product development with Data Science solutions, including designing, implementing, training and optimising MIR features and products. To her clients she provides advice, experimentation, prototyping, production code implementation, management and teaching services. During her career she has worked for Sony CSL, Philips Research, Mercedes-Benz, the EU Commission, Senzari, and Data Science Retreat, among others.

Dr. Rachel Bittner is a Senior Research Scientist at Spotify in Paris. She received her Ph.D. in Music Technology in 2018 from the Music and Audio Research Lab at New York University under Dr. Juan P. Bello, with a research focus on deep learning and machine learning applied to fundamental frequency estimation. She has a Master’s degree in mathematics from New York University’s Courant Institute, as well as two Bachelor’s degrees in Music Performance and in Mathematics from the University of California, Irvine.

In 2014-15, she was a research fellow at Telecom ParisTech in France after being awarded the Chateaubriand Research Fellowship. From 2011-13, she was a member of the Human Factors division of NASA Ames Research Center, working with Dr. Durand Begault. Her research interests are at the intersection of audio signal processing and machine learning, applied to musical audio. She is an active contributor to the open-source community, including being the primary developer of the pysox and mirdata Python libraries.
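For readers curious about mirdata, mentioned above, here is a minimal sketch of its documented workflow for loading an MIR dataset; “orchset” is just one example dataset name.

```python
import mirdata

# Initialize a dataset loader; the data is downloaded on first use.
orchset = mirdata.initialize("orchset")
orchset.download()

# Check that the local copy matches the published checksums.
orchset.validate()

# Inspect one randomly chosen track and its available annotations.
track = orchset.choice_track()
print(track)
```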

Dr. Estefanía Cano is a senior scientist at AudioSourceRe in Ireland, where she researches topics related to music source separation. Her research interests also include music information retrieval (MIR), computational musicology, and music education. She is the CSO and co-founder of Songquito, a company that builds MIR technologies for music education. She previously worked at the Agency for Science, Technology and Research A*STAR in Singapore, and at the Fraunhofer Institute for Digital Media Technology IDMT in Germany.

Dr. Elaine Chew is a senior CNRS (Centre National de la Recherche Scientifique) researcher in the STMS (Sciences et Technologies de la Musique et du Son) Lab at IRCAM (Institut de Recherche et Coordination Acoustique/Musique) in Paris, and a Visiting Professor of Engineering in the Faculty of Natural & Mathematical Sciences at King’s College London. She is principal investigator of the European Research Council Advanced Grant project COSMOS and Proof of Concept project HEART.FM. Her work has been recognised by PECASE (Presidential Early Career Award in Science and Engineering) and NSF CAREER (Faculty Early Career Development Program) awards, and Fellowships at Harvard’s Radcliffe Institute for Advanced Study. She is an alum (Fellow) of the NAS Kavli and NAE Frontiers of Science/Engineering Symposia. Her research focuses on the mathematical and computational modelling of musical structures in music and electrocardiographic sequences. Applications include modelling of music performance, AI music generation, music-heart-brain interactions, and computational arrhythmia research. As a pianist, she integrates her research into concert-conversations that showcase scientific visualisations and lab-grown compositions.

Dr. Rebecca Fiebrink is a Reader at the Creative Computing Institute at University of the Arts London, where she designs new ways for humans to interact with computers in creative practice. Fiebrink is the developer of the Wekinator, open-source software for real-time interactive machine learning whose current version has been downloaded over 40,000 times. She is the creator of the world’s first MOOC about machine learning for creative practice, titled “Machine Learning for Artists and Musicians,” which launched in 2016 on the Kadenze platform. Much of her work is driven by a belief in the importance of inclusion, participation, and accessibility: she works frequently with human-centred and participatory design processes, and she is currently working on projects related to creating new accessible technologies with people with disabilities, designing inclusive machine learning curricula and tools, and applying participatory design methodologies in the digital humanities. Dr. Fiebrink was previously an Assistant Professor at Princeton University and a lecturer at Goldsmiths, University of London. She has worked with companies including Microsoft Research, Sun Microsystems Research Labs, Imagine Research, and Smule. She holds a PhD in Computer Science from Princeton University.
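Wekinator communicates with other programs over Open Sound Control (OSC). As a hedged sketch of how one might stream input features into it, the snippet below uses the third-party python-osc package and assumes Wekinator’s documented defaults (listening on port 6448 at the /wek/inputs address); the feature itself is a dummy value.

```python
import math
import time
from pythonosc.udp_client import SimpleUDPClient

# Wekinator listens for input features on port 6448 by default.
client = SimpleUDPClient("127.0.0.1", 6448)

# Stream one dummy feature in [0, 1] at roughly 10 Hz.
for step in range(100):
    value = (math.sin(step / 10.0) + 1.0) / 2.0
    client.send_message("/wek/inputs", [value])
    time.sleep(0.1)
```

In a real setup the dummy value would be replaced by sensor, audio, or controller data, and Wekinator’s trained model would send its outputs back over OSC.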

Dr. Emilia Gómez is Lead Scientist of the HUMAINT team that studies the impact of Artificial Intelligence on human behaviour at the Joint Research Centre, European Commission. She is also a Guest Professor at the Department of Information and Communication Technologies, Universitat Pompeu Fabra in Barcelona, where she leads the MIR (Music Information Research) lab of the Music Technology Group and coordinates the TROMPA (Towards Richer Online Music Public-domain Archives) EU project.

A Telecommunication Engineer (Universidad de Sevilla, Spain) with an MSc in Acoustics, Signal Processing and Computing applied to Music (ATIAM-IRCAM, Paris) and a PhD in Computer Science from Universitat Pompeu Fabra, she works on the design of data-driven algorithms for music content description (e.g. melody, tonality, genre, emotion), combining methodologies from signal processing, machine learning, music theory and cognition. She has contributed to the ISMIR community as an author, reviewer, PC member, and board and WiMIR member, and she was the first woman president of ISMIR.

Dr. Blair Kaneshiro is a Research and Development Associate with the Educational Neuroscience Initiative in the Graduate School of Education at Stanford University, as well as an Adjunct Professor at Stanford’s Center for Computer Research in Music and Acoustics (CCRMA). She earned a BA in Music; MA in Music, Science, and Technology; MS in Electrical Engineering; and PhD in Computer-Based Music Theory and Acoustics, all from Stanford. Her MIR research focuses on human aspects of musical engagement, approached primarily through neuroscience and user research. Dr. Kaneshiro is a member of the ISMIR Board and has organized multiple community initiatives with WiMIR, including as co-founder of the WiMIR Mentoring Program and WiMIR Workshop. 

Dr. Gissel Velarde, PhD in computer science and engineering, is an award-winning researcher, consultant and lecturer specialized in Artificial Intelligence. Her new book, Artificial Era: Predictions for ultrahumans, robots and other intelligent entities, presents a groundbreaking view of technology trends and their impact on our society.

Additionally, she has published several scientific articles in international journals and conferences, and her research has been featured in the media by Jyllands-Posten, La Razón, LadoBe and Eju. She earned her doctoral degree from Aalborg University in Denmark in 2017, an institution recognized as the best university in Europe and fourth in the world in engineering according to the US News World Ranking and the MIT 2018 ranking. She obtained her master’s degree in electronic systems and engineering management from the University of Applied Sciences of South Westphalia in Soest, Germany, thanks to a DAAD scholarship, and she holds a licenciatura degree in systems engineering from the Universidad Católica Boliviana, recognized as the third best university in Bolivia according to the Webometrics Ranking 2020.

Velarde has more than 20 years of experience in engineering and computer science. She was a research member of the European Commission project Learning to Create, was a lecturer at Aalborg University, and currently teaches at the Universidad Privada Boliviana. She has worked for Miebach GmbH, Hansa Ltda, SONY Computer Science Laboratories, Moodagent, and PricewaterhouseCoopers, among others. She has developed machine learning and deep learning algorithms for classification, structural analysis, pattern discovery, and recommendation systems. In 2019 and 2020 she was internationally selected as one of 120 technologists by the Top Women Tech summit in Brussels.

Dr. Anja Volk (MA, MSc, PhD), Associate Professor in Information and Computing Sciences (Utrecht University), has a dual background in mathematics and musicology, which she applies in cross-disciplinary approaches to music. She has an international reputation in the areas of music information retrieval (MIR), computational musicology, and mathematical music theory. Her work has helped bridge the gap between scientific and humanistic approaches while working in interdisciplinary research teams in Germany, the USA and the Netherlands. Her research aims at enhancing our understanding of music as a fundamental human trait while applying these insights to developing music technologies that offer new ways of interacting with music. Anja has given numerous invited talks worldwide and held editorships in leading journals, including the Journal of New Music Research and Musicae Scientiae. She has co-founded several international initiatives, most notably the International Society for Mathematics and Computation in Music (SMCM), the Transactions of the International Society for Music Information Retrieval (TISMIR, the society’s flagship journal), and the Women in MIR (WiMIR) mentoring program. Anja’s commitment to diversity and inclusion was recognized with the Westerdijk Award in 2018 from Utrecht University, and the Diversity and Inclusion Award from Utrecht University in 2020. She is also committed to connecting different research communities and providing interdisciplinary education for the next generation through the organization of international workshops, such as the Lorentz Center workshops in Leiden on music similarity (2015), computational ethnomusicology (2017) and music, computing, and health (2019).

WiMIR Grants

Thanks to the generous contributions of WiMIR sponsors, a number of women received financial support to cover conference registration, paper publication, and – for the first time in 2020 – childcare expenses. In all, WiMIR covered registration costs for 42 attendees, publication fees for 3 papers, and childcare expenses for 4 attendees.

Thank you WiMIR Sponsors!

Patron

Contributor

Supporter