Looking Back on WiMIR@ISMIR2020

As the ISMIR community organizes and prepares submissions for the ISMIR 2021 conference (to take place virtually November 8-12), let’s take a moment to reflect on the WiMIR events from last year’s conference! ISMIR 2020 was held October 11-15, 2020 as the first virtual ISMIR conference, with unprecedented challenges and opportunities. Slack and Zoom served as the main platforms, allowing the conference to designate channels for each presentation, poster, and social space. With the support of WiMIR sponsors, substantial grants were awarded to underrepresented researchers, including women.

The ISMIR 2020 WiMIR events were organized by Dr. Claire Arthur (Georgia Institute of Technology) and Dr. Katherine Kinnaird (Smith College). A variety of WiMIR events took place during the conference, through which the ISMIR community showed support, shared ideas, and learned through thought-provoking sessions.

WiMIR Keynote

Dr. Johanna Devaney of Brooklyn College and the Graduate Center, CUNY, gave an insightful keynote on our current understanding and analysis of musical performance. The keynote, titled Performance Matters: Beyond the current conception of musical performance in MIR, was presented on October 13th.

WiMIR keynote video    

WiMIR keynote slides

Abstract: This talk will reflect on what we can observe about musical performance in the audio signal and where MIR techniques have succeeded and failed in enhancing our understanding of musical performance. Since its foundation, ISMIR has showcased a range of approaches for studying musical performance. Some of these have been explicit approaches for studying expressive performance while others implicitly analyze performance with other aspects of the musical audio. Building on my own work developing tools for analyzing musical performance, I will consider not only the assumptions that underlie the questions we ask about performance but what we learn and what we miss in our current approaches to summarizing performance-related information from audio signals. I will also reflect on a number of related questions, including what do we gain by summarizing over large corpora versus close reading of a select number of recordings. What do we lose? What can we learn from generative techniques, such as those applied in style transfer? And finally, how can we integrate these disparate approaches in order to better understand the role of performance in our conception of musical style?

Johanna Devaney is an Assistant Professor at Brooklyn College and the CUNY Graduate Center. At Brooklyn College she teaches primarily in the Music Technology and Sonic Arts areas and at the Graduate Center she is appointed to the Music and the Data Analysis and Visualization programs. Previously, she was an Assistant Professor of Music Theory and Cognition at Ohio State University and a postdoctoral scholar at the Center for New Music and Audio Technologies (CNMAT) at the University of California at Berkeley. Johanna completed her PhD in music technology at the Schulich School of Music of McGill University. She also holds an MPhil degree in music theory from Columbia University and an MA in composition from York University in Toronto.

Johanna’s research focuses on interdisciplinary approaches to the study of musical performance. Primarily, she examines the ways in which recorded performances can be used to study performance practice and develops computational tools to facilitate this. Her work draws on the disciplines of music, computer science, and psychology, and has been funded by the Social Sciences and Humanities Research Council of Canada (SSHRC), the Google Faculty Research Awards program and the National Endowment for the Humanities (NEH) Digital Humanities program.  

Twitter: Johanna Devaney (@jcdevaney)

“Notable Women in MIR” Meetups

This year’s WiMIR programming also included a series of meet-up sessions, each of which was an informal Q&A-type drop-in event akin to an “office hour”. In these sessions, participants had the opportunity to talk with the following notable women in the field.

Dr. Amélie Anglade is a freelance Music Information Retrieval and Machine Learning / Artificial Intelligence Consultant based in Berlin, Germany. She carried out a PhD at Queen Mary University of London (2014) on knowledge representation of musical harmony and modelling of genre, composer, and musical style using machine learning techniques and logic programming. After being employed as the first MIR Engineer at SoundCloud (2011-2013) and working for a couple of other music tech startups, she has been offering freelance MIR and ML/AI services since 2014 to startups, larger companies, and institutions in Berlin and remotely. Her projects range from building search and recommendation engines to supporting product development with Data Science solutions, including designing, implementing, training, and optimising MIR features and products. To her clients she provides advice, experimentation, prototyping, production code implementation, management, and teaching services. During her career she has worked for Sony CSL, Philips Research, Mercedes-Benz, the EU Commission, Senzari, and Data Science Retreat, among others.

Dr. Rachel Bittner is a Senior Research Scientist at Spotify in Paris. She received her Ph.D. in Music Technology in 2018 from the Music and Audio Research Lab at New York University under Dr. Juan P. Bello, with a research focus on deep learning and machine learning applied to fundamental frequency estimation. She has a Master’s degree in mathematics from New York University’s Courant Institute, as well as two Bachelor’s degrees in Music Performance and in Mathematics from the University of California, Irvine.

In 2014-15, she was a research fellow at Telecom ParisTech in France after being awarded the Chateaubriand Research Fellowship. From 2011-13, she was a member of the Human Factors division of NASA Ames Research Center, working with Dr. Durand Begault. Her research interests are at the intersection of audio signal processing and machine learning, applied to musical audio. She is an active contributor to the open-source community, including being the primary developer of the pysox and mirdata Python libraries.

Dr. Estefanía Cano is a senior scientist at AudioSourceRe in Ireland, where she researches topics related to music source separation. Her research interests also include music information retrieval (MIR), computational musicology, and music education. She is the CSO and co-founder of Songquito, a company that builds MIR technologies for music education. She previously worked at the Agency for Science, Technology and Research A*STAR in Singapore, and at the Fraunhofer Institute for Digital Media Technology IDMT in Germany.

Dr. Elaine Chew is a senior CNRS (Centre National de la Recherche Scientifique) researcher in the STMS (Sciences et Technologies de la Musique et du Son) Lab at IRCAM (Institut de Recherche et Coordination Acoustique/Musique) in Paris, and a Visiting Professor of Engineering in the Faculty of Natural & Mathematical Sciences at King’s College London. She is principal investigator of the European Research Council Advanced Grant project COSMOS and Proof of Concept project HEART.FM. Her work has been recognised by PECASE (Presidential Early Career Award in Science and Engineering) and NSF CAREER (Faculty Early Career Development Program) awards, and Fellowships at Harvard’s Radcliffe Institute for Advanced Study. She is an alum (Fellow) of the NAS Kavli and NAE Frontiers of Science/Engineering Symposia. Her research focuses on the mathematical and computational modelling of musical structures in music and electrocardiographic sequences. Applications include modelling of music performance, AI music generation, music-heart-brain interactions, and computational arrhythmia research. As a pianist, she integrates her research into concert-conversations that showcase scientific visualisations and lab-grown compositions.

Dr. Rebecca Fiebrink is a Reader at the Creative Computing Institute at University of the Arts London, where she designs new ways for humans to interact with computers in creative practice. Fiebrink is the developer of the Wekinator, open-source software for real-time interactive machine learning whose current version has been downloaded over 40,000 times. She is the creator of the world’s first MOOC about machine learning for creative practice, titled “Machine Learning for Artists and Musicians,” which launched in 2016 on the Kadenze platform. Much of her work is driven by a belief in the importance of inclusion, participation, and accessibility: she works frequently with human-centred and participatory design processes, and she is currently working on projects related to creating new accessible technologies with people with disabilities, designing inclusive machine learning curricula and tools, and applying participatory design methodologies in the digital humanities. Dr. Fiebrink was previously an Assistant Professor at Princeton University and a lecturer at Goldsmiths University of London. She has worked with companies including Microsoft Research, Sun Microsystems Research Labs, Imagine Research, and Smule. She holds a PhD in Computer Science from Princeton University.

Dr. Emilia Gómez is Lead Scientist of the HUMAINT team that studies the impact of Artificial Intelligence on human behaviour at the Joint Research Centre, European Commission. She is also a Guest Professor at the Department of Information and Communication Technologies, Universitat Pompeu Fabra in Barcelona, where she leads the MIR (Music Information Research) lab of the Music Technology Group and coordinates the TROMPA (Towards Richer Online Music Public-domain Archives) EU project.

A Telecommunication Engineer (Universidad de Sevilla, Spain) with an MSc in Acoustics, Signal Processing and Computing Applied to Music (ATIAM-IRCAM, Paris) and a PhD in Computer Science from Universitat Pompeu Fabra, she works on the design of data-driven algorithms for music content description (e.g. melody, tonality, genre, emotion) by combining methodologies from signal processing, machine learning, music theory, and cognition. She has contributed to the ISMIR community as an author, reviewer, PC member, and board and WiMIR member, and she was the first woman president of ISMIR.

Dr. Blair Kaneshiro is a Research and Development Associate with the Educational Neuroscience Initiative in the Graduate School of Education at Stanford University, as well as an Adjunct Professor at Stanford’s Center for Computer Research in Music and Acoustics (CCRMA). She earned a BA in Music; MA in Music, Science, and Technology; MS in Electrical Engineering; and PhD in Computer-Based Music Theory and Acoustics, all from Stanford. Her MIR research focuses on human aspects of musical engagement, approached primarily through neuroscience and user research. Dr. Kaneshiro is a member of the ISMIR Board and has organized multiple community initiatives with WiMIR, including as co-founder of the WiMIR Mentoring Program and WiMIR Workshop. 

Dr. Gissel Velarde, who holds a PhD in computer science and engineering, is an award-winning researcher, consultant, and lecturer specialized in Artificial Intelligence. Her new book, Artificial Era: Predictions for ultrahumans, robots and other intelligent entities, presents a groundbreaking view of technology trends and their impact on our society.

Additionally, she has published several scientific articles in international journals and conferences, and her research has been featured in the media by Jyllands-Posten, La Razón, LadoBe, and Eju. She earned her doctoral degree in 2017 from Aalborg University in Denmark, an institution recognized as the best university in Europe and fourth in the world in engineering according to the US News World Ranking and the MIT 2018 ranking. She obtained her master’s degree in electronic systems and engineering management from the University of Applied Sciences of South Westphalia, Soest, in Germany, thanks to a DAAD scholarship, and she holds a licenciatura degree in systems engineering from the Universidad Católica Boliviana, recognized as the third best university in Bolivia according to the Webometrics Ranking 2020.

Velarde has more than 20 years of experience in engineering and computer science. She was a research member in the European Commission’s project Learning to Create, was a lecturer at Aalborg University, and currently teaches at the Universidad Privada Boliviana. She has worked for Miebach GmbH, Hansa Ltda, SONY Computer Science Laboratories, Moodagent, and PricewaterhouseCoopers, among others. She has developed machine learning and deep learning algorithms for classification, structural analysis, pattern discovery, and recommendation systems. In 2019 and 2020 she was internationally selected as one of 120 technologists by the Top Women Tech summit in Brussels.

Dr. Anja Volk (MA, MSc, PhD), Associate Professor in Information and Computing Sciences (Utrecht University), has a dual background in mathematics and musicology which she applies in cross-disciplinary approaches to music. She has an international reputation in the areas of music information retrieval (MIR), computational musicology, and mathematical music theory. Her work has helped bridge the gap between scientific and humanistic approaches while working in interdisciplinary research teams in Germany, the USA, and the Netherlands. Her research aims at enhancing our understanding of music as a fundamental human trait while applying these insights to develop music technologies that offer new ways of interacting with music. Anja has given numerous invited talks worldwide and has held editorships at leading journals, including the Journal of New Music Research and Musicae Scientiae. She has co-founded several international initiatives, most notably the International Society for Mathematics and Computation in Music (SMCM), the flagship journal of the International Society for Music Information Retrieval (TISMIR), and the Women in MIR (WiMIR) mentoring program. Anja’s commitment to diversity and inclusion was recognized with the Westerdijk Award in 2018 and the Diversity and Inclusion Award in 2020, both from Utrecht University. She is also committed to connecting different research communities and providing interdisciplinary education for the next generation through the organization of international workshops, such as the Lorentz Center workshops in Leiden on music similarity (2015), computational ethnomusicology (2017), and music, computing, and health (2019).

WiMIR Grants

Thanks to the generous contributions of WiMIR sponsors, a number of women received financial support to cover conference registration, paper publication, and – for the first time in 2020 – childcare expenses. In all, WiMIR covered registration costs for 42 attendees, publication fees for 3 papers, and childcare expenses for 4 attendees.

Thank you WiMIR Sponsors!

Patron

Contributor

Supporter
