New-to-ISMIR paper mentoring program

We are excited to announce that the New-to-ISMIR paper mentoring program will also be active at this year’s conference as part of the Diversity & Inclusion (D&I) initiatives!

The New-to-ISMIR paper mentoring program is designed for members new to the ISMIR conference (early-stage researchers in MIR, or researchers from allied fields who are considering submitting their work to an ISMIR conference) to share their advanced-stage, work-in-progress ISMIR paper drafts with senior members of the ISMIR community (mentors) and obtain focused reviews and constructive feedback. The program supplements the general submission guidelines. The 2023 edition runs in close alignment with the ISMIR 2023 paper submission deadlines.

The topics of the mentored papers are expected to be in alignment with those of ISMIR conferences. We strongly encourage authors to revise the mentored papers based on the feedback and submit them to the upcoming ISMIR 2023 conference. All papers submitted to ISMIR 2023, whether mentored or not, will go through the regular scientific review process without exception.

How do I apply?

If you are interested and eligible, please apply to the program by Mar 2, 2023, using the application form (please fill in one form for each paper).


Additional details about the program, topics, eligibility, expectations, timeline, and FAQ are available on the program page, or you can contact the organizers if you have any further questions.


Sharing science: the MIP-frontiers video communication project

Originally published on July 28, 2021 by Giorgia Cantisani.

Sharing science

Sharing your research with the rest of the world can be very challenging. Sometimes you need to reach a broader audience than the colleagues in your particular research field. Colleagues in other communities or disciplines are already less likely to read about your work, and when it comes to sharing your research with the general public, things become even more difficult.

There are several reasons why we should all aim to disseminate our research beyond our universities and scientific communities. For instance, it may be essential to explain your research to a general audience because your work is supported by public funding. In that case, it is a social duty to inform citizens about your findings and make your research comprehensible. It’s a virtuous circle that produces culture and participation and, in return, can pay for new investments in research.

Another reason is to attract the next generation towards science and your specific research field. This aspect is often underrated because it doesn’t have an immediate economic or social return, but it is critical in the long term. Undergraduate students can orient their education choices accordingly, become our future colleagues, and enlarge our research community. It’s vital, then, to let them know that your research exists and might be interesting to them. This would also increase diversity in the community and reach all those students for whom computer science is not among the options because of societal, demographic, or socioeconomic factors.

In this context, it is still tough for scientists to engage the uninitiated with very specific topics that seem to have almost no connection with their everyday lives. However, many different techniques, tools, and languages have been studied and gradually refined over time. With the increasing amount of information available online, it is becoming more and more important to be concise and to capture the audience’s attention from the very beginning. Video might be one of the ways to go.

Number of views per day (log) vs. video length (min). Plot courtesy of “Communicating Science With YouTube Videos: How Nine Factors Relate to and Affect Video Views” by Velho et al.

Videos about science

Videos about science have become more and more popular over the last decade, as they are a low-barrier medium for communicating ideas efficiently and effectively. Short videos of 3 to 5 minutes are ideal: long enough to explain a concept, and short enough for viewers to decide whether they are interested. We all learned about the advantages and disadvantages of this medium during the last year of the pandemic. The format of conferences has changed, and video abstracts are now standard. However, video abstracts are intended for peers, not for a broader audience. When disseminating science, complex concepts should be made accessible to the largest possible audience. In such cases, motion graphics and animated storytelling can be a solution. Through the process of abstraction in an animated representation, we can effectively simplify the concept we want to transmit. The style, colour palette, transitions, and aesthetic and functional choices all contribute to conveying the main message.

Examples of scientific dissemination projects. Image courtesy of Scienseed.

Now, I can’t say this process of abstraction is easy. It takes time, many iterations over the script, and many drafts before coming up with something good. You have to learn to work with visual designers who know nothing about your research. We experienced this when working on the MIP-frontiers video communication project, which is meant to attract young researchers to our field. It’s very hard to simplify and abstract the things you work on every day; it feels like sacrificing many details that are essential to you for the sake of simplicity. Because of that, you always have to keep in mind who your target audience is. In the specific case of this video, there was an additional problem: we needed to cover as many areas of music information processing (MIP) as possible, which was quite hard. The trick we found was to trace back the history of a song that an imaginary inhabitant of the future is listening to. We managed to derive a circular story following the song from composition to recording and from distribution to the user experience. The music is therefore the backbone of the video, and its choice was crucial.


When preparing a motion graphic, you need to provide the visual designers with a script (a description of the scenes), the voiceover (the text an actor reads, which describes the scenes), and the background music. With those three elements, the visual designers build an animation, on which you can then give feedback and adapt the voiceover and the music again. This process is repeated until convergence, when everyone is happy with the result.

Draft of the initial script of the MIP-frontiers animation.

In our case, an additional difficulty was that the music wasn’t just “background” music. It was, on the contrary, the absolute protagonist, the element that contributes most to conveying the main message. The music evolves throughout the video and changes according to the MIR application we wanted to illustrate. All of this requires a non-negligible effort of synchronization and composition.

Regarding the voiceover, we quickly realized how few words can fit in a 3-minute-long video. More importantly, we learned how hard it can be to summarize the vast diversity of research in our community. Moreover, synchronization constraints impose a fixed number of words in which to express complex concepts. In the end, we reached a compromise, trying to represent a selection of MIP applications as extensively as possible.

Once the voiceover, the animation, and the music are done, creating the final video is still not trivial. In addition to the temporal synchronization of events, volume automation for the various instruments and the voice is necessary. This operation is always needed in video production, and the role of a sound engineer is essential for an optimal result. Especially in this work, where the music and its evolving parts are the protagonists, this professional figure played a particularly central role in gluing all the components together.

Special thanks

In general, it was a great experience! I learned a lot, and I spent some time doing something that is not strictly related to my research but that is a fundamental part of a scientist’s job. We really thank Mandela (music), Scienseed (animation), and Alberto Di Carlo (sound engineering) for their great work!

Mandela is an Italian instrumental jazz band from Vicenza. The sound of the band is characterized by a fusion of jazz idioms, rock, world music, psychedelia, and funk. Over the years, the band has performed at several festivals and venues and released 3 full-length albums. These recordings are all available on the major streaming services. Their latest release was presented at the Rimusicazioni festival (Bolzano, Italy) and consists of an original soundtrack for “Grass: A Nation’s Battle for Life”, one of the earliest documentaries ever produced (1925).

For this video, the track “Simple” from the album Mandela s.t. was used. The song was remixed and remastered by Alberto Di Carlo.

Scienseed is a multifunctional agency for the dissemination of scientific findings. Its founding goal is to promote public engagement in science through all the tools available in the IT era. It specializes in translating scientific data into accessible products and activities aimed at either the scientific community (peers) or the general public (society), and it supports academic laboratories, research institutes, universities, and private institutions in raising public awareness and increasing the impact of their contributions to science.

Giorgia Cantisani is a PhD student at Télécom Paris in France within the Audio Data Analysis and Signal Processing (ADASP) team and the European training network MIP-Frontiers. Her research interests range from music information retrieval (MIR) to neuroscience. In particular, she is interested in the analysis of brain responses to music and how these can be used to guide and inform music source separation.

Looking Back on WiMIR@ISMIR2020

As the ISMIR community organizes and prepares submissions for the ISMIR 2021 conference (to take place virtually November 8-12), let’s take a moment to reflect on the WiMIR events from last year’s conference! ISMIR 2020 was held October 11-15, 2020 as the first virtual ISMIR conference, with unprecedented challenges and opportunities. Slack and Zoom were used as the main platforms, which enabled the conference to designate channels for each presentation, poster and social space. With the support of WiMIR sponsors, substantial grants were given for underrepresented researchers, including women.

The ISMIR 2020 WiMIR events were organized by Dr. Claire Arthur (Georgia Institute of Technology) and Dr. Katherine Kinnaird (Smith College). A variety of WiMIR events took place during the conference, through which the ISMIR community showed support, shared ideas, and learned through thought-provoking sessions.

WiMIR Keynote

Dr. Johanna Devaney from Brooklyn College and the Graduate Center, CUNY, gave an insightful keynote on our current comprehension and analysis of musical performance. The keynote, titled Performance Matters: Beyond the current conception of musical performance in MIR, was presented on October 13th.

WiMIR keynote video    

WiMIR keynote slides

Abstract: This talk will reflect on what we can observe about musical performance in the audio signal and where MIR techniques have succeeded and failed in enhancing our understanding of musical performance. Since its foundation, ISMIR has showcased a range of approaches for studying musical performance. Some of these have been explicit approaches for studying expressive performance while others implicitly analyze performance with other aspects of the musical audio. Building on my own work developing tools for analyzing musical performance, I will consider not only the assumptions that underlie the questions we ask about performance but what we learn and what we miss in our current approaches to summarizing performance-related information from audio signals. I will also reflect on a number of related questions, including what do we gain by summarizing over large corpora versus close reading of a select number of recordings. What do we lose? What can we learn from generative techniques, such as those applied in style transfer? And finally, how can we integrate these disparate approaches in order to better understand the role of performance in our conception of musical style?

Johanna Devaney is an Assistant Professor at Brooklyn College and the CUNY Graduate Center. At Brooklyn College she teaches primarily in the Music Technology and Sonic Arts areas and at the Graduate Center she is appointed to the Music and the Data Analysis and Visualization programs. Previously, she was an Assistant Professor of Music Theory and Cognition at Ohio State University and a postdoctoral scholar at the Center for New Music and Audio Technologies (CNMAT) at the University of California at Berkeley. Johanna completed her PhD in music technology at the Schulich School of Music of McGill University. She also holds an MPhil degree in music theory from Columbia University and an MA in composition from York University in Toronto.

Johanna’s research focuses on interdisciplinary approaches to the study of musical performance. Primarily, she examines the ways in which recorded performances can be used to study performance practice and develops computational tools to facilitate this. Her work draws on the disciplines of music, computer science, and psychology, and has been funded by the Social Sciences and Humanities Research Council of Canada (SSHRC), the Google Faculty Research Awards program and the National Endowment for the Humanities (NEH) Digital Humanities program.  

Twitter: Johanna Devaney (@jcdevaney)

“Notable Women in MIR” Meetups

This year’s WiMIR programming also included a series of meet-up sessions, each of which was an informal Q&A-type drop-in event akin to an “office hour”. In these sessions, participants had the opportunity to talk with the following notable women in the field.

Dr. Amélie Anglade is a freelance Music Information Retrieval and Machine Learning / Artificial Intelligence consultant based in Berlin, Germany. She carried out a PhD at Queen Mary University of London (2014) on knowledge representation of musical harmony and modelling of genre, composer, and musical style using machine learning techniques and logic programming. After being employed as the first MIR Engineer at SoundCloud (2011-2013) and working for a couple of other music tech startups, she has been offering freelance MIR and ML/AI services since 2014 to startups, larger companies, and institutions in Berlin and remotely. Her projects range from building search and recommendation engines to supporting product development with data science solutions, including designing, implementing, training, and optimising MIR features and products. To her clients she provides advice, experimentation, prototyping, production code implementation, management, and teaching services. During her career she has worked for Sony CSL, Philips Research, Mercedes-Benz, the EU Commission, Senzari, and Data Science Retreat, among others.

Dr. Rachel Bittner is a Senior Research Scientist at Spotify in Paris. She received her Ph.D. in Music Technology in 2018 from the Music and Audio Research Lab at New York University under Dr. Juan P. Bello, with a research focus on deep learning and machine learning applied to fundamental frequency estimation. She has a Master’s degree in mathematics from New York University’s Courant Institute, as well as two Bachelor’s degrees, in Music Performance and in Mathematics, from the University of California, Irvine.

In 2014-15, she was a research fellow at Telecom ParisTech in France after being awarded the Chateaubriand Research Fellowship. From 2011-13, she was a member of the Human Factors division of NASA Ames Research Center, working with Dr. Durand Begault. Her research interests are at the intersection of audio signal processing and machine learning, applied to musical audio. She is an active contributor to the open-source community, including being the primary developer of the pysox and mirdata Python libraries.

Dr. Estefanía Cano is a senior scientist at AudioSourceRe in Ireland, where she researches topics related to music source separation. Her research interests also include music information retrieval (MIR), computational musicology, and music education. She is the CSO and co-founder of Songquito, a company that builds MIR technologies for music education. She previously worked at the Agency for Science, Technology and Research A*STAR in Singapore, and at the Fraunhofer Institute for Digital Media Technology IDMT in Germany.

Dr. Elaine Chew is a senior CNRS (Centre National de la Recherche Scientifique) researcher in the STMS (Sciences et Technologies de la Musique et du Son) Lab at IRCAM (Institut de Recherche et Coordination Acoustique/Musique) in Paris, and a Visiting Professor of Engineering in the Faculty of Natural & Mathematical Sciences at King’s College London. She is principal investigator of the European Research Council Advanced Grant project COSMOS and Proof of Concept project HEART.FM. Her work has been recognised by PECASE (Presidential Early Career Award in Science and Engineering) and NSF CAREER (Faculty Early Career Development Program) awards, and Fellowships at Harvard’s Radcliffe Institute for Advanced Study. She is an alum (Fellow) of the NAS Kavli and NAE Frontiers of Science/Engineering Symposia. Her research focuses on the mathematical and computational modelling of musical structures in music and electrocardiographic sequences. Applications include modelling of music performance, AI music generation, music-heart-brain interactions, and computational arrhythmia research. As a pianist, she integrates her research into concert-conversations that showcase scientific visualisations and lab-grown compositions.

Dr. Rebecca Fiebrink is a Reader at the Creative Computing Institute at University of the Arts London, where she designs new ways for humans to interact with computers in creative practice. Fiebrink is the developer of the Wekinator, open-source software for real-time interactive machine learning whose current version has been downloaded over 40,000 times. She is the creator of the world’s first MOOC about machine learning for creative practice, titled “Machine Learning for Artists and Musicians,” which launched in 2016 on the Kadenze platform. Much of her work is driven by a belief in the importance of inclusion, participation, and accessibility: she works frequently with human-centred and participatory design processes, and she is currently working on projects related to creating new accessible technologies with people with disabilities, designing inclusive machine learning curricula and tools, and applying participatory design methodologies in the digital humanities. Dr. Fiebrink was previously an Assistant Professor at Princeton University and a lecturer at Goldsmiths University of London. She has worked with companies including Microsoft Research, Sun Microsystems Research Labs, Imagine Research, and Smule. She holds a PhD in Computer Science from Princeton University.

Dr. Emilia Gómez is Lead Scientist of the HUMAINT team that studies the impact of Artificial Intelligence on human behaviour at the Joint Research Centre, European Commission. She is also a Guest Professor at the Department of Information and Communication Technologies, Universitat Pompeu Fabra in Barcelona, where she leads the MIR (Music Information Research) lab of the Music Technology Group and coordinates the TROMPA (Towards Richer Online Music Public-domain Archives) EU project.

A Telecommunication Engineer (Universidad de Sevilla, Spain) with an MSc in Acoustics, Signal Processing and Computing Applied to Music (ATIAM-IRCAM, Paris) and a PhD in Computer Science from Universitat Pompeu Fabra, she works on the design of data-driven algorithms for music content description (e.g. melody, tonality, genre, emotion), combining methodologies from signal processing, machine learning, music theory, and cognition. She has contributed to the ISMIR community as an author, reviewer, PC member, and board and WiMIR member, and she was the first woman president of ISMIR.

Dr. Blair Kaneshiro is a Research and Development Associate with the Educational Neuroscience Initiative in the Graduate School of Education at Stanford University, as well as an Adjunct Professor at Stanford’s Center for Computer Research in Music and Acoustics (CCRMA). She earned a BA in Music; MA in Music, Science, and Technology; MS in Electrical Engineering; and PhD in Computer-Based Music Theory and Acoustics, all from Stanford. Her MIR research focuses on human aspects of musical engagement, approached primarily through neuroscience and user research. Dr. Kaneshiro is a member of the ISMIR Board and has organized multiple community initiatives with WiMIR, including as co-founder of the WiMIR Mentoring Program and WiMIR Workshop. 

Dr. Gissel Velarde, PhD in computer science and engineering, is an award-winning researcher, consultant and lecturer specialized in Artificial Intelligence. Her new book: Artificial Era: Predictions for ultrahumans, robots and other intelligent entities, presents a groundbreaking view of technology trends and their impact on our society. 

Additionally, she has published several scientific articles in international journals and conferences, and her research has been featured in the media by Jyllands-Posten, La Razón, LadoBe, and Eju. She earned her doctoral degree from Aalborg University in Denmark in 2017, an institution recognized as the best university in Europe and fourth in the world in engineering according to the US News World Ranking and the MIT 2018 ranking. She obtained her master’s degree in electronic systems and engineering management from the University of Applied Sciences of South Westphalia (Soest, Germany) thanks to a DAAD scholarship, and she holds a licenciatura degree in systems engineering from the Universidad Católica Boliviana, recognized as the third-best university in Bolivia according to the Webometrics Ranking 2020.

Velarde has more than 20 years of experience in engineering and computer science. She was a research member of the European Commission project Learning to Create, was a lecturer at Aalborg University, and currently teaches at the Universidad Privada Boliviana. She has worked for Miebach GmbH, Hansa Ltda, SONY Computer Science Laboratories, Moodagent, and PricewaterhouseCoopers, among others. She has developed machine learning and deep learning algorithms for classification, structural analysis, pattern discovery, and recommendation systems. In 2019 and 2020 she was selected internationally as one of 120 technologists by the Top Women Tech summit in Brussels.

Dr. Anja Volk (MA, MSc, PhD), Associate Professor in Information and Computing Sciences (Utrecht University) has a dual background in mathematics and musicology which she applies to cross-disciplinary approaches to music. She has an international reputation in the areas of music information retrieval (MIR), computational musicology, and mathematical music theory. Her work has helped bridge the gap between scientific and humanistic approaches while working in interdisciplinary research teams in Germany, the USA and the Netherlands.  Her research aims at enhancing our understanding of music as a fundamental human trait while applying these insights for developing music technologies that offer new ways of interacting with music. Anja has given numerous invited talks worldwide and held editorships in leading journals, including the Journal of New Music Research and Musicae Scientiae. She has co-founded several international initiatives, most notably the International Society for Mathematics and Computation in Music (SMCM), the flagship journal of the International Society for Music Information Retrieval (TISMIR), and the Women in MIR (WIMIR) mentoring program. Anja’s commitment to diversity and inclusion was recognized with the Westerdijk Award in 2018 from Utrecht University, and the Diversity and Inclusion Award from Utrecht University in 2020. She is also committed to connecting different research communities and providing interdisciplinary education for the next generation through the organization of international workshops, such as the Lorentz Center in Leiden workshops on music similarity (2015), computational ethnomusicology (2017) and music, computing, and health (2019).

WiMIR Grants

Thanks to the generous contributions of WiMIR sponsors, a number of women received financial support to cover conference registration, paper publication, and – for the first time in 2020 – childcare expenses. In all, WiMIR covered registration costs for 42 attendees; covered publication fees for 3 papers; and provided financial support to cover child-care expenses for 4 attendees.

Thank you WiMIR Sponsors!




Inspiring Women in Science: an interview with Dr. Blair Kaneshiro

Why are women still so underrepresented in science? Female scientists represent only a third of researchers globally, and the picture is no better in information and communication technologies, where less than a fifth of graduates are women.
The Music Information Retrieval (MIR) community is no exception, with less than 20% female participation at ISMIR 2019, the conference of the International Society for Music Information Retrieval. Despite the organizers’ efforts this year to promote diversity in the choice of the keynote speaker and the session chairs, the gender gap is still evident in the author and attendee statistics.

Much still needs to be done to bring women into the community and to reduce the gender gap. For this purpose, Women in Music Information Retrieval (WiMIR), a group of people within ISMIR, has put together ideas and started a number of diversity and inclusion initiatives. The goal is to build a community around women in the field and create a network able to support young researchers through grants, workshops, and mentoring from senior scientists. Thanks to the WiMIR grants, female participation at ISMIR 2019 increased by 5%!

I am starting a series of interviews with female researchers in MIR to find out more about their experiences and to offer insight to young female researchers who want to start a career in MIR research. The first name on my list is Dr. Blair Kaneshiro, a researcher at the Center for Computer Research in Music and Acoustics (CCRMA) at Stanford University, a member of the ISMIR Board, and one of the WiMIR organizers.

Whereabouts did you study? 

I’m from the United States and completed all of my schooling at Stanford University. My undergraduate degree was in Music. I later returned for graduate school, completing an MA in Music, Science, and Technology; MS in Electrical Engineering; and finally a PhD in Computer-Based Music Theory and Acoustics.

When did you first know you wanted to pursue a career in science?

A career interest in science for me did not develop until graduate school. I had been working at an education company at Stanford called the Education Program for Gifted Youth (EPGY) after my undergraduate degree when Patrick Suppes – co-founder of EPGY and Emeritus Professor of Philosophy, among many other remarkable things – suggested I pursue graduate work. Dr. Suppes was actively running a neuroscience lab at that time, and offered to fund my Master’s through a research assistantship in his lab. Once there, I began to see how neuroscience and engineering could be employed to address fundamental questions about perception and music – questions I feel, three graduate degrees and over a decade later, I’ve still barely begun to answer! Sadly, Dr. Suppes passed away in 2014. I am forever grateful for his support and mentorship, and for encouraging me to pursue science in the first place. I try to pay forward what I have learned from him, as both a scientist and a mentor.

How did you first become interested in MIR?

For the first few years of my graduate study, I didn’t feel I had a ‘home’ research community, as my work fell somewhere between perception/cognition and machine learning/brain decoding. In 2011, a classmate suggested I attend the ISMIR conference. I was immediately drawn in by the field of MIR, not only by the research topics, which to me combined computation, perception, and application in exciting ways, but also by the community itself, which was welcoming and open to new ideas and approaches.

What are you currently working on?

These days I have two main research tracks. The first is electroencephalography (EEG) research, where I continue to use the decoding techniques I first encountered in the Suppes lab, and related approaches, to study proximity spaces of neural responses. I’m also working with analysis techniques that enable us to study neural processing of ‘natural’ stimuli (e.g., real-world music). My second area of research focuses on how social practices around music selection and consumption are supported (or not) by present-day technologies such as streaming platforms – more in the direction of user research. In all, I really enjoy working with a variety of collaborators, study designs, data modalities, and analysis techniques to gain a better understanding of how we humans engage with music.

Are there still gender imbalances in your research environment and in the MIR community? If yes, how can we overcome that?

Yes, definitely! In fact, I was relatively unfazed by the low number of women at the first ISMIR conference I attended, if only because it was what I was used to from being in engineering classes. But there is definitely an imbalance. In terms of overcoming this challenge, the MIR community stands out in its willingness to take action. Community members (women and men) have signed on to mentor women, organize initiatives, lead Workshop groups, and serve on conference committees; and sponsors contribute extra travel funds specifically for women to attend the ISMIR conference. While there is still a lot of progress to be made, the fact that the community as a whole is already on board makes a huge difference in moving forward.

Which changes, if any, are needed in the MIR community to be more attractive to women?

Building a more diverse research community will take time. It also requires support at multiple career stages, from recruiting women into the field to retaining those who are here. We are already starting to see positive outcomes from community initiatives. For instance, the WiMIR Mentoring Program, WiMIR Workshop, and WiMIR Travel Awards can serve as entry points for newcomers to the field, and we have seen cases of WiMIR Mentoring participants pivoting into MIR-related jobs or graduate programs, and of newcomers attending ISMIR for the first time through WiMIR Travel Awards and returning in future years as full-paper authors. But how exactly does one progress from attendee to author? And how do we keep women in the field for the long term – what are the challenges there? Will our progress in recent years translate to long-term change? I hope we can all continue to examine these challenges, understand underlying factors and biases, and take steps – on a community level and in our immediate working environments – to recruit and retain more women in MIR.

What advice would you give to young girls who are considering a career in science?

I recommend taking a broad look at what types of scientific fields are out there. Maybe you have a picture in your mind of what it looks like to ‘do science’. In fact, science spans a vast array of disciplines – even music! Also, it’s important to recognize that there is no one way to be a scientist, and no one way to look or act as a scientist. I highly recommend browsing the profiles at #UniqueScientists to see just how diverse the people, topics, and career paths in science are today.

Giorgia Cantisani graduated with a Master’s degree in Biomedical Engineering from the Polytechnic University of Turin and has been a PhD student at Télécom Paris in France since September 2018. Her research interests range from music information retrieval (MIR) to neuroscience. In particular, she is interested in the analysis of brain responses to music and how these can be used to guide and inform MIR tasks.