WiMIR mentoring 2022 signups are now open!

Signups are now open for the seventh round of the Women in Music Information Retrieval (WiMIR) mentoring program, to run in 2022! We kindly invite previous and new mentors and mentees to sign up for the upcoming round.

The WiMIR mentoring program connects women students, postdocs, early-stage researchers, industry employees, and faculty to more senior women and men in MIR who are dedicated to increasing opportunities for women in the field. Mentors will share their experiences and offer guidance to support mentees in achieving and exceeding their goals and aspirations. The program also offers all mentors the option to pair up with a peer mentor for discussing relevant topics with a professional at a similar stage of their career. By connecting individuals of different backgrounds and expertise, this program strengthens networks within the MIR community, both in academia and industry.

Mentee eligibility

  • You identify as a woman, or as trans or non-binary, at any career stage in MIR.
  • This year we are also starting to expand the mentoring program to other underrepresented groups, pending the availability of mentors. If you are from a group that is underrepresented either in the ISMIR community at large (e.g., the Global South) or in the MIR community in your own country (e.g., an ethnic or racial group), we encourage community members of all genders who are interested in receiving mentorship to fill out the signup form below.
  • Sign up to GET a mentor: https://tinyurl.com/wimir-mentee-signup  

Mentor eligibility

Sign up by December 17, 2021. Mentoring to begin in February 2022.

Questions? Email: wimir-mentoring@ismir.net  

Sincerely,
WiMIR Mentoring Program Team 2022

  • Johanna Devaney, Brooklyn College/Graduate Center (CUNY), United States
  • Lamtharn “Hanoi” Hantrakul, TikTok/ByteDance, Thailand
  • Michael Mandel, Reality Labs and Brooklyn College/Graduate Center (CUNY), United States
  • Francesca Ronchini, Inria, France

WiMIR Workshop 2021: A Virtual Success!

We’re pleased to say, again, that our virtual, distributed-in-time-and-space WiMIR Workshop 2021 went very well!

We had just under 200 signups this year, and did a two-day intensive across timezones from Tokyo to San Francisco. We were delighted to have speakers from Shanghai, Singapore, Denmark, Paris, and more!

We did longer talks this year and used Discord as our socializing and networking space. We also resumed our discussion sessions from Delft, covering Work/Life Balance, Burnout, Applying to Post Docs & Grad School, Picking An Advisor, Starting an MIR Business, and Interviewing.

Thanks to everyone who joined us! Special thanks to our volunteers:  Rachel Bittner, Jan Van Balen, Elio Quinton, Becky Liang, Meinard Mueller, Elena Georgieva – and extra-special thanks to our project guides: Dorien Herremans, Kat Agres, Tian Cheng, Stefania Serafin, Oriol Nieto, Emma Frid, Nick Bryan, Lamtharn “Hanoi” Hantrakul, Jason Hockman, Jake Drysdale, Olumide Okubadejo, and Cory McKay.

We’ll see you next year!

The WiMIR Workshop Organizers,

  • Courtney Reed, Queen Mary, UK
  • Kitty Shi, Stanford, USA
  • Jordan Smith, TikTok, UK
  • Thor Kell, Spotify, USA
  • Blair Kaneshiro, Stanford, USA

Sharing science: the MIP-frontiers video communication project

Originally published at https://mip-frontiers.eu/ on July 28, 2021 by Giorgia Cantisani.

Sharing science

Sharing your research with the rest of the world can be very challenging. Sometimes you need to reach a broader audience than the colleagues in your particular research field, and even colleagues in other communities or disciplines are less likely to read about your work. When it comes to sharing your research with the general public, things become more difficult still.

There are several reasons why we should all aim to disseminate our research beyond our universities and scientific communities. For instance, if your research is publicly funded, explaining it to a general audience is essential: it is a social duty to inform citizens about your findings and make your research comprehensible. It is a virtuous circle that produces culture and participation and, in return, can pay for new investments in research.

Another reason is to attract the next generation to science and to your specific research field. This aspect is often underrated because it brings no immediate economic or social return, but it is critical in the long term. Undergraduate students can orient their education choices accordingly, become our future colleagues, and enlarge our research community. It is vital, then, to let them know that your research exists and might interest them. This would also increase diversity in the community by reaching all those students for whom computer science is not among the options because of societal, demographic, or socioeconomic factors.

In this context, it is still tough for scientists to engage the uninitiated with very specific topics that seem to have almost no connection to their everyday lives. However, many different techniques, tools, and languages have been studied and gradually refined over time. With the increasing amount of information available online, it is becoming more and more important to be concise and to capture the audience’s attention from the very beginning. Video might be one of the ways to go.

Number of views per day (log) x video length (min). Plot courtesy of “Communicating Science With YouTube Videos: How Nine Factors Relate to and Affect Video Views” by Velho et al. (https://www.frontiersin.org/articles/10.3389/fcomm.2020.567606/full).

Videos about science

Videos about science have become more and more popular over the last decade, as they are a low-barrier medium for communicating ideas efficiently and effectively. Short videos of 3 to 5 minutes are ideal: long enough to explain a concept, and short enough for viewers to decide whether they are interested. We have all learned about the advantages and disadvantages of this medium during the last year of the pandemic. The format of conferences has changed, and video abstracts are now standard. However, video abstracts are intended for peers, not for a broader audience. When disseminating science, complex concepts should be made accessible to the largest audience possible. Here, motion graphics and animated storytelling can be a solution: through abstraction in an animated representation, we can effectively simplify the concept we want to transmit. The style, colour palette, transitions, and aesthetic and functional choices all contribute to conveying the main message.

Examples of scientific dissemination projects. Image courtesy of Scienseed.

Now, I can’t say this process of abstraction is easy. It takes time, many iterations over the script, and many drafts before coming up with something good. You have to learn to work with visual designers who know nothing about your research. We experienced this when working on the MIP-frontiers video communication project, meant to attract young researchers to our field. It is very hard to simplify and abstract the things you work on every day; it feels like sacrificing many details that are essential to you for the sake of simplicity. Because of that, you always have to keep in mind who your target audience is. In the specific case of this video, there was an additional problem: we needed to cover as many areas of music information processing (MIP) as possible, which was quite hard. The trick we found was to trace the history of a song that an imaginary inhabitant of the future is listening to. We managed to derive a circular story following the song from composition to recording and from distribution to the user experience. The music is therefore the backbone of the video, and its choice was crucial.

Making-of

When preparing a motion graphic, you need to provide the visual designers with a script (a description of the scenes), the voiceover (the text an actor will read, describing each scene), and the background music. With those three elements, the visual designers build an animation, on which you can then give feedback and adapt the voiceover and the music again. This process is repeated until convergence, when everyone is happy with the result.

Draft of the initial script of the MIP-frontiers animation.

In our case, an additional difficulty was that the music wasn’t just some “background” music. It was, on the contrary, the absolute protagonist, the element that mainly conveys the main message. The music evolves throughout the video and changes according to the MIR application we wanted to illustrate. All of this requires a non-negligible effort of synchronization and composition.

Regarding the voiceover, we quickly realized how few words can fit in a three-minute video. More importantly, we learned how hard it can be to summarize the vast diversity of research in our community. Moreover, there are synchronization constraints that impose a fixed number of words for expressing complex concepts. In the end, we reached a compromise, trying to represent a selection of MIP applications as extensively as possible.

Once the voiceover, the animation and the music are done, creating the final video is still not trivial. In addition to the temporal synchronization of events, automation of the volume of the various instruments and the voice is necessary. This step is always needed in video production, and the role of a sound engineer is essential for an optimal result. Especially in this work, where the music and its evolving parts are the protagonists, this professional figure played a particularly central role in gluing all the components together.

Special thanks

In general, it was a great experience! I learned a lot, and I spent some time doing something that is not strictly related to my research but that is a fundamental part of a scientist’s job. We really thank Mandela (music), Scienseed (animation) and Alberto Di Carlo (sound engineer) for their great work!

Mandela is an Italian instrumental jazz band from Vicenza. The band’s sound is characterized by a fusion of jazz idioms, rock, world music, psychedelia, and funk. Over the years, the band has performed at several festivals and venues and released three full-length albums, all available on the major streaming services. Their latest release was presented at the Rimusicazioni festival (Bolzano, Italy) and consists of an original soundtrack for “Grass: A Nation’s Battle for Life” — one of the earliest documentaries ever produced (1925).

For this video, the track Simple from the album Mandela s.t. was used. The song was remixed and remastered by Alberto Di Carlo.

Scienseed is a multifunctional agency for the dissemination of scientific findings. Its founding goal is to promote public engagement in science through all the tools available in the IT era. It specializes in translating scientific data into accessible products and activities aimed at either the scientific community (peers) or the general public (society), and it supports academic laboratories, research institutes, universities and private institutions in raising public awareness and increasing the impact of their contributions to science.

Giorgia Cantisani is a PhD student at Télécom Paris in France within the Audio Data Analysis and Signal Processing (ADASP) team and the European training network MIP-Frontiers. Her research interests range from music information retrieval (MIR) to neuroscience. In particular, she is interested in the analysis of brain responses to music and how these can be used to guide and inform music source separation.

Diversity and Inclusion Initiatives at ISMIR 2021

Originally published at https://ismir2021.ismir.net/blog/diversity_inclusion/ on September 5, 2021 by Blair Kaneshiro, Jordan B. L. Smith, Jin Ha Lee, and Alexander Lerch.

The 22nd International Society for Music Information Retrieval Conference (ISMIR2021) is excited to announce a number of Diversity & Inclusion (D&I) initiatives for this year’s conference. These initiatives are aimed toward ensuring a positive and supportive conference environment while also supporting a diverse range of presenters and attendees across backgrounds, career stages, and MIR research areas. This year’s efforts, facilitated by the online format of ISMIR2021, expand upon the numerous Women in Music Information Retrieval (WiMIR) initiatives that have taken place at ISMIR conferences over the past decade, and represent a broadening of how the MIR community views and supports D&I in the field.

This year’s D&I efforts are led by ISMIR2021 D&I Chairs Blair Kaneshiro (Stanford University, US) and Jordan B. L. Smith (ByteDance/TikTok, UK) in close collaboration with the ISMIR2021 General Chairs Jin Ha Lee (University of Washington, US) and Alexander Lerch (Georgia Institute of Technology, US), but it is the efforts of all the ISMIR2021 Organizers that make these initiatives possible. In this blog post we summarize the various D&I initiatives planned for ISMIR2021.

Code of Conduct

Since 2018, a Code of Conduct has accompanied the ISMIR conference. It is prepared by the conference organizers in conjunction with the ISMIR Board and is intended to ensure that the conference environment — whether in person or virtual — is a safe and inclusive space for all participants. In 2020 the Code of Conduct was updated for the first ever virtual ISMIR. All ISMIR2021 participants agree, at time of registration, to adhere to this year’s Code of Conduct.

Registration fees and financial support

To maximize the accessibility of the ISMIR2021 conference for students, the student registration fee is only $15 USD at the early-bird rate (until September 30) and $25 USD thereafter; and the student tutorial fee is just $5. These low rates are subsidized by generous ISMIR2021 sponsor contributions and full (non-student) registrations.

In addition to reduced registration fees, ISMIR2021 offers registration waivers and childcare support to a broad range of attendees. The ISMIR conference has a long history of offering student travel grants to cover conference registration and lodging. Since 2016, the conference has offered WiMIR grants as well, which provide financial assistance to women of any career stage to attend the conference. The ISMIR2019 conference expanded financial support opportunities once more to include Community grants for former and prospective MIR community members, and the ISMIR2020 conference offered childcare grants as well as Black in MIR registration waivers for the first time.

This year, the low cost of attending the conference, combined with generous sponsor support, enables the ISMIR2021 conference to once again provide a wide range of grants. Registration grants are available to students, unaffiliated attendees (anyone who has no professional affiliation that will cover the registration fee), and to attendees who self-identify with a broad range of D&I categories including Black in MIR, attendees from low- or middle-income countries, “New to ISMIR” presenter, Queer in MIR, and WiMIR. In addition, any attendee is eligible to apply for a childcare grant.

Details about ISMIR2021 grant eligibility and the application process are available here: https://bit.ly/ismir2021grants

D&I blog posts

This year’s D&I initiatives also seek to address the “hidden curriculum” of navigating academia as well as STEM research. To this end, the ISMIR2021 organizers are authoring a number of blog posts on such topics as preparing a successful ISMIR submission and reviewing ISMIR papers, as well as reposting relevant content from the WiMIR blog. Upcoming blog posts will include an introduction to ISMIR2021 Newcomer Initiatives and community advice on navigating the conference. Visit the ISMIR2021 blog page (maintained by Qhansa Bayu (Social Media Chair; Telkom University, ID) and Ashvala Vinay (Website Chair; Georgia Institute of Technology, US)) to stay up to date!

Special call for papers on “Cultural Diversity in MIR”

This year, the conference organizers wanted to promote the cultural diversity of the ISMIR community and its research. To this end, the ISMIR2021 Call for Papers included a special call for papers on “Cultural Diversity in MIR”. This year’s Scientific Chairs — Zhiyao Duan (University of Rochester, US), Juhan Nam (KAIST, KR), Preeti Rao (IIT Bombay, IN), and Peter van Kranenburg (Meertens Institute, NL) — organized the track with a focus on non-Western music and cross-cultural studies. Submissions to this track underwent the same review process as papers in the main track, with specially selected meta-reviewers.

In all, 44 papers were submitted to this track, of which 11 were accepted and verified by the Scientific Chairs to match the call. Accepted papers in this track will be presented in the same format as other accepted papers, with recognition on the conference website and the institution of a special paper award for the themed track.

Special Late-Breaking/Demo (LBD) “New to ISMIR” track

The Late-Breaking/Demo (LBD) session, involving short-format papers which undergo light peer review, has long served as a venue for new or junior researchers to gain a foothold in the MIR community. The ISMIR2021 LBD chairs — Li Su (Academia Sinica, TW), Chih-Wei Wu (Netflix, US), and Siddharth Gururani (Electronic Arts, US) — present a new special track called “New to ISMIR”. In this track, first-time ISMIR attendees, students, WiMIR community members, and underrepresented minorities have the opportunity to receive extra mentoring on their LBD submissions. Presenters in this track are strongly encouraged to apply for a registration waiver as well. More information can be found in the full Call for LBDs.

Newcomer Initiatives

Navigating a new conference can be challenging in the best of times, but is especially difficult in the virtual format. The ISMIR2021 Newcomer Initiatives Chairs — Nick Gang (Apple, US) and Elona Shatri (Queen Mary University of London, UK) — are organizing a number of initiatives to help newcomers navigate the conference; meet other attendees; and establish social and professional connections to help them achieve their academic, research, and career aims during and beyond the conference. Announcements on these initiatives are coming soon!

WiMIR Sponsorship

Since 2016, industry sponsors have contributed specifically to WiMIR, typically by funding WiMIR travel grants and/or hosting WiMIR-themed receptions during the conference. These initiatives increase access to the ISMIR conference for women of all career stages, and also provide a designated setting for women and other attendees to network during the conference.

We express our sincere thanks to this year’s WiMIR sponsors, whose contributions support the various D&I initiatives described here, and to the ISMIR2021 Sponsorship Chairs: Sertan Şentürk (Kobalt Music, UK), Alia Morsi (Universitat Pompeu Fabra, ES), and Lamtharn “Hanoi” Hantrakul (ByteDance/TikTok, CN).

If your company would like to participate as a WiMIR sponsor, more information can be found on the ISMIR2021 Call for Sponsors page.

WiMIR Plenary Session

WiMIR began meeting informally at ISMIR conferences starting at ISMIR2011. After a few years of ad-hoc meetings organized by interested attendees, a WiMIR plenary session was incorporated into the main ISMIR conference program starting in 2015. Since then, WiMIR sessions have included presentations on the WiMIR Mentoring Program as well as community and invited keynote presentations. This year’s WiMIR plenary session will include an invited keynote speaker to be announced soon!

WiMIR Meetup Sessions

While the virtual conference format has posed challenges when it comes to offering the range and serendipity of interactions experienced in person, it also offers opportunities to try out new formats to bring attendees together — not only for formal research presentations, but also for informal discussions. Last year, the ISMIR2020 conference for the first time included WiMIR-themed meetup sessions throughout the conference. These sessions, centered around the theme of “Notable Women in MIR”, gave conference attendees the chance to meet informally with women in the field and discuss topics ranging from career paths to technical details of their research.

This year, the ISMIR2021 conference is expanding upon the format of these sessions to include a range of underrepresented communities in MIR. More information on these special meetup sessions will be announced in coming months.

WiMIR Workshop

The WiMIR Workshop has taken place annually since 2018 as a satellite event of the ISMIR conference. The goal of the WiMIR Workshop is to provide a venue for mentorship, networking, and collaboration among women and allies in the ISMIR community. The Workshop took place as an in-person one-day event in 2018 and again in 2019, and migrated to a virtual format in 2020 due to the COVID-19 pandemic.

In 2021, the WiMIR 4th Annual Workshop will take place virtually on Friday, October 29 and Saturday, October 30. It is a free event open to all members of the MIR community, and will include events for all time zones. The speaker lineup and schedule will be announced soon!

Other ISMIR Community D&I Initiatives

The D&I initiatives of ISMIR2021 are part of a larger ecosystem of ISMIR community initiatives. For more information on these initiatives, and to stay up to date on what is happening with the ISMIR2021 conference and the community at large, visit the following resources:

See you at ISMIR2021 in November!

Blair Kaneshiro (Stanford University, USA) and Jordan B. L. Smith (ByteDance, UK) are the Diversity & Inclusion Chairs of the ISMIR2021 Conference. Jin Ha Lee (University of Washington, USA) and Alexander Lerch (Georgia Institute of Technology, USA) are the General Chairs of the ISMIR2021 Conference. ISMIR2021 will take place as a virtual conference from November 4-8, 2021.

WiMIR Workshop 2021 Project Guides

This is a list of Project Guides and their areas of interest for the 2021 WiMIR Virtual Workshop, which will take place as an online-only satellite event of ISMIR2021.

The Workshop will take place on Friday, October 29 and Saturday, October 30 – please sign up by using this form: https://forms.gle/GHjqwaHWBciX9tuT7

We know that timezones for this are complicated, so we’ve made a Google Calendar with all the events on it – visit this link to add them to your calendar

This year’s Workshop is organized by Courtney Reed (Queen Mary University), Kitty Shi (Stanford University), Jordan B. L. Smith (ByteDance), Thor Kell (Spotify), and Blair Kaneshiro (Stanford University).

October 29

Dorien Herremans: Music Generation – from musical dice games to controllable AI models

This event will take place at 1500, GMT+8

In this fireside chat, Dorien will give a brief overview of the history of music generation systems, with a focus on the current challenges in the field, followed by an open discussion and Ask-Me-Anything (AMA) session. Prof. Herremans’ recent work has focused on creating controllable music generation systems using deep learning technologies. One challenge in particular – generating music with steerable emotion – has been central in her research. When it comes to affect and emotion, computer models still do not compare to humans. Using affective computing techniques and deep learning, Dorien’s team has built models that learn to predict perceived emotion from music. These models are then used to generate new fragments in a controllable manner, so that users can steer the desired arousal/valence level or tension in newly generated music. Other challenges tackled by Dorien’s team include ensuring repeated themes in generated music, automatic music transcription, and novel music representations, including nnAudio, a GPU-based PyTorch library for spectrogram extraction.
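For readers who want to try this, below is a minimal sketch of spectrogram extraction with nnAudio. It assumes a recent release in which the front-ends live in nnAudio.features (older versions expose the same layers under nnAudio.Spectrogram); the argument values are illustrative.

```python
# Minimal sketch of GPU-capable spectrogram extraction with nnAudio.
# Assumes a recent nnAudio release where front-ends live in
# nnAudio.features; older versions expose them via nnAudio.Spectrogram.
import torch
from nnAudio import features

# The front-end is an ordinary torch.nn.Module, so it can be moved to
# the GPU with .to("cuda") and placed in front of a trainable network.
mel = features.MelSpectrogram(sr=22050, n_fft=2048, n_mels=128,
                              hop_length=512)

audio = torch.randn(4, 3 * 22050)  # dummy batch of four 3-second clips
spec = mel(audio)                  # -> (batch, n_mels, time_frames)
print(spec.shape)
```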

Dorien Herremans is an Assistant Professor at Singapore University of Technology and Design, where she is also Director of Game Lab. Before joining SUTD, she was a Marie Sklodowska-Curie Postdoctoral Fellow at the Centre for Digital Music at Queen Mary University of London, where she worked on the project: “MorpheuS: Hybrid Machine Learning – Optimization techniques To Generate Structured Music Through Morphing And Fusion”. She received her Ph.D. in Applied Economics on the topic of Computer Generation and Classification of Music through Operations Research Methods, and graduated as a Business Engineer in Management Information Systems at the University of Antwerp in 2005. After that, she worked as a consultant and was an IT lecturer at the Les Roches University in Bluche, Switzerland. Dr. Herremans’ research interests include AI for novel applications in music and audio.

Kat Agres: Music, Brains, and Computers, Oh My!

This event will take place at 1600, GMT+8

In an informal, ask-me-anything chat, Kat will discuss her career path, from cognitive science through computational approaches to music cognition to her current research in music, computing, and health.

Kat Agres is an Assistant Professor at the Yong Siew Toh Conservatory of Music (YSTCM) at the National University of Singapore (NUS), and teaches classes at YSTCM, Yale-NUS, and the NUS YLL School of Medicine. She was previously a Research Scientist III and founder of the Music Cognition group at the Institute of High Performance Computing, A*STAR. Kat received her PhD in Psychology (with a graduate minor in Cognitive Science) from Cornell University in 2013, and holds a bachelor’s degree in Cognitive Psychology and Cello Performance from Carnegie Mellon University. Her postdoctoral research was conducted at Queen Mary University of London, in the areas of Music Cognition and Computational Creativity. She has received numerous grants to support her research, including Fellowships from the National Institute of Health (NIH) and the National Institute of Mental Health (NIMH) in the US, postdoctoral funding from the European Commission’s Future and Emerging Technologies (FET) program, and grants from various funding agencies in Singapore. Kat’s research explores a wide range of topics, including music technology for healthcare and well-being, music perception and cognition, computational modeling of learning and memory, statistical learning, automatic music generation and computational creativity. She has presented her work in over fifteen countries across four continents, and remains an active cellist in Singapore.

Tian Cheng: Beat Tracking with Sequence Models

This event will take place at 1830, GMT+9

Beat tracking is an important MIR task with a long history. It provides basic metrical information and is the foundation of synchronization-based applications. In this talk, I will summarize common choices for building a beat tracking model, based on research on related topics (beat, downbeat, and tempo). I will also compare simple sequence models for beat tracking. In the last part, I will give some examples to show how beat tracking is used in real-world applications.
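To get a feel for the task before the session, an off-the-shelf tracker is enough; the sketch below uses librosa’s built-in beat tracker (assuming librosa 0.8 or later for the bundled example clip), which is a simple baseline rather than one of the sequence models compared in the talk.

```python
# Off-the-shelf illustration of beat tracking with librosa's built-in
# tracker (onset strength + dynamic programming). A simple baseline,
# not one of the sequence models discussed in the talk.
import librosa

y, sr = librosa.load(librosa.ex('nutcracker'))  # bundled demo recording
tempo, beat_frames = librosa.beat.beat_track(y=y, sr=sr)
beat_times = librosa.frames_to_time(beat_frames, sr=sr)

print("Estimated tempo (BPM):", tempo)
print("First few beat positions (s):", beat_times[:4])
```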

Tian Cheng is a researcher in the Media Interaction Group at the National Institute of Advanced Industrial Science and Technology (AIST), Japan. From 2016 to 2018, she was a postdoctoral researcher in the same group. Her research interests include beat tracking and music structure analysis. Her work provides basic music content estimations to support applications for music editing and creation. She received her PhD from Queen Mary University of London in 2016; her dissertation focused on using music acoustics for piano transcription.

Stefania Serafin: Sonic Interactions for All

This event will take place at 1800, GMT+2

In this workshop I will introduce our recent work on using novel technologies and sonic interaction design to help hearing-impaired users and individuals with limited mobility enjoy music. The talk will present the technologies we develop in the Multisensory Experience Lab at Aalborg University in Copenhagen, such as VR, AR, novel interfaces and haptic devices, as well as how these technologies can be used to help populations in need.

Stefania Serafin is Professor of Sonic Interaction Design at Aalborg University in Copenhagen. She received a Ph.D. in Computer-Based Music Theory and Acoustics from Stanford University. She is the president of the Sound and Music Computing Association and principal investigator of the Nordic Sound and Music Computing Network. Her research interests are sonic interaction design, sound for VR and AR, and multisensory processing.

Oriol Nieto: Overview, Challenges, and Applications of Audio-based Music Structure Analysis

This event will take place at 1000, GMT-7

The task of audio-based music structure analysis aims at identifying the different parts of a given music signal and labeling them accordingly (e.g., verse, chorus). The automatic approach to this problem can help several applications such as intra- and inter-track navigation, section-aware automatic DJ-ing, section-based music recommendation, etc. This is a fundamental MIR task that has significantly advanced over the past two decades, yet still poses several interesting research challenges. In this talk I will give an overview of the task, discuss its open challenges, and explore the potential applications, some of which have been employed at Adobe Research to help our users have better creative experiences.
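Oriol also maintains MSAF (Music Structure Analysis Framework), an open-source package for this task. As a hedged sketch, assuming MSAF’s high-level process() entry point and a hypothetical input file, segmentation and labeling look roughly like this:

```python
# Sketch of audio-based structure analysis with MSAF. The file name is
# hypothetical; "foote" is a classic novelty-based boundary detector
# and "fmc2d" one of the available section-labeling methods.
import msaf

boundaries, labels = msaf.process("song.mp3",
                                  boundaries_id="foote",
                                  labels_id="fmc2d")

print("Estimated section boundaries (s):", boundaries)
print("Section labels:", labels)
```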

Oriol Nieto (he/him or they/them) is a Senior Audio Research Engineer at Adobe Research in San Francisco. He is a former Staff Scientist in the Radio and Music Informatics team at Pandora, and holds a PhD from the Music and Audio Research Laboratory of New York University. His research focuses on topics such as music information retrieval, large-scale recommendation systems, music generation, and machine learning on audio, with special emphasis on deep architectures. His PhD thesis is about teaching computers to better “understand” the structure of music. Oriol develops open-source Python packages, plays guitar, violin, and cajón, and sings (and screams) in their spare time.

Emma Frid: Music Technology for Health

This event will take place at 2000, GMT+2

There is a growing interest in sound and music technologies designed to promote health, well-being, and inclusion, with many multidisciplinary research teams aiming to bridge the fields of accessibility, music therapy, universal design, and music technology. This talk will explore some of these topics through examples from two projects within my postdoctoral work at IRCAM/KTH: Accessible Digital Musical Instruments – Multimodal Feedback and Artificial Intelligence for Improved Musical Frontiers for People with Disabilities, focused on the design and customization of Digital Musical Instruments (DMIs) to promote access to music-making; and COSMOS (Computational Shaping and Modeling of Musical Structures), focused on the use of data science, optimization, and citizen science to study musical structures as they are created in music performances and in unusual sources such as heart signals. 

Emma Frid is a postdoctoral researcher at the Sciences et technologies de la musique et du son (STMS) Laboratory, at the Institut de Recherche et Coordination Acoustique/Musique (IRCAM) in Paris, where she is working in the COSMOS project, under a Swedish Research Council International Postdoctoral Grant hosted by the Sound and Music Computing Group at KTH Royal Institute of Technology. She holds a PhD in Sound and Music Computing from the Division of Media Technology and Interaction Design at KTH, and a Master of Science in Engineering in Media Technology from the same university. Her PhD thesis focused on how Sonic Interaction Design can be used to promote inclusion and diversity in music-making. Emma’s research is centered on multimodal sound and music interfaces designed to promote health and inclusion, predominantly through work on Accessible Digital Musical Instruments (ADMIs).

Nick Bryan: Learning to Control Signal Processing Algorithms with Deep Learning

This event will take place at 1230, GMT-7

Expertly designed signal processing algorithms have been ubiquitous for decades and helped create the foundation of countless industries and areas of research (e.g. music information retrieval, audio fx, voice processing). In the last decade, however, expertly designed signal processing algorithms have been rapidly replaced with data-driven neural networks, posing the question — is signal processing still useful? And if so, how? In this talk, I will attempt to address these questions and provide an overview of how we can combine both disciplines, using neural networks to control (or optimize) existing signal processing algorithms from data and perform a variety of tasks such as guitar distortion modeling, automatic removal of breaths and pops from voice recordings, automatic music mastering, acoustic echo cancellation, and automatic voice production. I will then discuss open research questions and future research directions with a focus on music applications.
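As a toy illustration of the general idea (not any specific system from the talk), the sketch below has a small network predict the coefficient of a differentiable one-pole low-pass filter, so that gradients from an audio loss can flow back into the controller:

```python
# Toy sketch of a neural network controlling a DSP algorithm: a small
# controller predicts the coefficient of a differentiable one-pole
# low-pass filter, and the whole chain is trainable end-to-end.
import torch
import torch.nn as nn

class OnePoleLowPass(nn.Module):
    """Differentiable one-pole low-pass: y[n] = a*x[n] + (1 - a)*y[n-1]."""
    def forward(self, x, a):
        y, state = [], torch.zeros(x.shape[0])
        for n in range(x.shape[1]):
            state = a * x[:, n] + (1 - a) * state
            y.append(state)
        return torch.stack(y, dim=1)

controller = nn.Sequential(nn.Linear(16, 8), nn.ReLU(),
                           nn.Linear(8, 1), nn.Sigmoid())
dsp = OnePoleLowPass()

x = torch.randn(2, 128)                # dummy audio batch
a = controller(x[:, :16]).squeeze(-1)  # per-example filter coefficient
y = dsp(x, a)                          # filtered audio, differentiable
y.pow(2).mean().backward()             # gradients reach the controller
```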

Nicholas J. Bryan is a senior research scientist at Adobe Research, interested in (neural) audio and music signal processing, analysis, and synthesis. Nick received his PhD and MA from CCRMA, Stanford University, and an MS in Electrical Engineering, also from Stanford, as well as a Bachelor of Music and a BS in Electrical Engineering with summa cum laude honors from the University of Miami (FL). Before Adobe, Nick was a senior audio algorithm engineer at Apple, where he worked on voice processing algorithms for 4.5 years.

October 30

Lamtharn “Hanoi” Hantrakul: Transcultural Machine Learning in Music and Technology

This event will take place at 1600, GMT+8

Transcultural Technologies empower cultural pluralism at every phase of engineering and design. We often think of technology as a neutral tool, but technology is always created and optimized within the cultural scope of its inventors. This cultural mismatch is most apparent when tools are used across a range of contrasting traditions. Music and Art from different cultures, and the people that create and breathe these mediums, are an uncompromising sandbox to both interrogate these limitations and develop breakthroughs that empower a plurality of cultures. In this talk, we will be taking a deep dive into tangible audio technologies incubated in musical traditions from Southeast Asia, South Asia, South America and beyond.

Hanoi is a Bangkok-born Shanghai-based Cultural Technologist, Research Scientist and Composer. As an AI researcher, Hanoi focuses on audio ML that is inclusive of musical traditions from around the world. At Google AI, he co-authored the breakthrough Differentiable Digital Signal Processing (DDSP) library with the Magenta team and led its deployment across two Google projects: Tone Transfer and Sounds of India.  At TikTok, he continues to develop AI tools that empower music making across borders and skill levels. As a Cultural Technologist, Hanoi has won international acclaim for his transcultural fiddle “Fidular” (Core77, A’), which has been displayed in museums and exhibitions in the US, EU and Asia. He is fluent in French, Thai, English and is working on his Mandarin.

Jason Hockman + Jake Drysdale: Give the Drummer Some

This event will take place at 1000, GMT+1

In the late 1980s, popular electronic music (EM) emerged at the critical intersection between affordable computer technology and the consumer market, and has since grown to become one of the most popular genres in the world. Ubiquitous within EM creation, digital sampling has facilitated the incorporation of professional-quality recorded performances into productions; among the most frequently sampled recordings in EM are short percussion solos from funk and jazz performances—or breakbeats. While these samples add an essential energetic edge to productions, they are generally used without consent or recognition. Thus, there is an urgency for the ethical redistribution of cultural value to account for the influence of a previous generation of artists. This workshop will present an overview of breakbeats and their relation to modern music genres, as well as current approaches to breakbeat analysis, synthesis and transformative effects developed in the SoMA Group at Birmingham City University.

Jake Drysdale is currently a PhD student in the Sound and Music Analysis (SoMA) Group at Birmingham City University, where he specialises in neural audio synthesis and structural analysis of electronic music genres. Jake leverages his perspective as a professional electronic music producer and DJ in developing intelligent music production tools that break down the boundaries imposed by current technology.

Jason Hockman is an associate professor of audio engineering at Birmingham City University. He is a member of the Digital Media Technology Laboratory (DMTLab), in which he leads the Sound and Music (SoMA) Group for computational analysis of sound and music and digital audio processing. Jason conducts research in music informatics, machine listening and computational musicology, with a focus on rhythm and metre detection, music transcription, and content-based audio effects. As an electronic musician, he has had several critically-acclaimed releases on established international record labels, including his own Detuned Transmissions imprint.

Olumide Okubadejo: Ask Me Anything

This event will take place at 1800, GMT+2

Bring your industry questions for Spotify’s Olumide Okubadejo in this informal discussion, covering moving research into production, industry scale vs. academic scale, and moving between the two worlds.

Olumide Okubadejo was born and raised in Nigeria. The musical nature of his family and of the streets where he grew up instilled in him an early penchant for music. This appreciation led him to play the drum in his local community by the age of 9, and he went on to learn the piano and the guitar by the age of 15. He studied for his undergraduate degree at FUTMinna, Nigeria, earning a Bachelor of Engineering in Electrical and Computer Engineering. He proceeded to the University of Southampton, where he earned a master’s degree in Artificial Intelligence, and then to France, where he earned a PhD. Since then he has focused his research on machine learning for sound and music. These days, at Spotify, he researches assisted music creation using machine learning.

Cory McKay: What can MIR teach us about music? What can music teach us in MIR?

This event will take place at 1300, GMT-4

Part of MIR’s richness is that it brings together experts in diverse fields, from both academia and industry, and gets us to think about music together. However, there is perhaps an increasing tendency to segment MIR into discrete, narrowly defined problems, and to attempt to address them largely by grinding through huge, noisy datasets, often with the goal of eventually accomplishing something with commercial applications. While all of this is certainly valuable, and much good has come from it, there has been an accompanying movement away from introspective thought about music: from investigating fundamental questions about what music is intrinsically, and how and why people create, consume and are changed by it. The goals of this workshop are to discuss how we can use the diverse expertise of the MIR community to do better at addressing foundational music research, and how we can reinforce and expand collaborations with other research communities.

Cory McKay is a professor of music and humanities at Marianopolis College and a member of the Centre for Interdisciplinary Research in Music Media and Technology in Montréal, Canada. His multidisciplinary background in information science, jazz, physics and sound recording has helped him publish research in a diverse range of music-related fields, including multimodal work involving symbolic music representations, audio, text and mined cultural data. He received his Ph.D., M.A. and B.Sc. from McGill University, completed a second bachelor’s degree at the University of Guelph, and did a postdoc at the University of Waikato. He is the primary designer of the jMIR software framework for performing multimodal music information retrieval research, which includes the jSymbolic framework for extracting musical features from digital scores, and also serves as music director of the Marianopolis Laptop Computer Orchestra (MLOrk).

WiMIR Workshop 2021

We’re very pleased to say that we’ll be doing the fourth annual WiMIR Workshop around ISMIR 2021 this year, on Friday, October 29 and Saturday, October 30.

Like last year, the event will be 100% virtual, mostly on Zoom! We’ll again offer programming across time zones and regions to make it easier for community members around the world to attend.  The Workshop is, as ever, free and open to ALL members of the MIR community.  

We hope you can join us – we’ll add more information about signups and our invited presenters soon!

Looking Back on WiMIR@ISMIR2020

As the ISMIR community organizes and prepares submissions for the ISMIR 2021 conference (to take place virtually November 8-12), let’s take a moment to reflect on the WiMIR events from last year’s conference! ISMIR 2020 was held October 11-15, 2020 as the first virtual ISMIR conference, with unprecedented challenges and opportunities. Slack and Zoom were used as the main platforms, which enabled the conference to designate channels for each presentation, poster and social space. With the support of WiMIR sponsors, substantial grants were given for underrepresented researchers, including women.

The ISMIR 2020 WiMIR events were organized by Dr. Claire Arthur (Georgia Institute of Technology) and Dr. Katherine Kinnaird (Smith College). A variety of WiMIR events took place during the conference, through which the ISMIR community showed support, shared ideas, and learned through thought-provoking sessions.

WiMIR Keynote

Dr. Johanna Devaney, from Brooklyn College and the Graduate Center, CUNY, gave an insightful keynote on our current understanding and analysis of musical performance. The keynote, titled Performance Matters: Beyond the current conception of musical performance in MIR, was presented on October 13th.

WiMIR keynote video    

WiMIR keynote slides

Abstract: This talk will reflect on what we can observe about musical performance in the audio signal and where MIR techniques have succeeded and failed in enhancing our understanding of musical performance. Since its foundation, ISMIR has showcased a range of approaches for studying musical performance. Some of these have been explicit approaches for studying expressive performance while others implicitly analyze performance with other aspects of the musical audio. Building on my own work developing tools for analyzing musical performance, I will consider not only the assumptions that underlie the questions we ask about performance but what we learn and what we miss in our current approaches to summarizing performance-related information from audio signals. I will also reflect on a number of related questions, including what do we gain by summarizing over large corpora versus close reading of a select number of recordings. What do we lose? What can we learn from generative techniques, such as those applied in style transfer? And finally, how can we integrate these disparate approaches in order to better understand the role of performance in our conception of musical style?

Johanna Devaney is an Assistant Professor at Brooklyn College and the CUNY Graduate Center. At Brooklyn College she teaches primarily in the Music Technology and Sonic Arts areas and at the Graduate Center she is appointed to the Music and the Data Analysis and Visualization programs. Previously, she was an Assistant Professor of Music Theory and Cognition at Ohio State University and a postdoctoral scholar at the Center for New Music and Audio Technologies (CNMAT) at the University of California at Berkeley. Johanna completed her PhD in music technology at the Schulich School of Music of McGill University. She also holds an MPhil degree in music theory from Columbia University and an MA in composition from York University in Toronto.

Johanna’s research focuses on interdisciplinary approaches to the study of musical performance. Primarily, she examines the ways in which recorded performances can be used to study performance practice and develops computational tools to facilitate this. Her work draws on the disciplines of music, computer science, and psychology, and has been funded by the Social Sciences and Humanities Research Council of Canada (SSHRC), the Google Faculty Research Awards program and the National Endowment for the Humanities (NEH) Digital Humanities program.  

Twitter: Johanna Devaney (@jcdevaney)

“Notable Women in MIR” Meetups

This year’s WiMIR programming also included a series of meet-up sessions, each of which was an informal Q&A-type drop-in event akin to an “office hour”. In these sessions, participants had the opportunity to talk with the following notable women in the field.

Dr. Amélie Anglade is a freelance Music Information Retrieval and Machine Learning / Artificial Intelligence Consultant based in Berlin, Germany. She carried out a PhD on knowledge representation of musical harmony and modelling of genre, composer and musical style using machine learning techniques and logic programming at Queen Mary University of London (2014). After being employed as the first MIR Engineer at SoundCloud (2011-2013) and working for a couple of other music tech startups, she is now offering (since 2014) freelance MIR and ML/AI services to startups, larger companies and institutions in Berlin and remotely. Her projects range from building search and recommendation engines to supporting product development with Data Science solutions, including designing, implementing, training and optimising MIR features and products. To her clients she provides advice, experimentation, prototyping, production code implementation, management and teaching services. During her career she has worked for Sony CSL, Philips Research, Mercedes-Benz, the EU Commission, Senzari, and Data Science Retreat, among others.

Dr. Rachel Bittner is a Senior Research Scientist at Spotify in Paris. She received her Ph.D. in Music Technology in 2018 from the Music and Audio Research Lab at New York University under Dr. Juan P. Bello, with a research focus on deep learning and machine learning applied to fundamental frequency estimation. She has a Master’s degree in mathematics from New York University’s Courant Institute, as well as two Bachelor’s degrees in Music Performance and in Mathematics from the University of California, Irvine.

In 2014-15, she was a research fellow at Telecom ParisTech in France after being awarded the Chateaubriand Research Fellowship. From 2011-13, she was a member of the Human Factors division of NASA Ames Research Center, working with Dr. Durand Begault. Her research interests are at the intersection of audio signal processing and machine learning, applied to musical audio. She is an active contributor to the open-source community, including being the primary developer of the pysox and mirdata Python libraries.

Dr. Estefanía Cano is a senior scientist at AudioSourceRe in Ireland, where she researches topics related to music source separation. Her research interests also include music information retrieval (MIR), computational musicology, and music education. She is the CSO and co-founder of Songquito, a company that builds MIR technologies for music education. She previously worked at the Agency for Science, Technology and Research A*STAR in Singapore, and at the Fraunhofer Institute for Digital Media Technology IDMT in Germany.

Dr. Elaine Chew is a senior CNRS (Centre National de la Recherche Scientifique) researcher in the STMS (Sciences et Technologies de la Musique et du Son) Lab at IRCAM (Institut de Recherche et Coordination Acoustique/Musique) in Paris, and a Visiting Professor of Engineering in the Faculty of Natural & Mathematical Sciences at King’s College London. She is principal investigator of the European Research Council Advanced Grant project COSMOS and Proof of Concept project HEART.FM. Her work has been recognised by PECASE (Presidential Early Career Award in Science and Engineering) and NSF CAREER (Faculty Early Career Development Program) awards, and Fellowships at Harvard’s Radcliffe Institute for Advanced Study. She is an alum (Fellow) of the NAS Kavli and NAE Frontiers of Science/Engineering Symposia. Her research focuses on the mathematical and computational modelling of musical structures in music and electrocardiographic sequences. Applications include modelling of music performance, AI music generation, music-heart-brain interactions, and computational arrhythmia research. As a pianist, she integrates her research into concert-conversations that showcase scientific visualisations and lab-grown compositions.

Dr. Rebecca Fiebrink is a Reader at the Creative Computing Institute at University of the Arts London, where she designs new ways for humans to interact with computers in creative practice. Fiebrink is the developer of the Wekinator, open-source software for real-time interactive machine learning whose current version has been downloaded over 40,000 times. She is the creator of the world’s first MOOC about machine learning for creative practice, titled “Machine Learning for Artists and Musicians,” which launched in 2016 on the Kadenze platform. Much of her work is driven by a belief in the importance of inclusion, participation, and accessibility: she works frequently with human-centred and participatory design processes, and she is currently working on projects related to creating new accessible technologies with people with disabilities, designing inclusive machine learning curricula and tools, and applying participatory design methodologies in the digital humanities. Dr. Fiebrink was previously an Assistant Professor at Princeton University and a lecturer at Goldsmiths University of London. She has worked with companies including Microsoft Research, Sun Microsystems Research Labs, Imagine Research, and Smule. She holds a PhD in Computer Science from Princeton University.

Dr. Emilia Gómez is Lead Scientist of the HUMAINT team that studies the impact of Artificial Intelligence on human behaviour at the Joint Research Centre, European Commission. She is also a Guest Professor at the Department of Information and Communication Technologies, Universitat Pompeu Fabra in Barcelona, where she leads the MIR (Music Information Research) lab of the Music Technology Group and coordinates the TROMPA (Towards Richer Online Music Public-domain Archives) EU project.

A Telecommunication Engineer (Universidad de Sevilla, Spain), with an MSc in Acoustics, Signal Processing and Computing applied to Music (ATIAM-IRCAM, Paris) and a PhD in Computer Science from Universitat Pompeu Fabra, she works on the design of data-driven algorithms for music content description (e.g. melody, tonality, genre, emotion), combining methodologies from signal processing, machine learning, music theory and cognition. She has contributed to the ISMIR community as an author, reviewer, PC member, board member and WiMIR member, and she was the first woman president of ISMIR.

Dr. Blair Kaneshiro is a Research and Development Associate with the Educational Neuroscience Initiative in the Graduate School of Education at Stanford University, as well as an Adjunct Professor at Stanford’s Center for Computer Research in Music and Acoustics (CCRMA). She earned a BA in Music; MA in Music, Science, and Technology; MS in Electrical Engineering; and PhD in Computer-Based Music Theory and Acoustics, all from Stanford. Her MIR research focuses on human aspects of musical engagement, approached primarily through neuroscience and user research. Dr. Kaneshiro is a member of the ISMIR Board and has organized multiple community initiatives with WiMIR, including as co-founder of the WiMIR Mentoring Program and WiMIR Workshop. 

Dr. Gissel Velarde, PhD in computer science and engineering, is an award-winning researcher, consultant and lecturer specialized in Artificial Intelligence. Her new book, Artificial Era: Predictions for ultrahumans, robots and other intelligent entities, presents a groundbreaking view of technology trends and their impact on our society.

She has also published several scientific articles in international journals and conferences, and her research has been featured in the media by Jyllands-Posten, La Razón, LadoBe and Eju. She earned her doctoral degree from Aalborg University in Denmark in 2017, an institution recognized as the best university in Europe and fourth in the world in engineering according to the US News World Ranking and the MIT 2018 ranking. She obtained her master’s degree in electronic systems and engineering management from the University of Applied Sciences of South Westphalia, Soest, in Germany, thanks to a DAAD scholarship, and she holds a licenciatura degree in systems engineering from the Universidad Católica Boliviana, recognized as the third best university in Bolivia according to the Webometrics Ranking 2020.

Velarde has more than 20 years of experience in engineering and computer science. She was a research member of the European Commission’s project Learning to Create, was a lecturer at Aalborg University, and currently teaches at the Universidad Privada Boliviana. She has worked for Miebach GmbH, Hansa Ltda, SONY Computer Science Laboratories, Moodagent, and PricewaterhouseCoopers, among others. She has developed machine learning and deep learning algorithms for classification, structural analysis, pattern discovery, and recommendation systems. In 2019 and 2020 she was internationally selected as one of 120 technologists by the Top Women Tech summit in Brussels.

Dr. Anja Volk (MA, MSc, PhD), Associate Professor in Information and Computing Sciences (Utrecht University) has a dual background in mathematics and musicology which she applies to cross-disciplinary approaches to music. She has an international reputation in the areas of music information retrieval (MIR), computational musicology, and mathematical music theory. Her work has helped bridge the gap between scientific and humanistic approaches while working in interdisciplinary research teams in Germany, the USA and the Netherlands.  Her research aims at enhancing our understanding of music as a fundamental human trait while applying these insights for developing music technologies that offer new ways of interacting with music. Anja has given numerous invited talks worldwide and held editorships in leading journals, including the Journal of New Music Research and Musicae Scientiae. She has co-founded several international initiatives, most notably the International Society for Mathematics and Computation in Music (SMCM), the flagship journal of the International Society for Music Information Retrieval (TISMIR), and the Women in MIR (WIMIR) mentoring program. Anja’s commitment to diversity and inclusion was recognized with the Westerdijk Award in 2018 from Utrecht University, and the Diversity and Inclusion Award from Utrecht University in 2020. She is also committed to connecting different research communities and providing interdisciplinary education for the next generation through the organization of international workshops, such as the Lorentz Center in Leiden workshops on music similarity (2015), computational ethnomusicology (2017) and music, computing, and health (2019).

WiMIR Grants

Thanks to the generous contributions of WiMIR sponsors, a number of women received financial support to cover conference registration, paper publication, and – for the first time in 2020 – childcare expenses. In all, WiMIR covered registration costs for 42 attendees; covered publication fees for 3 papers; and provided financial support to cover child-care expenses for 4 attendees.

Thank you WiMIR Sponsors!


WiMIR Mentoring Call for Organizers 2021

Dear ISMIR community,

Women in Music Information Retrieval (WiMIR) is seeking new organizers for the WiMIR Mentoring Program.

Now in its 6th round, the WiMIR mentoring program connects women, trans, and non-binary students, postdocs, early-stage researchers, industry employees, and faculty to more senior women and allies in MIR who are dedicated to increasing opportunities for underrepresented community members. By connecting individuals of different backgrounds and expertise, this program strengthens networks within the MIR community, in both academia and industry. 

We are seeking motivated peers from the MIR community who can commit to serving as organizer for at least 2 years.

The responsibilities of this role are as follows (responsibilities are distributed among the organizers and average around 2 hours per month per organizer):

  • Prepare and distribute signup forms; match applicants into mentor-mentee pairs (Fall/Winter).
  • Announce the pairs and serve as a support resource during the mentoring round (Winter/Spring).
  • Create and distribute evaluation forms; review evaluations and integrate into the next mentoring round (Summer).
  • Prepare and/or present summary slides on the mentoring program for the ISMIR conference and other community presentations as needed (Fall/year round).
  • Supervise and delegate program tasks to a team of 2-3 student volunteers; recruit new volunteers as needed (year round).
  • Author blog posts announcing the start and end of each mentoring round (Winter/Summer).
  • Maintain and update organizational materials, and onboard new organizers (year round).

Sign up here: https://forms.gle/44z6kSNj8D1kgxzE9 

Signups will close March 31, 2021, and we will notify and begin onboarding new organizers around May 15, 2021.

Questions? Email wimir-mentoring@ismir.net.

Sincerely,

Johanna Devaney, Michael Mandel, and Eva Zangerle

WiMIR Mentoring Program Organizing Committee

Inspiring Women in Science: An Interview with Dr. Dorien Herremans

Believing in the importance of shedding light on the stories of successful women in the Music Information Retrieval (MIR) field, we are happy to share our interview with Dr. Dorien Herremans, the second Inspiring Women in Science interview. Dr. Herremans is an Assistant Professor at the Singapore University of Technology and Design and Director of the SUTD Game Lab. She holds a joint appointment at the Institute of High Performance Computing, A*STAR, and works as a certified instructor for the NVIDIA Deep Learning Institute. Her research interests include machine learning for automatic music generation, data mining for music classification (hit prediction), and novel applications at the intersection of machine learning/optimization and music.

Whereabouts did you study? 

I completed a five-year master’s degree in business engineering (in management information systems) at the University of Antwerp. I spent the next few years living in the Swiss Alps, where I was an IT lecturer at Les Roches, Bluche, and had my own company as a web developer. I returned to the University of Antwerp to obtain my PhD in Applied Economics. My dissertation focused on the use of methods from operations research and data mining in music, more specifically for music generation and hit prediction. I then received a Marie Skłodowska-Curie postdoctoral fellowship and joined the Centre for Digital Music (C4DM) at Queen Mary University of London to develop MorpheuS, a music composition system with long-term structure based on tonal tension. After my postdoc I joined the Singapore University of Technology and Design, where I am an assistant professor and teach data science and AI. My lab focuses on using AI for audio, music and affective computing (AMAAI). I’m also Director of the SUTD Game Lab and hold a joint appointment at the Institute for High Performance Computing, A*STAR.

What are you currently working on?

Some of our current projects include a music generation system based on emotion (aiMuVi); music transcription; a GPU-based library for spectrogram extraction (nnAudio); and multi-modal predictive models (from video/audio/text) for emotion and sarcasm detection.
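To give a flavour of what nnAudio does, here is a minimal sketch of spectrogram extraction as a neural network layer, assuming nnAudio’s standard PyTorch interface (depending on the version, the same layers may be exposed under nnAudio.Spectrogram instead of nnAudio.features). The parameter values are illustrative, not taken from the interview.

```python
# Minimal sketch: spectrogram extraction as a torch.nn.Module with nnAudio.
# Assumes nnAudio's PyTorch interface; parameters are illustrative defaults.
import torch
from nnAudio import features  # older releases: from nnAudio import Spectrogram

# nnAudio layers are torch.nn.Modules, so they can be moved to a GPU
# and placed in front of any downstream network.
stft = features.STFT(n_fft=2048, hop_length=512, sr=22050)

waveform = torch.randn(1, 22050)   # one second of dummy audio: (batch, samples)
spectrogram = stft(waveform)       # computed on the fly, differentiably
print(spectrogram.shape)
```

Because the transform lives inside the model rather than in a preprocessing step, it runs on the same device as the rest of the network and can even be made trainable.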

When did you first know you wanted to pursue a career in science?

It happened rather naturally. When I was about to graduate, I felt more of a pull towards staying in academia than going into industry, especially because at the time, with a degree in business engineering, industry would most probably have meant joining the big corporate world. As a 24-year-old, I instead wanted to keep exploring new things and stay in the dynamic environment of academia, especially since I could do so while living in a very quaint mountain village.

How did you first become interested in MIR?

During my last year as a student in business engineering, I was looking for a master’s thesis topic and came across ‘music and metaheuristics’. Having been passionate about music my whole life, I jumped at the opportunity to combine mathematics with music. This started an exciting journey in the field of MIR, a field I did not know existed at that time (2004).

What advice would you give to women who are interested in joining the field of MIR but don’t know where to begin?

We are fortunate to have a growing community of MIR researchers. Through groups such as WiMIR or ISMIR, you can join mentoring programs and get in touch with researchers who have more experience in the field. If you are a beginning researcher, you could also attend one of the conferences and start building a network. 

How is life in Singapore? Is there a difference for your research between working in Europe and Asia?

My first impression when arriving in Singapore a few years ago was that it felt very much like living in the future. It’s quite an amazing country: efficient, safe, warm (hot, really), and with amazingly futuristic architecture. For researchers, Singapore offers great funding opportunities and a dynamic environment. We have been growing the AMAAI lab steadily, and we are excited to connect with other music researchers in Singapore (there are more than you might think!).

You are working on AI and music, which is a fascinating field. What can it do now, and what can it not do yet?

After almost a decade of science fiction movies that play around with the concept of AI, people seem to equate AI with machines obtaining self-awareness. That’s not what we should think of as AI these days. I see (narrow) AI systems as models that learn from historical data, extract patterns, and use that to make predictions on new data. Narrow AI focuses on clearly defined problems, whereas general AI is more challenging and tries to cope with more generalised tasks. In MIR we typically develop narrow AI systems, and due to the recent developments in neural network technologies and the increasing GPU power, we are making large strides. The challenges that we are currently facing are in large part related to the lack of labeled data in music, and the cross-domain expertise required to leverage music knowledge in AI systems. 
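As a toy illustration of this learn-from-data-then-predict loop (our own sketch, not a system from the interview), here is what a narrow-AI MIR task such as hit prediction might look like with scikit-learn. The features and labels below are random stand-ins for real audio descriptors such as tempo or spectral statistics.

```python
# Toy illustration of "narrow AI": learn patterns from labelled historical
# data, then make predictions on new, unseen data. Features and labels here
# are random placeholders, not a real music dataset.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X_train = rng.normal(size=(100, 8))     # 100 tracks, 8 audio features each
y_train = rng.integers(0, 2, size=100)  # labels, e.g. 0 = "not a hit", 1 = "hit"

model = RandomForestClassifier().fit(X_train, y_train)  # extract patterns

X_new = rng.normal(size=(5, 8))         # unseen tracks
print(model.predict(X_new))             # predictions on new data
```

The system does exactly one clearly defined task and nothing else; that narrow scope, rather than any self-awareness, is what characterizes most MIR models today.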

How can human musicians and AI musicians work together rather than compete with each other?

This will be the natural first step. Unless properly educated about AI, many people will not trust AI systems to take on tasks on their own. I believe this is why many personal-assistant AI systems are given female names and voices (to exude trust?). For example, a composer might not want a system to generate music automatically, but they might appreciate a computer-aided composition system which, for instance, gives them an initial suggestion for how to harmonise their composed melody.

It seems music AI is still some distance from being useful in daily life, compared with face or voice recognition. What are your expectations for the field?

I actually think that AI in music is already being integrated into our daily lives through companies such as Spotify and Amazon Music, as well as through smaller startups such as AIVA. I expect the number of startups in the music tech area to increase strongly in the coming years.

You are also working on combining emotion and music. On what level do you think a computer can understand human emotion?

The word ‘understand’ is tricky here. We can train models to predict our perceived or experienced emotion based on observations we have made in the past. However, the biggest challenge seems to be: why do different people experience different emotions when listening to the same piece of music?

These days, more and more people in different fields work with AI. For students working on music and AI, can you give them some guidance on research strategy and career path?

As for any research topic, I would recommend that students tackle a problem they are fascinated by. Then you dive deep into the topic and explore how it can be advanced even further. To stick with a topic, it’s essential that you are passionate about it.

Can you give a few tips for people working at home in the days of COVID-19?

Stay inside, get as much exercise as you can, and try to think of this as the perfect time to do undisturbed research…

To keep up to date with Dr. Herremans, you can refer to https://dorienherremans.com/

Rui Guo graduated with a master’s degree in computer science from Shenzhen University, China. He is currently a third-year PhD student at the University of Sussex, UK, pursuing his PhD in music. His research focuses on AI music generation with better control and on emotional music generation.

Our lives during the COVID-19 pandemic

2020 was definitely a strange year. It felt shorter than ever. With vaccines becoming available in more countries, I am hoping for a better future for us, and that we can meet in person at next year’s ISMIR.

Since we were not able to see each other at last year’s ISMIR, I wanted to share our community members’ 2020 stories by conducting a survey with questions related to work, hobbies, and general lifestyle. A total of 62 people responded (thank you!), and I would like to share some results.

Work

The majority of people did like working from home!

But online meetings, not so much… We like talking to real people.

More than 50% of people said they were less productive last year. Don’t feel bad if you feel like you didn’t achieve much! If you managed to stay healthy last year, that is the biggest achievement.

I asked the members to share one good thing about working from home. In terms of work impacts, people reported less distraction; a more flexible schedule and efficient work – “I can do things with my own rhythm”, “No need to go to the office (which means saving time, and also less environmental impact)”; and more equality between remote and “local” employees. Respondents also reported positive outcomes for family life (e.g., “Being able to see my daughter throughout the day”). Finally, working from home was perceived as positive in its integration with home life – for instance in terms of food (e.g., “No need to eat on the road”, “Eating better food”) and a comfortable work environment (e.g., “Comfortable couch”, “Could listen to music without headphones”) – as well as lifestyle: “Fitting exercise, cooking, and music practice into my day”, “Can play with my dog”, “Not having to wear pants ;)”, “Wearing pajamas all day :D”.

Hobbies

It seems like many of us were able to put more time into our hobbies. Same goes for me! 

Also, quite a lot of people found new hobbies. 

Our members shared some of their hobbies. As music lovers, many of these were related to music: making music, playing instruments, jamming online, singing, DJing, hosting radio shows, and learning music. In fact, 71% of us listened to more music compared to the year before! I assume there is a correlation with the COVID-19 situation. There were also lots of physical activities, such as running, cycling, and yoga. Some of us enjoyed baking, cooking, and even brewing kombucha! Meditating, knitting, drawing, reading, and gardening were also mentioned several times.

Lifestyle

Staying healthy mentally and physically seems to be the greatest challenge these days. I personally had to consciously remind myself to move and stay calm. It definitely wasn’t easy. I think I finally managed to find some balance in December.

With increased time at home, I personally learned a lot about myself. It gave me time to reflect on my life once again, and I was able to remind myself of what is important.

So I asked, “Is there anything new that you discovered about yourself during the pandemic?” and found four common responses.

First, there were people who discovered that they enjoy remote working and staying home – “I don’t get bored!”, “I’m more OK than I thought with extended alone-time”, “I should stay at home more often than I used to”. Some even mentioned they were able to build good routines. In contrast, the second most common response was the realization of how important interactions with people are in their lives – “how much I depend on physical connections with friends”, “That I am more of a people person than I ever suspected”. Some said they would rather go to the office than stay home (me too!). The third common response was the need to move more and spend time outdoors – “I’m unfit for living as a hermit”, “don’t underestimate the power of breaks in the sunshine”. It seems like many of us learned that physical activity improves not only physical but also mental health. The last common response was the need for discipline. Having 100% control over our time sounds attractive, but in fact it requires a lot of effort to keep everything on track.

Here are some additional memorable responses: “I am living in a more privileged condition than I have realized”, “I can cook!”, “That we need to enjoy life”, and “That I’ve actually been living pretty much like this even before.”

COVID-19

Everyone misses pre-COVID days. Our community members told us that what they miss most are live concerts and festivals, hanging out with people in person, and travelling.

Let’s not give up hope, so that we can meet again soon. Please don’t forget to take care of the environment to prevent this kind of pandemic in the future. 

As a closing note, we asked people to share one tip they have for the community to survive this time. 

The most-mentioned tip was to stay healthy, physically and mentally. Suggestions included getting good sleep, eating good food, exercising moderately, and meditating. Staying connected with others and helping others was the runner-up tip. Although we already have enough Zoom talks, non-work-related Zoom socializing – like Zoom beers! – can actually make us feel better.

Many also emphasized having a routine or some rules, such as setting certain time slots for work and breaks. Don’t think too much about how others have been doing; breathe and focus on yourself 🙂

There were some extra fun and useful tips: “Don’t let the dirty dishes accumulate”, “fix things in your house”, “learn new things”, and “Tabletop Simulator (on Steam) was quite useful during lockdown”.

Stay healthy and happy till we meet next time!

Image credit: https://lizadonnelly.com/archives/animating-earth-day

Kyungyun Lee is an MSc student at the Music and Audio Computing Lab, KAIST. Her research interests range from MIR to HCI. Currently, she is interested in automatic music generation and in analyzing user interaction with music platforms.