WiMIR Workshop 2018: Modeling Repetition and Variation for MIR

Blog post by Iris Yuping Ren, Hendrik Vincent Koops, and Anja Volk.

(Materials are available at https://github.com/hvkoops/wimir2018)

Right after the main ISMIR2018 conference came the WiMIR workshop. As planned, we gathered and formed a working group to tackle the problem of modeling repetition and variation in music for MIR, with the following participants:

Anja Volk (Utrecht University) – Project Guide
Hendrik Vincent Koops (Utrecht University) – Project Guide
Iris Yuping Ren (Utrecht University) – Project Guide
Juan Pablo Bello (New York University)
Eric Nichols (Microsoft)
Jaehun Kim (Delft University)
Marcelo Rodriguez Lopez (Yousician)
Changhong Wang (Queen Mary University of London)
Jing Chen (Nanchang University)
Tejaswinee Kelkar (University of Oslo)

In the morning session, we first reflected on the background of repetition and variation as central concepts in music, as observed by musicologists, and then on their computational modeling in Music Information Retrieval within different contexts. We discussed the disagreement among annotators in many MIR tasks, such as automatic chord extraction and repeated pattern discovery. As in many other subareas of machine learning and data science, we face complications brought on by the unattainability of an absolute, all-encompassing ground-truth annotation.

We then provided more detailed motivations and ideas on how to gather annotations on repetitions and variations in music. For example, one set of guidelines was:

Listen to the following pieces and annotate the salient melodic patterns with

  1. How relevant this pattern is to this piece
  2. One word to label the type of this pattern
  3. A short description of why you find it to be a pattern
  4. How difficult it was for you to decide whether it’s a pattern
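For groups sketching annotation tooling, the four fields above could be captured in a small record type. The following is a hypothetical sketch (the field names and rating scales are our assumptions, not part of the workshop guidelines):

```python
# Hypothetical sketch of an annotation record for one salient melodic pattern,
# mirroring the four guideline fields above. Scales (1-5) are assumed.
from dataclasses import dataclass

@dataclass
class PatternAnnotation:
    start_beat: float   # where the annotated pattern begins
    end_beat: float     # where it ends
    relevance: int      # 1 (marginal) .. 5 (central to the piece)
    label: str          # one-word type label, e.g. "motif"
    rationale: str      # why the annotator hears it as a pattern
    difficulty: int     # 1 (obvious) .. 5 (hard to decide)

a = PatternAnnotation(0.0, 8.0, 5, "theme", "opening motif returns varied", 2)
print(a.label, a.relevance)
```

A tool could then collect a list of such records per piece and per annotator, which makes the later comparison of annotations straightforward.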

Using the prepared materials, we had a very active discussion around questions such as: How should we define the concepts for a specific annotation task? How can we use tools such as wearable sensors, a wristband for example, to aid the annotation process? How can we compare annotations and annotation methods? (For more, please refer to the GitHub link.)

In the afternoon, after a very interesting and useful lunch breakout session, we started the actual annotation process on the first page of the Violin I part of Beethoven’s String Quartet No. 1 in F major, Op. 18, No. 1 (composed 1798–1800). We provided the sheet music, MIDI, and audio files. The participants used tools of their choice to mark the repetitions and variations on the sheet music. Already during the annotation, some interesting discussions arose in the subgroups: how repetitive the young Beethoven was!

In the second part of the afternoon, using the individual annotations, we began our exchange on the experience of the annotation process. We discussed how we can improve on the current designs of annotation processes and tools for annotation tasks, and how the annotated patterns could be used to design an automatic pattern discovery system. We concluded the day with a short presentation.

Throughout the day, we gained many new insights into good and bad ways to create and employ annotations of repetitions and variations. We warmly thank the participants for a great day of discussions, listening to music, and annotating repetitions and variations!

Iris Yuping Ren is a second-year PhD candidate in the Department of Information and Computing Sciences, Utrecht University, under the supervision of Dr. Anja Volk, Dr. Wouter Swierstra and Dr. Remco C. Veltkamp. She obtained Bachelor’s degrees in Statistics and Cultural Industry Management from Shandong University, Master’s degrees in Complex Systems Science from the University of Warwick and École Polytechnique, a Master’s degree in Computer and Electrical Engineering from the University of Rochester, and a diploma in violin performance from the Eastman Community Music School. Her current research focuses on the computational modelling and statistical analysis of musical patterns in various corpora. She compares generic and domain-specific approaches, such as data mining methods, time series analysis, and machine-learning-based clustering and classification algorithms. To discover useful patterns in music, she makes use of functional programming languages to compute pattern transformations and similarity dimensions. Her research contributes to a computationally and quantitatively based understanding of music and algorithms. Music-wise, she enjoys playing in local orchestra projects and sessions.

Hendrik Vincent Koops is a PhD candidate at Utrecht University under the supervision of Dr. Anja Volk and Dr. Remco C. Veltkamp. Vincent holds degrees in Sound Design and Music Composition from the HKU University of the Arts Utrecht, and degrees in Artificial Intelligence from Utrecht University. After a research internship at Carnegie Mellon University, he started his PhD in Music Information Retrieval. His PhD research concerns the computational modeling of variance in musical harmony. For example, he studied annotator subjectivity to better understand the amount of agreement we can expect among harmony annotators. Using data fusion methods, he investigated how to integrate multiple harmony annotations into a single, improved annotation. For a deep learning study, he created new features for chord-label personalization. Vincent’s research contributes to a better understanding of computational harmony analysis tasks, such as automatic chord estimation. Vincent is also active as a composer for film and small ensembles. Currently, he is working on a piece for string quartet.

Anja Volk (Utrecht University) holds master’s degrees in both mathematics and musicology, and a PhD in the field of computational musicology. The results of her research have substantially contributed to areas such as music information retrieval, computational musicology, music cognition, and mathematical music theory. In 2016 she launched, together with Amélie Anglade, Emilia Gómez and Blair Kaneshiro, the Women in MIR (WiMIR) Mentoring Program. She co-organized the launch of the Transactions of the International Society for Music Information Retrieval, the open access journal of the ISMIR society, and is serving as Editor-in-Chief for the journal’s first term. Anja received the Westerdijk Award 2018 from Utrecht University in recognition of her efforts to increase diversity.


WiMIR Workshop 2018: Success!


We’re pleased to tell you that the WiMIR 1st Annual Workshop was a resounding success!

Why organize a WiMIR Workshop? We saw this as a way to build upon the MIR community’s already strong support for diversity and inclusion in the field. The Workshop format was a fitting complement to the remote pairings of the mentoring program and brief introductions gained during the main ISMIR conference. We proposed three aims for the WiMIR 1st Annual Workshop:

  • Further amplify the scientific efforts of women in the field.
  • Encourage the discussion of proposed or unfinished work.
  • Create additional space for networking.

Thanks to support from Spotify, we were able to offer the WiMIR Workshop as a free event, and open it up to ALL members of the community!  The Workshop took place as a satellite event of ISMIR2018, at Télécom ParisTech. We had 65 pre-registrations, and closer to 80 people attending.  We had poster presentations from 18 women in the field, with topics ranging from Indian Classical music to musical gestures.  We had 11 project groups, on topics ranging from karaoke-at-scale to music for mood modulation to the relationship between cardiac rhythms & music.  We had a staggering number of croissants and pains au chocolat, too – thanks, Paris.


The day started with the aforementioned pastries and coffee, and then people joined up with their project groups, introduced themselves, and got a big-picture overview from their Project Guides.  This led into a poster session, focusing on early-stage research ideas.

Posters turned into lunch, which was informally structured around topics like “Dealing with Sexism” and “Surviving Grad School”.  The lunch provided attendees with an opportunity to connect with new people and learn about topics that members in the field (especially those who are not women) don’t often discuss.  

After lunch, the project groups started a deeper dive into their topic areas, with an eye toward presenting at 4 pm.  The presentations were great – we had everything from machine-learned piano melodies to microsurveys about music and emotion to a whole lot of post-it notes about cover songs.


It was, in general, a lot of fun, and we achieved the aims of the event.  We’re looking forward to next year – it seems like most folks are as well:

“It was a fruitful session and our group will certainly continue the work that we started yesterday.” – Elaine Chew, Professor of Digital Media, Queen Mary University

“The most inspiringly diverse event in the field of MIR!” – Oriol Nieto, Senior Scientist, Pandora

“It was exciting to see new diverse groups of people across different backgrounds, disciplines and institutions form new research collaborations!” – Rachel Bittner, Research Scientist, Spotify.

“The first WiMIR Workshop was an amazing way to meet a diverse set of people working in MIR who want to make the world better.  I loved our workshop chats as well as breaking the ice on tougher discussion points during lunch, such as overcoming sexism.  It was staggering to see what each group accomplished in such a short period of time at the first WiMIR Workshop, and I made many great new friendships as well.  Bravo!” – Tom Butcher, Principal Engineering & Science Manager, Microsoft


“I am already looking forward to next year’s!” – Kyungyun Lee, MS student, KAIST

“Very organized, inspiring and motivating event! Excellent way to meet the most welcoming people of the MIR community.” – Bruna Wundervald, PhD Candidate, Maynooth University

“I loved the format! Emerged at the end of the day full of ideas and new motivation.” – Polina Proutskova, Postdoc, Centre for Digital Media, Queen Mary University

And, of course, the tweets!



Big thanks to everyone who helped out: the ISMIR2018 volunteers & General Chairs, the ISMIR Board, WiMIR leadership, Télécom ParisTech, and Emile Marx from Spotify Paris.

We’ll see you next year in Delft!

The WiMIR Workshop Organizers,

WiMIR Workshop 2018 Project Guides

This is a list of project guides and their areas of interest for the 2018 WiMIR workshop.  These folks will be leading the prototyping and early research investigations at the workshop.  You can read about them and their work in detail below, and sign up to attend the WiMIR workshop here.



Rachel Bittner:  MIR with Stems

The majority of digital audio exists as mono or stereo mixtures, and because of this MIR research has largely focused on estimating musical information (beats, chords, melody, etc.) from these polyphonic mixtures. However, stems (the individual components of a mixture) are becoming an increasingly common audio format. This project focuses on how MIR techniques could be adapted if stems were available for all music. Which MIR problems suddenly become more important? What information – that was previously difficult to estimate from mixtures – is now simple to estimate? What new questions can we ask about music that we couldn’t before? As part of the project, we will try to answer some of these questions and create demos that demonstrate our hypotheses.

Rachel is a Research Scientist at Spotify in New York City, and recently completed her Ph.D. at the Music and Audio Research Lab at New York University under Dr. Juan P. Bello. Previously, she was a research assistant at NASA Ames Research Center, working with Durand Begault in the Advanced Controls and Displays Laboratory. She did her master’s degree in math at NYU’s Courant Institute, and her bachelor’s degree in music performance and math at UC Irvine. Her research interests are at the intersection of audio signal processing and machine learning, applied to musical audio. Her dissertation work applied machine learning to various types of fundamental frequency estimation.


Tom Butcher: Expanding the Human Impact of MIR with Mixed Reality

Mixed reality has the potential to transform our relationship with music. In this workshop, we will survey the new capabilities mixed reality affords as a new computing paradigm and explore how these new affordances can open the world of musical creation, curation, and enjoyment to new vistas. We will begin by discussing what mixed reality means, from sensors and hardware to engines and platforms for mixed reality experiences. From there, we will discuss how mixed reality can be applied to MIR-related fields of study and applications, considering some of the unique challenges and new research questions posed by the technology. Finally, we will discuss human factors and how mixed reality coupled with MIR can lead to greater understanding, empathy, expression, enjoyment, and fulfillment.

Tom Butcher leads a team of engineers and applied scientists in Microsoft’s Cloud & AI division focusing on audio sensing, machine listening, avatars, and applications of AI. In the technology realm, Tom is an award-winning creator of audio and music services, which include recommendation engines, continuous playlist systems, assisted composition agents, and other tools for creativity and productivity. Motivated by a deep enthusiasm for synthesizers and electronic sounds from an early age, Tom has released many pieces of original music as Orqid and Codebase and continues to record and perform. In 2015, Tom co-founded Patchwerks, a Seattle-based business focusing on community, education, and retail for synthesizers and electronic music instruments.


Elaine Chew: MIR Rhythm Analysis Techniques for Arrhythmia ECG Sequences

Cardiac arrhythmia has been credited as the source of the dotted rhythm at the beginning of Beethoven’s “Adieux” Sonata (Op.81a) (Goldberger, Whiting, Howell 2014); the authors have also ascribed Beethoven’s “Cavatina” (Op.130) and another piano sonata (Op.110) to his possible arrhythmia. It is arguably problematic and controversial to diagnose arrhythmia in a long-dead composer through his music. Without making any hypothesis on composers’ cardiac conditions, Chew (2018) linked the rhythms of trigeminy (a ventricular arrhythmia) to the Viennese Waltz and scored atrial fibrillation rhythms to mixed meters, Bach’s Siciliano, and the tango; she also made collaborative compositions (Chew et al. 2017-8) from longer ventricular tachycardia sequences. Given the established links between heart and musical rhythms, in this workshop, we shall take the pragmatic and prosaic approach of applying a wide variety of MIR rhythm analysis techniques to ECG recordings of cardiac arrhythmias, exploring the limits of what is currently possible.

Chew, E. (2018). Notating Disfluencies and Temporal Deviations in Music and Arrhythmia. Music and Science. [ html | pdf ]
Chew, E., A. Krishna, D. Soberanes, M. Ybarra, M. Orini, P. Lambiase (2017-8). Arrhythmia Suite. bit.ly/heart-music-recordings
Goldberger, Z. D., S. M. Whiting, J. D. Howell (2014). The Heartfelt Music of Ludwig van Beethoven. Perspectives in Biology and Medicine, 57(2): 285-294. [synopsis]
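To make the analogy concrete, one simple starting point is to treat ECG R-peak times like note onsets and quantize the resulting inter-beat (RR) intervals to note-length ratios, much as a rhythm transcription system would. The sketch below is purely illustrative (the function names and toy data are our own, not from the cited work):

```python
# Hypothetical sketch: treating ECG R-peak times like onset times in MIR
# beat tracking, we derive inter-beat (RR) intervals and snap them to
# simple note-length ratios relative to the median interval.
from statistics import median

def rr_intervals(r_peak_times):
    """Inter-beat intervals (seconds) from successive R-peak times."""
    return [b - a for a, b in zip(r_peak_times, r_peak_times[1:])]

def quantize_to_ratios(intervals, grid=(0.5, 1.0, 1.5, 2.0)):
    """Snap each interval to the nearest simple ratio of the median
    interval, analogous to quantizing onset gaps to note values."""
    base = median(intervals)
    return [min(grid, key=lambda r: abs(iv / base - r)) for iv in intervals]

peaks = [0.0, 0.8, 1.6, 2.0, 2.8]   # made-up R-peak times (seconds)
ivs = rr_intervals(peaks)           # one short interval among regular beats
print(quantize_to_ratios(ivs))      # ratios relative to the median interval
```

A premature beat then shows up as a 0.5 ratio amid 1.0s, which is exactly the kind of rhythmic figure the cited work relates to dotted and mixed-meter patterns.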

Elaine Chew is Professor of Digital Media at Queen Mary University of London, where she is affiliated with the Centre for Digital Music in the School of Electronic Engineering and Computer Science. She was awarded a 2018 ERC ADG for the project COSMOS: Computational Shaping and Modeling of Musical Structures, and is recipient of a 2005 Presidential Early Career Award in Science and Engineering / NSF CAREER Award, and 2007/2017 Fellowships at Harvard’s Radcliffe Institute for Advanced Studies. Her research, which centers on computational analysis of music structures in performed music, performed speech, and cardiac arrhythmias, has been supported by the ERC, EPSRC, AHRC, and NSF, and featured on BBC World Service/Radio 3, Smithsonian Magazine, Philadelphia Inquirer, Wired Blog, MIT Technology Review, etc. She has authored numerous articles and a Springer monograph (Mathematical and Computational Modeling of Tonality: Theory and Applications), and served on the ISMIR steering committee.



Johanna Devaney:  Cover Songs for Musical Performance Comparison and Musical Style Transfer

Cover versions of a song typically retain the basic musical material of the song being covered, but may vary a great deal in their fidelity to other aspects of the original recording. Some covers differ only in minor ways, such as timing and dynamics, while others may use completely different instrumentation, performance techniques, or genre. This workshop will explore the potential of cover songs for studying musical performance and for performing musical style transfer. In contrast to making comparisons between different performances of different songs, cover songs provide a unique opportunity to evaluate differences in musical performance, both within and across genres. For musical style transfer, the stability of the musical material serves as an invariant representation, which allows for paired examples for training machine learning algorithms. The workshop will consider issues in dataset creation as well as metrics for evaluating performance similarity and style transfer.

Johanna is an Assistant Professor of Music Technology at Brooklyn College, City University of New York, and the Specialty Chief Editor for the Digital Musicology section of Frontiers in Digital Humanities. Previously she taught in the Music Technology program at NYU Steinhardt and the Music Theory and Cognition program at Ohio State University. Johanna completed her post-doc at the Center for New Music and Audio Technologies (CNMAT) at the University of California at Berkeley and her PhD in music technology at the Schulich School of Music of McGill University. She also holds an MPhil degree in music theory from Columbia University, as well as an MA in composition from York University in Toronto. Johanna’s research seeks to understand how humans engage with music, primarily through performance, with a particular focus on intonation in the singing voice, and how computers can be used to model and augment our understanding of this engagement.



Doug Eck: Building Collaborations Among Artists, Coders and Machine Learning

We propose to talk about challenges and future directions for building collaborations among artists, coders and machine learning researchers. The starting point is g.co/magenta. We’ve learned a lot about what works and (more importantly) what doesn’t work in building bridges across these areas. We’ll explore community building, UX/HCI issues, research directions, open source advocacy and the more general question of deciding what to focus on in such an open-ended, ill-defined domain. We hope that the session is useful even for people who don’t know of or don’t care about Magenta. In other words, we’ll use Magenta as a starting point for exploring these issues, but we don’t need to focus solely on that project.

Douglas Eck is a Principal Research Scientist at Google working in the areas of music, art and machine learning. Currently he is leading the Magenta Project, a Google Brain effort to generate music, video, images and text using deep learning and reinforcement learning. One of the primary goals of Magenta is to better understand how machine learning algorithms can learn to produce more compelling media based on feedback from artists, musicians and consumers. Before focusing on generative models for media, Doug worked in areas such as rhythm and meter perception, aspects of music performance, machine learning for large audio datasets and music recommendation for Google Play Music. He completed his PhD in Computer Science and Cognitive Science at Indiana University in 2000 and went on to a postdoctoral fellowship with Juergen Schmidhuber at IDSIA in Lugano, Switzerland. Before joining Google in 2010, Doug worked in Computer Science at the University of Montreal (MILA machine learning lab), where he became Associate Professor.



Ryan Groves:  Discovering Emotion from Musical Segments

In this project, we’ll first survey the existing literature for research on detecting emotions from musical audio, and find relevant software tools and datasets to assist in the process. Then, we’ll try to formalize our own expertise in how musical emotion might be perceived, elicited and automatically evaluated from musical audio. The goal of the project will be to create a software service or tool that can take a musical audio segment that is shorter than a whole song, and detect the emotion from it.

Ryan Groves is an award-winning music researcher and veteran developer of intelligent music systems. He did a Master’s in Music Technology at McGill University under Ichiro Fujinaga, and has published in conference proceedings including Mathematics and Computation in Music, Musical Metacreation (ICCC & AIIDE), and ISMIR. In 2016, he won the Best Paper award at ISMIR for his paper “Automatic melodic reduction using a supervised probabilistic context-free grammar”.  He is currently the President and Chief Product Officer at Melodrive – an adaptive music generation system. Using cutting-edge artificial intelligence techniques, Melodrive allows any developer to automatically create and integrate a musical soundtrack into their game, virtual world or augmented reality system.  With a strong technical background, extensive industry experience in R&D, and solid research footing in academia, Ryan is focused on delivering innovative and robust musical products.



Christine Ho, Oriol Nieto, & Kristi Schneck:  Large-scale Karaoke Song Detection

We propose to investigate the problem of automatically identifying karaoke tracks in a large music catalog. Karaoke songs are typically instrumental renditions of popular tracks, often including backing vocals in the mix, such that a live performer can sing on top of them. The automatic identification of such tracks would benefit not only the curation of large collections, but also their navigation and exploration. We challenge the participants to think about the type of classifiers we could use for this problem, what features would be ideal, and what dataset would be beneficial to the community, to potentially propose this as a novel MIREX (MIR Evaluation eXchange) task in the near future.

Oriol Nieto is a Senior Scientist at Pandora. Prior to that, he defended his Ph.D. dissertation in the Music and Audio Research Lab at NYU, focusing on the automatic analysis of structure in music. He holds an M.A. in Music, Science and Technology from the Center for Computer Research in Music and Acoustics at Stanford University, an M.S. in Information Theories from the Music Technology Group at Pompeu Fabra University, and a Bachelor’s degree in Computer Science from the Polytechnic University of Catalonia. His research focuses on music information retrieval, large-scale recommendation systems, and machine learning with special emphasis on deep architectures. Oriol plays guitar, violin, and sings (and screams) in his spare time.

Kristi Schneck is a Senior Scientist at Pandora, where she is leading several science initiatives on Pandora’s next-generation podcast recommendation system. She has driven the science work for a variety of applications, including concert recommendations and content management systems. Kristi holds a PhD in physics from Stanford University and dual bachelor’s degrees in physics and music from MIT.

Christine Ho is a scientist on Pandora’s content science team, where she works on detecting music spam and helps teams with designing their AB experiments. Before joining Pandora, she completed her PhD in Statistics at University of California, Berkeley and interned at Veracyte, a company focused on applying machine learning to genomic data to improve outcomes for patients with hard-to-diagnose diseases.


Xiao Hu: MIR for Mood Modulation: A Multidisciplinary Research Agenda

Mood modulation is a main reason behind people’s engagement with music, and questions of how people use music to modulate mood, and how MIR techniques and systems can facilitate this process, continue to fascinate researchers in various related fields. In this workshop group, we will discuss how MIR researchers with diverse backgrounds and interests can participate in this broad direction of research. Engaging activities are designed to enable hands-on practice with multiple research methods and study designs (both qualitative and quantitative/computational). Through feedback from peers and the project guide, participants are expected to start developing a focused research agenda with theoretical, methodological and practical significance, based on their own strengths and interests. Participants from different disciplines and levels are all welcome. Depending on the background and interests of the participants, a small new dataset is prepared for fast prototyping of how MIR techniques and tools can help enhance this multidisciplinary research agenda.

Dr. Xiao Hu has been studying music mood recognition and MIR evaluation since 2006. Her research on affective interactions between music and users has been funded by the National Science Foundation of China and the Research Grants Council (RGC) of the Hong Kong S.A.R. Dr. Hu was a tutorial speaker at the ISMIR conferences in 2012 and 2016. Her papers have won several awards at international conferences and have been cited extensively. She has served as a conference co-chair (2014) and a program co-chair (2017 and 2018) for ISMIR, and as an editorial board member of TISMIR. She served on the Board of Directors of ISMIR from 2012 to 2017. Dr. Hu has a multidisciplinary background, holding a PhD degree in Library and Information Science, a Multi-disciplinary Certificate in Language and Speech Processing, a Master’s degree in Computer Science, a Master’s degree in Electrical Engineering and a Bachelor’s degree in Electronics and Information Systems.

Anja Volk, Iris Yuping Ren, & Hendrik Vincent Koops:  Modeling Repetition and Variation for MIR

Repetition and variation are fundamental principles in music. Accordingly, many MIR tasks are based on automatically detecting repeating units in music, such as repeating time intervals that establish the beat, repeating segments in pop songs that establish the chorus, or repeating patterns that constitute the most characteristic part of a composition. In many cases, repetitions are not literal, but subject to slight variations, which introduces the challenge as to what types of variation of a musical unit can be reasonably considered as a re-occurrence of this unit. In this project we look into the computational modelling of rhythmic, melodic, and harmonic units, and the challenge of evaluating state-of-the-art computational models by comparing the output to human annotations. Specifically, we investigate for the MIR tasks of 1) automatic chord extraction from audio, and 2) repeated pattern discovery from symbolic data, how to gain high-quality human annotations which account for different plausible interpretations of complex musical units. In this workshop we discuss different strategies of instructing annotators and undertake case studies on annotating patterns and chords on small data sets. We compare different annotations, jointly reflect on the rationales regarding these annotations, develop novel ideas on how to setup annotation tasks and discuss the implications for the computational modelling of these musical units for MIR.
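As a toy illustration of the comparison step, agreement between annotators on chord labels can be measured frame-wise (or beat-wise). The minimal sketch below, with made-up labels, is our own illustration rather than the workshop’s actual methodology:

```python
# Hypothetical sketch: frame-wise agreement rate between two chord
# annotations of the same piece, one label per beat (data is made up).
def agreement(ann_a, ann_b):
    """Fraction of frames where two annotators chose the same label."""
    assert len(ann_a) == len(ann_b), "annotations must be aligned"
    matches = sum(x == y for x, y in zip(ann_a, ann_b))
    return matches / len(ann_a)

annotator_1 = ["C", "C", "F", "G", "C"]
annotator_2 = ["C", "Am", "F", "G", "C"]   # hears a relative minor at beat 2
print(agreement(annotator_1, annotator_2))  # 0.8
```

Disagreements like the C-versus-Am frame above are exactly the plausible alternative interpretations the project aims to account for, rather than treat as annotator error.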

Anja Volk holds master’s degrees in both Mathematics and Musicology, and a PhD from Humboldt University Berlin, Germany. Her area of specialization is the development and application of computational and mathematical models for music research. The results of her research have substantially contributed to areas such as music information retrieval, computational musicology, digital cultural heritage, music cognition, and mathematical music theory. In 2003 she was awarded a Postdoctoral Fellowship at the University of Southern California, and in 2006 she joined Utrecht University as a Postdoc in the area of Music Information Retrieval. In 2010 she was awarded a highly prestigious NWO-VIDI grant from the Netherlands Organisation for Scientific Research, which allowed her to start her own research group. In 2016 she co-launched the international Women in MIR mentoring program, in 2017 she co-organized the launch of the Transactions of the International Society for Music Information Retrieval, and she is serving as Editor-in-Chief for the journal’s first term.

Cynthia C. S. Liem & Andrew Demetriou:  Beyond the Fun: Can Music We Do Not Actively Like Still Have Personal Significance?

In today’s digital information society, music is typically perceived and framed as ‘mere entertainment’. However, historically, the significance of music to human practitioners and listeners has been much broader and more profound. Music has been used to emphasize social status, to express praise or protest, to accompany shared social experiences and activities, and to moderate activity, mood and self-established identity as a ‘technology of the self’. Yet our present-day music services (and their underlying Music Information Retrieval (MIR) technology) do not focus explicitly on fostering these broader effects: they may be hidden in existing user interaction data, but this data usually lacks sufficient context to tell for sure. As a controversial thought, music that is appropriate for the scenarios above may not necessarily be our favorite music, yet may still be of considerable personal value and significance to us. How can and should we deal with this in the context of MIR and recommendation? May MIR systems then become the tools that can surface such items, and thus create better user experiences that users could not have imagined themselves? What ethical and methodological considerations should we take into account when pursuing this? And, for technologists in need of quantifiable and measurable criteria of success, how should the impact of suggested items on users be measured in these types of scenarios? In this workshop, we will focus on discussing these questions from an interdisciplinary perspective, and on jointly designing corresponding initial MIR experimental setups.

Cynthia Liem graduated in Computer Science at Delft University of Technology, and in Classical Piano Performance at the Royal Conservatoire in The Hague. Now an Assistant Professor at the Multimedia Computing Group of Delft University of Technology, her research focuses on music and multimedia search and recommendation, with special interest in fostering the discovery of content which is not trivially on users’ radars. She gained industrial experience at Bell Labs Netherlands, Philips Research and Google, was a recipient of multiple scholarships and awards (e.g. Lucent Global Science & Google Anita Borg Europe Memorial scholarships, Google European Doctoral Fellowship, NWO Veni) and is a 2018 Researcher-in-Residence at the National Library of The Netherlands. Always interested in discussion across disciplines, she is also co-editor of the Multidisciplinary Column of the ACM SIGMM Records. As a musician, she still has an active performing career, particularly with the (inter)nationally award-winning Magma Duo.

Andrew Demetriou is a PhD candidate in the Multimedia Computing Group at Delft University of Technology. His academic interests lie at the intersection of the psychological and biological sciences and the relevant data sciences, furthering our understanding of 1) love, relationships, and social bonding, and 2) optimal, ego-dissolutive, and meditative mental states, by studying people performing, rehearsing, and listening to music. His prior experience includes assessing the relationship between initial romantic attraction and hormonal assays (saliva and hair) during speed-dating events, validating new classes of experimental criminology VR paradigms using electrocardiography data collected both in a lab and in the field (at the Lowlands music festival), and syntheses of the music psychology literature presented at ISMIR 2016 and 2017.



Matt McVicar: Creative Applications of MIR Data

In this workshop, you’ll explore the possibility of building creative tools using MIR data. You’ll discuss the abundance of existing data for creative applications, which in the context of this workshop simply means “a human making something musical”. As a team, you may come up with new product or research ideas based on your own backgrounds, or you may develop an idea drawn from existing products or research papers. You may find that the data for your application already exists, in which case you can spend the workshop fleshing out the details of how your application will work. Alternatively, you may discover that the data for your task does not exist, in which case your team could start gathering, or planning the gathering of, these data.

Matt is Head of Research at Jukedeck. He began his PhD at the University of Bristol under the supervision of Tijl De Bie and finished it whilst on a Fulbright Scholarship at Columbia University in New York City with Dan Ellis. He then worked under Masataka Goto at the National Institute of Advanced Industrial Science and Technology in Tsukuba, Japan, before returning to Bristol for a two-year research grant. He joined Jukedeck in April 2016, and his main interests are creative applications of MIR, such as algorithmic composition.


WiMIR 1st Annual Workshop



WiMIR is excited to partner with Spotify to offer the first-ever WiMIR Workshop, taking place on Friday, 28 September 2018 at Télécom ParisTech in Paris, France. This event is open to all members of the MIR community.

The goal of this event is to provide a venue for mentorship, networking, and collaboration among women and allies in the ISMIR community, while also highlighting technical work by women in MIR in different stages of completion. This is the first time we’ve organized such an event, and we’d love to see you there!


An ISMIR Satellite Event

The workshop will take place immediately following ISMIR 2018, which features a WiMIR reception and the Late-breaking & Demos session. This satellite event aims to complement the conference in three notable ways:

  • Further amplify the scientific efforts of women in the field.
  • Encourage the discussion of proposed or unfinished work.
  • Create additional space for networking.


Opportunities for Research, Networking, and Mentorship

The WiMIR Workshop will combine a variety of activities, including a poster session (see below), networking lunch, and small-group ideation and prototyping sessions under the mentorship of senior members of the WiMIR community. From the poster session to the group activities, the event will emphasize early research ideas that can be shaped and developed through discussions that occur throughout the day!

Who Can Participate?

The WiMIR Workshop is free and open to everyone. You do not need to attend ISMIR to attend the WiMIR Workshop.

Researchers who self-identify as women are invited to submit short abstracts for poster presentations on projects at any stage of completion, from proposal to previously published work. Preliminary and early results are especially encouraged so that presenters can get feedback from peers and mentors. Any topic broadly related to the field of MIR is welcome and encouraged. Click here to submit a poster. Poster submissions close on August 15, 2018, and acceptance notifications will be sent by August 31, 2018.

Please don’t hesitate to send questions to wimir.workshop@gmail.com.







Schedule

  • Opening Remarks
  • Mentoring Session I (intros and big picture)
  • Poster Session
  • Lunch/theme breakout
  • Mentoring Session II (deep dive into the topic)
  • Group Presentations
  • Closing Remarks

We look forward to seeing you at the Women in Music Information Retrieval 1st Annual Workshop!

The WiMIR Workshop Organizers

Abstract submission form here: https://goo.gl/forms/hy3ygYnKKS9fTLa13