This is a list of Project Guides and their areas of interest for the 2020 WiMIR Virtual Workshop, which will take place as an online-only satellite event of ISMIR 2020.
These events are FREE and open to the public, but advance sign-up is REQUIRED. Sign up here: https://bit.ly/WiMIRWorkshop2020
This year’s Workshop is organized by Blair Kaneshiro (Stanford University), Katherine M. Kinnaird (Smith College), Jordan B. L. Smith (ByteDance), and Thor Kell (Spotify).
Please note that start times vary for each event – see each event for its exact time and time zone.
August 22

Daniel Ellis: Sound Event Recognition
This event will start at 10:00 AM, Eastern Daylight Time.
My group at Google has been working on developing general-purpose sound event recognizers. I’ll briefly recap the evolution of this work, from virtually nothing in 2014 to deployed apps today. I’ll also talk a little about my own transition from academia to industry, and the day-to-day details of my work as a Tech Lead / Research Scientist / Manager at Google.
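The abstract doesn’t name a specific system, but Google’s publicly released YAMNet classifier is one example of a general-purpose sound event recognizer, and it gives a feel for what such a model looks like in use. A minimal sketch, assuming TensorFlow and TensorFlow Hub are installed:

```python
import csv

import numpy as np
import tensorflow as tf
import tensorflow_hub as hub

# Load the publicly released YAMNet sound event classifier from TF Hub.
model = hub.load('https://tfhub.dev/google/yamnet/1')

# YAMNet expects mono float32 audio at 16 kHz, scaled to [-1.0, 1.0].
# A real application would load a recording; random noise is a stand-in here.
waveform = np.random.uniform(-0.5, 0.5, 16000).astype(np.float32)

# scores: (frames, 521) per-frame probabilities over the AudioSet ontology.
scores, embeddings, spectrogram = model(waveform)

# Map the highest mean score back to a human-readable class name.
with tf.io.gfile.GFile(model.class_map_path().numpy()) as f:
    class_names = [row['display_name'] for row in csv.DictReader(f)]
top = int(tf.argmax(tf.reduce_mean(scores, axis=0)))
print('Most likely sound event:', class_names[top])
```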
Dan Ellis leads a small team developing sound event recognition technologies within Google AI/Perception. From 2000 to 2015 he was on the faculty of the Electrical Engineering department at Columbia University, leading research into environmental sound processing and music audio analysis. He now regrets encouraging his students to write Matlab without unit tests.

Jenn Thom: Improving the Music Listening Experience: HCI Research at Spotify
This event will start at 1:00 PM, Eastern Daylight Time.
Music plays an important role in everyday life around the world. People rely on music to manage their mood, express their identity and celebrate milestone events. Streaming services like Spotify have transformed the way that people consume audio by providing listeners with multiple personalized ways to access an abundant catalog of content. In my talk, I will describe several active areas of HCI research at Spotify and present our work on understanding how people search for music and how we can enable exploration for listeners.
Jenn Thom leads the HCI research lab at Spotify. Her current research interests include understanding how people search for and describe music, and developing novel design and prototyping methods for conversational interactions. Prior to joining Spotify, she was a Research Scientist at Amazon, where she worked on collecting and mining data to bootstrap new features for the launch of the Echo. She was also a Research Staff Member at IBM Research, where she studied how employees used social networks for intercultural collaboration. Jenn received her PhD from Cornell University; her dissertation focused on how people expressed territorial behaviors in user-generated content communities.
September 5

Doug Turnbull: Locally-Focused Music Recommendation
This event will start at 12:00 PM, Eastern Daylight Time.
There are talented musicians all around us. They play amazing live shows at small venues in every city around the world. Yet music services like Spotify, Apple Music, YouTube, and Pandora do a poor job of helping listeners discover these artists, for a variety of commercial and technical reasons. To remedy this problem, I will discuss our recent efforts to use recommender systems to support locally-focused music discovery. First, I’ll provide a brief introduction to recommender systems, long-tail consumption models, and popularity bias. I’ll then describe how we can adapt typical recommender system algorithms to be better at recommending local (long-tail) music. Finally, I will describe a personalized Internet radio project called MegsRadio.fm, why it failed after years of dedicated development, and how lessons learned are being incorporated into the design of my new project, Localify.org.
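The abstract doesn’t specify an algorithm, but the popularity-bias idea can be made concrete with a toy re-ranker that discounts a base recommender’s relevance score by log-popularity, so long-tail (local) artists can surface. A sketch with invented names and numbers, not Localify’s actual method:

```python
import math

# Candidate artists: (name, relevance from some base recommender, play count).
# All names and numbers are invented for illustration.
candidates = [
    ('Global Superstar', 0.95, 50_000_000),
    ('Touring Mid-Tier Act', 0.90, 800_000),
    ('Local Band', 0.85, 12_000),
]

def long_tail_score(relevance, play_count, alpha=0.1):
    """Discount relevance by log-popularity; larger alpha favors the tail."""
    return relevance - alpha * math.log10(play_count + 1)

for name, rel, plays in sorted(
        candidates, key=lambda c: long_tail_score(c[1], c[2]), reverse=True):
    print(f'{name}: {long_tail_score(rel, plays):.3f}')
```

With these numbers, the local band overtakes the superstar once the penalty is applied; tuning alpha trades raw accuracy against long-tail exposure.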
Doug Turnbull is an Associate Professor in the Department of Computer Science at Ithaca College. His research focuses on music information retrieval, computer audition, machine learning, and human computation. His research passion is using recommender systems to promote music by talented local artists. He is currently working on Localify.org, which explores using music event recommendations and playlist generation on Spotify to support local music communities. This project is funded by the National Science Foundation and is being developed by a large team of undergraduate students at Ithaca College. He is a former ISMIR conference program co-chair and former ISMIR board member. More information about his research can be found at https://dougturnbull.org.

Amanda Krause: Everyday Experiences of Music: A Fireside Chat with Dr. Amanda Krause
This event will start at 6:00 PM, Pacific Daylight Time.
Given the prominence of music in our everyday lives and ongoing developments in technology, how do people access, consume, and respond to music? Working in the social and applied psychology of music, Dr. Amanda Krause researches how our everyday experiences with music influence our well-being, in order to better understand the place that music occupies in modern life. In this fireside chat, Amanda will discuss her research topics and approaches, how she has pursued her research interests via travel, education, and collaboration, and the challenges and opportunities that have arisen from establishing an interdisciplinary research career. She will also reflect on how the MIR and music psychology disciplines intersect, how she has made connections within the MIR community, and how researchers working in these disciplines can collaborate to tackle some very interesting and challenging research questions.
This fireside chat will be moderated by Dr. Blair Kaneshiro.
As a music psychology scholar based at James Cook University, Dr. Amanda Krause studies how we experience music in our everyday lives. Her research asks how our musical experiences influence our well-being. Amanda’s current projects examine the role of music listening and the radio in supporting individual and community well-being. Amanda is the author of numerous academic publications and currently serves on the Australian Music & Psychology Society (AMPS) committee. She has also spoken on her research to academics and industry leaders at conferences around the world, to students through programs like Skype A Scientist and STEM Professionals in Schools, and to members of the general public via radio appearances and events like Pint Of Science.
September 19

Preeti Rao and Rohit M. A.: Unity in Diversity: MIR Tools for Non-Western Music
This event will start at 1:30 PM, India Standard Time.
Just as there is great linguistic and cultural diversity across the globe, there is rich diversity in music. But the universals of musical structure – attributes such as pitch, rhythm, and timbre that describe all music – enable us to apply the rich tools of MIR developed for Western music to interesting and musically relevant tasks in genres as distinct as Indian art music. We discuss some important considerations for researchers, such as (i) identifying MIR-addressable problems and the tools to apply, and (ii) dealing with the anticipated limitations of labeled datasets. We do this with easy-to-follow examples from Indian music and show how the insights obtained can be rewarding, also in terms of understanding the music better!
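As one concrete example of reusing standard MIR tooling, the sketch below extracts a melodic pitch contour with the open-source librosa library and converts it to cents relative to an assumed tonic – a typical first step for studying ornamentation in Indian art music. The file name, tonic, and parameters are illustrative, not the presenters’ own pipeline:

```python
import librosa
import numpy as np

# Load a (hypothetical) vocal recording at its native sampling rate.
y, sr = librosa.load('vocal_performance.wav', sr=None, mono=True)

# pYIN fundamental-frequency tracking; the wide range suits vocal music.
# Unvoiced frames come back as NaN.
f0, voiced_flag, voiced_prob = librosa.pyin(
    y, sr=sr,
    fmin=librosa.note_to_hz('C2'),
    fmax=librosa.note_to_hz('C6'),
)

# Express the contour in cents relative to the tonic so that ornaments
# (gamakas) can be studied independently of absolute tuning. The tonic
# would normally be estimated; 220 Hz is an assumed value.
tonic_hz = 220.0
cents = 1200 * np.log2(f0 / tonic_hz)
```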
Preeti Rao has been on the faculty of Electrical Engineering at I.I.T. Bombay, teaching and researching in the area of signal processing with applications in speech and audio. She received her Ph.D. from the University of Florida, Gainesville, in 1990. She was a collaborator in the CompMusic project (2011-2016), led by the MTG at UPF, Barcelona, which applied MIR to non-Western music. She currently leads another international collaboration, funded by the Government of India, for research in Computational Musicology and Musical Instruments Modeling for Indian Music. She has been actively involved in the development of technology for Indian music and spoken-language learning applications. She co-founded SensiBol Audio Technologies, a start-up incubated by I.I.T. Bombay in 2011, with her Ph.D. and Master’s students.
Rohit M. A. is a Master’s student and research assistant in the Digital Audio Processing lab in the Electrical Engineering department at IIT Bombay. His background is in communication and digital signal processing, and his research interests lie in MIR, computational musicology, and machine learning for audio. His current research centers on developing tools for the analysis of Hindustani classical music and its instruments, with a focus on performance-related aspects. He is also a trained violinist.

Juanjo Bosch: AI-Assisted Music Creation
This event will start at 2:00 PM, Central European Summer Time.
This workshop will give an overview of how music information retrieval, and more generally artificial intelligence, can assist composers and producers in making music, from both a research and an industry perspective. We will talk about some of the recent advancements in machine learning applied to (audio and symbolic) music generation and repurposing, and we will review some of the techniques that paved the way there. We will also look at how startups and large companies are approaching this field and at some of the real-world applications that have been created, and we will finally discuss specific examples of how artists and coders have been using such technologies. Could we even try to imagine what the future of this exciting field may look like?
Juanjo is a Research Scientist at the Creator Technology Research Lab at Spotify, whose main mission is to create tools for musicians and producers. He holds a Telecommunications Engineering degree from the Universitat Politècnica de València, and a Master’s (in Sound and Music Computing) and a PhD from the Universitat Pompeu Fabra (Music Technology Group, Barcelona), the latter conducted under the supervision of Emilia Gómez. He has also visited other academic institutions, such as the University of Sheffield and Queen Mary University of London (C4DM), and he worked for three years at Fraunhofer IDMT. Before joining Spotify, he gained industry experience at Hewlett-Packard and Yamaha Music. His main research interests lie at the intersection of music information retrieval and AI-assisted music creation.

Amy LaMeyer and Darragh Dandurand: XR and Music – A Conversation
This event will start at 9:00 AM, Pacific Daylight Time.
Extended reality (XR) is radically changing the way we create, consume, and socialize around music. In this conversation, Amy LaMeyer and Darragh Dandurand will discuss today’s landscape of XR and music, including the current state of the industry, recent technological advances, and innovations in artist-fan connections in the age of COVID. They’ll also speak about the history and mission of the WXR Fund, and reflect on their own professional journeys and what it means to forge an authentic career path.
Amy LaMeyer is Managing Partner at the WXR Fund, which invests in early-stage companies with female leadership that are transforming business and human interaction using spatial computing (VR/AR) and AI. She has been named one of the people to watch in AR by Next Reality. Amy is the author of ‘Sound and AR’ in the book “Convergence: how the world will be painted with data”. She has 20 years of experience in the high-growth technology industry, spanning corporate development, mergers and acquisitions, engineering, and finance.
Darragh Dandurand is an award-winning creative director, brand strategist, photojournalist, and curator who has worked in media for a decade and now works in immersive technology / spatial computing as well. Recent clients include media outlets and studios such as Refinery29, VICE, Verizon, the New Museum, Buck Co, Superbright, Sensorium, iHeartMedia, and Wallplay. Currently, Darragh consults for creative tech teams, develops her own experiential projects, publishes articles on mixed reality, and researches and presents on the intersection of fashion, e-commerce, and wearable tech. She sits on the board of directors for the Femme Futures Grant via The Kaleidoscope Fund. Darragh has lectured at Stanford University, Temple University, the University of Maryland, the University of Rhode Island, The New School, and the Fashion Institute of Technology, as well as at the VRARA Global Summit, Samsung, Out in Tech, MAVRIC, Magic Leap’s LeapCon, and others.
October 3

Christine Bauer: The *Best Ever* Recommendation – For Who? And How Do You Know That?
This event will start at 11:00 AM, Central European Summer Time.
Music recommender systems are an inherent ingredient of all kinds of music platforms. They are meant to assist users in searching, sorting, and filtering the huge repertoire. Now, if a recommender computes the *best ever* recommendation, is it the best choice for the user? Or the best for the recommended artist? Is it the best choice for the platform provider? Is the *best ever* recommendation equally valuable for users, artists, and providers alike? If you (think you) have an answer, how do you know that? Is it indeed the *best ever* recommendation? In this session, I will provide insights on what we miss out on in research on music recommenders. I will point to the perspectives of the various stakeholders and to the range of methods that may allow us to shed light on questions that we have not even asked so far. I will *not* provide the ultimate answer. I do not know it. It is research in progress. The goal is to move forward together. Expect this session to be interactive, with lots of brainstorming and discussion.
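One toy example of the session’s central question, with invented numbers and no claim about any real platform: the “best” item flips depending on whose objective you score.

```python
# item: (user_relevance, artist_exposure_need, platform_margin) -- invented.
items = {
    'hit_single':    (0.9, 0.1, 0.8),
    'deep_cut':      (0.7, 0.6, 0.5),
    'new_local_act': (0.6, 0.9, 0.3),
}

for stakeholder, idx in [('user', 0), ('artist', 1), ('platform', 2)]:
    best = max(items, key=lambda item: items[item][idx])
    print(f'Best for {stakeholder}: {best}')
# Three stakeholders, three different "best ever" recommendations.
```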
Christine Bauer is an assistant professor at Utrecht University, The Netherlands. Her research activities center on interactive intelligent systems. She focuses on context-adaptive systems and, currently, on music recommender systems in particular. Her activities are driven by her interdisciplinary background: she holds a Doctoral degree in Social and Economic Sciences, a Diploma degree in International Business Administration, and a Master’s degree in Business Informatics. Furthermore, she pursued studies in jazz saxophone. Christine is an experienced teacher and has taught a wide spectrum of topics in computing and information systems across 10 institutions. She has authored more than 90 papers, received the prestigious Elise Richter grant, and holds awards for her research as well as her reviewing activities. Earlier, she researched at Johannes Kepler University Linz, Austria; WU Vienna, Austria; the University of Cologne, Germany; and the E-Commerce Competence Center, Austria. In 2013 and 2015, she was a Visiting Fellow at Carnegie Mellon University, Pittsburgh, PA, USA. Before starting her academic career, she worked at AKM, Austria’s biggest collecting society. More information can be found on her website: https://christinebauer.eu

Tom Collins: Automatic Music Generation: Demos and Applications
This event will start at 1:00 PM, British Summer Time.
There has been a marked increase in recent years in the number of papers and algorithms addressing automatic music generation (AMG). This workshop will:
- Invite participants to try out some tweakable demos and applications of music generation algorithms;
- Cover some of my lab’s projects in this area, which include a recent collaboration with Grammy Award-winning artist Imogen Heap, and integrating AMG algorithms into computer games;
- Review approaches and applications from other research groups, such as Google Magenta;
- Underline that a literature existed on this topic before deep learning(!) – see the sketch after this list – and that evaluation should consist of more than optimizing a metric.
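To make the pre-deep-learning point concrete, here is a minimal, generic sketch of first-order Markov-chain melody generation, a staple of that earlier literature (not one of the lab’s own algorithms):

```python
import random

# Tiny training corpus of melodies as MIDI pitch sequences (invented).
corpus = [
    [60, 62, 64, 65, 67, 65, 64, 62, 60],
    [60, 64, 67, 72, 67, 64, 60],
]

# Count pitch-to-pitch transitions across the corpus.
transitions = {}
for melody in corpus:
    for a, b in zip(melody, melody[1:]):
        transitions.setdefault(a, []).append(b)

def generate(start=60, length=16):
    """Random-walk the transition table; restart from `start` on dead ends."""
    note, out = start, [start]
    for _ in range(length - 1):
        note = random.choice(transitions.get(note, [start]))
        out.append(note)
    return out

print(generate())  # e.g. [60, 64, 62, 60, 62, ...]
```

Even this trivial generator raises the evaluation question in the final bullet: its output can score well on corpus-similarity metrics while being musically dull.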
Tom studied Music at Cambridge, Math and Stats at Oxford, and did his PhD on automatic pattern discovery and music generation at the Open University. He has held multiple postdoc and visiting assistant professor positions in the US and Europe, and now splits his time between the University of York (where he runs the Music Computing and Psychology Lab) and the music cooperative MAIA, Inc.