Modeling Music and Code Knowledge to Support a Co-creative AI Agent for Education
Jason Smith, Erin Truesdell, Jason Freeman, Brian Magerko, Kristy Boyer, Tom McKlin
Keywords: Applications, Music training and education, Domain knowledge, Representations of music, Human-centered MIR, User behavior analysis and mining, User modeling, MIR fundamentals and methodology, Multimodality, Symbolic music processing, Musical features and properties, Structure, segmentation, and form
Abstract:
EarSketch is an online environment for learning introductory computing concepts through code-driven, sample-based music production. This paper details the design and implementation of a module that performs code and music analyses of projects created on the EarSketch platform. The module combines symbolic metadata, audio feature analysis, and user code to produce comprehensive models of user projects. It performs a detailed analysis of the abstract syntax tree of a user's code to model the use of computational concepts, and it applies music information retrieval (MIR) and symbolic metadata to analyze users' musical design choices. These analyses yield a model of users' coding and musical decisions, as well as the qualities of the algorithmic music those decisions produce. The resulting models will support the future development of CAI, a Co-creative Artificial Intelligence agent designed to collaborate with learners and promote competency and engagement with topics in the EarSketch curriculum. By combining code analysis and MIR, our module furthers the educational goals of CAI and EarSketch and explores the application of multimodal analysis tools to education.
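As an illustrative sketch only, not drawn from the paper itself, AST-based detection of computational concepts could look like the following in Python, one of EarSketch's scripting languages. The concept labels and the `detect_concepts` helper are our own assumptions, not EarSketch's taxonomy or implementation.

```python
# Sketch: detecting computational concepts in a learner's Python
# script by walking its abstract syntax tree. The mapping from AST
# node types to concept names is hypothetical.
import ast

CONCEPT_NODES = {
    ast.For: "loops",
    ast.While: "loops",
    ast.If: "conditionals",
    ast.FunctionDef: "user-defined functions",
    ast.List: "lists",
}

def detect_concepts(source: str) -> set[str]:
    """Return the set of concepts evidenced by the source code."""
    tree = ast.parse(source)
    found = set()
    for node in ast.walk(tree):
        for node_type, concept in CONCEPT_NODES.items():
            if isinstance(node, node_type):
                found.add(concept)
    return found

if __name__ == "__main__":
    # The script is only parsed, never executed, so EarSketch API
    # calls such as fitMedia need not be defined here.
    script = """
for measure in range(1, 9):
    if measure % 2 == 0:
        fitMedia(drums, 1, measure, measure + 1)
"""
    print(detect_concepts(script))  # {'loops', 'conditionals'}
```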
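On the music side, a minimal sketch of audio feature analysis follows, assuming a rendered mixdown file named "project_render.wav" and using the librosa library; the paper does not specify its MIR toolchain or feature set, so both choices are assumptions made for illustration.

```python
# Sketch: summarizing a project's rendered audio with standard MIR
# descriptors (tempo, loudness proxy, brightness proxy).
import librosa
import numpy as np

# Load the rendered project audio at its native sample rate.
y, sr = librosa.load("project_render.wav", sr=None, mono=True)

tempo, beats = librosa.beat.beat_track(y=y, sr=sr)
rms = librosa.feature.rms(y=y)[0]
centroid = librosa.feature.spectral_centroid(y=y, sr=sr)[0]

summary = {
    "tempo_bpm": float(tempo),
    "mean_rms": float(np.mean(rms)),                      # loudness proxy
    "mean_spectral_centroid": float(np.mean(centroid)),   # brightness proxy
}
print(summary)
```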