Classifying Classroom Audio With Supervised Deep Models
Research Mentor(s)
Hutchinson, Brian
Description
Studies have shown that “student-centered” approaches to teaching and learning at the college level are highly effective, and instructors are increasingly incorporating these methods into their classrooms. Such classrooms tend to exhibit greater diversity in instructional style, e.g., incorporating more small group exercises, silent individual work, small- and full-group discussion, and more Q&A than a traditional lecture. To assess the impact of these changes, it is helpful to accurately quantify the extent to which different activities take place in the classroom and to correlate this with student evaluations and test scores. Self-reporting is often inaccurate, while many methods of manually annotating classroom activity require in-class observations by trained individuals, which does not scale well. Both approaches impose time and cost constraints on institutions and instructors seeking to quantify classroom activity. To address these limitations, we propose supervised deep learning models that automatically annotate classroom activity using audio captured from low-cost, non-invasive, portable audio recorders. We train deep and recurrent neural networks to classify small slices of audio into one of seven predefined activities, using over 70 hours of hand-annotated classroom recordings provided by collaborators at San Francisco State University. Our initial models yield accuracies around 90%, and work to further improve this performance is ongoing. Long term, we and our SFSU collaborators plan to make this annotation system available to all instructors in an easy-to-use format.
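
To illustrate the kind of model the abstract describes, the following is a minimal sketch (not the authors' implementation) of a recurrent classifier that maps a short audio slice, represented as a sequence of acoustic feature frames, to one of seven activity classes. The choice of PyTorch, the use of log-mel filterbank features, the GRU architecture, and all dimensions and names (SliceClassifier, FEATURE_DIM, HIDDEN_DIM) are illustrative assumptions; the abstract does not specify the framework, features, or hyperparameters.

    # Minimal sketch of a recurrent classroom-activity classifier (assumptions noted above).
    import torch
    import torch.nn as nn

    NUM_CLASSES = 7     # seven predefined classroom activities (from the abstract)
    FEATURE_DIM = 40    # assumed: e.g., 40 log-mel filterbank coefficients per frame
    HIDDEN_DIM = 128    # assumed recurrent hidden size

    class SliceClassifier(nn.Module):
        """Labels one slice of classroom audio with an activity class."""
        def __init__(self):
            super().__init__()
            self.rnn = nn.GRU(FEATURE_DIM, HIDDEN_DIM, num_layers=2, batch_first=True)
            self.out = nn.Linear(HIDDEN_DIM, NUM_CLASSES)

        def forward(self, frames):
            # frames: (batch, num_frames, FEATURE_DIM)
            _, h_n = self.rnn(frames)   # h_n: (num_layers, batch, HIDDEN_DIM)
            return self.out(h_n[-1])    # logits over the seven activities

    if __name__ == "__main__":
        model = SliceClassifier()
        dummy_batch = torch.randn(8, 100, FEATURE_DIM)  # 8 slices, 100 frames each
        logits = model(dummy_batch)
        predictions = logits.argmax(dim=1)              # predicted activity per slice
        print(predictions.shape)                        # torch.Size([8])

In practice such a model would be trained with a standard cross-entropy loss over the hand-annotated slices; the example above only shows the slice-to-class mapping the abstract describes.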
Document Type
Event
Start Date
17-5-2018 9:00 AM
End Date
17-5-2018 12:00 PM
Department
Computer Science
Genre/Form
student projects, posters
Subjects – Topical (LCSH)
Education, Higher--Aims and objectives; Universities and colleges; Effective teaching
Geographic Coverage
United States
Type
Image
Rights
Copying of this document in whole or in part is allowable only for scholarly purposes. It is understood, however, that any copying or publication of this documentation for commercial purposes, or for financial gain, shall not be allowed without the author's written permission.
Language
English
Format
application/pdf
Comments
Outstanding Poster Award Recipient