Patrick Ehlen, Matthew Purver, John Niekrasz
We present a system for extracting useful information from multi-party meetings and presenting the results to users via a browser. Users can view automatically extracted discussion topics and action items, initially seeing high-level descriptions, but with the ability to click through to meeting audio and video. Users can also add value by defining and searching for new topics, and by editing, correcting, deleting, or confirming action items. These feedback actions are used as implicit supervision by the understanding agents, which retrain their classifier models for improved or user-tailored performance.
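The implicit-supervision loop described above can be sketched in miniature. The snippet below is a hypothetical illustration, not the paper's actual method: a toy bag-of-words perceptron stands in for the action-item classifier, and a user's confirm/delete feedback is treated as a positive/negative training label, so the model adapts online to that user's corrections.

```python
from collections import defaultdict

class ActionItemClassifier:
    """Toy bag-of-words perceptron; user feedback supplies implicit labels.

    This is an illustrative sketch only -- the class name, features, and
    update rule are assumptions, not the system described in the abstract.
    """

    def __init__(self, lr=1.0):
        self.weights = defaultdict(float)  # word -> weight
        self.bias = 0.0
        self.lr = lr

    def score(self, utterance):
        return self.bias + sum(self.weights[w] for w in utterance.lower().split())

    def predict(self, utterance):
        """True if the utterance is classified as an action item."""
        return self.score(utterance) > 0

    def feedback(self, utterance, confirmed):
        """Perceptron update: a user confirm is a positive label, a delete
        (or correction) a negative one; update only on mistakes."""
        target = 1 if confirmed else -1
        if self.predict(utterance) != confirmed:
            for w in utterance.lower().split():
                self.weights[w] += self.lr * target
            self.bias += self.lr * target
```

In use, each browser action (confirm, delete) would be logged and replayed as a `feedback` call, retraining the model without any explicit annotation effort from the user.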
Subjects: 6.3 User Interfaces; 13.1 Discourse
Submitted: Jan 26, 2007