Crowdsourcing and Human Computation

5 papers in this research thread

This thread of research explores systems that combine human and machine intelligence to tackle complex tasks that neither can solve well alone. Revolt introduces a collaborative crowdsourcing approach for labeling machine learning datasets that harnesses worker disagreements to surface ambiguous concepts, producing high-quality labels without detailed guidelines. Evorus presents a crowd-powered conversational assistant designed to gradually automate itself over time by integrating chatbots, reusing prior crowd answers, and learning to approve responses automatically — demonstrating through a 5-month deployment that automation can be introduced without compromising conversation quality. The Knowledge Accelerator uses crowds to synthesize information into coherent knowledge articles, while Alloy combines crowd input with computational methods for clustering tasks — both illustrating how structured crowd-machine collaboration can produce outputs beyond what either alone could achieve. SOLVENT takes a mixed-initiative approach to finding cross-domain analogies between research papers, where human annotations of paper aspects are combined with computational semantic representations, outperforming purely automated retrieval methods. Together, these works demonstrate that carefully designed architectures with well-scoped roles for humans and machines yield robust, scalable systems across diverse domains from data labeling to open-domain dialogue to scientific discovery.

Papers

Evorus: A Crowd-powered Conversational Assistant Built to Automate Itself Over Time

Ting-Hao 'Kenneth' Huang · Joseph Chee Chang · Jeffrey P. Bigham
CHI · 2018 · 93 citations · 🏆 Best Paper Honorable Mention

TLDR: Evorus, a crowd-powered conversational assistant built to automate itself over time, is introduced; it allows new chatbots to be easily integrated to automate more scenarios, reuses prior crowd answers, and learns to automatically approve response candidates.

The Knowledge Accelerator: Big Picture Thinking in Small Pieces

Nathan Hahn · Joseph Chee Chang · Ji Eun Kim
CHI · 2016 · 66 citations · 🏆 Best Paper Honorable Mention

TLDR: This paper instantiates the idea that a computational system can scaffold an emerging, interdependent big-picture view entirely through the small contributions of individuals; it presents a prototype system for distributed information synthesis and evaluates its output across a variety of topics.

Alloy: Clustering with Crowds and Computation

Joseph Chee Chang · A. Kittur · Nathan Hahn
CHI · 2016 · 54 citations · 🏆 Best Paper Honorable Mention

TLDR: Alloy, a hybrid approach that combines the richness of human judgments with the power of machine algorithms, is introduced: a modular "cast and gather" approach that leverages a machine learning backbone to stitch together different types of judgment tasks.

SOLVENT: A Mixed Initiative System for Finding Analogies between Research Papers

Joel Chan · Joseph Chee Chang · Tom Hope
Proc. ACM Hum.-Comput. Interact. · 2018 · 82 citations

TLDR: SOLVENT, a mixed-initiative system, is introduced: humans annotate aspects of research papers that denote their background, purpose, mechanism, and findings, and a computational model constructs a semantic representation from these annotations that can be used to find analogies among the papers.

Revolt: Collaborative Crowdsourcing for Labeling Machine Learning Datasets

Joseph Chee Chang · Saleema Amershi · Ece Kamar
CHI · 2017 · 260 citations

TLDR: Revolt eliminates the burden of creating detailed label guidelines by harnessing crowd disagreements to identify ambiguous concepts and create rich structures (groups of semantically related items) for post-hoc label decisions.