A Learning Agent for Parameter Adaptation in Speeded Tests

Activity: Talk or presentation (Conference Presentations; Research)

Daniel Bengs - Speaker

Ulf Brefeld - Speaker

The assessment of a person’s traits such as ability is a fundamental problem in the human sciences. Compared to traditional paper-and-pencil tests, computer-based assessment not only facilitates data acquisition and processing, but also allows for real-time adaptivity and personalization. By adaptively selecting tasks for each test subject, competency levels can be assessed with fewer items. We focus on assessments of traits that can be measured by determining the shortest time limit allowing a testee to solve simple repetitive tasks (speed tests). Existing approaches for adjusting the time limit are either intrinsically non-adaptive or lack theoretical foundation. By contrast, we propose a mathematically sound framework in which latent competency skills are represented by belief distributions on compact intervals. The algorithm iteratively computes a new difficulty setting such that the amount of belief that can be updated after feedback has been received is maximized. We rigorously prove a bound on the algorithm's step size, paving the way for convergence analysis. Empirical simulations show that our method performs as well as or better than state-of-the-art baselines in a near-realistic scenario simulating testee behaviour under different assumptions.
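To make the adaptive-selection idea concrete, below is a minimal Python sketch of the general approach the abstract describes: a belief distribution over a latent speed threshold on a compact interval, a time limit chosen so that the feedback can remove as much belief mass as possible, and a Bayesian-style update from binary pass/fail feedback. The discretized grid, the balanced-mass selection rule, and the slip-noise response model are all illustrative assumptions, not the algorithm presented in the talk.

```python
import numpy as np

def select_time_limit(grid, belief):
    """Pick the grid point that splits the current belief mass most evenly."""
    cdf = np.cumsum(belief)
    # The mass that one pass/fail observation could rule out is at most
    # min(mass below, mass above); maximizing it balances the split.
    updatable = np.minimum(cdf, 1.0 - cdf)
    return grid[np.argmax(updatable)]

def update_belief(grid, belief, limit, solved, slip=0.05):
    """Update the belief from one pass/fail observation (hypothetical noise model)."""
    # Assumed response model: a testee whose latent threshold t is at most the
    # current time limit solves the task with probability 1 - slip, else slip.
    p_solve = np.where(grid <= limit, 1.0 - slip, slip)
    likelihood = p_solve if solved else 1.0 - p_solve
    posterior = belief * likelihood
    return posterior / posterior.sum()

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    grid = np.linspace(0.0, 1.0, 201)             # compact interval [0, 1]
    belief = np.full_like(grid, 1.0 / len(grid))  # uniform prior belief
    true_threshold = 0.37                         # simulated testee's shortest solvable limit

    for _ in range(15):
        limit = select_time_limit(grid, belief)
        solved = rng.random() < (0.95 if true_threshold <= limit else 0.05)
        belief = update_belief(grid, belief, limit, solved)

    estimate = float(np.sum(grid * belief))
    print(f"estimated threshold: {estimate:.3f}")
```

In this sketch the selected limit converges toward the simulated testee's threshold because each balanced split discards a large share of the remaining belief; the talk's framework additionally proves a bound on the step size between consecutive difficulty settings.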
23.09.2016 - 27.09.2016

Event

European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases - ECML-PKDD 2016

23.09.16 - 27.09.16

Riva del Garda, Italy

Event: Conference