Partially-Automated Learning Task
A Partially-Automated Learning Task is a Machine Learning Task that is a Partially-Automated Task.
- AKA: Human-in-the-Loop Learning Task.
- Context:
- It can be instantiated in a Partially-Automated Learning Act.
- …
- Counter-Example(s):
- See: Human Learning Task.
References
2014
- http://nips.cc/Conferences/2014/Program/event.php?ID=4304
- In typical applications of machine learning (ML), humans enter the process at an early stage, in determining an initial representation of the problem and in preparing the data, and at a late stage, in interpreting and making decisions based on the results. Consequently, the bulk of the ML literature deals with such situations. Much less research has been devoted to ML involving “humans-in-the-loop,” where humans play a more intrinsic role in the process, interacting with the ML system to iterate towards a solution to which both humans and machines have contributed. In these situations, the goal is to optimize some quantity that can be obtained only by evaluating human responses and judgments. Examples of this hybrid, “human-in-the-loop” ML approach include:
- ML-based education, where a scheduling system acquires information about learners with the goal of selecting and recommending optimal lessons;
- Adaptive testing in psychological surveys, educational assessments, and recommender systems, where the system acquires testees’ responses and selects the next item in an adaptive and automated manner;
- Interactive topic modeling, where human interpretations of the topics are used to iteratively refine an estimated model;
- Image classification, where human judgments can be leveraged to improve the quality and information content of image features or classifiers.
- The key difference between typical ML problems and problems involving “humans-in-the-loop” is that in the latter case we aim to fit a model of human behavior as we collect data from subjects and adapt the experiments we conduct based on our model fit. This difference demands flexible and robust algorithms and systems, since the resulting adaptive experimental design depends on potentially unreliable human feedback (e.g., humans might game the system, make mistakes, or act lazily or adversarially). Moreover, the “humans-in-the-loop” paradigm requires a statistical model for human interactions with the environment, which controls how the experimental design adapts to human feedback; such designs are, in general, difficult to construct due to the complex nature of human behavior. Suitable algorithms also need to be very accurate and reliable, since humans prefer a minimal amount of interaction with ML systems; this aspect also prevents the use of computationally intensive parameter selection methods (e.g., a simple grid search over the parameter space). These requirements and real-world constraints render “humans-in-the-loop” ML problems much more challenging than more standard ML problems.
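- As a concrete illustration of the adapt-the-experiment loop described above, the following is a minimal sketch (not from the cited source) of a human-in-the-loop learning task using uncertainty sampling as the adaptation rule, assuming scikit-learn for the learner. The ask_human() oracle is a hypothetical stand-in for a real human labeler and is simulated with the hidden true labels so the script runs end to end.

```python
# A minimal sketch of a human-in-the-loop learning loop: fit a model,
# pick the example it is least certain about, ask a human for its label,
# and refit -- the "experiment" (which item is shown to the human)
# adapts to the current model fit.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X, y_true = make_classification(n_samples=500, n_features=10, random_state=0)

def ask_human(i):
    """Hypothetical oracle; a real system would query a person here,
    whose answers may be noisy, lazy, or even adversarial."""
    return y_true[i]

# Seed with a few human-labeled examples from each class.
seed = np.concatenate(
    [rng.choice(np.flatnonzero(y_true == c), size=5, replace=False) for c in (0, 1)]
)
labels = {int(i): ask_human(i) for i in seed}

model = LogisticRegression()
for _ in range(20):  # 20 rounds of human interaction
    idx = sorted(labels)
    model.fit(X[idx], [labels[i] for i in idx])
    # Uncertainty sampling: route the least-confident unlabeled item to the human.
    unlabeled = [i for i in range(len(X)) if i not in labels]
    probs = model.predict_proba(X[unlabeled])
    pick = unlabeled[int(np.argmin(probs.max(axis=1)))]
    labels[pick] = ask_human(pick)

print(f"accuracy after {len(labels)} human labels:", model.score(X, y_true))
```

- In a full system, the selection rule would itself come from a statistical model of the human's response behavior, as the abstract notes; uncertainty sampling is used here only as the simplest such rule.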