Multiple Instance Learning (MIL) Algorithm
A Multiple Instance Learning (MIL) Algorithm is a Supervised Learning Algorithm that can be implemented by a Multiple Instance Learning System to solve a Multiple Instance Learning Task.
- Context:
- It can range from being a Multiple Instance Classification Algorithm to being a Multiple Instance Regression Algorithm.
- It can range from being an Instance-based Learning Algorithm, to being a Metadata-based Learning Algorithm, to being a Diverse Density Learning Algorithm.
- Example(s):
- Counter-Example(s):
- See: Labeled Training Record Bag, Decision Trees, Artificial Neural Network, Attribute, Classification, Expectation Maximization Clustering, First-Order Logic, Gaussian Distribution, Inductive Logic Programming, Kernel Method, Linear Regression, Nearest Neighbor, Online Learning, PAC Learning, Relational Learning.
References
2019
- (Wikipedia, 2019) ⇒ https://en.wikipedia.org/wiki/Multiple_instance_learning Retrieved:2019-2-3.
- In machine learning, multiple-instance learning (MIL) is a type of supervised learning. Instead of receiving a set of instances which are individually labeled, the learner receives a set of labeled bags, each containing many instances. In the simple case of multiple-instance binary classification, a bag may be labeled negative if all the instances in it are negative. On the other hand, a bag is labeled positive if there is at least one instance in it which is positive. From a collection of labeled bags, the learner tries to either (i) induce a concept that will label individual instances correctly or (ii) learn how to label bags without inducing the concept.
A convenient and simple example of MIL was given in.[1] Imagine several people, each of whom has a key chain that contains a few keys. Some of these people are able to enter a certain room, and some aren't. The task is then to predict whether a certain key or a certain key chain can get you into that room. To solve this problem we need to find the exact key that is common to all the "positive" key chains. If we can correctly identify this key, we can also correctly classify an entire key chain: positive if it contains the required key, negative if it doesn't.
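The keychain example above can be sketched in a few lines of Python. This is a minimal illustration of the standard MI assumption, not an algorithm from the literature; the key names and bags are invented for the example.

```python
# Each bag is a key chain (a set of keys). Under the standard MI assumption,
# a bag is positive iff it contains the one "witness" key that opens the room.

def candidate_keys(positive_bags, negative_bags):
    """Keys present in every positive bag and absent from every negative bag."""
    common = set.intersection(*(set(b) for b in positive_bags))
    seen_negative = set().union(*(set(b) for b in negative_bags))
    return common - seen_negative

def classify_bag(bag, key):
    """Standard MI rule: a bag is positive iff it holds the witness key."""
    return key in bag

# Illustrative data (assumed, not from the source):
positive_bags = [{"red", "blue", "green"}, {"blue", "gold"}]
negative_bags = [{"red", "gold"}, {"green", "silver"}]

keys = candidate_keys(positive_bags, negative_bags)
print(keys)                                    # {'blue'}
print(classify_bag({"blue", "iron"}, "blue"))  # True
```

With real data the witness key is rarely identifiable by simple set intersection, which is why MIL algorithms such as Diverse Density search for it probabilistically.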
2017
- (Ray et al., 2017) ⇒ Soumya Ray, Stephen Scott, and Hendrik Blockeel. (2017). “Multiple-Instance Learning”. In: (Sammut & Webb, 2017). DOI: 10.1007/978-1-4899-7687-1_578
- QUOTE: Multiple-instance (MI) learning is an extension of the standard supervised learning setting. In standard supervised learning, the input consists of a set of labeled instances each described by an attribute vector. The learner then induces a concept that relates the label of an instance to its attributes. In MI learning, the input consists of labeled examples (called “bags”) consisting of multisets of instances, each described by an attribute vector, and there are constraints that relate the label of each bag to the unknown labels of each instance. The MI learner then induces a concept that relates the label of a bag to the attributes describing the instances in it. This setting contains supervised learning as a special case: if each bag contains exactly one instance, it reduces to a standard supervised learning problem.
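The reduction noted at the end of the quote can be made concrete with a short sketch, assuming the standard MI constraint (a bag is positive iff at least one of its instances is positive); the data below is illustrative only.

```python
# A bag's label under the standard MI assumption.
def bag_label(instance_labels):
    """A bag is positive iff any of its instances is positive."""
    return int(any(instance_labels))

# Singleton bags: each bag's label equals its single instance's label,
# so the MI problem collapses to a standard supervised learning problem.
instance_labels = [0, 1, 1, 0]
singleton_bags = [[y] for y in instance_labels]
assert [bag_label(b) for b in singleton_bags] == instance_labels
```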
2014
- http://www.kyb.mpg.de/bs/people/pgehler/mil/mil.html
- Multiple Instance Learning (MIL) is a special learning framework which deals with uncertainty of instance labels. In this setting, training data is available only as bags of instances paired with labels for the bags. Instance labels remain unknown and may be inferred during learning. A positive bag label indicates that at least one instance of that bag can be assigned a positive label; this instance can therefore be thought of as a witness for the label. Instances in negatively labelled bags all belong to the negative class, so there is no uncertainty about their labels.
- There exists a considerable amount of literature on the Multiple Instance Learning problem. This website provides an overview of MIL-related research at this institute and hosts software we have made available, as well as datasets.
2003
- (Andrews et al., 2003) ⇒ S. Andrews, I. Tsochantaridis, and T. Hofmann. (2003). “Support Vector Machines for Multiple-Instance Learning.” In: Advances in Neural Information Processing Systems (NIPS 2003).
1998
- (Maron et al., 1998) ⇒ Oded Maron, and Tomás Lozano-Pérez. (1998). “A Framework for Multiple-Instance Learning.” In: Advances in Neural Information Processing Systems (NIPS 1998).
- ABSTRACT: Multiple-instance learning is a variation on supervised learning, where the task is to learn a concept given positive and negative bags of instances. Each bag may contain many instances, but a bag is labeled positive even if only one of the instances in it falls within the concept. A bag is labeled negative only if all the instances in it are negative. We describe a new general framework, called Diverse Density, for solving multiple-instance learning problems. We apply this framework to learn a simple description of a person from a series of images (bags) containing that person, to a stock selection problem, and to the drug activity prediction problem.
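The Diverse Density idea from the abstract can be sketched as scoring a candidate concept point highly when it is close to at least one instance in every positive bag and far from all instances in negative bags. The noisy-or combination and the Gaussian-like instance probability below follow the general shape of the framework, but the scale, data, and exact form are simplifying assumptions for illustration.

```python
import math

def instance_prob(x, t, scale=1.0):
    """P(instance x belongs to concept t): Gaussian-like similarity (assumed form)."""
    return math.exp(-scale * sum((a - b) ** 2 for a, b in zip(x, t)))

def diverse_density(t, positive_bags, negative_bags):
    """Noisy-or Diverse Density of a candidate concept point t."""
    dd = 1.0
    for bag in positive_bags:
        # Positive bag: at least one instance should fit the concept.
        dd *= 1.0 - math.prod(1.0 - instance_prob(x, t) for x in bag)
    for bag in negative_bags:
        # Negative bag: no instance should fit the concept.
        dd *= math.prod(1.0 - instance_prob(x, t) for x in bag)
    return dd

# Illustrative 2-D data: both positive bags contain a point near (0, 0).
pos = [[(0.0, 0.0), (5.0, 5.0)], [(0.1, -0.1), (9.0, 2.0)]]
neg = [[(5.0, 5.0)], [(4.0, 4.0)]]

# The shared point near (0, 0) scores higher than one also seen in negative bags.
print(diverse_density((0.0, 0.0), pos, neg) > diverse_density((5.0, 5.0), pos, neg))  # True
```

In the original framework the concept point is found by maximizing this score with gradient-based search started from instances in positive bags; the sketch above only evaluates candidate points.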