Fairness Bias
A Fairness Bias is a cognitive bias based on Stereotypes, Prejudice, or Favoritism.
- AKA: Ethics Bias.
- Context:
- It can be a Systematic Error introduced by a sampling or reporting procedure (see the illustrative sketch below the See list).
- …
- Example(s):
- Counter-Example(s):
- See: Contextual Bias, Statistical Bias, Publication Bias.
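
A minimal sketch (in Python) of how such a sampling- or reporting-induced skew can be surfaced in collected data. The groups, records, and the selection_rates helper below are illustrative assumptions, not part of the source; the gap computed is the demographic parity difference between per-group favorable-outcome rates.

```python
# Minimal sketch: surfacing a possible fairness bias in collected data by
# comparing per-group selection rates (demographic parity difference).
# The dataset and group labels below are illustrative assumptions.

from collections import defaultdict

def selection_rates(records):
    """Return the fraction of favorable outcomes (label == 1) per group."""
    totals = defaultdict(int)
    favorable = defaultdict(int)
    for group, label in records:
        totals[group] += 1
        favorable[group] += label
    return {g: favorable[g] / totals[g] for g in totals}

# Hypothetical records: (group, favorable-outcome label)
records = [
    ("group_a", 1), ("group_a", 1), ("group_a", 0), ("group_a", 1),
    ("group_b", 0), ("group_b", 0), ("group_b", 1), ("group_b", 0),
]

rates = selection_rates(records)
gap = max(rates.values()) - min(rates.values())
print(rates)                                  # {'group_a': 0.75, 'group_b': 0.25}
print(f"demographic parity gap: {gap:.2f}")   # a large gap may indicate bias
```

A large gap does not by itself prove unfairness, but it is one simple signal that the data collection or labeling process may have favored one group over another.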
References
2018a
- (TF-ML Glossary, 2018) ⇒ (2018). "bias (ethics/fairness)". In: Machine Learning Glossary (TensorFlow). Retrieved: 2018-05-27.
- QUOTE: 1. Stereotyping, prejudice or favoritism towards some things, people, or groups over others. These biases can affect collection and interpretation of data, the design of a system, and how users interact with a system. Forms of this type of bias include:
- 2. Systematic error introduced by a sampling or reporting procedure. Forms of this type of bias include:
- Not to be confused with the bias term (Linear Regression Bias) in machine learning models or prediction bias.