AI Fairness Bias
An AI Fairness Bias is an AI bias that aligns with a human prejudice.
References
2022
- "Privacy and responsible AI.” International Association of Privacy Professionals (IAPP).
- QUOTE: ... AI fairness is another growing field covering a very complex issue. Bias, discrimination and fairness are highly context-specific. Numerous definitions of fairness exist and vary widely between and within the various disciplines of mathematics, computer science, law, philosophy and economics. Some privacy regulators have issued clear guidelines. According to the ICO, fairness means personal data needs to be handled in ways people would reasonably expect and not use it in ways that have unjustified adverse effects on them. Similarly, the FTC explains that under the FTC Act, a practice will be considered unfair if it causes more harm than good. On the other hand, definitions of the fairness principle in the context of the GDPR are still scarce. At the same time, many organizations are unsure how to avoid bias in practice. In general, bias can be addressed pre-processing (prior to training the algorithm), in-processing (during model training), and post-processing (bias correction in predictions).
AI explainability and fairness are only two of many rapidly evolving principles in the field of responsible AI. ...
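The pre-processing / in-processing / post-processing distinction in the quoted passage can be made concrete with a small sketch. The example below uses synthetic data and a hypothetical binary protected attribute: a simplified reweighing step stands in for pre-processing, weighted model training stands in for in-processing mitigation, and per-group threshold adjustment stands in for post-processing correction. It is an illustrative sketch of the three stages, not a reference implementation of any particular fairness toolkit.
```python
# Illustrative sketch of the three bias-mitigation stages named in the quote:
# pre-processing (reweighing), in-processing (weighted training), and
# post-processing (per-group threshold adjustment). Data and group labels are
# synthetic and hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic data: one binary protected attribute `group` and two features.
n = 2000
group = rng.integers(0, 2, size=n)                   # 0 = group A, 1 = group B
X = rng.normal(size=(n, 2)) + group[:, None] * 0.5   # features correlated with group
y = (X[:, 0] + 0.5 * rng.normal(size=n) > 0.5).astype(int)

# Pre-processing: reweigh samples so each (group, label) cell carries equal
# total weight, weakening the association between group and label.
weights = np.empty(n)
for g in (0, 1):
    for label in (0, 1):
        mask = (group == g) & (y == label)
        weights[mask] = n / (4 * max(mask.sum(), 1))

# In-processing: train on the reweighed samples (a simple stand-in for
# fairness-constrained training).
model = LogisticRegression().fit(X, y, sample_weight=weights)
scores = model.predict_proba(X)[:, 1]

# Post-processing: choose per-group thresholds so selection rates roughly match.
target_rate = (scores >= 0.5).mean()
thresholds = {g: np.quantile(scores[group == g], 1 - target_rate) for g in (0, 1)}
y_hat = np.array([scores[i] >= thresholds[int(group[i])] for i in range(n)])

for g in (0, 1):
    print(f"group {g}: selection rate = {y_hat[group == g].mean():.2f}")
```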
2021
- https://www.pwc.com/us/en/tech-effect/ai-analytics/algorithmic-bias-and-trust-in-ai.html
- QUOTE: ... The definition of AI bias is straight-forward: AI that makes decisions that are systematically unfair to certain groups of people. Several studies have identified the potential for these biases to cause real harm. ...
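The PwC definition ("systematically unfair to certain groups of people") is commonly operationalized by comparing outcome rates across a protected group. Below is a minimal sketch using made-up decisions and group labels; the numbers are placeholders, not data from the studies the quote refers to.
```python
# Minimal sketch: checking whether automated decisions are systematically
# unfair to a group, via selection-rate comparison. Arrays are illustrative
# placeholders only.
import numpy as np

decisions = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])  # 1 = favorable outcome
group     = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])  # protected attribute value

rate_0 = decisions[group == 0].mean()   # selection rate for group 0
rate_1 = decisions[group == 1].mean()   # selection rate for group 1

print(f"selection rate, group 0:       {rate_0:.2f}")
print(f"selection rate, group 1:       {rate_1:.2f}")
print(f"demographic parity difference: {rate_0 - rate_1:.2f}")
print(f"disparate impact ratio:        {min(rate_0, rate_1) / max(rate_0, rate_1):.2f}")
```
A disparate impact ratio well below 1 (often below the four-fifths threshold used in U.S. employment guidance) is one common signal that decisions are systematically unfavorable to a group, though which metric is appropriate remains context-specific, as the 2022 quote above notes.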