Volitional Intelligent Agent
A Volitional Intelligent Agent is an agent that is both a volitional agent and an intelligent agent.
- AKA: Intelligent Intentional System.
- Context:
- It can range from being a Conscious Volitional Intelligent Agent to being an Unconscious Volitional Intelligent Agent.
- …
- Counter-Example(s):
- See: Sentient System, Volitional System, Intentional System, Intelligent System, Malevolent AI.
References
2014
- (Brooks, 2014) ⇒ Rodney Brooks. (2014). “Artificial Intelligence Is a Tool, Not a Threat.” In: Rethink Robotics Blog, November 10, 2014.
- QUOTE: I think it is a mistake to be worrying about us developing malevolent AI anytime in the next few hundred years. I think the worry stems from a fundamental error in not distinguishing the difference between the very real recent advances in a particular aspect of AI, and the enormity and complexity of building sentient volitional intelligence.
2010
- (Tarleton, 2010) ⇒ Nick Tarleton. (2010). “Coherent Extrapolated Volition: A Meta-Level Approach to Machine Ethics.” Berkeley, CA: Machine Intelligence Research Institute.
- ABSTRACT: The field of machine ethics seeks methods to ensure that future intelligent machines will act in ways beneficial to human beings. Machine ethics is relevant to a wide range of possible artificial agents, but becomes especially difficult and especially important when the agents in question have at least human-level intelligence. This paper describes a solution, originally proposed by Yudkowsky (2004), to the problem of what goals to give such agents: rather than attempt to explicitly program in any specific normative theory (a project which would face numerous philosophical and immediate ethical difficulties), we should implement a system to discover what goals we would, upon reflection, want such agents to have. We discuss the motivations for and details of this approach, comparing it to other suggested methods for creating “artificial moral agents” (Wallach, Allen, and Smit 2008), and describe underspecified and uncertain areas for further research.
2007
- (Bugaj & Goertzel, 2007) ⇒ Stephan Vladimir Bugaj, and Ben Goertzel. (2007). “Five Ethical Imperatives and Their Implications for Human-AGI Interaction.” In: Dynamical Psychology.