Autonomous Intelligent Machine
An Autonomous Intelligent Machine is an intelligent agent that is implemented as an AI system.
- AKA: Non-Human Intelligence, Intelligent Agent System.
- Context:
- It can (typically) be based on AI Technologies, such as: machine learning, deep learning, natural language processing, and computer vision.
- It can (often) exhibit System Behaviors, such as: adaptive learning, decision making, problem solving, and autonomous navigation.
- It can range from being a Conscious Intelligent Machine to being a Non-Conscious Intelligent Machine.
- It can range from (typically) being a Linguistic AI Machine to being a Visual AI Machine.
- It can operate in a variety of domains, from (often) Autonomous Vehicles and Medical Diagnosis Systems to Automated Financial Trading Systems and Smart Home Technologies.
- ...
- Example(s):
- HAL 9000.
- one based on Auto-GPT.
- an LLM-based Agent, such as the Voyager AI agent.
- a Self-Driving Car utilizing complex AI algorithms for navigation and safety.
- a Predictive Maintenance Robot in industrial settings, foreseeing machinery failures before they occur.
- …
- Counter-Example(s):
- a Non-Intelligent Mechanical Agent, such as a Roomba robot or an ASIMO robot.
- an Intelligent Virtual Assistant, due to its limited autonomy and dependence on human-defined rules.
- an Intelligent Living Agent, such as an intelligent person.
- a Heuristic-based Autonomous System, as it operates based on set rules rather than learning from data.
- a Corporation, because it is not a machine.
- See: Intelligent System, Moral Machine, AI Emotion, Advanced AI System, Machine Ethics, Human-AI Collaboration, Artificial General Intelligence.
References
2024
- (AI Open Letter, 2024-06-04) ⇒ 13 American Industry AI Researchers. (2024). “A Right to Warn About Advanced Artificial Intelligence.”
- NOTE: It highlights the potential benefits of AI technology while acknowledging serious social risks such as societal inequalities, misinformation, and the potential for autonomous AI systems to pose existential threats.
2017
- (Institute, 2017) ⇒ The Future of Life Institute. (2017). “Asilomar AI Principles.”
- QUOTE: ...
- 8) Judicial Transparency: Any involvement by an autonomous system in judicial decision-making should provide a satisfactory explanation auditable by a competent human authority.
- …
- 10) Value Alignment: Highly autonomous AI systems should be designed so that their goals and behaviors can be assured to align with human values throughout their operation.
2006
- (Franklin & Patterson, 2006) ⇒ Stan Franklin, and F. G. Patterson Jr. (2006). “The LIDA Architecture: Adding New Modes of Learning to an Intelligent, Autonomous, Software Agent.” In: Integrated Design and Process Technology, IDPT-2006.
2001
- (Addlesee et al., 2001) ⇒ Mike Addlesee, Rupert Curwen, Steve Hodges, Joe Newman, Pete Steggles, Andy Ward, and Andy Hopper. (2001). “Implementing a Sentient Computing System.” In: IEEE Computer, 34(8). doi:10.1109/2.940013
- QUOTE: Sentient computing systems, which can change their behavior based on a model of the environment they construct using sensor data, …