Autonomous Intelligent Machine
An Autonomous Intelligent Machine is an intelligent machine (an AI system) that can perform autonomous operations.
- AKA: Non-Human Intelligence, Intelligent Agent System.
- Context:
- Input(s): sensor data, environmental states, operational goals
- Output(s): autonomous actions, decisions, learning updates
- Performance Measure(s): autonomy level, task effectiveness, safety metrics
- ...
- It can (typically) be based on AI Technologies, such as: machine learning, deep learning, natural language processing, and computer vision.
- It can (typically) demonstrate Capabilities of increasing sophistication (see the sketch after this context list):
- Basic Capabilities, such as:
- Environmental Perception through sensor systems
- Action Execution through actuator systems
- Basic Decision Making through rule systems
- Advanced Capabilities, such as:
- Autonomous Learning through experience accumulation
- Complex Problem Solving through AI algorithms
- Adaptive Behavior through feedback systems
- Specialized Capabilities, such as:
- ...
- It can (typically) be composed of Core System Components:
- Cognitive Architecture Components, such as:
- Physical Architecture Components, such as:
- Safety Architecture Components, such as:
- ...
- It can (typically) operate within Operational Contexts:
- Safety Context, including:
- Performance Context, including:
- Interaction Context, including:
- ...
- It can (often) exhibit System Behaviors, such as: adaptive learning, decision making, problem solving, and autonomous navigation.
- It can range from being a Conscious Intelligent Machine to being a Non-Conscious Intelligent Machine.
- It can range from being a Linguistic AI Machine to being a Visual AI Machine.
- It can range from being a Basic Autonomous Machine to being an Advanced Autonomous Machine, depending on its capability level.
- It can range from being a Single-Domain Machine to being a Multi-Domain Machine, depending on its operational scope.
- It can range from being a Simple Learning Machine to being a Complex Learning Machine, depending on its learning sophistication.
- It can operate in a variety of domains, from Autonomous Vehicles and Medical Diagnosis Systems to Automated Financial Trading Systems and Smart Home Technologies.
- ...
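Taken together, the inputs (sensor data, environmental states, operational goals), outputs (autonomous actions, decisions, learning updates), and basic/advanced capabilities above amount to a perceive-decide-act-learn loop. The Python sketch below is only a minimal illustration of that loop under these assumptions; every name in it (AutonomousMachine, perceive, decide, act, learn) is a hypothetical stand-in, not the interface of any particular system.

```python
# Illustrative sketch only: a minimal perceive-decide-act-learn loop.
# All names (AutonomousMachine, sensors, actuators, ...) are hypothetical.
from dataclasses import dataclass, field
from typing import Callable, Dict, List


@dataclass
class AutonomousMachine:
    """Maps the context above: inputs -> decisions -> outputs -> learning updates."""
    sensors: List[Callable[[], Dict]]             # input: sensor data / environmental state
    actuators: Dict[str, Callable[[Dict], None]]  # output: autonomous actions
    goals: Dict[str, float]                       # input: operational goals (data only here)
    experience: List[Dict] = field(default_factory=list)  # basis for autonomous learning

    def perceive(self) -> Dict:
        """Environmental Perception through sensor systems."""
        state: Dict = {}
        for read in self.sensors:
            state.update(read())
        return state

    def decide(self, state: Dict) -> str:
        """Basic Decision Making through a simple rule system (placeholder rule)."""
        if state.get("obstacle_distance_m", float("inf")) < 1.0:
            return "brake"
        return "cruise"

    def act(self, action: str, state: Dict) -> None:
        """Action Execution through actuator systems."""
        self.actuators[action](state)

    def learn(self, state: Dict, action: str) -> None:
        """Autonomous Learning through experience accumulation (placeholder)."""
        self.experience.append({"state": state, "action": action})

    def step(self) -> None:
        state = self.perceive()
        action = self.decide(state)
        self.act(action, state)
        self.learn(state, action)


if __name__ == "__main__":
    machine = AutonomousMachine(
        sensors=[lambda: {"obstacle_distance_m": 0.6}],
        actuators={"brake": lambda s: print("braking"),
                   "cruise": lambda s: print("cruising")},
        goals={"reach_destination": 1.0},
    )
    machine.step()  # prints "braking"
```

In this sketch the rule system stands in for the Basic Decision Making capability; an Advanced Autonomous Machine would replace it with a learned policy driven by the accumulated experience.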
- Examples:
- Advanced Autonomous Machines, such as:
- AI-Powered Robots, such as:
- HAL 9000 in science fiction
- LLM-based Agents, such as:
- Auto-GPT-based implementations
- the Voyager AI agent (see the sketch after this example list)
- Autonomous Vehicles, such as:
- Smart Industrial Machines, such as:
- Specialized Intelligence Machines, such as:
- Medical Machines, such as:
- Financial Machines, such as:
- ...
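The LLM-based agents named above (Auto-GPT-based implementations, the Voyager AI agent) broadly follow a goal → plan → act → observe loop driven by a language model. The sketch below is a hedged, generic illustration of that pattern only; call_llm and run_tool are hypothetical stubs, not the actual Auto-GPT or Voyager code or APIs.

```python
# Hedged illustration of an LLM-driven agent loop (Auto-GPT / Voyager style).
# call_llm and run_tool are hypothetical stand-ins, not real library APIs.
from typing import List


def call_llm(prompt: str) -> str:
    """Stand-in for a language-model call; a real agent would use an LLM client."""
    return "ACTION: search | ARG: autonomous machine safety standards"


def run_tool(action: str, arg: str) -> str:
    """Stand-in for tool execution (web search, code execution, robot command)."""
    return f"result of {action}({arg!r})"


def agent_loop(goal: str, max_steps: int = 3) -> List[str]:
    """Goal -> plan -> act -> observe loop; the transcript doubles as memory."""
    memory: List[str] = [f"GOAL: {goal}"]
    for _ in range(max_steps):
        prompt = "\n".join(memory) + "\nWhat single action should be taken next?"
        reply = call_llm(prompt)                      # planning / decision step
        if reply.startswith("DONE"):
            break
        action, _, arg = reply.partition("| ARG: ")
        action = action.removeprefix("ACTION: ").strip()
        observation = run_tool(action, arg.strip())   # action execution step
        memory.append(f"{reply}\nOBSERVATION: {observation}")  # memory as learning
    return memory


if __name__ == "__main__":
    for line in agent_loop("summarize autonomous machine safety requirements"):
        print(line)
```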
- Counter-Examples:
- a Non-Intelligent Mechanical Agent, such as a Roomba robot or an ASIMO robot.
- an Intelligent Virtual Assistant, due to its limited autonomy and dependence on human-defined rules.
- an Intelligent Living Agent, such as an intelligent person, because it is not a machine.
- a Heuristic-based Autonomous System, as it operates based on set rules rather than learning from data.
- a Corporation, because it is not a machine.
- See: Intelligent System, Moral Machine, AI Emotion, Advanced AI System, Machine Ethics, Human-AI Collaboration, Artificial General Intelligence.
References
2024
- (AI Open Letter, 2024-06-04) ⇒ 13 American Industry AI Researchers. (2024). “A Right to Warn About Advanced Artificial Intelligence.”
- NOTE: It highlights the potential benefits of AI technology while acknowledging serious risks, such as the entrenchment of existing inequalities, misinformation, and the loss of control of autonomous AI systems posing existential threats.
2017
- (Future of Life Institute, 2017) ⇒ The Future of Life Institute. (2017). “Asilomar AI Principles.”
- QUOTE: ...
- 8) Judicial Transparency: Any involvement by an autonomous system in judicial decision-making should provide a satisfactory explanation auditable by a competent human authority.
- …
- 10) Value Alignment: Highly autonomous AI systems should be designed so that their goals and behaviors can be assured to align with human values throughout their operation.
2006
- (Franklin & Patterson, 2006) ⇒ Stan Franklin, and F. G. Patterson Jr. (2006). “The LIDA Architecture: Adding New Modes of Learning to an Intelligent, Autonomous, Software Agent.” In: Integrated Design and Process Technology, IDPT-2006.
2001
- (Addlesee et al., 2001) ⇒ Mike Addlesee, Rupert Curwen, Steve Hodges, Joe Newman, Pete Steggles, Andy Ward, and Andy Hopper. (2001). “Implementing a Sentient Computing System.” In: IEEE Computer, 34(8). doi:10.1109/2.940013
- QUOTE: Sentient computing systems, which can change their behavior based on a model of the environment they construct using sensor data, …