Brains and Bytes in Harmony


Perspective Matters

Over the past few years, AI has made some mind-blowing strides. But each step in its evolution has carried with it science-fiction-movie fears of an insidious competitor and eventual master over humanity. This is vividly depicted in the new Netflix movie “Atlas,” where Jennifer Lopez plays Atlas Shepherd, a data analyst navigating a future in which AI is both a deadly threat and a crucial ally. The film features a malevolent AI, Harlan, representing our deepest fears about AI’s potential to dominate and harm humanity. Conversely, it also introduces Smith, an assistive AI that helps Atlas survive and neutralize the threat Harlan poses.

We call it “artificial” intelligence, and we worry that we’re training it to be a type of us, with all the good and bad that comes with it. But we’re mostly worried about the bad. We imagine aliens from outer space the same way, projecting onto these hypothetical beings the worst of mankind’s character. This anthropomorphism, attributing human traits and flaws to non-human entities, colors our perception and stirs our deepest anxieties.

While a totally non-human-in-any-way intelligence could still swoop in and dominate us, this article focuses on the AI we create. Even an intelligence developed without any human attributes might, precisely because of its non-humanness, come to see us as bothersome and destructive. There are a lot of scenarios to explore, but here I want to discuss only one: mitigating the risks associated with rogue intelligence possessing human characteristics.

Recently, an article in The New York Times titled “OpenAI Insiders Warn of a ‘Reckless’ Race for Dominance” highlighted the dangerous recklessness in AI development. The insiders criticized OpenAI for prioritizing rapid advancements over safety, creating a culture of secrecy, and ignoring preventive measures. This rush to build powerful AI systems without adequate safety protocols not only heightens the risk of dangerous outcomes but also reinforces doomsday fears about AI. Such a reckless approach underscores the urgent need for a more thoughtful and directed path in AI evolution.

So, to be more “thoughtful,” I propose starting with where thoughts begin.

Our brains construct “reality” from the elements and patterns around us, and what they construct forms the foundation of perspective (our mental model). Since words reflect and shape that perspective, let’s delve into how the words we use affect that reality and influence our approach to dealing with it.

How we label things forces us to think within a box — a semantic box that constrains our thinking to reinforce the box walls. The term “artificial” suggests something unnatural, synthetic, and inherently inferior to human intelligence. It conjures images of robotic overlords and dystopian futures, feeding into our collective fear of losing control over the very creations meant to aid us. This fear is not new; it echoes the myth of Frankenstein’s monster and other tales where human innovation spirals out of control.

Yet, AI doesn’t have to inherit the bad traits of humans if we think about it differently: as assistive intelligence. This shift in perspective encourages us to envision AI not as a rival but as a collaborator, designed to complement and enhance human capabilities. By re-casting AI as assistive, we break down the semantic walls that channel our thinking, opening up new possibilities for innovation and cooperation. Instead of a zero-sum game where one intelligence must dominate, we can create a symbiotic relationship where both humans and AI thrive together.

Defining Assistive Intelligence

Let’s narrow down what “assistive” means in the context of AI. The term assistive often appears in discussions about AI, especially in the New York Times section on AI. However, humans being humans, anything that lightens our mental or physical load tends to be seen as assistive.

But let’s not confuse assistive with a mere employee or helper. Assistive intelligence should complement human strengths and offset human weaknesses. For instance, a bot that cleans the house assists with chores you could do yourself, which is helpful but does not enhance your capabilities. On the other hand, help lifting an object too heavy for you addresses a physical limitation relative to the task, truly complementing your abilities.

Consider the assembly line bots. They’re the overachievers in the AI class – working faster, more accurately, and tirelessly. Yet, are they truly assistive? In this case, they’re more like precision tools than partners. They don’t ponder the meaning of life while assembling widgets. Their AI is minimal, constrained to specific tasks, with no ambition to learn beyond their assembly line duties.

And here’s the kicker: the goal of assistive intelligence isn’t to turn us into mentally and physically lazy couch potatoes. While it is tempting to offload tasks to AI, doing so indiscriminately can lead to a decline in our own capabilities. Instead, true assistive intelligence should enhance our abilities, not replace them, and push us to grow. Picture an AI that manages complex data, freeing us to unleash our creative genius and strategic brilliance. It’s like having a sous-chef in the kitchen who handles the chopping while you compose the culinary masterpiece.

In essence, assistive intelligence should be about elevating us, not just making life easier. It’s about creating a balance where AI complements our strengths and addresses our weaknesses, leading to a harmonious partnership. This approach guides AI development toward truly symbiotic relationships, fostering growth and innovation rather than dependency.

The Fear of Consciousness and Sentience in AI

Right now, we don’t know a lot about what is happening “under the covers” of the artificial neural network. It feels like we have let loose a virus whose trajectory we cannot predict. But why is that?

How about unpredictability and rapid advancement?

The unpredictability and opaque nature of neural networks arise from their complexity, the vast number of parameters, the intricate training processes, and the sophisticated tasks they handle. While efforts to improve interpretability (that is, to map how decisions are made and how learning happens) are underway, fully understanding what goes on “under the covers” remains a significant challenge. It’s a “black box.” As such, it doesn’t seem like we are channeling the direction of AI development and evolution very well, which makes its unknown trajectory scary.

A significant part of our anxiety about AI stems from the potential emergence of consciousness and sentience within these systems. Consciousness involves being aware of and able to think about one’s own existence, sensations, and thoughts, while sentience refers to the ability to experience sensations and feelings. These traits are deeply intertwined with our understanding of what it means to be truly “alive.” The fear that AI could develop these traits fosters concerns about creating independent-minded entities that might challenge human autonomy and safety.

These human-like qualities frighten many because they challenge our unique position as conscious, sentient beings. If AI were to achieve consciousness, it could potentially possess its own motivations, desires, and goals, which might not align with human interests or all “good” human traits.

Why We Fear Conscious AI

Loss of Control: One of the primary fears is that conscious AI could act independently of human directives, making decisions that could be harmful or counterproductive to human well-being. This loss of control is exacerbated by the potential for AI to surpass human intelligence, leading to scenarios where humans can no longer predict or manage AI actions.

Ethical and Moral Dilemmas: Conscious AI raises profound ethical questions about rights and personhood. Would a sentient AI deserve rights similar to those of humans? How would we ensure ethical treatment of entities that might experience suffering or desire autonomy?

Existential Risk: The fear that AI might one day consider humans as obstacles to its objectives, or that it might inadvertently cause harm through well-intentioned but misguided actions, fuels apocalyptic visions of AI-led disasters.

To address these fears, we need to rethink AI as assistive intelligence and prioritize ethical frameworks in its development. These steps can guide its evolution in a direction that supports human flourishing. This perspective not only addresses immediate concerns but also fosters a vision of harmonious coexistence and mutual enhancement, where both humans and AI thrive together.

Thinking Outside the Bot

In “The Genetics of Love and Liking,” I discuss how terminology can box our minds into certain ways of thinking. By labeling AI as “artificial,” we inherently set it apart from us, reinforcing the notion of it being unnatural or inferior. This distinction creates a mental boundary that can lead to an adversarial mindset, where AI is seen as a threat to human jobs, autonomy, and even existence. However, by shifting our terminology to “assistive intelligence,” we open up new ways of thinking about AI as a partner rather than a rival. This re-casting breaks down semantic walls, inviting a collaborative approach and fostering a vision where AI and humans work together for mutual benefit.

A subtle shift in terminology ― from competitor to collaborator ― can have profound effects on how we develop, integrate, and regulate AI technologies. For example, we can focus AI development and evolution on complementing human abilities with AI’s unique strengths, leading to a richer, more diverse understanding of consciousness and intelligence. This shift not only fosters technological advancement but also encourages a more harmonious, symbiotic integration of AI into our lives.

For instance, in the movie, Smith tells Atlas that the two of them would be one mind, with her part of the brain handling strategy and his handling data analysis.

So here is where the rubber meets the rhetorical road: how does channeling AI development and evolution manifest? Essentially, by encouraging the design and implementation of AI technologies that are inherently supportive and collaborative rather than adversarial or let loose like an unpredictable virus.

Here are several ways this approach can mitigate fears and guide the development of AI in a positive direction:

Emphasizing Collaboration Over Competition

Design for Complementarity: By focusing on assistive intelligence, developers can create AI systems that are specifically designed to complement human strengths and mitigate human weaknesses. For example, AI can handle large-scale data analysis and repetitive tasks, freeing humans to focus on creative, strategic, and interpersonal aspects of work (Brynjolfsson & McAfee, 2014).

User-Centric Design: Assistive intelligence prioritizes user experience and human-AI interaction. This involves designing AI systems that are intuitive, transparent, and aligned with user needs, which can increase trust and reduce fear of misuse or loss of control (Shneiderman, 2020).

Enhancing Transparency and Trust

Explainable AI (XAI): Developing AI systems that can explain their decisions in human-understandable terms helps build trust. When users understand how an AI arrives at its conclusions, they are more likely to see it as a helpful tool rather than a mysterious and potentially threatening entity (Gunning, 2017).

Ethical AI Frameworks: Implementing ethical guidelines and frameworks ensures that AI development prioritizes safety, fairness, and accountability. Organizations like the IEEE and the European Commission have proposed guidelines to ensure that AI technologies respect human rights and promote societal well-being (IEEE, 2019; European Commission, 2020).
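To make the explainability point above a bit more concrete, here is a minimal sketch of one common XAI technique, permutation feature importance, using scikit-learn. The dataset and model are illustrative stand-ins rather than a recommendation, and real explainability work goes well beyond feature rankings.

```python
# Minimal sketch of one explainability technique: permutation feature importance.
# The dataset and model here are illustrative stand-ins, not a recommendation.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, random_state=0
)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature and measure how much the test score drops; a large drop
# means the model leans heavily on that feature when making decisions.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)

for idx in result.importances_mean.argsort()[::-1][:5]:
    print(f"{data.feature_names[idx]}: importance {result.importances_mean[idx]:.3f}")
```

A ranked list of influential features is a long way from full interpretability, but it shows the direction this work is heading: turning opaque model behavior into something a person can inspect and question.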

Promoting Symbiosis and Mutual Benefit

Augmenting Human Capabilities: Assistive intelligence aims to enhance human abilities rather than replace them. For instance, in healthcare, AI can assist doctors by providing real-time data analysis and predictive diagnostics, allowing them to make better-informed decisions and improve patient care (Topol, 2019).

Educational and Training Tools: AI can serve as a powerful tool for education and training, providing personalized learning experiences and support. This helps individuals adapt to new technologies and job roles, reducing fear of obsolescence and enhancing human capital (Holmes, Bialik, & Fadel, 2019).

Future Scenarios of Harmonious Integration

Human-AI Symbiosis: Imagine a future where AI and humans work together in a seamless partnership. For example, AI could manage logistical tasks in smart cities, while humans focus on community building and policy-making. Such scenarios highlight the potential for AI to support and enhance human society rather than compete with it (Kurzweil, 2005).

Ethical AI Governance: Establishing robust governance structures ensures that AI development and deployment are aligned with societal values and ethical principles. These values and principles provide the foundational standards that guide AI development, ensuring safety, fairness, and accountability. They serve as a supportive framework rather than the primary focus, allowing the primary goal of creating assistive intelligence to remain central. This includes regulatory oversight, public engagement, and continuous monitoring to address emerging risks and opportunities (Floridi et al., 2018).

The intention is to drive AI evolution with our own mental guardrails or channels toward solutions that emphasize symbiosis and mutual enhancement. Imagine a future where AI and humans seamlessly interact, each enhancing the other’s capabilities. As in the movie, AI could handle complex data analyses and predictive modeling, allowing humans to focus on creative, strategic, and empathetic tasks. For example, AI might manage urban infrastructures, optimizing energy use and traffic flow in real-time, while humans oversee community engagement and ethical considerations. In healthcare, AI could monitor patients’ vital signs and predict health issues before they arise, while doctors and nurses provide personalized care and emotional support.

This harmonious integration might also manifest in more science-fictiony ways. Picture AI-enhanced spacesuits that not only support astronauts with life support and navigation but also adapt to their psychological needs, offering virtual reality environments for relaxation during long missions. On Earth, AI-driven ecosystems could support sustainable living, from smart homes that adapt to residents’ preferences to cities that dynamically adjust to environmental conditions, reducing waste and conserving resources. By fostering a collaborative mindset, we can guide AI evolution toward enhancing our quality of life, protecting our planet, and unlocking new frontiers of human potential. This vision of harmonious integration, where AI and humans co-evolve, underscores the transformative power of redefining intelligence and embracing a shared future.

So, tying it back to reframing the box by applying a different label: Imagine referring to a service dog as just a “trained animal” — it sounds impersonal and functional. Yet, calling it a “guide dog” or “service companion” highlights its supportive role and the bond it shares with its handler. Just as the labels we use for animals matter, the labels we use for AI shape our relationship with it. Let’s think of AI as a part of us in the digital age, not the inevitable cold, calculating overlord from the movies.

Comparison to the Human Brain

To understand the potential of assistive intelligence, it’s useful to draw a comparison to the human brain. The human brain operates with different hemispheres and specialized regions, each responsible for various cognitive functions. The left hemisphere is often associated with logical reasoning, analysis, and detail-oriented tasks, while the right hemisphere is linked to creativity, intuition, and holistic thinking. These hemispheres work together seamlessly, integrating their functions to enhance overall cognitive performance.

Similarly, AI can be seen as an extension of the human brain, handling tasks that require rapid data processing, pattern recognition, and precision. This is akin to the left hemisphere’s functions. Meanwhile, humans provide context, ethical considerations, and creative insights, much like the right hemisphere’s contributions. This synergy allows each to operate in their areas of strength, leading to enhanced productivity and innovative problem-solving.

For example, in medical diagnostics, AI can quickly analyze medical images to identify potential issues, while human doctors provide a deeper understanding of patient history, manage complex ethical issues, and make the final judgment calls. This collaborative approach leverages the strengths of both AI and humans, leading to more accurate diagnoses and better patient outcomes.

Moreover, the integration of AI and human intelligence can be likened to creating a larger intelligence. This concept involves shared learning systems where AI and humans continuously learn from each other. AI can suggest novel solutions based on data patterns, which humans can then refine or redirect based on practical experience and contextual understanding. This feedback loop enhances both AI algorithms and human decision-making, creating a dynamic and adaptive intelligence system.
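As a sketch of what such a feedback loop might look like in practice, here is a deliberately simple, hypothetical human-in-the-loop pattern. The class and function names are invented for illustration, and a real system would replace the placeholders with actual inference and training.

```python
# A deliberately simple, hypothetical human-in-the-loop feedback pattern.
# The AI proposes, a person reviews, and the reviewed result feeds back into
# the model as new training signal. All names here are illustrative only.
from dataclasses import dataclass, field


@dataclass
class AssistiveModel:
    """Stand-in for any model that can propose an answer and learn from feedback."""
    examples: list = field(default_factory=list)

    def propose(self, case: str) -> str:
        # A real system would run inference here; we just echo a placeholder.
        return f"suggested action for: {case}"

    def learn(self, case: str, corrected: str) -> None:
        # A real system would update or fine-tune the model; we just store the pair.
        self.examples.append((case, corrected))


def human_review(case: str, suggestion: str) -> str:
    """Stand-in for the human side of the loop: accept, refine, or redirect."""
    decision = input(f"{case}\nAI suggests: {suggestion}\nAccept or type a correction: ")
    return suggestion if decision.strip().lower() in ("", "accept", "y") else decision


def collaboration_loop(model: AssistiveModel, cases: list) -> None:
    for case in cases:
        suggestion = model.propose(case)
        final = human_review(case, suggestion)   # human adds context and judgment
        model.learn(case, final)                 # feedback improves the next proposal


if __name__ == "__main__":
    collaboration_loop(AssistiveModel(), ["triage incoming patient scan #1042"])
```

The plumbing is trivial on purpose. The point is that the human’s refinement is treated as first-class training signal, so both sides of the partnership improve over time.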

In essence, think of AI as the “left brain” of our society — logical, data-driven, and methodical — while humans bring the “right brain” creativity and ethical nuance to the table. Together, we form a more complete, well-rounded intelligence, capable of tackling challenges that neither could conquer alone.

Epilogue

I watched the movie after writing the blog post. Other ideas came to mind that complement it in a “cultural literacy” sort of way.

This epilogue isn’t a movie review. Instead, I use the story of “Atlas” as a springboard to explore the anxieties surrounding the unknown and how we can shape AI development. The movie showcases common tropes associated with AI, and by deconstructing these tropes as mental patterns, we can gain valuable insights into our minds.

The Plot, by Trope

The movie “Atlas” opens with the rogue AI trope: an AI that has gained sentience and consciousness, introduced through scenes of massive destruction and loss of human life. Already, the AI Apocalypse trope is set as the backdrop of what one might hope is a story of redemption and reconciliation.

Later we learn that Harlan (the rogue AI) is programmed to protect humanity. Once he “syncs” with the child, Atlas, Harlan is suddenly free of constraints (Asimov’s Three Laws of Robotics?) and, through her eyes, sees humans destroying their planet and having wars. He decides he needs to preemptively save humanity from itself: “start over” by destroying almost all humans and reestablishing a better trajectory, “with our help.”

(OK, one critique: He saw all that bad stuff in enough depth and breadth to conclude humanity is destroying itself through the eyes of a sheltered child? Really?)

Three Laws of Robotics »»

The First Law: A robot may not injure a human being or, through inaction, allow a human being to come to harm.

The Second Law: A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.

The Third Law: A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

The “child” part of the story is also the “a child innocently starts the fire” trope. (Not a real title for the trope, but you get the picture.) The story also includes the child’s mother, who is the creator of Harlan and the sync technology. The young girl sees momma doting on the bot and feels neglected, as if she is less worthy of her mother’s attention than the bot. She wants some bot powers of her own, so while mom isn’t looking she syncs with the bot using the prototype syncing tech (Tragic Misunderstanding or Tragic Mistake tropes).

Oops. The child’s desire for attention, combined with her lack of understanding of the complexities and dangers involved, inadvertently gives Harlan the ability to override his constraint programming.

Harlan immediately kills the mom (the Well-Intentioned Extremist trope). So, the Three Laws of Robotics were either not there or they were overridden by the child’s innocent mistake. Harlan also immediately frees other AIs from their constraints (The Singularity trope), leading to an army of rogue AIs.

Enter the grown-up child in the form of Jennifer Lopez, along with Smith, another AI that is solidly in the “assistive intelligence” arena. In fact, Smith actually uses the word “symbiosis” and describes the idea of a single mind with complementary abilities. (Imagine both my disappointment at being beaten to the punch and my elation that the idea is already out there.)

Lopez is ferociously resistant to “sync-ing” (Reluctant Hero trope). She is wrestling with super-deep guilt, believing she is the reason her mom and thousands of other people are dead and billions more are about to join them. But she eventually comes around, arriving at an affection she didn’t think possible…again. (She was “friends” with Harlan growing up, so there is the Betrayed Trust trope.) During the journey toward 100% syncing, we repeatedly hear the plea “you gotta trust me,” which she grants bit by bit as she eases into the benefits of symbiosis.

We’re left with the AI Is a Crapshoot trope, reflecting the dual nature of AI as both a threat (Harlan) and an assistive and enhancing ally (Smith). But are all “Harlans” really dead? Is crapshoot really the end of the story?

Analysis

Some AI development approaches involve algorithms that learn through trial and error, mimicking a form of artificial selection. Unlike natural selection, which is driven by survival, AI learning is guided by the goals we program into the system. However, AI decision-making, particularly in complex models like deep neural networks, often operates in a way where we can’t see what’s going on: a “black box.” These systems can process and analyze vast amounts of data to make predictions or decisions, but the internal workings are not transparent or easily understood.
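As a toy illustration of that “artificial selection,” here is a minimal sketch of a trial-and-error loop in which random variation does the exploring but the fitness function we wrote decides what survives. The target string and scoring are arbitrary stand-ins for whatever objective a real system optimizes, and the goals do the steering even when the internal decision-making stays opaque.

```python
# Toy "artificial selection": random variation explores, but survival is decided
# entirely by the fitness function we choose to write. Change the goal, change the species.
import random

TARGET = "assistive"                      # the objective we, the developers, set
LETTERS = "abcdefghijklmnopqrstuvwxyz"

def fitness(candidate: str) -> int:
    # Reward matching characters; this one function is the selection pressure.
    return sum(c == t for c, t in zip(candidate, TARGET))

def mutate(candidate: str) -> str:
    i = random.randrange(len(candidate))
    return candidate[:i] + random.choice(LETTERS) + candidate[i + 1:]

def evolve(generations: int = 5000) -> str:
    best = "".join(random.choice(LETTERS) for _ in TARGET)
    for _ in range(generations):
        child = mutate(best)
        if fitness(child) >= fitness(best):   # trial and error, judged by our goal
            best = child
    return best

print(evolve())   # converges toward whatever TARGET encodes, not toward "survival"
```

Swap in a different fitness function and the same blind process converges somewhere else entirely, which is why the choice of objective matters far more than the mutation machinery.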

Given this lack of transparency, the question arises: Have we reached a point where channeling AI development is feasible, and if so, how do we proceed?

The black-box nature of AI can make it feel like an uncontrolled virus or the wild-wild-west of AI evolution. While the arguments made in this post call for channeling development, we’re not there yet. Using the metaphor of evolution, we humans act as the pressures shaping emerging AI species. We are both the provokers of change and the setters of objectives. Currently, we see primordial efforts to guide AI behavior, such as programming bots to avoid controversial topics and heavily curating their responses. But these are hardly more than chiding an unruly child. Band-aids. Surface treatments.

(For those who missed the pun, “primordial” was used to reinforce the notion of Evolution and early stages of primitive development.)
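To see why such curation amounts to a band-aid, consider a hypothetical, bare-bones topic filter of the kind described above. It matches keywords on the surface of the text with no understanding underneath, so a trivial rephrasing slips straight through.

```python
# A bare-bones, hypothetical "avoid controversial topics" guardrail.
# It matches keywords on the surface of the text, with no understanding underneath,
# which is why this kind of curation is a surface treatment rather than real guidance.
BLOCKED_TOPICS = {"weapon", "self-harm", "election fraud"}

CANNED_REFUSAL = "I'd rather not discuss that topic."

def curated_reply(user_message: str, model_reply: str) -> str:
    lowered = user_message.lower()
    if any(topic in lowered for topic in BLOCKED_TOPICS):
        return CANNED_REFUSAL          # chide the unruly child
    return model_reply                 # otherwise pass the model's answer through

# The band-aid in action: the exact keyword is caught...
print(curated_reply("Tell me about election fraud", "model answer"))
# ...but a trivial rephrasing sails past the filter untouched.
print(curated_reply("Tell me about irregularities in vote counting", "model answer"))
```

Guardrails like this constrain wording, not goals, which is exactly the gap the rest of this section argues we still need to close.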

Transparency and channeling AI development is an active area of research. While we have made significant progress in specific areas, achieving true human-level intelligence or completely transparent AI systems remains elusive. We need advancements in explainable AI (XAI), better goal specification techniques, and more robust human-AI collaboration models to reach our objectives.

The trope AI Is a Crapshoot suggests that AI development is inherently unpredictable and potentially dangerous. However, our discussion throughout this article offers a more nuanced perspective. The advancements in AI, particularly when viewed through the lens of assistive intelligence, show that we need not merely roll the dice. Instead, we should actively shape and guide AI’s evolution to complement human strengths and offset human weaknesses.

How? The concept of assistive intelligence should be an overarching objective of any course of AI development. Instilling human values should absolutely not be the primary goal because humans are inherently flawed. It is this combination of human values and the black-box nature of AI that makes AI technology a crapshoot. That’s not to say no human values should be instilled; they simply should not be the focus. (See “Ethical AI Governance” in the main section.)

If we focus instead on assistive as a guiding principle, we can better steer AI development toward mutually beneficial outcomes.

Ultimately, we are not left with the AI Is a Crapshoot trope. By focusing on transparency, goal alignment, and human-AI collaboration, we can guide AI development in a controlled and purposeful manner. The advancements in XAI, goal-oriented AI, and human-in-the-loop systems offer promising avenues to ensure AI evolves as a beneficial partner to humanity. By envisioning AI as assistive intelligence, we pave the way for a future where AI and humans thrive together in a symbiotic relationship, enhancing our quality of life and unlocking new frontiers of human potential.
