
Integrity Over Intelligence: Navigating AI for a Human-Centered Future


Hamilton Mann is a tech executive, a Digital and AI for Good pioneer, and Group Vice President at Thales, where he co-leads the AI initiative and leads digital marketing and digital transformation. He also serves as a Senior Lecturer at INSEAD and HEC Paris and as a mentor at the MIT Priscilla King Gray (PKG) Center. He is a doctoral researcher in AI at École Nationale des Ponts et Chaussées – Institut Polytechnique de Paris, writes regularly for Forbes, and hosts the podcast The Hamilton Mann Conversation.

What’s the big idea?

Artificial intelligence cannot rely on intelligence alone. For the well-being of people and society, integrity must come first, by design.

Below, Hamilton shares five key insights from his new book, Artificial Integrity: The Paths to Leading AI Toward a Human-Centered Future. Read his Book Bite below, or in the Next Big Idea App.

1. The future of AI is integrity over intelligence.

In the rapidly evolving world of artificial intelligence (AI), computational power isn’t enough. Warren Buffett once said, “In looking for people to hire, look for three qualities: integrity, intelligence, and energy. And if they don’t have the first, the other two will kill you.” This wisdom is equally applicable to AI. As we begin to “hire” powerful, intelligent machines to perform tasks traditionally done by humans, we must ensure they possess something akin to integrity.

AI systems need to be designed to uphold integrity even behind closed doors, so that their functioning not only exhibits trustworthiness but also adheres to societal needs, norms, values, and standards without infringing on, harming, devaluing, or degrading them. Artificial integrity over intelligence is a built-in capability within AI systems that ensures they respect human values from the very start. It is defined by three interconnected models:

  • Society Value Model enforces strong guardrails and value principles external to AI systems that support the human condition, creating a safe environment for humans to thrive.
  • AI Model ensures AI operational consistency with internal and intrinsic guardrails, guidelines, and values-driven standards from an AI development standpoint, so that algorithms uphold a form of integrity over time.
  • Human and Artificial Co-intelligence Model builds capabilities based on the synergistic relationship between humans and AI while enhancing the human condition.

Together, these three models constitute one integrated approach, functioning as a metamodel essential for ensuring that artificially intelligent systems participate in societal ecosystems while preserving, supporting, and defending the integrity of those ecosystems.

2. Artificial integrity sustains society value models.

Artificial integrity embodies a human commitment to establish guardrails and value principles that serve as a code of life for how AI is developed. It refers to the society value model that AI models must adhere to so that they are intrinsically capable of prioritizing and safeguarding human life and well-being in all aspects of their operation. It represents the value system in which these forms of intelligence operate and serve, upholding principles tailored to specific contexts, not just for the benefit of one given user, group, or community but in the superior interest of the integrity of socio-economic, political, and environmental ecosystems.

This approach marks a paradigm shift for AI systems, which should not exhibit intelligence for its own sake or serve only the hyper-narrow interests of an individual within the limited framework of a commercial purpose. AI should be algorithmically socially responsible and societally accountable for its impact on society, considering the value system in which it is an artificial stakeholder.

“AI should be algorithmically socially responsible and societally accountable.”

A society’s value model involves multiple facets that collectively define the ethical, cultural, and operational principles guiding behavior and decision-making. There are the prevailing moral beliefs that dictate what is considered right or wrong. There are the cultural values that forge shared assumptions, traditions, and practices. Legal frameworks are another essential dimension. How a society organizes its economic activity, distributes resources, and generates wealth is also part of the society value model, among other dimensions.

These multidimensional aspects of any value model call for equally diverse approaches to analyzing it, and hence for multidisciplinary input: engaging with ethicists, community leaders, and potential users to inform strategic decision-making around a given society value model.

Developing a mathematical representation of the value model is crucial for predicting the impacts of policies, technologies, and changes within societal frameworks. Such a model could optimize the interaction between technological advancement and societal values, ensuring that technologies are deployed in alignment with ethical norms and in ways that enhance public welfare.
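To make the idea concrete, here is a minimal sketch of what such a quantified value model might look like: a weighted alignment score over a handful of value dimensions. The dimensions, weights, and scores are illustrative assumptions, not a formula from the book.

```python
# A minimal, illustrative sketch of a quantified "society value model".
# The value dimensions, weights, and scores are hypothetical examples,
# not a method taken from the book.

# Weights express how strongly a society prioritizes each value dimension.
VALUE_WEIGHTS = {
    "human_safety": 0.40,
    "fairness": 0.25,
    "privacy": 0.20,
    "environmental_impact": 0.15,
}

def alignment_score(action_scores: dict) -> float:
    """Weighted average of per-value scores in [0, 1] for a proposed action."""
    return sum(w * action_scores.get(v, 0.0) for v, w in VALUE_WEIGHTS.items())

# Example: evaluate a hypothetical deployment decision.
proposed_action = {
    "human_safety": 0.9,
    "fairness": 0.7,
    "privacy": 0.5,
    "environmental_impact": 0.8,
}
print(f"Alignment: {alignment_score(proposed_action):.2f}")  # ~0.76
```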

3. Artificial integrity is a deliberate act of design.

Artificial integrity implies that AI is developed and operates in a manner that is not only aligned with the guardrails and value principles that serve as its code of life but that also holds to them consistently across varied situations, without deviation from its programmed, values-driven guidelines.

This is a fundamental part of how it has been conceived, trained, and maintained. It is a deliberate act of design. It suggests a level of algorithmic self-regulation and intrinsic adherence to values, like how a person with integrity would act morally regardless of external pressures or temptations, maintaining a vigilant stance toward risk and harm, ready to override programmed objectives if they conflict with the primacy of human safety. It involves a proactive and preemptive approach, in which the AI model does not merely react to ethical dilemmas as they arise but is equipped with the foresight to prevent them. It is about embedding artifacts into AI that govern its processes, mimicking a form of consciously made action while ensuring it always remains aligned with human values. This is akin to a “value fail-safe” operating under the overarching imperative that no action by the AI system should compromise human health, security, or rights.

“It suggests a level of algorithmic self-regulation and intrinsic adherence to values, like how a person with integrity would act morally regardless of external pressures or temptations.”
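A minimal sketch of what such a value fail-safe could look like in code, assuming a hypothetical action representation and risk score; none of these names come from the book:

```python
# A sketch of a "value fail-safe": a guard that vetoes a planned action
# when it conflicts with human safety. Types and thresholds are
# hypothetical placeholders, not a real API.

from dataclasses import dataclass

@dataclass
class Action:
    name: str
    expected_benefit: float   # 0.0 to 1.0, toward the programmed objective
    risk_to_humans: float     # 0.0 (none) to 1.0 (severe)

# The fallback the system reverts to when the fail-safe trips.
SAFE_FALLBACK = Action(name="defer_to_human_operator",
                       expected_benefit=0.0, risk_to_humans=0.0)

def violates_human_safety(action: Action, threshold: float = 0.2) -> bool:
    """Illustrative check: any non-trivial risk to humans trips the fail-safe."""
    return action.risk_to_humans > threshold

def execute_with_integrity(planned: Action) -> Action:
    """Override the programmed objective whenever it conflicts with safety."""
    if violates_human_safety(planned):
        return SAFE_FALLBACK  # human safety takes primacy over the objective
    return planned

# A high-benefit but risky action gets overridden.
risky = Action(name="aggressive_optimization",
               expected_benefit=0.9, risk_to_humans=0.6)
print(execute_with_integrity(risky).name)  # -> defer_to_human_operator
```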

An essential element in building such an AI model is the data process. Beyond labeling, which generally refers to identifying a piece of data and assigning it a predefined category, it is necessary to annotate datasets systematically. While labeling gives data a form of identification so that the system can recognize it, annotation attaches more detailed and extensive information. Data annotation gives the data a form of abstract meaning so that the system can contextualize the information.

Including annotations that characterize an integrity code (the values at stake, judgments regarding those values, the principles underlying them, and outcomes deemed inappropriate) is a promising approach to training AI that can produce results guided by integrity toward a value model. For example, in a dataset used to train an AI customer service chatbot, annotations could help ensure that the chatbot’s responses are polite, respectful, and fair.
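As a sketch, the difference between a labeled and an annotated training example for such a chatbot might look like this; the schema and field names are hypothetical, chosen only to illustrate the contrast:

```python
# Illustrative contrast between a bare label and an integrity-oriented
# annotation for one chatbot training example. The schema is hypothetical.

labeled_example = {
    "text": "I want a refund now or I'll leave a bad review!",
    "label": "refund_request",  # labeling: identifies the category
}

annotated_example = {
    "text": "I want a refund now or I'll leave a bad review!",
    "label": "refund_request",
    # annotation: adds contextual, value-laden meaning
    "annotations": {
        "customer_sentiment": "frustrated",
        "values_at_stake": ["fairness", "respect"],
        "appropriate_response_traits": ["polite", "de-escalating", "fair"],
        "inappropriate_outcomes": ["dismissive reply", "retaliatory tone"],
    },
}
```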

4. Human-AI co-intelligence revisits collaborative intelligence.

The balance between “human intelligence value added” and “AI value added” is delicate and consequential. In navigating this complexity, we must delineate the current landscape where human wit intersects with the prowess of AI, a map that can serve as a compass toward future terrains where the symbiosis of human and machine will redefine worth, work, and wisdom. For artificial integrity, this balance could be achieved through four modes of AI inclusion, forming a simple quadrant that is sketched in code after the four descriptions below:

The Marginal Mode. When it comes to value creation, there exists a quadrant where the contributions of both human and artificial intelligence are notably restrained, reflecting scenarios of limited impact. This segment captures tasks with minimal marginal benefits when subjected to human or artificial intelligence inputs. Such tasks are often too inconsequential to necessitate significant intellectual investment, yet simultaneously too intricate for the present capabilities of AI, rendering them economically unjustifiable for human endeavor. An example is document scanning for archival purposes: a task that, while manageable by humans, succumbs to monotony and error, and where AI, despite capabilities like optical character recognition (OCR), offers only marginal improvement due to challenges with non-standard inputs.

The AI-First Mode. In this paradigm, AI is the linchpin, spearheading core operational functionalities. It spotlights scenarios where AI’s unparalleled strengths—rapidly processing extensive datasets and delivering scalable solutions—stand out. This AI-centric approach is particularly relevant in contexts where the speed and precision of AI significantly surpass human capabilities. AI emerges as the driving force in operational efficiency. An example is observed in the financial industry, particularly in high-frequency trading. Here, AI-driven trading systems leverage complex algorithms and massive datasets to identify patterns and execute trades with a velocity and scale unachievable by human traders.

The Human-First Mode. In this segment, the spotlight shines brightly on the indispensable qualities of human cognition, including intuitive expertise and contextual, situational, emotional, and moral discernment. AI is deployed in a supportive or complementary capacity. This approach champions human capabilities and decision-making, particularly in realms necessitating emotional intelligence, nuanced problem-solving, and moral judgment. It emphasizes the irreplaceable depth of human insight, creativity, and interpersonal abilities in contexts where the intricacies of human thought and emotional depth are critical.

For instance, in psychiatry, the nuanced interpretation of non-verbal communication, the provision of emotional support, and the application of seasoned judgment underscore the limitations of AI in replicating complex empathetic and moral considerations.

The Fusion Mode. This segment illustrates a synergistic integration where human intelligence and AI coalesce to leverage their distinct strengths: human creativity and integrity traits paired with AI’s analytical acumen and pattern recognition capabilities. In health, for example, AI can augment physicians’ capabilities with precise diagnostic suggestions and enhance surgical precision in medical procedures. In engineering and design, it can support creative problem-solving.
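Read as a two-by-two quadrant over human value added and AI value added, the four modes can be sketched as a simple classification. The threshold and the example scores below are illustrative assumptions, not figures from the book:

```python
# A minimal sketch of the two-by-two quadrant behind the four modes:
# classify a task by illustrative human and AI value-added scores.
# The threshold and scores are hypothetical.

def ai_inclusion_mode(human_value: float, ai_value: float,
                      threshold: float = 0.5) -> str:
    """Map value-added scores in [0, 1] to one of the four modes."""
    if human_value < threshold and ai_value < threshold:
        return "Marginal"
    if human_value < threshold:
        return "AI-First"
    if ai_value < threshold:
        return "Human-First"
    return "Fusion"

# Examples drawn from the text, with illustrative scores:
print(ai_inclusion_mode(0.2, 0.3))  # archival scanning       -> Marginal
print(ai_inclusion_mode(0.1, 0.9))  # high-frequency trading  -> AI-First
print(ai_inclusion_mode(0.9, 0.3))  # psychiatry              -> Human-First
print(ai_inclusion_mode(0.8, 0.8))  # diagnostic support      -> Fusion
```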

5. Navigating mode transitions.

Altogether, the four modes (Marginal, AI-First, Human-First, and Fusion) underscore a future of work in which AI augments human expertise, fostering a collaborative paradigm. As we migrate from one quadrant to another, we should aim to bolster, not erode, the distinctive strengths of humans and of AI. Traditional AI ethics frameworks may not fully address the need for dynamic, adaptable governance that keeps pace with these transitions in the balance between human intelligence and evolving AI; artificial integrity suggests a more flexible approach to governing such journeys.

“Artificial integrity challenges us to contemplate the importance of human oversight in mitigating ethical blind spots that AI alone may overlook.”

This approach offers a more agile and responsive governance structure. When a job evolves from a quadrant of minimal human and AI value to one where both are instrumental, the shift should be marked by thorough consideration of its repercussions, a quest for equilibrium, and adherence to universal human values. For instance, a move away from a quadrant characterized by AI dominance with minimal human contribution should spell a recalibration of the symbiosis between humans and AI; here, artificial integrity calls for evaluating AI’s role beyond operational efficiency, considering its capacity to complement the complex expertise that embodies professional distinction. Conversely, when we consider a transition toward less engagement from both humans and AI, artificial integrity challenges us to contemplate the importance of human oversight in mitigating ethical blind spots that AI alone may overlook. AI systems capable of coping with such transitions are equipped with artificial integrity.

The difference between intelligence-led and integrity-led machines is simple: the former were designed because we could; the latter, because we should. This distinction underscores the growing need for artificial integrity.

