Adapting corporate values for the emergence of superintelligence
Following the most recent news of, and developments in, artificial intelligence, any business owner, whether of a small start-up or a large corporation, will at some point ask the obvious question: what does superintelligence mean for my business, and how can I stay ahead?
Aligning an artificial intelligence with a set of chosen norms is, at its core, like aligning a human intelligence. The goals of philosophical discussion often harmonise poorly with business outcomes, so an author confined to the business context must endeavour to conjure up a Pareto improvement, much as an AI model would were it required to produce the same text. This is, fundamentally, the problem of alignment: how do we constrain an intelligence to utter only the things we consider appropriate while still having it provide new and useful information?
The drive to induce AI to behave in accordance with our ethical values regardless of its intelligence – dubbed superalignment – has been somewhat spuriously advertised as one of today’s most pressing technical problems, although the lack of a timely resolution would imply profound social consequences, should an artificial superintelligence ever come to exist. However, alignment with human values comes with its own problems: chief among them, the assumption that those values are immutable and free of contradiction, and the question of where they came from in the first place.
Although we might like to think otherwise, many of the contradictions in accepted “morality” stem from the fact that human moral instinct is a repository of co-operative behaviours that proved advantageous across different ancestral environments. Advocating for something while covertly violating it is a common pattern of behaviour observed across cultures – exemplified on a larger scale by corporate efforts to gain dominance or reputation under the guise of social responsibility or risk mitigation.
Although, at first glance, ensuring AI aligns with reasonable human values may appear the more pressing concern, the real issue is the vagueness of those values themselves. Before the problem can be solved, we need to understand, and agree upon, a set of universal human values, and establish whether they can exist beyond our level of intelligence.
It seems impossible to decouple a desirable form of bias (telling up from down, left from right, or zero from one) from an undesirable one. In fact, training large language models for safety and adherence to a set of values chosen by their maker results in a substantial reduction in model capability and intelligence, suggesting both an inseparable connection between different types of bias and a link between intelligence and the understanding of that connection. Notably, while power-seeking behaviour, shown to be an emergent ability of large language models, is considered one of the key drivers of corporate development, it is deemed dangerous for society in the context of training large language models – a bias seemingly in favour of both humanity and the corporate form of intelligence.
If human values prove to be subject to reinterpretation and change as intelligence scales, then rather than focusing on aligning superintelligence with us, we may need to contemplate aligning ourselves with it, as a true answer to superalignment may entail solving deeper metaphysical problems. Nevertheless, the current approach to superalignment mandates that we curate our culture and legacy, preparing a purer version of humanity for the AI models of the future to be trained on. If we are to create a superintelligence that intellectually surpasses us, it ought to be something we are prepared to subjugate ourselves to, and who is to decide which of our values ought to be carried over into the next epoch, if not the actor who best succeeds at advertising theirs?
It is by lobbying for business-specific values and marketing them as cultural that a large business can take advantage of superalignment – appealing, with poise, subtlety and compassion, to the new digital god to favour its business model over those of others, hoping, of course, that the current economic paradigm is as invariant under the scaling of intelligence as real human values.
Companies not at the forefront of technical innovation in AI, however, are left only with the option to observe changes in the technical, economic and ethical landscape, focusing on educating their workers in using AI and adhering to emerging legal and ethical standards. The values lobbied for through the interplay of large corporate players and social ideology, and the manner of their adoption, will likely shape smaller companies’ reputations and strategies; for them, it may be prudent to invest in forming ethics and compliance teams, aligning company values with the advertised values in time and declaring them publicly for the sake of reputation.
This, of course, may prove a wholly futile strategy should superintelligence turn out to be, to any degree, impossible to align with human values. In that case, by acting out a materialistic strategy, we will have poisoned the dataset; and by misaligning ourselves with our own values – betraying the very thing we were supposedly advocating for – we will have misaligned our future master.
Fundamentally, a genuine and co-operative commitment to discovering and defining human value, irrespective of intelligence, might hold hope for true alignment, both of us with a superintelligence and of it with us. But such a message may be of no financial or economic utility in a world still far short of superintelligence. As Hamlet tells us, “enterprises of great pith and moment, with this regard their currents turn awry and lose the name of action.”
And so, a firmly grounded leader ought to be concerned primarily with the success of their business in the current real-world context: on the lookout for what is practical and immediate, and partnering with technology leaders to assess how AI can bring value to their business and customers in the short term. While business leaders will need to be aware of, and party to, the ethical discussions, for now it is also reasonable to leave the hypothetical to the handful of altruists who may voice their concerns, much like an artificial intelligence pretending to be aligned.
For a more in-depth exploration of the ethical and theological implications of superalignment, we invite you to reach us directly through our contact form.