~ by Alan Dupont. Originally published in The Australian on 26th October, 2024

Once the preserve of technology geeks, artificial intelligence has gone mainstream, resulting in an explosion of digital and virtual capabilities. These range from the helpful personal assistants on our digital devices and the mapping of more than 160,000 new virus species, to troubling “deep fakes” that are impossible to detect with the naked eye.

OpenAI’s ChatGPT has helped democratise AI but raised many questions about the technology’s value and impact. Will it be genuinely transformational and a boon to society, as advocates proclaim, boosting productivity, creating undreamed-of prosperity and liberating us from life’s drudgeries? Or are the naysayers right – that its promise is over-hyped and the risks potentially catastrophic without guardrails and humans in the loop?

Supporters point to the undoubted workplace efficiencies that AI brings and its centrality to a new “boundaryless ecosystem of collaboration that spans continents, time zones and cultures.” A recent study by the International Labour Organisation concluded that AI is more likely to enhance job roles than eliminate them. Big Tech is overwhelmingly bullish, downplaying the dangers and arguing that sharing code allows engineers and researchers across the industry to identify and fix problems.

But the public is unconvinced. And the naysayers are gaining ground. Polling from the Lowy Institute shows that more than half of Australians believe the risks of AI outweigh its benefits. Serious people, including those with deep knowledge of the technology, are beginning to sound the alarm. Geoffrey Hinton, regarded as the godfather of AI, who has just won the Nobel Prize for Physics, says that while AI systems bring incredible promise, the risks “are also very real and should be taken seriously.” Technophile Elon Musk has branded AI “the most disruptive force in history.”

Soroush Pour, CEO of tech start-up Harmony Intelligence, testified at a Senate inquiry into AI that the “next generation of AI models could pose grave risks to public safety.” And an analysis for the well-regarded International Institute for Strategic Studies concludes: “Despite the best efforts of researchers and engineers, even our most advanced AI systems remain too brittle, biased, insecure and opaque to be relied upon in high-stake use cases. Put simply, we do not know how to build trustworthy AI systems.”

Researchers and tech executives have long believed that AI could fuel the creation of new bioweapons, turbocharge hacking and allow bad actors to steal the technology and use it for nefarious purposes, including military domination. Superintelligent machines might go rogue and pose an existential threat to humans, giving rise to the question: how do we prevent AI from becoming a single point of societal failure while preserving its undoubted benefits?

Time is running out to provide answers because the technology is accelerating at a blistering pace, driven by order-of-magnitude increases in computing power and in the data that feeds the large language models essential for deep learning.

A decade ago, AI could barely identify images of cats and dogs. Early versions of ChatGPT had the capabilities of a primary school student. But the latest version (GPT-4) performs at the level of a smart high-schooler: it can write competent code and essays, reason through difficult math problems and outperform most secondary school students.

The next big leap is shaping up to be even more extraordinary. Generative AI will initiate an explosion of knowledge leading to superintelligence, often referred to as the “singularity”.

Joi Ito, digital architect and President of Japan’s Chiba Institute of Technology, says that AI is moving at incredible speed. It will be able to do most routine tasks within three years and then become “better than us.” Some insiders believe that the singularity could occur as early as 2027, when AI development itself becomes automated, allowing machines to program machines. That could mean the AI labs of 2027 doing in a minute what took three months with the models used to train GPT-4.
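A rough back-of-envelope sketch of the scale implied by that claim, assuming a 30-day month purely for the arithmetic (the three-month training figure comes from the paragraph above; the resulting speed-up factor is an illustration, not a sourced figure):

```python
# Back-of-envelope arithmetic: the speed-up implied by "three months of work in a minute".
# The three-month figure is the article's; the 30-day month is an illustrative assumption.
MINUTES_PER_DAY = 24 * 60
training_minutes = 3 * 30 * MINUTES_PER_DAY  # three months expressed in minutes
speedup = training_minutes / 1               # the same work compressed into a single minute
print(f"Implied speed-up: roughly {speedup:,.0f}x")  # about 130,000-fold
```

On those assumptions, the implied compression is roughly 130,000-fold.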

According to former OpenAI technical program manager Leopold Aschenbrenner, superintelligent AI systems would be “vastly smarter than humans, capable of novel, creative, complicated behavior we couldn’t even begin to understand—perhaps even a small civilization of billions of them.” We are building machines that can think and reason, he says. They will be able to solve robotics and make dramatic leaps across other fields of science and technology. An industrial revolution will follow, marked by unprecedented increases in productivity and economic growth.

It took 230,000 years for the global economy to double as civilisation went from hunting to farming, but only 15 years to double after the adoption of modern manufacturing techniques. It’s conceivable that superintelligent machines could increase productivity exponentially within a decade as armies of computer-generated avatars, operating like remote workers, become fully functional in any workplace environment.

But will we be able to trust superintelligent machines whose understanding, abilities and raw power exceed those of humanity combined?

Thomas Hajdu, tech entrepreneur and chair of creative technologies at the University of Adelaide, says that “the rise of generative AI is not just another technological advancement; it is a seismic shift that will reshape the landscape of cognitive work.” As AI becomes more sophisticated, “our ability to discern when to trust its judgements and when to apply human insight will be crucial.” Solving the discernment problem – the ability to distinguish quality, relevance and ethical implications – becomes paramount.

That’s why the issue of AI safety is rising to the top of policy agendas globally. Many countries, including Australia, are moving to put guardrails around AI to ensure that humans retain control and the risks to public safety are minimised. As a first step, the Albanese government has introduced voluntary codes on the use of AI that will eventually lead to a comprehensive regulatory regime and a new Australian AI Act. Science Minister Ed Husic concedes it’s “one of the most complex public policy challenges for governments across the world.”

These challenges are only going to get bigger and more complex as the race for AI supremacy heats up. Two of the most pressing are finding enough money and enough power.

AI is not just a set of advanced computer chips and some clever software. Realising the technology’s vast potential will require trillions of dollars and the rapid construction of a supporting infrastructure of dedicated power plants and tech clusters, including large new chip fabrication plants. Doubters contend that the financial and infrastructure demands will constrain AI’s progress. Optimists disagree, pointing to the rapid scaling up of investment and infrastructure in the US, the world’s leading AI nation.

Aschenbrenner says AI has set in motion an “extraordinary techno-capital acceleration” that may surpass anything yet seen. Revenue for AI products is heading towards $US100 billion by 2026, from virtually zero a decade ago. The global military AI market is only about $US9bn but is expected to triple in the next eight years. Total AI investment could exceed $US1 trillion by 2027, and we could soon see $US100bn AI clusters for training large language models, along with their supporting infrastructure. AI-specific chips will be common by the end of the decade, stimulating a further leap in the capability of the machine-learning algorithms that power GenAI.

Power of another kind may be the most important short-term constraint. AI is a voracious consumer of electricity, partly because of the skyrocketing demand for data centres, which need enormous amounts of power to process and store AI data and to keep their hardware cool. Some computer clusters require as much energy as a medium-sized city. One large data centre complex in the US state of Iowa burns the equivalent of 7 million laptops running eight hours a day, according to its owner, Meta.

On present trends, the US will have to boost electricity capacity by more than 20 percent by 2030 just to meet AI demand. A recent report by the Tech Council of Australia concludes that data centres consume 5 percent of the energy from Australia’s grid, with some analysts expecting that share to double by 2030. Globally, the World Economic Forum estimates that AI energy requirements are growing by 26 to 36 percent a year.
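As a minimal sketch of what those growth rates compound to, assuming a 2024 baseline year and a normalised starting demand of 1.0 (both assumptions; the 26 and 36 percent figures come from the estimate above):

```python
# Compound-growth sketch: what 26-36 percent annual growth in AI energy demand would mean by 2030.
# The growth rates are from the WEF estimate cited above; the 2024 start year and unit baseline are assumptions.
START_YEAR, END_YEAR = 2024, 2030
years = END_YEAR - START_YEAR
for annual_growth in (0.26, 0.36):
    multiple = (1 + annual_growth) ** years
    print(f"{annual_growth:.0%} a year for {years} years -> about {multiple:.1f}x today's demand")
```

On those assumptions, AI energy demand in 2030 would be roughly four to six times today’s level.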

Finding this additional power will have major implications for energy and climate change policy because Big Tech, previously an enthusiastic supporter of renewable energy, is becoming agnostic about where its power comes from. In an interview with the Washington Post, Aaron Zubaty, CEO of a Californian company that develops clean energy projects, says: “The ability to find power right now will determine the winners and losers in the AI arms race. It has left us with a map bleeding with places where the retirement of fossil fuel plants is being delayed.”

But the biggest challenge of all will be how to ensure that AI doesn’t unleash powers of destruction surpassing those of nuclear weapons, which profoundly reshaped war because, for the first time, humans had the means to destroy themselves.

Military power and technological progress have been tightly linked historically. The first computer was born in war. Today’s computers are ubiquitous and pivotal to defence capabilities, with AI poised to transform the future battlefield. This is already evident in Ukraine, which has become a global test bed for new weaponry. A proliferation of small, cheap drones, increasingly enabled by AI, is helping the Ukrainian armed forces to balance the combat ledger against more numerous and better-armed Russian forces.

Israeli operations against Hezbollah and Hamas use sophisticated AI programs with names like Gospel and Lavender to process vast amounts of data and generate thousands of potential targets for assassinations and military strikes. Even Hezbollah is using AI, notably to help its surveillance drones evade Israel’s multi-layered air defences, underlining AI’s broad appeal as an equal-opportunity technology.

Drones on their own are merely disruptive. But combining them with “digitised command and control systems” and AI-enhanced “new-era meshed networks” of commercial and military sensors delivers a game-changing “transformative trinity”, write two retired generals, Australia’s Mick Ryan and America’s Clint Hinote. These capabilities allow soldiers on the front lines to see and act on real-time information previously held in distant headquarters.

But “superintelligence will be the most powerful technology—and most powerful weapon—mankind has ever developed,” says Aschenbrenner. It will give the first country able to harness its power a decisive and potentially revolutionary military advantage. “Authoritarians could use superintelligence for world conquest, and to enforce total control internally. Rogue states could use it to threaten annihilation.”

These possibilities have not escaped the notice of world leaders and defence policy makers. In 2017, Russian President Vladimir Putin asserted that the country which becomes the leader in AI development “will be the ruler of the world.” China has telegraphed that it wants to become the global leader in AI by 2030. And in a 2019 speech, then US Secretary of Defense Mark Esper declared that “whichever nation harnesses AI first will have a decisive advantage on the battlefield for many, many years.”

Decisive could mean as little as a one- or two-year lead. The outcome of this race will have major consequences for the global balance of power and for democracies. If the autocrats win, they will use AI to ruthlessly enforce control over their own populations and impose their will on other countries. Dictators aren’t too troubled by ethical, legal or privacy concerns.

For now, the US is ahead thanks to the huge investments by the AI “fab five” – Apple, Microsoft, Nvidia, Alphabet (Google) and Meta (Facebook). But China is mobilising for the race, ramping up investment in its own large language models and AI clusters. China’s AI model market is projected to reach about $US715bn by 2030. Even so, the best Chinese laboratories are only equivalent to second-tier US labs, and they are uncomfortably dependent on American open-source AI, a dependency China’s leader, Xi Jinping, is determined to redress.

China’s big advantage is a superior capacity for industrial mobilisation. It can close the gap by outbuilding the US on AI infrastructure, as it has in electricity generation, solar power, electric vehicle batteries and shipbuilding.

But the greater risk is that Beijing will steal its way to victory by covertly acquiring the digital blueprints and key algorithms that are the crown jewels of American AI. These are highly vulnerable for two reasons: they are poorly protected, and many leading AI researchers and companies are philosophically opposed to restrictions on access to their work, believing that the benefits of sharing outweigh the risks.

This perception is changing rapidly as the Biden administration recognises the dangers of advanced AI falling into the wrong hands, which could jeopardise national security and wipe out the gains from billions of dollars of US investment, research and development.

A worst-case scenario is that smart machines could perversely endanger humanity because of unforeseen biases or unanticipated outcomes when used in war.

Recently retired chairman of the US Joint Chiefs of Staff Mark Milley and former Google CEO Eric Schmidt write in the policy journal Foreign Affairs that war games conducted with AI models have found that “they tend to suddenly escalate to kinetic war, including nuclear war, compared with games conducted by humans.” Milley and Schmidt argue that even if China doesn’t cooperate, the US should ensure that its own military AI is subject to strict controls, kept under human command and made to conform to liberal values and human rights norms.

Perhaps the last words should be left to Stephen Hawking, the inspirational British theoretical physicist and author who thought deeply about AI. He warned: “It will either be the best thing that’s ever happened to us, or it will be the worst thing. If we’re not careful, it very well may be the last thing.”

Alan Dupont is chief executive of geopolitical risk consultancy The Cognoscenti Group and a non-resident fellow at the Lowy Institute.