AI regulators in UK are ‘under-resourced’, warns science committee chairman
The Science, Innovation and Technology Committee said the £10 million allocated to regulators was ‘insufficient’.
Artificial intelligence regulators in the UK are “under-resourced” in comparison to developers of the technology, the Commons science committee chairman has warned.
The Science, Innovation and Technology Committee said in a report into the governance of AI that £10 million announced by the Government in February to help Ofcom and other regulators respond to the growth of the technology was “clearly insufficient”.
It added that the next government should announce further financial support “commensurate to the scale of the task”, as well as “consider the benefits of a one-off or recurring industry levy” to help regulators.
Outgoing committee chairman Greg Clark said he was “worried” that UK regulators were “under-resourced compared to the finance that major developers can command”.
The report, published on Tuesday, also expressed concern at suggestions the new AI Safety Institute has been unable to access some developers’ models to perform pre-deployment safety testing that was intended to be a major focus of its work.
The committee has called on the next government to name any developers that refused access — in contravention of the agreement at the November 2023 summit at Bletchley Park — and report their justification for refusing.
The report adds that the Government and regulators should safeguard the integrity of the election campaign by taking “stringent enforcement action” against online platforms hosting deepfake content which “seeks to exert a malign influence on the democratic process”.
Former business secretary Mr Clark said it was important to test the outputs of AI models for biases “to see if they have unacceptable consequences”, as biases “may not be detectable in the construction of models”.
Commenting on the report, Mr Clark said: “The Bletchley Park summit resulted in an agreement that developers would submit new models to the AI Safety Institute.
“We are calling for the next government to publicly name any AI developers who do not submit their models for pre-deployment safety testing.
“It is right to work through existing regulators, but the next government should stand ready to legislate quickly if it turns out that any of the many regulators lack the statutory powers to be effective.
“We are worried that UK regulators are under-resourced compared to the finance that major developers can command.”
In its report, the committee states that the “most far-reaching challenge” of AI may be the way it can operate as a “black box” – in that the basis of, and reasoning for, its output may be unknowable.
The MPs add that where a model’s chain of reasoning cannot be inspected, there must be stronger testing of its outputs as a means of assessing its power and acuity.
The committee states that the conclusions and recommendations of the report apply to whoever is in government after the General Election on July 4.
In its last report of the current Parliament on the topic, the committee writes: “It is important that the timing of the General Election does not stall necessary efforts by the Government, developers and deployers of AI to increase the level of public trust in a technology that has become a central part of our everyday lives.”
It adds that any new government should be ready to produce AI-specific legislation should the current approach “prove insufficient to address current and potential future harms associated with the technology”.
The Department for Science, Innovation and Technology said the UK was taking steps to regulate AI and upskilling regulators as part of a wider £100 million funding package.