Transparency over what goes into developing artificial intelligence systems is crucial, but the push to improve it must be led by regulators, not private companies.
Nick Clegg, head of global affairs at Meta, today made the case for openness as the way forward, arguing in the Financial Times that greater transparency over how AI works “is the best antidote to the fears” surrounding the technology.
Since its launch last November, ChatGPT has captured the public imagination with its ability to quickly respond to users’ questions in a personable way.
The app is an example of generative AI, which produces text or other media in response to prompts.
It was trained by OpenAI on a swathe of internet text, books, articles and websites, with a knowledge cutoff of September 2021.
The problem is that OpenAI does not disclose the data on which the chatbot was trained, so there is no way to directly fact-check its responses.
Meta, meanwhile, believes its recent decision to make publicly available 22 “system cards”, which offer an insight into the AI behind how content is ranked on Facebook and Instagram, is a step towards improving transparency.
However, the system cards themselves offer only a superficial view of how Meta’s AI systems are used, not a comprehensive look at how responsibly those systems are designed.
The cards give an “aerial view,” according to David Leslie, director of ethics and responsible innovation research at the Alan Turing Institute, the UK’s national institute for artificial intelligence.
“It will talk about how the data might have been collected, it gives very general information about the components of the system and how some of the choices were made,” he said.
Some may see the cards as a first step, but in an industry where controlling access to information is a fundamental source of revenue, companies have little incentive to give away trade secrets, even when such disclosures are needed to build public trust.
So far, there are no policy regimes in place to force private sector companies to be sufficiently transparent about AI.
However, the ground is being prepared in the UK by calls from campaigners, and a private members’ bill is due for a second reading in the House of Commons in November.
The next step for regulators is to deliver concrete guidelines governing which information is made accessible, and to whom, in order to improve accountability and safeguard the public.