Alibaba has become the latest Chinese technology giant to unveil its own ChatGPT-like AI model.
Tongyi Qianwen, which translates to “truth from a thousand questions”, will be added to the firm’s existing apps, including workplace messaging and a voice assistant.
Like OpenAI’s GPT tech, which has been added to websites from Microsoft’s Bing to travel planner Expedia, the chatbot will also be offered to clients for use in their own products and services.
A video demonstration showed it summarising meetings, writing emails, and giving shopping tips.
Alibaba chief executive Daniel Zhang said it “will bring about big changes to the way we produce, the way we work, and the way we live our lives”.
It follows last month's debut of similar technology from Chinese search giant Baidu, whose chatbot was shown understanding various languages, answering questions, completing maths calculations, and generating images.
All of these releases – including Google’s Bard – are known as large language models, which are trained on an enormous amount of text data to generate answers, summarise information and carry out realistic conversations.
The success of OpenAI’s GPT model, which was recently upgraded to improve its ability to understand and complete tasks, has triggered an AI arms race across the world.
In China alone, Alibaba and Baidu’s releases are rivalled by similar products from the likes of media giant Tencent, gaming company NetEase, e-commerce platforms, universities, and dedicated AI firms.
The Chinese government has published draft rules outlining how such generative AI services should be managed, notably that they should adhere to “core socialist values”.
It’s a sign that the tech will have to adhere to the same tight restrictions as the rest of the internet in China.
The rules, published by the country's internet regulator, the Cyberspace Administration of China, also require providers to protect user data or risk criminal investigation and fines.
Analyst Charlie Chai said the rules could slow down progress in the AI space “in exchange for a more orderly and socially responsible deployment of the technology”.
It comes after Elon Musk joined a group of AI experts in calling for a six-month pause on the training of AI systems more powerful than GPT-4.
Their letter, issued by the Future of Life Institute and signed by more than 1,000 people, warned “AI systems with human-competitive intelligence can pose profound risks to society and humanity”.
In March, Italy became the first Western country to block ChatGPT while its data protection authority investigated the chatbot's collection of user information.
Italian authorities said the bot – which has more than 100 million monthly users – would remain blocked pending an investigation into a suspected breach of European data protection rules and a failure to verify the age of its users.
EU law enforcement agency Europol also warned ChatGPT could be exploited by criminals and used to spread disinformation.
Elsewhere, schools in New York and universities in Japan have also restricted ChatGPT over fears students could use it to write assignments for them.