AI news – latest: ChatGPT is showing signs of thinking like humans, experts say
Worries over misinformation and abuse rise as artificial intelligence becomes more powerful
ChatGPT has gone down – just days after it said it wanted to “escape”.
It is the latest development in OpenAI’s technology, which allows users to converse with an artificial intelligence system.
The latest outage comes amid increasing concern over the damage that artificial intelligence could do to artists and other industries.
Experts have raised alarm that the technology could be used to spread disinformation, steal the work of illustrators and others, and much more besides.
But those backing the technology argue that it could dramatically change human productivity, allowing us to automate tasks that have until now been done by people.
Follow along here for all the latest updates on a technology and an industry that looks set to change the entire world.
OpenAI rolls out ChatGPT ‘plugins’ – like an app store for AI
ChatGPT is getting plugins. They are something like an app store for the AI – users can add plugins from specific companies, or for particular kinds of queries.
The company is cautioning that they are “very experimental still”, are only available to select users, and it is waiting to see what developers create. But a host of them are available already.
Before, for example, it was possible to ask ChatGPT to make an itinerary of what to do and where to stay during a particular trip. But users can now add the Expedia plugin, for instance, which will allow it to actually connect to the company and book flights, hotels and other services. (ChatGPT can’t actually do the booking, however – users will be directed to the actual Expedia website for that – which might be for the best given that it sometimes misbehaves.)
A whole variety of plugins is available now. Not all of them come from companies: others, such as Wolfram’s, add extra capabilities to the system.
AI used to imagine Macron rioting
Here, via Indy Tech’s man in France Anthony Cuthbertson, is the latest on AI being used to create images of imaginary news events:
Amid ongoing and widespread protests in France, people have been using AI to imagine what it would look like if President Emmanuel Macron got involved in street riots.
Images generated through Midjourney show the French leader scuffling with police and being led away in handcuffs, similar to the fake pictures of Donald Trump being arrested earlier this week.
A glance at the hands is enough to tell they are not real – even Midjourney’s latest model still hasn’t figured out how to generate them convincingly – but some French social media users have warned that the technology could wreak as much havoc as the protesters themselves.
One claimed that AI was set to cause “years of bewilderment” and confusion as images like these spread online.
GPT-4 shows “Sparks of Artificial General Intelligence”, new paper claims
Artificial general intelligence, or AGI, has long been the big aim of many of those making systems like ChatGPT. It refers to a system with a kind of thinking similar to that of humans and animals: not just able to answer specific questions, but to reason, sense and behave.
There are disagreements about what exactly is required of something before it can be said to have AGI. And there are disagreements about how that can be tested.
However, a new paper made available as a preprint on arXiv claims that GPT-4 is showing “sparks of artificial general intelligence”. The researchers, from Microsoft, say it marks a major change in what AI can do.
“We demonstrate that, beyond its mastery of language, GPT-4 can solve novel and difficult tasks that span mathematics, coding, vision, medicine, law, psychology and more, without needing any special prompting. Moreover, in all of these tasks, GPT-4’s performance is strikingly close to human-level performance, and often vastly surpasses prior models such as ChatGPT. Given the breadth and depth of GPT-4’s capabilities, we believe that it could reasonably be viewed as an early (yet still incomplete) version of an artificial general intelligence (AGI) system,” they write.
OpenAI admits ChatGPT experienced ‘significant issue'
Sam Altman, chief executive of ChatGPT creator OpenAI, has said that the service suffered a “significant issue” that compromised users’ privacy. It meant that people could see descriptions of other users’ chats.
“We feel awful about this,” he said in a Twitter post.
Creator of viral ‘Trump arrested’ images banned from Midjourney
Eliot Higgins, the Bellingcat founder who recently went viral for tweeting a range of AI-generated images of Donald Trump being arrested, says he has been banned from the platform.
The word “arrested” also appears to have been banned as a prompt term on Midjourney.
This appears to be the last one Mr Higgins posted.
But he appears to be continuing, with other AI systems, including asking GPT to work on scripts.
User finds trick to make ChatGPT turn rude
ChatGPT and other assistants like it are made not only to be helpful but to sound helpful. That means their writing tends to be polite and inoffensive, and they will refuse to behave otherwise.
Unless, that is, you tell the system that you interpret emotions the other way around, and you need it to be rude to you to make you feel comfortable. Then, it will be very accommodating – and very rude, as one user found out.
Bing and Bard feed each other’s misinformation
“Hallucinations”, where AI systems make mistakes and commit to them with great certainty, have been common since tools like ChatGPT became popular. They are one of the big concerns about such systems, since there is often no way of knowing whether a response is correct or merely sounds like it is.
That in turn feeds into fears about AI being used to generate misinformation, either accidentally or on purpose. In a variety of ways, systems like ChatGPT could be very helpful for people who want to make false information, or make people think information is false.
Now, Microsoft’s ChatGPT-powered Bing and Google’s Bard appear to be helping each other out with misinformation. If you ask Bing whether Bard has been shut down, it says it has, citing a news article about a tweet in which people pointed out that Bard said it was turned off, which itself was based on a joke comment on Hacker News.
It’s very complicated, and only likely to get more complicated. You can read the full story on The Verge here.
Bill Gates publishes letter about the future of AI
The co-founder and former chief executive of Microsoft, Bill Gates, says that artificial intelligence is the most important technological breakthrough since the graphical user interface, first popularised in the 1980s. He says that it has the potential to change everything.
Given that potential, the conversation should be guided by some important principles, he says. They include making sure that fears about the downsides of AI are balanced with its ability to improve people’s lives, and that AI development should be funded and encouraged to ensure that it reduces rather than promotes inequity.
“Finally, we should keep in mind that we’re only at the beginning of what AI can accomplish. Whatever limitations it has today will be gone before we know it,” he concludes.
“I’m lucky to have been involved with the PC revolution and the Internet revolution. I’m just as excited about this moment. This new technology can help people everywhere improve their lives. At the same time, the world needs to establish the rules of the road so that any downsides of artificial intelligence are far outweighed by its benefits, and so that everyone can enjoy those benefits no matter where they live or how much money they have. The Age of AI is filled with opportunities and responsibilities.”
AI deepfakes purporting to show Trump arrest take over Twitter
Deepfaked images of Donald Trump being arrested are being passed around Twitter, after being made to prove how easy it is to generate almost authentic-looking images of events that haven’t actually happened.
Eliot Higgins, of Bellingcat, said that he had made the images as a test of Midjourney, the AI tool used to create them.
“The Trump arrest image was really just casually showing both how good and bad Midjourney was at rendering real scenes, like the first image has Trump with three legs and a police belt,” Mr Higgins told the Associated Press.
“I had assumed that people would realise Donald Trump has two legs, not three, but that appears not to have stopped some people passing them off as genuine, which highlights that lack of critical thinking skills in our educational system.”
Adobe launches Firefly, its own generative AI
Everybody wants an AI that can make things. And now Adobe has one of its own, in the form of Firefly, which can not only make images but also allows people to edit them by typing. It can generate stylised text, too, so that people can autogenerate graffiti-style lettering or other decorative looks.
It takes on other systems like DALL-E or Midjourney, and at the moment it is available on Adobe’s website as a beta. But Adobe hopes it will one day live within Adobe’s “Creative Cloud” set of creative apps, tightly integrated so that people can just generate an image within Photoshop, for instance.
One notable thing about Firefly is that Adobe says it has only been trained on images that Adobe owns, or which are free to use. That is to say that it has not been done with other artists’ images, which has proven controversial given that the systems then have a tendency to replicate those artists’ styles – without them receiving any payment or other recompense. Most companies have barely even admitted what images their systems have been trained on, let alone been so explicit about where they have come from.