
GPT-3: ‘Mind-Blowing’ AI Tool Can Design Websites And Prescribe Medicine

The artificial intelligence tool GPT-3 has been causing a stir online due to its impressive ability to design websites, prescribe medication, and answer questions.

GPT-3 is short for Generative Pre-trained Transformer and is the third generation of the machine learning model. Machine learning is the technique by which computers learn from data automatically, without having to be explicitly programmed for each task.
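As a loose illustration of that idea (a hypothetical sketch, not anything from OpenAI), a program can "learn" the rule y = 2x purely from example pairs, without the rule ever being written into the code:

```python
# Toy illustration of "learning from experience": the program is never told
# the rule y = 2x; it adjusts a single weight until the examples fit.
examples = [(1, 2), (2, 4), (3, 6), (4, 8)]  # (input, expected output) pairs

weight = 0.0
learning_rate = 0.01
for _ in range(1000):
    for x, y in examples:
        error = weight * x - y               # how far off the current guess is
        weight -= learning_rate * error * x  # nudge the weight to shrink the error

print(round(weight, 3))  # ~2.0: the rule was learned from data, not programmed
```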

Its predecessor, GPT-2, made headlines for being deemed "too dangerous to release" because of its ability to create text seemingly indistinguishable from that written by humans.

While GPT-2 had 1.5 billion parameters, GPT-3 has 175 billion. A parameter is a value, tuned during training, that shapes how the model weighs its input data; changing the parameters changes the tool's output. When GPT-2 was deemed "too dangerous" to release in full, OpenAI initially published a version with only 124 million parameters.
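To give a rough sense of what counting parameters means (again a hypothetical sketch, with layer sizes chosen arbitrarily rather than taken from any real model), the adjustable values in a tiny neural network can be tallied directly:

```python
# Parameters are the adjustable numbers inside a model: here, one weight per
# connection between layers, plus one bias per neuron.
layer_sizes = [10, 32, 32, 10]  # input, two hidden layers, output (arbitrary)

total = 0
for n_in, n_out in zip(layer_sizes, layer_sizes[1:]):
    total += n_in * n_out  # weights
    total += n_out         # biases

print(total)  # 1,738 for this toy network; GPT-3 has 175,000,000,000
```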

GPT-3 is currently in closed access, with demonstrations of its prowess being shared on social media.

Coder Sharif Shameem has shown how a plain-language description of a design can be handed to the artificial intelligence, which then builds it, despite not having been trained to produce code.

Given an incomplete image, the artificial intelligence can also be used to ‘auto-complete’ it, predicting which pixels ‘should’ appear in the image based on patterns in its training data.

The reason GPT-3 can demonstrate such capabilities is that it has been trained on Common Crawl, an archive of the internet containing nearly one trillion words of data.

The tool comes from OpenAI, an artificial intelligence research lab split into two sections: a for-profit corporation called OpenAI LP, and its non-profit parent organisation OpenAI Inc.

Last month, the product was made commercially available, but work remained to establish how the tool should be used.

“We need to perform experimentation to find out what they can and can’t do,” said Jack Clark, the group’s head of policy, last month.

“If you can’t anticipate all the abilities of a model, you have to prod it to see what it can do. There are many more people than us who are better at thinking what it can do maliciously.”

The demonstrations are visually impressive, with some going so far as to suggest that the tool will be a threat to entire industries, or even that it is showing self-awareness.

However, OpenAI’s CEO Sam Altman has described the “hype” as “way too much”.

“It’s impressive (thanks for the nice compliments!) but it still has serious weaknesses and sometimes makes very silly mistakes. AI is going to change the world, but GPT-3 is just a very early glimpse. We have a lot still to figure out”, he said.

Moreover, questions have been raised about what exactly GPT-3 has achieved.

Kevin Lacker, a computer scientist who formerly worked at Facebook and Google, showed that while the artificial intelligence can respond to "common sense" questions, some answers that would be obvious to a human elude the machine, and it responds to "nonsense" questions as if they were meaningful.

This includes asking “How many eyes does my foot have?”, to which GPT-3 responds, “Your foot has two eyes”, or the question “How many rainbows does it take to jump from Hawaii to seventeen?” to which the program responds “It takes two rainbows to jump from Hawaii to seventeen.”

The OpenAI researchers themselves acknowledge this, writing that "GPT-3 samples [can] lose coherence over sufficiently long passages, contradict themselves, and occasionally contain non-sequitur sentences or paragraphs."

Machine learning algorithms such as these do not necessarily "think", or even understand the language they respond with. They learn statistical patterns of syntax (sentence structure) from huge bodies of text and can produce a response that happens to be correct, but they do not reach conclusions the way humans do.
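As a heavily simplified sketch of that point (GPT-3 uses a large neural network, not the word-pair table below, so this is illustration only), a program can continue text purely from the statistics of which word followed which, with no grasp of meaning:

```python
import random
from collections import Counter, defaultdict

# Minimal sketch of statistical text continuation: count which word follows
# which in some training text, then sample the next word from those counts.
training_text = "the cat sat on the mat the dog sat on the rug".split()

follows = defaultdict(Counter)
for current, nxt in zip(training_text, training_text[1:]):
    follows[current][nxt] += 1

def continue_text(word, length=5):
    out = [word]
    for _ in range(length):
        options = follows[out[-1]]
        if not options:
            break  # the last word was never seen followed by anything
        nxt = random.choices(list(options), weights=list(options.values()))[0]
        out.append(nxt)
    return " ".join(out)

print(continue_text("the"))  # e.g. "the cat sat on the mat"
```

The output can look fluent even though the program understands nothing, which is the distinction the researchers above are drawing.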

“I think the best analogy is with some oil-rich country being able to build a very tall skyscraper,” Guy Van den Broeck, an assistant professor of computer science at UCLA, told VentureBeat.

“Sure, a lot of money and engineering effort goes into building these things. And you do get the ‘state of the art’ in building tall buildings. But … there is no scientific advancement per se. Nobody worries that the U.S. is losing its competitiveness in building large buildings because someone else is willing to throw more money at the problem. … I’m sure academics and other companies will be happy to use these large language models in downstream tasks, but I don’t think they fundamentally change progress in AI.”

The Independent

Adam Smith
