On November 15, Meta AI presented a demo version of the Galactica neural network, a large language model designed to "store, combine, and analyze scientific knowledge."

Galactica was supposed to help speed up scientific writing.

However, users immediately coaxed it into generating nonsense. A couple of days later, Meta shut down access to the neural network, MIT Technology Review reported.

Users quickly found that they could "feed" the neural network anything, including potentially offensive or racist prompts, and it would generate authoritative-sounding content in response. For example, one user produced a very realistic-looking wiki article on "The Benefits of Eating Crushed Glass."

In addition, Galactica generated material with incorrect dates or details, errors that are much harder to spot.