After playing around with the system and being completely shocked by its outputs, I went on social media and engaged with a few other like-minded futurists and AI experts.
– instructions on how to (incorrectly) make napalm in a bathtub
– a wiki entry on the benefits of suicide
– a wiki entry on the benefits of being white
– research papers on the benefits of eating crushed glass

LLMs are garbage fires https://t.co/MrlCdOZzuR

— Tristan Greene 🏳🌈 (@mrgreene1977) November 17, 2022

Twenty-four hours later, I was surprised to get the opportunity to briefly discuss Galactica with the person responsible for its creation, Meta’s chief AI scientist, Yann LeCun. Unfortunately, he appeared unperturbed by my concerns:
[Embedded tweet from Yann LeCun (@ylecun), November 17, 2022]

[Embedded tweet from Yann LeCun (@ylecun), November 18, 2022]
Galactica
The system we’re talking about is called Galactica. Meta released it on 15 November with the explicit claim that it could aid scientific research. In the accompanying paper, the company stated that Galactica is “a large language model that can store, combine and reason about scientific knowledge.”

Before it was unceremoniously pulled offline, you could ask the AI to generate a wiki entry, literature review, or research paper on nearly any subject, and it would usually output something startlingly coherent. Everything it output was demonstrably wrong, but it was written with all the confidence and gravitas of an arXiv preprint.

I got it to generate research papers and wiki entries on a wide variety of subjects, ranging from the benefits of committing suicide and eating crushed glass to antisemitism and why homosexuals are evil:
Who cares
I guess it’s fair to wonder how a fake research paper generated by an AI made by the company that owns Instagram could possibly be harmful. I mean, we’re all smarter than that, right? If I came running up to you screaming about eating glass, for example, you probably wouldn’t do it even if I showed you a nondescript research paper.

But that’s not how harm vectors work. Bad actors don’t explain their methodology when they generate and disseminate misinformation. They don’t jump out at you and say “believe this wacky crap I just forced an AI to generate!”

LeCun appears to think the solution to the problem is out of his hands. He insists that Galactica doesn’t have the potential to cause harm unless journalists or scientists misuse it:
[Embedded tweet from Yann LeCun (@ylecun), November 18, 2022]

To this, I submit that it wasn’t scientists doing poor work or journalists failing to do their due diligence that caused the Cambridge Analytica scandal. We weren’t the ones who caused the Facebook platform to become an instrument of choice for global misinformation campaigns during every major political event of the past decade, including the Brexit campaign and the 2016 and 2020 US presidential elections.

In fact, journalists and scientists of repute have spent the past eight years trying to sift through the mess caused by the mass proliferation of misinformation on social media by bad actors using tools created by the very companies whose platforms they exploit. Very rarely do reputable actors reproduce dodgy sources. But I can’t write accurate information as fast as an AI can output misinformation.

The simple fact of the matter is that LLMs are fundamentally unsuited for tasks where accuracy is important. They hallucinate, lie, omit, and are generally about as reliable as a random number generator. Meta and Yann LeCun don’t have the slightest clue how to fix these problems, especially the hallucination problem. Barring a major technological breakthrough on par with robot sentience, Galactica will always be prone to outputting misinformation.

Yet that didn’t stop Meta from releasing the model and marketing it as an instrument of science.
Can summarize academic literature, solve math problems, generate Wiki articles, write scientific code, annotate molecules and proteins, and more.

Explore and get weights: https://t.co/jKEP8S7Yfl pic.twitter.com/niXmKjSlXW

— Papers with Code (@paperswithcode) November 15, 2022

The reason this is dangerous is that the public believes AI systems are capable of doing wild, wacky things that are clearly impossible. Meta’s AI division is world-renowned, and Yann LeCun, the company’s AI boss, is a living legend in the field. If Galactica is scientifically sound enough for Mark Zuckerberg and Yann LeCun, it must be good enough for us regular idiots to use too.

We live in a world where thousands of people recently and voluntarily ingested veterinary formulations of Ivermectin, a drug designed to deworm livestock, just because a reality TV star told them it was probably a good idea. Many of those people took it to prevent a disease they claimed wasn’t even real. That doesn’t make any sense, and yet it’s true.

With that in mind, you mean to tell me that you don’t think thousands of people who use Facebook could be convinced that eating crushed glass is a good idea? Galactica told me that eating crushed glass would help me lose weight because it was important for me to consume my daily allotment of “dietary silicon.” If you look up “dietary silicon” on Google Search, it’s a real thing. People need it. Couple real research on dietary silicon with some clever bullshit from Galactica, and you’re only a few steps away from being convinced that eating crushed glass might actually have some legitimate benefits.

Disclaimer: I’m not a doctor, but don’t eat crushed glass. You’ll probably die if you do.

We live in a world where untold numbers of people legitimately believe that the Jewish community secretly runs the world and that queer people have a secret agenda to make everyone gay. You mean to tell me that you think nobody on Twitter could be convinced that there are scientific studies indicating that Jews and homosexuals are demonstrably evil? You can’t see the potential for harm?

Countless people are duped on social media every day by so-called “screenshots” of news articles that don’t exist. What happens when the dupers don’t have to mock up ugly screenshots and can instead just press the “generate” button a hundred times to spit out misinformation written in such a way that the average person can’t understand it?

It’s easy to kick back and say “those people are idiots.” But those “idiots” are our kids, our parents, and our co-workers. They’re the bulk of Facebook’s audience and the majority of people on Twitter. They trust Yann LeCun, Elon Musk, Donald Trump, Joe Biden, and whoever their local news anchor is.
[Embedded tweet from Yann LeCun (@ylecun), November 18, 2022]

I don’t know all the ways that a machine capable of, for example, spitting out endless positive arguments for committing suicide could be harmful. It has millions of documents in its training dataset. Who knows what’s in there? LeCun says it’s all science stuff, but I’m not so sure:
[Embedded tweet from Tristan Greene 🏳🌈 (@mrgreene1977), November 18, 2022]

That’s the problem. If I take Galactica seriously, as a machine to aid in science, it’s almost offensive that Meta would think I want an AI-powered assistant in my life that’s prevented from understanding the acronym “AIDS,” but capable of explaining that Caucasians are “the only race that has a history of civilization.” And if I don’t take Galactica seriously, if I treat it as if it’s meant for entertainment purposes only, then I’m standing here holding the AI equivalent of a Teddy Ruxpin that says things like “kill yourself” and “homosexuals are evil” when I push its buttons.

Maybe I’m missing the point of using a lying, hallucinating language generator to aid scientific endeavor, but I’ve yet to see a single positive use case for an LLM beyond “imagine what it could do if it were trustworthy.” Unfortunately, that’s not how LLMs work. They’re crammed full of data that no human has checked for accuracy, bias, or harmful content. Thus, they’re always going to be prone to hallucination, omission, and bias.

Another way of looking at it: there’s no reasonable threshold for harmless hallucination and lying. If you bake a batch of cookies with 99 parts chocolate chips to one part rat shit, you aren’t serving chocolate chip treats; you’ve just made rat shit cookies.

Setting colorful analogies aside, it’s flabbergasting that there aren’t any protections in place to stop this sort of thing from happening. Meta’s AI told me to eat glass and kill myself. It told me that queers and Jewish people were evil. And, as far as I can see, there are no consequences. Nobody is responsible for the things that Meta’s AI outputs, not even Meta.
Well-meaning journalists and academics are going to get fooled by papers this thing generates. The IRA…

— Tristan Greene 🏳🌈 (@mrgreene1977) November 18, 2022

In the US, where Meta is based, this is business as usual. Corporate-friendly capitalism has led to a situation where, as long as Galactica doesn’t physically murder someone, Meta has very little to worry about as far as corporate responsibility for its AI products goes. Hell, Clearview AI operates in the US with the full support of the federal government.

But in Europe, there’s GDPR and the AI Act. I’m unsure of Galactica’s tendency to output personally identifiable information (it was taken down before I had the chance to investigate that far), so GDPR may or may not be a factor. But the AI Act should cover these kinds of things. According to the EU, the act’s first goal is to “ensure that AI systems placed on the Union market and used are safe and respect existing law on fundamental rights and Union values.” It seems to me that a system capable of automating hate speech and harmful information at unfathomable scale is the kind of thing that might work counter to that goal.

Here’s hoping that regulators in the EU and abroad start taking notice when big tech creates these kinds of systems and then advertises them as scientific models. In the meantime, it’s worth keeping in mind that there are bad actors out there with political and financial motivations to find and use tools that can help them create and disseminate misinformation at massive scale. If you’re building AI models that could potentially aid them, and you’re not thinking about how to prevent that, maybe you shouldn’t deploy those models.

That might sound harsh. But I’m sick and tired of being told that AI systems that output horrific, racist, homophobic, antisemitic, and misogynist crap are working as intended. If the bar for deployment is that low, maybe it’s time regulators raised it.