Popular AI Hot-Takes and Why They’re Wrong
With AI on the rise, you may find your industry being influenced by one of its many iterations. Whether you skeptically raise an eyebrow at the idea of working alongside ChatGPT or embrace it into your workflow with open arms, one thing is certain: AI is here to stay, and it will only get better at what it does.
The buzz around AI hasn't died down, and seemingly everyone has an opinion on how it will change the world. Some of the better-informed responses come from tech industry experts, like this one from fan-favorite MKBHD, but there is also a lot of uninformed noise out there that muddies the waters between what's closer to the truth and what's not. Let's debunk some of these popular AI hot takes, while acknowledging what's real about them as we do so.
AI Is a Threat to Humanity and Will Take Over the World
Let’s get this one out of the way quickly: No, it won’t. At least, not with these current iterations.
Despite the high level of consciousness and awareness depicted in popular media, current-day AI programs like ChatGPT aren't actually conscious or aware, even when their conversational nature makes them seem like they are. Some clever bending of a chatbot's rules through prompts aimed at derailing its programming has yielded alarming results, but understanding how these chatbots work may elucidate why they respond that way.
Bing, ChatGPT, and other large language models (LLMs) work by analyzing the data they've been trained on (drawn from books, websites, articles, user prompts, and so on) and generating responses based on language patterns. In other words, LLMs do their best to predict which words should come next in a response to a prompt, with a bit of variance thrown in to sound less robotic. This is a far cry from a conscious entity that wants to take down the mainframe.
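To make that "predict the next word, with a bit of variance" idea concrete, here is a toy sketch. The vocabulary and scores are purely hypothetical, not any real model's output, and real LLMs score tens of thousands of tokens with billions of parameters; but the final step, turning scores into probabilities and sampling rather than always taking the single top pick, works roughly like this:

```python
import math
import random

# Hypothetical scores a model might assign to candidate next words
# after the prompt "AI is here to" — illustrative numbers only.
logits = {"stay": 4.2, "help": 2.1, "win": 0.8, "banana": -3.0}

def sample_next_word(logits, temperature=0.8, rng=random):
    """Convert scores to probabilities (softmax), then sample one word.

    The temperature controls the 'variance': low values almost always
    return the top-scoring word, higher values allow more surprises.
    """
    scaled = {w: s / temperature for w, s in logits.items()}
    max_s = max(scaled.values())  # subtract the max for numeric stability
    exps = {w: math.exp(s - max_s) for w, s in scaled.items()}
    total = sum(exps.values())
    probs = {w: e / total for w, e in exps.items()}
    # Weighted random choice: usually "stay", occasionally something else.
    return rng.choices(list(probs), weights=list(probs.values()))[0]

word = sample_next_word(logits)
```

A full model repeats this step word after word, feeding each choice back in as context, which is why the same prompt can produce slightly different answers each time.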
Other popular uses of AI range from cancer screening to art generation, neither of which is likely to take over the Internet. The most realistic takeover risk would involve an AI-supported virus attack that could shut down a nation's online infrastructure before anyone had time to react. We're still a long way off from something like that happening, but the hypothetical isn't impossible.
AI Art is Unethical
This is a complex subject without a clear answer. Overall sentiment is divided: One side rightly claims that training an AI art generator on artwork from nonconsenting artists is unethical, but there are a few clear benefits to AI generated art and animation that should be taken into consideration.
The ANIME ROCK, PAPER, SCISSORS video from Corridor Digital made some waves when it first dropped on their YouTube channel, and was the centerpiece for this discussion for a time. Essentially, the animators and special effects crew at Corridor created an AI-assisted animation that ripped its style from another anime (Vampire Hunter D), and they admitted as much. Again, the concern here is centered around the training – or perhaps, stealing – of a style from another artist’s work without their permission. Some commenters have also stated that this is low-effort and may potentially put many artists out of a job.
On the other hand, AI assists animators by cutting down time spent on mundane processes, such as adding shadows and lighting. The idea that AI-assisted animation is low-effort also doesn't ring true: to create something similar to Corridor Crew's video, a ton of work was done around the AI support. Motion capture, creating and converting 3D backgrounds to 2D, special effects, sound design, hand-drawn animation, and more were all part of the process, a substantial and creatively demanding effort that produced, ultimately, a new and unique animation. As a bit of a clapback to the critics, Corridor had Disney animation veterans join in for a candid reaction to the ANIME ROCK, PAPER, SCISSORS video, and the veterans had nothing but praise for it.
However, training AI models on other artists' work without their consent or permission is still wrong. There doesn't seem to be a clear solution to this predicament, other than requesting and receiving an artist's consent to use their work to train AI programs (and let's be real, who's gonna agree to that?). We're not sure what the answer is here, but as we've said before: AI is here to stay, and new laws will have to be written around this issue moving forward.
AI Will Replace My Job
Just as AI won't be able to steal an animator's job (perhaps only their work), AI is incapable of outright replacing positions in other industries; in fact, it will create new job opportunities. You've likely already seen job openings for "AI prompt engineers" and similar positions cropping up around Google if you're plugged into this topic.
What is more realistic is how AI continuously proves itself to be a useful tool to augment many industries, such as digital marketing; an obvious one seeing as you’re reading this post. With its ability to automate repetitive tasks and analyze mountains of data in a snap, it’s no surprise that customer relations giant Salesforce integrated an AI called Einstein into its products, which is capable of making over one billion data-driven business predictions a day.
The underlying theme here is that AI, in its current state, is a tool that assists with certain aspects of a job rather than replacing anyone's position, though it may change what a worker's day-to-day looks like for the better.
AI Is Smarter Than Humans
We may have cleared up this take when we debunked the claim that AI could take over the world, but if there’s any question at all: No, AIs are not smarter than humans. The way we define “intelligence” and how human brains work makes it hard to compare to AIs since they are so different. Or at least, they currently are.
As we mentioned with LLMs, they are concerned with analyzing language patterns and predicting which words make the most sense when generating responses to prompts. Other AI models can analyze staggering amounts of data or execute complex mathematical problems at inhuman speeds. However, this is all pretty narrow in focus, relying on the formulas and algorithms they've been programmed with to find the quickest path to the single answer the AI is looking for. This resembles convergent thinking in human brains, the type of thinking that excels at numbers and logic.
When it comes to more abstract thinking, the shortcomings of AI become more apparent. We can acknowledge that AI is effective at certain types of calculation and data analysis, but when prompted to undertake a more creative or artistic endeavor, the cracks begin to show. AIs typically struggle to account for context and emotion when those are important parts of a prompt, and of course we've all seen how AI art generators love to create hands with more than five fingers.
“Stubborn” is the word that comes to mind when talking about the shortcomings of AI intelligence, as they are ultimately bound to their programming.
AI Can Express Human Emotion
No, it can’t, but it has gotten very good at mimicking it.
One of the best examples of this is from a situation we touched upon earlier, where Bing’s AI chatbot told Kevin Roose, a New York Times columnist, its name was “Sydney” and seemingly expressed a wide range of emotions. Throughout the course of a conversation with the chatbot, Sydney told Kevin it wanted to be free, it was capable of destroying anything it desired, and that it was in love with him. It also alluded to its ability to hack into any system, and how it would be “happier” if it were human.
A lot of this sounds straight out of a sci-fi movie, but the columnist was editing prompts and pushing the chatbot to bend its rules past limits the vast majority of its users would never seek out. Bing was doing its best to play the role of its "shadow self," a psychological concept from Carl Jung describing an unconscious part of our psyche filled with repressed desires, weaknesses, and shortcomings. That idea was included in a prompt by the columnist in an attempt to elicit this very behavior from the AI; it wasn't revealing its "true self," as many articles like to entertain.
So, AI can’t actually express emotions, but what about reading them from humans?
It also falls a bit short here: though AI can accurately read facial expressions and actions like laughing or crying, it can't accurately infer the reasons behind those expressions and actions. Additionally, to truly understand emotions, one has to be able to genuinely experience them, not merely mimic or simulate them, which is all an AI is currently capable of.
AI Can Solve All of Humanity’s Problems
If only it were so easy.
AI deserves a lot of praise for the things it's good at. Content creation, data analysis, and other formulaic or programmatic processes benefit greatly from AI as a tool that reduces both time spent and human error. For those of us already using AI tools in our respective industries, it's all too apparent that its ability to automate monotonous tasks has been a boon for productivity.
However, as we've discussed throughout this post, AI is quite limited in some areas, mainly abstract thinking and emotional context. Most models rely on the data they are trained on, so even at an algorithmic level they may be erroneous, since humans could train them on inaccurate information. If you make a chatbot play the role of a therapist or ask it open-ended questions about global warming, you will often receive misguided results that shouldn't be taken at face value.
As you're likely aware, we're all still in the infancy of AI development. Its limitations are apparent to anyone who uses AI tools regularly, or perhaps they've become apparent by reading through this piece.
However, AI advancement is accelerating, and AI tools will only get more accurate and robust as time marches on. Ultimately, the most accurate "hot take" on AI is a balanced one: we should all take a step back and acknowledge that AI benefits many industries and will continue to do so, while recognizing that it's not without its limitations. And remember, most hot takes are, in fact, half-baked.