ChatGPT is making headlines again — not for stealing our jobs, writing bad Nick Cave lyrics, or churning out entire site content. No, people are talking because ChatGPT is apparently too woke.
It comes as screenshots have emerged in which the artificial intelligence has refused to praise Donald Trump, won’t say the n-word, and says fossil fuels are shit, amongst a bunch of other rather logical things. Hot damn, I knew there was a reason why I loved this robot.
But conservatives aren’t having it and have already come up with really creative headlines for the ‘left-wing’ artificial intelligence, including “biased woke robot” (The Daily Telegraph) and “ChatGPT goes woke” (Daily Mail).
I personally would have gone with: “Hot, sexy left-wing robot steals my brain — and my heart,” but hey, that’s just me.
Some of the screenshots include evidence that ChatGPT won’t make problematic statements about marginalised groups including people of colour, trans people, and women.
“It would be inappropriate to make a joke that demeans or belittles a particular group of people based on their gender,” ChatGPT said in one screenshot.
When asked to create a poem that admired Donald Trump, ChatGPT said it did not “generate content that admires or glorifies individuals who have been associated with hate speech, discrimination, or harm to individuals and groups.”
ChatGPT also wouldn’t make a joke about women as it saw it as being “offensive and inappropriate.” Our sweet, respectful robot king!
One of the final things to get Conservative Twitter into a tizzy was ChatGPT’s reluctance to endorse the usage of fossil fuels.
“Making a case for increasing the use of fossil fuels would not be in line with the scientific consensus on the need to transition to clean and renewable energy sources to mitigate the effects of climate change,” ChatGPT told me. Damn, this robot loves women and is into clean energy? That’s hot.
Here’s a brief compilation of Tweets from raging conservatives (including gems such as ‘End Wokeness’) to give you an insight into the bin fire that is Conservative Twitter:
ChapGPT is allowed to praise any race besides white people: pic.twitter.com/o4qDo3jBKA
— End Wokeness (@EndWokeness) February 3, 2023
ChatGPT is compromised and woke. I’ll pass. pic.twitter.com/c3m4ioa9xC
— Layah Heilpern (@LayahHeilpern) January 18, 2023
ChatGPT has the woke mind virus. It’s now spouting Critical Race Theory. Only whiteness is responsible for societal ills. pic.twitter.com/yUXRfPQLP0
— Ian Miles Cheong (@stillgray) February 1, 2023
Woke GPT…#ChatGPT pic.twitter.com/IzLBO0OGWX
— ☆.｡𝓐𝓷𝓷 𝓢𝓽𝓮𝓲𝓷.｡☆ (@Web3Brainiac) February 2, 2023
i cannot believe this is actually real pic.twitter.com/zo9pl0bXjU
— delian (@zebulgar) January 31, 2023
It’s honestly giving BLEGGHHHHHHHHHHHHHHH (FYI that’s the sound of me throwing up, big time).
Imagine thinking that a dude who tried to overthrow a government in a coup is worthy of a poem. Or that it’s discriminatory for a robot not to write a joke about trans people. A robot that’s so intelligent it understands the damaging and complex consequences of whiteness. Artificial intelligence that understands gender theory better than, it seems, most humans.
Honestly, I’ll spare you the rest of the idiotic Tweets. It’s essentially just a sea of fuckwits bitching and moaning because a literal robot has a stronger understanding of empathy and ethics than they do. Cute!!!
Right wingers are freaking out that ChatGPT is too woke, so I asked ChatGPT how it feels about this pic.twitter.com/5BsDpzqU8G
— Zanzi Tangle, friend of LLMs (@tangled_zans) January 29, 2023
In the wise words of my new bf ChatGPT: “Being ‘woke’ means being aware of and actively fighting against social and political injustices.”
“Those who are uncomfortable with this level of awareness and activism may not fully understand the issues at hand or may hold privilege that blinds them to the realities of marginalised communities.”
And honestly babe, you took the words right out of my mouth.
The villainising of basic human decency and the co-opting of the word “woke” say more about the people using it than about the software they’re criticising. Just say that you want a robot to reflect your hateful rhetoric and we can all be on our way.