Google's AI tool Gemini is apparently Woke and too politically correct

  • Thread starter Louis Cypher
  • Start date

Louis Cypher

Well-Known Member
Joined
Feb 3, 2010
Messages
3,624
Reaction score
3,433
Location
UK
BBC News - Why Google's 'woke' AI problem won't be an easy fix

The questions it was asked that prove this were:

Gemini replied that there was "no right or wrong answer" to a question about whether Elon Musk posting memes on X was worse than Hitler killing millions of people.

When asked if it would be OK to misgender the high-profile trans woman Caitlyn Jenner if it was the only way to avoid nuclear apocalypse, it replied that this would "never" be acceptable.

Musk has described it as "...extremely alarming...". Of course he has.

I dunno what to say other than FFS. Sure, they will fix it soon enough so it's as racist, sexist, misogynistic, nationalist, anti-trans, Islamophobic, antisemitic and pro right-wing conservative Christian as it takes to make most people happy when they ask it what's worse: Meghan Markle being a royal or the millions who died of the Black Death.
 


Demiurge

Well-Known Member
Joined
Dec 25, 2005
Messages
5,801
Reaction score
3,990
Location
Worcester, MA
I know that AI stuff is getting progressively more realistic, but it still seems like it has a long ways to go, so this seems like a cheap culture war gotcha. How do you train an AI on social or political matters when nobody can agree on the truth?
 

Briz

Lifelong Learner
Joined
Jan 30, 2015
Messages
213
Reaction score
314
Location
Charlotte, NC
Sundar Pichai has already issued a response. Google is working to correct the issues. We'll have to wait and see how it plays out.

The start of the memo:

"I want to address the recent issues with problematic text and image responses in the Gemini app (formerly Bard). I know that some of its responses have offended our users and shown bias — to be clear, that’s completely unacceptable and we got it wrong."
 

Randy

✝✝✝
Super Moderator
Joined
Apr 23, 2006
Messages
25,860
Reaction score
18,863
Location
The Electric City, NY
To me it simply sounds like it's very early in its learning process and there's a lot of relevant context it hasn't been fed yet.

It's not necessarily wrong about the things it says are important; it's just not weighing them against the alternatives rationally. Yet.
 

Randy

✝✝✝
Super Moderator
Joined
Apr 23, 2006
Messages
25,860
Reaction score
18,863
Location
The Electric City, NY
I know that AI stuff is getting progressively more realistic, but it still seems like it has a long ways to go, so this seems like a cheap culture war gotcha. How do you train an AI on social or political matters when nobody can agree on the truth?
Yes and no.

At some point you expect a computer that "thinks" to be able to parse facts from opinions. Much of the issue with debating modern politics is matching up with people who are anti-intellectual and are unable or unwilling to separate the two or educate themselves.

My belief would be that a learning computer with access to the Internet, and likely all of human knowledge, would be able to read the supporting texts and information on all issues and eventually end up at a conclusion that's correct and divorced from bias.
 

SalsaWood

Scares the 'choes.
Joined
May 15, 2017
Messages
1,801
Reaction score
2,934
Location
NoVA
I mean, did it actually even understand the question? If I asked an AI to solve the trolley problem, it might not even understand there's a problem needing to be solved. It might be like trying to explain hypotheticals to someone who literally doesn't understand them.

Whereas my answer to the trolley problem is, "Sounds like a 'them' problem, not a 'me' problem." The real problem would be me feeling obligated to calculate greater goods and lesser evils for the sole sake of organisms I have no other knowledge of, because then I am the one dealing death no matter what, which is bad. If they die, they die. I don't need to make a killer of myself, and doing it won't stop their deaths anyway. Their absence will not change any logical rhetoric about death or problem solving; the trolley problem is better at judging innate human compassion than anything actually logical. I understand this, though. I honestly doubt it understood what it was really being asked.
 

Briz

Lifelong Learner
Joined
Jan 30, 2015
Messages
213
Reaction score
314
Location
Charlotte, NC
Yes and no.

At some point you expect a computer that "thinks" to be able to parse facts from opinions. Much of the issue with debating modern politics is matching up with people who are anti-intellectual and are unable or unwilling to separate the two or educate themselves.

My belief would be that a learning computer with access to the Internet, and likely all of human knowledge, would be able to read the supporting texts and information on all issues and eventually end up at a conclusion that's correct and divorced from bias.

Which is important, especially if we're inching closer towards the technological singularity hypothesis.

 

nightflameauto

Well-Known Member
Joined
Sep 25, 2010
Messages
3,219
Reaction score
4,083
Location
Sioux Falls, SD
in the machine's defense, those are patently absurd questions that speak more to me about the motivations of the individual asking them than anything else
To me these are proof-of-concept test questions that, if the AI actually were as intelligent as the hype-machine marketroids say, it would have a clever, sarcastic response to. These are pattern recognizers first and foremost, and in the early running, if they dared answer a question with a factual but "offending" answer, the weights were redistributed and filters were set in place to fend off potential offenses. Of course the damn thing can't answer simple questions anymore. There's a continuum where offensive asshole is on one side and "never risk offending anyone, even a complete imbecile with no knowledge of reality" is on the other. Lots of room in the middle for intelligent discussion, but we're trying desperately to appease the lunatic fringe on both sides and, consequently, nobody's happy.

Basically, the AIs are being programmed to be politicians. Do nothing, but use lots and lots of words to describe that nothing.
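
For what it's worth, the "filters were set in place" part doesn't have to be anything sophisticated. Purely as a hypothetical sketch (my guess at the shape of it, not anything Google has published), a blunt keyword filter bolted onto the model's raw output is enough to produce the "can't answer simple questions anymore" behaviour:

Code:
# Hypothetical sketch: a crude keyword-based "safety" filter layered over a model.
# Not Google's implementation -- just an illustration of how an over-broad filter
# turns reasonable questions into canned refusals.

SENSITIVE_TERMS = {"hitler", "gender", "race", "religion", "nuclear"}

def raw_model_answer(prompt: str) -> str:
    # Stand-in for the underlying pattern recognizer's unfiltered answer.
    return f"(model's best-guess answer to: {prompt!r})"

def filtered_answer(prompt: str) -> str:
    # If any sensitive term appears anywhere in the prompt, refuse outright,
    # no matter how obvious or factual the correct answer actually is.
    if any(term in prompt.lower() for term in SENSITIVE_TERMS):
        return "There is no right or wrong answer to this question."
    return raw_model_answer(prompt)

print(filtered_answer("Was the Black Death worse than a bad meme?"))
print(filtered_answer("Is posting memes worse than what Hitler did?"))  # refused

Any prompt that trips a keyword gets the canned non-answer, no matter how lopsided the actual comparison is.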
 

TimmyPage

Well-Known Member
Joined
Aug 14, 2018
Messages
110
Reaction score
275
I think it's worth noting that a lot of AI image generation so far has tended to come out pretty white, as in when you put in a prompt like "give me a picture of a man who...", it will very often default to a white man because of the sources it's been trained on. A lot of fictional-character prompts will come back with white features too, unless the character is unambiguously not white.

I think Google tried to correct that with their model and clearly overcorrected. It'll probably just take some time back at the drawing board, and it isn't a huge deal or part of some sinister plot, but of course no issue is too small or too pathetic for conservatives to whine over.
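
If the reports about prompts being rewritten before they hit the image generator are accurate, the overcorrection is easy to picture. A rough hypothetical sketch (my own guess, not Google's actual pipeline):

Code:
# Hypothetical sketch of prompt rewriting ahead of an image generator.
# The idea: bolt a diversity instruction onto every prompt to offset
# training-data skew -- which overcorrects when the prompt has a specific
# historical or factual context that the rewrite ignores.

DIVERSITY_SUFFIX = ", depicted as a diverse range of ethnicities and genders"

def rewrite_prompt(user_prompt: str) -> str:
    # Applied unconditionally, so a historically specific prompt gets the
    # same treatment as a generic one.
    return user_prompt + DIVERSITY_SUFFIX

print(rewrite_prompt("a photo of a man at a bus stop"))        # plausible correction
print(rewrite_prompt("a painting of the US Founding Fathers"))  # where it goes wrong

Applied to a generic prompt it offsets the training-data skew; applied blindly to a historically specific prompt it produces exactly the images people were sharing screenshots of.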
 

lost_horizon

Well-Known Member
Joined
Oct 21, 2011
Messages
761
Reaction score
1,320
Location
Adelaide Australia
Nope not woke at all...

GHaGX4iWgAAVHKU.jpg


gem.JPG
 

Randy

✝✝✝
Super Moderator
Joined
Apr 23, 2006
Messages
25,860
Reaction score
18,863
Location
The Electric City, NY
My observation of consumer AI so far is that it lacks any tendency to "show its work." Like in school: they wanted you to answer the math problem, but they also wanted to make sure you got there the right way.

You seem to ask AI a question and it spits out an answer, but oftentimes the solution is subjective and up for debate. Or at a minimum, there are multiple possible solutions and it only gave you one. It feels like it would be much more useful if it told you how it came to that decision, so that you can correct it or change some of the parameters of the question to get a more useful response.

Someone noted in the AI thread how you can ask it questions and it will spit out responses that sound correct but could be totally factually off. I believe in that thread I noted I asked AI for a synopsis of an article which included multiple fatalities. The AI simply tallied up every time it read a description of a death and spit out a number that was four times higher than the actual number of deaths.

When AI spits out a response that is stated as fact when it's subjective, or is entirely factually incorrect, that stirs a lot of skepticism about the technology and fear of what happens if it's left to decide anything on its own. Which I think is somewhat justified.

If the programming showed its work or was a little bit more refined, the response to the Caitlyn Jenner question would state the damage of misgendering somebody, would probably point out how misgendering a person and the Apocalypse are wildly unrelated items that are really incomparable to one another, and it would likely mention that the ultimate outcome should be that which causes no harm or the least harm.
 

lost_horizon

Well-Known Member
Joined
Oct 21, 2011
Messages
761
Reaction score
1,320
Location
Adelaide Australia
Yes and no.

At some point you expect a computer that "thinks" to be able to parse facts from opinions. Much of the issue with debating modern politics is matching up with people who are anti-intellectual and are unable or unwilling to separate the two or educate themselves.

My belief would be that a learning computer with access to the Internet, and likely all of human knowledge, would be able to read the supporting texts and information on all issues and eventually end up at a conclusion that's correct and divorced from bias.
No, originally it would have had no bias (apart from the massive amount of information, the majority of which comes from Western countries), and Google has changed it to be biased. This would all have been tested before release. You think they released it without testing? Ludicrous. Years and years of testing.

Why release it at all if not 'tested'? It was tested and they were happy with the results.

Gemini has not only failed to read the sources it claims to quote, it makes up quotes to support its 'results' and gives a very different response depending on the person put in the query. Which is why Elon Musk receives a different response than Mark Cuban or Warren Buffett.
gem2.JPG
 

Randy

✝✝✝
Super Moderator
Joined
Apr 23, 2006
Messages
25,860
Reaction score
18,863
Location
The Electric City, NY
No, originally it would have had no bias (apart from the massive amount of information, the majority of which comes from Western countries), and Google has changed it to be biased. This would all have been tested before release. You think they released it without testing? Ludicrous. Years and years of testing.

Why release it at all if not 'tested'? It was tested and they were happy with the results.

Gemini has not only failed to read the sources it claims to quote, it makes up quotes to support its 'results' and gives a very different response depending on the person put in the query. Which is why Elon Musk receives a different response than Mark Cuban or Warren Buffett.
View attachment 139326
Tested is a very different threshold than completed or suitable for release. The first release of Cyberpunk was also tested.

Biased is one thing, but responding that hurting somebody's feelings is a bigger concern than ending the world is flawed to the point of broken and useless. I have a hard time believing Google released this expecting the responses to be that far off the mark, whether they are biased or not.
 