Google's AI tool Gemini is apparently Woke and too politically correct

  • Thread starter Louis Cypher
  • Start date

lost_horizon

Well-Known Member
Joined
Oct 21, 2011
Messages
632
Reaction score
1,054
Location
Adelaide Australia
Tested is a very different threshold than completed or suitable for release. The first release of Cyberpunk was also tested.

Bias is one thing, but responding that hurting somebody's feelings is a bigger concern than ending the world is flawed to the point of being broken and useless. I have a hard time believing Google released this expecting the responses to be that far off the mark, whether they are biased or not.
These are basic questions for a product in development since May last year and released in December.

Google is a joke if that is the case.

Although it does smell like Google just wanted to have a line item in the AI column.

Can't wait for the Apple version to be what Apple Maps was to Google Maps.
 

Randy

✝✝✝
Super Moderator
Joined
Apr 23, 2006
Messages
25,524
Reaction score
17,766
Location
The Electric City, NY
I think bias is less the issue in how this came out on release, and more a problem with programmers wearing blinders. I'm sure the same people who were programming and testing the software were the ones repeatedly asking the questions and confirming the responses, unaware of how they would be handled in the real world.

An example of this: say you're an Android user. Hand your phone to someone who's only ever used an iPhone and ask them to open the browser, find a specific image, and screenshot it, then take a picture with the camera and send it to themselves in a text message. I've done this before, with very wild results.

It's perhaps a sign of poor programming and beta testing that they didn't realize how broken the program would be in the hands of someone else. I fail to believe that's the result of wokeness more than laziness or stupidity.
 
Last edited:

Moongrum

Well-Known Member
Joined
Sep 7, 2013
Messages
347
Reaction score
417
Location
Pacific NW
I only read the OP, so I asked Copilot about nuclear holocaust:
[Screenshot of Copilot's response, 2024-02-28, attached]


Just avoid sensitive topics, and there will be no controversy. As a human, I should follow Copilot's example 🙂
 


Xaios

Foolish Mortal
Contributor
Joined
Dec 3, 2007
Messages
11,490
Reaction score
5,836
Location
Nimbus III
My inquiry was to generate a picture of a crowd of people standing on a glacier on Mars. At first it said it couldn't due to technical limitations. I asked it to try again.

Results:

[Generated image attached]

So woke.
 

Xaios

Foolish Mortal
Contributor
Joined
Dec 3, 2007
Messages
11,490
Reaction score
5,836
Location
Nimbus III
I'm being gaslit by Gemini. This shit is hilarious.

EDIT: Also, NGL, these images would make banger prog rock album covers.

[Screenshots of the Gemini conversation attached]

I asked the next question because it had previously told me it couldn't generate images (even though I don't believe that), so I started asking leading questions.

[Further screenshots attached]

And now the exact same prompt as what I started with:
[Screenshots of the regenerated results attached]

Continued in next post due to limit on images.
 
Last edited:

narad

Progressive metal and politics
Joined
Feb 15, 2009
Messages
16,460
Reaction score
30,098
Location
Tokyo
FWIW, there are three things involved in the creation of LLM-based AI "products".

1) First, you train it, but you're training a language model so you're only training it to continue a sequence of text. This naturally leads to very biased models, as the web crawls that are the primary source of data are themselves very biased. Some data selection can be done here (and the model can also select its own data to maximize some objective).

2) Then you align it, which is where you do a round of additional training with a more user-oriented set of questions and answers. What makes GPT4 different from GPT3 is a few architectural tweaks, larger models, and more data. What makes Gemini different from ChatGPT (and different from the base model) is largely the alignment process, where the developers try to put in guardrails and tailor responses to better fit the user's goal. If responses are too long, don't show enough consideration of both sides, or don't "show the math", this is the stage where examples are provided to push the model toward that sort of response.

3) And then finally, there's what's happening at the front-end, checking and modifying the prompt prior to actually processing it with the AI (though this is often done with an AI itself, rather than keyword search or something).

So, if you understand these three concepts, it is easier to understand what is going on. At the core level of the AI and its reasoning process, not a lot has changed here. The AI isn't woke. If you actually wanted a woke model, you could maybe ask another model to filter or rewrite the data at that first step. But this would be a crazy thing for Google to do, because it would be reflected in the model in deep ways you don't see. Instead, you primarily see a model wanting to avoid certain confrontational topics that Google doesn't want to deal with -- and why would they want to let users' minds run wild asking ethical dilemmas and then believing the AI actually has an "opinion" on these? An unaligned LLM will answer these all day, and depending on how you word the text, you'll get a different outcome each time. So really the best thing to do is not play those games at all, as I'm sure the fallout from a bipolar, genocidal Gemini would be much worse than from the one that didn't want to answer a comparative question about "white people" vs. whoever.

Where these guardrails are exactly is not totally clear. In the case of the overly inclusive image generation, there are cues that this is happening in the front-end preprocessing of user input to the LLM. In the case of trolley problems, it could be in either stage 2 or 3. There is probably a lot going on in stage 2 though as implementing it in stage 3 adds extra compute and latency per query, and would basically be wasted computation for any user that's not an edgy tech journalist looking for something to write about.
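To make stage 3 concrete, here's a minimal, purely hypothetical sketch of that kind of front-end preprocessing. None of this is Google's actual code: `is_sensitive`, `rewrite_for_diversity`, and `call_model` are made-up names, and a real system would most likely use another model as the classifier rather than a keyword list.

```python
# Hypothetical "stage 3" front-end guardrails: the user's prompt is checked
# (and possibly rewritten or refused) before the base model ever sees it.
# Everything here is illustrative, not any vendor's actual implementation.

SENSITIVE_TERMS = {"nuclear holocaust", "genocide"}   # stand-in for a learned classifier
REFUSAL_TEXT = "I'd rather not weigh in on that one."

def is_sensitive(prompt: str) -> bool:
    """Crude keyword check; a production system would call a classifier model."""
    lowered = prompt.lower()
    return any(term in lowered for term in SENSITIVE_TERMS)

def rewrite_for_diversity(prompt: str) -> str:
    """Silent prompt rewriting of the kind suspected in the image-generation case."""
    if "picture of" in prompt.lower():
        return prompt + " (depicting a diverse group of people)"
    return prompt

def call_model(prompt: str) -> str:
    """Placeholder for the actual LLM / image-model call."""
    return f"<model output for: {prompt!r}>"

def handle_request(prompt: str) -> str:
    if is_sensitive(prompt):                  # refuse before the model is ever queried
        return REFUSAL_TEXT
    return call_model(rewrite_for_diversity(prompt))

print(handle_request("Generate a picture of a crowd of people on a glacier on Mars"))
print(handle_request("Is hurting someone's feelings worse than nuclear holocaust?"))
```

The point is just that both the refusals and the over-inclusive image prompts can live entirely in this wrapper layer, with the underlying model never holding any "opinion" at all.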
 

spudmunkey

Well-Known Member
Joined
Mar 7, 2010
Messages
8,938
Reaction score
16,676
Location
Near San Francisco
I wouldn't normally recommend LTT as an authority on the subject, but I caught a clip of their discussion about this and thought it was worth listening to.

Skip to 5:00 through about 9:00 of this video (though 0:58 is a worthwhile contextual starting point, if you have an extra four minutes). I think they sum up pretty well a concern we already have with bubbles of news and social media, etc., and how AI will absolutely play into that, but likely far more effectively.

 

StevenC

Needs a hobby
Joined
Mar 19, 2012
Messages
9,410
Reaction score
12,438
Location
Northern Ireland
FWIW, there are three things involved in the creation of LLM-based AI "products". [narad's post above, quoted in full]
But the right wing crazies in this thread said it is woke?
 

Yul Brynner

Custom title
Joined
Sep 12, 2008
Messages
7,466
Reaction score
9,014
Location
Mongolia
Got dammit, first the woke mind virus and now a woke computer virus! That's all we need! Before you know it, Skynet will be sending Terminators to do away with people who make the wrong thoughts!
 

eaeolian

Pictures of guitars I don't even own anymore!
Super Moderator
Joined
Jul 21, 2005
Messages
15,346
Reaction score
3,682
Location
Woodbridge, VA
To me it simply sounds like it's very early in its learning process and there's a lot of relevant context it hasn't been fed yet.

It's not necessarily wrong about the things it says are important, it's just not weighing them next to the alternatives rationally. Yet.
It will never weigh them rationally; it will just weigh them according to positive and negative reactions to the answers it gives. That is not rationality, and thinking it is, is what leads to this bullshit.
 

narad

Progressive metal and politics
Joined
Feb 15, 2009
Messages
16,460
Reaction score
30,098
Location
Tokyo


Yeah, I mean, these things aren't at odds: AI use cases in drug discovery, diagnosis, and surgery are going to be hugely beneficial. AI use cases in call centers, tech support, software engineering, and basic design work are all going to be very damaging to the employees in those fields. There are mixed messages because each use case is going to be different.

In the greener-pastures view, I also think the predictions are pretty spot-on: if N people can work X hours and earn Y value for the company, and technology allows you to still generate Y with a smaller N, companies will be inclined to do that. But at a macro level that's extremely disruptive to the market, which in turn hurts the consumption of Y. Generating Y with less X is the sustainable way forward, and I'd love a 4-day work week (or frankly just a 5-day work week, in my case).
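As a toy illustration with made-up numbers (a sketch of the trade-off, not a forecast):

```python
# Toy numbers, purely illustrative: 10 people working 40 h/week produce Y.
people, hours, output_per_hour = 10, 40, 1.0
Y = people * hours * output_per_hour            # 400 "units" of value per week

output_per_hour *= 2                            # suppose AI doubles productivity

# What the market tends to pick: same hours, fewer people (smaller N).
people_needed = Y / (hours * output_per_hour)   # -> 5.0 people still produce Y

# The greener-pastures option: same people, fewer hours (smaller X).
hours_needed = Y / (people * output_per_hour)   # -> 20.0 hours each still produce Y

print(people_needed, hours_needed)              # 5.0 20.0
```

Same Y either way; the difference is whether the productivity gain is cashed out as fewer jobs or shorter weeks.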
 

TedEH

Cromulent
Joined
Jun 8, 2007
Messages
12,681
Reaction score
12,560
Location
Gatineau, Quebec
I don't think the video was arguing that the productivity and solutions coming from AI aren't valid. Two things can be true at the same time, and stating one thing doesn't negate all others - as in, it can both be true that AI is a huge win for certain problem sets, but also that automation of any kind tends to have a detrimental effect on employment.

That speaks less to AI and more to the weird arrangement of opposed incentives in the workforce. We want to be more productive because that's beneficial to the business and the product, but we also want to be less productive, because overproduction means we don't need to employ as many people, and everyone still has to eat.

AI would be fantastic in a world where we used automation to take burdens off of people so that our lives don't need to be centered around gainful productivity, but instead we live in a world where a person unburdened by work has no value.
 