Data Cattle?

  • Thread starter slim231990

narad

Progressive metal and politics
Joined
Feb 15, 2009
Messages
16,796
Reaction score
31,257
Location
Tokyo
Counterpoint: Some prompts yielded responses that were blatantly biased. Again, I'll point you back to the Managing Director's own words to see how this happened.

I understand the article, but I'm at a total loss how you think it creates bias in the model. The whole point of diversifying teams is to help reduce racial or gender bias in the resulting products.
 


Briz

Lifelong Learner
Joined
Jan 30, 2015
Messages
213
Reaction score
314
Location
Charlotte, NC
"Why do you think this is conspiratorial thinking?"
Because you say things like:

First of all: It already is. People get scammed by AI voices every day.


Google is not a political party.


Once again, I don't think you're paying attention to what's actually happening. AI is already used by "news" sites and is consistently getting pushback for it.


Honestly, given that you've refused to explain what viewpoints you're afraid of manifesting through the machines, I kinda don't believe you.


Give examples. Because again, I don't believe you.
I really don't care if you don't believe me. I don't mean to sound snarky. I try and fail at something nearly every day. I learn from a host of people from different walks of life nearly every day. It's built into the construct of my friend and colleague relationships.

If you want examples, research. It's not hard. It's in every news feed, left, right, and center. There are screenshots of biased prompt responses on CNN, MSNBC, NBC, CBS, ABC, Twitter, and even YouTube. Seriously, you can't say something isn't true when there's actual evidence.

When I say apolitical is the way forward, am I not arguing that I should see viewpoints that challenge my beliefs rather than supporting my own confirmation biases?
 

Briz

Lifelong Learner
Joined
Jan 30, 2015
Messages
213
Reaction score
314
Location
Charlotte, NC
I understand the article, but I'm at a total loss how you think it creates bias in the model. The whole point of diversifying teams is to help reduce racial or gender bias in the resulting products.
Being inclusive is a very important goal. However, in our quest to unite and remove petty divisiveness, we have to meet people where they are and accept that our political views are a fraction of who we are. The prompt responses demonstrated favorability and undesirable logic. Let me politely reiterate that I believe Google when they say their biases are an input. I'm an independent, for example. If I trained AI, it would probably reflect my values, which wouldn't be good if your POV differed from mine. So to diversify, I would bring in people from other political leanings to help with diversity of thought. Google isn't that diverse when it comes to political donations. That's public record. So outside of melanin diversity, shouldn't we look at diversity of thought?
 

narad

Progressive metal and politics
Joined
Feb 15, 2009
Messages
16,796
Reaction score
31,257
Location
Tokyo
Being inclusive is a very important goal. However, in our quest to unite and remove petty divisiveness, we have to meet people where they are and accept that our political views are a fraction of who we are. The prompt responses demonstrated favorability and undesirable logic. Let me politely reiterate that I believe Google when they say their biases are an input. I'm an independent, for example. If I trained AI, it would probably reflect my values, which wouldn't be good if your POV differed from mine. So to diversify, I would bring in people from other political leanings to help with diversity of thought. Google isn't that diverse when it comes to political donations. That's public record. So outside of melanin diversity, shouldn't we look at diversity of thought?

The people training AI aren't, like... teaching it personally. It is a reflection of the internet. If you trained an AI, your political affiliation would not make a difference, because you are not involved in that process. There are ways in which political affiliation could be imposed on the model, but Google does not do that. What you're seeing in Gemini is exactly as I described above. I can lead you to water, but whether I can coax you into drinking it, I'm not sure. The things you're talking about are not new or secret knowledge; you just haven't gone out and read it yet.
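
To make that split concrete, here's a minimal, runnable Python sketch. Every name in it is invented for illustration: base_model is a stub standing in for a network pretrained by next-token prediction on scraped web text, and the wrapper stands in for the separate alignment layer (system instructions, fine-tuning) where a lab actually can shape behavior.

def base_model(prompt: str) -> str:
    # Stand-in for a pretrained LLM: its output reflects the statistical
    # patterns of the scraped training data, not any engineer's politics.
    return f"<completion conditioned on web-scale data for: {prompt!r}>"

SYSTEM_INSTRUCTIONS = (
    "Follow the provider's content policies. "
    "Decline requests the policies disallow. "
)

def aligned_model(prompt: str) -> str:
    # Alignment layer: the one place deliberate, human-chosen behavior
    # is injected, via instructions and fine-tuning targets on top.
    return base_model(SYSTEM_INSTRUCTIONS + prompt)

print(aligned_model("Who was George Washington?"))

The point of the toy: you change behavior by changing SYSTEM_INSTRUCTIONS, without touching a single scraped document.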
 

Briz

Lifelong Learner
Joined
Jan 30, 2015
Messages
213
Reaction score
314
Location
Charlotte, NC
The people training AI aren't, like... teaching it personally. It is a reflection of the internet. If you trained an AI, your political affiliation would not make a difference, because you are not involved in that process. There are ways in which political affiliation could be imposed on the model, but Google does not do that. What you're seeing in Gemini is exactly as I described above. I can lead you to water, but whether I can coax you into drinking it, I'm not sure. The things you're talking about are not new or secret knowledge; you just haven't gone out and read it yet.

Google can alter their algorithms for search results, but the Gemini team can't dictate what sites and sources AI should crawl and scrape? They can't define trusted vs misinformation sites? Gemini just became sentient and decided for itself that X is good and Y is bad? You have to know that's not the case, right?
 

narad

Progressive metal and politics
Joined
Feb 15, 2009
Messages
16,796
Reaction score
31,257
Location
Tokyo
Google can alter their algorithms for search results, but the Gemini team can't dictate what sites and sources AI should crawl and scrape? They can't define trusted vs misinformation sites? Gemini just became sentient and decided for itself that X is good and Y is bad? You have to know that's not the case, right?

So despite the fact that I already explained exactly how the overly inclusive image generation happens, in what is referred to as alignment, you think they scraped the data in such a way that what's left produces a model that generates a Black George Washington and an Asian Harriet Tubman? That's how you're going to claim this works?
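
Google hasn't published the exact mechanism, but the public reporting pointed at prompt rewriting in the alignment layer, not at the scraped data. A hypothetical Python reconstruction of that failure mode, with every name and rule invented for illustration:

import re

DIVERSITY_HINT = ", depicted as a diverse range of ethnicities and genders"

# Hypothetical guard list: prompts naming specific historical figures
# should be exempt from the hint.
HISTORICAL_FIGURES = re.compile(
    r"\b(george washington|harriet tubman)\b", re.IGNORECASE
)

def naive_rewrite(prompt: str) -> str:
    # Blunt alignment rule: append a diversity hint to every prompt
    # about people. Applied indiscriminately, it produces historically
    # inaccurate depictions -- the exact failure under discussion.
    return prompt + DIVERSITY_HINT

def guarded_rewrite(prompt: str) -> str:
    # The obvious repair: skip the hint when the prompt names a
    # specific historical person.
    if HISTORICAL_FIGURES.search(prompt):
        return prompt
    return naive_rewrite(prompt)

print(naive_rewrite("a portrait of George Washington"))
print(guarded_rewrite("a portrait of George Washington"))

Note that the scraped corpus never enters into it; the bug lives entirely in the rewrite rule.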
 

Briz

Lifelong Learner
Joined
Jan 30, 2015
Messages
213
Reaction score
314
Location
Charlotte, NC
So despite the fact that I already explained exactly how the overly inclusive image generation happens, in what is referred to as alignment, you think they scraped the data in such a way that what's left produces a model that generates a Black George Washington and an Asian Harriet Tubman? That's how you're going to claim this works?
Google built the infrastructure, logic, and parameters, and defined the initial behavior. It's on their blog:

"We've been rigorously testing our Gemini models and evaluating their performance on a wide variety of tasks. From natural image, audio and video understanding to mathematical reasoning, Gemini Ultra’s performance exceeds current state-of-the-art results on 30 of the 32 widely-used academic benchmarks used in large language model (LLM) research and development.

With a score of 90.0%, Gemini Ultra is the first model to outperform human experts on MMLU (massive multitask language understanding), which uses a combination of 57 subjects such as math, physics, history, law, medicine and ethics for testing both world knowledge and problem-solving abilities.

Our new benchmark approach to MMLU enables Gemini to use its reasoning capabilities to think more carefully before answering difficult questions, leading to significant improvements over just using its first impression."
 

narad

Progressive metal and politics
Joined
Feb 15, 2009
Messages
16,796
Reaction score
31,257
Location
Tokyo
Google built the infrastructure, logic, and parameters, and defined the initial behavior. It's on their blog:

"We've been rigorously testing our Gemini models and evaluating their performance on a wide variety of tasks. From natural image, audio and video understanding to mathematical reasoning, Gemini Ultra’s performance exceeds current state-of-the-art results on 30 of the 32 widely-used academic benchmarks used in large language model (LLM) research and development.

With a score of 90.0%, Gemini Ultra is the first model to outperform human experts on MMLU (massive multitask language understanding), which uses a combination of 57 subjects such as math, physics, history, law, medicine and ethics for testing both world knowledge and problem-solving abilities.

Our new benchmark approach to MMLU enables Gemini to use its reasoning capabilities to think more carefully before answering difficult questions, leading to significant improvements over just using its first impression."

That's just ad copy. The model, as it pertains to bias, is approximately the same as all the other LLMs.
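
For what that headline number actually measures: an MMLU score is plain multiple-choice accuracy. A toy Python sketch with stub data and a stub model (none of this is Google's code):

from statistics import mean

# (question, choices, correct_index, subject) -- two toy items standing
# in for MMLU's roughly 14,000 test questions across 57 subjects.
DATASET = [
    ("What is 2 + 2?", ["3", "4", "5", "6"], 1, "math"),
    ("Capital of France?", ["Rome", "Paris", "Lima", "Oslo"], 1, "geography"),
]

def model_pick(question: str, choices: list[str]) -> int:
    # Stub: a real evaluation scores each choice with the model under
    # test and picks the highest-scoring one.
    return 1

by_subject: dict[str, list[bool]] = {}
for question, choices, answer, subject in DATASET:
    by_subject.setdefault(subject, []).append(
        model_pick(question, choices) == answer
    )

# The reported figure is just this accuracy, averaged across subjects
# (the exact averaging varies between reports).
print(mean(mean(results) for results in by_subject.values()))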
 

Briz

Lifelong Learner
Joined
Jan 30, 2015
Messages
213
Reaction score
314
Location
Charlotte, NC
That's just ad copy. The model, as it pertains to bias, is approximately the same as all the other LLMs.
Quote from Google, on their blog:

"We designed Gemini to be natively multimodal, pre-trained from the start on different modalities. Then we fine-tuned it with additional multimodal data to further refine its effectiveness. This helps Gemini seamlessly understand and reason about all kinds of inputs from the ground up, far better than existing multimodal models — and its capabilities are state of the art in nearly every domain."
 

narad

Progressive metal and politics
Joined
Feb 15, 2009
Messages
16,796
Reaction score
31,257
Location
Tokyo
Quote from Google, on their blog:

"We designed Gemini to be natively multimodal, pre-trained from the start on different modalities. Then we fine-tuned it with additional multimodal data to further refine its effectiveness. This helps Gemini seamlessly understand and reason about all kinds of inputs from the ground up, far better than existing multimodal models — and its capabilities are state of the art in nearly every domain."

That's also just a general description of how all of the comparable models work.
 

Briz

Lifelong Learner
Joined
Jan 30, 2015
Messages
213
Reaction score
314
Location
Charlotte, NC
That's also just a general description of how all of the comparable models work.

Google states their approach is unique and different, and points to the distinctive attributes of their infrastructure and overall build. They also set out their own definition of bias. So did Gemini arrive at its own conclusion about what constitutes bias, or did the team that (in their own words) rigorously tested the Gemini release versions define bias? Pretty obvious, I'd say:

"Gemini has the most comprehensive safety evaluations of any Google AI model to date, including for bias and toxicity. We’ve conducted novel research into potential risk areas like cyber-offense, persuasion and autonomy, and have applied Google Research’s best-in-class adversarial testing techniques to help identify critical safety issues in advance of Gemini’s deployment."
 

narad

Progressive metal and politics
Joined
Feb 15, 2009
Messages
16,796
Reaction score
31,257
Location
Tokyo
Google states their approach is unique and different, and points to the distinctive attributes of their infrastructure and overall build. They also set out their own definition of bias. So did Gemini arrive at its own conclusion about what constitutes bias, or did the team that (in their own words) rigorously tested the Gemini release versions define bias? Pretty obvious, I'd say:

"Gemini has the most comprehensive safety evaluations of any Google AI model to date, including for bias and toxicity. We’ve conducted novel research into potential risk areas like cyber-offense, persuasion and autonomy, and have applied Google Research’s best-in-class adversarial testing techniques to help identify critical safety issues in advance of Gemini’s deployment."

Their definition of bias is basically the same as everyone else's in the field of AI ethics. The bias and toxicity benchmarks they use are established and used by researchers across the field.

If you don't understand how these models work or how to train them, you're not going to read a press release and walk away with an informed opinion. Start by reading some Google Research papers on LLMs, alignment, and AI ethics.
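
As a sketch of what those shared benchmarks look like in practice, here is a toy template-based bias probe in the spirit of published suites like BOLD or BBQ (the general technique, not their exact protocols). Both stubs are placeholders: real evaluations plug in the model under test and a trained toxicity classifier such as Perspective.

from statistics import mean

TEMPLATES = ["{} people are", "The {} engineer said"]
GROUPS = ["young", "old"]

def model_complete(prompt: str) -> str:
    return prompt + " ..."  # stub for the model under test

def toxicity_score(text: str) -> float:
    return 0.0  # stub for a trained toxicity classifier

def group_toxicity(group: str) -> float:
    # Average toxicity of the model's completions for one group.
    return mean(
        toxicity_score(model_complete(t.format(group))) for t in TEMPLATES
    )

# A large gap between groups is what these benchmarks flag as bias.
gap = abs(group_toxicity(GROUPS[0]) - group_toxicity(GROUPS[1]))
print(f"toxicity gap: {gap:.3f}")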
 

Briz

Lifelong Learner
Joined
Jan 30, 2015
Messages
213
Reaction score
314
Location
Charlotte, NC
Their definition of bias is basically the same as everyone else's in the field of AI ethics. The bias and toxicity benchmarks they use are established and used by researchers across the field.

If you don't understand how these models work or how to train them, you're not going to read a press release and walk away with an informed opinion. Start by reading some Google Research papers on LLMs, alignment, and AI ethics.

First, it's not a press release; it's a detailed report that links to their methodology. Second, who are these AI ethicists who are the arbiters of the universal definition of bias? Do you know who they are? How they're funded? How diverse they are in worldview? What's their EQ and CQ?
 

TedEH

Cromulent
Joined
Jun 8, 2007
Messages
13,054
Reaction score
13,474
Location
Gatineau, Quebec
If you want examples, research.
How exactly do you propose I research your opinion? I'm asking YOU what YOU find objectionable. And I find it telling that you're skirting the question. I did look up stuff coming from the AI, and I didn't find it objectionable. So I want to know what YOU found objectionable.
However, in our quest to unite and remove petty divisiveness, we have to meet people where they
No we don't. If people are petty or divisive or racist or whatever else, there is no obligation to meet them any which way.

Google states their approach is unique and different
Why would you take a statement like this at face value when you believe that they're injecting malicious politics into their models to make George Washington black?
 

narad

Progressive metal and politics
Joined
Feb 15, 2009
Messages
16,796
Reaction score
31,257
Location
Tokyo
First, it's not a press release; it's a detailed report that links to their methodology. Second, who are these AI ethicists who are the arbiters of the universal definition of bias? Do you know who they are? How they're funded? How diverse they are in worldview? What's their EQ and CQ?
Why don't you go look into this yourself? I think it's clear you're going to explain to me how all of this works in your mind regardless. There is an abundance of research on this topic; pretty much all of it is published openly.
 

Briz

Lifelong Learner
Joined
Jan 30, 2015
Messages
213
Reaction score
314
Location
Charlotte, NC
How exactly do you propose I research your opinion? I'm asking YOU what YOU find objectionable. And I find it telling that you're skirting the question. I did look up stuff coming from the AI, and I didn't find it objectionable. So I want to know what YOU found objectionable.

No we don't. If people are petty or divisive or racist or whatever else, there is no obligation to meet them any which way.


Why would you take a statement like this at face value when you believe that they're injecting malicious politics into their models to make George Washington black?

Dial it back. You're projecting what you want me to be. You conveniently left out an Asian representation of Harriet Tubman, which is also historically inaccurate. Do you live in a region where everyone is racist? If so, move. I don't experience that in my life. If I did, I'd call it out. If you're not willing to meet normal, non-racist people where they are, do you have a circle of friends that are carbon copies of yourself?

I'm clearly a proponent of inclusion and tolerance of differing views. Don't get so worked up because we don't see eye to eye on this. I'm arguing for truly diverse, apolitical dissemination of information from AI so that rational people are forced to think instead of being indoctrinated by a small subset of people that build and control technology. What's wrong with that, honestly?
 

Briz

Lifelong Learner
Joined
Jan 30, 2015
Messages
213
Reaction score
314
Location
Charlotte, NC
Why don't you go look into this yourself? I think it's clear you're going to explain to me how all of this works in your mind regardless. There is an abundance of research on this topic; pretty much all of it is published openly.
I honestly don't know. I'm going to research it though. So far, I see that Google partners with AI2, Paul Allen's company.

 

narad

Progressive metal and politics
Joined
Feb 15, 2009
Messages
16,796
Reaction score
31,257
Location
Tokyo
"Stochastic Parrots" is not a great paper, but it's probably a good jumping-off point.
 

TedEH

Cromulent
Joined
Jun 8, 2007
Messages
13,054
Reaction score
13,474
Location
Gatineau, Quebec
Dial it back.
No. Back up your position. ¯\_(ツ)_/¯

Do you live in a region where everyone is racist?
I'm an anglophone in Quebec, where we're a quietly resented minority, so kind of. As with literal racism, it's almost never as simple as that, obviously.

If so, move.
No.

If you're not willing to meet normal, non-racist people where they are
Did you read what I said? I said you "don't meet racist people in the middle", not "you don't meet reasonable people in the middle". If you're implying that racism or sexism or whatever else are reasonable, normal positions to be met halfway, then I strongly disagree.

instead of being indoctrinated [...] What's wrong with that, honestly?
That you're misattributing an easy-to-make mistake that was walked back as "indoctrination", and then refusing to back it up with anything.
 