Data Cattle?

  • Thread starter slim231990

Briz

Lifelong Learner
Joined
Jan 30, 2015
Messages
195
Reaction score
250
Location
Charlotte, NC
No. Back up your position. ¯\_(ツ)_/¯


I'm an anglophone in Quebec, where we're a quietly resented minority, so kind of. As with literal racism, it's almost never as simple as that, obviously.


No.


Did you read what I said? I said you "don't meet racist people in the middle". Not "you don't meet reasonable people in the middle". If you're implying that racism or sexism or whatever else are reasonable normal positions to be met half way, then I strongly disagree.


That you're misattributing an easy-to-make mistake that was walked back as "indoctrination" - and then refusing to back it up with anything.

I'm also part of a marginalized minority group on my mother's side. My position, simply put: current state vs. target state. It's evident that Gemini was a disaster because it alienated a diverse group of people in its current state, in its team's quest for diversity. My position is that Google clearly lays out their methodology, testing, and guidance. They also adopted a definition of bias that caused some of its prompts to display bias.

As AI is refined, enhanced, becomes more powerful, and is introduced as an integral component of our lives, we should care about the motives and biases of those that build, test, validate, and deploy this technology. Otherwise, the consequences could be dire. People much smarter than me have talked and written about how quickly AI could become a threat in the areas of social engineering, dissemination of agenda-driven information, discrimination against humans, and global warfare/weaponization.

We have to realize that diversity of thought and oversight are required. It's not conspiratorial to think AI can become dangerous to humanity. Some AI innovators have said as much.
 


TedEH

Cromulent
Joined
Jun 8, 2007
Messages
12,702
Reaction score
12,616
Location
Gatineau, Quebec
I'm also part of a marginalized minority group on my mother's side.
Well, I'm glad we've got some common ground.

It's evident that Gemini was a disaster because it alienated a diverse group of people in its current state, in its team's quest for diversity.
This is just flatly untrue. It's not a disaster, because they rolled it back immediately and most of the product is still working just fine. In a few weeks, most people will have forgotten about this. It's a fast moving field. Also, you've yet to prove in any way that either the event was deliberate, or that it actually alienated a significant number of people. Some sensationalist headlines don't mean that it "alienated a diverse group". The only people I've noticed being alienated by these are people who are bothered by the attempt to be diverse more than the result.

They also adopted a definition of bias that caused some of its prompts to display bias.
Again - you have to back something like that up. I gave you examples of prompts and responses I got that showed it was trying very hard not to show a bias. If you're going to make this point, you need counter-examples. "Google it" is not a counter-example, because without the context of what you find objectionable, I don't know which samples fit your argument.

As AI is refined, enhanced, becomes more powerful, and is introduced as an integral component of our lives, we should care about the motives and biases of those that build, test, validate, and deploy this technology.
From a ten-thousand-mile-up view, sure.

People much smarter than me have talked and written about how quickly AI could become a threat in the areas of social engineering, dissemination of agenda-driven information, discrimination against humans, and global warfare/weaponization.
Some in this thread have conceded that it already is some of those things. But that doesn't make this example one of those. The fact that people get robo-calls from AI voices doesn't prove that Google has injected political bias into their chat bot.

We have to realize that diversity of thought and oversight are required.
So we're starting from a point that diversity is good, right? You realize that THIS is a bias. "Diversity is required" is not a neutral position. Just thought I'd point that out.

It's not conspiratorial to think AI can become dangerous to humanity.
No, but it IS conspiratorial to think that Google is already deliberately injecting political values into their chat bots in an attempt to control people.
 

Moongrum

Well-Known Member
Joined
Sep 7, 2013
Messages
350
Reaction score
422
Location
Pacific NW
First, it's not a press release, it's a detailed report that shares links to their methodology. Second, who are these AI ethicists that are the arbiters of the universal definition of biases? Do you know who they are? How they're funded? How diverse they are in worldviews? What's their EQ and CQ?
Ethics in AI is a course at pretty much every university with a computer science program (even some philosophy departments teach it).
Here's a course page at the University of Washington: https://courses.cs.washington.edu/courses/cse582/23sp/
The professor is a woman, and the TAs are both women and minorities. Most professors' research can be seen on their own or their university's websites, and they might even say which grad students they're advising, probably even where grant money is coming from.
That specific professor seems to do NLP research, though.

The AI Ethics prof at Georgia Tech used to be a black woman (I don't remember her name; this was like 4 years ago), but now it seems a white dude teaches it: https://iac.gatech.edu/people/perso...-,Justin B.,and science and technology policy.

Universities are often a pipeline for the cutting-edge research that ends up happening in the private sector. Maybe they're not the authority on AI ethics, but I do think they contribute to whatever narrative is going on.

Anyways, I just wanted to make this post not to argue or counterpoint, but to show that these are the kinds of places to look at and follow if it's a subject you wish to pursue outside of this thread.
 

SalsaWood

Scares the 'choes.
Joined
May 15, 2017
Messages
1,242
Reaction score
1,961
Location
NoVA
Patiently waiting for someone to bring up the "long distance relationship aids" that are coming round the bend.

Not that aids. Well, maybe.
 

Briz

Lifelong Learner
Joined
Jan 30, 2015
Messages
195
Reaction score
250
Location
Charlotte, NC
Okay, I see your point on a few items mentioned. I linked and quoted a few excerpts from Google on their methodology and how they defined bias. There's a lot of information embedded here:


This information plus the Managing Director's words imply that if the product shows bias, that was how it was designed. When I use the term bias, I mean any type of slant that shows favor to one group, ideology, or worldview (that's not universally insane) over another. I think if it's disseminating information, it should present multiple viewpoints, all relevant related facts, and information across the political spectrum so that we can make informed decisions and observations. It shouldn't draw conclusions for us in this area. Perhaps there's a different argument for how AI is used in other areas, like medicine, where we can trust the result of millions of simulated outcomes delivered in a few minutes with relevant supporting data.

Like I said, I want to see complex issues like wars, current events, and politics from all sides. Otherwise, there's a danger of "othering" people because we're conditioned to do so. The future state, hopefully, will take this into consideration. I, personally, don't want a political or agenda-driven AI. And yes, Google does partner with the US federal government. That's old news now.
 

SalsaWood

Scares the 'choes.
Joined
May 15, 2017
Messages
1,242
Reaction score
1,961
Location
NoVA
@Crungy
yes-evil.gif
 

TedEH

Cromulent
Joined
Jun 8, 2007
Messages
12,702
Reaction score
12,616
Location
Gatineau, Quebec
I think if it's disseminating information
Chat bots don't "disseminate" information. They're glorified search engines and word calculators. The bot doesn't have an intent that it's trying to enforce on you - you have to prompt it for what you want to get out of it. You could prompt-engineer your way into biased data if you want to, but that's not a sign that the bot itself has ill intent - that's the user fishing for specific data out of the set.

it should present multiple viewpoints, all relevant related facts, and information across the political spectrum so that we can make informed decisions and observations.
Which, so far, every chat bot I've tried does. Or at least attempts to.

Like I said, I want to see complex issues like wars, current events, and politics from all sides.
A chat bot is not the place to ask those questions, and probably never will be, because it was never their purpose to be complex-topic-detanglers. That's not what they do. In fact, they deliberately shy away from those subjects to avoid the very thing we're talking about happening.
 

slim231990

Areola51
Joined
Aug 11, 2010
Messages
250
Reaction score
149
Location
IL (US)
Yea, when I try to talk to people about this subject in real life, I get some funny responses like "well, there's always been things like the radio, newspapers, and TV commercials." Don't get me wrong, all of these forms of information have swayed people to consume products or change their personal beliefs or politics, but there are a few big differences.

I would say at this point, if you live in a first-world country, a cell phone is a necessary part of life. Shit, unless you're a PC gamer, the modern cell phone has even knocked the great home computer off its pedestal. I believe that when FB went public with its shares in 2012, the game became "let's get them hooked; the more screen time people have, the more metadata and ad revenue we can get from them." From there it evolved into "let's use this metadata to find out their deepest subconscious desires and exploit them."

Long story short, newspapers, commercials, and the radio didn't have the end goal of turning a person into data cattle who just segregate themselves from family and friends to doom scroll. These forms of communication didn't track our location while they sat in our pocket. They didn't notify us if we hadn't checked in within the last hour, and they didn't sell our deeply personal information to the highest bidder. When I read the sports section in the newspaper, it didn't time me and come to the conclusion "he likes sports, huh? Then that's all he's gonna get 24/7 in his newspaper from now on."

When I say I pretty much lost my friends and family to digital addiction, I didn't mean it literally lol. They're still there, just so balls deep in a screen that there's nothing left to interact with at the end of the day.

For the younger people who are reading this, I want you to know that if this digital dope had been around when I grew up, chances are I'd be balls deep too, so no judgement once again. Just keep in mind that the internet, the greatest source of information and disinformation of all time, has become mostly about pure entertainment consumption and the exploitation of your time. Is that what we want to waste our precious time on?
 

High Plains Drifter

... drifting...
Joined
Aug 29, 2015
Messages
4,188
Reaction score
5,315
Location
Austin, Texas
Not sure that this was ever touched upon specifically, but it seems there are basically two running contexts here:

Social media immersion as it relates to just one person: their life, their interests, their needs, etc.

And social media immersion as it relates to the dynamics of their relationships with other people/loved ones IRL.

Valid points and concerns on both sides.
 

slim231990

Areola51
Joined
Aug 11, 2010
Messages
250
Reaction score
149
Location
IL (US)
My original purpose for this thread was to have people take a look at their phone screen time, but I also felt the need to rant about the effects digital dope has on my life and the people in it. If one person evaluates their screen time because of this, it was worth my time.

I feel like this problem is still fairly new. I remember when they released the first iPhone in 2007, and man, I had no idea where it would lead us lol. At this point I feel like we need to educate the youth about things like internet algorithms and how they're used against us. For example, everyone knows about internet catfishing, but when you try to explain how the same catfishing method could be used to sway people's politics or belief systems, or to alter facts (ex. flat earth theory), people tend to believe that's crazy talk and that no one is actually doing that.

Also, this isn't just about social media; it just seems to be the most influential form of digital dope. We all know how much the algorithms feed off polarization and getting people stirred up. The more pissed off people are from the interaction, the more they dig in. I feel like with all the mass shootings, high levels of social depression, and a wayyyy-too-high youth suicide rate, we'd all be idiots to think social media isn't one of the main contributing factors.

This is just my opinion, but it seems like people in general, or at least in my life, have become more short-tempered and ready to start conflict whenever possible because they spend so much time getting wound up by an algorithm that it bleeds over into the real world. But then again, I'm just some guy from IL who likes to ride his mountain bike and play guitar lmao.
 