Data Cattle?

  • Thread starter slim231990

Festivus

Well-Known Member
Joined
Apr 8, 2010
Messages
84
Reaction score
54
Location
Kitchener, Ontario
Thousands are reporting mobile phone service disruption... AT&T, Verizon, etc...

Take that, wife!

Hang in there, I hope your situation gets better! It has happened to my partner too. I remember once her auntie was talking to her across the dinner table (and in principle saying something interesting) and I see this twitching thumb out of the corner of my eye and look across to see my partner is 'listening' but also looking down at her phone and scrolling through her facebook feed.

I guess I've gotten used to it more or less. With the disclaimer that I am perfectly capable of being boring, one of the things I find saddest is that when we are together and my partner has access to her phone and I have something I'd like to tell her, I'm figuring out how to tell her in a way that works given the constant pull on her attention from her phone.

I think an insidious thing about social media is that it has been fine tuned to give people the impression that the time they're spending on it is time spent usefully, even if it's hours of looking at hollow content.
 


Briz

Lifelong Learner
Joined
Jan 30, 2015
Messages
213
Reaction score
314
Location
Charlotte, NC
And those words are? All I can find are right-wing articles that are mad at him for "promoting diversity the wrong way" because he uses strong language when talking about things like... the fact that racism exists.


Of course. But also, the average person doesn't rely on it, and as soon as it was noticed, they acknowledged it was wrong and started fixing it.


I mean, I also work in software development - on a team that includes some AI research. This was a very easy outcome to reach accidentally.


You're talking about a company from the era of "move fast and break things". And, being in software, you should know damn well that no company ever covers every possible use case - especially when it comes to something like AI where the possibility space makes it literally impossible to get 100% coverage. That's an inherent risk of building AI systems - which we know because of how many accidentally racist AIs we've built. It's not a stretch to see one of them lean the other direction.


No, the system always does what it was programmed to do, which, as a software developer, I imagine you know very well doesn't always match the intended design - again, especially in a space like AI where the massive possibility space makes that a bigger-than-traditional challenge.

I'll go a step further: if you're a software developer, I'm sure you know that a product like that is never the result of a single person. This isn't "Jack Krawczyk did this", it's "Google did this". As a team. They screwed it up, as a team.


Truth also includes being honest about your own biases when evaluating whether someone (or a company) made a mistake.


No, you have this backwards. You start with critical thinking, to arrive at the truth. You can't "start with the truth" unless you're substituting in whatever you believed from the outset.

And that's why I think it's conspiratorial thinking. A lot of people who envision the worst outcomes in technology are starting from the conclusion and working backwards to validate whatever they already felt about the matter. And I'm not defending google - they're not a "good person", they're a faceless corporation that contributes to a lot of the world's problems. But wokeness ain't one of them. There are 101 reasons to criticize google, and to be wary of AI, but this ain't one of em.
Critical thinking, at its core, is clarity, conclusions, and decisions (Michael Kallet's definition, anyway). You can get clarity by establishing facts that you know to be true, or by deductive reasoning, empiricism, et al., to arrive at conclusions and then decisions.

Systems thinking implies that the programming itself is, in part, the design, not just a TRD, FRD, or mock-ups and design documentation.

If you search Jack Krawczyk's social media accounts, you'll find his own words. If I see something on any news source, I want to be able to verify it with context so I know it's not cherry-picked sensationalism. He was a direct input that led to the output, or deliverable. There's nothing wrong with some of his statements, but that's not the question. The question is, "how did his biases impact the deliverable?"

A product isn't the result of a single person, you're right, but the product is tested and validated throughout development, especially at Google. Usually, it's something like multiple teams working on the program, aligning on deliverables. That could be predictive project management, or adaptive. Either way, you have one or all of the following: client or stakeholders, developers (using the Scrum Guide definition here), product owner, product manager, project manager, scrum master / agilist, director level, c-suite level, and maybe board level. Last time I checked, Google was purely Agile, I believe their own version of Scrum, but I'm not sure. If it is still Scrum or a scaled Agile framework, I'm sure they still have Product Goals, Backlog Refinement, Sprint Planning, Sprint Goals, daily stand-ups, Sprint Reviews, and Retrospectives.

So, let's look at the Managing Director's own words from a Computerworld interview:

"Outputs of any AI system are limited to the demographic makeup of its creators, and therefore subject to the unintentional biases that this team might have." - Helen Kilesky

There's nothing wrong with what she's saying. However, it's telling how the Gemini team's input impacted the product.

This brings us full circle, back to the dangers of unchecked biases and the need for some kind of plan to ensure AI, and all the technology tools that we use for learning, teaching, as a source of truth, etc., isn't monopolized and weaponized. Especially since AI isn't going anywhere and will likely play a bigger role in many aspects of our lives at some point in the near or distant future.

Here's the link to the Computerworld article if you want to read it:

https://www.computerworld.com/artic...re-so-much-about-hiring-diverse-ai-teams.html

The ethics and standards for AI should be a concern for all of us, that's all I'm saying.
 

TedEH

Cromulent
Joined
Jun 8, 2007
Messages
13,054
Reaction score
13,474
Location
Gatineau, Quebec
If you search Jack Krawczyk's social media accounts, you'll find his own words
I mean, sure, I can do that, but I haven't yet come across anything that I find objectionable unless you're annoyed with him being too woke. Also his X / twitter is private, and I've got better things to do than hunt down some guy's socials just to figure out what point you're trying to make. Just say the thing. Is he racist? Is he sexist? Does he have poor business sense? Did he insult someone when it wasn't warranted? What did he say that you think is objectionable? It's a VERY simple question.

However, it's telling how the Gemini team's input impacted the product.
I think it's telling you something very different than what it's telling me. I think it's telling me that a team tried to make their product include a more diversified data set, and to avoid training it on a white-only perspective, and it blew up on them.

It sounds like you're saying you think they're trying to brainwash us with wokeness, but are avoiding just saying so. If I'm wrong, then please correct me. Once again, I ask you what you think is objectionable about Jack and his team.

I mean, they own the platform, so they're well within their own right to bias it whatever way they want to - nobody and nothing is without bias - so what is the bias you're objecting to? It's not like other AIs have never shown bias before, both intentional and otherwise.

as a source of truth
IMO this is the real danger of AI. Treating it like it's a source of truth. Which it very much is not.

I'm just kinda skimming that article so far, and aside from it being kind of a fluff piece, I still don't understand what's objectionable about it - unless you have a problem with diversity.

The ethics and standards for AI should be a concern for all of us, that's all I'm saying.
Sure - but then... wouldn't having diversity be well within what a company might be expected to follow as an ethical guideline? And wouldn't such an ethical guide be the kind of thing to cause an over-correction in a data set? Especially if you aren't expecting it, and therefore might not have designed a test plan to detect it? I'm much more willing to attribute this to a mistake than malice.
 

Briz

Lifelong Learner
Joined
Jan 30, 2015
Messages
213
Reaction score
314
Location
Charlotte, NC
I mean, sure, I can do that, but I haven't yet come across anything that I find objectionable unless you're annoyed with him being too woke. Also his X / twitter is private, and I've got better things to do than hunt down some guy's socials just to figure out what point you're trying to make. Just say the thing. Is he racist? Is he sexist? Does he have poor business sense? Did he insult someone when it wasn't warranted? What did he say that you think is objectionable? It's a VERY simple question.


I think it's telling you something very different than what it's telling me. I think it's telling me that a team tried to make their product include a more diversified data set, and to avoid training it on a white-only perspective, and it blew up on them.

It sounds like you're saying you think they're trying to brainwash us with wokeness, but are avoiding just saying so. If I'm wrong, then please correct me. Once again, I ask you what you think is objectionable about Jack and his team.

I mean, they own the platform, so they're well within their own right to bias it whatever way they want to - nobody and nothing is without bias - so what is the bias you're objecting to? It's not like other AIs have never shown bias before, both intentional and otherwise.


IMO this is the real danger of AI. Treating it like it's a source of truth. Which it very much is not.

I'm just kinda skimming that article so far, and aside from it being kind of a fluff piece, I still don't understand what's objectionable about it - unless you have a problem with diversity.


Sure - but then... wouldn't having diversity be well within what a company might be expected to follow as an ethical guideline? And wouldn't such an ethical guide be the kind of thing to cause an over-correction in a data set? Especially if you aren't expecting it, and therefore might not have designed a test plan to detect it? I'm much more willing to attribute this to a mistake than malice.

Okay. Simple. If you read the responses Gemini gives, the bias is pretty obvious. If the purpose is to be accurate and unbiased, it should behave that way. It shouldn't be colored by the narrative, worldview, biases, or cause du jour. Training AI to lean left or right isn't something any of us should want. I'd think you'd want the same, so that we can collectively start to have real conversations about ideological dogma steering something that has the potential to be dangerous as AI advances. This is one of the most innovative breakthroughs yet, and we're witnessing its advent. AI could be used as a tool to unite the world, put a dent in poverty, increase our quality of life, and establish common ground, but the need to be a tribalist is too strong of a temptation, apparently.
 

High Plains Drifter

... drifting...
Joined
Aug 29, 2015
Messages
4,198
Reaction score
5,346
Location
Austin, Texas
Hang in there, I hope your situation gets better! It has happened to my partner too. I remember once her auntie was talking to her across the dinner table (and in principle saying something interesting) and I see this twitching thumb out of the corner of my eye and look across to see my partner is 'listening' but also looking down at her phone and scrolling through her facebook feed.

I guess I've gotten used to it more or less. With the disclaimer that I am perfectly capable of being boring, one of the things I find saddest is that when we are together and my partner has access to her phone and I have something I'd like to tell her, I'm figuring out how to tell her in a way that works given the constant pull on her attention from her phone.

I think an insidious thing about social media is that it has been fine tuned to give people the impression that the time they're spending on it is time spent usefully, even if it's hours of looking at hollow content.
I sincerely appreciate your reply. I'm trying to find much difference between what you wrote and where I'm at, and I can't really find any besides replacing auntie with bestie and facebook with instagram. Oh, and my wife doesn't really eat dinner... just snacks. That's rough though. Sorry you've had to deal with this too.

Interestingly enough, I talked to her again about this recently and was brutally candid about my feelings... that I felt shut out and taken for granted. That's the much abridged version. I told her that I didn't think she had been contributing positively towards our co-existence these past months and that my feelings over time have hindered my appreciation of her as my partner. I mean, there were a ton more words that we both spoke, but in the end she seemed to latch on to feeling guilty. She brought it up a couple days after and was seemingly quite distraught... many tears and all that.

Then last Sunday, she told me that she wants to try to limit her time on her phone, although I'm secretly skeptical. I'm making sure to come at this in a very "not blaming" kind of way, and I even told her, as she was being critical of herself, that I don't want us to ever dwell on our admitted mistakes or shortcomings... that I want our relationship to be forever in the moment and/or moving forward... away from the things that we can't change.

Also on Sunday, she asked me if we could go to the park for a walk, and I of course was like "sure!". It was nice (over two fucking miles, which is um... rare for me, but it's cool). She wanted to hold hands the whole time, which was expected as she's always been into the touchy-feely deal, and although I'm not so much like that, I try to be receptive to those intimate interactions.

When we discussed this phone thing in the past, there were several times where she kinda came back on me like "well you used to spend a lot of time with your best buds", and while that's legit, I told her that now, 10 years into this thing, I didn't feel the whole "chicken or egg?" lamenting would prove advantageous for either of us. Again... we both need to progress from where we're at now and accept responsibility for our actions and how we regard one another. It was a weak argument anyway, as I've always given her a good deal of time and attention. We've worked fairly well as a "team" throughout the relationship and that needs to continue... one way or another.

I feel that communicating very genuinely (which I do, regardless of anything) about the damaging effects this habit has caused is truly important, but so is balancing that with a willingness to be receptive to her thoughts and to share, listen, and act with as much civility and compassion as possible. I don't want either of us to lose the ease of engaging in open and respectful dialogue.

Apologies for the length of my posts. I've tried to stay quiet throughout this thread but I figured that maybe sharing some of how I'm dealing with it, might help someone else if they decide to discuss this topic with a friend or family member. And again, getting some of this out proves a bit cathartic. Fuck, I'm long-winded though lol.
 

TedEH

Cromulent
Joined
Jun 8, 2007
Messages
13,054
Reaction score
13,474
Location
Gatineau, Quebec
If you read the responses Gemini gives, the bias is pretty obvious.
I tried a few times to get Gemini to give me a very obviously politically slanted answer, and so far it doesn't. It flat out refuses to talk about gun control. If I ask about abortions, it insists on presenting it as "here's what some think and here's what others also think" without any suggestion that one perspective or the other is better. If I ask about diversity, it insists on human oversight being a key in any questions of diversity. When I ask about religion, a lot of the answers contain things like "It's important not to overshadow Jesus's message of love" etc., right alongside the more scholarly takes. That's far from the leftist bias you're suggesting. The vast majority of what comes out of this thing seems to be the same bog-standard scholarly sounding point-of-fact stuff you'd get out of anything else.

Training AI to lean left or right isn't something any of us should want.
Again, assuming you could prove that's what's happening, I don't know that I'd agree. I think if a person feels strongly enough about any particular set of values, they might think of their position as the "moral default", and therefore would absolutely want AI to reflect that. Like, if you ask me if AI should slant left or right, whenever there is a case where it has to choose, I'm obviously going to pick left every time - especially if I expect that AI system is ever going to be involved in decision making that could have any kind of moral implications attached to it.

For example - imagine you're designing an AI model that spits out city designs and zoning maps. I would absolutely prefer that this model leans left. Because a "right leaning" model might do things like create segregated zones for poor people or minorities. Or be deliberately inhospitable for the homeless. Or fail to incorporate the goal of a city being walkable. Or it might prioritize a type of zoning that benefits large business to the detriment of small owners or affordable housing or something. Or maybe it doesn't generate a school zone because "education is liberal brainwashing anyway". Or could entirely lack parameters to try to reduce environmental harm. Do you see where I'm going with this?

The very nature of politics suggests that people legitimately believe their position is either correct to some standard or valuable - otherwise they wouldn't be holding that position in good faith in the first place. So on that basis, there's no such thing as "unbiased AI". Because there will always be someone else for whom your "neutral" position seems biased from the perspective of their own "neutral" position. The only way to "be unbiased" is to present every position you can find or just refuse to answer anything that could be controversial - which is what AI generally tries to do right now.
 

narad

Progressive metal and politics
Joined
Feb 15, 2009
Messages
16,796
Reaction score
31,257
Location
Tokyo
Yea, and in this case "neutral" means reinforcing the stereotypes (or the realities) of the world as it currently is, not as it could be. Do you want to reinforce the idea that all doctors are men, all "great thinkers" are white, all gang members are black or hispanic, etc.? These are the biases a typical web crawl will lead you to, based on current social demographics and dominant historical perspectives.

You try to undo this by identifying prompts which ask for such groups of people and further specifying (implicitly) that you want the results to be inclusive, i.e., that all of us, regardless of race or gender, are equally capable of becoming doctors or great thinkers, and that overshoots and messes up other historical contexts. This is not really a left- or right-leaning idea, and it is closely aligned with the notion of protected classes in the American legal system.

The important point is that it was immediately recognized as a bug and not a feature. The idea that this was an effort to rewrite all of history based on developers' values is a poor reading of the realities of why the model behaved that way. Being too inclusive was just collateral damage in the attempt to prevent the model from not being inclusive at all.
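
As a very rough sketch of the kind of prompt rewriting being described (entirely hypothetical: the trigger words, hint text, and function name here are my own guesses, not Google's actual pipeline), it could be as simple as a pre-processing step like this:

import random

# Hypothetical illustration only: a crude pre-processing step that nudges
# "group of people" prompts toward demographic variety before they reach
# an image model. The real pipeline is not public; this is a guess at the
# general shape of the technique, not anyone's actual implementation.

PEOPLE_TERMS = {"person", "people", "doctor", "doctors", "scientist",
                "scientists", "engineer", "engineers", "ceo", "nurse"}

DIVERSITY_HINTS = ["of diverse genders and ethnicities",
                   "from a range of backgrounds"]

def rewrite_prompt(prompt: str) -> str:
    """Append an inclusivity hint when the prompt seems to ask for people."""
    words = {w.strip(".,!?").lower() for w in prompt.split()}
    if words & PEOPLE_TERMS:
        return f"{prompt}, {random.choice(DIVERSITY_HINTS)}"
    return prompt  # non-people prompts pass through unchanged

print(rewrite_prompt("draw a conference of doctors who are leaders in their field"))
print(rewrite_prompt("draw a mountain at sunset"))

The overshoot is easy to see in a rule like this: the trigger list has no notion of historical context, so a prompt about a specific historical group gets the same hint appended as a generic one.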
 

TedEH

Cromulent
Joined
Jun 8, 2007
Messages
13,054
Reaction score
13,474
Location
Gatineau, Quebec
I'm reminded of the thing a while ago where one of the image generating models just could not imagine a depiction of a "nerd" that didn't wear glasses. As soon as the word "nerd" appeared in the prompt, every result had glasses in it, no matter what the rest of the prompt was.
 

Briz

Lifelong Learner
Joined
Jan 30, 2015
Messages
213
Reaction score
314
Location
Charlotte, NC
If you want political AI over apolitical AI, what happens when your preferred political leaning is disparaged, shut down, and labeled hate speech by AI? Or, an absolute truth is suppressed because it fails to support an agenda? Don't confuse the current zeitgeist with ever-changing cultural movements. Culture and ideas always change. I favor apolitical because history is replete with tyrants that gain power by tickling ears. I favor apolitical because I want the raw, unfiltered data from numerous sources. I'd like to discard propaganda, divisiveness, and sensationalism. To discern the difference between facts and propaganda, you need context and accurate, unbiased data. That's why empirical processes, like the scientific method, still work today. We may initially have a bias towards preferred hypotheses, but then we test our hypotheses to see if we're correct. I'd prefer that AI not seek to validate confirmation biases if that entails suppressing opposing points of view or changing facts to support a narrative.
 

TedEH

Cromulent
Joined
Jun 8, 2007
Messages
13,054
Reaction score
13,474
Location
Gatineau, Quebec
what happens when your preferred political leaning is disparaged, shut down, and labeled hate speech by AI?
I imagine that in that situation, the problem is not the AI, but my position as compared with that of the general population. In other words, I don't feel like I'm at risk of my beliefs becoming hate speech, because "hate speech" is not an arbitrary decision made along party lines.

Or, an absolute truth is suppressed because it fails to support an agenda?
There is no man-made media, be it AI or newspapers or just a casual conversation, that can claim to be "absolute truth", and the suggestion that it can is absurd on the face of it. You can't eliminate bias, you can only be aware of it and respond appropriately. Besides that, it only matters if AI is authoritative, which it isn't, and probably should never be.

I favor apolitical because history is replete with tyrants that gain power by tickling ears.
But that's not apolitical. Just acknowledging the existence of "a tyrant" is a political stance.

I'm not contesting that striving for transparency and accuracy is a good thing. I'm contesting that the current examples we have are as biased as you're implying.
 

Briz

Lifelong Learner
Joined
Jan 30, 2015
Messages
213
Reaction score
314
Location
Charlotte, NC
@TedEH

You may not be at risk yet, but you have no idea what cultural shifts might occur in the future. AI isn't authoritative yet, but it's myopic to think that it's not a possibility in the future. Apolitical AI tools that disseminate news and viewpoints could possibly guard against tyranny by presenting all viewpoints to let the consumer of said information decide. Tyrants may be political, but their end goal is always power and control. To maintain power and control, tyrants seek to crush dissenting views and opinions. Multiple generations now are addicted to technology and seek to validate their confirmation biases through various media channels. As AI is refined and capabilities grow, I'll again state that it's dangerous to train AI to react favorably to the preferred political agenda of 1/2 of the vox populi, given that said preferences change drastically over time.

Let's also not limit AI to its current form. Eventually there will be consolidation and capabilities will expand. In the right environment, like Oak Ridge National Laboratory's Frontier, AI's function could expand to all aspects of our life, from finance to healthcare. Think of how fast IoT emerged. I just read an article that referenced AI's role in creating a fusion energy source that would increase our ability to enhance AI capabilities. I believe the current state of AI will evolve relatively quickly. That's why I'm saying we should give careful thought to the desired future state, given that it will provide solutions globally. Nations will weaponize AI, if they haven't already. Think about the reach, and our current addiction to and dependence on technology.
 

narad

Progressive metal and politics
Joined
Feb 15, 2009
Messages
16,796
Reaction score
31,257
Location
Tokyo
If you want political AI over apolitical AI, what happens when your preferred political leaning is disparaged, shut down, and labeled hate speech by AI? Or, an absolute truth is suppressed because it fails to support an agenda? Don't confuse the current zeitgeist with ever-changing cultural movements. Culture and ideas always change. I favor apolitical because history is replete with tyrants that gain power by tickling ears. I favor apolitical because I want the raw, unfiltered data from numerous sources. I'd like to discard propaganda, divisiveness, and sensationalism. To discern the difference between facts and propaganda, you need context and accurate, unbiased data. That's why empirical processes, like the scientific method, still work today. We may initially have a bias towards preferred hypotheses, but then we test our hypotheses to see if we're correct. I'd prefer that AI not seek to validate confirmation biases if that entails suppressing opposing points of view or changing facts to support a narrative.

There is no unbiased data to train these models. There is no apolitical data to train these models. You are making a fundamental mistake in your thinking so the subsequent 20 paragraphs of text are meaningless.
 

Briz

Lifelong Learner
Joined
Jan 30, 2015
Messages
213
Reaction score
314
Location
Charlotte, NC
There is no unbiased data to train these models. There is no apolitical data to train these models. You are making a fundamental mistake in your thinking so the subsequent 20 paragraphs of text are meaningless.

First, there is objective truth. George Washington wasn't African American and Harriet Tubman wasn't Asian. The earth isn't flat. 1+1=2. If you take what I said into consideration, I'm saying that where there are competing viewpoints, provide said viewpoints so the consumer can decide. Or heaven forbid, the consumer of information can do actual research and arrive at a conclusion. "GASP! The horror of hearing something I don't agree with! Whatever shall I do?" Well, the answer is to look at the multiple viewpoints and conclusions of the information and decide instead of looking for a source to spoon feed you the delicious rage info to support your tribe. We should all be wary of echo chambers.

We should all seek out different viewpoints and have the ability to engage in honest discourse. Although it might challenge my beliefs, I'm not afraid of being wrong, admitting that I'm wrong, and learning something new. That's the importance of apolitical dissemination of information. It doesn't limit your points of reference to a particular worldview.
 

narad

Progressive metal and politics
Joined
Feb 15, 2009
Messages
16,796
Reaction score
31,257
Location
Tokyo
First, there is objective truth. George Washington wasn't African American and Harriet Tubman wasn't Asian. The earth isn't flat. 1+1=2. If you take what I said into consideration, I'm saying that where there are competing viewpoints, provide said viewpoints so the consumer can decide. Or heaven forbid, the consumer of information can do actual research and arrive at a conclusion. "GASP! The horror of hearing something I don't agree with! Whatever shall I do?" Well, the answer is to look at the multiple viewpoints and conclusions of the information and decide instead of looking for a source to spoon feed you the delicious rage info to support your tribe. We should all be wary of echo chambers.

We should all seek out different viewpoints and have the ability to engage in honest discourse. Although it might challenge my beliefs, I'm not afraid of being wrong, admitting that I'm wrong, and learning something new. That's the importance of apolitical dissemination of information. It doesn't limit your points of reference to a particular worldview.

No one is trying to get the AI to make George Washington black or Harriet Tubman asian (or similar historical rewrites). This is a strawman argument.

Also no one is limiting the model to a specific worldview. It is practically impossible to scrape a gigantic portion of the web from which to train these models, and yet have all of that data subscribe to only a single worldview.
 

TedEH

Cromulent
Joined
Jun 8, 2007
Messages
13,054
Reaction score
13,474
Location
Gatineau, Quebec
Apolitical AI tools that disseminate news and viewpoints could possibly guard against tyranny by presenting all viewpoints to let the consumer of said information decide.
This is no different than non-AI tools. It's an actual problem that actually exists in spaces that claim to be doing the opposite. Look at the shitshow that is Twitter / X. You could also argue that historically, "let the consumer decide" has not led to very smart choices - again look at twitter. The court of public opinion is very often off the mark. Very rarely does the consumer actually do the deciding for themselves in those cases anyway - it's not like the general public is at some default state of "unbiased". Again - you cannot eliminate bias. It is impossible.

You may not be at risk yet, but you have no idea what cultural shifts might occur in the future.
I fundamentally reject the premise that if my beliefs suddenly become "hate crime" that my problem would be with AI. It would be with the culture that allowed such a dramatic shift, or with myself for becoming hateful.

I continue to think that what you're afraid of isn't AI, it's "wokeness", and being "on the wrong side of history" or some other nonsense. Such a dystopian future it'll be when we..... try to be diverse.....?

I once again ask you to provide an actual concrete example of the thing you're afraid of or take issue with, outside of a vague "yeah, but what if AI ENABLES TYRANTS?!" as if tyrants will just pop out of the ground because the computer told us to.
 

Briz

Lifelong Learner
Joined
Jan 30, 2015
Messages
213
Reaction score
314
Location
Charlotte, NC
No one is trying to get the AI to make George Washington black or Harriet Tubman asian. This is a strawman argument.

Also no one is limiting the model to a specific worldview. It is practically impossible to scrape a gigantic portion of the web from which to train these models, and yet have all of that data subscribe to only a single worldview.
Gemini did just that. It's not a strawman when it actually happened. And you don't know any more than I do what the intent was. You may want it to be the way you describe, but you don't know the intent. So you are engaging in a logical fallacy. The ideologies and biases of the team that created Gemini are present in the output. See Google's managing director's own words for reference.

I'm proposing that this be a lesson to all AI companies, and I'm hoping that they course correct and evaluate how their innate biases can impact the dissemination of information when training AI.
 

narad

Progressive metal and politics
Joined
Feb 15, 2009
Messages
16,796
Reaction score
31,257
Location
Tokyo
Gemini did just that. It's not a strawman when it actually happened. And you don't know any more than I do what the intent was. You may want it to be the way you describe, but you don't know the intent. So you are engaging in a logical fallacy. The ideologies and biases of the team that created Gemini are present in the output. See Google's managing director's own words for reference.

I'm proposing that this be a lesson to all AI companies, and I'm hoping that they course correct and evaluate how their innate biases can impact the dissemination of information when training AI.
The intent is literally an entire subfield of AI research that thousands of people work on. It's easy to see what happened here -- they tackled one problem but cast a wider net than intended. You can also see the hints at how this is implemented in the rephrasing of the prompt when the model presents the results. It happens when asked to draw people, but not other things. This would be a win for a variety of categories, like as mentioned before, "draw a conference of doctors who are leaders in their field", etc. It's not great when asking to draw a group of popes.

So when you say this should be a lesson to all AI companies, you're acting like this isn't a known thing, that they aren't aware that biases, both in training data and prompt design, influence model predictions. They are. There are whole teams whose purpose is to be experts in these topics. You're not raising a new point, but rather attributing to ignorance what was likely poor coordination between teams and a rush to ship.
 

Briz

Lifelong Learner
Joined
Jan 30, 2015
Messages
213
Reaction score
314
Location
Charlotte, NC
This is no different than non-AI tools. It's an actual problem that actually exists in spaces that claim to be doing the opposite. Look at the shitshow that is Twitter / X. You could also argue that historically, "let the consumer decide" has not led to very smart choices - again look at twitter. The court of public opinion is very often off the mark. Very rarely does the consumer actually do the deciding for themselves in those cases anyway - it's not like the general public is at some default state of "unbiased". Again - you cannot eliminate bias. It is impossible.


I fundamentally reject the premise that if my beliefs suddenly become "hate crime" that my problem would be with AI. It would be with the culture that allowed such a dramatic shift, or with myself for becoming hateful.

I continue to think that what you're afraid of isn't AI, it's "wokeness", and being "on the wrong side of history" or some other nonsense. Such a dystopian future it'll be when we..... try to be diverse.....?

I once again ask you to provide an actual concrete example of the thing you're afraid of or take issue with, outside of a vague "yeah, but what if AI ENABLES TYRANTS?!" as if tyrants will just pop out of the ground because the computer told us to.

You've got it wrong. AI is the future. What if it's weaponized in the form of social engineering? I have no problem with acknowledging multiple viewpoints and I'm certainly not far right. I don't want any political party to control information and tools that can vastly impact society. I want the ability to decide for myself. Twitter isn't a great example; it demonstrates rage-mob info consumption and the inability to research facts. Clickbait and not reading beyond the headlines is the problem. When AI replaces news conglomerates, do you want biased information, or a plethora of multiple viewpoints? As you know, complex newsworthy events are rarely black and white. If you don't want the latter, then we'll just amicably stop this conversation. Thank you for the polite discourse and for sharing your views. You've raised valid points.
 

Briz

Lifelong Learner
Joined
Jan 30, 2015
Messages
213
Reaction score
314
Location
Charlotte, NC
The intent is literally an entire subfield of AI research that thousands of people work on. It's easy to see what happened here -- they tackled one problem but cast a wider net than intended. You can also see the hints at how this is implemented in the rephrasing of the prompt when the model presents the results. It happens when asked to draw people, but not other things. This would be a win for a variety of categories, like as mentioned before, "draw a conference of doctors who are leaders in their field", etc. It's not great when asking to draw a group of popes.

So when you say this should be a lesson to all AI companies, you're acting like this isn't a known thing, that they aren't aware that biases, both in training data and prompt design, influence model predictions. They are. There are whole teams whose purpose is to be experts in these topics. You're not raising a new point, but rather attributing to ignorance what was likely poor coordination between teams and a rush to ship.

Counterpoint: Some prompts yielded responses that were blatantly biased. Again, I'll point you back to the Managing Director's own words to see how this happened.
 

TedEH

Cromulent
Joined
Jun 8, 2007
Messages
13,054
Reaction score
13,474
Location
Gatineau, Quebec
"Why do you think this is conspiratorial thinking?"
Because you say things like:
AI is the future. What if it's weaponized in the form of social engineering?
First of all: It already is. People get scammed by AI voices every day.

I don't want any political party to control information and tools that can vastly impact society.
Google is not a political party.

When AI replaces news conglomerates, do you want biased information, or a plethora of multiple viewpoints?
Once again I don't think you're paying attention to what's actually happening. AI is already used by "news" sites and consistently getting push back for it.

I have no problem with acknowledging multiple viewpoints and I'm certainly not far right.
Honestly, given that you've refused to explain what viewpoints you're afraid of manifesting through the machines, I kinda don't believe you.

Counterpoint: Some prompts yielded responses that were blatantly biased
Give examples. Because again, I don't believe you.
 