Briz
Lifelong Learner
No. Back up your position. ¯\_(ツ)_/¯
I'm an anglophone in Quebec, where we're a quietly resented minority, so kind of. As with literal racism, it's almost never as simple as that, obviously.
No.
Did you read what I said? I said you "don't meet racist people in the middle", not "you don't meet reasonable people in the middle". If you're implying that racism or sexism or anything like that are reasonable, normal positions to be met halfway, then I strongly disagree.
That you're mischaracterizing an easy-to-make mistake, one that was walked back, as "indoctrination", and then refusing to back that claim up with anything.
I'm also part of a marginalized minority group on my mother's side. My position, simply put, is about current state vs. target state. It's evident that Gemini was a disaster because, in its team's quest for diversity, it alienated a diverse group of people. My position is that Google should clearly lay out its methodology, testing, and guidance. They also adopted a definition of bias that caused some of the model's outputs to display bias.

As AI is refined, becomes more powerful, and is introduced as an integral component of our lives, we should care about the motives and biases of those who build, test, validate, and deploy this technology. Otherwise, the consequences could be dire. People much smarter than me have talked and written about how quickly AI could become a threat in the areas of social engineering, dissemination of agenda-driven information, discrimination against humans, and global warfare / weaponization. We have to realize that diversity of thought and oversight are required. It's not conspiratorial to think AI can become dangerous to humanity. Some AI innovators have said as much.