AI and its effect on your music, your job and the future

  • Thread starter M3CHK1LLA
  • Start date
  • This site may earn a commission from merchant affiliate links like Ebay, Amazon, and others.

crushingpetal

SS.org Regular
Joined
Nov 11, 2022
Messages
1,381
Reaction score
1,938
I missed this earlier, but I'm not sure all the text of the internet quite portrays a meaningful or complete view of the world. It certainly portrays a good chunk of how online folks like to talk about the world, but if we're scraping the reddits and x/twitters of the world for a large chunk of that, we're not exactly starting from a very reliable source of "understanding" the world to begin with.

If the goal is to plausibly sound like someone who would post on the internet, that might be enough, but I don't think that satisfies a model of understanding the world.
Again, +1. You're hitting it out of the park today. (Or pick your favorite sports analogy.)
 


narad

Progressive metal and politics
Joined
Feb 15, 2009
Messages
16,834
Reaction score
31,345
Location
Tokyo
Agree to disagree on these points.

> I think we have basically all the text-of-the-internet data we're ever going to need when it comes to gleaning an understanding of the world from raw text.

Try a thought experiment on a scenario where we grabbed the same amount of text from people in the 1970s.

I don't quite understand. People in the 1970s had human level intelligence, no?

I missed this earlier, but I'm not sure all the text of the internet quite portrays a meaningful or complete view of the world. It certainly portrays a good chunk of how online folks like to talk about the world, but if we're scraping the reddits and x/twitters of the world for a large chunk of that, we're not exactly starting from a very reliable source of "understanding" the world to begin with.

If the goal is to plausibly sound like someone who would post on the internet, that might be enough, but I don't think that satisfies a model of understanding the world.

I didn't say that we had enough text to have an understanding of the world. I said we had enough text to gain as much of an understanding of the world as we can "from raw text", i.e., as much as we're going to need from that modality. The current and next-gen models are getting a lot of their comparatively new data from other modalities directly (audio data in the case of GPT-4o, transcriptions of audio in the case of a lot of LLMs, and video data in the case of Sora and some Gemini models).

So if you're talking about model collapse, there's this assumption that the internet keeps growing in size, AI-generated content makes up a lot of it, and we still train naively and uniformly on that web scrape. In reality, the web scrape's usefulness is not going to keep growing in proportion to its size: future models will be trained on huge scrapes of data in other modalities, models will be used to score the likelihood of data coming from previous models, and data will be sampled selectively to maximize its utility to the new model.
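The selective-sampling step described above can be sketched in a few lines. This is a toy illustration, not anyone's actual pipeline: the scoring function here is plain character entropy standing in for a real learned scorer (a model likelihood or a model-generated-text classifier), and the corpus and names are made up.

```python
import math
from collections import Counter

def char_entropy(text):
    """Shannon entropy of the character distribution -- a crude
    stand-in for a learned model's quality/likelihood score."""
    counts = Counter(text)
    total = len(text)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def select_training_data(docs, keep_fraction=0.5):
    """Score every document and keep only the top fraction,
    rather than training uniformly on the whole scrape."""
    scored = sorted(docs, key=char_entropy, reverse=True)
    k = max(1, int(len(docs) * keep_fraction))
    return scored[:k]

corpus = [
    "aaaaaaaaaaaaaaaaaaaa",                        # degenerate, low entropy
    "The quick brown fox jumps.",                  # natural text
    "abababababababababab",                        # repetitive, likely junk
    "Model collapse is a training-time concern.",  # natural text
]
selected = select_training_data(corpus, keep_fraction=0.5)
```

A real system would swap `char_entropy` for something much stronger, but the shape is the same: score, rank, and sample selectively instead of ingesting the scrape wholesale.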
 
Last edited:

TedEH

Cromulent
Joined
Jun 8, 2007
Messages
13,104
Reaction score
13,570
Location
Gatineau, Quebec
It's very hard not to read your comments as coming from a place where you're so invested in seeing AI succeed that you don't want to consider any critical perspective. Which is fine, you do you. The cutting edge is exciting. Optimism is good. I'd actually really like to hear what critical takes could come from someone who is following the topic so closely and gets it. But it's got to be balanced out with at least some amount of staying critical or skeptical, because there's very likely a ceiling to this progress, or at least to this rate of progress. Remember when blockchain was going to solve all the world's problems? It solved exactly zero problems that it didn't create for itself.

Nobody is claiming that AI is useless, but it has its limits and its downsides, and will always have limits and downsides, and I think it would be naive and maybe dangerous to culturally dive head first into disruption without first grounding ourselves in the kind of skepticism that would prevent us from shooting ourselves in the foot with technology just for its own sake, as we have a habit of doing. When you "move fast and break things", there's a significant risk of breaking something very important.

there's this assumption that the internet keeps growing in size,
AI-generated content makes up a lot of it
and we still train naively and uniformly on that web scrape
I think these are all reasonable assumptions a person could make.

future models will be trained on huge scrapes of data in other modalities
What are these other modalities? If we're not feeding models more text, images, sound, etc. from the internet, what are we talking about? Setting up mics and cameras everywhere and letting the AI just sort itself out? Let it literally watch us and decide for itself? Like you keep saying, AI is not just chat bots - we've been feeding images, sounds, etc. into different models for a good amount of time now.

models will be used to score the likelihood of data coming from previous models, and data will be sampled selectively to maximize the utility to the new model.
This hasn't really worked so far. All evidence I've seen so far shows that AI is bad at detecting AI.
 

gabito

Advanced power chords user
Joined
Aug 1, 2010
Messages
1,126
Reaction score
1,806
Location
South of Heaven - Argentina
The good parts of AI: given an infinite amount of time a monkey hitting a keyboard randomly will almost surely solve cancer

That will never happen. Monkeys are too busy buying guitars they'll never use because they spend an infinite amount of time typing random stuff in guitar forums.
 

SalsaWood

Scares the 'choes.
Joined
May 15, 2017
Messages
1,871
Reaction score
3,074
Location
NoVA
The word "need" is carrying a lot of weight here. There's no need for most of the AI out there. ChatGPT is not a need. Smart appliances are not a need. Really advanced Minecraft bots are not a need. 400 bazillion propaganda photos with too many fingers are not a need. Medical and scientific advancements could be had using machine learning without abstracting it out to chat bots and generalized job automation and trying to humanize computers.


No it hasn't. Not generally enough to put it that way. There are some folks trying to make this happen, or lying about how much "AI" is in their software, and they're being called out for it.
I mean, autocomplete is software and that's all these AI are. They fill in the blank or do generative modeling in different forms. That's what it all is. Algorithms within software don't make it an entirely new thing, or even a very different thing at all. Are you saying people are talking about it in such ways, but that they shouldn't because of some non-technical colloquialism?
 

TedEH

Cromulent
Joined
Jun 8, 2007
Messages
13,104
Reaction score
13,570
Location
Gatineau, Quebec
I mean, autocomplete is software and that's all these AI are.
Autocomplete, as I understand it, doesn't typically work using AI in any sense of the word - it's generally just a lookup tree of some kind. I've never heard anyone draw that comparison before - in part because it's not a good comparison.
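The lookup-tree style of autocomplete described above can be sketched as a plain prefix trie: completions come from walking stored words character by character, with no learned model involved. The class names and word list here are illustrative, not any particular implementation.

```python
class TrieNode:
    def __init__(self):
        self.children = {}   # char -> TrieNode
        self.is_word = False

class Autocomplete:
    """Plain prefix trie: suggestions are exact lookups over a
    stored vocabulary, not predictions from a model."""
    def __init__(self, words):
        self.root = TrieNode()
        for w in words:
            node = self.root
            for ch in w:
                node = node.children.setdefault(ch, TrieNode())
            node.is_word = True

    def complete(self, prefix):
        # Walk down to the node for the prefix; no match -> no suggestions.
        node = self.root
        for ch in prefix:
            if ch not in node.children:
                return []
            node = node.children[ch]
        # Depth-first walk collecting every stored word under the prefix.
        results, stack = [], [(node, prefix)]
        while stack:
            n, word = stack.pop()
            if n.is_word:
                results.append(word)
            for ch, child in n.children.items():
                stack.append((child, word + ch))
        return sorted(results)

ac = Autocomplete(["guitar", "guitars", "guild", "gain"])
```

The contrast with an LLM is the point: this structure can only ever return words it was explicitly given, ranked by nothing smarter than alphabetical order.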

Are you saying people are talking about it in such ways, but that they shouldn't because of some non-technical colloquialism?
I'm saying that fewer people than you implied are calling all algorithms AI, and those who are doing so are mistaken, whether it's literally, technically, or colloquially. It would be like calling your shoes a car. They both get you from A to B, but that's about as far as the comparison goes.
 

SalsaWood

Scares the 'choes.
Joined
May 15, 2017
Messages
1,871
Reaction score
3,074
Location
NoVA
Autocomplete, as I understand it, doesn't typically work using AI in any sense of the word - it's generally just a lookup tree of some kind. I've never heard anyone draw that comparison before - in part because it's not a good comparison.


I'm saying that fewer people than you implied are calling all algorithms AI, and those who are doing so are mistaken, whether it's literally, technically, or colloquially. It would be like calling your shoes a car. They both get you from A to B, but that's about as far as the comparison goes.
What is the literal difference?

I gather that you're saying all AI is software but not all software is AI. I think I get that far at least.
 

narad

Progressive metal and politics
Joined
Feb 15, 2009
Messages
16,834
Reaction score
31,345
Location
Tokyo
It's very hard not to read your comments as coming from a place where you're so invested in seeing AI succeed that you don't want to consider any critical perspective. Which is fine, you do you.
I can understand why you might think that, but I think the reality is different. I have four jobs: I work at a video game company, and I teach ML at the university. I also consult for a music company and for the US government. None of these is the hype-driven, venture-capital-backed, high-equity job that gives me any skin in the game on whether AI "succeeds" or fails. For my purposes, the LLMs of today and the near future are more than enough to accomplish my company goals from a technological perspective. In fact, a fast-improving AI from some big-tech company only hurts me, as it minimizes my contributions, while also frustrating me that most of my peers and former students have TCs in the $300-800k range while I don't, because that's not how Japan rolls.

Also, you have to acknowledge that you've set up a catch-22: the people who know the technology and its limitations best are, by virtue of that, typically the most invested in its success. To draw an analogy, climate scientists are the most invested in people paying attention to depressing climate model predictions and warnings from the climate science community. They need grants to do work, the grants are largely contingent on government policy, and government policy is often in large part dictated by public sentiment. This exact criticism has been raised by climate skeptics for decades. And in this case almost all AI experts are saying: yes, this technology will be disruptive, and you would be justified in worrying about large-scale job displacement. In this analogy you're basically saying, based on however many articles you've read about how the weather works, and the fact that you had a colder summer this year, that all this global warming stuff is extremely exaggerated.

Or even worse than that, because I'm not asking you to change your lifestyle or anything. I'm basically just recommending that people accurately appreciate the risks of AI impacting their jobs, or jobs in their industry, while simultaneously remaining open-minded about the positives of AI, which is poised to advance civilization in many ways, even if not to the degree that I personally believe.

Of course I'm open to criticisms of AI (and acknowledge many current limitations -- i.e., my job for the most part), but they have to come from informed opinions. They can't come from pointing to glitched out images or talking about AI being autocomplete.

I think these are all reasonable assumptions a person could make.
The last one is not an assumption that AI researchers are going to make. We're not going to be naively training on text uniformly in the future. There are currently many lines of research here, branching out in different ways, all moving away from this. So in that sense the rest of those assumptions don't matter.
 