AI and its effect on your music, your job and the future

  • Thread starter M3CHK1LLA
  • Start date
  • This site may earn a commission from merchant affiliate links like Ebay, Amazon, and others.

crushingpetal

SS.org Regular
Joined
Nov 11, 2022
Messages
1,391
Reaction score
1,938
I'm also not convinced at all that the neural-net style of AI will get real knowledge or understanding.
 


narad

Progressive metal and politics
Joined
Feb 15, 2009
Messages
16,838
Reaction score
31,348
Location
Tokyo
This is relevant because it has to do with the aim. If you want general linguistic competence, sure, you're golden. If you want anything like knowledge or understanding, you're barking up the wrong tree.

It's exactly the linguistic competence and general reasoning that is important. If anything, knowledge should be more explicitly disentangled from forward prediction parameters than it is in current LLMs, as it is still messy to update after-the-fact and 2nd or 3rd order inferences from the updated information often fail to get updated appropriately. In my opinion, the "AI" part of it is the reading/reasoning aspects, and the exact knowledge the LLM has is peripheral.

Like I said, if you have the general purpose AI, there are approaches like RAG that can fill in the knowledge on-the-fly.
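As a rough illustration of the RAG idea (a toy sketch, not any particular product's pipeline): relevant text is fetched at query time and prepended to the prompt, so the knowledge lives outside the model's weights and can be updated without retraining. Here the retriever is just naive word overlap and the LLM call is stubbed out; all names are invented for the example.

```python
# Toy retrieval-augmented generation (RAG) sketch. The model's parameters
# stay fixed; up-to-date facts are pulled in on-the-fly at query time.

def retrieve(query, documents, k=1):
    """Rank documents by naive word overlap with the query.
    (A real retriever would use embeddings, not word overlap.)"""
    q_words = set(query.lower().split())
    scored = sorted(
        documents,
        key=lambda d: len(q_words & set(d.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(query, documents):
    """Prepend retrieved context to the question. A real RAG pipeline
    would pass this prompt to an LLM; returning it is enough to show
    how fresh knowledge reaches the model without retraining."""
    context = "\n".join(retrieve(query, documents))
    return f"Context:\n{context}\n\nQuestion: {query}"

docs = [
    "The band released their new album in 2024.",
    "Deathcore blends death metal with metalcore breakdowns.",
]
prompt = build_prompt("When was the new album released?", docs)
```

The point is that the 2024 fact ends up in the prompt even though no "model" was ever trained on it, which is why a stale training cutoff isn't the dead end it's sometimes made out to be.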

Also I find the idea that LLMs blew their load on 2022 data to be pretty weird when they've improved quite consistently and significantly since then.
 

TedEH

Cromulent
Joined
Jun 8, 2007
Messages
13,116
Reaction score
13,582
Location
Gatineau, Quebec
Of course, consistent improvement isn't always particularly meaningful. To rope the video game metaphor back in, rendering has improved in leaps and bounds in the last 10 years or so, but unless you're pretty deep in the weeds, it's very hard to just look at two renderings and say one is objectively better than another.

But more importantly, I feel like "linguistic competence and general reasoning" is exactly that xkcd comic I posted recently. We already know that LLMs are great at language processing. Language follows pretty solid grammar rules that are pretty trivial for a model to assimilate, compared to "general reasoning", which depends on your semantic and philosophical take on what that even means, and some might not think it's possible at all.

Of course people aren't impressed by LLMs using convincing grammar, they're impressed that they can string together plausible sounding logic, and answer obscure questions with some measure of accuracy. Or play Minecraft real well, or whatever.
 

TedEH

Cromulent
Joined
Jun 8, 2007
Messages
13,116
Reaction score
13,582
Location
Gatineau, Quebec
Legit question - why do you feel that AI skepticism is necessarily a "hit piece?" What motive do people have to be skeptical of AI that would warrant hit pieces, rather than just legitimate skepticism?
 

narad

Progressive metal and politics
Joined
Feb 15, 2009
Messages
16,838
Reaction score
31,348
Location
Tokyo
Legit question - why do you feel that AI skepticism is necessarily a "hit piece?" What motive do people have to be skeptical of AI that would warrant hit pieces, rather than just legitimate skepticism?

What makes things feel like hit pieces is when there's some criticism, but seemingly no effort to determine the extent to which there are reasonable solutions being developed to address those criticisms. Or in my mind, to sanity-check by oneself how far-reaching and damning those criticisms really are.

It's kind of like those many-fingered characters in earlier image generation models. I'm reading articles that are like, "Is this going to be taking any artist jobs? Not so long as it thinks this is an acceptable hand". Meanwhile I was literally using a [at the time] soon-to-be-public model that almost never made those mistakes, and the idea that the models were somehow intrinsically incapable of doing 5-finger hands would have been obviously shortsighted had the authors spent enough time understanding how the models work. Now everybody knows the fingers aren't a very pervasive problem in current gen models and we have to move the goalpost over to some other thing.

So when I see a criticism of LLMs, I'm similarly thinking to myself like, "yep, that's not an inherent limitation of the model", "not that either", "yep, that's basically gone in the latest models", "yea, I'd just do X instead here", etc. Like, in this thread, for many criticisms I've pointed out actual research that addresses them, with probably enough detail to go and find those sorts of papers. There are like 30 papers under review at NeurIPS right now addressing these sorts of things in all sorts of ways. The idea that we're at some sort of AI dead-end is not at all the vibe from within the community, where the development of all sorts of things is in like turbo mode compared to pre-ChatGPT.
 

crushingpetal

SS.org Regular
Joined
Nov 11, 2022
Messages
1,391
Reaction score
1,938
It's exactly the linguistic competence and general reasoning that is important. If anything, knowledge should be more explicitly disentangled from forward prediction parameters than it is in current LLMs, as it is still messy to update after-the-fact and 2nd or 3rd order inferences from the updated information often fail to get updated appropriately. In my opinion, the "AI" part of it is the reading/reasoning aspects, and the exact knowledge the LLM has is peripheral.

Like I said, if you have the general purpose AI, there are approaches like RAG that can fill in the knowledge on-the-fly.

Also I find the idea that LLMs blew their load on 2022 data to be pretty weird when they've improved quite consistently and significantly since then.
I'm actually flabbergasted that you would casually run linguistic competence and general reasoning together.

For funsies, here's a mistake that AI / Apple Music made as I'm writing this:

[Attached screenshot: Deathcore.jpg]
 

crushingpetal

SS.org Regular
Joined
Nov 11, 2022
Messages
1,391
Reaction score
1,938
It's kind of like those many-fingered characters in earlier image generation models. I'm reading articles that are like, "Is this going to be taking any artist jobs? Not so long as it thinks this is an acceptable hand". Meanwhile I was literally using a [at the time] soon-to-be-public model that almost never made those mistakes, and the idea that the models were somehow intrinsically incapable of doing 5-finger hands should be obviously stupid and shortsighted if the authors had spent enough time to understand how the models work. Now everybody knows the fingers aren't a very pervasive problem in current gen models and we have to move the goalpost over to some other thing.
I spent a fair amount of time from January to May working with Midjourney and DALL-E and Photoshop's generative fill, and the problems were well beyond numerically having "5 finger hands". The systems didn't understand how fingers work, period. What current gen are you talking about?
 

TedEH

Cromulent
Joined
Jun 8, 2007
Messages
13,116
Reaction score
13,582
Location
Gatineau, Quebec
For funsies, here's a mistake that AI / Apple Music made as I'm writing this:
This was a point I was going to make earlier and I think I erased it - a lot of the uses for AI are things that we can't even really tell are AI or not in the first place. Maybe Apple calls their recommendation service AI, but is it actually? Maybe it is, but I don't know. How could I know? How can any consumer know the difference between a product that uses machine learning, and just a general non-AI algorithm of some kind? Do you know if your smart TV uses AI? Are we really just going to call all software AI, like was suggested earlier?
 

p0ke

7-string guitard
Joined
Jul 25, 2007
Messages
3,876
Reaction score
2,717
Location
Salo, Finland
This was a point I was going to make earlier and I think I erased it - a lot of the uses for AI are things that we can't even really tell are AI or not in the first place. Maybe Apple calls their recommendation service AI, but is it actually? Maybe it is, but I don't know. How could I know? How can any consumer know the difference between a product that uses machine learning, and just a general non-AI algorithm of some kind? Do you know if your smart TV uses AI? Are we really just going to call all software AI, like was suggested earlier?

Yeah, the term AI has been slapped on pretty much anything for the last 10 years at least, when most of it is actually just a pretty simple algorithm that matches two things together. And even with "machine learning", how do you actually know that's AI and not just software that collects a lot of data, derives some variables from it, and performs hard-coded actions based on those?

For example my latest smart watch claims it's using AI to optimize battery consumption, but TBH I think it just collects enough data to calculate stuff like "because you usually go to bed between 11-12, it's safe to set the watch to battery saving sleep mode unless significant movement is detected". Of course that's the easiest thing to detect and calculate, so maybe it is actually doing some AI stuff for other more complicated daily routines.
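The "hard coded" alternative being guessed at here can be sketched in a few lines of Python; this is just an illustration of rule-plus-statistics logic (every function name and threshold is invented), not how any actual watch firmware works:

```python
# Sketch of a non-ML "battery saver" heuristic: plain statistics over
# logged bedtimes plus a hard-coded rule, the kind of logic that often
# gets marketed as "AI". All names/thresholds here are made up.
from statistics import median

def usual_bedtime(logged_bedtimes_hours):
    """Typical bedtime as the median of logged hours (24h clock)."""
    return median(logged_bedtimes_hours)

def should_enter_battery_saver(now_hour, logged_bedtimes_hours,
                               window=1.0, moving=False):
    """Enable sleep mode near the usual bedtime, unless significant
    movement is detected."""
    typical = usual_bedtime(logged_bedtimes_hours)
    return (not moving) and abs(now_hour - typical) <= window

# User usually goes to bed between 11 and 12 pm:
bedtimes = [23.0, 23.5, 23.25, 23.75, 23.5]
```

Nothing here is learned in any meaningful sense; whether a vendor would still call it "AI" is exactly the question.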

I don't think that Apple recommendations thing is AI, and it just got messed up in that example because either a bunch of people who generally listen to deathcore happened to also listen to whatever appeared on the list, or there's a bug in the API (or application that calls the API) that caused it to return some generic results.

Anyway, I guess basically any software that does anything other than direct user input can be called AI since there's no way to know what it actually does under the hood...
 