AI and its effect on your music, your job and the future

  • Thread starter M3CHK1LLA

crushingpetal

SS.org Regular
Joined
Nov 11, 2022
Messages
1,389
Reaction score
1,938
I'm also not at all convinced that the neural-net style of AI will get real knowledge or understanding.
 


narad

Progressive metal and politics
Joined
Feb 15, 2009
Messages
16,838
Reaction score
31,348
Location
Tokyo
crushingpetal said:
This is relevant because it has to do with the aim. If you want general linguistic competence, sure, you're golden. If you want anything like knowledge or understanding, you're barking up the wrong tree.

It's exactly the linguistic competence and general reasoning that are important. If anything, knowledge should be more explicitly disentangled from the forward-prediction parameters than it is in current LLMs, since knowledge is still messy to update after the fact, and second- or third-order inferences from the updated information often fail to get updated appropriately. In my opinion, the "AI" part is the reading/reasoning aspect, and the exact knowledge the LLM holds is peripheral.

Like I said, if you have the general-purpose AI, there are approaches like retrieval-augmented generation (RAG) that can fill in the knowledge on the fly.
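To make that concrete, here's a minimal sketch of the RAG pattern (my illustration, not any specific system being described here): the model's weights don't have to store the fact; a retriever fetches relevant passages at query time and prepends them to the prompt. The keyword-overlap scorer and the call_llm stub are toy placeholders; a real setup would use an embedding index and an actual model API.

```python
# Minimal retrieval-augmented generation (RAG) sketch.
# The scorer and call_llm are toy stand-ins, not a real library's API.

def call_llm(prompt: str) -> str:
    """Stub for whatever LLM completion call you actually use."""
    return f"[model response to a {len(prompt)}-char prompt]"

def score(query: str, doc: str) -> int:
    """Toy relevance score: number of words the query and doc share."""
    return len(set(query.lower().split()) & set(doc.lower().split()))

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Return the k documents most relevant to the query."""
    return sorted(docs, key=lambda d: score(query, d), reverse=True)[:k]

def answer(query: str, docs: list[str]) -> str:
    """Fill in knowledge on the fly: prepend retrieved passages to the prompt."""
    context = "\n".join(retrieve(query, docs))
    prompt = f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"
    return call_llm(prompt)

docs = [
    "Firmware 2.1 added MIDI-over-USB support to the pedal.",
    "The pedal ships with a 9V centre-negative power supply.",
]
print(answer("Does the pedal support MIDI over USB?", docs))
```

The point of the pattern is the division of labor: the LLM supplies the linguistic competence and reasoning, while the knowledge lives in an external store that can be updated without retraining.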

Also, I find the idea that LLMs blew their load on 2022 data pretty weird when they've improved quite consistently and significantly since then.
 

TedEH

Cromulent
Joined
Jun 8, 2007
Messages
13,114
Reaction score
13,579
Location
Gatineau, Quebec
Of course, consistent improvement isn't always particularly meaningful. To rope the video game metaphor back in: rendering has improved in leaps and bounds over the last 10 years or so, but unless you're pretty deep in the weeds, it's very hard to just look at two renderings and say one is objectively better than the other.

But more importantly, I feel like "linguistic competence and general reasoning" is exactly that xkcd comic I posted recently. We already know that LLMs are great at language processing. Language follows pretty solid grammar rules that are fairly trivial for a model to assimilate, compared to "general reasoning", which depends on your semantic and philosophical take on what that even means, and some might not think it's possible at all.

Of course people aren't impressed by LLMs using convincing grammar; they're impressed that they can string together plausible-sounding logic and answer obscure questions with some measure of accuracy. Or play Minecraft really well, or whatever.
 

TedEH

Cromulent
Joined
Jun 8, 2007
Messages
13,114
Reaction score
13,579
Location
Gatineau, Quebec
Legit question - why do you feel that AI skepticism is necessarily a "hit piece"? What motive do people have to be skeptical of AI that would warrant hit pieces, rather than just legitimate skepticism?
 

narad

Progressive metal and politics
Joined
Feb 15, 2009
Messages
16,838
Reaction score
31,348
Location
Tokyo
TedEH said:
Legit question - why do you feel that AI skepticism is necessarily a "hit piece"? What motive do people have to be skeptical of AI that would warrant hit pieces, rather than just legitimate skepticism?

What makes things feel like hit pieces is when there's some criticism, but seemingly no effort to determine whether reasonable solutions are being developed to address it, or, to my mind, to sanity-check for oneself how far-reaching and damning those criticisms really are.

It's kind of like those many-fingered characters from earlier image-generation models. I was reading articles along the lines of, "Is this going to take any artist jobs? Not so long as it thinks this is an acceptable hand." Meanwhile, I was literally using an (at the time) soon-to-be-public model that almost never made those mistakes, and the idea that the models were somehow intrinsically incapable of drawing five-fingered hands would have looked obviously stupid and shortsighted if the authors had spent enough time understanding how the models work. Now everybody knows the fingers aren't a very pervasive problem in current-gen models, and we have to move the goalposts over to some other thing.

So when I see a criticism of LLMs, I'm similarly thinking to myself, "yep, that's not an inherent limitation of the model", "not that either", "yep, that's basically gone in the latest models", "yeah, I'd just do X instead here", etc. In this thread, for many of the criticisms, I've pointed out actual research that addresses them, probably with enough detail to go and find those sorts of papers. There are something like 30 papers under review at NeurIPS right now addressing these sorts of things in all sorts of ways. The idea that we're at some sort of AI dead end is not at all the vibe from within the community, where development of all sorts of things is in turbo mode compared to pre-ChatGPT.
 