AI and its effect on your music, your job and the future

  • Thread starter M3CHK1LLA

crushingpetal

SS.org Regular
Joined
Nov 11, 2022
Messages
1,391
Reaction score
1,938
*I'm also not convinced at all that the neural-net style of AI will get real knowledge or understanding.
 


narad

Progressive metal and politics
Joined
Feb 15, 2009
Messages
16,839
Reaction score
31,350
Location
Tokyo
This is relevant because it has to do with the aim. If you want general linguistic competence, sure, you're golden. If you want anything like knowledge or understanding, you're barking up the wrong tree.

It's exactly the linguistic competence and general reasoning that is important. If anything, knowledge should be more explicitly disentangled from forward prediction parameters than it is in current LLMs, as it is still messy to update after-the-fact and 2nd or 3rd order inferences from the updated information often fail to get updated appropriately. In my opinion, the "AI" part of it is the reading/reasoning aspects, and the exact knowledge the LLM has is peripheral.

Like I said, if you have the general purpose AI, there are approaches like RAG that can fill in the knowledge on-the-fly.
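
Roughly, the idea is the sketch below: a toy keyword retriever and a stubbed-out model call, not any real library, just to show that the facts live in an external store you can update whenever, while the model only has to read them.

Code:
import re

# Toy sketch of retrieval-augmented generation (RAG). The retrieval is
# crude keyword overlap and generate() is a stand-in for an actual LLM
# call; the point is where the knowledge lives, not the quality.

CORPUS = [
    "Periphery released their debut album in 2010.",
    "Meshuggah are generally credited with popularizing djent.",
    "Tokyo is the capital of Japan.",
]

def tokens(text):
    # Lowercase word set, punctuation stripped.
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def retrieve(question, corpus, k=2):
    # Score each document by how many words it shares with the question.
    q = tokens(question)
    return sorted(corpus, key=lambda d: len(q & tokens(d)), reverse=True)[:k]

def generate(prompt):
    # Stand-in for a real model call (API request, local model, etc.).
    return "[model answer conditioned on]\n" + prompt

def answer_with_rag(question):
    # Retrieved text goes into the prompt at inference time, so updating
    # CORPUS updates the "knowledge" without retraining anything.
    context = "\n".join(retrieve(question, CORPUS))
    prompt = "Context:\n" + context + "\n\nQuestion: " + question + "\nAnswer:"
    return generate(prompt)

print(answer_with_rag("Who popularized djent?"))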

Also I find the idea that LLMs blew their load on 2022 data to be pretty weird when they've improved quite consistently and significantly since then.
 

TedEH

Cromulent
Joined
Jun 8, 2007
Messages
13,118
Reaction score
13,584
Location
Gatineau, Quebec
Of course, consistent improvement isn't always particularly meaningful. To rope the video game metaphor back in, rendering has improved in leaps and bounds in the last 10 years or so, but unless you're pretty deep in the weeds, it's very hard to just look at two renderings and say one is objectively better than the other.

But more importantly, I feel like "linguistic competence and general reasoning" is exactly that xkcd comic I posted recently. We already know that LLMs are great at language processing. Language follows pretty solid grammar rules that are pretty trivial to assimilate into a model, compared to "general reasoning", which depends on your semantic and philosophical take on what that even means, and which some might not think is possible at all.

Of course people aren't impressed by LLMs using convincing grammar, they're impressed that it can string together plausible-sounding logic and answer obscure questions with some measure of accuracy. Or play Minecraft real well, or whatever.
 

TedEH

Cromulent
Joined
Jun 8, 2007
Messages
13,118
Reaction score
13,584
Location
Gatineau, Quebec
Legit question - why do you feel that AI skepticism is necessarily a "hit piece?" What motive do people have to be skeptical of AI that would warrant hit pieces, rather than just legitimate skepticism?
 

narad

Progressive metal and politics
Joined
Feb 15, 2009
Messages
16,839
Reaction score
31,350
Location
Tokyo
Legit question - why do you feel that AI skepticism is necessarily a "hit piece?" What motive do people have to be skeptical of AI that would warrant hit pieces, rather than just legitimate skepticism?

What makes things feel like hit pieces is when there's some criticism, but seemingly no effort to determine the extent to which there are reasonable solutions being developed to address those criticisms. Or in my mind, to sanity-check by oneself how far-reaching and damning those criticisms really are.

It's kind of like those many-fingered characters in earlier image generation models. I'm reading articles that are like, "Is this going to be taking any artist jobs? Not so long as it thinks this is an acceptable hand". Meanwhile I was literally using a [at the time] soon-to-be-public model that almost never made those mistakes, and the idea that the models were somehow intrinsically incapable of doing 5-finger hands would have looked obviously stupid and shortsighted if the authors had spent enough time understanding how the models work. Now everybody knows the fingers aren't a very pervasive problem in current-gen models and we have to move the goalpost over to some other thing.

So when I see a criticism of LLMs, I'm similarly thinking to myself like, "yep, that's not an inherent limitation of the model", "not that either", "yep, that's basically gone in the latest models", "yea, I'd just do X instead here", etc. Like in this thread, for many of the criticisms I've pointed out actual research that addresses them, with probably enough detail to go and find those sorts of papers. There are something like 30 papers under review at NeurIPS right now addressing these sorts of things in all sorts of ways. The idea that we're at some sort of AI dead-end is not at all the vibe from within the community, where the development of all sorts of things is in like turbo mode compared to pre-ChatGPT.
 

crushingpetal

SS.org Regular
Joined
Nov 11, 2022
Messages
1,391
Reaction score
1,938
It's exactly the linguistic competence and general reasoning that is important. If anything, knowledge should be more explicitly disentangled from forward prediction parameters than it is in current LLMs, as it is still messy to update after-the-fact and 2nd or 3rd order inferences from the updated information often fail to get updated appropriately. In my opinion, the "AI" part of it is the reading/reasoning aspects, and the exact knowledge the LLM has is peripheral.

Like I said, if you have the general purpose AI, there are approaches like RAG that can fill in the knowledge on-the-fly.

Also I find the idea that LLMs blew their load on 2022 data to be pretty weird when they've improved quite consistently and significantly since then.
I'm actually flabbergasted that you would casually run linguistic competence and general reasoning together.

For funsies, here's a mistake that AI / Apple Music made as I'm writing this:

[attached screenshot: Deathcore.jpg]
 

crushingpetal

SS.org Regular
Joined
Nov 11, 2022
Messages
1,391
Reaction score
1,938
It's kind of like those many-fingered characters in earlier image generation models. I'm reading articles that are like, "Is this going to be taking any artist jobs? Not so long as it thinks this is an acceptable hand". Meanwhile I was literally using a [at the time] soon-to-be-public model that almost never made those mistakes, and the idea that the models were somehow intrinsically incapable of doing 5-finger hands would have looked obviously stupid and shortsighted if the authors had spent enough time understanding how the models work. Now everybody knows the fingers aren't a very pervasive problem in current-gen models and we have to move the goalpost over to some other thing.
I spent a fair amount of time from January to May working with Midjourney, DALL-E, and Photoshop's generative fill, and the problems were well beyond numerically having "5-finger hands". The systems didn't understand how fingers work, period. What current gen are you talking about?
 

TedEH

Cromulent
Joined
Jun 8, 2007
Messages
13,118
Reaction score
13,584
Location
Gatineau, Quebec
For funsies, here's a mistake that AI / Apple Music made as I'm writing this:
This was a point I was going to make earlier and I think I erased it - a lot of the uses for AI are things that we can't even really tell are AI or not in the first place. Maybe Apple calls their recommendation service AI, but is it actually? Maybe it is, but I don't know. How could I know? How can any consumer know the difference between a product that uses machine learning, and just a general non-AI algorithm of some kind? Do you know if your smart TV uses AI? Are we really just going to call all software AI, like was suggested earlier?
 

p0ke

7-string guitard
Joined
Jul 25, 2007
Messages
3,876
Reaction score
2,717
Location
Salo, Finland
This was a point I was going to make earlier and I think I erased it - a lot of the uses for AI are things that we can't even really tell are AI or not in the first place. Maybe Apple calls their recommendation service AI, but is it actually? Maybe it is, but I don't know. How could I know? How can any consumer know the difference between a product that uses machine learning, and just a general non-AI algorithm of some kind? Do you know if your smart TV uses AI? Are we really just going to call all software AI, like was suggested earlier?

Yeah, the term AI has been slapped on pretty much anything for the last 10 years at least, when most of it is actually just a pretty simple algorithm that matches two things together. And even with "machine learning", how do you actually know that's AI and not just something that collects a lot of data, defines some variables based on it, and performs hard-coded actions based on them?

For example, my latest smartwatch claims it's using AI to optimize battery consumption, but TBH I think it just collects enough data to calculate stuff like "because you usually go to bed between 11 and 12, it's safe to set the watch to battery-saving sleep mode unless significant movement is detected". Of course that's the easiest thing to detect and calculate, so maybe it is actually doing some AI stuff for the more complicated daily routines.
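
Something like this would already get you most of the way there, and I wouldn't call any of it AI (totally made-up sketch, obviously not whatever the watch actually runs):

Code:
from datetime import time

# Made-up "battery optimization" rule: thresholds derived from logged
# usage data, then a hard-coded decision. No learning happening here.

USUAL_BEDTIME_START = time(23, 0)   # from weeks of recorded bedtimes
USUAL_BEDTIME_END = time(23, 59)
MOVEMENT_THRESHOLD = 0.05           # accelerometer activity level

def should_enter_sleep_mode(now, recent_movement):
    in_bedtime_window = USUAL_BEDTIME_START <= now <= USUAL_BEDTIME_END
    barely_moving = recent_movement < MOVEMENT_THRESHOLD
    return in_bedtime_window and barely_moving

print(should_enter_sleep_mode(time(23, 30), 0.01))  # True -> dim screen, stop syncing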

I don't think that Apple recommendations thing is AI, and it just got messed up in that example because either a bunch of people who generally listen to deathcore happened to also listen to whatever appeared on the list, or there's a bug in the API (or application that calls the API) that caused it to return some generic results.
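
That kind of "people who played X also played Y" matching can be dead simple too, something like this (toy version, no idea what Apple actually does):

Code:
from collections import Counter

# Toy "listeners who played X also played Y" recommender: pure counting,
# nothing learned. One listener with unrelated taste is enough to drag
# something random into a deathcore list.

HISTORIES = [
    {"Whitechapel", "Thy Art Is Murder", "Lorna Shore"},
    {"Whitechapel", "Lorna Shore", "Shadow of Intent"},
    {"Whitechapel", "Taylor Swift"},  # the outlier that skews results
]

def recommend(seed_artist, histories, n=3):
    # Count every artist that shows up alongside the seed artist.
    counts = Counter()
    for history in histories:
        if seed_artist in history:
            counts.update(history - {seed_artist})
    return [artist for artist, _ in counts.most_common(n)]

print(recommend("Whitechapel", HISTORIES, n=4))  # Taylor Swift shows up on the strength of one listener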

Anyway, I guess basically any software that does anything other than direct user input can be called AI since there's no way to know what it actually does under the hood...
 

narad

Progressive metal and politics
Joined
Feb 15, 2009
Messages
16,839
Reaction score
31,350
Location
Tokyo
I'm actually flabbergasted that you would casually run linguistic competence and general reasoning together.
As flattered as I am to make someone flabbergasted, reading and reasoning are often mentioned in the same sentence as things learned by LLMs.

For funsies, here's a mistake that AI / Apple Music made as I'm writing this:
Recommendation systems are not typically built on the type of AI we're talking about.

I spent a fair amount of time from January to May working with Midjourney, DALL-E, and Photoshop's generative fill, and the problems were well beyond numerically having "5-finger hands". The systems didn't understand how fingers work, period. What current gen are you talking about?

Midjourney V6 is solid. Adobe is still way behind in this space.
 

TedEH

Cromulent
Joined
Jun 8, 2007
Messages
13,118
Reaction score
13,584
Location
Gatineau, Quebec
I ended up looking up that Hinton guy very briefly last night and came across a claim he made, in short:
People say, It’s just glorified autocomplete . . . Now, let’s analyze that. Suppose you want to be really good at predicting the next word. If you want to be really good, you have to understand what’s being said. That’s the only way. So by training something to be really good at predicting the next word, you’re actually forcing it to understand. Yes, it’s ‘autocomplete’—but you didn’t think through what it means to have a really good autocomplete

It sounds to me like a lot of the folks who believe AI "truly understands" are just looking at the results and inferring that more is happening than really is. They're starting from the conclusion and working backwards. Effectively saying that because it plausibly looks like understanding, then it must be understanding, because there can't possibly be any other explanation. And I can't help but fall back on something I expressed a long time ago in another thread.

Does a calculator "understand" math? Does a car "understand" transportation? Does a pen "understand" language? We have an abundance of tools that produce things they don't have an "understanding" of, despite filling those roles very well and producing perfectly reasoned results. But these are all human reasoning expressed through the implementation of the device, not the device itself doing the reasoning. And I think this is where we generally draw the line between what's AI and what isn't. AI, at least from some perspectives, is "the thing doing the reasoning", where an algorithm without AI is just the expression of reasoning done ahead of time by an author.

Take, for example, the "human-centered design" way of thinking, if you're familiar with it - that is to say, a model of embedding knowledge into the design of something, and being aware of the separation between that human knowledge and the embedded knowledge and how they interact. If you press a button, and the button has signaled to you what it will do, and this causes a series of automations that, say, unlock a door or deliver you a canned drink, then the knowledge of what the button represents, the cost of the action, the conditions that satisfy the user's request, the process of producing the action you want, etc. are all things you could call "knowledge" embedded in the machine, the lock, the coke machine, whatever - but these were all deliberately designed and placed there by the person who made the machine. By the author. They're expressions. The source of the knowledge is not the machine, only the expression of it. Without the human having expressed the knowledge via implementation, the machine would do nothing. And when the machine fails in a manner that is outside of what the author expressed - like if the button breaks, the drink gets jammed, someone breaks the door down, etc. - then it has no mechanism to either understand its predicament or do anything about it. It will just keep expressing its mechanical function.

I have trouble believing that AI is any different - it's just an expression of a lot more of the author's intent than you'd have with a traditional machine. The amount of knowledge embedded is vastly larger, but it still represents an expression of the author's reasoning, not the reasoning of the machine as if it were an active agent in the process. A human still decides the model, the training data, the intended result, the bounds of what the machine can access or express, and all of the process by which this happens. Because that's what a computer does.

And we see the same thing happen when the machine fails in a way that falls outside of the expressed knowledge of the author - when something breaks. It continues to follow its design without any other consideration and produces something, because that's what it's designed to do. Hallucinations. Wrong answers delivered confidently. Inaccuracies in simulations. Because it's still a machine, and it's still constrained by the reality that it's a machine.

I find it very hard, philosophically, to attribute the "intelligence" of AI to the machine and not to the person who made the model. Yes, the models are complicated and they can express some very complex things, but they're still the expression of the author of the model. In the same way as with a simple machine, or basic algorithm, or whatever else, it's not the machine that understands, it's the author who understands - and the machine can only express.
 