AI and its effect on your music, your job and the future

  • Thread starter M3CHK1LLA

crushingpetal

SS.org Regular
Joined
Nov 11, 2022
Messages
1,392
Reaction score
1,938
I'm also not at all convinced that the neural-net style of AI will achieve real knowledge or understanding.
 


narad

Progressive metal and politics
Joined
Feb 15, 2009
Messages
16,843
Reaction score
31,350
Location
Tokyo
This is relevant because it has to do with the aim. If you want general linguistic competence, sure, you're golden. If you want anything like knowledge or understanding, you're barking up the wrong tree.

It's exactly the linguistic competence and general reasoning that is important. If anything, knowledge should be more explicitly disentangled from forward prediction parameters than it is in current LLMs, as it is still messy to update after-the-fact and 2nd or 3rd order inferences from the updated information often fail to get updated appropriately. In my opinion, the "AI" part of it is the reading/reasoning aspects, and the exact knowledge the LLM has is peripheral.

Like I said, if you have the general purpose AI, there are approaches like RAG that can fill in the knowledge on-the-fly.
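To make "fill in the knowledge on-the-fly" concrete, here is a minimal retrieval-augmented generation (RAG) sketch in Python. Everything in it is invented for illustration - the three-document corpus, the word-overlap scoring, and the prompt template - and a real system would use an embedding model and a vector index, but the shape is the same: retrieve relevant text first, then hand it to the model alongside the question.

Code:
# Minimal RAG sketch: retrieve supporting text, then build the prompt.
# The corpus and the scoring are toy stand-ins; a real system would use
# dense embeddings and a vector index rather than word overlap.

CORPUS = [
    "The Eiffel Tower was completed in 1889.",
    "Mount Fuji is the tallest mountain in Japan.",
    "Tokyo became the capital of Japan in 1868.",
]

def retrieve(question, corpus, k=2):
    """Rank documents by naive word overlap with the question."""
    q_words = set(question.lower().split())
    scored = sorted(corpus,
                    key=lambda doc: len(q_words & set(doc.lower().split())),
                    reverse=True)
    return scored[:k]

def build_prompt(question, passages):
    """Prepend retrieved passages so the model answers from them rather
    than from whatever it memorised during pre-training."""
    context = "\n".join("- " + p for p in passages)
    return ("Answer using only the context below.\n"
            "Context:\n" + context + "\n\nQuestion: " + question + "\nAnswer:")

question = "When did Tokyo become the capital of Japan?"
print(build_prompt(question, retrieve(question, CORPUS)))
# The printed prompt is what would be sent to the LLM; swapping in updated
# documents updates the "knowledge" without retraining anything.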

Also I find the idea that LLMs blew their load on 2022 data to be pretty weird when they've improved quite consistently and significantly since then.
 

TedEH

Cromulent
Joined
Jun 8, 2007
Messages
13,125
Reaction score
13,587
Location
Gatineau, Quebec
Of course, consistent improvement isn't always particularly meaningful. To rope the video game metaphor back in, rendering has improved by leaps and bounds in the last 10 years or so, but unless you're pretty deep in the weeds, it's very hard to just look at two renderings and say one is objectively better than the other.

But more importantly, I feel like "linguistic competence and general reasoning" is exactly that xkcd comic I posted recently. We already know that LLMs are great at language processing. Language follows pretty solid grammar rules that are relatively trivial for a model to assimilate, compared to "general reasoning", which depends on your semantic and philosophical take on what that even means - and some might not think it's possible at all.

Of course people aren't impressed by LLMs using convincing grammar; they're impressed that it can string together plausible-sounding logic and answer obscure questions with some measure of accuracy. Or play Minecraft really well, or whatever.
 

TedEH

Cromulent
Joined
Jun 8, 2007
Messages
13,125
Reaction score
13,587
Location
Gatineau, Quebec
Legit question - why do you feel that AI skepticism is necessarily a "hit piece?" What motive do people have to be skeptical of AI that would warrant hit pieces, rather than just legitimate skepticism?
 

narad

Progressive metal and politics
Joined
Feb 15, 2009
Messages
16,843
Reaction score
31,350
Location
Tokyo
Legit question - why do you feel that AI skepticism is necessarily a "hit piece?" What motive do people have to be skeptical of AI that would warrant hit pieces, rather than just legitimate skepticism?

What makes things feel like hit pieces is when there's some criticism, but seemingly no effort to determine the extent to which there are reasonable solutions being developed to address those criticisms. Or, in my mind, no effort to sanity-check for oneself how far-reaching and damning those criticisms really are.

It's kind of like those many-fingered characters in earlier image generation models. I'm reading articles that are like, "Is this going to be taking any artist jobs? Not so long as it thinks this is an acceptable hand". Meanwhile I was literally using a [at the time] soon-to-be-public model that almost never made those mistakes, and the idea that the models were somehow intrinsically incapable of doing 5-finger hands should be obviously stupid and shortsighted if the authors had spent enough time to understand how the models work. Now everybody knows the fingers aren't a very pervasive problem in current gen models and we have to move the goalpost over to some other thing.

So when I see a criticism of LLMs, I'm similarly thinking to myself like, "yep, that's not an inherent limitation of the model", "not that either", "yep, that's basically gone in the latest models", "yea, I'd just do X instead here", etc. Like in this thread, for many criticisms I've pointed out actual research that addresses them, with probably enough detail to go and find those sorts of papers. There are like 30 papers under review at NeurIPS right now addressing these sorts of things in all sorts of ways. The idea that we're at some sort of AI dead-end is not at all the vibe from within the community, where the development of all sorts of things is in turbo mode compared to pre-ChatGPT.
 

crushingpetal

SS.org Regular
Joined
Nov 11, 2022
Messages
1,392
Reaction score
1,938
It's exactly the linguistic competence and general reasoning that is important. If anything, knowledge should be more explicitly disentangled from forward prediction parameters than it is in current LLMs, as it is still messy to update after-the-fact and 2nd or 3rd order inferences from the updated information often fail to get updated appropriately. In my opinion, the "AI" part of it is the reading/reasoning aspects, and the exact knowledge the LLM has is peripheral.

Like I said, if you have the general purpose AI, there are approaches like RAG that can fill in the knowledge on-the-fly.

Also I find the idea that LLMs blew their load on 2022 data to be pretty weird when they've improved quite consistently and significantly since then.
I'm actually flabbergasted that you would casually run linguistic competence and general reasoning together.

For funsies, here's a mistake that AI / Apple Music made as I'm writing this:

Deathcore.jpg
 

crushingpetal

SS.org Regular
Joined
Nov 11, 2022
Messages
1,392
Reaction score
1,938
It's kind of like those many-fingered characters in earlier image generation models. I'm reading articles that are like, "Is this going to be taking any artist jobs? Not so long as it thinks this is an acceptable hand". Meanwhile I was literally using a [at the time] soon-to-be-public model that almost never made those mistakes, and the idea that the models were somehow intrinsically incapable of doing 5-finger hands should be obviously stupid and shortsighted if the authors had spent enough time to understand how the models work. Now everybody knows the fingers aren't a very pervasive problem in current gen models and we have to move the goalpost over to some other thing.
I spent a fair amount of time from January to May working with Midjourney, DALL-E, and Photoshop's generative fill, and the problems were well beyond numerically having "5 finger hands". The systems didn't understand how fingers work, period. What current gen are you talking about?
 

TedEH

Cromulent
Joined
Jun 8, 2007
Messages
13,125
Reaction score
13,587
Location
Gatineau, Quebec
For funsies, here's a mistake that AI / Apple Music made as I'm writing this:
This was a point I was going to make earlier and I think I erased it - a lot of the uses for AI are things that we can't even really tell are AI or not in the first place. Maybe Apple calls their recommendation service AI, but is it actually? Maybe it is, but I don't know. How could I know? How can any consumer know the difference between a product that uses machine learning, and just a general non-AI algorithm of some kind? Do you know if your smart TV uses AI? Are we really just going to call all software AI, like was suggested earlier?
 

p0ke

7-string guitard
Joined
Jul 25, 2007
Messages
3,876
Reaction score
2,718
Location
Salo, Finland
This was a point I was going to make earlier and I think I erased it - a lot of the uses for AI are things that we can't even really tell are AI or not in the first place. Maybe Apple calls their recommendation service AI, but is it actually? Maybe it is, but I don't know. How could I know? How can any consumer know the difference between a product that uses machine learning, and just a general non-AI algorithm of some kind? Do you know if your smart TV uses AI? Are we really just going to call all software AI, like was suggested earlier?

Yeah, the term AI has been slapped on pretty much anything for at least the last 10 years, when most of it is actually just a pretty simple algorithm that matches two things together. And even with "machine learning", how do you actually know that's AI and not just collecting a lot of data, defining variables based on it, and performing hard-coded actions based on them?

For example my latest smart watch claims it's using AI to optimize battery consumption, but TBH I think it just collects enough data to calculate stuff like "because you usually go to bed between 11-12, it's safe to set the watch to battery saving sleep mode unless significant movement is detected". Of course that's the easiest thing to detect and calculate, so maybe it is actually doing some AI stuff for other more complicated daily routines.
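For what it's worth, the two possibilities being described - a hand-written rule versus a rule inferred from logged data - both fit in a few lines of Python, which is partly why you can't tell them apart from the outside. The thresholds, the function names, and the bedtime log below are all invented for the example.

Code:
# Two ways a watch might decide to enter battery-saving sleep mode.
# Every number and name here is invented for illustration.

def sleep_mode_heuristic(hour, movement):
    """Developer-written rule: late hour plus very little movement."""
    return (hour >= 23 or hour < 7) and movement < 0.1

def learn_bedtime(logged_sleep_hours):
    """'Learn' the user's usual bedtime as the most common hour in the log."""
    return max(set(logged_sleep_hours), key=logged_sleep_hours.count)

def sleep_mode_learned(hour, movement, logged_sleep_hours):
    """Same decision, but the hour threshold comes from the user's own data.
    (Midnight wrap-around is ignored to keep the sketch short.)"""
    return hour >= learn_bedtime(logged_sleep_hours) and movement < 0.1

history = [23, 23, 0, 23, 22]                  # hours sleep was detected to start
print(sleep_mode_heuristic(23, 0.02))          # True: the hard-coded rule fires
print(sleep_mode_learned(23, 0.02, history))   # True: threshold inferred as 23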

I don't think that Apple recommendations thing is AI, and it just got messed up in that example because either a bunch of people who generally listen to deathcore happened to also listen to whatever appeared on the list, or there's a bug in the API (or application that calls the API) that caused it to return some generic results.
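And that "people who listen to X also happened to listen to Y" failure mode is easy to reproduce with plain co-occurrence counting, which is roughly all many recommenders do under the hood - no understanding of genre required. The listening histories below are invented:

Code:
from collections import Counter

# Toy item-to-item recommender built purely on co-occurrence counts.
# The play histories are invented; nothing here "knows" what deathcore is.
HISTORIES = [
    {"Whitechapel", "Lorna Shore", "Taylor Swift"},
    {"Whitechapel", "Lorna Shore"},
    {"Whitechapel", "Taylor Swift"},
    {"Whitechapel", "Taylor Swift"},
]

def recommend(seed, histories, k=2):
    """Suggest whatever co-occurs most often with the seed artist."""
    counts = Counter()
    for history in histories:
        if seed in history:
            counts.update(history - {seed})
    return [artist for artist, _ in counts.most_common(k)]

print(recommend("Whitechapel", HISTORIES))
# ['Taylor Swift', 'Lorna Shore'] -- an unrelated artist tops a "deathcore"
# recommendation simply because the listeners overlapped.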

Anyway, I guess basically any software that does anything beyond reacting directly to user input can be called AI, since there's no way to know what it actually does under the hood...
 

narad

Progressive metal and politics
Joined
Feb 15, 2009
Messages
16,843
Reaction score
31,350
Location
Tokyo
I'm actually flabbergasted that you would casually run linguistic competence and general reasoning together.
As flattered as I am to make someone flabbergasted, reading and reasoning are often mentioned in the same sentence as things learned by LLMs.

For funsies, here's a mistake that AI / Apple Music made as I'm writing this:
Recommendation systems are not typically built on the type of AI we're talking about.

I spent a fair amount of time from January to May working with Midjourney, DALL-E, and Photoshop's generative fill, and the problems were well beyond numerically having "5 finger hands". The systems didn't understand how fingers work, period. What current gen are you talking about?

Midjourney V6 is solid. Adobe is still way behind in this space.
 

TedEH

Cromulent
Joined
Jun 8, 2007
Messages
13,125
Reaction score
13,587
Location
Gatineau, Quebec
I ended up looking up that Hinton guy very briefly last night and came across a claim he made, in short:
People say, It’s just glorified autocomplete . . . Now, let’s analyze that. Suppose you want to be really good at predicting the next word. If you want to be really good, you have to understand what’s being said. That’s the only way. So by training something to be really good at predicting the next word, you’re actually forcing it to understand. Yes, it’s ‘autocomplete’—but you didn’t think through what it means to have a really good autocomplete
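For reference, the "glorified autocomplete" being dismissed in that quote is roughly the count-based sketch below: it memorises which word followed which in its training text and nothing more. Hinton's argument is that predicting the next word well across arbitrary text cannot be done by a table like this; the toy corpus here is invented and is only meant to show the baseline.

Code:
from collections import Counter, defaultdict

# A literal "glorified autocomplete": a bigram table built from a toy corpus.
# It can only parrot transitions it has already seen.
corpus = "the cat sat on the mat the cat ate the fish".split()

next_word = defaultdict(Counter)
for w1, w2 in zip(corpus, corpus[1:]):
    next_word[w1][w2] += 1          # count how often w2 followed w1

def predict_next(word):
    """Return the most frequent continuation seen in training."""
    if word not in next_word:
        return "<unknown>"          # no way to generalise to unseen words
    return next_word[word].most_common(1)[0][0]

print(predict_next("the"))   # 'cat' -- the most common recorded continuation
print(predict_next("dog"))   # '<unknown>' -- never seen, so nothing to say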

It sounds to me like a lot of the folks who believe AI "truly understands" are just looking at the results and inferring that more is happening than really is. They're starting from the conclusion and working backwards - effectively saying that because it plausibly looks like understanding, it must be understanding, because there can't possibly be any other explanation. And I can't help but fall back on something I expressed a long time ago in another thread.

Does a calculator "understand" math? Does a car "understand" transportation? Does a pen "understand" language? We have an abundance of tools that produce things they don't have an "understanding" of, despite filling those roles very well and producing perfectly reasoned results. But these are all human reasoning expressed through the implementation of the device, not the device itself doing the reasoning. And I think this is where we generally draw the line between what's AI and what isn't. AI, at least from some perspectives, is "the thing doing the reasoning", where an algorithm without AI is just the expression of reasoning done ahead of time by an author.

Take, for example, the "human-centered design" way of thinking - that is to say, a model of embedding knowledge into the design of something, and being aware of the separation between that human knowledge and the embedded knowledge and how they interact. If you press a button, and the button has signaled to you what it will do, and this causes a series of automations that, say, unlock a door or deliver you a canned drink, then the knowledge of what the button represents, the cost of the action, the conditions to satisfy the user request, the process of producing the action you want, etc. are all things you could call "knowledge" embedded in the machine, the lock, the coke machine, whatever - but these were all deliberately designed and placed by the person who made the machine. From the author. They're expressions. The source of the knowledge is not the machine, only the expression of it. Without the human having expressed the knowledge via implementation, the machine would do nothing. And when the machine fails in a manner that is outside of the expression of the author - like if the button breaks, the drink gets jammed, someone breaks the door down, etc. - then it has no mechanism to either understand its predicament or do anything about it. It will continue to express its mechanical function.

I have trouble believing that AI is any different - it's just an expression of a lot more of the author's intent than you'd have with a traditional machine. The amount of knowledge embedded is vastly larger, but it still represents an expression of the author's reasoning, not the reasoning of the machine as if it were an active agent in the process. A human still decides the model, the training data, the intended result, the bounds of what the machine can access or express, and all of the process by which this happens. Because that's what a computer does.

And we see the same thing happen when the machine fails in a way that falls outside of the expressed knowledge of the author - when something breaks. It continues to follow its design without any other consideration and produces something, because that's what it's designed to do. Hallucinations. Wrong answers delivered confidently. Inaccuracies in simulations. Because it's still a machine, and it's still constrained to the reality that it's a machine.

I find it very hard, philosophically, to attribute the "intelligence" of AI to the machine and not to the person who made the model. Yes, the models are complicated and they can express some very complex things, but they're still the expression of the author of the model. In the same way as with a simple machine, or basic algorithm, or whatever else, it's not the machine that understands, it's the author who understands - and the machine can only express.
 

narad

Progressive metal and politics
Joined
Feb 15, 2009
Messages
16,843
Reaction score
31,350
Location
Tokyo
I ended up looking up that Hinton guy very briefly last night and came across a claim he made, in short:


It sounds to me like a lot of the folks who believe AI "truly understands" are just looking at the results and inferring that more is happening than really is. [...] I find it very hard, philosophically, to attribute the "intelligence" of AI to the machine and not to the person who made the model. [...] In the same way as with a simple machine, or basic algorithm, or whatever else, it's not the machine that understands, it's the author who understands - and the machine can only express.
Before, in the climate change analogy, I said it was as if you were the climate denier telling all of the climate scientists they have too much invested in global warming to be duly skeptical of climate change. Now, with Hinton, it's like you're telling the world's most respected and influential climate scientist that he's made a critical error in his thinking, based on probably less reading on the topic than he has done at any time in the past 40 years. There's probably a bit of hubris there.

The models are very general, and I don't think anyone who implements one is going to prescribe a whole lot of intelligence in the design of "look at all this stuff, and use it to predict this thing". Again (referencing a previous thread that took this turn), this is like Searle's Chinese Room. If you had a room and inside there was a person, who you could ask a question to, and always received a reasonable answer from, you'd probably say that person is intelligent. If now it is revealed that there was no person in the room, only a nearly infinitely complex set of rules that specify how to calculate what the reply should be, then where is the intelligence?
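If it helps, the room's "set of rules" can be pictured in its crudest form as a lookup table, as in the toy sketch below (the entries are invented). The question the thought experiment poses is whether a system that learns its rules from data, and answers sensibly for inputs that appear in no table, is doing something categorically different.

Code:
# The Chinese Room as a literal rulebook: every anticipated input is mapped
# to a canned reply. The entries are invented placeholders.
RULEBOOK = {
    "how are you?": "I'm doing well, thank you for asking.",
    "what is the capital of japan?": "Tokyo.",
}

def room_reply(message):
    """Follow the rulebook with no understanding of the symbols involved."""
    return RULEBOOK.get(message.lower().strip(), "I have no rule for that.")

print(room_reply("How are you?"))                # looks like a sensible answer
print(room_reply("What's your favourite amp?"))  # no rule -> the facade breaks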
 

TedEH

Cromulent
Joined
Jun 8, 2007
Messages
13,125
Reaction score
13,587
Location
Gatineau, Quebec
it's like you're telling the world's most respected and influential climate scientist that he's made a critical error in his thinking
I don't honestly have any qualms about that. Nobody is infallible, and there's always more than one way to look at things. I don't think it's valuable to worship influential people as if they can do no wrong. He's as capable of a critical error in thinking as anyone is.

If now it is revealed that there was no person in the room, only a nearly infinitely complex set of rules that specify how to calculate what the reply should be, then where is the intelligence?
I feel like I answered that question already:
The expression of the intelligence is in the room, but it's an expression of the intelligence of the person who created the set of rules.

The philosophical difference here is that on one side you have "intelligence creates rules" and on the other you have "intelligence IS the rules".

You know the common YouTube joke where a person will be watching a VHS tape or something, and then suddenly the recording starts answering as if it's a real-time conversation? It's a fourth-wall-break joke, which asks you to suspend your disbelief and pretend the recording could have known exactly what the viewer would be asking.

"Welcome to xyz instructional video recorded in 1989!"
"Wow, what an ugly video tape!" says the viewer.
"Hey, that's rude, this was recorded over 30 years ago!"
"What?"
"What?"

Imagine, hypothetically, that the person recording the tape DID predict what the viewer would say and left a tape behind. When the tape is being viewed, you have what appears to be a conversation between the viewer and the recording. Are they actually talking to the tape? Are they talking to the author? Is the author conveying a message to the viewer, or is the tape conveying the message?

I think this situation is analogous. You have an author and an expression. The tape contains the expression, just as the room does, and just as the AI does. When the tape "says" something, you would understand this to be the author's expression, not the tape's expression - and the whole joke is the subversion of that idea. The Chinese Room situation you describe is asking you to exercise this same subversion.

We already do this with art - if you see a painting, you don't say "wow, that paint is very talented". You might colloquially say "this painting says something to me", but you know that it's a creation, you know that it's an expression from an author, and the talent isn't in the paint, it's in the author.
The intelligence is in the author. The talent is in the artist. The tone is in the hands. The song is an expression. The painting is an expression. The algorithm is an expression.

And it's not like I'm alone in this manner of thinking - I found that quote on a blog of a guy who I presume works in the field and also disagreed with the sentiment.

 

narad

Progressive metal and politics
Joined
Feb 15, 2009
Messages
16,843
Reaction score
31,350
Location
Tokyo
I don't honestly have any qualms about that. Nobody is infallible, and there's always more than one way to look at things. I don't think it's valuable to worship influential people as if they can do no wrong. He's as capable of a critical error in thinking as anyone is.
Sure but that's kind of like saying Philippe Petit is as capable of falling over as anyone is.

I feel like I answered that question already:
The expression of the intelligence is in the room, but it's an expression of the intelligence of the person who created the set of rules.

Well exactly. And in the case of the LLM, the LLM processing the data learns the rules. So the intelligence is in the LLM. The intelligence can't be solely in the data, as the LLM generalizes to situations unseen in the data.
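A tiny illustration of that "generalizes to situations unseen in the data" point: fit a rule to a handful of examples and it will produce answers for inputs nobody supplied. The data points are invented, and least-squares on four points stands in for the vastly messier optimisation an LLM actually does, but the principle - the rule comes out of the data, not out of anything the implementer hand-wrote - is the same.

Code:
# Toy version of "the rule is learned from the data, then applied to unseen
# cases". The points are invented; least squares stands in for training.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.1, 3.9, 6.2, 7.8]            # roughly y = 2x

n = len(xs)
mean_x, mean_y = sum(xs) / n, sum(ys) / n
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
         / sum((x - mean_x) ** 2 for x in xs))
intercept = mean_y - slope * mean_x  # parameters "learned" from the data

unseen_x = 10.0                      # never appeared in the training data
print(slope * unseen_x + intercept)  # ~19.6: the learned rule extrapolates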

And it's not like I'm alone in this manner of thinking - I found that quote on a blog of a guy who I presume works in the field and also disagreed with the sentiment.


Yea, Andrew's a smart guy, but here he's comparing machine intelligence to human intelligence, which isn't necessary. No one said that an LLM arrives at an answer in the same manner as humans do -- it's obviously much more biased towards what would be called "System 1" thinking.
 

TedEH

Cromulent
Joined
Jun 8, 2007
Messages
13,125
Reaction score
13,587
Location
Gatineau, Quebec
Sure but that's kind of like saying Philippe Petit is as capable of falling over as anyone is.
He is. I think that's a perfectly reasonable thing to say.

but here he's comparing machine intelligence to human intelligence, which isn't necessary.
What else do you compare it to? What do you think I'm comparing it to? If we're going to draw a line and say that machine and human intelligence are not equivalent or comparable, then we're in agreement. Human intelligence is the manner in which most people understand what intelligence is, though.

I mean, I AM a human, so my reference point of what it means to "understand" is going to necessarily be the human process of understanding something.
 

TedEH

Cromulent
Joined
Jun 8, 2007
Messages
13,125
Reaction score
13,587
Location
Gatineau, Quebec
He is. I think that's a perfectly reasonable thing to say.
I'd even go as far as to say that he's probably fallen down more than most people - he probably didn't start off as a pro. And while I have a 0% chance of falling off a highwire because I'm never on a highwire, he always has a non-zero chance of falling when he's up there.
 

narad

Progressive metal and politics
Joined
Feb 15, 2009
Messages
16,843
Reaction score
31,350
Location
Tokyo
He is. I think that's a perfectly reasonable thing to say.
Maybe as capable, but certainly not as likely, which is my point. If someone were to really dig in and fully understand these models inside and out, along with all the theory (a good chunk of it being his), and then come in and say, "let me lay out why I think this is wrong" -- sure. The odds of someone spending a couple of days reading about a topic and managing to throw a wrench into a Turing Award winner's understanding of computer science are not particularly high. But I think this may be a "but you do you" moment.

What else do you compare it to? What do you think I'm comparing it to? If we're going to draw a line and say that machine and human intelligence are not equivalent or comparable, then we're in agreement. Human intelligence is the manner in which most people understand what intelligence is, though.
I meant in the mechanism. When Andrew talks about this, he's basically talking about the difference between System 1 and System 2 thinking. But clearly an LLM can do both of these to varying extents -- it's unlikely (my technically correct way of saying it's not at all) the case that humans and LLMs think in comparable ways. But I don't think that is any shortcoming of the LLM -- human intelligence varies, and there is no reason to make human intelligence the benchmark for all intelligence.
 

TedEH

Cromulent
Joined
Jun 8, 2007
Messages
13,125
Reaction score
13,587
Location
Gatineau, Quebec
but certainly not as likely, which is my point
I didn't say likely, I said capable. I'm very strongly opposed to the way some folks treat smart people like they can do no wrong. They can, and they do. Every person, expert or no, is fallible, even in the things they're an expert about. Sometimes especially in the thing they're an expert about because it'll be the area where they are the most active, participating, making claims, trying things, exploring the space, etc. You don't get to be an expert without making mistakes.

I meant in the mechanism [...] there is no reason to make human intelligence the benchmark for all intelligence.
Of course there's a reason: it's how we understand what intelligence is in the first place, and it forms the basis for all of the terminology we're using.

If you imply that using language at all is a sign of understanding the content being communicated, that's not drawing a distinction between machine and human. It's not drawing a distinction between something that merely has the appearance of the qualities of understanding embedded in a process and something more akin to the ability to reflect on the meaning and identity of a concept.
 