AI and its effect on your music, your job and the future

  • Thread starter M3CHK1LLA

crushingpetal

SS.org Regular
Joined
Nov 11, 2022
Messages
1,400
Reaction score
1,943
Recommendation systems are not typically built on the type of AI we're talking about.
Of course: I'm just having fun with the example of failure. But broadly, they're neural nets, and it is this kind of system that is not going to deliver any of the big promises.

Midjourney v6 has lots of problems with hands. I've tested it extensively. It's a nice example because it is (excepting hands), as you say, mostly solid. It's "good enough" (sorry Gibson) to produce mediocre art, cheaply.

That is my only point.

*I bet that generative art systems of our current style don't get better. I could be wrong, but I currently don't think I am.
 


narad

Progressive metal and politics
Joined
Feb 15, 2009
Messages
16,849
Reaction score
31,352
Location
Tokyo
I didn't say likely, I said capable. I'm very strongly opposed to the way some folks treat smart people like they can do no wrong. They can, and they do. Every person, expert or no, is fallible, even in the things they're an expert about. Sometimes especially in the thing they're an expert about because it'll be the area where they are the most active, participating, making claims, trying things, exploring the space, etc. You don't get to be an expert without making mistakes.
I'm not sure how far this analogy stretches. You don't get to be an expert without being an expert, and when we're talking about the expert of experts, sure, there is a non-zero probability of him being wrong about the things he knows most about, but calling it non-zero is about the most charitable thing I can say. It's one thing if you have a cogent argument you researched deeply, becoming knowledgeable in the area, one that, if presented to Hinton, he'd really struggle with. This isn't it.
Of course there's a reason: it's how we understand what intelligence is in the first place, and it forms the basis for all of the terminology we're using.
In cog sci maybe. Not in AI. The goal is superhuman intelligence.

If you imply that using language at all is a sign of understanding the content being communicated, that doesn't distinguish between machine and human. It doesn't draw a distinction between something that has the appearance of the qualities of understanding embedded in a process vs. something more akin to the ability to reflect on the meaning and identity of a concept.
I didn't imply that.

Of course: I'm just having fun with the example of failure. But broadly, they're neural nets, and it is this kind of system that is not going to deliver any of the big promises.
They're very different types of neural nets. And just being a neural net shouldn't be a problem. Your brain is something that takes inputs and produces outputs, and therefore you are essentially a function parameterized by the electro-chemical processes of your brain's physiology. Neural nets are universal function approximators, with theoretical guarantees on their modeling capacity as a function of layer size.

Of course the design and behavior of these things are very different; they each impose different biases on how responses are computed. The brain has evolved to think and thus has a very survival-optimized architecture for processing the types of thoughts that are important to our lives, while the LLM has a very simple design of attention and aggregation. But fundamentally, for a given problem, there's no reason why the reasoning steps of the brain and the LLM can't be comparable, say if the LLM hits upon a chain of reasoning comparable to the one a typical human uses to reach a conclusion on that problem.
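The "universal function approximator" claim is easy to see in miniature: even one hidden layer of nonlinear units, given enough of them, can fit a smooth target function. A toy NumPy sketch (random tanh features plus a least-squares readout — all names here are illustrative, not anyone's actual training setup):

```python
import numpy as np

rng = np.random.default_rng(0)

# Target: a smooth 1-D function we want the "net" to approximate.
x = np.linspace(-np.pi, np.pi, 200)[:, None]
y = np.sin(x).ravel()

# One hidden layer of 100 tanh units with random weights,
# followed by a linear readout fit by least squares.
W = rng.normal(size=(1, 100))
b = rng.normal(size=100)
H = np.tanh(x @ W + b)                     # hidden activations, shape (200, 100)
w_out, *_ = np.linalg.lstsq(H, y, rcond=None)

y_hat = H @ w_out                          # network output on the grid
mse = np.mean((y - y_hat) ** 2)
print(f"MSE of 100-unit approximation: {mse:.2e}")
```

Even without training the hidden weights at all, the fit error is tiny; the capacity guarantees narad mentions are about exactly this kind of expressiveness growing with layer width.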
Midjourney v6 has lots of problems with hands. I've tested it extensively. It's a nice example because it is (excepting hands), as you say, mostly solid. It's "good enough" (sorry Gibson) to produce mediocre art, cheaply.
Even when it makes a weird hand you can just select it and use the redo-region type of command, so as a tool, the hands are no longer an issue. It would be pretty trivial to incorporate an additional model that checks the hands for more than 5 fingers, or 5 fingers without a thumb, etc., and regenerates that portion by itself. It's not much of a testament to the core model's physiological understanding, but as a tool, which was basically the original topic of the thread, these are easy problems to solve.
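That check-and-regenerate wrapper can be sketched in a few lines. Everything here is hypothetical — `generate`, `count_fingers`, and `inpaint_region` stand in for whichever model or API you'd actually plug in, not a real Midjourney interface:

```python
# Sketch of the "checker around the generator" loop described above.
# All three helper callables are hypothetical stand-ins.
def fix_hands(prompt, generate, count_fingers, inpaint_region, max_tries=5):
    image = generate(prompt)
    for _ in range(max_tries):
        # count_fingers returns (region, finger_count) pairs for detected hands.
        bad = [region for region, n in count_fingers(image) if n != 5]
        if not bad:                      # every detected hand looks plausible
            return image
        for region in bad:               # re-inpaint only the offending areas
            image = inpaint_region(image, region, prompt)
    return image                         # give up after max_tries passes
```

The point is that the fix lives entirely outside the core model: a detector plus localized regeneration, no new "physiological understanding" required.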

*I bet that generative art systems of our current style don't get better. I could be wrong, but I currently don't think I am.
I mean... they get better all the time? I haven't tried this but seems promising:

 

TedEH

Cromulent
Joined
Jun 8, 2007
Messages
13,135
Reaction score
13,593
Location
Gatineau, Quebec
It's one thing if you have a cogent argument you researched deeply, becoming knowledgeable in the area, and that if presented to Hinton he'd really struggle with. This isn't it.
It's not any better to defer everything to "well, the leading expert said so, so that has to be the answer". You can't hang the entirety of a field on one guy. There are a LOT of people skeptical of AI, many of whom are much smarter than me. You're the one nose-first in the research; you must be aware of that. And to be honest, I feel like I've articulated my reasoning pretty well. You haven't put forward any argument against any of that reasoning, and instead just deferred to "I'm going to believe the researchers instead". And don't get me wrong, that's not unreasonable - but my skepticism hasn't been shown to be unreasonable either.

The goal is superhuman intelligence.
Computers are already superhuman, without tacking the word intelligence on the end. Computers can already do things you could describe as intelligent, if you play semantic games, much better than any human ever could.

I didn't imply that.
No, Hinton did, in that quote. It's why I quoted it. Hinton's words, once again:
Hinton said:
Suppose you want to be really good at predicting the next word. If you want to be really good, you have to understand what’s being said. That’s the only way. So by training something to be really good at predicting the next word, you’re actually forcing it to understand.

This comes from an article/interview with him where he's described as
article said:
Hinton argues that the intelligence displayed by A.I. systems transcends its artificial origins.

And then he goes on to describe the idea that an LLM can properly understand how the world works and think for itself, based only on analysis of writing, calling GPT "a system capable of thought".

therefore you are essentially a function parameterized by the electro-chemical processes of your brain's physiology
I think it's disingenuous to imply that the brain is understood well enough to explain consciousness this way. This is a plausible approximation, but you can't assert this as truth.
 

narad

Progressive metal and politics
Joined
Feb 15, 2009
Messages
16,849
Reaction score
31,352
Location
Tokyo
Computers are already superhuman, without tacking the word intelligence on the end. Computers can already do things you could describe as intelligent, if you play semantic games, much better than any human ever could.
Yea, but we want to tackle it with intelligence tacked on the end. The aim of the field is to have computers learn effective strategies for optimizing some objective, strategies that generalize to novel but related situations. So yea, computers can already do things people describe as intelligent in unintelligent ways, such as Deep Blue playing chess via brute-force look-ahead, but we want computers that do things in ways that are hard to distinguish from human intelligence, such as AlphaGo playing Go, where the board states are nearly innumerable, often novel, and hard to evaluate.
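The "brute-force look-ahead" half of that contrast is easy to show concretely: exhaustive search needs no learned notion of the game at all, just enumeration. A toy version for Nim (players alternately take 1-3 stones; taking the last stone wins) — a stand-in illustration, obviously not Deep Blue's actual engine:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def best_move(stones):
    """Exhaustive look-ahead: (can the mover win?, a move that does it)."""
    for take in (1, 2, 3):
        if take == stones:               # taking the last stone wins outright
            return True, take
        if take < stones and not best_move(stones - take)[0]:
            return True, take            # leaves the opponent in a losing state
    return False, 1                      # every move loses against perfect play

# Positions that are multiples of 4 are known losses for the player to move.
print(best_move(10))  # (True, 2): take 2 to leave 8, a multiple of 4
```

There is no "strategy" in there that generalizes beyond the enumerated tree, which is exactly why this style of play reads as unintelligent next to a learned policy.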
No, Hinton did, in that quote. It's why I quoted it. Hinton's words, once again:
That quote is absolutely correct. There is essentially nothing to debate about it apart from trivial corner cases, like if one had always observed the continuation of every sequence encountered. You can't auto-complete an increasingly large number of queries without understanding, because at some point you have to generalize. Whether you want to call it understanding is philosophical, but the bottom line is that you would have to have organized concepts, and drawn connections between them, in a manner that mimics what we consider understanding when it occurs inside an animal. The ability to auto-complete comes either from memorization or generalization, and when the preceding text sets up a complex question unseen in the training data, answering it requires the ability to perform complex reasoning.

I think it's disingenuous to imply that the brain is understood well enough to explain consciousness this way. This is a plausible approximation, but you can't assert this as truth.
Well it's that, or thought is a quantum process, or there is a spirit.
 

TedEH

Cromulent
Joined
Jun 8, 2007
Messages
13,135
Reaction score
13,593
Location
Gatineau, Quebec
Well it's that, or thought is a quantum process, or there is a spirit.
Or we don't know, and it's absurd to decide that the answer must fit within our current understanding - which has already failed to provide us the answer.

That quote is absolutely correct.
I don't believe that it is. I think it's a fundamental mistake. Not much point in going back and forth about it, 'cause I don't think we have the common ground to resolve that difference in view.

Moving on from that, not to make any particular point, but I found this interaction to be pretty funny:
[attached screenshot: ChatGPT exchange about the apple in the fridge]


I think I understand how I got there - because the answer is more reasonable if it's the first thing you ask, but you can prime ChatGPT to focus on a particular detail and it just sort of ignores common sense after that point. I asked what makes an apple an apple. Part of the answer was the taste. I asked what an apple tastes like, and the answer was "it's refreshing". I said what about a week old apple? Then what about a month old apple? How about I leave for vacation for a year? Every time phrased as "I do x, then go to the fridge". Eventually it's so focused on the time that it ignores any context that could have resulted in someone else putting an apple in the fridge.

"I have a roommate. I went on vacation for a year. I come back, look in the fridge, take an apple, how is it?" And it says it's probably rotten. Ok, so maybe not a roommate - how about I'm visiting my parents. It would be perfectly reasonable to assume that parents would do their own shopping given that I don't live there, so my time away has no relation to the quality of the food in the fridge, and I did not in any way imply that I'm the one who stocks their fridge, or told them not to or anything.

I tried again, with a fresh conversation, and it took very little time for it to get confused again:
[attached screenshot: second ChatGPT conversation, confused again]
 

METÖL

SS.org Regular
Joined
Aug 3, 2024
Messages
9
Reaction score
3
I make music for myself only. AI will not have any effect on that.
People like live music instead of recordings. AI will not have any effect on that.
There's little money in music sales for the average artist. AI stealing those 3 cents from artists' Spotify income won't make a difference.

So I don't see it affecting art a lot, but it will affect some of the bullshit jobs that we hate but at the same time financially depend on. I hope this initiates a change in the system where something like a universal basic income makes sure all our basic needs are met. I think this would already be possible, and the right thing to do, now. Maybe AI serves as an accelerator.
 

TedEH

Cromulent
Joined
Jun 8, 2007
Messages
13,135
Reaction score
13,593
Location
Gatineau, Quebec
So I don't see it affecting art a lot but it will affect some of the bullshit jobs that we hate but at the same time financially depend on.
It already has, though. It did for visual arts, and it wouldn't be shocking to see it start happening in audio. I see no reason why it wouldn't, and there's certainly people working on it.

A lot of artists, visual or audio, make their money through commercial collaboration - ads, games, movies, branding, cover art for stuff, etc. - and it's all stuff where at least some number of people have shifted to just asking a generator to do the heavy lifting instead of hiring someone. Why hire a team of artists to make textures when you can type a prompt into TextureGen 4.0? Why record foley for your short film or car ad when there's a tool to make that too? Why mix and master an album when you can get AI to do it for you? Those tools all exist, and people use them instead of paying people, right now. It's cheaper and easier than even going the Fiverr route. Some of the folks who are in those fields, and would have otherwise been paid for it, are left doing it just because it's what they enjoy doing.

I may be skeptical of AI on a rhetorical or philosophical level, but there's no doubt these tools impact the arts.
 

METÖL

SS.org Regular
Joined
Aug 3, 2024
Messages
9
Reaction score
3
It already has, though. It did for visual arts, and it wouldn't be shocking to see it start happening in audio. I see no reason why it wouldn't, and there's certainly people working on it.

A lot of artists, visual or audio, make their money through commercial collaboration - ads, games, movies, branding, cover art for stuff, etc. - and it's all stuff where at least some number of people have shifted to just asking a generator to do the heavy lifting instead of hiring someone. Why hire a team of artists to make textures when you can type a prompt into TextureGen 4.0? Why record foley for your short film or car ad when there's a tool to make that too? Why mix and master an album when you can get AI to do it for you? Those tools all exist, and people use them instead of paying people, right now. It's cheaper and easier than even going the Fiverr route. Some of the folks who are in those fields, and would have otherwise been paid for it, are left doing it just because it's what they enjoy doing.

I may be skeptical of AI on a rhetorical or philosophical level, but there's no doubt these tools impact the arts.
I would put creating textures and recording foley more in the bullshit-job category than the art one. Even though it's mostly artists doing those jobs, do you think it's really what makes them feel fulfilled as artists? I don't think they'd be sad to lose that work if they didn't financially depend on it.

Also, what AI mixes and masters my music? I could use some help with my mixes; they don't sound good...
 