AI and its effect on your music, your job and the future

  • Thread starter M3CHK1LLA

crushingpetal

SS.org Regular
Joined: Nov 11, 2022 · Messages: 1,392 · Reaction score: 1,938
Recommendation systems are not typically built on the type of AI we're talking about.
Of course: I'm just having fun with the example of failure. But broadly, they're neural nets, and it is this kind of system that is not going to deliver any of the big promises.

Midjourney v6 has lots of problems with hands. I've tested it extensively. It's a nice example because it is (excepting hands), as you say, mostly solid. It's "good enough" (sorry Gibson) to produce mediocre art, cheaply.

That is my only point.

*I bet that generative art systems of our current style don't get better. I could be wrong, but I currently don't think I am.
 


narad

Progressive metal and politics
Joined: Feb 15, 2009 · Messages: 16,846 · Reaction score: 31,350 · Location: Tokyo
I didn't say likely, I said capable. I'm very strongly opposed to the way some folks treat smart people like they can do no wrong. They can, and they do. Every person, expert or no, is fallible, even in the things they're an expert about. Sometimes especially in the thing they're an expert about because it'll be the area where they are the most active, participating, making claims, trying things, exploring the space, etc. You don't get to be an expert without making mistakes.
I'm not sure how well this analogy stretches. You don't get to be an expert without being an expert, and when we're talking about the expert of experts, sure, there is a non-zero probability of him being wrong about the things he knows most about, but calling it non-zero is about the most charitable thing I can say about it. It's one thing if you have a cogent, deeply researched argument, one you became knowledgeable in the area to make, and one that Hinton would really struggle with if presented with it. This isn't it.
Of course there's a reason: it's how we understand what intelligence is in the first place, and it forms the basis for all of the terminology we're using.
In cog sci maybe. Not in AI. The goal is superhuman intelligence.

If you imply that using language at all is a sign of understanding the content being communicated, that's not distinguishing between machine and human. It doesn't draw a distinction between a process that merely has the appearance of the qualities of understanding embedded in it and something more akin to the ability to reflect on the meaning and identity of a concept.
I didn't imply that.

Of course: I'm just having fun with the example of failure. But broadly, they're neural nets, and it is this kind of system that is not going to deliver any of the big promises.
They're very different types of neural nets. And just being a neural net shouldn't be a problem. Your brain is something that takes inputs and produces outputs, and therefore you are essentially a function parameterized by the electro-chemical processes of your brain's physiology. Neural nets are universal function approximators with theoretical guarantees on the capacity of their modeling as a function of layer size. Of course the design and behavior of these things are very different; they each impose different biases on how responses are computed. The brain has evolved to think and thus has a very survival-optimized architecture for processing the types of thoughts that are important to our lives, and the LLM has a very simple design of attention and aggregation. But fundamentally, for a given problem, there's no reason why the reasoning steps of the brain and the LLM can't be comparable, say if the LLM hits upon a chain of reasoning similar to the one a typical human uses to reach a conclusion on that problem.
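To make the "universal function approximator" point concrete, here's a throwaway toy sketch (purely my own illustration, nothing to do with how an actual LLM is built or trained): a one-hidden-layer tanh net fit to sin(x) with hand-rolled gradient descent. Every size and hyperparameter here is an arbitrary pick for demonstration.

```python
# Toy illustration only (not any real model): a tiny one-hidden-layer tanh
# network trained by hand-rolled gradient descent to approximate sin(x).
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-np.pi, np.pi, size=(256, 1))   # training inputs
y = np.sin(x)                                   # target function to approximate

hidden = 32                                     # arbitrary hidden-layer width
W1 = rng.normal(0.0, 1.0, (1, hidden)); b1 = np.zeros(hidden)
W2 = rng.normal(0.0, 0.1, (hidden, 1)); b2 = np.zeros(1)
lr = 0.05

for step in range(5000):
    h = np.tanh(x @ W1 + b1)                    # hidden activations
    pred = h @ W2 + b2                          # network output
    err = pred - y
    # backprop by hand for this two-layer case (mean squared error loss)
    gW2 = h.T @ err / len(x);  gb2 = err.mean(axis=0)
    dh = (err @ W2.T) * (1.0 - h ** 2)
    gW1 = x.T @ dh / len(x);   gb1 = dh.mean(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

mse = float(np.mean((np.tanh(x @ W1 + b1) @ W2 + b2 - y) ** 2))
print("final MSE:", mse)
```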
Midjourney v6 has lots of problems with hands. I've tested it extensively. It's a nice example because it is (excepting hands), as you say, mostly solid. It's "good enough" (sorry Gibson) to produce mediocre art, cheaply.
Even when it makes a weird hand you can just select and do the re-do region type command, so as a tool, the hands are no longer an issue. It would be pretty trivial to incorporate an additional model that checks the hands for > 5 fingers, or 5 fingers without a thumb, etc., and regenerates that portion by itself. Not much of a testament to the core model's physiological understanding, but as a tool (which was basically the original topic of the thread) these are easy problems to solve.
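Something like this hypothetical wrapper is all I mean. To be clear, detect_hands, count_fingers, and inpaint_region are made-up stand-ins for whatever detector and region-inpainting model a real pipeline would plug in, not an actual Midjourney API:

```python
# Hypothetical post-processing loop: detect suspicious hands and regenerate
# just that region. The three callables are assumed placeholders, not real APIs.
def fix_hands(image, detect_hands, count_fingers, inpaint_region, max_tries=3):
    """Re-generate any hand region whose finger count looks wrong."""
    for hand_box in detect_hands(image):
        for _ in range(max_tries):
            if count_fingers(image, hand_box) == 5:
                break                        # hand looks plausible, move on
            # re-run the generator on this region only, keeping the rest fixed
            image = inpaint_region(image, hand_box, prompt="a natural human hand")
    return image
```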

*I bet that generative art systems of our current style don't get better. I could be wrong, but I currently don't think I am.
I mean... they get better all the time? I haven't tried this but seems promising:

 

TedEH

Cromulent
Joined: Jun 8, 2007 · Messages: 13,129 · Reaction score: 13,588 · Location: Gatineau, Quebec
It's one thing if you have a cogent, deeply researched argument, one you became knowledgeable in the area to make, and one that Hinton would really struggle with if presented with it. This isn't it.
It's not any better to defer everything to "well, the leading expert said so, so that has to be the answer". You can't hang the entirety of a field on one guy. There are a LOT of people skeptical of AI, many of whom are much smarter than me. You're the one nose first in the research, you must be aware of that. And to be honest, I feel like I've articulated my reasoning pretty well. You haven't put forward any argument against any of that reasoning, and instead just deferred to "I'm going to believe the researchers instead". And don't get me wrong, that's not unreasonable - but my skepticism hasn't been shown to be unreasonable either.

The goal is superhuman intelligence.
Computers are already superhuman, without tacking the word intelligence on the end. Computers can already do things you could describe as intelligent (if you play semantic games) much better than any human ever could.

I didn't imply that.
No, Hinton did, in that quote. It's why I quoted it. Hinton's words, once again:
Hinton said:
Suppose you want to be really good at predicting the next word. If you want to be really good, you have to understand what’s being said. That’s the only way. So by training something to be really good at predicting the next word, you’re actually forcing it to understand.

This comes from an article/interview with him where he's described as
article said:
Hinton argues that the intelligence displayed by A.I. systems transcends its artificial origins.

And then he goes on to describe the idea that an LLM can properly understand how the world works and think for itself, based only on analysis of writing, calling GPT "a system capable of thought".

therefore you are essentially a function parameterized by the electro-chemical processes of your brain's physiology
I think it's disingenuous to imply that the brain is understood well enough to explain consciousness this way. This is a plausible approximation, but you can't assert this as truth.
 

narad

Progressive metal and politics
Joined: Feb 15, 2009 · Messages: 16,846 · Reaction score: 31,350 · Location: Tokyo
Computers are already superhuman, without tacking the word intelligence on the end. Computers can already do things you could describe as intelligent, if you play semantic games, much better than any human ever could.
Yea, but we want to tackle it with "intelligence" tacked on the end. The aim of the field is to have computers learn effective strategies for optimizing some objective that generalize to novel but related situations. So yea, computers can already do things people describe as intelligent via unintelligent means, such as Deep Blue playing chess via brute-force look-ahead, but we want computers that do things in ways that are hard to distinguish from human intelligence, such as AlphaGo playing Go, where board states are nearly innumerable, often novel, and hard to read.
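For contrast, the Deep Blue side of that distinction is essentially fixed-depth look-ahead plus a hand-written evaluation function. Here's a bare-bones minimax sketch (game.moves, game.apply, and game.score are assumed placeholders for a real engine's move generator and evaluation); AlphaGo's move was to replace that hand-written evaluation and move selection with learned networks:

```python
# Bare-bones "brute-force look-ahead": plain minimax to a fixed depth.
# game.moves()/apply()/score() are assumed placeholders, not a real engine API.
def minimax(game, depth, maximizing=True):
    """Best achievable score looking `depth` plies ahead."""
    moves = game.moves()
    if depth == 0 or not moves:
        return game.score()                  # static evaluation at the horizon
    children = (minimax(game.apply(m), depth - 1, not maximizing) for m in moves)
    return max(children) if maximizing else min(children)
```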
No, Hinton did, in that quote. It's why I quoted it. Hinton's words, once again:
That quote is absolutely correct. There is essentially nothing to debate about it apart from trivial corner cases, like if one had already observed the continuation of every sequence encountered. You can't auto-complete an increasingly large number of queries without understanding, because at some point you have to generalize. Whether you want to call it understanding is philosophical, but the bottom line is that you would have to have organized concepts, and drawn connections between them, in a manner that mimics what we consider understanding when it occurs inside an animal. The ability to auto-complete comes either from memorization or generalization, and when the preceding text sets up a complex question unseen in the training data, answering it requires the ability to perform complex reasoning.
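A toy way to see the memorization half of that (again my own example, not anything from Hinton): a pure lookup-table "next word" predictor can only complete contexts it has literally seen, so any unseen prompt forces you to either generalize or return nothing:

```python
# Toy lookup-table predictor: memorizes observed word -> next-word counts.
# Anything outside the memorized contexts gets no answer at all.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat . the dog sat on the rug .".split()
table = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    table[prev][nxt] += 1                    # memorize observed continuations

def predict(word):
    seen = table.get(word)
    return seen.most_common(1)[0][0] if seen else None   # None = never seen

print(predict("the"))     # a memorized continuation (e.g. "cat")
print(predict("zebra"))   # None: memorization alone can't handle novel prompts
```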

I think it's disingenuous to imply that the brain is understood well enough to explain consciousness this way. This is a plausible approximation, but you can't assert this as truth.
Well, it's that, or thought is a quantum process, or there is a spirit.
 