crushingpetal:
I'm also not convinced at all that the neural-net style of AI will get real knowledge or understanding.
This is relevant because it has to do with the aim. If you want general linguistic competence, sure, you're golden. If you want anything like knowledge or understanding, you're barking up the wrong tree.
Legit question - why do you feel that AI skepticism is necessarily a "hit piece"? What motive do people have to be skeptical of AI that would warrant hit pieces, rather than just legitimate skepticism?
> It's exactly the linguistic competence and general reasoning that is important. If anything, knowledge should be more explicitly disentangled from forward-prediction parameters than it is in current LLMs, as it is still messy to update after the fact, and 2nd- or 3rd-order inferences from the updated information often fail to get updated appropriately. In my opinion, the "AI" part of it is the reading/reasoning aspects, and the exact knowledge the LLM has is peripheral.
>
> Like I said, if you have the general-purpose AI, there are approaches like RAG that can fill in the knowledge on the fly.
>
> Also, I find the idea that LLMs blew their load on 2022 data pretty weird when they've improved quite consistently and significantly since then.

I'm actually flabbergasted that you would casually run linguistic competence and general reasoning together.
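For anyone who hasn't run into the term, the RAG mentioned above is retrieval-augmented generation. A minimal sketch of the pattern, with a toy corpus, naive overlap scoring, and a placeholder generate() instead of any real library:

```python
# Minimal sketch of the RAG idea: look up relevant text at query time and
# stuff it into the prompt, so the knowledge lives outside the model weights.
# The corpus, the scoring, and generate() are all toy stand-ins.

corpus = [
    "RAG retrieves relevant documents and feeds them to the model as context.",
    "Model weights are expensive to update; a document store is not.",
]

def retrieve(query: str, k: int = 1) -> list[str]:
    """Rank documents by naive word overlap (real systems use embeddings)."""
    def score(text: str) -> int:
        return len(set(query.lower().split()) & set(text.lower().split()))
    return sorted(corpus, key=score, reverse=True)[:k]

def generate(prompt: str) -> str:
    """Placeholder for an actual LLM call."""
    return "[model output conditioned on]\n" + prompt

def answer(query: str) -> str:
    context = "\n".join(retrieve(query))
    return generate(f"Context:\n{context}\n\nQuestion: {query}\nAnswer:")

print(answer("How does RAG update what the model knows?"))
```

The point of the pattern is that updating what the system "knows" means editing the document store, not retraining the model.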
> Yeah, but can an AI fix this forum so it stops being so fucking slow?

Yes, but the solution will cost every member a $4.99/month subscription.
> It's kind of like those many-fingered characters in earlier image generation models. I'm reading articles that are like, "Is this going to be taking any artist jobs? Not so long as it thinks this is an acceptable hand." Meanwhile I was literally using a [at the time] soon-to-be-public model that almost never made those mistakes, and the idea that the models were somehow intrinsically incapable of doing 5-finger hands should be obviously stupid and shortsighted if the authors had spent enough time to understand how the models work. Now everybody knows the fingers aren't a very pervasive problem in current-gen models, and we have to move the goalpost over to some other thing.

I spent a fair amount of time from January to May working with Midjourney, Dall-E, and Photoshop's generative fill, and the problems went well beyond numerically having "5-finger hands". The systems didn't understand how fingers work, period. What current gen are you talking about?
> For funsies, here's a mistake that AI / Apple Music made as I'm writing this:

This was a point I was going to make earlier and I think I erased it - a lot of the uses for AI are things that we can't even really tell are AI or not in the first place. Maybe Apple calls their recommendation service AI, but is it actually? Maybe it is, but I don't know. How could I know? How can any consumer know the difference between a product that uses machine learning and just a general non-AI algorithm of some kind? Do you know if your smart TV uses AI? Are we really just going to call all software AI, like was suggested earlier?
> I'm actually flabbergasted that you would casually run linguistic competence and general reasoning together.

As flattered as I am to make someone flabbergasted, reading and reasoning are often mentioned in the same sentence as things learned by LLMs.
> For funsies, here's a mistake that AI / Apple Music made as I'm writing this:

Recommendation systems are not typically built on the type of AI we're talking about.
> I ended up looking up that Hinton guy very briefly last night and came across a claim he made, in short:
>
> "People say, ‘It’s just glorified autocomplete’ . . . Now, let’s analyze that. Suppose you want to be really good at predicting the next word. If you want to be really good, you have to understand what’s being said. That’s the only way. So by training something to be really good at predicting the next word, you’re actually forcing it to understand. Yes, it’s ‘autocomplete’ - but you didn’t think through what it means to have a really good autocomplete."

Before, in the climate change analogy, I said it was as if you were the climate denier telling all of the climate scientists they have too much vested in global warming to be duly skeptical of climate change. Now with Hinton it's like you're telling the world's most respected and influential climate scientist that he's made a critical error in his thinking, based on probably less reading on the topic than he has done at any time in the past 40 years. There's probably a bit of hubris there.
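For what it's worth, "really good at predicting the next word" is a concrete, measurable objective. A toy bigram autocomplete shows the bare mechanism (nothing like a transformer, and the training text is made up):

```python
# Toy "autocomplete": predict the next word from bigram counts.
from collections import Counter, defaultdict

text = "the cat sat on the mat and the cat slept"  # made-up training text
words = text.split()

# Count how often each word follows each other word.
bigrams = defaultdict(Counter)
for prev, nxt in zip(words, words[1:]):
    bigrams[prev][nxt] += 1

def predict_next(word: str) -> str:
    """Return the most frequent follower seen in training, or <unk> if unseen."""
    followers = bigrams[word]
    return followers.most_common(1)[0][0] if followers else "<unk>"

print(predict_next("the"))  # -> "cat" (seen twice, vs "mat" once)
```

Hinton's argument, as quoted above, is that a counter like this plateaus quickly, and pushing prediction accuracy much further forces the model to pick up something that functions like understanding.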
It sounds to me like a lot of the folks who believe AI "truly understands" are just looking at the results and inferring that more is happening than really is. They're starting from the conclusion and working backwards. Effectively saying that because it plausibly looks like understanding, then it must be understanding, because there can't possibly be any other explanation. And I can't help but fall back on something I expressed a long time ago in another thread.
Does a calculator "understand" math? Does a car "understand" transportation? Does a pen "understand" language? We have an abundance of tools that produce things they don't have an "understanding" of, despite filling those roles very well and producing perfectly reasoned results. But these are all human reasoning expressed through the implementation of the device, not the device itself doing the reasoning. And I think this is where we generally draw the line between what's AI and what isn't. AI, at least from some perspectives, is "the thing doing the reasoning", where an algorithm without AI is just the expression of reasoning done ahead of time by an author.
Take, for example, if you're familiar with the "human-centered design" way of thinking - that is to say, a model of embedding knowledge into the design of something, and being aware of the separation between human knowledge and embedded knowledge and how they interact. If you press a button, and the button has signaled to you what it will do, and this causes a series of automations that, say, unlock a door, or deliver you a canned drink, or whatever, then the knowledge of what the button represents, the cost of the action, the conditions to satisfy the user request, the process of producing the action you want, etc. are all things you could call "knowledge" embedded in the machine, the lock, the coke machine, whatever - but these were all deliberately designed and placed by the person who made the machine. From the author. They're expressions. The source of the knowledge is not the machine, only the expression of it. Without the human having expressed the knowledge via implementation, the machine would do nothing. And when the machine fails in a manner that is outside of the expression of the author - like if the button breaks, the drink gets jammed, someone breaks the door down, etc. - then it has no mechanism to either understand its predicament or do anything about it. It will continue to express its mechanical function.
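To make the drink-machine example concrete, here's a minimal sketch (the states and price are hypothetical); every rule in it was decided by the author, and the machine only expresses those rules:

```python
# A drink machine as author-embedded knowledge: every rule below was decided
# by whoever wrote it. The machine only *expresses* that knowledge; outside
# these rules (jammed can, broken button) it has nothing to say or do.

PRICE_CENTS = 150  # the author's knowledge of what a drink costs (hypothetical)

class DrinkMachine:
    def __init__(self) -> None:
        self.credit = 0

    def insert_coin(self, cents: int) -> None:
        self.credit += cents  # the author decided coins accumulate as credit

    def press_button(self) -> str:
        if self.credit >= PRICE_CENTS:
            self.credit -= PRICE_CENTS
            return "drink dispensed"
        return "insufficient credit"  # the only failure the author anticipated

machine = DrinkMachine()
machine.insert_coin(100)
machine.insert_coin(100)
print(machine.press_button())  # -> "drink dispensed", credit now 50
```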
I have trouble believing that AI is any different - it's just an expression of a lot more of the author's intent than you'd have with a traditional machine. The amount of knowledge embedded is vastly larger, but it still represents an expression of the author's reasoning, not the reasoning of the machine as if it's an active agent in the process. A human still decides the model, the training data, the intended result, the bounds of what the machine can access or express, and all of the process by which this happens. Because that's what a computer does.
And we see the same thing happen when the machine fails in a way that falls outside of the expressed knowledge of the author - when something breaks. It continues to follow its design without any other consideration and produces something, because that's what it's designed to do. Hallucinations. Wrong answers delivered confidently. Inaccuracies in simulations. Because it's still a machine, and it's still constrained to the reality that it's a machine.
I find it very hard, philosophically, to attribute the "intelligence" of AI to the machine and not to the person who made the model. Yes, the models are complicated and they can express some very complex things, but they're still the expression of the author of the model. In the same way as with a simple machine, or basic algorithm, or whatever else, it's not the machine that understands, it's the author who understands - and the machine can only express.
> it's like you're telling the world's most respected and influential climate scientist that he's made a critical error in his thinking

I don't honestly have any qualms about that. Nobody is infallible, and there's always more than one way to look at things. I don't think it's valuable to worship influential people as if they can do no wrong. He's as capable of a critical error in thinking as anyone is.
> I don't honestly have any qualms about that. Nobody is infallible, and there's always more than one way to look at things. I don't think it's valuable to worship influential people as if they can do no wrong. He's as capable of a critical error in thinking as anyone is.

Sure, but that's kind of like saying Philippe Petit is as capable of falling over as anyone is.
> If now it is revealed that there was no person in the room, only a nearly infinitely complex set of rules that specify how to calculate what the reply should be, then where is the intelligence?

I feel like I answered that question already: the expression of the intelligence is in the room, but it's an expression of the intelligence of the person who created the set of rules.

And it's not like I'm alone in this manner of thinking - I found that quote on the blog of a guy who I presume works in the field and also disagreed with the sentiment.
> Sure, but that's kind of like saying Philippe Petit is as capable of falling over as anyone is.

He is. I think that's a perfectly reasonable thing to say.
> but here he's comparing machine intelligence to human intelligence, which isn't necessary.

What else do you compare it to? What do you think I'm comparing it to? If we're going to draw a line and say that machine and human intelligence are not equivalent or comparable, then we're in agreement. Human intelligence is the manner in which most people understand what intelligence is, though.
> He is. I think that's a perfectly reasonable thing to say.

I'd even go as far as to say that he's probably fallen down more than most people - he probably didn't start off as a pro. And while I have a 0% chance of falling off a highwire, because I'm never on a highwire, he always has a non-zero chance of falling when he's up there.
> He is. I think that's a perfectly reasonable thing to say.

Maybe as capable, but certainly not as likely, which is my point. If someone were to really dig in and fully understand these models inside and out, and all the theory, a good chunk of it being his, and then come in and say, "let me lay out why I think this is wrong" - sure. The odds of someone spending a couple of days reading about a topic and managing to throw a wrench into a Turing Award winner's understanding of computer science are not particularly high. But I think this may be a "but you do you" moment.
> What else do you compare it to? What do you think I'm comparing it to? If we're going to draw a line and say that machine and human intelligence are not equivalent or comparable, then we're in agreement. Human intelligence is the manner in which most people understand what intelligence is, though.

I meant in the mechanism. When Andrew's talking, he's basically talking about the difference between System 1 and System 2 thinking. But clearly an LLM can do both of these to varying extents - it's unlikely (my technically correct way of saying it's not at all) the case that humans and LLMs think in comparable ways. But I don't think that is any shortcoming of the LLM - human intelligence varies, and there is no reason to make human intelligence the benchmark for all intelligence.
> but certainly not as likely, which is my point

I didn't say likely, I said capable. I'm very strongly opposed to the way some folks treat smart people like they can do no wrong. They can, and they do. Every person, expert or not, is fallible, even in the things they're an expert about. Sometimes especially in the thing they're an expert about, because it'll be the area where they are the most active: participating, making claims, trying things, exploring the space, etc. You don't get to be an expert without making mistakes.
> I meant in the mechanism [...] there is no reason to make human intelligence the benchmark for all intelligence.

Of course there's a reason: it's how we understand what intelligence is in the first place, and it forms the basis for all of the terminology we're using.