jaxadam
Well-Known Member
A good AI should have warned you about horse girls.
> A good AI should have warned you about horse girls.

This was back when I was still playing snake on my Nokia.
> most people could have more free time and work less

But that's COMMUNISM. If you're not sweating and breaking your back for my benefit, you're a BAD PERSON.
I don't think we're ever going to agree on this point. I liken this to the "you can't prove God doesn't exist" argument. The burden of proof is on the side making the claim. The claim is that AI is currently capable of "reasoning", and that is the part that needs to be proven. LLMs have the appearance of reasoning, because that's exactly their purpose; that's what they're designed and trained to do. But so do a lot of things that can't actually reason. Saying that AI can "reason" is anthropomorphism.
> If an AI sees an image of a pen, it has zero internal concept of what a pen is or what it could do. All it has is a bunch of trained weights that make it likely that a number of tokens in some order are a likely response to whatever the hell it might be looking at. Statistics dictate that if you've trained the weights to do so, you might get some text saying "this is a pen, you write with it", but you could just as easily end up with "this is a dog, which is a useful tool for calculating distance" if given the right data to work with - and the machine has no idea how ridiculous that is, because it has no idea what a pen, a dog, or "distance" mean. It's all equal and arbitrary and carries no meaning or consideration past the evaluation of tokens and some statistical model.

I don't really get this, but the LLM sees the world through text. If you put a human in a world where there was an important relationship between dogs and distances, then humans too would learn these relationships. Humans are one part evolutionary bias, one part data-driven. But yes, if you were to in essence present a bonkers world to an LLM, it would learn a bonkers representation where concepts were not related in the way they are in our world.
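To make the data-driven point concrete, here's a toy sketch: build crude word vectors from nothing but co-occurrence counts, and "related" just means "showed up in similar contexts". Both little corpora below are invented for the example; a real LLM's representations are far richer, but they depend on the training distribution in the same way:

```python
from collections import Counter, defaultdict
import math

def cooccurrence_vectors(corpus, window=2):
    """Build a crude vector for each word: counts of its nearby neighbors."""
    vectors = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.lower().split()
        for i, w in enumerate(words):
            for j in range(max(0, i - window), min(len(words), i + window + 1)):
                if j != i:
                    vectors[w][words[j]] += 1
    return vectors

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[k] * b[k] for k in a if k in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# A "normal" world and a "bonkers" world, as tiny made-up corpora.
normal = ["you write with a pen", "you write with a pencil",
          "a dog is a loyal pet", "a cat is a quiet pet"]
bonkers = ["you measure distance with a dog", "you measure distance with a ruler",
           "a pen is a loyal pet", "a cat is a loyal pet"]

for name, corpus in [("normal", normal), ("bonkers", bonkers)]:
    v = cooccurrence_vectors(corpus)
    print(name, "dog~ruler:", round(cosine(v["dog"], v["ruler"]), 2),
                "pen~pencil:", round(cosine(v["pen"], v["pencil"]), 2))
```

On the normal corpus, pen~pencil is high and dog~ruler is zero; on the bonkers one it flips. The relationships are whatever the text says they are.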
I mean, there are 101 examples out there of AI or ChatGPT failing miserably at a lot of things, like math, or ASCII art, or understanding when a user is being sarcastic, or just taking everything at face value, giving non-answers, etc. The kind of stuff that would be unreasonable as an answer from an actually intelligent agent.
The best way to illustrate this is that a lot of the time, something like GPT has trouble inferring whether or not the answer it's giving matches the intent of the prompt. I asked about the pen above, and had to pry a few times to get details, asking how the pen got on the desk. Every answer I get is just a list of generic factors, much like the earlier example given, and insists that part of the way to the desk might be through things like magnetic attraction, or being blown there by the wind - which makes zero sense. That's not a reasonable answer. That's not an answer arrived at via reasoning. Because reason would dictate that I was looking for an answer like what I typed out above, or that you'd need further evidence to support weird environmental factors. But that's what it goes with, because the tokens of [speculate] [pen] [placement] [lifecycle] are going to have associations with the generic non-answers of [direct placement] [environmental factors] [manufacturing] [transport].
[Attachment 133368: screenshot of the model's answer about how the pen ended up on the desk]
But sure. "How is the pen likely to have ended up on the desk" is definitely magnets. That's reasonable.
> And that's a large part of why I don't think AI can do music. Music is so insanely subjective, and the criteria to evaluate what makes music, let alone "good" music, can barely be pinned down by people, so I can't imagine how you'd train a machine to recognize it.

I've just skimmed the thread, but as far as Dr. Luke pop music goes, AI will absolutely be writing it; it's already been shown to be written to a formula.
I work in distribution currently. AI isn't about to load trucks for me, but if it planned our shipping routes, my job might be easier.
Someone correct me if I'm wrong, but there's enough automation available that most people could have more free time and work less *if that's how the system had been designed*.
Three of our plants have almost completely automated distribution centers. Robot forklifts, robot order pickers, robot lumpers, etc., and a computer system that assembles and maintains orders.
The only folks working are there to communicate to truckers and make sure stuff doesn't break.
This isn't new tech; they've been around for almost two decades now. I don't think it really counts as "AI" though.
The "Mia Kalifa look like a virgin" line is so worn out that it makes Mia Kalifa look like a virgin
> Three of our plants have almost completely automated distribution centers. Robot forklifts, robot order pickers, robot lumpers, etc., and a computer system that assembles and maintains orders.
> The only folks working are there to communicate to truckers and make sure stuff doesn't break.
> This isn't new tech; they've been around for almost two decades now. I don't think it really counts as "AI" though.

Sounds like your company has money.
> More than they like telling us.
> Compared to people, they're cheap.

Just think of the pizza party you could get.
> So where's the shortcoming here?

That you had to provide the reasoning in order for it to give a reasonable answer. It didn't arrive at "of course, there's no chance this would happen" on its own, otherwise it wouldn't have suggested those things. YOU did that. Not the AI. You did the reasoning part. If there's a 0% chance that something would happen, then why would that be a reasonable answer to have provided to the question in the first place?
> Which gets to the point: either things are capable of reasoning to various degrees, or you are off in philosophy land where reasoning is or isn't there and you need to take it up with Searle.

I do think you're operating off of a very loose definition of what reasoning is, given that your only criterion seems to be "can pass a reasoning test when compared to previous iterations of itself", with no concern for the process or what any of it actually means. Yes, the text is very convincing, because that's exactly what it's designed to do. But it's an illusion. In the same way that if you play a video game, you didn't actually bump into geometry in a map. The geometry isn't real. It's an illusion. It's a metaphor. It's a very clever trick.
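The video-game analogy can be made literal, for what it's worth. "Bumping into" a wall is typically just an overlap test on a few numbers; here's a minimal axis-aligned bounding-box check (a generic textbook test, not from any particular engine):

```python
from dataclasses import dataclass

@dataclass
class Box:
    """Axis-aligned bounding box: position of one corner plus size."""
    x: float
    y: float
    w: float
    h: float

def overlaps(a: Box, b: Box) -> bool:
    """True if the two boxes intersect. No geometry 'exists' anywhere;
    the 'wall' is just four floats being compared."""
    return (a.x < b.x + b.w and b.x < a.x + a.w and
            a.y < b.y + b.h and b.y < a.y + a.h)

player = Box(0.0, 0.0, 1.0, 2.0)
wall = Box(0.5, 0.0, 1.0, 3.0)
print(overlaps(player, wall))  # True: the "collision" is an arithmetic fact
```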
> That you had to provide the reasoning in order for it to give a reasonable answer. It didn't arrive at "of course, there's no chance this would happen" on its own, otherwise it wouldn't have suggested those things...

I did not provide any reasoning. But still, I asked for the model to assign some probabilities to each of these explanations, and I further added that this was an office setup and didn't have children or pets, since you as a human could likely infer this from seeing this hypothetical room.

> I did not provide any reasoning. But still, I asked for the model to assign some probabilities to each of these explanations...

Then what is this? You had to provide so much context that you wouldn't need to give to a reasonable agent for it to not provide you with nonsense or vague non-answers.

> Then what is this? You had to provide so much context that you wouldn't need to give to a reasonable agent for it to not provide you with nonsense or vague non-answers.

"So much context" was literally: the room is an office with no children or pet factors. You're crazy if you think that's where the heavy lifting is being done in this exercise. I think it reduced the probability of those things from like <3% to 0%.
Again - reason does not mean "answer", it's the process. Reason is coming to a conclusion by operating on the actual understanding of the information. AI does not do this because it doesn't understand the information. Acting on data is not proof of understanding. GPT is just, one word at a time, going "what token makes it through the big system of weights and math based on what I've done so far". At no point is the data interpreted outside of the likelihood for one token to appear after a given sequence of other tokens.
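For the curious, "one word at a time" really is the shape of the loop. Here's a toy sketch with a made-up table of scores standing in for the trained weights; everything in it is invented for illustration, not pulled from any real model:

```python
import math
import random

# Made-up "model": maps the previous token to scores ("logits") over
# possible next tokens. A real LLM conditions on the whole sequence
# through billions of weights, but the generation loop has this shape.
TOY_LOGITS = {
    "<start>": {"this": 2.0, "that": 0.5},
    "that": {"is": 2.0},
    "this": {"is": 2.5},
    "is": {"a": 2.0},
    "a": {"pen": 1.5, "dog": 1.4},  # "pen" vs "dog" is just a score
    "pen": {"<end>": 3.0},
    "dog": {"<end>": 3.0},
}

def sample_next(token):
    """Softmax the scores for the current token, then sample one."""
    logits = TOY_LOGITS[token]
    total = sum(math.exp(v) for v in logits.values())
    r, cum = random.random(), 0.0
    for tok, v in logits.items():
        cum += math.exp(v) / total
        if r <= cum:
            return tok
    return tok  # guard against float rounding

tokens = ["<start>"]
while tokens[-1] != "<end>" and len(tokens) < 20:
    tokens.append(sample_next(tokens[-1]))
print(" ".join(tokens[1:-1]))  # e.g. "this is a pen" or "this is a dog"
```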
> Merely the fact that answers are structured and organized suggests that the answer exists in some sense internally prior to being generated word-by-word.

But the process of eking out that answer from the source data is not what I would call "reasoning". You're focused on the results; I'm focused on the process. You're focused on what emerges; I'm focused on why it emerged. What you're describing as reasoning, I think, is just an illusion or appearance of reasoning. In that sense, I don't think what you've said is strictly true. Correct me if I'm wrong, but there's zero planning at each stage, at each word - meaning at no point does the machine hold some state that represents "the answer" it's trying to communicate. Again - where a reasoning agent holds a concept of the answer it wants to communicate, and then communicates it with you, an LLM simply calculates a bunch of likely words, and it just happens to be an emergent property of this system that it lands on reasonable-sounding answers a lot of the time. Up until the sentence has been fully generated, there was no answer.
> But the process of eking out that answer from the source data is not what I would call "reasoning"... Up until the sentence has been fully generated, there was no answer.

Yea, that sounds incorrect. The whole point of the depth of the network is to make connections which are not immediately accessible in the surface form of the input, and to lay out something like a plan or a compact representation of the input, from which the output is decoded. GPT-type architectures are decoder-only, but that doesn't mean they don't essentially have an encoder built up inside them doing exactly that.
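As a rough illustration of what "internal state" can mean mechanically, here's a toy recurrent decoder. It is not a transformer (those build per-token representations with attention instead), and every number in it is made up, but it shows state that exists before, and shapes, each emitted word:

```python
import math

def step(hidden, token_id, w_h=0.9, w_x=0.3):
    """One toy decoder step: fold the new token into the hidden state.
    The hidden value persists across steps; it is internal state that
    exists before, and conditions, every later word."""
    return math.tanh(w_h * hidden + w_x * token_id)

prompt = [3, 1, 4]          # made-up token ids for the prompt
hidden = 0.0
for tok in prompt:          # "read" the prompt: the state now encodes it
    hidden = step(hidden, tok)

print(f"internal state after prompt, before any output: {hidden:.3f}")

generated = []
for _ in range(4):          # decode: every choice is conditioned on state
    next_tok = int(abs(hidden) * 10) % 5   # toy "pick a token from state"
    generated.append(next_tok)
    hidden = step(hidden, next_tok)
print("generated token ids:", generated)
```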
> To me it feels like we're anthropomorphizing computers. Attributing a type of agency to them that they don't have, just because their behavior sort of mimics what a conscious agent might say.

You can reason without anthropomorphizing. Prolog programs reason; they're just brittle.
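Since Prolog came up: here's the flavor of that non-anthropomorphic "reasoning", as a toy forward-chaining rule engine written in Python rather than actual Prolog. The facts and rules are invented for the example:

```python
# Facts and rules: each rule says "if all premises hold for some name,
# conclude a new fact about that name".
facts = {("pet", "rex"), ("barks", "rex")}
rules = [
    ({("pet", "?x"), ("barks", "?x")}, ("dog", "?x")),
    ({("dog", "?x")}, ("mammal", "?x")),
]

def apply_rules(facts, rules):
    """Naive forward chaining over single-variable rules: keep deriving
    new facts until nothing changes. Brittle, but it is inference."""
    changed = True
    while changed:
        changed = False
        names = {arg for _, arg in facts}
        for premises, (pred, _) in rules:
            for name in names:
                bound = {(p, name) for p, _ in premises}
                if bound <= facts and (pred, name) not in facts:
                    facts.add((pred, name))
                    changed = True
    return facts

print(apply_rules(facts, rules))
# derives ("dog", "rex") and then ("mammal", "rex") -- mechanical, not magic
```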
> Attributing that kind of reasoning to an AI, IMO, leads to putting much more trust than is warranted in the output. Like the lawyers who got fired for using ChatGPT instead of doing actual case research. Or the teachers etc. who were convinced that ChatGPT could identify if text was written by AI.

I mean, you can say that, but you're kind of ignoring an entire field (and notably this book on my bookshelf that makes it pretty clear):
You can call what an AI does "reasoning" if you want to, but I can't, unless in a colloquial way.