Anyone on here particularly religious?

  • Thread starter Hollowway

bostjan

MicroMetal
Contributor
Joined
Dec 7, 2005
Messages
21,509
Reaction score
13,761
Location
St. Johnsbury, VT USA
Your library link was not relevant to this system, at all.

Sure, they are different tasks, but here is a car that can navigate the road without explicit training for that (which is actually a huge obstacle to getting autonomous cars out on the road), and then other in-simulation car systems navigate around other cars without being given explicit knowledge of them (besides negative rewards for crashing). You have to look at the state of the field and the individual contributions of different research groups, and from that vantage point decide whether you think this is possible end-to-end. I don't care what your stance is at that point, but I think it's a bit bizarre to take firm stances while not reading the literature.

1. I read your paper. I quoted your paper back to you multiple times. I posted my own links to papers I read, and quoted those to you. Stop saying I don't read the literature; at this point you are insulting yourself as well as me.
2. My library link is 100% relevant to this discussion. The paper you keep saying I didn't read is just about road and lane identification and steering controls. Why don't you start acting like you read your own link?
3. I don't care if you don't care what my stance is. Your stance is that you are somehow all-knowing in this field, while you are clearly misunderstanding some of the things you are posting, and you seem to think it makes you look cool to shit on bostjan. Yet I've made valid points, and your response is to say that I'm "whining" (which didn't even make sense in context) or that I don't read things, without challenging anything specific I've said.
 


narad

Progressive metal and politics
Joined
Feb 15, 2009
Messages
16,501
Reaction score
30,198
Location
Tokyo
1. I read your paper. I quoted your paper back to you multiple times. I posted my own links to papers I read, and quoted those to you. Stop saying I don't read the literature; at this point you are insulting yourself as well as me.

That's not "the literature." When it comes to how much behavior you can learn without hand-coding things, it's not just a couple self-driving car papers. The deepmind papers are a great start. The fundamental learning paradigm of these papers and the way that self-driving cars can be trained (in simulation) are the same. I already discussed why this isn't done for self-driving systems that rely solely on example or real-world driving scenarios. At the end of the day, it's up to you to decide whether driving around grand theft auto avoiding cars and getting to a goal is sufficiently similar to driving a real car to the grocery store, as it pertains to exhibiting intelligent behavior.

2. My library link is 100% relevant to this discussion. The paper you keep saying I didn't read is just about road and lane identification and steering controls. Why don't you start acting like you read your own link?

It was not at all relevant to that particular Nvidia car. That car does not use those libraries. End of discussion. You were free to bring it up, but trying to make it relevant to my post (with examples) was obviously wrong.

3. I don't care if you don't care what my stance is. Your stance is that you are somehow all-knowing in this field, while you are clearly misunderstanding some of the things you are posting, and you seem to think it makes you look cool to shit on bostjan. Yet I've made valid points, and your response is to say that I'm "whining" (which didn't even make sense in context) or that I don't read things, without challenging anything specific I've said.

I guess you're so well-informed that you can properly assess my well-informedness. But you throw out your points like they are critical counterexamples. Does the Nvidia system use image normalization? Sure. That doesn't affect my own assessment of what's being learned, because other systems have already learned such normalization. If that's too brittle for you, fine.
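
For anyone following along: "image normalization" here means a fixed preprocessing step like the one below, not learned behavior. This is a generic sketch, not the actual pipeline from the paper.

Code:
import numpy as np

def normalize(image: np.ndarray) -> np.ndarray:
    """Scale pixel intensities to zero mean and unit variance."""
    image = image.astype(np.float32)
    return (image - image.mean()) / (image.std() + 1e-8)

frame = np.random.randint(0, 256, size=(66, 200, 3))  # a fake camera frame
out = normalize(frame)
print(out.mean(), out.std())  # ~0.0 and ~1.0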

AI can be a weird thing to argue about. First we could find a system that doesn't use normalization. Then it would be about whether it could avoid a kid running into the street. Then it wouldn't be AI until it could make an ethical decision about whom it must run over when multiple people of all ages and social classes run into the street. Then it wouldn't be AI until it was emotionally distressed over the ethical decision. When you wind up arguing with always-right guys like you, this is how it goes. Really, I was just trying to show some cool complex behavior without if-else rules and broaden some people's minds about how current AI works :-/
 

bostjan

MicroMetal
Contributor
Joined
Dec 7, 2005
Messages
21,509
Reaction score
13,761
Location
St. Johnsbury, VT USA
It was not at all relevant to that particular Nvidia car. That car does not use those libraries. End of discussion. You were free to bring it up, but trying to make it relevant to my post (with examples) was obviously wrong.

I brought it up before you posted that example.

:shrug:

But you throw out your points like they are critical counterexamples.

Funny, that's the way I thought you were coming off to me. :lol: I mean, I agree that I have made counterexamples to your points, but, to be fair, my counterpoints specifically address your points.

I guess the internet is a weird place to try to have a discussion. :/

AI can be a weird thing to argue about. First we could find a system that doesn't use normalization.

I mean, sure, but you kept beating the drum that no hand-coding of anything was being used in the example, when the second sentence of the very paper you posted as an example admitted the opposite. In fact, I am certain that an AI that does image normalization would be totally possible. It's not exactly a trivial task, but it's super easy compared to the things we are discussing here about driving a car.

Obviously, the point of disagreement here is how advanced these AIs are. They are advanced as hell. But they are not as advanced as you have stated in several posts. I think we've covered that. The paper you keep coming back to states that the AI can determine the outline of the road without any explicit coding telling it such. I don't see where anyone claimed AI could not do that.
 

TedEH

Cromulent
Joined
Jun 8, 2007
Messages
12,709
Reaction score
12,656
Location
Gatineau, Quebec
Then it wouldn't be AI until
I don't think anyone is saying this isn't AI. The distinction being made is that AI is not the same as "real" or "human" intelligence. It's impressive for sure, but IMO it's nowhere near truly understanding what it's doing or "learning".
 

narad

Progressive metal and politics
Joined
Feb 15, 2009
Messages
16,501
Reaction score
30,198
Location
Tokyo
Obviously, the point of disagreement here is how advanced these AIs are. They are advanced as hell. But they are not as advanced as you have stated in several posts. I think we've covered that. The paper you keep coming back to states that the AI can determine the outline of the road without any explicit coding telling it such. I don't see where anyone claimed AI could not do that.

I think the point is that if it can determine the road without any explicit coding telling it such, can it not determine people, dogs, deer, etc.? This would run very counter to the opinions expressed earlier in this thread that these need to be fed in, either by hard-coding or by pre-training some object-recognition system on labels ("this is a person", "this is a deer", etc.).

I mean, it is clear that this is the case when you extrapolate from the Atari game work and the ability of that system to do something similar from similar input. You just need an objective function that rewards/penalizes something relevant to these objects ("hitting people is bad", or even "hitting people means cops come, and going to jail drastically slows estimated time to destination").
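
A hypothetical sketch of that last idea, folding the consequence into the objective instead of labeling objects (all names invented for illustration):

Code:
def reward(minutes_to_destination: float, collided_with_person: bool) -> float:
    # The simulator reports the collision; its consequence is folded into
    # time-to-destination, which the agent is already minimizing.
    if collided_with_person:
        minutes_to_destination += 600.0  # "cops come": the ETA explodes
    return -minutes_to_destination       # maximizing reward = minimizing ETA

print(reward(12.0, collided_with_person=False))  # -12.0
print(reward(12.0, collided_with_person=True))   # -612.0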

With regards to the second sentence of the paper that contradicts something I said, I'm not sure what you mean. I guess you mean this (which is actually the third sentence): "With minimum training data from humans the system learns to drive in traffic on local roads with or without lane markings and on highways." Well, its objective is derived from human steering, so that's natural. That is a learning-by-example paper, where the model itself does not contain hand-coding. See deep reinforcement learning if you want to ditch the human.
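
Here's roughly what that learning-by-example setup looks like: a generic vision network with no hand-coded driving rules, supervised by the human's steering angle. Layer sizes and names below are made up for illustration, not the paper's actual architecture.

Code:
import torch
import torch.nn as nn

# Generic vision network: nothing in here encodes lanes, roads, or rules.
model = nn.Sequential(
    nn.Conv2d(3, 24, kernel_size=5, stride=2), nn.ReLU(),
    nn.Conv2d(24, 36, kernel_size=5, stride=2), nn.ReLU(),
    nn.Flatten(),
    nn.Linear(36 * 14 * 47, 1),  # flattened conv features -> steering angle
)
opt = torch.optim.Adam(model.parameters(), lr=1e-4)

frames = torch.randn(8, 3, 66, 200)  # a batch of (fake) camera frames
steering = torch.randn(8, 1)         # what the human driver actually did

# The entire objective: match the human. The human enters through the
# data, not through the model.
loss = nn.functional.mse_loss(model(frames), steering)
opt.zero_grad()
loss.backward()
opt.step()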

But yeah, I don't think we're close to human intelligence. But to the points expressed earlier -- that AI is mostly tricks to appear intelligent -- I strongly disagree. I don't see any indication that deep RL is so far removed from the process of human intelligence as to be considered a qualitatively different type of thing. The biggest obstacles seem to be in training strong predictive models, so that when you're in a situation you can imagine your future outcomes based on the actions you take; in composing small actions into conceptually larger ones, so that an RL reward applies more to concepts than to super tiny actions; and then in grounding such an agent in a multitask world. But there are people working on all of these problems -- the remaining problem seems to be one of scale.
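
What I mean by a strong predictive model, in miniature: learn (or here, just stipulate) a function from state and action to next state, then choose actions by imagining outcomes before acting. All toy names, purely illustrative.

Code:
# Toy world model: the imagined consequence of taking an action.
def world_model(position: float, action: float) -> float:
    return position + action

# How good is an imagined situation? Closer to the goal is better.
def value(position: float, goal: float = 10.0) -> float:
    return -abs(goal - position)

# Planning: imagine each action's outcome and pick the best one
# *before* acting in the real world.
def plan(position: float, actions=(-1.0, 0.0, 1.0)) -> float:
    return max(actions, key=lambda a: value(world_model(position, a)))

print(plan(7.0))  # 1.0: the imagined rollouts say "move toward the goal"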
 

bostjan

MicroMetal
Contributor
Joined
Dec 7, 2005
Messages
21,509
Reaction score
13,761
Location
St. Johnsbury, VT USA
Determining: "Road/NOT Road" -> "Drive here/NOT Drive here" is not trivially developed into "This is an object that could potentially get in your way within a safe distance X, so slow down on approach."

Agree/Disagree?
 

narad

Progressive metal and politics
Joined
Feb 15, 2009
Messages
16,501
Reaction score
30,198
Location
Tokyo
Disagree. The model has a functional representation of the road: in the presence of this curve, align the steering wheel in this manner. This is not so different from a functional representation of a child: in the presence of this object, reduce speed 20-30% (because this is what all the humans are doing when this object enters the camera's field of view; i.e., there is a strong association between this type of phenomenon and the need to slow down in the objective function).

It's not exhibiting the same rationale for reducing speed that you mention, but functionally it is equivalent (and would lead back to Chinese room talk to say that these are fundamentally different).
 

TedEH

Cromulent
Joined
Jun 8, 2007
Messages
12,709
Reaction score
12,656
Location
Gatineau, Quebec
rationale
I'm not sure I'd call any part of the process "rationale" at all. It's not like the machine thinks to itself "oh no, there's something here, I should move out of the way"; it's just giving the result that best matches the pattern it was trained on. There's no reasoning involved. I imagine it would respond to any foreign shape the same way, regardless of whether or not it's safe or ok to drive over it.

I think it's safe to say we've strayed super far from the original topic, though. Maybe time for a new thread? The AI thread? Edit: Maybe we should poke a mod to move this to its own discussion.
 

narad

Progressive metal and politics
Joined
Feb 15, 2009
Messages
16,501
Reaction score
30,198
Location
Tokyo
I'm not sure I'd call any part of the process "rationale" at all. It's not like the machine thinks to itself "oh no, there's something here, I should move out of the way"; it's just giving the result that best matches the pattern it was trained on. There's no reasoning involved. I imagine it would respond to any foreign shape the same way, regardless of whether or not it's safe or ok to drive over it.

Well exactly -- I know you wouldn't call it a rationale. But ultimately, if a machine makes all the same decisions as an informed human on some complex task that would require a chain of reasoning from us, then one has to consider that the function the model has learned encodes an implicit understanding of the discrete logical steps you would cite when making the same decision.

But yeah, also in favor of a thread split.
 

Explorer

He seldomly knows...
Joined
May 23, 2009
Messages
6,620
Reaction score
1,161
Location
Formerly from Cucaramacatacatirimilcote...
I don't think we're close to human intelligence. But to the points expressed earlier -- that AI is mostly tricks to appear intelligent -- I strongly disagree. I don't see any indication that deep RL is so far removed from the process of human intelligence as to be considered a qualitatively different type of thing. The biggest obstacles seem to be in training strong predictive models, so that when you're in a situation you can imagine your future outcomes based on the actions you take; in composing small actions into conceptually larger ones, so that an RL reward applies more to concepts than to super tiny actions; and then in grounding such an agent in a multitask world. But there are people working on all of these problems -- the remaining problem seems to be one of scale.
I'm not trying to pick on you, but as I noted earlier, deep RL to build a rule system is a way to build a *simulated* intelligence, not a synthetic intelligence.

You see no indication that a rule system is different from a synthetic intelligence. Could you expand on how they are identical?
It's not exhibiting the same rationale for reducing speed that you mention, but functionally it is equivalent (and would lead back to Chinese room talk to say that these are fundamentally different).
The "Chinese room" system also is a rule-based system, and not a synthetic intelligence, right? Also, the "Chinese room" has never been proposed as an exqmple of synthetic intelligence.
Well exactly -- I know you wouldn't call it a rationale. But ultimately, if a machine makes all the same decisions as an informed human on some complex task that would require a chain of reasoning from us, then one has to consider that the function the model has learned encodes an implicit understanding of the discrete logical steps you would cite when making the same decision.
No more so than the game Mousetrap understands its function.
 

narad

Progressive metal and politics
Joined
Feb 15, 2009
Messages
16,501
Reaction score
30,198
Location
Tokyo
I'm not trying to pick on you, but as I noted earlier, deep RL to build a rule system is a way to build a *simulated* intelligence, not a synthetic intelligence.

You see no indication that a rule system is different from a synthetic intelligence. Could you expand on how they are identical?

The "Chinese room" system also is a rule-based system, and not a synthetic intelligence, right? Also, the "Chinese room" has never been proposed as an exqmple of synthetic intelligence.

No more so than the game Mousetrap understands its function.

I don't believe in this simulated vs. synthetic distinction, so I don't really know how to engage here. I think the best definitions of intelligence are functional ones, based on behaving optimally to achieve one's goal, and not ones that try to classify or put restrictions on how that intelligent behavior arises.

So things like self-awareness and higher-order planning are, to me, things human intelligence evolved for the purpose of better achieving the long-term goal of sexual reproduction in a competitive environment. When an intelligence "bottoms out" following a simple strategy and no longer achieves its goals, then in a large population of similar intelligences exploring slightly different strategies, the right ones get promoted. Ultimately, things like self-awareness arise.

So what someone might try to classify as a "less than"/simulated intelligence is imo just an intelligence that has not been situated in a world where it needed certain higher-order functions to succeed (or where such complexity could even be harmful toward achieving its goal). This is subject to the caveat that the intelligence is somewhat biologically inspired (i.e., reward-driven, with complex representation learning/pattern matching), since that is the basis of our understanding of intelligence in our world.
 

Explorer

He seldomly knows...
Joined
May 23, 2009
Messages
6,620
Reaction score
1,161
Location
Formerly from Cucaramacatacatirimilcote...
I don't believe in this simulated vs. synthetic distinction, so I don't really know how to engage here. I think the best definitions of intelligence are functional ones, based on behaving optimally to achieve one's goal, and not ones that try to classify or put restrictions on how that intelligent behavior arises.
It seems apparent, at least to me, that you have no meaningful explanation of how a rule-based simulated intelligence would then make the leap to synthetic intelligence/consciousness, and so you will instead argue there is no difference between a rule-based expert system and a synthetic intelligence/consciousness. That's very convenient when attempting to avoid admitting there is a vast qualitative difference, but ultimately fails in terms of doing honest intellectual inquiry.
So things like self-awareness and higher-order planning are, to me, things human intelligence evolved for the purpose of better achieving the long-term goal of sexual reproduction in a competitive environment. When an intelligence "bottoms out" following a simple strategy and no longer achieves its goals, then in a large population of similar intelligences exploring slightly different strategies, the right ones get promoted. Ultimately, things like self-awareness arise.
Ah. So, magically, a rule-based system suddenly gains self-motivation, no explanation required.

I'm dubious. Could you give an example of such having happened?
So what someone might try to classify as a "less than"/simulated intelligence is imo just an intelligence that has not been situated in a world where it needed certain higher-order functions to succeed (or where such complexity could even be harmful toward achieving its goal). This is subject to the caveat that the intelligence is somewhat biologically inspired (i.e., reward-driven, with complex representation learning/pattern matching), since that is the basis of our understanding of intelligence in our world.
Self-motivation is a handy indicator for showing consciousness, whether natural or synthetic. What meaningful alternate indicator would you suggest?

Even though this is slightly out of my field, I'm going to suggest reading up on the deeper writings regarding synthetic intelligence/consciousness. It seems like you're positing a lot of magic steps and handwaving to paper over the gaps you want to claim don't exist.
 

narad

Progressive metal and politics
Joined
Feb 15, 2009
Messages
16,501
Reaction score
30,198
Location
Tokyo
It seems apparent, at least to me, that you have no meaningful explanation of how a rule-based simulated intelligence would then make the leap to synthetic intelligence/consciousness, and so you will instead argue there is no difference between a rule-based expert system and a synthetic intelligence/consciousness. That's very convenient when attempting to avoid admitting there is a vast qualitative difference, but ultimately fails in terms of doing honest intellectual inquiry.

Ah. So, magically, a rule-based system suddenly gains self-motivation, no explanation required.

I'm dubious. Could you give an example of such having happened?

Self-motivation is a handy indicator for showing consciousness, whether natural or synthetic. What meaningful alternate indicator would you suggest?

Even though this is slightly out of my field, I'm going to suggest reading up on the deeper writings regarding synthetic intelligence/consciousness. It seems like you're positing a lot of magic steps and handwaving to paper over the gaps you want to claim don't exist.

Well, the biggest example of consciousness / self-awareness arising from a rule-based system is…you know…you. Your ancestors were nothing more than proteins operating in a very if-this-then-that world of basic chemical processes, and then, after being subject to ever more difficult hurdles to reproduction for millions of years, you wound up a conscious entity. I don't see how anyone can believe in evolution and yet not imagine a goal-driven need for the development of consciousness, when we see the spectrum of both intelligence and self-awareness in the diversity of life in this world. I shouldn't need to provide an explanation for consciousness arising any more than I would have to explain having five fingers on a hand; it just worked.

I think it's just the usual trap of believing we are somehow qualitatively special or distinct from all the life around us.

And just as five fingers seems to be a rather ideal number for the physical activities our ancestors performed, self-awareness, complex (/imaginative) planning, and reasoning about the beliefs of others are all clearly required to interact successfully in a tribal society. Yet you want more than a plausible need for a skill (toward the goal of sexual reproduction), combined with millions of years to evolve it?

Oh, and what are the deeper writings regarding synthetic intelligence/consciousness? I think my viewpoint is very aligned with a Dan Dennett perspective of things.
 

TedEH

Cromulent
Joined
Jun 8, 2007
Messages
12,709
Reaction score
12,656
Location
Gatineau, Quebec
I shouldn't need to provide an explanation for consciousness arising any more than I would have to explain having five fingers on a hand; it just worked.
What good is it to use something as an argument if you can't (or refuse to) provide an explanation of it when challenged? I kinda stopped caring about this conversation a while back, and a fair amount of the science at this level goes over my head, but IMO if you can't explain something then you probably don't understand it yourself. And I think the more or less universal lack of understanding of consciousness is central enough to the conversation to warrant an explanation, if you really know so much about it. :2c:

In other words "it just is" will never be a good argument for anything.
 

narad

Progressive metal and politics
Joined
Feb 15, 2009
Messages
16,501
Reaction score
30,198
Location
Tokyo
What good is it to use something as an argument if you can't (or refuse to) provide an explanation of it when challenged?

I'm not sure you read me right. I explained the why; the how is evolution. I don't think anyone in the world understands the how of self-awareness to a significantly greater degree than that.

At the same time, and as I tried to imply with my previous post, I'm not sure why you would expect anything more than that. We don't know the how of many things related to our physiology, yet most people are willing to accept that they arose via evolution in response to environmental conditions. Why is it hard for you to believe consciousness arose the same way?

I'm not ducking out on the discussion, but you seem to be expecting nothing less than the mysteries of the world to be revealed to you.
 

TedEH

Cromulent
Joined
Jun 8, 2007
Messages
12,709
Reaction score
12,656
Location
Gatineau, Quebec
I'm not ducking out on the discussion, but you seem to be expecting nothing less than the mysteries of the world to be revealed to you.
I suppose it's a weird issue with debates about AI, 'cause a lot of people (not any of us necessarily) equate AI with these not-understood-at-all mysteries of the world. Like the suggestion that we're close to computers that are conscious: how can we claim to be close to simulating something that we know very little about? :lol:
 

