Anyone on here particularly religious?

  • Thread starter Hollowway
  • Start date

narad

Progressive metal and politics
Joined
Feb 15, 2009
Messages
16,509
Reaction score
30,217
Location
Tokyo
To be more clear, this is why I pointed out the first line of your link:

TedEH said:
But this is exactly what that car is doing. The network generates an angle, then explicitly calls a steering function, something like SetTurnAngle(angleFromNetwork). Or, to put it another way, there is a turnLeft() function.

That was a bad choice of name on my part, because I intended it to be a higher-order kind of command, like "turn left on this road," not "set angle to -1, -15, -45, -34, -20, -7," etc. The model controls the most primitive command you can give the car, not some high-level routine that executes left turns, braking, or 180s the way you would instruct a human driver.
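Roughly, here's the shape of what I'm describing, as a toy sketch. Every name in it (TinyPolicy, drive_step) is made up for illustration; this isn't the Nvidia code:

Code:
# A minimal, hypothetical sketch of the distinction: the trained network
# emits a raw steering angle every frame; there is no high-level turn_left()
# routine anywhere. All names are illustrative, not from the Nvidia system.

class TinyPolicy:
    """Stand-in for the trained network: pixels in, raw angle out."""
    def predict(self, frame):
        # a real model would run a CNN over the frame; we fake a small angle
        return -7.5  # degrees

def drive_step(policy, frame):
    # the most primitive command the car accepts: a single steering angle
    return policy.predict(frame)

if __name__ == "__main__":
    policy = TinyPolicy()
    # "turning left" is just the emergent result of many per-frame angles
    for t in range(5):
        print(f"t={t}: steering angle = {drive_step(policy, frame=None)}")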
 


TedEH

Cromulent
Joined
Jun 8, 2007
Messages
12,712
Reaction score
12,662
Location
Gatineau, Quebec
My point was that the network is not literally driving the car in the same sense that you could say a person is driving the car - it's performing one (very impressive) function within a machine that drives a car. The idea that the network knows what it's doing is an illusion.
 

narad

Progressive metal and politics
Joined
Feb 15, 2009
Messages
16,509
Reaction score
30,217
Location
Tokyo
TedEH said:
My point was that the network is not literally driving the car in the same sense that you could say a person is driving the car - it's performing one (very impressive) function within a machine that drives a car. The idea that the network knows what it's doing is an illusion.

I feel like there are actually two things going on here:
-- (1) that the programmer provides his learned knowledge to give the illusion of intelligence

-- (2) that the machine "knows" what it's doing.

To the first, I think the examples do a good job of arguing against this point. A camera, or the fact that a steering wheel turns left or right, isn't programmer expert knowledge -- those are the atomic perceptual features the machine deals with and the primitive actions it takes. Was Johnny 5 not intelligent because he had cameras for eyes? You can imagine his model brain controlling a set of joint actuations, similar to the left/right steering-angle commands, and the same applies to plenty of movie AI robots that I think everyone would agree had AI if they weren't fantasy characters. That kind of learning is an AI breakthrough that's really only hit a useful scale in the past 3-4 years. You described AI earlier as basically programmers putting in "if this then that" -- clearly that type of hard-coded knowledge is not how these systems work.
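To caricature the two pictures in toy code: the first function is the hand-coded "if this then that" picture; the loop below it is a stand-in for learning from feedback (a perceptron-style update, not actual deep RL, and every name is illustrative):

Code:
import random

# Caricature 1: hand-coded knowledge -- the "if this then that" picture.
def rule_based_action(obstacle_ahead):
    return "brake" if obstacle_ahead else "accelerate"

# Caricature 2: learned knowledge -- weights adjusted from feedback alone.
w, b, lr = 0.0, 0.0, 0.1
for step in range(1000):
    x = 1.0 if random.random() < 0.5 else 0.0   # obstacle signal from a sensor
    target = x                                   # correct behavior: brake iff obstacle
    pred = 1.0 if w * x + b > 0.5 else 0.0       # current policy's decision
    err = target - pred
    w += lr * err * x                            # nobody wrote a braking rule;
    b += lr * err                                # it emerges in the weights

print("rule-based:", rule_based_action(True), "/", rule_based_action(False))
print("learned rule: brake when", w, "* obstacle +", b, "> 0.5")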

To the second, yea, I agree, but I don't see that as an issue, or at least it's not clear that it's an issue yet. If a model is trained to do one task, is it possible to "know" that it is doing the task? Doesn't knowledge of what a task is require the contrast of a different task? But if a machine can drive a car, walk to the store, and go bowling, I feel like it then does, to some degree, have shared knowledge that it can apply as it focuses on any particular one. Driving the car requires some general navigation and representations of the path that are useful for walking around town, for instance.

I believe that while the DeepMind Atari setup doesn't show this explicitly, the performance improvements on certain games when trained on all games show that useful transfer of skills is happening. If you can transfer skill between games, doesn't that imply, to some extent, that the model is able to apply useful strategy learned in one game to a new one, and to do so must "know" what it's doing in each task?
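Here's roughly why shared training can transfer, as a toy sketch: one shared trunk feeding small per-game heads. This mirrors the spirit of multi-game setups, not DeepMind's actual architecture, and all names are made up:

Code:
import numpy as np

rng = np.random.default_rng(0)

shared_trunk = rng.normal(size=(64, 32))           # trained on ALL games
heads = {
    "breakout": rng.normal(size=(32, 4)),          # per-game action layers
    "pong":     rng.normal(size=(32, 3)),
}

def act(game, observation):
    features = np.tanh(observation @ shared_trunk) # shared representation
    scores = features @ heads[game]                # game-specific readout
    return int(np.argmax(scores))

obs = rng.normal(size=64)                          # stand-in for a game frame
print("breakout action:", act("breakout", obs))
print("pong action:",     act("pong", obs))
# Because the trunk is common, improving it on one game can improve play
# on the others -- the "transfer" in question.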

The problem with giving a more interesting example of this, something closer to the kind of human intelligence you describe, is that it's difficult to ground an AI in multiple real-world tasks without it having a robot body and hands. It also takes a long time to train anything involving robots, since you're performing actions in real time rather than in simulated computer time.
 

bostjan

MicroMetal
Contributor
Joined
Dec 7, 2005
Messages
21,509
Reaction score
13,762
Location
St. Johnsbury, VT USA
narad said:
https://blogs.nvidia.com/blog/2016/05/06/self-driving-cars-3/



That pretty much disproves Bostjan's whole rebuttal whining, and demonstrates that a model can learn a complex behavior without being given explicit knowledge of the obstacles in its environment, which is the main point of this 2-3 page back-and-forth (and took all of 5 seconds to Google). You can of course do things with explicit knowledge, but that's not the point. I bring up examples like this as a counterexample to the idea that this intelligent behavior is somehow the programmer's knowledge embedded into the system.

If you don't trust the description, read the paper:
http://images.nvidia.com/content/te...2016/solutions/pdf/end-to-end-dl-using-px.pdf
I think the burden of proof is on the skeptic to find the untruth in a published scientific paper...

You know, you can lead a horse to water...

@narad, read the links you post. Nothing there contradicts what I said. If you simply take to beating a dead horse and insulting me along the way, you might just undermine your own credibility in the argument, and then maybe people won't even bother reading your links.

Incidentally, I saw this article the other day and thought of this thread: https://techcrunch.com/2017/07/25/artificial-intelligence-is-not-as-smart-as-you-or-elon-musk-think/

It's neither very technical nor detailed, but it still kind of sums up a lot of the mentality around these sorts of threads.

Getting back to the religious and philosophical implications, storing data on every single atom in the universe would require a computational system larger than the entire universe. Why? Well, say one atom could store all of the quantum states, position, etc. of one atom. There would have to be one atom in the computer's memory banks for every one atom in the universe. Then you would need a very very powerful computer to calculate how those atoms would all interact with each other. The complexity of that computer is discussed here: http://advances.sciencemag.org/content/3/9/e1701758.full

Anyway, to make a computer simulation of the universe, you'd need an infinitely large classical computer, or at least a quantum computer about three times the size of the actual universe. It'd be far more efficient to simply create an actual universe than to create a computer simulation of said universe.
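Back-of-envelope, using the commonly cited ~10^80 atoms in the observable universe (the exact figure is an assumption; only the scaling matters):

Code:
# The memory argument in arithmetic form.
atoms_in_universe = 10**80

# Suppose, generously, one memory atom can record the full state of one
# simulated atom. The memory alone then needs as many atoms as it simulates:
memory_atoms = atoms_in_universe

# Any machinery that *updates* that state (computes the interactions) needs
# additional atoms on top of the memory, so however it's built:
simulator_atoms = memory_atoms + 1  # strictly larger
print(simulator_atoms > atoms_in_universe)  # True: it can't fit inside the universe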
 

narad

Progressive metal and politics
Joined
Feb 15, 2009
Messages
16,509
Reaction score
30,217
Location
Tokyo
bostjan said:
@narad, read the links you post. Nothing there contradicts what I said. If you simply take to beating a dead horse and insulting me along the way, you might just undermine your own credibility in the argument, and then maybe people won't even bother reading your links.

Of course it does. You said the Nvidia self-driving car I posted, which learned representations for objects in the environment, was using an image classification library, with 3D objects being mapped to the most similar objects in a library, and you were very adamant about it. While those techniques could be used, they weren't used in the system I posted. The point is that they are not necessary, i.e., that hard-coding these things ("cheating" by inserting the programmer's knowledge) is not required.
 

bostjan

MicroMetal
Contributor
Joined
Dec 7, 2005
Messages
21,509
Reaction score
13,762
Location
St. Johnsbury, VT USA
...but they still do just that. Maybe they don't have to, but they still do. I guess the reasons why they do are arguable to some extent.
 

TedEH

Cromulent
Joined
Jun 8, 2007
Messages
12,712
Reaction score
12,662
Location
Gatineau, Quebec
I think that you understand what you're talking about well enough, but just interpret it in a very different way. I don't think we'll ever agree on it. As far as I'm concerned, it's not "real" intelligence. I'm not convinced of it, and I don't think I could be, not with technology in its current state.

narad said:
To the second, yea, I agree, but I don't see that as an issue
And I think that's the core (or a large part) of our difference of opinion. I can't call a computer intelligent until it understands what it's doing on some level. It doesn't matter to me that it "contains knowledge" on an abstract level, because everything does. Everything that has been engineered contains, uses, and expresses some level of knowledge, but we don't call those things intelligent unless they appear to be doing something that a human would have to think about. Writing something down means that the paper contains knowledge, but the paper is not intelligent. There is knowledge embedded in the engineering of a car, but a car is not intelligent. A speedometer on a bike contains or expresses the knowledge of how the radius of the wheel, time, speed, etc. are related to each other, but we would not describe the bike as being intelligent. Machine learning is the same. It contains and expresses knowledge, but it doesn't understand it any more than the piece of paper understands what you wrote on it.

I'm sure you disagree with all of that, and that's fine. It's not a meaningful disagreement.

At the end of the day.....
I think it's safe to say we're not in a simulation. :lol:
 

narad

Progressive metal and politics
Joined
Feb 15, 2009
Messages
16,509
Reaction score
30,217
Location
Tokyo
bostjan said:
...but they still do just that. Maybe they don't have to, but they still do. I guess the reasons why they do are arguable to some extent.

Well, I mean, the first "goal" I had in participating in this thread was to cast off the misconception that modern AI is just tons of hand-written if-this-then-that statements. I pointed to the DeepMind Atari work first, but moved to self-driving cars to follow TedEH's example, since there are examples of both which use deep reinforcement learning to discover what are essentially (1) the objects of an if-this clause, and (2) the actions to take in that circumstance to optimize the objective. The particular Nvidia car I pointed to didn't use hand-coded rules or image classifiers, etc., and you fired back that they did and that I should read the webpage. So I hope you can understand my annoyance there. Any remaining doubt is, I think, resolved in the paper and posts I linked to.

Regarding why some self-driving cars use libraries for various components, I think it boils down to how the system will ultimately be trained. If you're learning from example, then you're not likely to see many people crossing the street or deer jumping into the road, etc., so there are only a few instances of these to learn a representation from: -1 for deep RL self-driving for commercial purposes. On the flip side, systems that train in simulations get much more experience with all of these rare encounters, so a lot of current self-driving research uses some simulated-world training and some real-world training. The representations learned end-to-end tend to be much more robust to bad weather conditions, missing road markings, etc., so +1 for deep RL when you have lots of data. When I say self-driving cars aren't using libraries or hand-coding, I'm referring to these most cutting-edge systems.

It's also good to code in some failsafes (if "person in front of you" -> "stop"), so I can imagine that being in there as a practical real-world concern; it's neither here nor there wrt the larger thread topic, but it matters for company liability. The primary driving behavior, however, is done without this.
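Something like this, as a hypothetical sketch (all names are made up, not from any real system): the learned policy does the primary driving, and a tiny hand-coded rule can veto it.

Code:
# Hypothetical failsafe wrapper around a learned driving policy.
def safe_drive_step(learned_policy, sensors):
    if sensors.get("person_ahead", False):       # hard-coded failsafe
        return {"throttle": 0.0, "brake": 1.0}   # always stop
    return learned_policy(sensors)               # otherwise, the network drives

# toy stand-in for the learned policy: steer by a (pretend) learned angle
policy = lambda s: {"throttle": 0.3, "brake": 0.0, "steer": s.get("angle", 0.0)}

print(safe_drive_step(policy, {"angle": -7.5}))         # network in control
print(safe_drive_step(policy, {"person_ahead": True}))  # override fires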
 

narad

Progressive metal and politics
Joined
Feb 15, 2009
Messages
16,509
Reaction score
30,217
Location
Tokyo
TedEH said:
I think that you understand what you're talking about well enough, but just interpret it in a very different way. I don't think we'll ever agree on it.

Yea, I'm happy to disagree on a philosophical point, just not a "how it works" sort of point.

TedEH said:
A speedometer on a bike contains or expresses the knowledge of how the radius of the wheel, time, speed, etc. are related to each other, but we would not describe the bike as being intelligent. Machine learning is the same. It contains and expresses knowledge, but it doesn't understand it any more than the piece of paper understands what you wrote on it.

I still don't get the speedometer example. I mean, I understand how that contains knowledge, but say we have a camera on a dashboard, and we have lots of training data consisting of pairs of (5-sec video clip, speed label). I train a model, a black box which I provide with essentially no information pertaining to the task, to predict from changes in pixels what the speed of the car is. Now the model has learned the knowledge that we otherwise had to put into the speedometer. If you keep broadening the definition of the task, the model learns more and more of the knowledge that a programmer/engineer would have had to define in a previous method.
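As a toy version of that thought experiment, with synthetic data standing in for real video: fit speed from a crude "pixel change" feature, and the model recovers the relation an engineer would otherwise have designed in. Everything here is illustrative:

Code:
import numpy as np

rng = np.random.default_rng(1)

# Fake training set: mean pixel change per frame roughly scales with speed.
speeds = rng.uniform(0, 120, size=500)                     # ground-truth km/h labels
pixel_change = 0.8 * speeds + rng.normal(0, 3, size=500)   # crude "video" feature

# Fit speed = w * pixel_change + b by least squares -- the model "discovers"
# the relation without being told anything about wheels or radii.
A = np.stack([pixel_change, np.ones_like(pixel_change)], axis=1)
(w, b), *_ = np.linalg.lstsq(A, speeds, rcond=None)

print(f"learned mapping: speed ~ {w:.3f} * pixel_change + {b:.2f}")
print("predicted speed for change=40:", w * 40 + b)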

But anyway, probably not important we agree on that either after a few pages. I just still don't get it though.
 

Explorer

He seldomly knows...
Joined
May 23, 2009
Messages
6,620
Reaction score
1,161
Location
Formerly from Cucaramacatacatirimilcote...
I don't think anyone has produced a true synthetic intelligence, as opposed to the numerous simulated intelligence systems being given as examples.

It's always interesting to see discussions wherein the latter is being proposed as an example of the former.
 

TedEH

Cromulent
Joined
Jun 8, 2007
Messages
12,712
Reaction score
12,662
Location
Gatineau, Quebec
narad said:
I still don't get the speedometer example.

I just meant it as an example of something that makes a system appear to "know" something. A speedometer doesn't do anything more than react to a voltage, and that voltage could be anything. It just happens to be attached to more engineered stuff that supplies it with a voltage that (hopefully) correlates to a good approximation of the speed you're going. But it might not, and the speedometer doesn't know any different. It's an example of where the "knowledge" in the system is just a clever design expressed by an engineer.
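The whole speedometer, in one hypothetical line of code. The calibration constant is the engineer's knowledge; the gauge "knows" nothing and will happily display a garbage voltage as speed:

Code:
def needle_speed(voltage, volts_per_kmh=0.05):  # calibration chosen by a person
    return voltage / volts_per_kmh

print(needle_speed(3.0))   # 60.0 km/h -- if the wiring is what we assumed
print(needle_speed(4.5))   # 90.0 km/h -- or nonsense, if it isn't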

narad said:
a black box which I provide with essentially no information pertaining to the task

I get that there are some machine learning tasks that have been done without providing the machine with very much information ahead of time, but I very much doubt this is the case for driving. I would be very surprised if there weren't a bunch of image pre-processing going on outside of the machine learning, and a ton of pre-selected parameters that the machine is specifically looking for to help drive the simulation (pun slightly intended).

Explorer said:
I don't think anyone has produced a true synthetic intelligence, as opposed to the numerous simulated intelligence systems being given as examples.

I think this is more or less what I'm saying, just worded a bit better.
 

narad

Progressive metal and politics
Joined
Feb 15, 2009
Messages
16,509
Reaction score
30,217
Location
Tokyo
TedEH said:
I get that there are some machine learning tasks that have been done without providing the machine with very much information ahead of time, but I very much doubt this is the case for driving. I would be very surprised if there weren't a bunch of image pre-processing going on outside of the machine learning, and a ton of pre-selected parameters that the machine is specifically looking for to help drive the simulation (pun slightly intended).

I don't mean to beat a dead horse, but everything's in the paper. I mean, the whole point of the paper wrt existing work is "Hey, we can do this without pre-training individual modules for object detection / planning."
 

bostjan

MicroMetal
Contributor
Joined
Dec 7, 2005
Messages
21,509
Reaction score
13,762
Location
St. Johnsbury, VT USA
narad said:
I don't mean to beat a dead horse, but everything's in the paper. I mean, the whole point of the paper wrt existing work is "Hey, we can do this without pre-training individual modules for object detection / planning."

paper said:
With minimum training data from humans the system learns to drive...

"With minimum training" =/= "without pre-training."
Steering module =/= all modules
Lane detection =/= object detection and planning

I hate to make conjectures about your thought process, but it seems very clear at this point that you are misunderstanding some rather specific things as much more general things, or taking statements about "we use less of this and more advanced that" as "we didn't use this and made huge leaps and bounds with that." That's where I, personally, keep getting hung up and keep coming back into this thread.
 

narad

Progressive metal and politics
Joined
Feb 15, 2009
Messages
16,509
Reaction score
30,217
Location
Tokyo
bostjan said:
Not everything is in the paper. Enough is in the paper to explain the gist of the process, but a lot of implementation details are left out.

Well, literally not all model code goes into a paper, but you can't leave out details that contradict the story of the paper. That would be dishonest to a degree that, I think, you should actually provide evidence for rather than assuming.
 

bostjan

MicroMetal
Contributor
Joined
Dec 7, 2005
Messages
21,509
Reaction score
13,762
Location
St. Johnsbury, VT USA
Another quote from the paper:

paper said:
The normalizer is hard-coded and is not adjusted in the learning process.
Page 4, under network architecture.

This was the paper you posted to prove that there was no hard-coding, particularly in the image recognition system.

QED.
 

narad

Progressive metal and politics
Joined
Feb 15, 2009
Messages
16,509
Reaction score
30,217
Location
Tokyo
"With minimum training" =/= "without pre-training."
Steering module =/= all modules
Lane detection =/= object detection and planning

I hate to make conjectures about your thought process, but it seems very clear at this point that you are misunderstanding some rather specific things as much more general things, or taking statements about "we use less of this and more advanced that" as "we didn't use this and made huge leaps and bounds with that." That's where I, personally, keep getting hung up and keep coming back into this thread.

Well, it's the driving down the road that is the difficult part. Google Maps and GPS can tell you how to get from A to B, so I don't think of that as being relevant. What's cool about this paper is that the model learns a representation of the road without being told explicitly what the road is. And that's due to the objective function. If you have some scenario where all sorts of objects hop onto the road and avoiding them is necessary to minimize the objective, then the network would learn a representation of those objects as well, without being told what they are or what shape they are, etc. This is what you see in the Atari games.
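For illustration, the objective can look as simple as this sketch. Nobody writes "avoid the deer" anywhere; obstacles only enter through the collision penalty. All names and numbers are hypothetical:

Code:
# A sketch of the objective-function point: the reward just penalizes
# collisions and leaving the road, so any representation that helps avoid
# them gets learned, without anyone defining what an "obstacle" is.
def reward(state):
    r = 1.0                                # staying on the road earns reward
    if state.get("collision", False):
        r -= 100.0                         # obstacles only enter via this penalty
    if state.get("off_road", False):
        r -= 10.0
    return r

# During training, the policy maximizes the sum of these rewards; avoidance
# behavior falls out of the -100, it is never written as a rule.
print(reward({"collision": False}))        # 1.0
print(reward({"collision": True}))         # -99.0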

bostjan said:
Another quote from the paper:

paper said:
The normalizer is hard-coded and is not adjusted in the learning process.

Page 4, under network architecture.

This was the paper you posted to prove that there was no hard-coding, particularly in the image recognition system.

QED.

I meant hard-coding of if-elses or object detection. If you think image normalization is an intelligent process (or something that isn't done end-to-end in hundreds of other papers), then you're welcome to that view.
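For reference, a hard-coded normalizer in most vision pipelines amounts to a fixed pixel rescaling like the sketch below. This is not Nvidia's exact code; the 66x200 frame size is what I recall their paper using, so treat it as an assumption:

Code:
import numpy as np

def normalize(image_uint8):
    # map pixel values from [0, 255] to roughly [-1, 1]; nothing here is
    # adjusted during training, hence "not adjusted in the learning process"
    return image_uint8.astype(np.float32) / 127.5 - 1.0

frame = np.random.default_rng(2).integers(0, 256, size=(66, 200, 3), dtype=np.uint8)
norm = normalize(frame)
print(norm.min() >= -1.0, norm.max() <= 1.0)  # True True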
 

bostjan

MicroMetal
Contributor
Joined
Dec 7, 2005
Messages
21,509
Reaction score
13,762
Location
St. Johnsbury, VT USA
Image normalization is an intelligent process, if it is to be done automatically under arbitrary lighting conditions.

Also, I already posted the link to Nvidia's site which proved that they used a library for object identification.

Driving down an empty road is not a trivial task, but identifying where the lanes of the road are is nowhere near as complex as actual neighbourhood driving: respecting other drivers' space, avoiding squirrels and children, not running over sharp objects, and navigating from point A to point B, all at the same time. I mean, frankly, it's not even apples and oranges; it's apples and earthquakes.
 

narad

Progressive metal and politics
Joined
Feb 15, 2009
Messages
16,509
Reaction score
30,217
Location
Tokyo
Your library link was not relevant to this system, at all.

Sure, they are different tasks, but here is a car that can navigate the road without explicit training for that (which is actually a huge problem for getting autonomous cars out on the road), and other in-simulation car systems navigate around other cars without being given explicit knowledge of them (beyond negative rewards for crashing). You have to look at the state of the field and the individual contributions of different research groups, and from that vantage point decide whether you think this is possible end-to-end. I don't care what your stance is at that point, but I think it's a bit bizarre to take firm stances while not reading the literature.
 