Natural Born Robots

Brooks introduces Nouvelle AI, also called fundamentalist AI. He compares and contrasts his Nouvelle AI with the then-mainstream symbol-system approach, also called classical AI. Read the complete article that Manuel and Sam are discussing:

https://msujaws.wordpress.com/2010/11/18/elephants-dont-play-chess/

In this post the dialogue is realised as an interaction between virtual characters; for more information, please check the page “Virtual characters”.

Manuel

I was so hoping the hype were true …

Today 11:34   

Sam

Sorry to disappoint … 🙄😅😅                                                   

Today 11:35  

Manuel

So why isn’t AGI around the corner with these artificial brains?

Today 11:36  

Sam

😅 .. Again.. they’re not really artificial brains                          

Today 11:37

Sam

Neural networks make a lot of fairly simplistic assumptions about the brain being an information processor that just works with ones and zeroes

Today 11:37

Manuel

OK, fair enough, the model is simplistic. But it works! Right?

Today 11:38   

Sam

Yes it does, but we are training artificial neural networks in ways that don’t really match how brains learn, or how children learn

Today 11:39  

Manuel

How so? 😞

Today 11:39  

Sam

Apart from the brain having almost 100 billion (with a b) neurons and the largest simulations barely cracking 10 million (with an m)?

Today 11:40  

Manuel

OK, ok, sure. But isn’t this about how they learn?

Today 11:43   

Sam

Yes, but the raw numbers are relevant too. Brains are fast and efficient because they are massively parallel: lots of problem solving all at once all over the place

Today 11:45   

Manuel

So the size of the brain has a lot to do with its speed?

Today 11:45   

Sam

Indeed, we don’t really know yet how and where all the computations take place

Today 11:46

Manuel

Aha, so because we don’t really know how human information processing works, we can’t really compare the two  🤓

Today 11:46  

Sam

Not just that, but also what kind of information is relevant to human processing

Today 11:46  

Manuel

Meaning? 😕

Today 11:46   

Sam

Well, humans make a lot of assumptions in the background, use lots of rules of thumb, apparently, to simplify their problems

Today 11:46   

Sam

We know what it’s like to just live in a complex, layered world full of meaning, more or less right from the start

Today 11:47   

Manuel

I guess, sure. The world isn’t a jumble of chaos that we interpret, in a sense we generally “just know” what stuff is. Is that it?

Today 11:47   

Sam

Indeed, and it is not at all easy to figure out how to have computers do the same. Simulating the entire world isn’t doable, but picking out what is relevant is also really hard

Today 11:48   

Manuel

I think I get it. So “Artificial General Intelligence” has more to do with general background knowledge than with general problem solving?

Today 11:48

Sam

Yes, it would mean that a robot in the real world would constantly be solving an immense number of problems that we don’t really even notice

Today 11:49   

Sam

It doesn’t know what is relevant and irrelevant, because …

Today 11:49   

Manuel

… the world isn’t a chessboard! I get it … ♟️♟️♟️

Today 11:50

Sam

But then again, maybe that’s just what we need: a robot that is born and evolves in the same natural world we do. How could it come to understand it otherwise?

Today 11:51   

Manuel

🤖🤖 Robots!

Today 11:52

Manuel

I wanted to talk more about robots …

Today 11:52

Sam

Hey, I love talking about robots! Let’s do that next time       

Today 11:53   

Manuel

Thanks for the chat! 😄😄

Today 11:53


Voodoo Neuronics

NEURAL NETS MAKE CHICAGO BLUES SEE RED

The Chicago police force is using an artificial intelligence program to anticipate misconduct among its officers. Read the full article written by Edward Helmore

https://www.independent.co.uk/life-style/neural-nets-make-chicago-blues-see-red-1327853.html

In this post the dialogue is realised as an interaction between virtual characters; for more information, please check the page “Virtual characters”.

Sam

The approach of discovering the rules of all thinking and putting them into a computer failed by and large. Instead, programmers started looking in more detail at how human brains work

Today 11:34   

Manuel

Sure, that makes a lot of sense!

Today 11:34   

Sam

In the beginning they simply modeled neurons as input-output machines, firing and not firing as ones and zeroes

Today 11:35  

Manuel

That’s … simplistic? Right?

Today 11:36  

Sam

A bit, but again it did have serious early successes. A computational model of a single neuron was quite useful

Today 11:37

Sam

An artificial neuron simulated in a computer could actually already solve a lot of problems, even though quite simple ones

Today 11:37

Manuel

So how did that work?

Today 11:38   

Sam

Well, it would have a series of inputs, filter them by recognizing a pattern, and then provide as output whether the input belonged to a certain category or not

Today 11:39  

Manuel

Not sure I get this … 😞😞

Today 11:39  

Sam

Sorry … Imagine you put a lot of points in a plane. Such a single neuron can classify them by drawing a single straight line among them

Today 11:40  

Manuel

Ah, ok. So it can tell whether they are above or below some threshold

Today 11:43   
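(A concrete aside: what Sam describes is essentially a perceptron. The short Python sketch below, with weights, threshold, and points made up purely for illustration, shows a single artificial neuron checking a weighted sum against a threshold, which is all that “drawing one straight line through the plane” amounts to.)

```python
# A single artificial neuron as Sam describes it: weigh the inputs,
# compare the sum to a threshold, output 1 ("fires") or 0 ("doesn't fire").
# Weights, threshold, and points are made up purely for illustration.

def neuron(x, y, w1=1.0, w2=-1.0, threshold=0.0):
    """Fire (1) if the weighted sum of the inputs crosses the threshold, else 0."""
    return 1 if (w1 * x + w2 * y) > threshold else 0

# With these weights the decision boundary is the line y = x:
# points below the line come out as 1, points above it as 0.
points = [(2, 5), (4, 1), (0, 3), (6, 2)]
for x, y in points:
    print((x, y), "->", neuron(x, y))
```

Chaining many such units together, and learning the weights instead of fixing them by hand, is exactly where Sam goes next.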

Sam

Exactly! Now that is indeed still quite simple, but now imagine combining a lot of those together …. ☺️☺️

Today 11:45   

Manuel

Oh, wow, of course! So you can draw a lot of lines! Meaning finding a lot of patterns?

Today 11:45   

Sam

Yep, very complex pattern detection, but …                                          

Today 11:46

Manuel

… it sounded too good to be true, right? 😀

Today 11:46  

Sam

Oh, no! Not at all! It is just that if you string together artificial neurons in a network, it becomes pretty hard to tell how it works exactly

Today 11:46  

Manuel

Huh? You build a thing and don’t know how it works?

Today 11:46   

Sam

That was a big criticism. But if you think about it, it seems obvious: we don’t yet fully understand how brains work, so we build an artificial brain …

Today 11:47   

Manuel

… ok, but how can you build an artificial brain if you don’t know how it works?

Today 11:47   

Sam

Well, we use the very simple artificial neuron model, which we do understand, and string a lot of them together. But you can’t program that like an ordinary computer

Today 11:48   

Manuel

So how do you do that then?

Today 11:48

Sam

We make it learn. We don’t have a general rule for solving all problems, but we have a general rule for learning how to solve problems

Today 11:49   

Sam

The information, the knowledge of how to solve a problem, is all distributed through the network.

Today 11:49   

Sam

There is no step-by-step algorithm   🙄                                                 

Today 11:50  
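(Another concrete aside: the classic perceptron update rule is a minimal example of the kind of “general rule for learning” Sam means. The toy task below, learning logical AND, and the learning rate are illustrative assumptions; the point is that whatever the network ends up knowing lives in its weights, not in a hand-written step-by-step recipe.)

```python
# A minimal sketch of "a general rule for learning": the perceptron update.
# Task (logical AND) and learning rate are illustrative assumptions.

examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w = [0.0, 0.0]   # the weights: this is where the learned "knowledge" ends up
b = 0.0          # bias, acting like a movable threshold
lr = 0.1         # learning rate

for _ in range(20):                      # sweep over the examples a few times
    for (x1, x2), target in examples:
        out = 1 if (w[0] * x1 + w[1] * x2 + b) > 0 else 0
        error = target - out             # how wrong was the neuron?
        w[0] += lr * error * x1          # nudge the weights (and bias)
        w[1] += lr * error * x2          # toward the right answer
        b += lr * error

print("learned weights:", w, "bias:", b)
for (x1, x2), _ in examples:
    print((x1, x2), "->", 1 if (w[0] * x1 + w[1] * x2 + b) > 0 else 0)
```

No step of this loop says how to compute AND; the behaviour emerges from repeated small corrections, which is also part of why it gets hard to explain what a large network of such units is doing.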

Manuel

OK, so it works in a very different way from the earlier approach where we tried to find a universal recipe

Today 11:50

Sam

Right. The problem now is that we can’t always explain what it is doing or how it works

Today 11:51   

Manuel

… so it is just like the brain …  🧠

Today 11:52

Sam

Ha! That’s right! The model is as mysterious as the thing itself           

Today 11:51   

Sam

https://en.wikipedia.org/wiki/Bonini%27s_paradox                            

Today 11:51   

Manuel

So when we talk about AI nowadays, is it all neural networks?

Today 11:52

Sam

Mostly yes. Things like Machine Learning work with this approach. We train a neural network to recognize patterns in a large dataset. And then do something “intelligent” with it                   

Today 11:52  

Manuel

So is AGI more realistic with this newer approach?

Today 11:53

Sam

No, not really.                                                                                             

Today 11:53  

Manuel

… oh come on! … 😆

Today 11:53


Is thought just a program?

Will Artificial Intelligence Ever Live Up to Its Hype?

Replication problems plague the field of AI, and the goal of general intelligence remains as elusive as ever

https://www.scientificamerican.com/article/will-artificial-intelligence-ever-live-up-to-its-hype/

In this post the dialogue is realised as an interaction between virtual characters; for more information, please check the page “Virtual characters”.

Manuel

What? Why? 😅😅

Today 11:32  

Sam

Well, if you want to build an artificial intelligence, you better figure out how our “natural” intelligence works first

Today 11:33  

Sam

What rules do we actually use to solve problems? Play chess? Do math? Solve puzzles? Have dinner at a restaurant? What script do we follow?

Today 11:34   

Manuel

OK, it does seem like a good idea to figure that out first 😁😁

Today 11:34   

Sam

If intelligence is the ability to solve problems, in general, then we can just look at how humans solve problems, and put that in an algorithm

Today 11:35  

Manuel

I’m guessing this is where icebergs make a sudden appearance…

Today 11:36  

Sam

Absolutely: even in the case of chess or math, humans are far from perfectly rational. We make mistakes, have to backtrack, react differently in similar situations, etc

Today 11:37

Sam

… and that is without taking into account that the world is not a chessboard

Today 11:37

Manuel

So again … …  if they knew all that early on: where did the optimism come from?

Today 11:38   

Sam

They had enough convincing successes to keep the program funded: they thought that problem solving in humans and machines could be described by the same algorithm

Today 11:39  

Sam

Humans and computers get input, apply an algorithm, and yield the desired output. Both do the same thing: information processing

Today 11:40  

Manuel

If that were true, then indeed AGI sounds like something right around the corner!

Today 11:43   

Sam

Yeah, it was a very convincing paradigm: thinking, problem solving, etc. is just symbol manipulation. And that’s something a computer can do too 😅🤣

Today 11:45   

Manuel

But a computer isn’t automatically a “general intelligence”, so why was this so convincing? And especially at the time … 😆

Today 11:45   

Sam

It was enough that it could in principle do the same things a human could, even if we didn’t have the algorithm yet

Today 11:46

Sam

If you could boil down intelligent human behaviour to rules, and then translate those to a computer, you’d be done!

Today 11:46

Manuel

So how successful were they in the end?

Today 11:46   

Sam

The paradigm was alive and well at least until the ‘80s

Today 11:47   

Manuel

Wow! And nobody opposed this approach? 😕

Today 11:47   

Sam

Obviously. Some accused them of being far too optimistic, others proposed competing paradigms

Today 11:48   

Manuel

So what was the main criticism?

Today 11:48

Sam

Well, some of the proponents made really outlandish claims. Herbert Simon claimed as early as 1957 that there were “machines that think, that learn and that create.”

Today 11:49   

Manuel

That would seem a bit premature …

Today 11:50

Sam

Yep, so they got accused of selling hype, but ultimately this approach was superseded by something completely different

Today 11:51   

Manuel

OK! Now I’m curious … what happened?  😲😲😲

Today 11:52


The world is not a chessboard

Many people believe AI (Artificial Intelligence research) started quite recently, like five years ago. But in fact the field has already had 70 years of fascinating history.

It all began in the nineteen-fifties, when the potential power of information technology was becoming clear, at least to a small group of far-sighted thinkers including Alan Turing and Norbert Wiener.

https://www.ai4eu.eu/news/ai-crossroads

In this post the dialogue is realised as an interaction between virtual characters; for more information, please check the page “Virtual characters”.

Manuel

Hi, I’m back again! That was a pretty long and detailed article, but now I’m more confused than before … 😅😅

Today 11:32  

Sam

OK, how can I help?                                                                                

Today 11:34   

Manuel

Well, it seems like we’ve been thinking that human level intelligence or AGI would be around the corner for decades …

Today 11:34   

Sam

Sure, people have made a lot of very optimistic predictions in the past                                                       

Today 11:35  

Manuel

I saw quotes that people in the ’50s thought it could be done in less than a year!

Today 11:36  

Sam

Well, there was a famous conference in 1956 where some of the most prominent researchers in the area at the time wanted to get together to make a breakthrough …

Today 11:37

Sam

… but I don’t think they seriously thought at the time they could program an AGI from scratch

Today 11:37

Manuel

But where did all the optimism come from, if it wasn’t all hype? 

Today 11:38   

Sam

It certainly wasn’t “all hype”. There had been fabulous breakthroughs just before that …

Today 11:40  

Sam

We went from the idea of a universal computing machine with Turing in the ‘30s (https://en.wikipedia.org/wiki/Turing_machine) to actual programmable universal computers in the ‘40s (https://en.wikipedia.org/wiki/Z3_(computer))

Today 11:40  

Sam

And during the early ‘50s people had written a string of working chess and checkers programmes

Today 11:40  

Manuel

So? That doesn’t sound very radical to me … isn’t that like the hammers you told me about before?

Today 11:43   

Sam

Sure, but the breakthrough was that it turned out to be possible at all: they were the very first of their kind 🤩🤩

Today 11:45   

Manuel

OK, yes, I guess I’m too used to current technology to take a chess computer seriously as a breakthrough in AI …

Today 11:45   

Manuel

I get that, and you’re right, we do things differently now, but at the time that was quite radical

Today 11:46   

Sam

People were optimistic because they thought that they could generalize from chess to everything else: all thought would be like a program

Today 11:46   

Manuel

That seems like a gigantic unfounded assumption …

Today 11:46

Sam

Yep, that ship ran into a lot of icebergs, but the assumption wasn’t entirely bogus

Today 11:47   

Manuel

How so?  😆

Today 11:48

Sam

Well, you can think of playing chess as solving a problem: how do I checkmate the king? And an algorithm can take you there: from a set board to victory

Today 11:49   

Sam

But general intelligence could just be like that: an algorithm to solve any problem

Today 11:49   
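(A concrete aside on why a chessboard-like world is so tempting: when the rules are fixed and fully known, “solving” the game really can be an algorithm that searches every line of play. The Python sketch below uses a deliberately tiny stand-in for chess, a made-up game where players take 1 to 3 sticks and whoever takes the last stick wins.)

```python
# Brute-force game-tree search for a toy game with fixed, fully known rules:
# players alternately take 1-3 sticks; whoever takes the last stick wins.
# The game is a made-up stand-in for chess, which is far too big to search
# exhaustively like this.

def best_move(sticks):
    """Return (move, wins) for the player to move, by trying every line of play."""
    for move in (1, 2, 3):
        if move > sticks:
            continue
        if move == sticks:                    # taking the last stick wins outright
            return move, True
        _, opponent_wins = best_move(sticks - move)
        if not opponent_wins:                 # found a move that leaves the opponent lost
            return move, True
    return 1, False                           # every move loses against perfect play

for n in range(1, 10):
    move, wins = best_move(n)
    print(f"{n} sticks: take {move} ({'winning' if wins else 'losing'} position)")
```

Nothing like this fixed, enumerable rulebook exists for the everyday world, which is exactly the iceberg Sam describes next.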

Manuel

Fair enough. So what exactly went wrong?

Today 11:50

Sam

Ha! The world is not a chessboard! There’s no fixed ruleset of legal moves that you can apply, simulate, run through, and analyze

Today 11:51   

Manuel

How did they try to tackle that then?

Today 11:52

Sam

By turning from engineering to psychology …   😎😎                       

Today 11:52   


Putting the “Artificial” in “Intelligence”

Artificial general intelligence: Are we close, and does it even make sense to try?

“A machine that could think like a person has been the guiding vision of AI research since the earliest days—and remains its most divisive idea…”

https://www-technologyreview-com.cdn.ampproject.org/c/s/www.technologyreview.com/2020/10/15/1010461/artificial-general-intelligence-robots-ai-agi-deepmind-google-openai/amp/

In this post the dialogue is realised as an interaction between virtual characters; for more information, please check the page “Virtual characters”.

Manuel

Hi Sam, could I ask you a couple of questions on AI … and robots?

Today 11:32  

Sam

👋  Sure thing!                                                                                

Today 11:34   

Manuel

Thanks! I’ve read this article and I’m not sure I quite get the point. What is “AGI” exactly?

Today 11:34   

Sam

Ah, yes, that’s a tricky one …  😎😎                                                        

Today 11:35  

Sam

So basically “AI” just means “Artificial Intelligence”, and that’s hard enough to define, but “AGI” means “Artificial General Intelligence”

Today 11:36   

Manuel

Ok, and the difference is … ? 😅

Today 11:36  

Sam

Compare it to a toolbox: you have nails, you need a hammer. You need to find patterns in data, you need an AI  🤖

Today 11:37

Sam

As Marvin Minsky once said, we could call a program “intelligent” if it does things in a way that we would call “intelligent” if a human did them

Today 11:37

Manuel

So a machine is smart if it does things like a human? 

Today 11:38   

Sam

Defining “intelligence” in general is really hard! AI systems are really good at things humans are really bad at, and really bad at things that are really easy for humans -> https://xkcd.com/1425/ (Moravec’s Paradox)

Today 11:40  

Manuel

So can we make an AI as smart as a human or no? 😆😆

Today 11:43   

Sam

Yes and no. We can make an AI do some tasks as well or faster than a human, but not in the same way a human does them, just like a car isn’t an artificial horse

Today 11:45   

Manuel

Ah, that clears it up. So a hammer is better than my fist at hammering in nails, but it is not a better hand overall. Right?

Today 11:45   

Sam

Yes, in a sense. What “AGI” wants to be is an AI that is as intelligent as a human overall, not just at a specific task. But it still wouldn’t necessarily do things in the same way a human does

Today 11:46

Manuel

So we have specialized AI systems that are good at one thing, but not a general AI that is good at everything?

Today 11:46   

Sam

Exactly. BUT!  😁😁😁                                                                              

Today 11:46   

Manuel

… there’s always a but … 😆

Today 11:46   

Sam

Well, there is one approach that tries to imitate how humans learn to do things: artificial neural networks that imitate the structure of the brain and can learn, instead of being programmed.

Today 11:47   

Manuel

They’re building a whole artificial brain?  🧠🤖

Today 11:48

Sam

Not really no, these neural nets are generally simulated on ordinary computers 😊😊😊😊                                    

Today 11:49   

Sam

In this way, AIs have achieved human-level skills in games  https://www.theatlantic.com/technology/archive/2017/10/alphago-zero-the-ai-that-taught-itself-go/543450/

Today 11:49   

Sam

and have learned to do more complex perception and motion  https://www.bostondynamics.com/atlas

 

Today 11:49   

Manuel

Doesn’t this mean it is just “intelligent”, full stop? What makes it “artificial”? “Artificial” makes it sound like it’s fake …

Today 11:50

Sam

In a sense you are right, “artificial” is commonly used to indicate that something is not real, like artificial flavoring or coloring in food. A computer solves problems in a different way than a human, but if it gets the job done, there’s nothing “fake” about it 

Today 11:51   

Manuel

OK, I’m going to read some stuff and get back to you.

Today 11:52

Sam

Sure, have a nice weekend!    🖖🖖                                                       

Today 11:52   
