Canada Files | Geoffrey Hinton
5/20/2025 | 28m | Video has Closed Captions
Godfather of Artificial Intelligence, Geoffrey Hinton.
Described as the godfather of Artificial Intelligence, Geoffrey Hinton won the Nobel Prize for his discoveries that helped computers learn in the same way as the human brain. Now he is warning of the dangers of AI and its potential threat to humanity.
Canada Files is a local public television program presented by BTPM PBS
♪ >> Welcome to Canada Files.
I'm Valerie Pringle.
My guest today is Geoffrey Hinton who is described as the godfather of artificial intelligence.
He is the 2024 Nobel Laureate in Physics for his discoveries that helped computers learn in the way the human brain does.
He is a professor emeritus at the University of Toronto.
And since leaving his job at Google, has been warning that AI poses a threat to humanity.
>> Valerie: Geoffrey, hello.
>> Geoffrey: Hello.
>> You've been sounding the alarm about AI for a couple of years now.
Do you feel it's a threat to humanity?
>> Yes.
Umm...most of the people I know who do AI believe that it's going to get smarter than us.
Some of the people think that's going to be fine.
Because we'll be making...sure it doesn't replace us.
Others are very worried.
And I'm one of those.
I feel we just don't know what's going to happen when we have things smarter than us.
>> And unsure about the ability to control it.
>> I think at present, we're like someone who's raising a very cute tiger cub.
>> Well, one of the things you've been passionate about too, is trying to inspire students and mentoring them.
To spend their time looking into the safety of AI.
>> Yes, I think we're at a sort of bifurcation point in history.
Either we figure out how to keep this stuff safe or we don't.
We don't know if it can be made safe.
But we should put a huge effort right now into figuring out if it can be made safe.
>> So many people say, well, you know, it's just a tool.
Others say it's an agent.
Others say it's the next revolution in humanity.
>> Yes, so most of the people who say it's just a tool, don't have a kind of model of how people work.
One thing to bear in mind is these large language models, they certainly seem like they understand what you're saying.
And can answer quite tricky questions.
Which, yes, they really do understand.
They...have an evolutionary history.
They came from a model in 1985-- a tiny language model.
And that tiny language model wasn't designed as a chatbot.
It was designed as a theory of how people understand the meanings of words.
So the best model we have of how people understand things is they do it the same way as these chatbots.
They're not understanding things in a completely different way from us.
They're understanding things in the same way we understand them.
>> In what sphere does AI worry you most?
>> Actually there's a lot of spheres in which it worries me.
We need to distinguish between short-term and long-term risks.
So what I've been talking about in public mainly is the long-term risk of AI eventually taking control and taking over.
I've been talking about that because many people think that's just science fiction and it's not.
It's something we seriously need to worry about.
But there are many far more immediate risks we also need to worry about like lethal autonomous weapons.
So for example, the European AI regulations have a clause in them that says, "None of this applies to military use of AI."
So we're going to get very nasty lethal autonomous weapons.
After they've done terrible things, maybe we'll get something like the Geneva Conventions.
As they controlled chemical weapons.
Because those conventions work.
Britain is not using chemical weapons in the Ukraine.
Um, so that's one risk.
There's the risk of joblessness, of course.
And that will cause a bigger gap between rich and poor.
Because the increased productivity isn't going to go to the poor people who get fired.
It's going to go to the rich people who fire them.
And the bigger that gap, the nastier society gets.
Make it big enough and you get very angry poor people who can be prey to populism.
We have them in both the States and Canada.
That's another risk.
Then there's the risk of corrupting democracy by fake news.
So it turns out--I just saw a paper where someone looked at how good you are at persuading people.
It compared a person with GPT-4, where GPT-4 had access to your Facebook page.
It turns out GPT-4 with access to your Facebook page is much better at persuading you of things, than a person.
So it's very scary.
And presumably that's going on now.
>> You were working at Google, you know, they bought your company--involved in how it was being developed.
Do you remember a moment, or was there one?
Is that too cinematic?
Where you just started to, you know, think about the Sorcerer's Apprentice, or something.
And thought I have to quit, to leave.
I have to talk about this.
I have to warn people.
>> No, it wasn't like that.
I got along very well with people at Google.
And Google was very responsible.
They developed the first big chatbots.
And they didn't release them to the public.
Because they were worried about their reputation.
They acted very responsibly.
And I left Google on good terms.
Um, I mainly left because I was 75.
And I'd decided I would retire at 75.
The precise timing was designed so I could talk about the risks.
>> Do you trust the big tech companies to deal responsibly with AI?
>> Not entirely.
I trust some more than others.
Um, I don't trust Meta.
I'm quite suspicious about OpenAI now.
So in general, I don't trust big companies to do the right thing when it conflicts with profits.
>> You've been really sounding the alarm--you've received a huge amount of attention for what you've been saying.
Do you think people are listening to you?
>> Yes, I think they are.
I'm quite pleased that governments actually paid attention and started doing things.
I don't think they fully appreciate the risks.
I also think they don't have the ability to regulate it.
In the States, for example, to get decent regulations you need collaboration.
Well, you're not going to get that.
>> You were awarded the Nobel for Physics.
Interesting, which is wonderful.
Congratulations.
But for work that you're now warning people about.
How does that sit with you?
>> Um, well, I'm not a physicist.
I used a little bit of physics in the past.
In the early days of neural networks.
That's how they justified it.
I worked with someone called Terry Sejnowski.
We got a very interesting learning algorithm.
It was beautifully simple and depended on... ideas from statistical physics.
Now the big AI systems, at present, don't use these ideas.
They use a different algorithm I worked on, called backpropagation.
Which doesn't have that much to do with physics.
So it's sort of slightly embarrassing that I've been given a Nobel prize for physics.
I often sort of pinch myself to make sure it's not a dream.
And I have very good evidence this is a dream.
Because if you ask what's the chance that a psychologist will win the Nobel Prize in Physics?
Okay, that's maybe one in two million, let's say.
But what's the chance if this was my dream that I will win the Nobel Prize in Physics?
Maybe that's one in two.
So that gives odds of a million to one in favour of it being a dream.
>> Unbelievable.
You've given away your Nobel prize money.
>> Yes, I gave half of it to an organization that provides clean drinking water for Indigenous communities.
I was very upset about what happened to Davis Inlet, for example.
And I think it's crazy that a lot of Indigenous communities, they have to boil the water.
So that's where half of it went.
The other half is more interesting.
So in the 1980s, I worked with Terry Sejnowski on something called Boltzmann machines.
Which did use traditional physics.
We were young and very excited.
And we thought we'd figured out how the brain worked.
Um, I'm still not totally sure we didn't.
And um, he and I had an agreement.
That if one of us got the Nobel prize, and the other didn't-- we assumed back then it would be the Nobel Prize for Physiology or Medicine.
For figuring out how the brain worked.
We would split it because we'd done this work together.
He was actually a student of John Hopfield.
So when it was announced, my first reaction was how's Terry going to feel about this?
I called him up and said, "Look, we had this arrangement.
That we'd split it so where do you want me to send the money?"
He said, "Well, Geoff, you know, I think we should forget that arrangement because it was partly for Boltzmann machines.
But there was lots of other things you did after that that were important for the Nobel committee.
So I don't feel that agreement applies any more."
So we had this problem.
I wanted to give him the money.
And he wouldn't take it.
So in the end, we compromised.
I gave the money to our main conference.
Which is called the NeurIPS Conference.
To create a prize for young researchers in his name.
>> Wonderful.
>> It all started because you wanted to understand the human brain.
>> Yes.
>> And you were driven to understand the human brain.
>> Yes, and I still haven't.
But yes, my aim was to understand how brains learn.
>> Well, it's been pointed out that you were born into science.
You should just explain a little bit of your blood line, your family tree.
>> So my father was an entomologist.
He studied beetles-- mainly beetles.
He was a professor.
And his father was a mining engineer.
He ran a silver mine in Mexico.
But he was also an amateur botanist.
He collected huge numbers of plants in Mexico, that are in Kew Gardens.
And his grandfather, that is my grandfather's grandfather, was someone called George Boole.
Who basically invented mathematical logic.
He showed that you could take propositions and use one and zero as true and false.
Then you could do arithmetic with these ones and zeros.
And doing arithmetic with ones and zeros in the right way was equivalent to doing logical deduction.
>> Which was sort of the basis, I guess, on which so much of computer science is based.
>> Yes, a lot of computer science is now based on something called Boolean algebra.
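Boole's claim--that doing arithmetic with ones and zeros in the right way is equivalent to doing logical deduction--can be checked directly. This is a small illustrative sketch, not from the interview; the function names are invented for the example. It treats AND as multiplication, NOT as 1 - x, and OR as x + y - xy over the values {0, 1}:

```python
# Boole's insight: logic over {0, 1} can be carried out as ordinary
# arithmetic. Multiplication acts as AND, 1 - x as NOT, and
# x + y - x*y as OR.

def AND(x, y): return x * y
def OR(x, y):  return x + y - x * y
def NOT(x):    return 1 - x

# De Morgan's law, NOT(x AND y) == (NOT x) OR (NOT y), holds for
# every assignment of truth values:
for x in (0, 1):
    for y in (0, 1):
        assert NOT(AND(x, y)) == OR(NOT(x), NOT(y))
```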
>> I love the fact that you were introduced once as someone who failed at physics, dropped out of psychology, and then joined a field with no standards at all, AI.
>> Right.
I was insulted because I didn't fail at physics.
I dropped out of physics and I failed at psychology.
And that's far more respectable.
>> And you were...
>> I was quite good at physics.
>> Yeah, you were a carpenter for awhile.
You really took awhile to find what it is and how it was you wanted to do your work.
>> Yes and no.
From when I was at high school-- I had a very smart friend at high school.
Talking to him, I became interested in how the brain worked.
And I was always interested in that.
But um, when I dropped out and became a carpenter, I'd sort of given up on it.
But then I couldn't get rid of the-- it's like, that was the problem I had to solve.
How does the brain work.
And I still haven't solved it.
>> But as you worked along in AI and neural networks, did you think I can solve this one?
I can figure out how the human brain works by mimicking it in a machine.
>> So back in the 70s, when I started doing graduate work, it was fairly new to use computer simulations to test out theories about how the brain works.
And that seemed to be the right methodology.
That there were going to be principles there.
And you could do computer simulations so you could quickly reject bad theories of how the brain works.
I have rejected a very large number of bad theories.
Many of them, mine.
Of how the brain worked by doing repeated simulations.
>> Well, you know, there was a time when AI research was just not funded.
No-one was interested.
It was sort of called the AI winter.
Yet you said, "I was always convinced it wasn't nonsense.
It was completely obvious to me."
>> Yes, what I was convinced wasn't nonsense was understanding how neurons change the strength of connections between them to learn things.
That was the central problem.
We had to solve that problem because that's how the brain must work.
And all these people doing logic and saying we need a special kind of logic to understand human reasoning.
That wasn't the best approach to understanding the brain.
There was this problem.
How did it learn stuff by changing connection strengths?
And curiously, Von Neumann and Turing both believed in that approach.
They both died young, possibly with the help of British Intelligence.
I think the whole issue of the field might have been different if either of them had lived.
They would have done-- they would have made neural networks respectable.
Because you couldn't really accuse them of not understanding logic.
>> Ummm.
But what was critical that you did was helping to find the method that enabled these networks, the neural networks, to learn.
>> Yes and no.
So the method we used... >> I'll defer to the Nobel laureate.
>> That's nice.
The method we use now is called backpropagation.
We showed how you could use backpropagation to learn the meanings of words.
By trying to predict the next word.
The way these language models work, greatly simplified, is you have a string of words.
With each word, you want to associate a whole bunch of features.
These features are kind of neurons that might, or might not, get active.
The activity of the neuron is the feature activity.
There might be features like animate, about the size of a bread box.
A cat, for example, is animate--about the size of a bread box.
Is a predator, it has all those features.
A fridge has a completely different set of features.
So the first thing you do is associate features with the word.
Then you take the words in the context.
The words you've heard up 'til now.
And from their features, you try and predict the features of the next word.
This is how the language model I did in 1985 worked.
And then having predicted the features of the next word, you make guesses about what the next word might be.
Then what you do is, you look and see what the next word actually was.
You say, how could I change the features I assigned to words, or the interactions between those features so I'll make a better guess?
I'll put more probability on the right answer.
And less probability on the wrong answer I just gave.
And you go back and you change the connections between neurons so that you'll make a better bet next time.
And the going back is called backpropagation.
>> So what worries you about backpropagation?
>> Well, it works so well, it's what's behind things like GPT-4.
Which knows 1000s of times more than a person.
And it has about a 100th, about 1%, of the connections you have in your brain.
But it knows 1000s of times more than you do.
Backpropagation works really well for stuffing huge amounts of knowledge into not many connections.
Where not many is a trillion.
But they have a lot of experience.
Because you can make many copies of the same model running on different hardware.
They can look at different bits of the Internet and they can combine what they learn.
So GPT-4 had an immense amount of experience to get all that knowledge in there using backpropagation.
We're just the opposite.
We have 100 times as many connections as GPT-4.
But we have very little time.
We only live for 2 billion seconds.
And that's a tiny amount of experience compared to what GPT-4 had.
We're dealing with the problem of you've got as many connections as you could possibly want.
Maybe not that many but a lot.
But what you're limited by is you don't have much experience.
We're not very good at sharing experiences with each other.
The best way we have of doing it is I produce a sentence.
And you try and change the connection strengths in your brain so you might have said the same thing.
Or you see some other kind of behaviour and you change connection strengths so you might have done that.
That's a very inefficient way of sharing knowledge compared to what these digital things have.
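The comparison above rests on a couple of rough figures, which can be checked with back-of-the-envelope arithmetic. All numbers here are the order-of-magnitude estimates quoted in the conversation, not precise measurements:

```python
# "Not many is a trillion" connections for the model,
# and we have "100 times as many connections" -- so the
# model has about 1% ("about a 100th") of ours.
model_connections = 1_000_000_000_000
brain_connections = 100 * model_connections
print(model_connections / brain_connections)   # 0.01

# "We only live for 2 billion seconds" -- roughly 63 years.
seconds_per_year = 60 * 60 * 24 * 365
lifetime_seconds = 2_000_000_000
print(lifetime_seconds / seconds_per_year)
```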
>> Right.
And we die and all those connections go.
>> You die and it's gone.
>> We're mortal.
They're immortal.
So we actually have solved the problem of immortality.
There's just one little snag.
It's only for digital systems.
It's not for us.
>> That's true.
What do you think your singular quality then was, as you look back?
Was it curiosity?
Just general curiosity?
Persistence or patience?
>> It's a lot of curiosity combined with a remarkable ability to ignore what other people say.
>> Well things really changed.
Was it 2012 when you had your big breakthrough?
You ended up-- your company was bought by Google.
People say that computer science was changed.
>> That's when people realized it had changed.
Things actually changed earlier than that.
That's when it was generally recognized that neural networks actually did work really well on really hard problems.
Like recognizing objects and images.
Up until that point, the people who did vision had said this will never work--these are little toy problems.
It will never work on real images.
Just forget it.
Then we halved...just about halved the error rate.
And the computer vision community actually-- reacted quite sensibly.
They didn't say this is rubbish.
They said oh, it actually works and they all started using it.
>> Well, you got the satisfaction of people sitting up, listening and admitting you were right.
Then also the satisfaction of, you know, you were bought by Google.
>> There was a huge business deal.
>> My company was bought.
>> Your company was bought for $44 million.
For someone who'd been toiling away in a computer science faculty, was that astonishing?
To sort of go, woah!
>> It was a bit astonishing.
>> I'm rich!
>> And I have a neuro-diverse son.
And I always had the worry of what would happen to him when I'm not here.
All parents of neuro-diverse children, that's their main worry.
In academia, there's no way I could deal with that problem.
So that's why I set up the company.
So I could deal with that problem.
So if it hadn't been for him, I probably never would have set up the company and never gone to Google.
>> Wow..what did that do in terms of changing your life and the work that you were doing?
And what you were able to do.
>> It added an extra digit to my salary.
It..I had a lot of freedom at Google.
So I became kind of like a graduate student again.
I was only working there half time.
I still had students to finish off at the University of Toronto and they let me do whatever I liked.
>> But you've...one of the things that you've always been credited with, is being a great mentor.
>> Yes.
>> And really loving bringing along your students.
>> Yeah, I like having smart students and watching them develop.
>> Are you astonished at the speed of everything?
You never thought you'd live to see that.
>> No, I never thought I'd live to see, for example, an AI system or a neural net that could actually talk English.
In a way that was as good as a natural English speaker.
And could answer any question.
For GPT-4, you can ask it about anything!
And it will behave like a not-very-good expert.
It's incredible.
It knows 1000s of times more than any one person.
It's still not as good at reasoning.
But it's getting to be pretty good at reasoning.
And it's getting better all the time.
>> Wow, what do you think AI is good for?
>> Everything.
( laughs ) Um, so obviously any kind of routine intellectual work, it can do.
So my niece writes-- answers letters of complaint for our health system.
It used to take her 25 minutes to compose a letter.
Because you need to refer to stuff in the complaint.
They don't just want a form letter, right?
Now what she does is just scan it and give it to GPT-4 or 3.5 and it composes the letter.
Then she reads through the letter and tells it to make it a bit more or less formal, something like that.
On average, it takes her 5 minutes now.
So she's five times more productive.
Now I don't think they need five times as many letters answered.
So basically her job-- they need less people doing her job.
So there will be joblessness there.
There are other jobs which are very elastic.
Like, for example, suppose you could make an AI doctor.
Well, anybody who's over 50 knows that you could use far more of a doctor's time than you can actually get.
What's this thing?
And why does this hurt?
It'd be great.
So the demand for healthcare is incredibly elastic.
We could easily get 10 or 100 times as much healthcare as we get.
And be very happy to absorb it.
So it won't cause joblessness there.
And it will cause amazing advances in healthcare.
>> Wow, and yet we hear it may wipe out humanity.
Do you think, with some of the things that you say that you're scaring people too much?
You think of kids these days who are terrified about what's happening with the climate.
If that doesn't kill us, then AI will.
>> It worries me a lot.
It'd be nice if you could not scare the kids.
But scare the politicians.
So they do something about it.
That's not really possible.
So yes, I do worry that I cause teenagers to have sleepless nights about AI taking over.
But I don't think we should stay quiet about it.
The same for climate change, right?
They're worried about climate change.
But it's good they're worried about climate change.
It might actually cause people to stop burning carbon.
>> Hasn't your son sort of said sometimes, "Dad, just stop talking about this?"
>> He has.
Yes.
>> Because he thinks it's too scary sounding?
>> Yeah.
>> But I guess what's the alternative view?
>> Exactly.
If you think it's a real threat, just not saying anything doesn't seem right.
>> Do you think still about the human brain?
>> Oh yes.
That's the thing I'd most like to understand.
>> Well, there's still time.
>> Maybe but maybe not for me.
>> The final question we always ask is what does being Canadian mean to you?
>> So I actually left Canada in 1998 and went to London for 3 years.
And London's fairly multi-cultural.
But it's not the same kind of multi-cultural as Canada.
There's lots of cultures there but they don't necessarily interact that well.
Canada is kind of wonderful at it-- Toronto, in particular.
So I have two adopted children who are Hispanic.
And they experienced very little racial prejudice here.
>> Interesting.
>> So that, for me, summarizes Toronto.
>> And Canada as well in your career here?
>> Yeah, and obviously compared with the States, we have healthcare for everybody.
And it's just a much kinder society.
>> Well, it's been a real pleasure to talk to you.
>> And meet with you and get a glimpse of the brain.
>> Okay, thank you.
>> Which is quite formidable.
Thank you so much.
>> Thank you.
>> And we'll be back next week with another episode of Canada Files.
♪