Geoffrey Hinton Tells Us Why He’s Now Scared of the Tech He Helped Build
Highlights
- As their name suggests, large language models are made from massive neural networks with vast numbers of connections. But they are tiny compared with the brain. “Our brains have 100 trillion connections,” says Hinton. “Large language models have up to half a trillion, a trillion at most. Yet GPT-4 knows hundreds of times more than any one person does. So maybe it’s actually got a much better learning algorithm than us.”
- Hinton has an answer for that too: bullshitting is a feature, not a bug. “People always confabulate,” he says. Half-truths and misremembered details are hallmarks of human conversation: “Confabulation is a signature of human memory. These models are doing something just like people.”
- Note: It’s expected that LLMs produce wrong facts, because humans do too; the difference is only in how often and how gracefully they do it.
- “Well, here’s a subgoal that almost always helps in biology: get more energy. So the first thing that could happen is these robots are going to say, ‘Let’s get more power. Let’s reroute all the electricity to my chips.’ Another great subgoal would be to make more copies of yourself. Does that sound good?”
- Note: The machines may soon be able to create their own subgoals — the intermediate steps required to execute a task.