Ideastream Public Media

© 2026 Ideastream Public Media

1375 Euclid Avenue, Cleveland, Ohio 44115
(216) 916-6100 | (877) 399-3307

WKSU is a public media service licensed to and operated by Ideastream Public Media.

AI pioneer shares the origins of machine learning and looks to its future

Today's artificial intelligence revolution took decades to develop.

Early research in computational neuroscience used the human brain as a model for building machines that could understand language, work that led to today's large language models, or LLMs, which are at the core of AI's abilities.

Terry Sejnowski at the Salk Institute in San Diego is a pioneer in the field. He spoke with Ideastream Public Media's Jeff St. Clair about the links between human and artificial intelligence and the push to build ever smarter machines.

STCLAIR: Your new book is ChatGPT and the Future of AI: The Deep Language Revolution. What does GPT stand for and what does it mean?

SEJNOWSKI: The G stands for generative. It's a system that can generate output, not just recognize input but also create its own output.

P stands for pre-trained and this means all the learning takes place long before it gets put out into the public. And it takes a tremendous amount of computing. The largest language models take hundreds of millions of dollars of computer time just to train them. They're that big. But once that's done, you do a little fine tuning and now it doesn't learn anything new. Everything stops and there's no new learning taking place.

And finally, T stands for transformer. That's the name of the architecture. It came from a group at Google who in 2017 wrote a paper called "Attention Is All You Need," and attention is a part of the transformer. The transformer helps keep track of everything that you said in your dialog. That is something the brain can do, and they put that into a generative AI model.
And this was amazing: the model was trained to predict the next word in the sentence. And if you want to be able to predict the next word, you have to understand something about the meaning of the word and how it fits into the sentence. So it actually had to figure out internally how to represent meaning. And that's the real trick.
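The attention step Sejnowski describes can be sketched numerically. Below is a minimal scaled dot-product attention in Python with NumPy; it is a toy illustration of the operation from the "Attention Is All You Need" paper, not anything from the interview, and the token count, embedding size, and random values are arbitrary assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax: subtract the max before exponentiating.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def scaled_dot_product_attention(Q, K, V):
    # Each position's output is a weighted mix of all the values V, with
    # weights set by how well its query matches every key. This mixing over
    # the whole context is what lets the model keep track of everything
    # said earlier in the dialog.
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)      # query-key similarity
    weights = softmax(scores, axis=-1)   # each row sums to 1
    return weights @ V, weights

# Toy example: 3 tokens with 4-dimensional embeddings.
rng = np.random.default_rng(0)
Q, K, V = (rng.normal(size=(3, 4)) for _ in range(3))
out, weights = scaled_dot_product_attention(Q, K, V)
print(out.shape, weights.shape)  # (3, 4) (3, 3)
```

In a real transformer this runs over thousands of tokens at once, and the next-word prediction is read off from the mixed representations.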

STCLAIR: How does your experience in neuroscience and understanding the brain help you develop the methods used in AI computing?

SEJNOWSKI: The revolution has taken place in AI and neuroscience simultaneously, and the two are now really converging.

In our research, we asked what we can learn from the brain, and whether we can understand its principles and incorporate them into smaller neural networks. What we didn't know was how well it would scale: as the network gets bigger and bigger, does it get better and better and solve bigger and bigger problems?

Early on I calculated how much computational power there was in the brain, and I predicted that by 2015 computers should be close to brain scale. Sure enough, in 2012 Geoffrey Hinton was able to solve a very difficult problem in computer vision: recognizing objects in images invariant to position, size, and rotation. He showed that a particular type of neural network, called the convolutional neural network, trained on the largest existing database of 20 million images, could learn as well as a human. That was a turning point.
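The convolutional idea behind that result fits in a few lines. This is a hypothetical NumPy sketch of the 2D "valid" cross-correlation at the heart of a convolutional neural network; the image and edge-detector kernel are made-up illustrations, not Hinton's actual network.

```python
import numpy as np

def conv2d(image, kernel):
    # "Valid" 2D cross-correlation, the building block of a convolutional
    # neural network. The same kernel slides over every position, so a
    # learned feature detector responds wherever its pattern appears --
    # the source of the (approximate) translation invariance mentioned above.
    kh, kw = kernel.shape
    H, W = image.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A vertical edge at column 3 of a 6x6 image ...
img = np.zeros((6, 6))
img[:, 3:] = 1.0
# ... lights up a simple edge-detecting kernel exactly at the boundary.
resp = conv2d(img, np.array([[-1.0, 1.0]]))
print(resp.max())  # 1.0, at the column where the image steps from 0 to 1
```

Real networks stack many such learned kernels with nonlinearities and pooling, but the sliding-window principle is the same.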

STCLAIR: Do you think that systems like ChatGPT are "thinking" or have the ability to think?

SEJNOWSKI: Yes and no. I think that there are some aspects which are undoubtedly what you would call 'thinking.' In other words, it can't answer your questions unless it understands the question. But the thinking that these large language models do may not be exactly the same as a human's. I'll give you one example from a problem I'm working on right now.

When I read a book, or have a conversation, afterwards I think about it; I can plan the future and think about the past. My brain is always active. It generates activity on its own. It doesn't need an outside person talking to me. But ChatGPT, the moment you stop talking to it, just goes blank. There's no self-generated thought, no internal dialog. And that's a big difference. As human beings, we reflect and are creative in the absence of any outside input. That's a problem I'm working on, because we don't know how humans do that. It's called long-term working memory, and I recently got a big grant from the National Institutes of Health to work on understanding how it works. I think if we could understand how the brain does that, we should be able to transplant it into a large language model.

Computational neuroscientist Terry Sejnowski co-wrote the book with the ChatGPT chatbot as part of his deep dive into the roots of AI.

STCLAIR: Do you see that happening someday with the large language models, that they might achieve that internal dialog you're talking about?

SEJNOWSKI: Things are moving so quickly, I would say it's inevitable.

STCLAIR: Are you at all worried about where AI is heading and maybe what safeguards should or should not be put in place?

SEJNOWSKI: This is an issue that a lot of people in AI are concerned about, and a lot of work is being put into guardrails to prevent these large language models from saying things that are inappropriate, improper, misleading, or hallucinated. But these are technical problems, and the engineers are going to solve them.

We are at a very early stage right now in AI development. Compared to, say, airplanes, we're at the Wright brothers stage. We just got off the ground. And the last thing the Wright brothers were able to do was to control the airplane, to point it in the right direction without crashing, and that's where we are. We're off the ground, but we don't know how to control it yet.
One real concern is that at some point AI will become smarter than we are. It will achieve a different level. It can already write computer programs and it should be able to write a computer program that will make it smarter. And the smarter it is, the better the computer program that it can write. So, it's a positive feedback loop that can run away. And if that happens, we might be in trouble.
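The runaway loop he describes is compound growth, which a few lines make concrete. This is a toy model with an assumed constant "gain" per generation; the numbers are invented for illustration and have no empirical basis.

```python
def self_improvement(capability=1.0, gain=0.1, steps=20):
    # Toy positive-feedback loop: each generation's capability sets the size
    # of the improvement it can make to its successor, so growth compounds.
    history = [capability]
    for _ in range(steps):
        capability += gain * capability  # a smarter system writes a better one
        history.append(capability)
    return history

history = self_improvement()
print(round(history[-1], 2))  # ~6.73x the starting capability after 20 steps
```

Even a modest 10% gain per generation multiplies capability nearly sevenfold in 20 steps, which is why a self-reinforcing loop is hard to bound once it starts.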

STCLAIR: Your former collaborator Geoffrey Hinton gave a stark warning about AI in his Nobel Prize speech in 2025. What are your concerns?

SEJNOWSKI: We really have to be careful, we have to be cautious, and we have to be prepared. And so I'm glad that Geoffrey Hinton and others are worried about it because, if these smart people worry about it, we'll be safer.

Let me just say, all technology can be used for good and bad. There are problems, for example, with biotechnology. We have a godlike power to create new viruses that could wipe out all of humanity; that's an existential threat. But we haven't. And the reason is that biologists have regulated themselves. They've put in controls. So we likewise need to self-regulate AI. Obviously companies have to self-regulate, because otherwise people are going to sue them. We're at a very early stage with this.

Something else to keep in mind are the unintended consequences of AI. We don't even know what they are yet, on both the benefit and risk sides. The actual ways that it can be used will unfold in the future. If we regulate it prematurely and prevent ourselves from getting to that point, we won't know its full potential. We won't know what might happen in either direction.

STCLAIR: We've been talking with Terry Sejnowski, a pioneer in the field of computational neuroscience and author of the book ChatGPT and the Future of AI: The Deep Language Revolution. Dr. Sejnowski, thank you so much.

SEJNOWSKI: It's been wonderful. Thank you.

Jeff St. Clair is the midday host for Ideastream Public Media.