Raising Baby AI in 2024
Davos 2024 convened a panel of experts to terrify and reassure us in equal measure
You’d have to be living under a rock to entirely swerve the avalanche of AI predictions for 2024. They typically fall into three camps — extreme AI optimism, extreme AI pessimism, and a sort of vanilla-flavored corporate edition whereby the author safely predicts things that are already happening.
Much more interesting was a meeting at Davos 2024 yesterday, where a panel of undisputed AI silverbacks gathered to discuss the trajectory of the sexiest topic in the AI zeitgeist: large language models (LLMs).
Child-rearing
Yann LeCun, Chief AI Scientist at Meta and a passionate seeker of artificial general intelligence (AGI), gave an eminently repeatable layman's explanation of where LLMs stand today versus human intelligence, using the example of a child acquiring visual knowledge.
He told the intimate Davos crowd that a human four-year-old takes in about 20 MB per second through the optic nerve. Using a rough estimate of 16,000 waking hours in that child's first four years, and 3,600 seconds in each hour, that works out to roughly 1.1 petabytes of visual data, which LeCun says means the average four-year-old has taken in about 50x more data than the very biggest current LLMs were trained on.
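For the numerically curious, the claim is easy to sanity-check. Below is a minimal Python sketch of the arithmetic; the bandwidth and waking-hours figures come straight from LeCun's talk, while the LLM training-set size is an assumption on my part (~10^13 tokens at roughly 2 bytes each, numbers LeCun has cited elsewhere for the biggest models).

```python
# Back-of-envelope check of LeCun's comparison. The 20 MB/s optic-nerve
# bandwidth and the 16,000 waking hours come from his talk; the LLM
# training-set size below is an assumption, not a figure from the panel.

OPTIC_NERVE_BYTES_PER_SEC = 20e6   # ~20 MB per second
WAKING_HOURS_BY_AGE_FOUR = 16_000  # rough estimate from the talk
SECONDS_PER_HOUR = 3_600

child_bytes = OPTIC_NERVE_BYTES_PER_SEC * WAKING_HOURS_BY_AGE_FOUR * SECONDS_PER_HOUR
print(f"Visual data by age four: {child_bytes:.2e} bytes "
      f"(~{child_bytes / 1e15:.1f} petabytes)")

# Assumed training corpus for the biggest LLMs: ~1e13 tokens * ~2 bytes/token.
llm_training_bytes = 1e13 * 2
print(f"Child vs. assumed LLM corpus: ~{child_bytes / llm_training_bytes:.0f}x")
```

Run as-is, this lands at roughly 58x, comfortably within the "50x" ballpark LeCun quoted.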