What will the world look like when AI can do in minutes what we can do in 3 weeks (or longer)? That time is here.
Are we preparing students for a world where most existing knowledge work will be done by machines?
McKinsey has created its own AI platform called Lilli that is being deployed among 45,000 employees and enables consultants to do "in minutes what it would have taken them weeks to do," CFO Eric Kutcher said at the recent CNBC CFO Council Summit in Washington (MSN).
Short of getting the full AGI (whatever that means), enabling people to reliably get a significant portion of their own work done by autonomous systems will enable countless new personal and professional opportunities. Bojan Tunguz
Yann LeCun, one of the “godfathers” of AI, frequently talks about how someday in the future we will all have an army of AI assistants that are smarter than us and that work for us. Like those around us who are smarter than us, these AIs will know more about what we are trying to do; they will think faster; they will see patterns we do not see; and they will communicate more accurately.
In the context of the discussion, Dr. LeCun is usually talking about a world (not too far) into the future when we have developed artificial general intelligence (AGI) and machines become generally as smart as we are in most of the domains in which we are intelligent. But AIs are starting to have a radical impact well before they get to that point (and if you think AGI is a long way off, imagine how capable AI will be by the time we finally arrive at it).
On a level less sophisticated than McKinsey's, I have many AI assistants that exceed my capabilities at many tasks, each costing me approximately $20/month: grammar and spelling (GrammarlyGo), research (Perplexity and Google), idea generation (ChatGPT4/Claude2), image generation (Dall-E3, MidJourney), and video editing (CapCut). These assistants help me with all of these tasks, plus many others, enabling me to do more and better work. I'm still working on the minutes-versus-weeks thing, but I don't have McKinsey's economic firepower to deploy the AI scientists needed to tailor the AIs to my work that tightly (though a data analysis I did probably would have taken me a month, compared to the 3-4 minutes it took ChatGPT4).
But even these tools are quite powerful. If I just did these things on my own, it would take a lot longer, and my work product would be inferior. I certainly couldn’t call 500,000 people at the same time and carry on unique conversations with each person in real time, like my own AI call center can. Calling 500,000 people myself would take me forever and/or cost me a lot of money. And what if I had to answer hundreds of thousands of calls 24/7? Exhausting.
Similarly, I guess, the McKinsey consultants could try to compete against the AIs, stand up for their humanity, and do their work on their own! I suspect their bosses would let them work on projects that are not timely and offer to pay them a few dollars a week. If they are close to retirement, they may just let them keep their full salaries for a few more months.
But we all know the few-dollars-a-week option isn't real. To succeed in the industry, consultants are going to have to "reconceive what it means to deliver those services," and jobs are going to have to be put "back together again in new iterations." "It will change everything," PwC chief products & technology officer Joe Atkinson recently told CNBC (MSN).
Figuring out how to change everything is going to take creativity, critical thinking, communications, and collaboration, as well as the ability to leverage AI to amplify human capacity. That will become the human role, and there is no alternative. Machines cannot put the jobs back together and conceive of new services on their own (at least not yet).
The alternative is certainly not to wish the technology away or to resist it.
The third option, and the only one he (Chavez) does not recommend to anyone, "is to stand in the way of progress in the name of job security and obfuscate or thwart people looking to automate workflows." "It's doomed to fail," Chavez said. "I've seen it fail. Do not set yourself up in competition with the computers. I decided in seventh grade to not compete with calculators on multiplying numbers. I had confidence finding what tools can do better will open up new things for me to do."
Humans aren’t going to win a competition against machines in most instances (“future of work” surveys and studies do indicate that plumbers and cleaners will not be impacted by AI for the foreseeable future).
As Mustafa Suleyman said on CNN Sunday morning, AIs are already basically competitive with doctors at diagnosing diseases, which is consistent with a growing number of studies.
In this study, the AI model graded the aggressiveness of tumours from CT scans with 82% accuracy, compared to the 44% accuracy achieved by a clinician reading the scan, taking a tissue sample, and waiting for the results of a lab analysis. [Bristows]
It’s not perfect in most instances, but it’s within range, and we are still in the first inning of AI. And it’s not just diagnosis; AIs will be able to facilitate treatment as well. Vinod Khosla predicts FDA-certified AI doctors within five years. Existing apps are already quite powerful.
Harvey.ai, which can substitute for paid legal assistants and most likely some attorneys in the near future, is now valued at $800 million. As Supreme Court Chief Justice John Roberts recently noted: “I predict that human judges will be around for a while. But with equal confidence I predict that judicial work—particularly at the trial level—will be significantly affected by AI. Those changes will involve not only how judges go about doing their job, but also how they understand the role that AI plays in the cases that come before them.” Individuals can get direct legal help at a fraction of the cost of a human attorney, and judges in the U.K. have authorization to use AI to help write legal opinions.
I’ll be shocked if multimodal tutors that don’t hallucinate aren’t available within a year. For students who homeschool, these will be their teachers. These systems already exist in text, and Google can already tutor a kid through needed math homework corrections from a photo of the original work, reasoning through the problem to spot the errors.
Colin Murdoch, the chief business officer of Google’s new AI super division, explains:
Gemini is a significant advance in AI development. It’s our largest and most capable model to date: it understands and reasons across text, images, audio, video, and code, so it can help people be more creative or learn. For example, let’s say your child brings home physics homework and needs help understanding what they have done right and wrong. If you took a photo of the page, Gemini would not only give you the correct answer to the problem, but would read the document and explain what the child has done right, what they have done wrong, and the underlying concepts. Users can also interact with Gemini through Bard, which now runs on Gemini Pro and is more effective at understanding, summarizing, reasoning, coding, and planning. It is already available in English in more than 170 countries, and in the coming months it will reach billions of people through other core Google products such as Search, Ads, Chrome, and Duet AI. In the long term, tools like Gemini will transform the way billions of people live and work around the world.
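For readers who want a concrete sense of what that photograph-the-homework workflow looks like in practice, here is a minimal sketch using Google’s google-generativeai Python SDK. It is only an illustration, not anything from Murdoch’s interview: the model name, API key placeholder, image filename, and tutoring prompt are all assumptions for the example.

```python
# A minimal sketch of the "photograph the homework" workflow described above,
# using Google's google-generativeai Python SDK.
# Install with: pip install google-generativeai pillow
import google.generativeai as genai
from PIL import Image

# Hypothetical placeholders: supply your own API key and homework photo.
genai.configure(api_key="YOUR_API_KEY")

# Gemini's multimodal (text + image) model.
model = genai.GenerativeModel("gemini-pro-vision")
homework_photo = Image.open("physics_homework.jpg")

response = model.generate_content([
    "Act as a patient physics tutor. Review this homework page: explain what "
    "the student got right, what they got wrong, and the underlying concepts, "
    "without simply giving away the answers.",
    homework_photo,
])
print(response.text)
```

In a real tutoring tool, most of the design work sits in that prompt: deciding whether the AI coaches the student or just hands over answers is exactly the kind of choice teachers will be making.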
As noted, this (and more) will all happen whether or not we get to “AGI,” which could happen as early as 2027. [We explain this claim and what it means in great detail in Chapter Two of our report].
These changes are radical.
I see education heading down one of two paths.
Path 1 — We keep telling teachers that AI cannot do their jobs because it can’t do everything humans can do, and we keep telling kids that learning with AI is cheating and that they will amount to nothing if they use AIs to help do their work. This is a path toward the irrelevance of educational institutions. AIs can and will be able to do most of what we do. They may not be able to do everything as well as we can, but a lot of what we do, they will be able to do better.
Path 2 — We help students and teachers understand the world that is emerging around them. Level with them. Explain that the future of every single job is uncertain. Explain that in past technological revolutions, which unfolded over much longer periods than this one, many people lost their jobs. Explain that society is about to be substantially disrupted, probably in greater ways than ever before, and certainly faster than ever before. Inform them that every “future of employment” report identifies soft/durable skills and AI technology skills as the known job skills of the future. Let them know that the future economic value of all knowledge work is uncertain.
In this world, educators and parents try not to worry about students’ scores on standardized tests that measure knowledge they may never have use for in an AI world. Educators and parents should educate themselves on what it means to prepare students for the world they will graduate into, not the world of June 2023 or before, and then prepare them for that world.
Most importantly, educators must understand that the future of education has to be about preparing students for the AI world. How to do that is the most important question. A secondary, but important, question is how to use AI in schools. It’s a secondary question because it doesn’t matter whether we use AI in schools if we use it in a way that doesn’t prepare students and teachers for the AI world.
The future of education is not about teachers copying and pasting prompts into generic large language models. That is a great way for teachers (and anyone else) to learn how the technology works as it exists now, but in a few years or less, teachers aren’t going to be copying prompts into ChatGPT to write lesson plans and quizzes. In the future, teachers will most likely be facilitating student interaction with multimodal AIs that support their learning, and helping students develop the creativity, critical thinking, communications, and collaboration skills needed to leverage AI to amplify human capacity: to “reconceive what it means to” provide goods and services and to put the world “back together again in new iterations.”
The best part is that it’s actually not that complicated; teachers already have the ability to help students develop creativity, critical thinking, communications, and collaboration skills. Now they just need to 10X that, and school leadership has to help with the understanding-AI part.
It's simply human deep learning amplified by AI, and it’s well laid out in our report.