Developmental artificial intelligence: When the job of AI is to make itself obsolete.

“What if the proper place of AI schlock is to make learning to draw so fun we don’t need it anymore?”

As society works to shape the role of AI in work, a major question looms: what kinds of work should we hand to AIs, and which should we keep for ourselves? I've been slicing up the conversation about what to automate along the formative/summative distinction from education. Which work do we do in order to develop and learn (formative), and which do we do for the outcome (summative)? Maybe the right role for AI depends on whether a job is about the outcome or about the process of doing it. The prediction would be that AI uptake is fastest wherever a given kind of work is output work. For work that's about the output, AI will help people be "better," and we'll more easily find social agreement to automate. For work that's about the process, automation will make people "worse" as it lets fundamental skills decay. And for work that's both, some of it will wrongly get treated as if it's just one or the other, while some will be more sensitively sliced into its output part and its process part.

Notetaking is a good example. It's been one of the fastest areas of AI uptake, and it's becoming ubiquitous for meeting minutes. But in a lot of science, math, and humanities education, and in debate, conversation, and brainstorming, notetaking is a formative tool for helping us think, and we don't help ourselves by automating it. What will happen? Will we divide notetaking into summative parts (meeting minutes) and formative parts (brainstorming), with different roles for AI in each, or will we pretend the task is just one or the other (as critics of all AI-generated images do)? The frame might be applied to the sociologist Robert Putnam and his argument that TV and TV news drove the decline of American participatory democracy. We thought news was about the output, so we created broadcasts, and the formative part (the skills of finding and interpreting events for oneself and one's community) decayed.

Midjourney is another great example. Say that art has a "social development" role and an "illustration" role. If we allow that illustration is more about the output, we might expect to see AI images playing a greater role in slides than in paper figures, and in web graphics than in fine art.

Education is another example. Because teachers treat homework as formative ("practice makes perfect") and students treat it as summative ("gimme the grade"), we're suddenly right in the middle of a social conversation about which uses of AI aid learning and which are plagiaristic abuse.

Governance is especially timely and relevant. I'm increasingly obsessed with governance as a thing people use to develop themselves. So I'm nervous about AI facilitation, argumentation, and deliberation tools, because they're built by people who assume that all governance work is summative. I believe more and more that a surprising chunk of it is formative, and that AI will make democracy worse as our good habits decay.

This doesn't mean there's no place for AI in formative work, just a place for very different AI. The role of a tool in formative work is to make itself obsolete. What does that look like? We'll find the best examples in the areas of society with the most agreement that the work is formative. I'm thinking of K–12 education. No one is proposing to replace grade-schoolers with LLMs, even if LLMs are cheaper than five-year-olds, with better attendance and better grades. That's a sure sign it's formative work. But I'll be curious where else, besides edtech, we find tools and uses of AI focused on reducing dependence by developing people. There may be interesting hints among people who use AI art tools formatively and LLMs for Q&A; both of these formative uses of AI are, for example, iterative. For governance my opinions are pretty strong: instead of using human discourse datasets to train AI facilitators, we should use them to train AI debaters that we then use to train human facilitators.

Of course, this argument depends on the idea that we'll have any control at all over the role of AI in society. I think we don't have a lot, but we have more than we think. I'll be curious how quickly, or slowly, the kid who likes to draw for its own sake stops getting asked "Why?"

Society lost interest in chess AI just as it was getting interesting. IBM's Deep Blue changed chess, but it didn't kill it. These days, human/computer hybrids can accomplish things that neither could alone. Why do people still learn chess? Because it's fun to learn and think about. Fun, enrichment, and the voluntary enjoyment of manual tasks will be the compass of formative learning, and a source of insight into what we mean when we talk about bringing AI in.

This all amounts to an argument for public AI as well. An AI devoted to developing humans to replace it, to putting itself out of a job, will never compete with one that makes us dependent. The private-sector vision for AI is to increase the capacity of AI and AI/human hybrids while decreasing the capacity of individual humans. No privately held bot is trying to make itself obsolete.