For the first time since the 1960s, Hollywood writers and actors are on strike concurrently. One of the joint movement’s inspirations is generative artificial intelligence—the term for programs that produce humanlike text, images, audio and video more quickly and cheaply than artists. The strikers fear studios’ use of generative AI tools will replace or devalue human labor. This is a reasonable worry: one report suggests that thousands of jobs have already been lost to AI, while another estimates that hundreds of millions could eventually be automated. Left unchecked, this labor disruption could further concentrate wealth in the hands of companies and leave workers with less power than ever.

“Unfettered capitalism, unfettered innovation, does not lead to the general well-being of our society,” says Joseph E. Stiglitz, a winner of the 2001 Nobel prize in economics, a professor at Columbia University and chief economist at the Roosevelt Institute, a think tank based in New York City. “That’s one of the results that I’ve shown very strongly. So one can’t just leave it to the market.” Striking workers such as those in the writers’ and actors’ unions that are taking action now could serve as one restriction on job automation. Government regulation could also limit AI’s disruptive ability. Stiglitz, who has studied the science of inequality—and how we can reduce it—spoke with Scientific American about how artificial intelligence will impact the U.S. economy and what should be done to prevent it from increasing economic inequality.

[An edited transcript of the interview follows.]

Generative AI is already disrupting the job market. Copywriters have been laid off in favor of text-generating programs such as ChatGPT. IBM has said it will pause hiring on thousands of roles that could be done by AI. Do you see this trend continuing?

Yes, I do. But we don’t know the extent to which it will happen. I think it will replace people in more routine jobs—you mentioned copywriting, copy editing. Where there is a set of rules, it can read and see whether those rules are followed. It may not have as good an ear for the exceptions, and so I think there’s going to be a lot of AI-human interface: people will use AI as a productivity-enhancing tool.

I don’t think AI is at the point where it can be trusted on its own, but I think it’s a very powerful tool for doing a wide class of work that involves a lot of routine. Somebody trained ChatGPT on my data, and [I tested it] to see how well it did in answering journalists’ questions. I made up the questions, and I reviewed the answers. I thought that on half the questions, it did perfectly reasonably. And on three, it was totally wrong. So my view is: it’s not going to be unleashed without a lot of human interaction. You’re going to have to check it—not only the quality of the answer [but also] the bias and whether it’s gone down a rabbit hole and produced made-up references.

What about the possibility of AI creating jobs? Would that be enough to make up for some of the jobs that will disappear in the new AI era?

No, I don’t think so. I think it’s going to create a demand for different skills. So, for instance, AI is very much like a black box. And by that I mean even the people who create it don’t understand exactly how it’s functioning. So at least some people have speculated that managing an AI may require more linguistic humanities skills than mathematical skills. And it may create a change in the kinds of skills that are valuable in the labor market. I see it as, at least in many areas, increasing productivity enough that the demand for labor in those areas will go down. There will be jobs created, but my judgment is that there will be more jobs lost.

Could we end up in a situation where human-created work is a premium product, the way buyers might be willing to pay more for hand-woven sweaters than for machine-made ones?

Yes, there’s a widespread sense that there’s a kind of blandness to ChatGPT-generated material. There’s always going to be a demand for creativity. I think the areas where it’s going to replace us are very much the areas where, now, we don’t put a lot of weight on who has written it—you know, it’s a newsletter, or it’s something that, if it had been generated by a machine, we don’t care. It’s not the literary quality of the information; we just want [that information to be accurate and] put in the right form.

A big labor disruption like this is going to have an impact on economic inequality. As someone who studies inequality extensively, how do you see these changes in the job market contributing to inequality, both in the short term and in the years to come?

I’m very worried. In a way, robots have replaced routine physical work. And AI now is replacing routine white-collar work—or not replacing [it] but reducing the demand. So jobs that were routine white-collar, I think, will be at risk. And there are enough of those that it would have a macroeconomic effect on the level of inequality. It could amplify the sense of disillusionment: [in places where deindustrialization occurred, there was a] rise in the deaths of despair. Those were located in particular places, but this routine work occurs everywhere.

Now, that poses an advantage and a disadvantage. The problem is: this may mean that large fractions of the world, of the U.S., will face this inequality. But on the other hand, if we get our macroeconomic policy right and create jobs, the jobs will be created everywhere. So people won’t have to move the way they do now, when the jobs being created are in urban coastal cities and the jobs being lost are in the Midwest, the South and industrial towns. So some of the place-based inequality, which has played such a role in the divided U.S., may not be as bad.

And do you see any potential solutions to this issue of the reduced demand for white-collar work? Is there any way to reduce the impact of that?

Sure, two things: We increase aggregate demand to keep the economy closer to full employment, and we have active labor market policies to train or retrain people for the new jobs [created by AI]. It may be that if we have good distributive policies, people may say, “Well, our standard of living is sufficiently high—I don’t need that many material goods.” And so they’ll accept more leisure; we might move to a 30-hour week. In effect, our measured GDP [gross domestic product] would not be as high as it would be if we had a 35- to 40-hour week. But our objective is not measured GDP; our objective is well-being. It could well be that we decide to move to an equilibrium with overall shorter working weeks and more leisure. And that may be one way we accommodate this increased productivity and increased innovation.

How can we incentivize companies to shorten the workweek and accept reductions in overall profitability?

We may have to use government regulation because of the weakness of the bargaining power of workers—especially in the U.S. We passed the “hours and wages bill” [the Fair Labor Standards Act of 1938] in the Great Depression, which capped the standard workweek at 40 hours. That was a long time ago, and now we’re in a new world. It may be that the appropriate thing is to set it at 30 or 35 hours, with a lot of flexibility, so if companies want to have the workers work more than that, then they pay them overtime. What we have to recognize is that we created a system where workers don’t have much bargaining power. So in that kind of world, AI may be an ally of the employer and weaken workers’ bargaining power even more, and that could increase inequality even more. There is a role for government to try to steer innovation in ways that are more productivity-increasing and job-creating, not job-destroying.

It’s interesting to compare the AI revolution to historical events because disruptions like this often have historical parallels. Is AI’s impact analogous to another event?

One always has to be careful about making historical comparisons. Some people have made, I think, the wrong analogy. And they said, “In previous cases, the innovation created more jobs than it destroyed—cars destroyed jobs involving horses and buggies but created new jobs in car repair.” There’s no theory that says it has to be that way. I think that’s a lazy way of reading history—just “in many cases, more jobs were created.” But it’s not inevitable, and one can easily imagine the opposite.

Overall, do you feel optimistic or pessimistic about the situation?

I guess overall, I feel pessimistic—with respect to the issue of inequality. With the right policies, we could have higher productivity and less inequality, and everybody would be better off. But you might say the political economy, the way our politics have been working, has not been going in that direction. So on the one hand, I’m hopeful that if we did the right thing, AI would be great. But the question is: Will we be doing the right thing in our policy space? And I think that’s much more problematic.