“If we cannot handle more intelligence in our lives…”

Tyler Cowen is an American economist and a professor at George Mason University, where he holds the Holbert L. Harris chair in the economics department. Cowen writes the “Economic Scene” column for The New York Times. He was recently interviewed on Times Radio (London) and had what I found to be a sensible and positive take on AI:

Cowen: The odds are that 20 to 30 years from now, our world will look very, very different in a way that no one is used to. But the main winners and losers, we still don’t know.

Host: Do you use AI yourself, personally, frequently?

Cowen: Every day, dozens of times. It’s the single main thing I do. I typically learn more that way than by reading books, or even reading a lot of media. And it’s hard to think of what it cannot do.

Now the new version creates these beautiful images. I’ve been playing around with that. That’s only a few days old but it’s quite miraculous.

You know, we’ve had this productivity slowdown. We have declining birth rates. Most of our nations have pending fiscal problems, with no obvious solutions in sight for any of these issues. We don’t know how to get families to have more children or how to put social spending on a more sustainable trajectory.

AI is going to raise rates of productivity growth, and I hope it’s just in time. I think it will be. If it had come 20 years later, we’d be in a big, big mess. I mean, we are already to some extent, but it came just in time.

Keep in mind, I grew up in an America where the typical household had the television on for six to seven hours a day. And if we move away from that and toward YouTube and AI, I strongly suspect that’s a big plus.

But I look at it this way: if we cannot handle having more intelligence in our lives, I mean, how are we going to cope, right?

Cowen emphasizes that leading AI models, including those developed in China, are deeply influenced by Western intellectual traditions. This is largely because they’re trained on vast datasets comprising Western literature, philosophy, and online content. As a result, these models tend to reflect Western notions of rationality, discourse, and objectivity. 

Cowen suggests that this phenomenon represents a significant “soft power” victory for the West. By shaping the foundational thinking patterns of AI, Western ideals are subtly being disseminated globally through technology. He also notes that even Chinese AI models, like DeepSeek, exhibit similar Western-oriented reasoning, despite being developed in different political contexts. 

This insight underscores the profound cultural and geopolitical implications of AI development. It’s not just about technological advancement but also about the values and perspectives that are embedded within these systems. 

There’s always been propaganda. Newspapers, talk radio, cable news channels, social media… what did I miss? I’m not sure I see AI as a propaganda vector, but it’s early days. And the notion that “Western ideals are subtly being disseminated globally through technology” seems like something brand new.