The social media platform TikTok is owned by a Chinese company that sucks up all kinds of data on its 108 million users in the U.S. and turns it over to the Chinese government. Not good. It looks like the company will be shut down or sold, but isn’t this shutting the stable door after the horse has bolted? I mean, the Chinese already have all that data. For clarification, I had what turned out to be an interesting conversation with ChatGPT (below). I was particularly interested in how AI might be used to bring some order to the social media chaos (near the bottom of this post).
ChatGPT: Steve, your point is valid and raises a crucial question about the effectiveness of such measures. If TikTok (or its parent company, ByteDance) has already collected significant amounts of user data, simply forcing it to sell or shut down wouldn’t erase the data that may already be in the hands of the Chinese government. The concern is more about future data collection and the ongoing influence the platform might have if it remains under Chinese ownership.
Here’s a bit more nuance to the issue:
1. Prevention of Future Data Harvesting. Even if past data is compromised, forcing TikTok to sell or shut down would prevent ByteDance (and by extension, the Chinese government) from continuing to gather new data from millions of users in the future. This could limit their ability to refine data-driven operations, like AI training or behavioral analysis, over time.
2. Limiting Influence and Control. Beyond data collection, there are concerns about the potential for Chinese government influence over TikTok’s algorithm. Critics worry it could be used to subtly promote or suppress narratives, potentially influencing public opinion or spreading propaganda.
3. Symbolic and Strategic Messaging. Forcing a sale or shutdown sends a strong geopolitical message. It demonstrates the U.S. government’s stance on data privacy and its willingness to confront perceived security threats. Even if the immediate impact on previously gathered data is negligible, it’s a statement of policy and a deterrent for future similar cases.
4. Possibility of Data Destruction. While it’s difficult to ensure, part of the negotiation in a sale could include terms requiring the deletion of previously collected U.S. user data from ByteDance’s systems. Although enforcement and verification are challenges, this could at least mitigate some concerns.
5. Control of the Platform’s Operations. A U.S.-based company owning TikTok would theoretically align the platform with domestic laws and regulations regarding data security and transparency. This could provide some assurance that user data is not being improperly accessed or exploited.
The critique you hint at, that the damage might already be done, is one that many share. However, the push to force a sale or shutdown is more about stopping ongoing risks and asserting control over future vulnerabilities. What’s your take: are these efforts worth pursuing, or do they feel more like performative actions?
—
Steve: I’m not a fan of big social media platforms like Facebook, X, et al. I’d like to see them all regulated.
ChatGPT: I get where you’re coming from, Steve. Social media platforms have become incredibly powerful, often operating with minimal oversight despite their outsized influence on society. The issues with platforms like Facebook, X, and others span a wide spectrum: data privacy concerns, misinformation, polarizing algorithms, mental health impacts, and the monopolistic tendencies of Big Tech. Regulation feels overdue to many people. Here are a few ideas for how regulation could help rein in these giants:
1. Data Privacy and User Control
- Platforms could be required to be more transparent about how they collect, store, and use data.
- Users should have more control over their data, including the right to delete it permanently.
- Strict limits could be placed on how much data can be shared with third parties.
2. Algorithmic Transparency
- These companies should disclose how their algorithms work, particularly how they prioritize and amplify content.
- Regulators could mandate algorithmic changes to reduce the spread of harmful content or prioritize user well-being over engagement metrics.
3. Content Moderation Standards
- Clearer rules could be established to ensure consistency and fairness in content moderation.
- Platforms might need to explain why specific content is removed or promoted, offering appeals processes.
4. Monopoly Busting
- Breaking up tech monopolies might foster competition and innovation. For instance, separating Facebook, Instagram, and WhatsApp into independent companies could reduce one entity’s influence.
- Limits could be placed on acquisitions, especially when they stifle competition.
5. Digital Addiction and Mental Health
- Platforms could face restrictions on features designed to keep users scrolling (infinite scroll, autoplay, etc.).
- Regular impact assessments on mental health, particularly for younger users, could become mandatory.
6. Combating Disinformation
- Companies might be held accountable for the spread of false or harmful information on their platforms.
- Partnerships with trusted fact-checkers could be incentivized or required.
7. Revenue Transparency
- Regulators could demand that companies disclose ad revenue sources and ensure they’re not profiting from harmful or illegal activities.
Of course, implementing meaningful regulation won’t be easy. These companies wield immense political and economic power, and any attempt to regulate them risks being watered down by lobbying. But without regulation, the risks to privacy, democracy, and mental health continue to grow.
What’s your take on how realistic it is to get meaningful regulations in place? Do you think governments will ever act decisively, or are they too entangled with these companies?
—
Steve: Only after it’s too late. Hope I live to see LLMs play a significant role in addressing such problems.
ChatGPT: I think you’re right, Steve; governments often seem to move at a glacial pace when it comes to addressing the complexities of modern technology. History is full of examples where decisive action came only after the damage was done. That said, the rise of AI, including large language models (LLMs), offers a sliver of hope for proactive solutions, provided we use them wisely.
Here’s how LLMs might play a role in tackling these challenges:
1. Enhanced Policy Development
- LLMs could analyze massive datasets (social media trends, case studies, and global regulations) to recommend effective policies.
- By simulating the impact of different regulatory approaches, they could help policymakers foresee unintended consequences and refine legislation before it’s enacted.
2. Monitoring and Enforcement
- AI could monitor social media platforms in real time for harmful content, misinformation, or violations of privacy laws.
- LLMs could streamline enforcement, making it easier for regulators to identify bad actors and issue fines or corrective measures.
3. Empowering Individuals
- Users could have AI-powered tools to track how their data is being used and even negotiate terms with platforms.
- Personal AI assistants might act as mediators, advising users on how to minimize their exposure to manipulative algorithms or invasive practices.
4. Combating Disinformation
- LLMs could play a major role in identifying and debunking false information at scale (a rough sketch of what this might look like appears at the end of this post).
- They could also assist in creating tools that help users discern credible information sources from dubious ones.
5. Promoting Algorithmic Transparency
- AI systems could audit social media algorithms to ensure they comply with ethical guidelines or regulatory requirements.
- LLMs might even help explain these algorithms to the public in plain language, increasing accountability.
6. Advocacy and Public Awareness
- AI could empower grassroots movements by helping activists craft compelling campaigns, analyze public sentiment, and organize more effectively.
- LLMs might also generate educational content to raise awareness about the risks posed by Big Tech monopolies and the importance of regulation.
The big “if,” of course, is whether society chooses to prioritize using AI for the greater good instead of profit or power. If governments and corporations monopolize AI’s potential without meaningful oversight, it could exacerbate the very problems it’s capable of solving.
The optimist in me says we’re on the cusp of a major shift, one where tools like me could be instrumental in reshaping how power and influence are wielded in the digital age.
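To make the disinformation point a bit more concrete, here’s a minimal sketch of how an LLM might be wired into a misinformation-triage pipeline. Everything in it is an assumption for illustration only: the OpenAI Python client, the model name, and the flag_post() helper are hypothetical choices, not anything TikTok, regulators, or ChatGPT actually proposed.

```python
# Minimal, illustrative sketch of LLM-assisted misinformation triage.
# Assumptions: the OpenAI Python client is installed, OPENAI_API_KEY is
# set in the environment, and the model name is a stand-in.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def flag_post(text: str) -> dict:
    """Ask an LLM to rate how likely a post is to contain misinformation."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # hypothetical choice; any chat model works
        temperature=0,        # keep triage output as consistent as possible
        messages=[
            {"role": "system",
             "content": ("You are a content-triage assistant. Rate the post "
                         "below for likely misinformation on a 0-10 scale "
                         "and give a one-sentence rationale. "
                         "Reply exactly as: SCORE|RATIONALE")},
            {"role": "user", "content": text},
        ],
    )
    score, _, rationale = response.choices[0].message.content.partition("|")
    return {"score": score.strip(), "rationale": rationale.strip()}

if __name__ == "__main__":
    post = "Scientists confirm the moon landing was filmed in a studio."
    print(flag_post(post))  # e.g. {'score': '9', 'rationale': '...'}
```

A score alone decides nothing, of course; in any real deployment, flagged posts would go to human reviewers with an appeals process, which is exactly the kind of oversight the regulation discussion above calls for.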