The Canadian journalist, blogger, and author Cory Doctorow coined the term "enshittification" to describe the gradual deterioration of online platforms and digital services as they prioritize revenue and profits over customer needs.
Enshittification usually follows a familiar pattern, unfolding over three key phases:
- Initial User-Centric Phase: Platforms start by offering excellent services to attract users, often operating at a loss to build a large user base.
- Business Customer Focus: As the platform grows, it begins to cater to business customers, offering them favorable terms to create a thriving marketplace.
- Exploitation Phase: Finally, the platform starts to exploit both users and business customers to maximize profits for shareholders.
Recently, the final phase has gained a new twist, as AI creeps into every platform and service.
Feeding the Hungry AI Beast
The New York Times podcast The Daily has a great episode, "AI's Original Sin," which dives deep into how, by late 2021, OpenAI, Google, and Facebook had exhausted the publicly available online content used to train their AI models. They then turned to copyrighted content without permission and, in some cases, broke their own platforms' terms of service to feed their AIs' training needs. It's an excellent episode and worth listening to.
Nowadays, more and more platforms are adding AI to their products. Google Gemini for Workspace, HubSpot's AI, ClickUp Brain, Notion AI, Adobe's Firefly, Canva Magic, and others have all shipped AI-powered features that have been well received and, let's be honest, have been great for productivity and for empowering people to do more.
The addition of AI features raises questions about how customer data is used to train these AIs, and about what that means for consumer privacy, creator copyright, and business data sensitivity.
Platforms have been shifting their policies to harvest more user data for AI training, prompting significant backlash from users and creators. This form of "enshittification" has seen major players like Adobe, Meta, and Microsoft navigate stormy waters as they try to balance AI innovation with user trust.
Adobe's Policy Rollercoaster
Adobe recently found itself in hot water when it updated its terms of service to include a clause allowing the company to use customer content to train its AI models. Users were outraged, expressing their concerns loudly on social media and considering moving to alternative platforms.
The backlash was so intense that Adobe swiftly reversed its decision, announcing a more user-friendly approach to AI training. This backpedaling was necessary not only to quell the uproar but also to maintain user trust, which is vital for Adobe's business.
Concerned creatives, fed up with Adobe's long history of awful customer service, decided to find alternatives, only to discover hidden terms in the cancellation policy that made them pay cancellation fees. This led the Department of Justice to take legal action against Adobe for hiding fees and making it difficult for consumers to cancel subscriptions. The lawsuit highlights the broader issues of transparency and user rights that are becoming increasingly significant as companies exploit user data for AI development.
Meta's Data Scraping Saga
Meta recently notified users that public posts and photos from Facebook and Instagram would be used to train its AI models. This change has sparked more debates about privacy and user control. Many users are frustrated by the lack of clarity around opting out of such data usage, raising concerns about consent and transparency. Meta's extensive data collection practices are again under scrutiny as users and regulators question the ethics and legality of using personal content without explicit permission.
The controversy over Meta's policy highlights the tension between AI development and user privacy.
As AI models become more sophisticated, they require vast amounts of data, much of which comes from users who may not fully understand or consent to its use. This situation poses a significant challenge for businesses that rely on these platforms for marketing and customer engagement.
Microsoft Copilot and the GitHub Dilemma
GitHub Copilot, Microsoft's AI coding assistant, was trained on publicly available code from the platform's repositories, drawing criticism and legal challenges from developers who argue their open-source licenses were ignored. The controversy underscores the broader issue of how companies utilize publicly available data for AI training, and it brings to light the complexities of intellectual property in the age of AI, where the boundaries of data ownership and usage are increasingly blurred.
Implications for Businesses
These policy changes and the consumer backlash have significant implications for businesses, especially those in the marketing sector. Companies must navigate the fine line between using AI for innovation and respecting user privacy and data rights.
Businesses must be extra careful about how they use their own data with AI-centered platforms, especially service-based B2B companies that handle their customers' business data. NDAs will now need to be updated to ensure that any information shared with vendors will not be fed into AI systems, for fear that sensitive information might be used as training data and leak into outputs.
For creators, there have been numerous cases of visual media being appropriated by AIs to generate images and videos. These AIs, trained on publicly available images from designers' portfolios and client sites, make it easy to recreate a particular style or aesthetic unique to a creator. Even Scarlett Johansson wasn't immune: OpenAI's GPT-4o release featured a voice that sounded too much like the actress's, in an apparent attempt to replicate the feel of the movie "Her."
Final Thoughts
I've been using AI a lot these days, for all sorts of things. It has been a time saver in many cases and has really enhanced workflows at my digital marketing agency, Third Wunder. I'm also growing more and more conscious of what data I can use where, when, and why. We have NDAs with our clients, and they trust that their information is safe and secure. Internally, we are having discussions about data security and safety, and we are considering alternatives to some of the tools we use in light of these recent policy changes and legal issues.
The trend of "enshittification" in platform policies is going to be a major concern going forward. Balancing AI innovation with user trust and ethical considerations will be top of mind for many. How can companies stay on the right side of that balance, adapting to new technologies while safeguarding customer trust?