On 26th November the Media Reform Coalition hosted a public panel discussion at the London School of Economics, asking “Is Big Tech too big to regulate?”
Global tech giants like Amazon, Google, Microsoft and Meta dominate the digital media landscape. Their massive concentrations of control and ownership over digital communication, e-commerce, social media and new technologies like AI are restricting the potential of the internet to support freedom of expression, inclusive public debate and individual empowerment.
As we have argued in our 2024 Media Manifesto, we need public alternatives to the dominant platforms and technologies that define our shared digital spaces. But are these corporate titans simply too big, too global and too dominant to regulate in the public interest? How have different countries tried to curb the power of Big Tech, and what could alternative models look like? Chaired by Professor Lee Edwards of the LSE and Media Reform Coalition, our panel of media practitioners, legal experts and tech policy researchers discussed these challenges and more.
Lexie Kirkconnell-Kawana, CEO of the independent press self-regulator Impress, opened the panel by describing how Big Tech giants have shaped the digital services market. “In a world of scarcity,” Lexie noted, “human attention is one of the last few assets from which a company can extract and accumulate wealth in the digital markets.” This has driven what she terms the ‘appliance-isation’ of all aspects of private and public life, with work, commerce, entertainment and public services increasingly defined by digital apps owned and controlled by a handful of dominant tech companies. This immense market power has allowed these companies to establish digital enclosures, which permanently tie users into a homogeneous information environment. Lexie linked this to the ‘enshittification’ of digital services, a term coined by Cory Doctorow to describe the pattern of tech companies “amassing a user base, asset stripping the product of value, asset stripping their customer base, and then creating very high barriers to exit those services.”
Lexie went on to explain how tech companies have so far been able to deflect regulatory attention by obfuscating their functions under the banner of innovation, in turn standardising illegal practices that erode regulatory compliance across the sector. Leading tech players have built their market dominance on this model. Lexie highlighted price-fixing and workers’ rights violations by ride-sharing app Uber, as well as digital marketplace Temu, which keeps users active through the addictive “gamification and glamourfication” of the shopping experience to support an underlying business model of breaching and manipulating users’ data and privacy rights. The combination of these companies dominating horizontal supply chains and exploiting anticompetitive tools across global markets has created unprecedented market concentrations, which Yanis Varoufakis has critiqued as a shift from traditional capitalist market capture towards ‘technofeudalism’.
The monopolisation of tech platforms has not only created unavoidable risks for users – such as the faulty software update that crashed Microsoft Windows systems worldwide in 2024 – but also entrenched predatory market behaviours, like Meta’s buyout (and later sale) of the CRM tool Kustomer. “These companies aren’t practising innovation,” Lexie argued, “they are practising ‘ex-novation’. They’re buying up their competitors, and if those companies don’t have any brand recognition or utility they stomp them out of existence, because the big companies aren’t interested in creating better products or services.”
Looking at the evolving regulatory environment, Lexie noted efforts to create mechanisms for curbing the market power of tech companies, such as the UK’s Digital Markets, Competition and Consumers Act and the EU’s Digital Services Act. However, she also cautioned that regulatory enforcement faces serious challenges. “Even in a perfect regulatory sandbox with full implementation and enforcement, my concern is we wouldn’t see effective change, because these business models are built on illegal practices, and without those these businesses won’t be able to function.” Lexie raised the ancestry and genomics company 23andMe as an example: despite the huge historical and anthropological value its service offers customers, the company’s business rests on exploiting users’ data to sell to the highest bidder – a model which, following a major security breach last year, has left the company on the verge of collapse.
Dr Vincent Obia, Leverhulme Early Career Fellow at the University of Sheffield, picked up this theme by explaining how African countries’ efforts to regulate Big Tech face steeper barriers than those in the Global North. How tech companies operate in these countries has broadly mirrored the regulatory models that have emerged across what Anu Bradford has termed the three ‘Digital Empires’. The United States, with its focus on free expression and free markets, has fostered the rapid growth of the “tech behemoths” and protected them from intervention through measures like Section 230 of the Communications Decency Act. In China, Big Tech has not proved ‘too big to regulate’, with the state employing a sliding scale of lax and rigid enforcement to fine, moderate and in some cases control tech companies. Under the European model, which Vincent described as based on citizens’ rights and democracy, a focus on due process and legal accountability faces the intrinsic challenge of trying to regulate companies based outside the EU’s legal domain.
Vincent emphasised that the market power of tech companies dwarfs some African nations’ own domestic markets, and African governments have become increasingly reliant on Big Tech to provide key national services – giving these companies considerable influence over debates on regulation. There is also a disparity in the technical capacity and tools available to some countries, which makes it harder for policymakers to understand how platforms operate and what regulatory options might be available. Drawing on his PhD research into social media and tech regulation in Africa, Vincent highlighted five streams of regulatory intervention used across the continent: legal restrictions on online behaviour (such as Egypt’s ‘falsehood’ laws); requirements for official user registration (as in Tanzania); taxes on social media and tech companies (as in Uganda); social media bans or internet shutdowns (as in Congo); and state-sponsored distortion techniques, where states “flood social media spaces and information ecosystems with all kinds of content to distort users’ perceptions of reality”.
Together these approaches represent what Vincent defines as ‘regulatory annexation’. Countries like Nigeria – facing huge disadvantages in their balance of power with tech platforms, and dependent on structural interventions established in Western regulatory settings – focus their regulatory efforts on users’ behaviours, reflecting models of state intervention in ‘traditional’ media like broadcasting and the press. Vincent noted that this approach to regulating Big Tech is part of a broader policy of ‘regime security’, with states protecting their own political or economic controls rather than protecting or enhancing citizens’ rights and interests. This has afforded countries like Nigeria a bargaining chip with large tech platforms which, not wishing to lose access to large profitable user markets, are more likely to negotiate with states to avoid platform bans or more punitive interventions. However, the power imbalance persists, as shown by Nigeria lifting its seven-month ban on Twitter in 2022 on an assurance that Twitter would establish a corporate office in the country – which Vincent noted has still not happened.
Vincent’s research also found that key actors in Nigeria’s debates on social media regulation preferred the limitations and business models of tech companies to the imposition of controls and bans by governments. “There was a strong feeling in those I spoke to of massive state-citizen distrust, of ‘we do not trust the government. We know platforms’ invasive data practices, we know their exploitative business practices, but we prefer these to government controls.” Although Nigeria has recently developed a code of practice to regulate the tech companies themselves – including measures to hold platforms accountable for hate speech, mis/disinformation and other ‘tech harms’ – this has not been implemented, and Vincent does not envisage it ever being implemented “due to these issues around asymmetries of power”. In looking for opportunities to regulate Big Tech for the public good, Vincent argued for a more principles-led approach to regulatory intervention, in particular by challenging what Jonathan Corpus Ong terms the ‘illusion of inclusion’ present in most tech platforms’ features. Regulation should instead build genuine ‘powers of participation’ for users, civil society and technical communities to control and enhance their own rights in digital spaces.
Our final panel speaker, Katie Heard, expanded on these debates by describing the barriers facing the 8 million people in the UK who are currently excluded from digital spaces and tech platforms. Drawing on her work as Head of Research at the Good Things Foundation, the UK’s digital inclusion charity, Katie pointed to poverty, access and skills as the key factors driving digital exclusion, which should play a central part in debates on the purposes and priorities of regulating Big Tech. Her research has identified approximately 8.5 million people in the UK without the necessary skills to get online, and as many as 2.4 million households that can’t afford a mobile phone contract or internet connection.
“People who are already excluded from society in multiple ways are also excluded digitally. There are many people who can’t afford to access the internet, might not have a mobile device they can rely on, but also don’t have the skills to switch on a device, to connect to WiFi, or to use the basic applications many of us take for granted – many of them controlled by Big Tech.”
Katie argued there is a need for “a much softer and more supportive environment” to overcome the fear, uncertainty and misunderstanding that many digitally excluded people experience in their relationship with Big Tech. She pointed to the prevalence of newspaper headlines about online scams and data misuse, which feed into people’s worries about how handing information to opaque digital platforms might affect their benefits or their healthcare. The personal experience of walking into a doctor’s surgery and clearly seeing the systems and the people handling their personal data is a world apart from the experience and perception of online services. “The smallest change to a relatively straightforward and trusted service like the NHS app, even just buttons moving around on a screen, can be a huge knock to somebody’s confidence in a system, especially if those are their first steps into the online world.”
Moving from digital exclusion to inclusion often starts with community-led, one-to-one support for individuals, but Katie noted that taking the next step of building people’s digital skills and literacy is a much bigger challenge. Informing people about how Big Tech systems work and how to keep themselves safe also has to contend with building basic skills – connecting online, installing apps and using services – in an environment that is ever-changing and ever-evolving. “Someone might start with signing up to the NHS app, then installing a banking app, and these are mostly safe, but then they start to shop or use social media, and that tiny bit of knowledge about keeping themselves safe is exposed to a risky and unregulated space.” Katie also highlighted new research by the Good Things Foundation finding that, despite the potential for simple AI tools to help bridge the digital divide, digitally excluded people feel a high level of fear and anxiety about AI, and a sense that it lacks relevance to their lives.
Turning to regulation, Katie noted that the framing of these debates often focuses on tackling challenges or threats from the platforms themselves, rather than on what regulation will achieve for the end user. In the UK, both the regulatory approaches and the social relationship with Big Tech lag a long way behind how these platforms and digital environments have developed. Orienting policy responses towards inclusion, by “encouraging these organisations to make sure they’re bringing everybody along with them, and giving the most disadvantaged the skills, knowledge and equipment to participate in these new digital spaces equally”, may be a better way to apply regulation and give Big Tech more active responsibilities to the public. However, Katie cautioned that those who stand to benefit the most are also highly unlikely to be aware that these regulations exist, or that they are making much difference to their day-to-day interactions with digital media.
Big Tech companies and new digital platforms have become tightly woven into our daily lives across work, entertainment and social interactions, so debates about regulation are closely bound up with questions about our own security and public well-being. Yet the dominance of these platforms, and the widespread reliance of businesses, governments and civic structures on Big Tech, also creates a serious societal risk. Heavy-handed or ineffective regulation that led to these platforms disappearing would also mean the loss of a wide range of vital services and social connections, with even greater harms to those most affected by economic, cultural and digital exclusion. This makes it even harder to balance the curbing of global political and market power on the one hand against the protection of individual users’ rights and freedoms on the other.
As part of the panel’s closing Q&A session, Lexie summarised this conundrum and the need to put the public interest at the heart of any regulatory debate on Big Tech:
Are we satisfied with just chipping away at the edges, of ensuring these companies comply with existing antitrust laws or data laws? Or do we want regulation to fundamentally reshape Big Tech, as intrinsic services in our day-to-day lives, for the public good? If we’re going to bring those 8 million excluded people into this digital landscape, why are we bringing them into an incredibly toxic and harmful environment? It’s a bit like giving toddlers cigarettes just because the wider public is exposed to the harms of smoking.
On the current direction of travel, it seems certain that Big Tech companies and the services they operate are set to become more commercialised and more intent on trapping users within exploitative systems, all while holding monopolistic control over more and more of the basic infrastructure that makes up the digital realm. The fundamental question is whether our political and regulatory approaches have the moral consensus, and the technical understanding, to decide if this is what society wants our shared digital spaces to look like.