My speaker notes from the 'Resisting Big Tech Empires' conference, Sat 25th April 2026.

Opening plenary Q1: "What do we mean when we talk about Big Tech Empires?"
I'm not sure there are Big Tech empires, but there are definitely Big Tech corporations in the middle of a world system collapse. The so-called rules-based order was always a cover story, but now it's over anyway. Power is shifting; it's an interregnum, a time between, and AI is a morbid symptom of it.
By AI, I specifically mean anything built on neural networks, which includes transformer models like ChatGPT. It's important to realise AI isn't a powerful technology, at least not in the sense that's claimed for it. All it can do is correlations; there's no causation, no physics model - just pattern-finding, and now pattern-generation. It's also very reductive and abstracting, and not really able to deal with the context or complexity of the world. As a technology, it's hugely complex but very flaky. However, it is the point of convergence for a number of powerful forces who all see it as a way to hold onto power under rapidly changing conditions.
AI definitely has empire-like aspects, especially its need for continuous growth and expansion, which the industry calls 'scaling'. This is the push from millions to billions to trillions of parameters; a version of so-called growth which, just like GDP, is essentially sociopathic. The accelerating growth is materially visible in the way data centres suck up electricity and water, and the way chip manufacture demands control over mineral supply chains. Another major overlap with empire is eugenics. The 'I' in AI, or the concept of intelligence, comes straight from Galton and Pearson, who justified the British Empire via biology. The computations of AI repeat this kind of stratification and segregation.
While Big Tech may not be an empire in itself, the AI corporates have some characteristics that are very reminiscent of the East India Company; they're very ideological, they're very interventionist, and they're increasingly wedded to military adventurism. Unfortunately, the aspect of AI which sabotages its usefulness in social settings, which is its innate tendency to generate collateral damage through out-of-distribution errors and so-called hallucinations, ceases to be a problem in conflicts where maximum indiscriminate damage is the actual point.
I think it's equally important to focus on energy. It's been so striking to me to see the boasts about AI rendered not as computation, or floating-point operations, but as energy use in gigawatts. This is the imperial pattern; expansion driven by Promethean tech and by burning as much energy as possible. It looks to me a lot like Total Mobilisation, a term coined by the ultra-nationalist Ernst Jünger to describe the channelling of the entire material and energy resources of a nation into a new technological order; "the conversion of life itself into energy" as nations are "driven relentlessly to seize matter, movement and force through the formalism of technoscience". Total mobilisation legitimates a new form of political order based on the vitalism of conflict.
Whether or not that's what's happening, AI itself is an engine of precaritisation and necropolitics, that form of power which not only discriminates in allocating support for life but sanctions the operations that organise neglect and increase vulnerability to death. Its direction of travel is defining who is more disposable. This doesn't make it Skynet; it's not sci-fi but a direct extension of structural forces, especially racism, misogyny and anthropocentrism.
AI is useless for social reproduction, because a shoddy simulation is no substitute for real care. You can't make worker-friendly AI; it's anti-worker from top-to-bottom. You can't make planet-friendly AI; it burns resources and produces bullshit. The whole AI assemblage is Epstein tech; not simply because the guy himself was involved in it, but because the apparatus of AI is a concentrated network of masculine power which enables abuse at scale. AI is textbook accelerationism, a belief that we should go further and faster, that our tech should be overclocked to the point of meltdown in order to break through the barriers between us and a new era.
The good news is AI isn't inevitable, but there's no going back to a pre-AI age, because AI is an amplifier for the problems we already had. It encourages thoughtlessness in decisions and in expression, and it forces an exclusionary normativity. What it shows us is the wider nihilistic solutionism of those who can't bear to give up one inch of power. AI is a mask-off tech for a very mask-off time; it's a technological reminder that, in the eyes of the powerful, the rest of us are expendable.
Anywhere AI is proposed as a solution, there are already deep problems that need fixing in a different way, usually by more people doing people-stuff; doctors, nurses, teachers and care workers. But we also need to question the ideologies that underpin AI, like productivism and efficiency, and of course the continuation of colonialism which makes any of this possible.
There's a widespread compulsion to preface any criticism of AI by saying 'of course it has great potential for XYZ'; it mostly doesn't, and we just need to stop defending it. Every day we make excuses for AI is another day of real material harms; another day when real problems are hidden behind a smokescreen of high tech, and another day when we're sleepwalking into a much more fascist world.
Opening plenary Q2: "What should our strategies be for resisting these big tech empires?"
Collective refusal can start anywhere, for example in school or university, with the rejection of the technological offloading that research shows makes us less critical. It can start at work, where more and more people are being forced to use AI, like it or not. Unfortunately, most trade unions seem to have drunk the AI Kool-Aid. Being against AI is a growing social movement that isn't yet conscious of itself.
The fact that so many people are starting to hate AI brings its own risks. When we're pushing back, we need to be careful who we're standing with. We need to be wary of doomers, who believe the risk of AI is a superintelligence that will kill us all; a delusion that has more in common with Silicon Valley oligarchs than with the rest of us.
We also need to be wary of policy-types talking about AI sovereignty because that's just nationalism and populism, stoked by domestic elites who are panicking about their place in the new world order. As if scattering flag-flying data centres across the homeland will make AI less harmful! The only kind of sovereignty AI delivers is the one Nazi jurist Carl Schmitt defined as the power to declare the state of exception.
AI has shown us that if we care about justice then we need to pay attention to technopolitics. I'm suggesting we start with a technopolitics of decomputing which, as a simple rule of thumb, is something like: people first, computing last, AI never.
Resisting hyperscale data centres is a great start, as their grid-stressing and water-wasting is a form of low-intensity conflict against local communities. However, stopping the enclosure of electricity and water also raises wider questions, like who controls vital common good resources and how do we re-socialise them? It's an opening for what we could call technopoliticisation, where we see that data centres also drive overlapping social harms, like algorithmic exploitation in Amazon warehouses, gig work and outsourced data labour, as well as AI-powered targeting for welfare sanctions, deportations and weapons systems.
Like all capitalist technology, the core of AI is expansion and growth. Decomputing is a technopolitics of degrowth, which isn't about all-round austerity but removing the drive for infinite expansion. We need to stop fuelling empire and break its circuits of necropolitical value creation. Decomputing is about deautomatisation; about extracting ourselves from the patterns of machinic relations which are amplified by tech. It's about mutual aid that comes from the recognition of mutual vulnerability, and care that doesn't depend on classifying people according to algorithmic boundaries. Decomputing adopts the approach to tech development outlined in Illich's tools for conviviality; developing tools that enable autonomy and adaptation, rather than the conditioned responses demanded by manipulative systems.
Decomputing means the prefigurative resistance of workers' and people's councils, where direct democracy is a counter to automatisation and collective struggle under conditions of consensus transforms inherited social patterns. Intersecting layers of horizontal governance at local, regional and even international level are not only conceivable but considerably less complex than the tech we're supposed to bet our future on.
Decomputing is a prioritisation of the periphery and the pluriverse when it comes to visions of future technologies. The aim of decomputing is to deactivate existing computational power, and to return to common use the technical means for making our own arrangements and for infrastructuring the common good.
Afternoon panel: "Is this Technofascism? Rethinking the far right in the age of AI"
[note: on the day I skipped the quote from the ICE resistance organiser for reasons of time, but I've kept it here]
Talking about technofascism means taking both the political and the technological seriously; seeing them as distinct but inseparable, and as co-emergent. I'm going to start on the other side of this binary from Nafeez's talk, i.e. with the tech, to try to explain why AI as a technology is closely wedded to fascistic solutionism.
I think this is important, even if your main concern is fascism not AI, because fascism doesn't return as a tick list of characteristics. Fascism is a fluid and dynamic force which operates at the level of social desires before it appears as jackboots. We need to learn to sense it in our infrastructures as well as in our political discourse.
It's important to remember that contemporary AI isn't just generative but also predictive. The core computational mechanism of all this AI is correlation not causation, which means it's already a form of computational conspiracy theory with no grasp of structural causes. As these systems become pervasive they start to tip the balance of different decisions, producing algorithmic states of exception; forms of exclusion that render people vulnerable in an absolute sense.
Such an apparatus is attractive to the state and corporations, not despite its inability to address structural questions, but because of it.
It's not only that eugenics never went away in the dreams of Pioneer Fund recipients; it never went away in our systems of welfare and disability, which became shockingly clear during Covid when 'pre-existing medical condition' became a soothing proxy for 'disposable'. What we arrive at with the merger of opaque AI and existing bureaucracy is what Hannah Arendt called thoughtlessness; the inability to critique instructions, the lack of reflection on consequences, and a commitment to the belief that the correct ordering is being carried out.
People expected great things from the EU's AI regulation while forgetting that's the same entity intent on drowning as many refugees in the Mediterranean as possible. AI has become another cog in the already existing machinery for shifting societies towards authoritarianism. By filtering and classifying out-groups as the cause of shoddy services while obscuring structural forces that are actually to blame, it becomes the technological piston in the self-reinforcing cycle created when governments adopt the rhetoric of the far right.
The carelessness at the core of AI makes it useless for social reproduction but very attractive to those who want machinery for mass deportation. The patriotic rhetoric about the UK as an AI superpower is not only nonsense but legitimises the systemic build-out of repressive infrastructure ready for Reform UK to deliver on their minimum demand of 600,000 deportations.
It's right and just to point to the environmental harms of data centres, but we should realise that the real energy source powering AI isn't methane-polluting gas turbines or even the fantasies about nuclear fusion, but fear. The techno-patriarchy everywhere fears the loss of unquestioned mastery. Better to accelerate the burn than admit fragility or allow the rest of us to have a real say in the conditions of our own existence.
So I think the important question isn't so much 'is this technofascism?', but 'what does anti-fascism look like under technopolitical conditions?'.
One thing that both fascism and AI have in common is their need to convince us of their inevitability, which is a giveaway that they're still vulnerable. AI is actually a very shaky system; it's materially unsustainable and floating on a financial bubble. What we need is an alternative technopolitics that enacts what Ivan Illich called counterfoil research: "providing guidelines for detecting the incipient stages of murderous logic in a tool" and "devising tool-systems that optimize the balance of life, thereby maximizing liberty for all".
We don't need to keep a human in the loop, we need to get AI out of the loop altogether, because the whole point of AI is to remove the frictions of individual conscience and collective refusal. Musk's efforts with DOGE are a warning here; it turns out that having centralised, bureaucratised and digitised institutions makes them vulnerable to a cyberattack from within by technofascist nerds. We need more sociocratic patterns, with circular feedback loops up and down the organisation, and policy changes happening when there are no remaining 'paramount objections'.
The current hype about AI agents is another example; while startup kids in San Francisco launch 10x agents in between popping peptides, the real impact of these LLMs-in-a-loop will be to trash whatever service they touch. It's not that AI agents will multiply our power but that their unreliability reflects the broader point of AI, which is to strip us of agency altogether.
We need to reclaim agency through assemblies and workers' and people's councils, as a basis for collective action and counter-power. On that note, I'd like to quote one of the organisers of the resistance to ICE in Minneapolis:
"In the first days of December, as it became clear that the ICE invasion was a real thing that was really happening to us, as groups of us gathered swapping rumors about the kidnappings and clearly inadequate tips about phone security, we had no idea what to expect, no idea what would happen, no idea what we were going to do. Only one thing was crystal clear: nobody, absolutely nobody, was coming to save us. It was clarifying. We knew, with complete certainty, that... if we don’t stand in their way when they come to kidnap our neighbors, nobody will stand in their way. If we don’t try to feed people who can’t work, can’t even go outside to get food, nobody will feed them. It put things into focus really fast."
We need to learn from systems that support self-organisation under emergency conditions, whether it's resistance to ICE or hurricane recovery, as ways to survive what's coming as well as to prefigure alternative futures.
The time is ripe for experiments in infrastructured mutuality, and if we want to start figuring out an anti-fascist approach to tech we should start with all the people who are being marginalised by the current approach. The disability movement, for example, has a ton to teach the rest of us about adapting tech for survivability in a world that is designed to be hostile to your very existence. As I said in the opening plenary, we need a programme of decomputing as an anti-fascist approach to infrastructuring the common good.