TSA Terminal Archive

Generative Models Against Acceleration: Counterforces to Dromocracy and Dataism

Algorithmic Dreams: where data dissolves into surreal visions. These neural networks dream in frequencies beyond human comprehension, generating artifacts that resist quantification and demand slow contemplation.

In an age defined by velocity and data, two powerful ideologies have taken hold: dromocracy and dataism. Paul Virilio’s concept of “dromocracy” describes a social order governed by speed rather than democratic deliberation. In such a regime of acceleration, power belongs to those who can act and communicate fastest, and human agency itself risks being overshadowed by the relentless tempo of technology (Virilio, “Speed and Politics” 69). Meanwhile, historian Yuval Noah Harari uses dataism to describe the emerging belief that all important truths can be derived from data, and that the value of any entity lies in its contribution to information flow 1. Dataism treats the universe as a giant data-processing system, with humans as mere chips within it 1.

Generative AI models – neural networks capable of producing text, images, or music – are born from this twin culture of speed and data. They embody acceleration, trained on massive datasets at high computational velocity, and they arise from a data-centric worldview that presumes more data and faster processing yield better results. At first glance, these models reinforce dromocracy and dataism: they automate creativity and flood the infosphere with content at unprecedented speed 2. However, this essay advances a counterintuitive thesis: generative models, precisely through their surreal, chaotic outputs, can subvert the regimes of speed and data. Their inherent unpredictability and penchant for nonsense introduce ambiguity and slowness into an overly accelerated, data-driven culture. In what follows, we will examine how generative models might become counterforces to dromocratic speed and dataist quantification, rather than mere accomplices, by leveraging their surrealist capacities and non-quantifiable outputs.

Dromocracy and the Crisis of Speed

Paul Virilio famously argued that modern society is governed by speed: politics, culture, and warfare all obey a dromocratic law 3. In Virilio’s analysis, power belongs to those who can act or communicate the fastest, eclipsing even economic wealth in importance 3. “It is not material wealth but speed that significantly influences all political life,” Virilio observes 3. A state or corporation that can transmit knowledge or deploy forces quicker than its rivals holds the advantage. Thus, information technologies and rapid logistics create a new ruling principle – the sovereignty of real-time transmission. The result is a civilization constantly accelerating toward “frenetic standstill,” where every moment is saturated with instantaneous events yet deprived of lasting substance 4.

This relentless pace creates a form of temporal claustrophobia. As events and information pile up in real time, individuals struggle to process the deluge, resulting in anxiety and a sense of confinement. Virilio warns that when technology’s tempo outstrips human faculties, people become “observers to their own lives,” unable to fully engage or reflect upon their experiences 5. Thus, the rule of speed threatens not only traditional democratic rhythms but the very capacity of humans to meaningfully interact with their surroundings. Cultural critic Jean Baudrillard similarly described the “ecstasy of communication” in late 20th-century media – an era when screens and networks abolish interiority and distance, leaving an “obscene” surface of constant contact. Both Virilio and Baudrillard diagnose a crisis of speed: a world so fast that it becomes effectively static, an “overexposed” environment where meaning evaporates in the glare of real-time data. Depth, context, and memory are casualties of this acceleration. The faster we race, the more we feel we’re going nowhere, suffocated by immediacy.

Generative Models: Agents of Acceleration?

At first blush, generative AI models appear to turbo-charge the dromocratic regime of speed. After all, these systems can produce content at a machine’s pace, far outstripping human creative timelines. A text generator can compose news articles or marketing copy in seconds; an image model can output thousands of illustrations in the time a human paints one. Such capability seems to fulfill the accelerationist impulse in culture and industry – endless production, 24/7. Indeed, observers predict an exponential surge in AI-generated content: experts have speculated that the vast majority of online text, images, and media could be machine-generated within a few years 2 6. One analysis suggested that over half of web text was already either AI-written or AI-translated by 2023 6. With bots able to auto-generate unlimited amounts of synthetic content for virtually free, the internet is bracing for a deluge 2.

This mass automation of creativity feeds directly into dromocracy. It accelerates the media cycle and information flows to breakneck speed. News can be reported and proliferated faster; visual culture mutates overnight in viral spurts of AI-created memes. The quantity of output explodes, pressuring humans to consume and respond more quickly just to keep up. In a dromocratic view, generative models are like engines of hyper-productivity, intensifying the race. They epitomize the principle that “if something can be done faster or cheaper, it usually will be done that way,” even if finesse or depth are lost 7. By automating content creation, generative AI might accelerate the already dizzying pace of digital life to a new, more frantic pitch.

Yet this is only a first reading. It assumes generative models will be deployed in service of speed and efficiency – an assumption this essay will complicate. There is another side to these models: the strange, the unruly, and the ungovernable aspects of their output. Just as a high-performance car can be taken joyriding in aimless circles rather than raced to a destination, generative AI might be misused to subvert the very logic of acceleration it springs from. To see how, we must explore the latent surrealism within the machine.

Surrealism Within the Machine

Despite being products of rational engineering, generative models often behave like surrealist artists. Their outputs can be dreamlike, disjointed, and full of surprise. Neural networks do not think or create as humans do; they recombine data in ways that sometimes defy common sense. The results can be astonishingly weird: images of landscapes that melt into abstract shapes, sentences that slip into poetic nonsense, faces that look almost real but have uncanny glitches. In fact, some AI art explicitly embraces this dreamlike quality. Artist Mario Klingemann, a pioneer of AI art, uses neural networks to generate portrait-like images that are haunting and bizarre. His algorithms produce “surreal new imagery” by processing photographs through the alien logic of a machine mind 8. Klingemann has said he seeks algorithms that can surprise even their creator – outputs “different from anything I would expect,” as he puts it 9. This capacity to exceed expectation, to create something uncanny and novel, is precisely what aligns generative models with the ethos of Surrealism.

Historical Surrealism, the artistic movement of the early 20th century, was a rebellion against rationalism and mechanistic modern life. The Surrealists (André Breton, Salvador Dalí, etc.) championed fantasy, dreams, and the unconscious as a way to counter the “cold and abstract rationality” of industrial civilization 10. As art historians note, Surrealism shared Dada’s anti-rationalism, using illogical juxtapositions and dream imagery to jolt people out of common sense 11. Surrealist art was deliberately bizarre, meant to disturb and baffle the viewer 11 – not to confuse for confusion’s sake, but to re-enchant a world drained of mystery by data and utility 10. In many ways, generative AI’s oddest creations carry this spirit forward. When a text model hallucinates a story that no programmer anticipated, or an image model “dreams” in algorithmic patterns, the effect is similar to Dalí’s paranoiac-critical paintings or Breton’s automatic writing: the machine’s rational procedures yield irrational results.

Crucially, such emergent surrealism can slow down the frenetic pace of media consumption. A straightforward, factual piece of content might be skimmed and forgotten in seconds – perfectly fitting the accelerated tempo. But a confusing or ambiguous piece of AI-generated art forces the audience to pause and interpret. Confronted with ambiguity, we slow our reading to make sense of it, or we linger over a strange image to decipher its forms. The ambiguity and richness of these outputs demand interpretive effort, reintroducing a kind of slowness and depth. As one scholar of Surrealism observed, the goal was often estrangement: presenting familiar things in strange ways so that viewers must see them “as if for the first time,” breaking automatic perception 11. Generative models, when they err or produce seemingly incoherent outputs, inadvertently do something similar – they estrange us from our expectations. In this estrangement lies a subversive potential: the ability to disrupt the smooth, high-speed conveyor belt of information with moments of confusion, wonder, and reflection.

Contemporary artists are already harnessing AI’s surreal side as resistance. For example, Anna Ridler painstakingly curates data sets to train AI systems that generate art infused with personal or historical ambiguity (such as her piece Mosaic Virus, which uses tulip bloom images and cryptocurrency data to critique economic manias). Mario Klingemann, mentioned above, deliberately welcomes the neural glitch – his portraits flicker and deform, preventing quick identification and forcing longer engagement. These artists treat the generative model not just as a content factory but as a collaborator in pushing the boundaries of sense and nonsense. The machine becomes, in effect, a fellow Surrealist. In the process, they rescue AI from being merely an accelerator of mindless content and turn it into a tool for slow enchantment.

Tactical Sabotage of Speed

If generative models have an intrinsic capacity to produce nonsense and noise, that capacity can be weaponized against the regime of speed. Here we move from the aesthetic to the tactical: using AI to actively sabotage hyper-accelerated media systems. One strategy is overload. Since AI can generate content endlessly, some artists and activists consider flooding networks with purposeless or absurd material to gum up the works. For instance, one could unleash swarms of bots that post gibberish text, deepfake images with no clear meaning, or surreal memes en masse. In effect, this creates a denial-of-service attack on attention. The high-speed info ecosystem – always hungry for fresh content – gets choked with junk that looks substantive but isn’t. It’s a form of Dadaist protest via AI: inundating the channels of communication with anti-messages.

Such generative noise disrupts the expectation that every bit of data has meaning or utility. It slows down the ability to extract actionable information (since one must now sort signal from a sea of noise). This recalls the “poisoning” tactics in hacker culture, where activists might feed wrong data to surveillance algorithms to degrade their accuracy. For example, consider an anti-productivity bot that randomly generates and emails “productivity reports” filled with surreal statistics to a corporate office, forcing managers to spend time deciphering fiction. Or a social media bot that replies to trending political hashtags with Markov-chain babble, diluting the hashtag’s usefulness for propaganda. These may sound whimsical, but they illustrate a tactical ethos: using the generative power of AI to confuse and slow down systems that normally profit from speed and clarity.
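The Markov-chain babble mentioned above is simple to sketch. The following toy generator is illustrative only – the function names, the two-word context window, and the corpus are my own assumptions, not taken from any cited project:

```python
import random
from collections import defaultdict

def build_chain(text, order=2):
    """Map each tuple of `order` consecutive words to the words
    that follow it somewhere in the source text."""
    words = text.split()
    chain = defaultdict(list)
    for i in range(len(words) - order):
        key = tuple(words[i:i + order])
        chain[key].append(words[i + order])
    return chain

def babble(chain, length=20, seed=None):
    """Random-walk the chain to emit locally plausible but globally
    meaningless text -- statistically shaped noise, not signal."""
    rng = random.Random(seed)
    key = rng.choice(list(chain.keys()))
    out = list(key)
    for _ in range(length - len(key)):
        followers = chain.get(tuple(out[-len(key):]))
        if not followers:
            break  # dead end: no observed continuation
        out.append(rng.choice(followers))
    return " ".join(out)
```

Because each word follows plausibly from its immediate predecessors, the output skims like language while carrying no extractable message – exactly the signal-to-noise degradation the tactic aims at.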

There are already glimpses of this in practice. Some experimental projects deploy nonsense generators into Twitter and other platforms as a statement 12. Tools exist that assemble corporate-speak into absurd, unending strings of jargon – turning quick skimming into a futile exercise. Another group released thousands of GAN-generated faces of people who don’t exist, exemplified by “ThisPersonDoesNotExist”; such synthetic identities can dilute the integrity of facial recognition databases 13. Others have gone further, using adversarial facial cloaking to poison face-data collection and disrupt recognition systems 14. These acts function like digital sand in the gears, using AI’s prolific output to challenge the dromocratic imperative of instantaneous intelligibility. In the spirit of Surrealist provocation, they introduce chaotic elements into an otherwise optimized stream, thereby regaining some measure of control over tempo. Speed thrives on order and predictability; nonsense is its kryptonite.

Dataism and the Faith in Quantification

Parallel to the cult of speed is the cult of data. Harari’s concept of dataism captures the contemporary faith that given enough data and computing power, all aspects of life – biology, society, economy – can be quantified and optimized 1. Dataism extends the old Enlightenment rationality into an almost religious conviction: “Information flow is the supreme value” and anything that cannot be translated into data might be deemed irrelevant 15 1. In a dataist worldview, experiences and qualities are valued only insofar as they generate data points that algorithms can process. Emotions become biochemical data, social interactions become network metrics, art’s value reduces to engagement statistics. Harari even warns that dataism could lead humans to trust algorithms over their own intuition, surrendering decisions to data-driven AI in the belief that numbers know best 15.

This ideology has permeated business, governance, and personal life. We see it in the mantra “what gets measured, gets managed,” now applied to everything from health (counting steps, heart rates) to education (endless testing and metrics) to creativity (artists chasing likes and views as proxies for success). Dataism carries a promise of objectivity and efficiency – by quantifying, we can eliminate bias and irrationality. But it also imposes a narrow notion of value: if something can’t be counted, it risks being ignored or devalued. The danger, critics note, is a kind of reductionism that strips away the qualitative richness of human life. We begin to see the world through the lens of a spreadsheet, where optimization is the highest good.

In a dataist society, even art and thought are pressured to justify themselves in data terms. The rise of big data aesthetics (for example, installations that glorify data streams or networks) sometimes reinforces this mindset, celebrating sheer volume and processing as sublime. Harari himself likens dataism to a new religion – one that might subsume older humanist values in favor of total surveillance and algorithmic authority 15 1. Resisting dataism, then, means asserting the value of the unquantifiable: experiences and creations whose worth cannot be neatly expressed in a database.

Breaking the Dataist Illusion

Generative models, when used creatively, can poke holes in the dataist faith. One way is by producing outputs that are rich in ambiguity and noise, which resist easy quantification. Unlike a traditional program that might generate a precise, repeatable result, a model like a GAN (Generative Adversarial Network) or a large language model often generates content with surprising variation and even meaningless details. From a dataist perspective, such output might be seen as “garbage” – it’s not immediately useful or optimizable. But therein lies its subversive power. By embracing generative noise, artists and thinkers can highlight the limits of what data and algorithms can capture.
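The “surprising variation” of such models is typically governed by a sampling temperature. A minimal pure-Python sketch of that mechanism (the logits below are illustrative numbers, not any particular model’s output): higher temperature flattens the probability distribution over possible outputs, making stranger, less likely choices viable.

```python
import math
import random

def temperature_probs(logits, temperature):
    """Softmax with temperature: low T sharpens the distribution
    (the likeliest option dominates, output is predictable); high T
    flattens it (unlikely, 'stranger' options become live choices)."""
    scaled = [math.exp(l / temperature) for l in logits]
    total = sum(scaled)
    return [s / total for s in scaled]

def sample(logits, temperature, rng=random):
    """Draw one index from the tempered distribution."""
    probs = temperature_probs(logits, temperature)
    r = rng.random()
    cum = 0.0
    for i, p in enumerate(probs):
        cum += p
        if r < cum:
            return i
    return len(probs) - 1  # guard against floating-point rounding
```

Tuned low, the sampler behaves like an optimizer, always picking the safest continuation; tuned high, it drifts toward the kind of unquantifiable “garbage” the dataist perspective discards and the surrealist one prizes.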

Take the example of media artist Refik Anadol. He works with huge datasets (e.g. millions of images of a city, or years of climate data) but the end product of his AI process is not a set of charts or actionable insights – it’s an immersive visual experience that he pointedly calls a “hallucination.” In his Machine Hallucinations series, Anadol’s custom AI “dreams” out the data into swirling, abstract imagery on monumental screens 16. The data, instead of yielding practical information, becomes raw material for visual noise and wonder. One piece transformed 155 million images of nature into an evolving animation, essentially a data hallucination projected as art 16. This kind of work undermines dataism by showing that data can be used for imagination rather than surveillance or optimization. The value of the piece is aesthetic and emotional – qualities that cannot be measured in bytes or efficiencies. It’s a recovery of the marvelous from within the data deluge, echoing the Surrealists’ aim to “re-enchant the world through imagination” against the quantification and reification of modern life 10.

In music, consider Holly Herndon and her AI “baby” called Spawn. Herndon’s album Proto integrates an AI that was trained on her voice and her ensemble’s vocals. But instead of producing perfectly polished pop harmonies, the AI contributes eerie, glitchy textures – “words stuttered out against a shimmer-curtain of chorused voices,” as one review described Spawn’s output 17. The machine-learning vocals come out gritty and glitchy, full of artifacts and unexpected tones 17. Rather than correct these as flaws, Herndon features them as integral to the music’s emotion. The result is music that is at once high-tech and strangely human, precisely because of its imperfections and ambiguities. Spawn’s vocalizations are not easily quantifiable: they evoke feelings (somewhere between haunting and uplifting) that cannot be reduced to a data point. In this way, Herndon’s work breaks the dataist illusion that more data or more perfect processing automatically equals more value. Sometimes the noise, the inexplicable remainder, is what makes art powerful.

Both Anadol and Herndon illustrate a broader point: randomness, ambiguity, and excess in generative outputs can act as an antidote to dataism. These phenomena inject a degree of irreducibility – something that doesn’t resolve into clean information. For dataists, ambiguity is an enemy (to be eliminated by better algorithms); for the rest of us, ambiguity can be a source of depth and intrigue. By cultivating generative outputs that are emotionally resonant yet algorithmically unoptimized, creators push back against the idea that the ultimate goal is a frictionless, fully measured world. The symbolic excess – extra meanings, textures, or images that don’t serve a clear purpose – keeps alive the parts of human experience that data cannot fully explain or monetize.

Counter-Strategies Through Generation

Although generative AI models seem at first aligned with acceleration and quantitative values, they can paradoxically serve as sources of resistance to these prevailing cultural forces. Rather than merely amplifying the speed of content production and data-driven logic, generative models can be strategically reoriented toward fostering ambiguity, promoting interpretive depth, and cultivating deliberate slowness.

One effective theme is ambiguity as resistance. Instead of generating predictable and clearly-defined outputs, AI systems can be intentionally tuned to produce content that requires active interpretation. Ambiguous outputs invite audiences into slower, deeper engagement, interrupting rapid consumption patterns and fostering reflective, contemplative interaction.

Another critical theme is the intentional embrace of error and unpredictability. By valuing AI-generated glitches, surreal distortions, or unexpected outcomes not as defects but as purposeful disruptions, generative models can inject meaningful friction into digital experiences. This disruption serves as a reminder of the limitations of algorithmic perfection and challenges the expectation of seamless, accelerated consumption.

A related strategic theme is the deliberate cultivation of non-utilitarian content. Creating generative systems designed explicitly for playful or non-productive outputs helps resist cultural pressures to measure value solely in terms of efficiency and profitability. By producing content that defies easy commercialization or immediate utility, generative models emphasize qualitative, humanistic values over quantitative assessments.

Additionally, generative models can serve as agents of surreal intervention, periodically inserting bizarre or unexpected artifacts into otherwise streamlined digital environments. This tactic mirrors the historical surrealist practice of juxtaposing incongruous elements to provoke thought and disrupt automated patterns of perception and response.

Lastly, employing generative systems for data subversion involves deliberately introducing noise or misleading artifacts into datasets or algorithms reliant on clear, quantifiable data. Such strategies highlight the vulnerabilities of strictly data-driven systems, illustrating that not all valuable experiences or knowledge can be neatly quantified or algorithmically processed.
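As a toy illustration of such data subversion, consider flipping a fraction of the labels in a training set. Everything here (the function name, the flip-rate parameter, the label alphabet) is a hypothetical sketch, not a description of any deployed tool:

```python
import random

def poison_labels(dataset, flip_rate, labels, rng=None):
    """Return a copy of (features, label) pairs in which a fraction
    `flip_rate` of labels is replaced by a random *wrong* label,
    degrading any model naively trained on the poisoned data."""
    rng = rng or random.Random()
    poisoned = []
    for x, y in dataset:
        if rng.random() < flip_rate:
            y = rng.choice([l for l in labels if l != y])
        poisoned.append((x, y))
    return poisoned
```

At a flip rate near 1 a model trained on the result learns mostly noise; even modest rates can degrade accuracy, which is the point of the tactic: quantifiable inputs no longer guarantee reliable outputs.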

Collectively, these thematic counter-strategies aim to redirect the immense generative power of AI away from reinforcing speed and quantification, and toward promoting humanistic, reflective, and critical engagements with technology.

Conclusion

Generative models stand at the crossroads of our accelerated, data-driven era. They are unmistakably products of dromocratic and dataist trends – built by harnessing massive data and computing speed – yet they also harbor the seeds of an antidote. As we have argued, the very aspects of these AI systems that might seem like flaws from a utilitarian perspective (their unpredictability, their occasional nonsense, their “hallucinations”) can be reframed as features that inject slowness, ambiguity, and creativity back into a society starved of those qualities. In a world where speed has become tyrannical, the absurd slowness induced by a confusing AI image or story can be liberating. In a culture that believes “the value of any phenomenon is determined by its data processing contribution”, an AI-generated artwork with no clear utility is quietly revolutionary 1.

Of course, this counterforce potential is not automatic. It requires intentionality – deliberate mis-use of AI against the grain of efficiency. Left to corporate devices, generative models will indeed likely intensify dromocracy and dataism, pumping out ever more content and feeding the data mills. The task for artists, thinkers, and technologists who are wary of those regimes is to seize the generative tools and redirect them toward imagination rather than acceleration. This might mean accepting, even celebrating, when AI produces the illogical or the unprofitable. It means designing systems that privilege curiosity over click-through rates.

In closing, generative models are a double-edged sword. They encapsulate the accelerative, quantitative excess of our time, yet they also offer a means to transcend it. Much like the Surrealists leveraged the tools of modernity (photography, film, automatic writing) to subvert modernity’s rationalist strictures, we too can leverage neural networks and algorithms to subvert the totalizing demands of speed and data. By embracing the surrealist potential in the machine and fostering outputs that confound easy consumption and measurement, we reclaim a space for slowness, mystery, and human depth. In doing so, we begin to loosen the grip of dromocracy and dataism – not by rejecting technology, but by playfully and critically twisting its purposes. The ultimate hope is that in this dance with our generative creations, we rediscover our agency and freedom: the ability to say no to the pace of the racing clock and yes to the unpredictable, immeasurable riches of creative chaos.

Throughout this exploration, one thing has become clear: acceleration and quantification are not inexorable fates. They can be met with counterforces that are already emerging from within the system itself. Generative models, unlikely children of acceleration and data, may yet become the harbingers of a new counterculture of thoughtful slowdown and qualitative value. It is up to us to guide them in that direction, turning the hallucinatory power of AI into a tool for humanistic resistance – a rebellion in code against the tyranny of speed and the cult of data. In an era of dromocratic dataism, perhaps only the dream logic of our own machines can shock us awake.

Footnotes

  1. Dataism - Wikipedia

  2. 99.9% of Content Will Be AI-Generated by 2025: Does Anyone Care? | HackerNoon

  3. Dromology: Media, speed and a negative horizon | Medium

  4. Paul Virilio, Rasender Stillstand [Frenetic Standstill]. Essay (1992), München: Hanser Verlag

  5. Virilio, “The Administration of Fear” 45

  6. Is AI quietly killing itself - and the Internet?

  7. Photography through the Eyes of a Machine | EyeEm

  8. Photography through the Eyes of a Machine | EyeEm

  9. Photography through the Eyes of a Machine | EyeEm

  10. Surrealism as a revolutionary movement – Anticapitalist Resistance

  11. Surrealism Movement Overview | TheArtStory

  12. Twitter ‘Joke Bots’ Shame Human Sense of Humor

  13. This Person (Probably) Exists. Identity Membership Attacks Against GAN Generated Faces

  14. Fawkes

  15. Dataism - Wikipedia

  16. Refik Anadol’s Mesmerizing Data Paintings Are Captivating Audiences Worldwide | Artsy

  17. Holly Herndon: Proto review – dizzying beauty and bracing beats | Electronic music | The Guardian