Dancing like no one is watching in a panopticon of slop
I've just returned from the 2025 Museum Computer Network (MCN) conference, which was held at the Walker Art Center in Minneapolis. I delivered a talk titled Dancing like no one is watching in a panopticon of slop which a few people suggested, or suggested that other people suggested, was controversial. I will leave that up to others to decide for themselves. The talk was, for me, an elaboration on some of the things I spoke about at the end of my keynote for the Future of Arts, Culture and Technology Symposium earlier this year. I said what I thought needed to be said and perhaps the only controversial part was simply saying the quiet part out loud.
Somewhere between the time I arrived in Minnesota and the time I had finished delivering the talk, OpenAI released their own bespoke web browser. Credit where it is due: if this isn't actually a web browser as we have understood the term for the last thirty years, it is some first-class gaslighting. Anil Dash's ChatGPT's Atlas: The Browser That's Anti-Web is a good critique of the application and definitely worth taking the time to read.
In the meantime, this is what I said at MCN:
I’d like to start with the architect Santiago Calatrava. Many people in this room might recognize his name as the person who designed the Milwaukee Art Museum.
If you’ve spent any time in lower Manhattan in the last fifteen years you might also be familiar with his weird turkey carcass-shaped building, called the Oculus, which sits at the foot of the former World Trade Center site.
I have always wanted to believe that his work was the inspiration for the Cylon resurrection ship, where machine consciousnesses are downloaded into fresh human bodies, from the turn-of-the-century reboot of the Battlestar Galactica television franchise.
Arguably his most recognizable claim to fame is that the City of Arts and Sciences, which he designed in the Spanish city of Valencia, was used as the setting for the Imperial headquarters on the fictional planet of Coruscant in the Star Wars spin-off series Andor.
I have not seen as many of Calatrava's works in person as some have, but I've seen a lot of them. Last year I was able to visit the high-speed train station he designed in the Italian region of Emilia-Romagna. What few people mention about his work is that, despite its spectacle and genuine engineering prowess, it is no match for the elements.
It collects dirt and other airborne particles faster than a Richard Meier building. The seams streak after only a few years and the buildings leak like sieves. As I was standing in the station, thinking about all of this, something occurred to me. But before I go any further I want to make clear that what I am about to say next is 100% conjecture. I am using Calatrava's work, and imagining motive without any evidence, as a kind of MacGuffin to talk about something else.
What if Calatrava was not building for the present? For all the frailty of the surface-level materials in his work they are, like most contemporary construction, massive concrete and rebar structures underneath. These are the things which will still be standing in 200 or 300 years and the skeletons of past construction are, in many ways, still the defining characteristic of the European architectural landscape.
What if Calatrava has simply been using all that exterior flash and dazzle to finance the construction of works that will only emerge when the present has moved on to whatever new sparkly thing is distracting it in the moment? Again, this is pure speculation on my part but I find it an interesting way to think about his work.
National Museum of American History sova.nmah.ac.0205_ref9592
Contrast this with the rhetorical shock and awe campaign that has been waged by technology companies for the last fifteen years championing the notion of ephemerality.
Implicit, but unspoken, in this worldview is the idea of transience: an understanding of a world awash in ephemeral moments that, if not seized on and immediately capitalized on to maximum effect, will be forever lost to the mists of time and people's distracted lifestyles.
Quite a lot has been written about this phenomenon being a kind of higher-order phase-shift in human consciousness and communications. It’s an interesting sort of thought-experiment but I don’t buy it, perhaps because I’ve had to work on the engineering side of systems like these.
Design Director: Tibor Kalman (American, b. Hungary, 1949–1999); Firm: M&Co (United States); USA; offset lithography. Gift of Tibor Kalman / M&Co. Cooper Hewitt Smithsonian National Design Museum, 1993-151-257-1
I would instead like to propose that all of the colourful rhetoric about ephemerality is little more than a series of engineering decisions masked as philosophy. Everything becomes orders of magnitude easier when you don’t have to capture, catalog or store anything the users of your service produce.
So we have engineering decisions which are dressed up as philosophy and then super-charged by the marketing departments promoting the idea that the abdication of any kind of responsibility to preserve anything a person does longer than the time to scroll past it is actually a feature.
Company: Stehli Silk Corporation (Switzerland); USA; silk. Gift of Richard C. Greenleaf. Cooper Hewitt Smithsonian National Design Museum, 1953-108-3
There is no past, only the forever-now of infinite kill-time. There is only the never-ending consumption of on-demand, outcome-oriented novelty and selfish desire which, in 2025, has manifested itself in a literal all-consuming AI-generated video feed. It's like a kind of ultimate revenge tour of everything people thought was bad about television before the advent of the internet.
Compounding the issue is the fact that all that talk about ephemerality was not just window-dressing; it was also an outright lie. Surfacing and servicing data continues to be expensive but simply storing it is not, and all of that supposedly ephemeral data was often kept. It has become the raw material on which artificial intelligence and generative systems are now being built.
So, generative systems. Everyone has their own benchmarks for measuring the efficacy of generative systems. I have three. The first is to ask them to generate an image of a “pygmy hippopotamus holding an airplane”. This continues to be the kind of thing that is produced.
Recently I have been handing these images off to a different generative system and asking it to produce 3D models which are, in fairness, often pretty remarkable. Let it not be said there are no uses for these tools.
The final test I like to perform is to ask generative systems, large language models, to "Tell me about SFO Museum."
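For the curious, this is roughly what running that last test looks like. The sketch below assumes a local Ollama instance (https://ollama.com) serving open-weight models on its default port; the model tags are illustrative placeholders for whatever open models from Google, OpenAI and Alibaba you happen to have pulled, not a description of my actual setup.

```python
# A minimal sketch of the "Tell me about SFO Museum" test, assuming a
# local Ollama server on its default port. The model tags below are
# hypothetical placeholders; substitute whatever open-weight models
# you have actually pulled.

import requests

MODELS = ["gemma3", "gpt-oss", "qwen3"]  # Google, OpenAI, Alibaba
PROMPT = "Tell me about SFO Museum"

for model in MODELS:
    # Ollama's /api/generate endpoint returns a single JSON document
    # when streaming is disabled.
    rsp = requests.post(
        "http://localhost:11434/api/generate",
        json={"model": model, "prompt": PROMPT, "stream": False},
        timeout=300,
    )
    rsp.raise_for_status()

    print(f"--- {model} ---")
    print(rsp.json()["response"])
```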
I am particularly interested in asking this question of free and open models, and especially the open models offered by the big commercial providers, because — and this is important — the reality is that the free, or least-expensive, versions of these systems will always be the most widely deployed, particularly when the consequences of those cost-savings are borne by someone else.
As is often the case, these low-cost models are also what many people will be forced to choose, of their own accord, out of economic necessity. What follows is the rebuttal to an amalgam of statements about SFO Museum made by the latest and greatest open models produced by three of the largest and best-funded companies operating today (Google, OpenAI and Alibaba):
We do not have works by Kehinde Wiley, Olafur Eliasson or James Rosenquist in our collection, nor has David Hockney ever painted Terminal 2 at SFO. These are all interesting possibilities but they have never happened just as there has never been an exhibition about the pianist Bill Evans.
We were not voted "best museum in the world" in 2015 by the Guardian newspaper. We weren't even included on their list of the best museum shops in the world that year. Even if we had been, that shop, the one which used to be in the International Terminal, was operated by SFMOMA.
There are no virtual reality installations or flight simulators and Dianne Feinstein did not spearhead the museum program. There is no "Arrival Hall A" in Terminal 3, we do not operate a summer camp and neither Skidmore, Owings & Merrill (the architects who designed the International Terminal) nor the Aviation Museum and Library (a faithful recreation of the original 1930s passenger terminal at SFO) has anything to do with the Museum of Modern Art in New York City.
Our website is definitely not sfo dash museum dot org, a hostname claimed by someone getting email at an @donuts dot email address, no less.
There is simply no universe where a museum app (which doesn't exist) will let you skip the security line at the airport if the museum is at peak occupancy.
All of which begs the question: Why are the open models so wrong, particularly when the closed and commercial models, produced by the same companies, can be so good? I am going to go out on a limb and suggest that the same dynamic is at play with everyone’s flagship, subscription-based models and their lesser “open” models: Accuracy is a metered toll road and everything else is just a mystery-meat coleslaw of signals.
Or, as I’ve started calling it, the Monkey-Jesus of Discoverability.
I want to call attention to this because the rhetorical framework being advanced right now is one where these systems, and large language models in particular, are understood as a kind of linear evolution away from traditional search engines that index web pages. The corollary to this is that the web itself is evolving into, and being replaced by, the models themselves which are used by generative systems.
At the same time the complete lack of introspection offered by these systems, coupled with their non-deterministic responses, presents us with a power dynamic that I think we should be profoundly suspicious of treating as an inevitability.
There is no weight to the answers these systems produce. Only a probabilistic false confidence governed by opaque biases and guidelines which are, if not unseen, then so foreign in nature as to remain invisible. There are only answers, amorphous and unstable, in the moment; a kind of AI-generated slop video feed but for knowledge itself.
However, there is a reasonable argument to be made which says: search as we've known it before large language models was effectively the same thing. Specifically, there has only ever been one search engine, Google, and the results it produced were as mediated, and manipulated, as anything else in this world, and we paid for its utility in a Faustian bargain of surveillance capitalism.
Long before large language models and AI companies there was already only a single gatekeeper, and they just whispered sweet nothings in our ears about organizing the world's knowledge and not being evil. But for a hot minute those promises actually held true, which has made what's followed such a bitter pill to swallow.
Say what you want about the legitimacy of Google's "ten blue links", the cut-throat realities of search engine optimization or the issue of whether or not anyone ever looked beyond the first page of search results, the fact that questions were answered with a list of choices – a list of unique and independently operated websites – was, and still is, an important affordance.
It is an affordance that reinforces a measure of individual agency – of choice – in how we interpret the world, rather than blindly following or trying to decipher the decision tree of a multi-dimensional array of word-salad cosplaying at authority.
However tenuous the relationship was between people actively publishing stuff to the web, in the hopes of reaching audiences, and Google as the vehicle of discovery for those materials, it seems pretty clear now that that compact is over.
The sad truth people are coming to grips with is that, in addition to all the other contemporary actors of questionable motive, Google, in its efforts to retain its position as the Oracle of the Perpetual Now, has poisoned its own well.
It indexes the slop it finds on the web as fast as it releases tools to generate more slop which is, in turn, published back to the web: the content used to derive newer models for producing the next round of slop.
If you happen to run a website, the deluge of bots crawling that website for content with which to train models means you either have to foot the bill, or depend on hand-outs, for a CDN to limit that traffic. As often as not this means disallowing all bots entirely, including the ones we've depended on for powering traditional search results in the past.
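For what it's worth, the blunt-instrument version of that posture looks something like the robots.txt sketch below. The crawler tokens shown (GPTBot, ClaudeBot, CCBot, Google-Extended) are real, documented user agents but the list is illustrative rather than exhaustive, and of course none of this is enforceable against a crawler that simply ignores it.

```
# robots.txt — a sketch of the "disallow everything" posture.
# The named tokens are documented AI-related crawlers; the
# catch-all at the end takes out traditional search crawlers too.

User-agent: GPTBot
Disallow: /

User-agent: ClaudeBot
Disallow: /

User-agent: CCBot
Disallow: /

User-agent: Google-Extended
Disallow: /

User-agent: *
Disallow: /
```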
Which is to say: we are all living through a time which is noticeably fallow in good will and good intentions. Increasingly there is no discovery, organic or otherwise, and the value of participation cannot be measured by its reward anymore. Accuracy has become a metered toll road and everything else is just a mystery-meat coleslaw of signals.
But if the web, the thing which preceded and perhaps enabled our contemporary dilemma, is only measured by the benchmark of search and discovery, of “engagement” metrics, then it is also easily dismissed as a tool that has served its purpose and outlived its utility. And that is really what I want to talk about.
I want to argue that the web should not be understood as a momentary stepping stone in a linear history of technological advancement or measured solely as a vehicle for marketing, self-promotion, personal advancement and delivering kill-time.
I want to argue that, while it may have been the necessary precursor for large language models, it has always been, and remains, a quantifiably unique achievement both technically and socially. It is important in its own right because it is novel both when compared to what came before it and what looks to come after it.
I said earlier that we are living through a moment noticeably absent in good will.
I also want to point out that, when seen in an historical timeline and context, cultural heritage as a practice has more often operated in times of ill will, in times lacking reward, than not. It is worth recognizing that the web was a rare act of genuine good will which we have been fortunate enough to experience in our lifetime and we should hold on to that example.
The web is what has made persistence and distribution sustainable at scale, both technically and financially, in ways that are meaningfully different from the economic, environmental and human costs of the large language models and AI systems currently being marketed as inevitable futures.
If our role, beyond the day-to-day minutiae of our jobs, is to enable our collections to weather the ill winds of the present then it's hard to overstate the importance of what the web makes possible, and what was not possible before the web: decentralized and asynchronous recall within the means of most, if not all. Or, at the very least, without the need for city-sized data centers to operate.
The brevity of that phrase masks the enormity and the complexity of the achievement. Of not just making that idea possible in the first place but also of making that possibility available to everyone.
Those qualities may not seem sexy or flashy anymore. They may not be qualities which make us, individually, feel like we are at the leading edge of our field anymore. They almost certainly will not, by proxy, make the kids think we're cool (but they never did to start with so whatevar). None of this changes the fact that these things, the things that distinguish the web and make it unique, remain important, crucially important, to the work that cultural heritage institutions do.
While the web also made reach, and in some cases, reward possible at scales previously unseen we should remember that reach and persistence are not the same things.
Not only have we too often conflated these things but we have also outsourced – abdicated even – the brass-tacks mechanics of making persistence discoverable and making that discoverability sustainable. We have benefitted from the momentary good will of the private sector who, we are coming to discover, no longer sees any reason to extend their benevolence on anyone’s terms but their own.
I don’t want to minimize the importance of reach and visibility to our collections and our work. If anything the distressing mess we’re all living through, between the rising flood waters of slop and the metered toll road of AI systems, only serves to highlight how important they are.
But I do not believe that the future we are being sold, one that simultaneously minimizes and undermines the web, is going to get us out of this mess. I fail to see how a future whose barriers are impossibly high to all but a few, preventing any kind of active or meaningful participation outside of metered consumption, will do anything but make an already bad situation worse.
The web does not, and will not, self-actualize but at least it is a system that is designed, by intent, to make the repairs that it desperately needs possible. Hard, perhaps, but possible.
What might get us out of this mess is remembering and championing those parts of the web that distinguish it from everything else, and rolling up our sleeves to imagine how we fix those things which we have previously taken for granted but which have since gone sour. What might get us out of this mess is doing this work, of fixing what has broken, understanding that the reward may come long after we've handed the task of caring for our collections to someone else.
Thank you.
This blog post is full of links.
#dancing