Far From Equilibrium


This is a curated chat from the Far From Equilibrium course at SCI-Arc. Participants include: Breanna [BB], faculty and class-wrangler, who is spearheading the conversation and moderating the chat; Rudy [RA], who is exploring speculative extraterritorial sites at the TJ/SD border; Kahin [KV], who is exploring representations of LLM imaginations; Martí [MV], who is exploring the limits of perception through fragments that refuse to resolve into a whole; and Ahmed [AY], who is exploring space as density and void to create new spatial conditions that merge history and speculation.




[BB]—I’m interested in the idea that compelling design happens at the edge of collapse. What does it mean to embrace excess, instability, and emergence?

[KV]—The capability of multi-agent LLM-empowered chatbots to even begin to speculate on possible futures is limited; they need to be conditioned specifically to imagine a particular future, one on the edge of collapse, for example, and build a possible reality around it.

[MV]—I keep wondering whether these agents can even imagine at all. What we call speculation, they flatten into averages. It is not just about guiding them toward collapse or excess; it is about whether they are capable of divergence from basic prompts.

[AY]—To design at the edge of collapse is to embrace excess, instability, and emergence as generative forces that challenge control and invite transformation, producing work that is alive, unpredictable, and provocatively unfinished.

[RA]—I think of it as being more on the edge of the unknown, but collapse also, in a way, clears out repeated ideas and produces unexplored, unstable, but fertile ground.

[KV]— In my case, I experimented with a series of characters: a fictional AI assistant setting the scene, a “planetary intelligence” churning out myths, a collective voice for future humans, and one in charge of “building” the future world. Their interactions, though realistic, were not as intended.
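The rotation of personas KV describes can be sketched as a simple round-robin loop over role-conditioned agents. This is a minimal, hypothetical Python sketch, not KV's actual setup: the agent names and the stubbed `respond` method are assumptions, standing in for real chat-completion calls that would pass each persona as a system prompt and the shared transcript as context.

```python
from dataclasses import dataclass

@dataclass
class Agent:
    name: str
    persona: str  # the role prompt that conditions this character

    def respond(self, transcript):
        # Stub: a real version would send `persona` as a system prompt and
        # `transcript` as context to a chat-completion API.
        speaker, last = transcript[-1]
        return f"{self.name} answers {speaker}: {last[:40]}"

def run_round_robin(agents, seed, rounds=2):
    """Each persona speaks in turn, extending a shared transcript."""
    transcript = [("seed", seed)]
    for _ in range(rounds):
        for agent in agents:
            transcript.append((agent.name, agent.respond(transcript)))
    return transcript

agents = [
    Agent("assistant", "You set the scene."),
    Agent("planetary intelligence", "You churn out myths."),
    Agent("future humans", "You speak collectively for those to come."),
    Agent("world-builder", "You build the future world."),
]
log = run_round_robin(agents, "Imagine a city on the edge of collapse.")
```

Because every agent sees the whole transcript, each turn conditions the next; the drift toward politeness the participants describe is a property of the underlying models, not of the loop itself.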

[RA]— The multi-agent conversation was an interesting exercise, although however much controversy I prompted for, the agents tended to give generic responses. What I ended up doing was recollecting what I thought were the highlights and creating a summary.

[MV]— I was less interested in speculative futures and more in their ability to self-identify, to objectify. So I asked a group of agents to describe the form of one of their own. To reflect. In theory, it was open-ended. But in practice… unless I concretized the starting point, the outputs collapsed into vagueness. No sharp edges, no risk. Just a lot of nothing. A lot of, excuse my language, “bullshit”.

[KV]—I get that. The “bullshit” would creep in if I let the chat run for long enough. One of the ways I circumvented this issue was by going through several iterations and aggregating the results, which allowed me to create a cohesive timeline within my chat.

[MV]—I understand, but even that aggregation feels like authorship on my part, not theirs. As if I am stitching meaning into fragments they are too cautious to commit to. The whole experiment, letting them wonder, discover, shape a broadly defined entity, gets thrown away in the process. The goal of arriving at form, which in my project culminated in prompts and images, ended, in most cases, in a loop of infinite politeness and an endless chain of goodbyes.

[RA]—Yes, definitely a lot of politeness. What was more interesting, after the conversations, were the interpretations the AI image-generation tools rendered. They have a way of representing relatable environments in bizarre ways that give hints of possible futures.

[AY]—Maybe what you’re sensing isn’t failed authorship but a kind of temporal dissonance, where AI pauses at the edge of a future it cannot yet remember. If we think through relativity, reimagining the past like ancient Egypt becomes less about nostalgia and more about using the past as a tool to understand what is still forming. In that way, your fragments are not conclusions but repeating symbols, unfinished messages reaching back to reshape time.

[KV]—This might just be my compulsion to reinforce a particular outcome - but I also tried to use the aggregated chats to generate visualizations of future worlds, rather than focusing on singular notions extracted from the LLMs’ “imaginations.” These depict, in my mind, a lot of the central themes of this project, particularly excess and emergence.

[AY]—Tools like Gaussian Splatting let us move beyond flat records or fragments. They allow us to generate detailed and dynamic scenes that feel like stepping back into a lost moment. As AI brings these spaces to life, the boundary between memory, simulation, and presence starts to blur.

[MV]—So what you are saying is, we need to reconsider the concept of authorship, and what it means in a world of LLMs and AI. If we are the ones injecting risk, collapse, closure; are we collaborating, directing, orchestrating, ghostwriting their imagination? Are we predetermining the result? And if their voice only emerges under pressure, is it really their voice to begin with?

[KV]—This segues well into Breanna’s second prompt, because all of us have attempted to push the LLM-run chats beyond their defaults through recursive processes that filter out noise.

[BB]—So if interesting results come from pushing systems past their breaking point, what does this tell us about our own creative practice in a world that's increasingly far from equilibrium?

[AY]—It suggests that real creativity often comes from working within tension rather than seeking balance. As the world drifts further from equilibrium, our creative practice must learn to adapt, bend, and respond to instability. Like the obelisk in ancient Egypt, rising from fractured ground, creation can emerge as a marker of transformation in unstable times.

[RA]—Our system of priorities is what’s brought us to this “far from equilibrium” planetary state, so I believe we should be looking for “breaking points” not just to create compelling design but to assume responsibility and agency.

[MV]—Maybe it means we have inherited a new kind of authorship. One that does not create from scratch but from pressure, tension and collapse. We no longer compose, we disturb. But there's a weight to always being the destabilizer. If creation now begins at the edge of failure, what does that make us? (breaking the fourth wall)... And yes, maybe the agent is the obelisk. Not a monument of certainty, but a marker shaped by contradictions around it. Each prompt, each fracture, each disagreement, carves something into its form. Not balance, but imbalance. A presence revealed not through clarity but through pressure.

[KV]— I think one of the ways to engage with things in a state of disequilibrium is to drive them further into that state. Embrace the slop. Accelerationism, and all that. I’m far too timid in the way I engage with LLMs. Bringing the aggregations into generative imaging as prompts allowed me to create a heavily recursive process, aggregating generated images to make more, until the generations were so far abstracted from the original description that they retained little more than the essence of the agents’ initial “imaginations.”
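The recursive drift KV describes, prompt to image to description to prompt again, can be modeled abstractly. This is a toy Python sketch under loud assumptions: `generate_image` and `describe_image` are hypothetical stubs (a real pipeline would call a diffusion model and a captioner), here reduced to string operations so the loop itself, and the way each round sheds detail, is visible.

```python
def generate_image(prompt: str) -> str:
    # Stub for a text-to-image call; a sorted token stream stands in for
    # pixels here, purely so the example runs without a model.
    return " ".join(sorted(prompt.split()))

def describe_image(image: str) -> str:
    # Stub for a captioner; keeping every other word models how each
    # round of re-description sheds detail and drifts toward an "essence."
    words = image.split()
    if not words:
        return ""
    return " ".join(words[::2])

def recursive_abstraction(seed_prompt: str, depth: int) -> list[str]:
    """Feed each generation's description back in as the next prompt."""
    history = [seed_prompt]
    prompt = seed_prompt
    for _ in range(depth):
        image = generate_image(prompt)
        prompt = describe_image(image)
        history.append(prompt)
    return history

seed = "excess emergence collapse instability myth ruin future void"
hist = recursive_abstraction(seed, 3)
```

Each pass through the loop loses information; after a few iterations only a residue of the seed survives, which is roughly the "little more than the essence" KV reports.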

[AY]— What you said captures something deeply true about this moment in design and thought. Maybe we are no longer authors in the traditional sense, but pressure points, stressors that provoke form rather than define it. Inheriting a role shaped by collapse rather than construction.

[RA]— We are definitely redefining authorship but I don’t think there is only one new kind but many.

[MV]— It's interesting you mention accelerationism… Part of me wonders if that’s all we’ve been doing. Each loop, faster than the last. From prompt to image, to prompt to image again until the meaning turns into velocity. But there's a risk to that. If everything accelerates into abstraction, we lose the friction that gives our work weight. To continue with the metaphor, we end up with a perfect, smooth obelisk. Beautiful, yes, but one that never touches the floor.

[KV]— But the abstraction of these LLM-generated worlds is precisely how we work with novel technologies, isn’t it? LLMs do not process or “imagine” the way we do; we dismiss this process of “reality to abstraction” as alien, when that’s exactly what happens in our brains every second. When an LLM does this, it warps our intent based on its training, an LLM umwelt, if you will… and gives us an image of something which cannot be captured by traditional ways of seeing or knowing. Whether we term it slop or hallucination, it’s an entirely unprecedented way of engaging with a state of disequilibrium.

[AY]— Co-authorship with AI marks a shift from control to collaboration, where designers guide rather than dictate outcomes. Tools like Gaussian Splatting embody this shift, allowing us to co-design with machines by capturing and reconstructing reality in fluid, real-time forms. In this emerging practice, creativity becomes a process of discovery where intelligent systems respond to input with endless formal possibilities that stretch beyond human intention.








BREANNA BROWNING , SCI-ARC EDGE: SYNTHETIC LANDSCAPES  “FAR FROM EQUILIBRIUM” SUMMER 2025