modelling the prospective experience: an interview with Changeist
Foresight pioneers Scott and Susan of Changeist open the woodshed door and share how they design and build immersive workshop experiences that activate the emotional function in participants, as well as the executive.
A couple of weeks back I found myself treading the boards—well, the concrete, really—at Media Evolution, roleplaying the director of a troubled regional foresight agency. Said imaginary agency was looking back over the years leading up to 2036, and trying to work out what went wrong with its capacity-building programme.
Also in the room were twenty-four participants, roleplaying as delegates in the agency’s governance review, plus two veteran foresight consultants, playing… themselves, mostly.
The workshop was the brainchild of Scott Smith and Susan Cox-Smith of Changeist, and you can read their write-up over on their own site. But I have shamelessly leveraged my role as actor and narrative consultant on the project to interview them about method: about how they did it, and why.
In short, this is very much worldbuilding in action—albeit somewhat less obviously so than some of the work I talk about here on the regular. Perhaps more surprisingly for regular readers, there is also some AI involved… albeit used less like a turn-key song-generator, and more like a modular synth.
So step inside the woodshed, and find out how the furnishings come off the lathe! I started by asking Team Changeist to sum up the gig in their own words.
Scott Smith: So we were asked to run a workshop for most of a day for people from various backgrounds in Skåne, to help them understand how to develop and sustain a thriving future culture in an organization. Because most people may at best only get to sort of see or realize part of that, and at worst experience the fail side of that.
Rather than giving them hypotheticals, we thought it would be easier to put them in a deeper hypothetical—a scenario set eight years into the future—let them look back at the successes and failures of very familiar types of organizations, and forensically examine those successes and failures in order to better understand them.
So we constructed that reality, with sufficient detailing to create different paper trails through six or seven organizations, and then a careful exercise to have them first situate themselves in or near those organizations as delegates of a governance review, in which they would carefully unpack and unwind and identify the key issues that emerged in those organizations.
PGR: In your write-up, you’ve been calling it a business LARP—and it’s really not entirely a joke. From your perspective, doing this sort of thing, unpack business LARP a bit. What are the advantages of doing it that way? Why not just do a normal workshop?
Scott: I guess there are several reasons for doing it the way that we did. From our practice point of view, it’s part of a trajectory that we’ve been on for the past four or five years, really starting just before COVID, of trying to understand how you could use experiential futures at a greater scale than just building an object—or a set of objects, or even a static space—and then put that experience in motion, use it to kind of create a more dynamic environment that doesn’t just get people’s critical response to a single static thing, but allows them more room to embody that experience, that future, more deeply.
And embodiment isn’t just a static thing. In my mind, it’s a dynamic mode where you’re passing through something. You’re in something; you’re having a journey of some kind. You’re exploring a narrative arc, and you’re having varying responses to that.
Part of the reason for that is to be able to activate both the emotional and the executive function, because the two work together in the real world. When you encounter something surprising, estranging, etcetera, you don’t just have an intellectual response to it; you probably have a sensory response in the nervous system before you actually have a response higher up the brain stem. So we were carrying that forward, and it seemed like an interesting opportunity to do it.
I don’t know if LARP is exactly the right phrase, because that’s usually seen as a kind of playthrough with different possibilities... I haven’t looked at the literature in a while. But in this case, we thought about that problem I just mentioned: how do you approach something prospectively that you have very little experience with, in most cases, while you have a lot of experience with more mundane challenges on a day-to-day basis? Everybody knows what it’s like to run into an issue in the office, or to see a program stumble or take longer, or not get funded, or be cut off too early, or whatever.
So it’s taking that familiarity, but using it as a means of learning by giving people both familiar ground and unfamiliar dynamics to play with. It felt like a better way to help connect the problem-space to experience.
Susan Cox-Smith: I would add that I think it’s always easier for people to put a critical eye on something that isn’t theirs, that doesn’t belong to them, or that isn’t threatening their job or their colleague’s job. So the workshop moved them into a third space and allowed them to really name what it was that they were seeing rather than trying to gently describe a failure.
Scott: Connecting it to the lecture the day before, when I started the lecture I said that we’re going to be realists, that’s our brand, but we’re also going to point you to where there are some possibilities. And we tried to carry that over into the construction of the workshop.

We’ve already used the word “failure” a number of times. I’m hedging towards alternatives like “challenge” and “maybe not so successful” and things like that—because it wasn’t a safety course, where we’re showing them how they could burn their hand or cut themselves. But I think there is a far greater propensity for attempts to shift entrenched culture to be unsuccessful; it fails more than it doesn’t, let’s put it that way. And the best way to confront that is to look at why it breaks down, why it fails. Any good review of security failures or infrastructure failures or whatever should be asking “what was the weak material here? What decisions were made wrong? What could we have done differently?”
Because there are so many organizational cultural headwinds against allowing for this kind of culture to be constructed and embedded, that are getting stronger and stronger as we go forward, it’s necessary to point more directly at those challenges. We did that by trying to embed them carefully and differently, very intentionally, throughout the intertwining narratives of these six organizations so that, as Susan said, they could feel the familiarity: “oh yes, I know what this is, but I didn’t do it.” But it’s very close—a couple of people commented in the LinkedIn thread that it felt a bit uncomfortable, which is a design consideration. It’s actually a design intention to create constructive discomfort as part of the process.
PGR: I talk a lot about the way in so much futures fiction work—be it design fiction, narrative prototyping, whatever—you’re either estranging the familiar or you’re familiarizing the strange. What I think was really interesting about this one, for me in particular, is that it relied very strongly on familiarity. Getting the balance right is a kind of thing where you have to have a feel for it; as we discussed last week, it’s an art, not a science. There’s science to it, certainly! But it’s also a thing of feeling and judgment, as I think so much of the better foresight work is, even in the more traditional forms.
I don’t want to expose any trade secrets here, but I think it might be interesting to discuss the mechanics of the thing, because they are quite interesting. I know people who read Worldbuilding Agency are aware of my feelings about AI, and are going to be interested to know that I did a thing with some people who did a thing with AI! But it’s a really interesting way of talking about the utility and limits of that, in general, while also talking about what I think is, in my experience at least, quite a unique thing.
Scott: I think we all have big feels about AI. Just to kind of back up a step, because I think it’s useful here: as we’ve approached it, we’ve tried to understand it through using. Figure out where it works, where it doesn’t; what are the stress points? How does the changing, evolving nature of it affect its usefulness, etcetera?
Pretty early on, I made a decision that there was a cost-benefit trade-off. There’s a larger one, for separate discussions, but there’s also an immediate one in terms of speed, depth, complexity. Those were worth playing with to understand how it could be harnessed—in part because us not using it wasn’t going to stop everyone else around us from using it! So I wanted to understand how it could be brought in constructively, and where it could be brought in as a kind of working material—as a tool, just like anything else.
So over the past few years, we’ve been playing with AI both as a kind of generative storytelling engine and as a kind of modeling tool, and bringing those two pieces together. This wasn’t the first time we’ve used it. This is the first time we’ve used it exactly like this, but it came from somewhere.
We’ve talked about Foom before, our flagship strategic simulation, which uses generative AI with certain sets of parameters and seeds around the context of who’s there, who’s playing, who’s participating, in order to generate a dynamic story world. We’ve also used it to build more complex interactive narrative environments, effectively like small towns, or even large towns, as parts of other projects where time was scarce and complexity common.

The most important part is the intentionality. Where are you planting important ideas or important experiences throughout that narrative? So in some ways, it’s like building a story or a mystery or whatever: you pick your setting, you have your characters, you know how they relate to each other, but you’re also injecting plot points in a very strategic fashion.
So we have some experience in using it for learning by doing, and in another proprietary project, by using it to help keep track of the world assets. It’s like thinking about architecture: the reason you use parametric modeling and complex CAD files is so something knows where all the bolts are, but you also know where you want to place cables and access points and wayfinding. So it’s almost like an architectural approach to the story.
With this workshop, we knew we didn’t have a lot of time, but we wanted to be able to steer and shape. So it was easy to bring that model back into play here. One of the interesting aspects of this is that Media Evolution serves a region—it’s a funding region, as well as a geographical and a cultural region. So you’ve got a natural catchment for the narrative. But we also know that it’s a very interconnected place. So we needed to be able to use that model as a way of creating some glue for that cultural and economic and political interconnectedness, but also to help us balance the distribution of experience and story and assets within that.
All that is to say: here AI functions as a tool that can hold all of those things together at the same time. Now, could the three of us have sat in a room for a week with some index cards and Post-its and done exactly the same thing? Yes; I’m confident we could have ended up at the same place. I also think there is a quality and speed and scope trade-off: this didn’t necessarily need somebody of your skill or my skill or Susan’s skill to write every line. What we needed was a suitable facsimile, a reasonable ersatz ecosystem of private and public organizations.
So the artifacts did their job just like any design fiction does. Some of the best and weirdest practitioners [of design fiction] really worry about the details—not just the QR code, but the temperature on the front of the weather report, sort of thing. And we are known for putting those kinds of quirky details in the bureaucratic corners. I think we were able to get what we needed by modeling “good enough” and focusing on hitting the notes.
That’s a long explanation, but I feel like it gets us where we are headed.
PGR: I think the thing to underline there is when you use the term artifacts, the other term would be prototypes, which really expresses that “good enough to make the point” aspect. You referred earlier to thinking of it more like experiential futures, perhaps, than design fiction: you are creating an environment, even if it’s not necessarily way-out décor and furnishings and what-have-you. You are creating a discursive or textual environment, I suppose.
The individual objects may not have stood up to a huge amount of scrutiny—though they probably did, because they were bureaucratic emails, and those are a particularly generic genre of thing, and it turns out LLMs are quite good at that, and I think we are happy to let them do that sort of work to some extent. But that sense that each individual object didn’t need to be perfect or crafted... because it’s more like pointillism, right? The bigger picture is made up of all the little dots, and they were good enough to make the whole thing have a shape, a three-dimensional shape.
Scott: I wrote a piece ten years ago now, maybe, about lossy futures, using the metaphor of lossy audio that we listen to all day long on Spotify or whatever. It simply means that, as a means of compacting the file, you lose detail, but your brain is able to re-insert that detail. So this was a sufficiently lossy set of artifacts.
Believe me, if we had three more days, there would have been different typefaces and more distinction! But even in the logos of the organizations—those were intentional, prompted designs that were machine-generated to the rough characteristics of the organization itself. So if you think of those as sound files, they made just enough sound for you to go, oh okay, that’s this organization, that’s the other one. And just the extra little touch of having the logo at the top and the email addresses at the bottom or whatever else, was enough to say: this is a box, and inside this box is a slice of the world; now consider this slice of the world, and connect it to the others.
What blew me away was the extent to which people began to actually evidence-wall the material in a way that I didn’t fully see coming, but made me very happy.
Susan: Generally, I think when you walk in the room people are willing to give you the benefit of the doubt that you’re going to give them something interesting and useful to do. So the way that Scott labored over a lot of this was useful for him, because it was building the world, which meant that we knew what we were stepping into. And we also knew, when reading through it, that it would be legible to the people who were participating.
I’m not a Claude or ChatGPT expert; I’ve barely dipped my toe into it. I definitely have strong feelings about it! But I think in certain cases it makes sense, because it does reduce the workload for generating this type and quantity of artifacts. I can see where they have usefulness and where it’s sometimes OK to say, this will lighten our workload.
Scott: I think there is a missing piece of dark matter that may not be evident. Because you could say, oh, it’s just about the outputs that we could have, and we can make the time, blah blah blah... and I’m convinced that under other circumstances, we could do exactly that. But there is something about the shaping, which I think is down to the operator experience. There’s a kind of warp and weft to working with AI, that moves it more in the direction of something like generative art or algo-rave. You know, there is an aspect to it of laying down beats—in this case, narrative beats—and then changing the parameters and repeating it, moving it along; it has a coding aspect to it that is quite rapid.
So when I’m in here and you hear keystrokes all day and all night, it’s not like “well, I’m lazy; here’s the task, you dump this out for me. That looks good enough—print. Let’s go.” There were probably seven to ten days of redrafting, reshaping, pushing, pulling, looking at it from the participants’ point of view over and over again, plus looking at it from the participant-list point of view. This was a program for the people in the room: it would have been a different set of outcomes, a different set of organizations, if there had been a different participant list. But given the fact that we had a participant list, with the organizations and even the roles, there was a direct one-to-one programming between who was on which team and what they were asked to do.
That’s above and beyond just slop generation. It’s a shaping of the signature, for certain people to respond in certain ways—and that’s both an experiment in terms of learning for us, but it’s also an interesting critical use of the tool-set. If I felt like it was reaching the point of just kind of like “push button, get slop”, I wouldn’t do it, because we can do better ourselves at the same speed. We’re pretty good at producing high quality with velocity.
But there’s something here about experimenting with the point model. We’re jumping all over different metaphors here, but: how do you move the point model around, put different inflections in the tone of the narrative, so that trickles down to the progression between a series of emails? And what is the voice of this person who sat inside the culture—so this was your voice, but it was also a kind of frank voice that needed to be said against the other voices.
Anyway, I’m cautious of overselling it, but I do think that there is an aspect of craft using the tool in this way, as it would be if we were sitting here with keyboards and a MIDI connection.
PGR: That’s exactly where I was going to go next! We talked afterward about the LLM as being like the director’s assistant, who runs around with a big book with all the facts in it, and you can instruct that assistant, like “hey, can you go off and change that?” But the way you’re describing it here makes it sound more like, okay, I’ve got a modular synth and I’ve been asked to play in this particular building, so I have a sense of the resonances that are going to leap out, the sort of tones that are going to echo in an interesting way. And so you have an idea of the kind of shape of the piece, and you’re there kind of tweaking dials…
Scott: Yeah, you’re on the ones and twos, basically. You’re turning knobs to see what works, and that’s a different way of getting into narrative construction, I think, and tone and depth.
But also, there was a little bit of inspiration from precisely where we were doing this. The mood... we talked about being fans of Scandinavian crime fiction and the television that goes with it, the music that goes with it. But also, we have a deep interest in this kind of bureaucratic design fiction. So there were bits of that in the back of our mind, putting it together, but we didn’t have to load all that in. I think we loaded just enough.
PGR: Yeah, but it all informs the final result. I talk about Hemingway’s Iceberg when I talk to people about fiction-for-futures work. Hemingway says in that metaphor that you have to know it all, but only like an eighth, if that much, actually goes in the story, on the page. But without your knowing it... how does he put it? “The dignity of the movement of the iceberg comes precisely from that hidden mass”, right? That’s how you get that result.
Building on that point, when I talk to people about worldbuilding they sort of assume, not without reason, that it’s all going to be very kind of flying cars and lightsabers—very flashy sci-fi stuff, skiffy, you know. But this is definitely worldbuilding too. I think it’s a really interesting case for it.
I guess what I want to ask here is: in the process of constructing the scenario that manifested through the artifacts and the stories, what were the key things to think about? Where did you start? What was the thing that let you in?
Scott: I mean, there is a very normie futures entry point here, as we would do with anything: if you’re going to describe a future of a place, we need to model that place, just enough of that future so that we’ve got the weather, so to speak. Looking at the region, looking at the forces shaping the region, both micro and macro... and we introduced some of those in the afternoon, for the sake of projecting forward. So we still did futures from inside the future, right?
But some of it was just taking my own experience of the region, and of similar regions, and just knowing how these kind of bureauspheres work. There’s a word for you! It’s not a biosphere, an ecosphere, it’s a bureausphere. I did a little bit of research, and it was helpful that these governance reviews are woven into that Swedish bureausphere, right? So that gave us a stage and an arch to work under.
Some of it was just doing your basic scenario setting, but knowing what we know about most of those kinds of ecosystems, we could make some estimates about where it might be. And even that wasn’t necessarily... we didn’t have to future-ize all of the situational stuff, so to speak. One, because those bureauspheres evolve slowly, often more slowly than the outside world; two, because even if you ask people sitting here today in that kind of environment to discuss the future—and we know this from extensive practice—they will describe something that’s short of the future, they’ll describe a much shorter range of change and call it the future.
I guess that was kind of a gut feeling. We didn’t have to put them in some kind of strange land, but we know what some of the forces are—we’re aware of things like migration and how that’s changed politics over the past 20 or 30 years in the region, climate, a cold war that’s heating back up again, social security, and what all of those things do to resourcing, and what that does to the vertical of governance, and the points at which governance is structured. So those five forces we hit in the afternoon, we played those five notes loudly on the keyboard and said “when you hear this, what do you do?”
That to me is scenario development in the sense that you need to know what your big forces are, and use that to frame up the world. The other part was knowing mechanically what we wanted to try, and fortunately I feel like we signaled it in the description. But they had given us permission, in fact, by coming... and we needed to be careful with that permission by showing care in how we guided them in. Your role was part of that; the fact that they were in a safe place in Media Evolution was part of that; the fact that that review could be taking place in Media Evolution’s conference room anyway was part of that. They weren’t going somewhere unusual, they were going to the place where the review could just as easily have actually happened. They were seeing, you know, municipal improvements happening outside the windows. All of those layers were present.
PGR: What was interesting with this project, one of the reasons I like your work, is that you’re very strong on the importance of the looking back. That’s really important! But it’s a different mode. There’s the up and down axis, the macro/micro, and there’s an oscillation the other way, on the other axis, the historical/futural. Obviously there are points where the thinking crosses over, but you’re always on one side of the line or the other. I often say it’s like when you go to the optician’s and they put the frame on and ask “is it better with this, or with this?” But you can’t have both of those lenses on at once.
Scott: It was important to point out that what we’re trying to get at here today is the importance of maintaining both short and long view simultaneously. That is a core skill that doesn’t have a methodology name: being able to develop the kind of neuroplasticity to look down and look up, to look long and look short. So you need to be able to move around in that volume of time and experience.
PGR: To loop back to what you were saying earlier, there is necessarily a science to it, there is necessarily theory. That tendency to over-concretize a theory is hard to avoid, but again, it’s that oscillation: people run with things and take them too literally. If you get stuck in the concretized version of the theory, it’s very hard to get back out again and say, okay, well, it’s just an explanatory shape. It looks right from up there, but if you get down here into the mess of things, it doesn’t.
I’m doing some horizon-scanning work at the moment and just trying to explain to the client, yeah, that could be a technology driver, but it could be a politics driver. To someone looking at it from a technologist’s point of view, oh, that’s clearly a technology driver. That clarity of categorization is easy to make at that really abstract, God’s-eye-view level. But when you actually get down to the front lines of service delivery or whatever it is, those lines aren’t really distinct, and it doesn’t matter what category the driver is in. The problem is in your lap, and you’ve got to deal with it.
Scott: Yeah, exactly. I think that was the thing that clicked in the move towards experiential futures, particularly from a political angle, but also I think it’s sometimes done to excess on the speculative design side, where the design and the speculation overtakes the intention, and just asks you to go “wow!” instead of putting you in the places where you should go or could go.
PGR: There’s a Canadian writer I like called J.F. Martel, and he talks about the axis between art and artifice. Now, like all spectra, nothing is ever purely one or the other. But the closer something is to the art end of the spectrum, the more it’s just something that came out of someone’s head, whereas with artifice, there’s an intention there, it has a telos. I think futures work can be the same... and we’re back to oscillations. If we think back to where we were fifteen years ago, when a lot of us were arguing for a lot more creativity in this sort of work, I think it’s a really good thing that we’ve got it. We’ve just got to a point where it’s a long way towards the art end, as a corrective to how things were fifteen years ago. That’s no bad thing! But the thing that came out of this workshop, I think, is a sense that people learnt a way of looking at things that wasn’t just “oh, wow, that was cool”. They’ve learnt a way of looking at things that can be turned to genuine use.
Scott: This was an experiment in workshop pedagogy, in using experience as a means of trying to overcome what we think is a basic challenge in opening up people’s capacity to deal with problems that don’t exist yet, but do nonetheless exist—they exist all the time in their current and past experience. We know this is a very specific situation where you might never get to build a scanning system or create scenarios.
This goes back to the first question you asked. There’s a built-in challenge to teaching that stuff, and we wanted to find a different way to attack it. This approach seemed to be a useful way, and the implicit hopes about how it might work actually seemed to unfurl and function. Back to our earlier metaphor, it’s like we understand enough about music theory; we also understand what our own creative impulses are; we have a sense of the audience. How do we put those things together? Here’s how the logics work best in discussion and a foresight context: a way to bring some creativity into the content, get the logics functioning, and then hand it over to the participant, the user, to engage with. That’s it. You know, it was luck, smarts, a really, really great crowd and a good sponsor. And the setting seemed to make a big difference.
PGR: Yeah, you’ve got to have a good audience for a good show.
Scott: We had some intuitions, but also some concerns about the nature of the participants in terms of, like, what’s the receptivity to novelty, how prominent is ritual and organizational ritual critique. We didn’t talk a lot about it; it was just enough to kind of say, all right, we think this is the right thing for this group. I would not take this workshop as it is into some other settings we’ve worked in! We would have to tool it differently, because context, criticality, these things change.
But it’s also, I think, why you need to be sensitized to cultures, both generally but also the future-culture aspect of it, where the cultural mores and understandings of the future sort of stick themselves to each other. We think about this as like, there’s innovation culture, and there’s a future culture that’s really close to that, but it’s doing a different thing; innovation is often very goal-seeking and directional, whereas future culture is very wavy and turbulent.
Thanks for reading this interview—I hope you found it inspiring and useful. If you did, perhaps you’d consider forwarding it to a friend who might also enjoy it?