Smallville
Generative characters get up in the morning, cook breakfast and go to work; artists paint and writers write. The characters form opinions and recognize each other; they initiate conversations; they remember past days, reflect on them, and use those reflections to plan the day ahead – so the researchers write in their study “Generative Agents: Interactive Simulacra of Human Behavior”.
To achieve this, the researchers relied on the ChatGPT API. They also built an architecture that attempts to model a mind with memories and experiences. The characters were then released into the world and allowed to interact with each other.
Computer-controlled characters have been present in video games since the 1970s, but a social environment of such complexity has not yet been simulated. The miniature city can even be a prototype for a dynamic game where RPG characters interact with each other in complex and unexpected ways.
“Imagine killing an NPC (a pre-programmed in-game character – ed.) and then returning to town to find them being buried,” someone joked on Twitter.
Houses, a cafe, a park and a grocery store were also “built” in the virtual city. To make observation easier, the world is shown from a top-down view and has a look reminiscent of classic 16-bit Japanese RPGs.

The residents of Smallville are each represented by a tiny pixelated avatar. To record each character’s identity and their relationships with other members of the community, the researchers created a core memory containing natural-language descriptions. Human users can enter Smallville and move around the world as an existing or entirely new character. They can influence the inhabitants’ behavior by talking to them or by issuing instructions as a kind of “inner voice”.
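The idea of a seed memory written in plain language, plus an “inner voice” channel for human steering, can be sketched roughly as follows. This is a hypothetical illustration, not the authors’ code; the `Agent` class, its fields, and the prompt format are all invented for the example.

```python
# Hypothetical sketch: an agent holds a natural-language "core memory"
# describing its identity and relationships, and a human user can steer
# it by appending "inner voice" instructions that enter its prompt.
from dataclasses import dataclass, field

@dataclass
class Agent:
    name: str
    core_memory: list            # natural-language identity/relationship facts
    inner_voice: list = field(default_factory=list)  # human-injected nudges

    def build_prompt(self, observation: str) -> str:
        # Assemble everything the language model sees for this step.
        lines = [f"You are {self.name}."]
        lines += self.core_memory
        lines += [f"(inner voice) {v}" for v in self.inner_voice]
        lines.append(f"Current situation: {observation}")
        lines.append("What do you do next?")
        return "\n".join(lines)

isabella = Agent(
    name="Isabella Rodriguez",
    core_memory=[
        "Isabella runs Hobbs Cafe and enjoys hosting events.",
        "Isabella is friends with Maria.",
    ],
)
isabella.inner_voice.append("You want to throw a Valentine's Day party.")
print(isabella.build_prompt("It is morning at Hobbs Cafe."))
```

In the actual study, a prompt like this would be sent to the ChatGPT API and the reply parsed into an in-world action; here it is only printed.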
One of the biggest limitations of the experiment was the limited “memory” of the LLM. This memory exists in the form of a “context window”: the amount of text the model can process at once. To work around this limitation, the researchers designed a system in which the most relevant parts of a character’s memory are retrieved when they are needed.
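The paper describes ranking stored memories by a mix of recency, importance, and relevance, so that only the top-scoring ones enter the limited context window. A toy version of that retrieval step might look like this; the decay constant, the 1–10 importance scale, and the tiny hand-made vectors are made-up stand-ins for the real embeddings.

```python
import math

def retrieve(memories, query_vec, now_hours, top_k=2):
    """Rank memories by recency + importance + relevance and keep top_k."""
    def cosine(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(x * x for x in b))
        return dot / (na * nb)

    def score(m):
        recency = 0.995 ** (now_hours - m["time"])   # exponential decay over hours
        importance = m["importance"] / 10            # agent-rated 1..10, normalized
        relevance = cosine(m["vec"], query_vec)      # stand-in for embedding similarity
        return recency + importance + relevance

    return sorted(memories, key=score, reverse=True)[:top_k]

memories = [
    {"text": "Isabella is planning a party",  "time": 1,  "importance": 8, "vec": [1, 0]},
    {"text": "Klaus ordered coffee",          "time": 90, "importance": 2, "vec": [0, 1]},
    {"text": "Maria helped decorate the cafe","time": 95, "importance": 7, "vec": [1, 0.2]},
]
top = retrieve(memories, query_vec=[1, 0], now_hours=100)
print([m["text"] for m in top])
```

Only the retrieved snippets would be pasted into the model’s prompt, which is how a long-lived character can act on days-old events without exceeding the context window.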
Spontaneous behavior
In the study, the researchers discovered three patterns of behavior that emerged as a result of the simulation – and were not pre-programmed.
The first was information diffusion: characters share information with one another, and it spreads through the town like a rumor. The second was relationship memory: characters remember past interactions with each other and refer back to those events later. The third form of spontaneously emerging behavior was coordination: one of the characters organized a party that several others attended.
This happened when an AI agent named Isabella Rodriguez organized a Valentine’s Day party at Hobbs Cafe, to which she invited her friends and customers. She decorated the cafe with the help of her friend Maria, who in turn invited her lover, Klaus, to the party.
Of course, the idea of the party did not arise spontaneously; it was implanted in the character’s memory by the researchers. Still, it is remarkable that the residents of Smallville autonomously spread party invitations over two days, made new acquaintances, and asked each other out on dates to the party. In the end, five people showed up, three of the invitees were too busy, and four characters simply didn’t go.
The researchers also hired human evaluators, who watched replays of the simulation to assess how realistic the AI’s behavior was. They also asked people to put themselves in the role of a character whose life they had observed and answer interview questions on that character’s behalf.
The result was that the generative architecture produced more believable, that is, more human, behavior than the human role-players did.
Ethics concerns
According to the researchers, the technology raises several ethical questions. They warn of the danger of parasocial relationships (emotional attachments to a fictitious or media persona), which may lead users to incorrect conclusions, as well as the risk that users will come to rely on virtual personalities in more and more situations.
To ensure an ethical and socially responsible application, the researchers argue, developers should keep several principles in mind. For example, they should clearly disclose when a user is interacting with an artificial intelligence, maintain accurate audit logs of inputs and outputs, and preserve human involvement in the design of such research and technologies.
Cover image source: Getty Images