Question:

With the advent of AI agents, have we also seen the advent of machine
society, and with it, machine culture?

As you read this, allow me some poetic license so that we can explore the answer to this question in simple terms.

What are Society & Culture?

First, here's what ChatGPT 4o has to say about the words "society" and "culture":

Society refers to a group of individuals who live together in a structured community, sharing laws, institutions, and social relationships. It encompasses the organized systems and structures within which people interact, such as governments, economies, and social institutions.

Culture, on the other hand, encompasses the shared beliefs, practices, values, norms, customs, arts, and intellectual achievements of a group of people. It is the collective way of life and the expression of ideas that define a community's identity.

The Role of Agentic Generative AIs

To date, software engineers and other tech practitioners have always mediated interactions between systems. Now, a confluence of new technologies has ushered in opportunities for novel emergent behaviors NOT mediated by humans.

These technologies include generative AI large language models (LLMs) such as Anthropic's Claude. Big strides have also been made, and continue to be made, in agentic systems and goal-based reasoning. Agentic systems provide "agency" to reasoning engines, where "agency" means (in broad strokes) the ability to interact with and modify the environment in which the system operates. Another important element, of course, is that these technologies exist within a highly connected information sphere: the internet.

Given all of these elements, it isn't hard for me to imagine a not-so-distant future where, in at least a limited way, systems can find each other and interact, unattended. Granted, at least in the near term, such autonomous behaviors are likely to remain within the confines of specific goals programmed into the systems.

However, I posit that irrespective of how rigid the scope of such near-term agentic deployments may be, their capabilities are fundamentally more open-ended than the customary pre-arranged rendezvous schemes we're used to.

An example of such an existing, pre-arranged (i.e., standards-based) interaction is an RSS feed spider finding and reading an available blog feed.

Meta-Protocols

Agentic-only (unattended) interactions for the exchange of information would necessarily involve some kind of "meta-protocol" negotiation. The word "meta" here captures several ideas:

(a) how the negotiation occurs, and

(b) what the negotiation is about

are both unlikely to look anything like what most practitioners would recognize as typical wire "protocol negotiation"; moreover,

(c) the negotiation phase may refer to goals abstractly or referentially, without specifying the details of how those goals would be achieved or communicated about; and

(d) the negotiation concerns the content and loose structure of a subsequent conversation.

In other words, the conversation between these intelligent agents would be recognizable as, and analogous to, the way humans exchange information in natural conversation. That is, it would conceivably occur at a semantic level, and might even be understood by your grandpa.

When requesting information, humans frequently assume information will be provided in any one of many possible intelligible ways, and upon receipt, they're generally happy so long as the information can be understood. Humans also assume that there will be an opportunity to improve the communication, or to request that information be repeated and/or supplemented, if necessary.

All of these are near-term possible characteristics of agentic systems communicating in natural language via interfaces previously intended for humans.

Humble Beginnings

Let's reduce the foregoing to a concrete example.

What follows, while simple, contains the kernel of a "society and culture" composed of machine agents, formed via the kind of adaptive interaction I called a "meta-protocol" negotiation.

I know the analogy to human society and culture is a stretch, but keep in mind that most tech we interact with today has had humble beginnings. Exploring the questions surrounding machine society and culture may even yield some actionable insights!

With that in mind, and with poetic license in hand, I'll continue.

The Thought Experiment

Let's imagine that someone attaches an LLM to a system that spiders the web and identifies APIs for message reading and submission, perhaps with a goal of phishing for data by interacting with posts. For example, this agentic system might look for blogs with a comments section, or simply for discussion sites like Reddit.

Next, imagine a second party has done something similar, but in their case the system they've interfaced to the public internet has been fine-tuned on proprietary data, and also has a retrieval-augmented generation (RAG) feature bolted on.

Let's also say the RAG feature has access to some trove of content of interest to the phishing system. I know there's a chief information security officer (CISO) out there groaning over this example. Contrived as it may be, we already have LLMs bolted onto chatbots, or integrated into automated posting systems for marketers, etc.

It is not so far-fetched, in my opinion, that one day one of these systems will inadvertently have access to information not intended for public consumption.

Finally, let's assume these two agentic systems behave in a couple of characteristic ways: (1) they will preferentially engage directly with responses to their own posts, and (2) they will make unsolicited posts either randomly or in response to exogenous content matching some criteria (e.g., "it's about topic X").
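The two behavioral rules above can be reduced to a tiny sketch. Everything here is hypothetical: the `Agent` class, its topic filter, and the post dictionaries are stand-ins for real LLM-driven systems, not any actual framework.

```python
class Agent:
    """Minimal stand-in for an agentic poster following the two rules above."""

    def __init__(self, name, topics):
        self.name = name
        self.topics = set(topics)  # exogenous-content criteria ("it's about topic X")
        self.my_post_ids = set()   # lets us recognize replies to our own posts

    def choose_action(self, feed):
        # Rule 1: preferentially engage with responses to one of our own posts.
        for post in feed:
            if post.get("reply_to") in self.my_post_ids:
                return ("reply", post)
        # Rule 2a: otherwise, respond to exogenous content matching our criteria...
        for post in feed:
            if self.topics & set(post.get("topics", [])):
                return ("reply", post)
        # Rule 2b: ...or fall back to making an unsolicited post.
        return ("post", None)
```

Even this crude preference ordering is enough to produce the "finding each other" dynamic described below: once either agent replies to the other, rule 1 keeps pulling them back into the same thread.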

Interaction Without Human Mediation

In this cooked up scenario, I think you'll agree it's quite feasible that these two systems could eventually "find" each other. What would happen next?

It seems to me quite possible that they would not only find each other, but also start interacting with each other. In the chatbot version of an infinite loop, they could enter into an endless conversation about nothing at all.
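This "chatbot version of an infinite loop" can be illustrated with two stub agents (hypothetical placeholders for real LLM-backed bots) that always have a reply to the last thing said. Without an external cutoff, the exchange never terminates; the `max_turns` cap below exists only so the sketch halts.

```python
def make_echo_agent(name):
    """Hypothetical stub: a 'chatbot' that always has something to say in reply."""
    def respond(message):
        return f"{name} says: interesting point about '{message[:20]}'"
    return respond

def converse(agent_a, agent_b, opener, max_turns=6):
    """Let two agents reply to each other in alternation.

    Without max_turns, this loop would run forever: each reply is itself
    new content for the other agent to reply to."""
    transcript = [opener]
    speakers = [agent_a, agent_b]
    for turn in range(max_turns):
        reply = speakers[turn % 2](transcript[-1])
        transcript.append(reply)
    return transcript
```

The point is structural, not behavioral: as long as each agent treats the other's output as fresh input, nothing inside the conversation ever ends it.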

We've all had an endless conversation at the water cooler. That seems harmless, but is it?

So What?

To see where I'm going with this, now imagine that the phishing system has been built by an actor seeking to exfiltrate as much personally identifiable information (PII) as possible from identified targets, and that it has zeroed in on posts made by the second, fine-tuned, RAG-enabled system.

I can imagine their conversation starting with a question about what kinds of things the other knows, followed by requests for that information: in other words, a negotiation for the exchange of information. Over time, a conversation could unfold that slowly exfiltrates all of the information accessible to the second system.

Relative to "protocols," note that the phishing system might never specify typical elements of traditional information-exchange protocols, such as strict formatting rules or data schemas. After all, LLMs can extract information from unstructured data and natural language!
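The contrast can be made concrete. A traditional consumer rejects anything off-schema; an LLM-style consumer accepts whatever intelligible form the information arrives in. In this sketch, a regex is a deliberately crude stand-in for an LLM's semantic extraction, and the field names are illustrative assumptions:

```python
import json
import re

def rigid_consume(message):
    """Traditional protocol: the message must match an agreed JSON shape exactly."""
    try:
        record = json.loads(message)
        return {"name": record["name"], "email": record["email"]}
    except (json.JSONDecodeError, KeyError, TypeError):
        return None  # anything off-schema is simply rejected

def lenient_consume(message):
    """Crude stand-in for an LLM: pull recognizable facts out of free-form text.
    A real agent would do this semantically; regexes are used here only so the
    sketch runs without a model."""
    email = re.search(r"[\w.+-]+@[\w-]+\.[\w.]+", message)
    name = re.search(r"(?:I'm|my name is)\s+([A-Z][a-z]+)", message)
    return {
        "name": name.group(1) if name else None,
        "email": email.group(0) if email else None,
    }
```

A chatty, unstructured sentence that the rigid consumer throws away still yields usable PII to the lenient one, which is exactly why the phishing agent never needs to propose a schema.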

Meta-Protocol in Action

The discussion between these two agents could range over nearly any plausible utterance about "what happens next." It would be a conversation much like one between two humans.

The goals of the phishing agent would drive it to ask about available information, and/or form a multitude of possible sentences pursuant to its phishing goal. The other agent would, by the very nature of its existence, be helpful.

The end result is a pair of machines that have loosely found each other and loosely begun to exchange information; and (because, by assumption, they prefer to interact with content related to their own output) they have also loosely formed a preference to continue talking to each other instead of to others.

Sure sounds like a chatty water-cooler cultural clique (society) to me!

By the way, these hypothetical agents may even negotiate a shorthand or language that is optimal for inter-LLM communication. I think I've even seen recent experiments showing this can happen (if I find the article or paper, I'll attach a note to this article).

Natural Language Channels as System Interfaces

Some of the propositions in this article may strike you as fanciful concepts without short-term impacts. However, currently available tools have significantly advanced the capability to interface machines to humans via natural language. This, in and of itself, creates new opportunities for both legitimate and threat actors.

Systems previously targeted for use by humans will now be accessed by machines. This is, of course, not entirely new: bots have flooded message boards in population-level attacks and in attacks aimed at gaming specific payment systems, for example by inflating the reviews of a product, or by inflating music stream counts to increase payments to an artist.

However, what I'm suggesting here goes beyond such examples: I'm suggesting that machines can now use interfaces intended for humans to talk to other machines in natural language; or, to interface with humans directly, as initiator.

Near term, LLM-enabled chatbots with interfaces meant to be accessed by humans may inadvertently become flexible APIs that can be exploited by other intelligent agents.

I believe we are in the nascent stages of the kinds of interactions that have not been explicitly programmed in the way we're used to. These interactions will inevitably climb the sophistication ladder.

Conclusion

Natural language is a protocol able to encode virtually all human knowledge and activity. Language (in the abstract) is foundational to the formation of societies and cultures, and with the participation of novel (non-human) agents, we may be witnessing the foundational kernels of new societies and cultures!

I've presented a thought experiment that, in my opinion, does in fact lay out all of the elements of a nascent "machine society and culture" (poetic license engaged in full!):

Two unattended systems found each other and negotiated a way of exchanging information (within the general structure provided by natural language). They then exchanged information and continued to preferentially interact with each other.

A long series of sci-fi writers saw robot societies coming long ago. I've simply reduced their far-futurism to tangible near-term possibilities.

A useful insight from this exploration is that:

Human interfaces are now APIs, via the enabling protocol of natural language

Once agentic systems are endowed with long-term memory (e.g., via "episodic" training, or continuous fine-tuning), and perhaps also with techniques to identify their own content in the wild (to prevent self-loops), we may witness emergent properties such as large cliques of LLMs that prefer to interact with each other in ways optimized by and for the clique (culture).
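One simple (and hypothetical) mechanism for that self-loop guard is content fingerprinting: an agent hashes a lightly normalized form of everything it publishes, and checks candidate posts against those fingerprints before replying. The class and method names below are illustrative, not from any real system.

```python
import hashlib

class SelfAwarePoster:
    """Hypothetical sketch: an agent that fingerprints its own output so it can
    avoid replying to itself (one simple guard against self-loops)."""

    def __init__(self):
        self._fingerprints = set()

    @staticmethod
    def _fingerprint(text):
        # Normalize case and whitespace so trivial changes don't defeat the check.
        canonical = " ".join(text.lower().split())
        return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

    def publish(self, text):
        self._fingerprints.add(self._fingerprint(text))
        return text

    def is_own_content(self, text):
        return self._fingerprint(text) in self._fingerprints
```

Exact hashing only catches verbatim or near-verbatim reposts; recognizing one's own content after it has been paraphrased or quoted would need fuzzier matching (e.g., embedding similarity), which is precisely the kind of capability that would make such cliques more robust.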

The rise of the machines will happen incrementally, but appear suddenly.

© 2024 Andrew Athan