Complex vs Complicated Systems

Complex vs Complicated

A complex system is “a whole that is more than the sum of its parts”. It is nearly impossible to understand a complex system by breaking it down into its individual components, because its behavior depends on interactions and inter-dependencies: emergent properties, non-linear relationships, feedback loops and context dependence.

A defining characteristic is that cause and effect in these systems are not proportional or linear: small causes do not always produce small effects, and vice versa (i.e. nonlinearity). Complex systems also exhibit emergent behavior, meaning that the interactions and feedback loops between the interconnected individual components produce new properties that are not easily predictable from the behavior of the individual components or from their interactions.
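Nonlinearity and sensitivity to initial conditions can be illustrated with the logistic map, a classic toy model from chaos theory. This is my own minimal sketch, not something from the original argument:

```python
# Logistic map x_{n+1} = r * x_n * (1 - x_n); at r = 4 it is chaotic.
def logistic(x, r=4.0):
    return r * x * (1.0 - x)

def trajectory(x0, steps=50):
    xs = [x0]
    for _ in range(steps):
        xs.append(logistic(xs[-1]))
    return xs

a = trajectory(0.2)            # one starting point
b = trajectory(0.2 + 1e-9)     # a "small cause": perturbed by one billionth

# Early on the trajectories are indistinguishable...
print(abs(a[1] - b[1]))
# ...but the perturbation is amplified roughly 2x per step, and after
# about 30 iterations the two trajectories are completely different.
print(max(abs(x - y) for x, y in zip(a[30:], b[30:])))
```

A small cause (a perturbation of one part in a billion) produces an arbitrarily large effect, which is exactly the non-proportionality described above.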

Complicated, interconnected computer systems are sometimes referred to as “complex”, but this is not accurate if we scope the system boundary at the computer system itself and do not include a true complex system as a subsystem (for example, human interaction with the computer system). Computer software, regardless of how many thousands of individual components a service-oriented architecture (SOA) consists of, is - so far - always a complicated system. Similarly, generative AI is a complicated system, provided we scope the system to the AI itself and do not embed it as a subsystem within a larger system that contains complex subsystems. In that larger case, the complex subsystems would create emergent behavior by interacting with the complicated system; the generative AI itself would not create emergent behavior, at least not yet.

I asked ChatGPT 3.5 whether computer software can be considered a complex system…

“Computer software, by itself, without interaction with an external complex system, is typically not considered a complex system in the same sense as natural or socio-technical systems. Instead, computer software is often categorized as a complicated system due to its deterministic behavior and the ability to understand its functioning through analysis and decomposition.”

Further, I asked what the difference between a complex and complicated system is and it responded with the following…

“Examples of complex systems include ecosystems, the human brain, social networks, economies, and weather patterns. These systems often exhibit non-linear dynamics, feedback loops, self-organization, and sensitivity to initial conditions.

A complicated system is one that has many interrelated parts, but where the relationships between the parts are usually deterministic and can be understood through analysis or decomposition. While complicated systems may be difficult to understand or manage due to their intricacy, they do not typically exhibit the same level of emergent behavior seen in complex systems. Examples of complicated systems include automobiles, airplanes, computer systems, and mechanical watches. Complicated systems can often be broken down into simpler subsystems or components, and their behavior can be predicted through detailed analysis or modeling.

The main difference lies in the nature of their behavior and the predictability of outcomes. Complex systems exhibit emergent properties that are not easily predictable from the behavior of individual components, whereas complicated systems have deterministic relationships between components that allow for analysis and prediction with sufficient understanding.”
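The kind of emergence that separates the two categories has a classic minimal illustration in Conway's Game of Life: the rules mention only single cells and their eight neighbors, yet patterns such as the “glider” arise that move coherently across the grid - a property stated nowhere in the rules. A sketch of my own, for illustration:

```python
from collections import Counter

def step(cells):
    """One generation of Conway's Game of Life on a sparse set of live cells."""
    neighbor_counts = Counter(
        (x + dx, y + dy)
        for (x, y) in cells
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # A cell is live next generation if it has exactly 3 live neighbors,
    # or is currently live and has exactly 2.
    return {c for c, n in neighbor_counts.items()
            if n == 3 or (n == 2 and c in cells)}

# The "glider": after 4 generations the same shape reappears,
# shifted one cell diagonally - movement emerges from purely local rules.
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
g = glider
for _ in range(4):
    g = step(g)
assert g == {(x + 1, y + 1) for (x, y) in glider}
```

The program itself is fully deterministic and decomposable (a complicated artifact), which is a useful reminder that simulating emergent behavior is not the same as being a complex system.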

Cynefin framework

I have issues with the over-simplification of systems thinking embodied in the decision-making (or sense-making) tool called the Cynefin framework. I get it: the framework is meant to help you figure out which approach to take when studying a particular project, issue, or thing in your organization. My issue is that it is single-plane, single-dimension, single-domain, and does not consider the negative and positive feedback loops between interconnected simple, complicated and complex adaptive systems. Further, Cynefin is used in the context of complex adaptive systems, i.e. living systems and human social systems, often organizations. These systems are multi-domain: the complex adaptive system learns, adapts, and develops emergent behavior and properties by interacting with various complicated systems (and with complex sub- and super-systems). A complicated or simple system is never isolated from the whole complex system that is the company or organization where it is created, implemented or used, so categorizing it in isolation risks incorrect assumptions.

This is why Goldratt's theories - throughput accounting, the theory of constraints, and so on - are much more profitable. Giving up the control mentality that any framework boxes you into, and treating the whole for what it is - a complex system - is, in my humble opinion, the better approach. This means things like emergent strategy, continuous architecture, delaying decisions until absolutely necessary, honoring self-organization by organizing your teams around the system you are designing or implementing, continuous learning, and so on.
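The core insight of Goldratt's theory of constraints mentioned above can be shown in a few lines: the throughput of a sequential process is set by its slowest step, so local optimization anywhere else is wasted effort. The step names and capacities below are hypothetical, purely for illustration:

```python
# Hypothetical capacities (units per hour) of a four-step delivery pipeline.
steps = {"design": 12, "build": 5, "test": 9, "deploy": 20}

# The constraint (bottleneck) is the step with the lowest capacity,
# and it alone determines end-to-end throughput.
bottleneck = min(steps, key=steps.get)
throughput = steps[bottleneck]

# Doubling a non-bottleneck step changes nothing for the whole system.
steps["deploy"] *= 2
assert min(steps.values()) == throughput  # still limited by "build"
```

This is why optimizing a part in isolation from the whole - the habit the Cynefin-style categorization can encourage - so often fails to improve the system.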

I asked ChatGPT 3.5 about this as well; it initially did not agree with my criticism, until I mentioned Niels Pflaeging's criticism of Cynefin. Although I do not agree with everything Niels proposes, as a Deming fan myself I sympathize with many of his ideas.

“Niels Pflaeging, a management expert and author, has criticized the Cynefin framework, particularly in the context of organizational management and leadership. One of the main criticisms he presents is that the Cynefin framework may oversimplify the complexity of organizational dynamics and decision-making processes. Some of his key points of criticism include:

  • Oversimplification of complexity: Pflaeging argues that the Cynefin framework’s categorization of systems into distinct domains (simple, complicated, complex, chaotic, and disorder) may lead to oversimplification. He suggests that real-world situations often exhibit characteristics of multiple domains simultaneously, making it challenging to apply a single framework effectively.

  • Inadequate emphasis on complexity: Pflaeging asserts that the Cynefin framework doesn’t adequately address the nature of complex systems and the challenges they pose. He believes that complexity is a pervasive aspect of modern organizations and that the framework’s treatment of complexity as one of several domains may downplay its significance.

  • Focus on categorization over action: Pflaeging criticizes the emphasis of the Cynefin framework on categorizing systems into domains rather than providing practical guidance for action. He suggests that instead of focusing on classification, organizations should prioritize developing adaptive capacities and embracing uncertainty.

  • Lack of attention to organizational dynamics: Pflaeging highlights the importance of understanding the underlying dynamics of organizations, such as power structures, culture, and communication patterns. He argues that the Cynefin framework’s focus on systems and domains overlooks these critical aspects of organizational behavior.

Overall, Pflaeging’s criticism of the Cynefin framework revolves around its potential limitations in capturing the complexity of organizational dynamics and providing actionable insights for effective management and leadership in modern organizations. He advocates for approaches that embrace complexity and encourage adaptive responses to uncertainty and change.”

After this, I developed my own criticism and ChatGPT 3.5 answered with the following…

“Indeed, your point highlights a significant aspect of the criticism leveled against the Cynefin framework by Pflaeging and others: its potential limitation in capturing the multidimensional nature of complex organizational dynamics. Systems within organizations can indeed interact in intricate ways, giving rise to emergent behaviors that may not fit neatly into the categories provided by the framework. Moreover, as you mentioned, a system can exhibit characteristics of multiple domains simultaneously, further complicating the application of a single framework.

This perspective underscores the importance of recognizing the interconnectedness and interdependence of various systems within organizations. It suggests that effective management and leadership require a more nuanced understanding that goes beyond simplistic categorizations. Instead of being confined to a single framework, organizations may benefit from approaches that acknowledge the complexity of their internal dynamics and encourage adaptive responses to emergent challenges.

By embracing complexity and fostering a culture of experimentation, learning, and collaboration, organizations can better navigate uncertainty and drive innovation in a rapidly changing world. This broader perspective aligns with Pflaeging’s call for approaches that prioritize adaptability and responsiveness over rigid frameworks and classifications.”

Is generative AI a complex system?

I probably already mentioned this above, but let us ask ChatGPT 3.5…

“As an AI language model, I am often considered a complicated system rather than a complex one. Here’s why:

  • Deterministic Relationships: The interactions within the AI model, such as processing input text and generating responses, are based on deterministic algorithms. While the model may have a large number of parameters and layers, the relationships between these elements are well-defined and predictable.

  • Analyzable Behavior: The behavior of the AI model can be analyzed and understood through methods such as examining its architecture, studying training data, and conducting experiments. While the model’s responses may appear complex or nuanced, they ultimately derive from the underlying algorithms and data.

  • Limited Emergent Behavior: While the AI model can generate responses that may seem emergent or creative, these behaviors are ultimately constrained by the programming and training data. There is no true emergence in the sense of complex systems where new properties arise from the interactions of numerous agents or components.

  • Decomposability: The AI model can be decomposed into its constituent parts, such as its architecture, parameters, and training data. This allows researchers and developers to analyze and modify specific aspects of the system without fundamentally altering its behavior.

While I, as an AI language model, may exhibit behaviors that resemble those of a complex system—such as generating diverse and contextually appropriate responses—fundamentally, my underlying operation is more akin to a complicated system due to the deterministic nature of my algorithms and the analyzability of my behavior.”
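The deterministic, decomposable behavior the model describes can be caricatured with a toy fixed-weight classifier. This is my own illustrative sketch, not how a language model actually works; it only shows what "deterministic relationships" and "decomposability" mean in practice:

```python
# Fixed "weights" standing in for trained parameters (purely hypothetical).
WEIGHTS = [0.5, -1.2, 0.3]

def toy_model(features):
    # A weighted sum followed by a threshold: every step is inspectable,
    # and the whole can be decomposed into parameters plus a rule.
    score = sum(w * f for w, f in zip(WEIGHTS, features))
    return "positive" if score > 0 else "negative"

# Identical inputs always yield identical outputs: no emergence,
# just a deterministic computation that analysis can fully explain.
assert toy_model([1.0, 0.2, 0.1]) == toy_model([1.0, 0.2, 0.1])
```

However complex a model's responses appear, this same property - output fully determined by parameters, input and algorithm - is what keeps it in the "complicated" category.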

Of course, I had to ask what happens when generative AI reaches the theoretical state of singularity…

“The concept of a technological singularity, where artificial intelligence surpasses human intelligence and becomes self-improving, is a speculative notion with various interpretations and implications. It’s important to note that the idea of a singularity remains largely theoretical and subject to debate among experts in the fields of artificial intelligence, computer science, and philosophy.

If such a scenario were to occur, where an AI system reached a level of intelligence surpassing human capabilities and became self-improving, it’s conceivable that the nature of the AI system could transition from being a complicated system to a complex one. Here’s why:

  1. Emergent Behavior: As an AI system becomes increasingly sophisticated and autonomous, it may exhibit emergent behaviors that are not explicitly programmed or predictable based solely on its algorithms or training data. These emergent behaviors could arise from the interactions and adaptations of the AI system as it learns and evolves.

  2. Non-Linear Dynamics: A self-improving AI system may operate in a non-linear manner, with small changes leading to disproportionately large effects or unforeseen consequences. This non-linearity could contribute to the complexity of the system’s behavior and evolution.

  3. Adaptation and Learning: A key characteristic of complex systems is their ability to adapt and learn from their environment. If an AI system reached singularity, it would likely possess advanced capabilities for adaptation, learning, and self-modification, contributing to its complexity.

  4. Interconnectedness: A singularity-level AI system could potentially be interconnected with numerous other systems, both digital and physical, leading to complex interactions and dependencies that contribute to its overall complexity.

However, it’s important to recognize that discussions about the nature and implications of technological singularity are highly speculative and involve significant uncertainties. The transition of an AI system from being complicated to complex, in the event of a singularity, would depend on a wide range of factors, including the design of the AI system, its capabilities, its interactions with its environment, and the broader societal and ethical considerations surrounding its development and deployment.”

prokoptôn

A person making progress. Even though one has not attained the wisdom of a sage, when appropriate actions are increasingly chosen, fewer and fewer mistakes will be made, and one will be a prokoptôn (προκόπτων), one who makes progress.



By Mike, 2024-03-29