The Hadean Architecture Whitepaper
Explore a new alternative to existing distributed computing methodologies
The Hadean Platform implements a unique process model that transforms the performance, reliability and scalability of cloud computing.
It underpins a number of libraries that solve intractable distributed problems, instilling a set of core properties that allows them to run at massive scale. These libraries act as an interface for developers looking to build out complex, high-performance applications across a distributed cloud and edge network.
“Imagine writing a 50 line program to analyse megabytes of data on your laptop. Now imagine applying the same program to terabytes of data, using thousands of processors in the cloud. This is the system Hadean is building. It will enable everyone to access and explore the world's data using unlimited computing power – leading to new understanding, ideas and innovations.”
David May, Inventor of the first parallel microprocessor
A brief overview of this document
This whitepaper is designed to provide a comprehensive overview of Hadean’s architecture. However, not all sections are relevant to all people, so feel free to jump to the sections most relevant to you.
Part 1: A Distributed Process Model
It begins with an overview of the distributed process model, providing insight into the computer science fundamentals that underpin Hadean. It explains the principles that enable dynamically scalable and distributed applications.
Part 2: The Hadean Platform
The second part explains the architecture of the Hadean platform and explores the different component parts, which exist to implement the process model. It also details how and when to use Hadean applications.
Part 3: Aether Engine
The solutions architecture section provides a high-level overview of Aether Engine – a spatial simulation library built upon the Hadean architecture. It explains the core concepts, introduces each of the component parts and explores how they fit together. The document then dives into the process, algorithms and structures that enable Aether Engine to manage its simulations. The section also explores in depth how the framework is designed and optimised, and should serve as a useful guide to anyone looking to understand how Aether Engine runs and manages massive-scale, distributed simulations.
Part 4: Muxer
Muxer is a library that can run as a standalone application or in conjunction with Aether Engine. This section of the document digs into what makes it possible to run thousands of connected clients in near real-time, for instance when scaling an MMO game.
Part 5: Use Cases
The final part of the document looks at some real world applications of Hadean, including online gaming, life sciences and virtual events.
All this is to come, but if you're interested in our product-specific resources, follow one of these links: Aether, Muxer
A Distributed Process Model
Unlike existing operating systems, Hadean treats distributed computing as a first-class concern, underpinning all applications with a set of core properties. It uses a hybrid approach to its design, providing a full distributed computing environment as a native feature within Linux. The model integrates with other technologies consistent with the properties Hadean holds as essential for effortless scaling.
The implementation of the platform is referred to as the Hadean Process Model and instils the following capabilities:
- Core distributed resource allocation and distributed IPC in the systems layer
- Strong process and resource isolation
- Distributed operating system scheduling
- Dynamic scale at run-time
|  | Erlang/Orleans | Pulumi/libcloud | Hadean |
| --- | --- | --- | --- |
| Runtime Environment | VM-based, bytecode | N/A: infrastructure only | Metal, OS syscalls API, native code |
| Domain | Processing only | Infrastructure only | Infrastructure + code |
Hadean utilises native code with explicit resource management, in contrast to alternatives such as Erlang and Orleans that use virtual-machine runtimes (BEAM or .NET respectively). At the same time, Hadean distinguishes itself from infrastructure-as-code models such as Pulumi by providing abstraction over process spawning.
Hadean applications can have their memory and CPU scaled at runtime via operating system resource allocation requests, in much the same way as a standard Windows or Linux program. The crucial difference is that a Hadean system does not need these resources to reside on a single server — a limitation of existing approaches. Instead, it provides developers powerful out-of-the-box abstractions over an entire cloud datacentre to build, deploy, scale and debug distributed applications.
For example, take a twenty-line program executing a series of functions on a dataset. It could run on a single server when operating on a small dataset, but if the computation becomes too large during execution, it may scale up at runtime to 100 servers in order to complete the task.
The model has been carefully engineered to be robust when scaling and agnostic of actual distribution. Programs built, compiled and tested on local machines can be effortlessly deployed and scaled across cloud and edge datacentres. Any application strongly preserves crucial properties in behaviour and execution, enabling programs to be easily scaled up and down, without compromising the runtime.
Unlike existing approaches, the ability to scale in this manner is an intrinsic property of the program and the Hadean platform. It does not require specialised frameworks in an enterprise architecture or complex engineering to achieve this dynamic scale. This capability is known as scale invariance and it is the core property upon which all Hadean libraries rely.
Hadean Platform-as-a-Service
Developers are able to interface with the Hadean platform through the distributed libraries that Hadean provides, including Aether Engine and Muxer.
These applications solve specific intractable distributed problems, and scale dynamically to deliver high performance regardless of computational intensity.
The Scheduler
The scheduler handles all requisite operations such as resource accounting, machine provisioning, bootstrapping of server nodes and transmission of processes to spawn.
It currently starts managers based on a configuration file, taking in the IP addresses, ports, CPUs and memory of available machines. It has an overall view of everything that is running, and once it has created one or more managers it can then request that they spawn user processes. User processes run the code requested by the end user, or are spawned by an already-running process (via a spawn() call).
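For illustration only, such a configuration might look something like the sketch below; the field names and format are hypothetical, not the scheduler's actual configuration schema.

```toml
# Hypothetical scheduler configuration (illustrative only, not the real schema).
# Each entry describes a machine the scheduler can place managers and user processes on.
[[machine]]
address   = "10.0.0.11"  # IP address of the machine
port      = 7001         # port the manager listens on
cpus      = 16           # CPU cores available to user processes
memory_gb = 64           # memory available to user processes

[[machine]]
address   = "10.0.0.12"
port      = 7001
cpus      = 32
memory_gb = 128
```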
Implementation
The platform provides a robust and scalable foundation for Hadean applications. Programs link against the Hadean library, which provides an asynchronous “spawn” function that can be called at any time to create a new process.
When the spawn function is called, the library communicates with the scheduler, providing both the program and information on how to run it. A locality API is available to indicate preferences about the kinds of infrastructure on which the process should be run — for instance the geographical location, proximity to other infrastructure and available hardware.
The scheduler, if necessary, provisions the infrastructure (machines, networking, etc) and launches a manager on each new server, which has responsibility for the machine on which it runs. It then requests that the manager launch the provided program. The manager runs the user-supplied program on the (possibly newly-created) machines, and runs an enforcer that forwards any interesting output from the process to a logging service. The enforcer administers the strong isolation and predictability guarantees provided by the Hadean model, and ensures dangerous syscalls or instructions that can break these guarantees are sanitised.
Hadean channels serve as a first-class distributed IPC (Inter-Process Communication) primitive, ensuring all programs are both distribution-agnostic and scale-agnostic. After spawn(), channels (either local or distributed) may be created between the spawning and spawned processes for IPC during the program’s execution.
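To make the shape of this model concrete, the sketch below mimics the spawn-and-channel pattern locally; the names spawn and Channel are illustrative stand-ins rather than the actual Hadean API, and a std::thread plays the role of a process the scheduler could just as well have placed on a remote machine.

```cpp
// Minimal local stand-in for the spawn/channel pattern described above.
// "spawn" and "Channel" are illustrative names, not the real Hadean API.
#include <condition_variable>
#include <functional>
#include <iostream>
#include <mutex>
#include <queue>
#include <string>
#include <thread>
#include <utility>

struct Channel {
    std::queue<std::string> q;
    std::mutex m;
    std::condition_variable cv;

    void send(std::string msg) {
        { std::lock_guard<std::mutex> lk(m); q.push(std::move(msg)); }
        cv.notify_one();
    }
    std::string recv() {
        std::unique_lock<std::mutex> lk(m);
        cv.wait(lk, [&] { return !q.empty(); });
        std::string msg = q.front();
        q.pop();
        return msg;
    }
};

// Stand-in for spawn(): launch the entry point with a channel back to the parent.
template <typename F>
std::thread spawn(F&& entry, Channel& ch) {
    return std::thread(std::forward<F>(entry), std::ref(ch));
}

int main() {
    Channel ch;
    auto worker = [](Channel& parent) { parent.send("partial result"); };
    std::thread child = spawn(worker, ch);
    std::cout << ch.recv() << "\n";  // call site is the same whether local or remote
    child.join();
}
```

The point of the real model is that the call site looks the same whether the spawned process lands on the local machine or on a server provisioned elsewhere in the cloud.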
Machine Scaling
Dynamic scalability is a core component of Hadean, ensuring problems such as under-/over-provisioning are avoided. The scheduler is built with a plugin architecture that can be used to add or remove backends for different means of provisioning machines — different clouds, on-prem deployments, et cetera.
Hadean pools resources to ensure that they are available in a timely manner. The platform maintains a pre-allocated list of machines that have each had a manager and enforcer added in anticipation. In order to employ them, the user executable also needs to be transferred, which happens at run-time. Each machine is controlled by an on-machine manager. Managers are in turn under the control of the Hadean scheduler, and can be said to be part of the Hadean ‘cluster’.
The Hadean platform automatically requests and “warms up” (transfers the necessary components, e.g. manager, enforcer, etc.) new boxes as its needs increase. The sequence of events is:
- The spawn() command is called and there is insufficient “space” on the existing box(es), so Hadean requests a new box from the underlying infrastructure/cloud host
- Hadean receives the IP address of the new box
- The manager and enforcer are sent to the box
- The user’s executable can now be sent to the new box
When resources are required neither by the program nor by the pool, they are automatically released back to the cloud environment.
Distributed Native Tooling
Distributed Debugging
Understanding and debugging distributed applications can be difficult, especially on previous platforms that don't offer a holistic view of the system, programs and processes. Hadean, however, provides native, fully integrated tooling; the distributed debugging support allows debugging any and all processes in the system as easily as on a single machine. It integrates with visual debugger frontends such as Visual Studio and VS Code, and with command-line frontends such as GDB. Hadean also includes postmortem distributed debugging support from coredumps: it interfaces with the system to collate and present all coredumps, and takes advantage of higher-level information to present the most relevant crashes first.
Combined, these tools make debugging Hadean programs as easy as debugging local programs.
Network Visualiser
The network visualiser doesn't have a single-machine analogue in the way debugging does, but it allows users to view a network diagram of every process in the system and every connection between them, in real time.
It also provides a wealth of information about those connections: not just latency and bandwidth, but detailed statistics such as lost packets, window sizes and the time since the last send or receive. Users can filter and aggregate processes and connections to answer high-level questions such as "what is the bandwidth between all datacentres?" or "what is the packet loss distribution over all of my players?"
The Hadean Platform and Alternate Approaches
Hadean exists to overcome the challenges posed by distributed computing. There are, however, alternative approaches available to the user, each of which tackles the problem differently.
There are three particular distinctions between Hadean and these alternatives:
1. Cloud-nativity: Hadean was designed from first-principles to be both the necessary and sufficient ideal abstraction on top of cloud hardware. Previous approaches are either not natively designed for cloud, adapt existing technologies to the cloud or serve to work around the limitations of non-cloud-native tech.
2. Single Abstraction: Hadean is designed as a single abstraction. It provides a concurrency model, consistency model, distributed IPC, distributed scheduling, resource management, easy deployment and dynamic scaling that are first-class citizens of the platform. Existing technologies solve parts of the distributed computing problem, but often introduce new issues as the user must engineer their own infrastructure stack to carry out even basic cloud computing. For example, a stack might be engineered with Chef, Docker, Kubernetes, the JVM and Akka whereas on Hadean the equivalent capability is trivially available, out-of-the-box, on every instance.
3. Lightweightness: Hadean does away with much of the cruft that persists in existing tools for legacy reasons, and puts its emphasis on distributed computing in the cloud world.
When to Use Hadean Libraries vs Other Approaches
Kubernetes and containerisation technologies are well suited for designing cloud apps. However, despite being extremely portable, they lack the capabilities required for dynamic scaling and high performance. Conversely, HPC technologies such as MPI were developed to tune programs at a low level to the specific characteristics of a supercomputer in pursuit of performance. This approach results in highly-tuned but static workloads. The emphasis is on conserving precious supercomputing resources rather than highly-productive, scale- and distribution-agnostic computing.
Hadean by contrast brings together the inherent qualities of cloud-nativity, combining dynamic scaling and high performance, and unifying the infrastructure and application as a single concern. Hadean applications are therefore best suited for:
- High-performance and real-time synchronisation
- Powerful and efficient first-class distributed communication
- High-performance compute with real-time guarantees
- Rapid dynamic scale-up and scale-down
- Situations that are hard to predict or capacity plan for
- Programs that serve a need based on location: e.g. highly mobile applications that may need to be allocated and de-allocated at runtime across multiple different data centres in various cloud and edge platforms
Aether Engine
Aether Engine is a spatial simulation application. It scales across different processors and physical machines, utilising more computing power as the simulations grow in complexity and size. From a solutions architecture perspective, it is made up of a number of core components.
Solutions Architecture
Simulation Logic
The Aether Engine framework provides a set of APIs to set up and control all simulation logic. It is lightweight and extensible, enabling users to plug in and distribute their libraries, whilst at the same time providing a set of tools that can speed up the process of building a simulation – for instance the Nvidia PhysX library.
Spatial Simulation Management
To manage its simulation, the framework utilises a distributed octree data structure. As more entities condense into a single spot, the octree data structure is used to repartition space to balance load across CPUs. More complex regions are decomposed into a greater number of cells, while less complex regions are handled by fewer cells.
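As a simplified sketch of the idea (not Aether Engine's actual implementation), a cell of such an octree might split into eight children once the number of entities it owns exceeds a load threshold:

```cpp
// Simplified sketch of load-based octree repartitioning; not Aether Engine's
// actual data structure, just an illustration of the idea.
#include <array>
#include <cstddef>
#include <memory>
#include <vector>

struct Vec3 { float x, y, z; };
struct Entity { Vec3 pos; };

struct OctreeCell {
    Vec3 min, max;                                    // spatial bounds of the cell
    std::vector<Entity> entities;                     // entities simulated by this cell
    std::array<std::unique_ptr<OctreeCell>, 8> kids;  // populated after a split

    bool is_leaf() const { return kids[0] == nullptr; }

    // Split into eight children and hand the entities over, so each child
    // (and the CPU it maps to) carries a smaller share of the load.
    void split() {
        Vec3 mid{(min.x + max.x) / 2, (min.y + max.y) / 2, (min.z + max.z) / 2};
        for (int i = 0; i < 8; ++i) {
            Vec3 lo{(i & 1) ? mid.x : min.x, (i & 2) ? mid.y : min.y, (i & 4) ? mid.z : min.z};
            Vec3 hi{(i & 1) ? max.x : mid.x, (i & 2) ? max.y : mid.y, (i & 4) ? max.z : mid.z};
            kids[i].reset(new OctreeCell{lo, hi});
        }
        for (const Entity& e : entities) {
            int idx = (e.pos.x >= mid.x ? 1 : 0) | (e.pos.y >= mid.y ? 2 : 0) | (e.pos.z >= mid.z ? 4 : 0);
            kids[idx]->entities.push_back(e);
        }
        entities.clear();
    }

    // Repartition one level at a time: split any leaf that has become too
    // "complex" (here measured simply by entity count).
    void rebalance(std::size_t max_entities_per_cell) {
        if (is_leaf()) {
            if (entities.size() > max_entities_per_cell) split();
        } else {
            for (auto& k : kids) k->rebalance(max_entities_per_cell);
        }
    }
};
```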
Muxer
Muxer can run as either a standalone application or part of the Aether Engine ecosystem. It connects vast numbers of disparate clients to each other and the main simulation, across a distributed network. Providing bidirectional data flow from a single rendezvous point, it comprises multiple instances or “nodes”, locally clustered to ensure that the most important information is processed and returned to the client as soon as possible. As more connections join the network, new nodes are automatically created, scaling to enable real-time data streaming, regardless of computational intensity.
The Muxer Node
Located close to the client, the Muxer Node comprises a library for event forwarding and a netcode implementation that handles the interest management (network relevancy). It uses a binary space partitioning tree to prioritise which data gets sent to a client and reduce bandwidth.
The Muxer Client
The Muxer Client provides a deep integration into the client engine. The plugin replicates data (for example, simulation entities) directly into the client as actors, removing the need to write replication code manually.
Software Development Kit
Aether Engine has a simple, user-friendly SDK, which connects the engine with the developer. It opens up the API for developers, who can write code and deploy directly to the engine.
Debugging
Aether Engine debugging includes breakpoints and tight integration with Visual Studio.
Spatial Simulation Management
Entity simulation almost always includes interactions between nearby entities, which in a naive implementation can lead to O(N²) complexity. Aether Engine uses spatial acceleration data structures to bring this down to O(N log N) interactions, and parallel simulation workers to divide the complexity further.
Morton code, a locality-preserving space-filling curve, is used to sort workers and entities, which reduces O(N²) spatial algorithms to O(N log N) and optimises use of CPU caches. It is particularly helpful when all entities within a certain distance want to interact with each other.
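The sketch below illustrates the technique in general terms (it is not the engine's exact routine): a 3D Morton key interleaves the bits of quantised x, y and z coordinates, so sorting by the key keeps spatially close entities adjacent in memory.

```cpp
// Illustrative 3D Morton (Z-order) encoding; not Aether Engine's exact routine.
#include <algorithm>
#include <cstdint>
#include <vector>

// Spread the lower 21 bits of v so that two zero bits sit between each bit.
static std::uint64_t spread_bits(std::uint64_t v) {
    v &= 0x1fffff;  // 21 bits per axis fits three axes into a 64-bit key
    v = (v | (v << 32)) & 0x1f00000000ffffULL;
    v = (v | (v << 16)) & 0x1f0000ff0000ffULL;
    v = (v | (v << 8))  & 0x100f00f00f00f00fULL;
    v = (v | (v << 4))  & 0x10c30c30c30c30c3ULL;
    v = (v | (v << 2))  & 0x1249249249249249ULL;
    return v;
}

// Interleave x, y, z into a single locality-preserving key.
static std::uint64_t morton3d(std::uint32_t x, std::uint32_t y, std::uint32_t z) {
    return spread_bits(x) | (spread_bits(y) << 1) | (spread_bits(z) << 2);
}

struct Entity { std::uint32_t cx, cy, cz; /* quantised grid coordinates */ };

// Sorting by Morton key groups nearby entities, which improves cache use and
// lets neighbourhood queries touch contiguous runs instead of scanning everything.
void sort_by_locality(std::vector<Entity>& entities) {
    std::sort(entities.begin(), entities.end(), [](const Entity& a, const Entity& b) {
        return morton3d(a.cx, a.cy, a.cz) < morton3d(b.cx, b.cy, b.cz);
    });
}
```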
The complexity of a simulation also depends on the density of entities, which can change rapidly, often shifting from one virtual space to another. Aether Engine addresses this with a lightweight process reallocation mechanism, both at the application level and as part of the underlying Hadean platform. A distributed octree maps virtual space to CPU space and allocates more cores to areas with high numbers of entities or, in non-entity simulations, high compute density. This is supported by the Hadean platform's low-latency process and communication primitives, enabling fast process startup (either locally or remotely), channel connection and sending.
Aether Engine pools workers, ensuring they are available to the simulation almost instantly. The workers ‘sit’ on a ‘shelf’, ready to be taken and inserted into the simulation, until they’re no longer needed, at which point they’re put back on the ‘shelf’. The ‘shelf’ is an array of processes stored in the octree manager, i.e. the top-most node in the octree.
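In rough terms (the types and names here are illustrative, not the manager's actual code), the ‘shelf’ behaves like a simple pool of ready worker handles:

```cpp
// Illustrative worker pool ("shelf"); not the actual octree manager code.
#include <optional>
#include <vector>

struct WorkerHandle { int process_id; };  // stand-in for a pre-spawned Hadean process

class WorkerShelf {
    std::vector<WorkerHandle> idle_;      // pre-spawned workers waiting to be used
public:
    // Put a freshly spawned worker on the shelf ahead of demand.
    void stock(WorkerHandle w) { idle_.push_back(w); }

    // Take a ready worker if one exists, avoiding a cold process start;
    // an empty result means the caller must ask the platform to spawn one.
    std::optional<WorkerHandle> take() {
        if (idle_.empty()) return std::nullopt;
        WorkerHandle w = idle_.back();
        idle_.pop_back();
        return w;
    }

    // Return a worker that is no longer needed by the simulation.
    void put_back(WorkerHandle w) { idle_.push_back(w); }
};
```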
The manager is written with efficiency first, enabling a single manager to handle hundreds or thousands of Hadean processes (which map to cells). The majority of the communication is directly between neighbours, which allows full use of local bandwidth. When a change to the structure is needed the manager steps in with an efficient rearranging of cells. The interface provided on the cell is simple: a cell does some computation in an area, communicating to neighbours, and reporting its current load to the manager. The simulation interface is even simpler, only needing to know about local entities, foreign entities and aggregates.
Aether Engine provides an easy-to-use simulation model on top of this performant framework. Entities have single authority, meaning only one process will make modifications to an entity at any time. Nearby entities will likely be on the same worker and so have the same authority as each other. Entities also have a visibility range, a distance within which they can interact with one another. Aether Engine transparently handles the serialisation, compression and communication necessary to get the data to the right place at the right time. This process is called handover.
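A simplified sketch of the kind of decision involved (not the engine's actual handover logic): an entity that leaves its cell must be handed over to the neighbouring worker, while one within visibility range of the boundary must be shared so cross-boundary interactions still see it.

```cpp
// Simplified sketch of the decisions behind handover near a cell boundary;
// not the engine's actual logic, just an illustration of the bookkeeping involved.
#include <algorithm>
#include <limits>

struct AABB { float min[3], max[3]; };                    // a worker's region of space
struct Entity { float pos[3]; float visibility_range; };

bool inside(const AABB& cell, const Entity& e) {
    for (int i = 0; i < 3; ++i)
        if (e.pos[i] < cell.min[i] || e.pos[i] >= cell.max[i]) return false;
    return true;
}

// Distance from the entity to the nearest face of the cell.
float distance_to_boundary(const AABB& cell, const Entity& e) {
    float d = std::numeric_limits<float>::infinity();
    for (int i = 0; i < 3; ++i) {
        d = std::min(d, e.pos[i] - cell.min[i]);
        d = std::min(d, cell.max[i] - e.pos[i]);
    }
    return d;
}

// An entity that has left the cell is handed over (its authority moves to the
// neighbouring worker); one within visibility range of the boundary is shared
// with neighbours so cross-boundary interactions still see it.
enum class Action { Keep, Handover, ShareWithNeighbours };

Action classify(const AABB& cell, const Entity& e) {
    if (!inside(cell, e)) return Action::Handover;
    if (distance_to_boundary(cell, e) < e.visibility_range) return Action::ShareWithNeighbours;
    return Action::Keep;
}
```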
Aether Engine also provides mechanisms for longer range or global communication and computation. Messages can be sent to any region of space (either a single point, or an area potentially covering multiple workers) and entities addressed either by ID or a custom mechanism. It’s also possible to carry out global computation on a worker process called “global state”, enabling computation that needs to happen exactly once and/or cannot be replicated.
Framework Optimisation
Most current frameworks don’t scale effectively, especially in an environment with many CPU cores. Older-generation engines are written without a data-oriented approach, usually around an inheritance hierarchy, which makes it more difficult to divide the code across multiple cores and threads (even in a multi-threaded environment). A newer generation is emerging around the ECS (entity-component-system) framework.
The ECS model separates data from logic and makes explicit the dependencies between systems and the components of entities, rendering it a perfect model for distribution. Systems can be run in parallel, and entities can be distributed over multiple workers and simulated in parallel. All data is contained within entities and components, ensuring they can easily be completely serialised and sent to other processes. The ECS is also very beneficial for maximising cache throughput, and for improving the readability and maintainability of simulation logic.
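A minimal illustration of the pattern (a generic ECS sketch, not Aether Engine's specific framework): components are plain data, entities are indices into component arrays, and logic lives in systems that iterate over those arrays.

```cpp
// Minimal ECS illustration: plain-data components, logic in systems.
// Generic example, not Aether Engine's specific framework.
#include <cstddef>
#include <vector>

// Components are plain data, so entities serialise and move between workers easily.
struct Position { float x, y, z; };
struct Velocity { float x, y, z; };

// Entities are indices into parallel component arrays (a cache-friendly layout).
struct World {
    std::vector<Position> positions;
    std::vector<Velocity> velocities;
};

// A system is a function over component arrays; independent systems can run in
// parallel, and disjoint entity ranges can be simulated on different workers.
void movement_system(World& w, float dt) {
    for (std::size_t i = 0; i < w.positions.size(); ++i) {
        w.positions[i].x += w.velocities[i].x * dt;
        w.positions[i].y += w.velocities[i].y * dt;
        w.positions[i].z += w.velocities[i].z * dt;
    }
}
```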
Framework Integrations
PhysX is an external simulation library we’ve integrated as a plugin. It has complex, opaque internal state and hands out handles to PhysX objects, yet it is fully wrapped into a neat ECS-based plugin: the PhysX tick becomes an ECS system that runs every simulation tick, and the opaque PhysX handle is wrapped into a component. PhysX's built-in serialisation is used to communicate entities between workers, and the components and system still support using PhysX's APIs directly.
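In outline, the wrapping looks roughly like the hedged sketch below (not the actual plugin source): the opaque PhysX actor pointer lives in a component, and the scene step runs as a system each tick before results are copied back into engine-side components.

```cpp
// Rough sketch of wrapping PhysX into an ECS plugin; not the actual plugin source.
#include <PxPhysicsAPI.h>
#include <cstddef>
#include <vector>

// The opaque PhysX handle is stored in a component attached to an entity.
struct PhysXBodyComponent {
    physx::PxRigidDynamic* body = nullptr;
};

// Engine-side transform component kept in sync with the physics world.
struct TransformComponent { float x, y, z; };

// The PhysX tick as an ECS system: step the scene once per simulation tick,
// then copy the resulting poses back into the engine-side components.
void physx_tick_system(physx::PxScene& scene,
                       std::vector<PhysXBodyComponent>& bodies,
                       std::vector<TransformComponent>& transforms,
                       float dt) {
    scene.simulate(dt);        // advance the physics world
    scene.fetchResults(true);  // block until the step has completed
    for (std::size_t i = 0; i < bodies.size(); ++i) {
        if (!bodies[i].body) continue;
        physx::PxTransform pose = bodies[i].body->getGlobalPose();
        transforms[i] = {pose.p.x, pose.p.y, pose.p.z};
    }
}
```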
This plugin-based architecture is very flexible, and could also support EnTT (another widely used ECS framework) and Recast/Detour (a widely used pair of AI, navigation and pathfinding libraries).
Muxer
As a standalone library, Muxer connects vast numbers of disparate clients across a distributed cloud and edge network. Providing bidirectional data flow from a single rendezvous point, it comprises multiple instances or “nodes”, locally clustered to ensure the most important information is processed and returned to the client as soon as possible. As more connections join the network, new nodes are automatically created, scaling to enable real-time data streaming, regardless of computational intensity.
When used in combination with Aether Engine, Muxer ensures low latency, globally distributed simulations are made available to hundreds of thousands of connected clients to monitor, view and interact with in real-time.
Muxer Nodes
A Muxer Node comprises two main components - a library (libmuxer) and interest management. It also includes a client authentication layer that provides security mechanisms to protect against throttling, security risks and client misbehaviour such as failing to accept updates or sending too much event data back to the simulation / application.
Libmuxer
Libmuxer enables servers to push information and messages out to clients, as well as receive them. Long running processes remove the need for repeated inefficient connection requests, reducing the overall time it takes for data to be sent back and forth; once the connection is established, communication no longer needs to be client initiated.
It makes extensive use of asynchronous IO, ensuring thousands of connections are handled simultaneously without needing a single thread per client. It also handles the control events sent by each client, forwarding them back to the simulation. Muxer receives messages from the simulation containing a partial world state. From these it reconstructs a complete world state, and stores it in a data structure.
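In essence (an illustrative sketch rather than libmuxer's internals), reconstructing the complete world state amounts to upserting each partial update into a store keyed by entity ID, keeping the newest state seen for each entity:

```cpp
// Illustrative reconstruction of a complete world state from partial updates;
// not libmuxer's actual internals.
#include <cstdint>
#include <unordered_map>
#include <utility>
#include <vector>

struct EntityState { float x, y, z; };

// A partial update from one simulation worker: only the entities it owns.
struct PartialUpdate {
    std::uint64_t tick;
    std::vector<std::pair<std::uint64_t, EntityState>> entities;  // (entity id, state)
};

class WorldState {
    struct Stored { EntityState state; std::uint64_t tick; };
    std::unordered_map<std::uint64_t, Stored> entities_;
public:
    // Upsert each entity, keeping only the newest state seen for its id.
    void apply(const PartialUpdate& update) {
        for (const auto& [id, state] : update.entities) {
            auto it = entities_.find(id);
            if (it == entities_.end() || it->second.tick <= update.tick)
                entities_[id] = Stored{state, update.tick};
        }
    }
};
```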
When used with Aether Engine, a dispatcher routes internal Aether Engine messages as well as those received from Muxer. The dispatcher also handles addressing and routing messages to the right workers. For example, a player entity exists in one worker in the simulation; the player sends input events through Muxer to the dispatcher, which routes them to the worker containing the player entity.
Interest Management
Libmuxer is not responsible for deciding which data gets sent to a client; that is the job of the interest management (net relevancy) component.
Hadean’s generic netcode implementation uses r-trees to restructure the octree simulation within Muxer and efficiently query which information is relevant to each client. Entities found by the r-tree queries are scheduled to be sent to the client based on metrics corresponding to the importance of each entity, enabling less important entities to be sent less frequently and reducing bandwidth.
Distance-based net relevancy illustrates the results of this approach. In a game, for instance, objects that are further away from the player are less important, in proportion to the square of the distance. This means that objects twice as far away may be sent four times less frequently. In practice, at Hadean, we found this gave a 97.5% reduction in bandwidth while the gameplay experience remained unchanged. The exact benefit depends on the game, and the exact implementation may differ: a 1/x² priority function may be less suitable for land-based games or for alternative simulation domains (life sciences, for example). This is why we decided to make our net relevancy implementation easily customisable by the user. A core belief of Hadean is that we should provide great defaults, but be completely customisable.
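A sketch of the idea (the real priority function is customisable, as the text above notes): priority falls off with the square of distance, and the send interval grows in proportion, so an entity twice as far away is sent roughly a quarter as often.

```cpp
// Illustrative distance-based net relevancy; the real priority function is
// user-customisable and may differ per game or simulation domain.
#include <algorithm>

// Priority proportional to 1/d^2, clamped near zero distance.
double priority(double distance) {
    double d = std::max(distance, 1.0);
    return 1.0 / (d * d);
}

// Convert priority into a send interval in ticks: doubling the distance
// quadruples the interval, i.e. the entity is sent four times less often.
// With the defaults, send_interval_ticks(2.0) == 4 and send_interval_ticks(4.0) == 16.
int send_interval_ticks(double distance, int base_interval = 1, int max_interval = 64) {
    double ticks = base_interval / priority(distance);
    return static_cast<int>(std::clamp(ticks, double(base_interval), double(max_interval)));
}
```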
Furthermore, it is possible to apply compression to the output stream to reduce outgoing bandwidth to the client - users can apply any custom protocol they wish based on their type of simulation. Muxer uses radii to manage update frequency per entity, but this is completely customisable based on the customer’s interest management requirements. Typically users may want to use a mix of distance and the importance of a given entity to define the update relevance of the entity to a player.
Although all the implementations we are currently using are strongly tied to C++, it is possible for a Muxer user to interop with the C++ library from a suitable alternative language, or even to write an interest management implementation entirely from scratch if they wish.
Muxer could, for instance, be customised with frustum-based net relevancy to prune which objects are sent to the client. In a game, for example, it might not send objects behind the player, or send them less frequently, in case the player turns around quickly. This can help mitigate issues such as wallhacks or data being sent that reveals information to the client it shouldn't. It can also help when the simulation data is enormous, but you only want to inspect or visualise a small section at a time. Some examples of this are a biological cell made of proteins and molecules, or our 10,000 player EVE: Aether Wars battles, where the whole world state was simply too large to fit into the memory and bandwidth of any single client.
The number of clients Muxer can handle is constrained only by outgoing bandwidth and CPU usage. More complex net relevancy typically lowers bandwidth, but supports fewer connections per Muxer node due to the higher CPU usage. In practice, the choice of net relevancy trade-offs that leads to an acceptable user experience will be highly specific to each application.
The Muxer Client
The Muxer Client provides a deep integration into the client engine. The plugin replicates data (for example, Aether Engine entities) directly into the client as actors, removing the need to write replication code manually.
Instrumentation
Every organisation uses a plethora of tools for gathering and inspecting performance data. The Aether Engine data manager exposes a number of APIs that can easily be connected to analytics tools.
Extensibility
Not every simulation is the same, and not every simulation has the same needs (e.g. simulating wind forces on a turbine requires a different physics engine than simulating an NPC in an MMO). The Plugin Manager seamlessly swaps subsystems such as physics engines, pathfinding libraries and persistence backends such as databases.
Use Cases / Conclusions
Hadean’s technology can be applied to a number of different verticals, solving the problem of running large-scale, distributed simulations. Application areas include:
Gaming
Today, game engines are constrained by the processing power of a single machine or server, limiting the aspirations of designers and denying players a truly immersive experience. It has forced us to experience games within a seemingly immutable set of confines, including finite artificial intelligence, overly simplified physics and limited player counts.
Hadean inverts these traditional constraints, providing the power and scalability to create games of immeasurable complexity and detail. Vast immersive worlds can be brought to life by high-fidelity landscapes, characters and cities, underpinned by true-to-life physics that directly impacts gameplay. At the same time, these game worlds can become social platforms opened up to hundreds of thousands of users.
Virtual Events
The global shift to remote experiences has accelerated the need to create and deliver world class virtual events. These all require a vast number of connections, often across distributed geographies. At the same time, users and brands alike are looking for immersive virtual experiences that take us beyond current video conferencing capabilities.
Hadean provides a platform that enables virtual events with a vast number of connections and unprecedented performance at scale. The automatic allocation and deallocation of resources enables new levels of scale and fidelity and eliminates the large upfront cost of the server backend for one-off or seasonal events, making large virtual worlds and events viable.
Moreover, Hadean allows large virtual events and conferences numbering in the hundreds of thousands of users, eliminating design constraints around users per room and enabling rich social interactions.
Synthetic Environments
Traditionally, the computational complexity arising from entity density, structural changes or an influx of data overwhelms a synthetic environment. The underlying infrastructure is unable to cope, and pre-defined limits mean that such a simulation can never grow beyond a certain size.
Hadean, however, can incorporate any number of data types and modelling techniques to produce a singular coherent view of the real world. It dynamically scales at run time and removes the need for complex capacity planning and infrastructure engineering, allowing for arbitrarily detailed synthetic environments to be simulated.
Decision Support
In the light of events such as the global pandemic, healthcare, supply chains and operational logistics face many challenges in making informed decisions quickly through sophisticated simulations that model reality. Among these challenges are the complexity of configuring the underlying infrastructure as simulations grow to replicate the world around us, and the difficulty of tweaking parameters and editing algorithms in an easy, iterative manner to elicit impactful insight swiftly.
Parallelisation of simulations using the Hadean platform can significantly speed up the rate of analysis to produce reliable predictions and, ultimately, better informed decisions.
Finance
Hadean’s data processing and analytics capabilities can be applied to financial forecasting and modelling. It has already been used to build financial Monte Carlo simulations, lowering the barrier to distributed computing by making it simpler and more reliable.