...

For example: the Trading service owns data about (a) trades, (b) each side of the order, plus (c) a rich list of properties for a group of associated trades. However, when the end-user requests to see their trading history in their application, there's no reason to display or fetch the extensive list of details related to each individual trade. The most logical practice is to fetch only the data that will be relayed to the Frontend UI and visible to the end-user. Further, GraphQL offers the ability to traverse the relationships between trades, orders, and other objects, and to do so with minimal server requests.

Further, just as we can use SQL to define schemas for our database tables, Marionette uses GraphQL to write specifications that describe the type of data that can be queried from the servers. A GraphQL API specification is called a schema, and it's written in a standard called Schema Definition Language (SDL).
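To make this concrete, below is a minimal SDL sketch of what a trading schema might look like. The type and field names (Trade, Order, tradeHistory, and so on) are illustrative assumptions, not the actual Marionette schema.

    # Illustrative SDL sketch - type and field names are assumptions, not the real schema
    type Trade {
      id: ID!
      executedAt: String!
      price: Float!
      quantity: Int!
      order: Order!          # traverse from a trade to the order that produced it
    }

    type Order {
      id: ID!
      side: OrderSide!
      trades: [Trade!]!      # ...and back to every trade that filled the order
    }

    enum OrderSide {
      BUY
      SELL
    }

    type Query {
      tradeHistory(accountId: ID!, limit: Int = 50): [Trade!]!
    }

With a schema like this, the Frontend UI asks for exactly the fields it displays - and can hop across the trade-to-order relationship - in a single request:

    query {
      tradeHistory(accountId: "acct-123", limit: 20) {
        executedAt
        price
        order {
          side
        }
      }
    }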

...

Containers

Containers have become a dominant concept in server-side software deployment and for many are the de facto choice for packaging and running microservice architectures. The container concept, popularized by Docker and allied with a supporting container orchestration platform like Kubernetes, has become many people's go-to choice for running microservice architectures at scale.

Marionette runs each microservice instance in isolation. This ensures that issues in one microservice can't affect another microservice - for example, by gobbling up all the CPU. Virtualization is one way to create isolated execution environments on existing hardware, but normal virtualization techniques can be quite heavy when we consider the size of Marionette's microservices. Containers, on the other hand, provide a much more lightweight approach to provisioning isolated execution environments for service instances. This results in faster spin-up times for new container instances and proves to be a much more cost-efficient approach for many architectures, including Marionette.

A microservice instance runs as a separate container on a virtual or physical machine. That container runtime may be managed by a container orchestration tool like Kubernetes.

Containers as a concept work wonderfully well for microservices, and Docker made them significantly more viable in practice. We get our isolation, but at a manageable cost. We also hide the underlying technology, allowing us to mix different tech stacks. When it comes to implementing concepts like desired state management, though, we'll need something like Kubernetes to handle it for us.

Hands-on experience with containers showed that we need an efficient way to manage these containers across multiple underlying machines. Container orchestration platforms like Kubernetes do exactly that, allowing us to distribute container instances in such a way as to provide the robustness and throughput our service needs, while making efficient use of the underlying machines. Work in this direction is proceeding in line with the adjusted schedule, but we don't feel the need to rush to adopt Kubernetes. It absolutely offers significant advantages over more traditional deployment techniques, but its adoption is difficult to justify if we have only a few microservices. As we gradually increase the complexity of the Marionette microservice architecture, we will introduce new technology as we need it. We don't need a Kubernetes cluster when we have about three dozen services; once the overhead of managing deployment becomes a significant headache, we will start using Kubernetes. If we do end up doing that, we will do our best to ensure that someone else runs the Kubernetes cluster for us, perhaps by making use of a managed service on a public cloud provider - running our own Kubernetes cluster can be a significant amount of work.

Smaller services also bring deployment benefits. We can scale just those services that need scaling, allowing us to run other parts of the system on smaller, less powerful hardware. We can make a change to a single service and deploy it independently of the rest of the system, which gets our code deployed more quickly. If a problem does occur, it can be quickly isolated to an individual service, making fast rollback easy to achieve. It also means we can get new functionality out to customers more quickly.
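When we do adopt an orchestrator, desired state management would look roughly like the sketch below: a Kubernetes Deployment that declares how many replicas of a service should be running. The image name, labels, replica count, and port are illustrative assumptions, not real Marionette configuration.

    # Illustrative Kubernetes Deployment - names and values are assumptions
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: trading-service
    spec:
      replicas: 3                    # desired state: keep three instances running
      selector:
        matchLabels:
          app: trading-service
      template:
        metadata:
          labels:
            app: trading-service
        spec:
          containers:
            - name: trading-service
              image: registry.example.com/marionette/trading-service:1.0.0
              ports:
                - containerPort: 8080

Kubernetes continuously reconciles the running state against this declaration: if a container or a machine dies, replacement instances are scheduled on the remaining machines without manual intervention.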

Public cloud providers - more specifically the main three: Google Cloud, Microsoft Azure, and Amazon Web Services (AWS) - offer a huge array of managed services and deployment options for running Marionette. As our microservice architecture grows, more and more work will be pushed into the operational space. These providers offer a host of managed services, from managed database instances and Kubernetes clusters to message brokers and distributed filesystems. By making use of these managed services, we offload a large amount of this work to a third party that is arguably better able to deal with those tasks.


By deploying one service per container, as in the figure, we get a degree of isolation from other containers and can do so much more cost-effectively than would be possible if we ran each service in its own VM.
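As an illustration, a one-service-per-container image might be built from a Dockerfile along these lines; the Go service, its source path, and the port are hypothetical.

    # Illustrative multi-stage Dockerfile for a hypothetical Go microservice
    FROM golang:1.22 AS build
    WORKDIR /src
    COPY . .
    RUN CGO_ENABLED=0 go build -o /bin/trading-service ./cmd/trading-service

    # Small runtime image: the container carries only the service binary
    FROM gcr.io/distroless/static-debian12
    COPY --from=build /bin/trading-service /trading-service
    EXPOSE 8080
    ENTRYPOINT ["/trading-service"]

The multi-stage build keeps the final image small: the heavyweight build toolchain never ships, so the container holds little more than the service itself.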

...

You should view containers as a great way of isolating the execution of trusted software. The Docker toolchain handles much of the work around containers. Docker manages container provisioning, handles some of the networking problems for us, and even provides its own registry concept that allows you to store and distribute Docker images. Before Docker, we didn't have the concept of an "image" for containers - this aspect, along with a much nicer set of tools for working with containers, helped containers become much easier to use.

The Docker image abstraction is a useful one for us, as the details of how our microservice is implemented are hidden. We have the builds for our microservice create a Docker image as a build artifact and store the image in the Docker registry, and away we go. When you launch an instance of a Docker image, you have a generic set of tools for managing that instance, no matter the underlying technology used - microservices written in Node.js, Go, or anything else can all be treated the same.

When Docker first emerged, its scope was limited to managing containers on one machine. This was of limited use - what if we wanted to manage containers across multiple machines? This is essential if we want to maintain system health when a machine dies on us, or if we just want to run enough containers to handle the system's load. Docker came out with two totally different products of its own to solve this problem, confusingly called "Docker Swarm" and "Docker Swarm Mode". Really, though, when it comes to managing lots of containers across many machines, Kubernetes is king, even if we might still use the Docker toolchain for building and managing individual containers.
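The image-as-build-artifact flow described above amounts to a few standard Docker commands; the registry host and version tag below are placeholders, not our actual registry.

    # Build the image from the service's Dockerfile, then push it to the registry
    # (registry.example.com and the version tag are placeholders)
    docker build -t registry.example.com/marionette/trading-service:1.0.0 .
    docker push registry.example.com/marionette/trading-service:1.0.0

    # Any microservice image is then launched and managed the same way,
    # regardless of the language it was written in:
    docker run -d -p 8080:8080 registry.example.com/marionette/trading-service:1.0.0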

...


Testing

The question is how to effectively and efficiently test our code's functionality when it spans a distributed system. Unit testing is a methodology where units of code are tested in isolation from the rest of the application. A unit test might test a particular function, object, class, or module. But unit tests don't tell us whether those units work together when they're composed to form a whole application. For that, we use a set of full end-to-end functional tests of the whole running application (aka system testing). Eventually, we need to launch Marionette and see what happens when all the parts are put together.
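At the unit level, a test might look like the Go sketch below; the package and the Notional function are hypothetical stand-ins for real Marionette code, shown only to illustrate testing a unit in isolation.

    // trade_test.go - a minimal unit-test sketch in Go.
    // The package and the Notional function are hypothetical, not real Marionette code.
    package trade

    import "testing"

    // Notional computes the cash value of a fill: price times quantity.
    func Notional(price float64, quantity int) float64 {
        return price * float64(quantity)
    }

    // TestNotional exercises the function in isolation from the rest of the system.
    func TestNotional(t *testing.T) {
        got := Notional(101.5, 10)
        want := 1015.0
        if got != want {
            t.Fatalf("Notional(101.5, 10) = %v, want %v", got, want)
        }
    }

A test like this runs in milliseconds and needs no running services; it's the end-to-end layer that finally proves the deployed services talk to each other correctly.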

...