...

This document is directed at intermediate-to-advanced developers, particularly web application engineers, DevOps specialists, site reliability engineers, and product owners. It seeks to give clients and users a practical walkthrough of a complex cloud native architecture. Its primary intention is to review cloud native design principles and provide a structural overview of Marionette Software.
But first, a quick word about Marionette Software and its practical application as the foundation for most modern financial service businesses.

...

Marionette is a comprehensive launchpad software application meant to serve as the foundation for a variety of financial services. While some of these use cases are configurable turnkey options, Marionette is developed to cover a wide range of FinTech use cases. Here are a few examples of Marionette’s current and possible applications:

...

  • Digital Asset Wallet Software 

  • Centralized Swap Platform for Fiat & Digital Assets

  • Centralized Derivatives Exchange & Order Book Trading

  • Tokenization & Investment Platform

  • Crypto & Fiat Donation Platform

Compatible 3rd Party Services:

Marionette’s architecture allows the software to be compatible and integration friendly with 3rd Party DeFi, CeFi and traditional financial services to further enhance business capabilities and features available to the end-user. Along with FinTech, Marionette is compatible with OpsTech to assist you with compliance and regulatory requirements. Here are some examples:

...

Centralized Derivative Exchanges (e.g., Binance, Kraken, WhiteBit)

...

Decentralized Derivative Exchanges (e.g., SushiSwap, PancakeSwap, NinjaSwap)

...

  • On/Off Ramp for Fiat to Cryptocurrency (e.g., Ramp)

  • Payment Gateways (e.g., Stripe)

  • Node Management (e.g., Chainstack)

  • KYC Services (e.g., SumSub)

  • AML/KYT (e.g., Chainalysis)

  • Email & SMS (e.g., Twilio)

  • & more

Possible Applications:

Although Marionette already supports a wide range of existing FinTech applications, it is developed to handle a lot more. With some custom development, Marionette can support the following business models:

  • P2P Trading Marketplace

  • Crypto & Fiat Payment Processing

  • Donation Processing

  • Crypto & Fiat Escrow Service

  • NFT Minting & Trading

  • Centralized & Decentralized Staking

  • Traditional Banks

  • Neo-Banks & Digital Banks

  • & more

Regardless of use case or integrations, Marionette is architected to support your custom business requirements. From a single use case to a full enterprise financial service that includes all of the possibilities defined above, Marionette is the ultimate solution for your business application today. The remainder of this document explains why this is true and what makes Marionette the #1 industry choice in 2023 and beyond.

Layered Architecture For Marionette Stack

Marionette FinTech Software is composed of a layered architecture and consists of the following components:

...



As an alternative to REST, GraphQL lets developers construct requests that pull data from multiple data sources in a single API call. Reducing the number of network calls and the bandwidth used saves battery life on client devices and CPU cycles on the backend. Today, GraphQL is one of the most popular protocols for building web APIs, and it is a suitable choice both for driving integrations between microservices and for building integrations with frontend applications.

Why GraphQL?

  • To avoid maintaining multiple versions of our REST API.

  • Ask for what we need: clients request only the fields they need, so no platform-specific handling is required on the server side.

  • Avoid multiple API calls for related data: GraphQL lets us fetch related data in a single request.

  • Improve application performance.

These are some of the reasons that make GraphQL an excellent choice for building the service API for Marionette. As we explain the specification for the trading APIs, we will also review GraphQL’s scalar types, the design of custom object types, and queries and mutations. (If you are just starting out with GraphQL, Apollo Odyssey has some great interactive tutorials to help you.)

GraphQL for Managing Marionette’s APIs

GraphQL is a query language for APIs. It gives Marionette full control over defining exactly which API data is of interest and how to fetch it from the server, instead of fetching full representations of resources. For example, GraphQL allows us to fetch one or more individual properties from multiple resources rather than the full scope of data associated with those resources.

This allows Marionette to model the relationships between resources and retrieve just the defined individual properties from one or many resources in a single request to the server. With REST APIs, by comparison, you get the full list of properties for each resource. Given the performance and business importance of minimizing the amount of data fetched from the server, GraphQL is a clear choice for Marionette, but let’s take another look.

For example: the trading service owns data about (a) trades, (b) each side of an order, and (c) a rich list of properties for a group of associated trades. However, when the end-user requests their trading history in the application, there is no reason to fetch or display the extensive list of details for each individual trade. The most logical practice is to fetch only the data that will be relayed to the frontend UI and visible to the end-user. Further, GraphQL offers the ability to traverse the relationships between trades, orders, and other objects with minimal server requests.
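As an illustration, here is a minimal sketch of such a trading-history request. The endpoint URL and the field names (trades, executedAt, price, amount, order, side, pair) are assumptions made for the example, not Marionette’s actual schema; the point is that only the fields the UI renders are requested, and the related order is traversed in the same call.

```typescript
// Hypothetical trading-history query: only the fields the UI renders are
// requested, and the related order is traversed in the same single request.
// Field names and the endpoint are illustrative, not Marionette's actual API.
const TRADING_HISTORY_QUERY = `
  query TradingHistory($accountId: ID!, $limit: Int!) {
    trades(accountId: $accountId, limit: $limit) {
      executedAt
      price
      amount
      order {
        side
        pair
      }
    }
  }
`;

// A plain HTTP POST is enough; no dedicated GraphQL client is required.
async function fetchTradingHistory(accountId: string): Promise<unknown> {
  const response = await fetch("https://api.example.com/graphql", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      query: TRADING_HISTORY_QUERY,
      variables: { accountId, limit: 20 },
    }),
  });
  const { data, errors } = await response.json();
  if (errors) throw new Error(errors[0].message);
  return data.trades;
}
```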

Further, just as we can use SQL to define schemas for our database tables, Marionette uses GraphQL to write specifications that describe the types of data that can be queried from the servers. A GraphQL API specification is called a schema, and it’s written in a standard called Schema Definition Language (SDL).
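For a concrete feel of SDL, here is a minimal sketch of what a trading-service schema could look like, wrapped in the reference graphql package so it can be executed as written. The type and field names (Trade, Order, placeOrder, and so on) are illustrative assumptions, not Marionette’s actual schema.

```typescript
// Minimal SDL sketch for a hypothetical trading service, compiled with the
// reference `graphql` package. Type and field names are illustrative only.
import { buildSchema } from "graphql";

export const schema = buildSchema(`
  "A single executed trade (a custom object type)."
  type Trade {
    id: ID!
    executedAt: String!   # built-in scalar types: ID, String, Int, Float, Boolean
    price: Float!
    amount: Float!
    order: Order!         # relationship to another object type
  }

  type Order {
    id: ID!
    side: String!
    pair: String!
    trades: [Trade!]!
  }

  "Read operations exposed by the service."
  type Query {
    trades(accountId: ID!, limit: Int!): [Trade!]!
  }

  "Write operations exposed by the service."
  type Mutation {
    placeOrder(pair: String!, side: String!, amount: Float!): Order!
  }
`);
```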

...

Containers

This section gives a brief introduction to containers and answers the following questions: What are containers? How are they used in Marionette? Do they satisfy Marionette’s current needs? And what will we do when those needs grow?

Containers have become a dominant concept in server-side software deployment, and for many they are the de facto choice for packaging and running microservice architectures. The container concept, popularized by Docker and often allied with a supporting container orchestration platform like Kubernetes, has become a popular choice for running microservice architectures at scale.

Marionette uses Docker to run each microservice instance in isolation. This ensures that issues in one microservice can’t affect another - for example, by consuming all the CPU. Virtualization is one way to create isolated execution environments on existing hardware, but normal virtualization techniques can be quite heavy when we consider the size of Marionette’s microservices. Containers, on the other hand, provide a much more lightweight way to provision isolated execution for service instances. This results in faster spin-up times for new container instances and is a much more cost-efficient approach for many architectures, including Marionette.

By deploying one service per container (see the image below), Marionette achieves a degree of isolation between containers and does so far more cost-effectively than would be possible if each service ran in its own VM.

...

The Docker image abstraction is very useful for Marionette because it hides the details of how each microservice is implemented. The Docker toolchain handles much of the work around containers and isolates the execution of trusted software. It manages container provisioning, handles some of the networking problems for us, and provides its own registry for storing Docker images.

Marionette’s builds create a Docker image as a build artifact and store it in a Docker registry. When launching an instance of that image, we get a generic set of tools for managing the instance regardless of the underlying technology - microservices written in NodeJS or Go can all be treated the same way.
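To make the one-service-per-container idea concrete, here is a minimal sketch of the kind of single-purpose process that would be packaged into such an image. The service name, port, /health route, and registry path are assumptions for the example, not part of Marionette itself.

```typescript
// Minimal sketch of a single microservice process that runs as one container
// instance. The port, /health route, and image name are illustrative only.
import { createServer } from "node:http";

const PORT = Number(process.env.PORT ?? 8080);

const server = createServer((req, res) => {
  // A health endpoint lets the container platform check instance liveness.
  if (req.url === "/health") {
    res.writeHead(200, { "Content-Type": "application/json" });
    res.end(JSON.stringify({ status: "ok" }));
    return;
  }
  res.writeHead(404);
  res.end();
});

// The CI build packages this process into a Docker image, for example with
// `docker build -t registry.example.com/trading-service:1.2.3 .`, pushes it
// to the registry, and the container runtime launches it from that image.
server.listen(PORT, () => {
  console.log(`service listening on :${PORT}`);
});
```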

To maintain optimal system health, Marionette manages containers across multiple machines, allowing a failed service to be restarted or additional containers to be run to handle system load - a job best handled by a container orchestration platform such as Kubernetes rather than Docker’s own Swarm offerings.

...

A microservice instance runs as a separate container on a virtual or physical machine, and that container runtime may be managed by a container orchestration tool like Kubernetes.

With smaller services, we can also scale just the services that need scaling, allowing us to run other parts of the system on smaller, less powerful hardware.

Containers work wonderfully well for microservices, and Docker made them significantly more viable as a concept. We get our isolation at a manageable cost. We also hide the underlying technology, allowing us to mix different tech stacks. When it comes to implementing concepts like desired state management, though, we’ll need something like Kubernetes to handle it for us.
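To make desired state management concrete, here is a hedged sketch of the desired state a Kubernetes Deployment declares, written as a TypeScript object (the same structure that is normally serialized to YAML or JSON). The service name, image path, and replica count are illustrative assumptions, not Marionette’s actual configuration.

```typescript
// Hedged sketch of the desired state for a hypothetical service. Kubernetes
// continuously reconciles the cluster's actual state toward this declaration,
// restarting or rescheduling containers as needed. Names are illustrative.
const tradingServiceDeployment = {
  apiVersion: "apps/v1",
  kind: "Deployment",
  metadata: { name: "trading-service" },
  spec: {
    replicas: 3, // "keep three instances of this container running"
    selector: { matchLabels: { app: "trading-service" } },
    template: {
      metadata: { labels: { app: "trading-service" } },
      spec: {
        containers: [
          {
            name: "trading-service",
            image: "registry.example.com/trading-service:1.2.3",
            ports: [{ containerPort: 8080 }],
          },
        ],
      },
    },
  },
};

// Kubernetes accepts JSON as well as YAML, so this object can be written out
// directly and applied to a cluster.
console.log(JSON.stringify(tradingServiceDeployment, null, 2));
```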

Hands-on experience with containers showed that we need an efficient way to manage them across multiple underlying machines. Container orchestration platforms like Kubernetes do exactly that, allowing us to distribute container instances so as to provide the robustness and throughput our services need while making efficient use of the underlying machines. Work in this direction is proceeding on the adjusted schedule. As we gradually increase the complexity of Marionette’s microservice architecture, we plan to introduce Kubernetes for container orchestration.

We don’t need a Kubernetes cluster while we have about three dozen services; once the overhead of managing deployment becomes a significant headache, we will start using Kubernetes. When we do, we will do our best to ensure that someone else runs the cluster for us, for example by using a managed service from a public cloud provider, since running our own Kubernetes cluster is a significant amount of work. In the meantime, we can make a change to a single service and deploy it independently of the rest of the system. This allows us to get our code deployed more quickly, and if a problem does occur, it can be quickly isolated to an individual service, making fast rollback easy to achieve.

Public cloud providers like GCP, AWS, and DigitalOcean offer an array of managed services and deployment options for managing Marionette. As our microservice architecture grows, more and more work is pushed into the operational space. Public cloud providers offer a host of managed services, from managed database instances and Kubernetes clusters to message brokers and distributed filesystems. By using these managed services, we offload a large amount of this work to a third party that is arguably better able to handle these tasks.

Testing

The question is how to effectively and efficiently test our code’s functionality when it spans a distributed system. Unit testing is a methodology where units of code are tested in isolation from the rest of the application. A unit test might test a particular function, object, class, or module. But unit tests don’t test whether or not units work together when they’re composed to form a whole application. For that, we use a set of full end-to-end functional tests of the whole running application (aka system testing). Eventually, we need to launch Marionette and see what happens when all the parts are put together.
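As a small illustration of the unit-testing half of this strategy, the sketch below tests a hypothetical pure helper in isolation, using Node’s built-in test runner. The calculateFee function is an assumption made for the example, not an actual Marionette module.

```typescript
// Hedged sketch of a unit test: `calculateFee` is a hypothetical pure helper,
// tested in isolation from the rest of the application.
import { strict as assert } from "node:assert";
import { test } from "node:test";

// Illustrative unit under test, not an actual Marionette module.
function calculateFee(amount: number, feeRateBps: number): number {
  return (amount * feeRateBps) / 10_000;
}

test("calculateFee applies the basis-point rate to the trade amount", () => {
  assert.equal(calculateFee(1_000, 25), 2.5); // 25 bps of 1,000 is 2.5
});

test("calculateFee returns zero for a zero-amount trade", () => {
  assert.equal(calculateFee(0, 25), 0);
});
```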

...