...
This document is directed at intermediate-to-advanced developers, particularly web application engineers, DevOps specialists and site reliability engineers, and product owners. It seeks to fill clients’ and users’ need for a practical demonstration of a complex cloud native system. The primary intention of this document is to review cloud native design principles and give a structural overview of Marionette Software.
But first, a quick word about Marionette Software and its practical application as a foundation for modern financial service businesses.
...
Marionette is a comprehensive launchpad software application meant to serve as the foundation of various financial services. While some of these use cases are configurable turnkey options, Marionette is developed to cover a wide range of possible FinTech applications. Here are a few examples of Marionette’s current and possible applications:
...
Digital Asset Wallet Software
Centralized Swap Platform for Fiat & Digital Assets
Centralized Derivatives Exchange & Order Book Trading
Tokenization & Investment Platform
Crypto & Fiat Donation Platform
Possible Applications:
Although Marionette already supports a wide range of existing FinTech applications, it is developed to handle a lot more. With some custom development, Marionette can support the following business models:
P2P Trading Marketplace
Crypto & Fiat Payment Processing
Donation Processing
Crypto & Fiat Escrow Service
NFT Minting & Trading
Neo Bank Software
& more
Marionette is Developed to Integrate 3rd Party Services
Marionette’s architecture allows the software to be compatible and integration friendly with 3rd Party DeFi, CeFi and traditional financial services to further enhance business capabilities and features available to the end-user. Along with FinTech, Marionette is compatible with OpsTech to assist you with compliance and regulatory requirements. Here are some examples:
Centralized Derivative Exchanges (ie. Binance, Kraken, WhiteBit)
Decentralized Derivative Exchanges (ie. SushiSwap, Pancake, NinjaSwap)
Centralized Liquidity Providers & Market Makers (ie. WhiteBit)
On/Off Ramp for Fiat to Cryptocurrency (ie. Ramp)
Payment Gateways (ie. Stripe)
Centralized & Decentralized Staking
Traditional Banks
Neo-Banks & Digital Banks
Node Management (ie. Chainstack)
Custody Management (ie. Fireblocks)
KYC Services (ie. SumSub)
AML/KYT (ie. Chainalysis)
Email & SMS (ie. Twilio)
& more
Regardless of use case or integrations, Marionette is architected to support your custom business requirements. From a single use case to a full enterprise financial service that includes all of the possibilities defined above, Marionette is the ultimate solution for your business application today. The remainder of this document explains why this is true and what makes Marionette the #1 industry choice in 2023 and moving into the future.
Layered Architecture For Marionette Stack
Marionette FinTech Software is composed of a layered architecture and consists of the following components:
...
If you’re relatively new to microservices, you’ll definitely want to read on. Even if you’re somewhat comfortable with microservices, you might want to skim this chapter: there will be a gem or two in here for you. If you’re a seasoned veteran of microservice development, you can go ahead and move on to the next section.
Microservices via APIs in Marionette Software
The most challenging aspect of working with APIs is ensuring that both the API client and the API server follow the API specification. Here we explain the foundational patterns and principles for building Marionette microservices and driving their integrations with APIs. It is important to understand that an API is just a layer on top of an application, and that there are different types of interfaces. Looking past these challenges, the advantages of microservices are many and varied; many of these benefits can be laid at the door of any distributed system. Microservices, however, tend to achieve these benefits to a greater degree, primarily because they take a more opinionated stance on how service boundaries are defined. By combining the concepts of information hiding and domain-driven design with the power of distributed systems, microservices help us deliver significant gains over other forms of distributed architectures. Microservices may well give us the option for each microservice to be written in a different programming language, to run on a different runtime, or to use a different database - but these are options only.
Marionette’s microservices collaborate through APIs; below we review how we apply GraphQL to organize the APIs for these microservices.
...
where necessary.
Understanding that Marionette’s microservices interact through APIs, let’s review how we organize these APIs using GraphQL.
What is GraphQL?
Now that we have defined what an API is, let’s review the features that define web APIs. A web API is an API that uses the Hypertext Transfer Protocol (HTTP) to transport data. Web APIs are implemented using technologies such as SOAP, REST, GraphQL, gRPC, and others.
When a resource is represented by a large payload, fetching it from the server translates into a large amount of data transfer. With the emergence of API clients running on mobile devices with restricted network access and limited storage and memory capacity, exchanging large payloads often results in unreliable communication.
In 2012, Facebook was acutely aware of these problems, and it developed a new technology that allows API clients to run granular data queries against the server. Facebook released this technology as open source in 2015 under the name GraphQL.
GraphQL is a query language for APIs. Today, it is one of the most popular protocols for building web APIs, and a suitable choice both for driving integrations between microservices and for building integrations with frontend applications. GraphQL gives API consumers full control over the data they want to fetch from the server and how they want to fetch it. Instead of fetching full representations of resources, GraphQL allows us to fetch one or more properties of a resource, such as the status of an order. With GraphQL, we can also model the relationships between different objects, which allows us to retrieve, in a single request, the properties of various resources from the server, such as an order’s details and related objects. In contrast, with other types of APIs, such as REST, you get the full list of details for each object. Therefore, whenever it’s important to give the client full control over how it fetches data from the server, GraphQL is a great choice.
For example, the trading service owns data about trades as well as their orders. Each trade and order contains a rich list of properties describing its features. However, when a client requests a list of trades, it is most likely interested in fetching only a few details about each trade. The client (frontend) may also want to traverse the relationships between trades, orders, and other objects owned by the trading service. For these reasons, GraphQL is an excellent choice for building the service API. As we describe the specification for the trading API and others, you’ll learn about GraphQL’s scalar types, the design of custom object types, and queries and mutations.
Just as we can use SQL to define schemas for our database tables, we can use GraphQL to write specifications that describe the type of data that can be queried from our servers. A GraphQL API specification is called a schema, and it’s written in a standard called the Schema Definition Language (SDL). As an alternative to REST, GraphQL lets developers construct requests that pull data from multiple data sources in a single API call, reducing network calls and bandwidth, which in turn saves battery life on client devices and CPU cycles consumed by backend applications.
Why GraphQL?
To avoid maintaining multiple versions of our REST API.
Ask for what we need: clients can request only the fields they need, with no platform-specific handling on the server side.
To avoid multiple API calls for related data: GraphQL allows us to get related data in a single request.
To speed up application performance.
These are some of the reasons that make GraphQL an excellent choice for building the service API for Marionette. As we explain the specification for the trading APIs, we will also review scalar types of GraphQL, design of custom object types, as well as queries and mutations. (If you are just starting out with GraphQL, Apollo Odyssey has some great interactive tutorials to help you.)
GraphQL for Managing Marionette’s APIs
GraphQL is a query language for APIs. It empowers Marionette with ultimate control over defining the exact API data of interest and how to fetch this data from the server, instead of fetching full representations of resources. For example, GraphQL allows us to fetch one or more properties individually from multiple resources rather than the full scope of data associated with those resources.
This allows Marionette to model the relationships between multiple (or single) resources and retrieve just the defined individual properties from those resources, all in a single request to the server. With REST APIs, by comparison, you get the full list of properties for each resource. Considering performance and the business importance of minimizing the amount of data fetched from the server, GraphQL is a clear choice for Marionette, but let’s take another look.
For example: the trading service owns data about (a) trades, (b) each side of the order, plus (c) a rich list of properties for a group of associated trades. However, when the end-user requests their trading history in the application, there’s no reason to fetch or display the extensive list of details for each individual trade. The most logical practice is to fetch only the data that will be relayed to the frontend UI and visible to the end-user. Further, GraphQL offers the ability to traverse the relationships between trades, orders, and other objects, and to do so with minimal server requests.
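As a client-side illustration of such a selective query, here is a minimal Python sketch that builds the JSON body of a GraphQL request. The field names (`trades`, `orders`, `executedAt`, and so on) are our assumptions for illustration, not Marionette’s actual schema:

```python
import json

# Hypothetical GraphQL query (illustrative field names, not Marionette's
# actual schema): fetch only the fields the frontend displays, and
# traverse the trade -> orders relationship in a single request.
TRADE_HISTORY_QUERY = """
query TradeHistory($limit: Int!) {
  trades(limit: $limit) {
    id
    pair
    price
    executedAt
    orders {
      id
      side
      status
    }
  }
}
"""

def build_payload(limit: int) -> str:
    """Build the JSON body of a GraphQL HTTP POST request."""
    return json.dumps({
        "query": TRADE_HISTORY_QUERY,
        "variables": {"limit": limit},
    })

print(build_payload(20))
```

The key point is that the client names every field it wants; properties it does not list are never serialized or sent over the wire, in contrast to a REST endpoint returning the full resource representation.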
Further, just as we can use SQL to define schemas for our database tables, Marionette uses GraphQL to write specifications that describe the type of data queried from the servers. A GraphQL API specification is called a schema, and it’s written in a standard called Schema Definition Language (SDL).
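To make the schema idea concrete, here is a minimal SDL sketch for a trading-style API. The type and field names are illustrative assumptions on our part, not Marionette’s actual schema:

```graphql
# Illustrative SDL sketch (not Marionette's actual schema).
type Trade {
  id: ID!
  pair: String!
  price: Float!
  executedAt: String!
  orders: [Order!]!   # relationship traversable in one query
}

type Order {
  id: ID!
  side: String!
  status: String!
}

type Query {
  trades(limit: Int!): [Trade!]!
}
```

Like an SQL schema for tables, this specification defines exactly which types, fields, and relationships clients may query, and the server validates every incoming query against it.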
...
Containers
We run each microservice instance in isolation. This ensures that issues in one microservice can’t affect another microservice - for example, by gobbling up all the CPU. This section gives a brief introduction to containers: what they are, how they are used in Marionette, whether they satisfy Marionette’s current needs, and what we will do when those needs have to scale.
Containers have become a dominant concept in server-side software deployment and for many are the de facto choice for packaging and running microservice architectures. The container concept, popularized by Docker and often allied with a supporting container orchestration platform like Kubernetes, has become a popular choice for running microservice architectures at scale.
Marionette uses Docker to run each microservice instance in isolation. This ensures that issues in one microservice can’t affect another by consuming all the CPU. Virtualization is one way to create isolated execution environments on existing hardware, but normal virtualization techniques can be quite heavyweight when we consider the size of Marionette’s microservices. Containers, on the other hand, provide a much more lightweight way to provision isolated execution environments for service instances. This results in faster spin-up times for new container instances and proves to be a much more cost-efficient approach for many architectures, including Marionette.
A microservice instance runs as a separate container on a virtual or physical machine. That container runtime may be managed by a container orchestration tool like Kubernetes.
Containers as a concept work wonderfully well for microservices, and Docker made containers significantly more viable as a concept. We get our isolation but at a manageable cost. We also hide underlying technology, allowing us to mix different tech stacks. When it comes to implementing concepts like desired state management, though, we’ll need something like Kubernetes to handle it for us.
After we began playing around with containers, we realized that we needed something to allow us to manage them across lots of underlying machines. Container orchestration platforms like Kubernetes do exactly that, allowing us to distribute container instances in such a way as to provide the robustness and throughput our service needs, while making efficient use of the underlying machines. The work in this direction is being done in full conformity with the adjusted schedule, but we don’t feel the need to rush to adopt Kubernetes. It absolutely offers significant advantages over more traditional deployment techniques, but its adoption is difficult to justify when we have only a few microservices. As we gradually increase the complexity of the Marionette microservice architecture, we will introduce new technology as we need it; we don’t need a Kubernetes cluster while we have only about three dozen services. Once the overhead of managing deployment begins to become a significant headache, we will start using Kubernetes. If we do end up doing that, we will do our best to ensure that someone else runs the Kubernetes cluster for us, perhaps by making use of a managed service on a public cloud provider, since running our own Kubernetes cluster can be a significant amount of work. A further benefit of smaller services is that we can scale just those services that need scaling, allowing us to run other parts of the system on smaller, less powerful hardware. We can make a change to a single service and deploy it independently of the rest of the system, which lets us get our code deployed more quickly. If a problem does occur, it can be quickly isolated to an individual service, making fast rollback easy to achieve. It also means we can get new functionality out to customers more quickly.
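If and when Kubernetes is adopted, the desired-state management mentioned above would look something like the following Deployment manifest. This is a sketch only, under assumed names (`trading-service`, the image reference, the replica count); it is not a configuration Marionette currently runs:

```yaml
# Illustrative sketch only - Marionette has not yet adopted Kubernetes.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: trading-service          # hypothetical microservice name
spec:
  replicas: 3                    # desired state: keep 3 instances running
  selector:
    matchLabels:
      app: trading-service
  template:
    metadata:
      labels:
        app: trading-service
    spec:
      containers:
        - name: trading-service
          image: registry.example.com/trading-service:1.2.3  # hypothetical image
          ports:
            - containerPort: 8080
```

The orchestrator continuously compares the declared desired state (three replicas) with reality and restarts or reschedules containers until they match, which is exactly the capability plain Docker on a single machine does not provide.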
Public cloud providers - Google Cloud Platform (GCP), Amazon Web Services (AWS), Digital Ocean, and others - offer a huge array of managed services and deployment options for managing Marionette. As our microservice architecture grows, more and more work will be pushed into the operational space. Public cloud providers offer a host of managed services, from managed database instances and Kubernetes clusters to message brokers and distributed filesystems. By making use of these managed services, we offload a large amount of this work to a third party that is arguably better able to deal with these tasks.
By deploying one service per container, as in the figure below, we get a degree of isolation from other containers and can do so much more cost-effectively than would be possible if we ran each service in its own VM.
...
You should view containers as a great way of isolating the execution of trusted software. The Docker toolchain handles much of the work around containers: Docker manages container provisioning, handles some of the networking problems for us, and even provides its own registry concept that allows you to store Docker images. Before Docker, we didn’t have the concept of an “image” for containers - this aspect, along with a much nicer set of tools for working with containers, helped containers become much easier to use.
The Docker image abstraction is a useful one for us, as the details of how our microservice is implemented are hidden. We have the builds for our microservice create a Docker image as a build artifact and store the image in the Docker registry, and away we go. When you launch an instance of a Docker image, you have a generic set of tools for managing that instance, no matter the underlying technology used: microservices written in NodeJS, Go, or anything else can all be treated the same.
When Docker first emerged, its scope was limited to managing containers on one machine. This was of limited use - what if we wanted to manage containers across multiple machines? That is essential if we want to maintain system health when a machine dies on us, or if we just want to run enough containers to handle the system’s load. Docker came out with two totally different products of its own to solve this problem, confusingly called “Docker Swarm” and “Docker Swarm Mode”. Really, though, when it comes to managing lots of containers across many machines, Kubernetes is king, even if we might use the Docker toolchain for building and managing individual containers.
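The image-as-build-artifact workflow described above can be sketched with a minimal Dockerfile. The base image, file paths, and entry point here are assumptions for illustration, not Marionette’s actual build configuration:

```dockerfile
# Illustrative sketch, not Marionette's actual build configuration.
FROM node:18-alpine

WORKDIR /app

# Install dependencies first so this layer is cached between builds.
COPY package*.json ./
RUN npm ci --omit=dev

# Copy the microservice source into the image.
COPY . .

EXPOSE 8080
CMD ["node", "server.js"]
```

A CI build would produce this image as its artifact and push it to a registry (for example, `docker build -t <registry>/<service>:<tag> .` followed by `docker push`), after which any machine or orchestrator can launch identical instances of the service regardless of the technology inside the image.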
...
...
...
Testing
The question is how to effectively and efficiently test our code’s functionality when it spans a distributed system. Unit testing is a methodology where units of code are tested in isolation from the rest of the application. A unit test might test a particular function, object, class, or module. But unit tests don’t test whether or not units work together when they’re composed to form a whole application. For that, we use a set of full end-to-end functional tests of the whole running application (aka system testing). Eventually, we need to launch Marionette and see what happens when all the parts are put together.
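To make the distinction concrete, here is a minimal sketch of a unit test for a hypothetical pure function, using Python’s standard `unittest` module. The function and its fee logic are our illustrative assumptions, not Marionette’s actual code:

```python
import unittest

def order_total(price: float, quantity: float, fee_rate: float) -> float:
    # Hypothetical unit under test: total cost of an order including a
    # proportional fee (illustrative, not Marionette's actual logic).
    return price * quantity * (1 + fee_rate)

class OrderTotalTest(unittest.TestCase):
    # A unit test exercises one function in isolation: no network,
    # no database, no other services involved.
    def test_total_includes_fee(self):
        self.assertAlmostEqual(order_total(100.0, 2.0, 0.01), 202.0)

    def test_zero_quantity(self):
        self.assertAlmostEqual(order_total(100.0, 0.0, 0.01), 0.0)

# Run with: python -m unittest <module_name>
```

Such tests verify one unit in isolation and run fast; whether the composed services behave correctly together still requires the full end-to-end system tests described above.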
...