
“Where shall I begin, please your Majesty?” the White Rabbit asked.
“Begin at the beginning,” the King said gravely, “and go on till you come to the end: then stop.”
— Lewis Carroll, Alice's Adventures in Wonderland (1865)

This white paper is dedicated to Marionette and provides a comprehensive review of the technical details 'under the hood' of this FinTech software.

Who Should Read This?

This document is directed at intermediate-to-advanced developers and at web application, site reliability, and DevOps engineers. Its primary intention is to review the cloud-native design principles behind Marionette and to provide a structural overview of the software. But first, a quick word about Marionette Software and its practical application as a foundation for modern financial service businesses.

What is Marionette?

Marionette is a comprehensive launchpad software application meant to serve as the foundation for various financial services. While some of these use cases are available as configurable turnkey options, Marionette is developed to cover a wide range of possible applications. Here are a few examples of Marionette's current and possible applications:

Turnkey Use Cases: 

The following use cases are supported by the Marionette stack and can be deployed to production to satisfy end-user demand for any of the following financial services:

  • Digital Asset Wallet Software 

  • Centralized Swap Platform for Fiat & Digital Assets

  • Centralized Derivatives Exchange & Order Book Trading

  • Tokenization & Investment Platform

  • Crypto & Fiat Donation Platform

Possible Applications:

Although Marionette already supports a wide range of existing FinTech applications, it is designed to handle much more. With some custom development, Marionette can support the following business models:

  • P2P Trading Marketplace

  • Crypto & Fiat Payment Processing

  • Donation Processing

  • Crypto & Fiat Escrow Service

  • NFT Minting & Trading

  • On/Off Ramp for Fiat to Cryptocurrency

  • Neo Bank Software

  • & more

Marionette is Developed to Integrate 3rd Party Services

Marionette's architecture allows the software to integrate with 3rd-party DeFi, CeFi, and traditional financial services to further enhance business capabilities and the features available to the end-user. Along with FinTech, Marionette is compatible with OpsTech services that assist with compliance and regulatory requirements.

Regardless of use case or integrations, Marionette is architected to support your custom business requirements. From a single use case to an enterprise financial-services offering that includes all of the possibilities defined above, Marionette is a flexible foundation for your business application.

Layered Architecture For Marionette Stack

Marionette FinTech Software is built on a layered architecture that consists of the following components:

  1. Hardware Layer: The hardware layer provides the physical infrastructure, including computing and storage devices as well as the network equipment required to run the cloud services and applications.

  2. Operating System (OS) Layer: The operating system layer consists of a distribution based on Debian GNU/Linux. The OS layer is responsible for managing the hardware as virtualized resources. A number of support services run on top of the OS layer.

  3. Docker Layer: The Docker engine runs on the host operating system and includes the necessary binary packages and libraries for executing applications within containers.

  4. Middleware Layer: The middleware layer brings together the resources, providing an integrated and consistent view of the cloud services.

  5. Services Layer: The Services layer integrates useful services and applications. These services are logically grouped into currency exchange services, trading services and enterprise services.

  6. Front-end Layer: The front-end layer provides the different types of interfaces for interacting with the services' infrastructure, along with tools that assist in administering services and applications.

Marionette’s Microservice Architecture

Marionette is composed of multiple collaborating microservices and offers the freedom to use different technologies inside each one. This allows us to pick the right tool for each job, rather than selecting a one-size-fits-all approach that often ends up being the lowest common denominator. Microservices also allow Marionette to quickly adopt new technologies and evaluate how these advancements can be of benefit.

What is Microservice Architecture?

Microservices are an architectural style in which the components of a system are designed as standalone and independently deployable applications. This definition emphasizes the fact that microservices are applications that run independently of each other but collaborate to perform their mutually defined tasks. Martin Fowler describes the microservice architectural style as "an approach to developing a single application as a suite of small services, each running in its own process and communicating with lightweight mechanisms, often an HTTP resource API." This definition emphasizes the autonomy of the services by stating that they run in independent processes.

If you're relatively new to microservices, you'll definitely want to read on. Even if you're somewhat comfortable with microservices, you might want to skim this chapter: there will be a gem or two in here for you. If you're a seasoned veteran of microservice development, you can go ahead and move on to the next section. Or read on ironically and judge me.

The most challenging aspect of working with APIs is ensuring that both the API client and the API server follow the API specification; but looking past these challenges, the advantages of microservices are many and varied. Many of these benefits can be laid at the door of any distributed system. Microservices tend to achieve these benefits to a greater degree, primarily because they take a more opinionated stance on how service boundaries are defined. By combining the concepts of information hiding and domain-driven design with the power of distributed systems, microservices help us deliver significant gains over other forms of distributed architecture. Microservices also give us the option for each microservice to be written in a different programming language, to run on a different runtime, or to use a different database where necessary.

In what follows, we explain the foundational patterns and principles for building Marionette microservices and driving their integration through APIs. It is important to understand that an API is just a layer on top of an application, and that there are different types of interfaces. With the understanding that Marionette's microservices interact through APIs, let's review how we organize these APIs using GraphQL.

What is GraphQL?

Now that we have defined what an API is, let's review the features that define web APIs. A web API is an API that uses the Hypertext Transfer Protocol (HTTP) to transport data. Web APIs are implemented using technologies such as SOAP, REST, GraphQL, gRPC, and others. When a resource is represented by a large payload, fetching it from the server translates to a large amount of data transfer. With the emergence of API clients running on mobile devices with restricted network access and limited storage and memory capacity, exchanging large payloads often results in unreliable communication.

In 2012, Facebook, acutely aware of these problems, developed a new technology that allows API clients to run granular data queries on the server. Facebook released this technology as open source in 2015 under the name GraphQL.

GraphQL is a query language for APIs. Today, GraphQL is one of the most popular protocols for building web APIs. It’s a suitable choice for driving integrations between microservices and building integrations with frontend applications.

Why GraphQL?

  • To avoid maintaining multiple versions of our REST API.

  • Ask for what we need: clients can request only the fields they need, with no platform-specific handling required on the server side.

  • Avoid multiple API calls for related data: GraphQL allows us to fetch related data in a single request.

  • Improve application performance.

These are some of the reasons that make GraphQL an excellent choice for building the service API for Marionette. As we explain the specification for the trading APIs, we will also review GraphQL's scalar types, the design of custom object types, and queries and mutations.

GraphQL for Managing Marionette’s APIs

GraphQL is a query language for APIs. It gives Marionette ultimate control over defining exactly which API data is of interest and how to fetch this data from the server, instead of fetching full representations of resources. For example, GraphQL allows us to fetch one or more properties individually from multiple resources rather than the full scope of data associated with those resources.

This allows Marionette to model the relationships between one or more resources and retrieve just the defined individual properties from them, all in a single request to the server. With REST APIs, by comparison, you get the full list of properties for each resource. Considering the performance and business importance of minimizing the amount of data fetched from the server, GraphQL is a clear choice for Marionette, but let's take another look.

For example, the Trading service owns data about (a) trades, (b) each side of the order, plus (c) a rich list of properties for a group of associated trades. However, when the end-user requests to see their trading history in their application, there is no reason to display or fetch the extensive list of details related to each individual trade. The most logical practice is to fetch only the data that will be relayed to the frontend UI and visible to the end-user. Further, GraphQL offers the ability to traverse the relationships between trades, orders, and other objects, and to do so with minimal server requests.
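To make this concrete, here is a minimal sketch of such a query. The field and argument names (trades, last, order, side, market) are illustrative assumptions rather than Marionette's actual schema; the point is that the client asks only for the handful of properties the trading-history screen needs and traverses the trade-to-order relationship in the same request:

query {
  trades(last: 10) {
    id
    price
    amount
    order {
      side
      market
    }
  }
}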

Moreover, just as we can use SQL to define schemas for our database tables, Marionette uses GraphQL to write specifications that describe the type of data that can be queried from the servers. A GraphQL API specification is called a schema, and it's written in a standard called the Schema Definition Language (SDL).
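As an illustration of SDL, a schema fragment for such a trading API might look like the sketch below. The type and field names are hypothetical and are shown only to demonstrate the notation, not Marionette's actual schema:

type Order {
  id: ID!
  side: String!
  market: String!
}

type Trade {
  id: ID!
  price: Float!
  amount: Float!
  order: Order!
}

type Query {
  trades(last: Int): [Trade!]!
}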

Containers

This section gives a brief introduction to containers: what they are, how they are used in Marionette, whether they satisfy Marionette's current needs, and what we will do when those needs scale.

Containers have become a dominant concept in server-side software deployment and for many are the de facto choice for packaging and running microservice architectures. The container concept, popularized by Docker and often allied with a supporting container orchestration platform like Kubernetes, has become a popular choice for running microservice architectures at scale.

Marionette uses Docker to run each microservice instance in isolation. This ensures that issues in one microservice can't affect another, for example by consuming all the CPU. Virtualization is one way to create isolated execution environments on existing hardware, but normal virtualization techniques can be quite heavy when we consider the size of Marionette's microservices. Containers, on the other hand, provide a much more lightweight way to provision isolated execution environments for service instances. This results in faster spin-up times for new container instances and proves to be a much more cost-efficient approach for many architectures, including Marionette.

By deploying one service per container (image below), Marionette achieves a degree of isolation from other containers, enabling it to do much more in a cost-efficient manner, especially when compared to running each service in its own VM.

The Docker image abstraction is very useful for Marionette, hiding the details of how each microservice is implemented. The Docker toolchain handles much of the work around containers and isolates the execution of trusted software. It manages container provisioning, handles some of the networking problems, and provides its own registry for storing Docker images.

Marionette creates a Docker image as a build artifact and stores this image in a Docker registry. When launching an instance of this image, we gain a native set of tools for managing that particular instance.
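As an illustration of such a build artifact, a Node.js-based microservice could be packaged with a Dockerfile similar to the minimal sketch below. The base image, file layout, and commands are assumptions made for this example, not Marionette's actual build configuration:

# Minimal sketch of a microservice image (illustrative only)
FROM node:18-alpine
WORKDIR /app

# Install production dependencies first to take advantage of layer caching
COPY package*.json ./
RUN npm ci --omit=dev

# Copy the service source code into the image
COPY . .

# Start the service
CMD ["node", "index.js"]

The resulting image would then be tagged and pushed to the registry with the standard Docker toolchain (docker build, docker push), after which any node can launch an instance of it.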

To maintain optimal system health, Marionette manages containers across multiple machines, allowing it to restart a failed service or run additional containers to handle system load.

Each microservice instance runs as a separate container on a virtual or physical machine, and that container runtime may be managed by a container orchestration tool like Kubernetes. With smaller services, we can scale just those services that need scaling, allowing us to run other parts of the system on smaller, less powerful hardware.

Containers as a concept work wonderfully well for microservices, and Docker made containers significantly more viable as a concept. We get our isolation, but at a manageable cost. We also hide the underlying technology, allowing us to mix different tech stacks. When it comes to implementing concepts like desired state management, though, we'll need something like Kubernetes to handle it for us.

Hands-on experience with containers has shown that we need an efficient way to manage them across multiple underlying machines. Container orchestration platforms like Kubernetes do exactly that, allowing us to distribute container instances so as to provide the robustness and throughput our services need, while making efficient use of the underlying machines. Work in this direction is proceeding in line with the adjusted schedule. As we gradually increase the complexity of the Marionette microservice architecture, we plan to introduce Kubernetes for container orchestration.

We don't yet need a Kubernetes cluster while we have only about three dozen services. Once the overhead of managing deployment becomes a significant headache, we will start using Kubernetes. In the meantime, we can make a change to a single service and deploy it independently of the rest of the system. This allows us to get our code deployed more quickly. If a problem does occur, it can be quickly isolated to an individual service, making fast rollback easy to achieve.

Public cloud providers like GCP, AWS, and DigitalOcean offer an array of managed services and deployment options for managing Marionette. As our microservice architecture grows, more and more work will be pushed into the operational space. Public cloud providers offer a host of managed services, from managed database instances and Kubernetes clusters to message brokers and distributed filesystems. By making use of these managed services, we offload a large amount of this work to a third party that is arguably better able to deal with these tasks.

Testing

The question is how to effectively and efficiently test our code’s functionality when it spans a distributed system. Unit testing is a methodology where units of code are tested in isolation from the rest of the application. A unit test might test a particular function, object, class, or module. But unit tests don’t test whether or not units work together when they’re composed to form a whole application. For that, we use a set of full end-to-end functional tests of the whole running application (aka system testing). Eventually, we need to launch Marionette and see what happens when all the parts are put together.

Which way is right for us? Behaviour-Driven Development (BDD) uses human-readable descriptions of software user requirements as the basis for software tests. Like Domain-Driven Design, an early step in BDD is the definition of a shared vocabulary between stakeholders, domain experts, and engineers. This process involves defining the entities, events, and outputs that users care about, and giving them names everybody can agree on. Our testers then use that vocabulary to create a domain-specific language (called predicates in our ecosystem), which they use to encode system tests such as User Acceptance Tests. Each test is based on a user story written in a formally specified, English-based ubiquitous language (a vocabulary shared by all stakeholders). Notice that this language focuses exclusively on the business value a customer should get from the software, rather than describing the user interface or how the software should accomplish its goals. Our testers use tools such as Cucumber to create and maintain this domain-specific language.
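As a brief, invented illustration (the feature, steps, and values below are not taken from Marionette's actual test suite), a Cucumber scenario written in such a shared vocabulary might read:

Feature: Trading history
  Scenario: A trader views their recent trades
    Given a verified user with a funded wallet
    When the user requests their trading history
    Then the ten most recent trades are displayed

Note how the scenario describes business outcomes only; nothing in it mentions screens, endpoints, or implementation details.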

Security

Often when the topic of Marionette microservices security comes up, our clients want to start talking about reasonably sophisticated technological issues like the use of JWTs or the need for mutual TLS (topics we will explain later). Oh my! However, the problem with security is that you're only as secure as your least secure aspect. To use an analogy: if you're looking to secure your home, it would be a mistake to focus all your efforts on a pick-resistant front door, with lights and cameras to deter malicious parties, if you leave your back door open.

Our microservice architecture consists of lots of communication between things. Human users interact with our system via user interfaces. These user interfaces in turn make calls to microservices, and microservices end up calling yet more microservices.

Credentials give a person or computer access to some form of restricted resource. This could be a database, a computer, a user account, or something else. We have a number of humans involved, and we have lots of credentials in the mix representing microservices, (virtual) machines, databases, and the like. We break the topic of credentials down into two key areas. Firstly, we have the credentials of the users (and operators) of our system. Secondly, we consider secrets: pieces of information that are critical to running our microservices. Across both sets of credentials, we consider the issues of rotation, revocation, and limiting scope. User credentials, such as email and password combinations, remain essential to how many of our users work with our software, but they are also a potential weak spot when it comes to our system being accessed by malicious parties. Our credentials also extend to managing things like API keys for third-party systems, such as accounts with our public cloud provider.

Secrets include critical pieces of information such as:

  • Certificates for TLS

  • SSH keys

  • Public/private API keypairs

  • Credentials for accessing databases

  • etc.

In the context of security, authentication is the process by which we confirm that a party is who they say they are. We typically authenticate a human user by having them type in their username and password. We assume that only the actual user has access to this information, and therefore the person entering it must be them. Ease of use is important - we want to make it easy for our users to access our system. Our approach to authentication is to use some sort of single sign-on (SSO) solution to ensure that a user has to authenticate themselves only once per session, even if during that session they end up interacting with multiple services.

Authorization is the mechanism by which we map from a principal (generally, when we’re talking abstractly about who or what is being authenticated, we refer to that party as the principal) to the action we are allowing them to do. When a principal is authenticated, we will be given information about them that will help us decide what we should let them do.

Marionette's authorization scenario uses JSON Web Tokens (JWT). A JWT defines a compact and self-contained way for securely transmitting information between parties as a JSON object. Once the user is logged in, each subsequent request includes the JWT, allowing the user to access routes, services, and resources that are permitted with that token. A token can be obtained using Marionette's GraphQL interface:

mutation {
  login(email: "user@domain.io", password: "mypasswd") {
    token
  }
}

When the user successfully logs in with their credentials, a JSON Web Token is returned. The output is three Base64URL-encoded strings separated by dots, which can be easily passed around in HTML and HTTP environments:

{
  "data": {
    "login": {
      "token": "eyJhbGcxxx1NiJ9.eyJxxxpudWxsfQ.TE2ehxfNuxxx"
    }
  }
}

Whenever the user wants to access a protected route or resource, the user agent should send the JWT, typically in the Authorization header using the Bearer scheme. The content of the header should look like the following:

Authorization: Bearer <token>

This can be, in certain cases, a stateless authorization mechanism. The protected routes of the server will check for a valid JWT in the Authorization header, and if it's present, the user will be allowed to access protected resources.
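As a minimal sketch of such a check on a Node.js back-end, route protection could look like the middleware below. The express-style handler signature and the jsonwebtoken package are assumptions made for illustration; this paper does not prescribe these libraries, and the secret handling and error responses are simplified:

const jwt = require("jsonwebtoken");

// Reject the request unless a valid Bearer token is present
// in the Authorization header.
function requireAuth(req, res, next) {
  const header = req.headers.authorization || "";
  const token = header.startsWith("Bearer ") ? header.slice(7) : null;
  if (!token) {
    return res.status(401).json({ error: "Missing bearer token" });
  }
  try {
    // Verifies the signature and expiry; the secret source is an assumption.
    req.user = jwt.verify(token, process.env.JWT_SECRET);
    return next();
  } catch (err) {
    return res.status(401).json({ error: "Invalid or expired token" });
  }
}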

The following steps show how a JWT is obtained and used to access Marionette APIs or resources:

  1. The application or client makes a one-time authorization request to the authorization server. This is performed through one of several authorization flows; a web application will go through the /login endpoint using the authorization code flow.

  2. The server validates the credentials and, if everything is correct, returns to the client a JSON object with a token that encodes data about the logged-in user; i.e., when authorization is granted, the authorization server returns an access token to the application.

  3. After receiving the token, the client should store it in whatever way it prefers: localStorage, sessionStorage, cookies (ideally with the HttpOnly flag), or another client-side storage mechanism.

  4. Every time the client accesses a route that requires authentication, it simply sends this token to the API, which authenticates the request and returns the requested data.

  5. The application uses the access token to access a protected resource (API). The server always validates this token to allow or block a client request.

As an example, you can extract information about a certain order using the previously received token:

query {
  userOrder(id: "9dac9971-b947-421a-983e-33b22047a18c") {
    id
    status
    type
  }
}

HTTP header:

{
  "Authorization": "Bearer eyJhbGcxxx1NiJ9.eyJxxxpudWxsfQ.TE2ehxfNuxxx"
}

There are several options for storing tokens, each with its own costs and benefits. Briefly, the options are: in-memory JavaScript, sessionStorage, localStorage, and cookies. The main trade-off is security: any information stored outside of the current application's memory is vulnerable to Cross-Site Scripting (XSS) attacks. Marionette uses cookies with the HttpOnly flag as an acceptable way to keep client state (HttpOnly is an additional flag included in a Set-Cookie HTTP response header). Using the HttpOnly flag when generating a cookie helps mitigate the risk of client-side scripts accessing the protected cookie. If the HttpOnly flag is included in the HTTP response header, the cookie cannot be accessed through client-side script. As a result, even if a cross-site scripting flaw exists and a user accidentally follows a link that exploits it, the browser will not reveal the cookie to a third party. In short, the HttpOnly flag is always set, and the browser should not allow a client-side script to access the session cookie.
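For illustration, a response header setting such a cookie might look like the following; the cookie name and the attributes other than HttpOnly are assumptions for this example:

Set-Cookie: token=eyJhbGcxxx1NiJ9.eyJxxxpudWxsfQ.TE2ehxfNuxxx; HttpOnly; Secure; SameSite=Strict; Path=/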

Re-defining the Service Layer of the Marionette Stack

Let's dive into the technical construction of the Marionette back-end. A microservice is a JavaScript module containing some part of the Marionette application. It is isolated and self-contained, meaning that even if it goes offline or crashes, the remaining services are unaffected. Inside a service there are definitions of actions and subscriptions to events. From an architectural point of view, the back-end of Marionette can be seen as a composition of two independent parts: the set of core services and the gateway service. The former is responsible for business logic, while the latter simply receives users' requests and conveys them to the other services. To ensure that the Marionette back-end is resilient to failures, the core and gateway services run on dedicated nodes. Running services on dedicated nodes means that a transporter module is required for inter-service communication. Most of the transporters supported by the framework rely on a message broker. Overall, the internal architecture of the Marionette back-end is represented in the figure below.
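The framework itself is not named in this paper. Purely as an illustration, a core service with one action and one event subscription might be defined along the following lines in a Moleculer-style Node.js setup (the framework choice, transporter URL, and the service, action, and event names are all assumptions for this sketch):

const { ServiceBroker } = require("moleculer");

// Each node runs its own broker; the transporter connects brokers
// on different nodes through the message bus (NATS is assumed here).
const broker = new ServiceBroker({ transporter: "nats://localhost:4222" });

broker.createService({
  name: "markets",
  actions: {
    // Invoked as "markets.list"; returns the available markets
    async list(ctx) {
      return [{ id: "BTC-USD" }, { id: "ETH-USD" }];
    }
  },
  events: {
    // Subscription to an event emitted by another service
    "order.created"(ctx) {
      broker.logger.info("New order received", ctx.params);
    }
  }
});

broker.start();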

Now, assuming that the back-end services are up and running, the back-end can serve users' requests. Let's see what actually happens with a request to list all available markets. First, the request (GET /markets) is received by the HTTP server running at the gateway node. The incoming request is passed from the HTTP server to the Gateway service, which does all the processing and mapping. In this particular case, the user's request is mapped onto the "list all markets" action of the Markets service. Next, the request is passed to the broker, which checks whether the Markets service is a local or a remote service. In this case, the Markets service is remote, so the broker needs to use the transporter module to deliver the request. The transporter grabs the request and sends it over the communication bus. Since all nodes are connected to the same communication bus (the message broker), the request is successfully delivered to the Markets service node. Upon reception, the service broker on the Markets node parses the incoming request and forwards it to the Markets service. Finally, the Markets service invokes its list-all-markets action and returns the list of all available markets. The response is then forwarded back to the end-user.
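Continuing the same illustrative assumption of a Moleculer-style stack (the port, path, and alias below are hypothetical), the gateway node's mapping of GET /markets onto the markets.list action could be expressed roughly as follows:

const { ServiceBroker } = require("moleculer");
const ApiGateway = require("moleculer-web");

const broker = new ServiceBroker({ transporter: "nats://localhost:4222" });

// Gateway service: receives HTTP requests and maps them to actions,
// which the broker resolves locally or over the transporter.
broker.createService({
  name: "gateway",
  mixins: [ApiGateway],
  settings: {
    port: 3000,
    routes: [
      {
        path: "/",
        aliases: {
          "GET /markets": "markets.list"
        }
      }
    ]
  }
});

broker.start();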
