Development Framework
The Mosaic Development Framework is part of the bigger picture introduced in the introduction article. It is a combination of the following:
Technology Stack
The modern approach to developing custom Mosaic services is supported by a technology choice that follows a service-oriented approach through micro-services and micro-frontends.
For a hassle-free setup and easy collaboration between frontend and backend development, Node.js and TypeScript are utilized. Moreover, GraphQL is used for the interface between the frontend and backend. Mosaic uses RabbitMQ for reliable inter-service communication.
Core technologies used by Mosaic:
- Node.js - https://nodejs.org/ - JavaScript runtime, the most popular tool for running server-side applications
- Express - https://expressjs.com/ - a web framework for Node.js
- TypeScript - https://www.typescriptlang.org/ - extends JavaScript by adding types. A very valuable improvement to JavaScript for enterprise-scale applications. All Mosaic components are developed with TypeScript.
- GraphQL - https://graphql.org/ - a query language that allows clients to mutate and fetch exactly (and only) the data that they need. All Mosaic Services expose their APIs as GraphQL APIs. GraphQL offers clients a type-safe way to consume the APIs.
- RabbitMQ - https://www.rabbitmq.com/ - a message broker for reliable and flexible messaging to easily integrate multiple micro-services
- React - https://reactjs.org/ - declarative and flexible JavaScript library for building complex UIs from small and isolated pieces
- Piral - https://www.piral.io/ - a framework enabling Micro-Frontends
- PostgreSQL - https://www.postgresql.org/ - an open-source relational database with good support for JSON, column- and row-level security, and many other features
- PostGraphile - https://www.graphile.org/postgraphile/ - a tool that exposes a PostgreSQL database as a GraphQL API in a simple way
- Micro-Services - fully decoupled services that allow an API-first approach and effective scalability
- Micro-Frontends - fully decoupled workflows to enable domain-scoped development and deployment

3rd Party Tools
Selected and Integrated 3rd Party Tools:
- Visual Studio Code - https://code.visualstudio.com/ - freeware source-code editor. Axinom suggests a set of extensions for optimal support for the technologies used.
- Storybook - https://storybook.js.org/ - a tool for developing UI components in isolation and testing them on the fly
- Jest - https://jestjs.io/ - JavaScript testing framework, well-integrated with TypeScript, Node.js, and React
- Yarn - https://yarnpkg.com/ - package manager with some advantages compared to NPM, especially for big projects
Patterns and Techniques
Mosaic is much more than a combination of all the used technologies and tools. They are used in a specific way, aimed at efficient application development and a reduced time-to-market.
Developers can concentrate on what matters for their business and don’t need to worry about secondary details. The recommended development patterns are presented in the subsequent chapters of the Development Framework documentation.
GraphQL
The Mosaic framework uses GraphQL as the interface between the backend and the workflows. The backends use PostGraphile to generate their GraphQL APIs. PostGraphile introspects the PostgreSQL database (tables, columns, relationships, etc.) to create a lightning-fast GraphQL API with powerful graph capabilities and filters.
GraphQL Setup and Configuration
PostGraphile offers an HTTP endpoint for GraphQL queries and mutations as well as a WebSocket endpoint to support GraphQL subscriptions. The setup adds these endpoints to the WebSocket-enabled HTTP server. The GraphQL endpoint is available at the route /graphql. The interactive GraphiQL IDE is added as the /graphiql endpoint, which allows you to conveniently view your GraphQL schema and execute queries and mutations. This endpoint can be disabled, e.g. for production deployments.
In general, you can tweak every aspect of the GraphQL API generation. The Mosaic framework provides a PostGraphile options builder that already contains reasonable defaults which you can further extend. The options define which plug-ins should be loaded (and in which order), how the GraphQL API should be built, the PostgreSQL settings for database queries, how the HTTP request is parsed (e.g. for JWT parsing), and more. A minimal setup sketch follows the plug-in lists below.
We suggest enabling the following open-source PostGraphile plug-ins:
- PgSimplifyInflectorPlugin - for nicer endpoint names.
- ConnectionFilterPlugin - adds a powerful suite of filtering capabilities to a PostGraphile schema. This enables complex queries with single or multiple combined filter operations, different operators, string filters, and/or/not, etc. The drawback is that this can lead to expensive database operations. For internal/protected services, this is likely fine. However, for public/anonymous queries, special protection (like query pinning) is suggested.
- AtomicMutationsPlugin - enables mutation atomicity with GraphQL requests containing multiple mutations.

and the following Mosaic-specific plug-ins:
- SubscriptionsPluginFactory - a factory to create subscription plug-ins for your entities.
- AxGuardPlugin - wraps resolver executions into an authorization check, making sure that the JWT subject is authorized to access the GraphQL resource.
- EnforceStrictPermissionsPlugin - omits all GraphQL endpoints that don’t have any permission definition.
- ValidationDirectivesPlugin - automatically adds validation notes to the comments of the GraphQL schema based on the database constraints.
- AnnotateTypesWithPermissionsPlugin - automatically adds the permissions required to call the GraphQL endpoint to the schema description.
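The following is a minimal setup sketch, assuming PostGraphile is mounted directly as Express middleware rather than through the Mosaic options builder; the connection string, schema name, and port are illustrative placeholders. It wires up the /graphql and /graphiql routes and appends the two open-source plug-ins mentioned above:

```typescript
// Minimal sketch: mounting PostGraphile as Express middleware. The Mosaic
// options builder wraps this with its own defaults; the database URL, schema
// name, and port below are illustrative placeholders.
import express from 'express';
import { createServer } from 'http';
import { postgraphile } from 'postgraphile';
import PgSimplifyInflectorPlugin from '@graphile-contrib/pg-simplify-inflector';
import ConnectionFilterPlugin from 'postgraphile-plugin-connection-filter';

const DATABASE_URL = process.env.DATABASE_URL ?? 'postgres://localhost/mosaic_dev';
const app = express();

app.use(
  postgraphile(DATABASE_URL, 'app_public', {
    graphqlRoute: '/graphql',
    graphiqlRoute: '/graphiql',
    // Expose the interactive GraphiQL IDE only outside of production.
    graphiql: process.env.NODE_ENV !== 'production',
    // Enable GraphQL subscriptions; depending on the PostGraphile version,
    // additional wiring to the WebSocket-enabled HTTP server may be needed.
    subscriptions: true,
    // Rebuild the schema automatically when the database changes (dev only).
    watchPg: process.env.NODE_ENV !== 'production',
    appendPlugins: [PgSimplifyInflectorPlugin, ConnectionFilterPlugin],
  }),
);

// Use a plain HTTP server so WebSocket upgrades (subscriptions) can be handled.
const server = createServer(app);
server.listen(3000);
```

The Mosaic-specific plug-ins would typically be appended in the same way once the service is wired up with the Mosaic options builder.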
General Schema Generation
Based on your database tables, columns, and relationships, PostGraphile automatically creates the GraphQL query and mutation endpoints. If you have a movies table that has a 1:n relation to a table movies_casts, it creates the following query endpoints (when using the default Mosaic settings - this can be fine-tuned):
- movie - to get a movie by the unique ID.
- movieByExternalId - to get a movie by another unique property (the ExternalId in this example).
- movies - to query for movies with powerful filters, sorting, paging, cursors, and more.
- moviesCast - to get the details of a single cast entry.
- moviesCasts - to query for all casts in all movies with powerful filters, sorting, paging, cursors, and more.
The GraphQL API allows one to easily traverse the graph. From the movie entity, you can access the cast members that are part of the movie. This connection has again all the filters, sorting, paging, and other functionality to return exactly the data that the frontend needs.
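As an illustration, a single query can fetch a movie together with a page of its cast entries. This is a hedged sketch: the field names (title, name) and the Int ID type are assumptions about the example movies and movies_casts tables, not fixed Mosaic names.

```typescript
// Sketch of a query that traverses from a movie to its cast entries.
// The selected fields (title, name) and the ID type are assumptions
// about the example tables.
import gql from 'graphql-tag';

export const MOVIE_WITH_CAST = gql`
  query MovieWithCast($id: Int!) {
    movie(id: $id) {
      title
      moviesCasts(first: 10) {
        nodes {
          name
        }
      }
    }
  }
`;
```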
To manipulate the data, PostGraphile generates (by default) the following mutation endpoints:
- createMovie - create a new movie by providing values for at least the non-nullable fields.
- createMoviesCast - create a movie cast entry related to a movie.
- updateMovie - update an existing movie by providing the new values and the unique ID of the movie.
- updateMovieByExternalId - same as the above but providing the unique external ID.
- updateMoviesCast - update an existing movie cast entry.
- deleteMovie - delete a movie by providing the unique ID.
- deleteMovieByExternalId - delete a movie by providing another unique property (the ExternalId in this example).
- deleteMoviesCast - delete a cast member entry for a movie.
In addition to the query and mutation endpoints, there is the GraphQL subscription endpoint. If the database is set up to issue change triggers, those are forwarded to the GraphQL subscribers. The endpoint name for all the movie and movie cast-related changes would be movieMutated.
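A rough sketch of such a subscription follows; the exact payload fields exposed by the SubscriptionsPluginFactory depend on its configuration, so the selection below is an assumption.

```typescript
// Sketch of subscribing to movie-related changes. The payload selection
// (id and the related movie) is an assumption; the actual fields depend on
// how the SubscriptionsPluginFactory is configured.
import gql from 'graphql-tag';

export const MOVIE_MUTATED = gql`
  subscription OnMovieMutated {
    movieMutated {
      id
      movie {
        title
      }
    }
  }
`;
```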
Writing GraphQL Plug-ins
PostGraphile generates the GraphQL API based on your PostgreSQL database in an already very usable way. Plug-ins allow you to further tweak and customize how exactly the API should be generated.
The simplest way is to use the PostGraphile makeExtendSchemaPlugin. This allows you to adjust the GraphQL schema, for example, by providing a new query or mutation endpoint and adding corresponding resolvers to it.
Another option is to wrap existing resolvers and add your own logic before or after the resolver is called by utilizing the makeWrapResolversPlugin.
The most powerful, but also more complex, way to fine-tune every aspect of the generated GraphQL API is to use hooks. These let you tweak the fields of GraphQL types and the query and mutation operations, adjust input types, and much more.
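As an example of the first approach, a custom query endpoint could be added with makeExtendSchemaPlugin from graphile-utils. This is a minimal sketch; the moviesTotalCount endpoint and the queried table are assumptions for illustration.

```typescript
// Sketch of a makeExtendSchemaPlugin that adds a custom query endpoint.
// The endpoint name and the queried table are illustrative assumptions.
import { makeExtendSchemaPlugin, gql } from 'graphile-utils';

export const MoviesTotalCountPlugin = makeExtendSchemaPlugin(() => ({
  typeDefs: gql`
    extend type Query {
      moviesTotalCount: Int!
    }
  `,
  resolvers: {
    Query: {
      moviesTotalCount: async (_query, _args, context) => {
        // pgClient is the transaction-bound PostgreSQL client that
        // PostGraphile provides on the resolver context.
        const { rows } = await context.pgClient.query(
          'SELECT count(*)::int AS count FROM app_public.movies',
        );
        return rows[0].count;
      },
    },
  },
}));
```

Such a plug-in is then registered through the appendPlugins option of the PostGraphile setup.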
Database
By default, every Mosaic micro-service has one PostgreSQL database to hold and manage its data. The database is used to create the GraphQL schema to expose entities in the API. It also stores other internal service data.
Schemas
To accommodate both GraphQL and internal data, Mosaic uses different database schemas and database roles. The ones that are meant to be used in your project are:
- app_public: this database schema should contain all the tables that you want to expose as a part of the GraphQL API. The database role DATABASE_GQL_ROLE should receive fine-granular GRANTs to select, insert, update, and delete all or some data for those tables and columns.
- app_hidden: this database schema should contain all the tables that are accessible to the database role DATABASE_GQL_ROLE but should not be made available in the GraphQL API. Those tables might be used and exposed through PostgreSQL functions (in the app_public schema) or used in plug-ins.
- app_private: this database schema contains tables and functions that are not available to the database role DATABASE_GQL_ROLE. Those are only available to the database owner role and contain secret/sensitive data.
- ax_utils: this database schema contains PostgreSQL utility functions that help you to create a consistent database structure and optimal integration into existing Mosaic plug-ins. Those functions are provided by the @axinom/mosaic-db-common library and should not be changed.
Database Migration
To create and maintain your database, you can use any database migration tool or manually create and alter the database. In our template projects, we use the Graphile Migrate library for easy and fast development cycles: it automatically updates the database and the local GraphQL API on every change to the migration SQL file.
Every project has a ./migrations/current.sql file in which you can add your SQL migration code. As Graphile Migrate performs forward-only migrations and the current.sql file is executed many times against the local database, its contents must be idempotent. This means, for example, adding a DROP TABLE IF EXISTS app_public.movies CASCADE; before the CREATE TABLE app_public.movies (…) statement. When the migration is applied (exactly once) in production, this does not cause any issues. However, it is immensely helpful to have fast development cycles locally.
When your local service runs in the dev mode, it watches for any change to the current.sql file. Whenever you save the file, it is run against your local database. This way, you get immediate feedback if there are any errors in the SQL file. If it can be applied successfully, it automatically triggers a rebuild of the GraphQL API to make the new changes directly available without any manual rebuild/restart. If you use the GraphiQL web IDE, this automatically reloads the schema as well. In summary, this allows you to change something in the current.sql file and see it directly reflected in the GraphiQL web app.
When you are happy with the migration SQL code, you can run the Graphile Migrate commit command to freeze the contents of your current.sql file into a committed database migration file. This is a Graphile Migrate step - not to be confused with GIT commits. After this step, the current.sql file is empty again. If you want to change something in the last committed database migration file, you can run the uncommit command, which removes the last commit file and adds its contents back to the current.sql file. This operation must only be used as long as the migration was never deployed to production!
As a general best practice, the current.sql file should always be empty when you GIT commit code to the master branch.
When the application is deployed to a production server, it executes all the committed database migration files that were not yet applied to the database. The current.sql file is only executed in the dev mode - not in the production mode.
Messaging
For asynchronous message processing, we use the Mosaic message bus which is based on the Rascal pub/sub wrapper around the amqplib library. The Mosaic message bus adds the notions of events and commands and provides the functionality to easily integrate into the Mosaic Managed Services in a secure way.
The main objective of the messaging folder is the setup and registration of the message handlers and middleware to consume messages and of the publishers to send out messages. In addition, there is a media message handler that protects message handlers by checking the message for specific permissions.
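To illustrate the underlying pub/sub mechanics, here is a minimal sketch that uses Rascal directly rather than the Mosaic message bus wrapper; the exchange, queue, routing key, and connection URL are illustrative assumptions, not Mosaic defaults.

```typescript
// Minimal pub/sub sketch using Rascal directly (the Mosaic message bus adds
// events, commands, and permission checks on top of this). All names below
// (exchange, queue, routing key, URL) are illustrative assumptions.
import { BrokerAsPromised as Broker } from 'rascal';

const config = {
  vhosts: {
    '/': {
      connection: { url: 'amqp://guest:guest@localhost:5672' },
      exchanges: ['movie_events'],
      queues: ['movie_events_queue'],
      bindings: ['movie_events[movie.published] -> movie_events_queue'],
      publications: {
        moviePublished: { exchange: 'movie_events', routingKey: 'movie.published' },
      },
      subscriptions: {
        moviePublished: { queue: 'movie_events_queue' },
      },
    },
  },
};

async function main(): Promise<void> {
  const broker = await Broker.create(config);

  // Consume messages and acknowledge (or reject) them explicitly.
  const subscription = await broker.subscribe('moviePublished');
  subscription.on('message', (_message, content, ackOrNack) => {
    console.log('Movie published:', content);
    ackOrNack();
  });

  // Publish an event to the configured exchange.
  await broker.publish('moviePublished', { movieId: 1, title: 'Some Movie' });
}

main().catch(console.error);
```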
Libraries and Tools
Mosaic provides libraries and tools for both backend and frontend developers. The packages encapsulate all heavy operations and the details of communication with the Managed Services.
The following packages are available:
| Type | Purpose |
|---|---|
| Backend | Shared types and core functionalities. |
| Backend | This package encapsulates database-related functionality to develop Mosaic-based services. |
| Backend | Authentication and authorization helpers for Mosaic services. |
| Frontend | UI components for building Mosaic applications. |
| Frontend | Integration utilities from id-service for application frontends. |
| Backend | Integration utilities from id-service for Mosaic services. |
| Backend | Shared types used by id-service for integration clients. |
| Backend | Messaging library for Mosaic services. |
| Backend | Shared types for Mosaic service messages. |
| Backend | Common helpers and PostgreSQL-related functionality. |
| CLI | Command-line interface application providing developer tools. |
| Frontend | Orchestration Application for running micro-frontends in the local development environment. |
Mosaic Frontend Samples Application
While Axinom Mosaic focuses on the development of backend applications, services, and processes, the services provided by the backend are ultimately consumed by end users through frontend applications. Axinom Mosaic is unopinionated about the frontend technologies, concepts, or user experiences used to develop the end-user-facing applications. Still, we created the Frontend Samples Application that demonstrates how a frontend can interact with Mosaic services. It is meant to showcase the capabilities of Mosaic and to provide a starting point for frontend developers, but it can also be used to test out your backend services without the need for an actual frontend.
The Frontend Samples Application contains pre-defined scenarios for various interactions a frontend may want to implement. Each scenario contains code samples that demonstrate the usage of Mosaic service APIs and related libraries to achieve a certain functionality.
A deployed version of the application can be found at https://mosaic-frontend-samples.axinom.net, and the source code of each scenario is published under the open-source project at https://github.com/Axinom/mosaic-frontend-samples.
Tip: In case you are running customized versions of Mosaic services with custom API signatures that are incompatible with the hosted frontend samples, you can always clone or fork the repository and adjust the code samples to your needs.
If you want to find out more about the Frontend Samples, check out its User Guide.