This document describes the current architecture that guides the platform implementation, detailing the components that comprise the solution, their functionalities, and how each of them contributes to the platform as a whole.
While a brief explanation of each component is provided, this high-level description does not aim to explain the minutiae of each component's implementation. For that, please refer to each component's own documentation.
dojot was designed to make fast solution prototyping possible, providing a platform that is easy to use, scalable and robust. Its internal architecture combines many well-known open-source components with others designed and implemented by the dojot team. This architecture is described in Fig. 1.
Using dojot works as follows: a user configures IoT devices through the GUI or directly through the REST APIs exposed by the API Gateway. Data processing flows may also be configured - these entities can perform a variety of actions, such as generating notifications when a particular device attribute reaches a certain threshold or saving all data generated by a device to an external database. As devices start sending their readings to dojot, a user can:
- receive these readings via notifications generated by subscriptions;
- consolidate all data into virtual devices;
- gather all data from the historical database, and so on.
These features are exposed through REST APIs - they are the basic building blocks that any application based on dojot should use. The dojot GUI provides an easy way to perform management operations for all entities related to the platform (users, devices, templates and flows) and can also be used to check whether everything is working properly.
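As a sketch of how an application would use these building blocks, the snippet below constructs the requests for logging in and registering a device. The gateway address, endpoint paths and payload fields (`/auth`, `/device`, `passwd`, `templates`) are assumptions for illustration and may differ in a real deployment; the functions only build the requests, they do not send them.

```python
import json

DOJOT_HOST = "http://localhost:8000"  # assumed API Gateway address

def auth_request(username, passwd):
    """Build the login request that retrieves a JWT token (assumed endpoint)."""
    return (
        f"{DOJOT_HOST}/auth",
        {"Content-Type": "application/json"},
        json.dumps({"username": username, "passwd": passwd}),
    )

def create_device_request(jwt, label, template_ids):
    """Build the request that registers a device based on templates (assumed endpoint)."""
    return (
        f"{DOJOT_HOST}/device",
        {"Content-Type": "application/json",
         "Authorization": f"Bearer {jwt}"},
        json.dumps({"label": label, "templates": template_ids}),
    )

url, headers, body = create_device_request("my.jwt.token", "sensor-01", [1])
print(url)                       # http://localhost:8000/device
print(headers["Authorization"])  # Bearer my.jwt.token
```

Every call after login carries the JWT in the `Authorization` header, which is what lets the authorization service validate the credentials per request, as described next.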
User contexts are isolated and there is no data sharing; access credentials are validated by the authorization service for each and every operation (API request). Therefore, a user belonging to a particular context (tenant) cannot reach any data (including devices, templates, flows or any other data related to these resources) from other tenants.
Once devices are configured, the IoT Agent is able to map the data received from devices, encapsulated in MQTT for example, and send it to the message broker for internal distribution. This way, the data reaches the history service, for instance, so that it can be persisted in a database.
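The mapping an IoT agent performs can be sketched as a small translation function: an MQTT publish on a device topic becomes a normalized internal message for the broker. The topic layout `<tenant>/<device-id>/attrs` and the message envelope used below are assumptions for illustration.

```python
import json
import time

def map_device_message(topic, mqtt_payload):
    """Translate an MQTT publish into an internal broker-bound message.
    Topic layout '<tenant>/<device-id>/attrs' is assumed for this sketch."""
    tenant, device_id, _ = topic.split("/")
    return {
        "metadata": {
            "deviceid": device_id,
            "tenant": tenant,
            "timestamp": int(time.time() * 1000),
        },
        "attrs": json.loads(mqtt_payload),
    }

msg = map_device_message("admin/abc123/attrs", '{"temperature": 21.5}')
print(msg["metadata"]["deviceid"])  # abc123
print(msg["attrs"]["temperature"])  # 21.5
```

Each protocol-specific agent would implement its own version of this translation, which is what lets the rest of the platform stay protocol-agnostic.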
For more information about what's going on with dojot, take a look at the dojot GitHub repository, where you'll find all components used in dojot.
Each of the components that are part of the architecture is briefly described in the sub-sections below.
Apache Kafka is a distributed messaging platform that can be used by applications that need to stream data or consume/produce data pipelines. In comparison with other open-source messaging solutions, Kafka seems more appropriate to fulfil dojot's architectural requirements (responsibility isolation, simplicity, and so on).
In Kafka, a specialized topic structure is used to ensure isolation between the data of different users and applications, enabling a multi-tenant infrastructure.
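One simple way to picture that topic structure is a naming scheme that scopes every topic to a tenant, so a consumer can only be handed topics belonging to its own tenant. The `<tenant>.<subject>` convention below is an assumption for illustration (in dojot, DataBroker is the component that actually resolves topics).

```python
def tenant_topic(tenant, subject):
    """Compose a per-tenant Kafka topic name ('<tenant>.<subject>' is
    an assumed convention for this sketch)."""
    return f"{tenant}.{subject}"

def topics_for(tenant, subjects):
    """Resolve all topics a service of a given tenant may subscribe to;
    cross-tenant topics are simply never produced by this resolver."""
    return [tenant_topic(tenant, s) for s in subjects]

print(tenant_topic("admin", "device-data"))        # admin.device-data
print(topics_for("acme", ["device-data", "dojot.device-manager.device"]))
```

Because topic names embed the tenant, isolation reduces to controlling which topic names each credential can read and write.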
The DataBroker service makes use of an in-memory database for efficiency. It adds context to Apache Kafka, making it possible for internal and even external services to subscribe to or query data based on context. DataBroker is also a distributed service, which avoids it becoming a single point of failure or a bottleneck in the architecture.
DeviceManager is a core entity responsible for keeping the device and template data models. It is also responsible for publishing any updates to all interested components (namely IoT agents, history and subscription manager) through Kafka.
This service is stateless, with its data persisted to a database and isolated per user and application, enabling a multi-tenant architecture for the middleware.
An IoT agent is an adaptation service between physical devices and dojot's core components. It can be understood as a device driver for a set of devices. The dojot platform can have multiple IoT agents, each specialized in a particular protocol such as, for instance, MQTT/JSON, CoAP/LWM2M and HTTP/JSON.
It is also responsible for ensuring that communication with devices happens over secure channels.
This service provides mechanisms to build data processing flows that perform a set of actions. These flows can be extended using external processing blocks (which can be added using REST APIs).
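A single node of such a flow can be sketched as a function applied to each incoming device message. The threshold-notification rule below mirrors the example given earlier in this document; the message shape and attribute name are assumptions, and real dojot flows are built graphically and executed by the flow-processing service.

```python
def run_flow(message, threshold=30.0):
    """Minimal sketch of one processing-flow node: emit a notification
    when a device attribute crosses a threshold, otherwise do nothing."""
    temp = message["attrs"].get("temperature")
    if temp is not None and temp > threshold:
        return {"notification": f"temperature too high: {temp}"}
    return None

print(run_flow({"attrs": {"temperature": 35.2}}))  # emits a notification
print(run_flow({"attrs": {"temperature": 20.0}}))  # None
```

A full flow chains several such nodes (filters, transformations, outputs), each consuming the previous node's result.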
The History component works as a pipeline for data and events that must be persisted in a database. The data is converted into a storage structure and sent to the corresponding database.
For internal storage, the MongoDB non-relational database is used; it allows a sharded-cluster configuration that may be required depending on the use case.
The data may also be directed to databases external to the dojot platform, requiring only the proper configuration of Logstash and of the data model to be used.
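The conversion step of that pipeline can be sketched as turning one device message into per-attribute documents ready for insertion into the history database (MongoDB in the internal case). The document layout below is an assumption for illustration, not dojot's actual schema.

```python
import time

def to_storage_documents(device_msg):
    """Turn a device message into one storage document per attribute
    (assumed layout: device_id, attr, value, ts)."""
    ts = device_msg["metadata"].get("timestamp", int(time.time() * 1000))
    device_id = device_msg["metadata"]["deviceid"]
    return [
        {"device_id": device_id, "attr": attr, "value": value, "ts": ts}
        for attr, value in device_msg["attrs"].items()
    ]

docs = to_storage_documents(
    {"metadata": {"deviceid": "abc123", "timestamp": 1000},
     "attrs": {"temperature": 21.5, "humidity": 60}}
)
print(len(docs))  # 2
```

One document per attribute keeps queries over a single attribute's time series cheap, which also fits a sharded deployment where data can be partitioned by device.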
All the services that are part of the dojot platform can generate usage metrics for their resources, which can be consumed by a logging and auditing service that processes these records and summarizes them by user and application.
The consolidated data is presented back to the services, allowing them, for example, to expose this data to the user via a graphical interface, to limit system usage based on resource consumption and on quotas associated with users, or even to feed billing services that charge users for platform utilization.
Such components are currently in development.
The Kong API Gateway is used as the entry point for applications and external services to reach the services internal to the dojot platform. This brings multiple advantages, such as a single access point and the ease of applying rules over API calls, like traffic rate limiting and access control.
The Graphical User Interface in dojot is responsible for providing responsive interfaces to manage the platform, including functionalities like:
- User Profile Management: define profiles and the API permissions associated with those profiles
- User Management: Creation, Visualization, Editing and Deletion Operations
- Applications Management: Creation, Visualization, Editing and Deletion Operations
- Device Models Management: Creation, Visualization, Editing and Deletion Operations
- Devices Management: Creation, Visualization (real-time data), Editing and Deletion Operations
- Processing Flows Management: Creation, Visualization, Editing and Deletion Operations
This service is specialized for cloud environments. It monitors platform utilization and can increase or decrease the platform's storage and processing capacity dynamically and automatically, adapting to variations in demand.
This controller requires that the dojot platform services be horizontally scalable and that the databases be clusterable, both of which match the adopted architecture.
This component is currently scheduled for development.
This component is responsible for handling alarms generated by dojot’s internal components, such as IoT agents, Device Manager, and so on.
This component is also scheduled for development.
A few extra components are used in dojot that were not shown in Fig. 1. They are:
- postgres: this database is used to persist data from many components, such as Device Manager.
- redis: in-memory database used as cache in many components, such as service orchestrator, subscription manager, IoT agents, and so on. It is very light and easy to use.
- rabbitMQ: message broker used by the service orchestrator to implement flows of actions that should be applied to messages received from components.
- mongo database: widely used database solution that is easy to use and doesn't add considerable access overhead (where it is employed in dojot).
- zookeeper: keeps replicated services within a cluster under control.
All components communicate with each other in two ways:
- Using HTTP requests: if one component needs to retrieve data from another (say, an IoT agent needs the list of currently configured devices from Device Manager), it can send an HTTP request to the appropriate component.
- Using Kafka messages: if one component needs to send new information about a resource controlled by it (such as new devices created in Device Manager), the component may publish this data through Kafka. Using this mechanism, any other component that is interested in such information needs only to listen to a particular topic to receive it. Note that this mechanism doesn’t make any hard associations between components. For instance, Device Manager doesn’t know which components need its information, and an IoT agent doesn’t need to know which component is sending data through a particular topic.
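The decoupling described in the Kafka case can be illustrated with a tiny in-memory stand-in for the broker: the publisher writes to a topic without knowing who consumes it, and the consumer subscribes without knowing the producer. The topic name below is an assumption chosen to echo the Device Manager example.

```python
from collections import defaultdict

class Bus:
    """In-memory stand-in for Kafka, illustrating publish/subscribe
    decoupling (not a real broker: no partitions, offsets or persistence)."""
    def __init__(self):
        self._subs = defaultdict(list)

    def subscribe(self, topic, handler):
        """Register a handler for a topic, without knowing who publishes to it."""
        self._subs[topic].append(handler)

    def publish(self, topic, message):
        """Deliver a message to every subscriber, without knowing who they are."""
        for handler in self._subs[topic]:
            handler(message)

bus = Bus()
received = []
# An IoT agent listens for device updates without referencing Device Manager.
bus.subscribe("dojot.device-manager.device", received.append)
# Device Manager announces a new device without knowing its consumers.
bus.publish("dojot.device-manager.device", {"event": "create", "device": "abc123"})
print(received)  # [{'event': 'create', 'device': 'abc123'}]
```

Swapping this toy bus for Kafka changes the delivery guarantees and scalability, but not the shape of the interaction between components.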