Synchronization server with a sick combo: Express, Socket.IO, PostgreSQL and Knex.js
In this article I will try to guide you through implementing a simple but scalable synchronization server. This server, based on Node.js, will be able to receive and process a bulk of CRUD requests from a client, act on them and reply back with individualized responses. We will use several tools: the Express framework as middleware, PostgreSQL as DBMS, Knex.js to create some data and access it transactionally, and finally Socket.IO for efficient client-server communication. Get ready for takeoff!
First of all we will need to install Node.js, npm and PostgreSQL. I’m sure that Google will explain how to do so better than I can. I will consider these prerequisites met if you continue reading.
We can now initialize a Node.js app in an empty directory and install the required dependencies.
We create a server.js file in a dedicated folder to implement the server itself. A simple snippet like this one from the Express documentation will be enough for now.
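A minimal bootstrap along these lines could look as follows. The port and the explicit http server are assumptions on my part; creating the http server by hand will pay off later, when Socket.IO attaches to it.

```javascript
// server.js -- minimal Express bootstrap (adapted from the Express hello-world)
const express = require('express');
const http = require('http');

const app = express();
// Socket.IO will attach to this http server later on
const server = http.createServer(app);
const PORT = process.env.PORT || 3000; // assumed port

app.get('/', (req, res) => res.send('Synchronization server up'));

server.listen(PORT, () => console.log(`Listening on port ${PORT}`));
```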
A script to run the server with nodemon will be needed, so we add a new line to the package.json file located at the root of our project. After that, we can run the server with npm start.
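The added line could look like this (the path to server.js and nodemon as a dev dependency are assumptions):

```json
{
  "scripts": {
    "start": "nodemon server/server.js"
  }
}
```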
Knex, the friend of our database
Despite what this article states (worth reading), we may want some abstraction (or help) when creating and dealing with the database. ORMs like Sequelize are pretty simple to use, but in my opinion they make our code unnecessarily complicated. We can find an alternative in query builders, which use a language closer to SQL and do not hurt performance as much. Knex.js was my choice.
Running npx knex init, a template file called knexfile.js will be generated at the root of our project. We modify it with a simple development configuration:
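A sketch of that development configuration; the directory paths and the use of the dotenv package to load the environment file are my assumptions:

```javascript
// knexfile.js -- development configuration for Knex
require('dotenv').config(); // loads DATABASE_URL from .env (dotenv assumed)

module.exports = {
  development: {
    client: 'pg', // PostgreSQL driver
    connection: process.env.DATABASE_URL,
    migrations: { directory: './db/migrations' }, // assumed paths
    seeds: { directory: './db/seeds' },
  },
};
```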
As we can see, we provide Knex with the URL to connect to our database, as well as the directories where our migrations and seeds will be stored. The DATABASE_URL variable must be defined in an environment file .env like this:
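The .env file holds that single variable; user, password and database name below are placeholders for your own PostgreSQL setup:

```
DATABASE_URL=postgres://user:password@localhost:5432/market
```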
The URL parameters must refer to an existing database in PostgreSQL. To keep this from getting too long, we will create a simple database that simulates a market storing products with a category. This can be created as follows:
npx knex migrate:make product and npx knex migrate:make category will create two migration files that we will use to define our database tables. Another migration, npx knex migrate:make product_update, will be necessary to update a table by defining a foreign key. We modify them to our needs:
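The three migration files, condensed into one listing for brevity (in a real project each lives in its own timestamped file); column names are illustrative assumptions for the minimal schema described above:

```javascript
// migrations/..._product.js -- products table
exports.up = (knex) =>
  knex.schema.createTable('products', (table) => {
    table.increments('id').primary();
    table.string('name').notNullable();
    table.integer('category_id'); // foreign key added in a later migration
  });
exports.down = (knex) => knex.schema.dropTable('products');

// migrations/..._category.js -- categories table
exports.up = (knex) =>
  knex.schema.createTable('categories', (table) => {
    table.increments('id').primary();
    table.string('name').notNullable();
  });
exports.down = (knex) => knex.schema.dropTable('categories');

// migrations/..._product_update.js -- foreign key from products to categories
exports.up = (knex) =>
  knex.schema.alterTable('products', (table) => {
    table.foreign('category_id').references('categories.id');
  });
exports.down = (knex) =>
  knex.schema.alterTable('products', (table) => table.dropForeign('category_id'));
```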
As you can see, our tables are very simple, but many more data types and schema-building features can be found in this documentation. Running npx knex migrate:latest creates our tables.
Similarly, Knex also allows us to seed a table with the command npx knex seed:run. For example, we can define a seed file (npx knex seed:make seed_categories) to populate the categories of the products.
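A possible seed file; the category values are just examples:

```javascript
// seeds/seed_categories.js -- populates the categories table
exports.seed = async (knex) => {
  // delete existing entries so the seed is repeatable
  await knex('categories').del();
  await knex('categories').insert([
    { id: 1, name: 'Dairy' },
    { id: 2, name: 'Bakery' },
    { id: 3, name: 'Drinks' },
  ]);
};
```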
To finish up this configuration, we need a knex.js file to hold the single instance of the Knex module with the correct environment config.
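Such a file might look like this (the relative path to knexfile.js is an assumption):

```javascript
// knex.js -- single shared Knex instance for the whole server
const knex = require('knex');
const knexfile = require('../knexfile'); // path is an assumption

const environment = process.env.NODE_ENV || 'development';
module.exports = knex(knexfile[environment]);
```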
We assume that we want a server capable of efficiently synchronizing a potentially high volume of data. So even though messages from the client to the server can be sent in bulk, they have to be individual in the opposite direction. That is, while the client must be able to send multiple demands in one single request, each of them must be answered individually. This way, when an action requested by the client has been carried out (successfully or not), the client receives a response without having to wait for the rest of the actions to complete.
This avoids timeout problems in case a transaction takes longer than expected. Efficiency improves, and we also protect ourselves from a single connection error conditioning an entire synchronization.
For this purpose, the request/response paradigm on which HTTP is based does not serve us, so we use the Socket.IO library, which enables real-time, bidirectional, event-based communication.
Knowing that, we modify server.js to work with this tool. We also create and import a file called ioHandler.js, in charge of managing the logic and namespaces (endpoints) of Socket.IO separately from the server file.
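A sketch of ioHandler.js under my assumptions about the event and namespace names (which the client section below also uses). The io instance and the controller are injected, which keeps this file independent from server.js and easy to test:

```javascript
// ioHandler.js -- wires Socket.IO namespaces to their controllers
function registerNamespaces(io, productController) {
  const actionsNsp = io.of('/sendActions'); // namespace used by the client
  actionsNsp.on('connection', (socket) => {
    // the whole block of actions arrives in a single event
    socket.on('sendProductActions', (actions) =>
      productController(socket, actions)
    );
  });
  return actionsNsp;
}
module.exports = { registerNamespaces };
```

In server.js we would then call registerNamespaces(io, ...) after creating the Socket.IO server on top of the http server.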
Meet the boss: product controller
As we can see above, we use a function from productController.js to receive the block of actions from the socket. Later on we will see that these actions actually arrive as a JSON file. The controller is also responsible for classifying the actions (create, read, update or delete) and calling the corresponding service. Here is a short version of this code:
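A minimal sketch of that dispatching logic. The services and the result callback are injected here so the classification can be shown (and tested) without a database or a socket; in the real controller the callback would emit on the socket and then call handleFinishTransaction:

```javascript
// productController.js -- short version of the action dispatcher
function makeActionDispatcher(services, onResult) {
  return async function dispatch(actions) {
    for (const action of actions) {
      // classify the action and pick the matching service
      const service = {
        CREATE: services.createProduct,
        READ: services.readProduct,
        UPDATE: services.updateProduct,
        DELETE: services.deleteProduct,
      }[action.type];
      if (!service) {
        onResult({ id: action.id, status: 'NACK', error: 'unknown action' }, null);
        continue;
      }
      // each service returns some info plus the still-open transaction
      const { result, trx } = await service(action);
      onResult(result, trx); // reply per action, without waiting for the rest
    }
  };
}
module.exports = { makeActionDispatcher };
```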
For every action from the client, the controller gets a result from the specific service (in the case above, CREATE) and sends it back. When analyzing the services, we will see that this result is formed by some info and a transaction (which is why it has two components).
Both components are passed to the handleFinishTransaction function, which faces three possible situations:
1. The action was not completed, so its status is “NACK”. We roll back the transaction, so no change related to that action is applied to the database.
2. The action was completed with status “ACK”. A timer starts on the server, waiting for a “FIN” message meant to confirm the reception of the action result and finish the transaction. Unfortunately, no “FIN” arrives from the client, so we roll back the transaction too.
3. Same as 2, but this time we do receive the client’s “FIN”. We commit the transaction, making the changes of that action permanent in the database.
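The three cases above can be sketched like this. The per-action event name and the default timeout are assumptions; socket.once, trx.commit and trx.rollback follow the Socket.IO and Knex APIs:

```javascript
// handleFinishTransaction -- commit only after the client confirms with "FIN"
function handleFinishTransaction(socket, result, trx, timeoutMs = 5000) {
  return new Promise((resolve) => {
    // 1) "NACK": the action failed, undo its changes right away
    if (result.status === 'NACK') {
      trx.rollback().then(() => resolve('rollback'));
      return;
    }
    // "ACK": wait for the client's "FIN" for this particular action
    let settled = false;
    const timer = setTimeout(() => {
      if (settled) return;
      settled = true;
      // 2) no "FIN" arrived in time: undo the changes
      trx.rollback().then(() => resolve('rollback'));
    }, timeoutMs);
    socket.once(`FIN:${result.id}`, () => {
      if (settled) return;
      settled = true;
      clearTimeout(timer);
      // 3) "FIN" received: make the changes permanent
      trx.commit().then(() => resolve('commit'));
    });
  });
}
```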
At your service, sir
Finishing up the explanation of the server code, let’s see how the createProduct service performs the CREATE action and then returns a result.
Here is where we use the instance of Knex that we mentioned previously. We obtain a reusable transaction object and build the query from it. The attributes to insert are taken from the received action. We also define an empty action result, to be filled later.
Knex provides a very simple but extensive syntax for building queries. In this case we insert an entry into the products table, returning the ID and the name (if everything went OK) or an error. The transaction object (which will be committed or rolled back by the controller) is also returned.
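A sketch of that service. The shared knex instance is injected as a parameter here (so the logic can be exercised without a live database); the field names and the shape of the result object are assumptions:

```javascript
// services/productService.js -- CREATE service, sketch
async function createProduct(knex, action) {
  // reusable transaction object; committed or rolled back by the controller
  const trx = await knex.transaction();
  // empty action result, filled below
  const result = { id: action.id, status: 'NACK', data: null };
  try {
    const rows = await trx('products')
      .insert({ name: action.data.name, category_id: action.data.category_id })
      .returning(['id', 'name']);
    result.status = 'ACK';
    result.data = rows[0];
  } catch (err) {
    result.error = err.message; // status stays "NACK"
  }
  return { result, trx };
}
module.exports = { createProduct };
```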
The complete code of the server can be found HERE.
Is the client always right?
To test our server we can use a very simple Socket.IO client. We only have to initialize another Node project as we did before and install the socket.io-client package.
With the code below, we connect a client to the server’s /sendActions namespace. Once communication is established, we emit a sendProductActions event with a JSON file in it. When acknowledged results are received, we wait 3 seconds (just to simulate network delay) and then emit a “FIN” for each action.
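A sketch of that client, with the block of actions inlined as an example. The server address, the event names and the action fields match the server sketches above, and are all assumptions:

```javascript
// client.js -- minimal test client (socket.io-client package assumed installed)
const { io } = require('socket.io-client');

const socket = io('http://localhost:3000/sendActions'); // assumed server address

// example block of CREATE actions, normally loaded from a JSON file
const actions = [
  { id: 'a1', type: 'CREATE', data: { name: 'Milk', category_id: 1 } },
  { id: 'a2', type: 'CREATE', data: { name: 'Bread', category_id: 2 } },
];

socket.on('connect', () => {
  // one single request carrying multiple demands
  socket.emit('sendProductActions', actions);
});

socket.on('actionResult', (result) => {
  console.log('result for action', result.id, '->', result.status);
  // wait 3 seconds to simulate network delay, then confirm reception
  setTimeout(() => socket.emit(`FIN:${result.id}`), 3000);
});
```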
The JSON above contains several CREATE actions adding multiple products to our database. Obviously, we can also try the other action types. Nevertheless, an important thing to note is that we cannot read, update or delete a record that is being created or accessed. So, although it would not make much sense anyway, we cannot create and update the same record within a single synchronization, for instance.
To conclude, I want to say that the scope of this article only reaches a basic implementation with four functions for the product entity. Despite this, the amount of possibilities is huge. More tables and relations can be defined, implying new controllers, services and actions. The synchronization exemplified here can be expanded too: it can be unidirectional or bidirectional, we may need versioned data, and so on. My final degree thesis covers these aspects in more detail.
That’s all folks, happy coding!