There are many ways to set up a micro-service based application. The following article demonstrates how the Spring Cloud Netflix libraries can be used to build such an infrastructure. In addition to the Spring components, Hazelcast is used as the technology for session distribution.
The following requirements are fulfilled:
Monitoring and other operational topics are not discussed in this article. All micro-services are assumed to be stateless. Persistence (database) is not considered here.
The setup shown here is not limited to two servers; it can easily be extended to support more. However, in production it is not very likely that micro-services are deployed directly on a server. Rather, Docker and Kubernetes or a similar platform will typically be used to provide an elastic and maintainable environment.
It is assumed that a fail-safe edge service providing load balancing is in place.
The following diagram shows the connections that are established between the considered components.
The Spring Cloud Config server is used in order to support different environments. This server uses a git repository to store all configuration files. During start-up, all micro-services request their configuration from that server, which must run on a known URL; this URL is basically the only piece of information that must be stored statically in each micro-service.
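A sketch of what this looks like in configuration (all URLs, ports, and names below are illustrative, not from the article): the config server points at the git repository, while each micro-service's bootstrap configuration contains nothing but the config server's URL and its own name.

```yaml
# application.yml of the config server (illustrative repository URL)
spring:
  cloud:
    config:
      server:
        git:
          uri: https://git.example.com/config-repo.git

# ---
# bootstrap.yml of a micro-service: the config server URL is the only
# piece of configuration stored statically in the service itself
spring:
  application:
    name: ms-1                         # used to look up ms-1.yml in the repo
  cloud:
    config:
      uri: http://config-server:8888   # known URL of the config server
```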
Service discovery is an important building block and is provided by Eureka. It provides the information which service (serviceId) is running where (IP address and port).
The left Eureka is configured with the URL of the right Eureka server and vice versa so both servers connect to each other and share their information. Consequently, each Eureka instance knows about all running micro-services on both servers.
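This peer awareness is plain Eureka configuration; a minimal sketch for the left server (host names are made up here, and the right server mirrors this with the left server's URL):

```yaml
# application.yml of the left Eureka server (illustrative host names)
eureka:
  client:
    registerWithEureka: true   # register with the peer
    fetchRegistry: true        # pull the peer's registry
    serviceUrl:
      defaultZone: http://eureka-right:8761/eureka/
```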
The API gateway is used to protect internal resources and allows mediation, e.g. to maintain compatibility across API versions. If these features are not needed, Zuul might not be required, and the initial load balancing between the two servers can be left to the external load balancer.
For load balancing, Zuul can use Ribbon and Hystrix. While Ribbon provides the actual balancing, Hystrix acts as the circuit breaker that ensures Ribbon does not send requests to micro-services that do not respond as expected.
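A Zuul route for this setup could look roughly like the following sketch (route name and path are assumptions): requests matching the path are forwarded to whatever instances Eureka currently knows for that serviceId, balanced by Ribbon.

```yaml
# Zuul route configuration (illustrative): /ms-2/** is forwarded to the
# ms-2 instances registered in Eureka, load-balanced by Ribbon
zuul:
  routes:
    ms-2:
      path: /ms-2/**
      serviceId: ms-2
```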
Each micro-service can be implemented in a straightforward way (or not), providing the required functionality. It may connect to a database, e.g. using Hibernate or whatever technology fits.
In order to be discoverable by other micro-services, each micro-service must register with Eureka on start-up under its serviceId. In order to find other micro-services, it requests this information from Eureka.
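On the client side this registration and lookup is again just configuration; a sketch (host names are illustrative), listing both Eureka instances so the service can still register when one of them is down:

```yaml
# Eureka client section of a micro-service's configuration (illustrative
# host names); the service registers itself and fetches the registry in
# order to discover other services
eureka:
  client:
    serviceUrl:
      defaultZone: http://eureka-left:8761/eureka/,http://eureka-right:8761/eureka/
  instance:
    preferIpAddress: true
```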
In this article it is assumed that MS-2 depends on MS-1 in order to provide its service; MS-1 does not depend on any other service. Furthermore, it is assumed that only one MS-1 instance is deployed on each server and two instances of MS-2.
Although it is not necessary, it makes a lot of sense to use Feign for the communication between the micro-services. Feign makes use of Eureka in order to find other services by their serviceId. Apart from that, Feign is not related to the general setup and is therefore not discussed further in this article.
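To give an idea of what this looks like, here is a sketch of a Feign client inside MS-2 (it assumes the Spring Cloud Feign starter on the classpath; the endpoint path, method, and the Item type are made up for illustration). Note that "ms-1" is the Eureka serviceId, not a host name; Feign resolves it via Eureka and load-balances via Ribbon.

```java
// Illustrative Feign client in MS-2 calling MS-1 by its serviceId.
// Endpoint and return type are hypothetical.
@FeignClient("ms-1")
public interface Ms1Client {

    @RequestMapping(method = RequestMethod.GET, value = "/items/{id}")
    Item findItem(@PathVariable("id") long id);
}
```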
In the same way as Zuul, each micro-service uses Ribbon and Hystrix to load-balance requests to other micro-services. Since the two Eureka instances are connected to each other and share the information about all micro-services on both nodes, the load balancing will also use both servers. So, in case MS-1 (left) dies, the functionality of MS-2 (left) is not affected, as it also uses the MS-1 (right) instance.
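The failover behaviour described above can be sketched in plain Java (this is an illustrative stand-in, not Ribbon's actual implementation): a round-robin chooser that skips instances marked as down, which is roughly what Ribbon plus the Hystrix circuit breaker achieve together.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of Ribbon-style client-side load balancing: round-robin over the
// instances known from the registry, skipping instances whose circuit
// breaker is open (i.e. that are considered dead).
class RoundRobinChooser {
    private final List<String> instances = new ArrayList<>();
    private final List<String> down = new ArrayList<>();
    private int next = 0;

    public void register(String instance) { instances.add(instance); }

    public void markDown(String instance) { down.add(instance); }

    // Returns the next available instance, or null if none is up.
    public String choose() {
        for (int i = 0; i < instances.size(); i++) {
            String candidate = instances.get(next % instances.size());
            next++;
            if (!down.contains(candidate)) {
                return candidate;
            }
        }
        return null;
    }
}
```

With both MS-1 instances registered, requests alternate between the servers; once the left instance is marked down, all requests go to the right one without MS-2 noticing a difference.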
In this article it is assumed that Spring Security with session cookies is used for authentication. Each micro-service connects as a Hazelcast client to a Hazelcast node that is running on localhost.
Hazelcast is not related to Spring Cloud, but it is supported by Spring Session and allows easy distribution of session information between different servers, which effectively enables failover when one server dies.
Each session service is configured with a list of all session services so that Hazelcast is able to form the cluster.
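In Hazelcast terms this is a TCP/IP join with a static member list; a sketch (host names are illustrative, and the exact file format depends on the Hazelcast version — older versions use XML instead of YAML):

```yaml
# hazelcast.yaml on each session-service node (illustrative host names);
# every node lists all cluster members so Hazelcast can form the cluster
hazelcast:
  network:
    join:
      multicast:
        enabled: false
      tcp-ip:
        enabled: true
        member-list:
          - server-left
          - server-right
```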
It should also be possible to use the information about the available session services that Eureka provides. However, that is not supported out of the box, and I have not yet tried to implement it.
A different option could be to use JWT. Although JWT promises simplicity, it might not be as simple as one might think.