What is a Sidecar?
A sidecar is a separate container that runs alongside an application container in a Kubernetes pod. The sidecar handles networking concerns such as inter-service communication, monitoring, security, timeouts, retries, and circuit breaking, and it supports communication protocols such as HTTP and gRPC.
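The sidecar pattern is simply a second container declared in the same pod spec, sharing the pod's network namespace with the application. The pod name, container names, and images below are illustrative placeholders, not a real mesh deployment:

```shell
# Minimal illustration of the sidecar pattern: two containers in one pod.
# Both containers share the pod's network, so the proxy can intercept traffic.
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: app-with-sidecar
spec:
  containers:
  - name: app             # the application container
    image: nginx:1.25
  - name: sidecar-proxy   # the sidecar container (placeholder image)
    image: envoyproxy/envoy:v1.28-latest
EOF
```

In a real service mesh, you would not write this YAML by hand; the control plane injects the proxy container automatically, as shown later in the installation steps.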
Sidecars are typically managed by some type of control plane.
What is a Service Mesh?
- A service mesh is a configurable, low-latency infrastructure layer that handles communication between services through application programming interfaces (APIs), without requiring changes to the code in a microservices architecture.
- A service mesh ensures that communication between services is fast, reliable and secure.
A service mesh provides:
- Service discovery
- Load balancing
- Authentication and authorization
- Support for the circuit breaker pattern
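As a sketch of how one of these features is expressed in practice, Istio configures the circuit breaker pattern through a DestinationRule. The service name (`reviews`) and the thresholds below are illustrative assumptions:

```shell
# Illustrative DestinationRule applying connection-pool limits and outlier
# detection (circuit breaking) to a hypothetical "reviews" service.
kubectl apply -f - <<EOF
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: reviews-circuit-breaker
spec:
  host: reviews
  trafficPolicy:
    connectionPool:
      tcp:
        maxConnections: 100     # cap concurrent TCP connections
    outlierDetection:
      consecutive5xxErrors: 5   # eject a host after 5 consecutive 5xx errors
      interval: 30s             # how often hosts are scanned
      baseEjectionTime: 60s     # how long an ejected host stays out
EOF
```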
- Monolithic Architecture is a traditional way of building applications.
- A monolithic application is built and functions as a single entity, with APIs, databases, services, and load balancers combined into one application framework.
- Authentication, authorization, monitoring, networking, and logging are all located inside a single deliverable.
After migrating a monolithic application to microservices, we have to implement authentication, authorization, monitoring, networking, logging, and load balancing within each and every microservice. Maintaining all of these concerns becomes really hard when we have a lot of services.
These are the pros and cons of microservices:
Why Sidecar Proxy?
As you can see in the picture below, we can use a sidecar proxy to handle authentication, authorization, security, monitoring, and so on. This is how the service mesh comes into the picture.
When we use a service mesh, we don’t need to worry about inter-service security, load balancing, service authentication, and the rest; the service mesh handles all of it for us.
With a service mesh, we also don’t need to change the application code to configure security, load balancing, and so on. We configure these on the service mesh instead.
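For example, timeouts and retries can be added to a service purely through mesh configuration, with no application code changes. The service name (`ratings`) and the values below are hypothetical:

```shell
# Hypothetical VirtualService adding a request timeout and retry policy
# to a "ratings" service entirely at the mesh layer.
kubectl apply -f - <<EOF
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: ratings-timeouts
spec:
  hosts:
  - ratings
  http:
  - route:
    - destination:
        host: ratings
    timeout: 5s          # fail the request after 5 seconds overall
    retries:
      attempts: 3        # retry up to 3 times
      perTryTimeout: 2s  # each attempt gets at most 2 seconds
EOF
```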
Like other service meshes, an Istio service mesh is logically split into a data plane and a control plane.
Istio uses the Envoy proxy as its sidecar. These proxies mediate and control all network communication between microservices. They also collect and report telemetry on all mesh traffic.
Envoy’s many built-in features include:
- Dynamic service discovery
- Load balancing
- TLS termination
- HTTP/2 and gRPC proxies
- Circuit breakers
- Health checks
- Staged rollouts with %-based traffic split
- Fault injection
- Rich metrics
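One of the features above, staged rollouts with a %-based traffic split, is expressed in Istio as route weights on a VirtualService. This sketch assumes a `reviews` service whose `v1` and `v2` subsets are defined in a matching DestinationRule:

```shell
# Sketch of a %-based traffic split: 90% of requests go to subset v1,
# 10% to subset v2. The subsets must exist in a DestinationRule.
kubectl apply -f - <<EOF
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: reviews-rollout
spec:
  hosts:
  - reviews
  http:
  - route:
    - destination:
        host: reviews
        subset: v1
      weight: 90
    - destination:
        host: reviews
        subset: v2
      weight: 10
EOF
```

Shifting the weights gradually (90/10, then 50/50, then 0/100) is how a staged rollout is typically performed.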
- Istiod provides service discovery, configuration and certificate management.
- Istiod acts as a Certificate Authority (CA) and generates certificates to allow secure mTLS communication in the data plane.
- In earlier versions, the control plane consisted of separate components: Citadel managed certificate generation, Pilot handled service discovery, and Galley validated configuration files.
- In later versions, these three components (Citadel, Pilot, and Galley) were combined into a single daemon called Istiod.
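Because Istiod acts as the CA and distributes certificates to the sidecars, turning on strict mTLS in the data plane is a single configuration change. This sketch applies it mesh-wide by placing the policy in the root `istio-system` namespace:

```shell
# Mesh-wide PeerAuthentication requiring mTLS for all sidecar traffic.
# Placing it in istio-system with the name "default" makes it global.
kubectl apply -f - <<EOF
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: istio-system
spec:
  mtls:
    mode: STRICT   # reject any plaintext traffic between workloads
EOF
```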
- The Istio ingress gateway manages all inbound traffic into the services. The ingress gateway is a standalone Envoy proxy sitting at the edge of the service mesh; it does not run as a sidecar.
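The standalone ingress Envoy is configured with a Gateway resource. The name and hosts below are illustrative; routing traffic onward to a service still requires a VirtualService bound to this gateway:

```shell
# Illustrative Gateway exposing port 80 on the standalone ingress Envoy.
kubectl apply -f - <<EOF
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: bookinfo-gateway
spec:
  selector:
    istio: ingressgateway   # selects the istio-ingressgateway deployment
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"                   # accept any host header (demo setting)
EOF
```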
Install Istio & Deploy
1. First, install istioctl. After that, we can install Istio.
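The download and install can be done as follows; the extracted directory name depends on the release version, and the `demo` profile is chosen here for a tutorial setup:

```shell
# Download the latest Istio release (includes istioctl and the samples).
curl -L https://istio.io/downloadIstio | sh -

# Enter the extracted directory and put istioctl on the PATH.
cd istio-*                 # directory name varies with the release version
export PATH=$PWD/bin:$PATH

# Install Istio into the cluster with the demo configuration profile.
istioctl install --set profile=demo -y
```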
2. After installation, the istio-system namespace is available in Kubernetes.
3. Deploy the Bookinfo sample application.
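Bookinfo ships with the Istio release archive, so this assumes you are still in the extracted `istio-<version>` directory:

```shell
# Deploy the Bookinfo sample from the Istio release archive.
kubectl apply -f samples/bookinfo/platform/kube/bookinfo.yaml

# Without sidecar injection enabled, each pod runs a single container.
kubectl get pods
```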
4. Here you can see that we have only one container in each pod.
5. Check the default namespace on Kubernetes
6. Enable Istio sidecar injection on the default namespace in Kubernetes.
7. Check the default namespace
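Steps 5–7 come down to inspecting and labeling the namespace; the `istio-injection=enabled` label is what tells Istio to inject the sidecar automatically:

```shell
# Step 5: check the labels currently on the default namespace.
kubectl get namespace default --show-labels

# Step 6: enable automatic sidecar injection for the default namespace.
kubectl label namespace default istio-injection=enabled

# Step 7: verify the istio-injection=enabled label is now present.
kubectl get namespace default --show-labels
```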
8. Delete all the services and pods.
9. Deploy the Bookinfo application again.
10. Here you can see that we now have two containers in each pod: one is the application container and the other is the Envoy proxy container.
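Steps 8–10 can be sketched like this, again assuming the Istio release directory; injection only applies to pods created after the label was set, which is why the redeploy is needed:

```shell
# Step 8: delete the existing Bookinfo services and pods.
kubectl delete -f samples/bookinfo/platform/kube/bookinfo.yaml

# Step 9: deploy Bookinfo again, so new pods get the injected sidecar.
kubectl apply -f samples/bookinfo/platform/kube/bookinfo.yaml

# Step 10: the READY column should now show 2/2 for each pod;
# listing a pod's container names shows the app plus istio-proxy.
kubectl get pods
kubectl get pod -l app=ratings -o jsonpath='{.items[0].spec.containers[*].name}'
```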