Istio Traffic Shifting
What is Istio
- Istio is an open source service mesh that layers transparently onto existing distributed applications.
- Istio is the path to load balancing, service-to-service authentication, and monitoring — with few or no service code changes.
Deployment and Traffic Shifting
- Traffic shifting makes it possible to gradually migrate traffic from one version of a microservice to another. This usually happens when migrating from an older version of an app to a newer one. After a feature enhancement, we initially send only a small amount of traffic to the new version of the service, and then gradually increase the traffic percentage using the service mesh.
Prerequisite
- First we need to start minikube.
minikube start
- Install Istio and istioctl.
- Here I am using a simple Spring Boot REST API.
a) This is my old version of the service (v1).
b) This is my new version of the service (v2).
- First we need to create two Docker images, one for the old version and one for the new version. Here I am using local Docker images. Alternatively, we can push the images to Docker Hub (or another registry) and pull them from there when deploying to Kubernetes.
- Point your terminal's Docker CLI at minikube's Docker environment.
eval $(minikube docker-env)
- Build the jar file for each version and build the Docker image (see how to dockerize a Spring Boot application), as sketched below.
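- For example, assuming each version is built from its own source tree with the Maven wrapper (the build tool and module layout are assumptions; only the myapp:1.0 and myapp:2.0 tags come from this article), the two images could be built roughly like this:
# build the jar and the v1 image from the old sources
./mvnw clean package
docker build -t myapp:1.0 .
# switch to the new sources and build the v2 image
./mvnw clean package
docker build -t myapp:2.0 .
# confirm both images exist in minikube's Docker daemon
docker images | grep myapp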
- Here we have the myapp 1.0 and 2.0 Docker images.
- We can use the kubectl command below to add a namespace label that instructs Istio to automatically inject Envoy sidecar proxies when we deploy the application.
kubectl label namespace default istio-injection=enabled
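- You can verify that the label has been applied with a standard kubectl query (shown here for the default namespace):
kubectl get namespace default -L istio-injection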
- After enabling Istio injection, a Docker container called istio-proxy (the Envoy proxy) runs in each and every pod.
Deployment
- Now we can do all the deployments.
1. First we should deploy the myapp service and two pods, one for v1 and one for v2.
Here imagePullPolicy is Never because I am using local Docker images. We have two deployment definitions, myapp-v1 and myapp-v2: myapp:1.0 is defined as v1 and myapp:2.0 is defined as v2.
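- A minimal sketch of what myapp.yaml could look like (replica counts, label names, and the port name are assumptions; the version labels are what the destination rule's subsets will later select on):
apiVersion: v1
kind: Service
metadata:
  name: myapp
  labels:
    app: myapp
spec:
  ports:
  - port: 8080
    name: http
  selector:
    app: myapp
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-v1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapp
      version: v1
  template:
    metadata:
      labels:
        app: myapp
        version: v1
    spec:
      containers:
      - name: myapp
        image: myapp:1.0
        imagePullPolicy: Never
        ports:
        - containerPort: 8080
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-v2
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapp
      version: v2
  template:
    metadata:
      labels:
        app: myapp
        version: v2
    spec:
      containers:
      - name: myapp
        image: myapp:2.0
        imagePullPolicy: Never
        ports:
        - containerPort: 8080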
kubectl apply -f myapp.yaml
- Check the deployments, pods, and services.
kubectl get svc
kubectl get pods
kubectl get deployments
- We have defined only one container in the deployment spec, but two containers are running in the pod because every pod gets its own Istio-proxy (Envoy proxy) alongside the defined application container (myapp). All the communication between services then goes through that Envoy proxy.
containers:
- name: myapp
  image: myapp:1.0
  imagePullPolicy: Never
kubectl describe pods myapp-v1-6cc8bbccf6-svllg
- You can use the minikube dashboard to check all the services, pods, deployments, etc.
minikube dashboard
2. Now we should deploy the destination rule.
- These rules specify configuration for load balancing, connection pool size from the sidecar, and outlier detection settings to detect and evict unhealthy hosts from the load balancing pool.
- Version-specific policies can be specified by defining a named subset.
- Here we have two versions of myapp (v1 and v2), so we should define these two versions as two subsets.
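- A minimal sketch of what myapp-dr.yaml could contain (the resource name is an assumption; each subset selects pods by their version label):
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: myapp
spec:
  host: myapp
  subsets:
  - name: v1
    labels:
      version: v1
  - name: v2
    labels:
      version: v2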
kubectl apply -f myapp-dr.yaml
- Check destination rules.
kubectl get dr
- There are 6 different load balancing policies. We can apply a load balancing policy at the subset level or at the service level.
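- For illustration, a policy could be set for the whole service and overridden for a single subset in the destination rule, roughly like this (the chosen policies are only examples):
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: myapp
spec:
  host: myapp
  trafficPolicy:
    loadBalancer:
      simple: ROUND_ROBIN        # service-level policy
  subsets:
  - name: v1
    labels:
      version: v1
  - name: v2
    labels:
      version: v2
    trafficPolicy:
      loadBalancer:
        simple: RANDOM           # overrides the service-level policy for v2 only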
3. Deploy Gateway and Virtual Services.
VirtualService
- VirtualService defines a set of traffic routing rules with matching criteria.
- If the traffic is matched, then it is sent to a named destination service (or subset/version of it) defined in the registry.
Example -
If the URI is “/api/v1/hello”, the traffic is split between the myapp-v1 and myapp-v2 subsets. Before deploying this one, we should deploy the destination rules, because all the subsets are defined in the destination rule.
- match:
  - uri:
      exact: /api/v1/hello
  route:
  - destination:
      host: myapp
      port:
        number: 8080
      subset: v1
    weight: 75
  - destination:
      host: myapp
      port:
        number: 8080
      subset: v2
    weight: 25
- For traffic shifting, we use the weight field. According to the example above, we route 75% of the traffic to myapp-v1 and 25% to myapp-v2.
Gateway
- A Gateway describes a load balancer operating at the edge of the mesh receiving incoming or outgoing HTTP/TCP connections.
- In the Gateway definition we can expose a set of ports and specify the protocol type used by each of them.
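- A minimal sketch of what the Gateway part of myapp-gateway.yaml could look like (the gateway name, port, and wildcard host are assumptions):
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: myapp-gateway
spec:
  selector:
    istio: ingressgateway   # use Istio's default ingress gateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"
- The VirtualService shown above is bound to this gateway by listing myapp-gateway under its gateways field.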
kubectl apply -f myapp-gateway.yaml
- Check virtual services and gateways.
kubectl get gw
kubectl get vs
After deployment, we can check for Istio configuration errors using the command below.
istioctl analyze
4. Start minikube tunnel
Run a minikube tunnel that sends traffic to your Istio Ingress Gateway. This provides an external load balancer (an EXTERNAL-IP) for service/istio-ingressgateway.
minikube tunnel
- Check the external IP of the ingress gateway load balancer.
kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
- Check the external http2 port of the ingress gateway load balancer.
kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="http2")].port}'
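- For convenience, the two values can be captured in shell variables and reused when building the test URL (the variable names are just a convention):
export INGRESS_HOST=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
export INGRESS_PORT=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="http2")].port}')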
- These are the istio-system services running in minikube after Istio has been installed.
5. Testing
curl http://$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.status.loadBalancer.ingress[0].ip}'):$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="http2")].port}')/api/v1/hello
curl http://127.0.0.1/api/v1/hello
- Here you can see we get two different outputs for the same URL, one from v1 and one from v2.
Kiali Console
- Kiali is a console for Istio service mesh. Kiali can be quickly installed as an Istio add-on, or trusted as a part of your production environment.
- There are several addons for Istio.
- Deploy Kiali and Prometheus using ‘kubectl apply -f addons’.
- Open Kiali dashboard using below command.
istioctl dashboard kiali
- Here we can view all the YAML definitions we used for the services, deployments, virtual services, etc.
- This is how the istio-ingressgateway routes traffic to myapp-v1 and myapp-v2.
- Send some traffic to the application and check the Kiali dashboard.
for ((i=1;i<=100;i++)); do sleep 0.5; curl -v --header "Connection: keep-alive" "http://127.0.0.1/api/v1/hello"; done
- Here you can see that traffic shifting has worked according to our definition: around 72.6% of the traffic has been routed to myapp-v1 and around 27.4% to myapp-v2, which is close to the configured 75/25 split.
github — https://github.com/pramodShehan5/istio-traffic-shifting-demo