Over the last couple of years, a networking technology called Service Mesh has been gaining popularity and adoption in the cloud-native ecosystem. As with any new technology, Service Meshes solve specific problems. As applications move toward, or are built with, microservice architectures, the number of endpoints (your services) increases. The rise of containerized applications and the popularity of Kubernetes have certainly paved the way for investment in Service Mesh technology.
With this two-fold change in architecture, more endpoints plus the ephemerality of containers, networking complexity skyrockets. The days of a permanent address for a virtual or physical machine fade away as containers are spun up and down by Kubernetes. Your services still need to communicate with the inside and outside world, and a Service Mesh brings a control plane and functionality to help tackle modern networking complexity.
A Service Mesh is a dedicated infrastructure layer built around four pillars: connecting your services, securing communications, acting as a control plane, and providing the ability to observe and be observed.
With the networking sprawl and rebalancing required for containerized microservices, giving services a way to communicate with each other, and with the outside world, can be a challenge. By providing routing and connectivity for services that do not live on permanent infrastructure, a Service Mesh can be the conduit for communications. Because the implementation is programmatic, techniques such as blue-green and canary deployments can be implemented as traffic shifts.
Consistently secure communication can also be a challenge with containerized microservices. In a microservice model, one request can trigger several downstream remote calls, and with all of those moving pieces, ensuring that traffic is consistently encrypted is a key benefit of a Service Mesh.
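To make this concrete, Istio (covered later in this post) can require mutual TLS for all service-to-service traffic with a single mesh-wide policy. The following is a minimal sketch, assuming Istio is installed in the default istio-system root namespace:

apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  # A policy in the root namespace applies mesh-wide
  name: default
  namespace: istio-system
spec:
  mtls:
    # STRICT rejects any plain-text traffic between workloads
    mode: STRICT

With a policy like this in place, every remote call in that downstream chain is encrypted without any application code changes.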
With a Service Mesh, you also get a control plane. A key pillar here is policy-based control, which allows real-time, rule-based processing of traffic. Certain traffic can be flagged or load balanced to different endpoints, and more advanced traffic management schemes, such as circuit breaking patterns, can be implemented.
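As an example of policy-based control, Istio expresses circuit breaking as part of a DestinationRule. The sketch below is illustrative only; the service name (reviews) and the thresholds are hypothetical values, not part of the walkthrough later in this post:

apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: reviews-circuit-breaker
spec:
  host: reviews
  trafficPolicy:
    connectionPool:
      tcp:
        # Cap concurrent connections to the backend
        maxConnections: 100
      http:
        # Cap requests queued while waiting for a connection
        http1MaxPendingRequests: 10
    outlierDetection:
      # Eject a host from the load balancing pool after
      # five consecutive 5xx responses
      consecutive5xxErrors: 5
      interval: 30s
      baseEjectionTime: 60s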
Tracking and tracing at the network level used to be a mystery to application teams, as this responsibility was delegated to networking teams. You can tell a lot about an application stack by tracing and tracking application traffic and calls. With a Service Mesh, visibility into service traffic, even aggregate service traffic, is available to be logged and inspected. End-to-end monitoring, distributed tracing, metrics, and logging are all available at the Service Mesh level.
Two trends are driving adoption: organizations and platform engineering teams are embracing a more distributed microservice model, and they are supporting an increasingly polyglot mix of languages and frameworks in the enterprise. Service Meshes are needed to support communication across that myriad of languages and microservices. Creating and then hardcoding DNS entries for ephemeral, transient infrastructure would be a nonstop job, especially with the rise in the number of services that need support.
Like any technology, it "depends" on the needs of the organization and the benefits a Service Mesh brings. For monolithic architectures where most calls are internal to the running application (i.e., method invocations), placing a Service Mesh as a facade in front of the monolith does not make much sense. If there are only a few endpoints that are homogeneous in stack (i.e., written in the same language on similar application infrastructure), a Service Mesh would be overkill.
According to The New Stack, as technology stacks become more heterogeneous and endpoints bloom, a Service Mesh can solve several networking and operational challenges. The four pillars of a Service Mesh (connecting, securing, controlling, and observing connectivity) make it more sensible to invest in a central spot.
There are several Service Mesh projects and providers out there. In the same vein that Kubernetes is the prominent container orchestrator, Istio is the prominent Service Mesh. There are a few topologies to consider with a Service Mesh, such as the sidecar proxy, and several other Service Mesh providers, such as Linkerd (from Buoyant), Consul, Solo, and AWS App Mesh.
To get started with a Service Mesh, the controller/control plane needs to be installed. Then, there needs to be a communication mechanism between the application node and the controller. Like any other technology, there is no need to boil the ocean on day one. You can focus on the controller installation and the first agent (or injection) to handle basic traffic/destination rules with the Service Mesh. As confidence and skills grow, you can expand to the other pillars of a Service Mesh and build up the complexity of the rule sets. If you are getting started today, the most popular Service Mesh in the Kubernetes space is Istio.
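With Istio, for example, wiring up that first injection can be as simple as labeling a Kubernetes namespace so the Envoy sidecar is added automatically to new Pods. A minimal sketch (the namespace name, demo, is hypothetical):

apiVersion: v1
kind: Namespace
metadata:
  name: demo
  labels:
    # Tells Istio to automatically inject the Envoy sidecar
    # into Pods created in this namespace
    istio-injection: enabled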
Istio was formed in the summer of 2017 as an open-source project and a joint partnership between IBM, Google, and Lyft. Before Istio, if you were looking at a Service Mesh, the route was to leverage several Netflix OSS projects such as Zuul, Ribbon, and Hystrix. Istio extends the Envoy proxy project with Service Mesh capabilities.
Istio has four main components. The three non-Envoy components are being consolidated into a single istiod binary for management and configuration.
Envoy is the sidecar proxy that makes up the data plane, handling communication between services while taking its configuration from the control plane. Envoy enables fault injection, dynamic service discovery, and TLS termination, among other functionality.
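As an illustration of what Envoy enables, fault injection is configured declaratively through an Istio VirtualService. This sketch, using a hypothetical ratings service, delays a slice of traffic to test downstream resiliency:

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: ratings-fault-delay
spec:
  hosts:
    - ratings
  http:
    - fault:
        delay:
          # Delay 10% of requests by five seconds
          percentage:
            value: 10
          fixedDelay: 5s
      route:
        - destination:
            host: ratings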
Pilot is responsible for the lifecycle of Envoy instances across the Service Mesh and dynamically generates the Envoy-specific configuration.
Galley converts the configuration YAML to a format that Istio natively understands. In a nutshell, Galley provides configuration management for Istio.
Citadel manages keys and certificates across the Service Mesh. Alongside Pilot and Galley, Citadel's functionality is being folded into the new istiod.
No matter where you are in your Istio or Service Mesh journey, getting started with Harness is a breeze.
An extremely powerful pattern in Istio is Traffic Shifting. The ability to apply percentages/weights to services (for example, v1 and v2 of a service) in a traffic-splitting pattern opens the door to canary deployments.
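In raw Istio configuration, that split is expressed as weights on a VirtualService route. A minimal sketch, assuming a hypothetical my-service with v1 and v2 subsets already defined in a DestinationRule:

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: my-service-canary
spec:
  hosts:
    - my-service
  http:
    - route:
        - destination:
            host: my-service
            subset: v1
          # Keep 80% of traffic on the stable version
          weight: 80
        - destination:
            host: my-service
            subset: v2
          # Shift 20% of traffic to the canary
          weight: 20

As the walkthrough below shows, Harness can manage these weights for you as part of a Workflow rather than you editing YAML by hand.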
You would most likely be touching a traffic shifting rule during a deployment. Orchestrating a set of kubectl and istioctl commands, maintaining the configurations, and designing for failure (rollback) across those tasks certainly requires proper planning and thought. The Harness Platform, with Traffic Management support, allows you to step away from the orchestration and failure complexity and focus on just the rules and outcomes themselves.
Create a new Harness Application called “My K8s” with a Harness Service called “nginx_k8s,” which is just Nginx pulled through Docker Hub.
Set the Docker Image Name to “library/nginx.”
Create a new Harness Workflow.
Navigate to Setup -> Your Application -> Workflows, then “+ Add Workflow.” We will call our new Workflow “Istio Canary.”
Next, you can add a Deployment Phase into your Workflow. In this example, we are leveraging a pre-existing “nginx_k8s” Service, which we will be deploying to Minikube.
Next, add a Traffic Split into the Verify Phase with “+ Add Phase” under the Kubernetes column.
Set the weights of the Traffic Split. Depending on how many canary iterations are needed, you can have multiple splits. Here, I am leveraging a 20/80 split.
Head back to the Harness Service definition for “nginx_k8s.” Harness can be used as a UI to fill in items that need to be sent along for configuration.
Below, in the manifest section, let’s add another file named istio.yaml with a basic Istio Destination Rule and Virtual Service. Harness can apply YAML on your behalf, and we will leverage this for the “DestinationRule” and “VirtualService.”
To do so, hover over the folder, then with the dots, select “+ Add File” and add an istio.yaml.
In the istio.yaml, add the basic configurations for the Istio Destination Rule and Istio Virtual Service.
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  annotations:
    harness.io/managed: "true"
  name: {{.Values.name}}-destinationrule
spec:
  host: {{.Values.name}}-svc
  trafficPolicy:
    loadBalancer:
      simple: RANDOM
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  annotations:
    harness.io/managed: "true"
  name: {{.Values.name}}-virtualservice
spec:
  gateways:
    - {{.Values.name}}-gateway
  hosts:
    - test.com
  http:
    - route:
        - destination:
            host: {{.Values.name}}-svc
Then, click Save.
Navigate to Continuous Deployment -> Start New Deployment
Once you hit Submit, you can watch the traffic split being applied.
With that, your first Service Mesh deployment is complete, all with the help of Harness!
Here are a few questions around Service Meshes that were asked in prior webinars.
There has been an ongoing movement toward a “shift left” mentality (shifting more decisions and responsibilities toward the development team). Networking has traditionally been an infrastructure role, where there can be long lead times to modify or expand networking rules. With the bloom of containers/ephemeral infrastructure and microservices, the networking topology has shifted from absolutes to dynamic, rule-based configuration. Service Mesh technology allows networking rules and operations to be codified alongside business logic, made to be modified by a software engineer.
A Service Mesh is a piece of networking infrastructure that is made to be modified by development teams, providing connectivity, security, control, and observability in the network stack and in how services communicate with the inside and outside world. As endpoints and microservices increase, a Service Mesh will start to play a more integral part in the architecture.
There is overhead to using a Service Mesh, as well as a learning curve. Microservices do lend themselves as better candidates for a Service Mesh. If your application uses HTTP/HTTP2/gRPC, then it can easily be part of a Service Mesh; service communication is not only a microservice pattern.
Kubernetes by itself is not a Service Mesh, though Service Meshes became popular with the rise of Kubernetes as a platform.
No matter where you are in your Continuous Delivery or Service Mesh journey, Harness is here to partner with you. Harness has a complete end-to-end software delivery platform that can take you from idea to production. If you don’t have a Harness Account, feel free to sign up today!
Cheers!
-Ravi
Enjoyed reading this blog post or have questions or feedback?
Share your thoughts by creating a new topic in the Harness community forum.