Overview: Brian and Tyler discuss the basics of Service Meshes, including Istio, Linkerd, and the Envoy proxy.
- Istio Homepage
- Envoy Homepage
- Linkerd Homepage
- Introduction to modern network load balancing and proxying
- OpenShift Commons Briefing #103: Microservices and Istio on OpenShift
- Sidecars and a Microservices Mesh
- Videos from CNCF / KubeCon
A Service Mesh is a dedicated layer that manages communication between applications (or between parts of the same application, e.g. microservices).
Just as applications shouldn’t be writing their own TCP stack, they also shouldn’t be managing their own load balancing logic, or their own service discovery management, or their own retry and timeout logic. – link
Mesh: A group of hosts that coordinate to provide a consistent network topology. In this documentation, an “Envoy mesh” is a group of Envoy proxies that form a message passing substrate for a distributed system comprised of many different services and application platforms. – link
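The first quote above argues that retry and timeout logic doesn't belong in application code. As a hedged sketch of what that boilerplate looks like when apps do manage it themselves (function names and parameters here are illustrative assumptions, not from any mesh's API), consider a hand-rolled retry loop; with a service mesh, the sidecar proxy applies an equivalent policy transparently and the application just makes a plain request:

```python
import time

def call_with_retries(request_fn, max_attempts=3, backoff_seconds=0.1):
    """App-level retry loop with exponential backoff -- the kind of
    boilerplate a service mesh moves out of the application and into
    the proxy layer."""
    last_error = None
    for attempt in range(max_attempts):
        try:
            return request_fn()
        except ConnectionError as err:
            last_error = err
            # back off before the next attempt: 0.1s, 0.2s, 0.4s, ...
            time.sleep(backoff_seconds * (2 ** attempt))
    raise last_error
```

Every service that talks over the network ends up duplicating some version of this; the mesh centralizes the policy so it can be changed without redeploying the application.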
Topic 1 – What is a Service Mesh?
- Service Discovery
- Fault Injection
- Circuit Breaking
- A/B Deployments
- Blue/Green Deployments
- Canary Deployments
- Traffic Limiting
- Security Services (e.g. Mutual TLS)
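Several of the capabilities above (A/B, blue/green, and canary deployments) come down to weighted traffic splitting in the mesh's data plane. A minimal sketch of that routing decision, assuming hypothetical backend names and weights (real meshes express this declaratively in routing rules rather than in application code):

```python
import random

def choose_backend(weights, rng=random.random):
    """Pick a backend version according to its traffic weight.

    weights: dict mapping backend name -> fraction of traffic (sums to 1.0).
    rng: a zero-argument function returning a float in [0, 1);
         injectable here so the choice can be made deterministic in tests.
    """
    r = rng()
    cumulative = 0.0
    for backend, weight in weights.items():
        cumulative += weight
        if r < cumulative:
            return backend
    return backend  # guard against floating-point rounding at the top end

# A canary split: 90% of traffic to v1, 10% to the new v2 (illustrative names).
canary_weights = {"reviews-v1": 0.9, "reviews-v2": 0.1}
```

Shifting the weights (e.g. 0.9/0.1, then 0.5/0.5, then 0.0/1.0) is how a canary gradually becomes the primary version, without touching the services themselves.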
Topic 2 – Didn’t developers build Microservices before Service Meshes?
Topic 3 – How do Containers and Kubernetes interact with a Service Mesh?
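On Kubernetes, the usual interaction model is the sidecar pattern: each pod runs the application container plus a proxy container, and the mesh's control plane configures the proxies. A purely illustrative pod sketch (image names, ports, and labels are assumptions; in practice Istio injects the sidecar automatically via a mutating admission webhook rather than by hand):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: reviews                    # hypothetical service name
  labels:
    app: reviews
spec:
  containers:
  - name: app                      # the application container
    image: example/reviews:v1      # hypothetical image
    ports:
    - containerPort: 8080
  - name: envoy-sidecar            # the mesh data-plane proxy
    image: example/envoy-sidecar   # hypothetical image
    ports:
    - containerPort: 15001         # pod traffic is redirected through the proxy
```

Because all traffic into and out of the pod flows through the sidecar, the mesh can apply routing, retries, and mutual TLS without any change to the application container.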