ngrok Kubernetes Ingress Controller
Hey there everyone! My name is Alex Bezek and I'm on the Infrastructure team here at ngrok. Today I'm excited to share a first look at our new ngrok Kubernetes Ingress Controller.
What is a Kubernetes Ingress Controller?
An ingress controller enables traffic from the outside world to reach your Kubernetes services. It does so by continually watching for changes to Kubernetes Ingress objects and dynamically configuring routing into the cluster.
There are numerous ingress controllers out there. Some provide simple access without load balancing, authentication, or authorization. Others require specific products and charge by the hour or per connection. Others are tied to a specific cloud provider and require completely separate and distinct tools to perform the same functions locally. As a whole, each of those controllers introduces new tooling and skill requirements into your deployment processes and application stack.
The ngrok Ingress Controller is fundamentally an operator that manages a collection of ngrok tunnels and API resources in your account automatically. Whether you’re running on your local machine, in your data center, or across multiple clouds, you get all of our capabilities - automatic DNS configuration, certificate management, OAuth 2.0, load balancing, circuit breakers, and much more - out of the box with the same configuration within a single tool.
ngrok’s Journey with Kubernetes
We architected the ngrok agent to run just about anywhere to provide network access to apps, services, and devices. As it turns out, exposing a local network device behind a NAT to the internet is just like exposing a Kubernetes pod to a service outside of the cluster’s network. If you spin up an ngrok agent within the Kubernetes network and point it at a Kubernetes service, you get an ingress tunnel to that service. Here at ngrok we have provided ingress to many of our internal applications and services this way! Essentially, we have a list of services that get written to an ngrok agent config file, and a single pod runs an agent to provide ingress to those services.
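As a rough illustration, that static setup looked something like the agent config below. The tunnel names, service addresses, and ports here are hypothetical examples, not our actual internal services:

```yaml
# Illustrative ngrok agent config (ngrok.yml) - one tunnel per
# in-cluster service. Names, addresses, and ports are placeholders.
version: "2"
authtoken: <your-ngrok-authtoken>
tunnels:
  dashboard:
    proto: http
    addr: dashboard.internal.svc.cluster.local:80
  metrics:
    proto: http
    addr: metrics.internal.svc.cluster.local:8080
```

Any change to this file meant restarting the single agent pod, which is exactly the trade-off described next.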
While this worked well, it introduced a major trade-off: upgrading the agent, adding new services to the underlying config file, or changing the deployment caused an outage. It was brief - in the range of seconds - but it blocked us from using it for any highly available production services. Effectively, it kept our work as a “fun project” instead of “trusted infrastructure.” But everything changed with the latest generation of ngrok, specifically due to Cloud Edges, labeled tunnels, and ngrok-go.
ngrok Cloud Edges, labeled tunnels, and ngrok-go give us the ability to run multiple ngrok agents with identical configurations shared by common labels. As you create new tunnels with the same label, we add them to a group and the Cloud Edge routes traffic and load balances across those tunnels. This allows us to run multiple agents in a Highly Available (HA) setup to avoid service disruptions during any updates or rollouts. Further, with ngrok-go, we could embed ngrok directly into the controller without a separate process or binary to manage. With these tools in our arsenal, we decided now was the time to move away from our statically defined configuration file, embrace the Kubernetes ingress controller pattern, and share it with everyone.
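To give a sense of what embedding looks like, here is a minimal sketch of starting a labeled tunnel with ngrok-go. The label key/value are hypothetical, and the controller’s actual implementation is more involved than this:

```go
package main

import (
	"context"
	"log"
	"net/http"

	"golang.ngrok.com/ngrok"
	"golang.ngrok.com/ngrok/config"
)

func main() {
	// Start a labeled tunnel. Every agent that comes up with the same
	// label joins the same group, and the Cloud Edge load balances
	// across all members - so running N replicas of this process
	// gives you an HA setup with zero-downtime rollouts.
	ln, err := ngrok.Listen(context.Background(),
		config.LabeledTunnel(config.WithLabel("app", "my-service")), // hypothetical label
		ngrok.WithAuthtokenFromEnv(),                                // reads NGROK_AUTHTOKEN
	)
	if err != nil {
		log.Fatal(err)
	}

	// Serve the application directly through the tunnel listener;
	// no separate agent process or binary to manage.
	log.Fatal(http.Serve(ln, http.HandlerFunc(
		func(w http.ResponseWriter, r *http.Request) {
			w.Write([]byte("hello from a labeled tunnel"))
		})))
}
```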
Towards Trusted Infrastructure
If you develop within a Kubernetes environment locally and need to share your app or receive incoming webhooks, odds are you are using ngrok to expose that app or service to the internet. This ingress controller fits seamlessly into that local environment, but the more exciting part is leveraging all the new ngrok edge functionality to provide a robust and feature-rich ingress solution.
The two most important capabilities this unlocks are load balancing and circuit breaking. As you start multiple tunnels with the same label, those pods are automatically included in load balancing, and our circuit breaking capabilities allow ngrok to remove misbehaving pods just as easily. From there, we want to get more creative and show users how to build serverless ingress including OAuth 2.0, IP restrictions, and similar access controls, blending cloud vendors, on-prem systems, and your Raspberry Pi cluster seamlessly.
One of the powers of embracing the Kubernetes platform and its ecosystem is the ability to interoperate with other great tools seamlessly. As big fans of many of HashiCorp's products, we worked with them to show how we can be a drop-in replacement for any ingress controller to provide ingress to services within their Consul service mesh. If you're not familiar, Consul is a service mesh product by HashiCorp. Using either the self-hosted version or the managed HashiCorp Cloud Platform service, you can connect services within your Kubernetes cluster and even services in other clusters.
While Consul provides both service-to-service communication (called east-west traffic) and ingress (called north-south traffic) through the built-in Consul API Gateway, Consul Service Mesh also provides a pluggable model for integrating ingress traffic into the mesh with whichever solution supports those interfaces. Consul Service Mesh also makes it simple to get any service or component into the mesh. All it takes is adding a couple of documented annotations to the pods, which you can do via our Helm Chart:
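A sketch of what that looks like in Helm values - the `connect-inject` annotation is Consul's documented sidecar-injection annotation, while the `podAnnotations` key is an assumption about the chart's values layout, so check the chart's values file for the exact name:

```yaml
# values.yaml - illustrative; verify the exact values key against
# the ngrok ingress controller Helm chart.
podAnnotations:
  # Ask Consul to inject its sidecar proxy into the controller's pods,
  # placing the controller inside the service mesh.
  consul.hashicorp.com/connect-inject: "true"
```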
The ngrok ingress controller provides a great alternative to the Consul API Gateway for providing ingress traffic into Consul service mesh, whether the mesh is powered by HashiCorp Cloud Platform or by servers on self-managed infrastructure. We’re excited to see the capabilities they bring to Consul and the ecosystem as a whole. – David Yu, Senior Product Manager, HashiCorp
Try out the ngrok ingress controller
This is an alpha release. We don’t recommend using it for production yet, but if you want to give it a try in a remote or local Kubernetes cluster, all you need is a paid ngrok account, Helm, and access to your cluster.
We are working to make it usable with a free account but aren’t quite there yet. Feel free to visit our community Slack and ping our community manager “Danger” for a free trial month. Yes, that’s really his name.
To get started, open your terminal and run these commands replacing your API Key and authtoken as appropriate:
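A sketch of the install follows. The repo URL, chart name, and `--set` keys reflect the repository’s README at the time of writing and may have changed, so treat the repository as authoritative:

```shell
# Substitute your own ngrok credentials here.
export NGROK_API_KEY=<YOUR-API-KEY>
export NGROK_AUTHTOKEN=<YOUR-AUTHTOKEN>

# Add the ngrok Helm repo and install the controller into its own
# namespace, passing credentials via chart values.
helm repo add ngrok https://ngrok.github.io/kubernetes-ingress-controller
helm install ngrok-ingress-controller ngrok/kubernetes-ingress-controller \
  --namespace ngrok-ingress-controller \
  --create-namespace \
  --set credentials.apiKey=$NGROK_API_KEY \
  --set credentials.authtoken=$NGROK_AUTHTOKEN
```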
Note: The full and definitive instructions will always be in the ngrok/ngrok-ingress-controller repository.
Once it's running successfully, you can create ingress objects for the ngrok ingress class pointing to a Kubernetes service in your cluster, and voila, ingress! Or as we call it: “ngress!” No one else calls it that currently but you can help with that too!
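As an illustration, a minimal Ingress for the ngrok ingress class might look like the following - the hostname, service name, and port are hypothetical placeholders for your own service:

```yaml
# Illustrative Ingress routed through the ngrok ingress controller.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app-ingress
spec:
  ingressClassName: ngrok   # hand this Ingress to the ngrok controller
  rules:
    - host: my-app.example.ngrok.app   # placeholder hostname
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-app   # placeholder in-cluster service
                port:
                  number: 80
```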
What’s next?
While we are excited about this project and our continued involvement in the k8s ecosystem, it’s still early for the controller. We welcome anyone who wants to try it out and hop into the ngrok community Slack to let us know what you think!
Our primary goals with the ngrok Ingress Controller are:
- Get feedback on what people think and the use cases they roll out
- Stabilize the ingress controller’s APIs for consistency and functionality
- Test and polish the ingress controller’s tunnel creation to ensure highly available production ingress
- Enable all the ngrok route module configurations
- Enable it to work with free accounts
The most exciting part for us is that once we test and validate the high availability aspects, we’ll start flipping our internal services to use this, adding support for the modules we need and stress-testing its scalability with our millions of daily requests.
We’ll be at KubeCon Europe 2023 - come visit us!