Create kubernetes nginx ingress without GCP load-balancer
Yes, this is possible. Deploy your ingress controller, and expose it with a NodePort service. Example:
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress-controller
  namespace: kube-system
  labels:
    k8s-app: nginx-ingress-controller
spec:
  type: NodePort
  ports:
    - port: 80
      targetPort: 80
      nodePort: 32080
      protocol: TCP
      name: http
    - port: 443
      targetPort: 443
      nodePort: 32443
      protocol: TCP
      name: https
  selector:
    k8s-app: nginx-ingress-controller
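If you apply the manifest above unchanged, you can confirm the service picked up the expected node ports before going further:

kubectl -n kube-system get svc nginx-ingress-controller
# PORT(S) should show something like 80:32080/TCP,443:32443/TCP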
Now, create an ingress with a DNS entry:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-ingress
spec:
  rules:
    - host: myapp.example.com
      http:
        paths:
          - path: /
            backend:
              serviceName: my-app-service # obviously point this to a valid service + port
              servicePort: 80
Now, assuming your static IP is attached to any Kubernetes node running kube-proxy, update DNS to point to that static IP, and you should be able to visit myapp.example.com:32080, and the ingress will map you back to your app.
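If you want to test before DNS is updated, a curl with an explicit Host header against the node's static IP exercises the same path (the IP placeholder is yours to fill in):

curl -i -H "Host: myapp.example.com" http://<static-ip>:32080/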
A few additional things:
If you want to use a lower port than 32080, bear in mind that if you're using CNI networking, you'll have trouble with hostPort. It's recommended to have a load balancer listening on port 80; I suppose you could just set up nginx outside the cluster to proxy-pass to the node port, but that becomes awkward. This is why a load balancer from your cloud provider is recommended :)
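For completeness, this is roughly what the hostPort approach mentioned above looks like. The deployment name and image here are purely illustrative, and whether the node actually forwards port 80 to the pod depends on your CNI plugin, so treat it as a sketch only:

# Sketch only: binds port 80 on whichever node the pod lands on.
# With many CNI plugins hostPort is not honoured, which is the trouble mentioned above.
kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hostport-demo          # hypothetical name, for illustration only
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hostport-demo
  template:
    metadata:
      labels:
        app: hostport-demo
    spec:
      containers:
        - name: nginx
          image: nginx:1.25
          ports:
            - containerPort: 80
              hostPort: 80
EOF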
TLDR: If you want to serve your website/webservice on ports below 30000, then no, it's not possible. If someone finds a way to do it, I'd be eager to know how.
The two main approaches I used while trying to serve on a port below 30000 included:
- Installing the nginx-ingress controller service as type NodePort, listening on ports 80 and 443. However, this results in the following error:

  Error: UPGRADE FAILED: Service "nginx-ingress-controller" is invalid: spec.ports[0].nodePort: Invalid value: 80: provided port is not in the valid range. The range of valid ports is 30000-32767

  The way to work around this error is to change the --service-node-port-range flag used when starting kube-apiserver (see the sketch after this list). However, this configuration cannot be accessed on GCP. If you'd like to try for yourself, you can check out the instructions here: Kubernetes service node port range
- Following the steps in the thread Expose port 80 and 443 on Google Container Engine without load balancer. This relies on using an externalIP attribute attached to a service of type: ClusterIP. At first glance, this would seem to be an ideal solution. However, there is a bug in the way the externalIP attribute works: it does not accept an external, static IP, but rather an internal, ephemeral one. If you hardcode an internal, ephemeral IP in the externalIP field, and then attach an external, static IP to one of the nodes in your cluster through the GCP Console, requests are successfully routed. However, this is not a viable solution, because you've now hardcoded an ephemeral IP in your service definition, so your website will inevitably go offline as the nodes' internal IPs change.
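As referenced in the first bullet, widening the NodePort range is only possible where you control the kube-apiserver flags. A rough sketch for a kubeadm-style control plane (the manifest path is the kubeadm default and an assumption on my part; GKE exposes none of this):

# Not possible on GKE; shown only for self-managed clusters.
sudo vi /etc/kubernetes/manifests/kube-apiserver.yaml
# then add the following flag to the kube-apiserver command list:
#   - --service-node-port-range=80-32767
# kubelet restarts the static pod automatically once the manifest changes.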
If you are okay with serving on ports in the 30000-32767 NodePort range, see my instructions below.
How to remove the LoadBalancer (only allows serving on ports >= 30000)
I've tried removing my LoadBalancer, and this is the best solution I could come up with. It has the following flaws:
- The ports used to access the webpage are not the usual 80 and 443 because exposing these ports from a node is not trivial. I'll update later if I figure it out.
And the following benefits:
- There's no LoadBalancer.
- The IP of the website/webservice is static.
- It relies on the popular nginx-ingress Helm chart.
- It uses an ingress, allowing complete control over how requests are routed to your services based on the paths of the requests.
1. Install the ingress service and controller
Assuming you already have Helm installed (if you don't, follow the steps here: Installing Helm on GKE), install nginx-ingress with its controller service set to type NodePort.
helm install \
--name nginx-ingress \
stable/nginx-ingress \
--set rbac.create=true \
--set controller.publishService.enabled=true \
--set controller.service.type=NodePort \
--set controller.service.nodePorts.http=30080 \
--set controller.service.nodePorts.https=30443
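You can then check that the controller service really is a NodePort on the requested ports. The service name below follows the stable/nginx-ingress chart's usual <release>-controller naming, which is an assumption on my part:

kubectl get svc nginx-ingress-controller
# TYPE should be NodePort and PORT(S) similar to 80:30080/TCP,443:30443/TCP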
2. Create the ingress resource
Create the ingress definition for your routing.
# my-ingress-resource.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: reverse-proxy
  namespace: production # Namespace must be the same as that of the target services below.
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/ssl-redirect: "false" # Set to true once SSL is set up.
spec:
  rules:
    - http:
        paths:
          - path: /api
            backend:
              serviceName: backend
              servicePort: 3000
          - path: /
            backend:
              serviceName: frontend
              servicePort: 80
Then install it with
kubectl apply -f my-ingress-resource.yaml
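You can verify the controller picked it up with (name and namespace as in the manifest above):

kubectl describe ingress reverse-proxy -n production
# The listed backends should match the frontend and backend services from the manifest.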
3. Create a firewall rule
Find the tag of your cluster.
gcloud compute instances list
If your cluster instances have names like
gke-cluster-1-pool-1-fee097a3-n6c8
gke-cluster-1-pool-1-fee097a3-zssz
Then your cluster tag is gke-cluster-1-pool-1-fee097a3.
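If you'd rather confirm the tag from the command line, something like this prints it (instance name taken from the example above; add --zone if you don't have a default configured):

gcloud compute instances describe gke-cluster-1-pool-1-fee097a3-n6c8 \
  --format="value(tags.items)"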
Go to the GCP firewall page. Verify that you have the right project selected in the navbar.
Click "Create Firewall Rule". Give the rule a decent name. You can leave most of the settings as defaults, but past your cluster tag under "Target tags". Set the Source IP Ranges to 0.0.0.0/0
. Under Protocols and Ports, change "Allow all" to "Specified protocols and ports". Check the TCP box, and put 30080, 30443
in the input field. Click "Create".
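If you prefer the CLI, roughly the same rule can be created with gcloud (the rule name is arbitrary, and the target tag is the example tag from above):

gcloud compute firewall-rules create allow-nginx-ingress-nodeports \
  --allow=tcp:30080,tcp:30443 \
  --source-ranges=0.0.0.0/0 \
  --target-tags=gke-cluster-1-pool-1-fee097a3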
4. Create a static IP
Go to https://console.cloud.google.com/networking/addresses/ and click "Reserve Static Address". Give it a descriptive name, and select the correct region. After selecting the correct region, you should be able to click the "Attached to" dropdown and select one of your Kubernetes nodes. Click "Reserve".
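The gcloud equivalent, if you prefer it, is to promote the node's current ephemeral external IP to a static one (the name, region, and IP below are placeholders):

gcloud compute addresses create my-node-static-ip \
  --region=us-central1 \
  --addresses=203.0.113.10   # the node's current external IP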
5. Test the configuration
After reserving the static IP, find out which static IP was granted by looking at the External IP Address list.
Copy it into your browser, then tack on a port (<your-ip>:30080 for HTTP or https://<your-ip>:30443 for HTTPS). You should see your webpage.
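Or check from a terminal (use -k until you've set up a trusted certificate):

curl -i http://<your-ip>:30080/
curl -ik https://<your-ip>:30443/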