Update documentation to match current state #11

Merged 1 commit on Feb 15, 2023

DESIGN.md (85 additions, 21 deletions)

# Overview

The nginx-k8s-edge-controller runs in a Kubernetes Cluster and responds to changes in resources of interest, updating designated NGINX Plus hosts with the appropriate configuration.

## Basic Architecture

The controller is deployed in a Kubernetes Cluster. Upon startup, it registers interest in changes to Service resources in the "nginx-ingress" namespace.
The Handler accepts the events raised by the Cluster and calls the Translator to convert them into event definitions used to update NGINX Plus hosts.
Next, the Handler calls the Synchronizer with the list of translated events, which are fanned out to each NGINX Plus host.
Lastly, the Synchronizer calls the [NGINX Plus Configuration API](https://docs.nginx.com/nginx/admin-guide/load-balancer/dynamic-configuration-api/) using the [NGINX Plus Go client](https://github.com/nginxinc/nginx-plus-go-client) to update the target NGINX Plus host(s).

```mermaid
stateDiagram-v2
    Controller --> Watcher
    Controller --> Settings
    Watcher --> Handler : "nkl-handler queue"
    Handler --> Translator
    Translator --> Handler
    Handler --> Synchronizer : "nkl-synchronizer queue"
    Synchronizer --> NGINXPlusLB1
    Synchronizer --> NGINXPlusLB2
    Synchronizer --> NGINXPlusLBn
```

### Settings

The Settings module is responsible for loading the configuration settings from the "nkl-config" ConfigMap resource in the "nkl" namespace.
The Settings are loaded when the controller starts and are reloaded when the "nkl-config" ConfigMap resource is updated.
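
As a rough illustration, the initial load might look like the following client-go sketch; `LoadNginxHosts` is a hypothetical helper, and the reload-on-update path (typically an informer watching the ConfigMap) is elided.

```go
package settings

import (
	"context"
	"strings"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// LoadNginxHosts reads the "nginx-hosts" entry from the "nkl-config"
// ConfigMap in the "nkl" namespace and splits it into one API endpoint
// per NGINX Plus host. (Hypothetical helper, for illustration only.)
func LoadNginxHosts(ctx context.Context, k8sClient kubernetes.Interface) ([]string, error) {
	configMap, err := k8sClient.CoreV1().ConfigMaps("nkl").Get(ctx, "nkl-config", metav1.GetOptions{})
	if err != nil {
		return nil, err
	}
	return strings.Split(configMap.Data["nginx-hosts"], ","), nil
}
```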

### Watcher

The Watcher is responsible for monitoring changes to Service resources in the "nginx-ingress" namespace.
It registers methods that handle each event type. Events are handled by creating a `core.Event` instance and adding it to the "nkl-handler" queue.
When adding the event to the queue, the Watcher also retrieves the list of Node IPs and adds the list to the event.
The master Node's IP is excluded from the list. (NOTE: This should be configurable.)
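
A minimal sketch of the registration step, assuming client-go shared informers; the function name is illustrative, and wrapping each object in a `core.Event` with its Node IP list is elided.

```go
package watcher

import (
	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
	"k8s.io/client-go/util/workqueue"
)

// Watch registers handlers for Service events in the "nginx-ingress"
// namespace and enqueues each event onto the "nkl-handler" queue.
func Watch(clientset kubernetes.Interface, handlerQueue workqueue.RateLimitingInterface, stopCh <-chan struct{}) {
	factory := informers.NewSharedInformerFactoryWithOptions(
		clientset, 0, informers.WithNamespace("nginx-ingress"))

	informer := factory.Core().V1().Services().Informer()
	informer.AddEventHandler(cache.ResourceEventHandlerFuncs{
		AddFunc:    func(obj interface{}) { handlerQueue.Add(obj) },
		UpdateFunc: func(_, updated interface{}) { handlerQueue.Add(updated) },
		DeleteFunc: func(obj interface{}) { handlerQueue.Add(obj) },
	})

	factory.Start(stopCh)
	factory.WaitForCacheSync(stopCh)
}
```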

### Handler

The Synchronizer is responsible for taking the `core.Event` instances from the "nkl-synchronizer" queue and updating the target NGINX+
using the `nginx.Nginx` member of the event.
The Handler is responsible for taking the `core.Event` instances from the "nkl-handler" queue, calling the Translator to convert each event into `core.ServerUpdateEvent` instances,
and adding each `core.ServerUpdateEvent` to the "nkl-synchronizer" queue.
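
The consume loop could be sketched as below; the `translate` function field and the `ServerUpdateEvent` type are stand-ins for the controller's own `translation` package and `core.ServerUpdateEvent`.

```go
package handler

import "k8s.io/client-go/util/workqueue"

// ServerUpdateEvent stands in for core.ServerUpdateEvent.
type ServerUpdateEvent struct {
	UpstreamName string
	NodeIPs      []string
}

type Handler struct {
	handlerQueue      workqueue.RateLimitingInterface // "nkl-handler"
	synchronizerQueue workqueue.RateLimitingInterface // "nkl-synchronizer"
	translate         func(item interface{}) []ServerUpdateEvent
}

// Run drains the "nkl-handler" queue, translating each event and fanning
// the results out onto the "nkl-synchronizer" queue.
func (h *Handler) Run() {
	for {
		item, shutdown := h.handlerQueue.Get()
		if shutdown {
			return
		}
		for _, update := range h.translate(item) {
			h.synchronizerQueue.Add(update)
		}
		h.handlerQueue.Done(item)
	}
}
```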

### Translator

The Translator is responsible for converting a `core.Event` into `nginxClient.UpstreamServer` update events.
This involves filtering out the `core.Event` instances that are not of interest to the controller by accepting only Port names starting with the NklPrefix value (currently _nkl-_).
The event is then fanned out based on the defined Ports, one event per defined Port. Each Port is augmented with the Ingress name (the name configured in the Port definition with the NklPrefix value removed)
and the list of the Node's IP addresses.

The Translator passes the list of events to the Synchronizer by calling the `AddEvents` method.

**NOTE: It is important that the Port names match the name of the defined NGINX Plus Upstreams.**

In the following example the NGINX Plus Upstreams are named "nkl-nginx-lb-http" and "nkl-nginx-lb-https". These match the name in the NGINX Plus configuration.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress
  namespace: nginx-ingress
spec:
  type: NodePort
  ports:
  - port: 80
    targetPort: 80
    protocol: TCP
    name: nkl-nginx-lb-http
  - port: 443
    targetPort: 443
    protocol: TCP
    name: nkl-nginx-lb-https
  selector:
    app: nginx-ingress
```
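
The filtering and fan-out described above might look roughly like this; `PortUpdate` and `Translate` are illustrative names, not the controller's actual API.

```go
package translation

import (
	"strings"

	v1 "k8s.io/api/core/v1"
)

const NklPrefix = "nkl-"

// PortUpdate stands in for core.ServerUpdateEvent.
type PortUpdate struct {
	IngressName string   // Port name with the NklPrefix removed
	NodeIPs     []string // augmented onto every fanned-out event
}

// Translate keeps only the Ports whose names carry the NklPrefix and
// fans the event out into one update per matching Port.
func Translate(service *v1.Service, nodeIPs []string) []PortUpdate {
	var updates []PortUpdate
	for _, port := range service.Spec.Ports {
		if !strings.HasPrefix(port.Name, NklPrefix) {
			continue // not managed by the controller
		}
		updates = append(updates, PortUpdate{
			IngressName: strings.TrimPrefix(port.Name, NklPrefix),
			NodeIPs:     nodeIPs,
		})
	}
	return updates
}
```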

### Synchronizer

The Synchronizer is responsible for fanning out the given list of `core.ServerUpdateEvent` events, one for each configured NGINX Plus host.
The NGINX Plus hosts are configured using a ConfigMap resource named "nkl-config" in the "nkl" namespace. An example of the ConfigMap is shown below.

```yaml
apiVersion: v1
kind: ConfigMap
data:
  nginx-hosts: "http://10.1.1.4:9000/api,http://10.1.1.5:9000/api"
metadata:
  name: nkl-config
  namespace: nkl
```

This example includes two NGINX Plus hosts to support High Availability.

Additionally, the Synchronizer is responsible for taking the `core.ServerUpdateEvent` instances from the "nkl-synchronizer" queue and updating the target NGINX Plus host.
The Synchronizer uses the [NGINX Plus Go client](https://github.com/nginxinc/nginx-plus-go-client) to communicate with each NGINX Plus host.
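
A minimal sketch of the update call, using the Go client's `NewNginxClient` and `UpdateHTTPServers`; the `UpdateHosts` wrapper and its error handling are illustrative only.

```go
package synchronizer

import (
	"net/http"
	"strings"

	"github.com/nginxinc/nginx-plus-go-client/client"
)

// UpdateHosts pushes the same upstream-server list to every NGINX Plus
// host listed in the "nginx-hosts" ConfigMap entry. Retries on failure
// are handled separately by the workqueue (see below).
func UpdateHosts(nginxHosts, upstream string, servers []client.UpstreamServer) error {
	for _, apiEndpoint := range strings.Split(nginxHosts, ",") {
		nginxClient, err := client.NewNginxClient(&http.Client{}, strings.TrimSpace(apiEndpoint))
		if err != nil {
			return err
		}
		if _, _, _, err := nginxClient.UpdateHTTPServers(upstream, servers); err != nil {
			return err
		}
	}
	return nil
}
```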


#### Retry Mechanism

The Synchronizer uses a retry mechanism to handle failures when updating the NGINX Plus hosts.
The retry mechanism is implemented in the workqueue using `workqueue.NewItemExponentialFailureRateLimiter`,
with defaults of a 2-second base delay and a 60-second maximum delay.
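
Constructing such a queue looks like the following sketch (the function name is illustrative; the rate limiter and queue constructor are the client-go APIs named above):

```go
package synchronizer

import (
	"time"

	"k8s.io/client-go/util/workqueue"
)

// newSynchronizerQueue builds the "nkl-synchronizer" queue with the
// exponential back-off described above: a 2-second base delay capped at 60 seconds.
func newSynchronizerQueue() workqueue.RateLimitingInterface {
	rateLimiter := workqueue.NewItemExponentialFailureRateLimiter(2*time.Second, 60*time.Second)
	return workqueue.NewNamedRateLimitingQueue(rateLimiter, "nkl-synchronizer")
}
```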

### Jitter

The Synchronizer uses a jitter mechanism to avoid thrashing the NGINX Plus hosts. Each `core.ServerUpdateEvent` instance
is added to the "nkl-synchronizer" queue with a random jitter value between 250 and 750 milliseconds.
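
Using the workqueue's delayed-add support, the jitter could be applied as in this sketch (the helper name is illustrative):

```go
package synchronizer

import (
	"math/rand"
	"time"

	"k8s.io/client-go/util/workqueue"
)

// addEventWithJitter enqueues an update after a random 250-750 ms delay,
// spreading bursts of events so the NGINX Plus hosts are not thrashed.
func addEventWithJitter(queue workqueue.RateLimitingInterface, event interface{}) {
	jitter := time.Duration(250+rand.Intn(501)) * time.Millisecond
	queue.AddAfter(event, jitter)
}
```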