
Releases: netscaler/netscaler-k8s-ingress-controller

Release 2.1.4

11 Sep 09:46
69d33b2

Version 2.1.4

What's new

Multi-monitor support for GSLB

In a GSLB setup, you can now configure multiple monitors to monitor services of the same host. The monitors can be of different types, depending on the request protocol used to check the health of the services. For example, HTTP, HTTPS, and TCP.

In addition to configuring multiple monitors, you can define additional parameters for a monitor and combine the parameters for each monitor as required. For more information, see Multi-monitor support for GSLB.
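
For illustration only, a GTP excerpt in the style of the monitor example shown in the release 1.41.5 notes below might define multiple monitors of different types for the same host. The paths, response codes, and the fields used for the TCP monitor are placeholders and assumptions; see the linked documentation for the parameters supported by each monitor type.

```
monitor:
- monType: HTTPS          # TLS health check
  uri: '/healthz'         # placeholder path
  respCode: '200'
- monType: HTTP           # plain HTTP health check for the same host
  uri: '/healthz'
  respCode: '200'
- monType: TCP            # simple TCP connect check; assumed to need no URI or response code
```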

Note:

When you upgrade to NSIC version 2.1.4, you must reapply the GTP CRD using the following command:

kubectl apply -f https://raw.githubusercontent.com/netscaler/netscaler-k8s-ingress-controller/master/gslb/Manifest/gtp-crd.yaml

Support to bind multiple SSL certificates for a service of type LoadBalancer

You can now bind multiple SSL certificates as front-end server certificates for a service of type LoadBalancer by using the following annotations: service.citrix.com/secret and service.citrix.com/preconfigured-certkey. For more information, see SSL certificate for services of type LoadBalancer through the Kubernetes secret resource.
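
As an illustrative sketch only, the annotations are set on the service as shown below. The service name, secret and certkey names, and ports are placeholders, and the exact annotation value syntax for referencing multiple certificates is described in the linked documentation.

```
apiVersion: v1
kind: Service
metadata:
  name: frontend                                   # placeholder service name
  annotations:
    # Kubernetes secret(s) used as front-end server certificates
    service.citrix.com/secret: '<secret-name>'
    # Certkey(s) already configured on NetScaler
    service.citrix.com/preconfigured-certkey: '<certkey-name>'
spec:
  type: LoadBalancer
  ports:
  - name: https
    port: 443
    targetPort: 8443
  selector:
    app: frontend
```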

Fixed issues

  • NSIC doesn't process node update events in certain cases.

Release 2.0.6

20 Aug 13:15

Version 2.0.6

What's new

Support for multi-cluster ingress solution

NetScaler multi-cluster ingress solution enables NetScaler to load balance applications distributed across clusters using a single front-end IP address. The load-balanced applications can be the same application, different applications of the same domain, or entirely different applications.

Earlier, to load balance applications in multiple clusters, a dedicated content switching virtual server was required on NetScaler for each instance of NetScaler Ingress Controller (NSIC) running in the clusters. With NetScaler multi-cluster ingress solution, multiple ingress controllers can share a content switching virtual server. Therefore, applications deployed across clusters can be load balanced using the same content switching virtual server IP (VIP) address. For more information, see Multi-cluster ingress.

New parameters within ConfigMap

The metrics.service and transactions.service parameters are added under the endpoint object for analytics configuration using a ConfigMap.

  • metrics.service: Set this value to the IP address or DNS address of the observability endpoint.

    Note:

    The metrics.service parameter replaces the server parameter starting from NSIC release 2.0.6.

  • transactions.service: Set this value to the IP address or the <namespace>/<service> of the NetScaler Observability Exporter service.

    Note:

    The transactions.service parameter replaces the service parameter starting from NSIC release 2.0.6.

You can now change all the ConfigMap settings at runtime while NetScaler Ingress Controller is operational.
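
For illustration, an analytics ConfigMap excerpt using these parameters might look like the following. The ConfigMap name and endpoint addresses are placeholders, and the sketch assumes the NS_ANALYTICS_CONFIG data key used for NSIC analytics configuration.

```
apiVersion: v1
kind: ConfigMap
metadata:
  name: nsic-configmap                        # placeholder name
data:
  NS_ANALYTICS_CONFIG: |
    endpoint:
      metrics:
        service: '192.0.2.10'                 # IP or DNS address of the observability endpoint
      transactions:
        service: 'default/coe'                # <namespace>/<service> of NetScaler Observability Exporter
```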

Fixed issues

  • Sometimes, the content switching virtual servers in NetScaler are deleted because of a Kubernetes error. When NetScaler Ingress Controller (NSIC) restarts, it looks for these content switching virtual servers in NetScaler and, because they are not found, remains stuck in the reconciliation loop. With this fix, NSIC no longer looks for the content switching virtual servers in NetScaler and proceeds with further configuration.

Release 1.43.7

17 Jul 14:17
b42b782

Version 1.43.7

What's new

Implementation of Liveness and Readiness probes in NetScaler Ingress Controller (NSIC)

Liveness and Readiness probes are critical for ensuring that the containers within a pod are reliable, available, and able to handle traffic in Kubernetes/OpenShift. These probes manage traffic flow and maintain container health by performing specific checks.

  • Liveness probe: Determines if a container is running (alive). If the container fails this check, Kubernetes/OpenShift automatically restarts the container.
  • Readiness probe: Determines the readiness of containers to receive traffic. If the containers fail this check, the traffic is not directed to that pod. The pod itself is not terminated; instead, the containers are given time to complete their initialization process.

With the implementation of these probes, traffic is only directed to pods that are fully prepared to handle requests. If a container in a pod is not ready, Kubernetes/OpenShift temporarily stops sending traffic to that pod and allows the pod to initialize properly. For information about enabling and configuring the probes for NSIC, see the Helm chart release notes for NSIC 1.43.7.
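
The following is a generic Kubernetes pod spec excerpt, not the NSIC Helm chart defaults; the container name, image, paths, port, and timings are placeholders shown only to illustrate how the two probes differ.

```
containers:
- name: nsic                      # placeholder container name
  image: <nsic-image>
  livenessProbe:                  # failing this check restarts the container
    httpGet:
      path: /healthz              # placeholder endpoint
      port: 8080
    initialDelaySeconds: 10
    periodSeconds: 10
  readinessProbe:                 # failing this check removes the pod from traffic without a restart
    httpGet:
      path: /ready                # placeholder endpoint
      port: 8080
    initialDelaySeconds: 5
    periodSeconds: 5
```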

For NSIC OpenShift deployments, DeploymentConfig objects are replaced with Deployment objects

Release 1.42.12

01 Jul 08:44
39e930f

Version 1.42.12

Fixed issues

  • When multiple NetScaler Ingress Controllers (NSIC) coexist in a cluster, an NSIC associated with a specific ingress class processes the rewrite policies associated with a different ingress class.
  • After an NSIC pod restart, if the Kubernetes API server is unreachable, NSIC deletes the configuration in NetScaler.
  • NSIC logs a traceback error when a route is deployed prior to the service mentioned in that route.
  • The exception handling is faulty in the following scenarios, resulting in an incorrect metrics server configuration in NetScaler:
    • When an NSIC tries to configure a metrics server in NetScaler and the metrics server already exists.
    • When multiple NSIC instances try to configure the metrics server in NetScaler simultaneously.
  • The responder policy parameters, such as redirect-status-code and redirect-reason, are not configured on the corresponding virtual server on NetScaler, even though a responder policy is successfully applied to a service in the Kubernetes cluster.
  • Sometimes, NSIC fails to update NetScaler based on the updates to Kubernetes’ resource configuration and NetScaler returns an error. In such cases, NSIC clears the existing NetScaler configuration; when the configuration is cleared on NetScaler, an event notification is not logged in Kubernetes.

Release 1.41.5

24 Apr 16:06
024c7b7

Version 1.41.5

What's new

Support to specify a custom header for the GSLB-endpoint monitoring traffic

You can now specify a custom header that you want to add to the GSLB-endpoint monitoring traffic by adding the "customHeader" argument under the monitor parameter in the global traffic policy (GTP). Earlier, the host URL specified in the GTP YAML was added to the custom header of GSLB-endpoint monitoring traffic by default.

The following GTP excerpt shows the usage of the customHeader argument under the monitor parameter.

```
monitor:
- monType: HTTPS
  uri: ''
  customHeader: "Host: <custom hostname>\r\n x-b3-traceid: afc38bae00096a96\r\n\r\n"
  respCode: '200,300,400'
```

Fixed issues

  • Even though a responder policy was successfully applied to a service in the Kubernetes cluster, the responder policy parameters, such as redirect-status-code and redirect-reason, were not configured on the corresponding virtual server on NetScaler. This issue is fixed now.
  • NetScaler Ingress Controller (NSIC) logged a traceback error when it attempted to get the analytics endpoints for NetScaler Observability Exporter service specified in the ConfigMap. This issue is fixed now.
  • Installation of NetScaler Ingress Controller using NetScaler Operator failed with the lookup: nodes is forbidden error because of certain settings in 'analyticsConfig'. This failure was because of a lack of ClusterRole permission to run API calls to get node-specific information. This issue is fixed now.

Release 1.40.12

04 Apr 14:24
a5e5bf9

What's new

Support to bind SNI SSL certificate to NetScaler

NetScaler Ingress Controller (NSIC) now accepts the default-ssl-sni-certificate argument, which lets you provide a secret that is used to configure the SSL SNI certificate on NetScaler for HTTPS ingresses and routes.
Configure the default-ssl-sni-certificate argument in the NSIC deployment YAML by providing the secret name and the namespace where the secret is deployed in the cluster, as follows: --default-ssl-sni-certificate <NAMESPACE>/<SECRET_NAME>.
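
A hypothetical excerpt from the NSIC deployment YAML with this argument; the container name, image, and secret reference are placeholders.

```
containers:
- name: nsic                                            # placeholder container name
  image: <nsic-image>
  args:
  - --default-ssl-sni-certificate default/sni-secret    # <NAMESPACE>/<SECRET_NAME>
```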

Support for namespace-specific NSIC in OpenShift

NSIC can now be deployed at the namespace level in the OpenShift cluster. In this deployment mode, NSIC processes resources pertaining to the given namespace instead of managing all the resources across the entire cluster.

Note:

If NSIC requires access to clusterwide resources such as config.openshift.io and network.openshift.io, it must be deployed with ClusterRole privileges.

ImagePullSecret support for GSLB Controller

The GSLB controller Helm chart now supports the imagePullSecret option, which enables integration with container registries that require authentication. Before deploying the Helm chart, ensure that the corresponding Kubernetes secret is created in the same namespace so that the image can be pulled during Helm installation.
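
As a sketch only: the exact values key is defined in the GSLB controller Helm chart's values reference, and the secret name below is a placeholder for a secret that must already exist in the release namespace (for example, created with kubectl create secret docker-registry).

```
# Illustrative values.yaml excerpt; check the chart's values reference for the exact key name
imagePullSecrets:
- name: registry-credentials       # placeholder secret name
```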

Fixed issues

  • When NSIC was deployed to configure VPX in an OpenShift environment without specifying a VIP address (nsVIP) for the VPX, NSIC repeatedly attempted to process the ingress or route resources, resulting in failures. This issue is fixed now.
  • NSIC encountered traceback errors when the container port was absent from the service deployment YAML. This issue is fixed now.
  • The removal of stale endpoint labels resulted in reinitialization of NSIC. This issue is fixed now.
  • The ingressClass annotation was not supported when NSIC was deployed with a local RBAC role. This issue is fixed now.

Release 1.39.6

23 Feb 09:37
e11a5b3

Version 1.39.6

What’s new

Enhanced security posture

In our ongoing commitment to security and reliability, this release introduces a significant upgrade to NetScaler Ingress Controller. We have transitioned to a new underlying base image, meticulously selected and optimized to ensure that all installed packages are patched against known Common Vulnerabilities and Exposures (CVEs). This strategic update strengthens the security framework of NetScaler Ingress Controller.

Compatibility with OVN CNI-based OpenShift v4.13+ environments

This release addresses and resolves a previously identified issue affecting NetScaler Ingress Controller when operating on OpenShift v4.13 with the OVN CNI. OpenShift v4.13 introduces major changes related to the OVN CNI, and NetScaler Ingress Controller needed an enhancement to ensure compatibility and smooth operation in this environment. Users running NetScaler Ingress Controller on OpenShift v4.13+ with OVN CNI are encouraged to update to this release for continued support and a seamless user experience.

Release 1.38.27

08 Feb 07:51
01095ba

Version 1.38.27

What's new

Support for single-site deployment of NetScaler GSLB controller

From this release, single-site deployment of the GSLB controller is supported.

Support to trigger an HTTP response code

A new ingress annotation ingress.citrix.com/default-response-code: '{response-code: "<code>"}' is introduced. This annotation enables you to configure NetScaler to generate an HTTP response code when any one of the following conditions is met on receiving an HTTP request.

  • None of the content switching policies match.
  • All the backend service endpoints are unavailable.

Note:

If the default backend service is added in the ingress, the response code is sent from NetScaler when all the backend service endpoints are down.

Acceptable values for code are 404 and 503. For example, ingress.citrix.com/default-response-code: '{response-code: "404"}'.

For more information, see Annotations.
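
A minimal ingress sketch using this annotation is shown below; the ingress name, host, and backend service are placeholders, while the annotation name and value follow the format described above.

```
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: frontend-ingress                     # placeholder name
  annotations:
    # Return 404 when no content switching policy matches or all endpoints are down
    ingress.citrix.com/default-response-code: '{response-code: "404"}'
spec:
  rules:
  - host: app.example.com                    # placeholder host
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: frontend-svc               # placeholder backend service
            port:
              number: 80
```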

Support for subnet and IP address range in the rewrite and responder CRD dataset

The rewrite and responder CRD dataset now supports the inclusion of subnet and IP address ranges. This enhancement enables you to add IP address entries efficiently for IPv4, IPv6, and MAC addresses within the CRD dataset.

An example of a dataset with IP address, IP address range, and subnet values for IPv4:

```
dataset:
 - name: redirectIPs
   type: ipv4
   values:
    - 10.1.1.100
    - 1.1.1.1 - 1.1.1.100
    - 2.2.2.2/10
```

Support to configure IP parameters

NetScaler Ingress Controller already supports the BGP advertisement of external IP addresses for services of type LoadBalancer.
A new annotation, service.citrix.com/vipparams, is introduced for services of type LoadBalancer. This annotation enables you to configure additional parameters, such as "metrics", for the advertised IP address.

For information on the supported IP parameters, see nsip configuration.
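
The following is a hypothetical service excerpt showing where the annotation is applied. The value syntax and the parameter value are assumptions made only for illustration; see the nsip configuration reference for the supported parameters and exact format.

```
apiVersion: v1
kind: Service
metadata:
  name: frontend                              # placeholder name
  annotations:
    # The JSON-style value and the parameter value below are placeholders;
    # refer to the nsip configuration reference for the exact syntax.
    service.citrix.com/vipparams: '{"metrics": "<value>"}'
spec:
  type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: frontend
```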

Fixed issues

  • When multiple ingress controllers coexist within a cluster and the ingress class specified for a controller is switched to another controller, the newly associated NetScaler Ingress Controller did not update the ingress status correctly when the VIP CRD was deployed in the cluster. This issue is now fixed.

Release 1.37.5

20 Nov 10:39
a354c95

Version 1.37.5

Enhancements

  • Earlier, the global server load balancing (GSLB) site configuration on NetScaler had to be done manually. Now, it is done automatically.

Fixed issues

  • When multiple ingress controllers coexist within a cluster and the ingress class specified for a controller is switched to another controller, the newly associated NetScaler Ingress Controller does not update the ingress status correctly. This issue is fixed.

Release 1.36.5

11 Oct 11:31
4304e32

Version 1.36.5

What's new

Direct export of metrics to Prometheus

NetScaler ingress controller now supports directly exporting metrics from NetScaler to Prometheus. For more information, see Exporting metrics directly to Prometheus.

Fixed issues

  • When you modify the ingress class of an Ingress resource or the service class of a load balancer service from a supported class to an unsupported class, NetScaler ingress controller now automatically clears stale VIP addresses on the respective Ingress and service resources.
  • Multiple NetScaler ingress controllers in the same cluster were causing a loop of creating and deleting VIPs. Now, this issue is fixed.