Changelog History
v22.11.2 Changes
🚀 This edge release introduces the use of the Kubernetes metadata API in the proxy-injector and tap-injector components. This can reduce the IO and memory footprint of those components, as they now only need to track the metadata for certain resources rather than the entire resource. Similar changes will be made to the destination component in an upcoming release.
- ⬆️ Bumped HTTP dependencies to fix a potential deadlock in HTTP/2 clients
- 📇 Changed the proxy-injector and tap-injector components to use the metadata API which should result in less memory consumption
v22.11.1 Changes
🚀 This edge release ships a few fixes in Linkerd's dashboard and the multicluster extension. Additionally, a regression has been fixed in the CLI that blocked upgrades from versions older than 2.12.0 due to missing CRDs (even if the CRDs were present in-cluster). Finally, the release includes changes to the Helm charts to allow arbitrary (user-provided) labels on Linkerd workloads.
- 🛠 Fixed an issue in the CLI where upgrades from any version prior to stable-2.12.0 would fail when using the `--from-manifest` flag
- ✂ Removed un-injectable namespaces, such as kube-system, from unmeshed resource notifications in the dashboard (thanks @MoSattler!)
- 🛠 Fixed an issue where the dashboard would respond to requests with 404 due to wrong root paths in the HTML script (thanks @junnplus!)
- ✂ Removed the proxyProtocol field in the multicluster gateway policy; this has the effect of changing the protocol from 'HTTP/1.1' to 'unknown' (thanks @psmit!)
- 🛠 Fixed the multicluster gateway UID when installing through the CLI; prior to this change the 'runAsUser' field would be empty
- 🔄 Changed the helm chart for the control plane and all extensions to support arbitrary labels on resources (thanks @bastienbosser!)
v22.10.3 Changes
🚀 This edge release adds `network-validator`, a new init container to be used when CNI is enabled. `network-validator` ensures that local iptables rules are working as expected, and validates this before linkerd-proxy starts. It replaces the `noop` container, runs as `nobody`, and drops all capabilities before starting.
- 🔧 Validate CNI `iptables` configuration during pod startup
- 🛠 Fix "cluster networks contains all services" check failing for services with no ClusterIP
- ✂ Remove kubectl version check from `linkerd check` (thanks @ziollek!)
- Set `readOnlyRootFilesystem: true` in the viz chart (thanks @mikutas!)
- 🛠 Fix `linkerd multicluster install` by re-adding the `pause` container image in the chart
- 🔗 Fixed a bug where linkerd-viz had a hardcoded image value in the namespace-metadata.yml template (thanks @bastienbosser!)
v22.10.2 Changes
🚀 This edge release fixes an issue with CNI chaining that was preventing the Linkerd CNI plugin from working with other CNI plugins such as Cilium. It also includes several other fixes.
- ⚡️ Updated Grafana dashboards to use variable duration parameter so that they can be used when Prometheus has a longer scrape interval (thanks @TarekAS)
- 🛠 Fixed handling of .conf files in the CNI plugin so that the Linkerd CNI plugin can be used alongside other CNI plugins such as Cilium
- ➕ Added a `linkerd diagnostics policy` command to inspect Linkerd policy state
- ➕ Added a check that ClusterIP services are in the cluster networks
- ➕ Added a noop init container to injected pods when the CNI plugin is enabled to prevent certain scenarios where a pod can get stuck without an IP address
- 🛠 Fixed a bug where the `config.linkerd.io/proxy-version` annotation could be empty
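The "ClusterIP services are in the cluster networks" check from this release (and the related fix above for services with no ClusterIP) boils down to a CIDR containment test. Below is a minimal, illustrative sketch of that idea, not Linkerd's actual implementation; it assumes cluster networks are given as CIDR strings and that headless services report their ClusterIP as `None`:

```python
import ipaddress

def cluster_ip_in_networks(cluster_ip, networks):
    """Return True if a service's ClusterIP falls inside one of the
    configured cluster networks. Services with no ClusterIP (headless
    services report the string "None") are skipped rather than treated
    as check failures."""
    if cluster_ip in (None, "", "None"):
        return True  # nothing to validate for headless services
    ip = ipaddress.ip_address(cluster_ip)
    return any(ip in ipaddress.ip_network(net) for net in networks)

networks = ["10.0.0.0/8", "172.16.0.0/12"]
print(cluster_ip_in_networks("10.96.0.1", networks))   # True: in-network
print(cluster_ip_in_networks("None", networks))        # True: headless, skipped
print(cluster_ip_in_networks("192.0.2.7", networks))   # False: outside networks
```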
v22.10.1 Changes
🚀 This edge release fixes some sections of the Viz dashboard appearing blank, and adds an optional PodMonitor resource to the Helm chart to enable easier integration with the Prometheus Operator. It also includes many fixes submitted by our contributors.
- 🛠 Fixed the dashboard sections Tap, Top, and Routes appearing blank (thanks @MoSattler!)
- ➕ Added an optional PodMonitor resource to the main Helm chart (thanks @jaygridley!)
- 🛠 Fixed the CLI ignoring the `--api-addr` flag (thanks @mikutas!)
- Expanded the `linkerd authz` command to display AuthorizationPolicy resources that target namespaces (thanks @aatarasoff!)
- 🛠 Fixed the `NotIn` label selector operator in the policy resources being erroneously treated as `In`
- 🛠 Fixed warning logic around the "linkerd-viz ClusterRoles exist" and "linkerd-viz ClusterRoleBindings exist" checks in `linkerd viz check`
- 🛠 Fixed proxies emitting some duplicate inbound metrics
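The `NotIn` fix above concerns standard Kubernetes label selector semantics. As a rough model (illustrative only, not Linkerd's actual implementation), a single `matchExpressions` entry evaluates like this:

```python
def matches_expression(labels, key, operator, values):
    """Evaluate one matchExpressions entry against an object's labels,
    following Kubernetes label selector semantics."""
    if operator == "In":
        # In matches only when the label exists and its value is listed.
        return labels.get(key) in values
    if operator == "NotIn":
        # NotIn matches when the label is absent or its value is not listed;
        # the bug made this case behave like In instead.
        return labels.get(key) not in values
    raise ValueError(f"unsupported operator: {operator}")

labels = {"app": "web"}
print(matches_expression(labels, "app", "In", ["web"]))     # True
print(matches_expression(labels, "app", "NotIn", ["web"]))  # False
print(matches_expression({}, "app", "NotIn", ["web"]))      # True: key absent
```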
v22.9.2 Changes
🚀 This release fixes an issue where the jaeger injector would put pods into an error state when upgrading from stable-2.11.x.
- ⚡️ Updated AdmissionRegistration API version usage to v1
- 🛠 Fixed jaeger injector interfering with upgrades to 2.12.x
v22.9.1 Changes
🚀 This release adds the `linkerd.io/trust-root-sha256` annotation to all injected workloads, allowing predictable comparison of all workloads' trust anchors via the Kubernetes API.

Additionally, this release lowers the inbound connection pool idle timeout to 3s. This should help avoid socket errors, especially for Kubernetes probes.
- ➕ Added the `linkerd.io/trust-root-sha256` annotation on all injected workloads to indicate the certificate bundle
- ⏱ Lowered inbound connection pool idle timeout to 3s
- ⏪ Restored the `namespace` field in Linkerd Helm charts
- ⚡️ Updated fields in `AuthorizationPolicy` and `MeshTLSAuthentication` to conform to the specification (thanks @aatarasoff!)
- ⚡️ Updated the identity controller to not require a `ClusterRoleBinding` to read all deployment resources
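Because the annotation value is a digest of the trust anchor bundle, comparing trust roots across workloads reduces to comparing annotation strings. A minimal sketch of the idea (the hashing here is illustrative; the exact canonicalization of the bundle is Linkerd's concern):

```python
import hashlib

def trust_root_sha256(pem_bundle: str) -> str:
    """Derive a trust-root fingerprint the way such an annotation could be
    computed: a SHA-256 hex digest over the PEM bundle bytes."""
    return hashlib.sha256(pem_bundle.encode()).hexdigest()

# With the annotation in place, verifying that every injected workload shares
# the same trust anchors is just a uniqueness check over annotation values.
bundle = "-----BEGIN CERTIFICATE-----\n...base64...\n-----END CERTIFICATE-----\n"
annotations = [trust_root_sha256(bundle), trust_root_sha256(bundle)]
print(len(set(annotations)) == 1)  # True: all workloads share one trust root
```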
v22.8.3 Changes
Increased control plane HTTP servers' read timeouts so that they no longer match the default probe intervals. This was leading to closed connections and a decreased controller success rate.
v22.8.2 Changes
🚀 This release is considered a release candidate for stable-2.12.0 and we encourage you to try it out! It includes an update to the multicluster extension which adds support for Kubernetes v1.24, and also updates many CLI commands to support the new policy resources: ServerAuthorization and HTTPRoute.
- ⚡️ Updated linkerd check to allow RSA signed trust anchors (thanks @danibaeyens!)
- 🛠 Fixed some invalid yaml in the viz extension's tap-injector template (thanks @wc-s!)
- ➕ Added support for AuthorizationPolicy and HttpRoute to viz authz command
- ➕ Added support for AuthorizationPolicy and HttpRoute to viz stat
- ➕ Added support for policy metadata in linkerd tap
- 🛠 Fixed an issue where certain control plane components were not restarting as necessary after a trust root rotation
- ➕ Added a ServiceAccount token Secret to the multicluster extension to support Kubernetes versions >= v1.24
- 🛠 Fixed an issue where the --default-inbound-policy setting was not being respected
v22.8.1 Changes
🚀 This release introduces default probe authorization. This means that on clusters that use a default `deny` policy, probes do not have to be explicitly authorized using policy resources. Additionally, the `policyController.probeNetworks` Helm value has been added, which allows users to configure the networks that probes are expected to be performed from.

The `linkerd authz` command has also been updated to support the policy resources AuthorizationPolicy and HttpRoute. Finally, some smaller changes include allowing `linkerd-await` to be disabled on control plane components (using the existing `proxy.await` configuration) and changing the default iptables mode back to `legacy` to support more cluster environments by default.

- ⚡️ Updated the `linkerd authz` command to support AuthorizationPolicy and HttpRoute resources
- 🔄 Changed the `proxy.await` Helm value so that users can now disable `linkerd-await` on control plane components
- ➕ Added probe authorization by default, allowing clusters that use a default `deny` policy to not need to explicitly authorize probes
- ➕ Added the ability to run the Linkerd CNI plugin in non-chained (stand-alone) mode
- ➕ Added the `policyController.probeNetworks` Helm value for configuring the networks that probes are expected to be performed from
- 🔄 Changed the default iptables mode to `legacy`
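Taken together, the Helm values discussed above might be set like this. This is a hedged sketch: the key names (`proxy.await`, `policyController.probeNetworks`) are quoted from the notes above, but the exact nesting and defaults should be confirmed against the chart's values reference:

```yaml
# Illustrative Helm values fragment for the linkerd control plane chart
proxy:
  await: false          # disable linkerd-await on control plane components
policyController:
  probeNetworks:        # networks probes are expected to originate from
    - 10.0.0.0/8
```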