In our recent announcement, we introduced Operator Lifecycle Manager (OLM) v1 and its exciting new features designed to simplify operator management on Red Hat OpenShift. This article walks through key user scenarios for OLM v1. We'll use the new ClusterExtension API and provide easy-to-follow, copy-paste examples to show how you can apply these improvements in your day-to-day operations.
If you're new to OLM v1, read our announcement post first for a high-level overview of its benefits, simplified API, and new features: Announcing OLM v1: Next-Generation Operator Lifecycle Management.
Manage operators as ClusterExtensions with OLM v1
In OLM v1, you manage operators declaratively using ClusterExtension API objects. Let's walk through common lifecycle operations in six steps:
Explore operator packages to install from a catalog.
Install an operator package with a ClusterExtension.
Upgrade a ClusterExtension.
Optionally, roll back a ClusterExtension to an older version.
Grant user access to the provided APIs of an installed operator package.
Uninstall an operator package with a ClusterExtension.
1. Explore operator packages to install from a catalog
OLM v1 shifts from a CustomResourceDefinition (CRD)-based catalog management approach to a new RESTful API, improving performance and reducing Kubernetes API server load. While the initial catalog API provides all content for a given catalog image through a single endpoint, we are actively developing support for more specific queries like listing all available channels in a particular operator package or listing all available versions in a certain channel.
Currently, you can query the catalog image off-cluster to explore and find operator packages.
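Before running the more involved compatibility queries below, you can get a quick inventory of every package name the catalog provides. A minimal sketch, using the same index image as the rest of this article:

```shell
# List all package names in the catalog, one per line
opm render registry.redhat.io/redhat/redhat-operator-index:v4.18 |
  jq -r 'select(.schema == "olm.package") | .name' |
  sort
```

This filters the rendered file-based catalog for olm.package entries, which carry one record per operator package.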
Supported packages
OLM v1's initial general availability (GA) release supports operator packages that meet the following requirements:
- Uses the registry+v1 bundle format introduced in the existing OLM.
- Supports installation via the AllNamespaces install mode.
- Does not use webhooks.
- Does not declare dependencies using file-based catalog properties (olm.gvk.required, olm.package.required, olm.constraint).
In this initial release, OLM v1 verifies these constraints before installation, reporting any violations in the ClusterExtension condition. While OLM v1 initially supports a select set of operators, we're actively expanding compatibility.
Procedure
Follow these steps to query the catalog image off-cluster for operator packages to install:
Query the catalog image to get a list of compatible operator packages by running the following opm render command:

opm render registry.redhat.io/redhat/redhat-operator-index:v4.18 |
  jq -r --arg pkg "" 'select(.schema == "olm.bundle" and (.package == $pkg or $pkg == "")) | {package: .package, name: .name, image: .image, supportsAllNamespaces: (.properties[] | select(.type == "olm.csv.metadata").value.installModes[] | select(.type == "AllNamespaces").supported == true)}' |
  tee allNamespaces.json |
  jq -r '.image' |
  xargs -I '{}' -n1 -P8 bash -c 'opm render {} > $(mktemp -d -p . -t olmv1-compat-bundle-XXXXXXX)/bundle.json' &&
  bash -c 'cat olmv1-compat-bundle*/*.json' |
  jq '{package, name, image, requiresWebhooks: (.properties[] | select(.type == "olm.bundle.object").value.data | @base64d | fromjson | select(.kind == "ClusterServiceVersion").spec.webhookdefinitions != null)}' > webhooks.json &&
  jq -s 'group_by(.name)[] | reduce .[] as $item ({}; . *= $item) | . *= {compatible: ((.requiresWebhooks | not) and .supportsAllNamespaces)} | {name, package, compatible}' allNamespaces.json webhooks.json |
  jq -r 'select(.compatible == true) | .package' |
  sort -u
Example output:
3scale-operator
amq-broker-rhel8
amq-online
…
quay-operator
…
tang-operator
volsync-product
web-terminal
Get a list of compatible bundle versions from a selected package by running the following opm render command:

opm render registry.redhat.io/redhat/redhat-operator-index:v4.18 |
  jq -r --arg pkg "quay-operator" 'select(.schema == "olm.bundle" and (.package == $pkg or $pkg == "")) | {"package": .package, "name": .name, "image": .image, "supportsAllNamespaces": (.properties[] | select(.type == "olm.csv.metadata").value.installModes[] | select(.type == "AllNamespaces").supported == true)}' |
  tee allNamespaces.json |
  jq -r '.image' |
  xargs -I '{}' -n1 -P8 bash -c 'opm render {} > $(mktemp -d -p . -t olmv1-compat-bundle-XXXXXXX)/bundle.json' &&
  bash -c 'cat olmv1-compat-bundle*/*.json' |
  jq '{package, name, image, "requiresWebhooks": (.properties[] | select(.type == "olm.bundle.object").value.data | @base64d | fromjson | select(.kind == "ClusterServiceVersion").spec.webhookdefinitions != null)}' > webhooks.json &&
  jq -s 'group_by(.name)[] | reduce .[] as $item ({}; . *= $item) | . *= {"compatible": ((.requiresWebhooks | not) and .supportsAllNamespaces)} | {name, compatible}' allNamespaces.json webhooks.json &&
  rm -rf allNamespaces.json webhooks.json olmv1-compat-bundle*
Example output:
{ "name": "quay-operator.v3.10.0", "compatible": true }
{ "name": "quay-operator.v3.10.1", "compatible": true }
…
{ "name": "quay-operator.v3.12.0", "compatible": true }
{ "name": "quay-operator.v3.12.1", "compatible": true }
{ "name": "quay-operator.v3.12.2", "compatible": true }
{ "name": "quay-operator.v3.12.3", "compatible": true }
…
{ "name": "quay-operator.v3.13.0", "compatible": true }
…

(In this run, the command took about 5 minutes and 45 seconds to complete.)
Get a list of channels from a selected package by running the following opm render command:

opm render registry.redhat.io/redhat/redhat-operator-index:v4.18 |
  jq -s '.[] | select(.schema == "olm.channel") | select(.package == "quay-operator") | .name'
Example output:
"quay-v3.4"
"quay-v3.5"
"stable-3.10"
"stable-3.11"
"stable-3.12"
"stable-3.13"
"stable-3.6"
"stable-3.7"
"stable-3.8"
"stable-3.9"
Get a list of the versions published in a channel by running the following opm render command:

opm render registry.redhat.io/redhat/redhat-operator-index:v4.18 |
  jq -s '.[] | select(.package == "quay-operator") | select(.schema == "olm.channel") | select(.name == "stable-3.12") | .entries | .[] | .name'
Example output:
"quay-operator.v3.12.0"
"quay-operator.v3.12.1"
"quay-operator.v3.12.2"
"quay-operator.v3.12.3"
"quay-operator.v3.12.4"
"quay-operator.v3.12.5"
"quay-operator.v3.12.6"
"quay-operator.v3.12.7"
2. Install an operator package with a ClusterExtension
To install a package from the catalog, create and apply a ClusterExtension API object in your cluster.
Prerequisites: Create a ServiceAccount to manage a ClusterExtension
OLM v1 operates under a least privilege model and requires a user-provided ServiceAccount with appropriate role-based access control (RBAC) permissions to manage ClusterExtensions. You will need to create and configure this ServiceAccount with the permissions required by the target ClusterExtension. Installation will fail if the ServiceAccount lacks the necessary permissions, and ClusterExtension manifests without a ServiceAccount will be rejected. For details on determining the minimal required permissions, refer to Derive minimal ServiceAccount required for ClusterExtension Installation and Management.
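As an illustrative sketch only, a least-privilege installer ServiceAccount typically needs rights over the kinds of resources a bundle creates: its CRDs, its operator Deployment, and its RBAC objects. The rules below are hypothetical placeholders, not the actual permission set for any particular package; derive the real list as described in the linked guide:

```shell
# Hypothetical least-privilege sketch; the ClusterRole name and every rule
# below are illustrative assumptions, not a package's real requirements.
oc apply -f - <<EOF
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: my-extension-installer
rules:
# Manage the CRDs the bundle ships
- apiGroups: ["apiextensions.k8s.io"]
  resources: ["customresourcedefinitions"]
  verbs: ["create", "get", "list", "watch", "update", "patch", "delete"]
# Manage the operator's workload
- apiGroups: ["apps"]
  resources: ["deployments"]
  verbs: ["create", "get", "list", "watch", "update", "patch", "delete"]
# Manage supporting core objects
- apiGroups: [""]
  resources: ["serviceaccounts", "services", "configmaps", "secrets"]
  verbs: ["create", "get", "list", "watch", "update", "patch", "delete"]
# Manage the RBAC the bundle grants to its operator
- apiGroups: ["rbac.authorization.k8s.io"]
  resources: ["clusterroles", "clusterrolebindings", "roles", "rolebindings"]
  verbs: ["create", "get", "list", "watch", "update", "patch", "delete", "bind", "escalate"]
EOF
```

A failed installation reports the specific missing permissions in the ClusterExtension status, so you can iterate on this list until it is sufficient.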
Future iterations of OLM v1 will simplify ServiceAccount management by helping you discover the required permissions, preview them, and create appropriate ServiceAccounts before installation or upgrade.
Procedure
Follow these steps to install an operator package:
If you want to install the package into a new namespace, create it by running the following command:
oc adm new-project redhat-quay
For package installation, use the ServiceAccount you created in the prerequisites section. If you don't have one (non-production testing only), create a ServiceAccount and bind the cluster-admin ClusterRole:

oc apply -f - <<EOF
apiVersion: v1
kind: ServiceAccount
metadata:
  name: quay-installer-sa
  namespace: redhat-quay
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: my-cluster-extension-installer-role-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: quay-installer-sa
  namespace: redhat-quay
EOF
Compose a CustomResource (CR), similar to the following example:
apiVersion: olm.operatorframework.io/v1
kind: ClusterExtension
metadata:
  name: quay-operator
spec:
  namespace: <namespace>
  serviceAccount:
    name: <serviceAccount_name>
  source:
    sourceType: Catalog
    catalog:
      packageName: quay-operator
      channels: [<channel>]
      version: "<version>"
In this CR:

- <namespace>: Specifies the namespace where you want the package installed, such as quay-operator or my-namespace. ClusterExtension is still cluster-scoped and might contain resources that are installed in different namespaces.
- <serviceAccount_name>: Specifies the name of the ServiceAccount you created beforehand to install, update, and manage your package version. For example, quay-installer-sa.
- <channel> (optional): Specifies the channel, such as stable-3.12, for the package you want to install or update.
- <version> (optional): Specifies the version or version range, such as 3.12.1, 3.12.x, or >=3.12.1, of the package you want to install or update. For more information, see Support for version ranges.
For instance:
apiVersion: olm.operatorframework.io/v1
kind: ClusterExtension
metadata:
  name: quay-operator
spec:
  namespace: redhat-quay
  serviceAccount:
    name: quay-installer-sa
  source:
    sourceType: Catalog
    catalog:
      packageName: quay-operator
      channels: [stable-3.12]
      version: "3.12.1"
Apply the CR to the cluster by running the following command:
oc apply -f quay-operator.yaml
Example output:
clusterextension.olm.operatorframework.io/quay-operator created
Verification
To verify the CR, proceed to:
View the CR in the YAML format by running the following command:
oc get clusterextension quay-operator -o yaml
Example output:
apiVersion: olm.operatorframework.io/v1
kind: ClusterExtension
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"olm.operatorframework.io/v1","kind":"ClusterExtension","metadata":{"annotations":{},"name":"quay-operator"},"spec":{"namespace":"redhat-quay","serviceAccount":{"name":"quay-installer-sa"},"source":{"catalog":{"channels":["stable-3.12"],"packageName":"quay-operator","version":"3.12.1"},"sourceType":"Catalog"}}}
  creationTimestamp: "2025-02-28T22:40:21Z"
  finalizers:
  - olm.operatorframework.io/cleanup-unpack-cache
  - olm.operatorframework.io/cleanup-contentmanager-cache
  generation: 1
  managedFields: …
  name: quay-operator
  resourceVersion: "60237"
  uid: e2176f60-92a3-4fe1-b107-253f18d4e362
spec:
  namespace: redhat-quay
  serviceAccount:
    name: quay-installer-sa
  source:
    catalog:
      channels:
      - stable-3.12
      packageName: quay-operator
      upgradeConstraintPolicy: CatalogProvided
      version: 3.12.1
    sourceType: Catalog
status:
  conditions:
  - lastTransitionTime: "2025-02-28T22:40:21Z"
    message: ""
    observedGeneration: 1
    reason: Deprecated
    status: "False"
    type: Deprecated
  - lastTransitionTime: "2025-02-28T22:40:21Z"
    message: ""
    observedGeneration: 1
    reason: Deprecated
    status: "False"
    type: PackageDeprecated
  - lastTransitionTime: "2025-02-28T22:40:21Z"
    message: ""
    observedGeneration: 1
    reason: Deprecated
    status: "False"
    type: ChannelDeprecated
  - lastTransitionTime: "2025-02-28T22:40:21Z"
    message: ""
    observedGeneration: 1
    reason: Deprecated
    status: "False"
    type: BundleDeprecated
  - lastTransitionTime: "2025-02-28T22:40:24Z"
    message: Installed bundle registry.redhat.io/quay/quay-operator-bundle@sha256:9607e4d2493623608d8c7b578ff65cbece8e0b7f609b82f22858f9e313f4face successfully
    observedGeneration: 1
    reason: Succeeded
    status: "True"
    type: Installed
  - lastTransitionTime: "2025-02-28T22:40:24Z"
    message: desired state reached
    observedGeneration: 1
    reason: Succeeded
    status: "True"
    type: Progressing
  install:
    bundle:
      name: quay-operator.v3.12.1
      version: 3.12.1
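If you automate installations, you can also block until the extension reports success instead of inspecting the full YAML. A sketch reusing the names from this example, relying on the generic condition wait that oc/kubectl provide for any resource with status conditions:

```shell
# Block until the ClusterExtension's Installed condition turns True,
# or fail after the timeout
oc wait clusterextension/quay-operator \
  --for=condition=Installed=True \
  --timeout=10m
```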
3. Upgrade a ClusterExtension
To upgrade a package from the catalog, edit and apply the changes to a ClusterExtension API object in your cluster.
Prerequisites: ServiceAccount's RBAC permissions are adequate for the installed and target version
As previously noted, OLM v1 requires a user-provided ServiceAccount with the necessary RBAC permissions to manage ClusterExtensions. During a version update, the update will fail if the new version requires permissions beyond those granted to the ServiceAccount. OLM v1 will report the specific missing permissions, allowing you to assess the permission escalation. Currently, you'll need to manually update the ServiceAccount to include the required permissions for the target ClusterExtension.
Future versions of OLM v1 will streamline this process by providing tools to discover the required permissions and assist in creating the appropriate ServiceAccount. This will allow you to preview the needed permissions and create the necessary ServiceAccount with OLM v1's help.
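When an update stalls on RBAC, the missing permissions surface in the ClusterExtension's Progressing condition message. A sketch for inspecting it, using the resource name from this article's example:

```shell
# Print the Progressing condition message, which lists any
# permissions the installer ServiceAccount is missing
oc get clusterextension quay-operator \
  -o jsonpath='{.status.conditions[?(@.type=="Progressing")].message}'
```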
Version pinning without automatic updates
You can select a specific version (for example, 3.12.2) from the catalog to which an installed package should be pinned. This feature enables you to precisely control updates, which is particularly helpful for reproducing a staging environment setup in production after successful testing.
Procedure
To upgrade a package, follow these instructions:
Edit the CR, similar to the following example:
apiVersion: olm.operatorframework.io/v1
kind: ClusterExtension
metadata:
  name: quay-operator
spec:
  namespace: redhat-quay
  serviceAccount:
    name: quay-installer-sa
  source:
    sourceType: Catalog
    catalog:
      packageName: quay-operator
      channels: [stable-3.12]
      version: "3.12.2" # pin quay-operator to version 3.12.2
Apply the CR to the cluster by running the following command:
oc apply -f quay-operator.yaml
Example output:
clusterextension.olm.operatorframework.io/quay-operator configured
Verification
To verify the CR, proceed to:
View the CR in the YAML format by running the following command:
oc get clusterextension quay-operator -o yaml
Example output:
apiVersion: olm.operatorframework.io/v1
kind: ClusterExtension
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"olm.operatorframework.io/v1","kind":"ClusterExtension","metadata":{"annotations":{},"name":"quay-operator"},"spec":{"namespace":"redhat-quay","serviceAccount":{"name":"quay-installer-sa"},"source":{"catalog":{"channels":["stable-3.12"],"packageName":"quay-operator","version":"3.12.2"},"sourceType":"Catalog"}}}
  creationTimestamp: "2025-02-28T22:40:21Z"
  finalizers:
  - olm.operatorframework.io/cleanup-unpack-cache
  - olm.operatorframework.io/cleanup-contentmanager-cache
  generation: 2
  managedFields: …
  name: quay-operator
  resourceVersion: "60967"
  uid: e2176f60-92a3-4fe1-b107-253f18d4e362
spec:
  namespace: redhat-quay
  serviceAccount:
    name: quay-installer-sa
  source:
    catalog:
      channels:
      - stable-3.12
      packageName: quay-operator
      upgradeConstraintPolicy: CatalogProvided
      version: 3.12.2
    sourceType: Catalog
status:
  conditions:
  - lastTransitionTime: "2025-02-28T22:40:21Z"
    message: ""
    observedGeneration: 2
    reason: Deprecated
    status: "False"
    type: Deprecated
  - lastTransitionTime: "2025-02-28T22:40:21Z"
    message: ""
    observedGeneration: 2
    reason: Deprecated
    status: "False"
    type: PackageDeprecated
  - lastTransitionTime: "2025-02-28T22:40:21Z"
    message: ""
    observedGeneration: 2
    reason: Deprecated
    status: "False"
    type: ChannelDeprecated
  - lastTransitionTime: "2025-02-28T22:40:21Z"
    message: ""
    observedGeneration: 2
    reason: Deprecated
    status: "False"
    type: BundleDeprecated
  - lastTransitionTime: "2025-02-28T22:40:24Z"
    message: Installed bundle registry.redhat.io/quay/quay-operator-bundle@sha256:7a997716b48ad2d2c355574e613c2a4e58666dc51038c9dd1ca977de53d42a78 successfully
    observedGeneration: 2
    reason: Succeeded
    status: "True"
    type: Installed
  - lastTransitionTime: "2025-02-28T22:40:24Z"
    message: desired state reached
    observedGeneration: 2
    reason: Succeeded
    status: "True"
    type: Progressing
  install:
    bundle:
      name: quay-operator.v3.12.2
      version: 3.12.2
Version range auto updates
In addition to specifying a fixed version, you can configure automatic updates within a defined version range. For example, you can limit automatic updates to z-stream patches (for example, 3.12.x or ~3.12) so that only bug and security fixes are applied, avoiding more substantial changes. OLM v1 then automatically applies any new version within this range that is released to the catalog.
See version ranges for more information on installing and updating operator packages with ClusterExtensions using SemVer version ranges.
Procedure
Follow these steps to configure automatic updates:
Edit the CR, similar to the following example:
apiVersion: olm.operatorframework.io/v1
kind: ClusterExtension
metadata:
  name: quay-operator
spec:
  namespace: redhat-quay
  serviceAccount:
    name: quay-installer-sa
  source:
    sourceType: Catalog
    catalog:
      packageName: quay-operator
      channels: [stable-3.12] # 'channels' is optional here, but recommended when
                              # channels mix minor versions or release maturity
                              # (e.g., 'stable', 'candidate')
      version: "3.12.x" # or ~3.12 for automatic z-stream updates
Apply the CR to the cluster by running the following command:
oc apply -f quay-operator.yaml
Example output:
clusterextension.olm.operatorframework.io/quay-operator configured
Verification
To verify the CR, proceed to:
View the CR in the YAML format by running the following command:
oc get clusterextension quay-operator -o yaml
Example output:
apiVersion: olm.operatorframework.io/v1
kind: ClusterExtension
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"olm.operatorframework.io/v1","kind":"ClusterExtension","metadata":{"annotations":{},"name":"quay-operator"},"spec":{"namespace":"redhat-quay","serviceAccount":{"name":"quay-installer-sa"},"source":{"catalog":{"channels":["stable-3.12"],"packageName":"quay-operator","version":"3.12.x"},"sourceType":"Catalog"}}}
  creationTimestamp: "2025-02-28T22:40:21Z"
  finalizers:
  - olm.operatorframework.io/cleanup-unpack-cache
  - olm.operatorframework.io/cleanup-contentmanager-cache
  generation: 3
  managedFields: …
  name: quay-operator
  resourceVersion: "62569"
  uid: e2176f60-92a3-4fe1-b107-253f18d4e362
spec:
  namespace: redhat-quay
  serviceAccount:
    name: quay-installer-sa
  source:
    catalog:
      channels:
      - stable-3.12
      packageName: quay-operator
      upgradeConstraintPolicy: CatalogProvided
      version: 3.12.x
    sourceType: Catalog
status:
  conditions:
  - lastTransitionTime: "2025-02-28T22:40:21Z"
    message: ""
    observedGeneration: 3
    reason: Deprecated
    status: "False"
    type: Deprecated
  - lastTransitionTime: "2025-02-28T22:40:21Z"
    message: ""
    observedGeneration: 3
    reason: Deprecated
    status: "False"
    type: PackageDeprecated
  - lastTransitionTime: "2025-02-28T22:40:21Z"
    message: ""
    observedGeneration: 3
    reason: Deprecated
    status: "False"
    type: ChannelDeprecated
  - lastTransitionTime: "2025-02-28T22:40:21Z"
    message: ""
    observedGeneration: 3
    reason: Deprecated
    status: "False"
    type: BundleDeprecated
  - lastTransitionTime: "2025-02-28T22:40:24Z"
    message: Installed bundle registry.redhat.io/quay/quay-operator-bundle@sha256:ea077de1039e87e20f699ab2a68fa4f825898f7164bf071cc9404653789e352f successfully
    observedGeneration: 3
    reason: Succeeded
    status: "True"
    type: Installed
  - lastTransitionTime: "2025-02-28T22:40:24Z"
    message: desired state reached
    observedGeneration: 3
    reason: Succeeded
    status: "True"
    type: Progressing
  install:
    bundle:
      name: quay-operator.v3.12.8
      version: 3.12.8
4. Optionally roll back a ClusterExtension to an older version
You might need to downgrade a ClusterExtension to a previous version if you encounter compatibility issues, unexpected behavior, or need specific features from an older release. However, downgrading can be risky. You could lose data, run into problems with newer CRD versions, or break clients that depend on the latest version. You should carefully weigh these risks before deciding to downgrade.
Prerequisites: Data backup and compatibility checks
Before downgrading, back up your configurations and data, make sure the target downgrade version is available, and verify its compatibility with your system and dependencies. For Red Hat products, you should only do this when instructed by support.
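For the Quay example used in this article, a backup before the downgrade could look like the following. The quayregistries resource name is an assumption for illustration; substitute the custom resources your operator actually provides:

```shell
# Back up the ClusterExtension definition itself
oc get clusterextension quay-operator -o yaml > quay-clusterextension-backup.yaml

# Back up the operator's custom resources across all namespaces
# (resource name assumed for illustration)
oc get quayregistries -A -o yaml > quay-custom-resources-backup.yaml
```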
Procedure
Follow these instructions to roll back to an older version:
Edit the CR, similar to the following example, to ignore catalog-provided upgrade constraints to allow downgrade:
apiVersion: olm.operatorframework.io/v1
kind: ClusterExtension
metadata:
  name: quay-operator
spec:
  namespace: redhat-quay
  serviceAccount:
    name: quay-installer-sa
  source:
    sourceType: Catalog
    catalog:
      packageName: quay-operator
      channels: [stable-3.12]
      version: "3.12.1" # downgrade to 3.12.1
      upgradeConstraintPolicy: SelfCertified
In this CR:

- Set upgradeConstraintPolicy to SelfCertified to allow downgrades, or any version changes that do not adhere to the predefined upgrade paths.
Apply the CR to the cluster by running the following command:
oc apply -f quay-operator.yaml
Verification
To verify the CR, proceed to:
View the CR in the YAML format by running the following command:
oc get clusterextension quay-operator -o yaml
Example output:
apiVersion: olm.operatorframework.io/v1
kind: ClusterExtension
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"olm.operatorframework.io/v1","kind":"ClusterExtension","metadata":{"annotations":{},"name":"quay-operator"},"spec":{"namespace":"redhat-quay","serviceAccount":{"name":"quay-installer-sa"},"source":{"catalog":{"channels":["stable-3.12"],"packageName":"quay-operator","version":"3.12.1"},"sourceType":"Catalog"}}}
  creationTimestamp: "2025-02-28T22:40:21Z"
  finalizers:
  - olm.operatorframework.io/cleanup-unpack-cache
  - olm.operatorframework.io/cleanup-contentmanager-cache
  generation: 5
  managedFields: …
  name: quay-operator
  resourceVersion: "63196"
  uid: e2176f60-92a3-4fe1-b107-253f18d4e362
spec:
  namespace: redhat-quay
  serviceAccount:
    name: quay-installer-sa
  source:
    catalog:
      channels:
      - stable-3.12
      packageName: quay-operator
      upgradeConstraintPolicy: SelfCertified
      version: 3.12.1
    sourceType: Catalog
status:
  conditions:
  - lastTransitionTime: "2025-02-28T22:40:21Z"
    message: ""
    observedGeneration: 5
    reason: Deprecated
    status: "False"
    type: Deprecated
  - lastTransitionTime: "2025-02-28T22:40:21Z"
    message: ""
    observedGeneration: 5
    reason: Deprecated
    status: "False"
    type: PackageDeprecated
  - lastTransitionTime: "2025-02-28T22:40:21Z"
    message: ""
    observedGeneration: 5
    reason: Deprecated
    status: "False"
    type: ChannelDeprecated
  - lastTransitionTime: "2025-02-28T22:40:21Z"
    message: ""
    observedGeneration: 5
    reason: Deprecated
    status: "False"
    type: BundleDeprecated
  - lastTransitionTime: "2025-02-28T22:40:24Z"
    message: Installed bundle registry.redhat.io/quay/quay-operator-bundle@sha256:9607e4d2493623608d8c7b578ff65cbece8e0b7f609b82f22858f9e313f4face successfully
    observedGeneration: 5
    reason: Succeeded
    status: "True"
    type: Installed
  - lastTransitionTime: "2025-02-28T22:40:24Z"
    message: desired state reached
    observedGeneration: 5
    reason: Succeeded
    status: "True"
    type: Progressing
  install:
    bundle:
      name: quay-operator.v3.12.1
      version: 3.12.1
For more information, the Downgrading a ClusterExtension guide provides details on performing a downgrade, including overriding default constraints and disabling CRD safety checks.
5. Grant user access to the provided APIs of an installed operator package
Cluster extensions managed by OLM v1 often provide CustomResourceDefinitions that define new API resources. Cluster administrators usually have full access to these CRDs, but non-administrative users require explicit permissions. Administrators are responsible for configuring RBAC to grant non-administrative users the necessary permissions to create, view, and edit CustomResources. OLM v1 does not automatically configure RBAC for these APIs.
The guide Granting Users Access to API Resources in OLM shows you how to manually configure RBAC, including creating and binding ClusterRoles.
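As a sketch of what such RBAC can look like, the following creates a ClusterRole over one provided API and binds it to a user group. The API group, resource plural, role name, and group name here are illustrative assumptions, not values taken from any particular package; substitute the CRDs your operator actually installs:

```shell
# Illustrative sketch; API group, resource plural, and names are assumptions
oc apply -f - <<EOF
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: quayregistry-editor        # hypothetical role name
rules:
- apiGroups: ["quay.redhat.com"]   # assumed API group of the provided CRD
  resources: ["quayregistries"]    # assumed resource plural
  verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: quayregistry-editor-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: quayregistry-editor
subjects:
- kind: Group
  name: app-developers             # hypothetical user group
  apiGroup: rbac.authorization.k8s.io
EOF
```

For namespace-scoped access, bind the same ClusterRole with a RoleBinding in the relevant namespace instead.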
6. Uninstall an operator package with a ClusterExtension
To uninstall a package and its CRDs, delete the ClusterExtension API object in your cluster. This action will remove all instances of the custom resources and the operator itself, essentially reverting the cluster to its state before operator installation.
This is a change from how uninstallation works in the current OLM, which keeps your CRDs and custom resources. In the future, OLM v1 will give you the option to keep these resources even after you uninstall an operator. This will be helpful if you want to switch to a different operator that manages the same APIs, troubleshoot issues, or keep your data.
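Because deletion removes the CRDs and all their instances, you may want to check what will be affected first. A sketch for the Quay example; the name pattern and resource plural are assumptions for illustration:

```shell
# List CRDs that appear to belong to the Quay package (name pattern assumed)
oc get crd -o name | grep -i quay

# List any remaining instances across all namespaces (resource name assumed)
oc get quayregistries -A
```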
Procedure
Follow these steps to uninstall an operator package:
Delete a package and its CRDs by running the following command:
oc delete clusterextension <clusterextension_name>
Here, <clusterextension_name> specifies the name defined in the metadata.name field of the ClusterExtension CR.

Example output:

➜ oc delete clusterextension quay-operator
clusterextension.olm.operatorframework.io "quay-operator" deleted
Additionally, remove the ClusterExtension namespace, installer ServiceAccount, and cluster-scoped RBAC resources, if applicable.
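For the example used throughout this article, that cleanup could look like the following, reusing the names from the earlier non-production ServiceAccount example:

```shell
# Remove the cluster-scoped RBAC created for the installer
oc delete clusterrolebinding my-cluster-extension-installer-role-binding

# Remove the installer ServiceAccount, then the namespace itself
oc delete serviceaccount quay-installer-sa -n redhat-quay
oc delete namespace redhat-quay
```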
Verification
To verify the uninstallation, proceed to:
Verify the ClusterExtension is deleted by running the following command:
oc get clusterextension <clusterextension_name> -o yaml
Example output:
➜ oc get clusterextension quay-operator -o yaml
Error from server (NotFound): clusterextensions.olm.operatorframework.io "quay-operator" not found
Verify that the CustomResourceDefinition provided by the operator package is deleted by running the following command:
oc get crd <crd-name> -o yaml
Example output:
➜ oc get crd QuayRegistry -o yaml
Error from server (NotFound): customresourcedefinitions.apiextensions.k8s.io "QuayRegistry" not found
Conclusion
These examples cover the fundamental lifecycle operations and illustrate the ease and power of managing operators with OLM v1. With the ClusterExtension API, you gain more control and flexibility in managing the lifecycle of your operator packages on OpenShift. Remember to consult the official Red Hat OpenShift documentation for more advanced configurations and details on ServiceAccount permissions. We encourage you to try these examples in your environment and explore the full capabilities of OLM v1!
For a high-level overview of the key features and benefits of OLM v1, refer to our announcement blog post: Announcing OLM v1: Next-Generation Operator Lifecycle Management.