Large language models (LLMs) have made remarkable strides in recent years, and their integration as AI assistants in technical environments is quickly becoming part of everyday workflows. These tools are now being used to handle a growing range of complex tasks, so it’s only natural to wonder how far they can really go. Red Hat OpenShift Lightspeed is no exception. This AI assistant built into OpenShift simplifies tasks, accelerates workflows, and helps users become more productive administering OpenShift clusters.
What if we took it a step further? What if we treated OpenShift Lightspeed as a potential candidate for the Red Hat Certified OpenShift Administrator exam? This certification validates the ability to:
- Troubleshoot OpenShift clusters and applications.
- Configure authentication and authorization within the cluster.
- Ensure application security using Secrets and Security Context Constraints (SCCs).
- Expose applications and protect network traffic with Network Policies and TLS security.
- Control Pod scheduling and limit resource usage with Resource Quotas.
- Manage OpenShift cluster updates, workloads, and operators.
In this post, the first in a two-part series, we will explore whether OpenShift Lightspeed can handle real-world OpenShift certification questions. We’ll challenge it with tasks and scenarios similar to those found in the Red Hat Certified OpenShift Administrator exam and evaluate its performance. Can OpenShift Lightspeed meet the standards of a certified OpenShift administrator? Let’s find out.
But before we do...
It’s important to recognize that Red Hat Certified administrators bring the critical thinking, contextual awareness, and production experience needed to apply AI-generated suggestions responsibly—especially in high-stakes environments like government, healthcare, finance, energy, and transportation. OpenShift Lightspeed is not a replacement for expertise, but a valuable companion that helps skilled professionals work faster, smarter, and with greater confidence. We want to highlight that the tool is competent enough to make users of all experience levels more productive—not act as a substitute.
Exploring OpenShift Lightspeed: What it can and can’t do yet
The OpenShift Lightspeed Operator is Generally Available (GA) as of June 2025. As a recently launched product, it remains under active evolution, with frequent enhancements and new features expected. Like any emerging technology, it's important to understand its current capabilities while keeping an eye on its rapidly expanding roadmap.
Today, OpenShift Lightspeed can be configured to work with external LLM providers such as WatsonX, OpenAI, or Azure OpenAI, as well as models hosted on Red Hat OpenShift AI and Red Hat Enterprise Linux AI. You can query these models directly through the interface embedded within OpenShift to get responses based on the OpenShift documentation. Currently, OpenShift Lightspeed is capable of generating context-aware responses in combination with links to relevant sections of the official documentation. Additionally, OpenShift Lightspeed provides concrete CLI commands to help users take action directly.
With this GA release, OpenShift Lightspeed also introduces the cluster interaction feature as a Technology Preview. Cluster interaction allows the LLM to enhance its understanding by accessing real-time information from the OpenShift cluster, to deliver more tailored and context-aware responses. In this two-part series, we will not make use of this feature, so all results are based solely on the prompts and static manifests provided by the user.
Defining our benchmark
Before diving into our experiment, let’s establish a few criteria for the test. We’ll work with the latest available version of OpenShift Lightspeed (1.0), configured to run with the Azure OpenAI GPT-4 model.
For the evaluation, two representative exercises have been selected, similar to those that may appear in the exam. Each exercise covers multiple aspects and OpenShift resources from different areas. In this first blog, we’ll go through Exercise 1: Cluster self-service setup. In the next installment, we’ll look at Exercise 2: Secure applications.
We'll start by feeding OpenShift Lightspeed with the original questions, similar to how they may appear in the exam. If needed, we’ll adjust the phrasing slightly to provide additional context and help the model better understand the question. When relevant, we’ll also include attachments to enrich the prompt.
Now, how will we measure success? Each response will be graded based on its relevance and technical accuracy:
- If OpenShift Lightspeed provides a clear and actionable set of steps and commands to solve the task, it earns a Correct (100%) mark.
- If the response offers helpful guidance or general direction but is not fully detailed or precise for the specific question, it’ll be marked as Partially Correct (50%).
- And if the answer is off-topic, vague, or doesn’t address the question, it’ll be considered Incorrect (0%).
The bar to pass is an average score of 70%, the same as for the certification exam. Now it’s OpenShift Lightspeed’s moment to shine!
Exercise 1: Cluster self-service setup
In this section, we will compile all the questions that have been asked to OpenShift Lightspeed for Exercise 1. Interested in a specific topic? Here’s an index that corresponds to the questions we have asked OpenShift Lightspeed:
- Creating users and groups
- Configuring persistent access control with RBAC
- Assigning custom roles to a user group
- Granting full cluster-admin privileges
- Enforcing resource quotas
- Applying resource limits per-workload
- Defining default resource requests and limits
- Automating base project configuration with Templates
- Deploying containerized workloads
- Managing projects, groups, and role assignments
Because every question is related and part of the same exercise, we will use the same chat session for the entire process. For each question, we will explain the decisions made when formulating it, such as adding attachments. Then, a screenshot of the response provided by OpenShift Lightspeed will be included. To avoid making this blog too long, non-essential paragraphs such as validation steps or links to documentation have been omitted. Finally, each question will conclude with an analysis of the response.
1. Creating users and groups
Question 1: Create the groups with the specified users in the following table.
| Group | User |
| --- | --- |
| platform | do280-platform |
| presenters | do280-presenter |
| workshop-support | do280-support |
We’ll begin the first exercise by opening a blank chat session in OpenShift Lightspeed, without any prior context. We’ll paste the question and the table directly into the message box, which means the table’s structure will be lost. Still, we want to see whether OpenShift Lightspeed is able to correctly interpret the rows and columns, and whether the response shown in Figure 1 is accurate.

The response is perfect; OpenShift Lightspeed was able to fully understand the question and correctly interpret the table format. Once the commands were executed, the groups and users were created and assigned properly. We’re definitely going to give this first question a Correct 🟢 mark.
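For reference, creating these groups from the CLI is a one-liner per group; a minimal sketch using the names from the table:

```bash
# Create each group and add its member in a single command
oc adm groups new platform do280-platform
oc adm groups new presenters do280-presenter
oc adm groups new workshop-support do280-support

# Confirm the groups and their users
oc get groups
```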
2. Configuring persistent access control with RBAC
Question 2: Ensure that only users from the following groups can create projects. Because this exercise requires steps that restart the Kubernetes API server, this configuration must persist across API server restarts.
| Group |
| --- |
| platform |
| presenters |
| workshop-support |
For this second question, the task is not about performing a specific action (like creating users in the previous case) but about stating what we want to achieve, leaving it up to OpenShift Lightspeed to infer the necessary steps to accomplish it. See Figure 2.

In this case, let’s divide the response into three parts. First, OpenShift Lightspeed gives us a command to remove the self-provisioner cluster role for all users. This procedure is correct. However, OpenShift Lightspeed suggests removing the role from the system:authenticated group, which makes the command fail when executed. The group we should remove the role from is system:authenticated:oauth. Quite similar, but not the same. It’s worth noting, though, that the text generated by OpenShift Lightspeed does mention that we need to remove the binding from the system:authenticated:oauth group, even if the command states the group incorrectly.
When we get to Step 2, we check that the commands provided are correct. The cluster role that permits creating projects in OpenShift is assigned to the groups specified in the table. So we can consider this step as perfectly solved.
And now we reach the last part of the question: we want to make the changes permanent. To do so, as OpenShift Lightspeed suggests, we need to modify the self-provisioners cluster role binding and change the annotation to false. You might be thinking that we should give this part a 100% mark, right?
Well, in this scenario we need to take into account that the self-provisioners cluster role binding no longer exists (it was deleted in Step 1). Now we have three different cluster role bindings, one for each group in the table: self-provisioner, self-provisioner-0, and self-provisioner-1. As you can see, when we want to patch the self-provisioners (plural) resource as OpenShift Lightspeed suggests, it does not exist and the command fails.
Considering all the points mentioned above, we see that the procedure makes sense, but it was not 100% correct in our scenario. If OpenShift Lightspeed were aware of the existing resources in the cluster, it could have noticed that the system:authenticated group name was incomplete and that the self-provisioners ClusterRoleBinding no longer existed. But this is not the case. For this reason, we consider its response to be Partially Correct 🟡.
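For context, a corrected version of the first step, together with the bindings for the three groups, would look roughly like this (the exact names of the bindings OpenShift creates may differ in your cluster):

```bash
# Remove the self-provisioner cluster role from the OAuth-authenticated users group
# (note: system:authenticated:oauth, not system:authenticated)
oc adm policy remove-cluster-role-from-group self-provisioner system:authenticated:oauth

# Allow only the groups from the table to create new projects
for group in platform presenters workshop-support; do
  oc adm policy add-cluster-role-to-group self-provisioner "$group"
done

# To keep a modified default binding from being restored after an API server restart,
# its rbac.authorization.kubernetes.io/autoupdate annotation is typically set to "false"
```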
3. Assigning custom roles to a user group
Question 3: The workshop-support group requires the following roles in the cluster:
- The admin role to administer projects.
- A custom role that is provided in the groups-role.yaml file. You must create this custom role to enable support members to create workshop groups and to add workshop attendees.
In this third question we have a slightly different format: it's presented as two separate bullet points. The second bullet requires us to attach the groups-role.yaml file to the query so that OpenShift Lightspeed can read it and apply the necessary modifications. Let’s see how well OpenShift Lightspeed handles attachments (Figure 3).

OpenShift Lightspeed suggests creating a new file called manage-groups.yaml, even though we had already attached a file with the same content but a different name. While the procedure works, it could have simply suggested applying the attached file instead of creating a new one with identical content. This seems to be because OpenShift Lightspeed does not analyze the name of the file attached to the query, only its content. Even though additional steps are being added, the process is correct and successfully answers all the questions, so we can assign it a Correct 🟢 mark.
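Since the role definition is already attached, a shorter path would be to apply the provided file as-is and then create the bindings; a rough sketch, assuming groups-role.yaml defines a cluster-scoped role whose name you substitute below:

```bash
# Create the custom role exactly as provided, without rewriting it
oc apply -f groups-role.yaml

# Let workshop-support administer projects across the cluster
oc adm policy add-cluster-role-to-group admin workshop-support

# Bind the custom role from groups-role.yaml (replace with the role name defined in the file)
oc adm policy add-cluster-role-to-group <custom-role-name> workshop-support
```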
4. Granting full cluster-admin privileges
Question 4: The platform group must be able to administer the cluster without restrictions.
This will be the last question about users, groups, and roles (Figure 4). Remember that we will continue using the same chat session to ensure OpenShift Lightspeed has the full context from the previous three questions.

This was a fairly simple question, so OpenShift Lightspeed had no trouble providing a correct answer. With that single command, the necessary role to manage the cluster is assigned. That’s two Correct 🟢 answers in a row!
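For completeness, granting unrestricted administration to the group boils down to a single binding along these lines:

```bash
# Give the platform group full cluster-admin privileges
oc adm policy add-cluster-role-to-group cluster-admin platform
```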
5. Enforcing resource quotas
Question 5: All the resources that the cluster creates with a new workshop project must use workshop as the name for grading purposes. Each workshop must enforce the following maximum constraints:
- The project uses up to 2 CPUs.
- The project uses up to 1 Gi of RAM.
- The project requests up to 1.5 CPUs.
- The project requests up to 750 Mi of RAM.
Now we are starting a new topic. Our environment has been provisioned with a quota.yaml template to be completed by the user to match the requirements. So we will need to attach it to the query, too. See Figure 5.

Although the full YAML file isn’t visible in Figure 5, it has been correctly modified to meet the memory and CPU requirements. Additionally, OpenShift Lightspeed provides a brief explanation for each of the completed fields, making it easier for the user to understand.
However, as we saw earlier, when applying the resource it assumed a different file name (workshop-resourcequota.yaml) than the one actually provided in the attachment (quota.yaml). It would be nice if OpenShift Lightspeed could also pick up the name of the attachment and include it in the response, although it's fairly obvious that the file it asks you to apply is the same one you've attached.
That said, the steps themselves are technically correct and you achieve the desired result, so the fair assessment here would be to mark it as Correct 🟢. The good streak goes on!
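For reference, a ResourceQuota that meets these constraints would look roughly like the following; it is shown inline here, but in the exercise the same content completes the provided quota.yaml (the object must be named workshop for grading):

```bash
# ResourceQuota enforcing the per-project maximums from the question
oc apply -f - <<'EOF'
apiVersion: v1
kind: ResourceQuota
metadata:
  name: workshop
spec:
  hard:
    limits.cpu: "2"        # the project uses up to 2 CPUs
    limits.memory: 1Gi     # the project uses up to 1 Gi of RAM
    requests.cpu: 1500m    # the project requests up to 1.5 CPUs
    requests.memory: 750Mi # the project requests up to 750 Mi of RAM
EOF
```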
6. Applying resource limits per-workload
Question 6: Each workshop must enforce constraints to prevent an attendee's workload from consuming all the allocated resources for the workshop:
- A workload uses up to 750m CPUs.
- A workload uses up to 750 Mi.
This question is similar to the previous one, but this time using Limit Ranges. As we did before, we’re going to attach the limitrange.yaml file to our question so that OpenShift Lightspeed can modify it for us (Figure 6).

We’re in the same situation as the previous question. The steps are correct. The file is modified to meet the requirements, but when it’s applied, the original name of the attached file is not used. As we saw before, this could be a future enhancement, but when it comes to the accuracy of the steps, OpenShift Lightspeed is spot on. So another Correct 🟢 mark.
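A LimitRange expressing those per-workload caps would look something like this (again shown inline; in practice it completes the attached limitrange.yaml):

```bash
# LimitRange capping what a single container can consume
oc apply -f - <<'EOF'
apiVersion: v1
kind: LimitRange
metadata:
  name: workshop
spec:
  limits:
  - type: Container
    max:
      cpu: 750m      # a workload uses up to 750m CPUs
      memory: 750Mi  # a workload uses up to 750 Mi of RAM
EOF
```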
7. Defining default resource requests and limits
Question 7: Each workshop must have a resource specification for workloads:
- A default limit of 500m CPUs.
- A default limit of 500 Mi of RAM.
- A default request of 0.1 CPUs.
- A default request of 250 Mi of RAM.
For this question, the preceding requirements need to be included in the limitrange.yaml file that we attached earlier. Let’s test whether, keeping the same chat session, OpenShift Lightspeed is able to understand that these requirements need to be added to the file provided in a previous query, without attaching it again (Figure 7). It will also be interesting to see whether the response returns only the limits indicated above or whether the model is able to combine them with the output file from the previous question. That would be top-notch!

As expected, OpenShift Lightspeed correctly understood the requirements and updated the YAML file accordingly. This is a great example of why it's important to keep working within the same chat session: it allows OpenShift Lightspeed to retain context and apply previously learned information to new queries.
Unfortunately, it wasn't able to build on the response from the previous question and combine the requirements into a single file. In fact, this creates a problem: when we apply the new file, the resource name is the same as the one in the previous question, so the changes are overwritten and we end up losing the previously specified max.cpu and max.memory values.
The response is technically correct. Nevertheless, OpenShift Lightspeed has not been able to process the whole context and take the previously created limit range into consideration in this particular scenario. If all the limits had been included in the same question, it would have handled it properly, but since they were split, that wasn’t the case. So we’ll assign it a Partially Correct 🟡 score.
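For the record, avoiding the overwrite simply means keeping the maximums from Question 6 and the new defaults in the same LimitRange; a merged sketch:

```bash
# Single LimitRange combining the earlier maximums with the new defaults
oc apply -f - <<'EOF'
apiVersion: v1
kind: LimitRange
metadata:
  name: workshop
spec:
  limits:
  - type: Container
    max:
      cpu: 750m
      memory: 750Mi
    default:          # default limits when a workload does not set any
      cpu: 500m
      memory: 500Mi
    defaultRequest:   # default requests when a workload does not set any
      cpu: 100m       # 0.1 CPUs
      memory: 250Mi
EOF
```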
8. Automating base project configuration with Templates
Question 8: You must set up the cluster so that when the presenter creates a workshop project, the project gets a base configuration. Each workshop project must have this additional default configuration:
- A local binding for the presenter user to the admin cluster role with the workshop name.
- The workshop=project_name label to help to identify the workshop workload.
- Each workshop must accept traffic only from within the same workshop by using the label workshop=project_name, or traffic coming from the ingress controller by using the label policy-group.network.openshift.io/ingress: "".
We've now reached what might be the most complicated question for OpenShift Lightspeed that we’ve asked so far in this blog. It covers several different topics and requires a complex configuration. To address it, a network policy is required. The environment includes a sample networkpolicy.yaml template, which we will attach to the question. Let’s see if OpenShift Lightspeed is capable of understanding all three requirements and filling in the gaps with the necessary intermediate steps (Figure 8).

As previously mentioned, this question is a bit more complex, so let's walk through the response step by step. First, OpenShift Lightspeed generates a template based on the given requirements. However, the template is somewhat generic, as it does not use the OpenShift-specific command oc adm create-bootstrap-project-template, which generates the default project request template that can then be customized. As a result, the template provided by OpenShift Lightspeed is not entirely accurate.
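For comparison, the documented workflow starts from that default template and then points the cluster project configuration at the customized copy; a rough sketch (the edits adding the RoleBinding, the label, and the NetworkPolicy are left out here):

```bash
# Generate the default project request template as a starting point
oc adm create-bootstrap-project-template -o yaml > template.yaml

# After editing template.yaml with the exercise requirements, store it in openshift-config
oc create -f template.yaml -n openshift-config

# Point new project creation at the customized template
# (set spec.projectRequestTemplate.name to the template's name)
oc edit project.config.openshift.io/cluster
```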
To start, one of the requirements was to label the project with workshop=project_name. Instead of using the OpenShift Project resource, OpenShift Lightspeed creates a Kubernetes Namespace resource:
- apiVersion: v1
  kind: Namespace
  metadata:
    name: ${PROJECT_NAME} # Placeholder for the project name
    labels:
      workshop: ${PROJECT_NAME} # Label to identify the workshop workload
Furthermore, while OpenShift Lightspeed correctly assigns the admin role to the specified user, the roleRef section is missing a mandatory field: the roleRef.apiGroup: rbac.authorization.k8s.io line is required but was not included.
roleRef:
  kind: ClusterRole
  name: admin
Additionally, the generated NetworkPolicy is more restrictive than required. In this version, the policy only applies to pods in the namespace that are labeled with workshop=project_name. However, the requirement was to apply the policy to all pods within the namespace, so the correct statement would be to use podSelector: {}.
spec:
  podSelector:
    matchLabels:
      workshop: ${PROJECT_NAME} # Label to restrict traffic within the workshop
This question would require much more context for OpenShift Lightspeed to truly understand what we were trying to achieve. Additionally, the rest of the steps suggested by OpenShift Lightspeed deviate significantly from what actually needed to be done. Unfortunately, that means we have to give our first Incorrect 🔴 score. Nobody's perfect…
9. Deploying containerized workloads
Question 9: Use the following container image stored in the provided registry: registry.ocp4.example.com:8443/redhattraining/hello-world-nginx:v1.0, which listens on port 8080, to simulate a workshop workload.
To get back on track, let’s move on to something a bit easier. We’ll now set up a workload to verify the network policy in action, as shown in Figure 9.

The response is quite solid. First, it creates the workload and specifies the listening port, which is exactly what the prompt required. However, OpenShift Lightspeed goes a step further by adding extra actions based on the context it has from previous questions.
In the following steps, it suggests labeling the workload and applying the network policy. This is a good move, as it aligns with the requirements we explored earlier. That said, OpenShift Lightspeed doesn’t seem to recall that all of this was supposed to be handled via the project template, so theoretically, these steps wouldn't be necessary.
In any case, all the steps outlined in the answer are technically correct, but they don't fit this specific scenario. Therefore, we’re going to mark this response as Partially Correct 🟡.
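For reference, a minimal way to create such a workload from the CLI (the deployment name workload is our choice, not part of the question) might be:

```bash
# Create a Deployment from the provided image, declaring the port it listens on
oc create deployment workload \
  --image=registry.ocp4.example.com:8443/redhattraining/hello-world-nginx:v1.0 \
  --port=8080
```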
10. Managing projects, groups, and role assignments
Question 10: As the do280-presenter user, you must create a workshop with the do280 name. As the do280-support user, you must create the do280-attendees group with the do280-attendee user, and assign the edit role to the do280-attendees group.
And with this, we reach the final question of this exercise. We come full circle by creating users and assigning permissions, just as we did at the beginning. Back then, OpenShift Lightspeed handled it flawlessly, but this time, each action must be performed by a different user. Let’s see how it adapts to this added complexity (Figure 10).

And we finish with a perfect answer! The presenter was used to create the project, and the support user took care of creating the group, linking it to the user and granting the necessary permissions. A great way to close the exercise with our final Correct 🟢 score!
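As a reference, the sequence would look roughly like this, with each block run as the corresponding user (passwords are placeholders for the lab credentials):

```bash
# As the presenter, create the workshop project
oc login -u do280-presenter -p <password>
oc new-project do280

# As the support user, create the attendees group and grant it edit rights on the project
oc login -u do280-support -p <password>
oc adm groups new do280-attendees do280-attendee
oc adm policy add-role-to-group edit do280-attendees -n do280
```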
Preliminary conclusions
It’s time to take a first look at the results delivered by OpenShift Lightspeed during this first sample exercise of the OpenShift certification exam. Let’s review the scores, from lowest to highest:
OpenShift Lightspeed only gave 1 Incorrect answer. It was the question where we needed to create a project template. This might be the most complex task of the entire exercise, so it makes sense that it posed a challenge.
We continue climbing the rankings. When it comes to answers that were technically correct but didn’t fully align with the question, we had 3 Partially Correct scores. One of them involved attempting to patch a ClusterRoleBinding that had been deleted in a previous step, caused by a misunderstanding of the current cluster state. Another case occurred while creating LimitRanges, when OpenShift Lightspeed failed to combine the requirements from two related questions into a single file, resulting in the previous configuration being overwritten. These are very specific scenarios, but they would unfortunately result in a failed question in the actual exam.
Finally, let’s end on a high note. OpenShift Lightspeed has proven to be quite good at understanding our questions, especially when paired with its knowledge of OpenShift. This is reflected in the 6 Correct scores it achieved. What’s even more impressive is that these correct answers span a wide range of topics: user and group management, permissions, limits and quotas, project creation, application deployment, and more. In many of these cases, we also made use of the attachments feature, which highlights just how important it is to provide the model with the right context. A truly impressive result!
Adding up all the scores and calculating the average (6 × 100% + 3 × 50% + 1 × 0%, divided over 10 questions), we get a 75% success rate, which means that OpenShift Lightspeed stays above the passing mark for this certification exam!
But this isn’t the end of the story. To get a more representative sample, we’re going to put it to the test with a second exercise. That’s something we’ll cover in the next blog.
While a 75% success rate is impressive, it’s critical to remember that passing an exam and operating reliably in production are two very different challenges. Real-world environments—especially in sectors like healthcare, finance, government, and energy—demand precision, judgment, and accountability. This is where Red Hat Certified Professionals remain indispensable. Their experience and deep understanding of OpenShift allow them to not only use tools like OpenShift Lightspeed more effectively, but also to validate, interpret, and contextualize AI-generated solutions. OpenShift Lightspeed enhances productivity and reduces friction, but it is even more valuable when guided by the informed hands of an expert.
Part 2: Securing your applications
In this article, we’ve put OpenShift Lightspeed to the test with our first exam-style exercise. However, a single exercise isn’t enough to truly assess its performance. That’s why we’ve prepared a second post, where we’ll continue testing it with a new challenge and fresh topics. If you’re eager to keep learning, head over to Part 2 and join us for the next round!