snc-library: Delete the failed pods before checking for available ones #921

Open · wants to merge 1 commit into master
5 changes: 5 additions & 0 deletions snc-library.sh
@@ -241,8 +241,13 @@ function no_operators_degraded() {
${OC} get co -ojsonpath='{.items[*].status.conditions[?(@.type=="Degraded")].status}' | grep -v True
}

function retry_failed_pods() {
${OC} delete pods --field-selector=status.phase=Failed -A
}

function all_pods_are_running_completed() {
local ignoreNamespace=$1
retry_failed_pods
! ${OC} get pod --no-headers --all-namespaces --field-selector=metadata.namespace!="${ignoreNamespace}" | grep -v Running | grep -v Completed
Contributor:
My understanding of all_pods_are_running_completed is that it tells us if there are pods which are neither in the Running state nor in the Completed state. If we start deleting these pods before doing the check, is it worth keeping the check at all?

Which pods get to this weird ContainerStatusUnknown state? Is there any explanation for this? Maybe we could delete only the pods in this ContainerStatusUnknown state, if we are sure it's always a false positive for all_pods_are_running_completed?

Member Author:

My understanding of all_pods_are_running_completed is that it tells us if there are pods which are neither in the Running state nor in the Completed state. If we start deleting these pods before doing the check, is it worth keeping the check at all?

We are only deleting the pods which are in the Failed phase; there might be pods in other phases, like Pending (https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#pod-phase).
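For illustration, the same field-selector mechanism can list pods in any given phase, e.g. Pending (a minimal sketch, not part of the change):

# List pods currently in the Pending phase across all namespaces (sketch).
${OC} get pods -A --field-selector=status.phase=Pending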

Which pods get to this weird ContainerStatusUnknown state? Is there any explanation for this?

We did try to describe those pods, but that doesn't give a proper explanation; we just get the phase as Failed :(

Maybe we could delete only the pods in this ContainerStatusUnknown state, if we are sure it's always a false positive for all_pods_are_running_completed?

When a container goes to this state, the pod phase becomes Failed, and that is what the field-selector option matches on. There is no field selector for the ContainerStatusUnknown state itself. It shouldn't give a false positive, because there we are making sure there is no failed container, but also no container in the Pending state.
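For context, here is a sketch of how one could confirm that the Failed pods are indeed the ContainerStatusUnknown ones, since the field selector only exposes the pod phase (the jsonpath query is illustrative, not part of this PR):

# Show namespace, name and terminated-container reason for every Failed pod (sketch).
${OC} get pods -A --field-selector=status.phase=Failed \
  -o jsonpath='{range .items[*]}{.metadata.namespace}{"\t"}{.metadata.name}{"\t"}{.status.containerStatuses[*].state.terminated.reason}{"\n"}{end}'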

Contributor:

Do we have a strict equivalence between Failed and ContainerStatusUnknown? In other words, if a container is in the Failed phase, it always has ContainerStatusUnknown, and if a container has ContainerStatusUnknown, it is always in the Failed phase. The latter is true, but is the former also true, i.e. does Failed imply ContainerStatusUnknown and nothing else? We don't want to ignore Failed containers which are not ContainerStatusUnknown.

Member Author:

Do we have a strict equivalence between Failed and ContainerStatusUnknown? In other words, if a container is in the Failed phase, it always has ContainerStatusUnknown, and if a container has ContainerStatusUnknown, it is always in the Failed phase.

The second assumption is correct: if a container has ContainerStatusUnknown, then it is always in the Failed phase.

The latter is true, but is the former also true, i.e. does Failed imply ContainerStatusUnknown and nothing else? We don't want to ignore Failed containers which are not ContainerStatusUnknown.

There can be many reasons a container can be in the Failed phase, like resource limitations or a dependent resource not being available, etc., and most of the time it retries and changes phase, for example to ContainerCreating or Pending. If we delete a Failed pod which is associated with a deployment (which is the case for all the operators), then a new pod comes up. Even if the pod in the ContainerStatusUnknown state is deleted, as long as it is associated with a deployment a new one will come up, so it is not that we are ignoring it; we are making sure that once we delete the failed one, the new pods initiated by the respective deployment come up green.
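To illustrate that recovery path (the pod name and namespace below are hypothetical), deleting a Failed pod that is owned by a Deployment just makes its ReplicaSet schedule a replacement:

# Delete one Failed pod; the owning Deployment/ReplicaSet creates a new copy (sketch).
${OC} delete pod example-operator-6d4f9c7b5-abcde -n example-operator
# Watch the replacement pod appear and move towards Running.
${OC} get pods -n example-operator -w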

Contributor:

With this explanation, it makes sense to add a retry_failed_pods method which will do the oc delete. It's really unexpected that the method all_pods_are_running_or_completed would delete pods or ensure the failed pods are restarted.

Member Author:

Added retry_failed_pods; this method is called from all_pods_are_running_or_completed.

Contributor:

If you call retry_failed_pods from all_pods_are_running_or_completed, you still have the same issue I described in the previous comment:

It's really unexpected that the method all_pods_are_running_or_completed would delete pods or ensure the failed pods are restarted.

Contributor:

We already delete the failed pods here

retry ${OC} delete pods --field-selector=status.phase=Failed -A

Are there new failed pods appearing while we are waiting for all the pods to be ready?
all_pods_are_running_or_completed has an ignore_namespace parameter, should we also ignore that namespace when deleting?

Member Author:

Are there new failed pods appearing while we are waiting for all the pods to be ready?

@cfergeau yes, that's the reason we are putting it in the all_pods_are_running_or_completed function.

all_pods_are_running_or_completed has an ignore_namespace parameter, should we also ignore that namespace when deleting?

No, we don't need to ignore the namespace when deleting failed pods.

Contributor (@cfergeau, Jul 9, 2024):

@cfergeau yes, that's the reason we are putting it in the all_pods_are_running_or_completed function.

So even something like this is not enough?

diff --git a/snc-library.sh b/snc-library.sh
index f3f2831..3997a06 100755
--- a/snc-library.sh
+++ b/snc-library.sh
@@ -248,13 +248,14 @@ function wait_till_cluster_stable() {
        fi
        if [ $count -eq $numConsecutive ]; then
            echo "Cluster has stabilized"
-           retry ${OC} delete pods --field-selector=status.phase=Failed -A
            break
        fi
        sleep 30s
        a=$((a + 1))
     done
 
+    retry ${OC} delete pods --field-selector=status.phase=Failed -A
+
     # Wait till all the pods are either running or complete state
     retry all_pods_are_running_completed "${ignoreNamespace}"
 }

The removal of this oc delete pods call could be part of this PR.

No, we don't need to ignore the namespace when deleting failed pods.

Why? This makes all_pods_are_running_or_completed even more odd and confusing.
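For reference, if the namespace were to be excluded from the deletion as well, the two requirements can be combined in one field selector (a sketch reusing the ignoreNamespace variable from the existing check, not what the PR currently does):

# Delete Failed pods in every namespace except the ignored one (sketch).
${OC} delete pods -A --field-selector=status.phase=Failed,metadata.namespace!="${ignoreNamespace}"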
