
Why do Kubernetes pods stay in the pending state?

May 1, 2023

Kubernetes is an open-source platform for managing containerized services. This portable system simplifies automation and configuration. You can run an app in a Kubernetes cluster and connect it to the IBM Cloud Kubernetes Service through a VPN. In this article, we will focus on why your Kubernetes pod stays in the pending state.

Use of VPN in Kubernetes

Install the VPN server in the cluster. After that, expose it to devices outside the cluster using a NodePort service; the chosen port is opened on every cluster node. Follow these simple steps (a sketch of the NodePort Service follows the list).

  • Connect the VPN server pod to the cluster.
  • Connect it to the Kubernetes services it needs to reach.
  • Adjust the VPN configuration accordingly.
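
As a rough sketch of the NodePort step, the Service below exposes a hypothetical VPN server pod. The label app: vpn-server and the OpenVPN-style UDP port 1194 are assumptions, not details taken from any particular VPN product.

    # Minimal NodePort Service for a VPN server pod (names and ports are placeholders)
    apiVersion: v1
    kind: Service
    metadata:
      name: vpn-nodeport
    spec:
      type: NodePort
      selector:
        app: vpn-server        # must match the VPN pod's labels
      ports:
        - protocol: UDP
          port: 1194           # port exposed inside the cluster
          targetPort: 1194     # port the VPN container listens on
          nodePort: 31194      # port opened on every node (default range 30000-32767)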

Kubernetes pod stuck in pending?

What happens if your Kubernetes pod stays in the pending state? It means the pod cannot be scheduled onto a node, usually because no node has adequate free resources. If the pod also requests a hostPort, scheduling is limited further: the number of places the pod can run depends on the number of nodes in the Kubernetes cluster.

Reasons for Failure of Scheduling

Identify the problem as early as possible. Your debugging options when a Kubernetes pod stays in the pending state include the following:

  • Debug Pods
  • Debug Replication Controllers
  • Debug Services

How you continue debugging depends on the pods’ status. What if the pod remains pending? This indicates the pod cannot be scheduled onto a node.
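
To see the scheduler’s explanation, describe the pod and read the Events section at the end of the output (the pod name below is a placeholder):

    kubectl describe pod my-pending-pod

    # The Events section typically ends with a FailedScheduling message, roughly like:
    #   Warning  FailedScheduling  default-scheduler
    #   0/3 nodes are available: 3 Insufficient cpu.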


The scheduler’s messages usually point to one of the following causes:

  • You have insufficient resources because the cluster’s CPU or memory has been used up. In that case, you have three choices: delete pods, add new nodes, or reduce resource requests.
  • You’re using hostPort. Binding a pod to a hostPort limits where it can be scheduled. In most cases you do not need a hostPort at all; use a Service object to expose the pod instead (see the sketch after this list).
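
As an illustrative sketch of the second point, you can drop the hostPort and put a Service in front of the pod. The names, labels, and port numbers below are assumptions:

    # Instead of this in the pod spec (which limits where the pod can be scheduled):
    #   ports:
    #     - containerPort: 8080
    #       hostPort: 8080
    # expose the pod through a Service:
    apiVersion: v1
    kind: Service
    metadata:
      name: my-app
    spec:
      selector:
        app: my-app         # must match the pod's labels
      ports:
        - port: 80          # port clients connect to
          targetPort: 8080  # containerPort the app listens on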

Waiting Status

Your pod remaining in the ‘waiting’ status means it has been scheduled onto a worker node, yet it cannot run on that machine. Most often, waiting pods occur because the container image cannot be pulled. If that happens, you can do these three things (see the commands after the list):

  • Check that the image name is correct.
  • Verify the image has been pushed to the registry.
  • Run a manual docker pull <image> on your machine to confirm the image can be pulled.
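
For example (the pod name, registry, and image tag are placeholders):

    # Confirm which image the pod is trying to pull
    kubectl describe pod my-pod | grep -i image

    # Try pulling the same image manually from a machine with registry access
    docker pull registry.example.com/my-app:1.0.0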

The Pod Crashes

What if your pod suddenly crashes? Either the pod needs debugging after it has been scheduled, or it will not run properly because of an incorrect pod description.

A common mistake is a mistyped key name. For example, if you misspell “command”, the pod is still created, but it cannot use the command line you intended.

In this case, you need to delete your pod and create it again. This time, use the validate option: run kubectl apply --validate -f mypod.yaml.

Then, check the pod on the API server; it should match the pod you intended to create. The API server’s version may contain lines that are not in your original file, and that is expected. However, if lines from your original file are missing from the API server’s version, your pod spec has a problem.
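
One way to make that comparison, assuming the original manifest is mypod.yaml and the pod is named mypod:

    # Re-create the pod with schema validation turned on
    kubectl apply --validate -f mypod.yaml

    # Export the pod as the API server sees it, then compare it with the original
    kubectl get pod mypod -o yaml > mypod-on-apiserver.yaml
    diff mypod.yaml mypod-on-apiserver.yaml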

Debugging Services

Services provide load balancing across several pods. Follow these directions for debugging Service issues (example commands follow the list):

  • Check the Service’s endpoints through the API. The API server exposes an Endpoints resource for every Service object.
  • The number of endpoints should match the number of pods backing the Service. If endpoints are missing, list the pods using the labels the Service selects on.
  • If the pod list matches expectations but the endpoints are still empty, the right ports may not be exposed. Make sure the pod’s containerPort matches the Service’s targetPort.
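
The checks above map to commands like these (the Service name, label, and pod name are placeholders):

    # 1. Look at the endpoints recorded for the Service
    kubectl get endpoints my-service

    # 2. List the pods the Service's selector should match
    kubectl get pods --selector=app=my-app

    # 3. Compare the Service's targetPort with the pod's containerPort
    kubectl describe service my-service
    kubectl get pod my-app-pod -o yaml | grep -i containerport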

Network Traffic

If network traffic is not being forwarded, you cannot connect to the service, or the connection gets dropped as soon as it is made. Most likely, the proxy cannot contact your pods.

Check the following:

  • Are the pods working correctly? Check the restart count and debug the pods if needed.
  • Can you connect directly to the pods? Get the pod’s IP address and try connecting to that IP directly (see the commands after this list).
  • Does the app serve on the port you configured? Kubernetes does not perform port remapping: if the app serves on 8080, the containerPort field must be 8080.
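
A quick way to run the second and third checks (the pod name and port are placeholders):

    # Show the pod's IP address and restart count
    kubectl get pod my-app-pod -o wide

    # Start a temporary pod inside the cluster and connect to that IP directly,
    # bypassing the Service
    kubectl run tmp-shell --rm -it --restart=Never --image=busybox -- /bin/sh
    # then, inside the shell:
    #   wget -qO- http://<pod-ip>:8080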

Inadequate CPU or Memory

What does ‘insufficient cpu’ or ‘insufficient memory’ mean? It simply indicates that the pod does not fit on any node: no node has enough unreserved CPU or memory to satisfy the pod’s requests.

You can find out how many resources are actually in use. Run this command: kubectl describe nodes. It gives you the following information (an abbreviated sample follows the list):

  • CPU/memory capacity of each node
  • CPU/memory requested by pods and containers
  • CPU/memory allocated and still free per node
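
The relevant part of the output looks roughly like this (the numbers are illustrative):

    kubectl describe nodes

    # ...
    # Allocatable:
    #   cpu:     2
    #   memory:  3977916Ki
    # ...
    # Allocated resources:
    #   Resource  Requests      Limits
    #   --------  --------      ------
    #   cpu       1510m (75%)   1 (50%)
    #   memory    1Gi (26%)     2Gi (52%)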

One common error when defining resources for your cluster is failing to account for the resources that system components use. These containers run in addition to the ones your configuration specifies.

These system components ship with Kubernetes by default and run in the kube-system namespace, so you will not see them in the default namespace.

The system services consume at least one CPU per node. This matters most when:

  • You have very small nodes (2 CPUs).
  • A full set of system services runs on every node.

You can use these approaches to deal with the problem:

  • Reduce CPU requests until the pod can be scheduled (a sketch follows this list).
  • Remove unnecessary pods to free up CPU.
  • Add a new worker node to increase available CPU. Avoid small nodes for production clusters; use bigger nodes when creating clusters, with four to eight CPUs as a minimum.
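
A sketch of the first option: lower the CPU request in the pod spec so the pod fits on a smaller node. The pod name, image, and exact values are assumptions.

    apiVersion: v1
    kind: Pod
    metadata:
      name: my-app-pod
    spec:
      containers:
        - name: my-app
          image: my-app:1.0.0      # placeholder image
          resources:
            requests:
              cpu: 250m            # reduced request so the scheduler can place the pod
              memory: 256Mi
            limits:
              cpu: 500m
              memory: 512Mi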

EKS and VPC

EKS is an essential resource for Kubernetes on AWS. Elastic Kubernetes Service (EKS) is Amazon’s managed Kubernetes service, which simplifies running Kubernetes on Amazon Web Services (AWS).

EKS eliminates the need to install, operate, and maintain the control plane. It also automatically detects and replaces unhealthy nodes.

When might the EKS cluster setup cause trouble? A common issue is pods that cannot connect to the cluster’s API server, which can happen when only public endpoint access is enabled. Consider these points (a CLI example follows the list):

  • Enable private endpoint access so worker nodes and pods inside the VPC can connect to the cluster endpoint.
  • Set up the cluster security group: add an ingress rule that allows the pods’ security group to reach the cluster on port 443.
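
For example, one way to turn on private endpoint access with the AWS CLI; the cluster name and region are placeholders, so verify the flags against the current AWS documentation:

    # Allow worker nodes and pods inside the VPC to reach the cluster endpoint
    aws eks update-cluster-config \
      --region us-east-1 \
      --name my-cluster \
      --resources-vpc-config endpointPublicAccess=true,endpointPrivateAccess=true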

The VPC is a virtual private cloud: a virtual network dedicated to your account that is logically isolated from other networks in the cloud.

What to Expect Ahead?

While you cannot avoid encountering issues with Kubernetes, you can proactively diagnose them with an Application Performance Management tool such as Stackify Retrace. Retrace offers container support for Kubernetes to monitor and troubleshoot applications. Try your free, 14-day trial of Retrace today.
