kube-system/image-package-extractor getting OOMKilled

Hello folks,

I've found many times that image-package-extractor is unable to extract packages from some of the images I'm running, because the DaemonSet creates its pods with a 50Mi memory request and the same amount for the limit.

Apparently something (an addon manager?) is controlling the DaemonSet and resets the request/limit values every time I try to change them manually.

Any ideas on how I can customize that DaemonSet so that it's finally able to scan my images properly?
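For reference, here's roughly the patch I've been applying, as a sketch only: it assumes the extractor is the first container in the pod template, and 200Mi is just an arbitrary bump. Whatever reconciles the addon reverts it within minutes:

```sh
# JSON patch bumping the memory request/limit on the DaemonSet pod template.
# Assumes container index 0 is the extractor; adjust if your spec differs.
kubectl -n kube-system patch daemonset image-package-extractor --type=json -p='[
  {"op": "replace", "path": "/spec/template/spec/containers/0/resources/limits/memory",   "value": "200Mi"},
  {"op": "replace", "path": "/spec/template/spec/containers/0/resources/requests/memory", "value": "200Mi"}
]'
```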


The same is happening to me. Anyone else?

DrJJ

Same issue ever since we switched to GKE 1.25. It worked fine on GKE 1.23

I filed a support ticket. The issue is known on their side, and it's fixed at least as of 1.27.1-gke.400.

Hi, I was wondering how you are resolving this issue on the DaemonSet?

The reason I ask is that I'm on GKE Autopilot, and when I try to change it I receive an error about insufficient permissions.

```sh
kubectl edit ds image-package-extractor --namespace=kube-system
error: daemonsets.apps "image-package-extractor" could not be patched: daemonsets.apps "image-package-extractor" is forbidden: User "jk" cannot patch resource "daemonsets" in API group "apps" in the namespace "kube-system": GKE Warden authz [denied by managed-namespaces-limitation]: the namespace "kube-system" is managed and the request's verb "patch" is denied
You can run `kubectl replace -f /tmp/kubectl-edit-751860510.yaml` to try this update again.

kubectl replace -f /tmp/kubectl-edit-751860510.yaml
Error from server (Forbidden): error when replacing "/tmp/kubectl-edit-751860510.yaml": daemonsets.apps "image-package-extractor" is forbidden: User "jk" cannot update resource "daemonsets" in API group "apps" in the namespace "kube-system": GKE Warden authz [denied by managed-namespaces-limitation]: the namespace "kube-system" is managed and the request's verb "update" is denied
```

I am still facing this issue in 1.27.4-gke.900. This is a standard GKE cluster and I can't modify it. It uses low resource limits:

```yaml
resources:
  limits:
    memory: 50Mi
  requests:
    cpu: 10m
    memory: 50Mi
```

That makes it get OOMKilled. Attaching a memory-consumption chart (oom.png).
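If anyone wants the numbers behind the chart, these read-only checks work even where writes to kube-system are blocked (note that `kubectl top` requires metrics-server):

```sh
# Current memory usage of the extractor pods, to compare against the 50Mi limit
kubectl -n kube-system top pod | grep image-package-extractor

# Last termination reason per pod; "OOMKilled" confirms the limit is the culprit
kubectl -n kube-system get pods \
  -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.containerStatuses[*].lastState.terminated.reason}{"\n"}{end}' \
  | grep image-package-extractor
```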

Same as @shubhagsaxena above, we are on `1.27.4-gke.900` and getting the same OOM kills. This is causing our OOM pod alerts to fire and pollute our alerting channels.

Any ideas how this can be addressed, e.g. by increasing the memory limit? Manually updating the `DaemonSet/image-package-extractor` pod template?
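In the meantime we're considering just muting the alert for these pods. A rough sketch, assuming the alert is driven by kube-state-metrics via a PrometheusRule (the rule name and threshold here are hypothetical; only the metric and labels are standard kube-state-metrics):

```yaml
# Hypothetical OOM alert rule, amended to exclude the extractor pods
- alert: PodOOMKilled
  expr: |
    sum by (namespace, pod) (
      kube_pod_container_status_last_terminated_reason{
        reason="OOMKilled",
        pod!~"image-package-extractor.*"
      }
    ) > 0
  for: 5m
```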

Hey @mix4242, disable "Workload vulnerability scanning" on your GKE cluster and be happy; the DaemonSet (and the OOM kills) will go away.
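If you prefer the CLI, I believe the equivalent is along these lines, as part of the GKE security posture settings (double-check the flag against your gcloud version):

```sh
# CLUSTER_NAME and LOCATION are placeholders for your own values
gcloud container clusters update CLUSTER_NAME \
  --location=LOCATION \
  --workload-vulnerability-scanning=disabled
```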

Update:
Got word from Google that this is a known issue and their product team has updated image-package-extractor to version 0.0.41 to fix it.
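To check which extractor version a cluster is actually running, reading the DaemonSet spec works even where edits are denied:

```sh
# Prints the container image (and thus the version tag) from the DaemonSet
kubectl -n kube-system get ds image-package-extractor \
  -o jsonpath='{.spec.template.spec.containers[*].image}'
```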

Still happening with 0.0.41 in v1.27.4-gke.900

Now happening on 1.30.1-gke.1329000. Had no problems with 1.30.0.
