
[Bug]: QueryNode oomkilled due to sudden increase in the number of growing segments #34554

Open
1 task done
ThreadDao opened this issue Jul 10, 2024 · 5 comments
Labels
kind/bug (Issues or changes related to a bug) · priority/critical-urgent (Highest priority; must be actively worked on as someone's top priority right now) · triage/accepted (Indicates an issue or PR is ready to be actively worked on)

@ThreadDao
Contributor

Is there an existing issue for this?

  • I have searched the existing issues

Environment

- Milvus version: 2.4-20240709-0d8defb1
- Deployment mode(standalone or cluster): cluster
- MQ type(rocksmq, pulsar or kafka):    
- SDK version(e.g. pymilvus v2.0.0rc2):
- OS(Ubuntu or CentOS): 
- CPU/Memory: 
- GPU: 
- Others:

Current Behavior

Deploy Milvus

Deploy the cluster level-zero-insert-op-96-4610 with 4 queryNodes:

  queryNode:
    paused: false
    replicas: 4
    resources:
      limits:
        cpu: "8"
        memory: 24Gi
      requests:
        cpu: "4"
        memory: 16Gi
  config:
    dataCoord:
      enableActiveStandby: true
      segment:
        enableLevelZero: true
    indexCoord:
      enableActiveStandby: true
    log:
      level: debug
    queryCoord:
      enableActiveStandby: true
    rootCoord:
      enableActiveStandby: true
    trace:
      exporter: jaeger
      jaeger:
        url: http://tempo-distributor.tempo:14268/api/traces
      sampleFraction: 1

Test

  1. create collection with 2 shards -> index
  2. insert 3m-128d data -> flush
  3. index -> load
  4. concurrent requests: search + insert + delete + flush
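For scale, a back-of-envelope estimate of the raw vector payload from the steps above (all figures are rough assumptions: float32 vectors, ignoring index structures, metadata, and delete buffers):

```python
# Rough estimate of the raw vector payload in the reproduction steps.
# Assumes float32 vectors; ignores index, metadata, and delta buffers.
NUM_ROWS = 3_000_000      # "insert 3m" rows
DIM = 128                 # "128d" vectors
BYTES_PER_FLOAT32 = 4

raw_bytes = NUM_ROWS * DIM * BYTES_PER_FLOAT32
raw_gib = raw_bytes / 2**30
print(f"initial payload: {raw_gib:.2f} GiB")  # ~1.43 GiB

# Under the concurrent insert workload, growing (unsealed, in-memory)
# segments keep accumulating on the delegator until they are sealed and
# flushed. If sealing stalls, growing size is effectively unbounded and
# can exceed the 24Gi pod memory limit configured above.
```

The initial payload itself is small; the OOM comes from growing segments piling up faster than they are sealed during the sustained concurrent inserts.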

Results

One of the queryNodes (a delegator) was OOMKilled.
Metrics of level-zero-insert-op-96-4610: [screenshot]

Expected Behavior

No response

Steps To Reproduce

https://argo-workflows.zilliz.cc/archived-workflows/qa/dff50b01-bbe4-4995-970d-db19181d2f12?nodeId=level-zero-stable-1720537200-171318015

Milvus Log

pods:

level-zero-insert-op-96-4610-etcd-0                               1/1     Running            0               12h     10.104.18.78    4am-node25   <none>           <none>
level-zero-insert-op-96-4610-etcd-1                               1/1     Running            0               12h     10.104.23.150   4am-node27   <none>           <none>
level-zero-insert-op-96-4610-etcd-2                               1/1     Running            0               12h     10.104.19.156   4am-node28   <none>           <none>
level-zero-insert-op-96-4610-milvus-datanode-76bd6668f6-kcmmt     1/1     Running            1 (12h ago)     12h     10.104.5.81     4am-node12   <none>           <none>
level-zero-insert-op-96-4610-milvus-datanode-76bd6668f6-wdl76     1/1     Running            1 (12h ago)     12h     10.104.18.92    4am-node25   <none>           <none>
level-zero-insert-op-96-4610-milvus-indexnode-6cf6fbfbfb-6vztj    1/1     Running            0               12h     10.104.23.159   4am-node27   <none>           <none>
level-zero-insert-op-96-4610-milvus-indexnode-6cf6fbfbfb-q8kz8    1/1     Running            0               12h     10.104.18.108   4am-node25   <none>           <none>
level-zero-insert-op-96-4610-milvus-mixcoord-5c757dfd46-xx2zc     1/1     Running            0               12h     10.104.16.170   4am-node21   <none>           <none>
level-zero-insert-op-96-4610-milvus-proxy-67c78785-nm4b5          1/1     Running            1 (12h ago)     12h     10.104.25.85    4am-node30   <none>           <none>
level-zero-insert-op-96-4610-milvus-querynode-0-67d5895bd4dklpj   1/1     Running            0               12h     10.104.19.167   4am-node28   <none>           <none>
level-zero-insert-op-96-4610-milvus-querynode-0-67d5895bd4hk25l   1/1     Running            0               12h     10.104.30.197   4am-node38   <none>           <none>
level-zero-insert-op-96-4610-milvus-querynode-0-67d5895bd4t2v99   0/1     CrashLoopBackOff   25 (8s ago)     12h     10.104.20.65    4am-node22   <none>           <none>
level-zero-insert-op-96-4610-milvus-querynode-0-67d5895bd4zdbg6   1/1     Running            0               12h     10.104.30.196   4am-node38   <none>           <none>
level-zero-insert-op-96-4610-minio-0                              1/1     Running            0               12h     10.104.18.81    4am-node25   <none>           <none>
level-zero-insert-op-96-4610-minio-1                              1/1     Running            0               12h     10.104.15.70    4am-node20   <none>           <none>
level-zero-insert-op-96-4610-minio-2                              1/1     Running            0               12h     10.104.30.178   4am-node38   <none>           <none>
level-zero-insert-op-96-4610-minio-3                              1/1     Running            0               12h     10.104.27.211   4am-node31   <none>           <none>
level-zero-insert-op-96-4610-pulsar-bookie-0                      1/1     Running            0               12h     10.104.23.153   4am-node27   <none>           <none>
level-zero-insert-op-96-4610-pulsar-bookie-1                      1/1     Running            0               12h     10.104.21.230   4am-node24   <none>           <none>
level-zero-insert-op-96-4610-pulsar-bookie-2                      1/1     Running            0               12h     10.104.24.84    4am-node29   <none>           <none>
level-zero-insert-op-96-4610-pulsar-bookie-init-pw674             0/1     Completed          0               12h     10.104.18.70    4am-node25   <none>           <none>
level-zero-insert-op-96-4610-pulsar-broker-0                      1/1     Running            0               12h     10.104.14.113   4am-node18   <none>           <none>
level-zero-insert-op-96-4610-pulsar-proxy-0                       1/1     Running            0               12h     10.104.27.204   4am-node31   <none>           <none>
level-zero-insert-op-96-4610-pulsar-pulsar-init-q4kcs             0/1     Completed          0               12h     10.104.26.77    4am-node32   <none>           <none>
level-zero-insert-op-96-4610-pulsar-recovery-0                    1/1     Running            0               12h     10.104.1.195    4am-node10   <none>           <none>
level-zero-insert-op-96-4610-pulsar-zookeeper-0                   1/1     Running            0               12h     10.104.18.82    4am-node25   <none>           <none>
level-zero-insert-op-96-4610-pulsar-zookeeper-1                   1/1     Running            0               12h     10.104.25.84    4am-node30   <none>           <none>
level-zero-insert-op-96-4610-pulsar-zookeeper-2                   1/1     Running            0               12h     10.104.23.157   4am-node27   <none>           <none>

Anything else?

No response

ThreadDao added the kind/bug and needs-triage labels on Jul 10, 2024
ThreadDao added this to the 2.4.6 milestone on Jul 10, 2024
ThreadDao added the priority/critical-urgent label on Jul 10, 2024
@ThreadDao
Contributor Author

/assign @congqixia

@congqixia
Contributor

[screenshot]
The target failed to sync, which caused growing segments to get stuck in the delegator. Digging into why the target update failed.

yanliang567 added the triage/accepted label and removed the needs-triage label on Jul 10, 2024
yanliang567 removed their assignment on Jul 10, 2024
@xiaofan-luan
Contributor

xiaofan-luan commented Jul 15, 2024

/assign @bigsheeper
Let's add a seal policy to keep the growing segments of each shard under 4GB

sre-ci-robot pushed a commit that referenced this issue Jul 17, 2024
Seals the largest growing segment if the total size of growing segments of each shard exceeds the size threshold (default 4GB). Introducing this policy helps keep the size of growing segments at a suitable level, alleviating the pressure on the delegator.

issue: #34554

Signed-off-by: bigsheeper <yihao.dai@zilliz.com>
bigsheeper added a commit to bigsheeper/milvus that referenced this issue Jul 18, 2024 (same commit message as above; issue: milvus-io#34554)
yanliang567 modified the milestones: 2.4.6 → 2.4.7 on Jul 19, 2024
sre-ci-robot pushed a commit that referenced this issue Jul 19, 2024 (same commit message as above; issue: #34554, pr: #34692)