milvus: [Bug]: search is very slow

Is there an existing issue for this?

  • I have searched the existing issues

Environment

- Milvus version: v2.0.2
- Deployment mode (standalone or cluster): standalone
- SDK version (e.g. pymilvus v2.0.0rc2): pymilvus v2.0.2
- OS (Ubuntu or CentOS): Docker build on Debian
- CPU/Memory: 32 GB memory

Current Behavior

Search is very slow.

Expected Behavior

No response

Steps To Reproduce

No response

Milvus Log

[2022/05/09 01:13:21.888 +00:00] [INFO] [flush_manager.go:740] [SaveBinlogPath] [SegmentID=433015533994246145] [CollectionID=433015533968031745] [startPos="[]"] ["Length of Field2BinlogPaths"=0] ["Length of Field2Stats"=0] ["Length of Field2Deltalogs"=0] [vChannelName=by-dev-rootcoord-dml_1_433015533968031745v1]
[2022/05/09 01:13:21.888 +00:00] [INFO] [services.go:313] ["receive SaveBinlogPaths request"] [nodeID=30] [collectionID=433015533968031745] [segmentID=433016895989874689] [isFlush=false] [isDropped=false] [startPositions=null] [checkpoints="[{\"segmentID\":433016895989874689,\"position\":{\"channel_name\":\"by-dev-rootcoord-dml_0_433015533968031745v0\",\"msgID\":\"6j0C5IKYAgY=\",\"msgGroup\":\"by-dev-dataNode-30-433015533968031745\",\"timestamp\":433077302522544129},\"num_of_rows\":2267}]"]
[2022/05/09 01:13:21.888 +00:00] [INFO] [services.go:313] ["receive SaveBinlogPaths request"] [nodeID=30] [collectionID=433015533968031745] [segmentID=433015533994246145] [isFlush=false] [isDropped=false] [startPositions=null] [checkpoints="[{\"segmentID\":433015533994246145,\"position\":{\"channel_name\":\"by-dev-rootcoord-dml_1_433015533968031745v1\",\"msgID\":\"6z0C5IKYAgY=\",\"msgGroup\":\"by-dev-dataNode-30-433015533968031745\",\"timestamp\":433077302522544129},\"num_of_rows\":9908}]"]
[2022/05/09 01:13:21.890 +00:00] [INFO] [services.go:362] ["flush segment with meta"] [id=433016895989874689] [meta=null]
[2022/05/09 01:13:21.892 +00:00] [INFO] [services.go:362] ["flush segment with meta"] [id=433015533994246145] [meta=null]
[2022/05/09 01:13:22.156 +00:00] [INFO] [rocksmq_impl.go:228] ["Rocksmq stats"] [cache=155394504] ["rockskv memtable "=10283640] ["rockskv table readers"=488] ["rockskv pinned"=0] ["store memtable "=15031832] ["store table readers"=534992] ["store pinned"=0] ["store l0 file num"=1] ["store l1 file num"=3] ["store l2 file num"=0] ["store l3 file num"=0] ["store l4 file num"=0]
[2022/05/09 01:13:22.281 +00:00] [INFO] [compaction_trigger.go:249] ["global generated plans"] [collection=0] ["plan count"=0]
[2022/05/09 01:13:22.281 +00:00] [INFO] [compaction_trigger.go:249] ["global generated plans"] [collection=0] ["plan count"=0]
[2022/05/09 01:13:22.287 +00:00] [DEBUG] [root_coord.go:334] ["check flushed segments"]
[2022/05/09 01:13:22.287 +00:00] [DEBUG] [services.go:613] ["received get flushed segments request"] [collectionID=433015533968031745] [partitionID=433015533968031746]
[2022/05/09 01:13:22.687 +00:00] [DEBUG] [query_coord.go:530] ["loadBalanceSegmentLoop: memory usage rate of all online QueryNode"] ["mem rate"="{\"26\":0.3326433770555627}"]
[2022/05/09 01:13:22.687 +00:00] [WARN] [query_coord.go:532] ["loadBalanceSegmentLoop: there are too few available query nodes to balance"] [onlineNodeIDs="[26]"] [availableNodeIDs="[26]"]
[2022/05/09 01:14:22.281 +00:00] [INFO] [compaction_trigger.go:249] ["global generated plans"] [collection=0] ["plan count"=0]
[2022/05/09 01:14:22.281 +00:00] [INFO] [compaction_trigger.go:249] ["global generated plans"] [collection=0] ["plan count"=0]
[2022/05/09 01:14:22.694 +00:00] [DEBUG] [query_coord.go:530] ["loadBalanceSegmentLoop: memory usage rate of all online QueryNode"] ["mem rate"="{\"26\":0.33301521490540326}"]
[2022/05/09 01:14:22.694 +00:00] [WARN] [query_coord.go:532] ["loadBalanceSegmentLoop: there are too few available query nodes to balance"] [onlineNodeIDs="[26]"] [availableNodeIDs="[26]"]

Anything else?

No response

About this issue

  • State: closed
  • Created 2 years ago
  • Comments: 26 (13 by maintainers)

Most upvoted comments

The consistency level is given as a parameter to search; you can do this:

from pymilvus.orm.types import CONSISTENCY_EVENTUALLY
# data, field_name, search_params and TOPK are the usual search() arguments: query vectors, vector field name, index search params, result limit
result = collection.search(data, field_name, search_params, TOPK, consistency_level=CONSISTENCY_EVENTUALLY)
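
For completeness, here is a self-contained sketch of the same call; the collection name "demo", the vector field "embedding", the dimension 128 and the IVF search params below are illustrative assumptions, not details from this issue:

import random
from pymilvus import connections, Collection
from pymilvus.orm.types import CONSISTENCY_EVENTUALLY

connections.connect(host="127.0.0.1", port="19530")
collection = Collection("demo")   # assumed: already created, indexed
collection.load()

query_vectors = [[random.random() for _ in range(128)]]
result = collection.search(
    data=query_vectors,
    anns_field="embedding",
    param={"metric_type": "L2", "params": {"nprobe": 16}},
    limit=10,                                  # TOPK
    consistency_level=CONSISTENCY_EVENTUALLY,  # skip the Bounded/Strong timestamp wait
)
print(result[0].ids)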

Searched successfully, thanks!

Hi, I also hit this issue, and finally found that the consistency level matters for the search and query functions.

When I set ‘Bounded’ as the default, search/query hangs. When I changed it to ‘Strong’, it works well. But I have a question: is ‘Bounded’ always faster than ‘Strong’ in terms of search and query?

Usually yes, but in this issue Bounded is clearly slower, because the system time of the SDK is inconsistent with the system time of Milvus. The consistency level does not affect search performance; it only affects the total amount of data the query can see. If you query while inserting, Bounded will query less data than Strong.
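
A small sketch of that difference (assumptions not taken from this issue: a loaded collection named "demo" with an INT64 primary field "pk", and CONSISTENCY_STRONG/CONSISTENCY_BOUNDED living alongside CONSISTENCY_EVENTUALLY in pymilvus.orm.types). While data is being inserted, a Strong query waits until the newest inserts are visible, whereas a Bounded query may read a slightly older, smaller view:

from pymilvus import connections, Collection
from pymilvus.orm.types import CONSISTENCY_BOUNDED, CONSISTENCY_STRONG

connections.connect(host="127.0.0.1", port="19530")
collection = Collection("demo")
collection.load()

# Run both while another client is inserting into "demo".
strong = collection.query(expr="pk >= 0", consistency_level=CONSISTENCY_STRONG)
bounded = collection.query(expr="pk >= 0", consistency_level=CONSISTENCY_BOUNDED)
print(len(strong), len(bounded))  # Bounded may report fewer rows than Strong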

Thanks, I understand the point. Now I want to resolve the system time issue you mentioned, so I applied https://stackoverflow.com/a/44440563 to sync the time between my host and the Milvus docker containers. Then I changed the default consistency level back to ‘Bounded’, but it hangs again.

Could you tell me more about how to keep the SDK’s system time consistent with Milvus’s system time?

I tried that approach and it works fine for me. I suspect it only syncs the local timezone, though. Maybe you need to check that your system time itself is correct.
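
One rough way to check for that kind of clock skew from the Python side, sketched under the assumption that pymilvus 2.0.x exposes utility.hybridts_to_unixtime and a .timestamp property on the insert result: insert one row into a throwaway collection and compare the server-assigned timestamp with the client's clock.

import random
import time
from pymilvus import (
    Collection, CollectionSchema, DataType, FieldSchema, connections, utility,
)

connections.connect(host="127.0.0.1", port="19530")

schema = CollectionSchema([
    FieldSchema("pk", DataType.INT64, is_primary=True),
    FieldSchema("vec", DataType.FLOAT_VECTOR, dim=8),
])
collection = Collection("clock_skew_check", schema)  # throwaway collection

# Milvus stamps the insert with a hybrid timestamp whose physical part is the server's wall clock.
res = collection.insert([[0], [[random.random() for _ in range(8)]]])
server_time = utility.hybridts_to_unixtime(res.timestamp)
print(f"client clock minus server clock: {time.time() - server_time:+.2f} s")

utility.drop_collection("clock_skew_check")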

Oh, I found and resolved my local time difference issue. (As you said, there was a system time difference between my local machine and the Milvus docker container.)

And I tried it with the ‘Bounded’ setting, and it worked as expected!

Thank you!