milvus: [Bug]: The specified key does not exist.
Is there an existing issue for this?
- I have searched the existing issues
Environment
- Milvus version: v2.0 GA
- Deployment mode (standalone or cluster): cluster
- SDK version (e.g. pymilvus v2.0.0rc2): milvus-sdk-go v2
- OS (Ubuntu or CentOS): Ubuntu
- CPU/Memory: 48 cores / 384 GB
- GPU: V100
- Others:
Current Behavior
[2022/02/21 06:42:40.940 +00:00] [ERROR] [impl.go:342] ["The specified key does not exist."] [stack="github.com/milvus-io/milvus/internal/querynode.(*QueryNode).LoadSegments.func1\n\t/go/src/github.com/milvus-io/milvus/internal/querynode/impl.go:342\ngithub.com/milvus-io/milvus/internal/querynode.(*QueryNode).LoadSegments\n\t/go/src/github.com/milvus-io/milvus/internal/querynode/impl.go:351\ngithub.com/milvus-io/milvus/internal/distributed/querynode.(*Server).LoadSegments\n\t/go/src/github.com/milvus-io/milvus/internal/distributed/querynode/service.go:356\ngithub.com/milvus-io/milvus/internal/proto/querypb._QueryNode_LoadSegments_Handler.func1\n\t/go/src/github.com/milvus-io/milvus/internal/proto/querypb/query_coord.pb.go:3456\ngithub.com/grpc-ecosystem/go-grpc-middleware/tracing/opentracing.UnaryServerInterceptor.func1\n\t/go/pkg/mod/github.com/grpc-ecosystem/go-grpc-middleware@v1.3.0/tracing/opentracing/server_interceptors.go:38\ngithub.com/milvus-io/milvus/internal/proto/querypb._QueryNode_LoadSegments_Handler\n\t/go/src/github.com/milvus-io/milvus/internal/proto/querypb/query_coord.pb.go:3458\ngoogle.golang.org/grpc.(*Server).processUnaryRPC\n\t/go/pkg/mod/google.golang.org/grpc@v1.38.0/server.go:1286\ngoogle.golang.org/grpc.(*Server).handleStream\n\t/go/pkg/mod/google.golang.org/grpc@v1.38.0/server.go:1609\ngoogle.golang.org/grpc.(*Server).serveStreams.func1.2\n\t/go/pkg/mod/google.golang.org/grpc@v1.38.0/server.go:934"]
Expected Behavior
Querynode loads its segments automatically after a restart.
query_node.log
Steps To Reproduce
No response
Anything else?
No response
About this issue
- Original URL
- State: closed
- Created 2 years ago
- Comments: 30 (21 by maintainers)
datacoord.dropTolerance is set too small: only one hour. According to our 1-billion-row benchmark, loading 170 million rows takes about an hour. If compaction happens while the load is in progress, and the pre-compaction segments are dropped before the querynode has loaded them into memory, the load fails with this error.
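Not from the original thread, just a back-of-envelope sketch in Go of the timing race described in that comment. The row counts are the ones quoted above, and the one-hour dropTolerance is the value reported in this issue (in milvus.yaml the knob is configured in seconds under the dataCoord garbage-collection settings; the exact path may vary by release):

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Figures quoted in the comment above: the 1-billion-row benchmark
	// suggests roughly 170 million rows are loaded per hour.
	const rowsLoadedPerHour = 170_000_000.0
	const totalRows = 1_000_000_000.0

	// dropTolerance as reported in this issue: one hour
	// (the setting itself is expressed in seconds in milvus.yaml).
	dropTolerance := 1 * time.Hour

	estimatedLoad := time.Duration(totalRows / rowsLoadedPerHour * float64(time.Hour))
	fmt.Printf("estimated load time: %v, dropTolerance: %v\n", estimatedLoad, dropTolerance)

	if estimatedLoad > dropTolerance {
		fmt.Println("pre-compaction segments can be garbage-collected before the querynode" +
			" finishes loading them, which surfaces as \"The specified key does not exist.\"")
	}
}
```

The takeaway matches the maintainer's diagnosis: keep dropTolerance comfortably larger than the worst-case load time, so segments dropped by compaction remain in object storage until every querynode has finished loading.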