greptimedb: [bug] Possible memory leak when ingesting Nginx access logs generated by Vector at 100 QPS, whether the data is stored as metrics or logs
What type of bug is this?
Performance issue
What subsystems are affected?
Standalone mode
Minimal reproduce step
package main

import (
	"database/sql"
	"log"
	"strings"
)

// Message holds one parsed Nginx access-log record; the fields match the
// columns of the log_test table below.
type Message struct {
	Host     string
	Datetime string
	Method   string
	Request  string
	Protocol string
	Bytes    int64
	Referer  string
}

var db *sql.DB // opened elsewhere against GreptimeDB's MySQL-compatible port

func BulkInsert(data []Message) error {
	log.Println("log batch size:", len(data))
	tx, err := db.Begin()
	if err != nil {
		log.Println("Error beginning transaction:", err)
		return err
	}
	defer tx.Rollback() // no-op after a successful Commit

	// Build a single multi-row INSERT: flatten all column values and repeat
	// the placeholder group once per row. (An earlier version also prepared
	// a single-row statement that was never executed; that dead code is
	// removed here.)
	values := make([]interface{}, 0, len(data)*7)
	for _, row := range data {
		values = append(values, row.Host, row.Datetime, row.Method, row.Request,
			row.Protocol, row.Bytes, row.Referer)
	}
	placeholders := make([]string, len(data))
	for i := range placeholders {
		placeholders[i] = "(?, ?, ?, ?, ?, ?, ?)"
	}
	query := "INSERT INTO log_test (host, datetime, method, request, protocol, bytes, referer) VALUES " +
		strings.Join(placeholders, ",")

	if _, err = tx.Exec(query, values...); err != nil {
		log.Println("Error executing bulk insert:", err)
		return err
	}
	if err = tx.Commit(); err != nil {
		log.Println("Error committing transaction:", err)
		return err
	}
	return nil
}
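For context, here is a minimal driver sketch (a second file in the same package as BulkInsert above) showing how the reproduce load can be generated. The DSN, port 4002 (GreptimeDB's default MySQL-compatible port), database name "public", and the go-sql-driver/mysql import are assumptions for illustration, not part of the original report:

package main // second file in the same package as BulkInsert above

import (
	"database/sql"
	"log"
	"time"

	_ "github.com/go-sql-driver/mysql" // assumed driver; any MySQL-protocol client should work
)

func main() {
	var err error
	// Assumption: GreptimeDB standalone listening on its default
	// MySQL-compatible port 4002; adjust the DSN to match the actual setup.
	db, err = sql.Open("mysql", "root@tcp(127.0.0.1:4002)/public")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// Insert a 100-row batch once per second (~100 rows/s), mirroring the
	// Vector-generated access-log load described in the report.
	ticker := time.NewTicker(time.Second)
	defer ticker.Stop()
	for range ticker.C {
		batch := make([]Message, 100)
		for i := range batch {
			batch[i] = Message{
				Host:     "127.0.0.1",
				Datetime: time.Now().Format("2006-01-02 15:04:05"),
				Method:   "GET",
				Request:  "/index.html",
				Protocol: "HTTP/1.1",
				Bytes:    512,
				Referer:  "-",
			}
		}
		if err := BulkInsert(batch); err != nil {
			log.Println("bulk insert failed:", err)
		}
	}
}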
What did you expect to see?
Memory usage should stabilize within a bounded range rather than grow indefinitely. With unbounded growth, the process can barely run a few hours before it exhausts memory.
What did you see instead?
Within a few dozen minutes, memory grew steadily from a few tens of MB at startup to over 3 GB, and the growth never stopped. After stopping GreptimeDB with docker-compose down, CPU usage dropped, but memory usage did not. Insert performance is also much worse than MySQL, and far below PostgreSQL.
What operating system did you use?
Ubuntu 18.04
Relevant log output and stack trace
There appears to be a memory leak: Vector generates logs at 100 QPS, which are inserted via SQL, and memory grows continuously.
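One way to capture the growth curve for a report like this is a small standalone poller. This is only a sketch under assumptions: that standalone GreptimeDB exposes a Prometheus-format /metrics endpoint on its default HTTP port 4000, and that the metric name process_resident_memory_bytes is present (it is a common Prometheus process-collector metric; check the endpoint output for the exact name):

package main

import (
	"bufio"
	"fmt"
	"net/http"
	"strings"
	"time"
)

// Polls the /metrics endpoint once a minute and prints the resident-set-size
// line so the memory growth curve can be attached to the issue.
func main() {
	for {
		resp, err := http.Get("http://127.0.0.1:4000/metrics")
		if err != nil {
			fmt.Println("scrape failed:", err)
		} else {
			sc := bufio.NewScanner(resp.Body)
			for sc.Scan() {
				line := sc.Text()
				if strings.HasPrefix(line, "process_resident_memory_bytes") {
					fmt.Println(time.Now().Format(time.RFC3339), line)
				}
			}
			resp.Body.Close()
		}
		time.Sleep(time.Minute)
	}
}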
About this issue
- Original URL
- State: closed
- Created 6 months ago
- Comments: 23 (8 by maintainers)
All available optimizations have been applied and performance is still far too poor: at 100 QPS (a batch of 100 rows inserted every 2 seconds) using Go's native database/sql, memory rose from 80 MB to 4.5 GB in 40 minutes, climbing several MB per second. Four months on, there has been no performance improvement, which is disappointing. I filed an issue before this one: in my benchmarks, single-node insert and query performance was far below MySQL, even further below PostgreSQL, and memory usage was higher than MySQL's. I was pushing to adopt GreptimeDB at my company this time precisely because it supports both SQL and PromQL and can store logs and metrics together for unified querying. I hope the memory and query-speed problems get serious attention; without that optimization, it is simply unusable.