mysql2: Inexplicable Mysql2::Error: Lost connection to MySQL server during query

It seems that after upgrading to 0.4.10, the library throws the above error even for the simplest queries. I’m using it with MySQL 5.7.11.

The issue appears when I try to upgrade to the latest version of GitLab (10.4.x). The docker image I use has upgraded the mysql2 library from 0.4.5 to 0.4.10. I never had any issues with 0.4.5, which I had been using for quite some time with the same MySQL version. The error is thrown when I try to migrate the database to the latest version.

There’s a full stack trace over here: https://github.com/sameersbn/docker-gitlab/issues/1492

The head of the stack trace looks like this:

Jan 29 17:37:40 noc docker/gitlab[853]: Migrating database…
Jan 29 17:38:17 noc docker/gitlab[853]: rake aborted!
Jan 29 17:38:17 noc docker/gitlab[853]: ActiveRecord::StatementInvalid: Mysql2::Error: Lost connection to MySQL server during query: SELECT application_settings.* FROM application_settings ORDER BY application_settings.id DESC LIMIT 1
Jan 29 17:38:17 noc docker/gitlab[853]: /home/git/gitlab/vendor/bundle/ruby/2.3.0/gems/mysql2-0.4.10/lib/mysql2/client.rb:120:in `_query'
Jan 29 17:38:17 noc docker/gitlab[853]: /home/git/gitlab/vendor/bundle/ruby/2.3.0/gems/mysql2-0.4.10/lib/mysql2/client.rb:120:in `block in query'
Jan 29 17:38:17 noc docker/gitlab[853]: /home/git/gitlab/vendor/bundle/ruby/2.3.0/gems/mysql2-0.4.10/lib/mysql2/client.rb:119:in `handle_interrupt'
Jan 29 17:38:17 noc docker/gitlab[853]: /home/git/gitlab/vendor/bundle/ruby/2.3.0/gems/mysql2-0.4.10/lib/mysql2/client.rb:119:in `query'

The relevant model that initiated the query can be found here: https://gitlab.com/gitlab-org/gitlab-ce/blob/b1c501c916825626685826f9ef88efb9a9d02a3d/app/models/application_setting.rb#L227

I’ve enabled query logging on the MySQL server. It’s quite obvious that there’s no timeout issue here, as the error is thrown almost immediately after the connection is opened:

2018-01-31T14:36:40.130682Z   345 Connect   gitlab@192.168.0.8 on gitlab using TCP/IP
2018-01-31T14:36:40.142067Z   345 Query     SET NAMES utf8 COLLATE utf8_general_ci, @@SESSION.sql_auto_is_null = 0, @@SESSION.wait_timeout = 2147483, @@SESSION.sql_mode = 'STRICT_ALL_TABLES'
2018-01-31T14:36:40.145440Z   345 Query     SHOW TABLES LIKE 'application_settings'
2018-01-31T14:36:40.149082Z   345 Query     SHOW FULL FIELDS FROM application_settings
2018-01-31T14:36:40.172786Z   345 Query     SHOW TABLES
2018-01-31T14:36:40.174938Z   345 Query     SHOW CREATE TABLE application_settings
2018-01-31T14:36:40.308565Z   345 Query     SELECT application_settings.* FROM application_settings ORDER BY application_settings.id DESC LIMIT 1
2018-01-31T14:36:40.328810Z   345 Query     SELECT application_settings.* FROM application_settings ORDER BY application_settings.id DESC LIMIT 1

I found that similar errors were caused by reaping_frequency. But AFAIK this has been disabled for a long time now. I also couldn’t find this setting anywhere in the app directory, and the default is off.

So I’m stuck now. Any help or hints on how to tackle the issue would be highly welcome. I can also try more debugging, if it helps. But please note that I’m not a Ruby developer, so I might need some pointers on how to do that.

If I’m wrong about my assumption that it’s a problem with mysql2 I’d appreciate some pointers on where to look instead.

About this issue

  • Original URL
  • State: open
  • Created 6 years ago
  • Reactions: 8
  • Comments: 42

Most upvoted comments

Our issues were fixed by explicitly disabling strict mode on the client side, by adding strict: false to our database.yml. It appears to be related to an issue in older versions of libmysql on our end, but we aren’t 100% certain about this.
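
A minimal sketch of the relevant database.yml entry (the adapter block below is illustrative; database name, credentials and host are placeholders):

production:
  adapter: mysql2
  database: myapp_production
  username: myapp
  host: db.example.com
  strict: false   # do not enable STRICT_ALL_TABLES on the client side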

After trying everything you all have suggested here (without any success), let me share what I did to stop my Mysql2::Error: MySQL client is not connected issues while running massive tests:

Added idle_timeout: 0 to my database.yml, which actually makes sense since ‘Set this to zero to keep connections forever’ (according to the docs)

Source: https://api.rubyonrails.org/v5.2.2/classes/ActiveRecord/ConnectionAdapters/ConnectionPool.html
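
As a sketch, the option sits next to the other connection pool settings in database.yml (the adapter and pool values below are just placeholders):

production:
  adapter: mysql2
  pool: 5
  idle_timeout: 0   # 0 = never reap idle connections from the pool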

For reference (again), the problem is related (at least in my case) to libmysqlclient20 - I don’t know if it’s a bug in the library itself, or in the mysql2 gem.

When on the problematic stack (the 18.04 one) I install the Ubuntu 14.04 libmysqlclient libraries:

  • libmysqlclient-dev: 5.5.61-0ubuntu0.14.04.1
  • libmysqlclient18: 5.5.61-0ubuntu0.14.04.1

there is no issue.

Hopefully this may help somebody. I’m not sure whether this is desirable in a production setup (the libmysqlclient-dev package will need to be pinned, and manually updated when there are 14.04 updates), however it may be preferable to a non-working system 😃.

Based on the comments above, the same principle should be valid for 16.04, as the libmysqlclient package versions are very close to 18.04.

Summary

In an Ubuntu 20.04 environment, a combination of using the OS packages libmysqlclient21 and mariadb-client-10.3, and disabling the query cache on the MySQL server’s side, solved the issue for me.

Background story

I was experiencing this issue, as similarly described by @saveriomiroddi here: https://github.com/brianmario/mysql2/issues/938#issuecomment-425953296

MySQL server version: 5.7.29-0ubuntu0.18.04.1
Rails 6.0.3.2 app using Ruby 2.7.1

App’s Gemfile had the mysql2 gem, version 0.5.0

The app lived inside a Docker container, with passenger-ruby27:1.0.11 as the base image.

The Dockerfile installed mysql-client-5.6 as an extra library.

By using passenger-ruby27:1.0.11, the Docker container had:

  • OS: Ubuntu 18.04
  • the following MySQL OS libraries:
ii  libmysqlclient-dev             5.7.30-0ubuntu0.18.04.1             amd64        MySQL database development files
ii  libmysqlclient20:amd64         5.7.30-0ubuntu0.18.04.1             amd64        MySQL database client library
ii  mysql-client-5.7               5.7.32-0ubuntu0.18.04.1             amd64        MySQL database client binaries
ii  mysql-client-core-5.7          5.7.32-0ubuntu0.18.04.1             amd64        MySQL database core client binaries
ii  mysql-common                   5.8+1.0.4                           all          MySQL database common files, e.g. /etc/mysql/my.cnf

How I reproduced the issue

In the rails console:

  1. with a simple select, I got no result on the first attempt

ActiveRecord::RecordNotFound (Couldn’t find <…>)

  2. same find, a Lost connection to MySQL server on the second (I guess something happens during the first query).

ActiveRecord::StatementInvalid (Mysql2::Error::ConnectionError: Lost connection to MySQL server during query)

  3. same find, result is properly retrieved

How the issue could not be reproduced

Ref: https://stackoverflow.com/questions/3658000/how-to-access-mysqlresult-in-activerecord

In the rails console:

result = ActiveRecord::Base.connection.execute("select * from X where id = Y;")
result.first
# an array containing the fields of the record I was looking for

Debugging the MySQL connections

FYI note: my database.yml has:

reconnect: true

In the MySQL console, I’ve inspected the MySQL connections using:

select * from information_schema.processlist;

I noticed that for attempt 1, a connection was made and stayed alive after the attempt. For attempt 2, that initial connection died. For attempt 3, a new connection was made.

Resolving the issue

Note: each of the following attempts carried over the changes made by its predecessor.

[Unsuccessful] Attempt 1: Use mysql-client-5.7

[Unsuccessful] Attempt 2: Use Rails 6.1.0

[Unsuccessful] Attempt 3: Use mysql2 gem, version 0.5.3

[Unsuccessful] Attempt 4: Use the latest passenger-ruby27:1.0.12 Docker image

As explained in https://github.com/phusion/passenger-docker/blob/master/CHANGELOG.md#1012-release-date-2020-11-18, this release had a number of changes, one of which was moving from Ubuntu 18.04 to Ubuntu 20.04.

I also had to use Ruby 2.7.2 and, instead of mysql-client-5.7, use mariadb-client-10.3.

FYI, inside the container, the packages needed to interact with MySQL are:

ii  libdbd-mysql-perl:amd64              4.050-3                           amd64        Perl5 database interface to the MariaDB/MySQL database
ii  libmysqlclient-dev                   8.0.22-0ubuntu0.20.04.2           amd64        MySQL database development files
ii  libmysqlclient21:amd64               8.0.22-0ubuntu0.20.04.2           amd64        MySQL database client library
ii  mysql-common                         5.8+1.0.5ubuntu2                  all          MySQL database common files, e.g. /etc/mysql/my.cnf

ii  mariadb-client-10.3                  1:10.3.25-0ubuntu0.20.04.1        amd64        MariaDB database client binaries
ii  mariadb-client-core-10.3             1:10.3.25-0ubuntu0.20.04.1        amd64        MariaDB database core client binaries
ii  mariadb-common                       1:10.3.25-0ubuntu0.20.04.1        all          MariaDB common metapackage

This worked fine while interacting with the DB through the Rails console. However, a fresh DB import from production to staging (where the issue occurred, and still did) caused the issue to occur again…

What worked in the end

As indicated by a-maas’s comment, and explained by rilian’s comment:

A potential reason behind the issue: very large records in the database. I also disabled the query cache.

The issue was finally resolved; further code deployments or DB imports did not cause it to occur again.

We were encountering these errors while trying to spin up an existing app that uses a MySQL 5.7 database in a new environment with the latest Ubuntu/libmysqlclient. While running our old environment and the new environment at the same time, these errors would pop up pretty frequently.

On the Rails side, we got:

Mysql2::Error::ConnectionError: Lost connection to MySQL server during query

In the MySQL log, we got:

2020-09-10T20:39:26.039266Z 1385 [Note] Aborted connection 1385 to db: 'database' user: 'user' host: '172.18.4.222' (Got an error writing communication packets)

What finally fixed the issue was disabling MySQL 5.7’s query cache feature, which MySQL ended up deprecating and removing in later versions anyways.

SET GLOBAL query_cache_size = 0;
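
A couple of related statements, as a sketch (note that a SET GLOBAL change does not survive a server restart; to make it permanent, query_cache_type = 0 and query_cache_size = 0 can also be placed under [mysqld] in my.cnf):

SHOW VARIABLES LIKE 'query_cache%';  -- verify the current query cache settings
SET GLOBAL query_cache_type = OFF;   -- also stop cache lookups, if the server allows changing this at runtime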

I hope this helps others encountering this issue!

Not sure if this will help anybody, but we were having this issue with a Discourse importer that uses this gem. We ended up increasing some MySQL server settings:

# GLOBAL:
net_write_timeout=3600;
net_read_timeout=3600;
delayed_insert_timeout=3600;
max_length_for_sort_data=8388608;
max_sort_length=8388608;
net_buffer_length=1048576;
max_connections=10000;
connect_timeout=31536000;
wait_timeout=31536000;
max_allowed_packet=1073741824;
mysqlx_read_timeout=2147483;
mysqlx_idle_worker_thread_timeout=3600;
mysqlx_connect_timeout=1000000000;

# SESSION:
net_write_timeout=3600;
net_read_timeout=3600;
max_length_for_sort_data=8388608;
max_sort_length=8388608;
wait_timeout=31536000;

We’re not sure specifically which setting(s) fixed the connection issue, but we no longer got the error after these changes.
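
For reference, these are server-side variables; applied at runtime they look roughly like this (a sketch: a privileged account is needed for the GLOBAL ones, and the changes are lost on restart unless also added under [mysqld] in my.cnf):

SET GLOBAL net_write_timeout = 3600;
SET GLOBAL net_read_timeout = 3600;
SET GLOBAL wait_timeout = 31536000;
-- session-scoped equivalents affect only the current connection
SET SESSION net_write_timeout = 3600;
SET SESSION net_read_timeout = 3600;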

Added idle_timeout: 0 to my database.yml and ran rails db:migrate:reset, and I haven’t faced the same issue again, I think.

Note: adding strict: false to my database.yml didn’t fix the issue.

Has anyone fixed it yet? After adding idle_timeout: 0 to my database.yml, I’m getting the error (Mysql2::Error: MySQL client is not connected: ROLLBACK).

Not sure if it will help anybody, but we started facing the “Lost connection to MySQL server during query” issue from time to time after upgrading the MySQL version via Amazon RDS from 5.7.20 to 5.7.22. Interestingly, we have been using a non-RDS 5.7.22 version of MySQL for a long time in another environment, and we have never encountered that issue before.

Added idle_timeout: 0 to my database.yml and ran rails db:migrate:reset, and I haven’t faced the same issue again, I think.

NOTE: rails db:migrate:reset will drop the existing database and recreate it from scratch, so all existing data will be lost. Be careful!

I’ve recently run into this issue again, this time while trying to use an RDS Proxy, which has resurfaced the problem (I was previously experiencing it with Istio when using sidecars).

I’ve tried quite a few configurations with these variables:

  reconnect: false,
  reaping_frequency: 120,
  connect_timeout: 5,
  read_timeout: 5,
  idle_timeout: 5

But I haven’t had much success. I’m using Puma with 2 workers and 5 threads, with the on_worker_boot and before_fork ActiveRecord manipulations documented by Heroku.
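
For context, the Heroku-documented Puma/ActiveRecord pattern referred to above is roughly the following (a sketch; the worker and thread counts are simply the ones mentioned in this comment):

# config/puma.rb
workers 2
threads 5, 5

preload_app!

before_fork do
  # close the master process's DB connections before forking workers
  ActiveRecord::Base.connection_pool.disconnect! if defined?(ActiveRecord)
end

on_worker_boot do
  # each forked worker establishes its own connection pool
  ActiveRecord::Base.establish_connection if defined?(ActiveRecord)
end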

I’m also using Sinatra, not Rails, if that is relevant. I’m unable to reliably reproduce the issue locally, but can on occasion. The queries are also quite simple, with not a lot of data coming through.

Anyone have any further ideas?

Update: I’ve resolved my issues - they were unrelated to mysql2 (unsurprisingly). This article put an end to my many hours of anguish.

For reference, I’m experiencing this problem as well, and it’s very obscure since, also in my case, it’s unrelated to typical misconfigurations.

To be specific, on:

  • AWS, Ubuntu 14.04, libmysqlclient18, mysql2 0.5.2, MySQL server 5.7.19, Ruby 2.5

I don’t experience any problem. When I try the exact same stack with the exception of:

  • Ubuntu 18.04, libmysqlclient20

with a simple select, I get a wrong result on the first attempt, and a Lost connection to MySQL server on the second (I guess something happens during the first query).

Rails itself is unrelated; I can reproduce the issue in an irb session.