i3: "i3-msg restart" yields error code 1 even on successful restart

I’m submitting a…

[x] Bug
[ ] Feature Request
[ ] Documentation Request
[ ] Other (Please describe in detail)

Current Behavior

$ i3-msg restart
$ echo $?
1

Restarting i3wm in place with i3-msg yields an exit code of 1. However, the restart itself appears to complete successfully, and no clear error shows up even with debugging and verbosity enabled.

Expected Behavior

$ i3-msg restart
$ echo $?
0

Returning the correct exit code would let me run commands from a systemd unit file and script certain behavior around i3wm. Currently, whenever my systemd service restarts, it is marked as failed because i3-msg exits with code 1, even though I can see that the restart takes place and the functionality driven by my systemd unit file works.

This also makes it difficult for me to automate the installation of my systemd unit file with Ansible, because when the systemd service returns a failure, the running Ansible playbook halts.
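For illustration, here is a rough sketch of the kind of user unit involved. The description, unit layout, and command line are made up, and the SuccessExitStatus=1 line is only a possible stopgap that tells systemd to also treat i3-msg's current exit code 1 as success until this is fixed:

[Unit]
Description=Example: run an i3 command at login

[Service]
Type=oneshot
ExecStart=/usr/bin/i3-msg restart
# Possible workaround while this bug exists: also accept exit code 1 as success.
SuccessExitStatus=1

[Install]
WantedBy=default.target

A unit like this also needs access to the session's I3SOCK/DISPLAY environment, which is outside the scope of this sketch.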

Reproduction Instructions

This happens any time I call the command after logging into i3wm from gdm.

Environment

Output of i3 --moreversion 2>&-:

i3 version: 4.15 (2018-03-10)
- Config URL: https://github.com/jwflory/swiss-army/blob/master/roles/apps/i3wm/files/config
- Logfile URL: https://logs.i3wm.org/logs/5730225442783232.bz2
- Linux Distribution & Version: Fedora 28
- Are you using a compositor (e.g., xcompmgr or compton): no

Most upvoted comments

Please try to explain it in terms of sockets so that I can understand how to implement the client side of the IPC.

For example:

  1. Will commands be executed one by one?

Yes.

  2. Will every command (even restart) write a return status to the socket?

Yes.

  3. Will i3 use the same sockets after the restart, or will it create new ones (forcing the client to connect again)?

Only the socket which sent the restart command will remain open, and our recommendation is to close that in your client library to enforce all state being reset. That's just a recommendation, though; if you want to clear state manually, feel free to do that.

  4. Will a command (e.g., move right) sent after the restart be executed afterwards or just ignored?

It will be ignored, as restart clears all IPC state on the i3 side, including buffers containing not-yet-executed commands.

I think it might be easiest to provide a restart function in your library (which sends precisely 1 restart command) and just document that users shouldn’t send command requests which contain any commands after a restart.
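For context, here is a minimal client sketch in Python that follows the answers above: one command per request, the reply read from the same socket, and a restart() that sends exactly one restart command before dropping the connection. It uses the framing from the i3 IPC documentation (the "i3-ipc" magic string, a 32-bit payload length, and a 32-bit message type, with RUN_COMMAND = 0); the class and helper names are made up and not part of any existing library.

import json
import os
import socket
import struct
import subprocess

MAGIC = b"i3-ipc"
RUN_COMMAND = 0  # message type for commands in the i3 IPC protocol


def socket_path():
    # Prefer $I3SOCK, otherwise ask i3 for its socket path.
    return os.environ.get("I3SOCK") or subprocess.check_output(
        ["i3", "--get-socketpath"], text=True
    ).strip()


class I3Client:  # hypothetical name, for illustration only
    def __init__(self):
        self._connect()

    def _connect(self):
        self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        self.sock.connect(socket_path())

    def _read_exact(self, n):
        buf = b""
        while len(buf) < n:
            chunk = self.sock.recv(n - len(buf))
            if not chunk:
                raise ConnectionError("i3 closed the connection without replying")
            buf += chunk
        return buf

    def command(self, payload):
        # One request per call; the reply arrives on the same socket.
        data = payload.encode()
        header = MAGIC + struct.pack("=II", len(data), RUN_COMMAND)
        self.sock.sendall(header + data)
        reply_header = self._read_exact(len(MAGIC) + 8)
        length, _msg_type = struct.unpack("=II", reply_header[len(MAGIC):])
        # The reply payload is a JSON array of per-command results.
        return json.loads(self._read_exact(length))

    def restart(self):
        # Send exactly one restart command, as recommended above, then drop
        # the connection so no client-side state survives the restart.
        try:
            self.command("restart")
        except ConnectionError:
            # With the bug described in this issue, i3 closes the socket
            # before the reply arrives; once fixed, a normal reply should
            # come through instead.
            pass
        finally:
            self.sock.close()
            # Reconnect for subsequent commands; a real client would retry
            # this while i3 finishes coming back up.
            self._connect()

With the behavior described in this issue, the read after sending restart hits end-of-file instead of a reply, which is the same condition that makes i3-msg exit with status 1.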

I’m working on a change which fixes this, too. Will send the PR soon.

Sorry I went dark! I’ve been struggling to keep up with deadlines for the next school year (on top of school and work) so I haven’t had much time to work on it. I’ll get back to it as soon as I can, but it may take a while for things to settle down.

Just a quick update: the problem seems to be caused specifically by the IPC clients getting freed before a shutdown or restart. cmd_restart() in commands.c calls ipc_shutdown(), which is fine when exiting, but when restarting, the connection is closed before the client receives a response.

I think the fix would be to make the connection persist across restarts (possibly needed for #3570, too). I’ve been brainstorming a little on how to implement that, but I’ll have more time once the holidays are over.

From what I can tell, i3-msg successfully sends the command to i3 but doesn't get a response (ipc_recv_message returns -2), which causes the error status. Putting another command in the payload (e.g., i3-msg 'restart; workspace 1') makes i3 restart without running the second command either. I'm not familiar enough with the project yet to know why, but I'll have more time to try to figure it out later today if no one else has started working on a fix.

I'm fairly sure we didn't change anything about this in 4.16, but would you mind double-checking anyway? Thanks!

Sorry, we can only support the latest major version. Please upgrade from 4.15 to 4.16, verify the bug still exists, and re-open this issue.