k9s: configuration problem on macOS (can't find configuration directory)

Describe the bug

Hello, after the update I have a problem with k9s.

To Reproduce

Steps to reproduce the behavior:

  1. Update to at least 0.25.4
  2. Open the terminal
  3. Execute the terminal command k9s --context cluster-1
  4. See error
 ____  __.________       
|    |/ _/   __   \______
|      < \____    /  ___/
|    |  \   /    /\___ \ 
|____|__ \ /____//____  >
        \/            \/ 

Boom!! app run failed `meow` command not found.

Expected behavior

After running the command k9s in the terminal, I should see the cluster and its resources.

Versions (please complete the following information):

  • OS: Apple M1 macOS Monterey 12.0.1
  • K9s: 0.25.4
  • K8s: v1.18.9

Additional context

$ ls -la /Users/my_user/Library/Application\ Support/k9s
total 40
drwxr-xr-x   8 my_user  staff   256 Nov 22 09:58 .
drwx------+ 82 my_user  staff  2624 Nov 21 21:47 ..
-rw-r--r--   1 my_user  staff  2592 Nov 21 00:53 cluster-1_skin.yml
-rw-r--r--   1 my_user  staff  2556 Nov 21 00:53 cluster-2_skin.yml
-rw-r--r--   1 my_user  staff  1268 Nov 22 09:59 config.yml
-rw-r--r--   1 my_user  staff  2592 Nov 21 00:39 cluster-3_skin.yml
-rw-r--r--   1 my_user  staff  2547 Nov 21 21:48 cluster-4_skin.yml
9:59AM INF 🐶 K9s starting up...
9:59AM INF ✅ Kubernetes connectivity
9:59AM WRN Custom view load failed /Users/my_user/Library/Application Support/k9s/views.yml error="open /Users/my_user/Library/Application Support/k9s/views.yml: no such file or directory"
9:59AM ERR CustomView watcher failed error="lstat /Users/my_user/Library/Application Support/k9s/views.yml: no such file or directory"
9:59AM ERR Context switch failed error="context  does not exist"
9:59AM ERR Default run command failed error="context  does not exist"
9:59AM ERR Boom! app run failed `meow` command not found
9:59AM ERR goroutine 1 [running]:
runtime/debug.Stack()
  runtime/debug/stack.go:24 +0x88
github.com/derailed/k9s/cmd.run.func1()
  github.com/derailed/k9s/cmd/root.go:55 +0xd8
panic({0x1042ffea0, 0x14000cbf990})
  runtime/panic.go:1038 +0x21c
github.com/derailed/k9s/cmd.run(0x105a3aaa0, {0x105a99b60, 0x0, 0x0})
  github.com/derailed/k9s/cmd/root.go:68 +0x2a0
github.com/spf13/cobra.(*Command).execute(0x105a3aaa0, {0x1400004e200, 0x0, 0x0})
  github.com/spf13/cobra@v1.2.1/command.go:860 +0x640
github.com/spf13/cobra.(*Command).ExecuteC(0x105a3aaa0)
  github.com/spf13/cobra@v1.2.1/command.go:974 +0x410
github.com/spf13/cobra.(*Command).Execute(...)
  github.com/spf13/cobra@v1.2.1/command.go:902
github.com/derailed/k9s/cmd.Execute()
  github.com/derailed/k9s/cmd/root.go:46 +0x30
main.main()
  github.com/derailed/k9s/main.go:50 +0x194

When I create the file views.yml

ls -la                                                 
total 40
drwxr-xr-x   9 my_user  staff   288 Nov 22 11:39 .
-rw-r--r--   1 my_user  staff  2592 Nov 21 00:53 cluster-1_skin.yml
-rw-r--r--   1 my_user  staff  2556 Nov 21 00:53 cluster-2_skin.yml
-rw-r--r--   1 my_user  staff  1268 Nov 22 09:59 config.yml
-rw-r--r--   1 my_user  staff  2592 Nov 21 00:39 cluster-3_skin.yml
-rw-r--r--   1 my_user  staff  2547 Nov 21 21:48 cluster-4_skin.yml
-rw-r--r--   1 my_user  staff     0 Nov 22 11:39 views.yml

then the error looks like this:

11:40AM INF 🐶 K9s starting up...
11:40AM INF ✅ Kubernetes connectivity
11:40AM ERR Context switch failed error="context  does not exist"
11:40AM ERR Default run command failed error="context  does not exist"
11:40AM ERR Boom! app run failed `meow` command not found
11:40AM ERR goroutine 1 [running]:
runtime/debug.Stack()
	runtime/debug/stack.go:24 +0x88
github.com/derailed/k9s/cmd.run.func1()
	github.com/derailed/k9s/cmd/root.go:55 +0xd8
panic({0x104753ea0, 0x1400078fd10})
	runtime/panic.go:1038 +0x21c
github.com/derailed/k9s/cmd.run(0x105e8eaa0, {0x14000379240, 0x0, 0x2})
	github.com/derailed/k9s/cmd/root.go:68 +0x2a0
github.com/spf13/cobra.(*Command).execute(0x105e8eaa0, {0x1400004e1c0, 0x2, 0x2})
	github.com/spf13/cobra@v1.2.1/command.go:860 +0x640
github.com/spf13/cobra.(*Command).ExecuteC(0x105e8eaa0)
	github.com/spf13/cobra@v1.2.1/command.go:974 +0x410
github.com/spf13/cobra.(*Command).Execute(...)
	github.com/spf13/cobra@v1.2.1/command.go:902
github.com/derailed/k9s/cmd.Execute()
	github.com/derailed/k9s/cmd/root.go:46 +0x30
main.main()
	github.com/derailed/k9s/main.go:50 +0x194

About this issue

  • State: open
  • Created 3 years ago
  • Reactions: 11
  • Comments: 28 (4 by maintainers)

Most upvoted comments

I fixed my issue:

brew remove k9s
rm -rf /Users/my_user/.config/k9s
rm -rf /Users/my_user/Library/Application\ Support/k9s
brew install k9s
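
A variant of the steps above that keeps the old settings around instead of deleting them outright should work just as well; a sketch, assuming the same default paths (adjust the username and skip any directory you don't have):

# Move the config directories aside rather than removing them.
mv ~/.config/k9s ~/.config/k9s.bak
mv ~/Library/Application\ Support/k9s ~/Library/Application\ Support/k9s.bak
# Reinstall to get a clean start.
brew reinstall k9s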

I got the same error with 0.25.4 on macOS 11.6.1 (20G224). Deleting the preference files with rm -rf ~/Library/Application\ Support/k9s/ and rm -rf ~/.config/k9s seems to fix the issue.

I also ran into this issue on 0.25.4. Removing ~/.k9s didn’t help, but removing ~/Library/Application\ Support/k9s did. I didn’t have ~/.config/k9s.

Started happening for me today on macOS Monterey (k9s version 0.25.18); deleting the configuration file /Users/<user>/Library/Application\ Support/k9s/config.yml fixed the issue.

For me the issue was:

Boom!! app run failed `networkattachmentdefinition` command not found.

Somehow k9s managed to store an active view pointing to a resource that was not available at the moment (it had been previously). k9s saves this information in the file /Users/<username>/Library/Application Support/k9s/config.yml (on macOS), so every time I tried to start k9s it tried to load the view for a resource that no longer existed.

By setting the active view to something familiar, e.g. pods, and starting k9s again, it worked.

...
view:
  active: networkattachmentdefinition -> pods
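
Not part of the comment above, but if you are unsure where that config.yml lives, k9s can tell you itself; a rough way to check which view it will try to restore (the grep pattern is an assumption about the file layout):

# Ask k9s where it keeps its configuration, logs, etc.
k9s info
# Then look at the stored view(s); substitute your own username/path.
grep -n 'active:' "/Users/<username>/Library/Application Support/k9s/config.yml"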

The same issue on macOS Monterey.

You saved my day!! Setting the active view back to pods works.

Works again with 0.25.5 (and 0.25.6) after failing on 0.25.4, installation via Scoop on Win10. Thanks for the quick fix!

Removing ~/Library/Application\ Support/k9s, as suggested above, helped here too.

This bug should not be closed. It hit me today on Ubuntu 18.04 with k9s 0.25.18; I had to delete ~/.config/k9s/config.yaml.

Rats! Thank you all for reporting and adding context. That would be on me 😭 Must get mo sleep!! Let’s see if we’re back on track with v0.25.5…

This ^^ helped. I renamed:

view:
  active: contexts

to

view:
  active: context

(no plural)

That fixed it for me.

My solution: edit your k9s config file with vim (on Linux it may be ~/.config/k9s/config.yml), change “active: xxxx” to “active: pod”, then save and quit. The k9s command works now!
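
The same edit can also be scripted; a rough sketch with sed (it assumes the Linux default path mentioned above, keeps a config.yml.bak backup, and will rewrite every "active:" line in the file):

# Reset the stored view to "pod"; the original file is kept as config.yml.bak.
sed -i.bak 's/^\([[:space:]]*active:\).*/\1 pod/' ~/.config/k9s/config.yml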

Removing the offending view from the config file worked on my Mac, so you don't lose your whole config. It looks like k9s tries to save the context from the last session, and it no longer applies to the cluster. From the original example, this would look like vi "/Users/tlhowe/Library/Application Support/k9s/config.yml" and removing:

      view:
        active: meow

Same happened to me today; deleting ~/.config/k9s/config.yaml fixed it for me too.

I’m getting the same issue on Ubuntu 20.04 even though I can fully work with the cluster on kubectl.

I did the following and still get Unable to connect to context "k8sedge_config1_3" (some contexts work, just not this one):

brew remove k9s
rm -rf ~/.config/k9s
rm -rf ~/.k9s
# I do not have a "~/Library" path
brew install k9s

version after reinstall:

k9s version
Version:    0.25.5
Commit:     4ae57817bd4f7b169461e39f2f8aa506fa43c79c
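
Since only some contexts fail here, it may be worth confirming that the context name exists in the kubeconfig under exactly that spelling; a quick check with standard kubectl commands (nothing specific to this issue):

# List the context names kubectl knows about...
kubectl config get-contexts -o name
# ...and start k9s against one that appears verbatim in that list.
k9s --context k8sedge_config1_3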

Also getting the same error on Ubuntu 20.10. Rolling back to v0.25.3 seems to have helped.