kube: Silent error getting kube config in-cluster

Hi, I’m having a really basic problem getting kube working in-cluster on my k3s cluster (v1.18.9-k3s1). I have a very simple application, which is basically just the following main.rs:

#![feature(proc_macro_hygiene, decl_macro)]

extern crate kube;
extern crate serde;
extern crate serde_json;

use kube::Config;

#[tokio::main]
async fn main() -> Result<(), kube::Error> {
    println!("Starting...");

    println!("Getting kube config...");
    let config_result = Config::from_cluster_env(); //This is breaking

    println!("Resolving kube config result...");
    let config = match config_result {
        Ok(config) => config,
        Err(e) => {
            println!("Error getting config {:?}", e);
            return Err(e);
        }
    };

    println!("Finished!");
    Ok(())
}
Cargo.toml:

[dependencies]
kube = { version = "0.43.0", default-features = false, features = ["derive", "native-tls"] }
kube-runtime = { version = "0.43.0", default-features = false, features = [ "native-tls" ] }
k8s-openapi = { version = "0.9.0", default-features = false, features = ["v1_18"] }
serde = { version = "1.0", features = ["derive"] }
serde_derive = "1.0"
serde_json = "1.0"
tokio = { version = "0.2.22", features = ["full"] }
reqwest = { version = "0.10.8", default-features = false, features = ["json", "gzip", "stream", "native-tls"] }

The output I’m getting is:

[server] Starting...
[server] Getting kube config...
[K8s EVENT: Pod auth-controller-6fb8f87b4d-5stf5 (ns: ponglehub)] Back-off restarting failed container

I’m hoping this is something obvious in my dependencies, but I suspect it’s a k3s compatibility issue: I originally tried rustls and had to switch to native OpenSSL, because the k3s in-cluster API server address is an IP address rather than a hostname…

About this issue

  • Original URL
  • State: closed
  • Created 4 years ago
  • Comments: 27 (15 by maintainers)

Most upvoted comments

Oh hey, found the difference. Turns out it seems to work if the builder installs cargo via apk, but not via rustup. I guess Alpine applies some patch to rustc somewhere.

I might leave a caveat in the readme for when others come by trying to use musl. Will close after that.

Okay, I’ve been able to narrow it down to the following:

Cargo.toml:

[package]
name = "kube-331"
version = "0.1.0"
authors = ["Teo Klestrup Röijezon <teo@nullable.se>"]
edition = "2018"

# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html

[dependencies]
openssl-sys = "0.9.58"

[profile.release]
opt-level = 0
debug = true
debug-assertions = false
overflow-checks = false
lto = false
panic = 'unwind'
incremental = true
codegen-units = 256
rpath = true

main.rs:

fn main() {
    openssl_sys::init();
}

Curiously, it doesn’t seem to happen when I build and run it inside the same container…