litmus: Network latency is not injected into my application
What happened: I executed the Pod Network Latency experiment, but I do not see any latency injected into the application.
What you expected to happen: I have a very simple application deployed to my Kubernetes cluster. The demo application is based on this docker image.
How to reproduce it (as minimally and precisely as possible):
- Created a basic Deployment manifest that uses the image mentioned above.
- Created a service account exactly as specified in this link.
- The ChaosEngine manifest also looks exactly like the one in the link above; I have only replaced the app labels wherever appropriate (a sketch of what such a manifest might look like is included after the log excerpt below).
- When I apply my ChaosEngine manifest, I can see that Kubernetes creates a pod-network-latency pod as expected.
- The logs of the pod-network-latency pod show that it is able to find the application pod to inject latency into, and the Pumba helper pod appears to get created successfully.
- I have verified that the network interface of my application pod is eth0.
- The logs from the pod-network-latency pod look normal. Here is a sample:
```
W0810 16:13:26.444004 1 client_config.go:541] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work.
time="2020-08-10T16:13:26Z" level=info msg="[PreReq]: Getting the ENV for the experiment"
time="2020-08-10T16:13:26Z" level=info msg="[PreReq]: Updating the chaos result of pod-network-latency experiment (SOT)"
time="2020-08-10T16:13:26Z" level=info msg="The application informations are as follows\n" Namespace=MyNameSPace Label="app=kuard" Ramp Time=0
time="2020-08-10T16:13:26Z" level=info msg="[Status]: Verify that the AUT (Application Under Test) is running (pre-chaos)"
time="2020-08-10T16:13:26Z" level=info msg="[Status]: Checking whether application pods are in running state"
time="2020-08-10T16:13:26Z" level=info msg="The running status of Pods are as follows" Pod=kuard-deployment-6c65d5c8fb-trllp Status=Running
time="2020-08-10T16:13:26Z" level=info msg="[Status]: Checking whether application containers are in running state"
time="2020-08-10T16:13:26Z" level=info msg="The running status of container are as follows" container=kuard Pod=kuard-deployment-6c65d5c8fb-trllp Status=Running
time="2020-08-10T16:13:26Z" level=info msg="[Info]: Details of application under chaos injection" NodeName=ip-10-98-83-22.ec2.internal ContainerName=kuard PodName=kuard-deployment-6c65d5c8fb-trllp
time="2020-08-10T16:13:26Z" level=info msg="[Status]: Checking the status of the helper pod"
time="2020-08-10T16:13:26Z" level=info msg="[Status]: Checking whether application pods are in running state"
time="2020-08-10T16:13:28Z" level=info msg="The running status of Pods are as follows" Status=Running Pod=pumba-netem-judhlx
time="2020-08-10T16:13:28Z" level=info msg="[Status]: Checking whether application containers are in running state"
time="2020-08-10T16:13:28Z" level=info msg="The running status of container are as follows" container=pumba Pod=pumba-netem-judhlx Status=Running
time="2020-08-10T16:13:28Z" level=info msg="[Wait]: Waiting for 300s till the completion of the helper pod"
time="2020-08-10T16:13:28Z" level=info msg="helper pod status: Running"
time="2020-08-10T16:13:29Z" level=info msg="helper pod status: Running"
time="2020-08-10T16:13:30Z" level=info msg="helper pod status: Running"
time="2020-08-10T16:13:31Z" level=info msg="helper pod status: Running"
time="2020-08-10T16:13:32Z" level=info msg="helper pod status: Running"
time="2020-08-10T16:13:33Z" level=info msg="helper pod status: Running"
time="2020-08-10T16:13:34Z" level=info msg="helper pod status: Running"
time="2020-08-10T16:13:35Z" level=info msg="helper pod status: Running"
```
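For context, a minimal ChaosEngine manifest along the lines described above might look like the sketch below. The structure and env var names follow the standard pod-network-latency example from the Litmus docs; only the `app=kuard` label and `kuard` container name come from the logs above, while the namespace, engine name, service account name, and latency/duration values are illustrative placeholders.

```yaml
apiVersion: litmuschaos.io/v1alpha1
kind: ChaosEngine
metadata:
  name: kuard-network-chaos        # hypothetical engine name
  namespace: my-namespace          # placeholder; use the application's namespace
spec:
  annotationCheck: 'true'
  engineState: 'active'
  appinfo:
    appns: 'my-namespace'          # placeholder
    applabel: 'app=kuard'          # label seen in the experiment logs
    appkind: 'deployment'
  chaosServiceAccount: pod-network-latency-sa   # assumed SA name from the docs example
  jobCleanUpPolicy: 'delete'
  experiments:
    - name: pod-network-latency
      spec:
        components:
          env:
            - name: TARGET_CONTAINER
              value: 'kuard'       # container name seen in the logs
            - name: NETWORK_INTERFACE
              value: 'eth0'        # interface verified on the application pod
            - name: NETWORK_LATENCY
              value: '2000'        # latency in ms; illustrative value
            - name: TOTAL_CHAOS_DURATION
              value: '60'          # chaos duration in seconds; illustrative value
```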
I have port-forwarded my application on port 8080. When I access the site at localhost:8080, I expect to see network latency, but I don't observe any. All XHR requests made by the application complete within 50 ms. Is this expected?
Anything else we need to know?:
About this issue
- Original URL
- State: closed
- Created 4 years ago
- Comments: 15 (9 by maintainers)
With reference to comment https://github.com/litmuschaos/litmus/issues/1853#issuecomment-673890423:
Figured out with @suhrud-kumar over a Slack conversation that the go-runner image tag in the ChaosExperiment CR was `latest` (the CR had been picked from the master branch of the chaos-charts repo). This has since been changed to 1.6.2 (sourced from the versioned hub), and the experiment is now seen to execute as expected.
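For anyone hitting the same symptom, the fix amounts to pinning the runner image referenced in the ChaosExperiment CR to a released tag instead of `latest`. A trimmed, illustrative excerpt is sketched below; it assumes the standard `spec.definition.image` layout of the ChaosExperiment CRD, with all other fields left as shipped in the versioned chart.

```yaml
apiVersion: litmuschaos.io/v1alpha1
kind: ChaosExperiment
metadata:
  name: pod-network-latency
spec:
  definition:
    # Pin the go-runner image to the released tag matching the installed
    # chart version, rather than "latest" from the master branch.
    image: "litmuschaos/go-runner:1.6.2"
    # remaining fields (scope, permissions, args, command, env, ...) unchanged
```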