dagster: Execution Timeline does not live update with Azure Database for PostgreSQL server 10

Summary

The execution timeline does not update in real time when connected via a port forward on Kubernetes (the same occurs when connected via an Ambassador ingress).

Manually refreshing the page updates the timeline.

Chrome network:

'ws://localhost:8080/graphql' failed: WebSocket is closed before the connection is established.

While a run is executing, the Chrome network tab shows some payloads arriving over the WebSocket, but nothing new is rendered on the page.

Four WebSocket connections appear to be attempted each time a pipeline is executed; across multiple reproduction attempts, three connect and deliver payloads, while the fourth never connects.

Reproduction

Port forward connection from pod

kubectl --namespace dagster port-forward $DAGIT_POD_NAME 8080:80
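For completeness, `$DAGIT_POD_NAME` can be resolved with a label selector along these lines; the exact labels are assumptions that depend on your Helm release and chart version, so verify them against your cluster first:

```shell
# Assumed labels; confirm with:
#   kubectl --namespace dagster get pods --show-labels
export DAGIT_POD_NAME=$(kubectl get pods --namespace dagster \
  -l "app.kubernetes.io/name=dagster,component=dagit" \
  -o jsonpath="{.items[0].metadata.name}")

# Then forward local port 8080 to the Dagit pod's port 80:
kubectl --namespace dagster port-forward "$DAGIT_POD_NAME" 8080:80
```

This is a cluster-dependent sketch, not output from this deployment; the Helm chart's post-install notes print the exact command for a given release.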

Execute a pipeline from Playground

Dagit switches to the execution screen showing Engine started and [K8sRunLauncher] Kubernetes run worker job launched

No further timeline updates occur unless the page is reloaded.

Dagit UI/UX Issue Screenshots

(Screenshots: chrome1, chrome4, chrome3)

GraphQL errors and different types of payload messages during one run (screenshot: chrome5)

Additional Info about Your Environment

Deployed on AKS (Azure Kubernetes Service) via the Helm chart (0.11.0), with an external Postgres and a user-provided code image (all other Helm chart defaults).


Message from the maintainers:

Impacted by this bug? Give it a 👍. We factor engagement into prioritization.

About this issue

  • Original URL
  • State: closed
  • Created 3 years ago
  • Reactions: 1
  • Comments: 19 (6 by maintainers)

Commits related to this issue

Most upvoted comments

aren’t showing up in container logs

This is not intended, but it is also not the only report we’ve gotten: https://github.com/dagster-io/dagster/issues/4226. The expectation is that the logs are tee’d to both stdout/stderr and a file that gets uploaded.

cc @prha

only in Dagit after the solid completes

This is a currently known problem: the Azure/S3/etc. compute log managers only upload on completion, so you can’t view logs mid-computation. We are in the early phases of solving this problem.

Turned on verbose logging in Postgres (this happens after every NOTIFY statement):

2021-04-13 14:11:53 UTC-6075a6a9.db868-LOG: statement: NOTIFY run_events, 'eb55a21a-20ed-4eef-94d4-95bad310a932_11799';
2021-04-13 14:11:53 UTC-6075a6a9.db868-LOG: could not receive data from client: An existing connection was forcibly closed by the remote host.
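From the log line above, the NOTIFY payload appears to combine a run ID and an event cursor joined by an underscore; that structure is an inference from this single log line, not documented behavior. A minimal shell sketch of splitting such a payload on its last underscore:

```shell
# Assumed payload shape: "<run_id>_<event_cursor>" (inferred from the
# NOTIFY statement in the Postgres log above).
payload='eb55a21a-20ed-4eef-94d4-95bad310a932_11799'

run_id="${payload%_*}"    # strip the shortest trailing "_<...>" suffix
cursor="${payload##*_}"   # keep only the text after the last underscore

echo "run_id=$run_id cursor=$cursor"
```

Splitting on the *last* underscore matters here because the run ID itself contains no underscores but the UUID-plus-cursor format only guarantees a single trailing separator.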