ray: Node's reported static resources sometimes corrupted when using placement groups

It seems that the reported “total” (static) resources for nodes are sometimes corrupted when using placement groups. The issue is easy to reproduce: if I run the following script:

import ray
ray.init()

pg = ray.util.placement_group([{"CPU": 1}])
import time
time.sleep(999)

Sometimes I get (correct):

Usage:
 1.0/8.0 CPU
 0.0/1.0 CPU_group_0_96ba2e7864f6042c4b829cb2ba0f15ff
 0.0/1.0 CPU_group_96ba2e7864f6042c4b829cb2ba0f15ff
 0.00/8.035 GiB memory
 0.00/4.017 GiB object_store_memory

and sometimes this (incorrect, and even the other resources have gone missing!):

Usage:
 0.0/1.0 CPU_group_0_a38a03683b26d7b35c3f1fbf53fefc92
 0.0/1.0 CPU_group_a38a03683b26d7b35c3f1fbf53fefc92
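
While the script sleeps, the totals the node itself is reporting can be cross-checked from a second driver attached to the same cluster. Below is a minimal sketch using the public ray.cluster_resources(), ray.available_resources(), and ray.nodes() APIs (the address="auto" connection assumes the locally started cluster from the script above):

import ray

# Attach to the already-running local cluster started by the repro script.
ray.init(address="auto")

# Static (total) resources as reported by the raylet; this is the view that
# loses the CPU / memory entries in the bad case above.
print("cluster_resources:", ray.cluster_resources())

# Free resources (totals minus whatever placement groups and tasks hold).
print("available_resources:", ray.available_resources())

# Per-node breakdown, including the CPU_group_* resources created by the pg.
for node in ray.nodes():
    print(node["NodeManagerAddress"], node["Resources"])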

Original issue:

I started a cluster and ran the following commands:

import ray
from ray.util import placement_group

# Connect to the existing cluster (assuming this runs on the head node).
ray.init(address="auto")

pg = placement_group([{"CPU": 1, "GPU": 1}] + [{"CPU": 1}] * 100, name="__test_")
ray.get(pg.ready())
# wait...
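
While waiting, the placement group's scheduling state can also be checked directly, independent of ray status. A small sketch from the same driver session, using ray.util.placement_group_table() (the exact keys in the returned dict may vary across Ray versions):

from ray.util import placement_group_table

# Print the state of every placement group; the one created above should
# eventually report CREATED once all 101 bundles have been placed.
for pg_id, info in placement_group_table().items():
    print(pg_id, info.get("state"), info.get("bundles"))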

the autoscaler event summary prints:

(autoscaler +7m58s) Resized to 10 CPUs, 1 GPUs.
(autoscaler +8m5s) Resized to 16 CPUs, 1 GPUs.
(autoscaler +8m12s) Resized to 36 CPUs, 1 GPUs.
(autoscaler +8m19s) Resized to 68 CPUs, 1 GPUs.
(autoscaler +8m26s) Resized to 86 CPUs, 1 GPUs.
<ray.util.placement_group.PlacementGroup object at 0x7f76c6bdafd0>
>>> (autoscaler +8m33s) Resized to 100 CPUs, 1 GPUs.
 (autoscaler +8m40s) Resized to 80 CPUs, 1 GPUs.

>>> (autoscaler +8m47s) Resized to 52 CPUs.
(autoscaler +8m53s) Resized to 64 CPUs.
(autoscaler +9m0s) Resized to 56 CPUs.
(autoscaler +9m6s) Resized to 96 CPUs, 1 GPUs.
(autoscaler +9m13s) Resized to 76 CPUs.
(autoscaler +9m19s) Resized to 54 CPUs.

Looking at the logs, 49 ray.worker.default nodes are always Healthy, but the reported utilization keeps oscillating back and forth between 10/10 and 100/100:

Node status
---------------------------------------------------------------
Healthy:
 1 ray.head.default
 49 ray.worker.default
Pending:
 (no pending nodes)
Recent failures:
 (no failures)

Resources
---------------------------------------------------------------

Usage:
 77.0/78.0 CPU
 0.0/71.0 CPU_group_0412ba1eeb915fbd679e1a13b51bcaa3
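
The flapping can also be captured without parsing ray status, by polling the reported totals from a driver on the head node. A rough sketch (the polling interval and duration are arbitrary), assuming ray.cluster_resources() reflects the same corrupted totals:

import time

import ray

ray.init(address="auto")

last_total = None
for _ in range(120):  # poll for roughly 10 minutes
    total = ray.cluster_resources().get("CPU", 0.0)
    if total != last_total:
        # Log every time the reported static CPU total changes.
        print(time.strftime("%H:%M:%S"), "reported CPU total:", total)
        last_total = total
    time.sleep(5)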

cluster yaml:

# A unique identifier for the head node and workers of this cluster.
cluster_name: ameer_rllib_default

# The maximum number of worker nodes to launch in addition to the head
# node.
max_workers: 100

# The autoscaler will scale up the cluster faster with higher upscaling speed.
# E.g., if the task requires adding more nodes then autoscaler will gradually
# scale up the cluster in chunks of upscaling_speed*currently_running_nodes.
# This number should be > 0.
upscaling_speed: 100.0

# This executes all commands on all nodes in the docker container,
# and opens all the necessary ports to support the Ray cluster.
# Empty string means disabled.
docker:
    image: "rayproject/ray-ml:nightly-gpu" # You can change this to latest-cpu if you don't need GPU support and want a faster startup
    # image: rayproject/ray:latest-gpu   # use this one if you don't need ML dependencies, it's faster to pull
    container_name: "ray_container"
    # If true, pulls latest version of image. Otherwise, `docker run` will only pull the image
    # if no cached version is present.
    pull_before_run: True
    run_options: []  # Extra options to pass into "docker run"

    # Example of running a GPU head with CPU workers
    # head_image: "rayproject/ray-ml:latest-gpu"
    # Allow Ray to automatically detect GPUs

    # worker_image: "rayproject/ray-ml:latest-cpu"
    # worker_run_options: []

# If a node is idle for this many minutes, it will be removed.
idle_timeout_minutes: 5

# Cloud-provider specific configuration.
provider:
    type: aws
    region: us-west-2
    # Availability zone(s), comma-separated, that nodes may be launched in.
    # Nodes are currently spread between zones by a round-robin approach,
    # however this implementation detail should not be relied upon.
    availability_zone: us-west-2a,us-west-2b
    # Whether to allow node reuse. If set to False, nodes will be terminated
    # instead of stopped.
    cache_stopped_nodes: True # If not present, the default is True.

# How Ray will authenticate with newly launched nodes.
auth:
    ssh_user: ubuntu
# By default Ray creates a new private keypair, but you can also use your own.
# If you do so, make sure to also set "KeyName" in the head and worker node
# configurations below.
#    ssh_private_key: /path/to/your/key.pem

# Tell the autoscaler the allowed node types and the resources they provide.
# The key is the name of the node type, which is just for debugging purposes.
# The node config specifies the launch config and physical instance type.
available_node_types:
    ray.head.default:
        # The minimum number of worker nodes of this type to launch.
        # This number should be >= 0.
        min_workers: 0
        # The maximum number of worker nodes of this type to launch.
        # This takes precedence over min_workers.
        max_workers: 0
        # The node type's CPU and GPU resources are auto-detected based on AWS instance type.
        # If desired, you can override the autodetected CPU and GPU resources advertised to the autoscaler.
        # You can also set custom resources.
        # For example, to mark a node type as having 1 CPU, 1 GPU, and 5 units of a resource called "custom", set
        # resources: {"CPU": 1, "GPU": 1, "custom": 5}
        resources: {}
        # Provider-specific config for this node type, e.g. instance type. By default
        # Ray will auto-configure unspecified fields such as SubnetId and KeyName.
        # For more documentation on available fields, see:
        # http://boto3.readthedocs.io/en/latest/reference/services/ec2.html#EC2.ServiceResource.create_instances
        node_config:
            InstanceType: p2.xlarge
            ImageId: ami-0a2363a9cff180a64 # Deep Learning AMI (Ubuntu) Version 30
            # You can provision additional disk space with a conf as follows
            BlockDeviceMappings:
                - DeviceName: /dev/sda1
                  Ebs:
                      VolumeSize: 100
            # Additional options in the boto docs.
    ray.worker.default:
        # The minimum number of worker nodes of this type to launch.
        # This number should be >= 0.
        min_workers: 0
        # The maximum number of worker nodes of this type to launch.
        # This takes precedence over min_workers.
        max_workers: 100
        # The node type's CPU and GPU resources are auto-detected based on AWS instance type.
        # If desired, you can override the autodetected CPU and GPU resources advertised to the autoscaler.
        # You can also set custom resources.
        # For example, to mark a node type as having 1 CPU, 1 GPU, and 5 units of a resource called "custom", set
        # resources: {"CPU": 1, "GPU": 1, "custom": 5}
        resources: {}
        # Provider-specific config for this node type, e.g. instance type. By default
        # Ray will auto-configure unspecified fields such as SubnetId and KeyName.
        # For more documentation on available fields, see:
        # http://boto3.readthedocs.io/en/latest/reference/services/ec2.html#EC2.ServiceResource.create_instances
        node_config:
            InstanceType: m5.large
            ImageId: ami-0a2363a9cff180a64 # Deep Learning AMI (Ubuntu) Version 30
            # Run workers on spot by default. Comment this out to use on-demand.
            InstanceMarketOptions:
                MarketType: spot
                # Additional options can be found in the boto docs, e.g.
                #   SpotOptions:
                #       MaxPrice: MAX_HOURLY_PRICE
            # Additional options in the boto docs.

# Specify the node type of the head node (as configured above).
head_node_type: ray.head.default

# Files or directories to copy to the head and worker nodes. The format is a
# dictionary from REMOTE_PATH: LOCAL_PATH, e.g.
file_mounts:
    "~/tennis": "./tennis"
    "~/unity3d_env_local.py": "./unity3d_env_local.py"

# Files or directories to copy from the head node to the worker nodes. The format is a
# list of paths. The same path on the head node will be copied to the worker node.
# This behavior is a subset of the file_mounts behavior. In the vast majority of cases
# you should just use file_mounts. Only use this if you know what you're doing!
cluster_synced_files: []

# Whether changes to directories in file_mounts or cluster_synced_files in the head node
# should sync to the worker node continuously
file_mounts_sync_continuously: False

# Patterns for files to exclude when running rsync up or rsync down
rsync_exclude:
    - "**/.git"
    - "**/.git/**"

# Pattern files to use for filtering out files when running rsync up or rsync down. The file is searched for
# in the source directory and recursively through all subdirectories. For example, if .gitignore is provided
# as a value, the behavior will match git's behavior for finding and using .gitignore files.
rsync_filter:
    - ".gitignore"

# List of commands that will be run before `setup_commands`. If docker is
# enabled, these commands will run outside the container and before docker
# is setup.
initialization_commands: []

# List of shell commands to run to set up nodes.
setup_commands: []
    # Note: if you're developing Ray, you probably want to create a Docker image that
    # has your Ray repo pre-cloned. Then, you can replace the pip installs
    # below with a git checkout <your_sha> (and possibly a recompile).
    # Uncomment the following line if you want to run the nightly version of ray (as opposed to the latest)
    # - pip install -U "ray[full] @ https://s3-us-west-2.amazonaws.com/ray-wheels/latest/ray-2.0.0.dev0-cp37-cp37m-manylinux2014_x86_64.whl"

# Custom commands that will be run on the head node after common setup.
head_setup_commands: []

# Custom commands that will be run on worker nodes after common setup.
worker_setup_commands: []

# Command to start ray on the head node. You don't need to change this.
head_start_ray_commands:
    - ray stop
    - ulimit -n 65536; ray start --head --port=6379 --object-manager-port=8076 --autoscaling-config=~/ray_bootstrap_config.yaml

# Command to start ray on worker nodes. You don't need to change this.
worker_start_ray_commands:
    - ray stop
    - ulimit -n 65536; ray start --address=$RAY_HEAD_IP:6379 --object-manager-port=8076

head_node: {}
worker_nodes: {}

About this issue

  • State: closed
  • Created 3 years ago
  • Comments: 24 (13 by maintainers)

Most upvoted comments

I have run the above repro script 10x in a row and always get the following in ray status:

Resources
------------------------------------------------------------
Usage:
 1.0/8.0 CPU
 0.0/1.0 CPU_group_0_3fe88ca5491a57b00fc77730dd6525c8
 0.0/1.0 CPU_group_3fe88ca5491a57b00fc77730dd6525c8
 0.00/7.349 GiB memory
 0.00/3.675 GiB object_store_memory

So it seems to be fixed. Thanks @AmeerHajAli for surfacing this in the first place!

Should it be showing something like 8.0/8.0 CPU or 0.0/8.0 CPU if the placement group is not running anything?

It should show 8.0/8.0 (since the placement group is “using” the CPU).
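
For context, the CPU_group_* entries themselves only show usage once something is actually scheduled into the bundle. A minimal sketch, assuming a Ray version that provides PlacementGroupSchedulingStrategy (older releases used the .options(placement_group=pg) form instead):

import time

import ray
from ray.util.scheduling_strategies import PlacementGroupSchedulingStrategy

ray.init()
pg = ray.util.placement_group([{"CPU": 1}])
ray.get(pg.ready())

@ray.remote(num_cpus=1)
def busy():
    time.sleep(60)

# Run the task inside bundle 0 of the placement group; while it runs,
# `ray status` should also show 1.0/1.0 for the CPU_group_0_* resource.
busy.options(
    scheduling_strategy=PlacementGroupSchedulingStrategy(
        placement_group=pg, placement_group_bundle_index=0
    )
).remote()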

Bizarrely, the issue above only happens sometimes. If I run the following script:

import ray
ray.init()

pg = ray.util.placement_group([{"CPU": 1}])
import time
time.sleep(999)

Sometimes I get (correct):

Usage:
 1.0/8.0 CPU
 0.0/1.0 CPU_group_0_96ba2e7864f6042c4b829cb2ba0f15ff
 0.0/1.0 CPU_group_96ba2e7864f6042c4b829cb2ba0f15ff
 0.00/8.035 GiB memory
 0.00/4.017 GiB object_store_memory

and sometimes this (incorrect, and even the other resources have gone missing!):

Usage:
 0.0/1.0 CPU_group_0_a38a03683b26d7b35c3f1fbf53fefc92
 0.0/1.0 CPU_group_a38a03683b26d7b35c3f1fbf53fefc92