kompose: Unable to kompose up my Django container from Windows or Azure Cloud Shell.

version: '2' 

volumes:
  postgres_data_local: {}
  postgres_backup_local: {}

services:
  django: &django
    build:
      context: .
      dockerfile: ./compose/local/django/Dockerfile
    image: enjoithesk8life/test_respo:django
    depends_on:
      - postgres
    volumes:
      - .:/app
    environment:
      - POSTGRES_USER=nh2d
      - USE_DOCKER=yes
    ports:
      - "8000:8000"
    command: /start.sh

  postgres:
    build:
      context: .
      dockerfile: ./compose/production/postgres/Dockerfile
    volumes:
      - postgres_data_local:/var/lib/postgresql/data
      - postgres_backup_local:/backups
      - /var/lib/postgresql:/var/run/postgresql
    environment:
      - POSTGRES_USER=nh2d


  redis:
    image: redis:3.0

  celeryworker:
    # https://github.com/docker/compose/issues/3220
    <<: *django
    image: enjoithesk8life/test_respo:celeryworker
    depends_on:
      - redis
      - postgres
    ports: []
    command: /start-celeryworker.sh

  celerybeat:
    # https://github.com/docker/compose/issues/3220
    <<: *django
    image: enjoithesk8life/test_repo:celerybeat
    depends_on:
      - redis
      - postgres
    ports: []
    command: /start-celerybeat.sh

This is from cookiecutter-django and I did not change much; I am just trying to test a deploy here. I get:

WARN Unsupported root level volumes key - ignoring
WARN Unsupported depends_on key - ignoring
INFO Build key detected. Attempting to build and push image 'enjoithesk8life/test_repo:django'
INFO Building image 'enjoithesk8life/test_repo:django' from directory 'E:\projects\hoos_feeding\src\hoos_fed'
FATA Error while deploying application: k.Transform failed: Unable to build Docker image for service django: open \tmp\kompose-image-build-929081207: The system cannot find the path specified.

That is from PowerShell.

When I try it in Azure Cloud Shell, I get an error at the same point but with a different message:

navid@Azure:~/clouddrive/hoos_feeding/src/hoos_fed$ …/…/…/kompose --file local.yml up
WARN Unsupported root level volumes key - ignoring
WARN Unsupported depends_on key - ignoring
INFO Build key detected. Attempting to build and push image 'enjoithesk8life/test_repo:celerybeat'
INFO Building image 'enjoithesk8life/test_repo:celerybeat' from directory 'hoos_fed'
FATA Error while deploying application: k.Transform failed: Unable to build Docker image for service celerybeat: Unable to build image. For more output, use -v or --verbose when converting.: dial unix /var/run/docker.sock: connect: no such file or directory
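
One thing worth noting about these two runs: the <<: *django merge key copies every key from the django service, including its build: section, into celeryworker and celerybeat, so kompose attempts a Docker build for all three services. That is why the Cloud Shell run can fail while building celerybeat even though only django declares a build: section explicitly. A minimal illustration of that YAML anchor/merge behaviour, with placeholder service and image names rather than the ones from the file above:

services:
  web: &web              # "&web" stores this whole mapping under an anchor
    build: .             # build: is part of the anchored mapping
    image: example/web
  worker:
    <<: *web             # merges every key from the anchor, build: included
    command: run-worker  # keys set here override the merged values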

About this issue

  • State: closed
  • Created 6 years ago
  • Comments: 16 (6 by maintainers)

Most upvoted comments

@fejta-bot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity. Reopen the issue with /reopen. Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /close


For those seeing "FATA Error while deploying application: k.Transform failed: Unable to build Docker image for service … Unable to create a tarball: archive/tar: write too long": I got this to work by removing the build section.
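
If it helps, here is a minimal sketch of that workaround applied to the django service from the compose file at the top: drop the build: section and keep only the pre-built image, so kompose deploys the image instead of trying to build it. The image tag is copied from that file; substitute whatever you have actually built and pushed.

  django: &django
    image: enjoithesk8life/test_respo:django   # pre-built and pushed; with no build: key, kompose skips the Docker build
    depends_on:
      - postgres
    volumes:
      - .:/app
    environment:
      - POSTGRES_USER=nh2d
      - USE_DOCKER=yes
    ports:
      - "8000:8000"
    command: /start.sh

Note that celeryworker and celerybeat merge these keys, so they also lose the build: section and need their images pushed ahead of time as well.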

Still seeing this issue in 1.16.0 (0c01309).


I have the same problem.