copilot-cli: Unable to deploy shared load balancer on different paths in same domain

Hi, first and foremost, thank you to the Copilot team. This CLI is amazing and has saved me weeks’ worth of learning the ins and outs of deploying a modern container-based stack.

That said, I’m encountering a few issues when trying to deploy a load balancer on two different paths. My application is a simple Rails API backed by Postgres with a Next.js frontend. I have one environment in Copilot, staging, and two services defined: backend (the Rails API with an Aurora add-on) and frontend (the Next.js app).

I’m using Copilot version 1.8.1 so that I can make use of the custom domain alias. My use case is to serve my application from app.example.com, so the manifest for both services defines http.alias as app.example.com.

Following recommendations I read in the gitter.im room, I deployed the backend service listening on the /api/v1 path. Upon deployment, the service reports that it’s available at https://app.example.com//api/v1. Note the extra double //; Copilot seems to be adding this extra / on its own. When I attempted to correct that by using only `api/v1` for the path in my manifest, Copilot rejected it as invalid configuration and required the leading `/`. 🤷‍♂️
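
To be explicit, this is the minimal form Copilot accepted (a sketch of just the http section of my backend manifest; the leading slash is mandatory):

http:
  path: '/api/v1'                      # 'api/v1' without the slash fails validation
  healthcheck: '/api/v1/health_check'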

That issue aside, I then proceeded to deploy my frontend service listening on the root path / with the same http.alias of app.example.com. That deployment fails with an error when it attempts to add an A record to the hosted zone.

Removing the http.alias allows the frontend service to deploy to the default frontend.staging.{app}.{domain} endpoint, but that doesn’t let me run my application as desired.
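
Concretely, the frontend deploys cleanly once its staging environment section is reduced to the following (a sketch; with no http.alias, Copilot falls back to its default endpoint):

environments:
  staging:
    count: 1
    # no http.alias here, so the service is served from
    # the default frontend.staging.{app}.{domain} endpoint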

For reference, I’m sharing my two service manifests (sanitized) below. Any help/guidance you could provide is GREATLY appreciated.

# backend/manifest.yml
name: backend
type: Load Balanced Web Service

http:
  path: '/api/v1'
  healthcheck: '/api/v1/health_check'

image:
  build: api/Dockerfile
  port: 3000

cpu: 256
memory: 512
count: 1
exec: true

secrets:
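  # Each key is the env var name; the value is the SSM parameter to read it from.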
  RAILS_MASTER_KEY: RAILS_MASTER_KEY

environments:
  staging:
    count: 1
    http:
      alias: app.example.com
    secrets:
      RAILS_MASTER_KEY: /copilot/myapp/staging/secrets/RAILS_MASTER_KEY
    variables:
      RAILS_ENV: 'staging'

# frontend/manifest.yml
name: frontend
type: Load Balanced Web Service

http:
  path: '/'
  healthcheck: '/api/health'

image:
  # Docker build arguments. For additional overrides: https://aws.github.io/copilot-cli/docs/manifest/lb-web-service/#image-build
  build: app/Dockerfile
  port: 3001

cpu: 256
memory: 512
count: 1
exec: true

environments:
  staging:
    count: 1
    http:
      alias: app.example.com

About this issue

  • State: closed
  • Created 3 years ago
  • Comments: 20 (9 by maintainers)

Most upvoted comments

@iamhopaul123 Thanks, I ended up removing the stack, which seemed to clear all the old attempts. The fresh-start approach worked for me. I suspect there were multiple copies of the Lambda functions lying around, which likely confused Copilot when it saw existing resources. If I had removed all of those first, I expect I wouldn’t have needed to take the teardown approach.

That said, in my case, I actually changed my configuration slightly so the teardown was useful.