turbo: [turborepo] Using turbo prune via Dockerfile doesn't generate turbo out folder

What version of Turborepo are you using?

1.9.3

What package manager are you using / does the bug impact?

pnpm

What operating system are you using?

Mac

Describe the Bug

Hello,

I am in the process of migrating from Yarn to pnpm and have written a generic Dockerfile to reuse across Next.js applications. When I run the project via Docker, the turbo out folder is not generated. Based on my understanding, it should be created when the following command runs:

RUN turbo prune ${APP_NAME} --docker

Here is the full Dockerfile:
ARG NODE=bitnami/node:20.9.0

FROM ${NODE} as builder
WORKDIR /app
ARG APP_NAME
COPY . .
RUN npm install -g pnpm
RUN npm install -g turbo
RUN echo $(turbo --version)
RUN turbo prune ${APP_NAME} --docker

FROM builder as installer
ARG CACHE_CLEAN=true
ARG APP_NAME
WORKDIR /app/apps/${APP_NAME}
COPY --from=builder /app/out/json/ .
COPY --from=builder /app/out/pnpm-lock.yaml ./pnpm-lock.yaml
RUN pnpm install && \
    if [ "$CACHE_CLEAN" = "true" ]; then pnpm store prune; fi
ENV PATH /app/apps/${APP_NAME}/node_modules/.bin:$PATH



# Build the project and its dependencies
COPY --from=builder /app/out/full/ .
COPY turbo.json turbo.json
ENV NEXT_TELEMETRY_DISABLED 1
RUN turbo run build --filter=${APP_NAME}...

FROM installer as production
ARG APP_NAME
ARG PORT
ENV NEXT_TELEMETRY_DISABLED 1
WORKDIR /app
ENV NODE_ENV=production

RUN addgroup --system --gid 1001 nodejs && \
    adduser --system --uid 1001 nextjs

COPY --from=installer /app/apps/${APP_NAME}/ ./
USER nextjs
EXPOSE ${PORT}
ENV PORT ${PORT}

CMD node --max-old-space-size=200 --optimize_for_size --gc_interval=100 server.js
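
For reference, a quick way to check whether prune ran at all is to list the out directory right after the prune step (e.g. RUN ls -R /app/out in the builder stage, or docker run --rm <builder-image> ls /app/out). The snippet below only recreates, as an illustration, the layout that turbo prune <app> --docker is documented to produce at the repo root; also note that APP_NAME must match the package's "name" field in package.json, not the folder name:

```shell
# Illustration only: recreate the directory layout that
# `turbo prune app1 --docker` is expected to leave at the repo root
# (app1 is a placeholder app name).
mkdir -p out/json out/full
touch out/pnpm-lock.yaml
# out/json/           package.json files of the pruned workspaces (install caching)
# out/full/           full source of the pruned workspaces
# out/pnpm-lock.yaml  pruned lockfile
ls out
```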

For local development, I run it via docker-compose and pass the APP_NAME variable like this:

app-1:
    container_name: app1
    build:
      context: .
      dockerfile: ./tools/docker/nextjs/Dockerfile
      target: installer
      args:
        APP_NAME: app1
        PORT: 3000
        CACHE_CLEAN: "false"
    working_dir: /app
    restart: always
    environment:
      - NODE_ENV=development
    volumes:
      - .:/app
    ports:
      - "3000:3000"
    command: pnpm --filter app1 run dev
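
One thing worth checking when debugging this setup: the bind mount .:/app shadows everything the image built under /app at runtime, so artifacts produced during the image build (including an out folder) would not be visible inside the running container. A minimal local sketch of that shadowing behavior, independent of this repo:

```shell
# Simulate bind-mount shadowing with two local directories.
mkdir -p image_app host_app
echo "created at image build time" > image_app/artifact.txt
# `docker run -v ./host_app:/app ...` would expose host_app at /app,
# hiding the image layer's contents entirely: the build-time artifact
# exists only in image_app, never in the mounted host_app.
ls image_app
ls host_app
```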

Here is my turbo.json configuration:

{
  "$schema": "https://turbo.build/schema.json",
  "pipeline": {
    "build": {
      "outputs": ["dist/**", ".next/**", "!.next/cache/**", "public/dist/**"],
      "dependsOn": ["^build"]
    },
    "test": {
      "outputs": ["coverage/**"],
      "dependsOn": []
    },
    "lint": {
      "dependsOn": ["^build"]
    },
    "dev": {
      "cache": false,
      "persistent": true
    },
    "clean": {
      "cache": false
    }
  }
}

Any help or guidance on resolving this issue would be greatly appreciated.

Expected Behavior

The out folder should be generated at the root of the monorepo.

To Reproduce

Build the docker image and run the container.

Reproduction Repo

No response

About this issue

  • Original URL
  • State: open
  • Created 8 months ago
  • Comments: 18 (2 by maintainers)

Most upvoted comments

So, my versions are:

"turbo": "^1.9.3",
"next": "13.1.5"

For now, I have decided to use the following Dockerfile, which works fine and does not use turbo to build the app:

# This is a generic Dockerfile for all apps: it uses build args, and the app context is selected by the execution context.
# APP_NAME and PORT are taken from docker-compose.yaml locally; in the pipeline they are passed via the CI configuration
# of the repository host / DevSecOps platform, e.g. GitLab's .gitlab-ci.yml.
ARG NODE=bitnami/node:20.9.0

# INSTALL DEPENDENCIES STAGE
FROM ${NODE} as installer
WORKDIR /app
ARG APP_NAME
COPY apps/${APP_NAME}/ ./
RUN npm install -g pnpm && pnpm install

# BUILD STAGE
FROM installer as builder
WORKDIR /app
RUN pnpm run build

# PRODUCTION DEPLOYMENT STAGE
FROM ${NODE} as runner_production
WORKDIR /app
ARG PORT
ENV NODE_ENV=production

RUN addgroup --system --gid 1001 nodejs && \
    adduser --system --uid 1001 nextjs

COPY --from=builder /app/next.config.js ./
COPY --from=builder /app/package.json ./package.json
# https://nextjs.org/docs/advanced-features/output-file-tracing
COPY --from=builder --chown=nextjs:nodejs /app/.next/standalone ./
COPY --from=builder --chown=nextjs:nodejs /app/.next/static ./.next/static

EXPOSE ${PORT}

ENV PORT ${PORT}


USER nextjs
# When using a sidecar/istio-proxy, tweak --max-old-space-size and --gc_interval for optimal memory management.
# At the moment one istio-proxy takes 100-150Mi; once cloudops allows us to create sidecar objects, we can define the
# namespaces from which we use the service mesh, which will reduce the istio-proxy memory footprint (currently one
# app pod starts at 20-25Mi while one istio-proxy starts at 100-150Mi). That 100-150Mi should be taken into
# consideration when defining Kubernetes resource limits. In Node.js you usually set the heap to 70-80% of the
# Kubernetes memory limit, but that is not a golden rule (tweak lower or higher for optimal management); also
# subtract the ~150Mi of istio-proxy memory from that 70-80%.
CMD node --max-old-space-size=200 --optimize_for_size --gc_interval=100 server.js
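
One prerequisite for the .next/standalone copy in the stage above: Next.js only emits the standalone output when output file tracing is enabled in next.config.js. A minimal sketch of that setting (written to a scratch file here purely for illustration):

```shell
# Minimal next.config.js enabling the standalone output that the
# `COPY --from=builder ... /app/.next/standalone ./` line depends on.
cat > next.config.js <<'EOF'
/** @type {import('next').NextConfig} */
module.exports = {
  output: 'standalone',
};
EOF
grep -q "output: 'standalone'" next.config.js && echo "standalone output enabled"
```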