micronaut-core: StreamingFileUpload fails when multipart.disk=true

In #975, I confirmed that Netty does not allow for “streaming” file uploads: the entire request body is either buffered in memory or written to a file on disk. The in-memory approach is limited to 2GB, because a ByteBuf's capacity is an int and therefore cannot exceed Integer.MAX_VALUE bytes.
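To make the 2GB ceiling concrete, here is a minimal sketch (plain Java, no Netty dependency) of the arithmetic: a 2 GiB upload is exactly one byte larger than the biggest buffer an int-indexed API can address.

```java
public class BufferLimit {
    public static void main(String[] args) {
        // Netty's ByteBuf addresses bytes with int indices, so a single
        // in-memory buffer can hold at most Integer.MAX_VALUE bytes.
        long maxBufferBytes = Integer.MAX_VALUE;     // 2_147_483_647
        long twoGiB = 2L * 1024 * 1024 * 1024;       // 2_147_483_648
        System.out.println(twoGiB > maxBufferBytes); // a 2 GiB upload no longer fits
    }
}
```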

I therefore changed my focus to testing with micronaut.server.multipart.disk=true.

Task List

  • Steps to reproduce provided
  • Stacktrace (if present) provided
  • Example that reproduces the problem uploaded to GitHub
  • Full description of the issue provided (see below)

Steps to Reproduce

  1. Clone my micronaut-examples fork
  2. cd micronaut-examples/hello-world-kotlin
  3. git checkout large-file-upload-disk
  4. ./gradlew :cleanTest :test --info --tests example.UploadControllerTest
  5. The provided test fails with a temp file of any size.

Expected Behaviour

The file upload should succeed when using micronaut.server.multipart.disk=true.

Actual Behaviour

The file upload fails. Here’s the stacktrace:

java.lang.IndexOutOfBoundsException: null
	at io.netty.buffer.EmptyByteBuf.checkIndex(EmptyByteBuf.java:1055)
	at io.netty.buffer.EmptyByteBuf.retainedSlice(EmptyByteBuf.java:893)
	at io.micronaut.http.server.netty.multipart.NettyPartData.getByteBuf(NettyPartData.java:109)
	at io.micronaut.http.server.netty.multipart.NettyPartData.getInputStream(NettyPartData.java:63)
	at example.UploadController$postSet$upload$1.apply(UploadController.kt:42)
	at example.UploadController$postSet$upload$1.apply(UploadController.kt:23)
	at io.reactivex.internal.operators.flowable.FlowableMap$MapSubscriber.onNext(FlowableMap.java:63)
	at io.reactivex.internal.operators.flowable.FlowableSubscribeOn$SubscribeOnSubscriber.onNext(FlowableSubscribeOn.java:97)
	at io.reactivex.internal.operators.flowable.FlowableOnBackpressureBuffer$BackpressureBufferSubscriber.drain(FlowableOnBackpressureBuffer.java:187)
	at io.reactivex.internal.operators.flowable.FlowableOnBackpressureBuffer$BackpressureBufferSubscriber.onNext(FlowableOnBackpressureBuffer.java:112)
	at io.reactivex.internal.operators.flowable.FlowableFromObservable$SubscriberObserver.onNext(FlowableFromObservable.java:54)
	at io.reactivex.subjects.ReplaySubject$UnboundedReplayBuffer.replay(ReplaySubject.java:770)
	at io.reactivex.subjects.ReplaySubject.subscribeActual(ReplaySubject.java:330)
	at io.reactivex.Observable.subscribe(Observable.java:12090)
	at io.reactivex.internal.operators.flowable.FlowableFromObservable.subscribeActual(FlowableFromObservable.java:29)
	at io.reactivex.Flowable.subscribe(Flowable.java:14479)
	at io.reactivex.internal.operators.flowable.FlowableOnBackpressureBuffer.subscribeActual(FlowableOnBackpressureBuffer.java:46)
	at io.reactivex.Flowable.subscribe(Flowable.java:14479)
	at io.reactivex.Flowable.subscribe(Flowable.java:14426)
	at io.micronaut.http.server.netty.multipart.NettyStreamingFileUpload.subscribe(NettyStreamingFileUpload.java:169)
	at io.reactivex.internal.operators.flowable.FlowableFromPublisher.subscribeActual(FlowableFromPublisher.java:29)
	at io.reactivex.Flowable.subscribe(Flowable.java:14479)
	at io.reactivex.Flowable.subscribe(Flowable.java:14426)
	at io.reactivex.internal.operators.flowable.FlowableSubscribeOn$SubscribeOnSubscriber.run(FlowableSubscribeOn.java:82)
	at io.micronaut.http.context.ServerRequestContext.with(ServerRequestContext.java:53)
	at io.micronaut.http.context.ServerRequestContext.lambda$instrument$0(ServerRequestContext.java:69)
	at io.reactivex.internal.schedulers.ScheduledRunnable.run(ScheduledRunnable.java:66)
	at io.reactivex.internal.schedulers.ScheduledRunnable.call(ScheduledRunnable.java:57)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
	at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)

After some debugging, I discovered that the subscriber is invoked as expected with a NettyPartData, but with multipart.disk=true, NettyPartData.getByteBuf always returns an io.netty.buffer.EmptyByteBuf.

Regardless, even if that were fixed, the current Micronaut implementation would hit the same 2GB limit reported in #975 (see io.netty.handler.codec.http.multipart.AbstractDiskHttpData:283). When multipart.disk=true, NettyPartData.getByteBuf calls that method, which reads the entire file into a byte array and returns a buffer wrapping it. That buffer would likewise overflow its capacity at 2GB.
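The underlying problem with the whole-file-into-a-byte-array pattern can be sketched without Netty at all (this is an illustration of the failure mode, not Netty's actual code): Java arrays are int-indexed, so casting a file length over 2GB down to an int overflows.

```java
public class WholeFileReadLimit {
    public static void main(String[] args) {
        // Sketch of the disk-data pattern: read the whole file into one
        // byte[] and wrap it in a buffer. The allocation size must fit in
        // an int, because Java arrays are int-indexed.
        long fileLength = 3L * 1024 * 1024 * 1024; // a hypothetical 3 GiB upload on disk
        int allocSize = (int) fileLength;          // overflows: wraps to a negative value
        System.out.println(allocSize);
        // new byte[allocSize] would throw NegativeArraySizeException here
    }
}
```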

So it appears that Micronaut will need to behave differently depending on whether the FileUpload created by Netty is backed by memory or by disk.
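One possible shape for that branch, sketched in plain Java against a hypothetical `PartSource` abstraction (the names are illustrative, not Micronaut's or Netty's API, though they mirror Netty's HttpData contract of isInMemory()/getFile()): wrap the bytes directly when the part is in memory, but stream the backing file in fixed-size chunks when it is on disk, so no single buffer or array ever needs to hold the whole upload.

```java
import java.io.File;
import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.nio.file.Files;
import java.nio.file.Path;

public class UploadConsumer {
    // Hypothetical part abstraction, loosely mirroring Netty's HttpData.
    interface PartSource {
        boolean isInMemory();
        byte[] inMemoryBytes() throws IOException; // safe only for small parts
        File file() throws IOException;            // backing file when disk-buffered
    }

    // Branch on the buffering mode: small in-memory parts are written
    // directly; disk-backed parts are copied in 8 KiB chunks.
    static long consume(PartSource part, OutputStream sink) throws IOException {
        if (part.isInMemory()) {
            byte[] bytes = part.inMemoryBytes();
            sink.write(bytes);
            return bytes.length;
        }
        long total = 0;
        try (InputStream in = new FileInputStream(part.file())) {
            byte[] chunk = new byte[8192];
            int n;
            while ((n = in.read(chunk)) != -1) {
                sink.write(chunk, 0, n);
                total += n;
            }
        }
        return total;
    }

    public static void main(String[] args) throws IOException {
        // Exercise the disk-backed branch with a small temp file.
        Path tmp = Files.createTempFile("upload", ".bin");
        Files.write(tmp, new byte[100_000]);
        PartSource diskPart = new PartSource() {
            public boolean isInMemory() { return false; }
            public byte[] inMemoryBytes() { throw new UnsupportedOperationException(); }
            public File file() { return tmp.toFile(); }
        };
        long copied = consume(diskPart, OutputStream.nullOutputStream());
        System.out.println(copied);
        Files.delete(tmp);
    }
}
```

Because the chunked branch tracks the byte count in a long and reuses one small array, it has no 2GB ceiling.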

Environment Information

  • Operating System: macOS 10.14 (Mojave)
  • Micronaut Version: 1.1.0.BUILD-SNAPSHOT
  • JDK Version: 1.8.0_191

Example Application

  • See above

About this issue

  • Original URL
  • State: closed
  • Created 6 years ago
  • Comments: 20 (11 by maintainers)

Most upvoted comments

@mmindenhall Upon further inspection it appears not to be the case. I think I have a solution ready. I’ll push it after I add a test.