nodejs-logging-winston: Library still throws if we try to log a large object
Environment details
- OS: Ubuntu 18.04
- Node.js version: v10.18.0
- npm version: 6.13.4
- `@google-cloud/logging-winston` version: 3.0.2
Steps to reproduce
- Suppose `veryLargeString` is a variable containing a string of about 100,000 characters.
- Run the “Quickstart” code snippet in the README (having appropriately set the authentication environment variables), but replace the logging line with `logger.info(veryLargeString);`. The code runs without issues and logs the string to Stackdriver Logging.
- Now run `logger.info('foo', { veryLargeString });` instead. The code throws with the following error trace:
```
(node:20708) UnhandledPromiseRejectionWarning: Error: 3 INVALID_ARGUMENT: Log entry with size 1.00M exceeds maximum size of 256.0K
    at Object.callErrorFromStatus (/tmp/logging.D0fP/node_modules/@grpc/grpc-js/build/src/call.js:30:26)
    at Http2CallStream.call.on (/tmp/logging.D0fP/node_modules/@grpc/grpc-js/build/src/client.js:96:33)
    at Http2CallStream.emit (events.js:203:15)
    at process.nextTick (/tmp/logging.D0fP/node_modules/@grpc/grpc-js/build/src/call-stream.js:98:22)
    at process._tickCallback (internal/process/next_tick.js:61:11)
(node:20708) UnhandledPromiseRejectionWarning: Unhandled promise rejection. This error originated either by throwing inside of an async function without a catch block, or by rejecting a promise which was not handled with .catch(). (rejection id: 1)
(node:20708) [DEP0018] DeprecationWarning: Unhandled promise rejections are deprecated. In the future, promise rejections that are not handled will terminate the Node.js process with a non-zero exit code.
```
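As the warning notes, the rejection escapes because the transport's internal write promise has no handler attached. A last-resort, process-level guard (my own sketch, not something the library provides) at least prevents newer Node.js versions from terminating the process on such a rejection:

```javascript
// Sketch: catch rejections that no .catch() handler picked up, so a failed
// log write does not crash the process (Node.js >= 15 exits by default).
process.on('unhandledRejection', (reason) => {
  // In a real app you might route this to a fallback logger instead.
  console.error('Unhandled rejection (possibly from the logging transport):', reason);
});
```

This only suppresses the crash; the oversized entry is still rejected by the API.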
Am I right to believe that this is not intended behavior, since #404 was supposed to fix it?
About this issue
- State: closed
- Created 5 years ago
- Reactions: 3
- Comments: 16 (6 by maintainers)
Forgive my directness, but what is the point of closing an issue because there is no viable solution yet, only to have to open a new issue later? Isn’t the entire purpose of filing the issue to track its identification and resolution?
Leaving the issue open to retain visibility of the problem seems like a reasonable approach in such a situation, no?
In my case, I solved the issue by creating a wrapper around LoggingWinston with custom truncation logic that runs before the message is sent to the logger.
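For reference, the truncation step of such a wrapper might look like the sketch below. This is my own illustration, not the commenter's actual code: `truncateDeep`, the 4096-character cap, and the `…[truncated]` marker are all arbitrary choices.

```javascript
const MAX_LEN = 4096; // hypothetical per-string cap, tune to taste

// Recursively walk a metadata object and cap every string value,
// so the resulting log entry stays well under the API size limit.
function truncateDeep(value, maxLen = MAX_LEN) {
  if (typeof value === 'string') {
    return value.length > maxLen ? value.slice(0, maxLen) + '…[truncated]' : value;
  }
  if (Array.isArray(value)) {
    return value.map((v) => truncateDeep(v, maxLen));
  }
  if (value !== null && typeof value === 'object') {
    const out = {};
    for (const [k, v] of Object.entries(value)) {
      out[k] = truncateDeep(v, maxLen);
    }
    return out;
  }
  return value; // numbers, booleans, null, undefined pass through
}

// Usage inside a wrapper (assuming a winston logger is already set up):
// logger.info('foo', truncateDeep({ veryLargeString }));
```

Note this caps individual strings only; an object with very many small keys could still exceed the entry limit.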
A while ago, I made a fix in the base nodejs-logging library to support more fields for truncation - here is PR 1177. That said, I never exposed this logic in nodejs-logging-winston - please let me know if this is something you would like me to do. In addition, I recently changed the logic so that partialSuccess is true by default - this at least ensures that only oversized entries are dropped from the request (provided, of course, that the size of the request itself does not exceed the limit specified in log-limits). In that case the error is still returned, but the response indicates which entries were dropped. Also, I believe we should implement log splitting as mentioned in LogSplit - this is a feature we can work on if there is enough demand.

I’d use that @losalex!
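To illustrate the partialSuccess semantics described above, here is a pure sketch of the server-side effect: oversized entries are dropped individually instead of failing the whole batch. `filterOversized` is a hypothetical helper for illustration only, not part of the library or the API.

```javascript
const ENTRY_LIMIT = 256 * 1024; // the per-entry limit the error message cites

// Simulate partialSuccess: keep entries under the limit, report the
// indices of the ones that would be dropped from the write request.
function filterOversized(entries, maxBytes = ENTRY_LIMIT) {
  const dropped = [];
  const kept = entries.filter((entry, i) => {
    const size = Buffer.byteLength(JSON.stringify(entry), 'utf8');
    if (size > maxBytes) {
      dropped.push(i);
      return false;
    }
    return true;
  });
  return { kept, dropped };
}
```

The real behavior lives in the Logging API's `partialSuccess` flag; this just models the outcome for intuition.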
Could you propose an acceptable algorithm that might work the way we’d need on objects? I’m not sure there is a generic enough, computationally viable solution to this.
For example, data could be preserved based on how close it is to the top of the JSON object: do a shallow-first count of bytes, then truncate arbitrary keys at a certain depth once the byte budget is exhausted. It’s hard to say whether this is what every user would expect.
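A rough sketch of that shallow-first idea, under stated simplifications (the `[truncated]` marker's own cost is ignored, and objects are charged only for their key on descent; `truncateByDepth` is entirely hypothetical):

```javascript
// Breadth-first truncation: keys nearer the top of the object claim
// bytes from the budget first; deeper values are dropped once it runs out.
function truncateByDepth(obj, budgetBytes) {
  const root = Array.isArray(obj) ? [] : {};
  let remaining = budgetBytes;
  const queue = [{ src: obj, dst: root }]; // FIFO => breadth-first order
  while (queue.length > 0) {
    const { src, dst } = queue.shift();
    for (const [key, value] of Object.entries(src)) {
      if (value !== null && typeof value === 'object') {
        // Descend: charge only the key here; children pay when visited.
        remaining -= Buffer.byteLength(key, 'utf8');
        const child = Array.isArray(value) ? [] : {};
        dst[key] = child;
        queue.push({ src: value, dst: child });
        continue;
      }
      const cost = Buffer.byteLength(JSON.stringify({ [key]: value }), 'utf8');
      if (cost <= remaining) {
        dst[key] = value;
        remaining -= cost;
      } else {
        dst[key] = '[truncated]'; // marker cost ignored in this sketch
      }
    }
  }
  return root;
}
```

As the comment above says, no single policy will match every user's expectation; this only demonstrates that the breadth-first budget is computationally cheap (one pass over the object).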