pytest: pytest capture logging error still happening

Good morning,

I am seeing the same issue as described in https://github.com/pytest-dev/pytest/issues/14 but using a much more modern combination of both (anaconda) Python and pytest.

Test session starts (platform: linux, Python 3.6.8, pytest 4.6.3, pytest-sugar 0.9.2)
rootdir: /ml/tests/ml/services, inifile: all-tests.ini
plugins: forked-1.0.2, xdist-1.29.0, sugar-0.9.2, cov-2.7.1, mock-1.10.4

In the relevant .ini file I have this:

[pytest]
testpaths = tests
addopts =
    -n 4
    --durations=20
    --disable-warnings

where -n 4 is to use 4 parallel test runners with pytest-xdist. Edit: I was able to isolate the behavior to runs using parallel workers with xdist, so likely it is an issue with an xdist worker prematurely closing a logger stream.

Basically, when I run one particular test file, I see a large number of repeated error messages in the pytest output:

--- Logging error ---
Traceback (most recent call last):
  File "/usr/lib/python3.6/logging/__init__.py", line 996, in emit
    stream.write(msg)
  File "/usr/local/lib/python3.6/dist-packages/_pytest/capture.py", line 441, in write
    self.buffer.write(obj)
ValueError: I/O operation on closed file
Call stack:
  File "/usr/lib/python3.6/threading.py", line 884, in _bootstrap
    self._bootstrap_inner()
  File "/usr/lib/python3.6/threading.py", line 916, in _bootstrap_inner
    self.run()
  File "/usr/lib/python3.6/threading.py", line 864, in run
    self._target(*self._args, **self._kwargs)
  File "/ml/ml/models/neighbors.py", line 289, in re_index
    success = self._re_index()
  File "/ml/ml/models/neighbors.py", line 349, in _re_index
    logger.info('== Fit new model with {} ids and a dimensionality of {} =='.format(n, dim))
Message: '== Fit new model with 48 ids and a dimensionality of 2 =='
Arguments: ()

The issue appears to be related to pytest prematurely closing the stream associated with a given logger for the underlying code module in question. I can’t really post all the code since it is an extended example from my work, but I can confirm there is nothing exotic or unusual happening with this logger. The module being tested just uses the normal convention to define

logger = logging.getLogger(__name__)

and there are no duplicate loggers or conflicts with this logger’s name. The logger itself is not defined in any multiprocessing context or anything like that. Just a boring import of a top-level module from our code base.

But during the test execution, something weird is happening with pytest such that the eventual calls into that logger produce these errors.

If I turn off pytest capturing with --capture=no, then the messages go away, but unfortunately so does a bunch of other necessary output that I want pytest to capture and display.

How can I debug this further? I’m sorry that I cannot provide a minimal working example, but I can definitely confirm that there is nothing weird going on with these tests. It’s a very straightforward use of logging and a very straightforward test file with simple imports and function calls.

  • [x] a detailed description of the bug or suggestion
  • [x] output of pip list from the virtual environment you are using
  • [x] pytest and operating system versions
  • [x] minimal example if possible

About this issue

  • State: open
  • Created 5 years ago
  • Reactions: 58
  • Comments: 63 (27 by maintainers)

Most upvoted comments

Clearing all the log handlers at test teardown works for me.

I invoke the following function in my shared test fixture to remove all logging handlers. In my case, I only need to reset the root logger, but depending on your application, you may need to do all of them.

def clear_loggers():
    """Remove handlers from all loggers"""
    import logging
    loggers = [logging.getLogger()] + list(logging.Logger.manager.loggerDict.values())
    for logger in loggers:
        handlers = getattr(logger, 'handlers', [])
        for handler in handlers:
            logger.removeHandler(handler)
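
One way to invoke it from a shared fixture is an autouse teardown in conftest.py; a minimal sketch (the fixture name is just illustrative):

import pytest

@pytest.fixture(autouse=True)
def reset_logging():
    yield
    # Teardown: drop any handlers that attached themselves to pytest's capture buffer
    clear_loggers()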

i’d rather switch those to an exception that clearly tells users to use a better logging setup, and a documented best practice for setting up logging

from my pov it’s important to just say NO to tactically broken but convenient setups that would make the internals a more painful mess

if you set up logging at import, you deal with the fallout! pytest shouldn’t suffer excess complexity to compensate for what basically amounts to broken/discouraged setups that don’t play well with the rest of the world

from my pov it’s important to just say NO to tactically broken but convenient setups that would make the internals a more painful mess

I think it suggests the internals are in a messy state if a certain aspect of basic end-to-end usage isn’t well supported by existing internal primitives. If this use case is somehow weird or special from the point of view of the assumptions pytest is making, that strikes me as a criticism of those assumptions, not a criticism of the use case.

if you set up logging at import, you deal with the fallout! pytest shouldn’t suffer excess complexity to compensate for what basically amounts to broken/discouraged setups that don’t play well with the rest of the world

Setting up logging at import is such a ubiquitous thing that I just can’t agree with this. It’s like saying, “if you define your own functions, then you deal with the following mess”. These are not at all broken / discouraged setups, but straightforward basic workflows for tons of use cases that need pytest to support them.

I understand it’s a difference of opinion, but it feels like it’s straying very far into a philosophical contortion to justify not doing something that clearly is an end-to-end use case users need supported. I really don’t think, “don’t do it that way,” is a reasonable answer to this.

All i read from this is “i want to keep my broken logging setup, accommodate meeeee”

Fixing a broken logging setup works better than removing the cleanup and silently stealing the logs from other tests

If we just keep random files open, the next thing people with broken setups will do is ask for monkeypatching the stdlib to fix the mess

The correct answer is no, fix your logging setup

It would be nice if the stdlib had redirect-aware stdio loggers, as stream-breaking redirection is part of the stdlib these days

At the end of the day i prefer telling people to do correct logging setups instead of making the already complicated capture mechanism even more of a mess
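
In the absence of such a stdlib handler, one way to approximate “redirect aware” logging is a handler that resolves the stream at emit time instead of caching it; a minimal sketch (not a pytest or stdlib feature):

import logging
import sys

class RedirectAwareStreamHandler(logging.StreamHandler):
    """A StreamHandler that looks up sys.stderr on every emit, so it keeps
    working even after something (pytest's capture, contextlib.redirect_stderr,
    ...) has swapped the stream out from under it."""

    @property
    def stream(self):
        return sys.stderr

    @stream.setter
    def stream(self, value):
        # Ignore the stream cached by StreamHandler.__init__; always resolve
        # the current sys.stderr instead.
        pass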

+1 This issue makes flask_matomo not work as well.

As Linus Torvalds would say, “never break userspace”, and as users, we expect that our tools will play nice together. If some logging setup is easier to do, people will do it. It’s inevitable. When reading this, there seems to be a lot of “not my problem” and other types of judgement here. The reality of the situation is there IS a problem, and we can fix it in one place (pytest) or we can fix it in a thousand other places. Seems to me that it would be beneficial to at least acknowledge the problem and that pytest has the power to fix it, and that doing so might be the most fruitful way of moving forward together. Otherwise, if we try to fix it elsewhere, there will always be someone who makes a package and “hasn’t gotten the logging memo” and the problem will persist forever.

It seems like our rational options are either:

  1. Ignore the problem and have many users get a sub-optimal pytest experience.😞
  2. Stick with the “The problem is the rest of the world” frame, in which case we’ll need to devote the next year of our lives to a campaign to educate the rest of the world about the “right” way to log or write to stdout. This should be our only focus in life until we’ve achieved 100% saturation and no one ever makes a library or a bit of code that tries to write to stdout on exit. Only then will we have achieved a good experience for our pytest users.
  3. Acknowledge that we have power to change pytest and fix this issue once and for all, and that it may be the most resource conscious way to get a good pytest user experience without taking up the monumental task of changing the rest of the world.

To quote Gary Goldstein: “All you can change is yourself, but sometimes that changes everything!”

A quick fix for me was to turn off capturing (following https://www.stefaanlippens.net/pytest-disable-log-capturing.html). I added the following to my pytest invocation: --capture=no --log-cli-level=INFO. I’m still a little miffed, though, at an external library I am using that dumps info via logging and print invocations, making life way noisier.
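
For reference, the same flags can also go in the ini file’s addopts (mirroring the [pytest] section quoted at the top of this issue) instead of on the command line:

[pytest]
addopts =
    --capture=no
    --log-cli-level=INFO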

Our codebase tampers with sys.stdout and sys.stderr, so the various handler workarounds did not work for me. I ended up doing this:

import sys
import pytest

@pytest.fixture(autouse=True)
def capture_wrap():
    # Make close() a no-op on the (captured) streams so late writers
    # don't hit "I/O operation on closed file"
    sys.stderr.close = lambda *args: None
    sys.stdout.close = lambda *args: None
    yield

Not a great general-purpose workaround, but posting it here in case someone else finds the approach useful.

A reproducing case, which doesn’t need xdist:

import atexit
import logging

LOGGER = logging.getLogger(__name__)

def test_1():
    pass    

atexit.register(lambda: LOGGER.error("test in atexit"))
logging.basicConfig()
def clear_loggers():
    """Remove handlers from all loggers"""
    import logging
    loggers = [logging.getLogger()] + list(logging.Logger.manager.loggerDict.values())
    for logger in loggers:
        handlers = getattr(logger, 'handlers', [])
        for handler in handlers:
            logger.removeHandler(handler)

This worked for me. I never even set up logging on import. I use it inside daemon threads that might close unexpectedly. The logging errors were preventing me from seeing the test report with nox. It shouldn’t be captured.

This function with the teardown setup saved me, thanks a ton!

Why can’t pytest leave the system stream open? It is totally wrong to substitute sys.stdout with something which can be closed at any moment. Don’t restore it, just keep it open. @nicoddemus

(Answering this message specifically as I was pinged)

I understand why some users are frustrated about something which to them should “just work”, but the capture code is complex and has to make compromises, which means it cannot support every possible use case.

Unfortunately there is no standard way to redirect a global resource like sys.stdout other than just overwriting the attribute, so this clashes with some other standard features (like atexit).

If pytest overwrites a global resource such as sys.stdout with its own, the right thing to do is to close that resource and restore the original sys.stdout at the end of the session. Unfortunately this has the clear drawback of not being compatible with all possible ways sys.stdout can be used.

It is clear however that eventually people are bitten by this, so I think one thing we should do is improve the docs:

  1. Explain how capture works, and which features it could clash with.
  2. Recommend how to use logging so it will work well with capture.
  3. If all else fails, show how to disable capture altogether.

Pytest correctly cleaning up after itself is not negotiable

People setting up logging in a broken way is something people need to fix

After reading this thread and considering this some more, I think the real issue is that pytest is not playing well with any system which hijacks sys.stdout or sys.stderr, and I can see this happening in cases other than logging (something like hijacking sys.stdout to write both to stdout and to a file).

Given that this does not affect logging only, it might make sense for pytest to make an effort here if feasible; after all, pytest itself is hijacking stdout and stderr, so it might as well try to play nice with others when possible.

#6034 by @blueyed is an attempt at this.

what’s preventing filing an actual issue against ROS?

in any case it’s safe to say that hiding this issue will just make things hideous and strange. i would strongly suggest disabling capture on broken platforms and reporting issues against the platform, instead of putting resource leaks into pytest to “work” on broken platforms

your message can very easily be read as “our vendor won’t fix things anyway, so let’s make pytest worse instead”

and i reject that notion; please get at your vendor and have them fix stuff.

Thanks @andyDoucette for the clear remark. I agree we should strive to arrive at a solution.

@nicoddemus for fd capture at least, we have to destroy the textio, the buffer, and the file descriptor of stdout/err, so the file object being held would be broken to begin with

I see what you are saying, but this issue is in the context of sys capture (due to logging), so I don’t think we need to mess with fd capture?

i propose that pytest adds its own logging setup that adds a handler well before importing user code, thus preventing the problem with the basic logging setup as a starting point

That would be ideal, however it is hard to convince/educate all users to do that.

I still like the “wrapper” idea, seems like it would solve the original issue, no?

To be clear, I mean something like this:

  1. When capture starts, we install a wrapper over sys.stdout and sys.stderr, which capture can control to start/stop capturing.
  2. That wrapper is never removed: when capture ends (at the end of the session), we leave the wrapper there, but from that point on it will always redirect all write calls to the original stream.

This lets users set up logging as they are used to; the only downside I see is that the “wrapper” object lives forever, but it should be a small class and I do not see much problem with it.

Again this would only affect sys capture.

Do other people envision other problems with this approach?
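
A rough sketch of the kind of wrapper being described (illustrative only, not pytest’s actual capture code):

import sys

class CaptureWrapper:
    """Stands in for sys.stdout/sys.stderr permanently: writes go to the
    capture buffer while capturing is active, and to the original stream
    afterwards."""

    def __init__(self, original):
        self._original = original
        self._target = original

    def start_capturing(self, buffer):
        self._target = buffer

    def stop_capturing(self):
        # The wrapper is never removed from sys.stdout/err; late writers
        # (atexit hooks, daemon threads, cached StreamHandlers) simply end
        # up writing to the original stream again.
        self._target = self._original

    def write(self, data):
        return self._target.write(data)

    def flush(self):
        return self._target.flush()

    def __getattr__(self, name):
        # Delegate anything else (encoding, isatty, ...) to the current target
        return getattr(self._target, name)

Installed once, e.g. sys.stdout = CaptureWrapper(sys.stdout), and left in place for the rest of the interpreter’s life.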

@spearsem the technical best practice is - libraries don’t set up logging on import, applications set up logging on startup

and that prevents a number of issues - like triggering configuration of logging in a situation where the system state is unexpected

so as far as i’m concerned, this is about practical misuse of the stdlib logging module

the stdlib itself even suggests not triggering this at import, but rather at program startup

so for me this is not a topic about far off philosophy, this is about ignoring the best practices that are in place to prevent the very fallout that’s now being complained about
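
The pattern being referred to looks roughly like this (module and function names are borrowed from the traceback above, purely as an illustration):

import logging

# Library module (e.g. ml/models/neighbors.py): only obtain a logger,
# never configure handlers at import time.
logger = logging.getLogger(__name__)

def re_index(n, dim):
    logger.info("Fit new model with %s ids and a dimensionality of %s", n, dim)

# Application entry point: configure logging at startup, not at import time.
def main():
    logging.basicConfig(level=logging.INFO)
    re_index(48, 2)

if __name__ == "__main__":
    main()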

Hi everyone,

What I believe is happening is:

  1. pytest changes sys.stdout and sys.stderr to a buffer while importing test modules.
  2. If there’s user code setting up logging and/or creating a logging.StreamHandler at the import level, it will attach itself to pytest’s buffer.
  3. When pytest is about to finish the test session, it will restore sys.stdout and sys.stderr to the original values, and close the “capture” buffer.
  4. Here the problem happens: if any message is emitted at this point, the StreamHandler will try to write to the now-closed buffer, hence the error.

Unfortunately I don’t see how pytest can work around this, because we can’t know who kept a reference to the capture buffer and somehow tell the owner of the reference that we are now “dead”.

@spearsem, are you calling basicConfig at import time in your application, or setting up logging yourself at import time?

If I’m getting this right, my suggestion is to avoid setting up your logging configuration at import time, moving it to a function which is called only when running your actual production code.

This function with the teardown setup saved me, thanks a ton!

Just to understand, where am I supposed to place and call this function? I’m encountering the same exact problem as everyone else

I don’t think my solution is good, it was rushed to deliver a PoC, but what I did was to create a pytest fixture with that code; let’s call this fixture clear_log.

After that, I just used the usefixtures decorator on the problematic test functions, like:

@pytest.mark.usefixtures("clear_log")
def test_something():
    ...

Not very neat, but solved the problem for me.

@analog-cbarber’s fix worked for me within a pytest_sessionfinish hook in conftest.py to clean up handlers in multi-threaded libraries.
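
A sketch of what that can look like (the hook and its arguments are pytest’s; the body is the clear_loggers logic from above):

# conftest.py
import logging

def pytest_sessionfinish(session, exitstatus):
    """Remove handlers from all loggers before pytest tears down its capture buffers."""
    loggers = [logging.getLogger()] + list(logging.Logger.manager.loggerDict.values())
    for logger in loggers:
        for handler in getattr(logger, "handlers", [])[:]:
            logger.removeHandler(handler)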

Whatever your philosophy about how logging should be set up, I would hope that the underlying philosophy of a testing framework should be that it should not change the underlying semantics of the runtime system.

If pytest breaks your code when doing something that works perfectly fine otherwise – recommended or not – then you are putting a heavy burden on the user when they encounter a problem like this.

Practically speaking even after reading all of the above I still don’t understand how to fix my broken tests.

The following fixture, based on the previously mentioned idea of handler cleanup, works for my async app to avoid closed-resource errors after the tests have actually finished.

Be careful about fixture order if you have other session teardowns that use logging.

import logging
import pytest

@pytest.fixture(scope='session', autouse=True)
def cleanup_logging_handlers():
    try:
        yield
    finally:
        # Drop stream handlers from the root logger once the session is done
        for handler in logging.root.handlers[:]:
            if isinstance(handler, logging.StreamHandler):
                logging.root.removeHandler(handler)

They think every other library is wrong except theirs XD

A similar issue still happens with the latest pytest wheel, pytest-7.2.1-py3-none-any.whl:

--- Logging error ---
Traceback (most recent call last):
  File "/usr/lib/python3.8/logging/__init__.py", line 1088, in emit
    stream.write(msg + self.terminator)
ValueError: I/O operation on closed file.
Call stack:
  File "/usr/lib/python3.8/multiprocessing/util.py", line 332, in _exit_function
    info('process shutting down')
  File "/usr/lib/python3.8/multiprocessing/util.py", line 54, in info
    _logger.log(INFO, msg, *args)
Message: 'process shutting down'
Arguments: ()

Pytest probably needs to warn every time a log handler does the “crime” of setting up stdio handlers in test setup

i’d rather switch those to an exception that clearly tells users to use a better logging setup, and a documented best practice for setting up logging

from my pov it’s important to just say NO to tactically broken but convenient setups that would make the internals a more painful mess

I understand the motivation for this, but it would be a very unfortunate decision e.g. for people using ROS (Robot Operating System). I ran into https://github.com/pytest-dev/pytest/issues/5577 specifically in this context. The problem is that the “tactically broken” logging setup can come out of ROS itself, and – what makes this an impossible problem to work around as a user – the ROS Python side is tightly coupled to its native distribution (yes, it is quite ugly…). As a result, “fixing” these issues is not a very realistic option, because one would have to maintain their own ROS distribution fork 😦.

That is just one example. It would be nice if we were able to use py.test even if third party libraries are guilty of logging setup at import – in particular if these libraries are not pip-installable and thus hard to fix. Even worse: Third party library authors may even prefer to stick to the anti-pattern for personal reasons (I recently had such a discussion about a related on-import anti-pattern).

I’ll have to give #6034 a try.

@Zac-HD Thanks for migrating that example from the other issue. In my case with my original bug, there was simply no way to produce a reproducible example, since it involved so many moving parts related to pytest, pytest-xdist, and a large test suite in the project. In general it is quite complex to reduce pytest errors to small, reproducible cases, though I agree that’s the ideal thing to strive for. Unfortunately, most of the time the best someone can really do is paste in the error message, give general context around the use case where the error appears, and hope pytest devs who know much more about the internals can take it from there and scope it down to the reproducible essence of the bug. Just wanted to say it is very appreciated!