vstest: Entire test runner workflow is too slow and unenjoyable

I’m on the ASP.NET team and using the VS 2017 Community RTM build and CLI tools. There are a few key issues that totally disrupt my workflow with the test runner and generally make the experience unenjoyable. We use xunit with the test explorer and here are the issues I experience daily:

  1. The test explorer window is too narrow and test names are unreadable by default. [screenshot] Test names contain the full namespace, which makes it impossible to fit them into the tiny window. After I expand it to see the actual names, it moves the summary window to the side (which I never wanted to begin with): [screenshot]

  2. The test runner rebuilds the test project and its dependencies when running any test. This is a horrible experience in .NET Core projects because of the missing “up to date” check. The entire project graph is checked for stale assets just to run a single test.

  3. I’m unable to right click a specific test in source and run the test. It seems to run all tests in that class instead of running the specific test method I targeted. This used to work before the move to csproj and has been broken with the new test runner and tooling.

  4. Test discovery is super slow across multiple projects. It seems as though test discovery happens sequentially for each project in the solution instead of in parallel. Why is that?

  5. The test runner inside visual studio doesn’t show console output. I want to see my Console.WriteLine output in the test runner window. I use it to debug race conditions all the time. Sometimes using the debugger doesn’t cut it because it slows everything down to the point where it’s hard to reproduce the problem. There are developers on my team who built a hacked-up version of the test runner to enable this on their dev boxes. This is a real blocker.

  6. I can’t see the active test being run. There are situations where I want to see the current test being run (both on the command line and in visual studio) so I can tell which one is hanging (if it hangs). It would have saved me so many hours if there was a simple way to show the active test being run in both the test explorer and on the command line.

  7. Certain crashes make the test runner implode. I’m working on a refactoring and when I run tests sometimes this happens:

The active test run was aborted. Reason: Unable to communicate with test host process.

Which probably means the process crashed but the test runner doesn’t help me diagnose things here. I don’t know which test was running when it crashed, so I have to manually binary search until I find the one that is the culprit.

I’ve been actively using this product for the last week in-depth and I feel like a few simple tweaks could really make this much much better.

About this issue

  • Original URL
  • State: closed
  • Created 7 years ago
  • Reactions: 213
  • Comments: 57 (21 by maintainers)

Most upvoted comments

Hey all, I just wanted to give an update on the progress we’ve made in this experience. We still have a long way to go, but here are some improvements you can see in Visual Studio 2017 Update 15.6.

  1. The test explorer window is too narrow and test names are unreadable by default.

We’ve added a hierarchy view that should help the readability of test names as well as navigation. Projects, namespaces, and classes are different tiers in the hierarchy. [screenshot]

  4. Test discovery is super slow across multiple projects.

Real Time Test Discovery is on by default in Update 15.6 and should improve this greatly. It discovers tests from source code instead of from built assemblies and is much faster.

@issafram Wouldn’t it be great if Console.WriteLine just worked?

It’s been a while since we’ve given an update on this ticket. Almost every experience request has been addressed in the past several Visual Studio and VS Test Platform updates. Let me give a roll call:

  • 1. Too narrow, can’t read test names.
  • 2. Test runner rebuilds test project and dependencies when running any test.
    • A lot of work has gone into optimizing this. Here are a few improvements:
    • Visual Studio Test Explorer no longer builds projects that are unrelated to the selected tests in the “Run selected” case.
    • The Test Explorer will still always request to build test projects that contain the tests that the user selected to run. However, VS will short-circuit the build if it detects nothing has changed for the test project or its dependencies.
    • If code in either the test project or in the production project that is referenced from the test project has changed then VS will build the test project.
  • 3. Running one test from the editor runs all tests in that class
    • Fixed for .NET test frameworks.
  • 4. Slow test discovery
    • Addressed with source-based discovery, with more improvements to come. Also helped by caching test results between closing and reopening Visual Studio.
  • 5. I want to see Console.WriteLine output in the test runner window
    • We have some major updates coming to the test detail pane in Visual Studio Update 16.10 that should improve viewing output in the Test Explorer. We paid extra attention to how to gracefully truncate long output so the user is less often required to ‘Open additional output’ in a separate view.
    • Console.WriteLine handling depends on the test framework used. If the test framework/adapter sends Console.WriteLine to stdout/stderr so that it is returned in the TestResult (which has fields for stdout and stderr), then it will appear in the summary pane (truncated as needed) and its full text will be inserted as a collapsible section in the full log file.
  • 6. I can’t see active test being run.
    • You can see a spinner icon on any test currently being run. There is a slight delay so it only appears for longer running tests. The delay was added because showing a different icon for only a few milliseconds at a time was too distracting by default.
  • 7. Test process failures and crashes need better reporting and blame on which test crashed the process.

Going forward, there is still lots to fix in the testing experience, but I’d suggest we open fresh issues for the individual features and close this one. I’d like to thank the community for following, upvoting, and commenting with their scenarios! Hopefully most of you see a night-and-day difference between the test experience of 2017 and what we have today. (And if you don’t, please keep the feedback coming!)

Also, a note on this repo: the vstest repo specifically refers to the VSTest Platform. The majority of Visual Studio Test Explorer feedback should be submitted on https://developercommunity.visualstudio.com/ with the Provide Feedback tool in Visual Studio. This distinction is because the Test Explorer code is closed source and not a part of this repo. Thank you again for helping us improve this experience for the millions of developers who use this tool!

@Jehoel I believe we will be able to look at adding a splitter option in the next few sprints. Please follow this developer community ticket: “Group Summary” section in “Test Explorer” moves. If you are still experiencing perf issues, I’d also encourage you to file a bug on developer community.

On the names/width issue; since the names are hierarchical, this seems to be a perfect opportunity for an implicit tree structure, perhaps grouped so that you don’t have 5 levels with only one sub-level and nothing else; so in your example there would be just one parent node of

  • Microsoft.AspNetCore.Server.Kestrel

With nodes underneath that for every separate thing with a different FQN, and the test methods as the leaf level

Relatively simple UI change that can be done purely on the text of the FQN (no input from the provider needed), and could make it so much more usable

As some of you know, I’m on a mission to be able to stop using ReSharper. Without a doubt the most frequent reason I need to keep turning ReSharper back on is the convenience of these things:

  1. A hierarchical display of tests by project, namespace, class, method, and test cases within parameterized methods.
  2. The ability to filter the view to see only those nodes (and their ancestors) with failing tests, or ignored tests, or warning tests, etc.
  3. The ability to right click anywhere in a tree and run or debug that node and all children or navigate.
  4. Multiple unit testing sessions, tabbed within the test window. Each session running independently of the rest and showing a separate chosen subset of tests. I can’t stress this enough. I rarely want to see all tests in the same window. To run all tests I prefer either continuous testing tools or keyboard shortcuts to run all. If I’m using the test window, that means I’m repeatedly running or debugging a very specific selection of tests. I should not have to re-select them each time or run more than I need.
  5. The ability to add tests to the current session by clicking icons inline with a class or method declaration in code.
  6. The ability to start a new session by doing the same, or by right-clicking a node in an existing session.
  7. The ability to press the delete key to remove a selection of tests from a test session.

Of the several testing UIs I’ve seen, Visual Studio is the only one that lacks the intuitive hierarchical paradigm. When trying to run an arbitrary selection of tests repeatedly, scrolling through a massive flat list with near-uselessly-primitive grouping abilities is distracting and tediously manual. It leaves me switching ReSharper or NCrunch back on so that I feel like I can focus on coding.

I genuinely believe that the VS teams want to provide a focused, considerate tool out of the box. You wouldn’t want to blindly copy another tool. This is one of the few areas in which I can say urgently that specific other tools have gotten it very right; thus this feedback. 😃

First time VS Test user, long time R# user.

Started with VS 2017 Community Edition to see if it’s caught up with ReSharper (aka R#) for some specific features. Literally unable to do any testing because of the tooling … or more precisely, the same issues @davidfowl has accurately and kindly mentioned.

Literally all the points raised impact us here too 😦

Even something as simple as David’s point 3 - run/debug the test in which the cursor is currently placed … or right-click in that test method (code) and run/debug.


Here’s some sample screenies from R#.

No, this is not trying to be “MS is copying another product and trying to destroy it” … but … please be inspired by this premium product.

[screenshots of the R# test runner]

A number of us feel that we just can’t use the built in VS Test while these features aren’t around.


Sure, we have paid R# licenses. But we find R# really slows down our VS experience (it does SOOO much, to be fair to it … just tooo much for us) … which is why we would love to see if we can start using VS Test instead.

This project should please be OpenSourced. Plz. Let the community help!

6a. Please update the icon used for “test currently running”. Ideally to an icon that moves, but at least to an icon that is not mainly green, the same color that indicates passed tests. This also makes the currently running test very hard to find, especially if it is in a list of tests that already ran and passed.

  1. Session management. Please introduce test sessions or “playlists” without the need for saving them in a file.

@pvlakshm or @codito - how about my questions about:

  • A tiered hierarchy instead of the current two tiers. I think this question is different from question (1), “test window too narrow…”
  • Moving the visual green/red test button/circle out of CodeLens so it stands on its own. Not all of the CodeLens stuff, just the red/green test buttons. (Who is the manager to ask about this one? Yes, I know this is a huge long shot … but you never know…)

Ref:

[screenshot]

Also … today we were trying to use it again and had this issue: we ran all tests (say 50 or 100). A test (or two? three?) errored. We saw the “red error bar” at the top but had no idea which actual test errored.

Here’s a simple example…

[screenshot]

Which test has errored? Then compare this to a tool that makes things easier to work with:

[screenshot]

Not hating at all - just trying to help and hope this feedback (under the OP’s same topic) can be addressed 😃

🙏

@codito Is anyone working on fixing any of the issues mentioned? How much of it is open sourced? Maybe the community can help with some of it.

In 2.x, xUnit.net doesn’t capture the console output. You could use ITestOutputHelper: https://xunit.github.io/docs/capturing-output.html
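For readers landing here, a minimal sketch of the ITestOutputHelper pattern from the linked docs (the class and message names below are just examples):

```csharp
using Xunit;
using Xunit.Abstractions;

public class ExampleTests
{
    private readonly ITestOutputHelper _output;

    // xUnit 2.x injects ITestOutputHelper through the test class constructor.
    public ExampleTests(ITestOutputHelper output)
    {
        _output = output;
    }

    [Fact]
    public void Addition_Works()
    {
        var result = 2 + 2;
        // Captured per test and shown in that test's result details,
        // unlike Console.WriteLine, which xUnit 2.x discards.
        _output.WriteLine($"result was {result}");
        Assert.Equal(4, result);
    }
}
```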

The problem is I’m not trying to Console.WriteLine from my test. I’m trying to do it in product code. Plumbing that through manually isn’t fun.

@kendrahavens Thanks for the update and I really don’t want to add another +1 comment to a github thread … but …

just adding the hierarchy view has been such a godsend! Testing is already a bazillion (yes, I used scientific metrics to calculate the improvement level) times easier to manage.

Thank you for listening!

Looking forward to the other improvements that hopefully coming too 🥂

Often also leaves lots of dotnet processes hanging around

@davidfowl great feedback. Taking up the test window ones:

  2. The test runner rebuilds the test project and its dependencies when running any test. This is a horrible experience in .NET Core projects because of the missing “up to date” check. The entire project graph is checked for stale assets just to run a single test.

This could be related to https://github.com/dotnet/roslyn-project-system/issues/62? /cc @davkean

  4. Test discovery is super slow across multiple projects. It seems as though test discovery happens sequentially for each project in the solution instead of in parallel. Why is that?

It is possible to do parallel discovery with the “Parallel” option enabled in Test Explorer. https://github.com/Microsoft/vstest/issues/499 is tracking whether discovery should be parallel by default. [screenshot]

  6. I can’t see the active test being run. There are situations where I want to see the current test being run (both on the command line and in visual studio) so I can tell which one is hanging (if it hangs). It would have saved me so many hours if there was a simple way to show the active test being run in both the test explorer and on the command line.

This is a usability issue. Created #626. Test Explorer shows the in-progress test in the Not Run group, and the Passed group shows first. If a user has a thousand tests, the Not Run group clearly falls below the visible screen area, so it provides no meaningful feedback to the user. [screenshot]

  7. Certain crashes make the test runner implode. I’m working on a refactoring and when I run tests sometimes this happens:
The active test run was aborted. Reason: Unable to communicate with test host process.

Which probably means the process crashed but the test runner doesn’t help me diagnose things here. I don’t know which test was running when it crashed, so I have to manually binary search until I find the one that is the culprit.

Created #627 to track this.

Thanks again for bringing this up. Readers, please vote on the individual issues (it will help us prioritize).

With apologies for the thread necromancy (this issue is still open after all), but is there any “fix” for how the Test Explorer window automatically moves to a vertical split when it’s made wider? (The whole point of making the Test Explorer window wider is to see more of the test-list section!) David Fowler reported it in his original posting in 2017 and it’s still a problem today:

After I expand it to see actual, it moves the summary window to the side (which I never wanted to begin with):

Screenshot

I’m currently using xUnit_style_verbose_test_names_that_describe_what_should_happen, and the still-narrow layout of the Test Explorer window doesn’t work well with this naming convention (even with namespace and type grouping).

Also, right-clicking on the top few nodes in the explorer when tests are grouped by namespace and class (with xUnit at least, I don’t think it affects MSTest but I might be wrong) has a 2-3 second delay before the context-menu actually appears - this is consistently reproducible on different machines. Right-clicking on a test node results in the menu appearing near-instantly (but still with a subtle delay).

Yeah, this is unusable. It builds every single time there is a test. I’m looking at a minute and a half before it runs my single test that is: Assert.True(1 == 1).

@codito

@ManishJayaswal a few scenarios: editors (not all of which will be the same runtime as the test runner or the test host), and the ability to decouple the testhost in scenarios like devices or remote execution. However, I don’t have the data to conclude that the JSON protocol is the biggest bottleneck right now. There has been extensive discussion and profiling in #349 and #485 which suggested other big rocks. We learned about extra appdomains and source information (with cross-appdomain calls) in those exercises, which we’re working to fix first. What would be an acceptable percentage of infrastructure overhead in a test run?

This does appear to be a significant bottleneck for LUT runs. With every LUT run ( which we do after every edit) here is what we are seeing:

  • This greatly increases the size of the payload we have to send over the wire for IPC. Since we have to do a lot of IPC between the test host, test console, and LUT process, it is showing up as a significant bottleneck. For a test run of about 2000 tests, the payload appears to be about 30 MB, which needs to go over IPC for every run.
  • This also results in a significant serialization/de-serialization cost which happens at every hop across the process boundary. Since LUT does not need the JSON format, we could have significant time saving if we do not have to serialize and de-serialize this data.
  • This also increases the memory used in LUT process because we are generating the strings to hold the data we get over IPC and then additional JSON objects when these strings are converted to JSON. We will save all of this memory cost if we don’t do this.

Additionally, I think the VS process will also have the memory overheads I mentioned above.

I do understand why you have designed it the way it is (to make it work for all scenarios); however, to deliver the performance that LUT needs, we do need a bypass from this. It should be an alternative available to TP clients if the existing mechanism is too bulky for their scenarios. We (@genlu) do have perf traces which show these bottlenecks pretty clearly. We made changes to the Test Platform code to collect this information. Please let us know if you would like to see them.

@sbaid - we would need this to address perf and scale issues for LUT for the quarterly release. Please let us know if you want us to file a separate issue for this. Additionally, it seems that a lot of issues are getting mixed up in this meta issue. It would be good to list all of the related issues at the very top and start marking them with updated status (like under investigation, completed, etc.) so that all of us know about the progress. If we do not do that, I worry that things may fall through the cracks.

@bradwilson A thought about the Console problem - what if you set your own writer for Console (via SetOut) that stored its data in an AsyncLocal&lt;T&gt;? That way you could associate console output with the currently running test.
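That idea can be sketched roughly as follows. This is an illustration of the AsyncLocal approach only, not any framework's actual implementation; all names here are made up:

```csharp
using System;
using System.IO;
using System.Text;
using System.Threading;

// Illustrative sketch: a TextWriter installed via Console.SetOut that
// appends to a per-async-context buffer, so output written by parallel
// tests (and their awaited child tasks) lands in the right bucket.
public sealed class AsyncLocalConsoleWriter : TextWriter
{
    // AsyncLocal flows with the ExecutionContext into awaited work.
    private static readonly AsyncLocal<StringBuilder> Current =
        new AsyncLocal<StringBuilder>();

    public override Encoding Encoding => Encoding.UTF8;

    // Call at the start of each test to open a fresh buffer for it.
    public static StringBuilder BeginCapture()
    {
        var buffer = new StringBuilder();
        Current.Value = buffer;
        return buffer;
    }

    public override void Write(char value) => Current.Value?.Append(value);

    public override void WriteLine(string value) => Current.Value?.AppendLine(value);
}

// Installed once per process, e.g.:
// Console.SetOut(new AsyncLocalConsoleWriter());
```

Output from a task that suppresses ExecutionContext flow (or outlives the test) would still be lost, as discussed later in this thread.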

Thank you for all the feedback and discussions.

Here is a summary of the concerns raised in this thread. I have pointed to the relevant issue tracking it. At the end I have added a pointer to the Test Explorer backlog. Please take a look there and vote up the individual issues therein. We will be executing based on the vote count.

1. The test explorer window is too narrow and test names are unreadable by default. This is xUnit specific. Names are controlled by config. Use “methodDisplay”: “method”. Please see here: https://xunit.github.io/docs/configuring-with-json.html

2. The test runner rebuilds the test project and its dependencies when running any test. This is a horrible experience in .NET Core projects because of the missing “up to date” check. The entire project graph is checked for stale assets to run a single test. This is related to dotnet/roslyn-project-system#62. /cc @davkean, @srivatsn

3. I’m unable to right click a specific test in source and run the test. It seems to run all tests in that class instead of running the specific test method I targeted. This used to work before the move to csproj and has been broken with the new test runner and tooling. This is xUnit-specific. This appears to be newly broken in 2.2, and there is an open bug: xunit/xunit#1140

4. Test discovery is super slow across multiple projects. It seems as though test discovery happens sequentially for each project in the solution instead of in parallel. Why is that? We are using the following issue to track scaling up test discovery for large solutions: #674

5. The test runner inside visual studio doesn’t show console output. I want to see my Console.WriteLine output in the test runner window … This is xUnit specific. In 2.x, xUnit.net doesn’t capture console output; you can use ITestOutputHelper instead. Please see here: https://xunit.github.io/docs/capturing-output.html. Also, see xunit/xunit#1119

6. I can’t see the active test being run. There are situations where I want to see the current test being run (both on the command line and in visual studio) so I can tell which one is hanging (if it hangs). It would have saved me so many hours if there was a simple way to show the active test being run in both the test explorer and on the command line. We are using the following issue to track this: #626

7. Certain crashes make the test runner implode. I’m working on a refactoring and when I run tests sometimes this happens: “The active test run was aborted. Reason: Unable to communicate with test host process.” We are using the following issue to track this: #627

8. Is anyone working on fixing any of the issues mentioned? How much of it is open sourced? Maybe the community can help with some of it. I have created a UV item: https://visualstudio.uservoice.com/forums/121579-visual-studio-ide/suggestions/18785326-make-the-vs-test-explorer-open-source. Please vote. Along with your vote feel free to indicate if you would be willing to contribute.

9. Group by namespace Please note that the Test Explorer already supports group-by-namespace.

10. Run all or just several tests from Test Explorer under performance profiler Please vote for the issue here: #660. We will use that to inform moving this into the backlog.

Test Explorer backlog: Here is our Test Explorer backlog: #676. It captures the focus of our upcoming work. Please consider voting on the individual issues therein.

@davkean That’s just one level though. Notice how I mentioned Tiered levels?

It’s nice to break it down by Tiers. I think @mgravell said the same thing, above?

Keeping an excessive-overhead protocol for backwards compatibility isn’t a good answer. There needs to be a way for software which is regularly updated to opt in to some fast communication mechanism. Really it should just be in-process function calls - it’s not OK for the adapter infrastructure to account for 90% of the time spent running tests, not in this world of pervasive TDD and new features like live unit testing.

@codito, Repro Steps:

  • Enlist in Roslyn: https://github.com/dotnet/roslyn
  • Run Restore.cmd
  • Open Roslyn.sln
  • Build Roslyn.sln
  • Wait minutes for Test Explorer to Discover all 60k tests
  • Make a whitespace change in some project
  • Build Roslyn.sln
  • Wait minutes for Test Explorer to Discover all 60k tests

I’m also not sure that reading source information is the slow part here; I’m pretty sure it’s the serialization mechanism between the test adapter and VS.

xUnit can discover all 60k tests in Roslyn in seconds (by default it’s just reflecting over all attributes that inherit from [Fact] and [Theory]; it can be slightly more expensive with a custom discoverer, but we don’t have one).

You can also crack the PDBs and pull out the source information for the tests in a very small amount of time (seconds) as well (and if this was a bottleneck, then it could be made lazy and only pulled/displayed when the user selects an individual testcase, either for navigation or for viewing details).

With both of those problems “gone”, the remaining issues are:

  • The constant rediscovery of all tests, when you only need to rediscover for the assemblies that have been rebuilt (99% of the time, custom discoverers may break this logic)
  • The serialization of data between the test adapter and VS. It looks like you are using JSON and TCP, which is incredibly inefficient. Using IPC (when the host and client are on the same machine) and a binary serialization mechanism would be much more efficient. The only data that needs to be ‘deserialized’ is the data you care about. Everything else you can store as a byte[] and transmit back directly, without deserialization (on read from the adapter) and without reserialization (on write to the adapter).
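The pass-through idea in the last bullet can be sketched like this. These types and names are purely illustrative, not the actual Test Platform protocol:

```csharp
using System;

// Illustrative only: a relay parses just the small envelope it needs for
// routing and keeps the payload as an opaque byte[], forwarding it verbatim.
// The relay therefore pays no serialize/deserialize cost for data it never
// inspects.
public sealed class RawMessage
{
    public string MessageType { get; }   // parsed: used for routing
    public byte[] Payload { get; }       // opaque: forwarded as-is

    public RawMessage(string messageType, byte[] payload)
    {
        MessageType = messageType;
        Payload = payload;
    }
}

public static class MessageRelay
{
    // Forward the payload bytes untouched; only routing looks at the envelope.
    public static byte[] Forward(RawMessage message)
    {
        return message.MessageType == "TestRunStatsChange"
            ? message.Payload          // hot path: no re-serialization
            : Array.Empty<byte>();     // other message kinds handled elsewhere
    }
}
```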

The problem is I’m not trying to Console.WriteLine from my test. I’m trying to do it in product code. Plumbing that through manually isn’t fun.

I completely agree. I have always stayed away from xUnit for this sole reason (plus another one: when using output capturing with lots of output, xUnit is damn slow, or maybe its ReSharper integration is, I’m not sure, while NUnit works perfectly there).

For the things which appear to be xUnit.net-centric:

  1. Names are controlled by config. Use "methodDisplay": "method". https://xunit.github.io/docs/configuring-with-json.html
  2. This appears to be newly broken in 2.2, and there is an open bug: https://github.com/xunit/xunit/issues/1140
  3. In 2.x, xUnit.net doesn’t capture the console output. You could use ITestOutputHelper: https://xunit.github.io/docs/capturing-output.html
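To make item 1 concrete: per the linked docs, that setting goes in an xunit.runner.json file next to the test project (the file also needs to be copied to the build output). A minimal example:

```json
{
  "methodDisplay": "method"
}
```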

@davidfowl

5.2) I experimented with using AsyncLocal (as suggested above) to preserve the output per test in MSTest, and found this amazing XUnit extension by @simoncropp that already implements it for XUnit. https://github.com/SimonCropp/XunitContext#xunitcontextbase

All you need to do is install the XUnitContext package and use the provided base class or register the context yourself. Then all Console writes (and other sources as well), will be written to the output stream per test and echoed in the test output when the test fails.

Here is the output from two tests running in parallel:

[screenshot]

As you can hopefully see in the above output, the results are written to their respective tests even though the tests are running in parallel. The nice thing is that even a non-awaited task that finishes while the test is still running will write into the correct output.

There are 3 more edge cases that I was able to identify:

  1. When a non-awaited task takes longer than the whole test run, you won’t see the output in any test, because the process is killed before it completes.
  2. When a non-awaited task completes after its test finishes, but before the process is killed, you won’t see the output in any test because it went to the standard output and was discarded.
  3. When a task explicitly suppresses ExecutionContext flow, you won’t see the output because it went to the standard output and was discarded.

I was able to capture cases 2 and 3 in my extension because I used a global setup to replace the Console logger before any test run. That said, we don’t have a way of showing that output in Test Explorer at the moment anyway, but it might be an improvement to consider, at least getting it into the logs. Each of the edge cases has an example in the code below, in case @simoncropp would like to comment on or correct my usage of his NuGet package.

using System;
using System.Diagnostics;
using System.Threading;
using System.Threading.Tasks;
using Xunit;
using Xunit.Abstractions;

namespace TestProject1
{
    public static class TestContext
    {
        public static bool Flag = true;
        public static Stopwatch Sw = Stopwatch.StartNew();
    }

    public class TestedClass
    {
        public async Task Method1(string id)
        {
            Console.WriteLine($"{TestContext.Sw.ElapsedMilliseconds:0000} method1 called from {id}");
            await Method2(id);
        }

        public Task Method2(string id)
        {
            Console.WriteLine($"{TestContext.Sw.ElapsedMilliseconds:0000} method2 called from {id}");

            // not awaited on purpose
            Task.Run(() => { Thread.Sleep(200); Console.WriteLine($"{TestContext.Sw.ElapsedMilliseconds:0000} Non-awaited short task finished. {id}."); });
            // this will never show in output because the process terminates before this task can finish
            Task.Run(() => { Thread.Sleep(10000); Console.WriteLine($"{TestContext.Sw.ElapsedMilliseconds:0000} Non-awaited long task finished. {id}."); });

            using (ExecutionContext.SuppressFlow())
            {
                // this won't show up in the test result because the flow of dependencies was suppressed
                Task.Run(() => Console.WriteLine($"{TestContext.Sw.ElapsedMilliseconds:0000} - Suppressed. {id}"));
            }

            return Task.CompletedTask;
        }
    }

    public class UnitTest1 : IDisposable // Register the outputs yourself.
    {
        public UnitTest1(ITestOutputHelper output)
        {
            XunitContext.Register(output);
        }
        public void Dispose()
        {
            XunitContext.Flush();
        }

        [Fact]
        public async Task TestMethod1()
        {
            Console.WriteLine($"{TestContext.Sw.ElapsedMilliseconds:0000} t1-start");
            var tc = new TestedClass();
            await tc.Method1("t1");
            // wait for the other test so we know there are tests running in parallel
            while (TestContext.Flag)
            {
                Console.WriteLine($"{TestContext.Sw.ElapsedMilliseconds:0000} t1-wait");
                await Task.Delay(100);
            }

            await tc.Method1("t1");
            Console.WriteLine($"{TestContext.Sw.ElapsedMilliseconds:0000} t1-end");

            // wait for the short non-awaited tasks to complete to see if they print in the correct 
            // test output, if the test happens to run long enough
            await Task.Delay(500);
            
            throw new Exception("err");
        }
    }

    public class UnitTest2 : XunitContextBase // Or use the provided base class
    {
        public UnitTest2(ITestOutputHelper output) : base(output)
        {
        }

        [Fact]
        public async Task TestMethod2()
        {
            Console.WriteLine($"{TestContext.Sw.ElapsedMilliseconds:0000} t2-start");
            var tc = new TestedClass();
            await tc.Method1("t2");
            await Task.Delay(1000);
            TestContext.Flag = false;
            Console.WriteLine($"{TestContext.Sw.ElapsedMilliseconds:0000} t2-end");

            Task.Run(() => { Thread.Sleep(200); Console.WriteLine($"{TestContext.Sw.ElapsedMilliseconds:0000} Non-awaited short task finished after test t2 finished."); });

            throw new Exception("err");
        }
    }
}

Regarding the Console.WriteLine thing, I was working on something yesterday and wishing that Core had an IConsole interface, which could be automatically wired up with a simple pass-through to the static class by the IoC. I ended up daydreaming about a framework-level attribute or interface which would allow you to specify a default implementation for interfaces to use if one has not been specified during IoC configuration, something that could be supported by all DI systems. Like this:

public class DefaultConsole : IConsole, IFallbackImplementationFor<IConsole>
{
    public void WriteLine(string s) => Console.WriteLine(s);
}

Polite ping to @pvlakshm or @codito again ☝️

Ah, misread your second image, apologies. 😃

@tannergooding try opening Task Manager next time and killing vstest.console while you are waiting… (once CPU has hit ~0%)

@onovotny, you’re correct, I just didn’t explain myself very well 😄. I meant that the default discoverers (assuming you don’t have any custom discoverers) appear to use reflection over attributes to locate the tests. Custom discoverers can use whatever mechanism they wish.

In either case, xUnit is able to discover all 60k of the Roslyn tests in seconds (not minutes), so discovery itself is not the likely bottleneck.

@davidfowl

Wouldn’t it be great if Console.WriteLine just worked?

Yes it would, but this is very similar to the Microsoft Build & Release Management Team deciding that Write-Host should not be allowed in PowerShell scripts. I think it is a design decision where they decided that there is no concept of a Console (Host) when running on an agent. That is my guess at least.

The problem is I’m not trying to Console.WriteLine from my test. I’m trying to do it in product code. Plumbing that through manually isn’t fun.

I like to use the adapter pattern for this issue. I use an ILogger, or your favorite logger in my code. I then configure my logger to write to the console for certain classes or certain log types (trace or verbose or debug or whatever). NLog is great for this and has a built in logger for the console (I’m sure most loggers do as well). This keeps my production code working as expected.

I then setup my unit tests to call XUnit’s ITestOutputHelper for my ILogger calls. This is very flexible and you get what you want in your product code while also getting the output in your unit tests.

I know it sounds painful, but you are basically hard coding a dependency with Console.WriteLine. Abstract it and you will be ok.

I’m not saying that I agree with the decision to disallow the Console from testing, but I’m ok with making my code more flexible.
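A rough sketch of that adapter arrangement. ILog here is a stand-in for whatever logging abstraction you use (Microsoft.Extensions.Logging, NLog, etc.), not a real library interface:

```csharp
using Xunit.Abstractions;

// Hypothetical minimal logging abstraction for illustration.
public interface ILog
{
    void Info(string message);
}

// Production implementation: writes to the console (or a real logger).
public class ConsoleLog : ILog
{
    public void Info(string message) => System.Console.WriteLine(message);
}

// Test implementation: routes the same calls to xUnit's output helper,
// so product-code logging shows up in the per-test output.
public class TestOutputLog : ILog
{
    private readonly ITestOutputHelper _output;

    public TestOutputLog(ITestOutputHelper output) => _output = output;

    public void Info(string message) => _output.WriteLine(message);
}
```

In production the container wires up ConsoleLog; in a test fixture you construct TestOutputLog with the injected ITestOutputHelper and pass it to the code under test.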

@bradwilson I added an issue with examples for changes to xUnit that would allow ITestOutputHelper to be registered for all tests without the need for explicit constructor injection. The changes don’t affect existing functionality, just provide better subclassing opportunities for those who want/need it. Sorry to partially hijack this issue!

See https://github.com/xunit/xunit/issues/1119

@davkean @bradwilson @Mike-EEE

Mike, thanks for mentioning Frigate. The project is still very early, but if you want to know about it, you can watch the webinar recording from the 19th minute. Don’t hesitate to ping me with questions or feedback. I’m happy to chat about integration with test runners.

I would add one more to that list: not only is discovery really, really slow - even getting to the discovery phase when opening a solution takes > 1 minute on http://github.com/dotnet/roslyn-project-system. I suspect waiting for output groups from CPS to populate is probably a lot of that - but I’ve not investigated.