angular-cli: v7.rc.0 serving 2x - 3x slower than stable versions on my projects

Bug or Feature?

- [x] bug report

Command?

- [x] serve --sourceMap=false

Versions?

Angular CLI: 7.0.0-rc.1
Node: 10.9.0
OS: win32 x64
Angular: 7.0.0-rc.0
... common, compiler, compiler-cli, core, forms, http
... language-service, platform-browser, platform-browser-dynamic
... router

Package                           Version
-----------------------------------------------------------
@angular-devkit/*                 0.9.0-rc.1
@angular-devkit/core              7.0.0-rc.1
@angular-devkit/schematics        7.0.0-rc.1
@angular/cdk                      7.0.0-beta.2
@angular/cli                      7.0.0-rc.1
@angular/material                 7.0.0-beta.2
@ngtools/webpack                  7.0.0-rc.1
@schematics/angular               7.0.0-rc.1
@schematics/update                0.9.0-rc.1
rxjs                              6.3.3
typescript                        3.1.1
webpack                           4.19.1

Repro steps?

Updated from 6.2.3 to v7 rc.0; it is the first time I test CLI v7 in my project. It is slower even when serving without source maps: with ng s or ng s --sourceMap=false alike, a dev-server refresh now takes around 10 seconds, while it was around 2-3s before.

Log?

https://gist.github.com/istiti/9da2393a26aeb0117932e56bb04edd9a

Useful details?

I can say the project is big relative to what angular/cli can support without Bazel (1000+ files), and unfortunately I can’t share it. I just went back to the stable versions of the CLI (v6.2.4) and Angular (see the working package.json), and I get these times when serving: https://gist.github.com/istiti/fc8b629e0a7f7cab13b41b070cbfb94b

Thanks @clydin

About this issue

  • State: closed
  • Created 6 years ago
  • Reactions: 2
  • Comments: 41 (30 by maintainers)

Most upvoted comments

I’ve had some questions in the past about how we debug these performance regressions, so I thought I could do a write-up here for anyone who’s curious.

The first step is always to reproduce the problem in a side-by-side comparison, and to get some data on it. Since the projects in this issue were not open source, I tried to reproduce using Angular.io (AIO), which you can find inside the aio folder of https://github.com/angular/angular.

I cloned AIO twice. In one of the clones I used the latest v6 CLI (@angular/cli@6.2.4 + @angular-devkit/build-angular@0.8.4), and in the other I used the upcoming v7 RC (@angular/cli@7.0.0-rc.1 + @angular-devkit/build-angular@0.9.0-rc.1).

The reports were for non-AOT builds, but AIO has some custom configurations, so I added a new configuration that disables most things, to ensure I was getting data from the simplest case:

          "configurations": {
            "debug": {
              "aot": false,
              "optimization": false,
              "buildOptimizer": false,
              "outputHashing": "none",
              "sourceMap": false,
              "statsJson": false,
              "extractCss": false,
              "extractLicenses": false,
              "vendorChunk": true
            },

I ran ng serve --configuration=debug, triggered changes by adding console.log(1); to src/main.ts, and gathered some numbers. It’s important to actually add code when triggering a rebuild, because the build system does a lot less work if the AST of the files does not change. I also ignore the numbers from the first few rebuilds, since they are sometimes artificially inflated while caches are being populated.
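If you need to trigger many such rebuilds, it is easy to script. Below is a minimal sketch of such a helper (hypothetical, not part of the CLI; run from the project root, e.g. with ts-node) that appends a real statement so the AST changes:

    // trigger-rebuild.ts: appending a statement (rather than whitespace or a comment)
    // changes the AST of src/main.ts, so the watcher performs a real incremental rebuild.
    import { appendFileSync } from 'fs';

    appendFileSync('src/main.ts', '\nconsole.log(1);\n');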

With this setup I got v6 at ~300ms rebuilds and v7 at ~6000ms. This confirmed the original report: rebuilds were much slower in v7.

Since I didn’t really know where to start looking I tried to get a profile of where time was being spent using https://github.com/GoogleChromeLabs/ndb, which is a simplified version of what’s described in https://medium.com/@paul_irish/debugging-node-js-nightlies-with-chrome-devtools-7c4a1b95ae27.

I tried taking a few profiles of the initial build plus the rebuild, but ndb kept crashing. I’d seen this happen in the past when there are a lot of profile events, so I instead tried to profile a single rebuild. I took a sample of a v6 and a v7 rebuild by following these steps:

  • ran ndb node ./node_modules/@angular/cli/bin/ng serve --configuration=debug
  • waited for the initial build to finish
  • waited 5s, manually appended console.log(1); to src/main.ts, waited for the rebuild to finish, then waited 5s to let all processes stop activity
  • triggered 3 rebuilds like this
  • went to ndb, performance tab, clicked record button on top, waited for recording to start
  • triggered another rebuild, waiting 8s before and after to be double sure I got only an isolated rebuild
  • in ndb stopped the recording, then saved it to disk
  • killed the serve process, which kills ndb

I shared these profiles with the team, then opened ndb twice, side by side, and loaded the profiles I had saved. The initial view is called “Summary” and looked like this:

v6 Summary [screenshot]

v7 Summary [screenshot]

There are 3 processes listed: Main (which I think is from ndb), the ng serve process, and the forked type checker. We run type checking in a separate process to speed up rebuilds.

  • v6 has the main thread working from 8080 to 8900 (820ms), typechecker from 8200 to 10200 (2000ms).
  • v7 has the main thread working from 10360 to 23420 (13060ms), typechecker from 10450 to 32450 (22000ms).

Remember that these numbers are for a process while being profiled: individual numbers are not representative of real world apps, but the comparison between two profiled processes is.

Couldn’t say much here besides that v7 took way longer. But it was interesting to see that even the type checker took way longer. Since the type checker doesn’t really use any webpack internals, it was indicative that whatever was slowing things down wasn’t related to webpack.

At the bottom of the ndb window I switched to the “Bottom-Up” view. This tells you how much time is spent on functions. Bear in mind there is one table for each process.

ng serve Bottom-Up, v6 left, v7 right [screenshot]

type checker Bottom-Up, v6 left, v7 right [screenshot]

What we care about here is the column called “Self Time”, which is the time spent on that specific function, but not on the functions it calls. We take the v6 profile as the baseline and look at what’s different in the v7 one.

We (the team) went over these numbers and drew some conclusions:

  • v7 spends a lot more time on virtual file system functions (an abstraction we added)
  • there’s a lot more time spent inside RxJs observables.
  • normalize looks expensive or called excessively
  • The v6 type checker is dominated by TS function calls, v7 has a lot of internal and fs errors. Drilling down shows they actually come from our devkit/core hosts.

We use a lot of Observables and knew RxJs had been updated in our packages, so wondered if there was a performance regression there.

To debug this I forced the v7 devkit packages to use the same RxJs version as in v6 by copying it over to node_modules/@angular-devkit/node_modules/. This forces node resolution to use that copy instead of the top-level one. I made sure it was being used by deleting a couple of files, which showed a bunch of RxJs errors. There was no real change in rebuild times, so this didn’t seem to be the culprit.
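If you want to double-check which copy of a package Node will actually pick up, you can query the resolution directly. A small sketch follows; the starting path is just an example, assuming the nested copy lives somewhere under node_modules/@angular-devkit/:

    // Hypothetical check: resolve 'rxjs' the way a file inside
    // node_modules/@angular-devkit/core would, to see which copy wins.
    import { join } from 'path';

    const resolvedRxjs = require.resolve('rxjs', {
      paths: [join(process.cwd(), 'node_modules', '@angular-devkit', 'core')],
    });
    console.log(resolvedRxjs);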

Then I started replacing the src/ folders of more of the @angular-devkit/* modules with their v6 versions to see when the rebuild times changed. The most crucial ones seemed to be @angular-devkit/core (where our virtual file system is) and @ngtools/webpack (where our webpack plugin is). This isn’t a great way of debugging things, but in this case it worked because not too much had changed.

Changing the @ngtools/webpack source back to v6 made rebuilds fast again. The regression was somewhere in the source of this package. We only had some 15 commits to this package that were in v7 but not v6 so started looking at those.

https://github.com/angular/angular-cli/pull/12462 and https://github.com/angular/angular-cli/pull/12461 were tested and together reduced the rebuild times by some 40% (4.5s-> 2.7s). Still too big but it helped.

You might notice that the 4.5s rebuild time I just mentioned is different from the 6s one that I reported at the beginning. I don’t know the specific reason for the discrepancy. Likely my machine had more resources available, fewer things running in the background, or the process just ended up on a CPU with less load. All of this is common, so comparing numbers from different debugging sessions doesn’t mean much. If you want accurate numbers you need to do a before-and-after that’s close together, which is how I got the 4.5s -> 2.7s.

I took some more profiles and compared the latest changes (v7-64d1524) with the original v7.

ng serve Bottom-Up, v7 left, v7-64d1524 right [screenshot]

The effect seemed as expected from https://github.com/angular/angular-cli/pull/12462 and https://github.com/angular/angular-cli/pull/12461: normalize calls took a lot less time and there were fewer file system calls in general.

Now most of the time was being spent in some kind of file system error. Drilling down showed it ultimately came from TypeScript looking up files:

[screenshot]

After some time debugging, we saw that it was TypeScript trying to resolve modules, which checks whether files with various names exist. We discussed this for a while and saw there were some changes to how files were cached.

We tried to add a cache to the TypeScript module name resolution, which further reduced rebuild times, but still didn’t bring them close to the original. @clydin discovered that we weren’t actually caching the TypeScript SourceFiles anymore, which caused full instead of incremental TypeScript rebuilds.
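For context, caching module name resolution with the TypeScript compiler API looks roughly like the sketch below. This is only an illustration of the idea, not the actual change from the PRs; compilerOptions and compilerHost are placeholders for objects the webpack plugin would already hold.

    import * as ts from 'typescript';

    // Placeholders for illustration only; in a real plugin these already exist.
    declare const compilerOptions: ts.CompilerOptions;
    declare const compilerHost: ts.CompilerHost;

    // Create the cache once and keep it across rebuilds, so module names don't
    // have to be re-resolved (and the file system re-probed) on every rebuild.
    const moduleResolutionCache = ts.createModuleResolutionCache(
      process.cwd(),
      fileName => fileName, // getCanonicalFileName; assumes a case-sensitive file system
      compilerOptions,
    );

    export function resolveModuleNames(moduleNames: string[], containingFile: string) {
      return moduleNames.map(
        name =>
          ts.resolveModuleName(name, containingFile, compilerOptions, compilerHost, moduleResolutionCache)
            .resolvedModule,
      );
    }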

He put up https://github.com/angular/angular-cli/pull/12483, which showed ~254ms (266, 281, 244, 246, 233) rebuilds, while v6 had ~270ms (318, 233, 248, 238, 313) rebuilds. So perhaps even slightly faster, but that might also just be noise in the data. It looks like it re-establishes parity.

I’d like to say that we are looking at better automated ways to detect these performance regressions as they are introduced. We have an internal benchmarking tool (https://github.com/angular/angular-cli/pull/12022) and a new --profile flag (https://github.com/angular/angular-cli/pull/11497). But unfortunately neither of these is very useful for rebuilds right now, which is why this is still a lot of manual work.

I hope someone finds this write-up useful!

I could reproduce with https://github.com/angular/angular/tree/master/aio.

Using @angular-devkit/build-angular@0.8.3 I saw non-AOT ng serve rebuilds of ~300ms.

With @angular-devkit/build-angular@0.9.0-rc.1 rebuilds were ~6000ms. Most of the time seemed to be spent at 0% progress.

@istiti I’m looking over the profile you sent me. Thanks for taking it and for being so careful with the timing; it looks like you got a very good isolated sample!

The overview looks like this: [screenshot]

It seems like the main process (middle) worked from ~16.5s to ~25s (~8.5s total), while the type checker process (bottom) ran from ~17s to ~22s (~5s total). There are also 3 things marked as function calls at 6s, 7s and 9s. I don’t know what they are, but they don’t show up as having any kind of detailed CPU load on the profile, so I am ignoring them.

This makes me think that whatever takes so long is not related to TypeScript, because type checking is the most costly part of the whole TypeScript compilation, yet the type checker process runs for only a fraction of the total time. If TypeScript were taking a very long time, the type checker process would take longer than the main process.

For contrast, let’s look at the AIO v6 profile from my other comment. You can see here that the type checker (bottom) took a lot longer than the main process (middle). [screenshot]

Since the type checker seems like it is fairly normal I won’t look much into it now.

Going back to the main process, this is what the bottom-up view looks like: [screenshot]

These are the functions where the most time is spent. A lot of the entries at the top seem to be native functions, so in order to know more we must drill down by clicking the arrow in front of the function name.

The first drilldown, for set length, shows this: [screenshot]

The seal phase in webpack is one of the final ones, when dependencies settle into their final chunks. I’ve seen other projects with a lot of lazy modules and know it can take a long time in the seal phase. Sometimes it also has performance issues because of that (like https://github.com/webpack/webpack/issues/6248). I know your project has a lot of lazy loaded chunks so this doesn’t look very weird to me.

Next up is the from call: [screenshot]

Looks like a fair bit of time is being spent in sorting chunks. I think this is just another facet of the previous point (lots of lazy chunks).

The next one is file open calls: [screenshot]

It doesn’t overly surprise me that 600ms are spent opening files in a big project but it does surprise me that copy-webpack-plugin is doing most of them. We use it for assets. I think we should look into this.

The fourth entry is a provideSync call: [screenshot]

It’s curious that a lot of this time is license-webpack-plugin. Maybe it’s not a very performant plugin but I don’t think it needs to be. Parsing licenses is the kind of thing that only matters for production builds.

The sixth one is normalizeString: [screenshot]

Here we see again that there is a lot of time spent in license-webpack-plugin.

Seventh is has: [screenshot]

This looks like more chunk processing by webpack.

Eighth is stat calls: [screenshot]

They seem to be split roughly half and half between copy-webpack-plugin and typescript.

Ninth is an anonymous call: [screenshot]

Again we can see license-webpack-plugin is doing a lot of work.

The tenth and last one I will look at is handler: [screenshot]

I think this is related to chunk processing overall by webpack.

So right now I have two things on my mind: why is extractLicenses enabled on your development builds, and is asset copy taking too long?

The answer to the first question is that we have a bug. extractLicenses defaults to true and its description reads:

Extract all licenses in a separate file, in the case of production builds only.

But I checked our source code and we don’t actually restrict it to production builds. I have a fix for that in https://github.com/angular/angular-cli/pull/12546. For now, you can add "extractLicenses": false to your build config and your rebuilds should get faster.

It’s harder to answer the second question, but we can test it. Can you try removing everything in the assets array of your config and tell me how that changes your rebuilds, please?
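For reference, a minimal sketch of what those two tweaks could look like in angular.json; this fragment assumes they go in the build options and omits all other existing options (emptying assets is only meant as a temporary test):

    "architect": {
      "build": {
        "options": {
          "extractLicenses": false,
          "assets": []
        }
      }
    }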

@Enngage can I ask you to take a profile as well please? Analysing @istiti’s one was very helpful so I wonder what I can get from yours.

@clydin: Also, can you provide the output of ng --version? Before and after? Thank you

Here you can check my package.json and ng --version when everything is okay (CLI v6.2.4), before updating the CLI to --next:

Angular CLI: 6.2.4
Node: 10.9.0
OS: win32 x64
Angular: 6.1.9
... common, compiler, compiler-cli, core, forms, http
... language-service, platform-browser, platform-browser-dynamic
... router

Package                           Version
-----------------------------------------------------------
@angular-devkit/architect         0.8.4
@angular-devkit/build-angular     0.8.4
@angular-devkit/build-optimizer   0.8.4
@angular-devkit/build-webpack     0.8.4
@angular-devkit/core              0.8.4
@angular-devkit/schematics        0.8.4
@angular/cdk                      6.4.7
@angular/cli                      6.2.4
@angular/material                 6.4.7
@ngtools/webpack                  6.2.4
@schematics/angular               0.8.4
@schematics/update                0.8.4
rxjs                              6.3.3
typescript                        2.9.2
webpack                           4.20.2

@filipesilva: Are the build times you showed with AOT?

Thanks for thanking me for trying CLI 7 RC 😄 No, it is JIT. PS: I have dreamed for a long time of being able to serve my project with AOT ^^ If you remember, AOT was enabled by default in CLI 1.5 and then disabled because of big projects: see my comment followed by yours, before your merge that disabled AOT again on large projects.

@filipesilva: Do you also have a lot of scss/sass files?

I don’t use scss/sass, only css; you can check my angular.json

@clydin: For updating an existing CLI project from 6 to 7, you can use ng update @angular/cli --next

Am I missing something, or is this normal behavior? [screenshot] Anyway, I updated the CLI manually: removed the CLI globally, then installed @next, then installed @angular-devkit/build-angular@0.9.0-rc.1.

@clydin: If you also have some time, can you also try using CLI 7.0.0-rc.1 (and @angular-devkit/build-angular@0.9.0-rc.1) but with the core Angular packages at 6.1.9?

I installed the CLI manually (removed the lock file, uninstalled the global/local CLI, removed node_modules, installed the CLI @next and @angular-devkit/build-angular@0.9.0-rc.1); now my package.json and ng --version look like this: [screenshot] So, with all this, I tried ng s --sourceMap=false again and I still get slow refreshes; check this GIF for a real example (consider that I edit a file adding a comment + SAVE, then removing the comment + SAVE): [GIF: cli7rc1slow] PS: my angular.json might be useful

Please excuse me if the structure of this response is not easy to read.

@kondi that’s another (different) regression, we are looking at it in https://github.com/angular/angular-cli/issues/13102.

@istiti @Enngage just wanted to say thank you again for taking the time to take the profiles and try to get to the bottom of this with us. It’s really hard to find some performance regressions as sometimes they don’t manifest in all projects.

@Enngage if you get around to making that new profile, can you open a new issue and indicate what types of builds it happens on, and also add the details about the scss please?

The rc.3 should contain the fix I mentioned about licenses not being extracted by default. I think that was the last bit to restore rebuilds to parity.

@istiti ok that’s still too slow then. Unsure what’s causing it now…

I think the only way to get some insight is to take a performance profile. Would you be willing to do that?

To get a profile of a rebuild you can do this:

  • npm install -g ndb
  • ndb node ./node_modules/@angular/cli/bin/ng serve, ignore the ndb window but don’t close it
  • wait for the initial build to finish
  • wait 10s, manually append console.log(1); to src/main.ts, wait for the rebuild to finish, wait 10s to let all processes stop activity
  • trigger 3 rebuilds like this
  • go to the ndb window, performance tab, click record button (circle) on top left, wait for recording to start
  • trigger another rebuild, waiting 15s before and after to be double sure we got only an isolated rebuild
  • in ndb stop the recording, then save it to disk using the download button (down arrow) to the right of the record button
  • kill the serve process, which kills ndb
  • upload the profile somewhere like google drive (it’s a big file), and link it here

Meanwhile I’ll also try to find another big project that I can try to run benchmarks on and see if I can see similar results.