csla: Business layer code (using LocalProxy) is ~4x slower after preview 8

We have a huge automated test suite in the form of “Integration Tests” that runs against an instance of LocalDB (SQL Server) and covers around 80% of our business layer.

Preview 7 - tests take around 6 minutes to run

Preview 8 - tests take around 20 minutes to run

Given that our code didn’t change, something fairly basic and significant must have changed in the framework with respect to performance. Got any ideas what the culprits might be? In this latest preview we added a loop in LocalProxy.SetApplicationContext, which might be a suspect simply because it’s more processing. Not sure what else might be an issue. Off the top of my head, I’m wondering if we could add some parallelism there to improve performance. Thinking out loud.
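To put numbers on that hunch, here is a minimal sketch (not our actual code; `ProbeItem`, the criteria value, and the iteration count are all made up) that times repeated fetches through the local data portal with an empty data access method, so the measurement isolates framework plumbing (scope creation, context setup) from database time:

```csharp
using System;
using System.Diagnostics;
using System.Threading.Tasks;
using Csla;
using Csla.Configuration;
using Microsoft.Extensions.DependencyInjection;

// Hypothetical read-only root used only to measure data portal overhead.
[Serializable]
public class ProbeItem : ReadOnlyBase<ProbeItem>
{
    public static readonly PropertyInfo<int> IdProperty = RegisterProperty<int>(nameof(Id));
    public int Id => GetProperty(IdProperty);

    [Fetch]
    private void Fetch(int id)
    {
        // no database work here, so the loop below times only the framework plumbing
        LoadProperty(IdProperty, id);
    }
}

public static class PortalOverheadProbe
{
    public static async Task Main()
    {
        var services = new ServiceCollection();
        services.AddCsla(); // the local data portal (LocalProxy) is the default
        using var provider = services.BuildServiceProvider();

        var portal = provider.GetRequiredService<IDataPortal<ProbeItem>>();

        const int iterations = 10_000;
        var sw = Stopwatch.StartNew();
        for (var i = 0; i < iterations; i++)
            await portal.FetchAsync(42); // hypothetical criteria value
        sw.Stop();

        Console.WriteLine($"avg per call: {sw.Elapsed.TotalMilliseconds / iterations:F3} ms");
    }
}
```

Running something like this against preview 7 and preview 8 builds should show whether the per-call framework overhead itself grew, or whether the regression is elsewhere.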

Not necessarily saying it’s a problem, since automated tests don’t really mimic real live users, who aren’t clicking nearly that fast 😃 But it’s noteworthy at least.

About this issue

  • Original URL
  • State: closed
  • Created 2 years ago
  • Comments: 15 (14 by maintainers)

Commits related to this issue

Most upvoted comments

Finally got time to sit down and run tests to gather data for ya this morning. Hopefully I’ll have it for ya within the hour.

My co-worker suggested another way of looking at this: focus on the sheer number of calls (over 1M) to the LocalProxy CRUD methods. We do have a lot of automated tests, but not THAT many 😃 I think the count is so high because Csla says not to create an object directly through a CTOR (enforced by the analyzers) when creating items for a list. So if one of my tests fetches a ReadOnlyList of whatever, the Fetch method gets called once for each child item being added. Of course this ensures all the plumbing is done correctly, and the individual time for each of those calls is sub-millisecond… but that does add up.
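For anyone following along, here is a minimal sketch of the pattern being described, assuming the usual CSLA read-only list approach (the type names and the fake data source are invented): the list’s fetch routes every child through the child data portal rather than a constructor, so one list fetch turns into N child portal calls.

```csharp
using System;
using System.Collections.Generic;
using Csla;

[Serializable]
public class OrderInfo : ReadOnlyBase<OrderInfo>
{
    public static readonly PropertyInfo<int> IdProperty = RegisterProperty<int>(nameof(Id));
    public int Id => GetProperty(IdProperty);

    [FetchChild]
    private void FetchChild(int id)
    {
        // every child item pays the portal plumbing cost here,
        // even though the per-item work is sub-millisecond
        LoadProperty(IdProperty, id);
    }
}

[Serializable]
public class OrderList : ReadOnlyListBase<OrderList, OrderInfo>
{
    [Fetch]
    private void Fetch([Inject] IChildDataPortal<OrderInfo> childPortal)
    {
        IsReadOnly = false;
        // one FetchChild call per row: a 1,000-row list means 1,000 portal calls
        foreach (var id in FakeOrderDb.GetOrderIds())
            Add(childPortal.FetchChild(id));
        IsReadOnly = true;
    }
}

// stand-in for the real data access layer
internal static class FakeOrderDb
{
    public static IEnumerable<int> GetOrderIds()
    {
        for (var i = 1; i <= 1_000; i++)
            yield return i;
    }
}
```

Multiply that per-item cost by every list fetched across a large test suite and the 1M+ call count stops looking surprising.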

And…

This is one of the things that makes CSLA unique with its support for mobile objects, and that architectural choice has its consequences (good and bad).

Exactly. The “Class-in-charge” approach is a big reason why I love Csla so much. But we realize that, like everything, it has its costs; all architectures have their costs. I’d say proudly that Csla has managed to stay modern and relevant while doing what it was fundamentally designed to do since ’97, which is several lifetimes in technology. It looks like Csla 6 secures another 5 to 10 years if history is any guide. It was the hardest Csla upgrade yet, but worth it.

I plan to spend some time today/tomorrow trying to find a better solution, because I don’t want to accept this perf hit. I’d prefer not to add back the option to skip creating a new scope, but if I can’t figure out a solution, I agree that’s a workaround.
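For context on why that scope decision matters, here is a small stand-alone sketch (plain Microsoft.Extensions.DependencyInjection, not CSLA code) comparing the raw cost of creating a new DI scope per simulated call against reusing a single scope across a million calls, roughly matching the call volume mentioned above:

```csharp
using System;
using System.Diagnostics;
using Microsoft.Extensions.DependencyInjection;

// trivial scoped service so each resolution exercises the scope machinery
internal sealed class ScopedDependency { }

public static class ScopeCostProbe
{
    public static void Main()
    {
        var services = new ServiceCollection();
        services.AddScoped<ScopedDependency>();
        using var provider = services.BuildServiceProvider();

        const int calls = 1_000_000;

        // new scope per simulated data portal call
        var sw = Stopwatch.StartNew();
        for (var i = 0; i < calls; i++)
        {
            using var scope = provider.CreateScope();
            _ = scope.ServiceProvider.GetRequiredService<ScopedDependency>();
        }
        Console.WriteLine($"new scope per call : {sw.ElapsedMilliseconds} ms");

        // single reused scope for all calls
        sw.Restart();
        using (var scope = provider.CreateScope())
        {
            for (var i = 0; i < calls; i++)
                _ = scope.ServiceProvider.GetRequiredService<ScopedDependency>();
        }
        Console.WriteLine($"reused scope       : {sw.ElapsedMilliseconds} ms");
    }
}
```

The absolute numbers will vary by machine, but the delta between the two loops gives a feel for how much of the regression could plausibly be explained by per-call scope creation alone.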

I muse that this kind of thing is likely why some architects recommend using only POCOs. But I’d argue that just shifts the effort around, and you still have to pay the piper somewhere! Their arguments are definitely valid with respect to complexity and higher maintenance/education costs, though.

No doubt! This is one of the things that makes CSLA unique with its support for mobile objects, and that architectural choice has its consequences (good and bad).