testcafe: Visual regression testing
@DevExpress/testcafe Let’s discuss the layout testing functionality.
I suggest the following approach. It may look over-engineered, but in my view it is quite flexible.
Screenshot provider
I suggest we make it possible to plug in any screenshot comparison library. We provide two options out of the box: per-pixel comparison and perceptual-hash comparison.
To avoid working with huge binary images, we can calculate a perceptual hash. In this case we don’t create a screenshot file and compare only the hash values. To build the diff images, the user should run the tests locally in per-pixel mode.
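The perceptual-hash idea above can be sketched as an average hash (aHash): downscale the screenshot, convert it to grayscale, and set one bit per pixel depending on whether it is brighter than the mean. The following is a minimal illustration over a flat grayscale array; the function names are illustrative, not a proposed API, and the downscaling/decoding steps are omitted:

```javascript
// Average hash (aHash) over an 8x8 grayscale image, given as a flat
// array of 64 brightness values (0-255). Returns a 16-character hex
// string: one bit per pixel, set when the pixel is brighter than the
// mean brightness of the whole image.
function averageHash(pixels) {
  const mean = pixels.reduce((sum, p) => sum + p, 0) / pixels.length;
  let hash = 0n;
  for (const p of pixels) {
    hash = (hash << 1n) | (p > mean ? 1n : 0n);
  }
  return hash.toString(16).padStart(16, '0');
}

// Hamming distance between two hashes: the number of differing bits.
// A small distance means the two screenshots are visually close.
function hammingDistance(hashA, hashB) {
  let diff = BigInt('0x' + hashA) ^ BigInt('0x' + hashB);
  let count = 0;
  while (diff > 0n) {
    count += Number(diff & 1n);
    diff >>= 1n;
  }
  return count;
}
```

A `threshold` option for such a provider would then mean "accept the layout if the Hamming distance is at most N bits"; `exactMatching` would mean a distance of 0.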
We can also provide the ability to take screenshots in black-and-white or high-contrast mode. I think this can be passed to the chosen library via layoutTestOptions.
So we create:
- Pixel perfect provider with options
{
mode: 'color' | 'black_and_white',
threshold: Number
}
- Perceptual hash provider with options
{
threshold: Number
// or
exactMatching: true | false
// this will depend on the implementation
}
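To make providers interchangeable, each one could expose the same small comparison interface. Below is a hypothetical sketch of the per-pixel provider operating on flat RGBA byte arrays; the `compare` signature and the `diffRatio` field are assumptions for illustration, not the proposed API:

```javascript
// Hypothetical provider interface: compare(etalon, actual, options)
// returns { match: boolean, diffRatio: number }. Both screenshots are
// flat RGBA byte arrays of equal length (4 bytes per pixel).
const pixelPerfectProvider = {
  name: 'pixel-perfect',

  compare(etalon, actual, { mode = 'color', threshold = 0 } = {}) {
    if (etalon.length !== actual.length)
      throw new Error('Screenshot dimensions differ');

    let mismatched = 0;
    for (let i = 0; i < etalon.length; i += 4) {
      let equal;
      if (mode === 'black_and_white') {
        // Compare average brightness only, ignoring hue.
        const grayA = (etalon[i] + etalon[i + 1] + etalon[i + 2]) / 3;
        const grayB = (actual[i] + actual[i + 1] + actual[i + 2]) / 3;
        equal = Math.abs(grayA - grayB) < 1;
      } else {
        equal = etalon[i] === actual[i] &&
                etalon[i + 1] === actual[i + 1] &&
                etalon[i + 2] === actual[i + 2];
      }
      if (!equal) mismatched++;
    }

    const diffRatio = mismatched / (etalon.length / 4);
    // threshold here is the tolerated share of mismatched pixels (0..1).
    return { match: diffRatio <= threshold, diffRatio };
  }
};
```

The perceptual-hash provider would implement the same `compare` shape, so the rest of the pipeline never needs to know which provider is active.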
Run options
For the run function, we add options for screenshot comparison:
{
etalonsPath: <relative_or_absolute_path>,
layoutTestProvider: <provider_name>,
layoutTestOptions: {...},
updateEtalons: true | false
}
Accordingly, we add similar options to the CLI:
--etalons-path, --layout-provider, --update-etalons.
Test API
We provide two different ways to test the layout.
- We look for the etalon automatically, and the user just calls the
t.checkLayout() method.
checkLayout() searches for the test’s screenshot according to the index of its call within the test body, starting with 1.
test('testName', async t => {
await t
.click('#el')
.checkLayout() //looking for .../<testName>/<workerName>/1.png
.type('.inputClass', 'value')
.checkLayout() //looking for .../<testName>/<workerName>/2.png
// ...
.click(...)
.checkLayout() //looking for .../<testName>/<workerName>/<N>.png
});
- We provide a
testController.Image constructor and an .equalLayout() assertion. This means the user decides on their own how to store artifacts and etalons; we just use the comparison logic from the provider.
E.g.:
test('testName', async t => {
await t.expect(new t.Image(<image_path>)).equalLayout();
});
We should also resize the browser window to the screenshot size.
Screenshots storage
Every provider implements its own mechanism for storing artifacts. For the per-pixel provider, we store screenshots similarly to the current screenshots directory. We should also create difference files in that directory.
Etalons are taken from the path specified in etalonsPath for the programmatic API or in the --etalons-path parameter for the CLI.
For hash comparison, we write key-value pairs to a .json file:
{
'<testName1><workerName1><etalonId1>': '3c3e0e1a3a1e1e2e',
...
'<testNameN><workerNameN><etalonIdN>': 'ac3e0e1a3a1e1e2F'
}
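Working with the key-value store above amounts to plain JSON manipulation. A sketch, assuming the key format shown in the example (reading and writing the file itself is omitted; the helper names are illustrative):

```javascript
// Keys concatenate test name, worker name and etalon id, following
// the example store above.
function hashKey(testName, workerName, etalonId) {
  return `${testName}${workerName}${etalonId}`;
}

// Records a hash in the in-memory store. Hashes are normalized to
// lowercase, since hex serialization may differ in case between
// libraries (the example store mixes cases).
function recordHash(store, testName, workerName, etalonId, hash) {
  store[hashKey(testName, workerName, etalonId)] = hash.toLowerCase();
  return store;
}

// Returns true only if an etalon hash exists and matches exactly.
function hashMatches(store, testName, workerName, etalonId, hash) {
  const stored = store[hashKey(testName, workerName, etalonId)];
  return stored !== undefined && stored === hash.toLowerCase();
}
```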
Updating etalons
As soon as a screenshot differs from its etalon, we run a web application with a GUI for managing etalons.
The service runs only if --update-etalons was passed to the CLI or updateEtalons was passed in the runOptions of the programmatic API.
In this case, we output only the path to the difference file in the report.
As an alternative, we could simply output the paths to the artifacts, etalons, and difference files, but that does not look convenient.
About this issue
- Original URL
- State: closed
- Created 7 years ago
- Reactions: 70
- Comments: 30 (9 by maintainers)
Any news for this issue? I would like to use Testcafe but I need visual regression…
waiting on this …
Hi @loggen
This feature has high priority for us. At present, we are preparing a spike for this feature. We will post any news in this issue. Track it to be notified of our progress.
Any news on this since May? I’m evaluating TestCafe for a new project, and layout testing is a crucial deciding factor on TestCafe vs a Selenium based stack. I’m guessing I’m not alone.
Following this… Let us know if this feature is on track and any possible ETA.
Any updates?
3rd-party solutions, JFYI: https://www.npmjs.com/package/devextreme-screenshot-comparer https://github.com/tacoss/testcafe-blink-diff
@vladnauto thank you for your suggestions and shared resources. I think the ability to ignore some regions or elements on layout testing would be useful, e.g. if they are constantly changing. We’ll consider the implementation of this functionality.
We’ve planned to fix #1357 in this release iteration. We will be able to provide you with an alpha version as soon as the fix is ready.
@mjhea0 We haven’t started working on this feature yet. Usually we work on a feature in our own fork and make a pull request from there. We don’t usually create new branches in the upstream repository.
Let’s not use the word etalon for base or baseline images. There is no such word as etalon; it’s even underlined by spell checkers.

The repo is private and is intended for internal use only. No support is provided for it.
We also want to implement visual regression in our project, and it would be great to be able to deal with dynamic content the way Applitools does (https://applitools.com/tutorials/selenium-javascript.html#part-5-use-advanced-ai-tools-to-work-with-real-world-scenarios), where you can ignore a region, or the way the webdriverio visual regression service does (https://webdriver.io/blog/2019/05/18/visual-regression-for-v5.html), where you can provide a list of selectors to hide before the test. I assume the latter can be done easily by setting display: none or visibility: hidden on each of those elements.
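The selector-hiding approach described in that comment can be reduced to generating one CSS rule and injecting it before the screenshot is taken. A minimal sketch of the rule generation (the helper name is illustrative; the injection itself would use TestCafe client-side code, which is omitted here):

```javascript
// Builds a CSS rule that hides the given selectors. visibility: hidden
// keeps each element's layout box, so hiding dynamic content does not
// reflow the page before the screenshot, unlike display: none.
function hideSelectorsRule(selectors) {
  if (!selectors.length) return '';
  return `${selectors.join(', ')} { visibility: hidden !important; }`;
}
```

Whether to use `visibility: hidden` or `display: none` is exactly the trade-off mentioned above: the former preserves layout, the latter removes the element from flow and may shift surrounding content.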
Hi all, I’m a bit curious whether a decision has been made between the “pixel perfect” and “perceptual hash” methods for implementing this feature. I’ve been looking at this interesting repository from Amex: https://github.com/americanexpress/jest-image-snapshot and it looks promising.