How to test a DOM API for Servo
A couple weeks ago, Jeena wrote a post about How to implement a new DOM API for Servo, in which she presented the shiny new Doge API. In this post, the ✨Doge✨ epic continues. Now that we’ve written the Rust code for Servo’s Doge API, we should test that it works as expected.
When Jeena and I worked on Servo over the summer, most of the tests were already written for us as a standard set of web-platform-tests which are used by all of the major browser developers. Similarly, here we’re assuming that the Doge API tests have already been provided for us. If you’re interested in how to make test files, there is some helpful information on the Servo GitHub.
The Doge API has a constructor, an append() function, and a random() function. Therefore the corresponding JavaScript tests should 1) test the constructor, 2) test that the append method doesn’t throw an error, and 3) check that the random method returns a valid value. You can check out the tests here.
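Since the real Doge API only exists inside a Servo build, here is a hypothetical plain-JavaScript sketch of the behavior those three groups of tests check, using a stand-in Doge class (the class body is an illustration, not Servo's implementation):

```javascript
// Stand-in Doge class illustrating the behavior the wpt tests check.
// (Hypothetical: the real Doge is implemented in Rust inside Servo.)
class Doge {
  constructor(init) {
    // Accepts no parameter, undefined, an empty object, or a sequence.
    this.items = init === undefined ? [] : Array.from(init);
  }
  append(item) {
    this.items.push(String(item));
  }
  random() {
    if (this.items.length === 0) {
      throw new TypeError("Doge is empty");
    }
    return this.items[Math.floor(Math.random() * this.items.length)];
  }
}

// 1) The constructor works with and without arguments.
const empty = new Doge();
const fromObject = new Doge({});
const doge = new Doge(["wow", "such test"]);

// 2) append() does not throw.
doge.append("very pass");

// 3) random() returns one of the appended values...
console.log(doge.items.includes(doge.random())); // true

// ...and throws when the list is empty.
try {
  empty.random();
} catch (e) {
  console.log("threw as expected");
}
```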
Tests are expected to fail prior to implementation
Let’s run the tests!
Here’s what we get if we run the doge_basic.html tests without implementing the Doge API:
servo$ ./mach test-wpt tests/wpt/web-platform-tests/doge/doge_basic.html
Running 1 tests in web-platform-tests
Ran 1 tests finished in 12.0 seconds.
• 1 ran as expected. 0 tests skipped.
From the output above, it looks like all the tests ran “as expected”. This doesn’t necessarily mean the tests passed, however.
If we look in the corresponding doge_basic.html.ini file, we can see the test expectations:
servo$ more tests/wpt/metadata/doge/doge_basic.html.ini
[doge_basic.html]
  type: testharness
  [Create Doge from no parameter]
    expected: FAIL
  [Create Doge from undefined parameter]
    expected: FAIL
  [Create Doge from empty object]
    expected: FAIL
  [Create Doge with sequence]
    expected: FAIL
  [Check append method]
    expected: FAIL
  [Check append and random methods]
    expected: FAIL
  [Check that random method throws error when list is empty]
    expected: FAIL
Such fail!
As you can see, there is a FAIL expectation for each of the corresponding tests in doge_basic.html.
Marking known failures as expected lets Servo’s testing framework run a large number of tests without flooding the output with failures that no one is currently working on.
As features get added, the test expectations can be updated, as we’ll see below.
Updating test expectations after implementation
Here’s what we get if we run the tests after implementing the Doge API and re-building Servo:
servo$ ./mach test-wpt tests/wpt/web-platform-tests/doge/doge_basic.html --log-raw /tmp/servo.log
Running 1 tests in web-platform-tests
▶ Unexpected subtest result in /doge/doge_basic.html:
└ PASS [expected FAIL] Create Doge from no parameter
▶ Unexpected subtest result in /doge/doge_basic.html:
└ PASS [expected FAIL] Create Doge from undefined parameter
▶ Unexpected subtest result in /doge/doge_basic.html:
└ PASS [expected FAIL] Create Doge from empty object
▶ Unexpected subtest result in /doge/doge_basic.html:
└ PASS [expected FAIL] Create Doge with sequence
▶ Unexpected subtest result in /doge/doge_basic.html:
└ PASS [expected FAIL] Check append method
▶ Unexpected subtest result in /doge/doge_basic.html:
└ PASS [expected FAIL] Check append and random methods
▶ Unexpected subtest result in /doge/doge_basic.html:
└ PASS [expected FAIL] Check that random method throws error when list is empty
Ran 1 tests finished in 19.0 seconds.
• 0 ran as expected. 0 tests skipped.
• 1 tests had unexpected subtest results
Much success!
Thanks to the --log-raw option when we ran the Doge tests, the test results got stored in /tmp/servo.log. This can be used to update the expected test results with the following command:
servo$ ./mach update-wpt /tmp/servo.log
Now if you look for doge_basic.html.ini, you’ll see this:
servo$ more tests/wpt/metadata/doge/doge_basic.html.ini
tests/wpt/metadata/doge/doge_basic.html.ini: No such file or directory
Instead of replacing each FAIL with a PASS, the file was removed. If there are no explicit test expectations in the .ini file, the default expectation is PASS.
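If only some of the subtests had passed, update-wpt would not have deleted the file; it rewrites it to keep just the expectations that differ from the default. A hypothetical example, had one subtest still been failing:

```ini
[doge_basic.html]
  type: testharness
  [Check that random method throws error when list is empty]
    expected: FAIL
```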
Now when we run the tests, everything passes and there are no unexpected test results.
servo$ ./mach test-wpt tests/wpt/web-platform-tests/doge/doge_basic.html
Running 1 tests in web-platform-tests
Ran 1 tests finished in 17.0 seconds.
• 1 ran as expected. 0 tests skipped.
That wasn’t too bad! :)
Nit-picky things
Up until now we’ve just tested one file by specifying the full path to doge_basic.html. For things like pull requests, Servo’s continuous integration bot only runs the tests specified as skip: false in the file tests/wpt/include.ini:
...
[cors]
  skip: false
[cssom-view]
  skip: false
[dom]
  skip: false
[doge]
  skip: false
[domparsing]
  skip: false
...
So we add a [doge] entry with skip: false so that the folder isn’t skipped during automatic testing.
Another thing that will save both us and the reviewer time is to run ./mach test-tidy before each commit. It checks for minor things like trailing whitespace and Rust use statements that are not in alphabetical order.
Finally, because the Doge WebIDL interface is exposed to Window and Worker, we should list it as such in interfaces.html and interfaces.worker.js.
The end
That’s it for now! This workflow is pretty much the one that we (Jeena and I) followed over and over again.
We still feel like we have a long way to go before understanding all the intricacies of the Servo codebase, but we’ve been able to make substantial contributions just by knowing a few key commands. Once again, our journey would not have been possible without the help of all the friendly people on GitHub and the Servo and Rust IRC channels. So, please reach out to them for ideas and debugging help if you’re stuck.
Thanks a bunch to Josh Matthews, Jeena, and Nick for reviewing!