Browser performance tests through Selenium
It is possible to do performance regression testing with Selenium. However, as you may have already noted, the core essence of Selenium is that it mimics user behavior: it will only perform an action (e.g. clicking a button) if a real user would be able to perform the same action. Add to that the workarounds (hard waits, various checks and custom code) often required just to get a Selenium script to run, and the "definition" of performance testing with Selenium ends up slightly different from traditional performance testing.
What you will want to do is start and stop a timer around each action Selenium performs, for example clicking a button, and log the measurement to a file for later use.
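A minimal sketch of such a timer; the helper name, log path and log format here are my own illustrative choices, not part of Selenium:

```python
import time

def timed_action(name, action, log_path="selenium_timings.log"):
    """Run a Selenium action and append its duration to a log file.

    `action` is any zero-argument callable, e.g. a lambda wrapping
    a WebDriver call. The tab-separated log format is arbitrary.
    """
    start = time.perf_counter()
    result = action()
    elapsed = time.perf_counter() - start
    with open(log_path, "a") as f:
        f.write(f"{name}\t{elapsed:.3f}\n")
    return result

# Usage (assuming `driver` is an already-configured WebDriver):
# timed_action("click_login",
#              lambda: driver.find_element("id", "login").click())
```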
Using Selenium you can create a performance baseline and, from then onwards, compare each consecutive result with the baseline. This gives you statistics that you can use for further analysis.
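One way to sketch that comparison, assuming a hypothetical baseline.json file mapping action names to seconds (the file layout and the 20% threshold are my own choices):

```python
import json

def compare_to_baseline(current, baseline_path="baseline.json", threshold=1.2):
    """Flag actions slower than `threshold` times their baseline timing.

    `current` maps action names to measured seconds; the baseline file
    is assumed to hold the same {"action_name": seconds} layout.
    Returns {name: (baseline_seconds, current_seconds)} for regressions.
    """
    with open(baseline_path) as f:
        baseline = json.load(f)
    regressions = {}
    for name, seconds in current.items():
        base = baseline.get(name)
        if base is not None and seconds > base * threshold:
            regressions[name] = (base, seconds)
    return regressions
```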
Neither Selenium nor WebDriver (Selenium 2.0) comes with this feature out of the box, so some custom coding is needed to make it work.
You can get closer to what browser-perf is doing by collecting the Chrome performance logs and analyzing them. To get them, turn on the performance log via the loggingPrefs desired capability:
import json

from selenium import webdriver
from selenium.webdriver.common.desired_capabilities import DesiredCapabilities

caps = DesiredCapabilities.CHROME
caps['loggingPrefs'] = {'performance': 'ALL'}

driver = webdriver.Chrome(desired_capabilities=caps)
driver.get('https://stackoverflow.com')

logs = [json.loads(entry['message'])['message']
        for entry in driver.get_log('performance')]

with open('devtools.json', 'w') as f:
    json.dump(logs, f)

driver.close()
At this point, the devtools.json file will contain a bunch of trace records:
[
    {
        "params": {
            "timestamp": 1419571233.19293,
            "frameId": "16639.1",
            "requestId": "16639.1",
            "loaderId": "16639.2",
            "type": "Document",
            "response": {
                "mimeType": "text/plain",
                "status": 200,
                "fromServiceWorker": false,
                "encodedDataLength": -1,
                "headers": {
                    "Access-Control-Allow-Origin": "*",
                    "Content-Type": "text/plain;charset=US-ASCII"
                },
                "url": "data:,",
                "statusText": "OK",
                "connectionId": 0,
                "connectionReused": false,
                "fromDiskCache": false
            }
        },
        "method": "Network.responseReceived"
    },
    {
        "params": {
            "timestamp": 1419571233.19294,
            "encodedDataLength": 0,
            "requestId": "16639.1"
        },
        "method": "Network.loadingFinished"
    },
    ..
]
Now, the question is what to do with it.
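As a starting point, you can pair the Network.responseReceived and Network.loadingFinished events by requestId and compute a rough per-request load time. A sketch against the record shape shown above (the function name is my own):

```python
def summarize_network_events(records):
    """Pair Network.responseReceived / Network.loadingFinished events
    by requestId and compute a rough load duration per request.

    `records` is the list of devtools messages collected above, i.e.
    dicts with "method" and "params" keys.
    """
    received, finished = {}, {}
    for msg in records:
        params = msg.get("params", {})
        rid = params.get("requestId")
        if msg.get("method") == "Network.responseReceived":
            received[rid] = params
        elif msg.get("method") == "Network.loadingFinished":
            finished[rid] = params
    summary = []
    for rid, resp in received.items():
        end = finished.get(rid)
        if end is not None:
            summary.append({
                "url": resp["response"]["url"],
                "seconds": end["timestamp"] - resp["timestamp"],
            })
    return summary
```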
One option, initially suggested during the Google Test Automation Conference, is to submit the logs to webpagetest.org. There is an example in Java available here but, at the moment, I had no luck implementing it in Python.
In theory, webpagetest.org would generate a UI report from the submitted logs. They also provide the metrics in JSON/XML and other formats that can be further analyzed.
This is really something; thanks to Vivek Singh for pointing this out in the comments.
browser-perf also uses this logging functionality to pick up the tracing logs and analyze the data.