5 Great Ways To Use AI In Your Test Automation
Before looking at test automation examples affected by machine learning, you need to define what machine learning is. ML is a pattern-recognition technology: it uses patterns its algorithms have identified in existing data to predict future trends.
ML can consume tons of complex information, find predictive patterns, and alert you when something deviates from those patterns. AI is about to change testing in many ways. Here are five test automation scenarios that leverage AI, and how to use each of them successfully in your testing.
1. Do automated visual validation testing of your UI
What kind of patterns can ML recognise? One becoming more popular is image-based testing using automated visual validation tools. “Visual testing is a quality assurance activity that is meant to verify that the UI appears correctly to users,” explained Adam Carmi, co-founder and CTO of Applitools, a dev-tools vendor. Many people confuse that with traditional, functional testing tools, which were designed to help you test the functionality of your application through the UI.
With visual testing, “we want to make sure that the UI itself looks right to the user and that each UI element appears in the right colour, shape, position, and size,” Carmi said. “We also want to ensure that it doesn’t hide or overlap any other UI elements.”
He added that many of these types of tests are difficult to automate and are still performed manually. This makes them a perfect fit for AI testing.
By using ML-based visual validation tools, you can find differences that human testers would most likely miss.
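To make the idea concrete, here is a minimal sketch of the pixel-comparison step at the heart of visual validation. It is not how Applitools or any specific vendor works internally; real tools use ML to ignore insignificant rendering noise, whereas this naive version flags any pixel that differs beyond a simple threshold. The grayscale "screenshots" are hypothetical 2-D lists.

```python
# Naive visual diff: find pixels where two equally sized grayscale
# "screenshots" (2-D lists of 0-255 ints) differ beyond a threshold.
def diff_regions(baseline, current, threshold=10):
    diffs = []
    for r, (brow, crow) in enumerate(zip(baseline, current)):
        for c, (b, cur) in enumerate(zip(brow, crow)):
            if abs(b - cur) > threshold:
                diffs.append((r, c))
    return diffs

baseline = [[200, 200, 200],
            [200,  50, 200],
            [200, 200, 200]]
# Hypothetical UI change: the centre element's colour shifted.
current  = [[200, 200, 200],
            [200, 180, 200],
            [200, 200, 200]]

print(diff_regions(baseline, current))  # flags the changed pixel: [(1, 1)]
```

The value of ML on top of this basic mechanism is deciding which of the flagged differences a human would actually consider a defect.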
2. Testing APIs
Another ML change that affects how you do automation is the absence of a user interface to automate. Much of today’s testing is back-end-related, not front-end-focused.
In her TestTalks interview, “The Reality of Testing in an Artificial World,” Angie Jones, an automation engineer at Twitter, mentioned that much of her recent work has relied heavily on API test automation to help her ML testing efforts.
Jones explained that in test automation, she focused on machine learning algorithms. “And so the programming I had to do was a lot different. … I had to do a lot of analytics within my test scripts, and I had to do a lot of API calls.”
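The kind of test Jones describes differs from a classic functional check: rather than asserting one exact value, it calls an API and runs analytics on the response. The sketch below illustrates that pattern under stated assumptions; `fake_predict` is a hypothetical stub standing in for a real HTTP call to a prediction endpoint (which in practice might use a library such as `requests`).

```python
import statistics

def fake_predict(inputs):
    # Stand-in for a real API call to a hypothetical ML endpoint,
    # e.g. requests.post("https://api.example.com/predict", json=...)
    return [0.91, 0.88, 0.94, 0.90]

def test_prediction_confidence():
    scores = fake_predict(["item-1", "item-2", "item-3", "item-4"])
    # Analytics inside the test script: assert on the distribution of
    # results, not on exact values.
    assert statistics.mean(scores) > 0.85, "mean confidence too low"
    assert statistics.pstdev(scores) < 0.1, "confidence too unstable"

test_prediction_confidence()
print("prediction API checks passed")
```

Asserting on statistical properties keeps the test meaningful even when the model's exact outputs drift between versions.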
3. Running More Automated Tests That Matter
How many times have you run your entire test suite due to a change in your application that you couldn’t trace?
If you’re doing continuous integration and testing, you’re probably already generating a wealth of data from your test runs. But who has time to go through it all to search for common patterns over time?
Wouldn’t it be great if you could answer the question, “If I’ve made a change in this piece of code, what’s the minimum number of tests I should be able to run to figure out whether or not this change is good or bad?”
Many companies are using AI tools that do just that. Using ML, these tools can pinpoint the smallest set of tests needed to exercise a piece of changed code. The tools can also analyse your current test coverage and flag areas with little coverage or at risk.
Geoff Meyer, a test engineer, will discuss this in his upcoming session at the AI Summit Guild. He will describe how his team was caught in the test automation trap: before the next testable build was released, they could not complete the test-failure triage from the preceding automated test run.
They needed insight into the pile of failures to determine which were new and which were duplicates. Their solution was to implement an ML algorithm that established a “fingerprint” of test case failures by correlating them with system and debug logs. The algorithm could predict which failures were duplicates.
Once armed with this information, the team could focus its efforts on new test failures and come back to the others as time permitted or not at all. “This is a really good example of a smart assistant enabling precision testing,” Meyer said.
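The fingerprinting idea can be sketched in a few lines. This is not Meyer's team's actual algorithm, which correlated failures with system and debug logs using ML; the sketch below just shows the core mechanism of normalising away volatile details (here, timestamps) and hashing what remains, so duplicate failures collapse into one bucket.

```python
import hashlib
import re

def fingerprint(log_text):
    # Strip volatile details such as timestamps before hashing, so two
    # occurrences of the same failure produce the same fingerprint.
    normalised = re.sub(r"\d{2}:\d{2}:\d{2}", "<time>", log_text)
    return hashlib.sha256(normalised.encode()).hexdigest()[:12]

failures = [
    "12:01:05 ERROR NullPointer in CartService",
    "14:22:31 ERROR NullPointer in CartService",  # duplicate, new time
    "09:10:00 ERROR Timeout in PaymentGateway",   # genuinely new
]

buckets = {}
for log in failures:
    buckets.setdefault(fingerprint(log), []).append(log)

print(len(buckets))  # 2 distinct failures out of 3 raw ones
```

Triage effort then goes to the two distinct buckets instead of all three raw failures.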
4. Spidering AI
The most popular AI automation area is using machine learning to automatically write tests for your application by spidering it. For example, with some of the newer AI/ML tools, you simply point them at your web app and they begin automatically crawling the application.
As the tool crawls, it also collects data having to do with features by taking screenshots, downloading the HTML of every page, measuring load times, and so forth. And it continues to run the same steps again and again.
So, over time, it’s building up a dataset and training your ML models for your application’s expected patterns.
When the tool runs, it compares its current state to all the known patterns it has already learned. If there is a visual deviation, or a page is running slower than average, the tool will flag that as a potential issue.
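One slice of that comparison can be sketched simply. The load-time history below is hypothetical data a crawler might have accumulated; real tools learn far richer baselines from screenshots and DOM snapshots, but the "compare current state against a learned baseline and flag outliers" step is the same.

```python
import statistics

# Hypothetical load times (seconds) recorded over previous crawls.
HISTORY = {
    "/home":     [0.80, 0.90, 0.85],
    "/checkout": [1.10, 1.00, 1.05],
}

def slow_pages(current, margin=1.5):
    """Flag pages whose current load time exceeds the historic mean
    by the given margin."""
    flagged = []
    for page, now in current.items():
        baseline = statistics.mean(HISTORY[page])
        if now > baseline * margin:
            flagged.append(page)
    return flagged

print(slow_pages({"/home": 0.9, "/checkout": 2.4}))  # only /checkout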
Some of these differences might be valid. For example, say there was a valid new UI change. In that case, a human with domain knowledge of the application still needs to go in and validate whether or not the issue(s) flagged by the ML algorithms are bugs.
Although this approach is still in its infancy, Oren Rubin, CEO and founder at machine learning tool vendor Testim, says he believes that “the future holds a great opportunity to use this method to also automatically author tests or part of a test. The value I see in that is not just about the reduction of time you spend on authoring the test; I think it’s going to help you lot in understanding which parts of your application should be tested.”
ML does the heavy lifting, but a human tester does the verification.
5. Creating more reliable automated tests
How often do your tests fail due to developers making changes to your application, such as renaming a field ID? It happens to me all the time.
But tools can use machine learning to adjust automatically to these changes. This makes tests more maintainable and reliable.
For example, current AI/ML testing tools can start learning about your application, understanding relationships between the parts of the document object model, and learning about changes throughout time.
Once such a tool starts learning and observing how the application changes, it can make decisions automatically at runtime about what locators it should use to identify an element – all without you having to do anything.
If your application keeps changing, it’s no longer a problem because, with ML, the script can automatically adjust itself.
This was one of the main reasons Dan Belcher and his team developed an ML testing algorithm. In my recent interview with him, he said, “Although Selenium is the most broadly used framework, the challenge is that it’s pretty rigidly tied to the specific elements on the front end.
Bevause of this, script flakiness can often arise when you make what seems like a pretty innocent change to a UI,” he explained. “Unfortunately, in most cases these changes cause the test to fail due to it being unable to find the elements it needs to interact with. So one of the things that we did at the very beginning of creating Mabl was to develop a much smarter way of referring to front-end elements in our test automation so that those types of changes don’t actually break your tests.”
Become a domain model expert
Being able to train an ML algorithm requires that you come up with a testing model. This activity needs someone with domain knowledge; many automation engineers are getting involved with creating models to help with this development endeavour.
As you have seen, machine learning is not magic. AI is already here. Are you worried? Probably. Are you out of a job? Probably not. So stop worrying and do what you do best: Keep automating.
Get articles like
this via email
- Join 2,800 others
- Never miss an insight