AI Speeds UI Testing in New Parasoft Solution
- By Richard Seeley
Development teams get an AI assist with Parasoft Selenic, a user interface (UI) testing solution based on the open source Selenium automated testing framework.
The new product Parasoft announced Tuesday is designed to help agile development teams break through QA bottlenecks caused by repeatedly testing constantly evolving user interfaces. AI can replace time-consuming chores such as manually rewriting test scripts to account for changes as trivial as the placement of a start button.
With rapidly changing markets, UI updates require development teams to frequently put their application through QA. But previously used test scripts may not be up to handling the changes. That is where AI comes into the process.
"You can imagine that as you're going from sprint to sprint the amount of functionality that you need to test is not going down, it's increasing," Mark Lambert, vice president of products - automated software testing, told Pure AI. "Once something is working, you want the automation to take care of itself and you want it to take care of itself with minimum manual incidents. So, we focused on addressing the maintenance issues of the test in an automated way assisting the developers and testers through our AI implementation to streamline that process, taking in essence what would normally take you hours to do to just a handful of minutes."
Parasoft's AI approach to testing builds a historical analysis of previous versions of the UI and the tests used in QA.
"Then when the application changes, we go back in time and use that history to help us predict what the intended behavior of the application is," Lambert explained. "The intent here is to remove the bottlenecks that appear when you've got rapidly changing UIs, so you can progress forward but not to blindly fix the code."
The developer using Parasoft Selenic is still in the loop. "The AI applies a Band-Aid to the process so you can keep moving forward as you do the functional validation," Lambert said. "Then what it will do is report back to the human about the Band-Aids that it applied. The human can make the decision as to whether or not it's the right Band-Aid." At that point the developer can apply the fix to the test or steer the AI component to find a better option.
"It very much a human in the loop," the Parasoft executive said. "It's not blind."
AI is especially useful in cases where a change as trivial as moving a button to a different part of a page flummoxes the previous test scripts.
"I know that sounds relatively simple and it is when you're talking about a human," Lambert explained. "When a human looks at a webpage and the submit button moved and now becomes an OK button, it seems very easy for a human to make the determination. But your automation script, if it doesn't have any analytics built into it, it doesn't know. All it knows is the thing it was looking for is not where it was before."
This issue, known as "identifying element locators," was among the top UI testing problems cited by the developers Parasoft surveyed.
"The trouble is that different locators use different use cases," Lambert explained. "Often it's very hard to get the right locator to balance stability with accuracy."
He offered an example of where AI can make a difference.
"What we're able to do is to create smart locators where we can tell the locators need to be changed and then we can tell you how to change it. Then by coupling in the self-healing, we apply the change at runtime and say, here are the 10 different locators that would have found the element this time and we tried the first one and it worked. Here's what we actually ran in production in your safe environment. This looks correct. Here's what we recommend. This is the thing you should change your test to. Then through our IDEs we're able to give you a very quick fix workflow where you import the results from the CI process and with the click of a button refactor the code so it uses the recommended locator strategy."
In the spirit of open source, the company says its product integrates "seamlessly with Selenium," so development teams can use it with the UI tests they already have without moving to a proprietary platform.
Parasoft Selenic was developed based on testing needs that emerged from surveys of developers conducted by the Monrovia, CA-based software testing company, Lambert said. The surveys found that 64 percent of development teams are using Selenium as compared to 15 percent using commercial tools, and 14 percent doing manual testing. Parasoft refers to open source Selenium as "the de facto standard."
More information about the new testing product is available from Parasoft.