Enhancing Software Quality Assurance: A Dual Approach to Automated and Human Testing for Web Applications
Abstract
This research project compares automated testing with Cypress against manual human testing. Three versions of a simple login page were created as the application under test, along with a quality assurance checklist based on the quality attributes described in ISO/IEC 9126. Each version contained a random set of faults violating the quality attribute requirements that the checklist was meant to verify. After the website was created, tests were written in Cypress using the checklist. The developers who created the website and those who wrote the tests communicated only minimally, to ensure that the tests were not influenced by the implementation. Four test subjects then used the checklist to identify issues with the login page. Automated testing proved more reliable than human testing: a computer does not get tired or form opinions about what is worth testing. The results showed that the automated tests were more accurate and consistent. However, the human test subjects produced feedback that the research team had never even considered, an ability that automated tests written in Cypress may lack. The results therefore suggest combining automated tests with human user testing to get the best of both worlds with the current tooling available. Evaluating whether the detailed feedback the human test subjects gave is a uniquely human trait was outside the scope of this paper. The team was surprised by how well automated testing could find usability faults, and frustrated by how hard it was to write automated usability tests.