Which of the following statements about a test progress report produced for an automated test suite is TRUE?
As a TAE, you are evaluating a test automation tool to automate some UI tests for a web app. The automated tests will first locate the required HTML elements on the web page using their corresponding identifiers (locators), then perform actions on those elements, and finally verify that an HTML element contains the expected text. These tests are independent of each other and are organized into a test suite that must run every night against the most recent build of the web app. There is a high risk that the web app will crash while some automated tests are running. Based only on the given information, which of the following is your MOST important concern when evaluating the test automation tool?
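
To make the described workflow concrete, here is a minimal sketch of such a test, assuming Selenium WebDriver in Python; the URL, locators, and credentials are hypothetical.

    from selenium import webdriver
    from selenium.webdriver.common.by import By

    driver = webdriver.Chrome()
    try:
        driver.get("https://webapp.example.com/login")              # hypothetical URL
        driver.find_element(By.ID, "username").send_keys("alice")   # locate, then act
        driver.find_element(By.ID, "password").send_keys("secret")
        driver.find_element(By.ID, "submit").click()
        # Finally, check that an HTML element contains the expected text.
        assert "Welcome, alice" in driver.find_element(By.ID, "banner").text
    finally:
        driver.quit()   # releases the browser even if the web app crashes mid-test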
An automated test script makes a well-formed request to a REST API in the backend of a web app to add a single item of a product (with ID = 710) to the cart and expects a response confirming that the product was successfully added. The status line of the API response is HTTP/1.1 200 OK, while the response body indicates that the product is out of stock. The API response is correct; the test script fails but runs to completion, and the message to log is: "The product with ID = 710 is out of stock. Cart not updated." When this occurs, you already know that both the failed test and the API are behaving correctly and that the problem lies in the test data. The TAS supports the following test logging levels: FATAL, ERROR, WARN, INFO, DEBUG. Which of the following is the MOST appropriate logging level for the specified message?
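
A sketch of the script's check, assuming the Python requests and logging libraries; the endpoint and payload are hypothetical, and the level passed to logger.log is only a placeholder for the level the question asks you to choose (note that Python's logging uses WARNING/CRITICAL where the TAS lists WARN/FATAL).

    import logging
    import requests

    logger = logging.getLogger("cart_tests")

    resp = requests.post("https://shop.example.com/api/cart",      # hypothetical endpoint
                         json={"product_id": 710, "quantity": 1})
    assert resp.status_code == 200                                  # HTTP/1.1 200 OK
    if resp.json().get("status") == "out_of_stock":                 # correct API response
        # Placeholder level: choosing among FATAL/ERROR/WARN/INFO/DEBUG is the question.
        logger.log(logging.WARNING,
                   "The product with ID = 710 is out of stock. Cart not updated.")
        raise AssertionError("expected product 710 to be added to the cart")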
An automated test case that should always pass sometimes passes and sometimes fails intermittently (non-deterministic behavior) when executed in the same test environment, even though neither the SUT code nor the test automation code has changed. Which of the following statements about the root cause of this non-deterministic behavior is TRUE?
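
The self-contained sketch below (plain Python, hypothetical names) simulates one classic source of such flakiness: a fixed wait racing against variable back-end latency, so the same test passes or fails from run to run although no code has changed.

    import random
    import threading
    import time

    order_history = []

    def place_order(order_id):
        # Simulated asynchronous back end: the order becomes visible only after
        # a variable delay, standing in for network/queue/load latency.
        delay = random.uniform(0.5, 1.5)
        threading.Timer(delay, order_history.append, args=[order_id]).start()

    def test_order_appears_in_history():
        place_order("ORDER-42")
        time.sleep(1.0)                        # fixed wait: sometimes too short
        assert "ORDER-42" in order_history     # intermittently passes or fails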
A request to an API endpoint should return specific data about a payment transaction in JSON format. Your goal is to write test automation code, kept as short as possible, that determines whether the response includes certain properties (transaction_id, amount, status, timestamp) with the expected data types and formats. Assuming that the TAF provides all the necessary support to validate the specified API response, how would you BEST achieve your goal?
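
One concise way to achieve this, assuming the TAF can delegate to a JSON Schema validator such as Python's jsonschema package; the schema below encodes assumed types and formats for the four properties.

    from jsonschema import FormatChecker, validate

    PAYMENT_SCHEMA = {
        "type": "object",
        "required": ["transaction_id", "amount", "status", "timestamp"],
        "properties": {
            "transaction_id": {"type": "string"},
            "amount": {"type": "number"},
            "status": {"type": "string"},
            "timestamp": {"type": "string", "format": "date-time"},
        },
    }

    def test_payment_response_structure(response_json):
        # Raises ValidationError if a property is missing, mistyped, or malformed.
        # The FormatChecker makes "format" assertions enforced (they also need the
        # matching validator package, e.g. rfc3339-validator, to be installed).
        validate(instance=response_json, schema=PAYMENT_SCHEMA,
                 format_checker=FormatChecker())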
Which of the following recommendations can help improve the maintainability of test automation code?
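
As an illustration of one widely cited recommendation, the sketch below (hypothetical names, Selenium-style API) centralizes locators and actions in a page object, so a UI change is fixed in one place instead of in every test script.

    from selenium.webdriver.common.by import By

    class LoginPage:
        # Locators live in exactly one place.
        USERNAME = (By.ID, "username")
        PASSWORD = (By.ID, "password")
        SUBMIT = (By.ID, "submit")

        def __init__(self, driver):
            self.driver = driver

        def login(self, user, password):
            self.driver.find_element(*self.USERNAME).send_keys(user)
            self.driver.find_element(*self.PASSWORD).send_keys(password)
            self.driver.find_element(*self.SUBMIT).click()

    # Test scripts then call an intention-revealing method instead of raw locators:
    #     LoginPage(driver).login("alice", "secret")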
A SUT (SUT1) is a client-server system based on a thin client: the client is primarily a display and input interface, while the server provides almost all of the system's resources and functionality. Another SUT (SUT2) is a client-server system based on a fat client: the client relies little on the server and itself provides most of the system's resources and functionality. A given TAS is used to implement automated tests on both SUT1 and SUT2. The main objective of the TAS is to cover as many system functionalities as possible through automated tests executed as fast as possible. Which of the following statements about the automation solution is BEST in this scenario?
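
To illustrate the trade-off, an API-level test like the hedged sketch below (Python requests; hypothetical endpoint) can exercise server-side functionality far faster than driving the same feature through the GUI. This matters for SUT1, where almost all functionality lives on the server, whereas most of SUT2's functionality is only reachable through the client.

    import requests

    def test_search_via_api():
        # Exercises server-side search directly, bypassing the thin client's UI.
        resp = requests.get("https://sut1.example.com/api/search",
                            params={"q": "invoice"})
        assert resp.status_code == 200
        assert resp.json()["results"], "expected at least one search hit"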
A TAS is used to run, in a test environment, a suite of automated UI-level regression tests against different releases of a web app: all executions complete successfully and always provide correct results (i.e., producing neither false positives nor false negatives). The tests, all independent of each other, consist of executable test scripts based on the flow model pattern, which has been implemented in a three-layer TAF (test scripts, business logic, core libraries) by extending the page object model via the façade pattern. Currently the suite takes too long to run, and the test scripts are considered too long in terms of LOC (lines of code). Which of the following recommendations would you provide for improving the TAS (assuming it is possible to perform all of them)?
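
A sketch of the façade idea the scenario refers to: a business-logic-layer class exposes one high-level operation that orchestrates several page objects, so each test script shrinks to a few lines. All class and method names are hypothetical, and the page objects are stubbed for brevity.

    class CartPage:                      # stub standing in for a real page object
        def __init__(self, driver): self.driver = driver
        def add(self, product_id): ...

    class PaymentPage:                   # stub standing in for a real page object
        def __init__(self, driver): self.driver = driver
        def pay_with(self, card): ...
        def confirmation_number(self): ...

    class CheckoutFacade:
        # Business-logic layer: one call replaces a long sequence of page-level
        # steps that would otherwise be repeated in every test script.
        def __init__(self, driver):
            self.cart = CartPage(driver)
            self.payment = PaymentPage(driver)

        def buy_single_item(self, product_id, card):
            self.cart.add(product_id)
            self.payment.pay_with(card)
            return self.payment.confirmation_number()

    # A test script reduces to something like:
    #     order_no = CheckoutFacade(driver).buy_single_item(710, test_card)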
Which of the following practices can be used to specify the active (i.e., actually available) features for each release of the SUT and determine the corresponding automated tests that must be executed for a given release?
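
One way such a practice can be realized, sketched here under the assumption that pytest is the runner: each test is tagged with the feature it covers, and a per-release list of active features (e.g., read from the release's feature-toggle configuration) decides which tests execute. The marker scheme and feature names are hypothetical.

    import pytest

    ACTIVE_FEATURES = {"checkout", "wishlist"}   # e.g., loaded per release

    def pytest_collection_modifyitems(config, items):
        # Skip any test tagged with a feature that is not active in this release.
        # (Register the "feature" marker in pytest.ini to silence marker warnings.)
        for item in items:
            marker = item.get_closest_marker("feature")
            if marker and marker.args[0] not in ACTIVE_FEATURES:
                item.add_marker(pytest.mark.skip(reason="feature not in this release"))

    @pytest.mark.feature("checkout")
    def test_checkout_happy_path():
        ...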