This project is a Java-based agent that leverages Generative AI models and Retrieval-Augmented Generation (RAG) to execute automated test cases at the graphical user interface (GUI) level. It understands explicit natural language test case instructions (both actions and verifications), performs corresponding actions using the mouse and keyboard, locates the required UI elements on the screen (if needed), and verifies whether actual results correspond to the expected ones using computer vision capabilities.
Here is the corresponding article on Medium: AI Agent That’s Rethinking UI Test Automation.
Key capabilities:

- **Configuration:** managed via `config.properties` and `AgentConfig.java`, allowing specification of providers, model names (`instruction.model.name`, `vision.model.name`), API keys/tokens, endpoints, and generation parameters (temperature, topP, max output tokens, retries).
- **Vector database:** currently Chroma (`AgentConfig.getVectorDbProvider` -> `chroma`), configured via `vector.db.url` in `config.properties`. It stores `UiElement` records, which include a name, a self-description, a description of surrounding elements (anchors), and a screenshot (`UiElement.Screenshot`).
- **RAG retrieval:** retrieves the top N (`retriever.top.n` in config) most relevant UI elements based on semantic similarity between the query (derived from the test step action) and the stored element names. Minimum similarity scores (`element.retrieval.min.target.score`, `element.retrieval.min.general.score` in config) filter the results for target element identification and potential refinement suggestions.
- **Visual matching:** uses OpenCV (`org.bytedeco.opencv`) for visual template matching to locate UI elements on the screen, based on the element screenshot captured during the first attended test case execution.
- **Vision model:** uses a vision-capable model (`ModelFactory.getVisionModel`) to disambiguate when multiple visual matches are found, or to confirm that a single visual match, if found, corresponds to the target element’s description and surrounding element information.
- **Mouse and keyboard control:** performed via Java’s `Robot` class, as sketched below.
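To make the last point concrete, here is a minimal, self-contained sketch of driving the mouse and keyboard with `java.awt.Robot`. The coordinates and the wrapper class are illustrative only, not the project’s actual API:

```java
import java.awt.AWTException;
import java.awt.Robot;
import java.awt.event.InputEvent;
import java.awt.event.KeyEvent;

public class RobotSketch {
    public static void main(String[] args) throws AWTException {
        Robot robot = new Robot();
        robot.setAutoDelay(50); // small pause between events so the UI can keep up

        // Click at the coordinates resolved during element location
        robot.mouseMove(400, 300);
        robot.mousePress(InputEvent.BUTTON1_DOWN_MASK);
        robot.mouseRelease(InputEvent.BUTTON1_DOWN_MASK);

        // Type a single character as an example of keyboard input
        robot.keyPress(KeyEvent.VK_A);
        robot.keyRelease(KeyEvent.VK_A);
    }
}
```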
The agent operates in one of two modes, controlled by the `unattended.mode` flag in `config.properties`:

- **Attended mode** (`unattended.mode=false`): designed for initial test case runs, or for debugging/fixing when execution in unattended mode fails. In this mode the agent behaves as a trainee who needs assistance from a human tutor/mentor in order to gather all the information required for later unattended (unsupervised) execution of the test case.
- **Unattended mode** (`unattended.mode=true`): the agent executes the test case without any human assistance. It relies entirely on the information stored in the RAG database and on the AI models’ ability to interpret instructions and locate elements based on stored data. Errors during element location or verification cause the execution to fail. This mode is suitable for integration into CI/CD pipelines.
The agent can also run as a web server exposing a `/testcase` endpoint (port configured via `port` in `config.properties`). The request body should contain the test case in JSON format. The server accepts only one test case execution at a time (the agent has been designed as a static utility for simplicity). Upon receiving a valid request when idle, it returns `200 OK` and starts the test case execution. If busy, it returns `429 Too Many Requests`.
The test execution process, orchestrated by the `Agent` class, follows these steps:
1. The test case JSON is parsed into a list of `TestStep`s. Each `TestStep` includes a `stepDescription` (natural language instruction), optional `testData` (inputs for the step), and `expectedResults` (natural language description of the expected state after the step). A minimal example of this format is shown after this list.
2. The agent processes each `TestStep` sequentially.
3. For the action part, the instruction model receives the step description together with its `testData`. The model analyzes the action and determines which tool(s) to call and with what arguments (including extracting the description of the target UI element if needed). The response is expected to contain a selected tool.
4. Failed actions are retried until a deadline (`test.step.execution.retry.timeout.millis`). If the error persists after the deadline, the test case execution is interrupted.
5. Before each verification, a delay (`action.verification.delay.millis`) is introduced to allow the UI state to change after the preceding action.
6. Failed verifications are retried at an interval (`test.step.execution.retry.interval.millis`) until a timeout (`verification.retry.timeout.millis`) is reached. If the verification still fails after the deadline, the test case execution is interrupted.
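For illustration, a test case might look like the following. The step field names (`stepDescription`, `testData`, `expectedResults`) come from the description above; the surrounding structure (the `name` and `steps` wrappers) is an assumption and may differ from the actual schema:

```json
{
  "name": "Create a new note",
  "steps": [
    {
      "stepDescription": "Click the 'New Note' button",
      "expectedResults": "An empty note editor is displayed"
    },
    {
      "stepDescription": "Type the note title into the title field",
      "testData": ["Shopping list"],
      "expectedResults": "The title field contains 'Shopping list'"
    }
  ]
}
```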
The `ElementLocator` class is responsible for finding the coordinates of a target UI element based on the natural language description provided by the instruction model during an action step. This involves a combination of RAG, computer vision, vision-model analysis, and potentially user interaction (when run in attended mode):
- **Retrieval:** the top N (`retriever.top.n`) most semantically similar `UiElement` records are retrieved based on their stored names, using embeddings generated by the `all-MiniLM-L6-v2` model. Results are filtered by the configured minimum similarity scores (`element.retrieval.min.target.score` for high confidence, `element.retrieval.min.general.score` for potential matches).
- **If one or more elements meet `MIN_TARGET_RETRIEVAL_SCORE`:** their stored screenshots are used for visual template matching on the current screen (threshold `element.locator.visual.similarity.threshold`), with the vision model confirming or disambiguating the resulting match(es). A sketch of this matching step follows this list.
- **If no element meets `MIN_TARGET_RETRIEVAL_SCORE`, but some meet `MIN_GENERAL_RETRIEVAL_SCORE`:** in attended mode, these candidates are presented to the operator, who may update one of them (e.g., after UI changes).
- **If no element meets `MIN_GENERAL_RETRIEVAL_SCORE`:** in attended mode, the operator is asked to capture the target element, and a new `UiElement` record (with UUID, name, descriptions, screenshot) is stored into the vector DB.
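To make the visual matching step concrete, here is a minimal, self-contained sketch of OpenCV template matching with the `org.bytedeco.opencv` bindings. The file names and threshold value mirror the config described below; this is an illustration, not the project’s `ElementLocator` code:

```java
import org.bytedeco.javacpp.DoublePointer;
import org.bytedeco.opencv.opencv_core.Mat;
import org.bytedeco.opencv.opencv_core.Point;
import static org.bytedeco.opencv.global.opencv_core.minMaxLoc;
import static org.bytedeco.opencv.global.opencv_imgcodecs.imread;
import static org.bytedeco.opencv.global.opencv_imgproc.TM_CCOEFF_NORMED;
import static org.bytedeco.opencv.global.opencv_imgproc.matchTemplate;

public class TemplateMatchSketch {
    public static void main(String[] args) {
        Mat screen = imread("screen.png");    // current screenshot
        Mat template = imread("element.png"); // stored element screenshot

        // Normalized cross-correlation: result holds a similarity score per position
        Mat result = new Mat();
        matchTemplate(screen, template, result, TM_CCOEFF_NORMED);

        DoublePointer minVal = new DoublePointer(1);
        DoublePointer maxVal = new DoublePointer(1);
        Point minLoc = new Point();
        Point maxLoc = new Point();
        minMaxLoc(result, minVal, maxVal, minLoc, maxLoc, null);

        double threshold = 0.8; // cf. element.locator.visual.similarity.threshold
        if (maxVal.get() >= threshold) {
            // Center of the best match, i.e., a candidate click target
            int centerX = maxLoc.x() + template.cols() / 2;
            int centerY = maxLoc.y() + template.rows() / 2;
            System.out.printf("Match at (%d, %d), score %.2f%n", centerX, centerY, maxVal.get());
        }
    }
}
```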
This project uses Maven for dependency management and building:

```bash
git clone <repository_url>
cd <project_directory>
mvn clean package
```

This command downloads dependencies, compiles the code, runs tests (if any), and packages the application into a standalone JAR file in the `target/` directory.
Instructions for setting up Chroma DB, currently the only supported vector database, can be found on its official website.
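One common way to run Chroma locally (our suggestion, not an official instruction of this project) is via its official Docker image, after which `vector.db.url` points at the exposed port:

```bash
# Start a local Chroma instance; it listens on port 8000 by default,
# so the agent can be pointed at it via vector.db.url=http://localhost:8000
docker run -d -p 8000:8000 chromadb/chroma
```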
Configure the agent by editing the `config.properties` file or by setting environment variables. Environment variables override properties file settings.
Key Configuration Properties:
- `unattended.mode` (Env: `UNATTENDED_MODE`): `true` for unattended execution, `false` for attended (trainee) mode. Default: `false`.
- `test.mode` (Env: `TEST_MODE`): `true` enables test mode, which saves intermediate screenshots (e.g., with bounding boxes drawn) during element location for debugging purposes; `false` disables this. Default: `false`.
- `port` (Env: `PORT`): port for the server mode. Default: `7070`.
- `vector.db.provider` (Env: `VECTOR_DB_PROVIDER`): vector database provider. Default: `chroma`.
- `vector.db.url` (Env: `VECTOR_DB_URL`): required URL for the vector database connection.
- `retriever.top.n` (Env: `RETRIEVER_TOP_N`): number of top similar elements to retrieve from the vector DB based on semantic element name similarity. Default: `3`.
- `model.provider` (Env: `MODEL_PROVIDER`): AI model provider (`google` or `openai`). Default: `google`.
- `instruction.model.name` (Env: `INSTRUCTION_MODEL_NAME`): name/deployment ID of the model for processing test case actions and verifications.
- `vision.model.name` (Env: `VISION_MODEL_NAME`): name/deployment ID of the vision-capable model.
- `model.max.output.tokens` (Env: `MAX_OUTPUT_TOKENS`): maximum number of tokens for model responses. Default: `5000`.
- `model.temperature` (Env: `TEMPERATURE`): sampling temperature for model responses. Default: `0.0`.
- `model.top.p` (Env: `TOP_P`): top-P sampling parameter. Default: `1.0`.
- `model.max.retries` (Env: `MAX_RETRIES`): maximum retries for model API calls. Default: `10`.
- `google.api.provider` (Env: `GOOGLE_API_PROVIDER`): Google API provider (`studio_ai` or `vertex_ai`). Default: `studio_ai`.
- `google.api.token` (Env: `GOOGLE_AI_TOKEN`): API key for Google AI Studio. Required if using AI Studio.
- `google.project` (Env: `GOOGLE_PROJECT`): Google Cloud project ID. Required if using Vertex AI.
- `google.location` (Env: `GOOGLE_LOCATION`): Google Cloud location (region). Required if using Vertex AI.
- `openai.api.key` (Env: `OPENAI_API_KEY`): API key for Azure OpenAI. Required if using OpenAI.
- `openai.api.endpoint` (Env: `OPENAI_API_ENDPOINT`): endpoint URL for Azure OpenAI. Required if using OpenAI.
- `test.step.execution.retry.timeout.millis` (Env: `TEST_STEP_EXECUTION_RETRY_TIMEOUT_MILLIS`): timeout for retrying failed test case actions. Default: `10000` ms.
- `test.step.execution.retry.interval.millis` (Env: `TEST_STEP_EXECUTION_RETRY_INTERVAL_MILLIS`): delay between test case action retries. Default: `1000` ms.
- `verification.retry.timeout.millis` (Env: `VERIFICATION_RETRY_TIMEOUT_MILLIS`): timeout for retrying failed verifications. Default: `10000` ms.
- `action.verification.delay.millis` (Env: `ACTION_VERIFICATION_DELAY_MILLIS`): delay after executing a test case action before performing the corresponding verification. Default: `1000` ms.
- `element.bounding.box.color` (Env: `BOUNDING_BOX_COLOR`): required color name (e.g., `green`) for the bounding box drawn during element capture in attended mode. Tune this value so that the color contrasts as much as possible with the average UI element color.
- `element.retrieval.min.target.score` (Env: `ELEMENT_RETRIEVAL_MIN_TARGET_SCORE`): minimum semantic similarity score for vector DB UI element retrieval. Elements reaching this score are treated as target element candidates and passed to the vision model for further disambiguation. Default: `0.85`.
- `element.retrieval.min.general.score` (Env: `ELEMENT_RETRIEVAL_MIN_GENERAL_SCORE`): minimum semantic similarity score for vector DB UI element retrieval. Elements reaching this score are displayed to the operator in case they decide to update any of them (e.g., due to UI changes). Default: `0.4`.
- `element.locator.visual.similarity.threshold` (Env: `VISUAL_SIMILARITY_THRESHOLD`): OpenCV template matching threshold. Default: `0.8`.
- `element.locator.top.visual.matches` (Env: `TOP_VISUAL_MATCHES_TO_FIND`): maximum number of visual matches of a single UI element from OpenCV to pass to the AI model for disambiguation. Default: `3`.
- `dialog.default.horizontal.gap`, `dialog.default.vertical.gap`, `dialog.default.font.type`, `dialog.default.font.size`, `dialog.user.interaction.check.interval.millis`: cosmetic and timing settings for interactive dialogs.
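Putting a few of these together, here is a minimal illustrative `config.properties` for a local Chroma instance and Google AI Studio; all values are placeholders to adapt, not recommendations:

```properties
# Minimal illustrative configuration (placeholder values)
unattended.mode=false
port=7070
vector.db.provider=chroma
vector.db.url=http://localhost:8000
model.provider=google
google.api.provider=studio_ai
google.api.token=<your-ai-studio-api-key>
instruction.model.name=<instruction-model-name>
vision.model.name=<vision-model-name>
element.bounding.box.color=green
```

Each of these can alternatively be supplied via the corresponding environment variable listed above.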
Runs a single test case defined in a JSON file. Build the project first (`mvn clean package`), then run the `Agent` class directly using the Maven Exec Plugin (add the configuration to `pom.xml` if needed):
```bash
mvn exec:java -Dexec.mainClass="org.tarik.ta.Agent" -Dexec.args="<path/to/your/testcase.json>"
```
Or run the packaged JAR:
```bash
java -jar target/<your-jar-name.jar> <path/to/your/testcase.json>
```
Starts a web server that listens for test case execution requests. Run the `Server` class using the Maven Exec Plugin:
```bash
mvn exec:java -Dexec.mainClass="org.tarik.ta.Server"
```
Or run the packaged JAR:
```bash
java -jar target/<your-jar-name.jar>
```

The server listens on the configured port (default `7070`). Send a `POST` request to the `/testcase` endpoint with the test case JSON in the request body, for example with Java’s built-in HTTP client as sketched below. The server responds with `200 OK` if it accepts the request (i.e., it is not already running a test case) or `429 Too Many Requests` if it is busy. The test case execution runs asynchronously.
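A minimal client sketch, assuming the default port and the illustrative JSON structure shown earlier (this class is not part of the project):

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class SubmitTestCase {
    public static void main(String[] args) throws Exception {
        String testCaseJson = """
                {"name": "Sample", "steps": [
                  {"stepDescription": "Click the 'New Note' button",
                   "expectedResults": "An empty note editor is displayed"}
                ]}""";

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:7070/testcase"))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(testCaseJson))
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());

        // 200 OK: accepted and started; 429 Too Many Requests: agent is busy
        System.out.println("Status: " + response.statusCode());
    }
}
```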
Please refer to the CONTRIBUTING.md file for guidelines on contributing to this project.
The project bundles the local embedding model (`all-MiniLM-L6-v2`) as a dependency of LangChain4j, along with the native OpenCV libraries required for visual element location.
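For illustration, here is a minimal sketch of embedding two element names with this model via LangChain4j and comparing them. The exact package path of `AllMiniLmL6V2EmbeddingModel` varies between LangChain4j versions, and this snippet is not the project’s actual retrieval code:

```java
import dev.langchain4j.data.embedding.Embedding;
import dev.langchain4j.model.embedding.AllMiniLmL6V2EmbeddingModel;
import dev.langchain4j.model.embedding.EmbeddingModel;
import dev.langchain4j.store.embedding.CosineSimilarity;

public class EmbeddingSketch {
    public static void main(String[] args) {
        // Runs fully locally; the ONNX model ships with the dependency
        EmbeddingModel model = new AllMiniLmL6V2EmbeddingModel();

        Embedding query = model.embed("Save button in the toolbar").content();
        Embedding stored = model.embed("Toolbar 'Save' button").content();

        // Retrieval thresholds like element.retrieval.min.target.score filter
        // on scores derived from this kind of similarity
        System.out.println("Similarity: " + CosineSimilarity.between(query, stored));
    }
}
```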
The number of distinct colors used to label visual matches is limited (see the `availableBoundingBoxColors` field in `ElementLocator`). If more visual matches are found than there are available colors, an error will occur. This can happen if `element.locator.visual.similarity.threshold` is too low, or if there are many visually similar elements on the screen (e.g., identical checkboxes in a list of items). In that case you might need a different labelling method for visual matches: the primary approach during development of this project was to use numbers placed outside the bounding box as labels, which proved less efficient than distinct bounding box colors, but remains a good option when the latter cannot be applied.
Contributions to the `main` branch should include relevant unit tests. Contributing by adding new unit tests to existing code is, as always, welcome.