In image template-based desktop automation, you provide the robot with screenshots of the parts of the interface that it needs to interact with, like a button or input field. The images are saved together with your automation code. The robot will compare the image to what is currently displayed on the screen and find its target.
Robocorp provides cross-platform desktop automation support with the RPA.Desktop library. It works on Windows, Linux, and macOS.
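For example, clicking a button based on a saved screenshot can be expressed like this (a minimal sketch, not part of this robot; the image file name is an illustrative placeholder):

```robot
*** Settings ***
Library    RPA.Desktop

*** Tasks ***
Click a button using an image template
    # Compare the saved screenshot to what is currently on the screen
    # and click the center of the best match. The file name is a placeholder.
    Click    image:images/submit-button.png
```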
This example robot demonstrates the use of image templates and keyboard shortcuts to find travel directions between two random locations on Earth.
The robot completes the task in the steps described below.
Note: This robot requires macOS Big Sur. The layout and the behavior of the Maps app vary between macOS releases. macOS will ask for permissions the first time you run the robot. Go to System Preferences -> Security & Privacy and check Robocorp Lab, Code, or Terminal (depending on where you run the robot from) in the Accessibility and Screen Recording sections.
Another important consideration is that system settings can affect image recognition: how the interface elements look on screen depends on settings such as color schemes, transparency, and system fonts. Images taken on one system might end up looking different on the target system, and the robot might not recognize them, stopping the process. For this robot, macOS should use the 'Dark' appearance under System Preferences -> General. See our Desktop automation page for more information.
The robot uses three libraries to automate the task. As a final teardown step, it closes all the browsers it has opened.
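Based on the steps described below, the library imports could look roughly like this (a sketch; which browser library the robot actually uses is an assumption):

```robot
*** Settings ***
Library    Process                 # runs the "open -a Maps" command
Library    RPA.Browser.Selenium    # scrapes locations, handles the Google Maps fallback
Library    RPA.Desktop             # image templates and keyboard shortcuts
```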
The robot uses a web browser to scrape and return two random locations from a suitable website.
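The scraping step could be sketched like this (the URL and locators below are hypothetical placeholders, not the site the robot actually uses):

```robot
*** Settings ***
Library    RPA.Browser.Selenium

*** Keywords ***
Get two random locations
    # Hypothetical site and CSS locators, for illustration only.
    Open Available Browser    https://example.com/random-locations
    ${from}=    Get Text    css:.location-from
    ${to}=      Get Text    css:.location-to
    [Return]    ${from}    ${to}
```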
The robot opens the Maps app using the Run Process keyword from the Process library. It executes the open -a Maps command. You can run the same command in your terminal to see what happens!
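A minimal sketch of that step:

```robot
*** Settings ***
Library    Process

*** Keywords ***
Open the Maps app
    # Equivalent to running "open -a Maps" in a terminal.
    Run Process    open    -a    Maps
```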
The robot knows when the Maps app is open by waiting for the Maps.MapMode image template to return a match.
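With RPA.Desktop, waiting for an image template can be written like this (a sketch; the file path and timeout value are assumptions):

```robot
*** Settings ***
Library    RPA.Desktop

*** Keywords ***
Wait for the Maps app to open
    # Block until the saved screenshot is found on the screen, or fail on timeout.
    Wait For Element    image:images/Maps.MapMode.png    timeout=30
```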
The robot maximizes the Maps app window using a keyboard shortcut unless the app is already maximized. The Run Keyword If keyword is used for conditional execution. The robot knows the Maps app is maximized when the Desktop.WindowControls image template does not return a match (when the close/minimize/maximize icons are not anywhere on the screen).
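One way to express that logic (a sketch; the image path, the full-screen shortcut, and the keyword names are assumptions, not the robot's exact implementation):

```robot
*** Settings ***
Library    RPA.Desktop

*** Keywords ***
Maximize the Maps app if needed
    # If the window controls are still visible, the app is not maximized yet.
    ${controls_visible}=    Run Keyword And Return Status
    ...    Find Element    image:images/Desktop.WindowControls.png
    # Send the macOS full-screen shortcut only when needed.
    Run Keyword If    ${controls_visible}    Press Keys    cmd    ctrl    f
```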
The robot sets the directions view in the Maps app to a known starting state (empty from and to locations).
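Resetting input fields like this typically combines image locators with keyboard shortcuts, for example (a sketch with assumed image names and shortcuts):

```robot
*** Settings ***
Library    RPA.Desktop

*** Keywords ***
Clear a location field
    [Arguments]    ${field_image}
    # Click the field, select any existing text, and delete it.
    Click         image:${field_image}
    Press Keys    cmd    a
    Press Keys    delete
```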
The robot waits until Google Maps has loaded the directions and takes a full web page screenshot.
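Using a Selenium-based browser library, that step could be sketched like this (the locator and file name are placeholders; capturing the full page rather than just the visible viewport may require browser-specific options):

```robot
*** Settings ***
Library    RPA.Browser.Selenium

*** Keywords ***
Screenshot the Google Maps directions
    # Wait until the directions panel is visible, then capture the page.
    Wait Until Element Is Visible    css:#directions    timeout=30 seconds
    Capture Page Screenshot    google-maps-directions.png
```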
The robot needs to input the from and to locations. A dedicated keyword provides a generic way to target those input elements in the UI.
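Such a keyword might take the image locator and the text to type as arguments (a sketch with assumed keyword and argument names):

```robot
*** Settings ***
Library    RPA.Desktop

*** Keywords ***
Enter location
    [Arguments]    ${field_image}    ${location}
    # Click the field identified by the image template, type the value, confirm.
    Click         image:${field_image}
    Type Text     ${location}
    Press Keys    enter
```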
The robot tries to find the directions using the Maps app. If that fails, the robot gets the directions from Google Maps.
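A try-then-fall-back pattern like this is often written with Run Keyword And Return Status (a sketch; the keyword names for the two strategies are assumptions):

```robot
*** Keywords ***
Get directions
    [Arguments]    ${from}    ${to}
    # Try the Maps app first; fall back to Google Maps if that fails.
    ${success}=    Run Keyword And Return Status
    ...    Get directions using the Maps app    ${from}    ${to}
    Run Keyword Unless    ${success}
    ...    Get directions using Google Maps    ${from}    ${to}
```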