Hello (again) :-)! In a previous blog post we took a look at combining the Viam platform with a ROS 1 robot. Continuing here, we will focus on ROS 2, explaining how we connected a ROS 2 (Humble) Turtlebot 4 to Viam and added a mobile app for remote control.
Why remote control? Well, first of all because it’s fun! But it also has broad applicability, and is a step towards enabling fully autonomous robots. Stay tuned, because more blog posts will follow where we introduce many other Viam services, such as code deployment and extending your robot’s capabilities with machine learning, SLAM, obstacle detection, navigation, and so on.
ROS 2 Integration - Extending ROS with Viam
Viam is a platform for smart machines, such as robots; it can be used instead of existing code or in conjunction with it. At its core, Viam is built around modular resources, which come in two flavors: components and services.
A component is basically an abstraction of a piece of hardware, such as a camera or a motor. A service is a software capability, such as data management, SLAM, or navigation. All of these resources are accessible through a standardized set of APIs and data structures (gRPC/Protocol Buffers), which makes software development straightforward and efficient across a variety of programming languages.
ROS uses the concept of nodes communicating with each other through topics via publish/subscribe. A node is similar to a Viam resource: an encapsulated piece of functionality or an abstraction of a hardware component such as a camera.
Rebuilding a robot from scratch is time-consuming and not the best use of anyone’s time. Instead, we looked into co-deploying Viam alongside the existing ROS capabilities and integrating the two. To combine both worlds, we decided to use Viam’s extensibility through modular resources.
Creating ROS component models
To do this, we created Viam ROS component models for each of the Turtlebot components we wanted to integrate. This way, a software developer can talk to ROS-managed components or nodes through Viam’s standard APIs, while behind the scenes the custom component transparently translates the Viam API calls into ROS messages.
Creating these Viam ROS component models may sound like a crazy software development project on its own, but I can guarantee you that it is actually quite easy! We will cover this in more detail in an upcoming blog post, but at a high level, all you do is subclass an existing Viam component, let’s say a camera, and implement the required Viam standard APIs. In the case of a camera, it is basically as simple as converting a ROS Image message into a PIL Image and exposing it through the get_image API, as you can see in the GitHub repo.
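To give you a feel for the pattern, here is a stripped-down sketch using stand-in classes in place of the real viam-sdk Camera base class and rclpy types (the names RosImage and RosCamera are illustrative, not the actual classes from the repo): a ROS subscription callback caches the latest frame, and get_image hands it back in a Viam-friendly form.

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class RosImage:
    """Stand-in for sensor_msgs/msg/Image: raw rgb8 pixels plus dimensions."""
    width: int
    height: int
    encoding: str
    data: bytes


class RosCamera:
    """Sketch of a Viam camera model backed by a ROS image topic."""

    def __init__(self) -> None:
        self._latest: Optional[RosImage] = None

    def ros_callback(self, msg: RosImage) -> None:
        # In the real component, rclpy invokes this callback for every
        # frame published on the subscribed image topic.
        self._latest = msg

    def get_image(self) -> tuple[int, int, bytes]:
        # The real model builds a PIL image at this point, roughly
        # PIL.Image.frombytes("RGB", (w, h), msg.data); this sketch just
        # returns the decoded dimensions and pixel buffer.
        if self._latest is None:
            raise RuntimeError("no frame received from ROS yet")
        msg = self._latest
        if msg.encoding != "rgb8":
            raise ValueError(f"unexpected encoding: {msg.encoding}")
        return msg.width, msg.height, msg.data
```

The actual component also deals with mime types, timeouts, and the async signatures the Viam SDK expects; see the repo for the full version.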
To send a command to the robot, we simply take the parameters and translate them into ROS messages inside the Viam ROS component. The component then publishes the message to the appropriate ROS topic, and the robot starts executing.
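A minimal sketch of that translation, with a stand-in Twist and a fake publisher in place of geometry_msgs and a real rclpy publisher (all names here are illustrative, not the repo’s actual classes):

```python
from dataclasses import dataclass


@dataclass
class Twist:
    """Stand-in for geometry_msgs/msg/Twist, reduced to the two fields
    a differential-drive base actually uses."""
    linear_x: float = 0.0   # forward velocity
    angular_z: float = 0.0  # rotation rate


class FakeCmdVelPublisher:
    """Stand-in for an rclpy publisher on a topic like /cmd_vel."""

    def __init__(self) -> None:
        self.sent: list[Twist] = []

    def publish(self, msg: Twist) -> None:
        self.sent.append(msg)


class RosBase:
    """Sketch of a Viam base model that forwards drive commands to ROS."""

    def __init__(self, publisher: FakeCmdVelPublisher) -> None:
        self._publisher = publisher

    def set_power(self, linear: float, angular: float) -> None:
        # Translate the Viam API call into the equivalent ROS message and
        # publish it; the robot's base controller handles the rest.
        self._publisher.publish(Twist(linear_x=linear, angular_z=angular))
```

In the real component the publisher comes from the shared ROS node, and set_power follows the async signature of the Viam base API.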
Retrieving the state of a component is not much different. The Viam component instance subscribes to a defined or configurable ROS topic, and whenever a status update is received by the component, the state of the Viam component is updated through the standard Viam component APIs.
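The same caching pattern, sketched for state retrieval with odometry as the example (Odometry and RosMovementSensor are illustrative stand-ins, not the repo’s actual classes): the subscription callback stores the latest message, and the standard Viam read simply returns the cached state.

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class Odometry:
    """Stand-in for nav_msgs/msg/Odometry, reduced to a 2D position."""
    x: float
    y: float


class RosMovementSensor:
    """Sketch of a Viam sensor whose state mirrors the latest ROS message."""

    def __init__(self) -> None:
        self._latest: Optional[Odometry] = None

    def odom_callback(self, msg: Odometry) -> None:
        # rclpy would call this whenever a new message arrives on the
        # configured odometry topic.
        self._latest = msg

    def get_position(self) -> tuple[float, float]:
        # Viam-style read: just return the cached ROS state.
        if self._latest is None:
            raise RuntimeError("no odometry received from ROS yet")
        return self._latest.x, self._latest.y
```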
To publish and subscribe to ROS topics, we had to create at least one ROS node, which is managed by the Viam SDK. We also considered creating a node for each component, but decided to stick with a single node for all components, which keeps the integration leaner and less complicated. Adding more nodes later is no problem anyway.
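The single-shared-node idea can be sketched as a small thread-safe singleton (the names SharedRosNode and get_shared_node are illustrative; in the real integration this would wrap an rclpy Node):

```python
import threading


class SharedRosNode:
    """Stand-in for the single rclpy Node shared by all component models."""

    def __init__(self, name: str = "viam_ros_node") -> None:
        self.name = name


_node = None
_lock = threading.Lock()


def get_shared_node() -> SharedRosNode:
    """Create the shared node on first use; return the same instance after.

    Every Viam component model calls this instead of spinning up its own
    node, so all publishers and subscriptions hang off one ROS node.
    """
    global _node
    with _lock:
        if _node is None:
            _node = SharedRosNode()
        return _node
```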
For all the code people out there, feel free to dive into the GitHub repo here.
Now that we have made some of the ROS nodes accessible through Viam APIs, we can also directly start using all of the out-of-the-box Viam features, such as data collection, cloud integration, and applying machine learning models on top of the camera stream. This will also be explained in another blog post to be published soon.
Adding Remote Control to the Turtlebot
The goal for this blog post was to add remote control capabilities to our Turtlebot. As all of the components required for remote control (camera and base) are accessible through Viam standard APIs by now, this should be rather easy. Let’s put it to the test!
While there are multiple mobile app development frameworks, one of the most recent and popular cross-platform frameworks is Flutter. Incidentally, Viam has just recently released its own Flutter SDK. Sounds like a great way to start exploring!
While I had some experience with Apple’s Swift, Flutter was new to me, but I grew comfortable with it very quickly. So don’t shy away from it if you haven’t touched it yet. I promise that it is very fun!
The Viam Flutter SDK integrates seamlessly with the Viam RDK (Robot Development Kit) APIs through gRPC and Protocol Buffers and solves a lot of headaches around configuring networking out of the box. Remote control literally means you are not next to your robot, and that also implies that you are likely not connected to the same network as your robot.
With traditional approaches this would require you to change your firewall or router configuration, but with Viam you don’t have to do any of that. Just like modern peer-to-peer collaboration software, Viam uses WebRTC to establish connectivity between your robot and your smartphone across network boundaries, and gives you access to real-time video streams.
Building remote control functionality with Flutter
Building the actual Flutter remote control app was also surprisingly simple. In addition to the directly accessible Viam standard APIs for moving the base and accessing the camera stream, there is also a ready-made Viam base widget available.
This widget, like many others, gives you ready-to-use GUI components; all I had to do was pass in the components to control, in our case the base and the camera I wanted to use for remote control. The widget then shows the camera picture and provides a ready-to-use joystick, as shown in the overview diagram below:
Since I am planning to use this app with other robots as well, I have also added a simple login screen, allowing you to connect to different robots in the future. Now here comes the test ride!
Seeing the ROS2 integration in action
While I had concerns about latency before the test ride, I was positively surprised by how responsive the setup was without any additional tuning. It felt like real time, just as you would expect. Ok, to clarify for the hardware people, it was not “hard real time,” but close enough, and this was with just 200 lines of Flutter/Dart code. I think it’s pretty impressive!
For those of you who want to test it out for yourselves, please do so, and reach out if you have any questions, suggestions, or ideas you would like to discuss.
As I mentioned before, stay tuned; we look forward to “seeing” you in our next blog post!