Getting Started

This section of the manual contains introductory tutorials for installing labgrid, running your first test and setting up the distributed infrastructure. For an overview of labgrid's basic design and components, read the Overview first.


Depending on your distribution, you may need to install some dependencies first. On Debian stretch and buster these usually are:

$ apt-get install python3 python3-virtualenv python3-pip python3-setuptools virtualenv

In many cases, the easiest way is to install labgrid into a virtualenv:

$ virtualenv -p python3 labgrid-venv
$ source labgrid-venv/bin/activate

Start installing labgrid by cloning the repository and installing the requirements from the requirements.txt file:

$ git clone
$ cd labgrid && pip install -r requirements.txt
$ python3 setup.py install


Previous documentation recommended installation via pip (pip3 install labgrid). This led to broken installations due to unexpected incompatibilities with new releases of the dependencies. Consequently, we now recommend using the pinned versions from the requirements.txt file for most use cases.

labgrid also supports installation as a library via pip, but we only test against the library versions specified in the requirements.txt file. When installing directly via pip, you therefore have to verify compatibility yourself.


If you are installing via pip and intend to use Serial over IP (RFC2217), it is highly recommended to uninstall pyserial after installation and replace it with the pyserial version from the labgrid project:

$ pip uninstall pyserial
$ pip install

This pyserial version contains two fixes for an issue we found with Serial over IP multiplexers. It also reduces Serial over IP traffic considerably, since the port is not reconfigured when labgrid changes the timeout (which the library does a lot internally).

Test your installation by running:

$ labgrid-client --help
usage: labgrid-client [-h] [-x URL] [-c CONFIG] [-p PLACE] [-d] COMMAND ...

If the help for labgrid-client does not show up, open an Issue. If everything was successful so far, proceed to the next section:

Optional Requirements

labgrid provides optional features which are not included in the default requirements.txt. The tested library versions for each feature are listed in a separate requirements file. For example, to install SNMP support:

$ pip install -r snmp-requirements.txt


Onewire support requires the libow library with headers, installable on Debian via the libow-dev package. Use the onewire-requirements.txt file to install the correct onewire library version in addition to the normal installation.


SNMP support requires two additional packages, pysnmp and pysnmpmibs. They are included in the snmp-requirements.txt file.


Modbus support requires the additional package pyModbusTCP. It is included in the modbus-requirements.txt file.

Running Your First Test

Start by copying the initial example:

$ mkdir ../first_test/
$ cp examples/shell/* ../first_test/
$ cd ../first_test/

Connect your embedded board (Raspberry Pi, RIoTboard, …) to your computer and adjust the port parameter of the RawSerialPort resource and the username and password of the ShellDriver in local.yaml:

targets:
  main:
    resources:
      RawSerialPort:
        port: "/dev/ttyUSB0"
    drivers:
      ManualPowerDriver:
        name: "example"
      SerialDriver: {}
      ShellDriver:
        prompt: 'root@\w+:[^ ]+ '
        login_prompt: ' login: '
        username: 'root'

You can check which device name gets assigned to your USB-Serial converter by unplugging the converter, running dmesg -w and plugging it back in. Boot up your board (manually) and run your first test:

$ pytest --lg-env local.yaml

It should return successfully; in case it does not, open an issue.

If you want to build the documentation, you need some more dependencies:

$ pip3 install -r doc-requirements.txt

The documentation is inside doc/. The HTML documentation is built by running:

$ cd doc/
$ make html

The HTML documentation is written to doc/.build/html/.

Setting Up the Distributed Infrastructure

The labgrid distributed infrastructure consists of three components:

  1. Coordinator

  2. Exporter

  3. Client

The system needs at least one coordinator and one exporter; these can run on the same machine. The client is used to access functionality provided by an exporter. Over the course of this tutorial we will set up a coordinator and an exporter, and learn how to access the exporter via the client.


To start the coordinator, we will download the labgrid repository, create an extra virtualenv and install the dependencies via the requirements file.

$ git clone
$ cd labgrid && virtualenv -p python3 crossbar_venv
$ source crossbar_venv/bin/activate
$ sudo apt install libsnappy-dev
$ pip install -r crossbar-requirements.txt
$ python setup.py install

All necessary dependencies should be installed now; we can start the coordinator by running crossbar start inside of the repository.


This is possible because the labgrid repository contains the crossbar configuration for the coordinator in the .crossbar folder. crossbar is a network messaging framework for building distributed applications, which labgrid plugs into.


For long-running deployments, you should copy and customize the .crossbar/config.yaml file for your use case. This includes setting a different workdir and may include changing the listening port.


The exporter needs a configuration file written in YAML syntax, listing the resources to be exported from the local machine. The config file contains one or more named resource groups. Each group contains one or more resource declarations and optionally a location string (see the configuration reference for details).

For example, to export a USBSerialPort with ID_SERIAL_SHORT of ID23421JLK, the group name example-group and the location example-location:

example-group:
  location: example-location
  USBSerialPort:
    match:
      'ID_SERIAL_SHORT': 'ID23421JLK'


Use labgrid-suggest to generate the YAML snippets for most exportable resources.

The exporter can now be started by running:

$ labgrid-exporter configuration.yaml

Additional groups and resources can be added:

example-group:
  location: example-location
  USBSerialPort:
    match:
      'ID_SERIAL_SHORT': 'P-00-00682'
    speed: 115200
example-group-2:
  NetworkPowerPort:
    model: netio
    host: netio1
    index: 3

Restart the exporter to activate the new configuration.


The ManagedFile helper creates temporary uploads in the exporter's /var/cache/labgrid directory. This directory needs to be created manually and should allow write access for users. The contrib/ directory in the labgrid project contains a tmpfiles configuration example to automatically create and clean the directory. It is also highly recommended to enable fs.protected_regular=1 and fs.protected_fifos=1 for kernels >= 4.19, to protect users from opening files not owned by them in world-writable sticky directories. For more information, see the corresponding kernel commit.
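As a sketch of what such a setup could look like (the exact mode, ownership and cleanup age depend on your site policy; consult the example shipped in the labgrid contrib/ directory for the tested version), a tmpfiles.d entry and the sysctl hardening might be:

```
# /etc/tmpfiles.d/labgrid.conf  (illustrative values, not the shipped example)
# type  path                mode  user  group  age
d       /var/cache/labgrid  1775  root  users  10d

# /etc/sysctl.d/90-labgrid-protect.conf
fs.protected_regular = 1
fs.protected_fifos = 1
```

Apply the sysctl settings with sysctl --system (or a reboot) and let systemd-tmpfiles create the directory.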


Finally, we can test the client functionality. Run:

$ labgrid-client resources

You can see the available resources listed by the coordinator. The groups example-group and example-group-2 should be available there.

To show more details on the exported resources, use -v (or -vv):

$ labgrid-client -v resources
Exporter 'kiwi':
  Group 'example-group' (kiwi/example-group/*):
    Resource 'NetworkPowerPort' (kiwi/example-group/NetworkPowerPort[/NetworkPowerPort]):
      {'acquired': None,
       'avail': True,
       'cls': 'NetworkPowerPort',
       'params': {'host': 'netio1', 'index': 3, 'model': 'netio'}}

You can now add a place with:

$ labgrid-client --place example-place create

And add resources to this place (-p is short for --place):

$ labgrid-client -p example-place add-match */example-group/*

This adds the previously defined resource from the exporter to the place. To interact with this place, it needs to be acquired first, which is done with:

$ labgrid-client -p example-place acquire

Now we can connect to the serial console:

$ labgrid-client -p example-place console


Using the remote console connection requires microcom to be installed on the host where labgrid-client is called.

See Remote Access for some more advanced features. For a complete reference have a look at the labgrid-client(1) man page.

udev Matching

labgrid allows the exporter (or the client-side environment) to match resources via udev rules. The udev resources become available to the test/exporter as soon as they are plugged into the computer, e.g. allowing an exporter to export all USB ports on a specific hub and making a NetworkSerialPort available as soon as it is plugged into one of the hub's ports. labgrid also provides a small utility called labgrid-suggest which outputs the proper YAML-formatted snippets for you. The information udev has on a device can be viewed by executing:

 $ udevadm info /dev/ttyUSB0
 E: ID_MODEL_FROM_DATABASE=CP210x UART Bridge / myAVR mySmartUSB light
 E: ID_MODEL_ID=ea60
 E: ID_PATH=pci-0000:00:14.0-usb-0:5:1.0
 E: ID_PATH_TAG=pci-0000_00_14_0-usb-0_5_1_0
 E: ID_SERIAL=Silicon_Labs_CP2102_USB_to_UART_Bridge_Controller_P-00-00682
 E: ID_SERIAL_SHORT=P-00-00682
 E: ID_TYPE=generic

In this case the device has an ID_SERIAL_SHORT key with a unique ID embedded in the USB-serial converter. The resource match configuration for this USB serial converter is:

USBSerialPort:
  match:
    'ID_SERIAL_SHORT': 'P-00-00682'

This section can now be added under the resource key in an environment configuration or under its own entry in an exporter configuration file.
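Conceptually, such a match is just a subset check: every configured key/value pair must be present in the device's udev properties. The following plain-Python sketch illustrates the idea (it is not the actual labgrid implementation):

```python
# Sketch: does a device's udev property dict satisfy a match dict?
# Every configured key must be present with exactly the given value.
# This mirrors the idea behind labgrid's resource matching; it is
# not labgrid's actual code.
def matches(properties, match):
    return all(properties.get(key) == value for key, value in match.items())

# Properties as reported by `udevadm info` for the example device above.
device = {
    "ID_SERIAL_SHORT": "P-00-00682",
    "ID_MODEL_ID": "ea60",
}

assert matches(device, {"ID_SERIAL_SHORT": "P-00-00682"})
assert not matches(device, {"ID_SERIAL_SHORT": "SOMETHING-ELSE"})
```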

As the USB bus number can change depending on the kernel driver initialization order, it is better to use @ID_PATH instead of @sys_name for USB devices. In the default udev configuration, the path is not available for all USB devices, but this can be changed by creating a udev rules file:

SUBSYSTEMS=="usb", IMPORT{builtin}="path_id"

Using a Strategy

Strategies allow the labgrid library to automatically bring the board into a defined state, e.g. boot through the bootloader into the Linux kernel and log in to a shell. They have a few requirements:

  • A driver implementing the PowerProtocol; if no controllable infrastructure is available, a ManualPowerDriver can be used.

  • A driver implementing the LinuxBootProtocol, usually a specific driver for the board's bootloader.

  • A driver implementing the CommandProtocol, usually a ShellDriver with a SerialDriver below it.

labgrid ships with two builtin strategies, BareboxStrategy and UBootStrategy. These can be used as reference examples for simple strategies; more complex tests usually require implementing your own strategy.

To use a strategy, add it and its dependencies to your configuration YAML, retrieve it in your test and call the transition(status) function.

>>> strategy = target.get_driver("BareboxStrategy")
>>> strategy.transition("barebox")

An example using the pytest plugin is provided under examples/strategy.
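To make the transition() idea concrete, a strategy can be thought of as a small state machine that walks through the intermediate states in order, invoking the appropriate drivers at each step. The sketch below illustrates that idea only; the class, state names and helper method are hypothetical, not labgrid's API.

```python
# Toy sketch of the strategy concept: transition() steps through the
# states between the current one and the requested one. A real labgrid
# strategy would call its drivers (power cycle, bootloader boot, shell
# login) in _enter(); names here are illustrative, not labgrid's API.
STATES = ["off", "barebox", "shell"]

class ToyStrategy:
    def __init__(self):
        self.status = "off"

    def _enter(self, state):
        # Placeholder for driver calls, e.g. power.cycle() for "barebox"
        # or shell.await_login() for "shell".
        self.status = state

    def transition(self, status):
        if status not in STATES:
            raise ValueError(f"unknown state {status!r}")
        current = STATES.index(self.status)
        target = STATES.index(status)
        # Walk forward through every intermediate state, in order.
        for state in STATES[current + 1:target + 1]:
            self._enter(state)

strategy = ToyStrategy()
strategy.transition("shell")   # passes through "barebox" first
assert strategy.status == "shell"
```

Real strategies additionally track which transitions are valid and raise an error for unsupported ones; see examples/strategy for a working setup.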