Getting Started
This section of the manual contains introductory tutorials for installing labgrid, running your first test and setting up the distributed infrastructure. For an overview of labgrid's basic design and components, read the Overview chapter first.
Installation
Depending on your distribution, you may need to install some dependencies first. On Debian, these usually are:
$ sudo apt install python3 python3-virtualenv python3-pip python3-setuptools virtualenv microcom
In many cases, the easiest way is to install labgrid into a virtualenv:
$ virtualenv -p python3 labgrid-venv
$ source labgrid-venv/bin/activate
labgrid-venv $ pip install --upgrade pip
Install Latest Release
labgrid-venv $ pip install labgrid
Install Development State
Start by cloning the repository and installing labgrid:
labgrid-venv $ git clone https://github.com/labgrid-project/labgrid
labgrid-venv $ cd labgrid && pip install .
Note
If you are not installing via pip and intend to use Serial over IP (RFC 2217), it is highly recommended to uninstall pyserial after installation and replace it with the pyserial version from the labgrid project:
https://github.com/labgrid-project/pyserial/tags
This pyserial version contains two fixes for an issue we found with Serial over IP multiplexers. Additionally, it reduces Serial over IP traffic considerably, since the port is not reconfigured when labgrid changes the timeout (which the library does frequently).
Test Installation
Test your installation by running:
labgrid-venv $ labgrid-client --help
usage: labgrid-client [-h] [-x ADDRESS] [-c CONFIG] [-p PLACE] [-d] COMMAND ...
...
If the help for labgrid-client does not show up, open an issue. If everything was successful so far, proceed to the next section.
Optional Requirements
labgrid provides optional features which are not included in the default installation. For example, to install labgrid with SNMP support:
labgrid-venv $ pip install ".[snmp]"
Onewire
Onewire support requires the libow library with headers, installable on Debian via the libow-dev package. Use the onewire extra to install the correct onewire library version in addition to the normal installation.
SNMP
SNMP support requires two additional packages, pysnmp and pysnmp-mibs. They are included in the snmp extra.
Modbus
Modbus support requires an additional package pyModbusTCP. It is included in the modbus extra.
ModbusRTU
Modbus RTU support requires an additional package, minimalmodbus. It is included in the modbusrtu extra.
Running Your First Test
Start by copying the initial example:
$ mkdir ../first_test/
$ cp examples/shell/* ../first_test/
$ cd ../first_test/
Connect your embedded board (Raspberry Pi, RIoTboard, …) to your computer and adjust the port parameter of the RawSerialPort resource, as well as the username and password of the ShellDriver, in local.yaml:
targets:
  main:
    resources:
      RawSerialPort:
        port: "/dev/ttyUSB0"
    drivers:
      ManualPowerDriver:
        name: "example"
      SerialDriver: {}
      ShellDriver:
        prompt: 'root@\w+:[^ ]+ '
        login_prompt: ' login: '
        username: 'root'
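The prompt and login_prompt values are regular expressions matched against the serial output. A quick way to sanity-check your prompt pattern before running the test is to try it against a captured prompt line. The following standalone sketch uses Python's re module; the example prompt line is an assumption for a typical root shell:

```python
import re

# The prompt pattern from local.yaml above.
prompt = re.compile(r'root@\w+:[^ ]+ ')

# A prompt line as it might appear on the serial console (assumed example).
line = 'root@myboard:~# '

# The pattern should match; if it does not, adjust it to your board's prompt.
assert prompt.search(line) is not None
```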
You can check which device name gets assigned to your USB serial converter by unplugging the converter, running dmesg -w and plugging it back in. Boot up your board (manually) and run your first test:
labgrid-venv $ pytest --lg-env local.yaml test_shell.py
It should complete successfully; in case it does not, open an issue.
Setting Up the Distributed Infrastructure
The labgrid distributed infrastructure consists of three components:
the coordinator, which keeps track of the resources and places in the lab
the exporter, which exports local resources for remote use
the client, which is used to access functionality provided by an exporter
The system needs at least one coordinator and exporter; these can run on the same machine. Over the course of this tutorial we will set up a coordinator and an exporter, and learn how to access the exporter via the client.
Attention
Labgrid requires your user to be able to connect from the client machine via ssh to the exporter machine _without_ a password prompt. This means that public key authentication should be configured on all involved machines for your user beforehand.
Coordinator
We can simply start the coordinator:
labgrid-venv $ labgrid-coordinator
Exporter
The exporter needs a configuration file written in YAML syntax, listing the resources to be exported from the local machine. The config file contains one or more named resource groups. Each group contains one or more resource declarations and optionally a location string (see the configuration reference for details).
For example, to export a USBSerialPort with an ID_SERIAL_SHORT of ID23421JLK, using the group name example-group and the location example-location:
example-group:
  location: example-location
  USBSerialPort:
    match:
      ID_SERIAL_SHORT: ID23421JLK
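To find the ID_SERIAL_SHORT value for your own converter, you can query the device's udev properties; the device path /dev/ttyUSB0 below is an assumption, adjust it to your setup:

```
$ udevadm info -q property -n /dev/ttyUSB0 | grep ID_SERIAL_SHORT
```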
Note
See the udev matching section on how to match ManagedResources, and the resources section for a description of the different resource types.
The exporter requires additional dependencies:
$ sudo apt install ser2net
It can now be started by running:
labgrid-venv $ labgrid-exporter configuration.yaml
Additional groups and resources can be added:
example-group:
  location: example-location
  USBSerialPort:
    match:
      ID_SERIAL_SHORT: P-00-00682
    speed: 115200
  NetworkPowerPort:
    model: netio
    host: netio1
    index: 3
example-group-2:
  USBSerialPort:
    match:
      ID_SERIAL_SHORT: KSLAH2341J
Restart the exporter to activate the new configuration.
Attention
The ManagedFile will create temporary uploads in the exporters
/var/cache/labgrid
directory. This directory needs to be created manually
and should allow write access for users. The /contrib
directory in the
labgrid-project contains a tmpfiles configuration example to automatically
create and clean the directory.
It is also highly recommended to enable fs.protected_regular=1
and
fs.protected_fifos=1
for kernels>=4.19, to protect the users from opening
files not owned by them in world writeable sticky directories.
For more information see this kernel commit.
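As an illustration, a tmpfiles.d entry for this directory could look like the following sketch; the file path, mode and age here are assumptions, and the shipped example in contrib/systemd is authoritative:

```
# /etc/tmpfiles.d/labgrid.conf (hypothetical path)
# Create /var/cache/labgrid as a sticky, world-writable directory and
# remove uploaded files older than 10 days.
d /var/cache/labgrid 1777 root root 10d
```

The sysctl settings can likewise be made persistent by placing them in a file under /etc/sysctl.d/.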
Client
Finally, we can test the client functionality; run:
labgrid-venv $ labgrid-client resources
kiwi/example-group/NetworkPowerPort
kiwi/example-group/NetworkSerialPort
kiwi/example-group-2/NetworkSerialPort
You can see the available resources listed by the coordinator. The groups example-group and example-group-2 should be available there.
To show more details on the exported resources, use -v (or -vv):
labgrid-venv $ labgrid-client -v resources
Exporter 'kiwi':
Group 'example-group' (kiwi/example-group/*):
Resource 'NetworkPowerPort' (kiwi/example-group/NetworkPowerPort[/NetworkPowerPort]):
{'acquired': None,
'avail': True,
'cls': 'NetworkPowerPort',
'params': {'host': 'netio1', 'index': 3, 'model': 'netio'}}
...
You can now add a place with:
labgrid-venv $ labgrid-client --place example-place create
And add resources to this place (-p is short for --place):
labgrid-venv $ labgrid-client -p example-place add-match */example-group/*
This adds the previously defined resources from the exporter to the place. To interact with this place, it needs to be acquired first:
labgrid-venv $ labgrid-client -p example-place acquire
Now we can connect to the serial console:
labgrid-venv $ labgrid-client -p example-place console
Note
Using a remote connection requires microcom or telnet to be installed on the host where labgrid-client is called.
See Remote Access for some more advanced features. For a complete reference have a look at the labgrid-client(1) man page.
Systemd files
Labgrid comes with several systemd files in contrib/systemd:
service files for the coordinator and exporter
a tmpfiles.d file to regularly remove files uploaded to the exporter in /var/cache/labgrid
a sysusers.d file to create the labgrid user and group, enabling members of the labgrid group to upload files to the exporter in /var/cache/labgrid
Follow these instructions to install the systemd files on your machine(s):
1. Copy the service, tmpfiles.d and sysusers.d files to the respective installation paths of your distribution.
2. Adapt the ExecStart paths of the service files to the respective Python virtual environments of the coordinator and exporter.
3. Adjust the SupplementaryGroups option in the labgrid-exporter.service file to your distribution so that the exporter gains read and write access on TTY devices (for ser2net); most often, these groups are called dialout, plugdev or tty. Depending on your udev configuration, you may need multiple groups.
4. Set the coordinator address the exporter should connect to by overriding the exporter service file, i.e. execute systemctl edit labgrid-exporter.service and add the following snippet:
   [Service]
   Environment="LG_COORDINATOR=<your-host>[:<your-port>]"
5. Create the labgrid user and group:
   # systemd-sysusers
6. Reload the systemd manager configuration:
   # systemctl daemon-reload
7. Start the coordinator, if applicable:
   # systemctl start labgrid-coordinator
8. After creating the exporter configuration file referenced in the ExecStart option of the labgrid-exporter.service file, start the exporter:
   # systemctl start labgrid-exporter
9. Optionally, to allow users to upload files to the exporter, add them to the labgrid group on the exporter machine:
   # usermod -a -G labgrid <user>
Using a Strategy
Strategies allow the labgrid library to automatically bring the board into a defined state, e.g. boot through the bootloader into the Linux kernel and log in to a shell. They have a few requirements:
a driver implementing the PowerProtocol; if no controllable infrastructure is available, a ManualPowerDriver can be used
a driver implementing the LinuxBootProtocol, usually a specific driver for the board's bootloader
a driver implementing the CommandProtocol, usually a ShellDriver with a SerialDriver below it
labgrid ships with two builtin strategies, BareboxStrategy and UBootStrategy. These can be used as reference examples for simple strategies; more complex tests usually require the implementation of your own strategy.
To use a strategy, add it and its dependencies to your configuration YAML, retrieve it in your test and call the transition(status) function.
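For example, an environment configuration using the BareboxStrategy could look roughly like this sketch; the prompt values and the serial port are assumptions for illustration, and the strategy binds to the power, bootloader and shell drivers declared alongside it:

```yaml
targets:
  main:
    resources:
      RawSerialPort:
        port: "/dev/ttyUSB0"
    drivers:
      ManualPowerDriver: {}
      SerialDriver: {}
      BareboxDriver:
        prompt: 'barebox@[^:]+:[^ ]+ '
      ShellDriver:
        prompt: 'root@\w+:[^ ]+ '
        login_prompt: ' login: '
        username: 'root'
      BareboxStrategy: {}
```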
See the section about the various shipped strategies for examples on this.
An example using the pytest plugin is provided under examples/strategy.