LabNet hardware control software for the Raspberry Pi

Curation statements for this article:
  • Curated by eLife


    Evaluation Summary:

    LabNet is a C++ package for low-level networked control of hardware on the Raspberry Pi with two main goals: time-critical operations and ease of extensibility, both topics of great interest to experimental neurobiologists. While the authors do present some interesting benchmarks supporting the real-time performance of LabNet, there are important confounding factors that should be addressed in the interpretation of the results. There is surprisingly little mention of how easy the platform is to extend, but with future improvements in documentation, more examples, and hardware support, LabNet is likely to become a very useful tool for experimentalists who need low-latency control for behavioral experiments over the network.

    (This preprint has been reviewed by eLife. We include the public reviews from the reviewers here; the authors also receive private feedback with suggested changes to the manuscript. Reviewer #1 and Reviewer #2 agreed to share their names with the authors.)

This article has been Reviewed by the following groups


Abstract

Single-board computers such as the Raspberry Pi make it easy to control hardware setups for laboratory experiments. GPIOs and expansion boards (HATs) give access to a whole range of sensor and control hardware. However, controlling such hardware can be challenging when many experimental setups run in parallel and the time component is critical. LabNet is an optimized C++ control-layer software that gives access to the hardware connected to a Raspberry Pi over a simple network protocol. LabNet was developed to be suitable for time-critical operations and to be simple to expand. It leverages the actor model to simplify multithreaded programming and to increase modularity. The message protocol is implemented in Protobuf and offers high performance, small message sizes, and support for a large number of programming languages on the client side. LabNet shows good performance compared to locally executed tools like Bpod, pyControl, or Autopilot and reaches sub-millisecond network communication latencies. It can monitor and react simultaneously to up to 14 pairs of digital inputs without increasing latencies. LabNet itself does not provide support for the design of experimental tasks; this is left to the client. LabNet can be used for general automation in experimental laboratories with its control PC located at some distance. LabNet is open source and under continuing development.

Article activity feed

  1. Author Response

    Reviewer #1 (Public Review):

    “The largest point of improvement that I expect will unfold over this project's development lifecycle will be its documentation.”

    This is indeed one of the biggest issues with the source code. In the next version, we plan to improve the source code documentation, add more examples, and also provide some short HOWTOs for the Raspberry Pi setup.

    Reviewer #2 (Public Review):

    The Design section then introduces the actor model, the C++ library SObjectizer used to implement it, and the binary message protocol used for transmission of data across nodes. As currently written, however, this section seems overly technical and hard to grasp for readers who might be interested in experimental neuroscience, but who lack the C++ expertise required to understand all of the mentioned functional constructs. Several concepts are mentioned only in passing and without introductory references for the non-expert reader. The level of detail also seems to distract from conveying a more meaningful understanding of the remaining trade-offs involved between network communication, latency, synchronization, and bandwidth.

    We wanted to briefly describe why the actor model was used and how it addresses the problems of multithreaded programming. We think most of the concepts should be understandable even without prior C++ knowledge, which is also why they are only described briefly. For a more in-depth look, the SObjectizer project, for example, provides detailed documentation.
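    To make the concept concrete for readers without a C++ background, the core of the actor model can be sketched in a few lines of plain standard C++. This is only an illustration of the idea, not the SObjectizer API that LabNet actually uses: each actor owns its state and a mailbox, and a single worker thread processes messages strictly in arrival order, so no explicit locking is needed inside the actor itself.

    ```cpp
    #include <cassert>
    #include <condition_variable>
    #include <functional>
    #include <mutex>
    #include <queue>
    #include <thread>

    // An actor owns its state and a mailbox; one worker thread processes
    // messages strictly in arrival order, so the state needs no locking.
    class Actor {
    public:
        Actor() : worker_([this] { run(); }) {}
        ~Actor() {
            send({});          // an empty message acts as the stop signal,
            worker_.join();    // queued behind any pending work
        }
        // The only way to interact with an actor: post a message.
        void send(std::function<void()> msg) {
            {
                std::lock_guard<std::mutex> lock(mtx_);
                mailbox_.push(std::move(msg));
            }
            cv_.notify_one();
        }

    private:
        void run() {
            for (;;) {
                std::function<void()> msg;
                {
                    std::unique_lock<std::mutex> lock(mtx_);
                    cv_.wait(lock, [this] { return !mailbox_.empty(); });
                    msg = std::move(mailbox_.front());
                    mailbox_.pop();
                }
                if (!msg) return;  // stop signal received
                msg();             // handle the message
            }
        }
        std::queue<std::function<void()>> mailbox_;
        std::mutex mtx_;
        std::condition_variable cv_;
        std::thread worker_;       // declared last: starts after the rest
    };

    int pin_state = 0;             // state conceptually owned by the actor

    int main() {
        {
            Actor gpio;            // stands in for, e.g., a GPIO actor
            gpio.send([] { pin_state = 1; });  // "set pin high" message
            gpio.send([] { pin_state = 0; });  // "set pin low" message
        }                          // destructor processes both, then joins
        assert(pin_state == 0);    // messages were handled in order
        return 0;
    }
    ```

    SObjectizer provides this pattern in production-ready form (plus dispatchers, timers, and typed message boxes); the sketch only conveys why code inside an actor needs no explicit locks.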

    The essence of the actor-model could probably be captured more succinctly, and more time spent discussing some of these critical decisions underlying LabNet's design principles. For example, although each Raspberry Pi device runs a LabNet server, the current implementation allows only one client connection per node. This might be surprising for some readers as it excludes a large number of possible network topologies, and the reason presented for the design decision as currently detailed is hard to understand without further clarification.

    We have removed some unnecessary details about the actor model. At the beginning of the Design section we now describe in more depth why LabNet was designed as a distributed system and why this results in only one connection per node. The low cost of the hardware also made us prefer simplicity over more complex network topologies.

    The main method for evaluating the performance of LabNet is a series of performance tests on the Raspberry Pi comparing clients written in C++, C# and Python, followed by a series of benchmarks comparing LabNet against other established hardware control platforms. While these are undoubtedly useful, especially the latter, the use of benchmarking methods as described in the paper should be carefully revisited, as there are a number of possible confounding factors.

    For example, in the performance tests comparing clients written in C++, C# and Python, the Python implementation is running synchronously and directly on top of the low-level interface with system sockets, while the C++ and C# versions use complex, concurrent frameworks designed for resilience and scalability. This difference alone could easily skew the Python results in the simplistic benchmarks presented in the paper, which can leave the reader skeptical about all the comparisons with Python in Figure 3. Similarly, the complex nature of available frameworks also raises questions about the comparison between C# and C++. I don't think it is fair to say that Figure 3 is really comparing languages, as much as specific frameworks. In general, comparing the performance of languages themselves for any task, especially compiled languages, is a very difficult topic that I would generally avoid, especially when targeting a more general, non-technical audience.

    This is true; comparisons between different languages are always difficult. This is now explicitly addressed in the text. However, since the implementations in C++, C#, and Python were so close in all tests, this is more a demonstration than a comparison: the language and framework on the client side are not really important, at least for the simple cases considered here.

    The second set of benchmarks comparing LabNet to other established hardware control platforms is much more interesting, but it doesn't currently seem to allow a fair assessment of the different systems. Specifically, from the authors' description of the benchmarking procedure, it doesn't seem like the same task was used to generate the different latency numbers presented, and the values seem to have been mostly extracted from each of the platform's published results. This unfortunately reduces the value of the benchmarks in the sense that it is unclear what conditions are really being compared. For example, while the numbers for pyControl and Bpod seem to be reporting the activation of simple digital input and output lines, the latency presented for Autopilot uses as reference the start of a sound waveform on a stereo headphone jack. Audio DSP requires specialized hardware in the Pi which is likely to intrinsically introduce higher latency versus simply toggling a digital line, so it is not clear whether these scenarios are really comparable. Similarly, the numbers for Whisker and Bpod being presented without any variance make it hard to interpret the results.

    We also saw this as a problem. Therefore, all tests were redesigned and repeated. All platforms were now subjected to the same test (with the exception of Whisker, for which we did not have suitable hardware available). In this way we now have comparable measurements for all platforms.

    One of the stated aims of LabNet was to provide a system where implementing new functionality extensions would be as simple as possible. This is another aspect of experimental neuroscience that is under active discussion and where more contributions are very much needed. Surprisingly, this topic receives very little attention in the paper itself. It is not clear whether the actor model is by itself supposed to make the implementation of new functionality easier, but if this is the case, this is not obvious from the way the design and evaluation sections are currently written, especially given the choice of language being C++.

    One of the reasons behind the choice of Python for other hardware platforms such as pyControl and Autopilot is the growing familiarity and prevalence of Python within the neuroscience research community, which might assist researchers in implementing new functionality. Other open-hardware projects in neuroscience allowing for community extensions in C++ such as Open-Ephys have informally expressed the difficulty of the C++ language as a point of friction. I feel that the aim of "ease of extensibility" should merit much more discussion in any future revision of the paper.

    Indeed, the only passing mention of user extensibility is in the conclusion, where it is stated that it is not currently possible to modify LabNet without directly modifying and recompiling the entire code base. A software plug-in system is suggested, and indeed this would be extremely beneficial in achieving the second stated aim.

    By simplicity of implementing new functionality, we rather meant how easily the LabNet source code can be adapted to new requirements, which is made possible by the modularization and the use of the actor model. This is now explained more explicitly in the text. And yes, a plug-in system is on our roadmap but is not yet part of LabNet.
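    As a purely illustrative sketch (the names and structure here are invented for this example, not LabNet's actual interfaces), the modular pattern can look like this: each hardware interface implements one handler class and registers itself with a dispatcher, so supporting a new device means adding one self-contained module rather than touching the rest of the code base.

    ```cpp
    #include <cassert>
    #include <map>
    #include <string>

    // A command arriving from the client, e.g. over the network layer.
    struct Message {
        std::string type;   // e.g. "gpio.set", "uid.read" (illustrative)
        int payload;
    };

    // Every hardware interface is one module with a uniform entry point.
    class InterfaceModule {
    public:
        virtual ~InterfaceModule() = default;
        virtual void handle(const Message& msg) = 0;
    };

    // Adding support for a new device = writing one new module like this.
    class GpioModule : public InterfaceModule {
    public:
        void handle(const Message& msg) override {
            if (msg.type == "gpio.set") last_level = msg.payload;
        }
        int last_level = -1;
    };

    // The dispatcher routes messages by prefix; it never needs to change
    // when a new module is added.
    class Dispatcher {
    public:
        void register_module(const std::string& prefix, InterfaceModule* m) {
            modules_[prefix] = m;
        }
        void dispatch(const Message& msg) {
            auto dot = msg.type.find('.');
            auto it = modules_.find(msg.type.substr(0, dot));
            if (it != modules_.end()) it->second->handle(msg);
        }
    private:
        std::map<std::string, InterfaceModule*> modules_;
    };

    int main() {
        GpioModule gpio;
        Dispatcher server;
        server.register_module("gpio", &gpio);   // plug in the new module
        server.dispatch({"gpio.set", 1});        // incoming client command
        assert(gpio.last_level == 1);
        return 0;
    }
    ```

    In LabNet itself each such module is an actor with its own mailbox, which is what keeps the modules independent of one another.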

    Finally, a set of example experimental applications would have been extremely useful to ground the design of LabNet in practical terms, in addition to the example listings. Even in diagrammatic form, describing how specific experiments have been powered by LabNet would give readers a better sense of the kind of designs that might be currently more appropriate for this platform. For example, video is being increasingly used in behavioral experiments, and Raspberry Pi drivers are available for several camera models, but this important aspect is not mentioned at all in the discussion, so readers interested in video would not know from reading this paper whether LabNet would be appropriate for their goals.

    The section "Example" shows how a simple experiment can be realized with LabNet. Listings 1-3 also serve this purpose.

    LabNet does not support video acquisition in the current version, even though video transmission would be quite easy to implement. We simply have not needed it in our experiments so far.

    As the manuscript currently stands, I don't feel the authors have achieved their second stated aim, and I am unfortunately not fully convinced that the experimental results are adequate to demonstrate the achievement of the first aim. I fully agree, however, that a robust, high-performance and flexible hardware layer for combining neuroscience instruments is desperately needed, and so I do expect that a more thorough treatment of the methods developed in LabNet will in the future have a very positive impact on the field.

    Latency measurements are indeed very important, also because they allow a comparison with other tools. With the redesigned tests and our own implementation for each tool, comparability is now given. Of course, LabNet cannot beat Bpod: after all, Bpod runs on a microcontroller, while LabNet has to send all commands via Ethernet. But the results are still very good. The stress test also demonstrates the scalability of LabNet. Above all, LabNet offers the possibility to control many systems at the same time, which other tools cannot do, or can do only in a complicated way.

  2. Evaluation Summary:

    LabNet is a C++ package for low-level networked control of hardware on the Raspberry Pi with two main goals: time-critical operations and ease of extensibility, both topics of great interest to experimental neurobiologists. While the authors do present some interesting benchmarks supporting the real-time performance of LabNet, there are important confounding factors that should be addressed in the interpretation of the results. There is surprisingly little mention of how easy the platform is to extend, but with future improvements in documentation, more examples, and hardware support, LabNet is likely to become a very useful tool for experimentalists who need low-latency control for behavioral experiments over the network.

    (This preprint has been reviewed by eLife. We include the public reviews from the reviewers here; the authors also receive private feedback with suggested changes to the manuscript. Reviewer #1 and Reviewer #2 agreed to share their names with the authors.)

  3. Reviewer #1 (Public Review):

    Alexej Schatz and York Winter wrote "LabNet," a C++ tool to control Raspberry Pi (raspi) GPIO (General Purpose Input-Output) and other hardware using a network messaging protocol based on protobuf. The authors were primarily concerned with performance, specifically low execution latencies, as well as extensibility to a variety of hardware. LabNet's network architecture is asymmetrical and treats one or many raspis as servers that can receive control signals from one or more clients. Servers operate as (approximately) stateless "agents" that execute instructions received in message boxes using a single thread or pool of threads. The authors describe several examples of basic functionality like time to write and read GPIO state to characterize the performance of the system, the code for which is available in a linked GitHub repository.

    The described performance of LabNet is impressive, with near- or sub-millisecond latency across the several tests when conducted over a LAN TCP/IP connection. The demonstrated ability to interact with the server from three programming languages (C++, C#, and Python) also would be quite useful for a tool that intends to be as general-purpose as this one. The design decisions that led to the use of protobuf and SObjectizer seem sound and supportive of the primary performance goal.

    As far as I'm concerned, the authors accomplished their goals and give a convincing demonstration in their performance tests.

    The authors compare LabNet to:

    - Whisker ( https://web.archive.org/web/20200222133946/http://egret.psychol.cam.ac.uk/whisker/index.shtml ), an aging proprietary experimental package typically sold along with purpose-built hardware;
    - pyControl and Bpod, both of which are open-source software frameworks for performing behavioral experiments using a specific combination of microcontrollers and an ecosystem of extension parts;
    - Autopilot, a software framework for performing behavioral experiments on the raspberry pi as well as modular development of hardware controllers and other common experimental components.

    Each of these packages has a different enough scope and accompanying differences in design priorities that I think are worth noting to give context to the niche LabNet fills. For example, pyControl and Autopilot emphasize ease of use, pyControl and Bpod are built around state machines for controlling experiments, etc. All have some facility for designing and performing experiments themselves and are thus a bit "higher level" than LabNet, which is intended more as a GPIO and hardware control system specifically. I think LabNet is more aptly compared to something like pigpio (https://web.archive.org/web/20220130033233/https://abyz.me.uk/rpi/pigpio/), which is also a low-level GPIO control library with network control capabilities. In that respect LabNet fills at least two needs that aren't well-served by existing tools: first, it provides a means to extend the server with additional commands that can be exposed to multiple programming languages. Second, it lets users control additional hardware and implement custom logic aside from simple on/off commands (for example, the ability to output sound) - this would be particularly useful as a way of controlling HATs and other devices. LabNet's agent-based concurrency architecture also seems like it will allow the number of simultaneously controlled devices to scale well. LabNet's network-first design positions it well for behavioral experiments that are often better served by a swarm of networked computers rather than a single controlling computer.

    The largest point of improvement that I expect will unfold over this project's development lifecycle will be its documentation. LabNet has no documentation to speak of, outside a brief description of the build process for a relatively voluminous body of code (~27k lines) with relatively few comments. There is no established norm as to what stage in a scientific software package's development a paper should be written, so I take the lack of documentation at this stage as just a sign that this project is young. The primary barrier for the broader landscape of scientific software is less the availability of technically proficient packages than the ease with which they can be adopted and used by people outside the development team. The ability of downstream researchers to use and extend the library to suit their needs will depend on future documentation. For example, at the moment the Python adapter to the client and server is present in the examples folder but relatively un-annotated, so it might be challenging to adapt to differing needs at the moment (https://github.com/WinterLab-Berlin/LabNet/blob/34e71c6827d2feef9b65d037ee5f2e8ca227db39/examples/python/perf_test/LabNetClient_pb2.py and https://github.com/WinterLab-Berlin/LabNet/blob/34e71c6827d2feef9b65d037ee5f2e8ca227db39/examples/python/perf_test/LabNetServer_pb2.py ). Documentation for projects like this that aim to serve as the basis from which to build experimental infrastructure can be quite challenging, as it often needs to spread beyond the package itself to more general concerns like how to use Raspberry Pis, how to set them up to be available over a network, and so on, so I look forward to seeing the authors meet that challenge.

    I would like to thank the authors for their work and thank them for bringing us a fast way to control experimental hardware over the network.

  4. Reviewer #2 (Public Review):

    The manuscript introduces LabNet as a network-based platform for the control of hardware in Neuroscience. The authors recognize and attempt to address two fundamental problems in constructing systems neuroscience experiments: on one hand the importance of precise timing measurements of behavior; on the other hand, the need for flexibility in experimental design. These two goals are often at great odds with each other. Precise timing is more easily achieved when using fewer, dedicated homogeneous devices such as embedded microcontrollers. Flexibility can be found in the diversity of devices and programming languages available for commercial personal computers, but this often comes at the cost of a non-real-time operating system, where timing can be much harder to predict accurately. There is also a limitation on the number of devices which can be simultaneously controlled by a single processor, which can be an impediment for high-throughput behavior studies where the ability to run dozens of experiments in parallel is desirable.

    LabNet proposes to address this tension by focusing on the design of a pure hardware control and instrumentation layer implemented on top of the Raspberry Pi family of single-board computers. The idea is to keep coordination of experimental hardware in a central computer, but keep time-critical components at the edge, each node running the same control software in a Raspberry Pi to provide precise timing guarantees. Flexibility would be provided by the ability to connect an arbitrary number of nodes to the central computer using a unified message-passing protocol by which the computer can receive events and send commands to each node.

    The authors propose the use of the C++ programming language and the actor-model as a unifying framework for implementing individual nodes and present a series of benchmarks comparing their system against other established hardware control platforms.

    The idea of keeping time-critical components at the edge, and of using network communication protocols, in particular message-passing systems such as the actor model, to scale up experimental control, is reasonable. These principles have undoubtedly been very successful in enabling the creation of massively distributed systems such as web applications connecting millions of devices to each other every second.

    The Design section then introduces the actor model, the C++ library SObjectizer used to implement it, and the binary message protocol used for transmission of data across nodes. As currently written, however, this section seems overly technical and hard to grasp for readers who might be interested in experimental neuroscience, but who lack the C++ expertise required to understand all of the mentioned functional constructs. Several concepts are mentioned only in passing and without introductory references for the non-expert reader. The level of detail also seems to distract from conveying a more meaningful understanding of the remaining trade-offs involved between network communication, latency, synchronization, and bandwidth.

    The essence of the actor-model could probably be captured more succinctly, and more time spent discussing some of these critical decisions underlying LabNet's design principles. For example, although each Raspberry Pi device runs a LabNet server, the current implementation allows only one client connection per node. This might be surprising for some readers as it excludes a large number of possible network topologies, and the reason presented for the design decision as currently detailed is hard to understand without further clarification.

    The main method for evaluating the performance of LabNet is a series of performance tests on the Raspberry Pi comparing clients written in C++, C# and Python, followed by a series of benchmarks comparing LabNet against other established hardware control platforms. While these are undoubtedly useful, especially the latter, the use of benchmarking methods as described in the paper should be carefully revisited, as there are a number of possible confounding factors.

    For example, in the performance tests comparing clients written in C++, C# and Python, the Python implementation is running synchronously and directly on top of the low-level interface with system sockets, while the C++ and C# versions use complex, concurrent frameworks designed for resilience and scalability. This difference alone could easily skew the Python results in the simplistic benchmarks presented in the paper, which can leave the reader skeptical about all the comparisons with Python in Figure 3. Similarly, the complex nature of available frameworks also raises questions about the comparison between C# and C++. I don't think it is fair to say that Figure 3 is really comparing languages, as much as specific frameworks. In general, comparing the performance of languages themselves for any task, especially compiled languages, is a very difficult topic that I would generally avoid, especially when targeting a more general, non-technical audience.

    The second set of benchmarks comparing LabNet to other established hardware control platforms is much more interesting, but it doesn't currently seem to allow a fair assessment of the different systems. Specifically, from the authors' description of the benchmarking procedure, it doesn't seem like the same task was used to generate the different latency numbers presented, and the values seem to have been mostly extracted from each of the platform's published results. This unfortunately reduces the value of the benchmarks in the sense that it is unclear what conditions are really being compared. For example, while the numbers for pyControl and Bpod seem to be reporting the activation of simple digital input and output lines, the latency presented for Autopilot uses as reference the start of a sound waveform on a stereo headphone jack. Audio DSP requires specialized hardware in the Pi which is likely to intrinsically introduce higher latency versus simply toggling a digital line, so it is not clear whether these scenarios are really comparable. Similarly, the numbers for Whisker and Bpod being presented without any variance make it hard to interpret the results.

    One of the stated aims of LabNet was to provide a system where implementing new functionality extensions would be as simple as possible. This is another aspect of experimental neuroscience that is under active discussion and where more contributions are very much needed. Surprisingly, this topic receives very little attention in the paper itself. It is not clear whether the actor model is by itself supposed to make the implementation of new functionality easier, but if this is the case, this is not obvious from the way the design and evaluation sections are currently written, especially given the choice of language being C++.

    One of the reasons behind the choice of Python for other hardware platforms such as pyControl and Autopilot is the growing familiarity and prevalence of Python within the neuroscience research community, which might assist researchers in implementing new functionality. Other open-hardware projects in neuroscience allowing for community extensions in C++ such as Open-Ephys have informally expressed the difficulty of the C++ language as a point of friction. I feel that the aim of "ease of extensibility" should merit much more discussion in any future revision of the paper.

    Indeed, the only passing mention of user extensibility is in the conclusion, where it is stated that it is not currently possible to modify LabNet without directly modifying and recompiling the entire code base. A software plug-in system is suggested, and indeed this would be extremely beneficial in achieving the second stated aim.

    Finally, a set of example experimental applications would have been extremely useful to ground the design of LabNet in practical terms, in addition to the example listings. Even in diagrammatic form, describing how specific experiments have been powered by LabNet would give readers a better sense of the kind of designs that might be currently more appropriate for this platform. For example, video is being increasingly used in behavioral experiments, and Raspberry Pi drivers are available for several camera models, but this important aspect is not mentioned at all in the discussion, so readers interested in video would not know from reading this paper whether LabNet would be appropriate for their goals.

    As the manuscript currently stands, I don't feel the authors have achieved their second stated aim, and I am unfortunately not fully convinced that the experimental results are adequate to demonstrate the achievement of the first aim. I fully agree, however, that a robust, high-performance and flexible hardware layer for combining neuroscience instruments is desperately needed, and so I do expect that a more thorough treatment of the methods developed in LabNet will in the future have a very positive impact on the field.