In the afternoon of the May 2019 xAPI Party, we tackled sending xAPI statements from a robot. We already had the robot and a SCORMCloud LRS we’d used for other xAPI Party sources, so our challenge for the afternoon was to develop an xAPI robot driving range over three of the conference’s time slots (a total of just 2 ½ hours):

  1. Get to know the robot and strategize what performance we want to measure and what statements we want to send.
  2. Formalize the xAPI statements and start coding.
  3. Finish coding and test.

Step 1: Get to Know the Robot

Photo of NodeBot

The uncreatively named NodeBot started life as an Elegoo Smart Robot Car Kit. For under $60, this impressive kit packs an Arduino board as the brain, four motors and four wheels, a rechargeable battery pack, an ultrasonic distance sensor, three light sensors for line following, and custom boards to link them all together, along with precision-cut chassis pieces. The build quality is very good and the value for the price is outstanding; I ordered it without realizing everything I was getting or how nicely made the parts would be.

The month before the xAPI Party, I spoke to the Rochester (Michigan) Full Stack Meetup about JavaScript and the Internet of Things (IoT), and I bought this robot as a platform to develop and demonstrate. I continued to tinker with the robot after the Meetup, so by the time it got to the MakerSpace, it had an on-board Raspberry Pi with an additional power supply. The Raspberry Pi, a credit-card sized computer, was set up to run a web server so the robot could be controlled over the Internet. (If you’re keeping track, these additions brought the total to right around $100.)

The web server asked each user to identify themselves by email address and (optionally) name before controlling the robot; the user could then choose between two different sets of driving controls, both still very much in the Minimum Viable Product stage.
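
The control server itself isn’t the focus of this write-up, but for a feel of that piece, here’s a minimal sketch of the identification step, assuming an Express app running on the Raspberry Pi; the route names and the single-driver variable are illustrative, not the actual NodeBot code.

```javascript
// Minimal sketch (not the actual NodeBot code) of the identification step:
// the driver must supply an email, may supply a name, and is then sent on
// to the control pages. Route names and storage are assumptions.
const express = require("express");
const app = express();

app.use(express.urlencoded({ extended: false })); // parse the login form post
app.use(express.static("public"));                // serves the control pages

let currentDriver = null; // one driver at a time for a single robot

app.post("/login", (req, res) => {
  const { email, name } = req.body;
  if (!email) {
    return res.status(400).send("An email address is required to drive the robot.");
  }
  currentDriver = { email, name: name || undefined };
  res.redirect("/controls"); // the driver then picks Dual Stick or Single Stick
});

app.listen(3000, () => console.log("NodeBot control server listening on port 3000"));
```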

Screenshot of two control options (Dual Stick and Single Stick)

As attendees tried out the robot, they found the controls challenging; the single-stick mode was easier to control but the robot could get away from them very quickly. As a result, we homed in on a fairly simple “driver’s test” for the MakerSpace: the user would log in, then have to drive forward along a curving path to a vertical target (which happened to be a case of Dr. Pepper). This took advantage of the sensors on the robot—the line sensors could detect when the user left the path, and the ultrasonic distance sensor could detect the vertical target—and at the same time didn’t require a high degree of control from someone using the robot for the first time.
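
To make that concrete, here’s a rough sketch of the test logic; the stub functions stand in for the real hardware reads (the actual robot gets these values from the Arduino), and the function names, distance threshold, and polling loop are all assumptions for illustration.

```javascript
// Rough sketch of the driving-test checks; readLineSensors() and
// readDistanceCm() are stubs standing in for the real hardware reads.
const TARGET_DISTANCE_CM = 10;  // "close enough" to the vertical target
const startTime = Date.now();

function readLineSensors() {    // stub: which of the three sensors see the tape
  return { left: false, middle: false, right: false };
}

function readDistanceCm() {     // stub: ultrasonic distance to whatever is ahead
  return 100;
}

setInterval(() => {
  const sensors = readLineSensors();
  for (const side of ["left", "middle", "right"]) {
    if (sensors[side]) {
      console.log(`Veered off the path: ${side} sensor triggered`);
      // this is where an xAPI statement gets sent (see Step 2)
    }
  }

  if (readDistanceCm() <= TARGET_DISTANCE_CM) {
    const elapsedSeconds = (Date.now() - startTime) / 1000;
    console.log(`Reached the target in ${elapsedSeconds} seconds`);
  }
}, 100); // poll roughly ten times per second
```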

As we wrapped up our first phase, we had established a reasonable performance test of an individual’s robot car driving skills:

  • Ultimate goal: drive the robot around the curve and to the Dr. Pepper case
  • How long does it take to reach the Dr. Pepper case?
  • How many times does the car veer off the path?

Step 2: Formulate xAPI Statements

With that, we were ready to formulate our xAPI statements. There’s definitely room to improve and expand on these, but we wanted just a few statements we could be sending by the end of our two and a half hours. We defined Actor, Verbs, Activities, and relevant Context.

The Actor for the statements was ready to go: to drive the NodeBot, you had to provide an email address, so we would send that (along with the optional name) as the actor.
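
As a small sketch (using the tincanjs package we’d pick up in Step 3), the login data maps to an Agent roughly like this; the helper name and the shape of the driver object are assumptions.

```javascript
// Sketch: turn the login data into an xAPI Agent. The mailto: prefix is
// required for mbox values; the name is included only if the driver gave one.
const TinCan = require("tincanjs");

function buildActor(driver) {
  return new TinCan.Agent({
    mbox: "mailto:" + driver.email,
    name: driver.name || undefined
  });
}
```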

For verbs, we decided upon:

  • Launched when the user had logged in and was ready to drive the robot.
  • Terminated when the user had logged out. Currently this is sent whether or not the driver has completed the task; a future improvement would be to not send it if there’s a Completed statement.
  • Completed when the user reached the end goal, in this case driving up to the Dr. Pepper case. We discovered that some human refereeing is necessary, as the user could drive up to a wall or just stick their foot in front of the robot to trigger it as well.
  • Triggered when the robot’s light sensors pick up the black tape on either side of the driving path. We opted to define a new verb for this as none of the verbs in the Registry really fit our use case, and this would be a widely usable verb for other IoT applications. We set the verb’s ID as https://torrancelearning.com/xapi/verbs/triggered for the time being. Future work with the IoT profile may “promote” this to broader use, but for this purpose we are managing our in-house definition of “triggered” here.
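
For illustration, here’s a sketch of the four verbs as tincanjs objects. The Triggered ID is the one defined above; the standard ADL IDs shown for Launched, Terminated, and Completed are an assumption about the exact IDs we sent.

```javascript
// Sketch of the four verbs as tincanjs Verb objects. The ADL IDs for the
// first three are assumed (standard vocabulary); Triggered uses our own ID.
const TinCan = require("tincanjs");

const VERBS = {
  launched: new TinCan.Verb({
    id: "http://adlnet.gov/expapi/verbs/launched",
    display: { "en-US": "launched" }
  }),
  terminated: new TinCan.Verb({
    id: "http://adlnet.gov/expapi/verbs/terminated",
    display: { "en-US": "terminated" }
  }),
  completed: new TinCan.Verb({
    id: "http://adlnet.gov/expapi/verbs/completed",
    display: { "en-US": "completed" }
  }),
  triggered: new TinCan.Verb({
    id: "https://torrancelearning.com/xapi/verbs/triggered",
    display: { "en-US": "triggered" }
  })
};
```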

For their objects, Launched, Completed, and Terminated used the driving course itself. For Triggered, we didn’t necessarily know which side of the path the robot had left: triggering a sensor could mean it had turned around on the path and driven off the opposite side, or that it had looped around and crossed back onto the path after leaving earlier. We did, however, know which sensor was triggered, so we made the objects the three line sensors on the robot: left, middle, and right.

Following Andrew Downes’ excellent Data Governance advice from the xAPI Cohort and his blog posts (particularly https://www.watershedlrs.com/blog/products/xapi-governance-rules-processes), we formulated Activity IDs for these objects:

  • Driving course: https://torrancelearning.com/xapi/activities/xapi-party/robot/course1
  • Sensors:
    https://torrancelearning.com/xapi/activities/xapi-party/robot/sensor/left
    https://torrancelearning.com/xapi/activities/xapi-party/robot/sensor/middle
    https://torrancelearning.com/xapi/activities/xapi-party/robot/sensor/right
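
In code, those Activity IDs might be set up like the sketch below; the display names and descriptions are illustrative, not the exact wording we sent.

```javascript
// Sketch of the course and sensor Activities. Passed as a statement's
// target, plain objects like these get wrapped by tincanjs; the names
// and descriptions here are illustrative.
const BASE = "https://torrancelearning.com/xapi/activities/xapi-party/robot";

const courseActivity = {
  id: BASE + "/course1",
  definition: {
    name: { "en-US": "NodeBot driving course 1" },
    description: { "en-US": "Curved taped path ending at a vertical target" }
  }
};

const sensorActivities = {};
for (const side of ["left", "middle", "right"]) {
  sensorActivities[side] = {
    id: BASE + "/sensor/" + side,
    definition: { name: { "en-US": side + " line sensor" } }
  };
}
```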

For this particular performance goal and our limited set of statements, two other pieces of information were highly relevant:

  • The time elapsed when a sensor was triggered or the goal was reached. We didn’t have a way of measuring distance, but elapsed time gave us some context; we captured it in the context extension http://id.tincanapi.com/extension/time. With more time, we might have included the robot’s motor speeds at that moment as well.
  • The number of times the user had triggered a line sensor before completing the course; we created the context extension http://torrancelearning.com/xapi/extensions/xapi-party/robot/curb-checks for this purpose. More curb checks indicated that the driver had veered off the course more often or to a greater degree (for instance, driving completely off would trigger each of the three sensors while drifting slightly over the line would trigger only one).

At this point in the project, we had the data elements and structure established to measure our performance goal of reaching the Dr. Pepper case quickly, with as few instances of veering off path as possible.
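
Putting those pieces together, a Completed statement might look like the sketch below; the email, elapsed time, and curb-check count are illustrative values, and the Completed verb ID is assumed to be the standard ADL one.

```javascript
// Sketch of a Completed statement carrying the two context extensions.
// Values for the actor, time, and curb-checks are illustrative.
const TinCan = require("tincanjs");

const completedStatement = new TinCan.Statement({
  actor: { mbox: "mailto:driver@example.com", name: "Example Driver" },
  verb: {
    id: "http://adlnet.gov/expapi/verbs/completed",
    display: { "en-US": "completed" }
  },
  target: {
    id: "https://torrancelearning.com/xapi/activities/xapi-party/robot/course1",
    definition: { name: { "en-US": "NodeBot driving course 1" } }
  },
  context: {
    extensions: {
      // elapsed time since login (shown here in seconds, an assumption)
      "http://id.tincanapi.com/extension/time": 42.7,
      // how many times a line sensor fired before reaching the target
      "http://torrancelearning.com/xapi/extensions/xapi-party/robot/curb-checks": 3
    }
  }
});
```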

Step 3: Coding and Testing

The coding and testing portions went fairly smoothly. We used the TinCanJS Node package, since the robot was already running Node, and integrated it quickly. Once basic statements were being sent, we dressed the activities up with a bit more description. We had a complete run from Launched through Completed right before the end of the final slot, and as the xAPI Party wrapped up, a couple of people came into the MakerSpace, tested the robot, and completed the task.
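
For anyone wanting to reproduce this, here’s a sketch of the LRS wiring with the tincanjs package; the endpoint and credentials are placeholders for our SCORM Cloud LRS configuration, and the helper name is an assumption.

```javascript
// Sketch of the LRS connection and a small send helper. The endpoint,
// key, and secret are placeholders for the real LRS credentials.
const TinCan = require("tincanjs");

let lrs;
try {
  lrs = new TinCan.LRS({
    endpoint: "<your LRS endpoint>",
    username: "<activity provider key>",
    password: "<activity provider secret>",
    allowFail: false
  });
} catch (ex) {
  console.error("Failed to set up the LRS connection:", ex);
}

// Called from the robot's event handlers: login (Launched), line sensor
// hits (Triggered), reaching the target (Completed), logout (Terminated).
function sendStatement(statement) {
  lrs.saveStatement(statement, {
    callback: (err, xhr) => {
      if (err !== null) {
        console.error("Failed to save statement:", err, xhr && xhr.responseText);
        return;
      }
      console.log("Statement saved");
    }
  });
}
```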

Here’s the entire start-to-finish run for one user, MB, shown in reverse chronological order, the way most Learning Record Stores (LRSs) display the data:

xAPI Statements in LRS, annotated

Future Tasks

There are many possible directions for this project, both xAPI and not:

  • The control schemes could be refined; the robot’s movement could be slowed down and smoothed out in general. Also, for the MakerSpace we controlled it on an iPad, so instead of using controls on the web page we could have steered by angling the iPad (which might or might not prove easier to control).
  • We could send speed information whenever a sensor is triggered.
  • The timer starts when the user logs in, but there’s usually some orientation after that. We could add a Start Driving button when the user really wants to begin the driving test, which would start the timer and begin sending statements.
  • The experience should terminate when the user completes the task; several times in the MakerSpace we missed the fact that it completed and the user kept driving.
  • The robot could have different modes of driving, including one where it stopped when it left the path so you’d have to start over; this mode could send clear Passed/Failed statements.
  • Visualizations could show a user’s performance with the sensors triggered along a timeline.

Thanks go out to Megan Boczar, Dean Castille, Liz Dickson, Matt Kliewer, Marijn Meijer, Jill Mohler, Chris Raasch, Matt Robinson (he bought the Dr. Pepper!), Megan Torrance, Linda Yesh-McMaster, and anyone else who participated in the MakerSpace in any way!