Towards on-site collaborative robotics: Voice control, co-speech gesture and context-specific object recognition via ad-hoc communication

Thibault Schwartz, Sebastian Andraos, Jonathan Nelson, Christopher Knapp, Bertrand Arnold

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Research › peer-review

Abstract

This work presents a novel set of accessible, unified hardware and software solutions facilitating the implementation of natural human-machine interactions, as required by collaborative robotics in both indoor and outdoor environments. This extensible framework supports vocal control, co-speech gestures, and object recognition with feature tracking and adaptive resolution. The interactions are based on a new network messaging protocol allowing any device using TCP/IP to share variables with full abstraction of the original machine software platform, and can therefore be used synchronously by a vast array of equipment including CNC machines, industrial robots, construction equipment, mobile devices and PLCs. We conclude with the description of a testing scenario to be deployed during the conference workshop.
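The abstract's core idea of platform-agnostic variable sharing over TCP/IP can be illustrated with a minimal sketch. Everything below is an illustrative assumption (newline-delimited JSON messages, a "set"/"get" operation field, port 9000, the VariableHandler and set_variable/get_variable names), not the protocol actually described in the paper:

# Minimal, hypothetical sketch of variable sharing between TCP/IP devices.
# This is NOT the paper's protocol; the message layout (newline-delimited
# JSON), the operation names and the port number are assumptions.

import json
import socket
import socketserver
import threading

SHARED = {}                 # latest value of every published variable
LOCK = threading.Lock()

class VariableHandler(socketserver.StreamRequestHandler):
    """Stores 'set' messages and answers 'get' requests."""
    def handle(self):
        for raw in self.rfile:                      # one JSON message per line
            msg = json.loads(raw)
            if msg.get("op") == "set":
                with LOCK:
                    SHARED[msg["name"]] = msg["value"]
            elif msg.get("op") == "get":
                with LOCK:
                    value = SHARED.get(msg["name"])
                reply = json.dumps({"name": msg["name"], "value": value}) + "\n"
                self.wfile.write(reply.encode())

def set_variable(host, port, name, value):
    """Publish a variable from any TCP/IP-capable device (robot, PLC, phone...)."""
    with socket.create_connection((host, port)) as s:
        s.sendall((json.dumps({"op": "set", "name": name, "value": value}) + "\n").encode())

def get_variable(host, port, name):
    """Read the latest published value of a variable."""
    with socket.create_connection((host, port)) as s:
        s.sendall((json.dumps({"op": "get", "name": name}) + "\n").encode())
        return json.loads(s.makefile().readline())["value"]

if __name__ == "__main__":
    server = socketserver.ThreadingTCPServer(("localhost", 9000), VariableHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    set_variable("localhost", 9000, "gripper_open", True)   # e.g. produced by a voice command
    print(get_variable("localhost", 9000, "gripper_open"))  # e.g. read by a robot controller
    server.shutdown()

In this sketch the sender never needs to know what software runs on the receiving machine; any equipment that can open a TCP socket and emit a line of JSON can share state, which is the property the abstract attributes to the proposed messaging protocol.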
Original language: English
Title of host publication: Robotic Fabrication in Architecture, Art and Design 2016
Editors: D. Reinhardt, R. Saunders, J. Burry
Place of Publication: Switzerland
Publisher: Springer
Pages: 388-397
Number of pages: 11
Volume: iv
ISBN (Electronic): 978-3-319-26378-6
ISBN (Print): 978-3-319-26376-2
DOIs
Publication status: Published - 2016
Event: International Conference of the Association for Robotics in Architecture: Robotic Fabrication in Architecture, Art and Design - Pier 2/3 Walsh Bay, Sydney, Australia
Duration: 18 Mar 2016 - 19 Mar 2016
http://www.robarch2016.org/conference/

Conference

Conference: International Conference of the Association for Robotics in Architecture
Abbreviated title: ROB ARCH
Country/Territory: Australia
City: Sydney
Period: 18/03/16 - 19/03/16
Internet address: http://www.robarch2016.org/conference/
