
Smart objects are becoming commonplace, with appliances, toys, and housewares increasingly internet-connected and interactive. However, developing these objects requires programming knowledge, 3D modelling skills, and the ability to design and construct circuitry. Most of the functionality in these smart objects is already available in devices many of us have lying around: old smartphones and smartwatches. Could they be used to rapidly develop smart objects?
The user interface research group at Autodesk Research worked with researchers from the University of Calgary to develop a system that enables users to create an interactive smart object from a smartphone or smartwatch. The system, called Pineal, allows novices to import existing 3D models, either created themselves or downloaded from the web, and program interactive behaviours. The system then modifies the 3D model to fit a phone or a watch inside; all the user needs to do is 3D print the resulting model and insert the phone or watch. This approach has the potential to let users build their own smart objects on demand, and could change the way interactive objects are designed and delivered to customers. It could also reduce unnecessary waste: a single watch or phone can be reused across a number of smart objects, rather than each object containing its own circuitry and batteries, which would eventually be thrown out.
“We wanted to develop something that would enable people to create new interactive objects without having to know about electronics, soldering or programming. Mobile phones and watches currently include most of the sensors, actuators, power and connectivity found in existing commercial smart objects, so it makes sense to leverage them for these tasks,” says David Ledo, a Ph.D. student at the University of Calgary who developed the system at Autodesk Research.
The process begins by importing a 3D model. Using a simple visual programming environment, users define interactive behaviours by pairing inputs, such as “Shaking the Device” or “Button Pressed”, with outputs such as lights, vibration, sound or display. The system examines the programmed behaviours and tells the 3D modelling environment (powered by Autodesk Meshmixer) what changes to make: exposing what the user programmed, creating a cavity for the device, and separating the model into two pieces to allow for assembly. The program is automatically streamed to the phone or watch, which runs a custom app. The user can then use fiber optic cables (to simulate LED lights) or conductive 3D print material (to make a button) to finish making the object interactive and assemble the pieces together.
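Conceptually, the behaviours a user wires together resemble simple input-to-output rules. The sketch below is purely illustrative and is not Pineal's actual code or API: it maps hypothetical named input events to output actions, much as a visual program might pair “Shaking the Device” with a vibration output.

```python
# Illustrative only: a toy mapping from input events to output actions.
# All event and action names here are hypothetical, not Pineal's API.

BEHAVIOURS = {
    "shake": ("vibrate", {"duration_ms": 500}),
    "button_pressed": ("light", {"color": "red", "on": True}),
}

def handle_event(event, behaviours=BEHAVIOURS):
    """Look up the output action for an input event; return None if unmapped."""
    action = behaviours.get(event)
    if action is None:
        return None
    name, params = action
    # On the phone or watch, this is where the matching actuator
    # (vibration motor, screen, speaker) would be triggered.
    return {"action": name, **params}

print(handle_event("shake"))  # {'action': 'vibrate', 'duration_ms': 500}
```

In this sketch, adding a new behaviour is just adding a new dictionary entry, which mirrors how a visual programming environment lets users compose behaviours without writing code.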
The group created five examples to show what’s possible with Pineal: a Magic 8 Ball, a Toy Firetruck, a Level, a Twitter Light Planter and a Voice Activated Lightbulb. These devices represent a range of functionality, including lights, sound, internet connectivity, and transducing input (i.e., converting one type of input to another).
This work will be presented by the lead author, David Ledo, at the upcoming CHI 2017 conference in Denver, Colorado. For more details, please see the project page, which includes a full technical report on the system and its design space.