Transl. Mater. Res. 2 (2015) 040301
doi:10.1088/2053-1613/2/4/040301
Analysis
Accelerating sensor development to the speed of light

Published 3 November 2015
Mark Bünger
Lux Research, 234 Congress Street, 5th Floor, Boston, MA 02110, USA
E-mail: [email protected]
Bringing new materials from the lab to any market is difficult, and sensor materials are no exception. But increasingly, simple sensors like cameras, microphones, and accelerometers, combined with advanced software, can do the job of more complex sensor systems and arrays, lowering cost and speeding time to market. In this article, Mark Bünger, Vice President of Research at Lux Research, looks at the impact of new manufacturing and design approaches on device development and offers advice on how materials experts can incorporate software-defined sensing into their toolkit.

Materials development is slow compared with other fields like software, so getting a new sensor to market is relatively expensive and time-consuming. It can take decades for higher-performing alloys, composites, and coatings to break through and find customers, and sensor materials are no different. In terms of payback, new materials have enormous collective impact across a broad range of industries. However, bringing a novel material to market is a slow-paced process that is out of step with the needs of fast-moving market opportunities for sensors, such as the internet of things (IoT), urgently needed medical diagnostic tests, and regulation-driven environmental monitoring compliance. To move more rapidly and capture the largest growth opportunities in sensors, developers are rethinking how they use new material technologies and finding places where software can accelerate or even replace steps in the process. Meeting that demand without incurring an exorbitant R&D burden will require smarter, more efficient design methods for materials and parts. Lux has highlighted the main approaches to addressing this problem, which, at a high level, work by converting slow-iterating material problems into fast-iterating digital information problems: the digitization and automation of material and part design.
While bringing new material chemistries to market (especially to highly regulated markets) requires extensive research, consider that many performance improvements rely on modifying not the chemistry, but the material microstructure and macro-scale part geometry: alloy grain structures, composite fiber paths, 3D printed parts with complex internal geometries, and emerging metamaterials fall into this category. Traditionally, researchers develop the required geometries by trying them out in the lab and in modeling software.
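The "design in software first" loop this paragraph describes can be made concrete with a toy sketch. The code below scores candidate lattice geometries digitally and only "prints" the winner, using the well-known Gibson-Ashby scaling law for open-cell cellular solids; the solid modulus and the stiffness target are illustrative numbers chosen for the example, not values from any of the companies or materials discussed here.

```python
# Toy sketch of digital-first material design: instead of fabricating and
# testing each candidate lattice geometry, score candidates in software and
# fabricate only the best one. Uses the Gibson-Ashby estimate for open-cell
# lattices, E_eff ~ E_solid * (relative density)^2. All numbers illustrative.

E_SOLID = 70e9  # Pa; solid-material modulus (aluminium-like, for illustration)

def effective_modulus(relative_density: float) -> float:
    """Gibson-Ashby estimate of an open-cell lattice's effective stiffness."""
    return E_SOLID * relative_density ** 2

def lightest_design(target_modulus: float, step: float = 0.005):
    """Sweep relative density and return the lowest value whose predicted
    modulus meets the target: a design iteration run entirely in software."""
    rho = step
    while rho <= 1.0:
        if effective_modulus(rho) >= target_modulus:
            return rho
        rho += step
    return None  # target unreachable even at full density

# Pick the lightest lattice predicted to reach 1 GPa effective stiffness.
best = lightest_design(target_modulus=1e9)
```

Each loop iteration here replaces what would otherwise be a fabricate-and-measure cycle in the lab, which is the essence of converting a material problem into an information problem.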
Accelerating sensor development with structured materials

Today, material developers like Metamaterial Technologies1 are extending this concept, developing fully digital material design software to predict which microstructures will produce parts with a given set of (in their case, electronic and optical) properties, shortening material development time from years to weeks. In sensors, 3D printer company 3-Spark2 has taken a similar approach at the part design level: after developing a 'smart' 3D printed knee brace with embedded conductive traces that act as strain sensors, the company is now developing software that will enable users to automatically place conductive traces in a given part geometry to provide strain, force, temperature, pressure, and torsion sensing capabilities. As 3D printing technologies mature and become able to produce production-quality parts in a wider range of material chemistries, they will increasingly enable immediate production of the resulting part and material designs, further reducing the cost of manufacturing optimized products. In another example, researchers at Lawrence Livermore National Laboratory recently announced that they 3D printed graphene oxide aerogel microlattices with periodic porous structures3. Specifically, they combined aqueous graphene oxide and silica to form a viscous ink, loaded that ink into a syringe-based 3D printer, and extruded the material through a micronozzle to produce a microlattice with a periodicity of 250 μm. The
1 http://metamaterial.com/
2 http://3-spark.com/
3 https://llnl.gov/news/3d-printed-aerogels-improve-energy-storage
© 2015 IOP Publishing Ltd
material offers high specific surface area (704 m2 g−1 to 1066 m2 g−1), specific stiffness, conductivity (87 S m−1 to 278 S m−1), and compressibility, combined with low density (31 mg cm−3 to 123 mg cm−3), and the 3D printing process can tune these properties locally within a part. The process can be used to make pressure sensors, but it can also be applied to filtration and separation, flow batteries, supercapacitors, waveguides, catalyst scaffolds, and other electronics and energy storage devices. These sensors use the arrangement of materials, rather than changes in composition, to achieve new properties and functions. Since the arrangement of materials in a sensor can be tweaked in an effectively infinite number of ways (and modeled before anything is made), this software-centric approach promises to speed up the development of new classes of sensors compared with a material-centric approach alone.
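As a toy illustration of how a conductive, compressible lattice like this could act as a pressure sensor, the sketch below maps a measured lattice resistance back to pressure through a piecewise-linear calibration table. The resistance-pressure pairs are invented for illustration and are not taken from the LLNL work; a real device would be characterized experimentally.

```python
# Hedged sketch: compression changes a conductive lattice's resistance, so a
# calibration table (measured once, per device) maps resistance to pressure.
# The calibration points below are invented for illustration only.

CALIBRATION = [  # (resistance_ohm, pressure_kPa); resistance falls as load rises
    (1200.0, 0.0),
    (900.0, 5.0),
    (650.0, 10.0),
    (500.0, 20.0),
]

def pressure_from_resistance(r_ohm: float) -> float:
    """Piecewise-linear interpolation of the (invented) calibration curve."""
    pts = sorted(CALIBRATION)  # ascending resistance
    if r_ohm <= pts[0][0]:
        return pts[0][1]   # clamp below the calibrated range
    if r_ohm >= pts[-1][0]:
        return pts[-1][1]  # clamp above the calibrated range
    for (r0, p0), (r1, p1) in zip(pts, pts[1:]):
        if r0 <= r_ohm <= r1:
            t = (r_ohm - r0) / (r1 - r0)
            return p0 + t * (p1 - p0)
```

The point of the sketch is the division of labor: the printed microstructure supplies the transduction physics, while a few lines of software turn it into a calibrated instrument.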
Even faster roads to market with software-defined sensors

In addition to these 'structured material' sensors, another important evolution in sensing is what we call software-defined sensors (SDS). Taking a cue from the development of software-defined radio, in SDS a simple sensor plus software can substitute for more complex sensor systems. That matters because off-the-shelf sensors carry lower technology risk and cost, and the system can be easily upgraded with new software as our understanding of the phenomena being sensed improves. This approach is thus generally even faster than developing structured materials for sensors. The following list provides examples of SDS and the markets they address:

1. Accelerometer replaces heart rate monitor and other devices for fitness/health tracking. It used to be the case that if you needed to track someone's level of activity, say exercise, walking, and sleep, you would use a heart rate monitor, gas sensors (to monitor breathing), GPS (to track the distance they had travelled), and an electronic mattress pad (to record how long and how soundly they slept). Today, of course, a simple accelerometer, plus software that translates the readings, is enough to give a basic understanding of all of these behaviors and more. Companies like Jawbone, Misfit, Pebble, and dozens of others making these devices don't really compete by having the best accelerometers; they battle by having the best software to analyze the output, even to the point of accepting data from competitors' devices.

2. Depth camera recognizes gestures, replacing the touchscreen UI on consumer electronics. One of the early leaders in smartphones, Blackberry, lost its advantage over the iPhone because it clung to the concept of a physical keyboard and miniature trackball.
Apple, on the other hand, developed a touchscreen sensor for the iPhone that, by adjusting the images displayed and the interpretation of touches at various points on the screen, could adapt the keyboard to any language, offer more symbols, and use more of the device surface for display than Blackberry's physically constrained approach. But touchscreens aren't the end of the road, and a new type of interface, the depth camera, could now eliminate keyboards, mice, and touchscreens for a large number of applications. Depth cameras create a real-time 3D image of an environment by combining conventional optical imaging with an IR projector (which paints the scene with a regular grid of light) and a camera (which converts the shape of the grid on the surfaces of objects in the scene into a 3D model). Depth cameras go by many names: ranging camera, flash lidar, time-of-flight (ToF) camera, and RGB-D camera, for example. One of the first commercial depth cameras, the Microsoft Kinect4, is a gaming accessory that recognizes body and facial gestures, identifies people by face, and is even capable of measuring your pulse by sight (based on slight changes in skin color as blood is pumped). It was hacked for a $3000 bounty set by Adafruit5, which led to a proliferation of developers using it for their own projects, and the developer of the core technology inside it, PrimeSense, was bought by Apple for $345M. Since then, depth cameras have grown smaller and cheaper, appearing in Kickstarter-funded wearables (Meta Spaceglasses6, Structure.IO7) as well as Intel's RealSense8, which launched in 2014, and Google's Project Tango9, which Google claims 'combines 3D motion tracking with depth sensing to give your mobile device the ability to know where it is and how it moves through space.'

3. Microphone recognizes speech and sounds (e.g. something burning on the stove, the front door opening) for smart home applications.
Voice recognition software like Nuance's Dragon10 has long been used for dictation and machine control on PCs (this article was partially dictated via Dragon). Today, mobile voice apps like Apple's Siri, Microsoft Cortana, OK Google, Facebook's M, and Viv are acceptably accurate typists,

4 https://microsoft.com/en-us/kinectforwindows/
5 https://blog.adafruit.com/2010/11/10/we-have-a-winner-open-kinect-drivers-released-winner-will-use-3k-for-morehacking-plus-an-additional-2k-goes-to-the-eff/
6 https://getameta.com/products
7 http://structure.io/
8 http://intel.com/content/www/us/en/architecture-and-technology/realsense-overview.html
9 https://google.com/atap/project-tango/
10 http://nuance.com/dragon/index.htm
and useful assistants as well. IoT-based voice control is already available in TVs from LG and Changhong, a leading Chinese TV brand, as well as in cars from Ford (SYNC), Mercedes (Linguatronic), Toyota (Entune), and Volvo (Sensus Connect). New devices like Amazon's Echo11 and Cubic12, launched from a Moscow hackerspace, are beefing up their intelligence in hopes of becoming the UI for the smart home. Imperial College London spinout Eddy Labs13 is working on what might be the smartest home audio UI yet: a device that can tell from the distinctive jingle of a set of keys whether the person at the door is the homeowner, or hear whether dinner is ready from the sound of food frying on the stove.
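The accelerometer example above (item 1) shows the core SDS pattern clearly enough to sketch in code: one cheap sensor stream, with all the intelligence in software. The sketch below labels a window of accelerometer readings by the variance of the acceleration magnitude; the thresholds and the three-way labelling are illustrative assumptions, far cruder than what Jawbone, Misfit, or Pebble actually ship.

```python
import math

# Minimal software-defined-sensor sketch: a single accelerometer stream plus
# software stands in for a rack of dedicated instruments. Thresholds and the
# three activity labels are invented for illustration, not from any product.

def magnitude(sample):
    """Length of one (x, y, z) acceleration reading, in g."""
    x, y, z = sample
    return math.sqrt(x * x + y * y + z * z)

def classify_window(samples, still=0.05, walking=0.5):
    """Label a window of (x, y, z) readings by the variance of |a|:
    a still body barely varies, walking varies some, running varies a lot."""
    mags = [magnitude(s) for s in samples]
    mean = sum(mags) / len(mags)
    var = sum((m - mean) ** 2 for m in mags) / len(mags)
    if var < still:
        return "resting"
    if var < walking:
        return "walking"
    return "running"
```

Upgrading such a system means shipping a better `classify_window`, not new silicon, which is exactly why SDS iterates at software speed rather than materials speed.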
Merging material science and software-defined sensors: how to get started

Taken together, these trends present both risk and opportunity. Manufacturers that adopt these methods soonest will be able to offer more highly optimized products than their competitors, while laggards will find it increasingly difficult to catch up. Material and chemical companies may find that a single new chemistry can have many geometries applied to it, letting it serve many applications; conversely, software plus simple sensors also represents a shift in focus away from premium-priced components and formulations. So what should materials science experts do to incorporate software-defined sensing into their toolkit?

1. Know what's already available: startups, universities, corporations, and communities developing and deploying software-defined sensors. In this age of open innovation, there's no reason to reinvent the wheel. Search patents and the scientific literature to find others who are attacking similar use cases. Partner with them, license their technology, or engineer around their approach. Above all, look to open-source groups like the Accelerated Innovation Community14, which is developing sensor fusion algorithms that are free to use and improve.

2. Use off-the-shelf sensors and other hardware to rapidly prototype. In addition to the large-scale semiconductor manufacturers and others that typically provide sensor hardware, today there are many websites like Adafruit15 and Sparkfun16 that can supply useful building blocks. They sell increasingly complex sensors, from accelerometers, gyros, and fingerprint scanners through to Geiger counters, LIDAR, RFID, and FTIR development kits, and even a headset to monitor brain waves. What's more, these sites offer tutorials, software sketches, and a community of practice that supports rapid prototyping and testing.

3. Look for simple solutions: imagine how you might replace a more complex future system with one you can make today.
Often, all that's needed is a simple change of mindset to find a shortcut to a market-ready sensor solution. We usually begin with a new material or other technology and try to push it into the market, but sometimes it's easier to identify a market need and figure out the simplest way to solve it. If you had to meet that need using only existing, inexpensive approaches, what would your solution look like? Necessity is the mother of invention, and self-imposed constraints (plus learning by doing) can help you envision solutions faster.
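As a toy example of this "simplest solution" mindset, the sketch below stands in a bare microphone for a dedicated door-contact sensor by flagging audio frames whose energy jumps well above a slowly tracked ambient level. The threshold ratio and frame handling are invented for illustration; a real product, like the Eddy Labs device described earlier, would use trained sound classifiers rather than a fixed ratio.

```python
# "Simplest solution" sketch: detect a door-type sound event with a plain
# microphone by watching for frames whose RMS energy jumps well above a
# running estimate of the ambient noise floor. All parameters illustrative.

def rms(frame):
    """Root-mean-square energy of one frame of audio samples."""
    return (sum(s * s for s in frame) / len(frame)) ** 0.5

def detect_events(frames, ratio=3.0):
    """Return indices of frames whose energy exceeds `ratio` times the
    ambient level; the ambient estimate only adapts on quiet frames."""
    events = []
    ambient = rms(frames[0])
    for i, frame in enumerate(frames):
        level = rms(frame)
        if level > ratio * ambient:
            events.append(i)
        else:
            # exponentially track the noise floor on non-event frames
            ambient = 0.9 * ambient + 0.1 * level
    return events
```

A prototype like this, wired to an off-the-shelf microphone breakout, can validate the market need in days, long before any custom sensor material is developed.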
Mark Bünger is Vice President of Research at Lux Research17—an independent research and advisory firm providing strategic advice and ongoing intelligence for emerging technologies. Based in San Francisco, Mark currently leads the firm’s Future Computing Platforms and Industrial Big Data practices.
11 http://amazon.com/Amazon-SK705DI-Echo/dp/B00X4WHP5E
12 http://cubicrobotics.com/
13 http://eddy.io/
14 http://memsindustrygroup.org/?aicgroup
15 http://adafruit.com/category/35
16 https://sparkfun.com/categories/23?page=all
17 http://luxresearchinc.com/coverage-areas/advanced-materials