
The Draper Technology Digest 2011 Volume 15 | CSDL-R-3028

Front cover photo: (from left to right) Troy B. Jones, Autonomous Systems Capability Leader; Nirmal Keshava, Group Leader for Fusion, Exploitation, and Inference Technologies; Sungyung Kim, Senior Member of the Technical Staff in Strategic and Space Guidance and Control; and Amy E. Duwel, Group Leader for RF and Communications

The Draper Technology Digest (CSDL-R-3028) is published annually under the auspices of The Charles Stark Draper Laboratory, Inc., 555 Technology Square, Cambridge, MA 02139. Requests for individual copies or permission to reprint the text should be submitted to: Draper Laboratory Media Services, Phone: (617) 258-1887, Fax: (617) 258-1800, email: [email protected]

Editor-in-Chief: Michael J. Matranga
Artistic Director: Pamela Toomey
Designer: Lindsey Ruane
Editor: Beverly Tuzzalino
Writers: Jeremy Singer, Amy Schwenker, Alicia Prewett
Illustrator: William Travis
Photographer: James Thomas
Photography Coordinator: Drew Crete

Copyright © 2011 by The Charles Stark Draper Laboratory, Inc. All rights reserved.

Table of Contents

4   Introduction by Dr. John Dowdle, Vice President of Engineering

6   2011 Charles Stark Draper Prize

Papers

9   Oscillator Phase Noise: Systematic Construction of an Analytical Model Encompassing Nonlinearity
    Paul A. Ward and Amy E. Duwel

21  In Vitro Generation of Mechanically Functional Cartilage Grafts Based on Adult Human Stem Cells and 3D-Woven poly(ε-caprolactone) Scaffolds
    Piia K. Valonen, Franklin T. Moutos, Akihiko Kusanagi, Matteo G. Moretti, Brian O. Diekman, Jean F. Welter, Arnold I. Caplan, Farshid Guilak, and Lisa E. Freed

33  General Bang-Bang Control Method for Lorentz Augmented Orbits
    Brett J. Streetman and Mason A. Peck

47  Tactical Geospatial Intelligence from Full Motion Video
    Richard W. Madison and Yuetian Xu

55  Model-Based Design and Implementation of Pointing and Tracking Systems: From Model to Code in One Step
    Sungyung Lim, Benjamin F. Lane, Bradley A. Moran, Timothy C. Henderson, and Frank A. Geisel

71  Detection of Deception in Structured Interviews Using Sensors and Algorithms
    Meredith G. Cunha, Alissa C. Clarke, Jennifer Z. Martin, Jason R. Beauregard, Andrea K. Webb, Asher A. Hensley, Nirmal Keshava, and Daniel J. Martin

83  Requirements-Driven Autonomous System Test Design: Building Trusting Relationships
    Troy B. Jones and Mitch G. Leammukda

92  List of 2010 Published Papers and Presentations

Patents

101 Patents Introduction

102 Systems and Methods for High Density Multi-Component Modules
    U.S. Patent No. 7,727,806; Date Issued: June 1, 2010
    Scott A. Uhland, Seth M. Davis, Stanley R. Shanfield, Douglas W. White, and Livia M. Racz

105 List of 2010 Patents

Awards

106 The 2010 Draper Distinguished Performance Awards
107 Design and Demonstration of a Guided Bullet for Extreme Precision Engagement of Targets at Long Range
    Laurent G. Duchesne, Richard D. Elliott, Robert M. Filipek, Sean George, Daniel I. Harjes, Anthony S. Kourepenis, and Justin E. Vican

108 Development of an Ultra-Miniaturized, Paper-Thin Power Source
    Stanley R. Shanfield, Albert C. Imhoff, Thomas A. Langdo, Balasubrahmanyan "Biga" Ganesh, and Peter A. Chiacchi

109 The 2010 Outstanding Task Leader Awards
110 COTS Guidance, Navigation, and Targeting
    Ian T. Mitchell

111 MK6 System Test Complex
    Daniel J. Monopoli

112 The 2010 Howard Musoff Student Mentoring Award
    Sarah L. Tao

114 The 2010 Excellence in Innovation Award
    Navigation by Pressure
    Catherine L. Slesnick, Benjamin F. Lane, Donald E. Gustafson, and Brad D. Gaynor

116 List of 2010 Graduate Research Theses

Introduction by Dr. John Dowdle, Vice President of Engineering

This publication, the 15th issue of the Draper Technology Digest, presents a collection of publications, patents, and awards representative of the outstanding technical achievements by Draper staff members. Seven technical papers are presented in this issue to showcase work associated with Draper's capabilities and technologies. These publications represent long-standing Draper core capabilities in the areas of guidance, navigation, and control; autonomous systems and information systems; as well as emerging strengths in biomedical systems and multimodal sensor fusion.

This issue also recognizes the winners of several Draper awards for technical excellence, leadership, and mentoring. The Distinguished Performance Award is the most prestigious technical achievement award that the Laboratory bestows upon its employees. This year's award was presented to two teams. Laurent Duchesne, Richard Elliott, Robert Filipek, Sean George, Daniel Harjes, Anthony Kourepenis, and Justin Vican were acknowledged for "Design and Demonstration of a Guided Bullet for Extreme Precision Engagement of Targets at Long Range," work that resulted in the development of a guidance system for a 50-caliber bullet. Stanley Shanfield, Albert Imhoff, Thomas Langdo, Balasubrahmanyan "Biga" Ganesh, and Peter Chiacchi were recognized for "Development of an Ultra-Miniaturized, Paper-Thin Power Source," which represents a dramatic breakthrough in miniature portable energy.

Exceptional technical efforts require outstanding leadership. Two individuals were awarded the Outstanding Task Leader Award this year: Ian Mitchell was recognized for his leadership of "COTS Guidance, Navigation, and Targeting," while Daniel Monopoli was acknowledged for directing the "MK6 System Test Complex." Student leadership and mentoring remain a priority at Draper. The Howard Musoff Student Mentoring Award recognizes exceptional student mentoring while simultaneously honoring a former Draper mentor who devoted much time and energy to many students. Sarah Tao was the sixth recipient of the Howard Musoff Student Mentoring Award.

Innovation is a key element of Draper's success. Two awards acknowledge innovation by our technical staff. The 2010 Excellence in Innovation Award recognized a team effort by Catherine Slesnick, Benjamin Lane, Donald Gustafson, and Brad Gaynor for "Navigation by Pressure." The second team recognized for innovation was Seth Davis, Stanley Shanfield, Douglas White, and Livia Racz, who received the 2010 Vice President's Best Patent Award for "Systems and Methods for High Density Multi-Component Modules."

The Vice President's Best Paper Award recognizes an original publication that represents Draper's high standards of professionalism, originality, and creativity. This year's recipients of the Best Paper Award were Paul Ward and Amy Duwel, who authored the paper "Oscillator Phase Noise: Systematic Construction of an Analytical Model Encompassing Nonlinearity." Their paper, which provides a straightforward approach for trading off oscillator design parameters, is the first paper in this digest.

The Draper Prize, endowed by Draper Laboratory and awarded by the National Academy of Engineering, honors individuals who have developed a unique concept that advances science and technology while promoting the welfare and freedom of society. Since its inception in 1988, the Draper Prize has recognized the developers of the integrated circuit, the turbojet engine, FORTRAN, the Global Positioning System, and the World Wide Web, to name a few. This year, the Draper Prize was awarded to Frances H. Arnold and Willem P.C. Stemmer for "directed evolution," a process that mimics natural mutation and selection to guide the creation of desirable properties in proteins and cells in an accelerated laboratory environment. On behalf of Draper Laboratory, I would like to congratulate both recipients for their achievements, which are highlighted in greater detail on the following pages.


The 2011 Charles Stark Draper Prize

The Charles Stark Draper Prize was established in 1988 to honor the memory of Dr. Charles Stark Draper, "the father of inertial navigation." The Prize was instituted by the National Academy of Engineering and endowed by Draper Laboratory. The Prize is recognized as one of the world's preeminent awards for engineering achievement, and honors individuals who, like Dr. Draper, developed a unique concept that has made significant contributions to the advancement of science and technology, as well as the welfare and freedom of society. For information on the nomination process, contact the Public Affairs Office at the National Academy of Engineering at 202.334.1237.

The 2011 Charles Stark Draper Prize was awarded on February 22 at a ceremony in Washington, D.C. to Frances H. Arnold and Willem P.C. Stemmer, who individually contributed to a process called "directed evolution." This process, now used in laboratories worldwide, allows researchers to guide the creation of certain properties in proteins and cells. Directed evolution is predicated on the idea that the mutation and selection processes that occur in nature can be accelerated in the laboratory to obtain specific, targeted improvements in the function of single proteins and multiprotein pathways.

Arnold showed that randomly mutating genes of a targeted protein, especially enzymes, would result in some new proteins with more desirable traits than they had before. Selecting the best proteins and repeating this process multiple times, she essentially directed the evolution of the proteins until they had the desired properties. Stemmer concentrated on a different natural process for creating diversity, recombining preexisting natural diversity in a process he called "DNA shuffling." Rather than causing random mutations, he shuffled the same gene from diverse but related species to create clones that were as good as or better than the parental genes in a given targeted property.

An important aspect of directed evolution is that it provides a practical and cost-effective way to improve protein function. Previous efforts, especially those that involved a design based on enzyme structures and the predicted effects of mutations, were often not successful and were expensive and labor-intensive. According to George Georgiou, a professor at the University of Texas at Austin, "Arnold and Stemmer's joint development of directed protein evolution was a milestone in biological research. It is impossible to overstate the impact of their discoveries for science, technology, and society; nearly every industrial product and application involving proteins relies on directed evolution."

Arnold is the Dick and Barbara Dickinson Professor of Chemical Engineering and Biochemistry at the California Institute of Technology. She is listed as co-inventor on more than 30 U.S. patents and has served as science advisor to more than 10 companies. In 2005, Arnold cofounded Gevo Inc., which develops new microbial


routes to produce fuels and chemicals from renewable resources. She is among the few individuals who are members of all three membership organizations of the National Academies: the National Academy of Engineering (2000), the Institute of Medicine (2004), and the National Academy of Sciences (2008). She holds a B.S. in Mechanical and Aerospace Engineering from Princeton University (1979) and a Ph.D. in Chemical Engineering from the University of California, Berkeley.

Stemmer is founder and CEO of Amunix Inc., which creates pharmaceutical proteins with extended dosing frequency. In 2008, Amunix joined with Index Ventures to create Versartis Inc. for the purpose of clinical development of three specific products for the treatment of metabolic diseases. Stemmer has invented other technologies that have led to other successful companies and products. In 1993, he invented DNA shuffling and co-founded Maxygen to commercialize the process. Prior to 1993, he was a Distinguished Scientist at Affymax and a scientist at Hybritech. In 2001, he invented the Avimer technology and founded Avidia in 2003 to commercialize it; he was chief scientific officer of the company until 2005. Stemmer has 68 research publications, 97 U.S. patents, and is a recipient of the Doisy Award, the Perlman Award, and the NASDAQ VCynic Award. He received his Ph.D. from the University of Wisconsin-Madison in 1985.

Recipients of the Charles Stark Draper Prize

2009: Robert H. Dennard for the invention and development of Dynamic Random Access Memory (DRAM)
2008: Rudolf Kalman for the development and dissemination of the Kalman Filter
2007: Timothy Berners-Lee for creation of the World Wide Web
2006: Willard S. Boyle and George E. Smith for the invention of the charge-coupled device (CCD)
2005: Minoru S. Araki, Francis J. Madden, Don H. Schoessler, Edward A. Miller, and James W. Plummer for their invention of the Corona reconnaissance satellite technology
2004: Alan C. Kay, Butler W. Lampson, Robert W. Taylor, and Charles P. Thacker for the development of the Alto computer at Xerox's Palo Alto Research Center (PARC)
2003: Ivan A. Getting and Bradford W. Parkinson for their technological achievements in the development of the Global Positioning System
2002: Robert Langer for bioengineering revolutionary medical drug delivery systems
2001: Vinton Cerf, Robert Kahn, Leonard Kleinrock, and Lawrence Roberts for their individual contributions to the development of the Internet
1999: Charles Kao, Robert Maurer, and John MacChesney for spearheading advances in fiber-optic technology
1997: Vladimir Haensel for the development of the chemical engineering process of "Platforming" (short for Platinum Reforming), which used a platinum-based catalyst to efficiently convert petroleum into high-performance, cleaner-burning fuel
1995: John R. Pierce and Harold A. Rosen for their development of communication satellite technology
1993: John Backus for his development of FORTRAN, the first widely used, general-purpose, high-level computer language
1991: Sir Frank Whittle and Hans J.P. von Ohain for their independent development of the turbojet engine
1989: Jack S. Kilby and Robert N. Noyce for their independent development of the monolithic integrated circuit



Engineers building new communications, navigation, and radar systems seek to minimize phase noise, which can harm performance. In a navigation system, phase noise can make it take longer for the device to acquire the GPS satellite signal, which also drains power. In communications and radar systems, phase noise can diminish range and disrupt low-level signals. This paper provides a general model for engineers studying design trades who are seeking to understand how various noise sources, as well as environmental disturbances, manifest as phase noise in oscillators, which produce electronic signals. This work could lead the way to improved oscillators that support defense and intelligence customers’ communications and navigation needs with more capable systems in smaller packages.


Oscillator Phase Noise: Systematic Construction of an Analytical Model Encompassing Nonlinearity
Paul A. Ward and Amy E. Duwel
Copyright ©2011 by the Institute of Electrical and Electronics Engineers (IEEE), published in IEEE Transactions on Ultrasonics, Ferroelectrics, and Frequency Control, Vol. 58, No. 1, January 2011

Abstract This paper offers a derivation of phase noise in oscillators resulting in a closed-form analytic formula that is both general and convenient to use. This model provides a transparent connection between oscillator phase noise and the fundamental device physics and noise processes. The derivation accommodates noise and nonlinearity in both the resonator and feedback circuit, and includes the effects of environmental disturbances. The analysis clearly shows the mechanism by which both resonator noise and electronics noise manifest as phase noise, and directly links the manifestation of phase noise to specific sources of noise, nonlinearity, and external disturbances. This model sets a new precedent, in that detailed knowledge of component-level performance can be used to predict oscillator phase noise without the use of empirical fitting parameters.

I. Introduction

This paper provides a predictive model for phase noise that does not require fitting parameters, but instead is rigorously derived from the fundamental dynamics of an oscillator loop. We build on the work and insight of predecessors, including Leeson [1], Hajimiri [2], and Rubiola [3] in particular. Section IIA briefly summarizes key concepts from the literature that make our work possible. This section also introduces a phase criterion that is new to oscillator analysis and enables our modeling approach; specifically, the phase criterion requires that the phase differences around a closed loop sum to zero instantaneously. Leveraging the insight of the linear time variant (LTV) effect [2], we add a rigorous derivation that provides a closed-form expression for the LTV gain function and clearly shows the associated frequency translation of the additive noise as it manifests in phase noise (Sections IIB and IIC). By capturing the LTV behavior in a general analytical model, one can further appreciate the elegance of topologies such as Colpitts, in which the feedback is periodically applied, to provide low-phase-noise oscillators. Section IID shows how oscillator phase noise is obtained from injected phase noise using the phase criterion. In this section, we benchmark our approach by using it to derive Leeson's expression for phase noise. In Section IIE, we briefly address noise that is derived directly from the resonator itself. Finally, Sections IIF and IIG discuss the role of nonlinearity and time variance on phase noise. Although it is already well known that nonlinearity in active devices can degrade phase noise, we provide a formalism for exactly how the mapping to phase noise occurs. In particular, we show how the measurable property of voltage-to-phase conversion in an amplifier can parameterize the resulting oscillator phase noise. Our formalism also offers new results. We can predict the 1/f³ corner frequency of the phase noise spectral density from the amplifier properties and discuss why this frequency is usually much lower than the 1/f corner of the individual amplifier. We also provide an analytical expression for the phase noise resulting from

resonator nonlinearity and time variance and offer new insight into an earlier finding that a nonlinear resonator can actually improve the phase noise of an oscillator [4]. In particular, we show that nonlinearity in the resonator reduces the phase noise that would be predicted by the Leeson equation; however, we explain how nonlinearity can add a new term to the phase noise because of the coupling between frequency and amplitude. The derivation focuses on mechanical or lumped-element-based resonators. Though our approach is quite general, future work should explore the broader applicability of these results, e.g., to photonic resonators.

II. Modeling of Phase Noise

A. Conceptual Basis of Model

The analysis relies heavily on two key insights. The first insight was articulated by Hajimiri and Lee [2]. They recognized that phase noise is an LTV function of the additive noise. Building on that insight, the present work provides a closed-form solution to the LTV gain function that is valid for small noise but can be extended for arbitrarily large noise. The analysis leverages the concept of an analytic signal that possesses the same phase (and amplitude) as the oscillator signal at each respective node. This technique and its application to oscillators are also described nicely in [3].

The second key to analyzing this system is a requirement that at an instant in time, the phase at any point in the loop is well-defined and single-valued with respect to a reference phase. The loop topology then constrains the system such that the phase shifts around the loop sum to zero. This includes phase shifts caused by noise processes. The constraint appears reminiscent of the Barkhausen criterion, which describes the condition for steady-state oscillation. The Barkhausen criterion, however, captures the condition that the closed-loop transfer function has a resonance, so that there is a finite response even with zero input. The statement is often made


that any disturbances in the loop not meeting the Barkhausen phase criterion will decay in time. In the present analysis, a physically realizable system must meet the zero-loop-sum phase condition at all times and instantaneously because of the topology of being in a loop. This seemingly intuitive condition has not been stated explicitly in this context before. It allows one to use feedback-theory-based models, elegantly presented in [3], when the system is fully linearized.

B. Determination of Injected Phase Noise

We consider the case in which a random signal n(t) is added to a sinusoid of amplitude A, as shown in Figure 1. The only restriction placed on n(t) is that it is wide-sense stationary (along with, by extension, its Hilbert transform n̂(t)), and thus we can define its power spectrum. The noise phasor has random amplitude and random phase. Thus, referring to Figure 1, it is clear that the resultant phasor representing c(t) possesses both random amplitude modulation and random phase modulation.

Figure 1. Graphic representation of an analytic signal.

It is customary with phase noise analysis to ignore amplitude noise on the oscillator output. We shall follow this custom here. However, we will consider the effects of amplitude noise within the loop on the oscillator phase noise. As we shall see later, the amplitude noise can excite nonlinear effects that can increase phase noise.

In addition, all oscillators possess some nonlinearity that keeps the amplitude from growing without bound. This nonlinearity may be intrinsic or may be added as part of the design. In Section IIG, we address the exacerbation in phase noise for the specific case in which active amplitude stabilization is used and the resonator possesses coupling between amplitude and frequency.

The phasor c(t) is represented as an analytic signal. In Appendix A, we show how the amplitude and phase of this signal are related to n(t). Figure 2 represents the analytic signal at different nodes of an oscillator block diagram at a snapshot in time. We consider that the noise n(t) is referred to a single node and is due to the feedback electronics. An explicit phase shift Ψn is introduced to identify the noise-derived phase shift introduced through the LTV mapping of n(t) into the loop. We use the analytic signal formalism to show how Ψn(t) is derived from n(t).

Figure 2. Simplified block diagram for an oscillator.

From our analytic signal representation, we can express the injected phase noise as

$$\Psi_n(t) = \tan^{-1}\!\left[\frac{A\sin(\omega_o t)+\hat{n}(t)}{A\cos(\omega_o t)+n(t)}\right] - \tan^{-1}\!\left[\frac{\sin(\omega_o t)}{\cos(\omega_o t)}\right], \qquad \text{(1)}$$

where ωo is the time-average oscillator frequency. Equation (1) is exact, but not particularly convenient. To obtain a working equation for phase noise, it can be expanded in a Taylor series. Considering that the noise is small compared with the signal amplitude, we can keep only the first-order terms of the expansion, resulting in

$$\Psi_n(t) \cong \frac{\hat{n}(t)}{A}\cos(\omega_o t) - \frac{n(t)}{A}\sin(\omega_o t). \qquad \text{(2)}$$

We see that the phase noise is an LTV function of additive electronics noise n(t). Furthermore, the zero-loop-sum criterion requires that φ = -Ψn. Equation (2) is important because it forms the basis for the mapping of additive noise to feedback phase noise. It should be noted that this analysis implicitly assumes that there is no parametric modulation in the feedback electronics. This is not a linear time invariant (LTI) effect and is addressed in more detail later in the paper.
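As a quick plausibility check of (1) and (2), the following Python sketch (an illustration, not part of the paper; the sample rate, carrier frequency, and noise level are arbitrary assumptions) builds a noisy sinusoid, forms the analytic signal numerically, and compares the exact injected phase with its first-order approximation:

```python
# Illustrative check of eqs. (1) and (2); requires numpy and scipy.
import numpy as np
from scipy.signal import hilbert

rng = np.random.default_rng(0)
fs = 1.0e6                     # sample rate, Hz (assumed)
fo = 10.0e3                    # carrier frequency, Hz (assumed)
A = 1.0                        # carrier amplitude
t = np.arange(200_000) / fs

n = 1e-3 * rng.standard_normal(t.size)   # small additive noise n(t)
n_hat = np.imag(hilbert(n))              # Hilbert transform of n(t)
wo = 2 * np.pi * fo

# Exact injected phase, eq. (1): phase of the noisy analytic signal minus
# the phase of the clean carrier, wrapped to (-pi, pi].
z_noisy = (A * np.cos(wo * t) + n) + 1j * (A * np.sin(wo * t) + n_hat)
z_clean = np.cos(wo * t) + 1j * np.sin(wo * t)
psi_exact = np.angle(np.exp(1j * (np.angle(z_noisy) - np.angle(z_clean))))

# First-order approximation, eq. (2).
psi_approx = (n_hat / A) * np.cos(wo * t) - (n / A) * np.sin(wo * t)

# Compare away from the record edges, where the numerical Hilbert
# transform has end effects.
err = np.abs(psi_exact - psi_approx)[5000:-5000]
print("max |exact - approx| =", err.max())   # second order in n/A, i.e. tiny
```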

C. Spectrum of Injected Phase Noise

Because we are dealing with random signals, we wish to work in terms of frequency spectra. To that end, we wish to compute the spectrum of the injected phase noise. Note that the spectrum of the phase noise is not a signal power spectrum per se, but instead represents the distribution of phase power as a function of frequency, having units of Hz⁻¹.


The derivation of injected phase spectrum is rather lengthy, and therefore has been included as Appendix B. The result is

$$S_{\Psi n}(\omega) \cong \frac{1}{A^2}\left[S_n(\omega-\omega_o)+S_n(\omega+\omega_o)\right], \qquad \text{(3)}$$

where Sn(ω) is the double-sided spectrum of the additive electronics noise (in V²/Hz) and ω is the baseband frequency. In words, the double-sided injected phase noise spectrum is proportional to the double-sided additive noise spectrum shifted toward positive frequency by ωo plus the double-sided electronics noise spectrum shifted toward negative frequency by ωo. Equation (3) also shows that the injected phase noise is given by the ratio of additive amplifier noise power to signal power, as expected. For the case in which the amplifier noise is attributed to white noise [Sn(ω) = Snw], this ratio can be put into more fundamental units by dividing double-sided available noise power density by available signal power:

$$S_{\Psi n}(\omega) \cong \frac{2S_{nw}}{A^2} = \frac{2Fk_BT}{P_s}, \qquad \text{(4)}$$

where F is the amplifier noise factor and Ps is the signal power.

D. Oscillator Phase Noise and the Leeson Formula

The oscillator phase noise, which we will also refer to as the readout phase noise θR(t), is the sum of two terms: the phase noise of the resonator output θo(t) and the phase noise injected, Ψn(t). This can be seen by walking through the loop in Figure 2.

The phase noise of the resonator output is also the time integral of the frequency noise of the resonator output. That is,

$$\theta_R(t) = \Psi_n(t) + \int_{-\infty}^{t}\omega_o(\tau)\,d\tau. \qquad \text{(5)}$$

In the case in which the resonator is LTI, the frequency noise of the resonator output signal is given by

$$\omega_o(t) = \left(\frac{\partial\phi(\omega_o)}{\partial\omega}\right)^{-1}\phi(t). \qquad \text{(6)}$$

Thus, the general expression for phase noise of an oscillator employing an LTI resonator is

$$\theta_R(t) = \Psi_n(t) + \left(\frac{\partial\phi(\omega_o)}{\partial\omega}\right)^{-1}\int_{-\infty}^{t}\phi(\tau)\,d\tau, \qquad \text{(7)}$$

where the transfer function of the resonator is

$$H(\omega) = |H(\omega)|\,e^{j\phi(\omega)}. \qquad \text{(8)}$$

Very often, the resonator is a second-order LTI system. In this case, the phase slope at resonance is

$$\frac{\partial\phi(\omega_o)}{\partial\omega} = \frac{-2Q}{\omega_o}, \qquad \text{(9)}$$

where Q is the loaded quality factor of the resonator. Considering also the phase criterion that φ = -Ψn, the phase noise is given by

$$\theta_R(t) = \Psi_n(t) + \frac{\omega_o}{2Q}\int_{-\infty}^{t}\Psi_n(\tau)\,d\tau. \qquad \text{(10)}$$

This equation is a time-domain representation of the phase noise from the oscillator. It leads directly to the Leeson equation because it shows how the output phase noise is derived from the sum of injected phase noise plus its integral. Note that determination of the time-domain oscillator phase noise involves evaluation of a running integral of the injected phase noise. We run into a difficulty when attempting to integrate white noise because the integral tends to grow without bound. We can circumvent this difficulty by considering the integration time to be finite. To accommodate this restriction in the frequency domain, we will restrict validity of the spectrum of the integrated feedback phase noise, so that ω = 0 is excluded. The spectrum of the oscillator phase noise is found by taking the power spectral density (PSD) of (10) subject to the integration time restriction, given by

$$S_{\theta R}(\omega) \cong S_{\Psi n}(\omega)\left[1 + \left(\frac{\partial\phi(\omega_o)}{\partial\omega}\right)^{-2}\frac{1}{\omega^2}\right], \quad |\omega| > 0. \qquad \text{(11)}$$

For an oscillator having a second-order LTI resonator, the phase noise spectrum is given by

$$S_{\theta R}(\omega) \cong S_{\Psi n}(\omega)\left[1 + \left(\frac{\omega_o}{2Q\omega}\right)^2\right]. \qquad \text{(12)}$$

It is clear from these expressions that injected phase noise is a critical determinant of oscillator phase noise, particularly in the case in which the resonator has a finite phase slope at resonance. In the case in which only white additive noise with a PSD of Sn = Snw is considered, the injected phase noise becomes SΨn(ω) = 2Snw/A², and (12) parallels the familiar Leeson result.
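To make the result concrete, a minimal numeric sketch of (4) and (12) follows; every component value (noise factor, signal power, Q, carrier frequency) is an assumption chosen for illustration, not a value from the paper:

```python
# Illustrative evaluation of the Leeson-form spectrum (12) with the white
# injected-phase floor of (4); all component values are assumed.
import numpy as np

kB = 1.380649e-23            # Boltzmann constant, J/K
T, F, Ps = 300.0, 2.0, 1e-3  # temperature (K), noise factor, signal power (W, ~0 dBm)
Q, fo = 50e3, 10e6           # loaded Q and oscillator frequency (Hz)

S_psi = 2 * F * kB * T / Ps  # eq. (4): white injected phase noise, rad^2/Hz
for df in (1e1, 1e2, 1e3, 1e4, 1e5):               # offset frequency, Hz
    S_theta = S_psi * (1 + (fo / (2 * Q * df))**2)  # eq. (12)
    print(f"{df:8.0f} Hz offset: {10*np.log10(S_theta):7.1f} dBc/Hz")
# Far from the carrier the floor is 10*log10(2*F*kB*T/Ps); inside the
# resonator half-bandwidth it rises at 20 dB per decade toward the carrier.
```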

E. Inclusion of Resonator Noise

Refer to Figure 3 on the following page, which shows an oscillator loop having electronics noise and resonator noise. We can use the results obtained earlier for mapping of additive noise to phase noise and for generating the corresponding phase noise spectrum. The resonator noise will contribute to injected phase noise and injected amplitude noise, but only the phase noise is important in the case of a linear resonator. If we assume that the resonator input is a sinusoid of amplitude B plus additive white noise nr(t) having a PSD of Srw, the injected phase spectrum including both electronics noise and resonator noise is given by

$$S_{\Psi}(\omega) \cong \frac{1}{A^2}\left[S_n(\omega-\omega_o)+S_n(\omega+\omega_o)\right] + \frac{2S_{rw}}{B^2}. \qquad \text{(13)}$$

For simplicity, we took the resonator noise to be white, though it can be any stationary noise process. Section IIF will show how a nonwhite noise process in the electronics (1/f noise in this case) contributes to oscillator phase noise, and identical steps can be applied to analyze the impact of nonwhite resonator noise on phase noise. By separating out the electronics phase noise from resonator phase noise, it is possible to use material-based models for the spectra of each component and see how the noise propagates through a given system.


Figure 3. Simplified block diagram for an oscillator. This model explicitly includes electronics noise, resonator noise, and parametric phase shifts in the amplifier.

For the case in which the resonator is mechanical, a fundamental noise source is the white Brownian force associated with the finite mechanical loss in the resonator [5]. If given an equivalent circuit for the mechanical resonator, the model for this noise term is exactly like Johnson noise in a resistor [6]:

$$2S_{rw} = 4k_BTR_x. \qquad \text{(14)}$$

Rx is the equivalent circuit resistance at resonance and depends on both the resonator Q and the design-specific electromechanical coupling.
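As a numeric illustration of (14), consider the short sketch below; the motional resistance Rx is an assumed value, since, as noted, it is design-specific:

```python
# Quick numeric reading of eq. (14): the resonator's white noise floor
# modeled as Johnson noise of the motional resistance Rx (assumed value).
kB, T = 1.380649e-23, 300.0   # Boltzmann constant (J/K), temperature (K)
Rx = 10e3                      # equivalent circuit resistance at resonance, ohms (assumed)

two_Srw = 4 * kB * T * Rx      # 2*S_rw in V^2/Hz, per eq. (14)
print(f"2*S_rw = {two_Srw:.2e} V^2/Hz  (~{(two_Srw**0.5)*1e9:.1f} nV/sqrt(Hz))")
```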

F. Effects of Nonlinearities and Parametric Sensitivities in the Amplifier

For an LTI resonator with a feedback network free of parametric modulation, the expressions developed thus far for the phase noise spectrum, (12) and (13), are as accurate as (and, without resonator noise, equate to) the Leeson formula. Oscillator phase noise predictions based on Leeson's equation typically underestimate the phase noise below about 1 kHz, in part because of the effect of flicker noise. In general, the excess phase noise will be caused by non-LTI effects. In the electronics, we refer to these effects as parametric modulation, which can be driven by additive noise, amplitude noise, or effects such as temperature variation and vibration. We will model the noise caused by parametric modulation by including an additive term SΦ(ω) in the expression for feedback electronics phase. Important components of SΦ(ω) are the terms produced by amplitude noise, power supply variations, and temperature. The feedback amplifier will generally include transistors, and the transistor parameters are signal-dependent. Therefore, the phase shift imparted by the transistor amplifier will change with bias point or signal amplitude, as well as temperature.

Low-frequency additive noise (such as flicker) produces a low-frequency variation in the bias point, which in turn produces a low-frequency parametric modulation and a corresponding low-frequency phase modulation. This is responsible for the propagation of additive flicker noise to flicker noise in feedback phase, which produces a 1/f³ oscillator phase noise PSD. Considering the effects of additive noise-to-phase conversion and amplitude noise-to-phase conversion, the spectral density caused by parametric modulation is given by

$$S_{\Phi}(\omega) \cong \left(\frac{\partial\Phi}{\partial n}\right)^2 S_n(\omega) + \left(\frac{\partial\Phi}{\partial A}\right)^2 S_A(\omega). \qquad \text{(15)}$$

Note that the parametric modulation coefficients of (15) are easy to determine either from circuit simulation or by empirical measurement at the circuit level. To capture the phase noise caused by parametric modulation in the electronics, Figure 3 explicitly identifies a parametric phase shift in the amplifier, Φ(t). In this case, the spectrum of the feedback phase becomes

$$S_{\theta f}(\omega) = S_{\Psi}(\omega) + S_{\Phi}(\omega) \cong \frac{1}{A^2}\left[S_n(\omega-\omega_o)+S_n(\omega+\omega_o)\right] + \frac{2S_{rw}}{B^2} + S_{\Phi}(\omega). \qquad \text{(16)}$$

Note that (16) makes it clear that feedback phase noise is composed of the sum of injected phase noise (the phase produced by the LTV mapping of additive noise) and parametric phase noise. The associated oscillator phase noise of (11) generalizes to

$$S_{\theta R}(\omega) \cong S_{\theta f}(\omega)\left[1 + \left(\frac{\partial\phi(\omega_o)}{\partial\omega}\right)^{-2}\frac{1}{\omega^2}\right], \quad |\omega| > 0. \qquad \text{(17)}$$

To elaborate on the propagation of additive flicker noise to flicker feedback phase, note that additive flicker noise does not convert to appreciable phase noise without a nonlinear element coupling additive noise to phase noise (i.e., without parametric modulation). For example, let the additive flicker noise PSD be

$$S_n(\omega) \cong \frac{K_{fe}}{|\omega|}. \qquad \text{(18)}$$

In the absence of direct conversion of additive noise to phase noise, the corresponding feedback phase noise spectrum is

$$S_{\theta f}(\omega) \cong \frac{1}{A^2}\left[\frac{K_{fe}}{|\omega-\omega_o|}+\frac{K_{fe}}{|\omega+\omega_o|}\right] \cong 0, \quad 0 < |\omega| \ll \omega_o. \qquad \text{(19)}$$

Thus, in this case, we are insensitive to additive flicker noise because of the effect of the LTV mapping of additive noise to phase noise (the up-modulation). However, in the case in which the feedback electronics possesses parametric modulation that results in direct coupling between additive noise voltage and phase (for example, because of bias-dependent semiconductor devices in the electronics), there will be


a flicker component in the feedback phase noise spectrum given by

$$S_{\Phi}(\omega) \cong \frac{K_{fp}}{|\omega|}. \qquad \text{(20)}$$

At very low frequency, the phase noise spectrum becomes

$$S_{\theta R}(\omega) \cong \left(\frac{\partial\phi(\omega_o)}{\partial\omega}\right)^{-2}\frac{K_{fp}}{|\omega|^3}. \qquad \text{(21)}$$

Thus, the presence of direct coupling between voltage (or current) and phase will result in a sensitivity to flicker noise, producing a 1/f³ close-in slope to the phase noise spectrum. The corner frequency of the oscillator phase noise, using the variables introduced in this analysis, becomes

$$\omega_{c.osc} \cong \left(\frac{\partial\Phi(0)}{\partial n}\right)^2\frac{A^2}{2}\,\omega_{c.amp}, \qquad \text{(22)}$$

where ωc.amp is the corner frequency of the electronics noise, at which the flicker component (Kfe/|ω|) equals the white noise component (2Snw/A²). Eq. (22) represents a significant deviation from the convention: the corner frequency where 1/f³ oscillator noise begins to dominate is not equal to the amplifier corner frequency, but is scaled by the amplifier nonlinearity. Though many have noted that the oscillator corner can be significantly lower than the amplifier corner, this analysis offers a specific closed-form model. It should be noted that Kfp can be reduced by several techniques that amount to linearization through the use of degenerative feedback in the electronics. In addition, the electronics flicker coefficient Kfe is process-dependent, but for a given process, can be reduced through device scaling. The phase shift in the amplifier section can also fluctuate because of temperature, vibration, and other effects. Device models for fluctuation in the amplifier components and their resulting phase shifts can be inserted into (15)-(17) to identify the impact on phase noise.
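A small sketch of the scaling in (22) follows; the amplifier corner, carrier amplitude, and voltage-to-phase coupling are assumed values chosen only to show why the oscillator corner can sit far below the amplifier corner:

```python
# Illustrative use of eq. (22); all parameter values are assumptions.
f_c_amp = 10e3     # amplifier flicker corner, Hz (assumed)
A = 1.0            # carrier amplitude at the amplifier input, V (assumed)
dPhi_dn = 0.05     # voltage-to-phase coupling dPhi(0)/dn, rad/V (assumed)

f_c_osc = dPhi_dn**2 * (A**2 / 2) * f_c_amp   # eq. (22), linear in frequency
print(f"oscillator 1/f^3 corner ~ {f_c_osc:.1f} Hz "
      f"(vs. amplifier corner {f_c_amp:.0f} Hz)")
# With weak voltage-to-phase coupling, the oscillator corner lands orders
# of magnitude below the amplifier's own flicker corner.
```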


G. Effects of Nonlinearities and Parametric Sensitivities in the Resonator

In its simplest form, the oscillator phase noise includes only the additive electronics noise mapped into phase noise through the LTV transformation and passed through the resonator in a closed loop. This is expressed in (11). Equations (16) and (17) generalize this to include more noise sources feeding into the loop. The derivations have intentionally left the resonator phase slope at resonance as a general function. For mechanical resonators that have nonlinear response characteristics, the slope near ωo can become higher than that of a linear device and even bifurcate into a multivalued function [7]. It has been shown that some types of nonlinearity can actually improve the phase noise of the device as long as the operating point can be well-defined [4].

In addition to being affected by feedback phase, the phase noise of an oscillator can also be affected by variations in natural frequency. The natural frequency can be affected by environmental influences, such as temperature and vibration, or by a nonlinear coupling between oscillator amplitude and natural frequency.

To define the parametric noise caused by shifts in the resonator natural frequency, we must make a distinction between resonator oscillation frequency noise ωo(t) and resonator natural frequency noise ωn(t). The natural frequency noise is the resonator oscillation frequency noise in the absence of feedback phase noise. The resonator oscillation frequency noise can be expressed as (see (6))

$$\omega_o(t) \cong \omega_n(t) + \left(\frac{\partial\phi(\omega_o)}{\partial\omega}\right)^{-1}\phi(t). \qquad \text{(23)}$$

We can express the readout phase noise in a form that is explicit in feedback phase noise and natural frequency noise as follows:

$$\theta_R(t) = \Psi_n(t) + \left(\frac{\partial\phi(\omega_o)}{\partial\omega}\right)^{-1}\int_{-\infty}^{t}\phi(\tau)\,d\tau + \int_{-\infty}^{t}\omega_n(\tau)\,d\tau. \qquad \text{(24)}$$

Notice that this reduces to the expression for the LTI resonator when there is no natural frequency noise. With this effect, together with the resonator noise, the Leeson equation (12) is more generally written as

$$S_{\theta R}(\omega) \cong S_{\theta f}(\omega)\left[1+\left(\frac{\omega_o}{2Q\omega}\right)^2\right] + \frac{S_{\omega n}(\omega)}{\omega^2}. \qquad \text{(25)}$$

Mechanical resonators often exhibit amplitude-frequency sensitivity at high amplitudes, which can be due to materials, geometric, or even electrostatic effects [8]-[10]. Thus, stochastic variation of the resonator amplitude can convert directly to phase noise. In general, the phase noise contribution caused by amplitude-frequency coupling can be written as

$$S_{\omega n}(\omega) \cong \left(\frac{\partial\omega_n}{\partial X}\right)^2 S_X(\omega), \qquad \text{(26)}$$

where in this case X is the amplitude of the resonator. Typical values for mechanical nonlinearity are often quoted in terms of power and range from 10⁻⁹/μW for AT- and BT-cut quartz, to 10⁻¹¹/μW for SC-cut quartz [8]. Resonator amplitude noise will depend, in part, on the amplitude control approach. In the case in which an amplitude control loop is designed to control the oscillator output amplitude to a specific value, the amplitude noise is given by the amplitude noise injected by the oscillator feedback electronics, provided that we neglect noise added by the amplitude control circuitry. In this case, we can determine oscillator amplitude noise using the analytic signal formulation in a way that parallels our derivation of injected phase noise. Doing so results in an injected amplitude noise spectrum given by

$$S_{An}(\omega) = \frac{1}{2}\left[S_{ne}(\omega-\omega_o)+S_{ne}(\omega+\omega_o)\right]. \qquad \text{(27)}$$


The resulting natural frequency noise spectrum is given by

$$S_{\omega n}(\omega) \cong \left(\frac{\partial\omega_n}{\partial X}\right)^2\left(\frac{\partial X}{\partial A}\right)^2 S_{An}(\omega). \qquad \text{(28)}$$

Therefore, amplitude noise will increase oscillator phase noise in the case of a resonator with amplitude-frequency coupling, and white amplitude noise will increase 1/f² phase noise in this case. Finally, external influences on the resonator frequency, such as those described by [11], can be inserted into the phase noise predictions using (25). For example, if random vibration induces changes in the natural frequency of the resonator, we may write

$$S_{\omega n}(\omega) \cong \left(\frac{\partial\omega_n}{\partial g}\right)^2 S_g(\omega), \qquad \text{(29)}$$

where Sg(ω) is the vibration PSD in g²/Hz. This expression is consistent with well-established results [8], [11], [12]. Typical frequency sensitivities (Δω/ωo) are in the range of 10⁻¹⁰/g, and typical rms vibration levels inside a quiet building are on the order of 20 mg [11], resulting in substantial phase noise that cannot be neglected in the interpretation of real test data.
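The following sketch puts rough numbers to (29) and the vibration term of (25), using the sensitivity and vibration levels quoted above; the flat vibration-spectrum shape and the oscillator frequency are assumptions for illustration:

```python
# Rough vibration-induced phase noise via eqs. (29) and (25).
import numpy as np

fo = 10e6                    # oscillator frequency, Hz (assumed)
gamma = 1e-10                # fractional acceleration sensitivity per g (from text)
# 20 mg rms spread flat over an assumed 100-Hz band -> g^2/Hz:
Sg = (0.020)**2 / 100.0

dwn_dg = gamma * 2 * np.pi * fo              # dwn/dg in (rad/s)/g
S_wn = dwn_dg**2 * Sg                        # eq. (29), (rad/s)^2/Hz
for f in (1.0, 10.0, 100.0):                 # offset frequency, Hz
    S_theta = S_wn / (2 * np.pi * f)**2      # vibration term of eq. (25)
    print(f"offset {f:6.1f} Hz: vibration-induced ~ {10*np.log10(S_theta):6.1f} dBc/Hz")
# Even a "quiet building" produces close-in phase noise that can dominate
# a measurement if it is not accounted for.
```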


H. Single-Sideband Noise Spectral Density

Phase noise is typically expressed in terms of decibels per hertz with respect to the carrier power (dBc/Hz) using the single-sideband noise spectral density, defined per IEEE Standard 1139-2008 [13] as

$$L(\omega) = \frac{1}{2}S_{\theta R,SSB}(\omega), \qquad \text{(30)}$$

where SθR,SSB(ω) is the single-sideband spectrum and Sφ,SSB(ω) = 2Sφ(ω), ω > 0. Appendix C discusses the definitions in more detail. Hereafter, we replace ω by Δω in the phase noise expression to emphasize the fact that the frequency to which we refer is the offset from the carrier frequency. In the case in which the resonator is LTI, the electronics do not include parametric modulation, and we neglect resonator noise and consider only the electronics noise, the single-sideband noise spectral density (in dBc/Hz) is given by

$$L(\Delta\omega) \cong 10\log_{10}\!\left\{\frac{S_n(\Delta\omega-\omega_o)+S_n(\Delta\omega+\omega_o)}{A^2}\left[1+\left(\frac{\partial\phi(\omega_o)}{\partial\omega}\right)^{-2}\frac{1}{\Delta\omega^2}\right]\right\}. \qquad \text{(31)}$$

I. General Expression for Single-Sideband Noise Spectral Density

For the general case, the readout phase spectrum is given by

$$S_{\theta R}(\omega) \cong S_{\theta f}(\omega)\left[1+\left(\frac{\partial\phi(\omega_o)}{\partial\omega}\right)^{-2}\frac{1}{\omega^2}\right] + \frac{S_{\omega n}(\omega)}{\omega^2}, \qquad \text{(32)}$$

where Sθf was defined in (16). Assuming additive white resonator noise, stationary electronics noise, a non-LTI resonator, and feedback electronics with parametric modulation, the single-sideband noise spectral density is given by (33), where ωo is the oscillator frequency and 0 < |Δω| ≪ ωo.

Appendix A

Consider the case in which a random noise signal n(t) is added to a sinusoid of amplitude A. The only restriction placed on n(t) is that it is wide-sense stationary, and thus we can define its power spectrum. The noisy sinusoid is given by

$$s(t) = A\cos(\omega_o t) + n(t). \qquad \text{(A4)}$$

The analytic signal is given by

$$c(t) = A\cos(\omega_o t) + jA\sin(\omega_o t) + n(t) + j\hat{n}(t). \qquad \text{(A5)}$$

The analytic signal can be expressed in a polar form as

$$c(t) = Ae^{j\omega_o t} + \sqrt{n^2(t)+\hat{n}^2(t)}\;e^{j\tan^{-1}\left(\hat{n}(t)/n(t)\right)}. \qquad \text{(A6)}$$

We see that the analytic signal is the sum of two phasors, one corresponding to the signal and another corresponding to the noise. The signal phasor has amplitude A and is rotating counterclockwise at a constant rate of ωo. The noise phasor has random amplitude and random phase. The resultant phasor representing c(t) possesses both random amplitude modulation and random phase modulation.

Appendix B
Derivation of Feedback Phase Spectrum

The injected phase is given by

$$\Psi_n(t) \cong \frac{\hat{n}(t)}{A}\cos(\omega_o t) - \frac{n(t)}{A}\sin(\omega_o t). \qquad \text{(B1)}$$

The autocorrelation function (ACF) of the injected phase is given by

$$R_{\Psi n}(\tau) = \left\langle \left[\frac{\hat{n}(t)}{A}\cos(\omega_o t)-\frac{n(t)}{A}\sin(\omega_o t)\right]\left[\frac{\hat{n}(t+\tau)}{A}\cos(\omega_o(t+\tau))-\frac{n(t+\tau)}{A}\sin(\omega_o(t+\tau))\right]\right\rangle, \qquad \text{(B2)}$$

where the angle brackets denote the expected value operator. The ACF can be expanded as

$$R_{\Psi n}(\tau) = \frac{1}{A^2}\big\langle \hat{n}(t)\hat{n}(t+\tau)\cos\omega_o t\cos\omega_o(t+\tau) + n(t)n(t+\tau)\sin\omega_o t\sin\omega_o(t+\tau) - n(t)\hat{n}(t+\tau)\sin\omega_o t\cos\omega_o(t+\tau) - \hat{n}(t)n(t+\tau)\cos\omega_o t\sin\omega_o(t+\tau)\big\rangle. \qquad \text{(B3)}$$

We can eliminate terms that average to zero, yielding

$$R_{\Psi n}(\tau) = \frac{1}{2A^2}\cos\omega_o\tau\,\langle\hat{n}(t)\hat{n}(t+\tau)\rangle + \frac{1}{2A^2}\cos\omega_o\tau\,\langle n(t)n(t+\tau)\rangle + \frac{1}{2A^2}\sin\omega_o\tau\,\langle n(t)\hat{n}(t+\tau)\rangle - \frac{1}{2A^2}\sin\omega_o\tau\,\langle\hat{n}(t)n(t+\tau)\rangle. \qquad \text{(B4)}$$

It can be shown that [15]

$$\langle n(t)\hat{n}(t+\tau)\rangle = -\langle\hat{n}(t)n(t+\tau)\rangle. \qquad \text{(B5)}$$

Therefore, the cross-correlation terms add, and we have

$$R_{\Psi n}(\tau) = \frac{1}{2A^2}\cos\omega_o\tau\,\langle\hat{n}(t)\hat{n}(t+\tau)\rangle + \frac{1}{2A^2}\cos\omega_o\tau\,\langle n(t)n(t+\tau)\rangle + \frac{1}{A^2}\sin\omega_o\tau\,\langle n(t)\hat{n}(t+\tau)\rangle, \qquad \text{(B6)}$$

or, written in terms of correlation functions,

$$R_{\Psi n}(\tau) = \frac{1}{2A^2}R_{\hat{n}}(\tau)\cos\omega_o\tau + \frac{1}{2A^2}R_n(\tau)\cos\omega_o\tau + \frac{1}{A^2}R_{n\hat{n}}(\tau)\sin\omega_o\tau. \qquad \text{(B7)}$$

It can also be shown that [15]

$$S_{n\hat{n}}(\omega) = \begin{cases} -jS_{nn}(\omega), & \omega > 0 \\ \;\;\,jS_{nn}(\omega), & \omega < 0. \end{cases} \qquad \text{(B8)}$$

Now, note the following Fourier transform property applied to the autocorrelation function:

$$R_1(\tau)R_2(\tau) \leftrightarrow \frac{1}{2\pi}S_1(\omega)*S_2(\omega), \qquad \text{(B9)}$$

where the asterisk denotes the convolution operator. From (B7), we have

$$R_{\Psi n}(\tau) = \frac{1}{2A^2}R_n(\tau)\cos\omega_o\tau + \frac{1}{2A^2}R_{\hat{n}}(\tau)\cos\omega_o\tau + \frac{1}{A^2}R_{n\hat{n}}(\tau)\sin\omega_o\tau. \qquad \text{(B10)}$$

Using (B9) and (B10), we obtain

$$S_{\Psi n}(\omega) = \frac{1}{4\pi A^2}\mathrm{FT}\{R_{\hat{n}}(\tau)\}*\mathrm{FT}\{\cos\omega_o\tau\} + \frac{1}{4\pi A^2}\mathrm{FT}\{R_n(\tau)\}*\mathrm{FT}\{\cos\omega_o\tau\} + \frac{1}{2\pi A^2}\mathrm{FT}\{R_{n\hat{n}}(\tau)\}*\mathrm{FT}\{\sin\omega_o\tau\}. \qquad \text{(B11)}$$

This equation can be rewritten as

$$S_{\Psi n}(\omega) = \frac{1}{4\pi A^2}S_n(\omega)*\mathrm{FT}\{\cos\omega_o\tau\} + \frac{1}{4\pi A^2}S_n(\omega)*\mathrm{FT}\{\cos\omega_o\tau\} - \frac{j}{2\pi A^2}S_n(\omega)\,\mathrm{sgn}(\omega)*\mathrm{FT}\{\sin\omega_o\tau\}. \qquad \text{(B12)}$$

Note that

$$\mathrm{FT}\{\cos\omega_o\tau\} = \pi[\delta(\omega-\omega_o)+\delta(\omega+\omega_o)] \qquad \text{(B13)}$$

and

$$\mathrm{FT}\{\sin\omega_o\tau\} = j\pi[\delta(\omega+\omega_o)-\delta(\omega-\omega_o)]. \qquad \text{(B14)}$$

Simplifying (B12), we obtain

$$S_{\Psi n}(\omega) = \frac{1}{2\pi A^2}S_n(\omega)*\mathrm{FT}\{\cos\omega_o\tau\} - \frac{j}{2\pi A^2}S_n(\omega)\,\mathrm{sgn}(\omega)*\mathrm{FT}\{\sin\omega_o\tau\}. \qquad \text{(B15)}$$

Finally, we obtain the injected phase noise spectrum:

$$S_{\Psi n}(\omega) = \frac{1}{2A^2}S_n(\omega-\omega_o)\left[1-\mathrm{sgn}(\omega-\omega_o)\right] + \frac{1}{2A^2}S_n(\omega+\omega_o)\left[1+\mathrm{sgn}(\omega+\omega_o)\right]. \qquad \text{(B16)}$$

Because we are concerned with lower frequencies, we know that for our frequencies of interest, 0 < |ω| ≪ ωo,

$$S_{\Psi n}(\omega) \cong \frac{1}{A^2}\left[S_n(\omega-\omega_o)+S_n(\omega+\omega_o)\right]. \qquad \text{(B17)}$$

Appendix C

Figure C1 shows a representative baseband phase spectrum and the corresponding spectrum for a phase-modulated sinusoid having normalized amplitude. For analytical purposes, double-sided spectra are used in this manuscript unless specifically mentioned otherwise. Per the usual definition, the single-sided spectrum is defined only when Δω > 0 and is given by

$$S_{\phi,SSB}(\Delta\omega) = 2S_\phi(\Delta\omega). \qquad \text{(C5)}$$

The sideband term can be approximated as

$$S_{SB}(\omega) \cong \frac{1}{2\pi}S_\phi(\omega)*\pi[\delta(\omega-\omega_o)+\delta(\omega+\omega_o)] - \frac{j}{2\pi}S_\phi(\omega)\,\mathrm{sgn}(\omega)*j\pi[\delta(\omega+\omega_o)-\delta(\omega-\omega_o)], \qquad \text{(C6)}$$

so that

$$S_{SB}(\omega) \cong \frac{1}{2}\left[S_\phi(\omega+\omega_o)+S_\phi(\omega-\omega_o)\right]. \qquad \text{(C7)}$$

Because the phase spectrum will always be a low-pass function, we will treat it here as a strictly band-limited low-pass function for simplicity. Then from (C7) and Figure C1, it is clear that

$$S_{SB}(\omega_o+\omega) \cong \frac{1}{2}S_\phi(\omega). \qquad \text{(C8)}$$

[Figure C1: baseband phase spectrum Sφ(ω) and the corresponding sideband spectrum SSB(ω), with components (1/2)Sφ(ω - ω0) and (1/2)Sφ(ω + ω0).]

The noise-to-carrier ratio is

$$\mathrm{NCR}(\omega) \equiv \frac{S_{SB}(\omega_o+\omega)}{P_o}. \qquad \text{(C9)}$$

Note that because the signal amplitude in (C2) was normalized to unity, Po = 1/2. Finally, we note that to avoid any confusion, because frequency ω in the NCR expression represents the frequency deviation from the carrier frequency, we will replace ω by Δω in the NCR expression, so that

$$\mathrm{NCR} \cong S_\phi(\Delta\omega). \qquad \text{(C10)}$$

The IEEE Standard 1139 reports phase noise spectra in terms of the single-sided spectrum as [13]

$$L(\Delta\omega) = \frac{1}{2}S_{\phi,SSB}(\Delta\omega) = S_\phi(\Delta\omega), \quad \Delta\omega > 0. \qquad \text{(C11)}$$
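As a numerical spot-check of the Appendix B result (and of (3) in the main text), the following sketch synthesizes white additive noise, applies the LTV mapping of (2), and compares the measured phase PSD with the folded prediction of (B17); the parameters are arbitrary illustrative choices, and numpy and scipy are assumed available:

```python
# Numerical spot-check of (B17)/(3) for white additive noise.
import numpy as np
from scipy.signal import hilbert, welch

rng = np.random.default_rng(1)
fs, fo, A, sigma = 1.0e6, 100e3, 1.0, 1e-3
t = np.arange(1_000_000) / fs
n = sigma * rng.standard_normal(t.size)   # white noise, double-sided PSD sigma^2/fs

# Injected phase via the LTV mapping of eq. (2).
psi = (np.imag(hilbert(n)) / A) * np.cos(2*np.pi*fo*t) \
      - (n / A) * np.sin(2*np.pi*fo*t)

f, S_meas = welch(psi, fs=fs, nperseg=1 << 15)   # one-sided PSD estimate
low = (f > 1e3) & (f < 50e3)                     # offsets well below fo

# Eq. (3): both translated noise sidebands land at baseband, so the
# double-sided phase PSD is 2*(sigma^2/fs)/A^2; doubled again for the
# one-sided estimate that welch returns.
S_pred = 2 * (2 * sigma**2 / fs) / A**2
print("measured:", S_meas[low].mean(), " predicted:", S_pred)
```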

Acknowledgments The authors wish to thank The Charles Stark Draper Laboratory, Inc. for supporting this work, and especially Jeff Lozow of Draper Laboratory for verifying the derivation in Appendix B.


References

[1] Leeson, D.B., "A Simple Model of Feedback Oscillator Noise Spectrum," Proc. IEEE, Vol. 54, No. 2, 1966, pp. 329-330.
[2] Hajimiri, A. and T.H. Lee, "A General Theory of Phase Noise in Electrical Oscillators," IEEE J. Solid-State Circuits, Vol. 33, No. 2, 1998, pp. 179-194.
[3] Rubiola, E., Phase Noise and Frequency Stability in Oscillators, Cambridge University Press, New York, NY, 2009.
[4] Greywall, D.S., B. Yurke, P.A. Busch, A.N. Pargellis, and R.L. Willett, "Evading Amplifier Noise in Nonlinear Oscillators," Phys. Rev. Lett., Vol. 72, No. 9, 1994, pp. 2992-2995.
[5] Gabrielson, T., "Mechanical-Thermal Noise in Micromachined Acoustic and Vibration Sensors," IEEE Trans. Electron Dev., Vol. 40, No. 5, 1993, pp. 903-909.
[6] Nguyen, C., "Micromechanical Resonators for Oscillators and Filters," Proceedings, IEEE Int. Ultrasonics Symp., Seattle, WA, 1995, pp. 489-499.
[7] Nayfeh, A.H. and D. Mook, Nonlinear Oscillations, Wiley, New York, NY, 1995.
[8] Walls, F. and J. Gagnepain, "Environmental Sensitivities of Quartz Oscillators," IEEE Trans. Ultrason. Ferroelectr. Freq. Control, Vol. 39, No. 2, 1992, pp. 241-249.
[9] Agarwal, M., K. Park, B. Kim, M. Hopcroft, S.A. Chandorkar, R.N. Candler, C.M. Jha, R. Melamud, T.W. Kenny, and B. Murmann, "Amplitude Noise-Induced Phase Noise in Electrostatic MEMS Resonators," Solid-State Sensor, Actuator, and Microsystems Workshop, Hilton Head, SC, 2006, pp. 90-93.
[10] Kusters, J., "The SC Cut Crystal - An Overview," Proceedings, Ultrasonics Symp., 1981, pp. 402-409.
[11] Vig, J., "Quartz Crystal Resonators and Oscillators," Available [Online]: http://www.ieee-uffc.org/frequency_control/teaching.asp, February 2010.
[12] Filler, R., "The Acceleration Sensitivity of Quartz Crystal Oscillators: A Review," IEEE Trans. Ultrason. Ferroelectr. Freq. Control, Vol. 35, No. 3, 1988, pp. 297-305.
[13] IEEE Standard Definitions of Physical Quantities for Fundamental Frequency and Time Metrology - Random Instabilities, IEEE Standard 1139-2008, 2008, pp. c1-35.
[14] Ellinger, F., Radio Frequency Integrated Circuits and Technologies, Springer, New York, NY, 2007.
[15] Papoulis, A., Probability, Random Variables, and Stochastic Processes, McGraw-Hill, New York, NY, 1965.


Paul A. Ward is Laboratory Technical Staff (highest technical tier) at The Charles Stark Draper Laboratory. He has extensive experience in the development of high-performance electronics for a wide array of systems. He has been with Draper for 25 years and has developed innovative circuits and signal processing to support precision signal references, fiber-optic gyroscopes, Microelectromechanical System (MEMS) gyroscopes and accelerometers, strategic radiation-hard inertial instruments, and other instruments and systems. He received Draper's Distinguished Performance Award in both 1994 and 1997, as well as Draper's Best Patent Award in 1996, 1997, and 1998. Mr. Ward has managed Draper's Microelectronics group, Analog and Power Systems group, and Mixed-Signal Control Systems group. He currently holds 22 U.S. patents with several in application and has coauthored numerous papers. Mr. Ward holds B.S. and M.S. degrees in Electrical Engineering from Northeastern University.

Amy E. Duwel is Group Leader for RF and Communications at The Charles Stark Draper Laboratory after many years managing Draper's MEMS Group. Her technical interests focus on microscale energy transport and on the dynamics of MEMS resonators in application as inertial sensors, RF filters, and chemical detectors. Dr. Duwel received a B.A. in Physics from the Johns Hopkins University and M.S. and Ph.D. degrees in Electrical Engineering and Computer Science from the Massachusetts Institute of Technology (MIT).


Due to the limited lifespan of artificial joints and the ineffectiveness of current drug regimens, patients below the age of 65 with end-stage osteoarthritis often live with severe pain and disability. Researchers from Cytex Therapeutics, Draper, and MIT are collaborating on the project under a grant from the National Institutes of Health in the hope of developing treatments that replace damaged tissue at the joint surface with a living cartilage tissue substitute. Cytex and MIT began working on the project in 2007, and Draper joined in 2010. Current cell-based cartilage repair procedures are limited to small defects. The researchers believe that using a mechanically functional biomaterial scaffold may enable repair of the entire joint surface while also allowing the load-bearing associated with normal daily activities. The researchers also expect that using a live cell component may allow the new cartilage tissue to maintain itself. This may enable improvement over current artificial joint prostheses, which tend to wear over time, resulting in an effective life span of approximately 10 to 20 years. Young patients with end-stage osteoarthritis stand to benefit the most from this work, which could also bring down the high cost associated with treating end-stage osteoarthritis by reducing the number of revision surgeries and the need for drugs to treat pain. The ability to postpone replacement of an osteoarthritic joint for 5 to 10 years would be welcomed by both patients and payers. The next step is to demonstrate the long-term efficacy of this type of implant in animal models. Cytex Therapeutics expects that the work will be ready for human clinical trials in the 2015 to 2020 time frame.


In Vitro Generation of Mechanically Functional Cartilage Grafts Based on Adult Human Stem Cells and 3D-Woven poly(ε-caprolactone) Scaffolds
Piia K. Valonen, Franklin T. Moutos, Akihiko Kusanagi, Matteo G. Moretti, Brian O. Diekman, Jean F. Welter, Arnold I. Caplan, Farshid Guilak, and Lisa E. Freed
Copyright ©2009 by Elsevier Ltd. All rights reserved. Published in Biomaterials, Vol. 31, January 19, 2010, pp. 2193-2200.

Abstract

Three-dimensionally (3D) woven poly(ε-caprolactone) (PCL) scaffolds were combined with adult human mesenchymal stem cells (hMSC) to engineer mechanically functional cartilage constructs in vitro. The specific objectives were to: (1) produce PCL scaffolds with cartilage-like mechanical properties, (2) demonstrate that hMSCs formed cartilage after 21 days of culture on PCL scaffolds, and (3) study the effects of scaffold structure (loosely vs. tightly woven), culture vessel (static dish vs. oscillating bioreactor), and medium composition (chondrogenic additives with or without serum). Aggregate moduli of 21-day constructs approached normal articular cartilage for tightly woven PCL cultured in bioreactors, were lower for tightly woven PCL cultured statically, and lowest for loosely woven PCL cultured statically (p < 0.05). Construct DNA, total collagen, and glycosaminoglycans (GAG) increased in a manner dependent on time, culture vessel, and medium composition. Chondrogenesis was verified histologically by rounded cells within a hyaline-like matrix that immunostained for collagen type II but not type I. Bioreactors yielded constructs with higher collagen content (p < 0.05) and more homogenous matrix than static controls. Chondrogenic additives yielded constructs with higher GAG (p < 0.05) and earlier expression of collagen II mRNA if serum was not present in medium. These results show the feasibility of functional cartilage tissue engineering from hMSC and 3D-woven PCL scaffolds.

Introduction

Degenerative joint disease affects 20 million adults with an economic burden of over $40 billion per year in the U.S. [1]. Once damaged, adult human articular cartilage has a limited capacity for intrinsic repair [2] and hence injuries can lead to progressive damage, joint degeneration, pain, and disability. Cell-based repair of small cartilage defects in the knee joint was first demonstrated clinically 15 years ago [3]. Many cartilage tissue engineering studies use chondrocytes as the cell source [4], [5]; however, this approach is challenged by the limited supply of chondrocytes, their limited regenerative potential due to age, disease, dedifferentiation during in vitro expansion, and the morbidity caused by chondrocyte harvest [6]. Therefore, other studies use mesenchymal stem cells (MSC) as the cell source [7], [8], as these stem cells can be harvested safely by marrow biopsy, readily expanded in vitro, and selectively differentiated into chondrocytes [9]. Clinical translation of tissue engineered cartilage is currently limited by inadequate construct structure, mechanical function, and integration [2], [10]. Currently, most tissue engineered constructs for articular cartilage repair possess cartilage-mimetic material properties only after long-term (e.g., 1-6 months) in vitro culture [5], [11], [12]. This lack of early construct mechanical function implies a need for new tissue engineering technologies such as scaffolds and bioreactors [13], [14]. For example, the stiffness and strength of previously used scaffolds were several orders of magnitude below normal articular cartilage, particularly in tension [12], [15], [16]. Likewise, mechanical properties of engineered cartilage produced using these scaffolds and hMSC were at least one order of magnitude below values reported for normal cartilage despite prolonged in vitro culture [17], [18].

The goal of the present study was to produce mechanically functional tissue engineered cartilage from adult hMSC and 3D-woven PCL scaffolds in 21 days in vitro. Effects of (1) scaffold structure (loosely vs. tightly woven PCL); (2) culture vessel (static dish vs. oscillating bioreactor); and (3) medium composition (chondrogenic additives with or without serum) on construct mechanical, biochemical, and molecular properties were quantified. A 3D weaving method [19] was applied to multifilament PCL yarn to create scaffolds with cartilage-mimetic mechanical properties. The PCL was selected because it is an FDA-approved, biocompatible material [20], [21] that supports chondrogenesis [22] and degrades slowly (i.e., less than 5% degradation at 2 years, as measured by mass loss) into byproducts that are entirely cleared from the body [23], [24]. The 3D-woven PCL scaffolds were seeded with hMSC mixed with Matrigel® such that gel entrapment enhanced cell seeding efficiency [25] and also helped to maintain spherical cell morphology for the promotion of chondrogenesis [26]. The hMSC-PCL constructs were cultured either in static dishes or in an oscillatory bioreactor that provided bidirectional percolation of culture medium directly through the construct [27]. Bioreactors were studied because these devices are known to enable functional tissue engineering due to the combined effects of enhanced mass transport and mechanotransduction [14], [28]-[34]. Bidirectional rather than unidirectional perfusion was selected because the latter yielded different conditions at opposing construct upper and lower surfaces, resulting in spatial concentration gradients and inhomogeneous tissue development [35], [36].


Three different culture media were tested as follows. Differentiation medium 1 (DM1) containing serum and chondrogenic additives (TGFβ, ITS+ Premix, dexamethasone, ascorbic acid, proline, and nonessential amino acids) was selected based on our previous work [8], [17], [37]. Differentiation medium 2 (DM2) containing chondrogenic additives but not serum was selected based on reports that serum inhibited chondrogenesis by synoviocytes [38], [39] and caused hypertrophy of chondrocytes [40]. Control medium (CM) without chondrogenic additives was tested to assess spontaneous chondrogenic differentiation in hMSC-PCL constructs.

Materials and Methods
All tissue culture reagents were from Invitrogen (Carlsbad, CA) unless otherwise specified.

Poly(ε-caprolactone) (PCL) Scaffolds
A custom-built loom [19] was used to weave PCL multifilament yarns (24 µm diameter per filament; 44 filaments/yarn; Grilon KE-60, EMS/Griltech, Domat, Switzerland) in three orthogonal directions (x-warp, y-weft, and a vertical z-direction) (Figure 1A). A loosely woven scaffold was made with widely spaced warp yarns (8 yarns/cm), closely spaced weft yarns (20 yarns/cm), and two z-layers between each warp yarn (Figure 1B). A tightly woven scaffold was made with closely spaced warp and weft yarns (24 and 20 yarns/cm, respectively) and one z-layer between each warp yarn (Figure 1C). These weaving parameters, in conjunction with fiber size and the density of PCL (1.145 g/cm³) [41], determined scaffold porosity and pore dimensions. The loosely woven scaffold had a porosity of 68 ± 0.3%, approximate pore dimensions of 850 µm × 1100 µm × 100 µm, and an approximate thickness of 0.9 mm. The tightly woven scaffold had a porosity of 61 ± 0.2%, approximate pore dimensions of 330 µm × 260 µm × 100 µm, and an approximate thickness of 1.3 mm. Prior to cell culture, scaffolds were immersed in 4 N NaOH for 18 h, thoroughly rinsed in deionized water, dried, ethylene oxide sterilized, and punched into 7-mm diameter discs using dermal punches (Acuderm Inc., Ft. Lauderdale, FL).

Human Mesenchymal Stem Cells
The hMSC were derived from bone marrow aspirates obtained from a healthy middle-aged adult male at the Hematopoietic Stem Cell Core Facility at Case Western Reserve University. Informed consent was obtained, and an Institutional Review Board-approved aspiration procedure was used [42]. Briefly, the bone marrow

Figure 1. The 3D-woven PCL scaffold. (A) schematic; (B-C) scanning electron micrographs of (B) loosely and (C) tightly woven scaffolds. Scale bars: 1 mm.

sample was washed with Dulbecco’s modified Eagle’s medium (DMEM-LG, Gibco) supplemented with 10% fetal bovine serum (FBS) from a selected lot [9]. The sample was centrifuged at 460×g on a preformed Percoll density gradient (1.073 g/mL) to isolate the mononucleated cells. These cells were resuspended in serum-supplemented medium and seeded at a density of 1.8 × 10⁵ cells/cm² in 10-cm diameter plates. Nonadherent cells were removed after 4 days by changing the medium. For the remainder of the cell expansion phase, the medium was additionally supplemented with 10 ng/mL of recombinant human fibroblast growth factor-basic (rhFGF-2, Peprotech, Rocky Hill, NJ) [43], and was replaced twice per week. The primary culture was trypsinized after approximately 2 weeks and then cryopreserved using Gibco Freezing Medium.

Tissue Engineered Constructs
The hMSC were thawed and expanded by approximately 10-fold during a single passage in which cells were plated at 5500 cells/cm² and cultured in DMEM-LG supplemented with 10% FBS, 10 ng/mL of rhFGF-2, and 1% penicillin-streptomycin-fungizone. Medium was completely replaced every 2-3 days for 7 days. Multipotentiality was verified for the expanded hMSC by inducing differentiation into the chondrogenic lineage in pellet cultures of passage 2 (P2) cells [44] and into adipogenic and osteogenic lineages in monolayer culture [45]. The PCL scaffolds (a total of n = 15-20 per group, in three independent studies) were seeded with P2 hMSC by mixing cells in growth factor-reduced Matrigel® (B&D Biosciences) while working at 4°C, and pipetting the cell-gel mixture evenly onto both surfaces of the PCL scaffold. Each 7-mm diameter, 0.9-mm thick loosely woven scaffold was seeded with a cell pellet (1 million cells in 10 µL) mixed with 25 µL of Matrigel®, whereas each 7-mm diameter, 1.3-mm thick tightly woven scaffold was seeded with a similar cell pellet mixed with 35 µL of Matrigel®. Freshly seeded constructs were placed in 24-well plates (one construct per well), placed at 37°C in a humidified, 5% CO₂/room air incubator for 30 min to allow Matrigel® gelation, and then 1 mL of medium was added to each well. After 24 h, constructs were transferred either into 6-well plates (one construct per well containing 9 mL of medium) and cultured statically, or into bioreactor chambers as described previously [27]. Briefly, each construct allocated to the bioreactor group was placed in a custom-built poly(dimethylsiloxane) (PDMS) chamber that was connected to a loop of gas-permeable silicone rubber tubing (1/32-in wall thickness, Cole Parmer, Vernon Hills, IL). Each loop was then mounted on a supporting disc, and medium (9 mL) was added, such that the construct was submerged in medium in the lower portion of the loop and a gas bubble was present in the upper portion of the loop [27]. Multiple loops were mounted on an incubator-compatible base that slowly oscillated the chamber about an arc of ~160 deg. Importantly, bioreactor oscillation directly applied bidirectional medium percolation and mechanical stimulation to the upper and lower surfaces of the discoid constructs. Three different medium compositions (DM1, DM2, and CM) were studied. Differentiation medium 1 (DM1) was DMEM-HG supplemented with 10% FBS, 10 ng/mL hTGFβ-3 (PeproTech, Rocky Hill, NJ), 1% ITS+ Premix (B&D Biosciences),


10⁻⁷ M dexamethasone (Sigma), 50 mg/L ascorbic acid, 0.4 mM proline, 0.1 mM nonessential amino acids, 10 mM HEPES, 100 U/mL penicillin, 100 U/mL streptomycin, and 0.25 µg/mL of fungizone. Differentiation medium 2 (DM2) was identical to DM1 except without FBS. Control medium (CM) was identical to DM1 except without chondrogenic additives (TGFβ-3, ITS+ Premix, and dexamethasone). Media were replaced at a rate of 50% every 3-4 days, and constructs were harvested after 1, 7, 14, and 21 days.

Mechanical Testing
Confined compression tests [46] were performed on 3-mm diameter cylindrical test specimens, cored from the centers of 21-day constructs or acellular (initial) scaffolds, using an ELF3200 materials testing system (Bose-Enduratec, Framingham, MA). Specimens (n = 5-6 per group) were placed in a 3-mm diameter confining chamber within a bath of phosphate buffered saline (PBS), and compressive loads were applied using a solid piston against a rigid porous platen (porosity of 50%, pore size of 50-100 µm). Following equilibration of a 10-gf tare load, a step compressive load of 30 gf was applied to the sample and allowed to equilibrate for 2000 s. Aggregate modulus (HA) and hydraulic permeability (k) were determined numerically by matching the solution for axial strain (εz) to the experimental data for all creep tests using a two-parameter, nonlinear least-squares regression procedure [47], [48]. Unconfined compression tests were done by applying strains, ε, of 0.04, 0.08, 0.12, and 0.16 to the specimens (n = 5-6 per group) in a PBS bath after equilibration of a 2% tare strain. Strain steps were held constant for 900 s, which allowed the specimens to relax to an equilibrium level. Young’s modulus was determined by linear regression on the resulting equilibrium stress-strain plot.

Histology and Immunohistochemistry
Histological analyses were performed after specimens (n = 2 constructs per time point per group) were fixed in 10% neutral buffered formalin for 24 h at 4°C, post-fixed in 70% ethanol, embedded in paraffin, and sectioned both en face and in cross section. Sections 5 µm thick were stained with safranin-O/fast green for proteoglycans. For immunohistochemical analysis, 20-µm thick sections were deparaffinized in xylene and rehydrated. To efficiently expose epitopes, the sections were incubated with 700 U/mL bovine testicular hyaluronidase (Sigma) and 2 U/mL pronase XIV (Sigma) for 1 h at 37°C. Double immunostaining for collagen type I (mouse monoclonal antibody, ab6308, Abcam Inc., Cambridge, MA) and collagen type II (mouse monoclonal antibody, CII/CI, Hybridoma Bank, University of Iowa) was performed using

an Avidin/Biotin kit (Vector Lab, Burlingame, CA). Control sections were incubated with PBS/1% bovine serum albumin (Sigma) without primary antibody.

Biochemical Analyses
Standard assays for DNA, ortho-hydroxyproline (OHP, an index of total collagen content), and GAG (an index of proteoglycan content) were performed (n = 3-4 bisected constructs per time point per group). Values obtained for all 1-day constructs produced from tightly woven PCL scaffolds in DM1, DM2, and CM groups were pooled, averaged, and used as a basis for comparison for subsequent (7-, 14-, and 21-day) constructs produced from tightly woven scaffolds. After measuring wet weight, constructs were diced and digested in papain for 12 h at 60°C. DNA was measured using the Quant-iT™ PicoGreen® dsDNA assay (Molecular Probes, Eugene, OR). GAG was measured using the Blyscan™ sulphated GAG assay (Biocolor, Carrickfergus, Northern Ireland). To measure total collagen, papain digests were hydrolyzed in HCl at 110°C overnight, dried in a desiccating chamber, reconstituted in acetate-citrate buffer, and filtered through activated charcoal, and OHP was quantified by Ehrlich’s reaction [49]. Briefly, hydrolysates were oxidized with chloramine-T, treated with dimethylaminobenzaldehyde, and read at 540 nm against a trans-4-hydroxy-L-proline standard curve; total collagen was calculated using a ratio of 10 µg of collagen per 1 µg of 4-hydroxyproline. The conversion factor of 10 was selected because immunohistochemical staining showed that type II collagen represented virtually all of the collagen present in the constructs [50], [51].

Reverse Transcriptase Polymerase Chain Reaction (RT-PCR)
The presence of two cartilage biomarkers was tested: Sox-9, one of the earliest markers for MSC differentiation toward the chondrocytic lineage, preceding the activation of collagen II [52], and collagen type II, a chondrocyte-related gene. Collagen type I provided a marker for undifferentiated MSC, and GAPDH provided an intrinsic control [53]. Total RNA was isolated from hMSC prior to and after culture on PCL scaffolds (n = 3-4 bisected constructs per group per time point) using a Qiagen RNeasy mini kit. DNase-treated RNA was used to make first-strand cDNA with the SuperScript III First-Strand Synthesis system for RT-PCR. The cDNA was amplified in an iCycler Thermal Cycler 582BR (Bio-Rad, Hercules, CA) using the primer sequences given in Table 1. The cycling conditions were as follows: 2 min at 94°C; 30 cycles of (30 s at 94°C, 45 s at 58°C, 1 min at 72°C); and 5 min at 72°C. The PCR products were analyzed by means of 2% agarose gel electrophoresis containing ethidium bromide (E-Gel® 2%, Invitrogen).
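The confined-compression creep analysis above amounts to a two-parameter nonlinear least-squares fit. The sketch below illustrates such a fit in Python, assuming the classical biphasic series solution for creep of a confined layer; the cited regression procedure [47], [48] may differ in detail, and the data here are synthetic placeholders.

```python
import numpy as np
from scipy.optimize import curve_fit

def creep_strain(t, HA, k, sigma0, h, n_terms=50):
    """Axial creep strain of a confined biphasic layer of thickness h (m)
    under a step stress sigma0 (Pa); HA is the aggregate modulus (Pa)
    and k the hydraulic permeability (m^4/(N*s))."""
    n = np.arange(n_terms)[:, None] + 0.5                  # (n + 1/2)
    decay = ((n * np.pi) ** 2) * HA * k / h**2             # modal decay rates, 1/s
    series = np.sum(np.exp(-decay * t) / (n * np.pi) ** 2, axis=0)
    return (sigma0 / HA) * (1.0 - 2.0 * series)

# 30-gf step load on a 3-mm-diameter specimen of measured thickness h
h = 1.3e-3                                                 # specimen thickness, m
sigma0 = 0.030 * 9.81 / (np.pi * 1.5e-3 ** 2)              # ~41.6 kPa step stress

t_data = np.linspace(0.0, 2000.0, 200)                     # creep duration, s
eps_data = creep_strain(t_data, 0.4e6, 5e-15, sigma0, h)   # synthetic "measured" data

(HA_fit, k_fit), _ = curve_fit(
    lambda t, HA, k: creep_strain(t, HA, k, sigma0, h),
    t_data, eps_data, p0=(0.2e6, 1e-14))
```

Young’s modulus from the unconfined tests is simpler still: a linear regression of the equilibrium stress-strain pairs.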

Table 1. The Sequence of PCR Primers (Sense and Antisense, 5’ to 3’).

Primer                      Sense                   Antisense               Product size
Collagen type II (Col II)   atgattcgcctcggggctcc    tcccaggttctccatctctg    260 bp
Sox-9                       aatctcctggaccccttcat    gtcctcctcgctctccttct    198 bp
Collagen type I (Col I)     gcatggccaagaagacatcc    cctcgggtttccacgtctc     300 bp


Statistical Analysis
Data were calculated as mean ± standard error and analyzed using multiway analysis of variance (ANOVA) in conjunction with Tukey’s post hoc test using Statistica (v. 7, StatSoft, Tulsa, OK, USA). Values of p < 0.05 were considered statistically significant.

Results
Effects of Scaffold Structure
Scaffold structure did not have any significant effect on the amounts of DNA, total collagen, or GAG in constructs cultured statically for 21 days in DM1 (Table 2, Group A vs. B). In contrast, scaffold structure significantly impacted the aggregate modulus (HA) and Young’s modulus (E) of initial (acellular) scaffolds and cultured constructs (Figure 2). Acellular loosely woven scaffolds exhibited lower (p < 0.05) mechanical properties (HA of 0.18 ± 0.011 MPa and E of 0.042 ± 0.004 MPa) than acellular tightly woven scaffolds (HA of 0.46 ± 0.049 MPa and E of 0.27 ± 0.017 MPa). Likewise, 21-day constructs based on loosely woven scaffolds exhibited lower (p < 0.05) mechanical properties (HA of 0.16 ± 0.006 MPa and E of 0.064 ± 0.004 MPa) than constructs based on tightly woven scaffolds (HA of 0.37 ± 0.030 MPa and E of 0.41 ± 0.023 MPa) (Figure 2). As compared with acellular scaffolds, the 21-day constructs exhibited similar aggregate modulus and higher (p < 0.05) Young’s modulus.

Effects of Culture Vessel
Aggregate modulus of 21-day constructs based on tightly woven PCL and cultured in bioreactors was higher (p < 0.05) than that measured for otherwise similar constructs cultured statically (Figure 2, Table 2). Construct amounts of DNA, total collagen, and


GAG increased in a manner dependent on time, culture vessel, and medium composition. DNA and GAG contents were similar in 21-day constructs cultured in bioreactors and statically (Figures 3A and C). Total collagen content was 1.5-fold higher (p < 0.05) in bioreactors compared with static cultures (Figure 3B; Table 2, Group B vs. C). Bioreactors yielded more homogeneous tissue development than static cultures based on the qualitative histological appearance of cross sections (Figure 4, row I). Chondrogenesis was demonstrated histologically by rounded cells within a hyaline-like matrix that immunostained strongly and homogeneously positive for collagen type II. Bioreactors yielded constructs in which Col II immunostaining was more pronounced than in static cultures (Figure 4, row IV). Immunostaining for Col I was minimal under all conditions tested. The RT-PCR analysis showed that the type of culture vessel did not affect the temporal expression of mRNAs for collagen type II (Figure 5A), Sox-9 (Figure 5B), collagen type I (not shown), and GAPDH (not shown).

Effects of Medium Composition
DNA content was 1.4-fold higher (p < 0.05) in 21-day constructs cultured in DM1 compared with DM2 (Figure 3A; Table 2, Group B vs. D). Also, total collagen content was 1.8-fold higher (p < 0.05) in 21-day constructs cultured in DM1 compared with DM2 (Figure 3B). Conversely, GAG content was lower (42% as high, p < 0.05) in 21-day constructs cultured in DM1 compared with DM2 (Figure 3C). Likewise, the GAG/DNA ratio was lower (30% as high, p < 0.05) in constructs cultured in DM1 compared with DM2.

[Flattened table fragment from extraction: achievable changes in orbital elements, with notes including e: > 0 for e = 0; a/i coupled for e = 0 or i = 90 deg; Ω undefined for i = 0 deg; and ω undefined for i = 0 deg and e = 0.]

Figure 7. Orbit-average power and q/m vs. capacitor potential. [Plots: orbit-average power and approximate orbit-average q/m (C/kg), each vs. capacitor potential (V).]

Lorentz Augmented Orbit Maneuvers and Limitations

Maneuver Limitations
A Lorentz augmented orbit cannot experience arbitrary changes for all initial orbital elements. In certain regimes, as evidenced by Eq. (10), changes in orbital elements are tightly coupled. This coupling stems from the basic physics of the Lorentz force. The direction of the force is set by the magnetic field and the velocity of the spacecraft with respect to that magnetic field, neither of which can be altered by the spacecraft control system.

The Lorentz force is at its strongest in LEO. The strength of the dipole component of the magnetic field drops off with the cube of radial distance. Additionally, spacecraft velocities with respect to the magnetic field tend to be larger in LEO. A geostationary spacecraft has no velocity with respect to the magnetic field and thus experiences no Lorentz force.

Example Maneuver: Low-Earth-Orbit Inclination Change and Orbit Raising
The minimum inclination a spacecraft can be launched into is equal to the latitude of its launch site. For a U.S. launch, this minimum inclination is generally 28.5 deg, the latitude of Cape Canaveral, FL. However, for certain missions, equatorial orbits are desirable. The plane change between i = 28.5 deg and i = 0 deg is expensive in terms of ∆V and requires either a launch vehicle upper stage or a significant expenditure of spacecraft resources. We develop a


control algorithm to use the Lorentz force to perform this inclination change without the use of propellant, while simultaneously raising the orbital altitude.

This maneuver is primarily concerned with inclination change in circular orbit. Equation (10) describes the relevant dynamics. As energy change and inclination change are coupled in this situation, Eq. (8) describes both the energy and plane changes. In this circular case, only the radial component of the magnetic field affects the energy and inclination. For the inclination to decrease, the energy must increase. With these facts, we develop a bang-off controller based on the argument of latitude and the sign of the radial component of the field. Using q/m < 0, the term cos u (B · r̂) must be negative. We know that (B · r̂) is positive below the magnetic equator (zones I, II, III, and IV) and negative above the magnetic equator (zones V, VI, VII, and VIII). Thus, for northward motion of the satellite (cos u > 0), the charge should be nonzero within zones V-VIII. For southward satellite motion (cos u < 0), nonzero charge is applied in zones I-IV. In other words, the charge should be off for the first quadrant of the orbit, on for the second quadrant, off for the third, and on for the fourth. This control can be represented as

    q/m = -(q/m)max   if cos u > 0 and (B · r̂) < 0
    q/m = 0           if cos u > 0 and (B · r̂) > 0
    q/m = -(q/m)max   if cos u < 0 and (B · r̂) > 0
    q/m = 0           if cos u < 0 and (B · r̂) < 0          (13)

where -(q/m)max is the largest available negative charge-to-mass ratio. However, when this simple quadrant control is used, the eccentricity of the orbit tends to grow undesirably large. Maintaining an identically zero eccentricity is impossible: any charge on a circularly orbiting spacecraft causes an increase in the eccentricity. However, if the oblateness of the Earth is considered, the eccentricity remains bounded by a small value. Figure 8 shows this result, plotting a short simulation of an orbit under the quadrant controller. The blue line shows the growth of eccentricity with J2 absent, while the green line shows the bounding of e under the influence of J2. The effect of J2 on the eccentricity of the orbit is larger than that of the Lorentz force. The J2 perturbation does not affect the overall performance of the maneuver, though. The Lorentz force depends on the velocity of the spacecraft, which changes by only a small amount due to the presence of J2. The presence of J2 only creates a small periodic disturbance to both a and e.

Figure 8. Effect of Earth oblateness on the eccentricity under the quadrant control. [Plot: eccentricity vs. time (days), with J2 present and J2 absent.]

Figure 9. Orbital elements for the LEO plane change and orbit-raising maneuver. [Panels: a) semimajor axis a (km), b) inclination i (deg), c) eccentricity e, and d) right ascension Ω (deg), each vs. time (days).]

Figure 9 shows the results of a simulation using the e-limiting quadrant method. The simulation begins with a 600-km altitude circular orbit. The charge-to-mass ratio is q/m = -0.007 C/kg. A full model of J2 is used. The simulation lasts until an equatorial orbit is reached. The IGRF95 magnetic field model is used to 10th degree and order. Figure 9a shows the increase in semimajor axis given by the quadrant method. The initial 600-km orbit is raised to a 724.0-km circular orbit, an increase of 124 km. Figure 9b shows the desired decrease in inclination. Because the magnetic equator does not align with the true equator, the inclination can be brought to exactly zero. Zero inclination is reached in about 340 days with this value of charge. Figure 9c shows the eccentricity. The eccentricity is bounded by the J2 perturbation to a small value. Finally, Figure

9d shows the RAAN. For a negative q/m in LEO, the RAAN always decreases; however, in this simulation, the effect of J2 on RAAN dominates. If the aforementioned simulated maneuver is performed using conventional impulsive thrust, it requires a ∆V of 3.75 km/s. Thus, using LAOs could significantly increase the payload ratio of a spacecraft that needed such a maneuver. However, this mass savings comes at a cost of time spent, the mass of the capacitor and power system, and electrical power consumed during the maneuver.
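The switching law in Eq. (13) reduces to two sign tests per control step. The sketch below is a minimal Python illustration of that bang-off quadrant law; the function name and inputs are hypothetical, and in the simulations above the field vector B would come from an IGRF-type model rather than being supplied directly.

```python
import numpy as np

def quadrant_charge(u, B, r_hat, qm_max=0.007):
    """Bang-off quadrant control law of Eq. (13).

    u      : argument of latitude (rad)
    B      : local geomagnetic field vector (T)
    r_hat  : unit radial (Earth-center-to-spacecraft) vector
    qm_max : magnitude of the largest available negative
             charge-to-mass ratio (C/kg)

    Returns the commanded charge-to-mass ratio q/m (C/kg).
    """
    B_dot_r = float(np.dot(B, r_hat))
    # Charge on only when cos(u) * (B . r_hat) < 0, i.e., when the
    # negative charge raises orbital energy and lowers inclination.
    if np.cos(u) * B_dot_r < 0.0:
        return -qm_max
    return 0.0
```

Written this way, the four cases of Eq. (13) collapse into the single sign test cos u (B · r̂) < 0.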


Lorentz Augmented Orbit Power Consumption and Plasma-Density-Based Control
The preceding simulations use a code that does not include a model of the Earth’s ionosphere. The spacecraft design process is carried out initially using the IRI model, and then that design is used in simulation. The simulation assumes the spacecraft maintains its design charge-to-mass ratio, regardless of the local plasma conditions. In this section, we explore the use of a more in-depth LAO simulation by revisiting the LEO inclination-change maneuver. This simulation uses a code that takes into account local ionospheric conditions and their effect on the instantaneous charge-to-mass ratio and power consumption of the spacecraft. The high-fidelity plasma dynamics simulation is based on the Global Core Plasma Model (GCPM) [24]. The GCPM is a framework for blending multiple empirical plasma-density models and extending the IRI model to full global coverage. For the next simulations, the GCPM model at one particular time is used. This time corresponds to mean solar conditions. Although there is a strong correlation between plasma conditions and time of day, this effect is averaged out by simulating over the course of multiple days. This simulation functions in a different fashion from the results presented in the “Lorentz Augmented Orbit Maneuvers and Limitations” section. The earlier simulations assume that q/m is either zero or constant at a value of -0.007 C/kg. The GCPM simulation assumes that the spacecraft maintains a constant potential on the capacitor. Because of local variations in plasma density, a constant potential results in varying values of charge-to-mass ratio and varying power required to hold the constant potential. Although the mean q/m and orbit-average power are consistent with those predicted in our earlier analysis in the “Space-Vehicle Design” section, they have peak and minimum values that depend on the local plasma environment. The local electron number density ne is a strong predictor of power usage and is readily available from the GCPM model. A higher ne

corresponds to a denser plasma, which, in turn, results in more current collection for a stocking at a given potential. Thus, high-ne values correlate to high power usage. In a gross sense, ne is larger in the low- to mid-latitudes on the daytime side of the Earth. The density of the plasma also drops sharply as a function of altitude. Assuming the spacecraft has knowledge of its local plasma conditions, significant power savings can be realized by limiting the charge-on time when ne is high. The spacecraft simply follows its normal control law, but turns off the charge whenever ne exceeds a particular value. A sample of this power savings and cost in time is shown in Table 6. This table lists the results of four simulations performed with the constant-potential code. Each simulation integrates over three days and begins with the same initial conditions. Reported for each run is the mean q/m achieved (during times of nonzero charging), the average power used (over the entire simulation), the peak instantaneous power used, the total inclination change over the simulation, and an efficiency in the form of degrees of inclination change per day divided by the average power used. The first simulation uses the e-limiting quadrant controller discussed earlier with no modification based on electron density. The other three runs superimpose an ne-based, density-limited control, turning off the charge when ne is greater than some value. The average q/m achieved by each successive simulation is slightly lower, as seen in the first row of Table 6. In regions of high plasma density, a tighter plasma sheath increases the capacitance of the stocking, and hence the charge-to-mass ratio achieved at a given potential. However, this increase in q/m requires significantly more power to maintain, as the denser plasma greatly increases the current collected by the stocking. The power reduction due to density-limited control is shown in the second and third rows of Table 6. Without density-based control, the average power usage over the simulation is 53.54 kW, but with a peak instantaneous power usage of 418.57 kW. When charge is only applied for an ne of less than 1.1 × 10¹¹ m⁻³ (the mean electron density in this orbit), the power usage drops to a mean of 12.94 kW, with a peak of 89.59 kW.

Table 6. Limiting Power Usage via ne (in m⁻³) Sensing for 3-day Simulations at an Initial Inclination of i = 28.5 deg at a 600-km Altitude.

                    No ne Control   ne < 2 × 10¹¹   ne < 1.5 × 10¹¹   ne < 1.1 × 10¹¹
(q/m)mean, C/kg     -0.0060         -0.0057         -0.0054           -0.0048
Pmean, kW           53.54           35.88           24.33             12.94
Ppeak, kW           418.57          220.01          140.64            89.59
∆i, deg             0.3764          0.3292          0.2653            0.1747
deg/day/kWmean      0.0023          0.0031          0.0036            0.0045


Of course, the decreased power usage is coupled with a lengthening of the maneuver time. Row 4 of Table 6 shows the inclination change achieved over 3 days for each level of density control. The unlimited control changes inclination at a rate about 2.2 times higher than the ne < 1.1 × 10¹¹ case. However, the density-limited controllers achieve inclination changes in a more efficient way. The fifth row of Table 6 displays an efficiency metric for each simulation, namely degrees of inclination change achieved per day per average kilowatt used. Charging only at low values of ne uses the available power more efficiently to effect inclination change.

The profile of electron densities experienced by a spacecraft varies greatly depending on its orbit. In the 28.5-deg inclination-change example, both the change in inclination and the change in altitude during the maneuver cause no one limit on ne to be appropriate. However, recreating this entire maneuver using the GCPM, constant voltage simulation is impractical in its computational demands. A reasonable approximation is a hybrid simulation in which a constant charge-to-mass ratio is used, but the electron density is calculated at each step in the integration to superimpose the density-limited control strategy. To take advantage of the orbit raising that occurs during the maneuver, the ne cutoff value is made a linear function of the spacecraft altitude. This line is defined by two points: ne equal to 2.0 × 10¹¹ m⁻³ at an altitude of 600 km, and ne equal to 1.6 × 10¹¹ m⁻³ at an altitude of 700 km. These values are chosen to give a reasonable tradeoff between power savings and maneuver time.

Figure 10 shows the results of this hybrid simulation. The top plot of this figure shows the semimajor axis, while the lower plot gives the inclination, both versus time in days. The solid green lines are the results of the hybrid constant q/m, density-limited simulation. For comparison, the dashed blue lines show the results of the constant charge-only simulation. The hybrid strategy completes the inclination-change maneuver in 380 days compared with 340 days for the original strategy.

Figure 10. Comparison of hybrid simulation of constant charge-to-mass ratio with plasma-density-limited control to constant q/m-only control. [Panels: a) semimajor axis a (km) and b) inclination i (deg), each vs. time (days), with and without ne control.]

To provide insight into the power saved by using the density-limited hybrid strategy, short-duration simulations are run using the full GCPM, constant voltage code. These simulations are run for three points in the trajectory of both the hybrid simulation and the original inclination-change maneuver. When each trajectory reaches 28.5, 10, and 1 deg of orbital inclination, its state is retrieved and used as the initial conditions for a 3-day simulation using the full GCPM code. The results of these simulations are summarized in Table 7. This table gives the mean achieved charge-to-mass ratio, average and peak power consumptions, and inclination change over the 3-day simulation for each control strategy for each inclination considered. The addition of density-limited control reduces both the mean and peak power usage, but also decreases the speed of the inclination change.

Table 7. Comparison of Power Usage During both the Hybrid Simulation and the Original, Constant Charge Simulation.

Inclination                        Constant Charge     Hybrid, Density-Limited
28.5 deg   (q/m)mean, C/kg         -0.0060             -0.0057
           Pmean (Ppeak), kW       53.54 (418.57)      36.50 (217.15)
           ∆i, deg                 0.3764              0.3308
10 deg     (q/m)mean, C/kg         -0.0055             -0.0053
           Pmean (Ppeak), kW       56.79 (259.47)      43.64 (208.58)
           ∆i, deg                 0.1806              0.1622
1 deg      (q/m)mean, C/kg         -0.0055             -0.0053
           Pmean (Ppeak), kW       58.39 (235.89)      43.54 (208.39)
           ∆i, deg                 0.1447              0.1330
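The density-limited variant leaves the quadrant law untouched and simply forces the charge to zero when the local electron density is too high; in the hybrid strategy, the cutoff is the linear function of altitude quoted above. A minimal sketch under those assumptions (names hypothetical, reusing quadrant_charge from the earlier sketch; linear extrapolation outside 600-700 km is an assumption):

```python
def ne_cutoff(altitude_km):
    """Linear density cutoff of the hybrid strategy: 2.0e11 m^-3 at
    600 km and 1.6e11 m^-3 at 700 km, extended linearly elsewhere."""
    return 2.0e11 + (altitude_km - 600.0) * (1.6e11 - 2.0e11) / 100.0

def density_limited_charge(u, B, r_hat, ne_local, altitude_km, qm_max=0.007):
    """Superimpose ne-based charging limits on the quadrant law:
    charge off whenever the local electron density (e.g., from GCPM)
    exceeds the altitude-dependent cutoff."""
    if ne_local > ne_cutoff(altitude_km):
        return 0.0
    return quadrant_charge(u, B, r_hat, qm_max)
```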


Conclusions
Lorentz augmented orbits use the Earth’s magnetic field to provide propellantless propulsion. Although the direction of the Lorentz force is fixed by the velocity of the spacecraft and the local field, varying the magnitude of the charge-to-mass ratio of the satellite can produce novel and useful changes to an orbit. A simple on-off (or bang-off) charging scheme is sufficient to perform most available maneuvers and can create large ∆V savings. A preliminary evaluation of some possible architectures leads us to the tentative conclusion that up to 0.0070 C/kg can be reached by a negatively charged LEO spacecraft of 600-kg mass. These designs use cylindrical mesh “stocking” capacitive structures that are shorter than most proposed electrodynamic tethers and offer the important benefit that their performance is independent of their attitude in the magnetic field. That simplicity largely decouples attitude control from propulsion, a consideration that can complicate the operation of tether-driven spacecraft.

The Earth’s magnetic field is a complex structure, and accurate analytical expressions for orbital perturbations are difficult to obtain. The proposed control method accommodates this complexity by breaking the geomagnetic field into distinct zones based on its sign in three orthogonal directions, leading to eight zones. Within each zone, an LAO tends to evolve in certain directions for certain orbital elements. Understanding how the orbital evolution relates to the zone the spacecraft is in allows us to develop control strategies to execute complex maneuvers. A simple, but effective strategy is to operate a bang-off control scheme that switches only at zone boundaries. This scheme allows for the execution of a sample maneuver of a LEO plane change without the use of propellant, saving a ∆V of 3.75 km/s required for a conventional propulsive maneuver. However, this maneuver lasts for 340 days and requires about 53 kW of power on average. A controller that limits charging in response to local plasma-density measurements reduces this power requirement to an average of 40 kW, but increases the maneuver time to 380 days.

References
[1] Peck, M.A., “Prospects and Challenges for Lorentz-Augmented Orbits,” Proceedings of the AIAA Guidance, Navigation, and Control Conference, AIAA Paper 2005-5995, August 2005.
[2] Streetman, B. and M.A. Peck, “New Synchronous Orbits Using the Geomagnetic Lorentz Force,” Journal of Guidance, Control, and Dynamics, Vol. 30, No. 6, 2007, pp. 1677-1690.
[3] Streetman, B. and M.A. Peck, “Gravity-Assist Maneuvers Augmented by the Lorentz Force,” Proceedings of the AIAA Guidance, Navigation, and Control Conference, AIAA Paper 2007-6846, August 2007.
[4] Schaffer, L. and J.A. Burns, “The Dynamics of Weakly Charged Dust: Motion Through Jupiter’s Gravitational and Magnetic Fields,” Journal of Geophysical Research, Vol. 92, No. A3, 1987, pp. 2264-2280.
[5] Schaffer, L. and J.A. Burns, “Charged Dust in Planetary Magnetospheres: Hamiltonian Dynamics and Numerical Simulations for Highly Charged Grains,” Journal of Geophysical Research, Vol. 99, No. A9, 1994, pp. 17211-17223.
[6] Hamilton, D.P., “Motion of Dust in a Planetary Magnetosphere: Orbit-Averaged Equations for Oblateness, Electromagnetic, and Radiation Forces with Applications to Saturn’s F Ring,” Icarus, Vol. 101, No. 2, February 1993, pp. 244-264 (Erratum: Icarus, Vol. 103, p. 161).
[7] Sehnal, L., The Motion of a Charged Satellite in the Earth’s Magnetic Field, Smithsonian Institution Technical Report, Smithsonian Astrophysical Observatory Special Report No. 271, June 1969.
[8] Vokrouhlicky, D., “The Geomagnetic Effects on the Motion of Electrically Charged Artificial Satellite,” Celestial Mechanics and Dynamical Astronomy, Vol. 46, 1989, pp. 85-104.
[9] Abdel-Aziz, Y., “Lorentz Force Effects on the Orbit of a Charged Artificial Satellite: A New Approach,” Applied Mathematical Sciences [online], Vol. 1, Nos. 29-32, 2007, pp. 1511-1518, http://www.m-hikari.com/ams/ams-password-2007/ams-password29-32-2007/index.html.
[10] Cosmo, M.L. and E.C. Lorenzini, Tethers in Space Handbook, 3rd ed., NASA Marshall Spaceflight Center, Huntsville, AL, 1997, pp. 119-151.
[11] King, L.B., G.G. Parker, S. Deshmukh, J. Chong, “A Study of Inter Spacecraft Coulomb Forces and Implications for Formation Flying,” Journal of Propulsion and Power, Vol. 19, No. 3, 2003, pp. 497-505.
[12] Schaub, H., G.G. Parker, L.B. King, “Challenges and Prospects of Coulomb Spacecraft Formations,” Proceedings of the AAS John L. Junkins Symposium, American Astronautical Society Paper 03-278, May 2003.
[13] Peck, M.A., B. Streetman, C.M. Saaj, V. Lappas, “Spacecraft Formation Flying Using Lorentz Forces,” Journal of the British Interplanetary Society, Vol. 60, July 2007, pp. 263-267, http://www.bis-spaceflight.com/sitesia.aspx/page/358/id/1444/l/en-us.
[14] Burns, J.A., “Elementary Derivation of the Perturbation Equations of Celestial Mechanics,” American Journal of Physics, Vol. 44, No. 10, 1976, pp. 944-949.
[15] Roithmayr, C.M., Contributions of Spherical Harmonics to Magnetic and Gravitational Fields, NASA, TR TM-2004-213007, March 2004.
[16] Barton, C.E., “International Geomagnetic Reference Field: The Seventh Generation,” Journal of Geomagnetism and Geoelectricity, Vol. 49, Nos. 2-3, 1997, pp. 123-148.
[17] Rothwell, P.L., “The Superposition of Rotating and Stationary Magnetic Sources: Implications for the Auroral Region,” Physics of Plasmas, Vol. 10, No. 7, 2003, pp. 2971-2977.


[18] Choinière, E. and B.E. Gilchrist, “Self-Consistent 2D Kinetic Simulations of High-Voltage Plasma Sheaths Surrounding Ion-Attracting Conductive Cylinders in Flowing Plasmas,” IEEE Transactions on Plasma Science, Vol. 35, No. 1, 2007, pp. 7-22.
[19] Wertz, J.R. and W.J. Larson, Space Mission Analysis and Design, Microcosm Press, El Segundo, CA, 1999, pp. 141-156.
[20] “Fast Access Spacecraft Testbed (FAST),” Defense Advanced Research Projects Agency Broad Agency Announcement, BAA-07-65, November 2007.
[21] Sanmartin, J.R., M. Martinez-Sanchez, E. Ahedo, “Bare Wire Anodes for Electrodynamic Tethers,” Journal of Propulsion and Power, Vol. 9, June 1993, pp. 353-360.
[22] Linder, E.G. and S.M. Christian, “The Use of Radioactive Material for the Generation of High Voltage,” Journal of Applied Physics, Vol. 23, No. 11, 1952, pp. 1213-1216.
[23] Bilitza, D., “International Reference Ionosphere 2000,” Radio Science, Vol. 36, No. 2, 2001, pp. 261-275.
[24] Gallagher, D.L., P.D. Craven, R.H. Comfort, “Global Core Plasma Model,” Journal of Geophysical Research, Vol. 105, No. A8, 2000, pp. 18,819-18,833.


Brett J. Streetman is currently a Senior Member of the Technical Staff at Draper Laboratory working primarily in space systems guidance, navigation, and control (GN&C). At Draper, he has worked on the Talaris Hopper, a joint MIT and Draper lunar and planetary hopping rover GN&C testbed, performed control system analysis for the International Space Station, and worked on the GN&C system for the guided airdrop platform. Dr. Streetman received a B.S. in Aerospace Engineering from Virginia Tech and M.S. and Ph.D. degrees in Aerospace Engineering from Cornell University.

Mason A. Peck is an Associate Professor in Mechanical and Aerospace Engineering at Cornell University. His research focuses on spaceflight dynamics, specifically, the discovery of new behaviors that can be exploited for mission robustness, advanced propulsion, and low-risk GN&C design. He holds 17 U.S. and European patents in space technology. Dr. Peck earned B.S. and B.A. degrees from the University of Texas at Austin, an M.A. from the University of Chicago, and Ph.D. and M.S. degrees from UCLA.


The U.S. military’s unmanned aircraft systems are constantly gathering an enormous amount of video imagery, but much of it is not useful to tactical forces due to a shortage of analysts who are needed to process the information. This paper examines four automated methods that address the military’s requirements for turning full motion video into a functional tool for a wide variety of tactical users. The authors have demonstrated the feasibility of these methods, and could complete the development and testing needed for operational use within 3 years if funding is made available.


Tactical Geospatial Intelligence from Full Motion Video Richard W. Madison and Yuetian Xu Copyright © by IEEE. Presented at Applied Imagery Pattern Recognition 2010: From Sensors to Sense (AIPR 2010), Washington D.C., October 13–15, 2010

Abstract
The current proliferation of Unmanned Aircraft Systems provides an increasing amount of full-motion video (FMV) that, among other things, encodes geospatial intelligence. But the FMV is rarely converted into useful products, and thus its intelligence potential is wasted. We have developed four concept demonstrations of methods to convert FMV into more immediately useful products, including more accurate coordinates for objects of interest; timely, georegistered, orthorectified imagery; conversion of mouse clicks to object coordinates; and first-person-perspective visualization of graphical control measures. We believe these concepts can convey valuable geospatial intelligence to the tactical user.

Introduction
Geospatial intelligence, which includes maps, coordinates, and other information derived from imagery [1], can address many of the intelligence needs of a tactical user [2], [3]. A potentially rich source of imagery to inform this geospatial intelligence is the Full Motion Video (FMV) from the U.S. military’s thousands [4], [5] of fielded Unmanned Air Systems (UASs). Current programs promise to dramatically increase the number of FMV feeds in the near future [6], [7]. However, there are too few analysts to process that flood of FMV [8], and thus much of it goes unused. At the tactical echelons, raw FMV simply is not used to generate geospatial intelligence [9]. We have developed four concept demonstrations to show how FMV could be shaped into potentially useful forms of geospatial intelligence. This paper describes the four demonstrations in more detail. In the first demonstration (“Object-of-Interest Geolocation” section), we used the contents of Predator FMV to improve the accuracy of telemetered locations of objects by an order of magnitude, averaged over 4000 image frames. Our contributions include one-time user-assisted frame alignment, telemetry extraction, altitude telemetry correction, target tracking, and roll estimation. In the second demonstration (“Orthorectified Imagery” section), we combined image stitching with extracted telemetry to generate orthorectified and georegistered imagery of an area overflown by a Predator. This imagery could be produced in short order by brigade-level air forces to allow ground assets to navigate an area that has no existing up-to-date maps or imagery. In the third demonstration (“Metric Video” section), we used transforms between FMV frames and orthorectified imagery to recover the ground coordinates of objects clicked on in the FMV. This involved automatically detecting, monitoring, and updating the coordinates of moving objects.


In the fourth demonstration (“Video Markup” section), we used the same transforms to project graphical control measures drawn over the orthorectified imagery back into the FMV, allowing a user to see how objects in the video move relative to the control measures, facilitating rehearsal and/or after-action review.

Object-of-Interest Geolocation
One particularly useful form of geospatial intelligence is the coordinate of an object seen in FMV. This could be used, for instance, to call in fire, dispatch forces, cue a sensor, or retrieve location-relevant video from an archive. Previous object geolocation work at Draper Laboratory [10] focused on sensor-disadvantaged, small UASs triangulating object coordinates from multiple looks. Larger vehicles, such as Predators, could combine accurate Global Positioning System (GPS)-Inertial Navigation System (INS), laser ranging, onboard Digital Terrain Elevation Data (DTED), etc., to identify target coordinates from a single look. Coordinates of the ground “target” at the center of the Predator’s camera reticule can be calculated and overlaid in real time on the camera feed. However, are those coordinates sufficiently accurate that one could call in fire on the correct target or extract archive video of just the target and not the whole neighborhood? We assert that the content of the FMV could be used to improve the accuracy. In the first concept demonstration, we showed how the accuracy of Predator target telemetry can be improved by an order of magnitude with a little operator intervention and image processing. Table 1 shows the relative magnitude of error in object geolocation that we observed with raw and improved Predator target telemetry. We began our pursuit of better geolocation with a simple experiment to evaluate the accuracy of Predator’s target telemetry. We obtained unclassified video from a Predator camera following a truck as it winds through a small town. A sample frame is shown in Figure 1. The targeting telemetry is easily good enough to call up a Google


Table 1. Relative Error in Target Lat, Lon at Stages of Improvement.

Condition                             Observed Error    Improvement
Target lat, lon from CC stream        100%              -
Target lat, lon from image overlay    115%              0.87×
Plus GUI-based altitude correction    22%               4.6×
Plus image processing                 7%                13.4×

Figure 1. Example of a frame of the video sequence as a truck drives through the town.

Earth [11] map of the town. Watching the video, we observed the path taken by the truck and measured the coordinates of that path on the Google Earth map. This formed the “ground truth.” At each of approximately 4000 frames of video, we compared the ground location of the truck (per the telemetry overlaid on the video) against the nearest point on the truck’s ground truth trajectory. We declare the mean distance to be the Predator’s target telemetry error. Similarly, we compared the ground locations reported in the video’s closed caption telemetry stream against the ground truth over the same interval. The closed caption stream provided our “baseline” error. The mean error based on the video overlay was approximately 115% of baseline error, reflecting some errors in the automatic optical character recognition we used to extract telemetry from the screen. Figure 2 shows the telemetered (and processed) paths of the observed truck overlaid on the Google Earth map. At first glance, the locations given by the raw telemetry may seem shifted left relative to the map. However, the shift actually varies with camera azimuth. The error is best explained by an offset in the camera’s estimated height above target such as might arise from inaccuracy of the GPS altitude [12], [13], barometric altimeter, or DTED. To find the ground location of a target, as shown in Figure 3, one begins at the

camera location and extends the camera’s line-of-sight (LOS) until its height matches the camera’s estimated height above target. If the vehicle operates at any reasonable standoff, the LOS has only slight depression, and a few meters of error in the UAS’ altitude estimate corresponds to many meters of lateral error in the target coordinate. The telemetry contains an accurate camera azimuth, so if we know the altitude error and if the terrain is flat near the target, we can calculate the lateral error in the target coordinate.

Figure 2. Path followed by a truck observed in Predator video, according to, in order of increasing accuracy, telemetry overlay (yellow), closed-caption telemetry (brown), altitude-corrected telemetry (pink), and fully corrected telemetry (orange).

Figure 3. Lateral distance from aircraft to target is a function of camera depression angle and altitude above target. For slight depression angles, a small inaccuracy in altitude produces a much larger error in lateral distance.
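The flat-terrain geometry of Figure 3 reduces to a one-line relation between altitude error and lateral error. A minimal sketch (function names are illustrative, not part of the scripts described here):

```python
import math

def lateral_error(altitude_error_m, depression_deg):
    """Figure 3 geometry: lateral target error produced by an error in
    the camera's estimated height above target, for a given LOS
    depression angle below horizontal (flat terrain assumed)."""
    return altitude_error_m / math.tan(math.radians(depression_deg))

def altitude_error(lateral_shift_m, depression_deg):
    """Inverse relation: the altitude offset implied by the lateral
    shift needed to align a ground-projected frame with the map."""
    return lateral_shift_m * math.tan(math.radians(depression_deg))
```

At a 10-deg depression angle, for example, a 5-m altitude error maps to roughly 28 m of lateral error, which is the sensitivity the correction described below exploits in reverse.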

Conversely, we can calculate altitude error given lateral offset of the target. We implemented a Matlab script that projects a single frame of Predator video onto the ground plane based on the telemetry overlaid on that image. It saves the projection as an image and a KML file. A user imports the projection into Google Earth, shifts it to align visually with the map, and saves it. In theory, this could be done automatically, but the difference in camera modalities (EO vs. infrared (IR)), perspectives (low oblique vs. nadir pointing), and capture times (potentially many seasonal and illumination changes)


make the task difficult to automate, yet comparatively easy for a human. A second script uses the observed shift and the Predator camera pointing angles (given in the telemetry) to calculate the corresponding altitude offset. This offset is subsequently applied to all telemetry to calculate revised target coordinates, improving mean accuracy by about a factor of 4.6. The resulting path is shown in pink in Figure 2. We can do even better with some image processing and filtering. First, we track the 2D location of the target in the video. Predator target telemetry is based on camera pointing, so when the operator does not hold the camera on target, the telemetry is inaccurate. From information such as the 2D location of the target in each image, the camera pointing angles, the field of view, and the corrected altitude yielded by the process described above, we can calculate the ground location of the target wherever it moves in the image. Figure 2 shows the impact where the orange and pink lines diverge at the center and bottom left of the figure. The calculation requires an estimate of camera roll. This does not explicitly appear in either the telemetry overlay or the closed-caption (CC) telemetry stream. However, it can be inferred from the vehicle orientation and camera azimuth and elevation recorded in the CC telemetry stream. Those values are not synchronized with each other or the video, but we can roughly synchronize the CC and video streams by finding the time offset that best aligns the contents of the target location and camera location fields, which appear in both streams. Next, we filter the telemetry extracted by optical character recognition to eliminate sharp jumps in target location. Such a jump occurs in the center of Figure 2, where the trajectory jumps to the left briefly as the truck turns a corner. Here, the ground falls away into a ravine to the left; the LOS from the camera, far to the right, roughly parallels the ground slope and thus intersects the terrain far to the left. Our altitude correction cannot overcome this much error. However, filtering drastically reduces this overshoot. After image processing and filtering, the trajectory shown in Figure 2 by the orange line represents a 13.4× improvement in mean error in the location of the tracked truck compared with the raw Predator target telemetry. This demonstration shows good accuracy improvement for a single image sequence. It is limited by its assumption of locally level terrain, but this limitation could be removed by incorporating a DTED into the correction process. Observed performance improvement will vary to the extent that the Google Earth map coordinates deviate from truth and as the altitude error varies over time and across videos.

Orthorectified Imagery
Another potentially useful form of geospatial intelligence is timely, orthorectified imagery. A tactical end user, for instance at the company or platoon level embarking on a mission, may appreciate intelligence about his area of operations, such as the fact that a particular bridge is out, a river is full, trees block intervisibility in certain key places at this time of year, and he will be observed by the shepherds whose herd of goats is grazing along his intended route. A recent survey shows that he will often be given maps 15 years out of date [9]. Nor does he likely have real-time access to satellites


or photogrammetrists to generate up-to-date, georegistered, orthorectified imagery of his small area of operations. However, he may warrant time from brigade-level air assets. Perhaps a Predator can provide FMV, which can be used to generate orthorectified, map-registered, up-to-date imagery to support his mission. To test this theory, we orthorectified imagery from the Predator video used in the previous demonstration. We used the same graphical user interface (GUI) to correct the telemetered altitude, latitude, and longitude of ground at the center of the image. These and the telemetered camera pointing angles define a transform between image coordinates and ground coordinates, assuming flat ground. We selected key frames (every 50th frame) from the video and extracted interest points based on SIFT descriptors [14]. We used the algorithm of [15] to automatically detect reliable sets of tie points matched across images. We retained tie points that appeared in at least four images and projected them to the ground using their images’ image-to-ground transforms. The transforms are predicated on flat terrain and perfect telemetry, neither of which were available, so the projections of matching tie points are imperfect and form clusters in the neighborhood of their correct location. Next, we used an Expectation Maximization (EM) approach to determine a single coordinate for each cluster of matched tie points. The approach consists of a loop wherein each iteration (1) finds the center (mean location) of each cluster and (2) modifies each image’s image-to-ground transform to better align the tie points to the corresponding cluster centers. This loop continues until the calculated changes in cluster centers and transforms are all small. In both phases, weighting favors tie points that appear in more frames (better for enforcing consistency), are better aligned (less likely to be outliers), or come from images with few tie points (to avoid ignoring these images). The weights gradually evolve to favor tie points in images with better telemetry. The approach is inspired by Google PageRank [16], which solves the analogous problem of evolving weights to favor more authoritative web pages connected by hyperlinks. The EM loop runs twice, the first time adjusting image-to-ground transforms by only translation along the ground and the second time finding a full homography for each image to best align tie points. The second pass solves for more parameters and would be poorly conditioned if not for the elimination of bulk translation from the first pass. The EM approach was chosen because it aligns images yet respects the global shape of the image set provided by the telemetry. This compares favorably with three other potential approaches. Simply projecting images based on their telemetry would provide an image set with good global accuracy, but the individual images would not align, so it would be unclear how to combine them into a single mosaic. Conversely, a pure image-stitching algorithm would align the image content, including error in the image content. This error would compound as a map grew around the single frame that was manually aligned in the GUI, such that the geometry of image content would rapidly deviate from reality with distance from that single image. An obvious compromise is a weighted least squares solution that attempts to find the set of image transforms


that minimizes both the size of tie point clusters and the distance from the telemetered transform. But it is unclear how to define a meaningful distance in image transform space or how to weight error there relative to pixel distance. The chosen algorithm avoids these problems by minimizing error purely in image space. After the EM process converges to a set of consistent image-to-ground transforms, we use the transforms to combine the images. We normalize the intensity of the images so that frames throughout the sequence have comparable mean intensity. We project each image into ground coordinates and resample into a common grid of pixels. For each pixel in the grid, we identify the set of projected images that overlap at that pixel, extract the intensities at that pixel, and record the median intensity. Thanks to the earlier intensity scaling, the set of intensities at a pixel tends to be a tight cluster except for outliers caused by transient objects moving through the pixel over the course of a few frames. Thus, median filtering excludes most moving objects from the mosaic. Slowly or infrequently moving objects, e.g., cows, may remain in the mosaic, especially in areas built from a small number of images. Figure 4 shows the resulting imagery overlaid on an actual map. It has several valuable traits. First, it shows physical objects that ground troops can compare visually to objects seen in the environment. A building on the image looks like a building that is truly in the scene. This compares favorably to a contour map, which probably does not depict the building, or a year-old image where the building may have a different shape. Second, because the data come from oblique-view video, the buildings are rendered from an oblique view, which may be easier for a human to parse than an overhead view. Third, objects seen in video can be located precisely on this imagery. This may be more intuitive than GPS coordinates as a way to convey a precise location to a soldier. Fourth, the new imagery overlays existing imagery and is georegistered moderately well based on corrected telemetry. The existing imagery provides context and archival data where current data are not available. The image stitching implementation we used presumes locally flat terrain and requires overlap between images. Our imagery violated both conditions, so the orthorectified image is distorted in several places. We have begun work to reduce or eliminate these dependencies to produce less distorted imagery. However, this may be unnecessary, because even if orthorectification is distorted and/or georegistration is inaccurate, and even if GPS is denied, a coordinate-seeking infantry platoon can navigate directly to a target marked on the up-to-date and easy-to-understand imagery.

Metric Video
Yet another potentially useful type of geospatial intelligence is metric video. Metric video allows one to obtain an object’s coordinates by simply clicking on it. This capability is being developed [17] in hardware and will be available to some users of some air vehicles. For everyone else, could software and regular video from an arbitrary UAS act as a poor man’s metric sensor? Such a solution could also avoid the costs of retrofitting existing systems. We used the same video and the mosaic generated in the previous concept demonstration. Each video key frame used to make the mosaic already has a coordinate transform that warps it into mosaic

Figure 4. Imagery orthorectified from Predator video (gray) overlaid on an existing map (color) [10]. Pseudo-oblique view and up-to-date contents may make such maps valuable even with imperfect orthorectification and georegistration.

Each video key frame used to make the mosaic already has a coordinate transform that warps it into mosaic coordinates, which are roughly aligned with the GPS coordinates from the corrected telemetry. If the user clicks in one of these key frames, the transform converts click coordinates into GPS coordinates. For images between key frames, we reuse components of the mosaicing algorithm to locate tie points in the image and the nearest key frame automatically, and fit a camera rotation and translation that best explains the motion of the tie points. This projects the clicked coordinates from an arbitrary image to a key frame, whence they can be converted to GPS coordinates.

In addition to reporting the coordinates of the clicked point, we report whether the click represents a moving or stationary object. We project the clicked image onto the mosaic and compare intensity in the area of the click. If the mosaic and projected image have similar intensity, then the point probably represents a stationary object. If the two have differing intensity, the likely cause is that the mosaic shows the typical intensity at that location and the clicked image shows a transient object. We identify the area covered by the transient object (the area where intensity does not match the mosaic) and report the click as a moving object.

Figure 5 shows an example frame from a metric video. As the video plays, clicking on the video causes a square to appear around the click location. GPS coordinates of the clicked location appear above the square. Green squares represent stationary objects. Their coordinates are fixed, and their boxes are back-projected onto each new video frame using the frame-to-mosaic transform. Red squares represent moving objects. They are sized automatically to match the extent of the moving object. They are visually tracked through consecutive frames so that the red box remains on the object, not at its original location. Their changing ground coordinates are determined from their tracked 2D coordinates using each new frame's frame-to-mosaic transform. The mosaic, frame-to-mosaic transforms, and the locations of moving objects are all determined in preprocessing so that the application runs for the user at full video speed. This is suitable for operating on archived video, where the user does not see the preprocessing time. Additional work would be required to operate on real-time, streaming video.
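A minimal sketch of the click-to-coordinate chain and the moving/stationary test described above (the homography, the mosaic-to-GPS mapping, and the intensity threshold are assumed placeholders, not the fielded implementation):

```python
import numpy as np

def click_to_gps(click_xy, frame_to_mosaic_h, mosaic_to_gps):
    """Convert a click in frame pixels to GPS via the mosaic (sketch only).

    frame_to_mosaic_h: 3x3 homography fit for this frame
    mosaic_to_gps: callable mapping mosaic pixels to (lat, lon) via the
    corrected telemetry.
    """
    p = np.array([click_xy[0], click_xy[1], 1.0])
    q = frame_to_mosaic_h @ p        # project the click into mosaic coordinates
    mosaic_xy = q[:2] / q[2]         # perspective divide
    return mosaic_to_gps(mosaic_xy)

def is_moving(frame_patch, mosaic_patch, thresh=25.0):
    """Flag the click as a moving object when the frame disagrees with the
    mosaic's typical intensity around the clicked point (threshold assumed)."""
    return float(np.abs(frame_patch.mean() - mosaic_patch.mean())) > thresh
```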


Figure 5. Poor-man’s metric video. User clicks on objects, generating boxes that lock onto the objects through later video frames, determine whether the objects are stationary (green) or moving (red), and report their GPS coordinates.

Video Markup

A final useful type of geospatial intelligence reverses the concept of the metric video. A user marks up an image mosaic, and the markings are projected into the video. This could provide a useful, non-overhead perspective of a battlefield with control graphics for mission rehearsal or after-action review. Or, if air assets are available during a battle and the software were dramatically accelerated, it could potentially provide real-time observation of how units move relative to intended controls, much as an audience currently expects for televised sports [18].
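A minimal sketch of the projection step this concept relies on, assuming an RGBA markup layer in mosaic coordinates and the inverse of the frame-to-mosaic homography (names are ours):

```python
import cv2
import numpy as np

def overlay_markup(frame, markup_rgba, mosaic_to_frame_h):
    """Warp the markup layer into frame coordinates and alpha-blend it
    over the video frame (sketch only)."""
    h, w = frame.shape[:2]
    warped = cv2.warpPerspective(markup_rgba, mosaic_to_frame_h, (w, h))
    alpha = warped[:, :, 3:4].astype(np.float32) / 255.0   # markup opacity
    out = frame.astype(np.float32) * (1.0 - alpha) \
        + warped[:, :, :3].astype(np.float32) * alpha
    return out.astype(np.uint8)
```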

Figure 6. Video markup. Control graphics are drawn over orthorectified imagery using a paint program, then projected into video for comparison against activities shown by the video.

Figure 6 shows an example. Battlefield control graphics [19] are drawn over the mosaic from the previous demonstrations using a paint program that supports layers. The markup layer is saved as an image with the same coordinate system as the original mosaic. When displaying each video frame, the frame's frame-to-mosaic transform is used to project the markup layer into the frame's coordinates, and the markings are overlaid on the image. The result is a perspective on the control graphics that may be more intuitive to a human.

Conclusion

Tactical echelons require geospatial intelligence, such as maps and coordinates, which could be derived from the FMV from the UAS that are now ubiquitous on the battlefield. They simply need tools to convert the video into a more immediately useful format. We have shown four possible tools in concept demonstrations that convert FMV into, respectively: accurate coordinates of objects of interest; intuitive, timely, orthorectified, and georegistered imagery of an area of interest; object coordinates extractable by clicking directly on video; and battlefield control graphics projected into the video. We believe these applications can provide valuable geospatial intelligence to the tactical user.


Acknowledgment

This work was funded under Draper Laboratory's Internal Research and Development Program.

References

[1] 10 U.S.C. § 467: U.S. Code - Section 467: Definitions.
[2] "AGC Brochure," http://www.agc.army.mil/publications/AGCbrochure.pdf, July 28, 2010.
[3] Field Manual 34-130, Headquarters, Department of the Army, July 8, 1994.
[4] "Too Much Information: Taming the UAV Data Explosion," http://www.defenseindustrydaily.com/uav-data-volume-solutions-06348, March 16, 2010.
[5] Drew, C., "Drones Are Weapons of Choice in Fighting Al Qaeda," The New York Times, March 16, 2009.
[6] "ARGUS-IS," http://www.darpa.mil/ipto/programs/argus/argus.asp, July 28, 2010.
[7] "Gorgon Stare Update," Air Force Magazine, Vol. 98, No. 5, May 2010.
[8] Baldor, L., "Air Force Develops New Sensor to Gather War Intel," The Seattle Times, July 6, 2009.
[9] Richards, J.E., Integrating the Army Geospatial Enterprise: Synchronizing Geospatial-Intelligence to the Dismounted Soldier, Master of Science in Engineering and Management Thesis, System Design and Management Program, Massachusetts Institute of Technology, June 2010.
[10] Madison, R., P. DeBitetto, A.R. Olean, M. Peebles, "Target Geolocation from a Small Unmanned Aircraft System," IEEE Aerospace Conference, 2008.
[11] Google, "Google Earth," http://earth.google.com, July 22, 2010.
[12] Control Vision Corp., "GPS Altimetry," http://docs.controlvision.com/pages/gps_altimetry.php, 2004.
[13] Mehaffey, J., "GPS Altitude Readout > How Accurate?" http://gpsinformation.net/main/altitude.htm, February 10, 2001.
[14] Lowe, D.G., "Distinctive Image Features from Scale-Invariant Keypoints," Int. J. Comput. Vision, Vol. 60, No. 2, 2004, pp. 91-110.
[15] Xu, Y. and R. Madison, "Robust Object Recognition Using a Cascade of Geometric Consistency Filters," Proc. Applied Imagery and Pattern Recognition, 2009.
[16] Page, L., S. Brin, R. Motwani, T. Winograd, The PageRank Citation Ranking: Bringing Order to the Web, Stanford InfoLab Technical Report.
[17] DARPA, "Standoff Precision ID in 3D (SPI-3D)," http://www.darpa.mil/ipto/programs/spi3d/spi3d.asp, 2010.
[18] Sportvision, Inc., "SportVision," http://www.sportvision.com/, 2008.
[19] U.S. Department of Defense, "Common Warfighting Symbology," MIL-STD-2525C, November 17, 2008.


Richard W. Madison is a Senior Member of Technical Staff in the Perception Systems group at Draper Laboratory. His work is in vision-aided navigation with forays into related fields, such as tracking, targeting, and augmented reality. Before joining Draper, he worked on similar projects at the Jet Propulsion Laboratory, Creative Optics, Inc., and the Air Force Research Laboratory. Dr. Madison holds a B.S. in Engineering from Harvey Mudd College and M.S. and Ph.D. degrees in Electrical and Computer Engineering from Carnegie Mellon University.

Yuetian Xu is a Member of Technical Staff at Draper Laboratory. His current research interests include computer vision, robotic navigation, GPU computing, embedded systems (Android), and biomedical imaging. Mr. Xu holds B.S. and M.S. degrees in Electrical Engineering and Computer Science from MIT.



The rapid development of guidance, navigation, and control (GN&C) systems for precision pointing and tracking spacecraft requires a set of tools that leverages common architectural elements and a model-based design and implementation approach. The paper presents an approach that can accelerate development while reducing the cost of GN&C flight software. It uses a spacecraft's pointing and tracking system as an example, and describes detailed models of elements such as gyros, reaction wheels, and telescopes, as well as GN&C algorithms and the direct conversion of the models into software for software-in-the-loop and hardware-in-the-loop testing. Model-based design and software development is slowly being adopted in the aerospace industry, but Draper's flexibility allows it to adopt these time-saving techniques more quickly. Draper is applying this approach today as a member of the Orbital Sciences-led team competing for NASA's Commercial Orbital Transportation Services (COTS) program, as well as in the development of the ExoplanetSat planet-finding CubeSat in partnership with MIT.


Model-Based Design and Implementation of Pointing and Tracking Systems: From Model to Code in One Step Sungyung Lim, Benjamin F. Lane, Bradley A. Moran, Timothy C. Henderson, and Frank A. Geisel Copyright © 2010 American Astronautical Society (AAS), Presented at the 33rd AAS Guidance and Control Conference, Breckenridge, CO, February 6 - 10, 2010

Abstract

This paper presents an integrated model-based design and implementation approach for pointing and tracking systems that can shorten the design cycle and reduce the development cost of guidance, navigation, and control (GNC) flight software. It provides detailed models of critical pointing and tracking system elements, such as gyros, reaction wheels, and telescopes, as well as essential pointing and tracking GNC algorithms. This paper describes the process of developing models and algorithms, followed by direct conversion of the models into software for software-in-the-loop (SWIL) and hardware-in-the-loop (HWIL) tests. A representative pointing system is studied to provide valuable insights into model-based GNC design.

Introduction

Pointing and tracking (P&T) systems are very important elements of surveillance, strategic defense applications, optical communications, and science observations of both astronomical and terrestrial targets. A P&T system is generally required to provide both agility (the ability to rapidly change the pointing line-of-sight (LOS) vector over large angles) and jitter suppression; the design challenge comes from trying to achieve both in a cost-effective manner. The Operationally Responsive Space (ORS) field seeks to develop and deploy a constellation of small, low-cost, yet customized P&T systems in a short period of time [1]. In this situation, certain traditional practices in engineering, development, and operation may become stumbling blocks. Reusability of heritage engineering tools and flight software is seemingly attractive but often conceals a high price tag. Even relatively straightforward efforts to customize or adapt heritage flight software are often time-consuming and costly.

A novel approach to address these challenges is model-based design and implementation [2]. This approach can be summarized in three steps: (1) analysis and design of algorithms with simulation and a user-friendly language such as Matlab/Simulink, (2) automatic code generation of flight software from algorithms written in the modeling language, and (3) continuous validation and verification of the flight software against source models and simulation. Although this approach does not generate all vehicle and GNC flight software (e.g., the mission manager, scheduler, and data management are generally not included), it can significantly shorten the design and implementation of core GNC algorithms by streamlining the process. In particular, iterative design is simplified, as a single design iteration only requires modification of the algorithm blocks. Also, critical implementation issues can be detected early, possibly even in the algorithm development stage. These encompass accommodation of processor speed and memory limitations, numerical representation (floating or fixed point), and incorporation of real-time data management such as buffering, streaming, and pipelining.

This paper presents a model-based design and implementation approach for P&T systems. First, potential P&T elements are surveyed. Then detailed models of key P&T elements and core GNC algorithms are provided. Models of reaction wheels, telescopes, focal plane sensors, and several other elements are presented, together with typical parameter ranges. Next, the GNC algorithms for spacecraft attitude and telescope LOS stabilization are provided. Although detailed analysis and design techniques are not addressed, critical rules-of-thumb are provided to guide gain parameter selection. Within the model-based design approach, the models and algorithms are developed using Matlab/Simulink blocks and Embedded Matlab Language (EML) so that they may be autocoded directly to flight model and software [3]. They are partitioned into flight model and flight software groups, respectively. This partition simplifies implementation of SWIL and HWIL tests. Each model or algorithm is connected to the others using "rate transition blocks" [4]. The use of rate transition blocks brings some advantages: such blocks can be implemented at a designated rate in simulation, and the code generated by autocoding is grouped in terms of rate. Integration of the autocoded flight software into the main flight software is therefore much easier, as one needs to identify only a few rate groups rather than all functions of the autocoded flight software. At the end of the paper, a GNC design example for a representative P&T system is provided with some interesting plots and discussions.


Pointing and Tracking Systems

A P&T system can be roughly grouped into spacecraft elements and payload elements. The spacecraft elements may include actuators (reaction wheels, control momentum gyros, magnetic torque rods, etc.), sensors (star tracker, gyro, fine guidance sensor, etc.), and GNC flight software. Since they have been standardized with unique roles, spacecraft elements are omitted from the discussion in this section, but will be addressed in great detail in subsequent sections. Payload elements may consist of a variety of components, including imaging systems, focal plane sensors, steering mirrors, and references. Figure 1 illustrates potential candidates for each functional group. Since some elements have unique roles and others have partially overlapping roles, they need to be downselected carefully in order to derive a specific P&T architecture. The functional groups and their associated options are briefly discussed as follows:

• Mount mechanism: strapdown, (active or passive) vibration-isolated optical bench, or gimbaled optical bench.

A strapdown system is simplest; the payload is mounted rigidly to the spacecraft. A vibration-isolation optical bench eliminates vibration coupling between the spacecraft and the payload; such isolation can be active or passive. A gimbal mechanism provides stable and/or agile tracking capability at the cost of additional complexity.

• Pointing reference signal source: payload mission signal, inertially-stable reference signal or an independent observation signal.

A pointing reference for a tracking system is generally required. It can come from the mission payload itself (e.g., from a target-tracking algorithm applied to the instrument focal plane data), from a separate reference source (e.g., an inertially stabilized laser [5]), or from a separate tracking sensor.

• Moving elements: A steering mirror may be placed as the first optical element in the payload (“siderostat”) or later in the optical beam-train (“fast-steering mirror” or FSM).

A siderostat provides beam agility for moving-object tracking and rapid repointing, but does not generally have the high-bandwidth motion capability required for vibration rejection. It is also comparatively massive and costly. An FSM, by contrast, can provide sufficient high-bandwidth beam steering for jitter rejection at modest cost and complexity. However, an FSM is generally limited by optical constraints to a relatively small angular operating range.



• Sensing elements: focal plane array (FPA) sensor, wavefront sensor, inertial sensors (Microelectromechanical System (MEMS) accelerometer or gyro, high-frequency angular rate sensor).



An FPA sensor can be used to locate a target or reference source such as a star. The sensor generally provides very high-accuracy information and, in particular, is not subject to significant low-frequency drift error. However, due to the often limited update bandwidth (e.g., guide stars are faint, requiring long exposures), it is often desirable to augment the FPA sensor with a high-bandwidth inertial sensor.

• Moving-element actuators: brushless DC motor, stepper motor, piezo device (PZT).

Motors are generally used to actuate the gimbaled elements, while PZTs are used for short-stroke high-bandwidth applications such as FSM steering. A DC motor is typically used for high-frequency actuation, while a stepper motor is preferred for low-frequency actuation.

• Moving-element sensors: encoder, gyro, or reference object.

The encoder and gyro directly sense the relative angle of the gimbals. Observations of known reference objects can enhance the sensing accuracy of encoders and gyros, which suffer from measurement uncertainty and drift.

A specific P&T system can be designed to meet a set of system requirements by choosing among the menu of available options and constructing models using the available elements. Two representative examples are depicted in Figures 2 and 3. Figure 2 is an example of a "passive strapdown" P&T system in which all pointing and tracking capability is provided by the host spacecraft. In this case, the payload is rigidly mounted to the spacecraft and has no active elements to control its LOS. A variant of this approach uses the payload instrument to provide pointing information, e.g., by tracking a guide star in the instrument focal plane; such an approach can eliminate the effect of misalignments between the instrument and the spacecraft [6]. Figure 3 is an example of an "active strapdown" P&T system. The P&T capability is shared by both the spacecraft and the payload; the spacecraft provides large-angle agility and coarse pointing stability, while an FSM in the payload is used to reject high-frequency, small-amplitude pointing jitter [7], [8]. In this configuration, it is important to manage the interaction between the spacecraft and the payload carefully. This paper will focus mainly on this P&T system. Note that the strapdown passive P&T system is a simplified version of it, and thus the discussion in this paper applies directly to that system as well.


Figure 1. Elements of P&T system.

Figure 2. Functional block diagram of strapdown passive P&T system.


Figure 3. Functional block diagram of strapdown active P&T system.

Modeling of Pointing and Tracking Systems

The spacecraft of interest in this paper are small spacecraft with masses in the 5- to 500-kg range. This class encompasses 6U CubeSats as well as Small Explorer (SMEX) missions [6], [8], [9]. The key characteristics and approximate parameter ranges are summarized in Table 1. This class of spacecraft typically implements one of the two P&T system architectures introduced in the previous section.

Spacecraft Attitude Dynamics

The spacecraft is assumed to be a rigid body with small flexible appendages, such as solar panels, and structural modes at high frequencies. The flexibility is modeled as a series of 2nd-order mass-spring-damper systems, a simplified version of the Craig-Bampton or Likins approach [10]. The effective mass and natural frequencies are estimated by standard NASTRAN analysis, and the damping ratio is typically assumed to be between 0.1% and 1%. This mass-spring-damper system can also be used to model a vibration isolation mechanism or an optical bench.

Table 1. Spacecraft Parameters.

Parameter: Range
Mass (kg): 5 ~ 500
Moment of Inertia (kg-m2): 0.05 ~ 100
Dimension (m2): 0.2 × 0.3 ~ 1.5 × 3.0
Power (W): 30 ~ 350
Pointing Accuracy (arcsec, 3σ): 0.2 ~ 60
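For reference, each flexible mode in the series described above obeys the standard 2nd-order mass-spring-damper form (our notation; the forcing term f_i(t) depends on the mode shape and the applied loads):

\[
\ddot{q}_i + 2\zeta_i\,\omega_i\,\dot{q}_i + \omega_i^2\, q_i = f_i(t), \qquad \zeta_i \approx 0.001 \text{ to } 0.01 .
\]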

Spacecraft Disturbance

The spacecraft experiences known internal and environmental disturbances during operation. The internal disturbances encompass reaction wheel (RW)/control momentum gyro (CMG) torque noise, cryocooler disturbance, reaction torque of any moving parts and/or subsystems, and thermal snap during solar eclipse ingress/egress. For small spacecraft, reaction torques and any dynamic interaction between the spacecraft structure and periodic internal disturbances are most important. Thermal snap must be taken care of by system design, i.e., by using an attached or stiffened solar array. External torques that act on the spacecraft stem from gravity gradients, solar radiation pressure, residual magnetic dipoles, and aerodynamic drag. Solar torque is often coupled to the orbit rate and must be considered when sizing the momentum capacity of the RWs. Since it varies seasonally, at least four seasonal values need to be assessed. The frequency of gravity gradient variations can vary from the orbit rate to the slew rate. The secular component of the gravity gradient is another factor to be considered in assessing the required momentum capacity of the RWs. It is typically calculated during an inertial pointing where the gravity gradient is maximal (a 45-deg rotation along one axis that has a nonminimum moment of inertia). The importance of aerodynamic drag decreases with increasing altitude, and it is often ignored above 400 km [11].

Reaction Wheels

RWs provide the necessary torque for slews and disturbance compensation via the exchange of angular momentum with the spacecraft. Figure 4 shows the implementation of the mathematical model developed in [12]; Table 2 gives the key parameters with a range of typical values based on RWs for small spacecraft of interest.


Figure 4. Reaction wheel Simulink block diagram.

Table 2. Reaction Wheel Parameters.

Parameter: Range
Inertia (g-m2): 0.01 ~ 30
Max Speed (rpm): 1000 ~ 10000
Max Momentum (mN-m-s): 1 ~ 8000
Max Torque (mN-m): 0.5 ~ 90
Process Delay (ms): 50 ~ 250
Coulomb Friction (mN-m): 0.2 ~ 2
Quantization Error (mN-m): 0.002 ~ 0.005
Random Noise (mN-m, 1σ): 0.03 ~ 0.05
Angular Torque Noise (deg): ~10
Static Imbalance (mg-cm): 2 ~ 500
Dynamic Imbalance (mg-cm2): 20 ~ 5000
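To make the discussion concrete, here is a minimal per-cycle sketch of an RW torque model with the Figure 4 effects (process delay, quantization, torque limit, friction, and random noise); the structure and names are ours, not the flight model, and parameter values should be drawn from the Table 2 ranges in SI units:

```python
import numpy as np

def rw_torque_step(cmd_torque, state, p):
    """One simulation step of a reaction wheel torque model (sketch only).

    state: dict with 'speed' (rad/s) and 'delay_line', a list pre-filled
    with zeros whose length sets the process delay in GNC cycles.
    p: dict of parameters ('quant', 't_max', 'coulomb', 'visc', 'sigma',
    'inertia', 'dt').
    """
    state['delay_line'].append(cmd_torque)
    delayed = state['delay_line'].pop(0)                       # process delay
    quantized = np.round(delayed / p['quant']) * p['quant']    # quantization
    limited = np.clip(quantized, -p['t_max'], p['t_max'])      # torque limit
    friction = -np.sign(state['speed']) * p['coulomb'] \
               - p['visc'] * state['speed']                    # Coulomb + viscous
    noise = np.random.normal(0.0, p['sigma'])                  # random torque noise
    net = limited + friction + noise
    state['speed'] += net / p['inertia'] * p['dt']             # wheel speed update
    return net    # net wheel torque; the reaction on the bus is -net
```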

The literature [13] primarily focuses on static and dynamic imbalances in RWs and their influence on spacecraft pointing. However, for small spacecraft with pointing accuracy down to arcsec levels, the effects of processor delay, quantization error, angular torque noise, and RW speed zero-crossings are equally or more important than those of static and dynamic imbalance. At that level of performance, due to unfavorable inertia ratios and inexpensive components, everything matters. Processor delay is determined by the GNC computer command output rate and the rate of the RW motor processor. As pointed out in the literature [12], angular torque noise is a critical RW noise source, as its frequency is often close to the bandwidth of the spacecraft GNC controller. However, the effect of this torque can be mitigated significantly using a high-bandwidth spacecraft GNC controller.

Control Momentum Gyros

Miniature CMGs have been developed for small spacecraft [14]. One example produces a torque of 52.25 mN-m, sufficient to generate an average slew rate of 3 deg/s. It also consumes 20-70% less power than RWs of the same weight. However, the static imbalance torque at the fundamental frequency is larger by an order of magnitude than that of RWs with similar maximum torque capability. Furthermore, bearing lifetime is an important issue, since CMGs typically spin as fast as 11 krpm. These reasons tend to make RWs the preferred primary actuators for P&T systems as long as pointing stability is more important than fast slewing capability. Simultaneous pointing stability and fast tracking are difficult to achieve with either RWs or CMGs; the use of both actuators may be required at the cost of increased mass and complexity. Alternative solutions involve using FSMs, or using gimbals to articulate the entire payload or a siderostat, to actuate the payload LOS vector [15], [16].


Star Tracker

The star tracker (ST) is one of the major attitude determination instruments for spacecraft. Miniaturized STs with low mass and power consumption have recently been preferred for small spacecraft. Even a low-end, compact ST with limited accuracy (e.g., 18-90 arcsec) is in high demand for autonomous attitude determination of CubeSat-class spacecraft. In the case that the ST is used as the primary attitude determination instrument, the ST random noise, output rate, and average measurement delay parameters determine the bandwidth of the navigation filter.

Gyro

The raw gyro output rate is typically 100 to 200 Hz (Table 4); however, it is typically reduced (averaged) to an effective rate of 20 Hz or less in order to reduce the effect of angle white noise. Bias stability, which is a random process, is generally a more important factor in determining navigation filter performance than is pure bias, especially when the time constant of the bias stability is relatively short. A gyro can also be used to measure the local attitude of the payload, which can be different from spacecraft attitude determination. Examples encompass attitude measurement of packaged P&T elements such as the "inertial pseudostar reference unit" [5] and attitude measurement of a gimbaled siderostat. In the latter situation, the gyro effectively replaces an encoder.

Figure 5. Star tracker Simulink block diagram.


Figure 6. Gyro Simulink block diagram.

Table 4. Gyro Parameters.

Parameter: Range
Output Rate (Hz): 100 ~ 200
Angle Random Walk (deg/√h): 0.00015 ~ 0.1
Angle White Noise (arcsec/√Hz): >0.0035
Rate Random Walk (arcsec/√s³): >9.495E-5
Bias Stability (deg/h): 0.0045 ~ 3.3
Bias Stability Time Constant (s): ~300
Scale Factor (ppm): 1 ~ 100

Telescope

A telescope generally compresses beams of light and focuses them onto an FPA detector. A model based on geometric ray-trace optics was developed [15] and implemented using Matlab/Simulink blocks and EML. This formalism provides a way to derive (to linear order) the effect of motion of optical system components on the focal plane image. Thus, it becomes possible to model effects ranging from deliberate actuation or pointing of a body-fixed large-aperture telescope, through the motion of a siderostat, to the motion of a small FSM. It is also possible to include rigid-body motions of the optical elements themselves due to effects such as spacecraft vibration.

The model is initialized with an optical prescription specifying the parameters of the imaging system, including mirror dimensions, conic constants, placement, and orientation. Some key parameters of a representative Ritchey-Chretien telescope with f/D = 6.9 are listed in Table 5 [22]. The model can directly calculate the combined focal plane spot position from input beam angle and position in simulation. Furthermore, it can also derive sensitivity matrices relating input beam angle and position to focal plane spot position. For example, the chief ray of the representative telescope model has the following relationship:

\[
\begin{bmatrix} \delta U \\ \delta V \end{bmatrix} =
\begin{bmatrix} 3.45 & 0 & 0 \\ 0 & 3.45 & 0 \end{bmatrix}
\begin{bmatrix} \delta\varphi \\ \delta\theta \\ 1 \end{bmatrix} +
\begin{bmatrix} -0.08 & 0 \\ 0 & -0.08 \end{bmatrix}
\begin{bmatrix} \delta x \\ \delta y \end{bmatrix}
\tag{1}
\]

where [δU, δV] are focal plane spot position changes in meters, [δφ, δθ] are input beam angle changes in radians, and [δx, δy] are FSM angles in radians. This is essentially a simple pinhole model of a star tracker or camera, modified to include an FSM; however, higher-fidelity models can be incorporated with ease.

A particularly useful feature of the adopted model is that it accurately accounts for optical effects such as beam walk, as well as certain types of optical aberrations of an active P&T system. In particular, consider a strapdown active P&T system: a small FSM is used at high bandwidth to correct for small-angle errors, while the spacecraft is operated so as to keep the FSM centered. This configuration was used in the Joint Astrophysics Nascent Universe Satellite (JANUS) and the James Webb Space Telescope (JWST) [7], [8]. However, such an FSM is sometimes limited by the fact that it is not always placed in the system pupil. As a consequence, as the field of view of an imaging system is increased, the nonideal FSM location results in additional blurring, which is a function of the magnitude of the spacecraft pointing error. By accurately modeling the optical system in its entirety, it is possible to accurately derive the coupling between spacecraft body pointing stability and image quality, and thus perform better system-level trades during the GNC design process.
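Plugging representative numbers into Eq. (1) gives a feel for the scales involved (our arithmetic, using the Table 5 effective focal length and the Table 6 pixel size):

\[
\delta\varphi = 1\ \text{arcsec} \approx 4.85\ \mu\text{rad} \;\Rightarrow\; \delta U = 3.45\ \text{m} \times 4.85\ \mu\text{rad} \approx 16.7\ \mu\text{m} \approx 1.7\ \text{pixels},
\]
\[
\delta U = 0 \;\Rightarrow\; \delta x = \frac{3.45}{0.08}\,\delta\varphi \approx 43\,\delta\varphi ,
\]

i.e., a 1-arcsec beam error moves the spot by nearly two 10-μm pixels, and canceling it requires an FSM rotation roughly 43 times the beam-angle error.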


Table 5. Representative Ritchey-Chretien Telescope Parameters.

Parameter: Value
Dimension (m2): 0.5 × 0.9
Primary Focal Length (m): 1.0
Effective Focal Length (m): 3.45
Distance from Primary Mirror to System Focal Point (m): 0.6
Distance between Primary Mirror and Secondary Mirror (m): 0.73
Eccentricity of Primary Mirror: 1.23
Eccentricity of Secondary Mirror: 1.74

Focal Plane Array Sensor

A simple FPA model was developed using Matlab/Simulink blocks and EML to simulate the effects of realistic detector integration, pixelation, and detector noise. In addition, the effects of diffraction, while not modeled accurately in the geometric ray-trace approach outlined above, can be accounted for by convolving the resulting detector image with a suitable point-spread function (PSF). Some key parameters of a representative FPA detector are listed in Table 6 [22].

Table 6. Representative Focal Plane Array Sensor Parameters.

Parameter: Value
Star Flux at Magnitude Zero Point (photons/cm2/s): 1.26E+6
Dark Current & Detector Background Noise (e/p/s): 5.0
Detector Readout Noise (electrons): 20
Pixel Size (μm): 10
Effective Noise (arcsec, 1σ): 0.1

The detector noise model includes terms for detector dark current, read noise, and scattered light background as well as photon noise. Signal levels (photon count rates) are determined by integrating suitable stellar spectral templates multiplied by detector response functions and mirror coating reflectivity [22]. A new class of FPA sensor has recently become available from Teledyne [23], the HAWAII-2RG detector, which is designed to allow simultaneous multiple readouts of different locations on the chip at different rates. Thus, it is possible to read out a small “guide box” of 10-20 pixels on a side, typically centered on a bright star, at a rapid rate (~10 Hz) while reading out the remaining pixels of the 4096 × 4096 array at a much slower rate for increased sensitivity.

Thus, it becomes possible to use the same focal plane both as a fine guidance sensor and as a science detector, greatly simplifying the optical system and eliminating the need for a second focal plane array devoted entirely to instrument guiding [23].

The post-detection signal processing is also modeled. Such processing takes the simulated image and passes it to a star detection algorithm that compares the magnitude of the candidate star to the detector noise level; once the signal-to-noise ratio exceeds a set threshold, a "star present" flag is set to "ON," and the position of the detected star is derived using a simple centroiding algorithm. This information is then passed to the spacecraft and/or the payload GNC loop.

Angular Rate Sensor

An angular rate sensor (ARS) senses high-frequency angular vibration. It accurately detects vibrations with frequencies above 10 Hz, as well as vibrations in the 1- to 10-Hz range with some degraded performance. In this range, simple logic to compensate for known gain and phase loss may improve controller performance. Recently, some efforts have been made to replace and/or enhance an ARS with MEMS accelerometers for the detection of high-frequency vibration [24]. The ARS sensor can be simply represented by a 2nd-order high-pass filter as follows:

\[
H_{ARS}(s) = \frac{s(s + 10)}{s^2 + (4\pi)s + (4\pi)^2}
\tag{2}
\]
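For simulation use, the Eq. (2) filter can be discretized; a minimal sketch using a bilinear (Tustin) transform, with an assumed 200-Hz sample rate:

```python
import numpy as np
from scipy.signal import bilinear, lfilter

fs = 200.0                              # sample rate (our assumption)
num = [1.0, 10.0, 0.0]                  # s(s + 10)
den = [1.0, 4 * np.pi, (4 * np.pi) ** 2]  # s^2 + (4*pi)s + (4*pi)^2
b, a = bilinear(num, den, fs)           # discrete-time filter coefficients

# Example: filter a simulated angular-rate record (rad/s) at the ARS
# noise floor quoted in the text (8 urad/s, 1-sigma).
rate = np.random.normal(0.0, 8e-6, size=2000)
ars_out = lfilter(b, a, rate)
```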

The typical random noise of the ARS is 8 μrad/s (1σ), and the range is 10 rad/s.

Others

We have also developed models for such elements as MEMS accelerometers, magnetic torque rods and magnetometers, gimbal mechanisms, motors, and FSM mechanisms.

GNC Algorithms for Pointing and Tracking Systems

The choice of a P&T GNC algorithm depends on the specific architecture under investigation. We are focusing on the strapdown active P&T system described before. In this case, the algorithm can be grouped into the spacecraft GNC algorithm and the payload GNC algorithm. The spacecraft GNC algorithm slews the spacecraft toward a designated target and stabilizes the spacecraft attitude around the target using the reaction wheels, star tracker, and gyro. The payload GNC algorithm precisely stabilizes the LOS vector of the payload to the target using the FSM, FPA detector, and ARS. Note that only essential GNC algorithms are addressed in this paper. For example, less critical algorithms, such as the RW momentum control loop using magnetic torque rods, are omitted for brevity.

Spacecraft GNC Algorithm

The spacecraft GNC algorithm consists of four major components: slew maneuver planner, navigation filter, sensor processing, and control law, each implemented using Matlab/Simulink blocks and EML. The Attitude Determination and Control System (ADCS) Mission Manager and FSW Scheduler are not parts of typical GNC algorithms, but rather are functions of an upper-level program that integrates and executes these tasks. They are typically developed by GNC software engineers directly in C or C++ [25]. However, since they are prerequisites for implementing GNC algorithms, simplified versions are modeled to a limited extent. The ADCS Mission Manager can read predefined time-tagged mission profile data and command slew instructions and control modes; the FSW Scheduler is not explicitly modeled, but is represented implicitly using "rate transition blocks" that allow the GNC algorithms to be executed in specific execution cycles.

The Slew Maneuver Planner processes slew information and provides smooth spinup-coasting-spindown eigenrotation attitude profiles along slew directions defined in the slew command. The attitude profile includes the commanded quaternion and angular rate. The Slew Maneuver Planner can employ an advanced slew algorithm that provides the shortest slew profile even when the Earth, Sun, and Moon are in the path of eigenrotation slewing.

The Navigation Filter consists of three major components: compensation, extended Kalman navigation filter (KF), and error calculation. The Compensation compensates for gyro bias in the measured gyro data using an estimated value from the navigation filter, and compensates for deterministic time delay in the measured attitude quaternion using the measured gyro rate data. The Error Calculation calculates the error state of attitude and rate for the Control Law and estimates the gyroscopic term, which is not negligible in fast slewing.

The most complicated component is the extended Kalman navigation filter depicted in Figure 7. This block implements a standard extended KF that processes the measured rate and attitude quaternions and produces the rate and attitude quaternion states [26]. It has three components: quaternion propagation, Kalman gain propagation (eKFprop), and filter state update. As shown in Figure 7, three "rate transition blocks" are specially employed around the eKFprop block. The main purpose is to allow the eKFprop block to be executed at a different rate from the others (e.g., 1 Hz). The eKFprop block executes standard matrix calculations, including matrix inversion, which are often computationally expensive. By running this block at a lower rate, the computational load on the flight computer may be reduced, albeit at the cost of reduced performance of the extended KF.

The Control Law implements a simple proportional and derivative (PD) controller. The gains are parameterized in terms of damping ratio and bandwidth as follows:

\[
K_\omega = 2\xi\,\omega_N J, \qquad K_p = \omega_N^2\, J
\tag{3}
\]

Here, J is the moment of inertia.
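For concreteness, a minimal sketch of Eq. (3) in code, with illustrative (assumed) inertia, damping, and bandwidth values:

```python
import numpy as np

# Per-axis PD gains from bandwidth and damping, per Eq. (3).
J = np.diag([1.5, 1.8, 0.9])      # spacecraft inertia, kg-m^2 (assumed values)
zeta = 0.995                      # high damping recommended for P&T
w_n = 2 * np.pi * 0.06            # rad/s: ~0.06-Hz controller bandwidth

K_rate = 2 * zeta * w_n * J       # K_w = 2*xi*w_N*J
K_att = (w_n ** 2) * J            # K_p = w_N^2 * J

def pd_torque(att_err, rate_err):
    """Commanded torque from attitude and rate errors (3-vectors)."""
    return -(K_att @ att_err + K_rate @ rate_err)
```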

Figure 7. Extended Kalman navigation filter Simulink block diagram.


The damping ratio (ξ) is typically 0.707 in most applications, but a high damping ratio (e.g., 0.995) is recommended in the P&T application to minimize undesired overshoot. The bandwidth (ωN) is typically gain-scheduled as a function of the active mode (i.e., fast slew mode or fine tracking mode) and the active status of the payload GNC system. For example, a high bandwidth is used for the spacecraft GNC controller during fast slew mode, while a low bandwidth is used during fine tracking mode. When the payload GNC system is active, the bandwidth of the spacecraft GNC controller tends to be lowered further to prevent the two GNC loops from fighting each other.

The bandwidth is a critical element of pointing accuracy: a higher bandwidth yields better pointing accuracy, or equivalently, smaller pointing error. Therefore, it is typically asked what the nominal bandwidth is and what the achievable bandwidth is. This is another reason why we prefer to employ the simple PD controller characterized by bandwidth instead of more advanced controllers such as linear quadratic Gaussian (LQG) and H-infinity. Without detailed analysis, such as stability and Monte Carlo analysis, a typical range of bandwidth can be estimated by a rule-of-thumb waterfall effect: an order-of-magnitude reduction at each step. In particular, a reduction in bandwidth happens at the step from the GNC process cycle to the GNC navigation filter bandwidth, and another reduction follows at the step from the GNC navigation filter bandwidth to the GNC controller bandwidth. Of course, the magnitude of reduction may vary around 10% as a function of the quality of the GNC subsystems. For example, using a very high-end gyro like a SIRU makes it possible for the GNC controller bandwidth to be 2/10 of the navigation filter bandwidth.

Payload GNC Algorithm

The payload GNC algorithm is typically simple and effectively consists of only the control algorithm, as shown in Figure 8. The main reason for making the payload GNC flight software as simple as possible is that it often runs at a high rate (particularly if it incorporates a high-rate FSM) on a field-programmable gate array (FPGA) with limited computational resources. The control algorithm stabilizes the location of the image of a guidance target (e.g., photons from a star) on the focal plane, with the FSM tilt angle modulated by measurements of the FPA detector and the ARS.

The FSM tilt angle loop consists of an ARS-to-FSM loop and an FPA-to-FSM loop. The former compensates for image jitter with frequency higher than 5 Hz, and the latter compensates for image jitter with frequency lower than 5 Hz. To ensure that there is no frequency overlap, the ARS-to-FSM loop employs washout filters. The FPA-to-FSM loop is assumed to be a 200-Hz process. This loop employs a proportional and integral (PI) controller. The integrator is used for rejecting any bias component. The proportional gain is selected to be a fraction of the ratio of the FPA sensor output rate (e.g., 10 Hz) to the GNC process cycle (e.g., 200 Hz). This is one way to generate a high-rate command signal from a low-rate measurement; another way is to employ a low-pass filter. The output from the PI controller, which is the desired relative FSM tilt angle with respect to the current one, is integrated before being combined with the FSM command from the ARS-to-FSM loop; a sketch of this update law follows below.

We have focused primarily on the pointing GNC algorithm in this paper. However, the tracking GNC algorithm presents uniquely challenging issues, including coordinated tracking among multiple actuation elements of the spacecraft, payload gimbal mechanisms, and FSM, and reconstruction of the true LOS vector from the payload or spacecraft to a target from multiple sensor measurements, such as encoders, gyros, FPA detectors, and known reference object locations. The development of precision tracking GNC algorithms is one of our next research topics.
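A minimal sketch of the FPA-to-FSM update law described above, assuming illustrative gain values and the 200-Hz/10-Hz rates quoted in the text (all names are ours, not the flight software):

```python
import numpy as np

GNC_HZ, FPA_HZ = 200.0, 10.0          # loop rates quoted in the text
kp = 0.5 * (FPA_HZ / GNC_HZ)          # "a fraction of the ratio" of the rates
ki = 0.02                             # integral gain for bias rejection (assumed)

integ = np.zeros(2)                   # PI integrator state (two FSM axes)
fsm_cmd = np.zeros(2)                 # absolute FSM tilt command

def fpa_to_fsm_step(fpa_err, ars_cmd):
    """One 200-Hz cycle: fpa_err is the latest (zero-order-hold) 10-Hz FPA
    centroid error; ars_cmd is the high-frequency ARS-to-FSM command."""
    global fsm_cmd
    integ[:] = integ + ki * fpa_err        # integral term
    delta = kp * fpa_err + integ           # PI output: desired relative tilt
    fsm_cmd = fsm_cmd + delta              # integrate to an absolute command
    return fsm_cmd + ars_cmd               # combine with the ARS loop command
```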

Figure 8. Simplified P&T payload GNC Simulink block diagram.


Automatic Code Generation and Implementation

We have discussed models of critical P&T elements and GNC algorithms. Here, we shift our focus to autogeneration of flight models and GNC flight software from the models and algorithms discussed previously. The code generation requires a few prerequisite conditions in the framework of model-based design. First, the model-based design tool is constructed with appropriate partitions that can accommodate the goal of code generation. For example, consider a SWIL/HWIL test of spacecraft GNC algorithms. An ideal partition consists of two main components: one for the spacecraft GNC algorithms and the other for the remaining models and algorithms. These blocks are connected by input/output signals with "rate transition blocks" (see Figure 9). As mentioned before, a "rate transition block" permits one to execute the algorithms at different rates at the level of design and implementation. Second, models and algorithms are developed using autocodable Matlab/Simulink blocks and EML. Third, all parameters, including static and tunable parameters such as GNC control gains, are defined as external constant inputs to the GNC algorithm blocks and therefore can be provided by the GNC mission manager during each specific mission phase.

Figure 9. Spacecraft non-GNC flight software block.


With these prerequisite conditions, code generation is rather straightforward using the Real-Time Workshop code generator [27]. The generated code is pre-tested with a set of regression tests before the SWIL and HWIL tests. First, it is validated and verified against the original model or algorithm within the same model-based design framework, using the Matlab/Simulink Verification and Validation toolbox [28]. Second, it is tested for runtime errors using Matlab/PolySpace [29], and further tested on a virtual processor that mimics the major functionality of the real target processor, i.e., with the help of a third-party vendor's MULTI [30]. The code that passes the above regression tests is then ready for SWIL and HWIL tests.

For a SWIL test, the flight model code and the GNC flight software run separately on two PowerPC processors communicating over a network, using the Matlab/xPC Target operating system [31]. This test is useful for checking communication between the flight software and the outside world and for further code validation. For the HWIL test, the flight models can be replaced by actual flight hardware, and the GNC flight software runs on the actual flight computer. This test is useful for evaluating the computational speed and memory limitations of the flight processor and the data interfaces between hardware and flight software. Furthermore, it can evaluate network delays and communication dropouts. As a result, the development, validation, and verification of flight software may be sped up using model-based design and implementation.

Examples

Some results are presented from the strapdown active P&T system that we have focused on, with a special focus on GNC system design and analysis. The gains for the spacecraft and payload GNC algorithms are selected according to the guidelines described in previous sections. The simulation executed a mission plan that sequentially executed initial tracking, slew maneuver, and fine tracking modes. The results are plotted in Figures 10-13.

Figure 10 plots the position error of the image with respect to the center of the FPA detector. For convenience, the position error is converted to LOS error by dividing it by the effective focal length of the telescope. As shown, the image position error is reduced significantly to within the threshold of 0.35 arcsec (3σ) during fine tracking mode, when the FPA detects a guide star and provides the FSM loop with the tracking information of that guide star. By contrast, large errors occur when the FPA detector is in a loss-of-lock situation due to a large slew or a small repointing of the instrument LOS ("dither," done to provide a background calibration for the science instrument being modeled). The large error between 100 and 250 s is due to a large spacecraft slew. Two smaller errors with magnitudes of 5 arcsec are due to dithering the FSM. During each dither, the FPA acquires a new guide star and tracks it. Dithering is clearly shown in Figure 11.

Figures 12 and 13 both show power spectral densities of the spacecraft pointing LOS error, the FSM tilt angle, and the FPA image LOS error. In Figure 12, the spacecraft pointing error is shown to be much larger than the requirement. This implies that a "passive P&T system" with the same spacecraft GNC system cannot meet the P&T LOS requirement. However, as the FSM tilt angle is modulated to compensate for the spacecraft LOS error, the image LOS error on the FPA detector is controlled within the requirement of 0.35 arcsec (9% margin). The same figure also shows the bandwidths of important subsystems: the bandwidth of the spacecraft GNC system is about 0.06 Hz; the bandwidth of the telescope GNC system is 2 Hz; the RW speed is nominally 1200 rpm (or 20 Hz); and the first solar array bending and cryocooler frequencies are 35 Hz and 50 Hz, respectively.

Figure 13 shows how the FPA-to-FSM loop and the ARS-to-FSM loop contribute to the power spectral density of the FSM tilt angles shown in Figure 12. The FPA-to-FSM loop is shown to compensate for the spacecraft pointing error at frequencies of up to 0.3 Hz (i.e., spacecraft response due to RW and ST noises), whereas the ARS-to-FSM loop is shown to compensate for the spacecraft pointing error at frequencies higher than 10 Hz (i.e., spacecraft vibration). This is consistent with expectations based on the FPA detector and ARS sensor bandwidths.

Conclusions

This paper has presented a model-based design and implementation approach for pointing and tracking systems. The GNC models and algorithms developed are comparable to heritage counterparts in terms of accuracy and performance. Furthermore, development and modification/adaptation of the GNC flight software are time and cost efficient, which is critical to new demands arising in the operationally responsive space field.


Figure 10. FPA image LOS error during initial tracking, slew maneuver, and fine tracking modes.

Figure 11. FSM tilt angle during initial tracking, slew maneuver, and fine tracking modes.

Figure 12. Power spectral density distribution of spacecraft LOS error, FPA image LOS error, FSM tilt angle, and image rms.

Figure 13. Power spectral density distribution of spacecraft LOS error, combined FSM tilt angle, FPA contribution to FSM tilt angle, and ARS contribution to FSM tilt angle.


References

[1] Wegner, P., Operationally Responsive Space, http://www.responsivespace.com/ors/reference/ORS%20Office%20Overview_PA_Cleared%20notes.pdf.
[2] Barnard, P., "Software Development Principles Applied to Graphical Model Development," AIAA Modeling and Simulation Technologies Conference and Exhibit, AIAA-2005-5888, 2005.
[3] Embedded Matlab, http://www.mathworks.com/products/featured/embeddedMatlab/.
[4] Rate Transition Block, http://www.mathworks.com/access/helpdesk/help/toolbox/simulink/slref/ratetransition.html.
[5] Gilmore, J., S. Feldgoise, T. Chien, D. Woodbury, M. Luniewicz, "Pointing Stabilization System Using the Optical Reference Gyro," Institute of Navigation Conference, Cambridge, MA, June 1993.
[6] Dorland, B. and R. Gaume, "The J-MAPS Mission: Improvements to Orientation Infrastructure and Support for Space Situational Awareness," AIAA SPACE 2007 Conference & Exposition, Long Beach, California, September 2007.
[7] James Webb Space Telescope, http://www.jwst.nasa.gov/scope.html.
[8] Joint Astrophysics Nascent Universe Satellite (JANUS), A SMEX Mission Proposal Concept Study Report, December 16, 2008.
[9] Grocott, S., R. Zee, J. Matthews, "The MOST Microsatellite Mission: One Year in Orbit," 18th Annual AIAA/USU Conference on Small Satellites, Salt Lake, Utah, 2004.
[10] Wie, B., Space Vehicle Dynamics and Control, AIAA Education Series, 1998.
[11] Psiaki, M., "Spacecraft Attitude Stabilization Using Passive Aerodynamics and Active Magnetic Torquing," AIAA Guidance, Navigation, and Control Conference and Exhibit, AIAA 2003-5420, Austin, Texas, August 2003.
[12] Bialke, B., "High-Fidelity Mathematical Modeling of Reaction Wheel Performance," 21st Annual American Astronautical Society Guidance and Control Conference, AAS 98-063, February 1998.
[13] Masterson, R., D. Miller, R. Grogan, "Development of Empirical and Analytical Reaction Wheel Disturbance Models," AIAA 99-1204.
[14] Lappas, V., W. Steyn, C. Underwood, "Design and Testing of a Control Moment Gyroscope Cluster for Small Satellites," Journal of Spacecraft and Rockets, Vol. 42, No. 4, July-August 2005.
[15] Redding, D. and W. Breckenridge, "Optical Modeling for Dynamics and Control Analysis," Journal of Guidance, Navigation, and Control, Vol. 14, No. 5, September-October 1991.
[16] Sugiura, N., E. Morikawa, Y. Koyama, R. Suzuki, Y. Yasuda, "Development of the Elbow Type Gimbal and the Motion Simulator for OISL," 21st International Communications Satellite Systems Conference and Exhibit, AIAA 2003-2268, 2003.
[17] Brady, T., S. Buckley, M. Leammukda, "Space Validation of the Inertial Stellar Compass," 21st Annual AIAA/USU Conference on Small Satellites, Salt Lake, Utah, 2007.
[18] ComTech AeroAstro Miniature Star Tracker, http://www.aeroastro.com/components/star_tracker.
[19] Liebe, C.C., "Accuracy Performance of Star Trackers - A Tutorial," IEEE Transactions on Aerospace and Electronic Systems, Vol. 38, No. 2, 2002.
[20] Cemenska, J., Sensor Modeling and Kalman Filtering Applied to Satellite Attitude Determination, Masters Thesis, University of California at Berkeley, 2004.
[21] Jerebets, S., "Gyro Evaluation for the Mission to Jupiter," IEEE Aerospace Conference, Big Sky, Montana, March 2007.
[22] Schroeder, D., Astronomical Optics, 2nd ed., Academic Press, 2000.
[23] Teledyne Imaging Sensors HAWAII-2RG, http://www.teledyne-si.com/imaging/H2RG.pdf.
[24] ARS-12A MHD Angular Rate Sensor, http://www.atasensors.com/.
[25] Hart, J., E. King, P. Miotto, S. Lim, "Orion GN&C Architecture for Increased Spacecraft Automation and Autonomy Capabilities," AIAA Guidance, Navigation & Control Conference, Honolulu, Hawaii, 2008.
[26] Lefferts, E.J., F.L. Markley, M.D. Shuster, "Kalman Filtering for Spacecraft Attitude Estimation," 20th AIAA Aerospace Sciences Meeting, AIAA-82-0070, Orlando, Florida, 1982.
[27] Real-Time Workshop 7.4, http://www.mathworks.com/products/rtw/.
[28] Simulink Verification and Validation, http://www.mathworks.com/products/simverification/.
[29] PolySpace Client C/C++ 7.1, http://www.mathworks.com/products/polyspaceclientc/.
[30] MULTI Integrated Development Environment, http://www.mathworks.com/products/connections/product_detail/product_35473.html.
[31] xPC Target 4.2, http://www.mathworks.com/products/xpctarget/.
[32] Hecht, E., Optics, 4th ed., Addison Wesley, 2002.
[33] Rodden, J., "Mirror Line of Sight on a Moving Base," American Astronautical Society, Paper 89-030, February 1989.
[34] Weinberg, M., "Working Equations for Piezoelectric Actuators and Sensors," Journal of Microelectromechanical Systems, Vol. 8, No. 4, December 1999.


Sungyung Lim is a Senior Member of the Technical Staff in the Strategic and Space Guidance and Control group at Draper Laboratory. Before joining Draper, he was a Senior Engineering Specialist at Space Systems/Loral, where his work involved the analysis of spacecraft dynamics, on-orbit anomaly investigation of spacecraft control systems, and the design and analysis of spacecraft pointing systems. At Draper, his work has extended to the analysis and design of GN&C algorithms and software in the strategic and space area. His current interests include model-based GN&C design and the analysis and design of high-precision pointing systems for small satellites. Dr. Lim received B.S. and M.S. degrees in Aerospace Engineering from Seoul National University and a Ph.D. in Aeronautics and Astronautics from Stanford University.

Benjamin F. Lane is a Senior Member of the Technical Staff at Draper Laboratory and is currently the Task Lead for the Guidance System Concepts effort. His expertise includes the development of advanced algorithms for image processing and real-time control systems, including adaptive optics and spacecraft instrumentation, and he has carried instruments from concepts, requirements, and designs through control software, integration, testing, commissioning, operations, debugging, and data acquisition. He helped design, build, and operate a multiple-aperture telescope system (the Palomar Testbed Interferometer) for extremely high-angular-resolution (picorad) astronomical observations, and also designed and built high-contrast imaging payloads for sounding rocket missions and spacecraft. He has published more than 45 peer-reviewed papers in his area of expertise and is a recipient of the 2010 Draper Excellence in Innovation Award. Dr. Lane holds a Ph.D. in Planetary Science from the California Institute of Technology.

Bradley A. Moran is a Program Manager for Space Systems at Draper Laboratory. With 26 years of professional experience in both academic and industry settings, he has developed and implemented GN&C algorithms for a number of platforms ranging from undersea to on-orbit. Recent experience includes mission design and analysis for rendezvous and proximity operations and systems engineering for NASA, DoD, and other government sponsors. Since 2009, he has been the Mission System Engineer for the Navy's Joint Milli-Arcsecond Pathfinder Survey program.

Timothy C. Henderson is a Distinguished Member of the Technical Staff at Draper Laboratory. He has over 35 years of experience leading projects in structural dynamics, GN&C flight software, fault-tolerant systems, robotics, and precision pointing and tracking. He served as Technical Director and the Attitude Determination and Control System (ADCS) Lead for the Joint Astrophysics Nascent Universe Satellite (JANUS) Small Explorer satellite program. He holds B.S. and M.S. degrees in Civil Engineering from Tufts University and MIT, respectively.

Frank A. Geisel is currently Program Manager for Strategic Business Development at Draper Laboratory and is responsible for the identification, capture, and management of programs focused on developing leading-edge solutions for the DoD, NASA, and the Intelligence Community. He has worked on various aspects of systems engineering, networking, and communications architectures at Draper since 2000 and has held management positions in both the programs and engineering organizations. His early career was spent in the offshore industry, developing and fielding deep-water integrated navigation systems and closed-loop robotic control systems for subsea inspection, operation, and recovery applications. In the early 1980s, he was the Program Manager for 13 major expeditions to the Arctic and Antarctic in support of oil and gas exploration. Mr. Geisel is a member of the AIAA, the IEEE, and the Society of Naval Architects and Marine Engineers (SNAME). He received a B.S. in Ocean Engineering from MIT.


Law enforcement and other security-related personnel could benefit greatly from the ability to objectively and quantitatively determine whether or not an individual is being deceptive during an interview. Draper engineers, working with MRAC, have been developing ways to detect deception during interviews using multiple, synchronized physiological measurements. Their work attempts to bring engineering rigor and approaches to the collection and analysis of physiological measurements during a highly controlled psychological experiment. Most previous research has been performed with fewer sensing modalities in more academic environments, primarily with university students as participants in a mock crime. This effort was the first of its kind at Draper and represents an early step in exploiting physiological measurements; future efforts would build on it and aim toward developing and testing a usable tool. This work could be valuable in any environment where two persons interact and where assessing the credibility of information is important, including law enforcement, homeland security, and intelligence community applications. Such knowledge could ultimately enable better ways to elicit and validate information from people during interviews and could have applications in a wide variety of contexts. The researchers continue to investigate how to quantify the physiological responses associated with human interactions in order to make useful inferences about intent and behavior.


Detection of Deception in Structured Interviews Using Sensors and Algorithms

Meredith G. Cunha, Alissa C. Clarke, Jennifer Z. Martin, Jason R. Beauregard, Andrea K. Webb, Asher A. Hensley, Nirmal Keshava, and Daniel J. Martin

Copyright © 2010 by the Society of Photo-Optical Instrumentation Engineers (SPIE). Presented at SPIE Defense, Security, and Sensing 2010, Orlando, FL, April 5-9, 2010.

Abstract

Draper Laboratory and Martin Research and Consulting, LLC* (MRAC) have recently completed a comprehensive study to quantitatively evaluate deception detection performance under different interviewing styles. The interviews were performed while multiple physiological waveforms were collected from participants to determine how well automated algorithms can detect deception based on changes in physiology. We report the results of a multifactorial experiment with 77 human participants who were deceptive on specific topics during interviews conducted in one of two styles: a forcing style that relies on more coercive or confrontational techniques, or a fostering approach that relies on open-ended interviewing and elements of a cognitive interview. The interviews were performed in a state-of-the-art facility where multiple sensors simultaneously collect synchronized physiological measurements, including electrodermal response, relative blood pressure, respiration, pupil diameter, and electrocardiogram (ECG). Features extracted from these waveforms during honest and deceptive intervals were then submitted to a hypothesis test to evaluate their statistical significance, and a univariate statistical detection algorithm assessed the ability to detect deception for different interview configurations. This paper explains the protocol and experimental design for the study; results are reported in terms of statistical significance, effect sizes, and receiver operating characteristic (ROC) curves, and identify how promising features performed in different interview scenarios.

* MRAC is a veteran-owned research and consulting firm that specializes in bridging the gap between empirical knowledge and corporate or government applications. MRAC conducts human subject testing for government agencies, academic institutions, and corporate industries nationwide.

Introduction

Motivation

A significant amount of current research has focused on detecting deception based on changes in human physiology, with obvious benefits to military operations, counterintelligence, and homeland defense applications, which must optimally collect and use human intelligence (HUMINT). Progress in this area has largely focused on how individual sensors (e.g., functional magnetic resonance imaging (fMRI), video, ECG) can reveal evidence of deception, with the hope that a computerized system may be able to automate deception detection reliably. For practical applications outside the laboratory environment, however, an equally important and complementary aspect is the way that information can best be "educed" during elicitations, debriefings, and interrogations. Thus far, there has been little investigation into how sensing technologies can complement and improve on educing methodologies and traditional observer-based credibility assessments.

In 2006, the Intelligence Science Board published a comprehensive report on educing information [1]. A major finding of the report is that there has been minimal research on most methods of educing information, and there is no scientific evidence that the techniques commonly taught to interviewers achieve the intended results. Indeed, it appears that there have not been any systematic investigations of educing methodologies for almost 50 years. According to the report, most educing tactics and procedures taught at the various service schools have had little or no validation and are frequently based more on casual empiricism and tradition than on science.

In this paper, we report quantitative results of a comprehensive study whose objective was to determine how well measurements of physiology from multiple sensors can be collected and fused to detect deception and to explore how two distinctly different interviewing styles affect deception detection (DD) performance. As part of a broader research goal at Draper Laboratory to understand how contact and remote sensors can be employed to infer identity and cognitive state, the experiment was conducted in a state-of-the-art facility that hosts multiple sensors in a monitored environment, enabling highly calibrated and synchronized measurements to be collected and fused. Typical research facilities in this field do not have the resources to collect synchronized data from the number of sensors we have utilized.

Psychological Basis for Investigation

Our work builds on longstanding efforts in the area of psychophysiology that have attempted to translate cognitive processes attributed to stress and deception into physiological observables [2]-[10] whose attributes have been quantified using statistics. A key contribution of our effort has been to implement psychophysiological features identified by different researchers in a common, flexible analytical framework that allows their efficacy to be impartially compared and scrutinized.


During the previous century, credibility assessment and deception detection were investigated in various scientific disciplines, including psychiatry, psychology, physiology, philosophy, communications, and linguistics [11], [12]. Areas of specific exploration include nonverbal behavioral cues to deception [13], [14], verbal cues including statement validity analysis [15], [16], psychophysiological measures of lie detection [17], [18], and the effectiveness of training programs in detecting deception [19], [20]. While much progress has been made in determining specific measures that are helpful in detecting deception, research has consistently shown that "catching liars" is a difficult task and that people, including professional lie-catchers, often make mistakes [12], [21].

Traditional psychophysiological measures used to detect deception (collectively referred to as the polygraph or lie detector test) have changed little since they first became available. Psychophysiological assessment involves fitting an individual with sensors that are then connected to a polygraph machine. These sensors measure sweating from the palm of the hand or fingers (referred to as electrodermal response or galvanic skin response), relative blood pressure (measured by an inflated cuff on the upper arm), and respiration. Recently, however, other sensors have been proposed [22] as possible alternatives to the polygraph, such as thermal imaging, eye tracking, or a reduced set of only two sensors (electrodermal activity and plethysmograph, referred to as the Preliminary Credibility Assessment Screening System (PCASS)). It remains to be seen to what extent these sensors will improve on current credibility assessment methodologies.

Methods

Experimental Design

Participants were recruited primarily through an advertisement in the Boston Metro, a free newspaper distributed in major metro stations. Seventy-eight participants ultimately completed the study; they were on average 42 years old and had an average of 14 years of education. Participants were informed that they would be paid $75 for the successful completion of the research session and an additional $100 if they were deemed by the interviewer to be truthful throughout the interview. This bonus was intended to motivate each participant to convince the interviewer of his or her honesty. In reality, all of the participants who successfully completed the study were paid the full $175, regardless of the interviewer's determination.

This study was a 2 (deception) x 2 (concealment) x 2 (interview style) factorial design. In the interview, participants were asked first about their current residence, followed by their religious beliefs and their employment. Participants were either instructed to tell the truth about their current residence but lie about their religious beliefs and their employment status, or to tell the truth about all three topics. The concealment aspect involved a final portion of the interview and is not discussed in the results presented here. Eligible participants were randomly assigned to one of eight conditions; there were no significant differences in gender, race, or age between these groups, ps > 0.05. The frequencies of participants in each experimental condition are presented in Table 1. Participants were randomized to be interviewed in one of the two styles described below.

Forcing

There are several sources currently available that provide information about intelligence interviewing techniques, including the U.S. Army Intelligence and Interrogation Handbook [23], the Central Intelligence Agency's KUBARK Counterintelligence Interrogation Manual [24], and Gordon and Fleisher's Effective Interviewing & Interrogation Techniques [25]. Despite the breadth of the Army handbook's suggestions for educing information, many sources note that the more coercive or confrontational approaches contained in the handbook have often received emphasis during training and have been overused in the field. In the forcing interview, the interviewer tightly controls the course of the conversation and frequently challenges the participant's motives and responses through open skepticism or accusations of deceit. The interviewer assesses the participant's honesty, in part, on how the participant reacts to these accusations. This style establishes a comparatively adversarial relationship between the interviewer and the participant.

Table 1. Frequencies of Participants in the Experimental Conditions.

Forcing:
  Lie Condition   Conceal   No Conceal   Total
  Lying           13        11           24
  Truthful        8         8            16
  Total           21        19           40

Fostering:
  Lie Condition   Conceal   No Conceal   Total
  Lying           10        8            18
  Truthful        12        8            20
  Total           22        16           38


Fostering

The fostering interview includes elements of motivational interviewing and the cognitive interview. The cognitive interview [26], [27] was originally developed to improve the ability of the police to acquire the most accurate information possible from a witness. The fostering style interview aims to establish a collaborative relationship between the interviewer and the participant. In this style, the interviewer adopts a friendlier demeanor; he does not openly accuse the participant of lying, and his questions never presume deceit. He asks open-ended questions designed to elicit a wealth of reported information that the interviewer can use to assess the participant's honesty. The fostering interview questions aim to establish a cooperative tone.

The following hypotheses generated for this study are the focus of this report:

1. Does the type of interview affect the interviewer's ability to detect deception?

2. How well do the physiologic sensors predict deception when analyzed individually?

3. Does the type of interview influence the accuracy of physiologic sensors in detecting deception?

Facilities Used for Experimentation

The facility used to conduct the experiments consisted of an integrated, sensor-centric testing space comprising a waiting area, an assessment room, a noise-insulated testing room, an operations room, and a data management room. The research staff executed the research protocol in the waiting area and in the assessment and testing rooms. These rooms were equipped with the different physiological sensors used to collect participant data during the execution of the research protocol. Temporal protocol execution, sensor control, and data collection were remotely controlled from the operations room. Finally, the collected electronic sensor and experiment data were processed and stored in the data management room.

Sensors Used for Data Collection

In the current study, we evaluated 14 features from 5 physiological signals. Several nonverbal behavioral cues were assessed with a Tobii eye tracker, including pupil size and blink rate. The LifeShirt System (commercially available through VivoMetrics) measured ECG and respiration. The electrodermal activity sensor from the Lafayette polygraph was used to measure changes in the electrical activity of the skin surface; these changes can be thought of as indicators of imperceptible sweating that signify sympathetic arousal. In addition, the plethysmograph from the Lafayette polygraph was used. This photoelectric plethysmograph measures rapidly occurring relative changes in pulse blood volume in the finger.

Features of Interest

From the 5 signals collected, 14 features were analyzed, as described in Table 2. Some features have been reviewed sufficiently in the literature that a direction of change can be hypothesized when comparing feature values from baseline data to those gathered while the subject was deceiving; in some cases, the direction of change is not known with certainty. The features that consistently performed better than chance included interbeat interval, pulse area, pulse width, peak-to-peak interval, and left and right pupil diameter. This feature group comprises cardiac-related features as well as pupil diameter features, and they are discussed further below.

Interbeat interval was calculated by first decomposing the ECG signal into peaks. A method for locating the R-peak (the highest peak) in the ECG signal was adapted from an algorithm implemented in the ECGTools package by Gari Clifford, which in turn was inspired by the literature [28]. The highest and lowest points in the difference signal were found via filtering and segmentation; each pair contained an R-peak, located at the time of the maximum signal value in the interval. Once the peaks were located, the R-to-R interval was calculated by taking the difference between the times at which successive peaks occurred.

In order to calculate the photoplethysmograph (PPG) features, it was necessary to find the peaks and valleys that define the signal. This was done according to mixed-state feature extraction derived from an object-tracking algorithm used to track fish movements in video [29]. Pulse area was calculated as the sum of the PPG signal over one full cycle. Pulse width was calculated at half of the maximum pulse height, according to the full-width half-maximum (FWHM) norm. Peak-to-peak interval was calculated as the difference between the times at which two consecutive peaks occurred; each peak must be part of a complete cycle, which begins and ends with a valley. The right and left pupil diameter features were read directly from the data reported by the Tobii eye tracker. (A minimal sketch of these waveform computations follows Table 2.)

Detection of Deception in Structured Interviews Using Sensors and Algorithms

73

Table 2. Sensor, Description, and Anticipated Change Under Deception for Each Feature Calculated.

  Feature                                   Sensor                                                    Feature Description                                                                                          Expected Change Direction Under Deception
  Pupil Diameter (Right)                    EyeTracker                                                Subject right eye pupil diameter, in millimeters                                                             Up
  Pupil Diameter (Left)                     EyeTracker                                                Subject left eye pupil diameter, in millimeters                                                              Up
  Blink Rate                                EyeTracker                                                Subject eye blink frequency, in hertz                                                                        Down
  Pulse Area                                Photoplethysmograph (PPG)                                 Area of signal in one beat                                                                                   Down
  Pulse Amplitude                           PPG                                                       Difference between peak amplitude and trough amplitude                                                       Down
  Pulse Width                               PPG                                                       FWHM of peak                                                                                                 Down
  Peak-to-Peak Interval                     PPG                                                       Time between successive peaks                                                                                Up
  Electrodermal Activity (EDA) Amplitude    EDA Finger Electrode                                      Peak amplitude of skin resistance                                                                            Up
  EDA Duration                              EDA Finger Electrode                                      Time for skin resistance to return to preresponse level                                                      Up
  EDA Line Length                           EDA Finger Electrode                                      Length of skin resistance line from peak to recovery                                                         ---
  Interbeat Interval                        ECG (LifeShirt)                                           Time between successive R-peaks of cardio signal                                                             Up
  Respiratory Inhale/Exhale Ratio           Inductive Plethysmograph (LifeShirt Respiratory Sensor)   Ratio of the time interval from one trough to the next peak to the time interval from that peak to the next trough   Up
  Respiratory Cycle Time                    LifeShirt Respiratory Sensor                              Time between successive peaks in respiratory signal                                                          Up
  Respiratory Amplitude                     LifeShirt Respiratory Sensor                              Difference between peak amplitude and trough amplitude                                                       Down
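To make the feature definitions concrete, the following is a minimal sketch of the interbeat-interval and PPG cycle computations described before Table 2, assuming uniformly sampled waveforms. The crude thresholding peak finder, function names, and parameter choices are illustrative assumptions, not the study's ECGTools-based implementation.

    # Illustrative sketch only; the simple peak finder and all names are
    # assumptions, not the study's actual code.
    import numpy as np

    def r_peak_times(ecg, fs, min_separation_s=0.3):
        # Find large positive swings in the difference signal, then take the
        # sample of maximum amplitude in each candidate region as the R-peak.
        diff = np.diff(ecg)
        candidates = np.nonzero(diff > 0.5 * diff.max())[0]
        min_sep = int(min_separation_s * fs)
        peaks = []
        for idx in candidates:
            if peaks and idx - peaks[-1] < min_sep:
                continue  # same QRS complex as the previous candidate
            peaks.append(idx + int(np.argmax(ecg[idx:idx + min_sep])))
        return np.array(peaks) / fs  # R-peak times in seconds

    def interbeat_intervals(peak_times_s):
        # R-to-R interval: difference between times of successive R-peaks.
        return np.diff(peak_times_s)

    def ppg_cycle_features(cycle, fs):
        # One PPG cycle begins and ends at a valley and contains one peak.
        baseline = float(cycle[0])
        pulse_area = float(np.sum(cycle)) / fs  # not baseline-corrected (see Discussion)
        pulse_amplitude = float(cycle.max()) - baseline
        # Pulse width at half of the maximum pulse height (FWHM).
        above = np.nonzero(cycle >= baseline + 0.5 * pulse_amplitude)[0]
        pulse_width = (above[-1] - above[0]) / fs
        return pulse_area, pulse_amplitude, pulse_width

Peak-to-peak intervals follow from np.diff applied to the PPG peak times, exactly as for the R-peaks.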

Measures of Performance

Significance

The analysis invokes the following signal model for the measurement of sensor data from multiple sensors to investigate whether deception can be discerned through the observation of feature values:

  H0: r = s0 + n
  H1: r = s1 + n

Here, H0 is the hypothesis that the subject is completely truthful and H1 is the hypothesis that the subject is deceptive. The received signal, r, is a vector of features in R^N, where N is the number of features. The signal vectors for each hypothesis, s0 and s1, are the vectors of feature values generated under each hypothesis and were assumed to possess Gaussian distributions, although this assertion was not tested statistically. The additive noise, n, was assumed to be additive white Gaussian noise (AWGN) that is identically distributed under each hypothesis.

Data for the H0 distribution were gathered from the interview on residence, which was the first topic discussed in the interview. These data were gathered either from the entire topical interview or from immediate post-question periods. Data collected from the entire topical interview span from several seconds prior to the first question to 20 s after the last question, even though a brief orienting response to the first question of the interview is expected. Data collected from post-question intervals were taken from all questions except the first question of the topical interview, for 20 s beginning at the end of each question. Data for the H1 distribution were gathered from the deception interview on employment/religion in one of the two manners described above.

The metrics used to describe significance were the t-test and effect size. Two-tailed t-tests were used with an assumption of equal variance and an alpha value of 0.05. Cohen's d measure of effect size was used; the general trend of a feature was assessed by looking at the median effect size for that feature across a number of subjects:

  d = (μ1 - μ0) / √{[(n1 - 1)σ1² + (n0 - 1)σ0²] / (n1 + n0 - 2)}

Detection

Test statistics were calculated as the z-score shown below, where t is the test statistic, θ(x) is the test data point, μ0 is the mean of the H0 distribution, and σ0 is the standard deviation of the H0 distribution. In this way, test statistics from individual subjects were appropriately comparable:

  t = (θ(x) - μ0) / σ0
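A compact sketch of these two computations, assuming simple per-subject feature arrays (all names here are illustrative):

    import numpy as np

    def cohens_d(x1, x0):
        # Effect size: mean difference over the pooled standard deviation.
        n1, n0 = len(x1), len(x0)
        pooled_var = ((n1 - 1) * np.var(x1, ddof=1) +
                      (n0 - 1) * np.var(x0, ddof=1)) / (n1 + n0 - 2)
        return (np.mean(x1) - np.mean(x0)) / np.sqrt(pooled_var)

    def z_score_statistic(theta_x, x0):
        # t = (theta(x) - mu0) / sigma0: normalizing by the subject's own H0
        # background makes test statistics comparable across subjects.
        return (theta_x - np.mean(x0)) / np.std(x0, ddof=1)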

These test statistics were used to create ROC curves that encapsulate the three important quantities associated with any detection algorithm, indicating how well it is able to detect deception over an ensemble of subjects: Probability of Detection (PD), Probability of False Alarm (PFA), and Area Under the Curve (AUC). A z-score was used to compare each AUC to 0.5, the AUC of a curve generated by random guessing between classes [30]. A similar measure was used to compare two ROC curves and to judge whether their difference was statistically significant [31].
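As an illustration, an empirical ROC curve and its AUC can be built from the per-subject test statistics as follows (a sketch only; tie handling and interpolation details are simplified):

    import numpy as np

    def roc_points(scores, labels):
        # Sweep a threshold over the observed test statistics; labels are 1
        # for deceptive (H1) subjects and 0 for truthful (H0) subjects.
        order = np.argsort(scores)[::-1]
        y = np.asarray(labels)[order]
        pd = np.concatenate(([0.0], np.cumsum(y) / y.sum()))
        pfa = np.concatenate(([0.0], np.cumsum(1 - y) / (1 - y).sum()))
        return pfa, pd

    def area_under_curve(pfa, pd):
        # Trapezoidal AUC; 0.5 corresponds to random guessing between classes.
        return float(np.sum(np.diff(pfa) * (pd[1:] + pd[:-1]) / 2.0))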


Results

Statistical Analyses

Does the type of interview affect the interviewer's ability to detect deception?

Interviewer assessment accuracy was analyzed with binomial tests to ascertain whether accuracy was better than 50% (chance); analyses were performed separately for each interview type. Seventy-seven participants were included in the analyses; data from one participant were excluded because the person was deemed ineligible. The results are shown in Table 3. Interviewers were able to detect participant deception significantly better than chance when interviewing in the fostering style (n = 38, accuracy = 71%, p = 0.01), but not the forcing style (n = 39, accuracy = 62%, p = 0.20). (A sketch of this test follows Table 3.)

Table 3. Interviewer Accuracy at Deception Detection.

              Forcing        Fostering       All
  Accuracy    62% (24/39)    71%* (27/38)    66%* (51/77)
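For reference, the Table 3 results can be checked with a binomial test such as SciPy's binomtest (a sketch; the paper does not state which implementation or sidedness was used):

    from scipy.stats import binomtest

    # 27 of 38 fostering-style and 24 of 39 forcing-style assessments
    # were correct; chance performance is 50%.
    fostering = binomtest(27, n=38, p=0.5, alternative='two-sided')
    forcing = binomtest(24, n=39, p=0.5, alternative='two-sided')
    print(fostering.pvalue, forcing.pvalue)  # roughly 0.01 and 0.2, as reported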

Interviews were conducted by two different interviewers, and there were no significant differences in accuracy by interviewer, p > 0.05. Interviewer 2's accuracy in detecting lies about religion/employment was significantly better than chance (accuracy = 70%, p = 0.01). There was no significant difference in detecting deception between interviewers on the basis of interview style.

Participant anxiety was assessed for changes due to the interview. Participants were significantly more anxious during the interview (M = 35.69, SD = 11.22) than they were before (M = 30.22, SD = 8.73), t(77) = -4.63, p < 0.05. There were no significant differences in anxiety change scores by interview type (fostering M = -4.13, SD = 11.44; forcing M = -6.75, SD = 9.38) or lie condition (deceptive M = -6.83, SD = 11.43; honest M = -3.89, SD = 9.07).

Feature Analyses

Significance Testing

How well do the physiologic sensors predict deception when analyzed individually?

Test statistics generated using residence interview post-question interval data for background and the mean of the employment/religion interview post-question interval data were correlated with two dichotomous criteria:

• Interview Type (forcing coded 0, fostering coded 1).
• Deception State (deceptive coded 0, truthful coded 1).

The results can be seen in Table 4; a sketch of this correlation computation follows the table. Interbeat interval, peak-to-peak interval, pulse area, and pulse width were significantly correlated with deception state; features that did not have significant correlations are not listed in the table. Pulse amplitude showed a positive correlation with interview type but not with deception state.

Detector ROC Curves

Deception detectors were built from distributions garnered from each interval option for each feature. The AUCs for detectors that performed significantly better than chance at the α = 0.05 level are reported in Table 5, along with the sample size for each ROC curve. LifeShirt data were not recoverable for 2 of the 77 subjects, lowering the sample size for features from the LifeShirt sensors. Further, one subject had poor-quality ECG data during a portion of one of the interviews; this prevented analysis of that interview as a whole without disrupting the post-question analysis for the interbeat interval feature.

Table 4. Deception Post-Question Test Statistic Correlations with Interview Type and Participant Deception State. Positive correlation indicates that participants in the given state have smaller test statistics. *p < 0.05 **p < 0.01

  Feature                       Interview Type   Deception State
  Interbeat Interval, ECG       ---              0.289*
  Peak-to-Peak Interval, PPG    ---              0.243*
  Pulse Amplitude               0.233*           ---
  Pulse Area                    ---              0.324**
  Pulse Width                   ---              0.225*
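The correlations in Table 4 pair a continuous test statistic with a dichotomous criterion, i.e., point-biserial correlations. A sketch using SciPy follows; the variable contents are fabricated placeholders, not the study's data:

    import numpy as np
    from scipy.stats import pointbiserialr

    rng = np.random.default_rng(0)
    deception_state = rng.integers(0, 2, size=77)  # deceptive = 0, truthful = 1
    test_statistic = rng.normal(size=77) + 0.5 * deception_state

    r, p_value = pointbiserialr(deception_state, test_statistic)
    print(r, p_value)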

Table 5. Significant Deception Detector Results for Each of 8 Features Measured with Both Intervals. Detectors are statistically different from chance at the α = 0.05 level. Nonsignificant detectors are not shown. All test statistics were generated with a mean test value.

  Feature                                  Post-Question     Entire Topical Interview
  Interbeat Interval, ECG                  0.703 (N = 75)    0.794 (N = 74)
  Right Pupil Diameter                     ---               0.720 (N = 77)
  Left Pupil Diameter                      ---               0.691 (N = 77)
  Respiratory Inhale/Exhale (I/E) Ratio    0.647 (N = 75)    ---
  Respiratory Cycle Time                   ---               0.304 (N = 75)
  Pulse Area                               0.693 (N = 77)    0.778 (N = 77)
  Pulse Width                              0.663 (N = 77)    0.697 (N = 77)
  Peak-to-Peak Interval, PPG               0.673 (N = 77)    0.762 (N = 77)

In all cases where a ROC curve was significant for both entire topical interview intervals and post-question intervals, the AUC for the entire topical interview-generated ROC curve was higher. The two best performing detectors for both interval options were from the interbeat interval and peak-to-peak interval features; for the entire topical interview data interval, both produced curves with AUCs higher than 0.75. For both interval options, both of these curves performed significantly better than chance, but they were not significantly different from each other.

Does the type of interview influence the accuracy of physiologic sensors in detecting deception?

Correlations between deception post-question interval test statistics and deception state were computed. Three features had positive correlations with deception state under the forcing interview style: interbeat interval (0.355, p < 0.05), pulse area (0.401, p < 0.05), and peak-to-peak interval (0.410, p < 0.01).


Only under the forcing interview style did feature values from the deception interview on employment/religion correlate with the deception state.

Deception detectors were generated for all features using data from the entire topical interview. The detectors for all features are compared for the two interview styles via their areas under the curve in Figure 1. The maximum possible area under the curve is one, and a detector that performs at chance will have an area of 0.5. The pulse-based features (interbeat interval, pulse width, peak-to-peak interval, and pulse area) as well as the pupil diameter features performed best in the forcing interview style, all with areas above 0.7, and all of these ROC curves were significantly better than chance. In the fostering interview style, only the pulse area feature performed significantly better than chance.

Table 6. Forcing Interview Style Detector Results Summary. Detectors were generated for both conditions and both interval types. For each combination, the features producing significant detectors are reported with the AUC. All detectors were built with test statistics garnered from mean test values.

  Condition   Interval                   Features Producing Significant Detectors (AUC)
  Deception   Post-question              Interbeat Interval, ECG (0.730); Respiratory I/E Ratio (0.790); Pulse Area (0.775); Pulse Width (0.775); Peak-to-Peak Interval (0.785)
  Deception   Entire topical interview   Interbeat Interval, ECG (0.863); Right Pupil Diameter (0.720); Respiratory Cycle Time (0.250); Pulse Area (0.846); Pulse Width (0.751); Peak-to-Peak Interval (0.855)

Figure 1. Area under the curve for detector families: AUC for ROC curves generated from entire topical interview intervals for deception detection, comparing results for the forcing and fostering interview styles across all 14 features.

Significant detectors for the forcing interview style are summarized in Table 6. Detectors that performed significantly better than chance are listed along with their AUC for each condition, interval type, and feature. The detector with the highest area is the interbeat interval deception detector operating on the entire topical interview distributions, with an area of 0.863. For both interval types, interbeat interval, pulse area, pulse width, and peak-to-peak interval yield significant deception detectors. Only the pulse area feature yielded a significant detector (AUC 0.649) for the fostering interview style, and only when entire topical interview intervals were used for deception detection.

The best performing feature across all combinations of interval choice and interview style was interbeat interval, and its deception detection performance was enhanced when data only from the forcing style interview were used. In Figure 2, the entire topical interview interval detector for interbeat interval from forcing interview participants is shown alongside the comparable detector from the fostering interview participants. The forcing interview style detector performed significantly better than chance; the detector from the fostering participants was not statistically better than chance. (A sketch of these AUC tests follows Figure 2.)

Figure 2. ROC curves (probability of detection vs. probability of false alarm) comparing interview styles for entire topical interview interval data for interbeat interval, ECG: forcing (AUC 0.863) and fostering (AUC 0.658). The fostering detector was not significantly better than chance.
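The chance comparisons above can be reproduced with the Hanley-McNeil standard error [30]; comparing the forcing and fostering curves, which come from disjoint participant groups, then reduces to a z-test on two independent AUCs. A sketch follows (function names and the independence treatment are our assumptions; group sizes would come from Table 1):

    import math

    def hanley_mcneil_se(auc, n1, n0):
        # Standard error of an AUC from n1 deceptive and n0 truthful subjects [30].
        q1 = auc / (2.0 - auc)
        q2 = 2.0 * auc ** 2 / (1.0 + auc)
        var = (auc * (1.0 - auc) + (n1 - 1) * (q1 - auc ** 2)
               + (n0 - 1) * (q2 - auc ** 2)) / (n1 * n0)
        return math.sqrt(var)

    def z_vs_chance(auc, n1, n0):
        # z-score for comparing an AUC to 0.5 (random guessing).
        return (auc - 0.5) / hanley_mcneil_se(auc, n1, n0)

    def z_between_groups(auc_a, n1_a, n0_a, auc_b, n1_b, n0_b):
        # Two AUCs from independent groups (e.g., forcing vs. fostering).
        se_a = hanley_mcneil_se(auc_a, n1_a, n0_a)
        se_b = hanley_mcneil_se(auc_b, n1_b, n0_b)
        return (auc_a - auc_b) / math.sqrt(se_a ** 2 + se_b ** 2)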


Discussion

We have demonstrated the promise of certain sensor signals, features, and their analysis parameters in detecting lies. Further, we have demonstrated the effects of interview style on the detection of deception in an interview recorded along with physiological signals.

Sensor signal analysis results included higher successful classification and more significant results among participants who underwent the forcing interview style than the fostering interview style, and this was apparent through several different analysis approaches. While five features produced significant detectors for the forcing interview style subset, only one feature had a significant detector for the fostering subset. The highest area under the curve was 0.863, produced by the interbeat interval feature calculated from the ECG signal of forcing interview participants. When deception test statistics were correlated with deception state for participants who underwent each interview style, significant correlations occurred only in the subset that underwent the forcing interview style.

The features that consistently performed better than chance included interbeat interval, pulse area, pulse width, peak-to-peak interval, left pupil diameter, and right pupil diameter. This feature group comprises cardiac-related features measured from both the ECG and PPG signals as well as pupil diameter features. Significant detectors were also produced by the respiratory I/E ratio and respiratory cycle time features; these will not be discussed further, as they were not significant in the correlation analysis, nor did they show a moderate or large effect size. The cardiac and pupil features are discussed below.

A number of cardiac features showed significant correlations with deception state, had moderate-to-high effect size differences, or generated significant detectors. Interbeat interval and peak-to-peak interval decreases were significant as measured by effect size on both short (20-s) and long (entire 5-min interview) time scales. These features were also significantly correlated with deception state, and they produced the detectors with the highest areas for both data interval definitions. Although the differences between the two features were not significant, interbeat interval had consistently higher correlations, effect sizes, and areas under the curve.

The strong results shown by the interbeat interval and peak-to-peak interval features indicate that deception can be measured effectively by an increase in heart rate. This is in contrast to previous studies that have found a heart rate decrease when participants are deceptive, and there has been some debate regarding the heart rate (HR) response to deception. Some have found HR responses to deception to be biphasic [8], [32]: an initial increase in HR for the first 4 s following question onset, followed by a decrease until approximately the seventh post-stimulus second, after which the HR returns to baseline. Others have found HR deceleration to be an indicator of deception [33], [34]. There has also been discussion in the literature as to the nature of the HR response to different types of deception tests [5], [32], [10]. The authors have noted that the direct and often accusatory questions that comprise the comparison question test may produce defensive responses, evidenced in part by HR acceleration, whereas the stimuli used in a guilty knowledge test may produce orienting responses, evidenced in part by HR deceleration. The HR increase found in the present study could be indicative of a defensive response, since the test format is similar to that of the comparison question test. A change in time window does not explain the discrepancy between the HR decrease reported in the literature and the HR increase measured in this study; the responses recorded here also do not follow the biphasic theory, as a HR increase was observed both in the 4-s window immediately after question onset and in the window from 6 to 12 s after question onset.

Other cardiac measures also showed promise. Both pulse area and pulse width had significant correlations with deception state as well as significant detectors. Pulse amplitude, however, was a less promising feature. The pulse amplitude feature is normalized with respect to the baseline signal value, while the pulse area feature was not implemented in this way; this may be why pulse amplitude was not a significant predictor of deception while pulse area was. A decreasing DC signal component could cause an erroneously significant result for the pulse area feature; this possibility should be avoided in future implementations by eliminating the DC component of the signal or by subtracting a baseline or valley value from each measure of pulse area.

Pupil diameter measures in the left and right eye did not show significant correlations with deception state, but they exhibited moderate mean effect size differences, and on the entire topical interview interval, they generated detectors that performed better than chance. Detectors generated from entire topical interview background intervals and tested with data from post-question intervals were not significant. Significance of the entire topical interview interval detectors may indicate that participants' pupil diameter was more likely to increase in response to a question that required an answer more detailed than 'yes' or 'no'; the post-question interval data distributions are drawn from select yes/no questions in the experiment.

Two data intervals were considered here, and there were several instances in which the entire topical interview data intervals were more informative than the 20-s post-question intervals (for an example, see Table 5). The benefits of using entire topical interview intervals for data collection were not observed when the background distribution was gathered from the entire topical interview but the test data came only from post-question intervals. This suggests that, because there is more dialogue involved in an interview as compared with a more traditional detection of deception test in which each question is answered with a simple yes/no, there may be more physiological activity and more information that can be extracted from an entire topical interview as opposed to a 20-s post-question interval. This is the case even when those questions are quite to the point of the matter at hand (e.g., "Are you being truthful when you tell me that you work as a retail salesperson?").

Study design may also have impacted the utility of these data intervals.


Although the wording of the questions asked during the interviews was similar to that used in a traditional deception detection test, the test structure was different. In one type of deception detection test, the comparison question test, each relevant question is preceded by a comparison question, and decisions regarding veracity are made by comparing responses to the two question types [35]. In the present study, the questions used for comparison were asked early in the interview, while the deception questions were presented toward the middle of the interview. It may be difficult to see a change in post-question autonomic responding between questions that are presented far apart during the interview.

Conclusions and Future Work

Interview style impacts interviewer assessment accuracy. Interviewer accuracy at detecting deception was better than chance in the fostering interview style; rapport developed during a fostering interview may facilitate the interviewer's ability to detect deceit. In the forcing interview style, interviewer assessment accuracy was not statistically different from chance.

A forcing style interview amplifies physiological signals indicative of deception. When the forcing interview style was used, sensor signals yielded detectors that operated significantly better than chance; this showed an advantage over interviewer assessment accuracy, which was not better than chance. Physiologic information elicited during topical interviews may be more indicative of deception than physiologic information gathered from periods of structured yes/no questions, although the sources of physiologic changes may be more difficult to identify. This trade-off should be a topic of further study.

Heart rate and other pulse-based features show good capability in deception detection. Our results indicate a need for better understanding of the orienting and defensive responses and when to expect each. With regard to pupil diameter, our results are suggestive, but not as strong as the evidence that others have shown. Interview-based deception detection techniques should be pursued further in cases where deception detection with physiological sensors is desired, and the placement of comparison questions should be reevaluated.

Draper Laboratory continues to expand its facilities, resources, and expertise to pursue important challenges in this area, including unstructured interview analysis, remote sensing of physiology, and contextual factors in educing information.

References

[1] Educing Information: Interrogation: Science and Art: Foundations for the Future: Phase 1 Report, Intelligence Science Board, National Defense Intelligence College, Washington, DC, 2006.

[2] Allen, J., "Photoplethysmography and Its Application in Clinical Physiological Measurement," Physiological Measurement, 2007, pp. R1-R39.

[3] Bell, B.G., D.C. Raskin, C.R. Honts, J.C. Kircher, "The Utah Numerical Scoring System," Polygraph, Vol. 28, 1999, pp. 1-9.

[4] Dionisio, D.P., E. Granholm, W.A. Hillix, W.F. Perrine, "Differentiation of Deception Using Pupillary Responses as an Index of Cognitive Processing," Psychophysiology, 2001, pp. 205-211.

[5] Elaad, E. and G. Ben-Shakhar, "Finger Pulse Waveform Length in the Detection of Concealed Information," International Journal of Psychophysiology, 2006, pp. 226-234.

[6] Handler, M. and D.J. Krapohl, "The Use and Benefits of the Photoelectric Plethysmograph in Polygraph Testing," Polygraph, 2007, pp. 18-25.

[7] Kircher, J.C., S.D. Kristjansson, M.K. Gardner, A.K. Webb, Human and Computer Decision-Making in the Psychophysiological Detection of Deception, University of Utah, Salt Lake City, 2005.

[8] Podlesny, J.A. and D.C. Raskin, "Effectiveness of Techniques and Physiological Measures in the Detection of Deception," Psychophysiology, Vol. 15, 1978, pp. 344-358.

[9] Siegle, G.J., N. Ichikawa, S. Steinhauer, "Blink Before and After You Think: Blinks Occur Prior to and Following Cognitive Load Indexed by Pupillary Responses," Psychophysiology, 2008, pp. 679-687.

[10] Verschuere, B., G. Crombez, A. De Clercq, E.H.W. Koster, "Autonomic and Behavioral Responding to Concealed Information: Differentiating Orienting and Defensive Responses," Psychophysiology, Vol. 41, 2004, pp. 461-466.

[11] Granhag, P.A. and L.A. Strömwall, The Detection of Deception in Forensic Contexts, Cambridge University Press, Cambridge, UK, 2004.

[12] Vrij, A., Detecting Lies and Deceit: The Psychology of Lying and the Implications for Professional Practice, Wiley, Chichester, England, 2000.

[13] DePaulo, B.M., J.L. Lindsay, B.E. Malone, L. Muhlenbruck, K. Charlton, H. Cooper, "Cues to Deception," Psychological Bulletin, Vol. 129, 2003, pp. 74-118.

[14] DePaulo, B.M. and W.L. Morris, "Discerning Lies from Truth: Behavioural Cues to Deception and the Indirect Pathway of Intuition," The Detection of Deception in Forensic Contexts, P.A. Granhag and L.A. Strömwall, eds., Cambridge University Press, Cambridge, UK, 2004.

[15] Köhnken, G., "Statement Validity Analysis and the Detection of the Truth," The Detection of Deception in Forensic Contexts, P.A. Granhag and L.A. Strömwall, eds., Cambridge University Press, Cambridge, UK, 2004.

[16] Vrij, A., "Criteria-Based Content Analysis: A Qualitative Review of the First 37 Studies," Psychology, Public Policy, and Law, Vol. 11, 2005, pp. 3-41.

[17] Ben-Shakhar, G. and E. Elaad, "The Validity of Psychophysiological Detection of Information with the Guilty Knowledge Test: A Meta-Analytic Review," Journal of Applied Psychology, Vol. 88, 2003, pp. 131-151.

[18] Honts, C.R., "The Psychophysiological Detection of Deception," The Detection of Deception in Forensic Contexts, P.A. Granhag and L.A. Strömwall, eds., Cambridge University Press, Cambridge, UK, 2004.

[19] Bull, R., "Training to Detect Deception from Behavioural Cues: Attempts and Problems," The Detection of Deception in Forensic Contexts, P.A. Granhag and L.A. Strömwall, eds., Cambridge University Press, Cambridge, UK, 2004.

[20] Frank, M.G. and T.H. Feeley, "To Catch a Liar: Challenges for Research in Lie Detection Training," Journal of Applied Communication Research, Vol. 31, 2003, pp. 58-75.

[21] Vrij, A., "Guidelines to Catch a Liar," The Detection of Deception in Forensic Contexts, P.A. Granhag and L.A. Strömwall, eds., Cambridge University Press, Cambridge, UK, 2004.

[22] Vendemia, J.M., M.J. Schilliaci, R.F. Buzan, E.P. Green, S.W. Meek, "Credibility Assessment: Psychophysiology and Policy in the Detection of Deception," American Journal of Forensic Psychology, Vol. 24, 2006, pp. 53-85.

[23] U.S. Army Intelligence and Interrogation Handbook: The Official Guide on Prisoner Interrogation, Department of the Army, The Lyons Press, Guilford, CT, 2005.

[24] KUBARK Counterintelligence Interrogation, Central Intelligence Agency, Washington, DC, 1963.

[25] Gordon, N.J. and W.L. Fleisher, eds., Effective Interviewing and Interrogation Techniques, 2nd ed., Academic Press, Burlington, MA, 2006.

[26] Colwell, K., C.K. Hiscock, A. Memon, "Interviewing Techniques and the Assessment of Statement Credibility," Applied Cognitive Psychology, Vol. 16, 2002, pp. 287-300.

[27] Colwell, K., C. Hiscock-Anisman, A. Memon, A. Rachel, L. Colwell, "Vividness and Spontaneity of Statement Detail Characteristics as Predictors of Witness Credibility," American Journal of Forensic Psychology, Vol. 25, 2007, pp. 5-30.

[28] Pan, J. and W.J. Tompkins, "A Real-Time QRS Detection Algorithm," IEEE Transactions on Biomedical Engineering, Vol. 32, No. 3, 1985, pp. 230-236.

[29] Schell, C., S.P. Linder, J.R. Zeider, "Tracking Highly Maneuverable Targets with Unknown Behavior," Proceedings of the IEEE, Vol. 92, No. 3, 2004, pp. 558-574.

[30] Hanley, J.A. and B.J. McNeil, "The Meaning and Use of the Area Under a Receiver Operating Characteristic (ROC) Curve," Radiology, 1982, pp. 29-36.

[31] Hanley, J.A. and B.J. McNeil, "A Method of Comparing the Areas Under Receiver Operating Characteristic Curves Derived from the Same Cases," Radiology, 1983, pp. 839-843.

[32] Raskin, D.C., "Orienting and Defensive Reflexes in the Detection of Deception," The Orienting Reflex in Humans, H.D. Kimmel, E.H. van Olst, and J.F. Orlebeke, eds., Erlbaum Associates, Hillsdale, NJ, 1979, pp. 587-605.

[33] Patrick, C.J. and W.G. Iacono, "A Comparison of Field and Laboratory Polygraphs in the Detection of Deception," Psychophysiology, Vol. 28, 1991, pp. 632-638.

[34] Podlesny, J.A. and C.M. Truslow, "Validity of an Expanded-Issue (Modified General Question) Polygraph Technique in a Simulated Distributed-Crime-Roles Context," Journal of Applied Psychology, Vol. 78, 1993, pp. 788-797.

[35] Raskin, D.C. and C.R. Honts, "The Comparison Question Test," Handbook of Polygraph Testing, M. Kleiner, ed., Academic Press, San Diego, CA, 2002, pp. 1-47.


Meredith G. Cunha is a Member of the Technical Staff in the Fusion, Exploitation, and Inference Technologies group. She has experience with data analysis and pattern classification of hyperspectral, biochemical sensor, and physiological data, and her recent work is in physiological and psychophysiological signal processing. Mrs. Cunha received the Bachelor of Science and Master of Engineering degrees from the Electrical Engineering and Computer Science Department at MIT.

Alissa C. Clarke is a research consultant at MRAC LLC. She has worked in collaboration with Draper Laboratory on several studies in the area of deception detection, supporting the development of research protocols, coordinating the recruitment and testing of participants, and aiding in the preparation of final reports. Ms. Clarke received an A.B. in Psychology and Health Policy from Harvard University.

Jennifer Z. Martin is an Advisor and Senior Research Scientist with MRAC. She is currently involved with several research projects, including work on intelligence interviewing and cues to deception or malintent (the intent or plan to cause harm). She is an author of the Theory of Malintent, which drives the Department of Homeland Security (DHS) Future Attribute Screening Technologies (FAST) program, and helped devise the malintent research paradigm. Prior to her work with MRAC, she excelled in both corporate and academic settings. Dr. Martin received a Ph.D. in Experimental Social Psychology from Ohio University.

Jason R. Beauregard is a Research Associate with MRAC. Since joining the firm, he has managed several research projects spanning a variety of topics and utilizing unique protocols. His responsibilities include supervising protocol planning, development, implementation, and operation of human subject testing. Prior to his employment with MRAC, he served as a Case Manager, Intervention Specialist, and Assistant Program Director of a Court Support Services Division (CSSD)-sponsored diversionary program in the state of Connecticut. Mr. Beauregard received a B.A. in Psychology from the University of Connecticut.

Andrea K. Webb is a Psychophysiologist at Draper Laboratory. She has an extensive background in psychophysiology, eye-tracking, deception detection, quantitative methods, and experimental design. Her work at Draper has focused on security screening, interviewing, autonomic specificity, and post-traumatic stress disorder (PTSD). She is currently Principal Investigator for a study examining autonomic responses in people with PTSD and is the data analysis lead for a project funded by DHS. Dr. Webb earned a B.S. in Psychology from Boise State University and M.S. and Ph.D. degrees in Educational Psychology from the University of Utah.


Asher A. Hensley is a Radar Systems Engineer at the Telephonics Corporation. His background includes work in sea clutter modeling, detection, antenna blockage processing, and tracking. His primary research interests are in machine learning and computer vision. Mr. Hensley received a B.Sc. in Electrical Engineering from Northeastern University and is currently pursuing a Ph.D. in Electrical Engineering at SUNY Stony Brook.

Nirmal Keshava is the Group Leader for the Fusion, Exploitation, and Inference Technologies group at Draper Laboratory. His interests include the development of statistical signal processing techniques for the analysis of physiological and neuroimaging measurements, as well as the fusion of heterogeneous data in decision algorithms. He received a B.S. in Electrical Engineering from UCLA, an M.S. in Electrical and Computer Engineering from Purdue University, and a Ph.D. in Electrical and Computer Engineering from Carnegie Mellon University.

Daniel J. Martin, Ph.D., ABPP, is the Director of MRAC LLC, a research and consulting firm that specializes in bridging the gap between empirical knowledge and corporate or government applications. He is also the Director of Research for the DHS’s FAST program and serves as experimental lead on several research studies investigating security screening and interviewing. His research interests include the effectiveness of different interviewing methodologies in eliciting information and the psychological and physiological cues to deception and malintent. Dr. Martin joined the faculty at Yale University in 1999. There he conducted several multisite clinical trials in substance abuse treatment. He also has over 10 years of experience training hundreds of individuals in motivational interviewing with resistant populations. Dr. Martin is board certified in Clinical Psychology by the American Board of Professional Psychology.


(Chart: planner complexity vs. frequency of operator interaction, with environment uncertainty increasing from teleoperated systems through aircraft autopilots and active military robots to the Urban Challenge and human-level autonomy; operator interaction decreases toward once per mission.)

Remotely operated robotic systems have demonstrated life-saving utility during U.S. military operations, but the Department of Defense (DoD) has also seen the limitations of ground and aerial robotic systems that require many people for operations and maintenance. Over time, the DoD envisions more capable robotic systems that autonomously execute complex missions with far less human interaction. To enable this transition, the DoD needs to clearly understand the trade-offs that must be made when choosing to develop an autonomous system: there are many circumstances where actions that are straightforward for a manned system to accomplish are enormously difficult, and therefore costly, for machine systems to handle. This developmental paper addresses the need to define understandable performance requirements and the implications of those requirements for the system design. Instead of attempting to specify a "level" of "autonomy" or overall "intelligence," the authors propose a starting set of quantifiable, testable requirements that can be applied to any autonomous robotic system, ranging from the dynamics of the operating environment to the overall expected assertiveness of the system when faced with uncertain conditions. We believe a solid understanding of these expectations will not only benefit system development but also be a key component of building trust between humans and robotic systems.


Requirements-Driven Autonomous System Test Design: Building Trusting Relationships

Troy B. Jones and Mitch G. Leammukda

Copyright © 2010 by the International Test and Evaluation Association (ITEA). Presented at the 15th Annual Live-Virtual-Constructive Conference, El Paso, TX, January 11-14, 2010. Sponsored by: ITEA

Abstract

Formal testing of autonomous systems is an evolving practice. For these systems to transition from operating in restricted (or completely isolated) environments to truly collaborative operations alongside humans, new test methods and metrics are required to build trust between the operators and their new partners. There are currently no general standards for performance and safety testing of autonomous systems. However, we propose that there are several critical system-level requirements for an autonomous system that can efficiently direct the test design toward potential system weaknesses: environment uncertainty, frequency of operator interaction, and level of assertiveness. We believe that by understanding the effects of these system requirements, test engineers (and systems engineers) will be better poised to develop validation and verification plans that expose unexpected system behaviors early, ensure a quantifiable level of safety, and ultimately build trust with collaborating humans. To relate these concepts to physical systems, examples are drawn from experiences with the Defense Advanced Research Projects Agency (DARPA) Urban Challenge autonomous vehicle race in 2007 and other relevant systems.

Introduction

The adoption of autonomous systems in well-defined and/or controlled operational environments is common; commercial and military aircraft routinely rely on advanced autopilot systems for the majority of flight duties, manufacturing operations around the world employ vast robotic systems, and even individuals rely on increasingly "active" safety systems in automobiles to reduce injuries from collisions. On the surface, based on these trends, adding levels of autonomy to these existing systems and deploying new, even more helpful systems seems not only inevitable but a straightforward extension of existing development, testing, and deployment methods. However, until fundamental changes in social, legal, and engineering practice are made, the amazing autonomous system advances being demonstrated at universities and research laboratories will remain educational experiments. We see at least three challenges:

1. People must trust an autonomous system in situations when it may harm them: Arguably, people already trust complex autonomous systems under circumstances such as the aircraft autopilot, but passengers know that a human is supervising that system constantly.

2. Legal ramifications of injuries or deaths resulting from the actions of autonomous systems must be clearly defined: When an autonomous system causes a death (which certainly will happen), what party is held liable for that injury or death?

3. There must be well-defined standards to test that the autonomous system operates in the required environments with "acceptable" performance: Defining what is or is not "acceptable" performance for an autonomous system ties directly into how well people will ultimately trust that system and will ease the definition of fair legal responsibilities.

In this paper, we examine perhaps the easiest of these topics: proposed methods for specifying and ultimately testing the performance of autonomous systems. As engineers, we are responsible for supplying the supporting evidence to the customer that new autonomous systems will meet expectations of performance and safety. Draper Laboratory has worked in autonomous system evaluation for many years [1]. This paper describes a new approach to autonomous system requirements development and test design based largely on experiences gained during our participation in the DARPA Urban Challenge autonomous vehicle race held in 2007. These concepts are still in development and will be refined as we evaluate more systems and collaborate with other members of the engineering community.

Autonomous System Characteristics

There are as many definitions of "autonomous system" as there are papers that define the term. Instead of creating yet another incomplete definition, we propose that there is a common set of traits that can be specified for any automated/autonomous/robotic/intelligent system. These traits help establish what performance is expected of the system (thereby providing a basis for system-level requirements) and effectively point out the most critical areas for test and evaluation.


These characteristics are intentionally structured in easily comprehended terms, with the goal of improving how operators and observers understand the actions of an autonomous system. The following sections explain these characteristics and include examples of how they drive autonomous system requirements and testing. Unfortunately, these characteristics are highly coupled, and not necessarily in a linear fashion; understanding these interrelationships is a key area of ongoing work.

Environment Uncertainty

We live in an uncertain world: our perceptual abilities (visual, auditory, olfactory, touch) are constantly (and unconsciously) at work keeping us informed about changes in our environment. When designing an autonomous system to function in this uncertain world, we need to carefully understand the environment in which we expect the system to operate. We propose that environmental uncertainty can be adequately classified by answering the question: "What is the reaction time we expect from the system to detect and avoid collisions with objects in the environment?"

Above all other characteristics, environmental uncertainty is the primary driver for how much perceptual ability an autonomous system requires to do its job. How well does the system need to "see" the environment in order to react to potential hazards and accomplish its mission? This discussion of environment uncertainty is restricted to visual types of perception, but we believe autonomous systems will eventually need to take advantage of other "senses" to meet our (human) expectations of performance.

Perception Coverage

We define perception coverage as the percentage of the spherical volume around a system that is pierced by a perceptual sensing system. For an easy-to-understand example, we begin by estimating the perception coverage of the human visual system.

Human Visual Perception Coverage

Since we desire a nondimensional metric, we choose an arbitrary radius, in this case 100 m, for the spherical volume and project how much of the volume is seen by human eyes. This graphical construction is shown in Figure 1 and indicates that human vision in a given instant of time can perceive about 40% of the volume around the head. Of course, we can rapidly scan our environment by rotating our heads and bodies, thus providing a complete visual scan in seconds.

What does this mean with regard to environmental uncertainty? Humans are very adept at operating in highly uncertain conditions and do so with a high degree of success. Therefore, we propose that the human instantaneous perceptual coverage (visual in this case) is an intuitive upper bound on the same metric for an autonomous system. Having this large amount of perceptual input at all times gives us excellent awareness of changes in our environment. It has been shown [2] that humans see, recognize, and react to a visual stimulus within 400-600 ms. This range is therefore a practical lower limit on how quickly we should expect an autonomous system to react to changes in the environment.

Figure 1. Human visual perceptual coverage, approximately 40%.
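The coverage metric can be checked numerically by treating the field of view as a solid angle and dividing by the full sphere. A minimal Python sketch follows; the 180 x 108-deg instantaneous binocular field is an illustrative assumption chosen to reproduce the roughly 40% figure from the graphical construction, not a value taken from the paper.

```python
import math

def fov_sphere_fraction(az_deg, el_deg):
    """Fraction of the full sphere (solid angle / 4*pi) subtended by a
    rectangular field of view az_deg wide by el_deg tall, centered on
    the horizon."""
    omega = math.radians(az_deg) * 2.0 * math.sin(math.radians(el_deg / 2.0))
    return omega / (4.0 * math.pi)

# Assumed ~180 x 108 deg instantaneous binocular field.
print("human visual coverage: %.0f%%" % (100 * fov_sphere_fraction(180, 108)))
```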

Autonomous System Perception Coverage

We now select an example of an autonomous system that operates in a high uncertainty environment, the MIT Urban Challenge LR3 autonomous vehicle, Talos, shown in Figure 2, and estimate the same metric. This system completed approximately 60 mi of completely autonomous driving in a low-speed race with human-driven and other autonomous vehicle traffic. To do this, Talos has a myriad of perceptual sensor inputs [3]:

• 1 x Velodyne HDL-64 360-deg 3D scanning LIDAR.
• 12 x SICK planar scanning LIDAR.
• 5 x Point Grey Firefly MV cameras.
• 15 x Delphi automotive cruise control radars.

Figure 2. MIT Urban Challenge vehicle Talos. (Sensor callouts: Velodyne HDL; pushbroom SICK LIDAR (5); skirt SICK LIDAR (7); ACC RADAR (15); cameras (6).)

The most useful of these perceptual inputs for detecting obstacles and vehicle tracking [3] is the Velodyne HDL-64. It contains an array of 64 lasers mounted in a head unit that covers a 26-deg vertical range [4]. Motors spin the entire head assembly at 15 revolutions per second, generating approximately 66,000 range samples per revolution, or about 1 million samples per second of operation. Each full revolution of the Velodyne returned a complete 3D point cloud of distances to objects all around the vehicle, and it was by far the most popular single sensor to have in the Urban Challenge (if your team could afford it). A single sensor that returns continuous data around the entire vehicle eliminates the need to construct a 3D environment model out of successive line scans from multiple planar LIDAR (such as the SICK units), which is a computationally intensive and error-prone process.

Performing the same calculation of perception coverage for a Velodyne HDL-64 LIDAR involves representing the geometry of the sensing beams. For the human vision system, we assumed the resolution of the image data is practically infinite, but the LIDAR is restricted to 64 discrete beams of distance measurement that are swept around a center axis. To perform the calculation, we assumed that each beam has a nominal diameter of 1/8 in and does not diffract, and that each revolution of a beam forms a continuous disk of range data, when in fact each revolution is a series of laser pulses.

Based on those (generous) input assumptions, we created a graphical construction of perception coverage for the Velodyne, which is shown in Figure 3. We discovered that a single scan is approximately 0.1% coverage, that is, 400 times less than a single instant of human visual information. Despite the large disparity with human ability, the Velodyne proved to be an adequate primary sensor in the context of the Urban Challenge. Talos had several methods to detect and avoid collisions with objects that reduced its effective reaction time [3]. However, for this example, we limit the reaction time estimate based on the rules of the DARPA Urban Challenge, which placed a 30-mph speed limit on all vehicles [5]. If we consider the case of two vehicles in opposing lanes of travel, we have a maximum closing speed of 60 mph (27 m/s). Talos used approximately 60 m of the Velodyne's 100+ m range [4] for perception, and therefore would have just over 2 s in which to react to an oncoming vehicle in the wrong lane.

Clearly, there is a relationship between the operating environment of a system and the perceptual capabilities needed to operate in that environment. We illustrate this relationship using the two examples given and the addition of a third point: we assume that in order to react to uncertainty instantly, a system would need 100% perceptual coverage (and the ability to process and decide actions instantly). These data points and a qualitative relationship between them are shown in Figure 4. It is logical that decreasing the uncertainty in the environment should reduce the need for perception, and the converse is also true. While not the final answer to how to specify requirements for an autonomous system perception system, we believe it is a start that leads to metrics for testing the perception coverage of the system. In fact, it is very compelling that vehicles in the Urban Challenge were able to safely complete the race with so little perceptual information overall.
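The same construction can be reproduced numerically. The sketch below models each of the 64 beam sweeps as a thin continuous band on the 100-m sphere, per the assumptions above (1/8-in beam diameter, no diffraction), and repeats the closing-speed arithmetic from the text; it is an illustration of the paper's estimate, not the original computation.

```python
import math

def swept_beam_sphere_fraction(n_beams, beam_diam_m, radius_m):
    """Fraction of a sphere of the given radius pierced by n thin laser
    beams swept 360 deg about the vertical axis. Each sweep is modeled
    as a continuous band whose angular thickness is the beam diameter
    divided by the radius (small angles, elevations near the horizon)."""
    band_sr = 2.0 * math.pi * (beam_diam_m / radius_m)  # one band, in steradians
    return n_beams * band_sr / (4.0 * math.pi)

# 64 beams, 1/8-in (3.175-mm) assumed diameter, on the 100-m sphere.
print("Velodyne coverage: %.2f%%"
      % (100 * swept_beam_sphere_fraction(64, 0.003175, 100.0)))
# ~0.10%, i.e., roughly 400x less than the ~40% human figure.

# Reaction-time bound from the text: 60-mph (27-m/s) closing speed over
# the ~60 m of Velodyne range actually used for perception.
print("time to react: %.1f s" % (60.0 / 27.0))  # just over 2 s
```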

Figure 3. Perception coverage for Velodyne HDL-64 LIDAR system, ~0.1%.


Figure 4. Variation in perception coverage as driven by the environment uncertainty.

Environmental Uncertainty Test Concepts

Clear requirements on perception coverage and potential time-to-collision will bound the test design for environmental stimuli. It is up to the test engineering team to design experiments that validate the ability of the system to meet performance goals and also stress the system to find potential weaknesses. If a system is designed to operate in a low uncertainty environment all the time (e.g., on a factory floor welding metal components), the perception-related tests required are limited to proper detection of the work piece and the welded connections. If an operator enters a work zone of the system as defined by a rigid barrier, the system must shut down immediately [6].


On the other extreme, an autonomous system operating in a dynamic environment, be it on the ground or in the air, needs its perceptual systems stressed early and as often as possible. As we discovered during the DARPA Urban Challenge, changing the operating environment of the system always revealed flaws in the assumptions made during the development of various algorithms. Perception systems should be tested thoroughly against many kinds of surfaces moving at speeds appropriate to the environment, as both surface properties and velocity will impact the accuracy of the detection and classification of objects [3]. In addition, test cases for tracking small objects, if applicable, can be very challenging due to gaps in coverage and latency of the measurements.

Frequency of Operator Interaction

When developing any system, it is critical to understand how the users interact with it. This information can be captured in "Concept of Operations" documents that specify when users expect to input information into or get information out of a system. This same concept must be applied to autonomous systems with a slight shift in implications. When developing an autonomous system, we need to establish expectations for how much help a human operator is expected to provide during normal operations. Ideally, an entirely autonomous system would require a single mission statement and would execute that mission without further assistance. However, just as people sometimes need additional inputs during a task, an autonomous system requires the same consideration. On the other end of the spectrum, an autonomous system can degenerate into an entirely remotely controlled system, in which the human operator is constantly in contact with the system, providing direct commands to accomplish the task. In this section, we explore the impact of specifying the required level of operator interaction. This characteristic in particular has far-reaching implications and, unlike environmental uncertainty, is fully controlled by the customer and developer of the system. A customer can choose (for example) to require an autonomous system to need only a single operator interaction per year, but that requirement will significantly impact development time and cost.

Planner Complexity

If an autonomous system is intended to operate with very little operator interaction, then it must be able to effectively decide what to do on its own as the environment and mission evolve. We refer to this capability generically as "planning" rather than "intelligence." The planner operation is central to how well autonomous systems operate in uncertain environments. We will review some examples of planning complexity and how it relates to operator inputs. Additionally, when ranking complexity, we need to consider the operating environment of the system. A planning system that operates in a highly uncertain environment must adapt quickly to changes in that environment, whereas low uncertainty environments can be traversed with perhaps only a single plan for the entire mission.

Aircraft Autopilot

Everyday autopilot systems in commercial and military aircraft perform certain planning tasks based on pilot commands. Modern autopilot systems have many potential "modes" of operation, such as maintaining altitude and heading or steering the aircraft to follow a set course of waypoints [7]. Even though the pilot must initiate these modes, once activated, the autopilot program can make course changes to follow the desired route and is therefore planning vehicle motion. However, an aircraft autopilot program will not change the course of the aircraft to avoid a collision with another aircraft [8]. Instead, the pilot is issued an "advisory" to change altitude and/or heading. With this basic understanding of what an autopilot is allowed to do, we rank the planner complexity of these systems as low. Since most aircraft with autopilot systems operate in air traffic controlled airspace, we also believe the environmental uncertainty is low, placing the autopilot planner complexity on a qualitative graph as shown in Figure 5.


Figure 5. Variation in planner complexity as a function of required frequency of operator interaction.

Urban Challenge Autonomous System

Vehicles that competed in the Urban Challenge were asked to achieve a difficult set of goals with a single operator interaction with the system per mission. After initial setup, the vehicle was required to negotiate an uncertain environment of human-driven and autonomous traffic vehicles without assistance. Accomplishing this performance implied a key top-level design requirement for the planning system: it must be capable of generating entirely new vehicle motion plans and executing them automatically. For example, both the 4th place finisher, MIT (Talos), and the 1st place finisher, Carnegie Mellon (Boss), relied on planning systems that were constantly creating new motion plans based on the perceived environment [9], [10]. In the case of Talos, the vehicle motion planning system was always searching for safe vehicle plans that would achieve the goal of the next mission waypoint, but also for possible plans for bringing the vehicle to a safe stop at all times. This strategy was flexible and allowed the vehicle to execute sudden stops if a collision was anticipated.


This type of flexibility, however, comes at a high complexity cost, at least when compared with traditional systems that are not allowed to replan their actions automatically without human consent. The motion plans were generated continuously at a 10-Hz rate and could represent plans up to several seconds into the future [9]. The dynamic nature of the planning was founded on incorporating randomness in the system, meaning that there was no predefined finite set of paths from which the system was selecting. Instead, it was constantly creating new possible plans and selecting them based on the environmental and rule constraints.
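As a rough illustration of this style of planner, the sketch below samples fresh candidate plans each cycle rather than selecting from a fixed library, discards infeasible ones, and keeps the best survivor. All names, the cost model, and the feasibility test are illustrative assumptions, not the Talos implementation.

```python
import random

def sample_plan(state, horizon_s=3.0):
    """Randomly sample one candidate plan: a short sequence of (x, y)
    waypoints a few seconds into the future."""
    return [(state[0] + random.uniform(-1.0, 1.0) * t,
             state[1] + random.uniform(-1.0, 1.0) * t)
            for t in (0.5, 1.5, horizon_s)]

def feasible(plan, obstacles, standoff_m=2.0):
    """Reject any plan that passes within the standoff distance of an obstacle."""
    return all((px - ox) ** 2 + (py - oy) ** 2 > standoff_m ** 2
               for (px, py) in plan for (ox, oy) in obstacles)

def replan(state, goal, obstacles, n_samples=200):
    """One planning cycle (run at ~10 Hz): generate fresh random plans,
    keep the feasible ones, return the plan ending closest to the goal."""
    candidates = [p for p in (sample_plan(state) for _ in range(n_samples))
                  if feasible(p, obstacles)]
    if not candidates:
        return None  # no safe plan found: execute the standing safe-stop plan
    return min(candidates,
               key=lambda p: (p[-1][0] - goal[0]) ** 2 + (p[-1][1] - goal[1]) ** 2)
```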

We feel this adapting type of planning system is the evolutionary path to greater autonomous vehicle capability, and it represents a high level of complexity. But the Urban Challenge systems still operated in a controlled environment with moderate levels of uncertainty, so we rank their planner complexity well above the autopilot case and on a higher environment uncertainty curve.

Human Planning

For the upper bound of the relationship, we rank human planning processes as extremely adaptable and highly complex, giving them the highest complexity ranking for the most uncertain environments, likely off the notional planner complexity scale as shown.

Remotely Operated Systems

The lowest end of the planning complexity curve is occupied by remotely operated systems. These systems depend on a human operator to make all planning decisions. For this ranking, we consider only the capabilities of the base system without the operator. We understand that a great advantage of remotely operated systems is indeed the planning capability of the human operator. Currently, most active robots used by the military fall into this classification.

Verification Effort

The frequency of interaction with an autonomous system is a powerful parameter stating how independent we expect the system to be over time (it is also tightly related to the level of assertiveness discussed in the next section). Intuitively, we expect that the more independent a system is, the more time must be spent performing testing to verify system performance and safety. The following examples help create another qualitative relationship, between verification effort and the required frequency of operator interaction.

Aircraft Autopilot

As an example, we first consider the very formal verification process performed for certification of aircraft autopilot software (and other avionics components), as recommended by the Federal Aviation Administration (FAA) [11]. Autopilots are robust autonomous systems, flying millions of miles without incident [12]. Organizations are required to meet the guidelines set forth in DO-178B [13] (and the many ancillary documents) in order to achieve autopilot software certification. The intent of these standards is to provide a rigorous set of processes and tests that ensure the safety of software products that operate the airplane. The process of achieving compliance with DO-178B and obtaining certification for new software is so involved that entire companies exist to assist with or perform the process, or to create software tools that help generate software compliant with the standards [14]-[16]. Therefore, we classify aircraft avionics software as requiring a "very high" level of verification effort; not the highest, but certainly close. And recall that we classified the complexity of the planning software as low. For this example, we quantify an aircraft autopilot as needing inputs from the human operators several to many times in a given flight. The pilots are responsible for setting the operational mode of the autopilot and are required to initiate complex sequences like autolanding [7]. We therefore place the aircraft autopilot on a plot of verification effort versus frequency of operator interaction as shown in Figure 6.

Figure 6. Verification effort and communications bandwidth as a function of operator interaction.
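Much of this verification effort reduces to disciplined bookkeeping: every requirement must trace to passing test evidence. The sketch below shows a toy traceability check; the IDs and data layout are hypothetical and are not DO-178B artifacts.

```python
# Hypothetical requirement and test-evidence records (illustrative only).
requirements = {
    "REQ-001": "Detect an oncoming vehicle within 2 s",
    "REQ-002": "Maintain standoff distance to moving vehicles",
}
test_results = {
    "TC-17": ("REQ-001", "pass"),
    "TC-18": ("REQ-002", "fail"),
}

def untraced(reqs, results):
    """Return the requirement IDs that lack any passing test evidence."""
    covered = {req for (req, status) in results.values() if status == "pass"}
    return sorted(set(reqs) - covered)

print("requirements lacking evidence:", untraced(requirements, test_results))
# -> ['REQ-002']
```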

Urban Challenge Autonomous System

DARPA required all entrants in the Urban Challenge to have only a single operator interaction per mission in order to compete in the race. The operators could stage the vehicle at the starting zone, enter the mission map, arm the vehicle for autonomous driving, and walk away. At that point, the operators were intended to have no further contact, radio or physical, with the vehicle until the mission was completed [5]. Due to the highly experimental nature of the Urban Challenge and the compressed schedule, most, if not all, teams performed a tightly coupled "code → code while testing → code" iterative loop of development. This practice was certainly true of the MIT team and left little room for evaluating the effects of constant software changes on the overall performance of the system. In other words, the team was continuously writing software with no formal process for updating the software on the physical system. Therefore, while the vehicles met the goals set forth by DARPA for operator interaction, we estimate the level of verification on each vehicle was very low, as shown in Figure 6.


This highlights the large gap that exists between a demonstration system that drives in a controlled environment and a deployable mass-market or military system. As described in the previous section, the software on Urban Challenge vehicles was constantly creating and executing new motion plans. This capability implies that tests must adequately verify the performance of a system that does not have a finite set of output actions for a given set of inputs. This verification discussion is beyond the scope of this paper, but it is of great interest to Draper Laboratory and will continue to be an area of research for many.

Remotely Operated Systems

Most, if not all, currently deployed military and law enforcement "robots" or "unmanned systems" are truly operated remotely: a human operator at a terminal provides frequent to continuous input commands to the system to accomplish a mission. While it is certainly required to verify that these systems perform their functions, that verification testing process can focus on the accurate execution of the operator commands. We therefore consider remotely operated systems at the lowest end of the verification effort scale; certainly nonzero, but far from the aircraft avionics case. It is possible, however, that some unmanned aircraft systems will execute automated flights back to home base under certain failure conditions. Those systems would likely need verification levels commensurate with aircraft autopilot systems.

Communications Bandwidth

The expected interactions of the operator with the system also have a direct effect on how much data must be exchanged between the operator and the system during the mission. Higher operator interaction drives higher bandwidth requirements, while lower interaction saves bandwidth but increases the required verification effort.

Urban Challenge Autonomous System

As the minimum case, we have the Urban Challenge type autonomous vehicles, which were required to have only a dedicated "emergency-stop" transceiver active during the race. This radio allowed the race monitoring center to remotely pause or completely disable any vehicle on the course, and gave those same options to the dedicated chase car drivers who followed each vehicle around the course [5]. This kind of link did not exchange much information; the GPS coordinates of the vehicle and some bits to indicate the current operating mode were sufficient. Therefore, we can locate the bandwidth requirements for these vehicles at the very low end of the scale as shown in Figure 6.

Remotely Operated Systems

At the opposite end of the scale, we have systems that are representative of all the actively deployed "robotic" or "unmanned" systems used in military operations. These systems are remotely operated, requiring a constant high-bandwidth data link to a ground station that allows an operator to see live video and other system data at all times. These types of links are required to satisfy the human operator's need for rapidly updating data to operate the system safely. Therefore, we place these systems at the highest end of the bandwidth requirement scale.
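The gap between the two link classes spans several orders of magnitude. The figures below are assumed for illustration only (none come from the paper): a small heartbeat packet for the e-stop style link versus a compressed live video stream plus telemetry for teleoperation.

```python
ESTOP_MSG_BYTES = 32   # assumed: GPS fix plus mode bits per heartbeat
ESTOP_RATE_HZ = 1.0    # assumed heartbeat rate

VIDEO_KBPS = 2000.0    # assumed compressed live video stream
TELEMETRY_KBPS = 50.0  # assumed vehicle status channel

estop_kbps = ESTOP_MSG_BYTES * 8 * ESTOP_RATE_HZ / 1000.0
teleop_kbps = VIDEO_KBPS + TELEMETRY_KBPS

print("e-stop style link : %.3f kbps" % estop_kbps)
print("teleoperation link: %.0f kbps (~%.0fx more)"
      % (teleop_kbps, teleop_kbps / estop_kbps))
```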

Operator Interaction Test Concepts

With an understanding of how the operators are expected to interact with the system, the performance of the system with regard to this metric can be measured directly. At all times during any system-level test, the actions of the operators must be recorded and compared against the expected values. During the Urban Challenge, we observed that many teams had dedicated vehicle test drivers. Over months of involvement, these drivers had become comfortable with the level of help their vehicle would require in many scenarios. A practiced vehicle test driver would allow the autonomous system to proceed in situations where a less experienced test driver would deem the situation dangerous and take control of the vehicle. This observation is an example of how different drivers trusted the systems they interacted with, and it highlights the need to understand this relationship. To transition more autonomous systems into daily use, the time for developing that trust must be shortened from months or weeks into hours, or perhaps even minutes.

All of us routinely estimate the actions of others around us and trust that they will execute tasks much as we would. When driving on a road, we assume that others around us are following the rules of that road as expected; we routinely trust our lives to complete strangers. Indeed, it is a daunting task to conjecture what will be required to verify a completely autonomous vehicle driving in a general city setting. Aircraft avionics benefit from a very strict set of operating conditions and intentional air traffic control to mitigate the chance of collisions, but ground vehicles have no such aids and operate in a far more complex and dynamic environment.

Level of Assertiveness

Finally, we discuss the idea of an autonomous system being assertive: How much leeway should the system be given in executing the desired mission? This is another characteristic that is entirely controllable by the customer and the development team. It is inversely related to the previously discussed frequency of operator interaction in that a system intended to operate for long periods without assistance must necessarily be assertive in mission execution. The intent of specifying assertiveness is to give the operators and collaborating humans a feel for how persistent a given system will be in completing the mission. This "feel" may be a time span over which the system "thinks" about different options for continuing a mission in the face of an obstacle, and it may include various physical behaviors that allow the system to scan the situation with its perceptual system from a different viewpoint in order to resolve a perceived ambiguity in what it is seeing.

Object Classification Accuracy

We feel that for an autonomous system to be assertive in executing a mission, it must be able not only to see obstacles in the path of the vehicle but also to classify what those obstacles are. For example, if a truck-sized LIDAR-equipped ground vehicle encounters a long row of low bushes, it will "see" these bushes as a point cloud of distance measurements with a regular height. Those bushes, for all practical purposes, will look exactly like a concrete wall to a LIDAR, and the vehicle will be confronted with a planning decision: find a way around this obstacle or drive over it. To an outside observer, this decision is trivial (assuming the property owner is not around), but it is a real and difficult problem in autonomous system deployment.


The DARPA Urban Challenge mitigated the issue of object classification by carefully selecting the rules to make the problem more tractable. For example, the rules dictated that the race course would contain only other cars and static objects such as traffic cones or barrels. The distinction between static objects and cars was important due to different standoff distance requirements for the two types of objects: vehicles were allowed to pass much closer to static objects (including parked cars) than to moving vehicles. In the case of Talos, the classification of objects was performed based solely on the tracked velocity of the object. This type of classification avoided the need to extract vehicle-specific geometry from LIDAR and camera data, but it also contributed to a low-speed collision with the Cornell system [17].

Unlike the previous system characteristics, here we have only the Urban Challenge example, but we feel qualitative curves can still be constructed to show a relationship between classification accuracy and assertiveness, as shown in Figure 7. Notice that we also feel the need for increasing levels of classification accuracy is a function of the environment uncertainty: systems that operate in a low uncertainty environment can be very assertive with a low level of classification accuracy. At the lowest end of the scale, we place a zero-assertiveness system: it will never change the operating plan without interaction from an operator because the operator is making all classification decisions. Examples of zero-assertiveness systems are remotely operated robots and aircraft autopilots, both of which require operator interaction to change plans. We estimate most Urban Challenge systems have low classification accuracy in a moderately uncertain environment. Based on experience with the Talos system, we estimate that it classified objects correctly around 20% of the time; its assertiveness was intentionally skewed toward the far end of the scale, but the system would eventually stop all motion if no safe motion plans were found. Finally, we include a not-quite-100% rating for human classification accuracy in the most uncertain environments at the "never ask for help" end of the scale. As shown, we feel that much work remains to achieve practical autonomous systems that can complete missions in uncontrolled environments without a well-defined method for operators to assist the system.
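As noted above, Talos labeled objects from tracked velocity alone. A minimal sketch of that style of classifier follows; the threshold value is an illustrative assumption, not a value from the Talos software.

```python
def classify(track_speed_mps, moving_thresh_mps=1.0):
    """Label a tracked object from its velocity alone: anything moving
    faster than the threshold is treated as a vehicle, everything else
    as a static obstacle (cones, barrels, parked cars)."""
    return "vehicle" if track_speed_mps > moving_thresh_mps else "static"
```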


Figure 7. Variation of classification accuracy as a function of assertiveness.

Assertiveness Test Concepts

Object classification was an important part of the Urban Challenge testing process. Test scenarios were developed to intentionally provide a mixed set of vehicle and static obstacles during development. Other team members (and even other Urban Challenge systems) provided live traffic vehicles in a representative "urban" environment at the former El Toro Marine Corps Air Station in Irvine, California. Another valuable source of classification data came from the human-driven commutes to and from test sites: the software architecture of the Talos system allowed real-time recording of all system data, which could be played back later. This allowed algorithm testing with real system data on any developer computer [3]. The vehicle perception systems were often left operating in many types of general traffic scenarios that were later used to evaluate classification performance.

Testing for assertiveness did not happen until near the end of the Urban Challenge project for the MIT team, as it was a concept that grew out of testing just prior to the race. The Talos team and others [10], [3] implemented logic designed to "unstick" the vehicles and continue the mission. In order to test these features, the test designer must have a working knowledge of how the assertiveness of the system should vary as a function of operating conditions. When the Talos vehicle failed to make forward progress, a chain of events would start increasing the assertiveness level of the system incrementally. This was done by relaxing planning constraints that the vehicle was maintaining, such as staying within lane or road boundaries and keeping large standoff distances to objects. This gave the planning system a chance to recalculate a plan that might result in forward progress. These constraint relaxations would escalate until eventually, if no plan was found, the system would reboot all the control software and try again [3].
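The escalation behavior described above maps naturally onto a simple ladder keyed to time without forward progress. The sketch below is a hypothetical reconstruction; the stages and the 30-s per-stage budget are assumptions, not values from the Talos software.

```python
from enum import Enum, auto

class Escalation(Enum):
    NOMINAL = auto()            # all planning constraints enforced
    RELAX_LANE_BOUNDS = auto()  # allow plans outside lane/road boundaries
    RELAX_STANDOFF = auto()     # shrink standoff distances to objects
    RESTART_SOFTWARE = auto()   # last resort: reboot the control software

LADDER = list(Escalation)  # members in definition order

def stage_for(stalled_s, per_stage_s=30.0):
    """Map time without forward progress onto the ladder, one rung per
    budget period; reset to NOMINAL whenever the vehicle is moving."""
    if stalled_s <= 0.0:
        return Escalation.NOMINAL
    rung = min(int(stalled_s // per_stage_s), len(LADDER) - 1)
    return LADDER[rung]
```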


Conclusions

We have proposed and given examples for how to categorize the top-level requirements for the performance of an autonomous system. These characteristics are intended to apply to any automated/intelligent/autonomous system by describing expected behaviors that in turn specify the required performance of lower level system capabilities, thereby providing a basis for testing and analysis.

Environmental uncertainty is the primary driver for the overall perceptual needs of the system. Systems that operate in highly uncertain environments must be able to recognize and react to objects from any direction with a response time sufficient to avoid collisions. Estimating a metric of perception coverage reveals that current state-of-the-art LIDAR systems provide far less perception information than human vision and provide seconds or less of collision detection range, but depending on the required operational environment they may be sufficient. Testing the system against varying levels of environment uncertainty must be a focus of any autonomous system verification; experience indicates that ground-based systems in particular are very sensitive to environmental uncertainty.

The frequency of operator interaction is a controlling parameter that has a direct effect on several key system abilities: planner complexity, verification effort, and communications bandwidth. Motion planning systems capable of continuously creating new plans in response to environment changes are inherently non-finite-state and therefore need new types of verification testing and research. When a system is intended to operate with minimal operator input, the communications bandwidth can be reduced, whereas teleoperated systems with constant operator interaction require more robust links.

And finally, the level of assertiveness of the system, which is tied to the desired frequency of operator interaction, will have an impact on how accurately the autonomous system must be able to classify objects in the environment. Systems that are intended to operate with little supervision must make safe decisions about crossing perceived constraints of travel in the environment, which drives the need to classify objects around the system. Object classification is a complex topic that requires much research to create robust systems. Specifying an assertive autonomous system also requires a planning system that is allowed to change motion plans automatically during the mission, driving up the planner complexity and the associated verification effort.

Draper Laboratory will continue efforts to refine (and expand if needed) these characteristics and is interested in collaborating with other institutions in developing requirements and test metrics for autonomous systems. We believe it will take widespread agreement among different organizations to arrive at an understandable set of guidelines that will help move advanced autonomous systems into fielded use domestically and in military operations. These systems, even with the limitations of current perception and planning, can be useful right now in reducing threats to U.S. military forces. We must focus efforts on specifying and testing systems that can be trusted by their operators to succeed in their missions.

References

[1] Cleary, M., M. Abramson, M. Adams, S. Kolitz, "Metrics for Embedded Collaborative Systems," Charles Stark Draper Laboratory, Performance Metrics for Intelligent Systems, National Institute of Standards & Technology (NIST), Gaithersburg, MD, 2000.

[2] Sternberg, S., "Memory Scanning: Mental Processes Revealed by Reaction Time Experiments," American Scientist, Vol. 57, 1969, pp. 421-457.

[3] Leonard, J., D. Barrett, T. Jones, M. Antone, R. Galejs, "A Perception Driven Autonomous Urban Vehicle," Journal of Field Robotics, DOI 10.1002, 2008. [PDF]: http://acl.mit.edu/papers/LeonardJFR08.pdf.

[4] Velodyne HDL-64E Specifications. [HTML]: http://www.velodyne.com/lidar/products/specifications.aspx.

[5] DARPA Urban Challenge Rules. [PDF]: http://www.darpa.mil/grandchallenge/docs/Urban_Challenge_Rules_102707.pdf.

[6] "Preventing the Injury of Workers by Robots," National Institute for Occupational Safety and Health (NIOSH), Publication No. 85-103. [HTML]: http://www.cdc.gov/niosh/85-103.html.

[7] Advanced Avionics Handbook, U.S. Department of Transportation, Federal Aviation Administration, FAA-H-8083-6, 2009. [PDF]: http://www.faa.gov/library/manuals/aviation/media/FAA-H-8083-6.pdf.

[8] Introduction to TCAS II Version 7, ARINC. [PDF]: http://www.arinc.com/downloads/tcas/tcas.pdf.

[9] Kuwata, Y., G. Fiore, E. Frazzoli, "Real-Time Motion Planning with Applications to Autonomous Urban Driving," IEEE Transactions on Control Systems Technology, Vol. XX, No. XX, January 2009. [PDF]: http://acl.mit.edu/papers/KuwataTCST09.pdf.

[10] Baker, C., D. Ferguson, J. Dolan, "Robust Mission Execution for Autonomous Urban Driving," 10th International Conference on Intelligent Autonomous Systems (IAS 2008), July 2008, Carnegie Mellon University. [PDF]: http://www.ri.cmu.edu/pub_files/pub4/baker_christopher_2008_1/baker_christopher_2008_1.pdf.

[11] FAA Advisory Circular 20-115B. [PDF]: http://rgl.faa.gov/Regulatory_and_Guidance_Library/rgAdvisoryCircular.nsf/0/DCDB1D2031B19791862569AE007833E7?OpenDocument.

[12] Aviation Accident Statistics, National Transportation Safety Board. [HTML]: http://www.ntsb.gov/aviation/Table2.htm.

[13] Software Considerations in Airborne Systems and Equipment Certification, RTCA DO-178B. [PDF]: http://www.rtca.org/downloads/ListofAvailableDocs_December_2009.htm#_Toc247698345.

[14] Donatech Commercial Aviation DO-178B Certification Services Page. [HTML]: http://www.donatech.com/aviation-defense/commercial/commercial-tanker-transport-planes.html.

[15] HighRely, Reliable Embedded Solutions. [HTML]: http://highrely.com/index.php.

[16] Esterel Technologies. [HTML]: http://www.esterel-technologies.com/products/scade-suite/.

[17] Fletcher, L., I. Miller, et al., "The MIT-Cornell Collision and Why It Happened," Journal of Field Robotics, DOI 10.1002, 2008. [PDF]: http://people.csail.mit.edu/seth/pubs/FletcherEtAlJFR2008.pdf.


Troy B. Jones is the Autonomous Systems Capability Leader at Draper Laboratory. He joined Draper in 2004 and began working in the System Integration and Test division on the TRIDENT MARK 6 MOD 1 inertial guidance system. Current duties focus on strengthening Draper's existing autonomous technologies in platform design and software by adding new testing methods and incorporating concepts from Human System Collaboration. Draper's ultimate goal is to produce autonomous systems that are trusted implicitly by our customers to perform their critical missions. In 2006, he joined with students and faculty at MIT to build an entry for the 2007 DARPA Urban Challenge. The team's fully autonomous Land Rover LR3 used a combination of LIDAR, vision, radar, and GPS/INS to perceive the environment and road, and safely completed the Urban Challenge in fourth place overall. Mr. Jones earned B.S. and M.S. degrees at Virginia Tech.

Mitch G. Leammukda is a Member of the Technical Staff in the Integrated Systems Development and Test group at Draper Laboratory. For the past 7 years, he has worked on navigation systems for naval aircraft, space instruments, and individual soldiers. He has also led the system integration for a robotic forklift and an RF instrumentation platform. He is currently developing a universal test station platform for inertial guidance instruments. Mr. Leammukda holds M.S. and B.S. degrees in Electrical Engineering from Northeastern University.


List of 2010 Published Papers and Presentations

Abrahamsson, C.K.; Yang, F.; Park, H.; Brunger, J.M.; Valonen, P.K.; Langer, R.S.; Welter, J.F.; Caplan, A.I.; Guilak, F.; Freed, L.E. Chondrogenesis and Mineralization During In Vitro Culture of Human Mesenchymal Stem Cells on 3D-Woven Scaffolds Tissue Engineering: Part A, Vol. 16, No. 7, July 2010

Barbour, N.M.; Flueckiger, K. Understanding Commonly Encountered Inertial Instrument Specifications Missile Defense Agency/Deputy for Engineering, Producibility (MDA/DEP), June 2010

Abramson, M.R.; Kahn, A.C.; Kolitz, S.E. Coordination Manager - Antidote to the Stovepipe Anti-Pattern Infotech at Aerospace Conference, Atlanta, GA, April 20-22, 2010. Sponsored by: AIAA

Bellan, L.; Wu, D.; Borenstein, J.T.; Cropeck, D.; Langer, R.S. Microfluidics in Hydrogels Using a Sealing Adhesion Layer (poster) Biomedical Engineering Society/Annals of Biomedical Engineering, May 5, 2010

Abramson, M.R.; Carter, D.W.; Kahn, A.C.; Kolitz, S.E.; Riek, J.C.; Scheidler, P.J. Single Orbital Revolution Planner for NASA's EO-1 Spacecraft Infotech at Aerospace Conference, Atlanta, GA, April 20-22, 2010. Sponsored by: AIAA

Agte, J.S.; Borer, N.K.; de Weck, O. Simulation-Based Design Model for Analysis and Optimization of Multistate Aircraft Performance Multidisciplinary Design Optimization (MDO) Specialist's Conference, Orlando, FL, April 12-15, 2010. Sponsored by: AIAA

Ahuja, R.; Tao, S.L.; Nithianandam, B.; Kurihara, T.; Saint-Geniez, M.; D'Amore, P.; Redenti, S.; Young, M. Polymer Thin Films as an Antiangiogenic and Neuroprotective Biointerface Materials Research Society (MRS) Fall Meeting, Boston, MA, November 29-December 3, 2010. Sponsored by: MRS

Ahuja, R.; Nithianandam, B.; Kurihara, T.; Saint-Geniez, M.; D'Amore, P.; Redenti, S.; Young, M.; Tao, S.L. Polymer Thin-Films as an Antiangiogenic and Neuroprotective Biointerface Graduate Student Award Appreciation, Materials Research Society, Boston, MA, November 2010

Barbour, N.M.; Hopkins III, R.E.; Kourepenis, A.S.; Ward, P.A. Inertial MEMS System Applications (SET116) NATO SET Lecture Series, Turkey, Czech Republic, France, Portugal, March 15-26, 2010. Sponsored by: NATO Research & Technology Organization

Barbour, N.M. Inertial Navigation Sensors (SET116) NATO SET Lecture Series, Turkey, Czech Republic, France, Portugal, March 15-26, 2010. Sponsored by: NATO Research & Technology Organization

Benvegnu, E.; Suri, N.; Tortonesi, M.; Esterrich III, T. Seamless Network Migration Using the Mockets Communications Middleware Military Communications Conference (MILCOM), San Jose, CA, October 31-November 3, 2010. Sponsored by: IEEE

Bettinger, C.J.; Borenstein, J.T. Biomaterials-Based Microfluidics for Tissue Development Soft Matter, Vol. 6, No. 20, October 2010

Billingsley, K.L.; Balaconis, M.K.; Dubach, J.M.; Zhang, N.; Lim, E.; Francis, K.; Clark, H.A. Fluorescent Nano-Optodes for Glucose Detection Analytical Chemistry, American Chemical Society (ACS), Vol. 82, No. 9, May 1, 2010

Bogner, A.J.; Torgerson, J.F.; Mitchell, M.L. GPS Receiver Development History for the Extended Navy Test Bed Missile Sciences Conference, Monterey, CA, November 16-18, 2010. Sponsored by: AIAA

Borenstein, J.T.; Tupper, M.M.; Mack, P.J.; Weinberg, E.J.; Khalil, A.S.; Hsiao, J.C.; García-Cardeña, G. Functional Endothelialized Microvascular Networks with Circular Cross-Sections in a Tissue Culture Substrate Biomedical Microdevices, Vol. 12, No. 1, February 2010

Borer, N.K. Analysis and Design of Fault-Tolerant Systems DEKA Lecture Series, Manchester, NH, August 12, 2010. Sponsored by: DEKA Research and Development

Borer, N.K.; Cohanim, B.E.; Curry, M.L.; Manuse, J.E. Characterization of a Persistent Lunar Surface Science Network Using On-Orbit Beamed Power Aerospace Conference, Big Sky, MT, March 6-13, 2010. Sponsored by: IEEE


Borer, N.K.; Claypool, I.R.; Clark, W.D.; West, J.J.; Odegard, R.G.; Somervill, K.; Suzuki, N. Model-Driven Development of Reliable Avionics Architectures for Lunar Surface Systems Aerospace Conference, Big Sky, MT, March 6-13, 2010. Sponsored by: IEEE

Cuiffi, J.D.; Soong, R.K.; Manolakos, S.Z.; Mohapatra, S.; Larson, D.N. Nanohole Array Sensor Technology: Multiplexed Label-Free Protein Binding Assays 26th Southern Biomedical Engineering Conference, College Park, MD, April 30-May 2, 2010. Sponsored by: International Federation for Medical and Biological Engineering (IFMBE)

Bortolami, S.B.; Duda, K.R.; Borer, N.K. Markov Analysis of Human-in-the-Loop System Performance Aerospace Conference, Big Sky, MT, March 6-13, 2010. Sponsored by: IEEE

Cunha, M.G.; Clarke, A.C.; Martin, J.; Beauregard, J.R.; Webb, A.K.; Hensley, A.A.; Keshava, N.; Martin, D.J. Detection of Deception in Structured Interviews Using Sensors and Algorithms International Society for Optical Engineers (SPIE) Defense, Security and Sensing, Orlando, FL, April 5-9, 2010. Sponsored by: SPIE

Brady, T.M.; Paschall II, S.C. Challenge of Safe Lunar Landing Aerospace Conference, Big Sky, MT, March 6-13, 2010. Sponsored by: IEEE

Brady, T.M.; Paschall II, S.C.; Crain, T. GN&C Development for Future Lunar Landing Missions Guidance, Navigation, and Control Conference and Exhibit, Toronto, Canada, August 2-5, 2010. Sponsored by: AIAA

Carter, D.J.; Cook, E. Towards Integrated CNT-Bearing Based MEMS Rotary Systems Gordon Research Conference on Nanostructure Fabrication, Tilton, NH, July 18-23, 2010. Sponsored by: Tilton School

Clark, T.; Stimpson, A.; Young, L.R.; Oman, C.M.; Duda, K.R. Analysis of Human Spatial Perception During Lunar Landing Aerospace Conference, Big Sky, MT, March 6-13, 2010. Sponsored by: IEEE

Cohanim, B.E.; Cunio, P.M.; Hoffman, J.; Joyce, M.; Mosher, T.J.; Tuohy, S.T. Taking the Next Giant Leap 33rd Guidance and Control Conference, Breckenridge, CO, February 6-10, 2010. Sponsored by: AAS

Collins, B.K.; Kessler, L.J.; Benagh, E.A. Algorithm for Enhanced Situation Awareness for Trajectory Performance Management Infotech at Aerospace Conference, Atlanta, GA, April 20-22, 2010. Sponsored by: AIAA

Copeland, A.D.; Mangoubi, R.; Mitter, S.K.; Desai, M.N.; Malek, A.M. Spatio-Temporal Data Fusion in Cerebral Angiography IEEE Transactions on Medical Imaging, Vol. 29, No. 6, June 2010

Crain, T.; Bishop, R.H.; Brady, T.M. Shifting the Inertial Navigation Paradigm with MEMS Technology 33rd Guidance and Control Conference, Breckenridge, CO, February 6-10, 2010. Sponsored by: American Astronautical Society (AAS)


Cunio, P.M.; Lanford, E.R.; McLinko, R.; Han, C.; Canizales-Diaz, J.; Olthoff, C.T.; Nothnagel, S.L.; Bailey, Z.J.; Hoffman, J.; Cohanim, B.E. Further Development and Flight Testing of a Prototype Lunar and Planetary Surface Exploration Hopper: Update on the TALARIS Project Space 2010 Conference, Anaheim, CA, August 30-September 3, 2010. Sponsored by: AIAA

Cunio, P.M.; Corbin, B.A.; Han, C.; Lanford, E.R.; Yue, H.K.; Hoffman, J.; Cohanim, B.E. Shared Human and Robotic Landing and Surface Exploration in the Neighborhood of Mars Space 2010 Conference, Anaheim, CA, August 30-September 3, 2010. Sponsored by: AIAA

Davis, J.L.; Striepe, S.A.; Maddock, R.W.; Johnson, A.E.; Paschall II, S.C. Post2 End-to-End Descent and Landing Simulation for ALHAT Design Analysis Cycle 2 International Planetary Probe Workshop, Barcelona, Spain, June 14-18, 2010. Sponsored by: Georgia Institute of Technology

DeBitetto, P.A. Using 3D Virtual Models and Ground-Based Imagery for Aiding Navigation in Large-Scale Urban Terrain 35th Joint Navigation Conference (JNC), Orlando, FL, June 8-10, 2010. Sponsored by: Joint Services Data Exchange (JSDE)

Dorland, B.N.; Dudik, R.P.; Veillette, D.; Hennessy, G.S.; Dugan, Z.; Lane, B.F.; Moran, B.A. Automated Frozen Sample Aliquotting System European Laboratory Robotics Interest Group (ELRIG) Liquid Handling & Label-Free Detection Technologies Conference, Whittlebury Hall, UK, March 4, 2010. Sponsored by: ELRIG


Dorland, B.N.; Dudik, R.P.; Veillette, D.; Hennessy, G.S.; Dugan, Z.; Lane, B.F.; Moran, B.A. The Joint Milli-Arcsecond Pathfinder Survey (JMAPS): Measurement Accuracy of the Primary Instrument when Used as Fine Guidance Sensor 33rd Guidance and Control Conference, Breckenridge, CO, February 6-10, 2010. Sponsored by: AAS

Dubach, J.M.; Lim, E.; Zhang, N.; Francis, K.; Clark, H.A. In Vivo Sodium Concentration Continuously Monitored with Fluorescent Sensors Integrative Biology: Quantitative Biosciences from Nano to Macro, November 2010

Duda, K.R.; Johnson, M.C.; Fill, T.J.; Major, L.M.; Zimpfer, D.J. Design and Analysis of an Attitude Command/Hover Hold plus Incremental Position Command Blended Control Mode for Piloted Lunar Landing Guidance, Navigation, and Control Conference and Exhibit, Toronto, Canada, August 2-5, 2010. Sponsored by: AIAA

Duda, K.R.; Oman, C.M.; Hainley Jr., C.J.; Wen, H.-Y. Modeling Human-Automation Interactions During Lunar Landing Supervisory Control 81st Annual Aerospace Medical Association (ASMA) Scientific Meeting, Phoenix, AZ, May 9-13, 2010. Sponsored by: ASMA

Effinger, R.T.; Williams, B.; Hofmann, A. Dynamic Execution of Temporally and Spatially Flexible Reactive Programs 24th Association for the Advancement of Artificial Intelligence (AAAI) Conference on Artificial Intelligence, Atlanta, GA, July 11-15, 2010. Sponsored by: AAAI

Fill, T.J. Lunar Landing and Ascent Trajectory Guidance Design for the Autonomous Landing and Hazard Avoidance Technology (ALHAT) Program Space Flight Mechanics Conference, San Diego, CA, February 14-17, 2010. Sponsored by: AAS and AIAA

Fritz, M.P.; Zanetti, R.; Vadali, S.R. Analysis of Relative GPS Navigation Techniques Space Flight Mechanics Conference, San Diego, CA, February 14-17, 2010. Sponsored by: AAS and AIAA

Frohlich, E.; Ko, C.W.; Tao, S.L.; Charest, J.L. Fabrication of Cell Substrates to Determine the Role of Mechanical Cues in Tissue Structure Formation of Renal Epithelial Cells Science and Engineering Day Symposium, Boston, MA, March 30, 2010. Sponsored by: Boston University

Frohlich, E.; Ko, C.W.; Zhang, X.; Charest, J.L.; Tao, S.L. Fabrication of Cell Substrates to Determine the Role of Topographical Cues in Differentiation and Tissue Structure Formation Tech Connect Summit, Anaheim, CA, June 21-24, 2010. Sponsored by: TechConnect World

Geisler, M.A. Expedition MCM-D Layout for Multi-Layer Die User2User (U2U) Mentor Graphics Users Conference, Westford, MA, April 14, 2010. Sponsored by: U2U

Grant, M.J.; Steinfeldt, B.A.; Braun, R.D.; Barton, G.H. Smart Divert: A New Mars Robotic Entry, Descent, and Landing Architecture Journal of Spacecraft and Rockets, AIAA, Vol. 47, No. 3, May-June 2010

Epshteyn, A.A.; Maher, S.P.; Taylor, A.J.; Borenstein, J.T.; Cuiffi, J.D. Membrane-Integrated Microfluidic Device for High-Resolution Live Cell Imaging Fabricated via a Novel Substrate Transfer Technique Materials Research Society (MRS) Fall Meeting, Boston, MA, November 29-December 3, 2010. Sponsored by: MRS

Guillemette, M.D.; Park, H.; Hsiao, J.C.; Jain, S.R.; Larson, B.L.; Langer, R.S.; Freed, L.E. Combined Technologies for Microfabricating Elastomeric Cardiac Tissue Engineering Scaffolds Journal of Macromolecular Bioscience, Vol. 10, No. 11, November 2010

Fallon, L.P.; Magee, R.J.; Wadland, R.A. Centrifuge Technologies for Evaluating Inertial Guidance Systems 81st Shock and Vibration Symposium, Orlando, FL, October 24-28, 2010. Sponsored by: Shock and Vibration Information Analysis Center (SAVIAC)

Guo, X.; Popadin, K.Y.; Markuzon, N.; Orlov, Y.L.; Kraytsberg, Y.; Krishnan, K.J.; Zsurka, G.; Turnbull, D.M.; Kunz, W.S.; Khrapko, K. Repeats, Longevity, and the Sources of mtDNA Deletions: Evidence from “Deletional Spectra” Trends in Genetics, Vol. 26, No. 8, August 2010, pp. 340-343

Feng, M.Y.; Marinis, T.F.; Giglio, J.; Sherman, P.G.; Elliott, R.D.; Magee, T.; Warren, J. Electronics Packaging to Isolate MEMS Sensors from Thermal Transients International Mechanical Engineering Congress, Vancouver, CA, November 12-18, 2010. Sponsored by: ASME

Hammett, R.C. Fault-Tolerant Avionics Tutorial for the NASA/Army Forum on “Challenges of Complex Systems” NASA/Army Systems and Software Engineering Forum, Huntsville, AL, May 11-12, 2010. Sponsored by: University of Alabama


Harjes, D.I.; Dubach, J.M.; Rosenzweig, A.; Das, S.; Clark, H.A. Ion-Selective Optodes Measure Extracellular Potassium Flux in Excitable Cells Macromolecular Rapid Communications, Vol. 31, No. 2, January 2010

Herold, T.M.; Abramson, M.R.; Kahn, A.C.; Kolitz, S.E.; Balakrishnan, H. Asynchronous, Distributed Optimization for the Coordinated Planning of Air and Space Assets Infotech at Aerospace Conference, Atlanta, GA, April 20-22, 2010. Sponsored by: AIAA

Hicks, B.; Cook, T.; Lane, B.F.; Chakrabarti, S. OPD Measurement and Dispersion Reduction in a Monolithic Interferometer Optics Express, Vol. 18, No. 16, August 2, 2010, pp. 17542-17547

Hicks, B.; Cook, T.; Lane, B.F.; Chakrabarti, S. Progress in the Development of MANIC: a Monolithic Nulling Interferometer for Characterizing Extrasolar Environments Astronomical Telescopes and Instrumentation, San Diego, CA, June 27-July 2, 2010. Sponsored by: SPIE

Hoganson, D.M.; Anderson, J.L.; Weinberg, E.J.; Swart, E.F.; Orrick, B.; Borenstein, J.T.; Vacanti, J.P. Branched Vascular Network Architecture: A New Approach to Lung Assist Device Technology Journal of Thoracic and Cardiovascular Surgery, Vol. 140, No. 5, November 2010

Hopkins III, R.E. Contemporary and Emerging Inertial Sensor Technologies Position Location and Navigation Symposium (PLANS), Indian Wells, CA, May 4-6, 2010. Sponsored by: IEEE/Institute of Navigation (ION)

Hopkins III, R.E.; Barbour, N.M.; Gustafson, D.E.; Sherman, P.G. Integrated Inertial/GPS-Based Navigation Applications NATO SET Lecture Series, Turkey, Czech Republic, France, Portugal, March 15-26, 2010. Sponsored by: NATO Research & Technology Organization

Hsiao, J.C.; Borenstein, J.T.; Kulig, K.M.; Finkelstein, E.B.; Hoganson, D.M.; Eng, K.Y.; Vacanti, J.P.; Fermini, B.; Neville, C.M. Novel In Vitro Model of Vascular Injury with a Biomimetic Internal Elastic Lamina TERMIS-NA Annual Conference & Exposition, Orlando, FL, December 5-10, 2010. Sponsored by: Tissue Engineering International and Regenerative Medicine Society

Hsu, W.-M.; Carraro, A.; Kulig, K.M.; Miller, M.L.; Kaazempur-Mofrad, M.R.; Entabi, F.; Albadawi, H.; Watkins, M.T.; Borenstein, J.T.; Vacanti, J.P.; Neville, C.M. Liver Assist Device with a Microfluidics-Based Vascular Bed in an Animal Model Annals of Surgery, Vol. 252, No. 2, August 2010

Huxel, P.J.; Cohanim, B.E. Small Lunar Lander/Hopper Navigation Analysis Using Linear Covariance Aerospace Conference, Big Sky, MT, March 6-13, 2010. Sponsored by: IEEE

Irvine, J.M. ATR Technology: Why We Need It, Why We Can't Have It, and How We'll Get It Geotech Conference, Fairfax, VA, September 27-28, 2010. Sponsored by: American Society for Photogrammetry and Remote Sensing (ASPRS)

Irvine, J.M. Human Guided Visualization Enhances Automated Target Detection SPIE Defense, Security and Sensing, Orlando, FL, April 5-9, 2010. Sponsored by: SPIE

Jackson, M.C.; Straube, T. Orion Flight Performance Design Trades Guidance, Navigation, and Control Conference and Exhibit, Toronto, Canada, August 2-5, 2010. Sponsored by: AIAA

Jackson, T.R.; Keating, D.J.; Mather, R.A.; Matlis, J.; Silvestro, M.; Ting, B.C. Role of Modeling, Simulation, Testing, and Analysis Throughout the Design, Development, and Production of the MARK 6 MOD 1 Guidance System Missile Sciences Conference, Monterey, CA, November 16-18, 2010. Sponsored by: AIAA

Jang, D.; Wendelken, S.M.; Irvine, J.M. Robust Human Identification Using ECG: Eigenpulse Revisited SPIE Defense, Security and Sensing, Orlando, FL, April 5-9, 2010. Sponsored by: SPIE

Jang, J.-W.; Plummer, M.K.; Bedrossian, N.S.; Hall, C.; Spanos, P.D. Absolute Stability Analysis of a Phase Plane Controlled Spacecraft 20th Spaceflight Mechanics Meeting, San Diego, CA, February 14-17, 2010. Sponsored by: AAS/AIAA

Jang, J.-W.; Alaniz, A.; Bedrossian, N.S.; Hall, C.; Ryan, S.; Jackson, M. Ares I Flight Control System Design 2010 Astrodynamics Specialist Conference, Toronto, Canada, August 2-5, 2010. Sponsored by: AAS/AIAA

Jones, T.B.; Leammukda, M.G. Requirements-Driven Autonomous System Test Design: Building Trusting Relationships International Test and Evaluation Association (ITEA) Live Virtual Constructive Conference, El Paso, TX, January 11-14, 2010

Kahn, A.C.; Kolitz, S.E.; Abramson, M.R.; Carter, D.W. Human-System Collaborative Planning Environment for Unmanned Aerial Vehicle Mission Planning Infotech at Aerospace Conference, Atlanta, GA, April 20-22, 2010. Sponsored by: AIAA

Keating, D.J.; Laiosa, J.P.; Ting, B.C.; Wasileski, B.J.; Vican, J.E.; Silvestro, M.; Foley, B.M.; Shakhmalian, C.T. Using Hardware-in-the-Loop Simulation for System Integration of the MARK 6 MOD 1 Guidance System Missile Sciences Conference, Monterey, CA, November 16-18, 2010. Sponsored by: AIAA

Keshava, N. Detection of Deception in Structured Interviews Using Sensors and Algorithms SPIE Defense, Security and Sensing, Orlando, FL, April 5-9, 2010. Sponsored by: SPIE

Keshava, N.; Coskren, W.D. Sensor Fusion for Multi-Sensor Human Signals to Infer Cognitive States National Symposium Sensor Data Fusion, Las Vegas, NV, July 26-29, 2010. Sponsored by: Military Sensing Symposium

Kessler, L.J.; West, J.J.; McClung, K.; Miller, J.; Zimpfer, D.J. Autonomous Operations for the Next Generation of Human Space Exploration SpaceOps, Huntsville, AL, April 25-30, 2010. Sponsored by: AIAA

Kim, K.H.; Burns, J.A.; Bernstein, J.J.; Maguluri, G.N.; Park, B.H.; De Boer, J.F. In Vivo 3D Human Vocal Fold Imaging with Polarization Sensitive Optical Coherence Tomography and a MEMS Scanning Catheter Optics Express, Vol. 18, No. 14, July 5, 2010

King, E.T.; Hart, J.J.; Odegard, R. Orion GN&C Data-Driven Flight Software Architecture for Automated Sequencing and Fault Recovery Aerospace Conference, Big Sky, MT, March 6-13, 2010. Sponsored by: IEEE

Kniazeva, T.; Hsiao, J.C.; Charest, J.L.; Borenstein, J.T. Microfluidic Respiratory Assist Device with High Gas Permeability for Artificial Lung Applications Biomedical Microdevices, Online First™, November 26, 2010

Ko, C.W.; McHugh, K.J.; Yao, J.; Kurihara, T.; D'Amore, P.; Saint-Geniez, M.; Young, M.; Tao, S.L. Nanopatterning of Poly(ε-caprolactone) Thin Film Scaffolds for Retinal Rescue 4th Military Vision Symposium on Ocular and Brain Injury, Boston, MA, September 26-30, 2010. Sponsored by: Schepens Eye Research Institute

Ko, C.W. Micro and Nanostructured Polymer Thin Films for the Organization and Differentiation of Retinal Progenitor Cells Materials Research Society Fall Meeting, Boston, MA, November 29-December 3, 2010. Sponsored by: MRS

Kourepenis, A.S. Emerging Navigation Technologies for Miniature Autonomous Systems Autonomous Weapons Summit and GNC Challenges for Miniature Autonomous Systems Workshop, Fort Walton Beach, FL, October 25-27, 2010. Sponsored by: ION

Lai, W.; Erdonmez, C.K.; Marinis, T.F.; Bjune, C.K.; Dudney, N.J.; Xu, F.; Wartena, R.; Chiang, Y.-M. Ultrahigh-Energy-Density Microbatteries Enabled by New Electrode Architecture and Micropackaging Design Advanced Materials, Vol. 22, No. 20, May 2010

Larson, D.N.; Slusarz, J.; Bellio, S.L.; Maloney, L.M.; Ellis, H.J.; Rifai, N.; Bradwin, G.; de Dios, J. Automated Frozen Sample Aliquotter International Society for Biological and Environmental Repositories (ISBER) Annual Meeting and Exhibits, Rotterdam, Netherlands, May 11-14, 2010. Sponsored by: ISBER

Larson, D.N.; Fiering, J.O.; Kowalski, G.J.; Sen, M. Development of a Nanoscale Calorimeter: Instrument for Developing Pharmaceutical Products Innovative Molecular Analysis Technologies (IMAT) Conference, San Francisco, CA, October 25-26, 2010. Sponsored by: National Cancer Institute (NCI)

Larson, D.N.; Miranda, L.; Dederis, J. Innovations in Biobanking-Related Engineering and Design: A Novel Automated Methodology for Optimizing Banked Sample Processing ISBER Annual Meeting and Exhibits, Rotterdam, Netherlands, May 11-14, 2010. Sponsored by: ISBER

Larson, D.N. Nanohole Array for Protein Analysis 26th Southern Biomedical Engineering Conference, College Park, MD, April 30-May 2, 2010. Sponsored by: IFMBE

Larson, D.N. Nanohole Array Sensing Biomedical Optics Workshop, Boston, MA, April 13, 2010. Sponsored by: IEEE and Boston University

Larson, D.N. New Method for Processing Banked Samples Biospecimen Research Network (BRN) Symposium, Bethesda, MD, March 24-25, 2010. Sponsored by: NCI

Larson, D.N. Optimizing the Processing and Augmenting the Value of Critical Banked Biological Specimens Biorepositories Conference, Boston, MA, September 27-29, 2010

Larson, D.N. Transitioning Research into Operations: A View from Healthcare NASA Human Research Program Investigators' Workshop, Houston, TX, February 3-5, 2010. Sponsored by: NASA/National Space Biomedical Research Institute (NSBRI)

Lim, S.; Lane, B.F.; Moran, B.A.; Henderson, T.C.; Geisel, F.A. Model-Based Design and Implementation of Pointing and Tracking Systems: From Model to Code in One Step 33rd Guidance and Control Conference, Breckenridge, CO, February 6-10, 2010. Sponsored by: AAS

Lowry, N.C.; Mangoubi, R.S.; Desai, M.N.; Sammak, P.J. Nonparametric Segmentation and Classification of Small Size Irregularly Shaped Stem Cell Nuclei Using Adjustable Windowing 7th International Symposium on Biomedical Imaging: From Nano to Macro, Rotterdam, the Netherlands, April 14-17, 2010. Sponsored by: IEEE

Madison, R.W.; Xu, Y. Tactical Geospatial Intelligence from Full Motion Video Applied Imagery Pattern Recognition Workshop, Washington, D.C., October 13-15, 2010. Sponsored by: IEEE

Magee, R.J. Shock and Vibration Information Analysis Center (SAVIAC) Video 81st Shock and Vibration Symposium, Orlando, FL, October 24-28, 2010. Sponsored by: SAVIAC

Major, L.M.; Duda, K.R.; Zimpfer, D.J.; West, J.J. Approach to Addressing Human-Centered Technology Challenges for Future Space Exploration Space 2010 Conference, Anaheim, CA, August 30-September 3, 2010. Sponsored by: AIAA

Manolakos, S.Z.; Evans-Nguyen, T.G.; Postlethwaite, T.A. Low Temperature Plasma Sampling for Explosives Detection in a Handheld Prototype Chemical and Biological Defense Science and Technology Conference, Orlando, FL, November 15-19, 2010. Sponsored by: Defense Threat Reduction Agency (DTRA)

Marchant, C.C. Ares I Avionics Introduction AIAA Webinar, Huntsville, AL, February 11, 2010. Sponsored by: AIAA

Marchant, C.C. Ares I Avionics Introduction NASA/Army Systems and Software Engineering Forum, Huntsville, AL, May 11-12, 2010. Sponsored by: University of Alabama

Marinis, T.F.; Nercessian, B. Hermetic Sealing of Stainless Steel Packages by Seam Seal Welding 43rd International Symposium on Microelectronics, Raleigh, NC, October 31-November 4, 2010. Sponsored by: International Microelectronics and Packaging Society (IMAPS)

Mather, R.A. Development and Simulation of a 4-Processor Virtual Guidance System for the MARK 6 MOD 1 Program Missile Sciences Conference, Monterey, CA, November 16-18, 2010. Sponsored by: AIAA

Matlis, J. Application of Instruction Set Simulator Technology for Flight Software Development for the MARK 6 MOD 1 Program Missile Sciences Conference, Monterey, CA, November 16-18, 2010. Sponsored by: AIAA

Matranga, M.J. Draper Multichip Modules for Space Applications ChipSat Workshop, Providence, RI, February 18, 2010. Sponsored by: Brown University

McCall, A.A.; Swan, E.E.; Borenstein, J.T.; Sewell, W.F.; Kujawa, S.G.; McKenna, M.J. Drug Delivery for Treatment of Inner Ear Disease: Current State of Knowledge Ear & Hearing, Vol. 31, January 2010

McHugh, K.J.; Teynor, W.A.; Saint-Geniez, M.; Tao, S.L. High-Yield MEMS Technique to Fabricate Microneedles for Tissue Engineering Applications National Institute of Biomedical Imaging and Bioengineering Training Grantees Meeting, Bethesda, MD, June 24-25, 2010. Sponsored by: National Institutes of Health (NIH)

McHugh, J.; Tao, S.L.; Saint-Geniez, M. Template Fabrication of a Nanoporous Polycaprolactone Thin-Film for Retinal Tissue Engineering Materials Research Society (MRS) Fall Meeting, Boston, MA, November 29-December 3, 2010. Sponsored by: MRS

McLaughlin, B.L.; Wells, A.C.; Virtue, S.; Vidal-Puig, A.; Wilkinson, T.D.; Watson, C.J.E.; Robertson, P.A. Electrical and Optical Spectroscopy for Quantitative Screening of Hepatic Steatosis in Donor Livers Physics in Medicine and Biology, Vol. 55, No. 22, November 2010

Mescher, M.J.; Kim, E.S.; Fiering, J.O.; Holmboe, M.E.; Swan, E.E.; Sewell, W.F.; Kujawa, S.G.; McKenna, M.J.; Borenstein, J.T. Development of a Micropump for Dispensing Nanoliter-Scale Volumes of Concentrated Drug for Intracochlear Delivery 33rd Association for Research in Otolaryngology (ARO) Midwinter Meeting, Anaheim, CA, February 6-11, 2010. Sponsored by: ARO

Middleton, A.; Paschall II, S.C.; Cohanim, B.E. Small Lunar Lander/Hopper Performance Analysis Aerospace Conference, Big Sky, MT, March 6-13, 2010. Sponsored by: IEEE

Miotto, P.; Breger, L.S.; Mitchell, I.T.; Keller, B.; Rishikof, B. Designing and Validating Proximity Operations Rendezvous and Approach Trajectories for the Cygnus Mission Astrodynamics Specialist Conference, Toronto, Canada, August 2-5, 2010. Sponsored by: AAS/AIAA

Mitchell, M.L.; Werner, B.; Roy, N. Sensor Assignment for Collaborative Urban Navigation 35th Joint Navigation Conference, Orlando, FL, June 8-10, 2010. Sponsored by: JSDE

Mohiuddin, S.; Donna, J.I.; Axelrad, P.; Bradley, B. Improving Sensitivity, Time to First Fix, and Robustness of GPS Positioning by Combining Signals from Multiple Satellites 35th Joint Navigation Conference, Orlando, FL, June 8-10, 2010. Sponsored by: JSDE

Muterspaugh, M.W.; Lane, B.F.; Kulkarni, S.R.; Konacki, M.; Burke, B.F.; Colavita, M.M.; Shao, M.; Wiktorowicz, S.J.; Hartkopf, W.I.; O'Connell, J.; Williamson, M.; Fekel, F.C. The PHASES Differential Astrometry Data Archive: Parts I-V Astronomical Journal, AAS, Vol. 140, No. 6, December 2010

Nelson, E.D.; Irvine, J.M. Intelligent Management of Multiple Sensors for Enhanced Situational Awareness Applied Imagery Pattern Recognition Workshop, Washington, D.C., October 13-15, 2010. Sponsored by: IEEE

Nothnagel, S.L.; Bailey, Z.J.; Cunio, P.M.; Hoffman, J.; Cohanim, B.E.; Streetman, B.J. Development of a Cold Gas Spacecraft Emulator System for the TALARIS Hopper Space 2010 Conference, Anaheim, CA, August 30-September 3, 2010. Sponsored by: AIAA

Olthoff, C.T.; Cunio, P.M.; Hoffman, J.; Cohanim, B.E. Incorporation of Flexibility into the Avionics Subsystem for the TALARIS Small Advanced Prototype Vehicle Space 2010 Conference, Anaheim, CA, August 30-September 3, 2010. Sponsored by: AIAA

O'Melia, S.; Elbirt, A.J. Enhancing the Performance of Symmetric-Key Cryptography via Instruction Set Extensions IEEE Transactions on Very Large Scale Integration (VLSI) Systems, Vol. 18, No. 11, November 2010

Okerson, G.; Kang, N.; Ross, J.; Tetewsky, A.K.; Soltz, J.; Greenspan, R.L.; Anszperger, J.C.; Lozow, J.B.; Mitchell, M.R.; Vaughn, N.L.; O'Brien, C.P.; Graham, D.K. Qualitative and Quantitative Inter-Signal Correction Metrics for On Orbit GPS Satellites 35th Joint Navigation Conference, Orlando, FL, June 8-10, 2010. Sponsored by: JSDE

Perry, H.C.; Polizzotto, L.; Schwartz, J.L. Creative Path from Invention to Successful Transition Aerospace Conference, Big Sky, MT, March 6-13, 2010. Sponsored by: IEEE

Polizzotto, L. Creating Customer Value Through Innovation Technology & Innovation, Vol. 12, No. 1, January 2010

Putnam, Z.R.; Barton, G.H.; Neave, M.D. Entry Trajectory Design Methodology for Lunar Return Aerospace Conference, Big Sky, MT, March 6-13, 2010. Sponsored by: IEEE

Putnam, Z.R.; Neave, M.D.; Barton, G.H. PredGuid Entry Guidance for Orion Return from Low Earth Orbit Aerospace Conference, Big Sky, MT, March 6-13, 2010. Sponsored by: IEEE

Rachlin, Y.; McManus, M.F.; Yu, C.C.; Mangoubi, R.S. Outlier Robust Navigation Using L1 Minimization 35th Joint Navigation Conference, Orlando, FL, June 7-10, 2010. Sponsored by: JSDE

Roy, W.A.; Kwok, P.Y.; Chen, C.-J.; Racz, L.M. Thermal Management of a Novel iUHD-Technology-Based MCM IMAPS National Meeting, Palo Alto, CA, September 28-30, 2010. Sponsored by: IMAPS

Schaefer, M.L.; Wongravee, K.; Holmboe, M.E.; Heinrich, N.M.; Dixon, S.J.; Zeskind, J.E.; Kulaga, H.M.; Brereton, R.G.; Reed, R.R.; Trevejo, J.M. Mouse Urinary Biomarkers Provide Signatures of Maturation, Diet, Stress Level, and Diurnal Rhythm Chemical Senses, Vol. 35, No. 6, July 2010

Serna, F.J. Systems Engineering Considerations in Practicing Test and Evaluation 26th Annual National Test and Evaluation Conference, San Diego, CA, March 1-4, 2010. Sponsored by: National Defense Industrial Association (NDIA)

Sherman, P.G. Precision Northfinding INS with Low-Noise MEMS Inertial Sensors Joint Precision Azimuth Sensing Conference (JPASC), Las Vegas, NV, August 2-6, 2010

Sievers, A.; Zanetti, R.; Woffinden, D.C. Multiple Event Triggers in Linear Covariance Analysis for Spacecraft Rendezvous Guidance, Navigation, and Control Conference and Exhibit, Toronto, Canada, August 2-5, 2010. Sponsored by: AIAA

Silvestro, M. Time Synchronization in Closed-Loop GPS/INS Hardware-in-the-Loop Simulations 35th Joint Navigation Conference, Orlando, FL, June 7-10, 2010. Sponsored by: JSDE

Smith, B.R.; Kwok, P.Y.; Thompson, J.C.; Mueller, A.J.; Racz, L.M. Demonstration of a Novel Hybrid Silicon-Resin High-Density Interconnect (HDI) Substrate 60th Electronic Components and Technology Conference (ECTC), Las Vegas, NV, June 1-4, 2010. Sponsored by: IEEE Components, Packaging and Manufacturing Technology (CPMT) Society

Sodha, S.; Wall, K.A.; Redenti, S.; Klassen, H.; Young, M.; Tao, S.L. Microfabrication of a Three-Dimensional Polycaprolactone Thin-Film Scaffold for Retinal Progenitor Cell Encapsulation Journal of Biomaterials Science - Polymer Edition, Vol. 22, No. 4-6, January 2011

Stanwell, P.; Siddall, P.; Keshava, N.; Cocuzzo, D.C.; Ramadan, S.; Lin, A.; Herbert, D.; Craig, A.; Tran, Y.; Middleton, J.; Gautam, S.; Cousins, M.; Mountford, C. Neuro Magnetic Resonance Spectroscopy Using Wavelet Decomposition and Statistical Testing Identifies Biochemical Changes in People with Spinal Cord Injury and Pain Neuroimage, Vol. 53, No. 2, November 2010

Steedman, M.R.; Tao, S.L.; Klassen, H.; Desai, T.A. Enhanced Differentiation of Retinal Progenitor Cells Using Microfabricated Topographical Cues Biomedical Microdevices, Vol. 12, No. 3, June 2010

Steinfeldt, B.A.; Grant, M.J.; Matz, D.A.; Braun, R.D.; Barton, G.H. Guidance, Navigation, and Control System Performance Trades for Mars Pinpoint Landing Journal of Spacecraft and Rockets, AIAA, Vol. 47, No. 1, 2010

Steinfeldt, B.A.; Braun, R.D.; Paschall II, S.C. Guidance and Control Algorithm Robustness Baseline Indexing Guidance, Navigation, and Control Conference and Exhibit, Toronto, Canada, August 2-5, 2010. Sponsored by: AIAA

Streetman, B.J.; Peck, M.A. General Bang-Bang Control Method for Lorentz Augmented Orbits Journal of Spacecraft and Rockets, AIAA, Vol. 47, No. 3, May-June 2010

Streetman, B.J.; Johnson, M.C.; Kroehl, J.F. Generic Framework for Spacecraft GN&C Emulation: Performing a Lunar-Like Hop on the Earth Guidance, Navigation, and Control Conference and Exhibit, Toronto, Canada, August 2-5, 2010. Sponsored by: AIAA

Swan, E.E.; Borenstein, J.T.; Fiering, J.O.; Kim, E.S.; Mescher, M.J.; Murphy, B.; Tao, S.L.; Chen, Z.; Kujawa, S.G.; McKenna, M.J.; Sewell, W.F. Characterization of Reciprocating Flow Parameters for Inner Ear Drug Delivery 33rd Midwinter Meeting, Association for Research in Otolaryngology, Anaheim, CA, February 6-11, 2010. Sponsored by: ARO

Tamblyn, S.; Henry, J.R.; King, E.T. Model-Based Design and Testing Approach for Orion GN&C Flight Software Development Aerospace Conference, Big Sky, MT, March 6-13, 2010. Sponsored by: IEEE

Tao, S.L. Polycaprolactone Nanowires for Controlling Cell Behavior at the Biointerface Popat, K., ed., Nanotechnology in Tissue Engineering and Regenerative Medicine, Chapter 3, CRC Press, Taylor & Francis Group, Boca Raton, FL, November 22, 2010

Tepolt, G.B.; Mescher, M.J.; LeBlanc, J.; Lutwak, R.; Varghese, M. Hermetic Vacuum Sealing of MEMS Devices Containing Organic Components Photonics West-MOEMS-MEMS, San Francisco, CA, January 22-27, 2010. Sponsored by: SPIE

Torgerson, J.F.; Sherman, P.G.; Scudiere, J.D.; Tran, V.; Del Colliano, J.; Sokolowski, S.; Ganop, S. Collaborative Soldier Navigation Study 35th Joint Navigation Conference, Orlando, FL, June 7-10, 2010. Sponsored by: JSDE

Tucker, J.; Boydston, T.E.; Heffner, K. Closing the Level 4 Secure Computing Gap via Advanced MCM Technology Department of Defense Anti-Tamper Conference, Baltimore, MD, April 13-15, 2010. Sponsored by: DoD

Tucker, B.; Saint-Geniez, M.; Tao, S.L.; D'Amore, P.; Borenstein, J.T.; Herman, I.M.; Young, M. Tissue Engineering for the Treatment of AMD Expert Reviews in Ophthalmology, Vol. 5, No. 5, October 2010

Valonen, P.K.; Moutos, F.T.; Kusanagi, A.; Moretti, M.; Diekman, B.O.; Welter, J.F.; Caplan, A.I.; Guilak, F.; Freed, L.E. In Vitro Generation of Mechanically Functional Cartilage Grafts Based on Adult Human Stem Cells and 3D-Woven Poly(ε-caprolactone) Scaffolds Biomaterials, Vol. 31, January 2010

Varsanik, J.S.; Teynor, W.A.; LeBlanc, J.; Clark, H.A.; Krogmeier, J.; Yang, T.; Crozier, K.; Bernstein, J.J. Subwavelength Plasmonic Readout for Direct Linear Analysis of Optically Tagged DNA Photonics West-BIOS, San Francisco, CA, January 23-28, 2010. Sponsored by: SPIE

Wang, J.; Bettinger, C.J.; Langer, R.S.; Borenstein, J.T. Biodegradable Microfluidic Scaffolds for Tissue Engineering from Amino Alcohol-Based Poly(Ester Amide) Elastomers Organogenesis, Vol. 6, No. 4, 2010, pp. 1-5

Wen, H.-Y.; Duda, K.R.; Oman, C.M. Simulating Human-Automation Task Allocations for Space System Design Human Factors and Ergonomics Society Student Conference, New England Chapter, Boston, MA, October 22, 2010

Yoon, S.-H.; Cha, N.-G.; Lee, J.S.; Park, J.-G.; Carter, D.J.; Mead, J.L.; Barry, C.M.F. Effect of Processing Parameters, Antistiction Coatings, and Polymer Type when Injection Molding Microfeatures Polymer Engineering & Science, Vol. 50, No. 2, February 2010

Yoon, S.-H.; Lee, K.-H.; Palanisamy, P.; Lee, J.S.; Cha, N.-G.; Carter, D.J.; Mead, J.L.; Barry, C.M.F. Enhancement of Surface Replication by Gas Assisted Microinjection Moulding Plastics, Rubber and Composites, Vol. 39, No. 7, September 2010

Young, L.R.; Oman, C.M.; Stimpson, A.; Duda, K.R.; Clark, T. Flight Displays and Control Modes for Safe and Precise Lunar Landing 81st Annual Aerospace Medical Association Scientific Meeting, Phoenix, AZ, May 9-13, 2010. Sponsored by: ASMA

Young, L.R.; Clark, T.; Stimpson, A.; Duda, K.R.; Oman, C.M. Sensorimotor Controls and Displays for Safe and Precise Lunar Landing 61st International Astronautical Congress, Prague, Czech Republic, September 27-October 1, 2010. Sponsored by: International Astronautical Federation (IAF)

Zanetti, R. Multiplicative Residual Approach to Attitude Kalman Filtering with Unit-Vector Measurements Space Flight Mechanics Conference, San Diego, CA, February 14-17, 2010. Sponsored by: AAS and AIAA

Zanetti, R.; DeMars, K.J.; Bishop, R.H. On Underweighting Nonlinear Measurements Journal of Guidance, Control, and Dynamics, AIAA, Vol. 33, No. 5, September-October 2010, pp. 1670-1675

Patents Introduction

Draper Laboratory is well known for integrating diverse technical capabilities and technologies into innovative and creative solutions for problems of national concern. Draper encourages scientists and engineers to advance the application of science and technology, expand the functions of existing technologies, and create new ones. The disclosure of inventions is an important step in documenting these creative efforts and is required under Laboratory contracts (and by an agreement that all Draper employees sign).

Draper has an established patent policy and understands the value of patents in directing attention to individual accomplishments. Pursuing patent protection supports the Laboratory's strategic mission and recognizes its employees' valuable contributions to advancing the state of the art in their technical areas. An issued patent is also recognition by a critical third party (the U.S. Patent Office) of innovative work of which the inventor can be justly proud.

The Patent Committee typically recommends seeking patent protection for about 50 percent of the disclosures it receives. Millions of U.S. patents have been issued since the first numbered patent in 1836. From 1973 through December 31, 2010, 1,468 Draper patent disclosures were submitted to the Patent Committee; 757 of these were approved for further patent action. As of December 31, 2010, a total of 552 patents had been granted for inventions made by Draper personnel, 19 of them issued in calendar year 2010.

This year’s featured patent is: Systems and Methods for High Density Multi-Component Modules

The following pages present an overview of the technology covered in the patent and the official patent abstract issued by the U.S. Patent Office.

Systems and Methods for High Density Multi-Component Modules Scott A. Uhland, Seth M. Davis, Stanley R. Shanfield, Douglas W. White, and Livia M. Racz

U.S. Patent No. 7,727,806; Date Issued: June 1, 2010

Draper's patented i-UHD technology will enable Draper to take miniaturization to new levels for customers who demand highly capable systems with minimal size and power requirements. By removing all nonessential elements and stacking layers of components buried in silicon wafers on top of each other, Draper can fit an entire system into a package the size of a Scrabble tile. This work is close to transitioning into production for two sponsors, and the extreme miniaturization could be an asset for other customers in fields ranging from national security to biomedical technology.

Scott A. Uhland is a Member of the Technical Staff at the Palo Alto Research Center (PARC). Within the Electronic Materials and Devices Laboratory, Dr. Uhland is developing microfluidically actuated systems for a variety of commercial applications, ranging from devices for hormone therapy to optical displays. Prior to joining PARC, he was a Senior Member of the Technical Staff at Draper Laboratory, where he was the Bioengineering Group Leader and oversaw the development of a wide variety of technologies and programs, including biological sensors, tissue engineering, and drug delivery. He was also a Principal Investigator (PI) at Draper for the research and development of electronic packaging technologies that push component densities to the theoretical limit. From 2000 to 2004, he was one of the initial PIs at MicroCHIPS, Inc., where he pioneered the use of MEMS technology in the medical field, particularly in the development of innovative drug delivery and sensing systems. He has authored more than 35 publications, reviews, and patents, and holds more than 60 pending U.S. patent applications. Dr. Uhland received a B.S. in Materials Science and Engineering (summa cum laude) from Rutgers University, where he served as President of the Tau Beta Pi Honor Society, and a Ph.D. in Materials Science and Engineering from MIT.

Seth M. Davis is currently the Associate Director for Communication, Navigation, and Miniaturization in the Special Programs Office. He is responsible for business development, strategic planning, and internal technology investment for first-of-a-kind special communications systems, miniaturized navigation systems, and advanced tagging, tracking, and locating systems. His technical interests focus on the ultra-miniaturization of complex, low-power electronic systems for sensing, signal processing, and RF communications. Prior to his current position, he was Division Leader of the Electronics Division. Mr. Davis received B.S. and M.S. degrees in Electrical Engineering from MIT and Northeastern University, respectively.

Stanley Shanfield is a Distinguished Member of the Technical Staff and has recently been a Technical Director for a variety of intelligence community programs. He led a team that developed a miniature, low-power, stable frequency source that maintains stability to better than 0.1 part per billion over several seconds, suitable for high-performance digital transmitters and receivers. He also led a team that developed and demonstrated an
