Chapter 11: Impacts on Engineering Research and Technology

Previous chapters of this History described the evolution of the Engineering Research Centers (ERC) Program at the National Science Foundation (NSF) from the standpoint of research, education, industrial interaction, and center and Program leadership. The foregoing Chapter 10 described the Program’s impacts on academic engineering. This chapter provides an overview of selected achievements of the ERCs in systems and technology development and of some of their important impacts in industry and on the U.S. economy, including such notable advances as the portable defibrillator, the MPEG video file format, wearable computers, the prosthetic retina, and new radars for tornado detection.

The introduction to the 1984 National Academy of Engineering (NAE) report that laid out Guidelines for ERCs began by asserting two purposes that underlie ERCs:

  1. Enhance the capacities of engineering research universities to conduct cross-disciplinary research on problems of industrial importance.
  2. Lessen one of several weaknesses in engineering education: an inadequate understanding by students of engineering practice; that is, the understanding of how engineering knowledge is converted by industry into societal goods and services. [1]

The opening section of a report of a symposium held in early 1990 to review the ERC Program put it this way: “Indeed, a major goal of the ERC Program is to facilitate the more efficient conversion of advances in fundamental research in universities into high-quality, competitive products and processes in industry.”[2] The connecting bridge between fundamental research and commercialized products is innovation—and that space, including the type of education needed to achieve it and produce graduates who could “hit the ground running,” was where ERCs would prove their worth. As NAE panel Chairman W. Dale Compton said in his preface to the Guidelines report, “If the Foundation embraces the concept with enthusiasm and supports it with zeal, the program could contribute well beyond expectations.”[3] NSF did just that, and so has the ERC Program.

11-A    Innovative Systems Platforms and Their Impacts

One of the primary aspects of the ERC Program’s mission was and is to make it possible—and acceptable—for academic researchers to pursue research on large-scale, next-generation engineered systems. Such systems are the core of each ERC’s vision. This work requires a coordinated, strategically planned team approach, carried out in the context of industrial perspectives and needs—something that was highly innovative and thus controversial on university campuses, especially in the early years of the Program. This section will describe examples of such systems in several broad areas of technology development.

11-A(a) Flexible/Intelligent Manufacturing

When the ERC Program was initiated, there was very little academic engineering research focused on manufacturing processes and systems, outside of chemical engineering processes for the manufacture of pharmaceuticals, oil and gas products, and food products. There had been some NSF support for robotics per se from the Research Applied to National Needs (RANN) Program in the 1970s and some follow-on support of robotics in manufacturing after the demise of RANN, but it was at the project level.[4] With the onset of the ERC Program and its goal to drive research strategically from the systems level, an ERC provided enough time, and some financial support, to build systems-level testbed platforms; larger-scale testbeds, however, would require additional support. While there were 14 ERCs that focused on manufacturing systems between 1985 and 2014, this section will focus on a few that were especially successful at creating flexible and intelligent manufacturing systems platforms for the manufacture of physical parts and products, as opposed to chemical and biological products.

Quick Turnaround Cell: The first was at the Purdue ERC for Intelligent Manufacturing Systems (ERC Class of 1985[5]), where one of the first systems-level testbeds, the Quick Turnaround Cell, was developed to enable more flexible manufacturing for industry, so that design, cutting, and quality inspection could be integrated to support rapid production of small-batch and one-of-a-kind machined parts. See Section 11-B(c), Advanced Manufacturing, for its eventual use by the Army Missile Command to design and machine prototype parts and its widespread impact on computer-aided design programs. In addition, a search for “Quick Turnaround CNC Machining” will show the wide variety of applications of this concept in machining and manufacturing today.

Rapid Prototype System: The next important manufacturing testbed developed at an ERC was the Rapid Prototype System, developed in collaboration with General Motors (GM) by the Engineering Design Research Center (EDRC) (Class of 1986) at Carnegie Mellon University. GM asked the EDRC to collaborate on a technology that would be directly applied to a manufacturing issue at its Inland Fisher Guide Division. This division made lights, plastic trim, and other non-structural components. These parts changed from model year to model year, so the Division wanted to be able to take a product from initial concept to implementation in 30-40 weeks, where it took 60-70 weeks at that time. To support design engineers at GM, some of whom might be inexperienced, the EDRC developed a knowledge-based system that could be connected to GM’s own design system, prompting the design engineers to recognize when design rules had been violated. Working on this system on the factory floor with production engineers and technicians, the EDRC faculty and students led a completely new approach to tooling, as an alternative to the conventional machining of the time. They developed a rapid tooling technology that used metal spraying to produce prototype tooling. The outcome was an order-of-magnitude reduction in the cost and time of making tools compared with conventional machining technology.[6]
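
The flavor of such a knowledge-based design check can be conveyed with a minimal sketch; the rules and part attributes below are hypothetical illustrations, not GM’s actual design rules or the EDRC system’s implementation:

```python
# Minimal sketch of a rule-based design check, in the spirit of the
# EDRC/GM knowledge-based system. The rules and part attributes here
# are hypothetical illustrations, not the actual GM design rules.

from dataclasses import dataclass

@dataclass
class PlasticPart:
    wall_thickness_mm: float
    draft_angle_deg: float
    rib_to_wall_ratio: float

# Each rule is a (description, predicate) pair; a violated predicate
# triggers a prompt to the design engineer.
RULES = [
    ("Wall thickness should be at least 1.5 mm for injection molding",
     lambda p: p.wall_thickness_mm >= 1.5),
    ("Draft angle should be at least 1 degree so the part releases from the mold",
     lambda p: p.draft_angle_deg >= 1.0),
    ("Rib thickness should not exceed 60% of wall thickness (sink marks)",
     lambda p: p.rib_to_wall_ratio <= 0.6),
]

def check_design(part: PlasticPart) -> list[str]:
    """Return the descriptions of all violated design rules."""
    return [desc for desc, ok in RULES if not ok(part)]

if __name__ == "__main__":
    part = PlasticPart(wall_thickness_mm=1.2, draft_angle_deg=0.5,
                       rib_to_wall_ratio=0.7)
    for violation in check_design(part):
        print("Design-rule violation:", violation)
```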

Reconfigurable Manufacturing: The ERC for Reconfigurable Manufacturing Systems (RMS), established at the University of Michigan (Class of 1996), envisioned building manufacturing systems platforms and machines with the exact capacity and functionality needed, and changing them rapidly, exactly when needed, not months or years later. The visionary leader of the ERC, Yoram Koren, and his leadership team weren’t sure they could do this in the ten-year lifespan of an ERC. However, through close collaboration with visionary engineering leaders in industry, in less than ten years the RMS reconfigurable inspection technology was already integrated into engine production lines of the domestic auto industry, and the data reduction methodologies were widely applied in manufacturing plants.

The RMS team defined three principles that facilitate achieving cost-effective, rapid reconfiguration:

(1) adjustable production resources to respond to imminent market changes;

(2) functionality rapidly converted to the production of new parts; and

(3) adjustable, rapid response to unexpected equipment failures.[7]

The challenge was to develop mathematical tools to support reconfigurable manufacturing and then to design and build “three full-size prototypes of original machines that revolutionized the state-of-art in production engineering, particularly in real-time engine inspection at the line-speed. Inspection machines were implemented, which improved productivity in 69 production lines in 15 factories in the U.S. and Canada (e.g., Chrysler Mack Avenue Engine Plant, Ford Windsor Engine Plant, GM Flint Engine Plant, Boeing Everett Operations in Seattle, and Cummins engine plant, Columbus, Indiana.)”[8] The resulting technology is explained in depth in Section 11-B(c) of this chapter.

Nanomanufacturing Platforms: Physical parts come in all scales, and at present they are sometimes produced at the nanoscale, which poses challenges as nanoscale devices are integrated with other devices in a systems testbed. The ERC for Nanomanufacturing Systems for Mobile Computing and Mobile Energy Technologies (NASCENT) was established at the University of Texas at Austin in 2012. NASCENT’s vision is to create high-throughput, reliable, and versatile nanomanufacturing process systems. The novel processes and systems resulting from this center’s research contribute to the development of critical nanomanufacturing infrastructure by identifying new and different manufacturing equipment and process systems for high-speed, low-defect, reliable nanomanufacturing that are not currently available. NASCENT is focused on developing nanosystems platforms such as: Spin Transfer Torque Random Access Memory (STT-RAM) with data densities exceeding a terabit/sq. in.; high-speed FETs on flex substrates that will provide bulk Si CMOS-like transistor performance at flat panel display-like costs; and rollable lithium batteries with high-energy-density Si nanowire anodes. Its systems platforms include: (1) Roll-to-Roll (R2R) Nanofabrication Testbed—2D and 3D nanosculpting on R2R flexible substrates, high-speed solvent-less R2R graphene transfer, R2R thin-film coatings with nanoscale thicknesses, R2R printable nanomaterials, and in-line nanometrology; and (2) Flex Crystalline Nanofabrication Testbed—flexible crystalline substrates exfoliated from bulk wafers, wafer-scale 2D and 3D nanosculpting with unprecedented control of pattern size and shape, and in-line nanometrology.

11-A(b) Nanosystems

Beyond nanomanufacturing, by 2011 there had been more than a decade of sustained funding for nanoscale research that was enriching the fundamental knowledge base about the characteristics, behavior, and functionality of a wide range of particles at the nanoscale. Some work had begun on how they could be combined into nanoscale devices and some new products using these particles had emerged, such as environmental sensors and micromechanical motors. However, at that time there had been insufficient investment in long-term research to explore how these nanoscale devices could be combined into components and systems.

To explore this frontier and determine if it could become a new system platform for innovation, the ERC Program joined with the Nanoscale Science and Engineering Program to support new nanosystems ERCs. “The Nanosystems ERCs will build on more than a decade of investment and discoveries in fundamental nanoscale science and engineering,” said Thomas Peterson, the NSF’s Assistant Director for Engineering at the time. “Our understanding of nanoscale phenomena, materials and devices has progressed to a point where we can make significant strides in nanoscale components, systems and manufacturing.”[9] Three new nanosystems ERCs were funded in 2012:

  • The NSF Nanosystems Engineering Research Center for Advanced Self-Powered Systems of Sensors and Technology (ASSIST), led by Professor Veena Misra, North Carolina State University, is creating self-powered monitoring systems that are worn on the wrist to simultaneously monitor a person’s environment and health, in search of connections between exposure to pollutants and chronic disease.
  • The NSF Nanosystems Engineering Research Center for Translational Applications of Nanoscale Multiferroic Systems (TANMS), led by Greg Carman of the University of California, Los Angeles, seeks to reduce the size and increase the efficiency of components and systems whose functions rely on the manipulation of either magnetic or electromagnetic fields.
  • The NSF Nanosystems Engineering Research Center for Nanomanufacturing Systems for Mobile Computing and Mobile Energy Technologies (NASCENT), led by Roger Bonnecaze and S.V. Sreenivasan of the University of Texas at Austin, is pursuing high-throughput, reliable, and versatile nanomanufacturing process systems, and is demonstrating them through the manufacture of mobile nanodevices.

Curious about the actual barriers to integrating nanoscale devices with larger-scale components and systems while preparing this History, Preston asked the Directors of these three nanosystems ERCs, which were approaching their sixth year of operation at the time (2018), to give her some input regarding those challenges. Greg Carman’s response is a good example of the issues his ERC has addressed in order to reach the systems level. As background, the then-current mission statement for TANMS is to “develop a fundamentally new approach coupling electricity to magnetism using engineered nanoscale multiferroic elements to enable increased energy efficiency, reduced physical size, and increased power output in consumer electronics. This new nanoscale multiferroic approach overcomes the scaling limitations present in the two-century-old mechanism to control magnetism that was originally discovered by Oersted in 1820. TANMS’s goals are to translate its research discoveries on nanoscale multiferroics to industry while seamlessly integrating a cradle-to-career education philosophy involving all of its students and future engineers in unique research and entrepreneurial experiences.”[10] In December 2018, Carman laid out three key barriers TANMS had to address to begin to reach its systems goals:

1. Increasing the voltage control of magnetism constant in materials that are presently used in magnetic memory. The TANMS team used interface materials to increase this value by an order of magnitude. The most recent results suggest that it can be increased by another two orders of magnitude, increasing the potential that this new approach may be adopted by the community in the near future. This particular Voltage Control of Magnetic Anisotropy (VCMA) is only present at length scales on the order of the “exchange length,” ~10 nm. The TANMS team discovered that adding an additional atomic layer of a different material modifies the electronic interaction between the materials and dramatically improves the coupling. The most recent analysis indicates that the increase is on the order of 100x, and once again this effect is only present at small length scales and is considered inconsequential or absent at large length scales.

2. Developing new coupled models that allow the integration of piezoelectric with magnetostrictive materials to launch electromagnetic waves from electrically small antennas. TANMS is one of the few groups in the world to have a numerical multi-physics code that combines the elastodynamics equations developed by Newton with the electromagnetic equations developed by Maxwell, and finally with the micromagnetic equations developed by Landau, Lifshitz, and Gilbert (sketched in standard form after this list).

3. Electromagnetic motors presently do not exist at the small scale due to resistive losses; therefore, this is a speculative market for which a clear application had not yet been articulated. TANMS, working with bioengineering researchers, has found that individual cell selection for personalized medicine represents an important problem that could substantially benefit from magnetic control. The biology community has been using magnetic particles attached to cells for quite some time, but has not been able to solve the problem of individually selecting the superior cells necessary for personalized medicine. Researchers in this community tag cells with different proteins/enzymes attached to magnetic particles; these proteins or enzymes produce different luminosities, so the cells themselves can be interrogated optically through their luminescence. Importantly, certain cells have “superior” capabilities for fighting specific diseases (e.g., cancer); the goal is to select these superior cells out of a large population, culture them, and reintroduce them into the body. The biology community has tried to select superior cells using optical tweezers, but has largely been unsuccessful. Recently, TANMS demonstrated that its electromagnetic motor concept is ideally suited to this problem, with demonstrations of cell capture.
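
For reference, the three equation sets named in item 2 can be written in their standard textbook forms. The sketch below shows only the structure of the coupling (which enters through the magnetoelastic contribution to the effective field and the magnetostrictive contribution to the stress); it is not TANMS’s specific formulation:

```latex
% Standard forms of the three coupled equation sets (illustrative sketch):
% elastodynamics (Newton), electromagnetics (Maxwell), micromagnetics (LLG).
\begin{align}
  \rho\,\frac{\partial^{2}\mathbf{u}}{\partial t^{2}}
    &= \nabla\cdot\boldsymbol{\sigma}
    && \text{elastodynamics} \\
  \nabla\times\mathbf{E}
    = -\frac{\partial\mathbf{B}}{\partial t},
  \qquad
  \nabla\times\mathbf{H}
    &= \mathbf{J} + \frac{\partial\mathbf{D}}{\partial t}
    && \text{Maxwell} \\
  \frac{\partial\mathbf{m}}{\partial t}
    &= -\gamma\,\mathbf{m}\times\mathbf{H}_{\mathrm{eff}}
       + \alpha\,\mathbf{m}\times\frac{\partial\mathbf{m}}{\partial t}
    && \text{Landau--Lifshitz--Gilbert}
\end{align}
```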

11-A(c) Collaborative Adaptive Sensing Systems

The Collaborative Adaptive Sensing Systems (CASA) ERC (Class of 2003), headquartered at the University of Massachusetts (UMass), Amherst, is a multi-university partnership among engineers at UMass and Colorado State University, atmospheric scientists at the University of Oklahoma, and social scientists at the University of Delaware. From its inception, CASA was committed to developing a new kind of sensing system to be deployed to improve forecasting of and response to high-impact weather events, which are transient and occur at heights and scales that are difficult to monitor with the existing weather infrastructure. Examples of such weather events include tornados, microbursts, flash floods, boundaries that initiate convection, and lofted radiological, chemical, and biological agents. At the time of the ERC’s award, monitoring practices would likely either miss these events because they form in blind regions of present-day sensors, or record them poorly because they occur when the sensor is monitoring other regions or at distances where resolution is degraded. (See Figure 11-1.) In addition, each sensor of standard operational Doppler radar systems (WSR‑88D in the U.S.) was independent of all others; hence, they were not able to coordinate in a manner that could result in optimal use. CASA was designed to prove the efficacy of networked arrays of low‑cost, closely spaced advanced radars to adaptively sense severe weather conditions (e.g., tornados) and use relevant observations in atmospheric numerical forecast models.

Figure 11-1: Existing weather radars leave gaps in coverage at low altitudes. (Source: CASA)

The vision of CASA was to establish a revolutionary new paradigm in which a system of distributed, collaborative, and adaptive sensors (DCAS) is deployed to overcome fundamental limitations of current approaches to the detection of weather hazards. (See Figure 11-2.) Success of the Center would depend upon collaborative research spanning the computational, atmospheric, and social sciences and electrical engineering. CASA had to solve four fundamental problems: radar engineering, to develop advanced, small, and inexpensive radar systems; networking, to perform real‑time optimal command and control of the system; atmospheric sciences, to develop cutting-edge algorithms and models for detection and forecasting of weather hazards; and social sciences, to fully understand end-user (e.g., weather forecasters, emergency managers) needs and develop optimal protocols for system control.

Figure 11-2: CASA’s new system of distributed, collaborative, and adaptive sensors (DCAS) improves weather hazard detection. (Source: CASA)

All of this knowledge was driven by the need to develop real-time testbeds deployed in the field to predict and detect tornados and issue warnings to emergency responders with accurate and timely data. The impacts of the ERC are multiple and include not only improved weather forecasting, prediction, and warnings, but also development of new technologies to enable construction of low-cost radars, electronic beam-steering, and network interfaces. (See Ch. 5, Section 5-D(a)ii for further discussion of CASA’s interface with DCAS system end users.)

CASA’s major systems-level testbed was perhaps the most complex large-scale testbed launched by an ERC in a real-time environment. The CASA testbed covered a 7,000 square km region in southwestern Oklahoma that receives an average of four tornado warnings and 53 thunderstorm warnings per year. The network comprised four small radars spaced closely enough together to overcome Earth-curvature blockage and allow users to directly view the lower atmosphere beneath the coverage of the current national radar network. The radars operated in a closed-loop configuration, enabling the resources of the system to be preferentially allocated toward particular sub-regions of the atmosphere in response to the changing weather itself. This testbed was “end-to-end,” in that critical and substantial end-user populations were involved in the development of, and research trials conducted with, the system. The system implemented a policy mechanism in its resource allocation calculation that preferentially allocated resources to different user populations so as to maximize the overall utility of the information collected. The network became operational in 2007 and remained in near-continuous operation for extended periods until 2011, allowing research and operational users to directly view the lower atmosphere with high-resolution observations.
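
A minimal sketch of this kind of utility-weighted allocation follows. The user weights, sectors, and scoring function are hypothetical illustrations; CASA’s actual closed-loop optimization was far more sophisticated:

```python
# Minimal sketch of utility-weighted scan-sector allocation in a
# closed-loop radar network, in the spirit of CASA's DCAS policy
# mechanism. Sector data, user weights, and scoring are hypothetical.

# Relative priority of each end-user population.
USER_WEIGHTS = {"emergency_manager": 0.5, "forecaster": 0.3, "researcher": 0.2}

# Candidate atmospheric sub-regions, with each user population's
# utility of scanning that sub-region during the next update cycle.
sectors = {
    "sector_A": {"emergency_manager": 0.9, "forecaster": 0.6, "researcher": 0.2},
    "sector_B": {"emergency_manager": 0.1, "forecaster": 0.8, "researcher": 0.9},
    "sector_C": {"emergency_manager": 0.4, "forecaster": 0.3, "researcher": 0.1},
}

def utility(per_user: dict[str, float]) -> float:
    """Overall utility of scanning a sector = weighted sum over users."""
    return sum(USER_WEIGHTS[u] * v for u, v in per_user.items())

def allocate(sectors: dict, beams_available: int) -> list[str]:
    """Point the available radar beams at the highest-utility sectors."""
    ranked = sorted(sectors, key=lambda s: utility(sectors[s]), reverse=True)
    return ranked[:beams_available]

if __name__ == "__main__":
    print(allocate(sectors, beams_available=2))  # ['sector_A', 'sector_B']
```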

The ERC pointed to significant impacts in its Final Report in 2013:

CASA’s research using the IP1 testbed demonstrated a new dimension to weather observing that leads to improved characterization of storms and the potential for better forecasting and improved warning and response to tornadoes and other hazards. During a severe thunderstorm in southwest Oklahoma in June 2011, a tornado struck the town of Newcastle where the IP1 radar network was operating and was providing real-time storm data to local emergency managers. The trajectory of the tornado was changing too rapidly for it to be accurately tracked by the nearest NEXRAD National Doppler radar, but the CASA network was able to resolve the rapid motion and provide that information to the emergency managers to better warn the public. The Oklahoma City Journal-Record reported on July 1, 2011, ‘The data from a new radar system being tested in Newcastle was so precise that refugees from the storm were able to time the closing of the town’s public shelter down to the last minute.’ City Manager Nick Nazar said: ‘The opportunity to use this advanced technology was very helpful and probably saved lives. It was literally up to the minute and it made a difference.’ As a result of the publications and presentations on this deployment, the 2008 National Research Council report on Observing Weather and Climate from the Ground Up has recommended that CASA-type technology be deployed in future radar systems, writing, ‘Emerging technologies for distributed-collaborative adaptive sensing should be employed by observing networks, especially scanning remote sensors such as radars and lidars.’[11]

As CASA reported, its “research using this testbed demonstrated a new dimension to weather observing, improved characterization of storms, better forecasting, and improved warning and response to tornadoes and other hazards. In trials conducted during the 2007-2011 Oklahoma storm seasons, CASA team members gathered data that demonstrated substantive quantitative improvements relative to the state of the art, including a 5x increase in resolution and update times compared to today’s NEXRAD system, as well as fundamentally new capabilities such as visibility down to 200 m altitude and multi-Doppler observations for estimating the atmospheric wind vector field. In a case-study assessment conducted with National Weather Service forecasters, CASA participants documented the ability of these users to reduce their estimates of surface winds by 31% when they are working with CASA data, compared to when they didn’t have these data.”[12]

As CASA neared graduation, it shifted its major testbed implementation from tornadoes to severe storms. Brenda Philips, the co-Director of CASA post-graduation, leads one of CASA’s principal sustaining activities. She contributed the accompanying case study to this History in 2018.

11-A(d) Neuroengineering Systems

Two then-new fields, biotechnology and computational neuroscience, were separately at the forefront of major advances in the last decades of the 20th century. Their confluence in the first two decades of the 21st century has catalyzed some of the most prominent new advances in science and engineering. Among the most promising new areas is the rapidly developing field of neuroengineering. This interdisciplinary research area encompasses the development of concepts, algorithms, and devices that are used to assist, understand, and interact with neural systems. It comprises fundamental, experimental, computational, theoretical, and quantitative research aimed at furthering our ability to understand and augment brain and neural function in both health and disease.

Like the other ERC systems platforms just described, neuroengineering systems serve as a platform for innovation in several ERCs, usually in the form of testbeds aimed at the development of systems to aid people with a variety of neural disorders in reducing the impact of the disorders on their lives.

Neuroprosthetics. The earliest ERC to pursue research in neuroengineering was the Center for Neuromorphic Systems Engineering (CNSE), established at Caltech in 1995. The goal of the CNSE was “to develop the technology infrastructure for endowing the machines of the next century with the senses of vision, touch, and olfaction which mimic or improve upon human sensory systems.” One of the Center’s two main testbeds focused on neuroprosthetics, employing a unique cognitive approach: decoding the goals, intentions, and cognitive state of the paralyzed patient, leading to the implementation of real-time control of a robotic arm through a brain/computer interface, or probe.

Cochlear Implant. The ERC for Wireless Integrated MicroSystems (WIMS) at the University of Michigan (Class of 2000) also focused on advancing neural prosthetics—specifically, a cochlear implant. The cochlear microsystem for the profoundly deaf contained a custom microcontroller, a digital signal processor that executed speech-processing algorithms, a wireless chip that derived power from an external radio-frequency carrier and provided bidirectional data transfer, and an ultra-flexible thin-film electrode array that could be inserted deep within the cochlea of the inner ear.
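
The signal path of such a speech processor can be illustrated with a minimal filterbank/envelope sketch of the kind commonly used in cochlear implant processing (e.g., continuous interleaved sampling). This is a generic illustration, not the WIMS DSP’s actual algorithm; the sample rate, band count, and compression are assumed values:

```python
# Minimal sketch of a filterbank/envelope speech-processing strategy of
# the kind used in cochlear implants. Generic illustration only; not
# the WIMS chip's algorithm. Parameters are assumed values.

import numpy as np
from scipy.signal import butter, sosfilt, hilbert

FS = 16_000          # audio sample rate (Hz)
N_ELECTRODES = 8     # one analysis band per electrode site

def band_edges(lo=200.0, hi=7000.0, n=N_ELECTRODES):
    """Logarithmically spaced band edges, mimicking cochlear tonotopy."""
    return np.logspace(np.log10(lo), np.log10(hi), n + 1)

def electrode_envelopes(audio: np.ndarray) -> np.ndarray:
    """Band-pass each channel, extract its envelope, and compress it.

    Returns an (N_ELECTRODES, n_samples) array of stimulation levels.
    """
    edges = band_edges()
    levels = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        sos = butter(4, [lo, hi], btype="bandpass", fs=FS, output="sos")
        band = sosfilt(sos, audio)
        envelope = np.abs(hilbert(band))          # instantaneous envelope
        levels.append(np.log1p(10 * envelope))    # crude amplitude compression
    return np.array(levels)

if __name__ == "__main__":
    t = np.arange(FS) / FS                        # one second of test audio
    audio = np.sin(2 * np.pi * 440 * t)           # a 440-Hz tone
    print(electrode_envelopes(audio).shape)       # (8, 16000)
```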

Biomimetic Systems. Two more examples of ERC work in neuroengineering are from the Biomimetic MicroElectronic Systems (BMES) ERC at USC (Class of 2003): the Argus II retinal prosthesis developed by Center Director Mark Humayun and his team, and the efforts by an interdisciplinary team led by Professor Theodore Berger to restore higher cognitive functions that are lost as a result of damage (stroke, head trauma, epilepsy) or degenerative disease (dementia, Alzheimer’s disease, etc.) by focusing on long-term memory formation.

Neuroplasticity. A more recent example is the ERC for Sensorimotor Neural Engineering (CSNE)[13] at the University of Washington (Class of 2011), with two testbeds aimed at engineering neuroplasticity in the damaged brain and spinal cord.

These examples and others are described in more detail in the research Chapter 5, Section 5-D(b)ii, the “Special Topics” subsection on Neuromorphic Systems Engineering.

11-B    Technology/Innovation Achievements and Impacts

The pursuit of a large-scale system vision, as described in the previous section, involves a coordinated approach to fundamental research and the advancement of technology that underlies and enables the successful realization of the system. In the process, smaller-scale innovations and inventions are often achieved that are significant and highly useful advances in and of themselves. The enabling technology developed at ERCs is often licensed to their industrial members, who then attempt to take the advancement further into commercialization. This technology transfer is a key element of the mission of government-funded ERCs, in which public funding is used to bring social and economic benefits to the Nation and its people.

This section will present just some of the hundreds of important achievements in technology development and innovation attained by the ERCs over the past 30-plus years in a wide range of fields of engineering and technology.

11-B(a) Bioengineering

Biotechnology Process Engineering Center (BPEC), Massachusetts Institute of Technology, Class of 1985

Bioprocess Technologies: Advances in mammalian cell bioprocess technology and protein therapeutics made by BPEC enabled the development of a wide range of new pharmaceuticals. In its first ten years as an ERC (BPEC I),[14] these advances contributed to the following major impacts on its industrial partners:

  • The copyrighted BioDesigner software for bioprocess simulation, which enabled the efficient design and synthesis of bioprocesses. The software was licensed to a start-up company, Intelligen, and is used presently by biotechnology companies worldwide for bioprocess simulation, as well as in universities for course teaching.
  • Rational medium design based on fundamental principles of stoichiometry, biochemistry, and metabolism to increase cell viability and prolong cell culture times in animal cell culture systems, increasing the product concentration (see the sketch after this list). Many companies now employ this approach in the manufacture of mammalian cell products.
  • Methodologies for the characterization of glycoprotein quality, with special emphasis on the sialic acid content of therapeutic glycoproteins, which demonstrated a means to increase sialic acid content in recombinant glycoproteins, thus helping industry maintain protein quality during manufacturing.
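
As a rough illustration of the stoichiometric reasoning behind rational medium design, the sketch below sizes each nutrient to the cumulative demand implied by a target cell density. All yield numbers are hypothetical placeholders, not BPEC’s measured values:

```python
# Minimal sketch of stoichiometry-based medium design: size each
# nutrient to the cumulative demand implied by the target cell density.
# The per-cell demand figures are rough hypothetical placeholders,
# not BPEC's measured values.

TARGET_CELLS_PER_L = 5e9      # e.g., 5e6 cells/mL at end of a batch run

# Hypothetical cumulative demand per cell produced (pmol/cell).
DEMAND_PMOL_PER_CELL = {
    "glucose":   5.0,
    "glutamine": 1.0,
}

SAFETY_FACTOR = 1.2           # margin above the stoichiometric minimum

def medium_mM(target_cells_per_l: float) -> dict[str, float]:
    """Required initial concentration of each nutrient, in mM."""
    return {
        # pmol -> mol (1e-12), scale by cells/L, mol/L -> mmol/L (1e3)
        nutrient: pmol * 1e-12 * target_cells_per_l * 1e3 * SAFETY_FACTOR
        for nutrient, pmol in DEMAND_PMOL_PER_CELL.items()
    }

if __name__ == "__main__":
    for nutrient, mM in medium_mM(TARGET_CELLS_PER_L).items():
        print(f"{nutrient}: {mM:.1f} mM")   # glucose: 30.0 mM, glutamine: 6.0 mM
```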

Center for Emerging Cardiovascular Technologies (ERC/ECT), Duke University, Class of 1987

Implantable Defibrillators: The research in antiarrhythmic systems at the ERC for Emerging Cardiovascular Technologies (ECT) at Duke University, which graduated in 1996, was aimed toward developing high‑tech devices to halt or prevent ventricular fibrillation, the primary cause of sudden cardiac death. About 400,000 people succumb to sudden cardiac death annually in the United States. The ERC judged in the late ‘80s that if only 10% of these individuals could be identified to be at risk and have devices implanted, the potential U.S. market would be close to a billion dollars per year and the international market several times larger.

Two of the ERC/ECT’s major research breakthroughs in antiarrhythmic systems—improved electrodes and a novel understanding of biphasic waveforms, which led to biphasic waveform circuitry—were transferred to the implantable defibrillator industry. Both of these developments reduced the energy needed to defibrillate. This single improvement resulted in five advantages over previous implantable defibrillator technology: (1) reduced tissue damage; (2) reduced device size, allowing for easier implantation; (3) reduced time to charge the device, thus decreasing the time the body is without blood flow during the arrhythmia; (4) extended battery life; and (5) a wider range of patients treatable with implantable defibrillators.
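
The biphasic concept itself can be illustrated with a short sketch: one common realization is a truncated capacitor discharge whose polarity is reversed partway through. The component values and phase timings below are generic illustrations, not the ERC/ECT’s circuit design:

```python
# Generic sketch of a biphasic truncated-exponential defibrillation
# waveform: a capacitor discharge whose polarity is reversed partway
# through. Component values are illustrative, not the ERC/ECT design.

import numpy as np

V0 = 750.0            # initial capacitor voltage (V)
C = 140e-6            # capacitance (F)
R = 50.0              # transthoracic/lead resistance (ohms)
T1, T2 = 6e-3, 4e-3   # first- and second-phase durations (s)

def biphasic(t: np.ndarray) -> np.ndarray:
    """Voltage delivered to tissue at time t (single RC discharge)."""
    v = V0 * np.exp(-t / (R * C))          # monotonic capacitor decay
    phase2 = t > T1                        # polarity reversal at T1
    v[phase2] *= -1.0
    return np.where(t <= T1 + T2, v, 0.0)  # truncate after phase 2

if __name__ == "__main__":
    t = np.linspace(0, 12e-3, 7)           # 0 to 12 ms in 2-ms steps
    print(np.round(biphasic(t), 1))        # positive, then negative, then 0
```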

Biphasic waveforms have been adopted by the implantable defibrillator industry. Two Center member companies, Intermedics and Ventritex, working with ERC/ECT researchers, licensed the Center’s technology and took the research in biphasic waveforms to the stage of clinical testing. Two other companies, Cardiac Pacemakers (CPI) and Medtronic, developed their own biphasic waveform circuitry based in part on this ERC/ECT research. Intermedics also brought to clinical trials the improved electrodes developed by the ERC/ECT. Today, implantable defibrillator companies continue to build on the Center’s findings and modify their internal defibrillators accordingly.

Portable Defibrillators: The same improvements in sensors and electrodes that the ERC/ECT’s work brought to internal defibrillators have also been used by industry to design external, portable defibrillators (Figure 11-6) that are easier to use and less expensive than preexisting models and help people who suffer heart attacks in public places. A more efficient and effective power source for delivery of the shock permitted miniaturization of the devices.

Figure 11-6: An early portable defibrillator based in part on the ERC/ECT’s work (Source: Defibtech)

3D Ultrasound: The ERC/ECT achieved several breakthroughs in sensing and image processing that made three-dimensional ultrasound possible. At the time, this technology was 5–7 years ahead of acceptance by the medical community and insurance companies; now it is ubiquitous, partly as a result of early championing by the ERC/ECT through a startup, Volumetrics Medical Imaging.

Engineering Design Research Center (EDRC), Carnegie Mellon University, Class of 1986

Wearable Computers: Carnegie Mellon University (CMU) has had a long involvement with the development of wearable information technology through two ERCs, the EDRC (1986–1996) and the later Quality of Life Technologies (QoLT) ERC (2006–2015). CMU faculty associated with the EDRC built their first wearable computer in 1991 and had units in the field with head-mounted displays in 1994-1995, supporting maintenance activities of the U.S. military. By 1995, a community of wearable computing researchers was forming. In 1997, an IEEE Working Group on Wearable Information Systems was formed and organized the first International Symposium on Wearable Computers (ISWC) at MIT, with the EDRC’s Thad Starner as general chair. By 1998 the Working Group had been promoted to a Technical Committee that Daniel Siewiorek (later QoLT center director) chaired; the second ISWC was held in 1998 at CMU.

The early research in this field was codified in a 2008 research monograph entitled “Application Design for Wearable Computers,” co-authored by Siewiorek, Starner, and Asim Smailagic.[15] The results of CMU’s research were then spread through a number of students and faculty who came through the EDRC and QoLT and started companies, producing products that laid the foundation of today’s wearable technology industry. These included BodyMedia, one of the first companies to commercially offer wearable sensors; Morewood Labs, which designed electronics for the first five models of wearables from FitBit, Inc.; and ESI, a forerunner to Inmedius that developed tools for authoring Interactive Electronic Technical Manuals for F-18 fighter jet maintenance worldwide (eventually acquired by Boeing to transfer the technology to commercial aircraft). Google Glass, a voice-activated computer/monitor combination worn on eyeglass frames, has direct ties to the wearable computing research at EDRC through Thad Starner, who went to work at Google as the proselytizer for this technology (Figure 11-7).

Figure 11-7: Google Glass eyewear, Explorer Edition (Credit: Google)

From 1995 to 1997, four EDRC-inspired wearable computers (VuMan 3, MoCCA, Digital Ink, and Promera) won prestigious international design awards, making the EDRC design team competitive with internationally known design firms such as Frog Design and IDEO.

The EDRC inspired a summer design course at CMU that originally led to the concept of wearable computers and to building the first models. The methodology developed for rapidly designing and building CMU’s wearable computers has since been used for over 25 years in interdisciplinary engineering capstone design courses worldwide.

Computer-Integrated Surgical Systems and Technology (CISST), Johns Hopkins University, Class of 1998

Robotic Surgery: Over its ten-year lifetime as an ERC, the Center for Computer-Integrated Surgical Systems and Technology, based at Johns Hopkins University, made major advances in robotic surgery. These include new technologies that steady the hands of surgeons while they perform microscopically precise eye surgeries; tiny robotic “snakes” that can travel through the esophagus to remove hard-to-reach tumors that would otherwise require highly invasive surgery to access; and visualization and mapping tools that give surgeons greater confidence and accuracy in performing biopsies, delivering radiation seeds, and other delicate operations.

The vision that drove and still drives the Center’s research is to integrate cutting-edge technologies into systems that both greatly improve physicians’ ability to plan and perform existing surgical interventions and enable new procedures that would not otherwise be possible. This vision has led the Center to explore all aspects of computer-integrated interventional medicine. The advances pursued will eventually touch upon virtually every aspect of care delivery: more accurate procedures; more consistent, predictable results from one patient to the next; improved clinical outcomes; greater patient safety; more cost-effective methods of treatment and care; better methods for physician training; and creation of an information infrastructure that will facilitate experience-based methods for assessing treatment alternatives and improving procedures.

Although the Center’s central focus has been on medical robotics, its interdisciplinary research has necessarily been very broad, encompassing medical imaging, modeling of patient anatomy and surgical procedures, novel sensors and mechanisms, human-machine interaction, and systems science.

The Surgical Assistant Workstation: The practice of surgery has long involved direct hands-on operations performed by a highly skilled surgeon. But in recent years that age-old model has been expanding. Academic and industrial research and development are combining to produce robotic assistants that extend and augment the abilities of surgeons to perform operations that are beyond the limitations of human eyes and hands. We are also entering the era of “telesurgery,” in which teleoperated robotic surgical systems, combined with high-resolution video and broadband internet communications, will be employed to perform surgery remotely, a capability that is vital given growing populations worldwide and an impending shortage of qualified surgeons.

The CISST is a leader in research that underlies the advances being made in this new realm of surgical support. One example is the Surgical Assistant Workstation (SAW) project. SAW is a modular, open-source software framework designed to serve several important purposes: it can be used for rapid prototyping of new telesurgical research systems; it provides enhanced 3-D visualization of the surgical site; and it allows users to interact with the surgical system using 3-D manipulations. This versatile framework addresses an important need: it supports systems-level research in medical robotics and computer-integrated interventional medicine while also promoting advanced research on the individual components of such systems. Additionally, it provides a flexible and low-cost mechanism for industrial researchers to evaluate academic research advances within the context of their own commercial products.

Being modular, the SAW framework includes a library of software components that can be used to implement single-user or multi-user robot control systems that rely on complex video pipelines and an innovative, highly interactive surgical visualization environment. The software is designed to be plug-and-play, so system developers can add support for their own robotic devices and hardware platforms.
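
A minimal sketch of the plug-and-play component pattern that such a framework provides is shown below, in Python for brevity; the actual SAW framework is a C++ library with its own component interfaces, so the class names and methods here are illustrative assumptions:

```python
# Minimal sketch of a plug-and-play component pattern of the kind SAW
# provides: devices register themselves behind a common interface, and
# a control system is composed from whatever components are plugged in.
# Illustrative only; the real SAW framework is a C++ library.

from abc import ABC, abstractmethod

class RobotDevice(ABC):
    """Common interface every plugged-in robotic device must implement."""

    @abstractmethod
    def read_pose(self) -> tuple[float, float, float]: ...

    @abstractmethod
    def command_velocity(self, vx: float, vy: float, vz: float) -> None: ...

REGISTRY: dict[str, type[RobotDevice]] = {}

def register(name: str):
    """Class decorator: make a device available to system composers."""
    def wrap(cls: type[RobotDevice]) -> type[RobotDevice]:
        REGISTRY[name] = cls
        return cls
    return wrap

@register("simulated_arm")
class SimulatedArm(RobotDevice):
    def __init__(self):
        self.pose = [0.0, 0.0, 0.0]

    def read_pose(self):
        return tuple(self.pose)

    def command_velocity(self, vx, vy, vz):
        dt = 0.01  # fixed control period (s)
        self.pose = [p + v * dt for p, v in zip(self.pose, (vx, vy, vz))]

if __name__ == "__main__":
    arm = REGISTRY["simulated_arm"]()   # compose a system from the registry
    arm.command_velocity(1.0, 0.0, 0.0)
    print(arm.read_pose())              # (0.01, 0.0, 0.0)
```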

Image-guided Needle Placement: The value of image-guided needle-based therapy and biopsy for dealing with a wide variety of medical problems is proven. However, both the accuracy and the procedure time vary widely among practitioners of most systems currently in use. Typically, a physician views the images on the scanner’s console and then must mentally relate those images to the anatomy of the actual patient. A variety of virtual reality methods, such as head-mounted displays, video projections, and volumetric image overlay have been investigated, but all these require elaborate calibration, registration, and spatial tracking of all actors and components. This creates a rather complex and expensive surgical tool and requires a surgeon with exceptional skills to integrate the images with needle placement in real time.

Researchers at the CISST, in collaboration with Dr. Ken Masamune of Tokyo Denki University in Japan and the Siemens Corporation, developed an inexpensive 2D image overlay system to simplify, and increase the precision of, image-guided needle placements using conventional CT scanners. The device consists of a flat LCD display and a half mirror mounted on the scanner gantry. When the practitioner looks at the patient through the mirror, the CT image appears to be floating inside the patient with correct size and position, thereby providing the physician with two-dimensional “X-ray vision” to guide needle placement procedures.

By enhancing the physician’s ability to accurately maneuver inside the human body, needle steering could potentially improve a range of procedures from chemotherapy and radiotherapy to biopsy collection and tumor ablation, all without additional trauma to the patient. By increasing the dexterity and accuracy of minimally invasive procedures, anticipated results will not only improve outcomes of existing procedures, but will also enable percutaneous procedures for many conditions that currently require open surgery. Ultimately, this advance could also significantly improve public health by lowering treatment costs, infection rates, and patient recovery times.

Center for Biofilm Engineering (CBE), Montana State University, Class of 1990

Biofilm Control by Signal Manipulation: Biofilms form on surfaces in contact with water. (See Figure 11-8.) At the time NSF funded this ERC, it was known that all engineered aqueous systems suffer from the deleterious effects of biofilm formation, which causes fouling, corrosion, and filter blockage in industrial systems and much more profound problems in the human body, such as chronic infection from medical device implants and wounds. Marine systems are adversely affected by bacterial fouling and subsequent macrophyte colonization; the increased energy cost of propelling fouled hulls through the water costs the U.S. Navy billions of dollars per year. Little was known about how to combat the formation of biofilms.

Figure 11-8: Biofilms cause problems with many types of surfaces (Source: CBE)

In the late 1990s, researchers at the CBE discovered that biofilms are composed of cells in matrix-enclosed micro-colonies and that these micro-colonies form “towers” interspersed between open water channels, to enable the flow of nutrients to nourish the bacteria living in the colonies. They concluded that a system of chemical signals must control the development of these complex communities. They were then the first to show, in an April 1998 Science paper,[16] that biofilm formation in Gram-negative bacteria[17] is controlled by chemical signals (acyl homoserine lactones, or AHLs) that also control quorum sensing processes by which bacteria “sense” the number of cells present in a given ecosystem. Subsequently, the CBE described many different signals of this type. Because of this discovery, the ERC and the medical research community began to realize that many chronic diseases, such as cystic fibrosis, prostatitis, and chronic wounds, are the result of biofilm formation.

Biofilm control signals have subsequently been identified in many economically important organisms, and several start-up companies have sought to find and commercialize specific signal-blocking analogues in order to control biofilm formation. The CBE was awarded a patent on biofilm control by signal manipulation, and a start-up firm, BioSurface Technologies, was spun off to capitalize on this technology.

Biofilm as a Causative Agent in Chronic Wounds: Together with Dr. Randy Wolcott of the Southwest Regional Wound Care Clinic in Lubbock, Texas, the CBE showed that chronic wounds, such as diabetic lower-extremity ulcers, are due to persistent biofilm infections (Figure 11-9). Preliminary work in this field led to the award of an NIH grant to the CBE to continue to develop models for the in-vitro study of chronic wounds and assess the efficacy of anti-biofilm agents.

Figure 11-9: Biofilms attack various tissues and structures in the human body. (Source: CBE)

Biofilm Assessment Methodologies: The CBE was instrumental in standardizing methods of measuring biofilms. This impacts how commercial products are evaluated by regulatory agencies, how health-related guidelines are enforced, and how researchers choose assays to quantify attached cell growth. The Center led activities within standard-setting organizations such as ASTM and the American Dental Association, as well as within regulatory agencies such as EPA and FDA, to propose statistically valid methods for biofilm assessment and quantification.

See Chapter 5, Section 5-B for further description of the CBE’s work in biofilms.

Biomimetic MicroElectronic Systems Center (BMES), University of Southern California, Class of 2003

Prosthetic Retina: By May 2014, artificial retina technology developed by a BMES team led by Center Director Dr. Mark Humayun had been successfully implanted in five U.S. patients with retinitis pigmentosa (RP), a condition that often leads to blindness. This was the first “bionic eye” to receive Food and Drug Administration market approval in the United States (February 2013). By fall 2013, Second Sight Medical Products, Inc. (BMES’ commercial partner) had received Medicare reimbursement approval and begun transitioning the bionic eye—known as the Argus II Retinal Prosthesis System (Figure 11-10)—into mainstream application, with availability at 12 surgical centers throughout the U.S. as of the time of writing.

Figure 11-10: The Argus II retinal prosthesis developed by BMES and Second Sight Medical Products is entering clinical practice. (Credit: Second Sight)

The Argus II was the first FDA-approved long-term therapy for people with advanced RP in the U.S. and a breakthrough device for treating blindness. As a result of the retinal prosthesis, patients with chronic degenerative eye disease can regain some vision to detect the shapes of people and objects around them. The sight gained is enough to allow patients to navigate independently, improving their mobility as well as confidence.

The effort by Humayun and his colleagues received early and continuing support from NSF, the National Institutes of Health and the Department of Energy, with grants totaling more than $100 million. The private sector’s support nearly matched that of the federal government. The Argus I and Argus II systems won worldwide recognition, earning many prestigious awards. The BMES ERC has since developed a newer prototype system with an array of more than 15 times as many electrodes and an ultra-miniature video camera that can be implanted in the eye. The center, now “graduated” from NSF ERC Program funding, continues to work toward improving the Argus II and training surgeons in implanting it, as it is now available for patient use.

Hippocampal Prosthesis: Impairment of brain function can stem from many sources: external injury such as from head trauma, internal damage such as from stroke or epilepsy, or degenerative diseases such as Parkinson’s or Alzheimer’s. Since 2003, Dr. Theodore Berger and his colleagues at the BMES ERC have made rapid, even revolutionary strides toward developing implantable microcomputer chips that can bypass damaged parts of the brain and provide some restoration of brain function. Their main focus has been on developing a chip that can replace the function lost due to damage to the hippocampus, an area of the brain that is essential for learning and memory. The implanted chip will bypass the damaged brain region by mimicking the signal processing function of hippocampal neurons and circuits. The electronic prosthetic system is based on a multi-input multi-output (MIMO) nonlinear mathematical model that can influence the firing patterns of multiple neurons in the hippocampus. The ultimate aim is to devise neural prostheses to replace lost cognitive and memory function in injured soldiers, accident victims, and anyone suffering from cognitive and memory impairment.
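
A minimal sketch of the MIMO idea follows: each output neuron’s firing probability is predicted from the recent spike history of many input neurons, passed through a nonlinearity. The random kernels and logistic nonlinearity here are illustrative stand-ins, not the Berger team’s estimated Volterra-type model:

```python
# Minimal sketch of a MIMO nonlinear spike-prediction model: each
# output neuron's firing probability is a nonlinear function of the
# recent spike history of many input neurons. Kernels are random
# stand-ins for an estimated model.

import numpy as np

rng = np.random.default_rng(0)

N_IN, N_OUT, LAGS, T = 16, 4, 25, 1000   # neurons, history length, time bins

# First-order kernels: weight of each input at each lag, per output.
kernels = rng.normal(0, 0.3, size=(N_OUT, N_IN, LAGS))
bias = -2.0                               # keeps baseline firing sparse

def predict_outputs(spikes_in: np.ndarray) -> np.ndarray:
    """Map an input spike raster (N_IN, T) to output firing probabilities."""
    probs = np.zeros((N_OUT, spikes_in.shape[1]))
    for t in range(LAGS, spikes_in.shape[1]):
        history = spikes_in[:, t - LAGS:t]            # (N_IN, LAGS) window
        drive = np.tensordot(kernels, history, axes=([1, 2], [0, 1]))
        probs[:, t] = 1.0 / (1.0 + np.exp(-(drive + bias)))  # logistic
    return probs

if __name__ == "__main__":
    spikes_in = (rng.random((N_IN, T)) < 0.05).astype(float)  # 5% firing rate
    print(predict_outputs(spikes_in).mean())
```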

The team began with successful tests in live rats in 2011 and moved to successful tests in non-human primates in 2013. They began testing in humans in 2015; and in 2016 a startup named Kernel was formed to move toward commercialization and clinical use of the hippocampus memory prosthetic. By 2018, working with researchers at Wake Forest Baptist Medical Center, Berger and his team had demonstrated the successful implementation of the prosthetic system, for the first time using clinical subjects’ own memory patterns to facilitate the brain’s ability to encode and recall memory. In the study, published in the Journal of Neural Engineering, participants’ short-term memory performance showed a 35 to 37 percent improvement over baseline measurements.[18]  

Synthetic Biology Engineering Research Center (Synberc), University of California at Berkeley, Class of 2006

BIOFAB: BIOFAB, built by Synberc in 2009 with support from an ERC Innovation award, was the world’s first biotechnology design-and-build facility. In its first two years, the BIOFAB completed two major technical projects. First, the team assembled and tested all of the most popular expression control elements used in prokaryotic genetic engineering, then applied a new statistical approach for quantifying the primary activity of each part and also how much each part’s activity varies across changing contexts. This was the first example in synthetic biology of defining part quality in terms related to part reuse. It also provided a framework for establishing shareable descriptions of standard biological parts and devices. Second, the BIOFAB team invented and delivered the first example of a design architecture for standard biological parts that function reliably across changing contexts. This provided an initial realization of one of the core engineering dreams underlying synthetic biology—i.e., standard biological parts.
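
The part-characterization idea can be sketched simply: quantify each part’s central activity and how much that activity varies across contexts. The measurements below are invented for illustration; BIOFAB’s actual statistical framework was considerably richer:

```python
# Minimal sketch of the BIOFAB-style part-quality idea: characterize
# each genetic part by its central activity and by how much that
# activity varies across genetic contexts. Measurements are invented.

import statistics

# Hypothetical expression measurements of each promoter part, taken in
# several different downstream-gene contexts (arbitrary units).
measurements = {
    "promoter_A": [1020, 980, 1105, 940, 1010],
    "promoter_B": [410, 1550, 220, 890, 1300],
}

for part, values in measurements.items():
    mean = statistics.mean(values)
    cv = statistics.stdev(values) / mean   # coefficient of variation
    # Low CV across contexts = the part is reliably reusable.
    print(f"{part}: mean activity {mean:.0f}, context CV {cv:.2f}")
```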

The Center created the BIOFAB to facilitate academic and industry synthetic biology projects, enabling researchers to mix and match parts in synthetic organisms to produce new drugs, fuels, or chemicals. The pilot projects were essential to launching BIOFAB’s operations and provided the first systematic insights into how standard biological parts behave in combination with each other.

“Off-the-Shelf” Biological Parts: Researchers at Synberc addressed the need for highly reproducible control elements in synthetic biology. The Joint BioEnergy Institute (JBEI), a multi-institution partnership that included Synberc, created new software—the Inventory of Composable Elements, or JBEI-ICE—that made a wide variety of essential information readily available to the synthetic biology design community for the first time. Synberc was instrumental in developing a robust “Web of Registries” (Figure 11-11) to make parts and tools available to the design community. JBEI-ICE is an information repository of these quality parts for designers to use to engineer biological systems. Researchers at Synberc partner institution MIT made advances in developing “biological circuit” components that can be relied on to work the same way every time. Using genes as interchangeable parts, synthetic biologists can design cellular circuits to perform new functions, enabling improved organisms such as plants that produce biofuels and bacteria that can detect pollution.

Figure 11-11: Synberc/JBEI’s Web of Registries gives synthetic biology designers ready access to parts and tools. (Source: Synberc)

CRISPR: The ability to control gene expression in an organism is an essential goal of synthetic biology. Gene expression is used in the synthesis of functional biochemical material, i.e., ribonucleic acid (RNA) or protein. In 2012, independent investigators Jennifer Doudna (UC-Berkeley) and Emmanuelle Charpentier (Max Planck Institute for Infection Biology) demonstrated that CRISPR-Cas9[19] (enzymes from bacteria that control microbial immunity), a naturally occurring genome editing system, could be used for programmable editing of genes in DNA, thus controlling gene expression. This seminal discovery launched a race to exploit the CRISPR-Cas9 system’s precision-cutting approach to gene editing by applying it to the genomes of higher organisms. Synberc-affiliated researchers at Harvard (chiefly George Church and colleagues) constituted one of the two teams that led the race, using CRISPR-Cas9 to edit genes in human stem cells. The other team, led by Feng Zhang of the Broad Institute of MIT and Harvard, simultaneously applied the technique to mouse and human cells in vitro. Both teams reported their results in the same issue of Science in January 2013.[20] These researchers were able to engineer CRISPR for the first time to do precise editing in mammalian cells, launching a revolution in gene editing that is accelerating progress in synthetic biology.

Quality of Life Technologies ERC (QoLT), Carnegie Mellon University and University of Pittsburgh, Class of 2006

Assistive Robotics (Non-Surgical): Robots are extremely effective in environments like factory floors that are structured for them, and currently ineffective in environments like our homes that are structured for humans. The Personal Robotics Lab of the Robotics Institute at Carnegie Mellon University, supported by researchers and funding from the Quality of Life Technology Center (QoLT), is developing the fundamental building blocks of perception, navigation, manipulation, and interaction that will enable robots to perform useful tasks in human environments.[21] An “assistive robot” is one that can sense and process sensory information in order to benefit people with disabilities as well as seniors and others needing such care. The United Nations predicts that the global over-65 population will grow by 181% and will account for nearly 16% of the population by 2050.[22] Assistive eldercare robots are targeted at this aging population and are expected to rapidly expand the robot base, surpassing the numbers in industrial settings by then.

Service tasks such as complex cooking and cleaning remain major technical challenges for assistive robots. But QoLT researchers are finding solutions with projects such as the Home Exploring Robot Butler. The two-armed mobile robot, known as “HERB” (Figure 11-12), can recognize and manipulate a variety of everyday objects. It can recycle bottles, retrieve personal items, and even play interactive games. HERB recently learned to cook and serve microwave meals.

Figure 11-12: HERB is the QoLT ERC’s assistive robot designed for personal home use. (Source: QoLT)

QoLT studies have explored how people react differently to variations in a robot’s voice, conversation, and embodied movement patterns, with factors like gender, culture, pace, intonation, emotion, and subject matter all under consideration. “As intelligent and adaptive technologies become ever more integrated into our normal personal lives, we may not view them as cold, impersonal devices—mere ‘appliances,’” said Daniel Siewiorek, QoLT Center Director. “We expect they will become true partners and companions to the human users they serve.”[23]

As in the case of the BMES retinal prosthesis, QoLT technologies offer a good example of the direct interaction with end users—in this case, individual people—in which ERC researchers engage, as well as the personal benefits that their advances provide directly to end users. See Chapter 5, Section 5-B for further discussion of end-user interactions.

11-B(b) Communications

Institute for Systems Research (ISR) (formerly Systems Research Center, or SRC), University of Maryland, Class of 1985

Satellite-based Internet Access: Despite the rapid deployment of traditional cable and DSL broadband across the country and around the world, billions of people live beyond the reach of high-speed, wired Internet connections. That fact created a multi-billion-dollar market for satellite-based, two-way Internet connections conceived and designed by the SRC/ISR.

The then-SRC worked closely with Hughes Network Systems in developing the hardware and software necessary to multicast broadband data across satellites to terrestrial receivers at homes and offices. The Center’s then-Director, John Baras, created the necessary algorithms. Hughes has marketed the resulting product under a variety of names, starting as DirecPC (Figure 11-13) and now incorporated into its HughesNet® offering, sold as of 2018 to 1.3 million subscribers in the Americas under plans that reach speeds of 25 Mbps for downloads and 3 Mbps for uploads. With each customer paying rates that start at about $50 a month, by 2018 the system was bringing recurring revenue of about $1.4 billion per year to Hughes alone. The SRC-developed technology also led to a worldwide industry of other companies delivering similar broadband services over satellites. In addition to utilizing the technology, Hughes hired many of the SRC/ISR students who worked on the project through the years.

In late 2012, the Senior Vice President for Engineering at Hughes described the impact of their collaboration with Dr. Baras and the Center: “About $5 billion can be directly credited to the work done at the SRC. We employ 1,500 people here in the state. Most of them are working on businesses related to this technology. This helped develop not just a new product, but a new industry.”

Figure 11-13: The original Hughes DirecPC system was a novel hybrid of satellite and telephone network links. (Source: SRC)

Network Simulator Testbed (NEST): NEST was a software environment for testing network designs and scenarios. It was installed in over 400 companies and universities, where it was extensively applied in a wide range of network and distributed-systems studies and designs. A companion tool, NETMATE, was a comprehensive network management platform consisting of tools to model and organize massive and complex managed information, as well as tools to support visual access to and effective interpretation of that information. For example, Citibank chose NETMATE as the centerpiece of the platform it used to manage its enterprise network worldwide.

Integrated Media Systems Center (IMSC), University of Southern California, Class of 1996

Immersive Media: The vision of the IMSC was the creation of a three-dimensional multimedia environment that immerses people at remote locations in a completely lifelike visual, auditory, and tactile experience. Before the term “virtual reality” was in common use, IMSC researchers termed this environment and the experience it would produce “immersipresence”—taking video conferencing to a level of interaction and collaboration among people in widely separated locations so realistic that it could largely replace the need for travel in business, education, and collaborative research, and even for enjoying sports, arts, and other entertainment events.

In pursuit of this vision, the IMSC developed an array of technologies that support the creation, use, distribution, and effective communication of multi-modal information. These technologies include: high-definition video transmitted over a shared network at extremely high speed; 360º panoramic video; multichannel immersive audio that reproduces a fully realistic aural experience; real-time digital storage, playback, and transmission of multiple streams of video and audio data; and integration of all these technologies into a seamless immersipresence experience.

IMSC advances in several key technologies enabled a richer and higher-fidelity aural and visual ambience. These ground-breaking technologies led collectively to the Remote Media Immersion (RMI) system. Demonstration of the integrated RMI system in a concert hall, with audience members located remotely, proved the feasibility and viability of advanced, immersive systems on the Internet. The simultaneous, synchronized live transmission of four channels of high-definition video and 10.2 channels of audio over the Internet had never before been achieved, or even attempted, when it was demonstrated in 2004.

11-B(c) Advanced Manufacturing

ERC for Intelligent Manufacturing Systems (CIMS), Purdue University, Class of 1985

Quick Turnaround Cell: The Quick Turnaround Cell (QTC), developed at the Purdue University ERC for Intelligent Manufacturing Systems, was one of the first engineering system testbeds produced in an ERC. It was a flexible manufacturing cell that integrated design, cutting, and quality inspection. It was used for rapid production of small-batch and one-of-a-kind machined parts. It led the ERC team to understand that the geometric representations in use at that time were not sufficient for real production situations, prompting the development of more advanced feature-based approaches to the design of complex parts. This realization identified the essential role of the concept of “features” in integrating computer-based manufacturing programs. The QTC was advanced in sophistication over time as it moved out of the ERC through government and industry partners, to the point where it was, by 1992, in use at the Army Missile Command and Loral Vought Systems for design and machining of prototype parts. By the mid-1990s, all new computer-aided design programs were using this concept.

Engineering Design Research Center (EDRC), Carnegie Mellon University, Class of 1986

Traveling Salesman Algorithm: A joint project between an EDRC Ph.D. candidate who was a Visiting Scientist at the DuPont Company and a DuPont researcher who became an Industrial Resident at the EDRC resulted in the development of a parallel branch-and-bound algorithm[24] for solving the asymmetric traveling salesman problem, which made it possible to solve larger problems than existing algorithms could handle. The corresponding mathematical model was used to optimize the production sequence of different product lots in multiproduct batch-processing plants. A major accomplishment of this work was that problems with up to 7,000 lots could be solved to optimality in less than 20 minutes of computation time. The EDRC student who worked on the project was hired by DuPont to implement the model. The software was transferred to DuPont and applied to the operation of 35 DuPont businesses encompassing 8 plants worldwide. The annual cost savings for DuPont, documented in November 1989, were on the order of $2.5M—of which almost 10 percent could be attributed directly to the traveling salesman optimization procedure developed at the ERC. A number of applications for these algorithms specific to the chemical industry were also identified and adopted by chemical processors.
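While the EDRC algorithm was a sophisticated parallel implementation, the core branch-and-bound idea can be sketched briefly. The following Python fragment is a minimal, illustrative sketch—not the EDRC/DuPont code—using depth-first search with a simple row-minima lower bound on a hypothetical four-city cost matrix:

```python
# Minimal branch-and-bound sketch for the asymmetric TSP (ATSP).
# Illustrative toy only: the cost matrix and the bounding rule
# (sum of cheapest usable outgoing arcs) are simplifications.
import math

def atsp_branch_and_bound(cost):
    n = len(cost)
    best_cost, best_tour = math.inf, None

    def lower_bound(last, visited, acc):
        # Optimistic completion estimate: every remaining city (and the
        # current one) must still be left along at least its cheapest arc.
        lb = acc
        for i in [last] + [v for v in range(n) if v not in visited]:
            choices = [cost[i][j] for j in range(n)
                       if j != i and (j not in visited or j == 0)]
            if choices:
                lb += min(choices)
        return lb

    def dfs(last, visited, acc, tour):
        nonlocal best_cost, best_tour
        if len(visited) == n:                     # close the tour
            total = acc + cost[last][0]
            if total < best_cost:
                best_cost, best_tour = total, tour + [0]
            return
        if lower_bound(last, visited, acc) >= best_cost:
            return                                # prune this subtree
        for j in range(n):
            if j not in visited:
                dfs(j, visited | {j}, acc + cost[last][j], tour + [j])

    dfs(0, {0}, 0, [0])
    return best_cost, best_tour

# Asymmetric 4-city example: cost[i][j] is the cost of going i -> j.
C = [[0, 10, 15, 20],
     [5,  0,  9, 10],
     [6, 13,  0, 12],
     [8,  8,  9,  0]]
print(atsp_branch_and_bound(C))   # -> (35, [0, 1, 3, 2, 0])
```

In a parallel implementation, subtrees of this search are explored concurrently; it is the pruning rule that makes instances with thousands of lots tractable.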

Reconfigurable Manufacturing Systems (RMS) ERC, University of Michigan, Class of 1996

Reconfigurable Inspection Machine on GMC Manufacturing Line: Today’s automotive engine technology is extremely sophisticated and requires manufacturers to maintain exacting quality specifications to ensure optimum engine performance and reliability. Therefore, manufacturers are increasingly employing in-line inspection stations to inspect critical part features on every part. In-line inspection minimizes the chances of defective parts reaching the customer and facilitates process control and improvement. The best applications of in-line inspection are those where the quality is highly unpredictable.

A good example is the need for in-line surface porosity inspection systems. Surface porosity is caused by tiny voids or pits at the surface of machined castings such as engine blocks and engine heads. It begins in the casting process, when gases are trapped in the metal as the casting solidifies, creating voids in the material. If a void is exposed during machining, it leaves a small pit (a pore) at the surface. Although they are typically smaller than 1 mm, surface pores can cause significant leaks of coolant, oil, or combustion gases between mated surfaces and cause severe damage to engines and transmissions. If such a pore is not detected, the consumer will have a noisy engine with a shorter lifetime. A major challenge for engine manufacturers lies in the difficulty of objectively measuring the sizes and locations of irregularly shaped surface pores at production-line rates.

As an outgrowth of its Reconfigurable Inspection Machine project, the ERC/RMS developed a prototype machine-vision system for in-line surface porosity inspection of engine blocks and engine heads. The system utilizes a specially designed vision system to acquire very high-resolution (300 megapixel) images of the part surface. The high-resolution images are then analyzed rapidly to detect, locate, and measure pores without slowing the production line.
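The production system’s details are proprietary, but the generic machine-vision pattern it exemplifies—threshold the image, group dark pixels into connected components, then measure each candidate pore—can be sketched in a few lines. All thresholds and the pixel calibration below are hypothetical illustration values, not ERC/RMS or GM parameters:

```python
# Illustrative sketch of pore detection by thresholding plus
# connected-component analysis. Not the ERC/RMS production code;
# dark_threshold, mm_per_pixel, and max_pore_mm are invented examples.
import numpy as np
from scipy import ndimage

def detect_pores(image, dark_threshold=60, mm_per_pixel=0.02, max_pore_mm=1.0):
    """Return (area_mm2, centroid_row, centroid_col) for each candidate pore."""
    mask = image < dark_threshold                 # pores image as dark pits
    labels, n = ndimage.label(mask)               # group dark pixels into blobs
    pores = []
    for blob_id in range(1, n + 1):
        blob = labels == blob_id
        area_mm2 = int(blob.sum()) * mm_per_pixel ** 2
        # Ignore blobs far larger than a plausible pore (stains, shadows).
        if area_mm2 <= max_pore_mm ** 2:
            cy, cx = ndimage.center_of_mass(blob)
            pores.append((area_mm2, cy, cx))
    return pores

# Synthetic 8-bit "surface image": bright machined surface, two dark pits.
img = np.full((200, 200), 200, dtype=np.uint8)
img[50:53, 80:83] = 20
img[120:124, 40:44] = 30
for area, row, col in detect_pores(img):
    print(f"pore of {area:.3f} mm^2 near pixel ({row:.0f}, {col:.0f})")
```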

In July 2006, General Motors Corp., an ERC member company, installed an industrial system for in-line surface porosity inspection of engine blocks in Flint, Michigan (Figure 11-14). The system is based on the technology developed at the ERC/RMS. The inspection system is integrated into the production line, and a conveyor moves engine blocks through the inspection station. Every part is measured within 15-20 seconds, allowing the human inspector to do a more thorough job much faster.

Figure 11-14: At the In-Line Porosity Inspection Station in GMC’s Flint, Michigan, plant, an operator inspects the images of an engine block in which pores were detected and makes an informed decision as to whether the engine block is indeed defective.

Performance Analysis for Manufacturing Systems: A key element of reconfigurable manufacturing systems technology (Figure 11-15) is giving a system-level planner the tools to evaluate the desired volume and mix, comparing productivity, part quality, convertibility, and scalability options. The planner then can perform automatic system-balancing based on algorithms and statistics. One useful software package to perform these tasks is Performance Analysis for Manufacturing Systems (PAMS).

Invented with the support of the ERC/RMS, the PAMS software package analyzes and optimizes manufacturing system performance. It has analysis modules for system throughput and work-in-process calculation and optimization. It can identify machine bottlenecks and calculate the optimal allocation of buffers for pull or push manufacturing systems.
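PAMS itself uses analytical methods; the following toy Monte Carlo model merely illustrates the kind of question it answers—how the throughput of a serial line depends on buffer allocation. The failure and repair probabilities are invented for illustration, not drawn from any plant data:

```python
# Toy Monte Carlo model of a two-machine serial line with one intermediate
# buffer -- the throughput-vs-buffer-size question PAMS answers analytically.
import random

def line_throughput(buffer_size, p_fail=0.05, p_repair=0.5, cycles=200_000):
    random.seed(1)
    up = [True, True]          # machine up/down states
    buf = 0                    # parts waiting in the intermediate buffer
    done = 0
    for _ in range(cycles):
        for m in (0, 1):       # Bernoulli failure/repair process per cycle
            up[m] = (random.random() >= p_fail) if up[m] \
                    else (random.random() < p_repair)
        if up[0] and buf < buffer_size:
            buf += 1           # machine 1 pushes a part into the buffer
        if up[1] and buf > 0:
            buf -= 1           # machine 2 pulls a part and finishes it
            done += 1
    return done / cycles

for b in (1, 2, 5, 10, 20):
    print(f"buffer={b:2d}  throughput={line_throughput(b):.3f} parts/cycle")
```

Runs of this sketch show throughput rising steeply with the first few buffer slots and then flattening—the diminishing-returns behavior that makes analyses like the Chrysler pallet study below worthwhile.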

Figure 11-15: The RMS ERC’s Reconfigurable Machining System concept gained rapid acceptance in industry.

In 2007, officials at the Chrysler Indiana Transmission Plant were planning to add more pallets to reduce traffic blockage and streamline their entire materials handling system. ERC/RMS analysis using PAMS software instead recommended pulling 15-18 pallets out of the existing closed-loop transmission machining line. The plant implemented the recommendation, and Chrysler reported an observed throughput increase of around 5%. Given the mass-production scale of the line, this single improvement on the transmission case machining line saved Chrysler hundreds of thousands of dollars annually.

Similar applications in 2007 at the Ford Cleveland Engine Assembly line, and in 2008 at four production lines at the Chrysler Kokomo Transmission Plant, realized even greater improvements in production. GM, meanwhile, imported the PAMS source code from the ERC and incorporated it into its own production throughput software.

Center for Power Electronics Systems (CPES), Virginia Tech, Class of 1998

Multiphase Voltage Regulator Module: Intel microprocessors operate at very low voltage and high current, and with ever-increasing speed, requiring a fast and dynamic response to switch the microprocessor from sleep mode to active mode and vice versa. This operating mode is necessary to conserve energy, as well as to extend the operating time of any battery-operated equipment. The challenge for the voltage regulator module (VRM) is to provide tightly regulated output voltage with fast dynamic response in order to transfer energy as quickly as possible to the microprocessor. The first generation of VRMs, developed for the Pentium II processor in the late 1990s, was too slow to respond to the power demand of subsequent generations of microprocessors, which included the Pentium III and Pentium 4. As a result, a large number of capacitors had to be placed adjacent to the microprocessor in order to provide the required fast power transfer. This solution became costly and bulky.

Responding to Intel’s microprocessor challenges, CPES established a mini-consortium of companies with a keen interest in the development of VRMs for future generations of high-speed microprocessors. CPES at Virginia Tech was the first to propose the multiphase buck converter as a VRM for Intel processors, an approach that subsequently became standard practice throughout the industry.

Figure 11-16: The multi-phase voltage regulator module developed at CPES became standard in Intel microprocessors. (Source: CPES)

Subsequently, every computer containing Intel microprocessors used the multiphase VRM approach developed at CPES (Figure 11-16). This technology developed into a multi-billion-dollar industry and gave U.S. industry the leadership role in both technology and market position. It also enabled new job creation and job retention in the U.S. Without this technology infusion from CPES, U.S. industry would have lost its position in providing power management solutions for new generations of microprocessors to overseas low-cost providers.

Novel Multi-Phase Coupled-inductor and Current Sensing: Today, every microprocessor is powered by a multi-phase voltage regulator (VR). Each phase employs a sizeable energy storage inductor to perform the necessary power conversion. Generally, for such an application, large inductance is preferred for steady-state operation, so that the current ripples can be reduced. On the other hand, a smaller inductor is preferred for fast transients, such as from “sleep mode” to “wake-up mode” and vice versa. To satisfy these conflicting requirements, a nonlinear inductor would in principle be ideal, with a large inductance in the steady state and a small one during transients. However, there is no simple way of realizing such a nonlinear inductor. In 1999, CPES proposed a coupled-inductor concept to address the issue. When the inductors are coupled in a multi-phase buck converter, by virtue of the switching network they behave like nonlinear inductors: the equivalent inductance is large in the steady state and small during transients. This enables a multi-phase VR to deliver power to microprocessors operating at GHz clock frequencies. The coupled-inductor concept was adopted in industry practice, where it enabled much-improved performance with reduced footprint and cost. Coupled inductors are still widely used in VR applications today.
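The mechanism can be made concrete with the idealized two-phase case (a simplified sketch; exact steady-state values depend on duty cycle and are derived in the CPES literature). Let L be each phase’s self-inductance and M < 0 the mutual inductance of the inversely coupled windings:

```latex
\[
  v_1 \;=\; L\,\frac{di_1}{dt} \;+\; M\,\frac{di_2}{dt}
\]
% During a load transient both phase currents slew in the same direction
% (di_1/dt = di_2/dt), so the equivalent inductance is
\[
  L_{\mathrm{tr}} \;=\; L + M \;=\; L - |M| \qquad \text{(small: fast transient response)}
\]
% whereas in interleaved steady-state operation the phase ripples largely
% oppose one another (di_2/dt \approx -\,di_1/dt), giving
\[
  L_{\mathrm{ss}} \;\approx\; L - M \;=\; L + |M| \qquad \text{(large: small current ripple)}
\]
```

The same pair of windings thus presents exactly the nonlinear-inductor behavior described above: small inductance when the load steps, large inductance in the steady state.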

Particle Engineering Research Center (PERC), University of Florida, Class of 1995

Synthesis of Nanofunctionalized Particulates by the Atomic Flux Coating Process (AFCP): The aim of this ERC was to develop innovative particulate-based systems for next-generation processes and devices. To that end, PERC researchers synthesized a then-new class of particulate materials with specific functional characteristics. The particulates are synthesized by the attachment of nanometer-sized inorganic functional clusters onto the surface of core particles. The researchers demonstrated the synthesis of artificially structured, nano-functionalized particulate materials with unique optical, cathodoluminescent, superconducting, and electrical properties. It was shown that attaching atomic-to-nanosized inorganic, multi-elemental clusters onto the surface of the core particles generated materials and products with significantly enhanced properties.

Such materials can be used for a wide range of existing and emerging products involving advanced ceramics, metals, and composites which span multiple industries such as aerospace, automobile manufacturing, machining, vacuum electronics, batteries, data storage, catalysis, and superconductors. The Center applied this technology to develop coated drugs for slow release. It was expected that hundreds of millions of dollars would be saved in this pharmaceutical application alone.

In 1999 a company called Nanotherapeutics was formed (originally as NanoCoat) to license PERC’s atomic flux coating process technology (Figure 11-17). The CEO of the company was James Talton, a former graduate student in the ERC. A particular focus was on coating drug molecules with thin, porous films to allow the timed release of the drug. The company developed several other proprietary drug delivery technologies for pharmaceuticals, including, for example, an injectable bone filler. In 2009, Nanotherapeutics was awarded a $30.9 million, 5-year contract from the National Institute of Allergy and Infectious Diseases, part of the National Institutes of Health, to develop an inhaled version of the injectable antiviral drug, cidofovir. The drug would provide non-invasive, post-exposure prophylaxis and treatment of the Category A bioterrorism agent smallpox. In 2007 the company had been awarded a $20 million contract to develop an inhaled version of gentamicin for the post-exposure prophylaxis and treatment of tularemia and plague, both also Category A bioterrorism agents.

Figure 11-17: The PERC ERC spun off several successful startups based on Center-developed technologies. (Source: PERC)

In late 2016, the company opened a new $138M plant near Gainesville, Florida. The plant was built to fulfill a Department of Defense contract potentially worth up to $359 million. In October 2017, Nanotherapeutics changed its name to Ology Bioservices.

Center for Advanced Engineering Fibers and Films (CAEFF), Clemson University, Class of 1998

Modifying Substrates: The CAEFF provided an integrated research and education environment for the systems-oriented study of fibers and films, promoting the transformation from trial-and-error development to computer-based design of fibers and films. CAEFF researchers developed novel technology in the area of surface modification of polymeric and inorganic substrates. This technology uses a “primer” layer of poly(glycidyl methacrylate) (PGMA) to provide reactive groups on a surface that can subsequently be functionalized with other molecules, including biomolecules. Clemson University licensed the technology to one company, Invenca, in the form of an exclusive license restricted to the field of liquid chromatography, and to Aldrich Chemical as a non-exclusive license restricted to the field of soft lithography. A third company, Specialty & Custom Fibers, also licensed the surface-treatment technology for fibers with anti-fouling protection against biological species.

11-B(d) Energy, Sustainability, Infrastructure

Center for Advanced Technology for Large Structural Systems (ATLSS), Lehigh University, Class of 1986

ATLSS Integrated Building Systems: The AIBS program was developed to coordinate ongoing research projects in automated construction and connection systems in order to design, fabricate, erect, and evaluate cost-effective building systems, with a focus on providing a computer-integrated approach to these activities. A family of structural connections, called ATLSS connections, was developed with enhanced fabrication and erection characteristics. These ATLSS connections can be erected using automated construction techniques. This feature minimizes the human assistance needed during construction, resulting in quicker, less expensive erection procedures in which workers are less susceptible to injury or death. The technology for automated construction depends heavily on the use of Stewart-platform cranes, which are controlled by a system of six cables to allow precise movement in six degrees of freedom. A scale-model Stewart-platform crane was constructed in the ATLSS laboratory to test the feasibility and limitations of automated construction with these connections.

Bridge Inspection Technologies: A series of bridge-related technologies that saw implementation and commercialization was developed at ATLSS. These technologies were designed to assist bridge inspectors and owners in inspecting, assessing, and maintaining bridges in a more cost-effective and safe manner. They are:

Corrosion Monitor: Provides a measurement of atmospheric corrosivity via the electric current generated during the corrosion process. It quantifies the corrosion on a steel element due to atmospheric conditions such as dust, humidity, condensation, salt spray, etc. The device has been placed at six bridge sites in five states.

Fatigue Monitoring System: Conceived to simplify the process of collecting and processing the stress history data required to estimate the remaining fatigue life of steel bridges. The key differentiating technology is a Fatigue Data Processing Chip which accepts conditioned data, performs “rainflow counting” calculations, generates a stress histogram, computes equivalent stress ranges, and stores the processed data (a sketch of this processing chain appears after this list). The ATLSS laboratory was used as a full-scale testbed for validating the chip, including using telemetry to retrieve data. In addition, a field demonstration was arranged with one of the ATLSS industry partners.

Hypermedia Bridge Fatigue Investigator (HBFI): Knowledge-based computer system that was designed to increase the effectiveness of bridge inspection of fatigue-critical components. HBFI tells inspectors where in a structure to look for evidence of fatigue cracking, leading to early detection and hence more economical repairs when needed. The hypermedia portion of the software guides an inspector through data entry on an existing bridge, and provides supplemental information on the concepts of fatigue and inspection for fatigue.

Smart Paint: Developed to augment visual inspection of steel bridges for existence of cracks. The system consists of dye-containing microcapsules mixed with paint. In the event of a crack in the underlying substrate, the paint cracks, resulting in the rupture of the microcapsules. The dye is released and quickly appears along the crack edges. The Smart Paint was tested in the laboratory, both on small specimens and full-scale ship components under long-term cyclic loading. A field bridge site was developed to further test the weatherability and performance of the paint.
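As noted above under the Fatigue Monitoring System, the chip’s processing chain—rainflow counting followed by an equivalent stress range—can be sketched in simplified form. The three-point counting rule below and the exponent m = 3 (the S-N slope commonly assumed for welded steel details) are illustrative assumptions; this is not the chip’s firmware:

```python
# Simplified three-point rainflow counting plus a Miner's-rule
# equivalent stress range -- an illustration of the calculations the
# Fatigue Data Processing Chip performs on conditioned strain data.

def rainflow_ranges(peaks):
    """Count stress ranges from a sequence of turning points (peaks/valleys)."""
    stack, full, half = [], [], []
    for p in peaks:
        stack.append(p)
        while len(stack) >= 3:
            x = abs(stack[-1] - stack[-2])   # most recent range
            y = abs(stack[-2] - stack[-3])   # previous range
            if x < y:
                break
            full.append(y)                   # y closes as one full cycle
            del stack[-3:-1]                 # drop its two turning points
    for a, b in zip(stack, stack[1:]):       # leftover ranges: half cycles
        half.append(abs(a - b))
    return full, half

def equivalent_stress_range(full, half, m=3):
    """Miner's-rule equivalent constant-amplitude stress range."""
    pairs = [(s, 1.0) for s in full] + [(s, 0.5) for s in half]
    n_total = sum(n for _, n in pairs)
    return (sum(n * s**m for s, n in pairs) / n_total) ** (1.0 / m)

# Turning points of a measured stress history (MPa), illustrative values.
history = [0, 40, 10, 60, 15, 35, 5, 50, 0]
full, half = rainflow_ranges(history)
print("full cycles:", full, " half cycles:", half)
print(f"equivalent stress range: {equivalent_stress_range(full, half):.1f} MPa")
```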

Advanced Combustion Engineering Research Center (ACERC), Brigham Young University and the University of Utah, Class of 1986

3-D Entrained-flow Coal Combustion Model: With 85 percent of the world’s energy generated by burning fossil fuels, ACERC’s early research was aimed at making coal-burning plants cleaner and more efficient. Specifically, the center fostered the use of computational fluid dynamics (CFD) as a tool in this field, including improvements to the comprehensive 3-D entrained-flow coal combustion model (PCGC-3).

The current CFD market is dominated by software descended from two companies: Fluent (now part of Ansys) and CD-adapco, developer of StarCD (acquired by Siemens in 2016). Parts of ACERC codes are embodied in commercial software packages from both, which provide users with substantial cost savings in terms of lower emissions of nitrogen oxides, less carbon in the ash, and the ability to design new systems or refit old systems without extensive full-scale testing.

The benefits of CFD are largely aerodynamic. The software helps decide how to design burners, where to place overfire air registers, and whether or not to add additional fuel (i.e., reburning) or ammonia (i.e., selective non-catalytic reduction). It also helps plant operators evaluate coal blends, anticipate the impacts when they switch coals, and identify problem areas on the boiler walls due to corrosion, ash buildup, hot spots, etc. The value of these cost savings, while recognized as significant, is difficult to quantify, since the software is now used worldwide in hundreds of plants and cost data are generally not available.

2-D and 3-D Comprehensive Combustion Codes: A major research thrust of ACERC was and is the development of comprehensive computer models that can aid in the solution of complex combustion problems—primarily of coal but also of oil, natural gas, and toxic and municipal solid waste. The ACERC developed a two-dimensional (2-D) combustion code, a generalized, steady-state analysis tool that can characterize combustion processes when the gas flow is assumed to be along a flat plane. Numerous large oil and chemical companies have used this code to model various commercial applications of coal gasification. The second code is a 3-D code for multi-dimensional modeling of turbulent reactive flows. Many companies have used this code not only to model the combustion process but also to identify more efficient geometries to use in furnace design.

Offshore Technology Research Center (OTRC), Texas A&M University, Class of 1988

Deepwater Riser: This ERC’s mission was “to provide technology, expertise, and services needed for the development of drilling, production, and transportation systems that enable the safe and economically viable exploitation of hydrocarbon resources in deep and ultra-deep water,” especially in the Gulf of Mexico. To enable well drilling by the offshore oil industry in water as deep as 3,000 feet, the OTRC used composite materials to develop a deep-water riser for enclosing the drill-pipe string for exploratory and production-well drilling. This advance was adopted by Westinghouse-Marine, ABB, Deepstar, Hercules, and Reading & Bates to significantly reduce the cost of deep-water production.

Homopolar Pulse Welding: In deep water (1,500 feet or greater), line pipe had to be lowered vertically off the stern of pipe-laying barges, allowing for only one station for welding segments of pipe—a slow process. The OTRC developed a novel process using a homopolar generator to produce a high-intensity pulse (about 1 million amps) of DC current that heats the interface between two sections of pipe, which are then quickly pressed together with great force in a press/clamp fixture. The result is a welded joint with excellent weld quality and high corrosion resistance. The cycle time for the weld is quite fast and the process is both reliable and largely non-manual, greatly improving safety.

The Center’s homopolar pulse welding technology was adopted by Shell Development, Texaco, British Petroleum, Exxon, Mobil R&D, and Amoco Research, resulting in significant reduction in construction cost and improved reliability of welds.

ERC for Compact and Efficient Fluid Power (CCEFP), University of Minnesota, Class of 2006

Hybrid Excavator: Researchers at the CCEFP and Caterpillar Inc. jointly developed and commercialized a novel hydraulic hybrid excavator system (Figure 11-18) that combines hydraulic hybrid technology with energy-efficient displacement-controlled actuation. Hydraulic accumulators store and reuse brake energy, which further reduces fuel consumption. Novel control and power management concepts allow effective power flows between the actuators, engine, and accumulator. System simulations showed that the innovative architecture of the hydraulic hybrid excavator allows for a 50% decrease in engine footprint size, while providing an additional 20% in fuel savings when compared with a non-hybrid displacement-controlled excavator. In 2014, Caterpillar commercialized the hydraulic hybrid excavator as model 336EH. In contrast to competing electric hybrid excavators, the 336EH was a commercial success, having captured 15% of the excavator market in its class by 2016.

Figure 11-18: The CCEFP’s Hydraulic Hybrid Excavator (Source: CCEFP)

Miniature Free-Piston Engine Compressor: Basic assumptions used in designing larger engines are not valid at smaller scales: the design of tiny valves, sensors, and actuators is more challenging; ignition behavior differs; and fabricating miniature components with tight tolerances is not easy. Researchers at the CCEFP nevertheless developed a miniature free-piston engine compressor for a fluid power orthosis (Figure 11-19). At the time (2017), the prototype device was the world’s smallest air compressor. It uses a tiny homogeneous charge compression ignition (HCCI) engine to create compressed air at 80 pounds per square inch (psi) for small powered devices, such as an active ankle-foot orthosis or a powered construction tool. The miniature air compressor can also be used in other small mobile applications, opening new markets for the fluid power industry. Further research led to benchmarking of other devices that might use the new CCEFP engine, such as first-ever models of small engines that can power model aircraft. Developing the tiny free-piston engine required comprehensive mathematical models of the ignition, fluid flow, and mechanical motion of the parts, as well as clever manufacturing methods.

Figure 11-19: The world’s smallest free-piston engine for human-scale, mobile fluid power applications. (Source: CCEFP)

11-B(e) Environment

ERC for Environmentally Benign Semiconductor Manufacturing (CEBSM), University of Arizona, Class of 1996

Reducing Water Use in IC Manufacturing: This ERC was jointly supported by NSF and the Semiconductor Research Corporation to address environmental challenges posed by semiconductor technology manufacturing. The semiconductor industry’s use of large quantities of highly purified water in integrated circuit (IC) chip manufacturing is not only costly but also has large potential environmental implications. Along with its partners, the CEBSM set up a unique physical and simulation testbed facility that allowed researchers to devise improved water conservation and recycling tools and techniques for IC fabrication. The goal was to provide technology that would make it possible to reduce water usage by 10% to 60%, depending on the fabrication technology being used. Achieving it required a series of breakthroughs in water purification methods, use reduction, recycling, and reuse of water. (See Figure 11-20.)

Figure 11-20: The CEBSM’s process for water use and reuse (Credit: CEBSM)

Some of the conservation and resource management techniques developed at the facility were transferred to industry, where they saved between $250,000 and $2 million annually at each manufacturing site. This research received a number of high-level national and international awards, including from Semiconductor Equipment and Materials International (SEMI) and the Semiconductor Research Corporation (SRC), which recognized the contributions as “major innovations that have significantly impacted industry and society.”

Engineering Research Center (ERC) for Biorenewable Chemicals (CBiRC), Iowa State University, Class of 2008

Paradigm Shift in Engineering Enzyme-Derived Products: Research teams at CBiRC created substantially more stable versions of several polyketide synthases (a class of enzymes). The focus was on increasing the enzymes’ kinetic activity as well as their in vivo lifetime during multi-day fermentation runs. This advance demonstrated a paradigm shift in the engineering of enzyme-derived products in vivo. With regard to CBiRC’s pyrone testbed (one of the Center’s mechanisms to evaluate concepts for production potential), this engineering approach has greatly increased the economic value of CBiRC’s current production platforms. Longer term, the strategy of enhancing the oxidation resistance of polyketide synthases will be employed generally in the engineering of additional polyketide and fatty acid synthases. The stabilized enzymes are being commercialized through Pareto Biotechnologies, Inc., a CBiRC member company. The work also resulted in the filing of a Patent Cooperation Treaty (PCT) application (PCTUS1519058) that includes all members of the two CBiRC teams focused on pyrone-synthase engineering.

11-B(f) Earthquake Engineering

Mid-America Earthquake (MAE) Center, University of Illinois at Urbana-Champaign, Class of 1997

MAEviz: Predicting the risks and losses due to earthquakes has long been an elusive goal. Collaboration between the MAE Center and the National Center for Supercomputing Applications resulted in a pioneering software package, the Mid-America Earthquake Center Visualization Module (MAEviz). MAEviz is an online framework that enables researchers, professional engineers, and officials to visualize the magnitude and pathway of an earthquake and to calculate and map seismic loss estimates and risk assessments. (See Figure 11-21.) MAEviz was designed with open-source programming and enables interaction among geographically distributed researchers, engineers, scientists, social scientists, and decision makers. It provides a secure portal interface with a standard look and feel for desktop access to the many and diverse MAEviz features.

Figure 11-21: The MAEviz online platform for earthquake impact assessments can predict expected property tax losses from residential structural damage, as shown in the figure, where it was applied to Shelby County, Tennessee. (Credit: MAE Center)
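For readers unfamiliar with how such loss estimates are computed, the core arithmetic—lognormal fragility curves that convert ground-motion intensity into damage-state probabilities, and then into expected repair cost—can be sketched as follows. All medians, dispersions, and cost ratios below are hypothetical illustration values, not MAE Center data:

```python
# Sketch of the fragility-curve arithmetic behind MAEviz-style loss
# estimation: lognormal fragility curves give the probability of each
# damage state at a given ground-motion intensity; expected loss follows
# from per-state repair-cost ratios.
from math import erf, log, sqrt

def lognormal_cdf(x, median, beta):
    """P(capacity <= x) for a lognormal fragility curve."""
    return 0.5 * (1.0 + erf(log(x / median) / (beta * sqrt(2.0))))

def expected_loss_ratio(pga, fragilities, cost_ratios):
    """fragilities: (median PGA in g, beta) per damage state, ordered
    slight -> complete; cost_ratios: repair cost as fraction of value."""
    p_exceed = [lognormal_cdf(pga, m, b) for m, b in fragilities] + [0.0]
    # P(exactly state i) = P(>= state i) - P(>= state i+1)
    return sum((p_exceed[i] - p_exceed[i + 1]) * cost_ratios[i]
               for i in range(len(cost_ratios)))

frag = [(0.15, 0.6), (0.30, 0.6), (0.55, 0.6), (0.90, 0.6)]  # slight..complete
costs = [0.02, 0.10, 0.40, 1.00]
for pga in (0.1, 0.2, 0.4):
    loss = expected_loss_ratio(pga, frag, costs)
    print(f"PGA {pga:.1f} g -> expected loss {100 * loss:.1f}% of building value")
```

Summed over every building in a region and mapped, calculations of this kind produce loss surfaces like the Shelby County example in Figure 11-21.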

The MAE Center enhanced MAEViz by merging it with FEMA’s HAZUS—a methodology and software program for estimating potential losses from earthquakes, floods, and hurricane winds. Combining the two software packages created a simpler and more powerful tool for earthquake researchers to collaborate on experiments with distributed leading-edge computing resources and research equipment.

MAEviz supports over fifty different analyses for buildings, bridges, hazards, lifelines, and socioeconomic implications. In 2008, engineers, social scientists, and economists added a set of new capabilities to MAEviz to predict social vulnerability, fiscal impact, household and population dislocation, shelter requirements, short-term shelter needs, business content loss, business interruption, and inventory loss. These features were tested on 17 April 2008, when a magnitude 5.2 earthquake hit the Central U.S. near the Illinois-Indiana border. Within minutes, the MAEviz team issued impact estimates that proved to be reasonably representative of observations. Following the 2008 Illinois earthquake, usage of MAEviz increased markedly.

In 2014, MAEviz became ERGO, a seismic risk assessment platform and application tool used to help coordinate planning and event mitigation, response, and recovery; it is operated and managed by the National Center for Supercomputing Applications (NCSA). ERGO is also an open-source community, led by NCSA, that supports the platform and its applications. In addition, MAEviz was further developed for fire hazards and applied to data sets in Turkey under the name HazTurk. The original MAEviz is still available through the existing MAE Center.

Pacific Earthquake Engineering Research (PEER) Center, University of California-Berkeley, Class of 1997

OpenSees: PEER develops the theory and tools that engineers need to design facilities whose performance is individually tailored to the resources and needs of the owner and society. High-rise buildings can be made safer, bridges and hospitals can remain functional, and museum collections can be protected. A key component is earthquake performance simulation, whereby specialized software is used to model seismic waves, facility response (including damage), and consequences in terms of repair costs, downtime, and casualties.

One software innovation, the Open System for Earthquake Engineering Simulation, or OpenSees, takes advantage of modern high-end computing, grid communications, access to databases, and scientific visualization to improve the ability to model and simulate complex structural and geotechnical systems (Figure 11-22). With advanced seismic simulations, engineers and building/structure owners can visualize outcomes and make critical decisions affecting future facility performance. Recognizing the broad applications of this approach, the NSF Network for Earthquake Engineering Simulation (NEES) System Integration team selected OpenSees as the simulation component for NEESgrid. In addition, a number of researchers in the U.S. and internationally have developed hybrid physical/computational simulation methods using OpenSees as the simulation engine.

Figure 11-22: OpenSees is a seismic performance simulation tool that stakeholders can use to visualize outcomes of earthquakes and make informed decisions. (Credit: PEER)

MCEER, The University at Buffalo, Class of 1997

Lifelines: Lifeline systems—water and electric power, among others—comprise the infrastructure backbone of all communities. As was glaringly apparent in Haiti in 2010, damage to these systems from earthquakes or other disasters can severely handicap rapid emergency response, the longer-term fundamental quality of life, and a region’s economic foundations, with effects that can ripple throughout the economy. The impacts are even more pronounced in highly developed economies than in less-developed ones.

Figure 11-23: Lifeline utility systems components in the Los Angeles, CA, region (Credit: MCEER)

The resilience of lifeline systems is measured in terms of system robustness or strength and the rapidity with which services are restored following a disaster. MCEER’s solution is a new generation of lifeline systems that are more resilient to earthquakes and other disasters. With a specific focus on electric power transmission networks and water delivery systems, MCEER researchers have developed and deployed a Comprehensive Model for Integrated Electric Power Systems and a Comprehensive Model for Integrated Water Supply Systems based on one of the nation’s largest metropolitan areas, Los Angeles, California (Figure 11-23). Both models incorporate fragility and other data from experimental testing and analyses of the seismic behavior and functionality of various utility system components, as well as interdependencies between the two systems. The resulting decision-support systems have been deployed by the Los Angeles Department of Water and Power (LADWP), where they enhance system-wide planning and engineering.

Toggle Brace System: During the 1990s, MCEER researcher Michael Constantinou worked with a center industrial partner, Taylor Devices, to develop a toggle-brace concept for building reinforcement. The company then designed and built a toggle-brace system, donating one to MCEER for use in investigating the advantages of mechanisms to leverage damping. Continuing research by Constantinou on how to make the concept more efficient then led to the development of the scissor-jack brace concept (Figure 11-24).

Figure 11-24: Three-story Olympic Committee building in Cyprus, employing the scissor-jack brace. (Credit: MCEER)

This latter concept is of great interest to practicing engineers because of the more open bay configurations it allows. Three 38-story buildings with toggle braces were constructed (in Boston and San Francisco) only a few years after development of the concept. Considering that the scissor-jack brace is the natural successor to the toggle brace, the future of the newer system is even more promising.

Fifty-two scissor-jack braces built by Taylor Devices were used in the Olympic House building on the island of Cyprus, completed in July 2006. This first implementation is particularly noteworthy given that the preparation and filing of national and international patents on the scissor-jack (in March 2001) prevented broader dissemination of research results prior to that time.

Over its ten years of association with MCEER, Taylor Devices reported that its sales of seismic products such as brace dampers grew from $4M to $8M.

11-B(g) Microelectronics, Sensing, and IT

ERC for Compound Semiconductor Microelectronics (CCSM), University of Illinois at Urbana-Champaign, Class of 1986

Variable-Temperature, Mechanically Stable Scanning Tunneling Microscope: When atomic resolution scanning tunneling microscopy (STM) was invented in 1982, the properties of surfaces could be studied with three-dimensional atomic-scale resolution. One of the CCSM’s researchers, Joseph W. Lyding, invented a new STM that was radically different from earlier STM designs. Thermal drift of the sample was several orders of magnitude lower than that of other designs, and it was so mechanically stable that atomic resolution images could be obtained with no need for vibration isolation. Earlier STMs were large—typically 12 feet tall and 4 feet square—with almost all that volume dedicated to vibration and thermal isolation. The ERC’s new design fit in the palm of a hand. The CCSM’s STM design was patented and licensed to three companies at that time (1992).

Data Storage Systems Center, Carnegie Mellon University, Class of 1990

Perpendicular Magnetic Recording: For the first two decades of personal computing, hard disk drives (HDDs) used longitudinal (flat) recording technology to record data. But by the mid-1980s, the areal density of recording was limited by head-to-media spacing. Meanwhile, in 1978, Shunichi Iwasaki in Japan had invented Perpendicular Magnetic Recording (PMR), which, in theory, offered higher areal densities. This approach was examined seriously by most companies in the magnetic recording industry, but at the time it was more difficult to achieve low head-to-media spacing with perpendicular media. Also, switching to PMR would have required a significant capital investment, because new tools would be needed for media deposition. Because of these constraints, the hard drive companies decided that the path of least resistance to increasing density was to continue scaling longitudinal recording. Then, in 1995, Prof. Stan Charap, working in the DSSC, pointed out that continuing to scale HDDs would result in reaching the superparamagnetic limit at 36 gigabits per square inch. DSSC center director Mark Kryder, working with the National Storage Industry Consortium that he had helped to found, held a workshop in which the participants determined that higher areal densities could be achieved either by reducing the bit aspect ratio of the recordings or by switching to perpendicular recording. The path of least resistance for the industry was to reduce the bit aspect ratio because, again, that did not require a significant investment in capital equipment. Over the next few years the bit aspect ratio was reduced from 20:1 to 6:1 as the areal density approached 100 gigabits per square inch—but that was nearing the new limit of density due to superparamagnetism.
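The limit itself follows from a standard thermal-stability criterion (a textbook rule of thumb, not a DSSC-specific formula):

```latex
\[
  \frac{K_u V}{k_B T} \;\gtrsim\; 40\text{--}60
\]
% K_u: anisotropy energy density of the medium; V: magnetic grain volume;
% k_B T: thermal energy. Higher areal density means smaller grains (smaller V),
% which forces K_u upward to keep recorded bits stable for roughly a decade --
% until the write head can no longer switch the medium. Perpendicular
% recording eased the constraint with thicker grains and stronger effective
% head fields; HAMR (discussed below) sidesteps it by momentarily heating the
% medium so that a very high-K_u medium can still be written.
```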

In 1998, Kryder left the DSSC and CMU and joined Seagate as its Senior Vice President of Research and Chief Technology Officer. Seagate had located its research operations in Pittsburgh to be near the DSSC. At Seagate, Kryder decided to pursue PMR, since it was not subject to this limitation. By early 2001 the Seagate Research team had demonstrated 100 gigabits per square inch. When it became apparent that the signal-to-noise ratio of perpendicular media would exceed that of longitudinal media, Seagate transitioned the technology into a product in late 2005. By 2013, over 500 million HDDs were being sold per year in the industry, all of which used perpendicular recording.

Essentially, the DSSC carried out early research on PMR so that it was familiar with the technology, then identified the upcoming superparamagnetic limit, and then assumed leadership in showing that changing the bit aspect ratio and moving to perpendicular recording would allow the industry to continue increasing areal density. A 2005 Scientific American article noted that magnetic disk areal storage density had been doubling approximately every two years. The authors dubbed this observation “Kryder’s Law,” by analogy with Moore’s Law—the doubling of transistor count roughly every two years—described earlier.[25]

In 2014, Iwasaki and Kryder were jointly awarded the Benjamin Franklin Medal in Electrical Engineering “For the development and realization of the system of Perpendicular Magnetic Recording, which has enabled a dramatic increase in the storage capacity of computer-readable media.”

Heat Assisted Magnetic Recording (HAMR): Research conducted at the DSSC was also critical to the development of HAMR, which many believe will be the successor to perpendicular recording, now that PMR, too, is approaching its superparamagnetic limits. HAMR is essentially an extension of perpendicular recording, since it uses perpendicular media; the difference is that it uses heat from a laser to assist during the record process.

The DSSC’s involvement in HAMR dates to a paper published in the Journal of Applied Physics in 1993. Between the publication of that article and 2001, the DSSC continued to work on the technology. When Kryder went to Seagate and successfully demonstrated perpendicular recording, he began to look at other technologies to pursue; one of those was HAMR. In order to obtain more funding for HAMR work, Seagate wrote a proposal to the Advanced Technology Program sponsored by NIST. The proposal was funded for five years. Tim Rausch, a DSSC graduate student, had begun working on HAMR under the direction of Prof. Ed Schlessinger, and Seagate funded research by both of them beginning in 2001, eventually hiring Rausch to work on HAMR at Seagate. By 2007, Seagate was well on the road to demonstrating HAMR at 250 gigabits per square inch. By 2012, Seagate announced that it had achieved a recording density of 1 terabit per square inch using HAMR.

The Seagate website currently (August 2018) describes the status of HAMR as “pilot production,” with volume production tools online. At present, Seagate is the only drive manufacturer able to ship HAMR. Assuming that manufacturing methods can be found to make bit-patterned media cost effective, areal densities of 10 terabits per square inch are possible using HAMR. This capacity should enable hard disk storage to continue its role as the main storage technology for the cloud for at least the next decade, with HAMR taking over as the hard disk technology of choice for the future. Although solid state drives (SSDs) are rapidly replacing hard drives in consumer computers, hard drives continue to dominate in the cloud because they offer considerably lower cost than SSDs, and HAMR will enable that to continue.

The hard disk drive market is currently about $30 billion per year and is likely to continue at about that size, according to Kryder. He believes that there is still potential for HDDs to increase areal density by a factor of 25, which would reduce the technology’s cost per terabyte by roughly the same factor. With HAMR being introduced, cost will decline even faster. Even using perpendicular recording, in 2018 Seagate announced a 14-terabyte drive. HAMR is expected to achieve 20 TB by 2020 and 40 TB or higher by 2023, with a 30 percent compound annual growth rate in areal density.[26]
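These projections are internally consistent; under the stated 30 percent compound annual growth rate in areal density,

```latex
\[
  20\ \mathrm{TB} \times (1.30)^{3} \;=\; 20 \times 2.197 \;\approx\; 44\ \mathrm{TB},
\]
% which matches the projected "40 TB or higher" for 2023, three years after
% the 20 TB point in 2020. Likewise, a 25x gain in areal density at roughly
% constant drive cost implies roughly 25x lower cost per terabyte.
```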

Nickel-aluminum (NiAl) Underlayer in High-density Media: Probably the single most widely-recognized invention of the DSSC was the NiAl underlayer that enabled high-density media on glass substrates. This in turn was an enabling technology for making high-capacity drives for laptops and MP3 players, such as the Apple iPod (Figure 11-25). NiAl was known in bulk form before DSSC researchers David Lambeth and David Laughlin decided to use it as an underlayer for hard disks, but no one had sputter deposited it in thin-film form, with the crystalline texture and micro-structure that they achieved. Hence, in a way it was a new material.

The NiAl underlayer was used for essentially all mobile-device hard drives until roughly 2006, when perpendicular recording replaced longitudinal recording in order to get around the superparamagnetic limit, which was first identified by Prof. Stan Charap in the DSSC (see above). With perpendicular recording, an entirely different underlayer was necessary, and that material was not developed in the DSSC. Because it was a widely deployed enabling technology, the commercial value of this invention is not possible to specify, but cumulatively it was well into the hundreds of billions of dollars worldwide.

Figure 11-25: A DSSC advance in materials enabled small, high-capacity drives for laptops and MP3 players. (Credit: Apple Inc.)

Gordon Center for Subsurface Sensing and Imaging (CenSSIS), Northeastern University, Class of 2000

Tomosynthesis Acceleration: Nearly 200,000 women are diagnosed with breast cancer in the United States each year; about 40,000 die from the disease annually. Although traditional mammography screening reduces the mortality rate by 25 percent, it triggers false positives that require additional testing, adding cost, anxiety, and frustration. Digital Breast Tomosynthesis (DBT), sometimes called “next-generation mammography,” overcomes the callback costs associated with mammography and likely increases sensitivity by providing three-dimensional breast imaging for cancer detection and evaluation. Tomosynthesis was developed by Massachusetts General Hospital (MGH) and General Electric (GE). The potential impact is a much-improved imaging technology at significantly lower cost.

Its introduction into clinical use was delayed by one major bottleneck—the long time needed to reconstruct the image from the raw x-ray data. To address this problem, CenSSIS undertook a project on Tomosynthesis Acceleration on behalf of the MGH breast imaging group. The Center’s researchers developed a parallelized version of the serial maximum-likelihood reconstruction algorithm, which reduced the execution time of the algorithm from 4 hours to less than 5 minutes. The parallelized code was used in clinical trials of the tomosynthesis device at MGH. The project was so important to MGH that the hospital provided support for a CenSSIS Ph.D. student to continue work on acceleration. The project team also collaborated with Mercury Computer, which provided a high-performance computing unit for MGH to use in clinical trials. Hologic obtained FDA approval to market a commercial product based on DBT, called Selenia Dimensions, the first commercially available system to provide 3D breast tomosynthesis. GE then introduced SenoClaire, its 3D breast tomosynthesis system. Both products are now widely used in breast imaging.
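The algorithmic core of that speedup is the iterative maximum-likelihood (MLEM) update, which is naturally data-parallel: each iteration forward-projects the current image estimate, compares it with all measured projections, and back-projects the ratios. The following NumPy sketch shows the update on a toy dense system; it illustrates the algorithm family, not the MGH/CenSSIS code, which applies real projection geometry and distributes the work across processors:

```python
# Toy sketch of the multiplicative MLEM update used in iterative
# tomosynthesis reconstruction. A small dense matrix A stands in for the
# x-ray projector; NumPy vectorization processes every measurement at once.
import numpy as np

rng = np.random.default_rng(0)
n_voxels, n_rays = 64, 256
A = rng.uniform(0.0, 1.0, (n_rays, n_voxels))     # toy projector
x_true = rng.uniform(0.5, 2.0, n_voxels)          # "tissue" densities
y = A @ x_true                                    # noiseless measurements

x = np.ones(n_voxels)                             # flat initial estimate
norm = A.T @ np.ones(n_rays)                      # sensitivity normalization
for it in range(200):
    forward = A @ x                               # forward-project estimate
    x *= (A.T @ (y / forward)) / norm             # multiplicative MLEM update

err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
print(f"relative reconstruction error after 200 iterations: {err:.3f}")
```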

Extreme Ultraviolet (EUV) ERC, Colorado State University, Class of 2003

Coherent X-rays: A half century ago the first laser was demonstrated. That device has since been of huge benefit to society. The same revolution that happened decades earlier for visible light sources is now happening for X-rays, which can penetrate thick samples, image small objects, and have the added advantage of elemental and chemical specificity.

By enabling a type of coherent X-ray “super-continuum” that spans the electromagnetic spectrum from ultraviolet (UV) to the keV (thousand electron-volt) region, EUV ERC researchers advanced nonlinear (laser-like) optics to an extreme not considered possible before. These coherent beams represent a modern laser-like, tabletop version of the vintage Roentgen X-ray tube, but in the soft X-ray region. (See Figure 11-26.)

Figure 11-26: Young’s Double-Slit interference patterns showing that the high harmonic keV X-ray beams are spatially coherent. (Credit: EUV ERC)

The X-ray radiation spans a region of the X-ray spectrum known as the “water window” that is useful for taking ultrahigh-resolution X-ray images of single living cells or nanostructures. By offering X-rays that are the ultimate “strobe light,” coherent X-ray beams promise revolutionary new capabilities for understanding and controlling how the nanoworld works on its fundamental time and length scales. This knowledge is highly relevant to next-generation electronics, data and energy storage devices, and medical diagnostics. The unique ability of broad bandwidth, ultra-fast X-rays to probe functioning at multiple atomic sites simultaneously is already uncovering new understanding of how electrons, spins, phonons, and photons behave at spatial-temporal limits.

Mid-InfraRed Technologies for Health and the Environment (MIRTHE), Princeton University, Class of 2006

Quantum Cascade Laser: MIRTHE focuses on the development of mid-infrared (3-30 μm) optical trace gas sensing systems based on new technologies such as quantum cascade (QC) lasers or quartz enhanced photo-acoustic spectroscopy. These tools have the ability to detect and identify minute amounts of chemicals found in the environment or atmosphere, emitted from spills, combustion, hidden materials, or natural sources, or exhaled from the human body.

One of the obstacles to broader applications of single-mode QC lasers has been their lack of wide tunability. The MIRTHE center overcame this drawback and established a new record in continuous single-mode external-cavity tuning of a QC laser. Soon after, a collaboration between MIRTHE academic and industry partners led to the development of a new design for power-efficient QC lasers. QC lasers are based in large part on cascaded optical transitions between subbands. Because of the nature of the process, the devices are not highly power-efficient: the cascading process involves non-radiative energy loss, the intersubband transition has a strong non-radiative component, and optical losses are high. To overcome these inefficiencies, Center collaborators leveraged work done on the Defense Advanced Research Projects Agency (DARPA) Efficient Mid Infrared Laser program and employed new quantum design strategies to raise the lasers’ wall-plug efficiency from under 35% to well over 45%. The collaborators’ work was published in the January 2010 issue of the journal Nature Photonics.[27]

Center for Integrated Access Networks (CIAN), University of Arizona, Class of 2008

Holographic 3-D Video: Researchers at CIAN moved a step closer to projecting 3D video images in near real-time—images that would not require special eyewear to view. A key breakthrough was the development by CIAN of a novel polymeric material capable of rapidly displaying many images. This advance promises to revolutionize video projection—a more important improvement than the advancement from standard TV projection to high definition. The system is the first with enough computing power and a display medium that can project a near real-time video image in holographic stereo (Figure 11-27). The CIAN system has the potential for transmitting a human-size, full-color 3D image across the world for videoconferencing that would mimic in-person meetings. Nasser Peyghambarian (CIAN Director) and his team including Lloyd LaComb (CIAN ILO) have been working with Michael Bove and his team at MIT’s Media Lab and Daniel Smalley and his group at Brigham Young University to advance the technology. “This advance brings us a step closer to the ultimate goal of realistic holographic telepresence with high-resolution, full-color, 3-D images that can be sent at video refresh rates from one part of the world to the other,” said Peyghambarian, who estimated in 2012 that it would take another decade of research to get closer to commercializing this technology.

Figure 11-27: An image of an F4 Phantom fighter jet created with CIAN’s 3D telepresence system. (Credit: CIAN)

The research team reported the breakthrough in the cover story of the Nov. 4, 2010, issue of Nature.[28] The new display can refresh images every two seconds. While not yet ideal for video, this rate is more than one hundred times faster than the previously demonstrated rate. Further improvements could bring applications not only in videoconferencing but also in telemedicine, advertising, updatable 3D maps, and entertainment—where the concept of 3D holographic telepresence attracted considerable public interest when it was depicted in the original Star Wars film in 1977, with Princess Leia appearing on a tabletop to talk with other characters.

11-B(i) Nanosystems

Nanomanufacturing Systems for Mobile Computing and Mobile Energy Technologies (NASCENT) ERC, University of Texas at Austin, Class of 2012

Nanowires: In 2015, the NASCENT Nanosystems ERC (a NERC) produced diamond-shaped silicon nanowires with sharp corners. These novel nanowires can provide more than 90% more storage capacity than current devices and may increase capacitance 10-fold. Such nanowires can be used in applications ranging from ultra-high-sensitivity sensors to high-density energy storage devices such as capacitors and batteries.

The nanowires are produced using nanoshape imprinting followed by imprint-assisted Metal Assisted Chemical Etching (iMACE), an anisotropic wet-etch technique that uses hydrogen peroxide, hydrogen fluoride, and distilled water and that can be scaled up to fabricate wafer-scale devices with high throughput and high yield. Imprint lithography is used to define a gold mesh with diamond-shaped holes, which acts as the catalyst for iMACE. The process includes fabricating large arrays of diamond-shaped nanowires, which can be used as a parallel connection of many surround-gate metal-oxide-semiconductor capacitors. The method relies on atomic layer deposition to deposit the dielectric (hafnium oxide, an electrical insulator) and a titanium nitride (TiN) top-gate metal. The capacitance of the diamond nanowires is roughly double that of circular nanowire capacitors.
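A first-order way to see why tall surround-gate nanowires raise capacitance per unit of chip area is the ideal coaxial-capacitor formula (an approximation for intuition, not the NASCENT device model):

```latex
\[
  C \;=\; \frac{2\pi \varepsilon_0 \varepsilon_r\, h}{\ln\!\left(r_o / r_i\right)}
\]
% h: wire height; r_i: wire radius; r_o = r_i + t_ox: outer radius of the
% high-permittivity HfO2 dielectric (\varepsilon_r on the order of 20).
% Capacitance grows linearly with wire height, so a dense array of tall,
% high-aspect-ratio wires multiplies the capacitance available per unit
% footprint; the reported ~2x advantage of the sharp-cornered diamond
% cross-section over circular wires comes on top of this geometric gain.
```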

Nanosystems Engineering Research Center (NERC) for Translational Applications of Nanoscale Multiferroic Systems (TANMS), University of California at Los Angeles, Class of 2012

Nanocrystal Memory: Researchers at the TANMS Nanosystems ERC in 2016 developed a new type of magnetoelectric memory device that represents a greater than 10x improvement in both write energy efficiency and scalability. A major recent driver of this accomplishment is the researchers’ progress in advancing a technology known as voltage-controlled magnetic anisotropy (VCMA), which can enable writing of information on extremely small nanocrystals. Researchers have observed electric-field control of magnetization (required to write information) in both 9 nm and 5 nm diameter iron palladium (FePd) nanocrystals—5 nm being the smallest in which voltage control of magnetization has been seen.

This work is a major step toward satisfying current demands for fast, low-power memory as well as the need for high storage densities (and correspondingly small bit sizes) in memory systems for personal electronic devices. The VCMA advances show promise in overcoming one of the major stumbling blocks to achieving high densities: low write efficiencies, which produce excessive heating and thereby prevent further miniaturization in consumer electronics. The VCMA results, combined with TANMS’ demonstration of a 20x reduction in write energy, provide a new pathway toward energy-efficient, compact computing technologies. The TANMS work described here shows that control of magnetization with an electric field is feasible in nanocrystal-based systems.

ERC for Advanced Self-Powered Systems of Integrated Sensors and Technologies (ASSIST), North Carolina State University, Class of 2012

Self-powered Wearable Devices: Making flexible devices that can tap into the human body as an energy source is crucial to developing wearable, long-lasting sensors that can monitor someone’s health and surrounding environment. Harvesting electrical energy through piezoelectricity, using forces generated by body movements to place strain on a material and thereby power wearable devices, is a central focus of research at ASSIST (Figure 11-28).

Figure 11-28: The use of standard components, liquid metal, and a flexible elastomer will facilitate the fabrication of large-area, flexible thermoelectric generators to power wearable devices. (Credit: ASSIST)

Researchers have shown that thicker piezoelectric films can generate greater power, so they have developed a technique for depositing the film at twice the thickness that was previously workable, with excellent properties for generating piezoelectric power. Another ASSIST innovation is the use of a liquid metal alloy of gallium and indium to make it feasible to use electronic component “legs” that are already manufactured in bulk to connect components of flexible, wearable devices, while maintaining the devices’ flexibility.
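Why thickness matters can be illustrated with a simple idealized model (ours, not a published ASSIST analysis): for a piezoelectric plate under fixed stress, the open-circuit voltage rises linearly with thickness while the capacitance falls inversely with it, so the energy available per strain cycle, and hence the harvested power at a given motion frequency, scales linearly with thickness. A minimal sketch, with all parameter values hypothetical:

    # Minimal sketch, assuming an idealized piezoelectric plate harvester:
    # V_oc = g31 * stress * t and C = eps * A / t, so energy per strain cycle
    # U = 0.5 * C * V_oc^2 grows linearly with film thickness t.
    # All parameter values below are hypothetical, chosen only to show the trend.
    EPS0 = 8.854e-12                                  # vacuum permittivity, F/m

    def energy_per_cycle(t_m, area_m2, stress_pa, g31, eps_r):
        v_oc = g31 * stress_pa * t_m                  # open-circuit voltage, V
        cap = eps_r * EPS0 * area_m2 / t_m            # plate capacitance, F
        return 0.5 * cap * v_oc ** 2                  # stored energy, J

    u1 = energy_per_cycle(1e-6, 1e-4, 1e6, 5e-3, 10)  # 1 um film
    u2 = energy_per_cycle(2e-6, 1e-4, 1e6, 5e-3, 10)  # 2 um film
    print(f"Doubling thickness raises energy per cycle {u2 / u1:.1f}x")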

The piezoelectric system is just one of several energy-harvesting approaches being developed at ASSIST, whose self-powered platforms will also include solar and thermoelectric harvesters. Combining these sources is expected to significantly increase the amount of energy that can be generated to power the ASSIST devices.

11-B(j)  Advanced Materials

Revolutionizing Metallic Biomaterials (RMB), North Carolina A&T State University, Class of 2008

Bioabsorbable Implants: Every year, millions of patients receive surgical implants, typically made of nonabsorbable titanium or stainless steel, that often require painful and costly second surgeries to remove. Implants made of bioabsorbable polymers can be absorbed by the body once they have served their purpose and allowed the bone to regrow; but those currently available are not always strong enough to retain the bone geometries surgeons design and set.

The RMB ERC conducts collaborative research to transform current medical and surgical solutions by creating “smart,” biodegradable implants, offering improved treatments for orthopedic, craniofacial, neural, and cardiovascular conditions. Promising RMB research is focused on a bioabsorbable alloy that has twice the strength of commercially available orthopedic implants (Figure 11-29). BioMg® 250 is made of magnesium alloyed with small amounts of elements (zinc, calcium, and manganese) that occur naturally in the body and are essential nutrients for stimulating new bone growth. The alloy, developed in collaboration with RMB research partner nanoMAG LLC, a member of RMB’s Industrial Advisory Board, has enabled the company to move BioMg 250 out of the laboratory and into the surgical suite for continued research. Dr. Charles Sfeir, Director of the Center for Craniofacial Regeneration at the University of Pittsburgh, is leading the team conducting histology evaluations of the BioMg implants after they are harvested from small-animal studies. He noted in 2013: “There is still a significant amount of testing and certification that needs to be conducted to meet FDA regulatory requirements, but this new class of materials could be a disruptive game changer in the field of orthopedic surgery.”[29]

Figure 11-29: Bioabsorbable magnesium screws release essential nutrients critical for bone healing as they dissolve. (Credit: RMB)

The current annual market for implants exceeds $4 billion, and the costs of secondary operations exceed $500 million.

University of Washington Engineered Biomaterials (UWEB), University of Washington, Class of 1996

UWEB focuses on exploiting specific biological recognition mechanisms to develop a new generation of biomaterials for medical implants that will heal in the body in a facile, physiologically normal manner. UWEB has spun out a number of companies capitalizing on the Center’s research. One of these is Healionics, incorporated in March 2005. Healionics began with a research breakthrough: the discovery of a “sweet spot” in pore size and geometry that allows a biomaterial to promote the body’s acceptance of implanted medical devices. The result, based on UWEB technology, is known as STAR (Sphere Templated Angiogenic Regeneration) biomaterial, a precision-engineered three-dimensional scaffold designed to heal around a medical device and promote its acceptance in the body. (See Figure 11-30.)

Figure 11-30: SEM image of crystalline pattern on STAR material surface (Credit: Healionics)

In March 2009, Healionics announced the sale of its first commercial product featuring the STAR biomaterial: TR-ClarifEYE, an innovative veterinary glaucoma implant marketed by TR BioSurgical, LLC (TRBIO) and launched in April 2009. In addition to the veterinary ophthalmic market, Healionics has numerous research agreements in place to evaluate the STAR biomaterial in human applications including cosmetic surgery, obesity management, diabetes care, advanced wound care, chronic pain management, end-stage renal disease, and long-term infusion care. The technology is also being developed further for use in bone repair and in percutaneous access devices.

11-B(k) Neuroscience/Neuroengineering

ERC for Wireless Integrated Microsystems (WIMS), University of Michigan, Class of 2000

WIMS focuses on miniature, low-cost integrated microsystems capable of measuring (or controlling) a variety of physical parameters, interpreting the data, and communicating with a host system over a bi-directional wireless link. As such, the Center addresses the intersection of microelectronics, wireless communications, and microelectromechanical systems (MEMS). This research has spanned many focus areas over the years, including the following.

  • Development of an advanced 32-site cochlear implant (later expanded to a 128-site human array) by joining thin-film electrode arrays with backing structures that permit deep insertion into the scala tympani, positioning the electrodes close to auditory neurons along the length of the cochlea.
  • Spinoff of WIMS research on sensor technology for water chemistry into a company, Sensicore, which develops smart sensors and sensor networks that automate water testing, data collection, and analysis for both drinking water and wastewater applications.
  • Evolution of CMOS MEMS resonator research into a spinoff company, Discera, which commercialized resonators used to create the industry’s most advanced and economical frequency-control and RF circuits.
  • Founding of Mobius Microsystems by a WIMS graduate student who leveraged the Center’s broad design expertise in the RF and analog domains. Mobius became a leader in all-silicon clock-generation technology and commercial products using standard CMOS processes; its products enabled lower power consumption and lower total product cost through greater circuit integration, improved performance, and faster time-to-market. In 2010, Mobius was bought by Integrated Device Technology, Inc. (IDT), a leading provider of mixed-signal semiconductor solutions.
Minds Over Matter
The Director of the University of Michigan-based WIMS ERC, Ken Wise, recalls that “In the cochlear (research) area, I had three female doctoral students. Pamela Bhatti went to Atlanta and is now a Professor at GaTech. Jianbai Wang won the Best Paper (Roger Haken) Award at the 2005 IEDM and went to Texas Instruments in the Digital Mirror Device (DMD) area. Then she and her husband moved to the Bay Area and she got a law degree. She’s now a patent attorney at Morgan Lewis in Santa Clara, California. And Danielle Merriam started in the cochlear area but came in one morning and said she’d decided to become a nun! After a few years (staying in touch occasionally), she called one morning and said her superiors wanted her to go back to school, so she came back and, with the cochlear positions filled, worked on cortical electrode arrays. She did a great job but couldn’t do the processing herself because she couldn’t get the cleanroom bunny suit over her habit! After completing her doctorate, Sr. Mary Elizabeth Merriam is now head of the science department at St. Dominic Savio Catholic High School in Austin, TX. So that’s a few of the crew that I had. I’ve kept in close touch with almost all of my doctoral students, and as I’ve gotten older I’ve realized more and more that it’s not so much the research breakthroughs that are important—important though they are—but rather the people you work with and the impact it has on their lives. That’s one of the reasons I’m so glad I got to be a professor, teacher, and engineer.”
Figure 11-31: N2T neural probes are fabricated in a variety of shapes. (Credit: N2T)

NeuroNexus Technologies (N2T) was formed in 2004 by a WIMS researcher to commercialize neural probe technologies that were developed over nearly two decades of research by Wise and his students in the College of Engineering at Michigan, including at WIMS (Figure 11-31). N2T licensed this platform technology from the university and developed it into a franchise of neural probe systems for medical and scientific applications. N2T’s products provide microscale electrical and chemical interfaces with the brain that meet demanding application requirements in cost-effective configurations, including any 2-dimensional shape.

11-C    Industrial/Economic Impacts

Economic impacts of academic research are notoriously difficult to quantify, or even to estimate. Even in the case of ERCs, which carry research beyond the fundamental level farther toward technology development than is traditionally the case, commercialization of ERC-developed technologies can take years to come to fruition. Patents can be transferred from company to company. Key enabling technologies can be buried inside products as software and subsystems that are hard to differentiate, and that in many cases have been further developed or customized to suit a company’s needs. In addition, companies are often reluctant or unwilling to provide market data on product sales or other proprietary information.

For these reasons there have been few attempts to gauge the economic impact of the ERCs in any quantitative way, beyond the kinds of anecdotal and subjective examples given in the preceding section. However, over the course of the ERC Program to date, there have been three significant attempts to do so. These are summarized in this section.

11-C(a) Economic Impact Studies

i. Economic Impact on Georgia

In 2004, the Georgia Research Alliance (GRA) asked SRI International’s Center for Science, Technology, and Economic Development to conduct a study of the impact that the Microsystems Packaging Research Center (PRC), an ERC at the Georgia Institute of Technology (Georgia Tech), had on the economy of Georgia. The study noted that during the 10-year life of the ERC as an NSF-funded center, Georgia invested $32.5 million in the PRC through both Georgia Tech and the GRA.[30] The question to be addressed was: What was the return on that investment to the state?

The PRC had attracted large amounts of cash and in-kind support from sources outside Georgia, beginning with NSF’s core support for the ERC. The study found that the PRC’s expenditures on research, education, and related activities had led to a variety of direct economic impacts on the state, including benefits to Georgia firms that had interacted with the PRC and cost savings and other benefits to Georgia firms that had hired PRC graduates. SRI’s analysis of these impacts showed a total, quantifiable direct impact on Georgia of nearly $192 million over the 10-year life of the center at that point. They also found that the indirect or “ripple” effects of the Center over that period had amounted to $159 million, so that the total quantifiable impact of the PRC’s existence was estimated to be $351 million over 10 years—a more than 10:1 return on Georgia’s investment.
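The arithmetic behind that headline ratio is easy to verify from the figures just cited; a minimal check:

    # Checking the SRI study's bottom line, using the figures reported above.
    state_investment = 32.5e6   # Georgia's investment in the PRC over 10 years, $
    direct_impact = 192e6       # quantifiable direct impact, $
    ripple_impact = 159e6       # indirect ("ripple") effects, $

    total_impact = direct_impact + ripple_impact
    print(f"Total impact: ${total_impact / 1e6:.0f}M; "
          f"return on investment: {total_impact / state_investment:.1f}:1")
    # Prints a total of $351M and a ratio of about 10.8:1, i.e., "more than 10:1."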

Part of the direct impact was in job creation, either through the formation of new programs by existing companies or through startup companies spun off from or catalyzed by the PRC. SRI estimated the value of these “direct” jobs at more than $18 million (396 job-years over ten years); in addition, they estimated that an average of about 343 other jobs per year were supported in Georgia through the supplying of goods and services, directly or indirectly, to the PRC and its employees.

Since the study was conducted at the end of the PRC’s 10-year life as an NSF-funded ERC, it was not possible to quantify the economic impacts that the Center would continue to have as a self-sustaining center—and in fact, as of 2018 the PRC was still operating quite actively. However, SRI predicted that the ensuing years would see sustained regional economic growth due to the PRC in the form of start-ups, spin-ins, intellectual property, and human capital. They projected that, post-NSF, the PRC would devote increased attention and resources to fostering startup companies and realizing the commercial potential of new technologies based on PRC research, so that “by 2014 the PRC’s economic impact on Georgia will exhibit a different, more commercially-oriented pattern, and that its value will exceed the $350 million mark established in its first decade.”[31]

ii. SRI Pilot Study of ERC Economic Impact

In 2006, the ERC Program commissioned SRI International’s Center for Science, Technology, and Economic Development to conduct an experimental or “pilot” Program-wide study of the economic impact of ERCs.[32] NSF asked SRI to apply a modified version of the approach used for the Georgia Tech study described in the preceding section to cover the national and regional economic impacts of five ERCs. The hope was that the results of the pilot study would demonstrate a method that could be used to document the Program’s economic benefits to both states and the nation.

Figure 11-32: Estimated regional economic impact of three ERCs (Credit: SRI)

As in the Georgia study, the economic impacts of centers at or near the end of their NSF-funded term of ERC support were examined. SRI’s report in 2010 was, in the end, based on data collected for three of the five ERCs—Caltech’s Center for Neuromorphic Systems Engineering (CNSE), Virginia Tech’s Center for Power Electronics Systems (CPES), and the University of Michigan’s Center for Wireless Integrated Microsystems (WIMS). The regional and national impacts varied widely across the three ERCs, for a variety of reasons that made comparison of the results across centers “complicated.” Figure 11-32 illustrates this disparity, which was even more pronounced for national impacts than for regional impacts.

The study authors concluded with a number of “lessons learned” regarding this type of economic impact study:

  1. Despite the apparent value of waiting as long as possible in the history of an ERC before attempting to measure its economic impact, it is clear that such efforts should be made well before termination of NSF ERC Program support, since data are unlikely to exist following a center’s graduation.
  2. There probably is no optimum time to attempt to measure ERC impacts.
  3. The economic (and probably other) impacts of ERCs should not be compared across ERCs or against “standard” performance measures.
  4. In ERC impact studies, focusing on narrowly-conceived, quantifiable economic impact data alone should be avoided.

The authors emphasized in particular the drawback of assessing impact at 10 or fewer years after the founding of an ERC, recognizing that technologies usually take longer than that to reach commercial fruition in industry. They suggested that “using the ’nuggets’ approach to capture the top, say, 10% of ERC-based innovations that have generated new product sales with cost savings associated with them would be an appropriate and credible approach. We have no doubts that the results would show a highly positive outcome.” This is the approach since adopted by the ERC Program, with the resulting “nuggets” published on the ERC Association website each year.

iii. SciTech Communications “Innovations” Report

In late 2006, the then-new Director of the NSF Engineering Education and Centers Division, Allen Soyster, asked Lynn Preston and Court Lewis, “Do we have any idea what the return on investment of the entire ERC Program has been?” The answer was, “Other than a few major commercial ‘home runs,’ not really.” Soyster’s question launched an effort by Lewis, the communications contractor for the ERC Program, to see if more could be learned. SciTech Communications LLC (STC) was tasked with conducting an economic-impact study similar to those just described, but in this case across all ERCs, both current and “graduated,” and without attempting the same methodological rigor. Instead, the study dug deeper into the collection of annually published “nuggets” to see whether it was possible to trace economic impacts that would, collectively, give a broad-brush idea of the return on investment of the ERCs as a whole.

Recognizing the challenge of assessing the value of technologies well downstream from their origins in an ERC, STC took a more investigative, journalistic approach. To track down technologies and their outcomes, STC engaged the recently laid-off technology business editor of U.S. News & World Report’s online section, which had been shut down during the then-ongoing recession. He worked under the direction of STC’s president, Lewis, who had been familiar with ERC personnel and their achievements since the beginning of the Program. Direct interviews, published reports, and business databases were used to obtain estimates of the value of individual technologies and startup companies, including, where possible, jobs created.

The result was an 82-page report containing brief synopses of well over a hundred “ERC-Generated Commercialized Products, Processes, and Startups,” with a mixture of data and best-guess estimates of their economic value.[33] The findings were summarized as follows:

Over the past 25 years, 48 ERCs (including 3 Earthquake ERCs[34]) have received roughly $1 billion in funding from the ERC program. Although specific market data are difficult to obtain, it is clear even from the numbers reported here that the total downstream market value of ERC innovations to the U.S. economy to date is well into the tens of billions of dollars, with some of the most promising technologies just now poised to begin rapid growth in the real-world market for technological goods and services. And the dollar figures do not even address the enormous benefits of those products to the nation and the world in terms of human health, safety and security, industrial and personal productivity, quality of life, and environmental protection.[35]

Since many of the dollar estimates were expressed as ranges, the cumulative total is itself an even wider range; and in some cases dollar values could not be obtained at all. Also, only the technologies and startups judged to be the most impactful were tracked. Given those constraints, STC informally estimated the total commercial value of all ERC outputs at that time, 25 years after the Program’s founding, to be between $50 billion and $75 billion (roughly 50 to 75 times the Program’s cumulative funding)—a leverage ratio they expected to continue increasing over time, given the Program’s increased emphasis on innovation and the continued addition of new centers to the older ones whose outputs were still finding new downstream applications and markets.

11-C(b) Economic Impact Examples

This section presents a sampling, in roughly chronological order by center, of industrial applications of ERC-developed technology in a variety of fields that have had significant economic impact.

Center for Telecommunications Research (CTR), Columbia University, Class of 1985

Leadership in Digital Video: The Center for Telecommunications Research participated in key developments that led to the international standard MPEG-2 for digital video (Figure 11-33). The technology is at the heart of modern digital video production and transmission. It is best known for its central role in Digital Video Discs, better known as DVDs, the plastic platters on which Hollywood has sold and rented billions of movies to consumers around the world.

Dr. Dimitris Anastassiou, a professor of Electrical Engineering at Columbia, is the author of patents deemed essential to the implementation of MPEG-2 as well as the later AVC/H.264 standard, both of which are also used in digital television broadcasting and other means of delivering digital video, such as Internet streaming. Dr. Anastassiou joined Columbia in 1983, and his work was supported by the CTR.

Figure 11-33: MPEG-2 is a compression codec that is used in Digital Video Broadcast and DVDs to transmit and store large video files. (Credit: Columbia University)

Because of the work of Dr. Anastassiou and graduate student Feng M. Weng, and with the support of the CTR, Columbia University emerged as the only academic institution holding a share of the patents used in the MPEG-2 standard. Columbia was one of more than a dozen holders of the 40-plus patents in the pool, a group that included consumer electronics giants Fujitsu, Mitsubishi Electric, Philips Electronics, and Sony. MPEG-2 patent licensing alone has brought more than $100 million in income to Columbia University and was a key factor in the university topping all other U.S. schools in licensing income during the late 1990s.

Beyond its support of research into the codecs at the heart of the compression standards, the CTR also played a central role in the committee work that led to MPEG-2 and later standards. The Center hosted numerous meetings of scientists working to develop the standards and participated in discussions that led to a groundbreaking licensing agreement among the more than a dozen corporate patent holders.

Institute for Systems Research (ISR) (formerly Systems Research Center, or SRC), University of Maryland, Class of 1985

Design of New Chemical Process Plant: Exxon Corporation asked the ERC to incorporate its state-of-the-art systems research into the design and construction of a new chemical processing plant intended to produce a new family of products. Working with Exxon, a team of ERC researchers and students developed new software for reaction modeling and control in the new processing plant, using parameters and data from Exxon’s ongoing pilot plant studies. Process control engineers used the ERC’s software for process analysis, optimization, and debottlenecking in the plant. In addition, the ERC team visited the plant site regularly during its construction to review progress and give training sessions. The ERC’s work with Exxon enabled the new plant to provide safe and profitable operations while meeting strict product quality specifications.[36]

Center for Computational Field Simulation (CCFS), Mississippi State University, Class of 1990

Advanced Vehicles–Nissan: The ERC for Computational Field Simulation was initiated in 1990 as a multidisciplinary academic research center with a mission to reduce the time and cost of complex field simulations for engineering analysis and design, through cross-disciplinary research teams and the use of integrated testbed systems to provide a common focus.

The presence and prestige of the ERC were instrumental in the State of Mississippi’s successful effort to attract what was then the largest Nissan manufacturing plant in North America. After graduating from ERC Program support in 2001, the CCFS diversified into five subsidiary centers, all derived from CCFS research thrusts. One of them—the Center for Advanced Vehicular Systems (CAVS)—focuses on technology for advanced vehicles and supports Nissan’s automotive research. CAVS was created by special legislation that funded a 57,000-square-foot research facility on the MSU campus in Starkville and a 25,000-square-foot CAVS Extension Center adjacent to the Nissan plant in metropolitan Jackson.

An independent assessment by the U.S. Department of Commerce indicated that the CAVS Extension Center alone had over $5.9 billion in economic impact on the U.S. economy and assisted in creating or retaining 4,753 jobs in Mississippi from 2006 to 2018. Additionally, over the same period the Center helped industry realize $44.5 million in cost savings and led to over $200 million in plant and equipment investments.

Former CCFS Center Director Don Trotter, recalling the competitive recruitment effort that led Nissan to choose Mississippi over neighboring Alabama as its new U.S. manufacturing location, notes that, “Without the success of the ERC, it is doubtful that Nissan would have selected Mississippi over Alabama.”[37]

Optoelectronic Computing Systems (OCS) ERC, University of Colorado, Class of 1987

3-D Cinema–ColorLink: The film Avatar introduced a new generation of 3-D cinematic experience to a worldwide audience. The film won the 2009 Academy Award for Visual Effects based in part on its innovative 3-D technology. That technology was licensed by the filmmakers from a Beverly Hills-based company, RealD, and had its roots in research and technology developed in the late 1980s and early 1990s at the OCS ERC.

The OCS center generated a number of inventions related to color display and color projection technology. Researchers Kristina Johnson (then the Center Director) and Gary Sharp partnered in 1995 to spin out some of these innovations into ColorLink Inc., a Boulder, Colorado-based photonics company focused on liquid crystal and tunable optical filter technologies. ColorLink’s goals included improving the quality of projection displays. The company’s technologies found applications with NASA, the Department of Defense, and HDTV manufacturers and, ultimately, at RealD, which bought ColorLink in 2007 (Figure 11-34). The acquisition included ColorLink’s R&D campus in Boulder, its manufacturing facilities in Tokyo and Shanghai, and a broad range of patents covering optical, liquid crystal, and light-based technologies. In addition, RealD acquired ColorLink’s line of 3-D imaging components for entertainment, gaming, industrial, and scientific applications.

Figure 11-34: Innovations made by OCS ERC spinoff ColorLink, based on OCS research, underpin the RealD 3-D technology that helped the digital film “Avatar” win an Academy Award in 2009. (Credit: RealD)

Before acquiring the company, RealD leveraged ColorLink’s optical filter technology as an attachment that allowed a single digital projector to show 3-D movies. The first such film was Chicken Little, which was released in digital 3-D in 2005. This success led to the decision to buy ColorLink. “ColorLink is the engine that drives our technological innovation,” said Michael Lewis, RealD’s chairman and CEO, in 2010. “As 3-D grows in capability, scale, and reach, the innovation out of Boulder becomes even more important to RealD as it works to retain its market-leader position in the burgeoning digital cinema industry.”[38]

Data Storage Systems Center, Carnegie Mellon University, Class of 1990

Nickel-aluminum (NiAl) Underlayer: During and after its life as an NSF-funded ERC, the DSSC made a number of critical advances in storage technology. As described in Section 11-B(g), probably the single most widely recognized invention of the DSSC was the nickel-aluminum (NiAl) underlayer that enabled high-density recording media on glass substrates. Since glass substrates have been used in nearly all small-form-factor (2.5-inch and smaller) hard drives, the NiAl underlayer was an enabling technology for the high-capacity drives found in laptops and MP3 players such as the Apple iPod. Because the technology was so widely deployed until it was superseded in 2006 by perpendicular magnetic recording, its commercial value cannot be specified precisely, but it was in the hundreds of billions of dollars worldwide.

Data Detector: In 1997 a DSSC faculty member, José Moura, and his student, Aleksandar Kavcic, invented a detector technology that increased the accuracy with which hard disk drive circuits read data from high-speed magnetic disks. The patents are in the area of signal processing, as they pertain to extracting the recorded signals from the storage media and playback head. A large fraction of the disk drives made since the mid-2000s and installed in computers, from large servers to small laptops, have used this invention. A major semiconductor company, Marvell Technology Group, infringed the patents, selling a reported 2.3 billion chips incorporating the technology between 2003 and 2012.

In early 2013, Carnegie Mellon University was awarded a large judgment, which was reduced on appeal. Eventually the two sides reached agreement on a $750 million settlement, announced in February 2016. It was the largest payment ever made in a patent case involving a computer science invention, and the second-largest ever over technology patents.
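Dividing the reported figures puts the payment in per-unit terms; the implied royalty below is purely illustrative, since the settlement was a lump sum rather than a court-set rate:

    # Illustrative per-chip arithmetic from the reported figures; the settlement
    # was a lump sum, so this implied per-unit royalty is ours, not the court's.
    settlement_usd = 750e6
    chips_sold = 2.3e9              # chips reportedly sold, 2003-2012
    print(f"Implied royalty: ${settlement_usd / chips_sold:.2f} per chip")
    # -> Implied royalty: $0.33 per chip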

Center for Neuromorphic Systems Engineering (CNSE), California Institute of Technology, Class of 1995

Fingerprint Recognition–DigitalPersona: Founded in 1996 by Vance Bjorn and Serge Belongie, then CNSE undergraduate students, DigitalPersona developed “U. are U.” fingerprint identification technology. Millions of users have benefited from the time savings of this biometric technology, notably buyers of the many Microsoft keyboards and related products that incorporated DigitalPersona hardware (Figure 11-35).

Figure 11-35: DigitalPersona’s fingerprint reader technology, developed at the Caltech ERC, offered a new route to digital security. (Credit: DigitalPersona)

DigitalPersona products reduce “password fatigue” by making it more convenient to open password-protected pages while continuing to ensure privacy and security. The fingerprint reader is specifically designed to be intuitive and reliable. The fingerprint recognition technology allows people to log on to the PC, switch between users, and access favorite online sites at the touch of a finger.

In late 2009, the company reported that its technology had been used by more than 95 million people worldwide. At that time DigitalPersona employed about 90 people in California and Florida and had annual sales of about $24 million, according to Dun & Bradstreet. The company merged with Cross Match in 2014.

Integrated Media Systems Center (IMSC), University of Southern California, Class of 1996

Film Sound: The IMSC’s research in “immersive media” led in many cases to systems with direct commercial applications. For example, IMSC researcher Tomlinson Holman, inventor of film audio technologies—notably the Lucasfilm THX sound system and the world’s first 10.2 sound system—won an Academy Award for Technical Achievement in March 2002 for improvements in motion picture loudspeakers. In 2001 Holman wrote the textbook Sound for Film and Television, which is required reading in many college film courses.

Special Effects Software: Hollywood special-effects house Rhythm & Hues used new “augmented reality” software from the University of Southern California’s IMSC to add computer-generated effects to movies more easily and much faster. Named “Fastrack,” the software cuts tracking time from minutes to just seconds per frame and reduces the need for hand corrections. Fastrack has played a starring role in movies such as X-Men 2, Daredevil, and Dr. Seuss’ The Cat in the Hat.

Extreme Ultraviolet (EUV) ERC, Colorado State University, Class of 2003

Higher-power EUV Lithography: The goal of the EUV ERC has been to advance the technology of small-scale and cost-effective coherent extreme ultraviolet (EUV) sources for use in applications such as nano-fabrication of semiconductors. A major barrier to the adoption of EUV lithography by the semiconductor industry was the need to develop 250-watt EUV lithography sources.

Silicon chip manufacturers have long believed that a source power of 250 watts would be required to achieve a throughput of 125 wafers per hour (WPH). The lithography vendor ASML and Cymer (an EUV ERC industry member that ASML acquired in 2013) had been trying to push the technology to hit that mark, which had been considered the primary roadblock to EUV development in recent years. In July 2017, ASML announced that it could claim this long-elusive milestone—the commercial availability of the first 250-watt EUV source—a goal achieved using EUV ERC-developed technology. EUV, once it achieves the 125 WPH throughput target, offers an economic benefit compared with the high cost of conventional triple- or quadruple-patterning using immersion lithography tools.

Center for Structured Organic Particulate Systems (C-SOPS), Rutgers University, Class of 2006

Continuous Pharmaceutical Manufacturing:  The Center for Structured Organic Particulate Systems focuses on improving the effectiveness of the manufacturing of tablets and other delivery means for pharmaceuticals. A primary mission of the ERC, in partnership with major pharmaceutical companies, is the development of technologies for continuous manufacturing of solid dosage forms (both tablets and capsules). This would allow the pharmaceutical industry to move away from batch processing and into continuous processing, providing greater control over the manufacturing process and the quality of the product while at the same time significantly decreasing manufacturing costs.

C-SOPS partnered with Johnson & Johnson (J&J) to support this leading manufacturer in implementing in-house technologies for continuous manufacturing. J&J invested $15 million in a commercial continuous manufacturing pilot line, based on a C-SOPS design, in Gurabo, Puerto Rico. The ERC is also conducting a project funded by Haldor Topsoe, a Danish company that manufactures catalysts in Houston, Texas, to improve that company’s existing continuous manufacturing systems. C-SOPS is partnering in two projects with the U.S. Army’s ARDEC to support automation of military manufacturing processes ($2 million per year), and with several other pharmaceutical companies to implement continuous manufacturing systems. The continuous manufacturing unit operations at Rutgers are integrated and at full scale, utilizing closed-loop control systems.

In May 2015, Janssen Supply Chain (JSC), part of the Janssen Pharmaceutical Companies of Johnson & Johnson, announced that it was providing over $6 million to expand ongoing research efforts supporting Janssen’s introduction of continuous manufacturing techniques for pharmaceuticals. The funds from JSC will increase research and development efforts at C-SOPS, including those aimed at developing the specially designed continuous manufacturing line at JSC’s facility in Puerto Rico. Chapter 10, Section 10-B(a)viii, elaborates further on the development of C-SOPS’ in-house continuous manufacturing testbed.

Figure 11-36: C-SOPS researchers collaborated with Janssen Supply Chain to develop and commercialize the new continuous direct compression manufacturing line at JSC’s plant in Gurabo, Puerto Rico. (Credit: C-SOPS)

On April 8, 2016, the FDA approved a first-ever change in the manufacturing of tablets by JSC, allowing the company to produce tablets for sale on the continuous manufacturing production line at its facility in Puerto Rico (Figure 11-36). Another C-SOPS industry partner, Vertex, the maker of a breakthrough cystic fibrosis therapy called Orkambi, has been using continuous manufacturing for this drug since it received FDA approval in July 2015. Vertex has the first wet-granulation product, manufactured on a production line developed in-house at Vertex using elements of C-SOPS continuous manufacturing technology; JSC’s Prezista, made on the Inspire line developed in collaboration with C-SOPS, is the first continuous direct-compression product to be approved. The FDA is strongly encouraging others in the pharmaceutical industry to consider similar efforts to improve quality and technology standards across the industry.

Continuous manufacturing enables much faster production and more reliable products through an uninterrupted process. In some cases, manufacturing that takes two weeks or more using batch technology might take only a day using continuous techniques. The process also reduces waste and environmental impact and allows continuous monitoring of quality. More efficient production of quality products can drive down manufacturing costs, possibly resulting in lower drug prices for consumers, and can allow more products to be developed. Continuous manufacturing also allows manufacturers to respond much more quickly to changes in demand, potentially averting drug shortages. FDA’s approval of JSC’s move to continuous manufacturing of tablets marked a significant step toward integrating continuous manufacturing into pharmaceutical production across the industry.

11-D    Perspective

In this chapter an extraordinarily wide range of achievements in fundamental engineering research and technology has been described. Some have helped to launch new fields and entirely new systems approaches in areas as diverse as bioengineering, advanced manufacturing, and nanosystems. Innovative, even revolutionary products in microelectronics, communications, neuroengineering, synthetic biology, and sensing have resulted from the melding of research across disciplines in the context of strategic planning in partnership with industry. It was gratifying to learn in 2010 that the return on NSF’s investment in ERCs was, at that time, at least 50 to 1. In the years since, that leverage of public funds has certainly grown even greater.

In 1987, two years after the first ERCs were launched, Nam Suh, then the head of NSF’s Engineering Directorate, wrote in an article in the journal Engineering Education that “NSF would have achieved its objectives if industrial firms depend on the outputs of these centers for their next move. Then the national network and infrastructure that will be established among these successful ERCs…and industrial firms will become a formidable asset in the 1990s and the twenty-first century.”[39] It is clear from the achievements described in this chapter that the ERC/industry collaboration that is at the heart of these centers did become a formidable national asset, one that continues to produce strong results today.


[1] National Academy of Engineering (1984). Guidelines for Engineering Research Centers: A Report to the National Science Foundation. Washington, DC: National Academy Press, p. 1. [https://doi.org/10.17226/19472]

[2] Engineering Centers Division (1991). The ERCs: A Partnership for Competitiveness, Report of a Symposium, February 28-March 1, 1990. Washington, DC: National Science Foundation. NSF 91-9. p. 1.

[3] NAE (1984). Op. cit., p. vii.

[4] For a discussion of RANN, see Chapter 1, Section 1-B(a).

[5] Year established.

[6] NSF (1990). The ERCs: A Partnership for Competitiveness, Report of a Symposium, February 28-March 1, 1990. Engineering Education and Centers Division, NSF. pp. 36-37.

[7] Koren, Yoram (2007). 11 Years: A Retrospective on the NSF Engineering Research Center for Reconfigurable Manufacturing Systems at the University of Michigan College of Engineering. Ann Arbor, MI: University of Michigan.

[8] See Dr. Koren’s “Center Director Experience Essay.”

[9] http://www.nsf.gov/news/news_summ.jsp?cntn_id=125280&WT.mc_id=USNSF_51&WT.mc_ev=click

[10] http://www.tanms-erc.org/

[11] McLaughlin, David, V. Chandrasekar, Michael Zink, Brenda Philips (2013). CASA, Engineering Research Center for Collaborative Adaptive Sensing of the Atmosphere, Final Report. University of Massachusetts, Amherst, MA, p. 13.

[12] Ibid.

[13] Now known as the Center for Neurotechnology.

[14] BPEC was the only ERC ever to recompete successfully for a new ERC (BPEC II) and complete the full second term of NSF funding.

[15] Researchers at CMU’s Institute for Complex Engineered Systems, the post-graduation follow-on to the EDRC.

[16] Davies, David G., Matthew R. Parsek, James P. Pearson, Barbara H. Iglewski, J. W. Costerton, E. P. Greenberg (1998). The involvement of cell-to-cell signals in the development of a bacterial biofilm. Science, 10 Apr 1998, 280(5361), 295-298.  https://science.sciencemag.org/content/280/5361/295.full

[17] Gram-negative bacteria are bacteria that do not retain the crystal violet stain used in the Gram staining method of bacterial differentiation. They are characterized by their cell envelopes, which are composed of a thin peptidoglycan cell wall sandwiched between an inner cytoplasmic cell membrane and a bacterial outer membrane.

[18] Hampson, Robert E., Dong Song, Brian S Robinson, Dustin Fetterhoff, Alexander S Dakos, Brent M Roeder, Xiwei She, Robert T Wicks, Mark R Witcher, Daniel E Couture, Adrian W Laxton, Heidi Munger-Clary, Gautam Popli, Myriam J Sollman, Christopher T Whitlow, Vasilis Z Marmarelis, Theodore W Berger, and Sam A Deadwyler (2018). Developing a hippocampal neural prosthetic to facilitate human memory encoding and recall. J. Neural Eng., June 2018; 15(3): 036014.  https://iopscience.iop.org/article/10.1088/1741-2552/aaaed7

[19] CRISPR stands for Clustered Regularly Interspaced Short Palindromic Repeats.

[20] For the full story, see Grant, Bob (2015). Credit for CRISPR: A conversation with George Church. The Scientist, December 29, 2015. https://www.the-scientist.com/news-analysis/credit-for-crispr-a-conversation-with-george-church-34306

[21] See http://personalrobotics.ri.cmu.edu/

[22] Bhatia, Richa (2017). Elon Musk’s OpenAI showcases new research – imitation algorithm that learns from human in single shot. Analytics India Magazine, May 22, 2017. https://www.analyticsindiamag.com/elon-musks-openai-showcases-new-research-imitation-algorithm-learns-human-single-shot/

[23] See https://www.cmu.edu/homepage/health/2012/summer/assistive-robots.shtml

[24] Branch-and-bound (BnB) is a general programming paradigm (or algorithm) used in operations research to solve difficult combinatorial optimization problems, such as those involving open-ended (unbounded) issues of time and movement complexity.

[25] Walter, Chip (2005). Kryder’s Law. Sci. Amer., August 2005.  https://www.scientificamerican.com/article/kryders-law/

[26] This section relied on an interview with Dr. Kryder in August 2018 and materials he provided to the authors.

[27] Liu, Peter Q., Anthony J. Hoffman, Matthew D. Escarra, Kale J. Franz, Jacob B. Khurgin, Yamac Dikmelik, Xiaojun Wang, Jen-Yu Fan & Claire F. Gmachl (2010). Highly power-efficient quantum cascade lasers. Nature Photonics, vol. 4, pp. 95–98.  https://www.nature.com/articles/nphoton.2009.262

[28] Blanche, P.-A., A. Bablumian, R. Voorakaranam, C. Christenson, W. Lin, T. Gu, D. Flores, P. Wang, W.-Y. Hsieh, M. Kathaperumal, B. Rachwal, O. Siddiqui, J. Thomas, R. A. Norwood, M. Yamamoto & N. Peyghambarian (2010). Holographic three-dimensional telepresence using large-area photorefractive polymer. Nature, vol. 468, pp. 80–83 (04 November 2010).  https://www.nature.com/articles/nature09521

[29] https://www.nanomag.us/pdfs/nanoMag_Develops_New_Class_of_Materials_for_Orthopaedic_Implants.pdf

[30] Roessner, David, Quindi Franco, and Sushanta Mohapatra (2004). Economic Impact on Georgia of Georgia Tech’s Packaging Research Center. Final report to the Georgia Research Alliance. Arlington, VA: SRI International, October 2004.

[31] Ibid., p. 6.

[32] Roessner, David, Lynne Manrique, and Jongwon Park (2010). The economic impact of engineering research centers: Preliminary results of a pilot study. J. Technol. Transfer 35(5):475-493. https://doi.org/10.1007/s10961-010-9163-x

[33] Lewis, Courtland S. (2010). Innovations: ERC-Generated Commercialized Products, Processes, and Startups. Report to the National Science Foundation, Engineering Directorate, Engineering Education and Centers Division. Melbourne, FL: SciTech Communications, February 2010. http://erc-assoc.org/sites/default/files/topics/ERC_INNOVATIONS_2010_reprint.pdf

[34] The already-existing Earthquake Engineering Research Centers (EERCs) were absorbed into the ERC Program from another Engineering division at NSF in 1997.

[35] Ibid., p. 1.

[36] Engineering Research Centers Program (1993). Industrial Involvement in National Science Foundation Engineering Research Centers. A report to the Subcommittee on VA HUD – Independent Agencies of the United States Senate Committee on Appropriations.  Washington DC:  NSF. April 22, 1993. p. 7.

[37] Trotter, Donald to Courtland Lewis. personal communication via email, October 23, 2010.

[38] Lewis, Courtland (2010), op. cit., p. 45.

[39] Suh, Nam P. (1987). The ERCs: What we have learned. Engineering Education, October 1987, p. 18.