Chapter 9: NSF Program Management

9-A       Overall Program Management System

As was discussed in previous chapters, the ERC Program was revolutionary in many respects. Therefore, managing the Program at the NSF level required innovation in management practices just as much as the ERC host universities required innovations in research and education practices. These management practices encompassed processes and procedures internal to NSF, such as funding mechanisms, pre-award review processes, post-award oversight, etc.; and processes and procedures for centers, such as novel reporting requirements, required management structures, and the development of performance indices or measurements in order to assess centers’ contributions and outputs in light of the substantial taxpayer investment they were receiving. This chapter describes both of these types of program management innovations.

Most of the current National Science Board (NSB) and NSF Senior Management principles for centers originated with these innovative ERC program management practices, as noted in a 2007 Office of the Inspector General report on the management of eight NSF center programs.[1] For example, the ERC Program is the innovator of the following internal NSF processes necessary for successful center programs:

  • A pre-award review system that includes criteria relevant to the value added from a center configuration, pre-award site visits, and briefings to the final review panel at NSF
  • Development of the cooperative agreement mechanism for centers
  • Funding throughout a ten-year period at $2-$5M per year, with phase-down
    • ERCs receive a ramp-up of funding from $2.5M to $4.2M over eight years as an incentive for performance, with phased-down funding in the last two years to ready the ERC for self-sufficiency and the preparation of business plans for that transition (an illustrative sketch of this funding pattern follows this list)
  • Collection of data on post-award performance
  • Post-award oversight through annual site visit reviews
  • Renewal reviews to weed out poorly performing centers and extend the life span of the best centers to ten years
  • The opportunity for centers to recompete with a refreshed and redirected vision
  • External program-level evaluations, including studies and Committee of Visitors reviews.
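
As a rough arithmetic illustration of the ramp-up and phase-down pattern described above, the sketch below computes one hypothetical ten-year funding profile. The linear interpolation and the phase-down fractions are assumptions made here for illustration only; each ERC's actual profile was negotiated individually and tied to performance.

```python
# Hypothetical illustration only: one possible 10-year ERC funding profile
# with a linear ramp from $2.5M (year 1) to $4.2M (year 8) and a notional
# phase-down in years 9-10. Actual profiles varied by center and performance.

def illustrative_funding_profile(start_m=2.5, peak_m=4.2, ramp_years=8):
    """Return a list of (year, $M) pairs for a notional ERC award."""
    step = (peak_m - start_m) / (ramp_years - 1)
    profile = [(year, round(start_m + step * (year - 1), 2))
               for year in range(1, ramp_years + 1)]
    # Assumed phase-down fractions (not from the source): 2/3 and 1/3 of peak.
    profile.append((9, round(peak_m * 2 / 3, 2)))
    profile.append((10, round(peak_m / 3, 2)))
    return profile

if __name__ == "__main__":
    for year, millions in illustrative_funding_profile():
        print(f"Year {year:2d}: ${millions:.2f}M")
```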

Within NSF, the ERC Program shared its pre-award and post-award review and oversight processes and documents with other NSF center programs, including the Science and Technology Centers Program (via Program Managers Nat Pitts and Dragana Braskovic), the Materials Research Science and Engineering Centers Program (Thomas Reiker), the Nanoscale Science and Engineering Centers (Mihail Roco), and the Science of Learning Centers (Soo-Siang Lim).

At the centers level, the ERC Program was the innovator of center management requirements including:

  • Strategic planning at the center level
  • An external industry advisory board and, later, an external scientific advisory board
  • Specified management positions and management plans
  • Purposeful integration of research and education.

Center-level best practices are shared through the ERC Best Practices Manual, written by members of the ERCs’ leadership teams and posted at http://erc-assoc.org/best_practices/best-practices-manual.

The following section provides a chronological overview of the evolution of the ERC Program management system.

9-A(a)    Chronology of Events

i. Start-Up: 1984–1990 – Learning How

  • Development of ERC cooperative agreement
  • Funding term or life cycle fixed at 11 years at the request of OMB in 1986
  • Post-award oversight system designed by Preston and implemented by ERC PDs, each assigned to monitor a subset of funded centers
  • Development of post-award oversight system with annual and third-year renewal reviews
  • Annual reporting guidelines
  • Development of review criteria to define “excellent” to “poor” implementation of the ERC key features
  • Database of quantitative indicators of inputs, progress, and impact established
  • Third-year renewal reviews implemented for the Classes of 1985-1987, beginning in 1987
  • Requirement for strategic planning—begun in 1987-88

ii. 1990–1994 – Refining the Model and Preparing Older ERCs for Self-Sufficiency

  • Start of sixth-year renewals
  • Development of recompetition policy and approval by the NSB
  • Emphasis placed on developing self-sufficiency plans
  • NSB requirement that all center awards be limited to 10 years, impacting the Class of 1998 and forward
  • Refinement of reporting guidelines from lessons learned at ERCs
  • Expansion of database requirements due to requests from NSF and Congress
  • Refinement and expansion of performance criteria based on experience and increases in requirements
  • Lead ERC PD located in the ERC Program's home division, supported by liaison PDs from other ENG divisions

iii. 1995–2000 – Consolidation of Experience: Gen-2 ERCs

  • ERC Program provides more development assistance based on knowledge developed by ERCs
  • “ERC Best Practices Manual” written by ERCs; ERC Association website launched
  • SWOT Analysis process carried out at ERCs and included in site visit reporting
  • Student Leadership Councils required
  • Templates provided to ERCs to record and upload data for ERC database
  • Some lead PDs now come from outside the EEC Division
  • Stronger start-up assistance provided through on-campus visits by NSF staff and ERC Consultancy of experienced ERC personnel
  • Certification of industrial membership and tighter financial reporting due to lessons learned from malfeasance at two ERCs
  • Second generation of strategic planning improved through 3-plane chart, 1998
  • Evaluation results indicate that Gen-1 ERC graduates were highly productive in industry, ERCs provided significant benefit to member firms, and ERCs had impacted the competitiveness of 67% of their member firms

iv. 2001–2006 – Evolving the Gen-2 Construct and Planning for Gen-3

  • Improvements in strategic planning and systems constructs through 3-plane chart
  • Evaluation results indicate Gen-2 ERCs are even more productive than Gen-1 and these ERCs have a greater positive impact (75%) on the competitiveness of their member firms
  • Strategic diversity plans required
  • Greater emphasis on innovation and small business development, as opposed to technology transfer
  • New features developed through strategic planning for Gen-3 ERCs, including innovation and translational research support in partnership with small R&D firms; solicitation released
  • New performance criteria developed for new Gen-3 ERC features

v. 2007–2014 – Gen-3 ERCs

  • Initial classes of Gen-3 ERCs funded
  • Definitions of the Education and Translational Research key features improved
  • Second- and third-edition ERC Best Practices chapters released
  • ERCs and Economic Stimulus–Innovation Fund: 2009
  • Survey finds that 83% of graduated ERCs are self-sustaining and retain some or most ERC-like features.

The following section 9-A(b) provides a history of the development and operation of the ERC post-award oversight system. Section 9-B describes other features of NSF’s oversight of ERCs and the management of the Program. Section 9-F describes the centers’ life-span and the pattern of NSF funding relative to center operation and graduation.

9-A(b)    Program Management Evolution

NSF’s traditional approach to generating proposals and monitoring the outcomes of awards had to be changed in order to successfully implement the 1984 vision of the ERC Program as expressed by both the Executive Branch and Congress. They charged the ERC Program with a mission to change the culture of U.S. academic engineering in order to strengthen U.S. industrial competitiveness, and to do it with a relatively large monetary award. NSF’s culture at that time was to let proposals come in at the initiative of the principal investigators (PIs), carry out rigorous pre-award peer review, issue a grant, and then expect the awardees to publish results—which would serve a quality control function, inform the scientific community, and contribute to the growth of scientific knowledge. Longer-term impacts on society or technology were not a part of the mission of the basic research program elements of NSF. Because the ERC Program had a mission not only to advance fundamental knowledge but also to advance technology and change the culture of engineering education, this passive approach to proposal generation and quality control was not appropriate.

To generate proposals, rather than relying on unsolicited proposals from university researchers, the ERC Program released a program announcement specifying the Program’s mission and the ERC key features that the proposing team would be obligated to fulfill. The review process included mail and panel review with criteria geared to the ERC key features.

As the awards to the first Class of six ERCs were initiated in 1985, Pete Mayfield, who managed the ERC Program, and Lynn Preston, his deputy, realized that traditional peer-reviewed publication of research results alone would not be sufficient to ensure the quality of the new centers. They understood that the large size of the awards and the critical mission of the ERC Program warranted an active system of post-award oversight. In addition, their prior experience in the RANN Program—where some awardees received large awards for complex interdisciplinary proposals with applied technology advancement goals and then carried out the research in a way that ignored those goals—made them both wary of fully trusting academics to address the complex goals of their proposals without oversight. For Preston, this wariness derived directly from her RANN experience: she had asked for statements of progress, made site visits either by herself or with a small team of academics, and found a few awardees carrying out their research in a business-as-usual mode without addressing the interdisciplinary goals of their funded projects. This experience convinced her of the need for strong post-award management, and she was given the responsibility to develop the post-award oversight system so that the ERC Program staff could monitor each center’s progress in achieving its proposed goals and, thereby, the goals of the ERC Program.

Preston’s ERC post-award system was an innovation for NSF at the time. It was an active system, governed by performance criteria defined by the ERC key features, implemented through a cooperative agreement tying performance to ERC Program and center proposal goals. It was carried out through post-award progress reports, monitoring by the ERC Program Manager and ERC Program Directors (PDs), and post-award peer review through annual on-site visits by teams of the ERCs’ peers from academe and industry, led by each ERC’s Program Director. In addition, industry advisors recommended that there be two, more stringent, renewal reviews: one at the third year, to weed out centers that were not able to set up systems to effectively address their proposed goals and the ERC Program’s goals; and another at the sixth year, to weed out those that could not mount a convincing engineered systems testbed.

Looking back at the oversight system and how it grew over time, it can best be characterized initially as a “tough love” system, much the way a parent sets standards and then monitors progress and provides encouragement and corrective action to improve growth, creativity, and readiness for eventual independence. Strong developmental guidance was provided by the industry partners, the site visit teams, and the ERC Program staff. However, there was always a strict penalty to pay for poor leadership in fulfilling the goals of a proposed ERC: termination of funding.

The post-award system grew in complexity over time as NSF itself grew into a more complex bureaucracy, the ERC Program budget grew, and the size of the awards became large enough to attract oversight by the NSF Office of the Inspector General. This complexity was not always a positive feature of the system, as it became too prescriptive. A word of caution to those developing new center programs: keep the oversight system clear and focused and resist micromanagement through the system.

9-B       Program Oversight System

9-B(a)    Structure and Performance Standards

The basic structure of the post-award oversight system combined a major threat for poor performance with strong guidance for excellent performance. The threat was that the Program would not provide continued support to centers that functioned in an academic “business as usual” mode of operation without the necessary redirection required by the ERC key features. The guidance for excellent performance was provided by an oversight system with the “tough love” philosophy to help ensure that a center had the best guidance that NSF and the peer community could provide. If a center ignored that guidance or could not perform effectively, funding would be terminated.

The following processes were created to allow the ERC Program to achieve its mission. These were firsts for NSF and over the long run served as models for other center programs soon to be added to the NSF portfolio, such as the Science and Technology Centers Program.[2]

  • Development of an ERC Cooperative Agreement that specified performance expectations and program goals. This was an award instrument that was based on an agreement used for facilities awards; the ERC agreement was the first time it was used for a research award. It combined features of both contract and grant instruments by specifying some performance expectations similar to contracts while allowing research flexibility similar to grants. In particular, the features of the ERC cooperative agreement included:
    • Goals and features of the ERC, taken from its proposal
    • Strategic research planning 
      • By 1987, submission of a long-range research plan—a strategic plan—detailing how the ERC would carry out its work and within what timeframes;
    • Responsibility to establish a partnership with industry and hold annual meetings with industry;
    • Requirement to attend the ERC Program annual meeting with NSF staff and other centers;
    • Requirement to establish and maintain a database in order to provide NSF with quantitative indicators of its activities and progress in meeting the center’s and Program’s goals;
    • Continued support would depend, among other things, on an annual review of progress.[3]
      • Initially, the expected life-span (NSF-funding) of an ERC was 11 years. However, in the late 1980s the NSB implemented a requirement that all NSF center awards be limited to 10 years, which impacted the ERC Classes of 1998 and forward.
  • Development of reporting requirements and guidelines to ensure that the ERC focused on its goals, gathered information to support its performance claims, and reported that to NSF. These were summarized in an Annual Reporting Guidelines document that ERC program staff sent to centers.
  • Development of post-award performance criteria relevant to each key feature to define “excellent” to “poor” implementation of the key features (See the linked file “Gen-2 Performance Criteria–Final”[4] for a later evolution of the original criteria.)
  • Annual on-campus expert-peer review site visits led by ERC program directors and conducted under guidance common across all funded ERCs.
    • Third year renewal site-visit reviews began in 1987
    • Sixth year renewal site-visit reviews began in 1990
  • Development of an ERC Program-level database to record quantitative information about the ERCs’ resources, funding, and outputs. This information allowed ERC outputs to be measured quantitatively in order to demonstrate the success of the centers in achieving program goals. The linked file “1998 Data. Ppt” shows the types of data collected by ERCs and displayed by the ERC Program.[5] (A purely illustrative sketch of such an indicator record follows this list.)
  • Development of a community of sharing through annual meetings of NSF staff and the leaders and key staff of the ERCs to share successes and failures so as to improve individual center and overall ERC program performance. (See section 9-J for further discussion of community-building activities.)
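
To make the database requirement above concrete, the following is a purely illustrative sketch of what a single center-year indicator record might look like. The field names are hypothetical examples of the kinds of resource, funding, and output indicators described above, not NSF's actual database schema.

```python
# Illustrative only: hypothetical field names for the kinds of input,
# progress, and output indicators the ERC database tracked; NSF's actual
# schema was defined by the Program and evolved over time.
from dataclasses import dataclass

@dataclass
class AnnualIndicators:
    center_name: str
    fiscal_year: int
    nsf_support_musd: float          # NSF funding for the year, in $M
    industry_support_musd: float     # cash and in-kind from member firms
    other_support_musd: float        # university, state, and other sources
    member_firms: int
    participating_faculty: int
    graduate_students: int
    undergraduate_students: int
    degrees_granted: int
    publications: int
    patents_awarded: int

# Example record with fictitious numbers:
example = AnnualIndicators(
    center_name="Hypothetical ERC", fiscal_year=1998,
    nsf_support_musd=3.0, industry_support_musd=1.5, other_support_musd=0.8,
    member_firms=25, participating_faculty=30, graduate_students=60,
    undergraduate_students=40, degrees_granted=20,
    publications=85, patents_awarded=3)
```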

9-B(b)    Evolution of the Oversight System To Achieve ERC Program Objectives

The importance and visibility of the ERC Program and its revolutionary trajectory led to a structured and “aggressive” post-award oversight system. The high expectations placed on the ERCs meant that the Program staff were devoted to strict oversight to help ensure that the ERCs successfully addressed and achieved their complex, interdependent goals and delivered a changed culture for academic engineering as well as a new generation of engineers better able to contribute quickly in their roles in industry. A major way the Program staff provided this oversight was through the annual peer-review site visit. However, developing the appropriate response to findings from the annual site visits was another challenge of managing a new, nationally recognized program.

For example, the outcome of one site visit in particular, to an ERC in the Class of 1987, was brought to the attention of Nam Suh, the Assistant Director for Engineering at the time, for guidance on an appropriate action. The system goals that the ERC had stated in its proposal appeared to have been ignored. After these discussions, Dr. Suh asked the staff to fire a warning shot across that ERC’s bow: only half of the next year’s annual support was provided at the annual start date, and the remainder would not be provided until the research plan was adjusted to address the systems goals. This response demonstrated the advantage of using a cooperative agreement as the funding instrument, because it gave NSF leverage to affect the performance of the center. It produced the desired result, a strengthened commitment to systems, and the remainder of the ERC’s annual funding was restored.

The first set of third-year renewal site visits was carried out in 1987 to review the renewal proposals of the first six ERCs in the Class of 1985. For those site visits, the review teams were charged with determining whether or not sufficient progress had been achieved along a trajectory that matched the centers’ goals and the Program’s goals. NSF staff often used the phrase “not business as usual” to refer to the nature of that trajectory. The review teams were charged to assess whether a center had structured a new type of research program that had long-range systems technology goals and had joined multiple disciplines in research to address them. They also expected the focus for education to be on developing students (both undergraduate and graduate) who were familiar with industrial practice and with integrating knowledge to advance technology. Industry was expected to financially support the ERC and provide guidance on the research and education programs, which the faculty integrated into their planning. Two of the six third-year renewal site visit teams determined that the ERCs under their respective reviews were not meeting the required standards, primarily because those centers’ cultures had not shifted sufficiently from “business as usual” to give NSF confidence that they would grow into fully successful ERCs. Therefore, NSF made the determination to terminate funding for these two centers at the end of their first five-year cooperative agreement. However, NSF reduced the funding gradually, by one-third in each of the next two years, in order to protect the students from an abrupt termination of funding.

After carrying out many annual and renewal (years 3 and 6 in the life of a center) site visits throughout the 1980s and into the early 1990s, Preston and Marshall Lih, the EEC Division Director at the time, began to develop a different approach to site visit reviews. While strict adherence to program goals and high standards of performance remained necessary, the annual site visit reviews became more developmental, providing the ERC with more guidance on how to improve performance; the renewal site visit reviews remained judgmental. The “love” part of the Tough Love philosophy thus came into balance with the “tough” part. Through the years, this approach led to successful third- and sixth-year renewals for most of the 64 ERCs functioning from 1985 to 2014, the cutoff year for this History. Eight ERCs (12%) did not pass their renewal reviews and were phased out; the last to fail that review was one from the Class of 2003. All of the remaining ERCs passed their sixth-year renewal reviews, but five of these had to repeat their renewal review in the seventh year to deal with serious but fixable weaknesses, especially regarding their systems testbeds.

9-C       Evolution of Program Announcements, Review, Awards, and Agreements

9-C(a)    Announcements

As was referenced in Chapter 2-A(b), and mentioned earlier in this chapter, the NSF team charged with developing and managing the new ERC Program decided that relying on unsolicited proposals from university researchers, as traditionally was done, would not generate proposals that would be able to meet the goals of this new program. Thus, the team settled on writing a program announcement (later called a program solicitation) so that they could provide explicit guidance on the required features of this new type of center. This instrument had been pioneered in the RANN program and was subsequently used throughout the Foundation in the late 1970s and early 1980s.

Over a 30-year period these announcements (solicitations) served two roles. One was to define the information necessary to submit an ERC proposal: key features, proposal requirements, and review criteria. This information grew over time in detail and specificity as knowledge of what it takes to develop and manage a successful ERC grew. (See linked file “Evolution of ERC Key Features”.) The second role was as a teaching tool. As knowledge of best practices and pitfalls for an ERC grew over time, the NSF staff decided to include this knowledge in the solicitations with references to the online “ERC Best Practices Manual.” In this way, they sought to level the playing field so that proposers new to the ERC concept would not be disadvantaged relative to those from universities with more ERC experience.

Preliminary proposals, or pre-proposals, were introduced in program announcement NSF 94-150 to reduce the proposal preparation burden on academe; however, they significantly lengthened the time from release of the announcement to award, from one year to as much as two years. The increase in processing time was also due to longer award approval processes at the NSF Director’s level.

i. ERC Configuration Requirements

Gradually over time, ERCs became more complex in structure, moving from one university to a configuration of a lead university and a few core partners plus, at times, affiliated universities. This trend was initiated by the ERCs themselves through their proposals and facilitated by the communications made possible through the worldwide web. NSF’s diversity requirements broadened those partnerships to include universities that served groups predominantly underrepresented in engineering.

Gen-1 ERCs (simple configuration): Classes of 1985–1990[6],[7]

  • U.S. academic institution with engineering research and education in the lead, in some cases with affiliates.

Gen-2 ERCs (more complex configuration to improve lead university team skill base and diversity): Classes of 1994/1995 through 2000[8],[9],[10],[11]

  • U.S. academic institution with graduate and undergraduate engineering research and education programs (NSF 93-41)
  • May be a single university or multi-university proposal; if the latter, one institution is the lead and submits the proposal, and programs must be integrated
  • Same as above—plus, to be considered a joint partner, the university would have a significant role in planning and execution of the ERC; if participation is at the project level only, the university should be called affiliate
  • NSF 97-5: Lead institution must be a Ph.D. degree-granting U.S. institution with undergraduate and graduate research and education programs; partner or affiliated institutions need not be Ph.D. degree-granting
  • Partners should be involved on a flexible basis based on need and performance
  • NSF 98-146: Same as above, but only 1-3 long-term partners, plus a limited number of outreach partners at the single faculty level to broaden impact.

Gen-2 ERCs: Classes of 2003–2006[12],[13]

  • NSF 02-24: Same as above, plus pre-college partners may be identified
  • NSF 04-570: Multi-university configuration required, with lead and up to four core partners, plus a limited number of research and outreach partners, including at least one NSF diversity awardee.

Gen-3 ERCs (outreach expanded to include foreign university partnerships): Classes of 2008–2013[14],[15],[16],[17]

  • Lead and up to four domestic partner institutions, one of which served large numbers of underrepresented students majoring in STEM fields; later (NSF 07-521) the specific requirement of up to four partners became “a manageable number” of partner institutions.
  • Domestic affiliates not required, but may be included
  • May include collaborations and partnerships in research and education with faculty in foreign universities or foreign institutes that function in the pre-competitive space, as opposed to new product development; but by NSF 11-537 the partnerships were required
  • Long-term partnerships with pre-college educational institutions
  • Partnership with university, state, or local government organizations devoted to innovation and entrepreneurship.

This evolution of configuration requirements reflects an evolution in NSF policy to broaden the types of institutions involved in ERCs so as to ensure a broader involvement of a more diverse body of faculty and students in NSF awards. Caution should be exercised by those developing new center programs to keep the required team configuration motivated by two complementary goals: finding the skill base needed to address the center vision and increasing diversity in the participants.

ii. Management Plan

Because of their complexity, NSF always understood that ERCs would be a management challenge in academe, where the single-investigator culture predominated. Thus there was a level of guidance in the program announcements regarding organization and management that was more specific than for other required features. In the early and mid-years of the Program it was necessary to point out expectations regarding tenure and promotion practices because of the cultural changes inherent in the ERC construct. It was also necessary to point the ERCs toward management systems that would support self-sufficiency (i.e., continuation as cross-disciplinary centers post-NSF funding). Mechanisms for advice from the ERCs’ peers, outside of the site visit mechanism, were introduced to broaden the amount of input and feedback for the centers. However, these mechanisms were eventually dropped because the Center Directors felt that outside peers, other than through NSF-led site visits, often had apparent conflicts of interest because they were competitors in the ERCs’ fields.

Evolution of ERC Management Plan Requirements:

Gen-1 ERCs: Classes of 1985–1990[18]

  • Mechanisms for selecting research projects, allocating funds and equipment, recruiting staff, and disseminating and utilizing research results
  • Organizational chart
  • Procedures to ensure high-quality research
  • Added in FY 1986:
    • Information on faculty promotion and tenure practices for faculty involved in cross-disciplinary research
    • University planning and plans for self-sufficiency
  • Plans for management of industrial participation, support, and interaction.

Gen-2 ERCs: Classes of 1994/1995 through 2000[19],[20],[21]

  • NSF 93-41
    • Role of Center Director and key management associates (Industrial Liaison Officer, Administrative Manager,[22] and support staff)
    • Procedures to plan and assess center activities to assure high-quality research and education relevant to center’s goals
    • Process for allocating funds
    • Plan for collaboration with any partner institutions
    • Organizational chart
    • Tables of committed financial support from all sources
  • NSF 97-5 dropped the planning specifics; added an education coordinator and a financial manager; required projected funding by source and a financial plan for allocating funds by function; and, at the full proposal stage, required a long-term financial plan for self-sufficiency
  • NSF 98-146 added back the collaboration plan for multi-university ERCs, added an advisory system and project selection and assessment, and specified reporting to the Dean of Engineering. (Most ERCs already proposed a management structure where the Center Director reported to the Dean of Engineering, but some had proposed to report to a department head instead, which was not acceptable.)

Gen-2 ERCs: Classes of 2003–2006[23]

  • Management and associated performance and financial management information systems needed to deploy center resources to achieve its goals
  • Mechanisms for securing external advice from academic and industrial experts to set strategic directions, select and assess projects, and develop internal policies, including cross-university policies for multi-university ERCs
  • Full proposals only: Allocation of funds by function and by institution (Year 1 only).

Gen-3 ERCs: Classes of 2008–2013[24]

  • For a multi-institution ERC, report to the Dean of Engineering, who leads a Council of Deans (a management structure that was assumed but needed to be reiterated because one ERC began reporting to a department)
  • Function with sound management systems to ensure effective integration of its components to meet its goals
  • Sound financial management and reporting systems
  • Sound project selection and assessment systems that include input from Scientific and Industrial/Practitioner Advisory Boards
  • Student Leadership Council required
  • Organizational chart
  • NSF 13-520 added:
    • Process for SWOT (Strengths, Weaknesses, Opportunities and Threats) analyses by Industrial/Practitioner Advisory Board and Student Leadership Council
    • Table of committed funds by source (not by firm)
    • Functional budget tables (invited full proposals only)
    • Plan for distribution of funds by institution for year one (NSF invited full proposals only).

iii. University Financial and Cultural Support Requirements

Gen-1 ERCs: Classes of 1985–1990[25]

  • Show university support in proposed budget
  • Added in FY 1986: Information required on faculty promotion and tenure practices for faculty members involved in cross-disciplinary research.

Gen-2 ERCs: Classes of 1994/1995 through 2000[26]

The vague reference to university support in the program announcements for the first five years of the Program led PIs and their universities to propose high levels of university support, which was often difficult to deliver. This led to tighter definitions and restrictions regarding statements of university support.

  • Substantial university financial support, i.e., cost sharing, with a letter committing to this support
  • Formal recognition of the cross-disciplinary, industrially relevant culture of the ERC
  • Tenure and reward policies to support involvement in the ERC.

Gen-2 ERCs: Classes of 2003–2006[27]

  • NSF 02-24: Same as above; plus university cost sharing of 10 percent, stipulated by NSF policy, applies only to lead and core partners, not affiliate institutions
  • NSF 04-570: Cost sharing raised to 20 percent.

Gen-3 ERCs: Classes of 2008–2013[28]

  • Cost sharing eliminated for all awards by NSF policy (Class of 2008)
  • Academic policies to sustain and reward the ERC’s cross-disciplinary, global culture; its goals for technological innovation; and the role of its faculty and students in mentoring and pre-college education, i.e.:
    • Policies in place to reward faculty in the tenure and promotion process for cross-disciplinary research, research on education, and other activities focused on advancing technology and innovation
  • NSF 09-545: Cost sharing restored for centers by Congress and policy requirements same as above.

iv. Non-university Financial Support Requirements

Gen-1 ERCs: Classes of 1985–1990[29]

  • The only mention is in regard to the budget, where a separate schedule was required to show the total operating budget of the ERC including funds from NSF and other sources, including the proposing university(ies).

Gen-2 ERCs: Classes of 1994/1995 through 2000[30]

When pre-proposals were introduced in 1994 with NSF 94-150, the submission of financial support tables and commitments from non-NSF sources was required only at the full proposal stage.

  • Substantial support from non-NSF sources, including funds and equipment donations from participating companies and state and local government agencies, and exchange of technical personnel with industry
  • Table listing the current and committed future member firms, including letters of commitment for industrial financial support
  • Table of data on current and committed industrial, university, state, and other support and by 1998 tables on how total support will be expended by function of the ERC
  • Review criteria on strength of financial support from non-NSF sources.

Gen-2 ERCs: Classes of 2003–2006[31]

  • Same as above

Gen-3 ERCs: Classes of 2008–2013[32]

  • Dropped from NSF 07-521 at the request of the NSF Policy Office, but commitments for industrial membership imply financial support
  • NSF 09-545 and later solicitations returned the requirement for commitments of financial support from industry and other sources at the full proposal stage, as well as from academe at the preliminary proposal stage. Tables were required on expected support and how it would be allocated by function.

Over time, the management and financial systems became more complex, reflecting the growing structural complexity of ERCs and the need to set up a financial system unique to each center, so that center personnel could manage the inflow and outflow of funds as if the center were a business within the university system, preparing it for financial success and self-sufficiency after graduation from NSF/ERC support.

9-C(b)    Pre-Award Review Process

While some government agencies make award decisions based only on staff input, NSF has always relied on the peer review system to gather input on quality. This is the foundation upon which the ERC Program rests. NSF award recommendations are made by NSF staff, taking into account input received through the peer review system and their judgment of needed future directions in a field. However, a new type of review and recommendation process was needed for ERC proposals and awards because the proposals were (and are) more complex than research proposals from a single investigator. The complexity stems from the requirements that ERC proposals establish a strategic vision and then integrate research across disciplines, develop a synergy between research and education, and establish a partnership with industry in order to accomplish the vision.

As was detailed in Chapter 2, Section 2-C(b), “Inventing a New Review Process,” the ERC Program had to create its own review process to handle this complexity. The process was multi-stage and began that first year with a review of the 142 full proposals submitted in response to the first ERC program announcement. To prepare for their arrival, the staff developed review criteria informed by those in the program announcement and expanded to address related specifics. Those criteria were:

Research and Research Team

  • Is the research innovative and high quality?
  • Will it lead to technological advances?
  • Does it provide an integrated-systems view?
  • Is the research team appropriately cross-disciplinary?
  • Is the quality of the faculty sufficient to achieve the goals?

International Competitiveness

  • Is the focus directed toward competitiveness or a national problem underlying competitiveness?
  • Will the planned advances serve as a basis for new/improved technology?

Education

  • Does the center provide for working relations between faculty and students and practicing engineers and scientists?
  • Are a significant number of graduate and undergraduate students involved in cross-disciplinary research?
  • Are they exposed to a systems view of engineering?
  • Are there plans for new or improved course material generated from the center’s work?
  • Are there effective plans for continuing education for practicing engineers?

Industrial Involvement/Technology Transfer

  • Will industrial engineers and scientists be actively involved in the planning, research, and educational activities of the ERC?
  • Is there a strong commitment or strong potential for a commitment for support from industry?
  • Are new and timely methods for successful transfer of knowledge and developments to industry ready to be put in place?

Management

  • Will the center management be actively engaged in organizing human and physical resources to achieve an effective ERC?

University Commitment

  • Is there evidence of support and commitment to the ERC by the university?
  • Is there evidence that the university’s tenure/reward practices will not deter successful cross-disciplinary collaboration?

Later in the review process, and in subsequent years as the ERC portfolio grew, the NSF staff applied the following secondary criteria before making a final award decision:

  • Geographic balance and distribution
  • Whether the lead university had already been granted an ERC; no more than two ongoing ERCs could be located on a university campus at one time
  • Whether the research area complemented those of the already existing centers.[33]

The basic elements of the ERC review process were set up to review the proposals in 1985: first, individual and panel reviews to recommend proposals for site visits; then the site visits; and finally a “Blue Ribbon” panel review of the site visit results, along with a final presentation to this panel by the proposed center director, resulting in award recommendations. That system served as the basis for ERC proposal review over the years, growing in depth over time from experience and NSF requirements.

In setting up the review system, Mayfield and Preston operated with an overriding ethical commitment that the review process could not be dominated by the opinions of the ERC Program staff, or those of one or two leaders in the field of engineering, or by higher-level management at NSF. It had to be grounded in the peer review system, collecting input from a broad base of people from a broad range of institutions and firms. It could not be swayed by Congressional pressure to fund an ERC in the state or district of a member of the Senate or House. That commitment prevailed throughout the period of this history.

i. Conflict-of-Interest Policies

One of the landmark policies NSF uses to uphold ethical standards is monitoring and controlling for conflicts of interest (COI). Initially, Mayfield and Preston set up the policy regarding conflicts of interest in the ERC peer review process. Basically, it excluded the review of a proposal by anyone from the institution submitting that proposal and any industrial person who had signed a letter of support for that proposal.

Regarding staff conflicts, prior to 1985 all NSF staff below the Assistant Director (AD) level were permanent NSF employees, so no academic conflicts existed at that level. However, in 1985 when the program started, the AD for Engineering, Nam Suh, was on leave from MIT and still an MIT employee. At that point, he did not sit in on any of the panel meetings and was not present during the deliberations of the blue-ribbon panel that recommended awards. However, in 1986 he did insist on sitting in on the blue-ribbon panel meeting; afterwards, Mayfield and Preston decided that in the future, ADs should not sit in on those deliberations, both because they might have an institutional conflict of interest and because the AD would be the official recommending an award to the Director. In addition, any casual comments coming from someone at the AD level could be perceived as suggestions by the members of the panel and “perturb” their deliberations. Briefing the blue-ribbon panel on the ERC Program and its importance to the Nation prior to the initiation of the panel’s work was considered to be an appropriate level of involvement for an AD.

By the 1990s, NSF’s and the ERC Program’s COI policies became more complex for two reasons:

  • Some of the ERC PDs came directly from universities to serve in a staff capacity but were still under the employment of their university.
  • Congress mandated that NSF could not use reviewers who had a financial interest in any firm proposed to be a member of a center or serving as a member of any ongoing center.

These issues could be addressed through expanded COI policies, which were reviewed and approved by Charles (Charlie) S. Brown, the Ethics Officer of the NSF Office of the General Counsel (OGC). The policy developed for ERC full proposals under NSF 98-146 was as follows regarding academic institution COIs:

  • No NSF staff person may be involved with a proposal from his/her home institution.
  • No reviewer may serve on a panel considering a proposal from his/her institution (while the usual NSF policy was to let that person leave the panel room during the deliberations of the conflicted proposal, that was not possible given the size of ERC awards and their pervasive impact on campuses).
  • A reviewer from an institution that has submitted a proposal may sit on a panel that is not considering proposals from that person’s home institution.
  • Deans, Vice Presidents for Research, or a person who serves in a similar role over several departments or schools in a university that has submitted a proposal may not serve on any panel.[34]

However, a Congressional regulation regarding financial conflicts of interest posed a serious threat to the integrity of the ERC review process (pre- and post-award), because as written it would have eliminated all reviewers with private investments, under their own control or that of a spouse or dependent child, in firms proposing to be members of a proposed ERC or already members of a funded ERC. Since there is only a tenuous connection for a firm, let alone an investor, between joining an ERC and predictable financial reward to that firm, Preston felt that this restriction, as it related to the review of center proposals and ERC post-award reviews, should be reviewed again by the OGC. She voiced her concerns about the implications of this policy for the integrity of the ERC review process to Charlie Brown, and together they worked out a waiver system that was accepted by NSF and Congress. It read as follows:

“Technical and ERC panelists also will be asked to review the lists of industrial participants committing to financial support and involvement to ascertain if they individually own stock worth more than $5,000 in any of the firms. If they own stock worth between $5,000 and $50,000 they will be asked to sign an affiliate waiver. If the amount of ownership is greater than $50,000, the affiliate waiver will have to be signed by the Office of the General Council. These conflicts will be checked by the Program Director before the panel meetings so the waivers that need OGC approval will be processed before the meetings. This does not apply to those firms merely signing letters of interest without a financial commitment.”[35]
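
A minimal sketch of the decision rule in the quoted waiver text is shown below. The dollar thresholds come from the quote; the function name and return labels are illustrative assumptions, since the actual process was administrative rather than automated.

```python
# Sketch of the waiver rule quoted above; the thresholds come from the quote,
# everything else is illustrative.

def affiliate_waiver_requirement(stock_value_usd, firm_has_financial_commitment=True):
    """Classify a panelist's stock holding in a firm committed to an ERC."""
    if not firm_has_financial_commitment:
        # Firms merely signing letters of interest did not trigger the rule.
        return "no waiver needed"
    if stock_value_usd <= 5_000:
        return "no waiver needed"
    if stock_value_usd <= 50_000:
        return "affiliate waiver signed by the panelist"
    return "affiliate waiver signed by the Office of the General Counsel"

if __name__ == "__main__":
    for value in (2_000, 20_000, 75_000):
        print(f"${value:,}: {affiliate_waiver_requirement(value)}")
```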

This waiver was eventually used across the Foundation in peer review of proposals or ongoing awards and basically continues today.

However, the waiver policy had an impact on Preston’s ability to continue to use retired industrial VPs for Research or Chief Technical Officers, because it required that they disclose any stock holdings over which they controlled the funds invested. For some of these people, with considerable wealth, this was a burden; and for Preston, it proved to be a disincentive to using this type of reviewer, as she felt the policy was an invasion of their privacy. Thus, in the late 1990s and beyond, there were fewer industrial reviewers with high-level experience and, in some cases, fewer academic reviewers who controlled their own investments.

Over time, the ERC Program’s COI policy for proposal review became increasingly complex because of the complex affiliations of NSF staff, which had a broader internal impact. Academic professors who served at NSF in temporary staff positions remained employed by their home universities, and given the magnitude and prestige of ERC awards, that affiliation posed conflict issues regarding their knowledge of and involvement in the review of ERC proposals. It would be inappropriate for these people to see lists of proposals or participate in the review of proposals, as they would be put in a position of having information about proposals that might be in competition with proposals from their home institutions; their home institutions might also put pressure on them to disclose what proposals other institutions had submitted to the ERC competition.

As a consequence, policies to deal with this type of conflict were developed with Charlie Brown until he retired from NSF in 2007 and then with Karen Santoro, the NSF Ethics Officer who took his place. These were as follows:

  • An Assistant Director for Engineering and his/her Deputy AD (DAD), a Division Director, or a Program Director could not participate in the review of ERC proposals or sign on an award recommendation if one of the proposals in the competition was submitted by his/her home university.
  • A conflict of interest occurred when that person’s university participated in an ERC proposal as the lead university or one of the core partner universities. An affiliate relationship, since it usually involved only one person, and not a large number of professors, would not trigger a conflict.
  • If that proposal failed at one stage of the process (e.g., pre-proposal review), then the conflicted staff member could participate in the rest of the process.

After Preston left the NSF in early 2014, this policy largely remained intact, except that a modification enabled an ERC Program Leader who was then an academic employee working temporarily at NSF to manage the review process for pre-proposals with access to the lists of proposals but without access to the “jackets” with more detailed information. If his/her university received a full proposal invitation, then he/she would remain conflicted unless and until the university’s proposal failed. This COI policy is always subject to modification and may well have evolved further since that time.

ii. The ERC Proposal Review System

The components of the pre-award proposal review system and how it has functioned through time are described in detail in the linked file “The ERC Proposal Review System.”

NSF Award Recommendation and Decision Process

The process of determining award recommendations grew in complexity over time as NSF evolved into a more hierarchical bureaucracy. During the first decade and a half, 1985-2000, that decision process was managed by the Director of the division housing the ERC Program (Mayfield through mid-1987 and then Marshall Lih), both of whom worked jointly with Preston on the recommendation; she began managing and leading the ERC Program by 1987. The issue usually was to determine which of the Highly Recommended and top Recommended proposals to recommend for awards. The ranking of the Blue Ribbon ERC Panel was the first point of reference. If there were sufficient funds to award all the proposals highly recommended by the ERC Panel, that was the outcome. If there were more proposals highly recommended for award than funds to support them, the ERC team recommended those with the highest rankings, complemented by consideration of the fields of the proposed awards in relation to the current portfolio of ERCs and national needs. Lih and Preston conferred with the ERC PDs who led the review panels and often asked for input from relevant division directors, but neither group had a formal role in the final decision process, so as to avoid “disciplinary lobbying” (i.e., “I want an ERC for my discipline”).

While the Assistant Director (AD) for Engineering had been involved in the initial decisions regarding which ERCs to fund in 1985 and 1986, by 1987 it became apparent that he/she should not initially be directly involved because the AD was the person recommending the awards to the Director through the National Science Board (NSB). When the recommendation decision was reached, it was then discussed with the AD, if he/she was not conflicted. If conflicted, it was discussed with the Deputy Assistant Director (DAD), who was usually a permanent employee of NSF.

When the recommendation decision was final, award recommendation packages were prepared for NSB review and signed by the AD/ENG or DAD/ENG. ERC and ENG staff presented the recommended awards to the NSB for their approval. If approved, that approval was sent to the Director, who then delegated the award process back to the AD/ENG, who in turn delegated it to the ERC Program’s Division Director, with instructions to prepare award recommendation actions. Award packages were signed by the AD/ENG and forwarded to the Division of Grants and Agreements—the division responsible for making an award to a university—for their review, approval, and action. Once that was achieved, the award instrument—the cooperative agreement—was sent to the university for their review and signature. At the end of that process, the award was official.

During the 1980s and early 1990s, Preston would organize a team of staff from the Directorate to review the NSB packages in advance for strength of analysis and recommendations and to find errors. She based this effort on her experience in RANN as a member of the RANN AD’s Grant Review Board (GRB), which served the same purpose. In 1991, Joseph Bordogna became the AD/ENG and was involved in the GRB process. When he became the Acting Deputy Director of NSF in 1996, he formed the Director’s Review Board (DRB) to assist him in reviewing award recommendation packages on their way to NSB approval. Also at that time, the NSB put award-size limits on the proposals it would review. Individual ERC awards were too small to be reviewed by the NSB, so the DRB became the last “stop” in the process for recommending approval to the Director.

This recommendation process continued until 2004, when it became more complex within the Directorate for Engineering (ENG). The internal review process for award recommendations requiring DRB or NSB approval was formalized and renamed the Engineering Review Board (ERB), with an expanded role in the recommendation process. The ERB now participated in the decisions regarding which proposals would be recommended for award, based on the recommendations of both the Blue Ribbon ERC Panel and the ERC Program. The result was a shift in “power” in the recommendation process from the ERC Program and its Division Director up to the AD’s level. The ERB reviewed the recommendations of the ERC Program regarding which proposals to recommend for award and eventually voted on which awards to recommend. This benefited the ERC Program by developing more ENG staff who were familiar with the Program and by providing a broader base of disciplinary expertise to inform the final decisions about which proposals to recommend. The DAD was also responsible for managing any disciplinary “politics” impacting award recommendations.

Another layer of involvement of staff from the divisions of ENG was added in 2008 in the form of the ERC Working Group. This was a team of staff from the other divisions of ENG who had ERC experience, either as former ERC PDs or as site visitors. They became involved in the decisions regarding which pre-proposals to invite to submit a full proposal and remained involved in the process until the recommendations reached the ERB level. While their involvement may have broadened knowledge about ERCs among the staff of the Directorate, Preston and the other ERC Program staff found that it greatly complicated the review process, lengthening yet again the time it took to make decisions. They also found that some Working Group members did not spend sufficient time attending panel meetings to be as familiar with the proposals under consideration as Preston and the ERC PDs who led the panels; Preston attended every panel meeting and read every proposal, and the ERC PDs who led panels read all the proposals in their panels and attended other panels as well.

The involvement of the ERB in the new award and renewal award recommendation process continued until 2009, when the Acting Director of the NSF, Cora Marrett, decided that ERC award and renewal packages were of such high quality that they did not have to be reviewed by the DRB. However, Tom Peterson, then the AD for ENG, requested that ERC award recommendations still be considered by the DRB so that its members, his counterparts leading the other Directorates, could continue to be cognizant of the ERCs. Thus, the role of the ERB continued as outlined above for new awards, but the ERB became the final stop for renewal recommendations. That decision was reversed after Preston retired, when Richard Buckius, the Chief Operating Officer of the NSF, who had previously been an AD/ENG, decided that both award and renewal recommendations had to be presented to the DRB for approval.

In 2012, the award recommendation process became even more concentrated in the Office of the AD/ENG. The DAD decided to take over from the ERC Program the role of selecting full proposals. He formed a committee that included Preston (by then no longer the Leader of the ERC Program); Eduardo Misawa, the new Leader of the ERC Program; and two other division-level leaders who had no experience with the current competition or its panels. This move, as might be expected, was not well received by the ERC Program personnel who had worked on the panels, because it left them out of the decision process and put someone in charge of full proposal selection who had not participated in the review panels. Suggestion: As you develop new programs, be mindful that this type of approach is not optimal.

9-C(c)    Agreements and Start-Up

i. Award Instrument: The Cooperative Agreement

Once the award decisions for the first class of ERCs were made in 1985, Pete Mayfield, the Division Director at the time, and Lynn Preston understood that a new type of funding instrument would be required. A traditional NSF grant was too passive an instrument, as it made it impossible to require the Center Director (the Principal Investigator, or PI) to deliver on the proposed goals of the ERC. On the other hand, a contract instrument provided NSF with too much control. Preston conferred with a grants officer in the NSF division that processed awards. After she explained that the ERCs had to structure programs to fulfill the research, education, and industrial collaboration goals proposed and accepted for funding by NSF, they decided that the appropriate instrument would be a cooperative agreement, which is basically a set of two-way obligations. The grants officer had developed cooperative agreements for the ocean science programs to oversee the purchase and deployment of research vessels, so that model served as a starting point.

An agreement was structured for the ERC Program on that basis. The ERC Program goals were taken from the program announcement and the individual ERC’s proposed features were taken from the proposal and included in the agreement, so that the ERC PI would be obligated to harness resources to fulfill them. Funding amounts and schedules, reporting requirements, special requirements for a particular center, and joint NSF-awardee activities were stipulated. The PI was required to form an industrial advisory committee and hold annual meetings with industry, to attend annual meetings of ERC PIs and staff with NSF, and to disseminate the center’s findings in research and education. In time, the agreement was expanded to require the ERC to keep a database in order to provide NSF with quantitative indicators of activities and progress in meeting ERC Program goals. ERCs were required to submit an update of their strategic plan within 90 days of their award, which served as a tool to assure that the strategic planning process remained in the forefront of the research programs. They were also required to submit annual reports, which grew in complexity over time.

If future center-level budgets were to grow, that growth would depend on performance. Using financial incentives to stimulate performance was a new mode both for NSF and for academics, who now had to provide evidence of performance beyond publications. The agreement also stated that continued NSF support would depend upon an annual report and an annual review of the ERC's progress. Preston wanted to use the annual reports as a management tool at the center level, so that at least once a year the ERC team would have to come together, reassess its goals and progress, analyze its level of delivery, and determine needed course corrections for the next year (or the next five years for a renewal review). The agreement also had protections for the ERC to prevent the NSF ERC Program Director from exercising too much control over the ERC and usurping the role of the PI.

Initially, the recipient university or universities were obligated to provide the proposed cost sharing. Any state or local government was expected to provide promised support, and industrial support was required, but expected levels of industrial support were not stipulated.

The ERC was required to report to the Dean of Engineering, to position it at a level equal to or higher than a department in the academic hierarchy of a school of engineering and to signify and cement its cross-disciplinary structure, since housing it in a disciplinary department would be counterproductive.

These agreements grew in complexity and specificity over time, as did the ERC Program and its awards. Initially, there was little engagement from the NSF division that actually executed (funded) the awards (technical oversight was provided by Preston and her team in a different division), but that changed over time as well. The ERC Program's cooperative agreement was a challenge for the funding division, where personnel were more familiar with the grant as a funding instrument. Personnel had to be trained to monitor and guide the development of cooperative agreements, which were increasingly used by other center programs at NSF. The funding division was renamed the "Division of Grants and Agreements," and one officer, initially Tim Kashmer, was assigned specifically to the ERC Program; he worked closely with Preston and her staff in the development and execution of the agreements and over time became an expert in cooperative agreements.

For further detail based on an actual agreement template, see the Program Terms and Conditions in the master agreement that was used in 2006.[36] Note that the agreement provides significant guidance regarding performance components and expectations in order to serve as a mechanism for not only detailing requirements but also providing guidance on how the features should be achieved. This resulted in joining terms and best practices into one document because of concern that many ERC leaders and/or their university supervisors would not fully understand how to achieve the Program’s goals. The financial terms were contained in a separate document.

Once the agreement was signed, the award went into effect. Funds were provided on a billing-back, draw-down basis.

ii.       New Center Start-Up: How to Successfully Launch an ERC

Initially, center start-up was quite informal. Preston and Mayfield met at NSF with the Directors of the first class of ERCs, their Administrative Directors, and key faculty leaders after the NAE symposium announcing their awards, to personally congratulate them. The additional goals of the meeting were to inform them of the general terms of their cooperative agreements, the Program's expectations for performance, and the fact that there would be annual reviews and a third-year renewal review in 1987, plus a Program-level annual meeting that they and their staff would be expected to attend.

As discussed previously in other chapters and later in this chapter (section 9-J(a)), the ERC Program annual meetings were designed to serve as platforms for sharing Program-level information and center-level sharing of challenges, successes, and failures. The basic message was that the NSF staff and the ERC participants were learning together how best to fulfill the ERC Program’s mission. As more experience at the center level was validated through annual and renewal reviews, sessions were set up at the annual meetings for experienced ERC key leadership to brief new leaders on how best to start up their centers.

The first edition of the ERC Best Practices Manual, written by ERC leaders, was published in 1996 on the Web through a new site, www.erc-assoc.org, and later served as another tool for helping new ERCs get off the ground without repeating approaches that did not work under the ERC construct. By 2000, the new centers were given a place in the meeting agenda to briefly present their visions and goals, thereby informing other ERC staff at all levels about these new centers. In addition, as centers graduated from NSF support, they also had time on the agenda to present their goals and achievements, so all could see what they had achieved.

However, Preston felt over time that more was needed to help new centers, given the scope, complexity, and impact of the ERC awards. Beginning with the Class of 1996, rather than waiting for the ERC Program annual meeting, Preston brought small teams of leaders of the new ERCs to NSF for a briefing on effective practices in starting up an ERC. While this served a useful purpose, she was concerned that the impact was narrow and that full communication of NSF's expectations depended on how well the one or two leaders who came to NSF for these briefings understood start-up and how well they communicated what they learned back home.

With the start of the Class of 1998, she took the start-up process on the road, to have a broader impact on a new ERC's team and its university administrators than was possible by relying only on one or two ERC leaders coming to NSF and on the ERC annual meetings as start-up training tools. The on-campus start-up meetings combined a celebration of the award with training. After the long and arduous review process, which took over a year, a new ERC Director could use this meeting to bring the faculty team back together, to launch the Industrial Advisory Board and begin the membership process for on-board and new firms, and to bring the university administrators together in support of their ERC.

Preston and the lead ERC PD (and a co-PD if there was one) went to the lead university's campus. The meeting began with a celebratory dinner, where she and the ERC PD could congratulate the ERC team and the lead and partner universities on winning the prestigious ERC award and the Center Director could formally congratulate his/her team. This dinner involved a broad base of academic leaders, the ERC's industrial supporters, often state and local government officials, and even students who had played a key role in the proposal development and review process. The next day, the new ERC's team briefed NSF on its start-up goals and features, and NSF staff provided guidance on up-to-date best practices in research, education, industrial collaboration, administration, and financial management.

The new-center briefing also took on more depth on the administrative side as a result of the leadership of Barbara Kenny, an ERC PD to whom Preston delegated the reporting and oversight system responsibility in 2008. Kenny also coordinated a team of ERC Administrative Directors, who met monthly in a conference call to discuss center administration issues. They suggested that part of the start-up celebration meeting be devoted to in-depth training of new ADs, since it was such a different academic responsibility. Kenny recruited Janice Brickley, the Administrative Director of the ERC for Collaborative Adaptive Sensing of the Atmosphere (CASA), headquartered at the University of Massachusetts-Amherst, to organize those briefings and teach the new ADs coming from new ERCs at start-up as well as new ADs from ongoing ERCs. Both Brickley and Kenny briefed on how to manage the administration and finances of a new ERC. Those start-up briefings began in 2008.[37],[38]

Brickley brought a wealth of experience based on her role as the AD for CASA and her contributions to a later edition of the ERC Best Practices Manual chapter on Administrative Management. She started her briefing to the new ADs as follows, quoting from the Best Practices Manual:

“…to all of the day-to-day challenges of operating an industry-oriented, multidisciplinary Center on a university campus are added the extra dimensions—geographic, logistical, administrative, legal, cultural, and psychological—of requiring separate institutions to collaborate closely.”[39]

9-D       Post-Award Oversight System

9-D(a)    Annual Reporting

i.         1984–1990: Brief Reporting Guidelines and Minimal Data Collection

During the beginning years of the ERC Program, the ERC Program leaders at NSF and the ERC center leaders at the universities were learning together how best to start up and operate an ongoing ERC. Right after the awards for the first class of ERCs were made in 1985, Preston got agreement from Pete Mayfield, the ERC Office Director, and Nam Suh, the head of the Directorate for Engineering, that ERCs had to report on their progress and outcomes. This was a break with NSF tradition, because prior awards had been made in a passive mode with little regard for accountability. The only expectation was that the awards should result in the advancement of science and engineering.

However, under the new ERC program, each ERC was much more than its own individual award; it was now a part of a community of ERCs that would together be fulfilling not only their own visions but also the mission of the overall ERC Program.  In light of this, Preston strongly argued that the ERCs should prepare annual reports on their progress, plans, and outcomes and impacts, because Congress would be looking for how these ERCs impacted U.S. competitiveness. In addition, she argued, the exercise was important not just for external reporting but for the internal management of the ERCs. She reasoned that at least once a year, each ERC had to stand back from the day-to-day work and assess progress, plan for the future, and revise/build the team for the next year. Mayfield was at first uncomfortable with this approach because he was more accustomed to the traditional NSF passive practice of oversight, having been at the Foundation since the late 1950s. However, through her conversations with him, Nam Suh, and Erich Bloch (then the Director of NSF), agreement was reached that the ERCs would have to prepare annual reports and would receive post-award site visit reviews.

That requirement was put in the first cooperative agreement, and minimal guidelines were issued for the first annual report, to be delivered in the winter of 1986, requiring chapters on the key features of an ERC and its management. The responses varied considerably across the centers, ranging from one that essentially stitched together individual reports by the PIs who received the funding to others that provided more thoughtful analyses of progress. The data the centers provided on inputs and outputs varied considerably as well. The next year, for the second annual report due in the winter of 1987, more substantial but still minimal guidelines were provided to the ERCs, including an expanded research section covering strategic planning, with milestone charts depicting deliverables and discussions of testbeds to explore proof of concept. In the winter of 1988, the annual reports and renewal proposals for the third-year renewal reviews were due, and the reviews took place in March 1988.

The guidelines for preparing annual reports evolved over this time with the help of a GAO assessment of the ERC program in 1988. The assessment included an evaluation of the post-award oversight system. GAO experts provided Preston with useful guidance on how to prepare performance criteria and reporting guidelines, which helped her develop and improve the oversight system for the fledgling ERC program.

The annual reports also provided information for the post-award site visit teams to review and evaluate the progress of the center toward meeting its goals. By 1987 the site visit teams (then called Technical Advisory Committees) were instructed to prepare site visit reports on the following topics: (1) management of the ERC and its leadership, (2) the quality of the research program, (3) the education program, with particular attention to undergraduate education, (4) the extent and reality of industrial participation, (5) the extent and reality of state and university support, and (6) specific comments and recommendations to the Program Director for improvement of the ERC.[40]

In addition, the site visit teams were provided with the guiding philosophy below:

It is not intended that an evaluation/review report merely be answers to these questions. Rather, the report should reflect the judgment of the (review) team regarding the progress and prospects of the ERC using these criteria as a frame of reference. They are intended to bring the reviewer up to speed on the goals and objectives of the ERC program. The application of the criteria to each center may differ depending upon whether or not the ERC was built on an existing center or is started de novo (anew), the degree of difficulty inherent in the focus of the center, the degree of difficulty inherent in the blending of the disciplines involved in the ERC, the degree of sophistication of the targeted industrial community, etc.[41]

ii.   1991–2002: More Intensive Reporting and Data Collection

The years between 1991 and 2002 represented a honing of the ERC key features based on experience, the creation of data reporting guidelines, improved definitions of data required, and significant augmentation of reporting requirements. The oversight system eventually included:

  • A definition of key features designed to promote outcomes and impacts on knowledge, technology, education, and industry
  • Review criteria and program guidance to ERCs emphasizing outcomes, impacts, and deliverables
  • Strategic plans for research and education to organize resources to achieve goals and deliverables
  • Database of indicators of performance and impact, designed to support the performance review system and reporting for ERCs and the ERC Program
  • Annual reporting guidelines to focus reports on outcomes and impacts from past support, value added by ERCs, and future plans
  • Annual meetings designed to encourage ERCs to share information on how to achieve cross-disciplinary, systems-focused research programs, transfer knowledge and technology to impact industry, and produce more effective graduates for industry.

iii.    1993 and After: Stronger Reporting Guidelines

In 1993, with the addition of Linda Parker, an evaluation specialist, to the NSF ERC team, Preston gained additional support in developing reporting and database guidelines. They worked with the Administrative Directors of the ERCs, who were responsible for gathering information and data from the faculty and staff and for preparing the reports in collaboration with the Center Directors. The purpose was to develop lines of communication about NSF’s need for information to provide to reviewers, to keep as records of performance, and to report to higher levels of NSF, the OMB, and Congress. The Center Directors also had needs for information about faculty and staff achievements and plans for the future. Data were required to provide a factual basis for claims about inputs (money, member firms, and students, faculty, and staff) and outputs (graduates, publications, patents and licenses, and technology transferred to industry/users).

The result was a set of database and reporting guidelines to be used by the staff of ERCs to gather information and develop their reports. Because the report writing was most often delegated to ERC staff, some of whom might not be fully familiar with the goals and expectations for performance of each key feature, Preston and Parker expanded the guidelines so they would serve as a “teaching tool” for ERC staff and participants about performance expectations and required information and formats.

During this period, the guidelines often were a "work in progress": NSF was learning how to ask the ERCs for reports of progress that would serve its oversight and reporting purposes, and the ERCs were learning how to gather information for the reports and how to prepare effective, readable reports, which would also be used for internal assessments of progress and plans. There was a lot of push and pull; reviewers demanded more and more specific technical and financial information, while centers sometimes complained about the burden of reporting. Findings by the NSF Inspector General that a few centers were less than honest in their reporting prompted requirements for new certifications by higher university officials of industrial memberships and cost sharing, and new definitions of core projects and associated projects. In addition, as more ERC PDs came from an engineering background, they and the engineers in the ERCs shared a need for precise definition of requirements, rather than requirements that allowed more flexibility.

By 2000, there had been several iterations of the annual reporting guidelines based on input from the ERCs, from a committee charged with improving reporting requirements, and from site visit teams. The annual reports were originally only one volume, which provided information on progress and plans for the key features and management, plus data. At the request of site visit teams, a second volume was added to include summaries of project-level reports, the certifications, and budget and other required NSF materials.

In 2000, Preston asked one of the ERC Program Directors to organize a committee, comprised of personnel from the ERCs, to develop new and improved reporting requirements because there were ongoing complaints from the Center Directors about the reporting burden. Despite the desire to reduce the reporting burden, the committee recommended that the number of volumes of the annual report be increased from two to four:

Volume I: Vision/Strategy/Outcomes/Impacts

Volume II: Data Report

Volume III: Thrust Reports

Volume IV: NSF Cooperative Agreement Documentation (Budget, Current and Pending Support, Membership Agreements, Cost Sharing Report, Intellectual Property Policy Description).

NSF ERC staff did not implement this suggestion of four volumes because the staff believed that it would be difficult for site visit teams to juggle their reading between four volumes. For example, there might be assertions made in Volume I about impacts but the reader would have to shift to another volume to find the proof through data—something the staff believed would not happen, as most reviewers read the reports shortly before the site visit or even on the airplane on the way to the visit.

Instead, in 2001 Parker and Preston and the ERC PDs revised the reporting guidelines in a different and specific way, based on the input from the ERCs and this committee. (See the linked file Guidelines here.[42]) Basically, the guidelines had the following format:

  • Performance expectations and requirements by Key Feature, with a reference to a separate document containing a set of specific criteria, organized by feature, with definitions of high- and low-quality performance per feature
  • Reporting Requirements
  1. Report scope – what to include
  2. Two Volumes – Size and other requirements
  3. Data tables suitable for inclusion in the report to be generated by the ERC Program’s database contractor, containing data specific to the reporting ERC
  • Volume I Requirements
  1. Cover page requirements
  2. Project Summary
  3. List of all ERC Faculty
  4. List of External Advisory Committees and Members
  5. Table of Contents
    • Systems Vision/Value Added and Broader Impacts of the Center
      1.  Systems Vision
      2. Value Added Discussion (Knowledge, Technology, Education and Outreach)
    • Strategic Research Plan and Research Program
      1. Strategic Research Plan
      2. Research Program (Thrust Level)
    • Education and Educational Outreach
    • Industrial/Practitioner Collaboration and Technology Transfer
    •  Strategic Resource and Management Plan
      1. Institutional Configuration, Leadership, Team, Equipment and Space
      2. Management Systems and University Partnership         
      3. Financial Support and Budget Allocations           
    • Budget Requests
    • References Cited

Volume I, Appendix I: Glossary

Volume I, Appendix II (Includes the following items):

  1. ERC’s Current Center-Wide Industrial/Practitioner Membership Agreement
  2. ERC’s Intellectual Property Agreement (if not part of the Generic Industrial/ Practitioner Membership Agreement)
  3. Certification of the Industry/Practitioner Membership by the Awardee Authorized Organizational Representative
  4. Certification of Cumulative and Current Cost Sharing by the Awardee Authorized Organizational Representative
  5. Written Conflict-of-Interest Policy and Certification by an Authorized Organizational Representative of the Policy’s Enforcement

Volume I, Appendix III: Current and Pending Support for the members of the ERC’s leadership team.

  • Volume II Requirements
    • List of all supplements and special-purpose awards to the center from the ERC Program
    • Organized by research thrust, 2-3 page project summaries for all non-proprietary projects funded by direct support to the center, as well as associated grants and contracts provided as indirect support through an ERC faculty member's department that fall under the scope of the center's strategic plan. This volume gives a snapshot of all of the research projects going on in the center, with a brief description of each.
    • Bibliography of publications
    • Two-page biographical sketches of the ERC faculty and leadership team per instructions in the NSF’s Grant Proposal Guide (GPG).

A set of final reporting guidelines was also developed and distributed to the ERCs in this period for the first class of ERCs that graduated from NSF support in the mid-1990s. These guidelines challenged the graduating ERCs to take an historical view and:

  • Review the “state-of-the-art” of the field(s) most relevant to the vision of the ERC at the start of NSF support and at the 11-year point; as part of this review, also identify others in the field who made significant contributions during this period.
  • Identify the most significant advances in knowledge and technology made by the ERC, particularly those made as a direct result of the “engineered systems” and interdisciplinary focus of the center.

They were required to summarize:

  • How the research contributions of the center have impacted the field and how the field is farther ahead than it would have been without the establishment of the ERC;
  • How research discoveries made by the ERC have affected and will affect the development of production processes and products and how they have provided a pathway for further discovery;
  • How technology transfer and technological development in industry were enhanced by the establishment of the ERC, and how communication between academe and industry was affected by the ERC, pointing out significant technological advances attributable to the ERC. Impact can be stated as a "sea change" in technology or processes, and/or as incremental changes and their impact on the companies, their design and production processes, and final products;
  • How partner firms and the industry in general have gained competitive advantages they would not have had without the establishment of the ERC;
  • How the integration of the ERC's research and research culture impacted the engineering school's research and education culture;
  • The employment history of ERC graduates in industry; and
  • Plans for the future of the center. What is the strategy for center self-sufficiency?

The final report was to provide a section on “Lessons Learned,” addressing specific items that were learned from the ERC organizational concept and which would be important to pass along to other research organizations and NSF. Finally, the report would include the complete financial, personnel, and quantitative tables in the electronic indicators submission system for the final year of NSF support.

The FY 2013 reporting guidelines are here.

9-D(b)    ERC Database of Performance Indicators

i.         ERC Database Established – An Innovation at NSF

The ERC Performance Indicators Database was an innovation in NSF oversight when it was created in the mid-1980s. As far as Preston can remember, there was no program at the time that was collecting data on resources used and impacts, other than perhaps lists of publications in follow-on proposals. The need for such data arose from the fact that the ERC Program was put in place with the explicit goal of changing the academic engineering culture and impacting the competitiveness of U.S. industry. These data would be necessary to judge the performance of each ERC by the site visit teams and to judge the overall performance of the ERC Program.

Initially, the data set collected was minimal. These data were sent to NSF in hard copy, as this was before the Internet, and hand-entered into spreadsheets by George Brosseau, an ERC staff member who had been a Program Director in the RANN Program. Preston assigned Brosseau to the task because, as a geneticist, he would have the detail-oriented mindset needed to establish this new database. Over time, with input from the site-visit review teams and the ERCs' Administrative Directors, an increasingly useful and consistent set of data was collected.

By 1990, the initial ERC database included:

  • Numbers of faculty, undergraduate, MS, and PhD students supported by the ERC
  • Diversity of these participants
  • Numbers of publications attributable to ERC-supported research
  • Numbers of member and affiliated firms
  • Numbers of small, medium, and large firms
  • Numbers of patents and licenses
  • Inventions disclosed
  • Spinoff companies
  • Numbers of employees at spinoff companies
  • Sources of support (NSF, industry, other Federal government agencies, and states)
  • Value of any new buildings contributed by the states to house or partially house the ERC.

From these data, the Program was able to report in 1989 that the six ERCs in the Class of 1985 had 197 participating firms; 106 (54%) were large firms, 46 (23%) were mid-sized firms, and 45 (23%) were small firms. The five ERCs in the Class of 1986 had 121 participating firms; 73 (60%) were large firms, 26 (22%) were mid-sized firms, and 22 (18%) were small firms. The three ERCs in the Class of 1987 had 41 participating firms; 20 (49%) were large firms, 9 (22%) were mid-sized firms, and 12 (29%) were small firms. These data indicated that the preponderance of ERC industry partners were large firms, but there was roughly an even split between mid-sized and small firms across the classes. This was an important indication that these ERCs were reaching out successfully to engage small firms, even though there were financial requirements for ERC memberships (which were reduced for small firms). At that stage, it was not feasible to benchmark these data by class for reviewers to judge relative performance; that feature was added to the database and annual report tables in later years. The systematic collection of the data enabled “proof” of the contributions of centers toward outputs and outcomes.
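As a minimal illustration of the kind of tabulation this early database supported, the firm-size shares quoted above can be recomputed from the raw counts. The short Python sketch below is illustrative only; the variable names are hypothetical and not part of any NSF system.

    # Sketch only: reproduces the 1989 firm-size shares quoted in the text.
    firms_by_class = {
        "Class of 1985": {"large": 106, "mid-sized": 46, "small": 45},
        "Class of 1986": {"large": 73, "mid-sized": 26, "small": 22},
        "Class of 1987": {"large": 20, "mid-sized": 9, "small": 12},
    }

    for erc_class, counts in firms_by_class.items():
        total = sum(counts.values())
        shares = {size: round(100 * n / total) for size, n in counts.items()}
        # e.g., Class of 1985: 197 member firms -> large 54%, mid-sized 23%, small 23%
        print(erc_class, total, "member firms:", shares)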

A sample of the summary of industrial participation data as reported in 2000 is shown in Table 9-1.

Table 9-1. Industrial Participation at ERCs, FY 1995–1999

ii.      Improved Data Collection Definitions and Database System, 1991–2013: Honing the Systems and Improving User Utility

Throughout the 1990s, the ERC Program staff and the ERC Administrative Directors devoted significant effort to improving the definitions of the required information so they would be reasonably uniform across the ERCs. Based on these definition efforts, data collection guidelines were issued to help the ADs across the ERCs. This effort was led by Linda Parker, who, as noted earlier, was an evaluation expert added to the ERC team in 1993. Initially, Parker engaged a contractor to help produce program-level data summary tables and presentations looking at trend lines across the centers. In the late '90s, she engaged another contractor, QRC,[43] to develop data input tables for the ERCs to use to report the data to NSF; center-by-center data reporting tables for use in center annual reports; and aggregated tables and graphics for Program-level presentations. The Program-level data were presented to the ERCs at the ERC annual meetings and reported to NSF and Congress annually through the budget exercises.

By 1999, the ERC Program had established a robust system for collecting and reporting the performance data. See Preston’s presentation “ERC Program Review and Oversight System.”[44]  A list of the data collected is given below.

Quantitative Indicators of Inputs

  • Personnel paid to work in ERC (cash or cost-shared)
  • Numbers, plus race and ethnicity for U.S. citizens and green card holders, and numbers of foreign nationals
  • All sources of support and expenditures of funds from all sources
  • Industrial membership firms and characteristics (e.g. small, medium, large, and foreign or domestic)

Quantitative Indicators of Output and Impact

  • Students impacted (in ERC classes, research, outreach, etc.)
  • Knowledge and technology advances and impact
  • Technology adopted
  • Degrees earned by ERC students
  • Post-graduation employment of ERC students in industry, academe, or government
  • Curricula produced, courses and course modules developed
  • Publications, workshops and seminars
  • Patents, licenses, inventions
  • Start-up small businesses spun off from ERC advances

As more data were collected from a larger set of ERCs in each technology sector, the contractor could develop benchmarks for certain performance indicators so reviewers could understand average performance by class and by technology sector. For example purposes only, and not drawn from real data: in 2012, across the 17 funded ERCs, a data table might have reported an average of 6 invention disclosures for the 4 ERCs in the biotechnology sector and an average of 6 invention disclosures across all ERCs, but an average of 12 for the Class of 2006 and 18 for one particular biotech center. That would have indicated to reviewers that this center was particularly productive in invention disclosures relative to all ERCs, to its class, and to its sector.
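A minimal sketch of that benchmarking logic follows, assuming a simple per-center record; all center names, field positions, and counts below are hypothetical, echoing the illustrative example above rather than real data.

    # Sketch only: groups hypothetical per-center indicator records into the
    # sector, class, and program-wide averages that reviewers compared against.
    from statistics import mean

    # (center, sector, class_year, invention_disclosures) -- illustrative records
    centers = [
        ("Biotech ERC A", "biotechnology", 2006, 18),
        ("Biotech ERC B", "biotechnology", 2008, 3),
        ("Energy ERC C", "energy", 2006, 6),
        # ...remaining funded ERCs would be listed here...
    ]

    def averages(records, group_index):
        """Average disclosures per group; group_index selects sector (1) or class (2)."""
        groups = {}
        for record in records:
            groups.setdefault(record[group_index], []).append(record[3])
        return {key: mean(values) for key, values in groups.items()}

    by_sector = averages(centers, 1)                 # e.g., biotechnology-sector average
    by_class = averages(centers, 2)                  # e.g., Class of 2006 average
    overall = mean(record[3] for record in centers)  # program-wide average
    # A reviewer would compare one center's count against these three benchmarks.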

After Linda Parker retired from NSF, during the period 2008 through 2013 Barbara Kenny, an ERC PD who was given increasing responsibility for the performance oversight system, and her assistant, Victoria Kwaziborski, devoted significant effort to working with the ERC Administrative Directors, the ERC PDs, and the database contractor to hone the database requirements. Their aims were to strip out data proven to be superfluous over time, to improve presentation formats, and to improve the tables, which were developed by the contractor using each ERC's data submission and returned to each ERC's AD for inclusion in that ERC's annual report.

By 2013, the contractor could then produce data reports as follows:[45]

  • Products of Innovation –
    • Intellectual Property: Inventions disclosed, patent applications filed, patents awarded, licenses issued
    • Economic Development: Spinoff Companies, Spinoff Employees
  • Impacts on Curriculum
    • Degrees
    • Courses – current courses impacted by the ERC (e.g., systems-focused, multidisciplinary, used across institutions, team-taught)
    • Textbooks
  • Information Dissemination
    • Peer-Reviewed Publications
    • Education and Outreach workshops, etc.
  • ERC Students' Degrees Granted
    • Bachelors
    • Masters
    • Ph.D.
  • ERC Graduate Employment by Sector
  • Personnel Carrying out Research and Education Projects
    • by underrepresented group, and citizenship
    • by role in the university system, i.e. faculty, student, etc.
  • Historical Review of citizenship by faculty and type of student
  • Outreach Participation
    • Community College and Undergraduate Events
    • K-12 Events (teachers and students)
  • Historical Diversity Data for Faculty, Students, and Postdocs
  • Industrial/Practitioner Members and Supporting Organizations by type and size
  • Industrial/Practitioner Members’ Support in total and by type
  • Total ERC Cash Support
  • Functional Budgets Across all ERCs by type of Expenditure
  • Total number of faculty participants by discipline and by ERC sector (biotechnology, etc.)
  • Number of institutions participating in ERCs
  • Number of foreign institutions participating in ERCs, in total and by country.

The following (Figure 9-1(a-d)) are a few examples of the presentation slides prepared for the ERC Program by the contractor:

Figure 9-1(a-d): Examples of slides prepared annually by ERC Program’s data contractor (Source: ICF)

9-D(c)    Performance Assessment

i.         Evolution of Specificity of Performance Criteria

The development of a set of post-award performance criteria was an innovation for NSF, and the criteria evolved over time with increasing understanding of what it took for an ERC to be successful. For the reports to be useful to reviewers, they had to be constructed by the ERCs to provide information relevant to the criteria, integrated across individual efforts. The reviewers were instructed to use the criteria to make judgments by key feature across the center, not at the project-by-project level. (Most engineering reviewers were accustomed to providing technical reviews of a project-level paper.)

The earliest performance criteria were merely positive statements of the key features, but the criteria evolved over time. The GAO evaluation team that provided feedback about the ERC Program in 1988 was very helpful in this effort and offered substantial guidance to Preston on how to construct a useful set of criteria. By following that guidance and learning from the feedback in site visit reports and from her personal attendance at site visits, she developed a robust set of criteria that specified in detail excellent and poor performance for each ERC key feature, plus leadership and management. The following criteria were among those used in the third-year review of ERCs in 1990 and specifically defined what was meant by "excellent":

“THIRD-YEAR REVIEW OF ERCS: FEATURES AND PERFORMANCE BOUNDARIES

RESEARCH PROGRAM

An Excellent Effort is characterized by:

  • A coherent vision for the center, articulating its purpose, the need for a center configuration, and its importance for competitiveness;
  • A fully operational strategic plan, focused on proposed goals, providing a systems view of the area, and an integrated research program to implement the ERC’s goals;
  • A high quality (publishable and respected in the community) research effort, designed to contribute significant advances in the knowledge base and the next generation of technology;
  • A strong, cross-disciplinary research team, with the appropriate degree of integration of the disciplines in a team effort so the center contributed more than a collection of individual research projects;
  • If appropriate to the focus of the ERC, provision of experimental capabilities through large instruments not normally available through individual awards.

(Excerpted from Engineering Research Centers Third-Year Evaluation – Review Process. Prepared by Lynn Preston, Engineering Centers Division, NSF, 1990.)[46]

The full set of the original 1990 criteria has been lost. The GAO evaluation indicates that Preston developed them with the intention that they were “….to be used as a frame of reference upon which to build informed judgments and recommendations, not a “cookbook” or formula.” The written guidance stated that:

It is not intended that an evaluation/review report merely be answers to these questions. Rather the report should reflect the judgment of the team regarding the progress and prospects of the ERC, using these criteria as a frame of reference. They are intended to bring the reviewer up to speed on the goals and objectives of the ERC Program.

In addition, application of the criteria to each center may differ:

…depending upon whether or not the ERC was built upon an existing center or it started de novo [anew], the degree of difficulty inherent in the blending of the disciplines involved in the ERC, the degree of sophistication of the targeted industrial community, etc.[47]

The set of criteria was multidimensional, reflecting the complex and multidimensional mission of the ERC Program and each ERC. Excellent performance on one set of criteria (for example, research) was not sufficient to support overall excellent performance. While the criteria were never weighted, implicit in their ordering and explicit in the briefing materials for the site visit teams was the fact that research performance was the first "gate" through which the ERC had to pass with an Excellent or Very Good score, which would have to be supported by Excellent or Very Good performance in education and industrial collaboration for the ERC to receive incremental funding or be renewed.

The set of data gathered by each ERC, starting in 1985, was used by each ERC in its annual reports and renewal proposals to support its case for an increase in annual funding and for renewal. These data were supporting indicators of performance and impact, not primary ones. Thus, there was no quantitative formula for excellent overall performance. Site visit teams were advised to use the criteria to assess the information in the reports, based on their experience with the ERC and with the field and industrial sector associated with it.

During the 1990s, based on lessons learned about performance from a series of annual and renewal reviews, Preston and Linda Parker, the ERC team's evaluation specialist, expanded and refined the criteria. In addition, in the late 1990s the addition of two engineers to the ERC PD team, Cheryl Cathey and Joy Pauschke, had an interesting effect on the criteria. As they started to carry out their roles as ERC PDs, they received comments from the leadership teams of the centers for which they were responsible that the ERC leaders did not really understand the criteria or what constituted a successful ERC. These PDs brought the comments to Preston and Parker because they wanted the criteria redesigned to fit the engineering mindset; that is, they wanted a clear roadmap to success. Preston had resisted this over time, preferring to leave the ERC engineers room to deal with ambiguities more creatively. However, in light of these concerns, the ERC team embarked on an effort to pare down the wording of each criterion and focus each one more explicitly on expected inputs or outcomes, while leaving room for creativity in addressing the challenges.

By 2001 the third-year renewal review criteria for a Gen-2 ERC reflected the changes brought about by these efforts. It was now clear that the expectation for the third-year renewal was that a new ERC would be on a strong footing to make future advances or would already be making early advances derived from its vision and the ERC construct. It was not expected that a new ERC would demonstrate excellent performance against all criteria. The first set of criteria, shown below (Table 9-3), was developed to review an ERC's systems vision and value added (two of the required key features of a center). These were derived from the review criteria used to review the Class of 1998 in 2001.[48] From 2001 to 2013, these criteria were refined further by Kenny and Kwaziborski and expanded to reflect changes due to the establishment of the Gen-3 ERC construct.[49]

Table 9-3: Third-year Renewal Review Criteria (Gen-2 ERCs)

Systems Vision and Value Added – (Years 1-3) FY 2001

High Quality Effort | Low Quality Effort
Vision beginning to prove its potential to transform or significantly strengthen our current industrial base, service sector, or infrastructure | Vision proving to have little potential to transform or significantly impact industry, the service sector, or infrastructure
Societal impact incorporated into research and education programs | Little or no societal impact is likely to emerge from the center's efforts
Research output is high quality, knowledge advances are derived from the cross-disciplinary configuration and a few from the systems vision, interdisciplinary publication in important journals beginning | Knowledge advances are mediocre or low-quality, they are not derived from a systems vision, or they could have been achieved by single investigators, minimal interdisciplinary publication
Beginning to produce technology that is unique (invention disclosures, technology transferred, etc.) | Impacting technology is not a primary goal of the ERC or the impacts could have been achieved without the ERC research construct
Impacting engineering education in ways that promise to be unique for the field | Curricular impact has been forgotten or could have been achieved without the ERC research and education constructs
Center is recognized or becoming recognized as a leader | Center is behind leaders in the field and center contributions are rarely recognized by the field

Systems Vision and Value Added – (Years 1-3) FY 2013

High Quality Vision & Value Added (Years 1-3) | Low Quality Vision and Value Added (Years 1-3)
Systems Motivation: Strong systems vision motivates the ERC, early systems requirements understood | Systems Motivation: Little understanding of engineered systems
Transformational: Vision still has potential to transform or significantly impact industry/practitioners, the workforce, and society | Transformational: Losing sight of the promise of the vision and its potential impact
Leading-edge: Vision positions the ERC to lead in the field | Leading-edge: ERC lags the state of the art or is already eclipsed by competitors
High Quality Research: Research output is high quality, some deriving from cross-disciplinary collaboration (for NERCs* – sound base of fundamental nanoscale research), publications based on ERC research in process | High Quality Research: Research output is low quality; or if high quality, it resembles the output of a collection of single-investigator projects (for NERCs – no fundamental nanoscale research under way to achieve the vision), publication record low or not significant
Educational Impact: Education programs focused on preparing graduates to be more effective in practice, more creative and innovative, starting to provide them with innovation experiences. Pre-college partnership established and starting to provide engineering experiences for teachers and students | Educational Impact: Education programs will have little or no impact on developing graduates who are more effective in practice, creative, and innovative. Pre-college program having little or no impact
Innovation Ecosystem: New innovation ecosystem under development involving member firms/practitioners, other organizations devoted to innovation and entrepreneurship, and plans for translational research – sound basis for strong future impact | Innovation Ecosystem: Concept not understood, no understanding of how to build a Gen-3 innovation ecosystem, or industry has not joined to form a critical mass to start the ERC off on a sound footing; little impact expected

*NERCs = Nanoscale Engineering Research Centers, funded in FY 2012

This history reflects a culture of "learning by doing" that continued until the criteria were honed to reflect performance expectations, developed over time, for both excellent and poor performance.

Suggestion: If you are starting a new program, resist the pressure to define the full scope of performance expectations until there is some actual performance carried out. While you should define excellence early on, to focus the performers on what you expect them to achieve, it may be difficult to anticipate the full nature of excellent and poor performance until reviews have been carried out over a few years.

ii.      Evolution of the Role of Site Visits

1.      Long-term role of the site visit team in the development of an ERC

The post-award oversight system of the ERC Program has involved annual reviews and two renewal reviews since its inception. The purpose of these reviews is to strengthen an ERC by focusing its efforts on critical performance criteria, requiring reporting on progress relevant to those criteria, and requiring site visits by peers of the ERC from industry and academe to review progress and recommend improvements for the future. The renewal review in the third year is designed to cull weak ERCs from the portfolio as soon as possible to make way for stronger teams; the sixth-year review is designed to cull or strengthen ERCs that are having trouble advancing their research over time or functioning with a driving engineered-system vision, or that are facing diminished industrial interest and little educational impact. These two renewal reviews, while considered burdensome by some of the ERCs, served this culling function and strengthened the performance of those that passed to the next phase in the ERC life cycle, in preparation for self-sufficiency in partnership with academe and industry. Some of the graduated ERCs actually miss the annual site visits because of their focusing impact, useful praise and critique, and ability to draw the faculty together as a team.

The site visits were guided by site visit protocols that were uniform across all ERCs at a given stage in the life cycle. (See the detailed description in Section 9-C(b).) These protocols were used to ensure uniform guidance by ERC PDs to the site visit team and a uniform understanding of the review process by the reviewers. An ERC's site visit team was assembled by the ERC PD responsible for the oversight of that ERC, with input from the Center Director, other members of the site visit team, Preston and/or the Division Director, and NSF staff with expertise in the center's research areas and/or in education. The lists of potential reviewers to serve on the team were augmented through Web searches once that resource became available.

Beginning in the 1990s, it became apparent that an ERC would benefit from site visit teams that included reviewers who had served in that role for that center in earlier years. This way, the ERC would not have to assume each year that the reviewers were unfamiliar with the center, and the reviewers could play a role not only in assessing quality but also in suggesting improvements and following up on progress over several years. Review teams were refreshed over time as reviewers became "saturated" and found little to gain from continued involvement, or were removed from the site visit team because they could not write and/or discuss well or had difficulty understanding and valuing what an ERC was supposed to achieve.

The criteria were used as a frame of reference for judging progress and impact so that a reviewer who might be more adept at in-depth technical analyses could be guided by the criteria in making judgments about overall progress and plans for a key feature of the ERC.

Encounter at a Site Visit
The ERC Program uses site visit teams to judge the progress of centers in addressing their visions for pioneering technology and education. A serendipitous outcome of one of these site visits occurred in the 1990s when one of the visitors, John Sprague, the CEO of Sprague Electric, related to NSF Program Director Tapan "Tap" Mukherjee that the first wife of his firm's founder, Frank Julian Sprague, was Mary Keatinge. After they divorced, in 1924 she became the wife of the Bengali Indian revolutionary and internationalist scholar Taraknath Das. Based on that encounter, Tap decided to explore Das's life. His research resulted in his book, Taraknath Das: Life and Letters of a Revolutionary in Exile.* Das was one of the leading early fighters on behalf of the right of South Asians (Indians in particular) to emigrate to the US, a development which has had an enormous positive influence on engineering and technology in this country.

* Mukherjee, Tapan (1998). Taraknath Das: Life and Letters of a Revolutionary in Exile. Calcutta: National Council of Education. ISBN-13: 9788186954003.

Also in the 1990s, the role of the site visit teams evolved from being solely judgmental in a pass-fail mode to more of a nurturing and developmental mode. This approach was a better fit with the culture of the NSF ERC team, who preferred a developmental role rather than just a judgmental role, and it fit better with the academic culture, where faculty and students were nurtured and challenged to achieve their highest levels of performance.

One of the defining events in a site visit was the development of a set of questions (by the site visit team) and answers (by the ERC team). The site visit team identified a set of issues still in need of clarification by the end of the last briefing day of the visit. Initially, those issues were posed to the ERC team the morning of the last day, for the team to respond to in real time before the briefing portion of the visit closed and the site visit team began to determine its recommendation and write its report before leaving campus. Preston found this too stressful for the ERC teams, and the process did not result in the best responses. Around 2000, she decided that the clarification issues would be presented to the ERC team the night before the last day of the visit, so the Director and his/her team could meet in the evening to address them and provide written answers/slides for discussion the morning of the last day of the visit. In other words, the ERC team would have "homework" on the last night of the visit to answer specific questions from the reviewers. This gave an opportunity for thoughtful team responses to key issues still remaining in the reviewers' minds and often clarified the ERC team's own thinking as well as providing answers to the reviewers. In addition, in weaker ERCs it revealed weaknesses in the team structure if the Director prepared and presented all the responses, or if key faculty left too early in the evening while the ERC was preparing its response and/or did not return the next morning. It also revealed the commitment of university administrators, as the most involved Deans helped in the process and returned the final morning to observe, or in some cases contribute to, the presentation. The site visit team could thus gain insight into the dynamics and operations of the ERC team through how it responded to the overnight questions. The reader can access the FY 2013 site visit guidelines that were used by the ERC PDs and the ERCs' Leadership Teams to develop and manage a site visit.[50] By this time, the guidelines were quite detailed, to ensure uniformity across the centers and to account for lessons learned from past visits.

Testimonials from two long-term ERC reviewers and a long-term ERC PD reflect upon this culture.

Dr. Larry Fehrenbacher, President of Technology Assessment and Transfer, Inc., Annapolis, Maryland, served as a member of the “Blue Ribbon Panel” for the FY 2006 ERC competition, the panel that recommended a final set of proposals for consideration for an award. Because of his expertise, he was also asked to serve as a post-award reviewer for one of the ERCs funded as a result of this review, the Center for Compact and Efficient Fluid Power (CCEFP), headquartered at the University of Minnesota. His insights will help the reader understand the dynamics of the recommendation process and the post-award oversight system from a reviewer’s perspective. Click here to access this essay.

Dr. Randolph (Randy) Hatch is President of Cerex, Inc. and a frequent ERC Blue Ribbon Panel member and site visitor. Click here to access his essay.

Cheryl Cathey served as an ERC PD between 1996 and 2000. She brought a background in chemical engineering to the ERC Program, which was expanded by gaining knowledge of work at the interface of biology and chemical engineering through her role as the lead PD for ERCs in the bioengineering field. After leaving NSF, she returned to California and has been working in bioengineering start-ups in the Palo Alto area since then. Click here to access her essay.

2.      Initiation of the SWOT Analysis (1996) and its role in the site visits and center development

Preston inserted the Strengths-Weaknesses-Opportunities-Threats (SWOT) analysis into the ERC review process in 1996, after she was introduced to it by the Industrial Advisory Board members at the start-up visit to the then-new Packaging Research Center at Georgia Tech. The SWOT analysis quickly became the analytical management tool for the Industrial Advisory Boards (IABs), the Student Leadership Councils (SLCs),[51] and the site visit review teams. At an ERC’s site visit, the review teams met in private with the IAB and the SLC to discuss their respective SWOT analyses. The site visit team also produced a SWOT analysis under the guidance of the ERC PD during the site visit. It was used as a tool to summarize the overall integrated performance of the ERC and to focus a site visit team’s discussions during executive sessions held during the site visit.

A typical SWOT analysis follows, from an ERC that joined medical and engineering faculty; ERC-specific identifiers have been scrubbed by replacing field-specific comments with XXXs:

Strengths:

  • Project Directors are leaders in their field;
  • Strong culture of collaboration between clinical and engineering faculty and students and institutions, which has been significantly enhanced by the ERC mechanisms;
  • Center management fosters a strong interdisciplinary research program at both graduate and undergraduate levels combining strengths of many departments, and clinicians at several institutions;
  • Major strengths in systems engineering and integration of robotics in interventional procedures;
  • Commendable and visionary education and outreach programs for K-12;
  • Development of sub-system modules that could become part of the envisioned plug-and-play systems; and
  • Impact of research is beginning to show through increased numbers of publications in clinical journals and through SBIR and NIH grants.

Weaknesses:

  • There are relatively few patents and licensed technologies. This shortcoming may be addressed by simplifying the process for IP agreements and the development of focused collaborative projects;
  •  The center lacks capabilities or close collaborations in certain areas of XXX, XXX, and XXX;
  • Director should look outside the boundary of the core institutions for appropriate clinical collaborators, if needed;
  •  Improvement is needed to attract significant numbers of underrepresented minority and women faculty and underrepresented minority students. This was noted as a particular problem in the case of graduate students; and
  • Communication with industrial members needs to be improved.

Opportunities:

  • The ERC could play a key or a catalyst role in developing standards for interfaces in the domain;
  • Advertise and market the ERC and its capabilities at all levels from students and potential collaborators to the medical and research community at large;
  • Find a mechanism to encourage companies to provide the ERC with the state-of-the-art equipment such as imaging modalities and seek further funding from programs such as Major Research Instrumentation (MRI) Program at NSF and similar programs at NIH;
  • Investigate opportunities for supporting the Center’s infrastructure through such mechanisms as NIH P41 Research Resource and BRPs, and also through sustained support from core institutions;
  • Strengthen and deepen collaboration with current industrial partners and broaden the industrial base from which the members are recruited; in particular, encourage student internship with industry;
  • Continue to refine the role of the industrial partners in the strategic planning process;
  • Take advantage of opportunities at the medical school to showcase ERC technologies to practitioners and students.
  • It may be useful to provide a collection of succinct clinical impact statements made by the clinical collaborators for each of the proposed applications tackled by the ERC. Industrial Affiliates would also appreciate the ability to browse a collection of short research summaries describing the various tasks.

Threats:

  • The IP issues involved in setting up effective collaborations with industry may not be resolved for effective technology transfer to companies other than ERC spin-offs;
  • Retaining key research participants is crucial to the success of the Center.

The site visit report preparation process went electronic as web-enabled communication tools progressed. Thus, by 2010, a site visit team member could download a template of the site visit report, which included the review criteria, from NSF's SharePoint system; prepare his/her analysis of performance against those criteria; and then upload the draft analysis for review by other members of the site visit team. This way, the site visit report could be started in the late morning and be completed, reviewed, and signed off on by the early afternoon of the last day of the visit, before the site visit team members left campus.

iii.    Review Outcomes and Reasons for Failure

1.         Success and failure rates

Between 1987—the renewal review for the Class of 1985—and 2014, the failure rate at the third-year renewal declined with every generation of ERCs, reflecting ERCs' increasing understanding of what it takes to succeed, based on improved performance criteria, improved guidance provided to start-up ERCs by the ERC Program staff and members of the ongoing ERCs' leadership teams, and improved guidance in site visit reports. For the 21 Gen-1 ERCs, 18 successfully passed their 3rd-year renewal reviews, leaving 3 that did not (18%): 2 in the Class of 1985 and 1 in the Class of 1987. For the 31 Gen-2 ERCs, 2 from the recompeted Class of 1985 were not eligible for a third-year review, given the short term of support they received. That left 29 undergoing that third-year review, of which only 2 did not pass, for a failure rate of 7%—a greater than 60% reduction in the failure rate. Those two were in the classes of 1998 and 2003. Among the Gen-3 ERCs, none failed their third-year renewal reviews. Generally, the main reason the earlier ERCs did not pass their renewal reviews was a failure in leadership at the Director level to fully grasp the challenges of the center's own vision and research program, which required integration across faculty and institutional partners as well as a sincere attempt to develop systems-level testbeds.

There were no failures at the sixth-year renewal point, but several ERCs were put on hold as an outcome of a less than highly successful sixth-year renewal and asked to repeat that renewal in the seventh year. The most common reason was less than impressive system-level testbeds. However, for two, there was a concern that the ERC's team may have lost interest in fulfilling the full set of ERC key features, such as a robust education program or a robust industrial collaboration program, and their renewals were put on hold. These two were asked to consider whether they wanted to skip a second sixth-year review and simply run out the funding on the agreement, forgoing additional time and support. After serious team meetings, both decided that the opportunities for collaboration and intellectual excitement generated so far by ERC funding were too rich and important to walk away from. They "shaped up" weak features, faced the repeat sixth-year renewal reviews, and were recommended for continued support. A third ERC's bid for a sixth-year renewal was put on hold because the site visit team judged that its plan for self-sufficiency would result in a destroyed ERC—merely a collection of PIs operating without much integration at the different partner sites. That ERC's Director restructured the plans for self-sufficiency and passed the repeat renewal review.

2.         Interface with the OIG strengthens the ERC Program’s oversight system

In 2006 and 2007 the NSF Office of Inspector General (OIG) carried out an assessment of the pre-award review and post-award performance oversight systems used by eight NSF center programs.[52] Members of the staff of the audit side of the OIG started out their assessment of all center programs by visiting with the ERC Program staff to gather the ERC Program’s pre-award review system materials and its post-award oversight system materials to use as a baseline for their study. They told Preston and Barbara Kenny at the end of their evaluation that the ERC Program’s post-award oversight system was the “gold standard” against which they measured the other programs.

By 2006, the ERC Program's systems had been strengthened through continuous improvement based on feedback from the review processes, evaluations, and input from the OIG. The OIG input was especially important in transforming the system from one of "trust and assess" to one that also included "trust and verify." As noted in Chapter 6 (Section 6-D(g)) and above (Section 9-A(a)), the need to verify derived from malfeasance on the part of one center director in reporting industrial members; from a reporting approach by another center director that inflated project involvement and therefore industrial participation; and from a need to verify cost sharing across all centers to improve cost-sharing auditing throughout NSF. The result was increased oversight by university administrators of the management and reporting of centers, including certifications of industrial membership and support and certifications of cost sharing. To rein in a tendency to overreport center activity by including any project underway by faculty supported by a center, the reporting system was revised to separate projects directly supported by an ERC's budget (NSF, cost sharing, industrial member support, etc.) from those supported by other sources but under the umbrella of the ERC's strategic plan (associated project support).

9-D(d)    Increased Training for ERC Leadership Teams

During this period, it was becoming apparent that staff turnover was higher in the later generations of ERCs, and that new staff often received insufficient training before the more experienced staff left an ERC. In response, NSF made an increased effort to form teams of experienced ERC staff to help train new personnel coming on board at the centers in staff roles.

Preston established consultancies for the Industrial Liaison Officers, the Administrative Directors, and the Education Directors. The annual meeting planning process was used to identify leaders from ongoing ERCs who would be good mentors and trainers. A leader of each consultancy was selected; that person formed a team of experienced ERC members in that role, and support was provided for their travel and time to visit new staff on site at ERCs. In addition, retreats were held during the summer for ADs, and closed meetings were held at the annual meetings for all staff groups. Finally, an NSF PD was put in charge of coordinating monthly teleconferences with teams from each functional group.

9-E       Program Budgets/Center Funding

9-E(a)    Base Budgets

                      i.     Program Budget

The ERC Program’s budget grew from $10M in 1985 to its then-highest level of $76M (for total ERC funding) in FY 2018. The growth of the budget involved planning for the growth of the base budgets of the ongoing ERCs, because those budgets grew over time, plus planning for budget growth to support the addition of new centers. The original plan was to bring the total number of annually funded centers to 25. That level was achieved once in 1996, but overall the steady state in the 1990s was 20.

                      ii.   Center-level Base Budgets

Gen-1: The first ERC Program Announcement indicated that annual budgets, after the start-up year, would range from $2.5M to $5.0M per center. However, budget realities intervened, and that maximum level was never reached. Since funds were limited in 1985, only the six Highly Recommended proposals were recommended for start-up awards, beginning on May 1, 1985, as follows; the Year 2 budgets show the annual funding level at that time.

  • Columbia University – $2.2M for 9 months, to $3.24M for Year 2
  • University of Delaware – $0.75M for 9 months, to $1.2M for Year 2
  • University of Maryland – $1.5M for 9 months, to $2.426M for Year 2
  • Massachusetts Institute of Technology – $2.2M for 9 months, to $3.048M for Year 2
  • Purdue University – $1.6M for 9 months, to $2.038M for Year 2
  • University of California at Santa Barbara – $1.17M for 9 months, to $1.75M for Year 2.[53]

The differences in funding levels reflected an NSF determination of the readiness of the recipient university to deal with a large award and also a consideration of whether the center was built on prior centers (Delaware and Columbia, both Industry/University Cooperative Research Centers (I/UCRCs)) and therefore required less funding at startup. Through the Gen-1 period (1985-1990) the decision was made to keep the ERC budgets in the range of $2.5M to $3.0M per year past the start-up year, and to use growth in the base budget as an incentive for performance. But rather than creating a few highly funded ERCs, NSF decided to fund more ERCs each year and use the base NSF support as just that—a base, upon which leveraged support from universities through cost sharing, from industry, and in some cases from state, local, or even other Federal government agencies would round out the ERC's total budget.

Gen-2: This was a period of rising base NSF budgets for the ERCs. The guidance for the proposed budgets for Gen-2 ERCs in 1994 was that the start-up years would be funded at $1.5M to $2.0M per year, rising to a maximum of $3.0M/yr by year five under NSF solicitation 94-150. Under NSF solicitation 98-146, the maximum rose to $4.0M/yr in year six, and under NSF solicitation 02-24, the maximum of $4.0M/yr could be reached by year four. Under NSF solicitation 04-570, which funded five centers forming the Class of 2006, the base NSF budgets could not exceed $3.0M for year one, $3.25M for year two, $3.5M for year three, and $4.0M for years four and five.[54]

Gen-3: The guidance for the proposed budgets of the Gen-3 ERCs (2008–2017) was $3.25M for year one, $3.5M for year two, $3.75M for year three, and $4.0M for years four and five. During this period, impacting the Class of 2008, the NSB decided to forbid university cost sharing on NSF awards, including centers and other large awards. This seemed ill-advised for large awards, because the NSF support was a considerable asset for the receiving university. By 2009, Congress reversed that policy for centers and ERC university cost sharing was again required. However, voluntary cost sharing (providing more than the stipulated amount) was prohibited.

9-E(b)    Supplemental Funding Opportunities

The base NSF awards were augmented by university cost sharing and leveraged support from industry and other government agencies. The ERC Program also provided opportunities for ERCs to receive additional NSF support through competitive supplements. The use of supplemental funds was a way of encouraging ERCs to go in new directions that they might not pursue with their limited base support. Funds for education supplements were allocated from the ERC Program base budget annually. Every few years, as older centers approached graduation from NSF support, their support for each of the last two years was phased down by roughly 30 percent of the prior year's base support level. This was to allow centers to transition gradually to self-sufficiency rather than having funding cut abruptly. (See Section 9-F(b) for a discussion of budget strategies to encourage self-sufficiency planning.) The phase-down philosophy left money within the overall ERC Program budget that was not specifically allocated to centers. This gave ERC Program management some flexibility to use these funds to promote new opportunities for centers, such as offering one-time-only supplement initiatives to meet special program goals.

As shown in Figure 9-2, in FY 2000 the total annual budget from all sources for all 18 ERCs was $155.5M (an average of $8.6M per ERC), of which the ERC base award plus supplements of $45.4M accounted for 29.2 percent of the total—roughly equal to that of industry. (See Figure 9-3.) 

Figure 9-2: Support for ERCs from All Sources, FY 2000 (Source: ICF)

Figure 9-3: Percentage of ERC Support by Sector (Source: ICF)

Eleven years later, in 2011, the ERCs showed an increase in average total support to $9.6M/center; but the percentage of NSF ERC Program funds within that total support had increased to 38.5% ($3.7M/center), as shown in Figure 9-4. This appears to have been due to a decrease in proportional support from non-NSF sources.[55] In FY 2013, average total support per ERC was $10.1M and ERC Program support grew to 42%, or $3.6M, as shown in Figure 9-5.[56]
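
The per-center averages and percentage shares quoted above and shown in Figures 9-2 through 9-5 follow directly from the reported totals. As a minimal illustration (not an NSF tool), the few lines of Python below simply reproduce the FY 2000 arithmetic; the only inputs are the FY 2000 figures quoted above:

  # Illustrative arithmetic only, using the FY 2000 totals quoted in the text.
  total_support = 155.5       # $M, all sources, across all 18 ERCs
  erc_program_support = 45.4  # $M, ERC base awards plus supplements
  num_centers = 18

  avg_per_center = total_support / num_centers           # ~8.6 ($M per ERC)
  nsf_share = 100 * erc_program_support / total_support   # ~29.2 (percent)

  print(f"Average support per ERC: ${avg_per_center:.1f}M")
  print(f"NSF ERC Program share: {nsf_share:.1f}%")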

i.                   Education Supplements

The ERC Program began providing additional support for ERC education programs early in its history to balance the inclination of ERC leadership teams to favor research over education. These supplements started with Research Experiences for Undergraduates (REU) supplement awards. The purpose was to provide funds to ERCs to recruit undergraduates from non-partner schools to spend the summer in ERC research labs, expanding their horizons regarding the opportunities provided by graduate research focused on technology in partnership with industry. Funds were set aside in the ERC Program budget to support these supplements starting in 1990, and the supplements were managed by Mary Poats throughout the 1990s.

Figure 9-4: FY 2011 ERC Support (Source: ICF)

Figure 9-5: FY 2013 ERC Total Cash Support (Source: ICF)

When Poats became responsible for the ENG Research Experiences for Teachers (RET) Program in 2002, Esther Bolding, who had worked with Poats in processing the REU supplements, took over that responsibility. At that time, she was also appointed as the Program Manager for the ENG-wide REU Program, where she was responsible for the ENG-wide REU Site supplements along with the ERC Program's REU supplements. She continued in that role until her retirement from NSF in 2014. Set-aside funds for the REU supplements started at between $700K and $1.0M each year in the 1990s. By 2007, when AD/ENG Buckius transferred the ERC REU funds out of the ERC Program, the funding amounted to $2.5M. At that point, the ERCs were required to support REU programs out of their own funds and were encouraged to submit proposals to the ENG-wide REU program to gain support.

As noted in Chapter 7, Section 7-D(b), the ERC Research Experiences for Teachers (RET) Program served as a way to broaden the outreach of ERCs in the development of future generations of engineers. The goal was to bring middle and high school teachers into engineering laboratories to learn engineering concepts and return them to the classroom with model curricular materials to bring these concepts, like design-and-build, to their students. Modeled after an RET program supported by the Materials Science programs in another Directorate, the RET supplements were initiated for ERCs in 2001, with set-aside funding of $2.5M; the ENG-wide RET Program followed in 2002 with a budget of $4M. The ERC Program continued to support these RET supplements until 2007, when the ERC funds were transferred to the ENG-wide RET Program. As in the case of the REUs, the Gen-3 ERCs were then required to support their RET programs from their base budgets and were encouraged to submit proposals to the ENG-wide RET program to gain support.

As described earlier, when funds were released into the ERC budget from the phase-down of soon-to-graduate ERCs in their last two years, or when funds were released by graduations and new awards were not yet ready to be funded, funds became available for special one-time-only supplements. The lesson here is that if a program wants to achieve certain enhanced features over many years, it is best to set aside funds for which the centers can compete. That way, the prestige of the feature is established, and the funds devoted to it at the center level become large enough for someone at the agency to be assigned to manage both the feature and the total funds allocated to it. The energy that Poats and Bolding and the ERCs' own Education Directors devoted to the REU and RET supplements raised the status of RETs and REUs on campuses and throughout NSF.

ii.     Testbed and Equipment Supplements

In addition to competing for supplemental funds for education and outreach, centers could also compete for supplemental funds to augment their base support to build out testbeds and purchase larger-scale shared equipment. One example of this type of award was a translational research supplement awarded in 2006 to a partnership between the faculty of the Computer-Integrated Surgical Systems and Technology (CISST) ERC at Johns Hopkins and Intuitive Surgical, Inc. (ISI) to augment the capabilities of ISI's da Vinci telepresence surgical system through a prototype Surgical Assistant Workstation for Teleoperated Surgical Robots. The da Vinci system combined advanced robotics with modeling and image analysis to make surgical interventions less invasive, safer, and more efficient, a longstanding goal of the CISST ERC. CISST had a large investment in modular, portable software and hardware libraries and components for supporting this research. ISI's da Vinci system, components of which originated in research in CISST, was the only FDA-approved telepresence surgery system at the time. The purpose of this translational research supplement was to combine the strengths of CISST and ISI to develop a common software environment combining open source, modular CISST software components with the da Vinci-specific ISI Application Programming Interface (API). This would provide a uniform interface enabling more rapid research and technology transfer between CISST and other institutions, including ISI, using the da Vinci system.

The award supported research and prototype development to transform the da Vinci robot from a simple "surgeon mimic" that gives the surgeon roughly the same capabilities as in a laparoscopic environment into a tool that gives the surgeon a surgical environment like the one he or she is accustomed to in an operating room, becoming a true "surgical assistant" providing greatly enhanced capabilities. In addition, it provided a needed uniform, modular environment for developing new robotic devices and systems, promoting a wider range of university collaborations with applications beyond medical robots. This Surgical Assistant Workstation is illustrated in Figure 9-6.

Subsequently, as the first generation of da Vinci robots were retired and traded in for newer models, ISI decided to donate the hardware components of these older systems to universities without intellectual property restrictions. There is now a research consortium of over 30 leading universities using the "da Vinci Research Kit (dVRK)" hardware and control electronics designed at Johns Hopkins as a result of this award. The kit has been replicated by a former CISST ERC graduate who, as a faculty member at Worcester Polytechnic Institute, is using the CISST software to provide a truly "open source" research environment for surgical assistant robots. This community is very active, with ongoing NSF support through the National Robotics Initiative,[57] well-attended workshops, user group meetings, and PI meetings every year. The research results produced with these systems are beginning to find their way into clinical practice, and many of the students trained using these systems are working in the medical robotics industry.[58]

In the process of preparing this summary of the outcome of that award, Russ Taylor remarked to Preston on May 15, 2018: "Thanks for your initiative in supporting the effort! It really seeded a major impact on our community. The technologies enabled are beginning to show up in product development. Even more to the point, the students trained using the environment are going to work for Intuitive and a bunch of new companies in the field. Since the main 'product' of any university is as much the people trained as is the actual research, this is really an important impact."

Taylor’s tribute attests to the important venture capital-like role these types of ERC investments had on stimulating advances in technology.

Figure 9-6: The Surgical Assistant Workstation (SAW) developed at the CISST ERC provides a modular, open source software environment for linking advanced research and functions to the da Vinci Surgical System and to surgical research robots developed in academic laboratories. (Source: CISST)

iii.     Connectivity Across ERCs and Minority Outreach Supplements

1.      Connectivity Across ERCs

By 2000, as the ongoing ERCs began to understand the research programs of their cohorts, they asked Preston to set up a fund to encourage connectivity across ERCs. The fund was set up in 2000 and several connectivity awards were made. They involved collaboration between faculty and students at different ERCs, bringing their different skills together to address a mutual problem/opportunity.

In 2001 the ERC Program issued a call for proposals from ongoing ERCs to form Partnerships in Education and Research Opportunities. The following is an example of how useful this type of funding was. The funding enabled a three-year partnership among two ERCs and a research team at Carnegie Mellon that was supported by the Earthquake Engineering Program. The NSF Program Directors involved were George Lea, adjunct ERC PD for Computational Fluid Dynamics; Aspa Zerva, an earthquake engineer and an Earthquake Engineering Research Center (EERC) PD; and Joy Pauschke, an earthquake engineer and the Coordinator of the George E. Brown, Jr. Network for Earthquake Engineering Simulation (NEES) as well as the former ERC PD for the Mississippi State ERC and the three EERCs, when she worked in EEC.

Paraphrasing the award abstract: The purpose of the award from the ERC Program was to connect the expertise in (1) large-scale computational simulation and visualization at the ERC on Computational Field Simulation (CFS) at Mississippi State University, (2) geotechnical and structural response simulation at the Pacific Earthquake Engineering Research Center at the University of California-Berkeley (PEER), and (3) Professor Jacobo Bielak's work in advanced computational ground motion and soil-foundation-structure-interaction modeling at Carnegie Mellon University, in order to develop an advanced computational capability for modeling and visualizing the effects of earthquakes in urban regions on the built infrastructure, and to apply this capability to simulate the performance of collections of buildings and other structures in an urban region. The ultimate goal was to forecast the amount and distribution of damage throughout an urban region.[59]

The proposed methodology shows how the goals of this project could not have been met without the integration of these three loci of capability in earthquake engineering, simulation, and visualization and the resources of the ERC Program—$1.1M for three years for what amounted to a small-group project.

The project was designed to integrate “end-to-end” the earthquake source, path, basin, and surficial effects of the geological structure of the region on ground motion, with realistic models of buildings and bridges, including soil-foundation-structure-interaction effects, to develop a distributed, high-resolution simulation capability. Key features of the distributed high-performance computational simulation environment were the following: (1) realistic representation of the structure type, geometry and properties; (2) detailed modeling of the soil structure in the near region of the structure; (3) use of real surface topology and sub-surface geotechnical properties; (4) high resolution in the node density of the ground motion to capture the higher frequencies required for determining structural response; (5) explicit consideration of soil-structure-foundation interaction effects; (6) analysis of the simultaneous earthquake response of a portfolio of buildings to examine structure-to-structure interaction effects through the soil and the effects of the built environment on the free-field earthquake ground motion; (7) automated data storage, access, and transport; (8) visualization of complex and large datasets representing behavior of individual structures and aggregates; and (9) integration of these components in a distributed computational environment. This project used the simulation environment to investigate the effects of earthquakes in the Los Angeles urban region.

This problem was considered to be of great importance to hazard mitigation and seismic risk reduction because assessing the ground motion to which structures will be exposed during their lifetimes and predicting their response to this ground motion, including potential damage, is an essential step for the appropriate design and retrofit of earthquake-resistant built infrastructure. Performance-based earthquake engineering methodologies are motivated by the need for scientific and transparent methods to relate seismic hazard to structural performance and loss. Forecasting damage and loss can also be of great use for emergency planning and management purposes. Visualization of damage in an urban region can aid policymakers and stakeholders in making informed decisions on how to reduce earthquake losses. The simulation environment used the Globus toolkit for access to the computational grid, the deployment of which NSF supported as part of the NEES. After it was developed, the simulation environment was made accessible to the earthquake engineering community through NEES.[60]

2.      Connectivity between ERCs and Predominantly Underrepresented Minority-Serving Institutions

During the 1990s, with an increased emphasis on outreach driven by the developing culture in NSF of achieving broader impacts from NSF awards, the ERC Program began to use supplements to stimulate ERCs to form collaborative partnerships with colleges and universities that served predominantly underrepresented populations. The ERCs were encouraged to submit proposals to a cross-ERC competition when funds were available. In this way the Tissue Engineering ERC (GTEC) at Georgia Tech and Emory University was linked to the cluster of historically Black colleges and universities (HBCUs) in the Atlanta area known as the Atlanta University Center (AUC).

One of the students funded to work at GTEC, Manu Platt, a biology major at Morehouse College, was such an excellent student and member of GTEC that he went on to get his PhD at Georgia Tech/Emory in Biomedical Engineering in 2006. He moved on to a post-doc appointment at the MIT BPEC ERC. Upon completing that appointment, he returned to Atlanta as an Assistant Professor in the Wallace Coulter Department of Biomedical Engineering, a joint department of the Georgia Institute of Technology and Emory University. Platt (Figure 9-7) is now an Associate Professor, researcher, and Diversity Director of the NSF Science and Technology Center on Emergent Behaviors of Integrated Cellular Systems (EBICS). He is also a research team member and Pre-College Director of the Class of 2017 ERC at Georgia Tech on Cell Manufacturing and Technologies (CMaT). His awards for mentoring and outreach have included the Georgia Tech Diversity Champion award, the Junior Faculty Above and Beyond Award, and the Junior Faculty Outstanding Undergraduate Research Mentor Award from Georgia Tech. He was named an Emerging Scholar by Diverse: Issues in Higher Education magazine in 2015 and one of the "Atlanta 40 under 40" by the Atlanta Business Chronicle in 2016, and he received the Biomedical Engineering Society Diversity Award in 2017.[61]

Figure 9-7: Prof. Manu Platt (Credit: Manu Platt)

Another connectivity award joined the ERC for Reconfigurable Manufacturing Systems (RMS, Class of 1996), headquartered at the University of Michigan, with Morgan State University (MSU), an HBCU in Baltimore, Maryland. The partnership involved faculty and students at both institutions in developing a remotely controlled inspection system linking the RMS team at Ann Arbor and the MSU team. They developed a prototype Reconfigurable Inspection Machine (RIM) that was shipped to Morgan State and demonstrated there by MSU researchers at the site visit. The connectivity led to a full-scale, longer-term partnership. S. Keith Hargrove was the key faculty contact at MSU, where he served as the Chair of the Department of Mechanical Engineering; he joined in partnership with the RMS team, led by Yoram Koren, the RMS Center Director. The photo in Figure 9-8 shows members of the MSU faculty team demonstrating part of the inspection system to the NSF site visitors. Professor Hargrove is now the Dean of Engineering at Tennessee State University.

Figure 9-8: Morgan State University Faculty Demonstrate a Prototype RIM to Site Visitors (Source: RIM ERC)

9-E(c)     Other Uses of ERC Funds—Stimulating Nanoscale Modeling and Simulation, FY 2000

As discussed above, there were times when funds were available in the base ERC budget that could be devoted to other uses without diminishing the long-term ERC budget. One such time was in 2000. Funds had been set aside to support up to six or seven new ERCs, using funds coming out of graduating ERCs. However, the quality of the proposals did not warrant that many new ERCs, since most of the visions proposed were essentially incremental changes to current technology. Only two awards were made—one to enable next-generation wireless integration microelectronic systems at the University of Michigan and another for subsurface sensing and imaging systems at Northeastern University. The ERC Panel was concerned that the ERC base budget would be cut if all the funds available for new centers were not used for that purpose, but they did not want to recommend any of the "runner-up" proposals. Preston assured them that the budget was safe and that they should recommend only two new ERCs if that was what they felt was warranted.

Then came the decision about what to do with the residual funds. At that time the nanoengineering field was just being developed by Mihail (Mike) Roco at NSF and Preston had previously had discussions with him about the need to build a capacity for simulation and modeling at the nanoscale level. She and Mike went to talk with Gene Wong, the AD for ENG at the time, about the idea of using the ERC residual funds to support groups to build this capacity, thereby enriching the nanoengineering field, while perhaps also preparing teams that might compete for an ERC in the future. That discussion took about 30 minutes and Wong approved the initiative, as he was a careful and quick decision maker.

A solicitation (NSF-00119) was released and, after peer review of the proposals, seven small-group awards were made to support the growth of capacity in the simulation and modeling of nanoscale phenomena.[62] Mark Lundstrom, from Purdue University, led one of those groups—the only one to develop a first-generation, web-based cyberinfrastructure platform, a website called nanoHUB, to host the simulation capability and provide simulation services online. This platform had the potential to provide synergy across a new community of investigators exploring modeling and simulation of nanoscale phenomena through the Web.

As the groups progressed, Roco and Preston asked the Purdue team to consider expanding nanoHUB to serve a broader community. They held a workshop in 2002 to explore the concept with academe and industry. There was strong support for the need, and the Purdue team decided to submit an unsolicited proposal to start the Network for Computational Nanotechnology (NCN). Gerhard Klimeck, the developer of nanoHUB, was the PI and Lundstrom the co-PI. Funding to start NCN initially came from the Nanoscience and Engineering Program, transferred to Preston's division, and additional funds came over time from all the divisions of ENG and the NSF Office of Cyberinfrastructure. NCN and its cyber platform nanoHUB became the premier site for sharing tools for simulation and modeling of nanoscale phenomena for research and education.[63] Until her retirement from NSF in 2014, Preston was responsible for the management of a team of NSF PDs who provided oversight to the NCN. Figure 9-9 shows the goals and programs of the NCN at its start-up.

Figure 9-9: Network for Computational Nanotechnology

Today there are 400 tools and 4,500 resources supporting 1.4M users around the world.[64]

9-E(d)    2009 ERC Innovation Awards to Stimulate the Economy

President Barack Obama decided to invest in science and technology as one way to stimulate long-term economic growth and recovery from the "Great Recession" of 2007-09: "…from the National Institutes of Health to the National Science Foundation, this recovery act (the American Recovery and Reinvestment Act of 2009) represents the biggest increase in basic research funding in the long history of America's noble endeavor to better understand our world. Just as President Kennedy sparked an explosion of innovation when he set America's sights on the Moon, I hope this investment will ignite our imagination once more, spurring new discoveries and breakthroughs that will make our economy stronger, our nation more secure, and our planet safer for our children."[65]

From another Obama speech soon after:

I believe it is not in our character, the American character, to follow. It's our character to lead. And it is time for us to lead once again. So I'm here today to set this goal: We will devote more than 3 percent of our GDP to research and development. We will not just meet, but we will exceed the level achieved at the height of the space race, through policies that invest in basic and applied research, create new incentives for private innovation, promote breakthroughs in energy and medicine, and improve education in math and science. This represents the largest commitment to scientific research and innovation in American history. Just think what this will allow us to accomplish: solar cells as cheap as paint; green buildings that produce all the energy they consume; learning software as effective as a personal tutor; prosthetics so advanced that you could play the piano again; an expansion of the frontiers of human knowledge about ourselves and the world around us. We can do this.[66]

These two speeches represent the commitment of the Obama administration to the role of basic research discoveries and technological innovation in stimulating and sustaining economic growth. As a result of the American Recovery and Reinvestment Act (ARRA) of 2009, also known as the Stimulus Act, NSF received increased funding in 2009 to be used to advance discovery and innovation. The outcome was that the EEC Division received an increase in funding that the DD/EEC, Allen Soyster, and the AD/ENG, Tom Peterson, decided to allocate to the ERC Program to stimulate innovation through the ERCs. They had been deans at their former universities, where ERCs were functioning, and knew firsthand the power of ERCs to stimulate innovation. The money allocated to the ERC Program was referred to as the ERC Innovation Fund. Preston worked with the ERC PD team and issued a call for proposals under the ERC Innovation Fund to all the ongoing ERCs in the winter of 2009. The proposals were reviewed by members of each ERC's site visit team and others familiar with the ERC. Those reviews were considered by an ERC Innovation Fund "Blue Ribbon" Panel at NSF and awards were recommended. After the recommendations passed through the approval process at NSF, six types of awards were made in the summer and fall of 2009. These were:

  1. Translational Research Platforms—Carry out research needed to span the gap between ERC-generated research outcomes and commercial products.
  2. "Professors of Practice"—Hire experienced industrial personnel for up to three years to bring knowledge of industrial practice to the ERCs, enriching ERC testbed projects with practical industrial experience and otherwise bringing knowledge of industry to the ERCs' research and education programs.
  3. Develop an additional testbed that will help speed the translation of ERC research to technology. The investment involved buying new equipment and hiring technical staff to help develop the testbed.
  4. Develop a design-build “facility” to provide design, fabrication, and prototyping experience for students and faculty to give ERC and other engineering students the experience of going from a design idea through to building early proof-of-concept “products.”
  5. Post-Doctoral Fellows in Industry for ERC graduates, supporting one to two years of work using a combination of NSF and industry funds.
  6. Establish an “entity” in partnership with other non-NSF sources of funds to generate a range of small business opportunities based on the ERC’s research which is reaching the translational phase.

The specific awards made under this program are described in detail in Chapter 5, Section 5-C(d).

9-E(e) Monitoring Center Expenditures of Budgets

Administering center awards with large budgets to universities requires oversight of financial management practices at the center and university sponsored-projects levels. The start of the ERC Program brought a realization at NSF that centers and universities would require additional oversight to ensure effective allocation of funds to meet center goals, including allocation of funds across partner universities. This monitoring effort was localized within the ERC Program for several years. However, as an outcome of the growth of the ERC Program and the Science and Technology Centers (STC) Program, along with two incidents of malfeasance in reporting industrial members and support at two different ERCs, in the late 1990s through the early 2000s the Office of the Inspector General (OIG) carried out analyses of center-level budget management for all center programs and provided some overall guidance on how centers should manage their budgets.

i.         Center-level Financial Management

Lesson 1: Centers need financial managers who are not the technical PI or co-PI. During the 1990s, more and more centers came online and center budgets grew as a consequence of gradual increases in ERC Program and industrial funding. Preston realized that providing universities with large awards for centers did not automatically guarantee effective accounting practices at the center level and at times at the university level. The “wake-up call” came as an outcome of a review in 1990 at the University of Colorado ERC, which was established in 1987. When the now-deceased Director, Tom Cathey, was briefing the site visit team on center management, he indicated that they had a budget shortfall. He had seen a large residual in the Center’s budget line from the university and had decided to use it to buy new equipment, not realizing that those funds should have been encumbered and should have been allocated to support the partner university, Colorado State University.

This led Preston to understand that the accounting practices, at least at the University of Colorado and perhaps at other universities, were not sufficient to support the financial management of ERCs, where funds came in from NSF, industry, academic cost sharing, and other sources. The "lost" ERC funds should have been set up in a separate account and managed to ensure that the Center met its technical goals and its financial commitments to the prime and subawardee institutions. She asked the Office of the Inspector General to carry out an audit of that ERC and the University of Colorado to better understand its financial management practices. The findings were an important lesson for the ERC, the University of Colorado, and the ERC Program. The problem is summarized as follows in the OIG's report:

  • The ERC's budget was about $7 million, including $1.9 million provided by NSF.
  • The ERC had incurred a $1,536,829 operating deficit from FY 1990 through FY 1992.
  • The OIG’s review was carried out to determine whether the ERC’s financial management system provided accurate, current, and complete financial results of grant activities, the causes of the operating deficits, and the manner in which the operating deficits should be liquidated.

The findings are summarized as follows:

  • The ERC’s financial management system could provide accurate, current, and complete disclosure of the financial activities under the award.
  • The system provided for a comparison of budgeted versus actual grant expenditures and the source and application of funding.
  • The Center incurred an operating deficit largely because it did not adhere to budgets for the various programs—i.e.:
    • Financial reports did not include all ERC funds.
    • Program managers (PI and co-PI) were assigned responsibility for program and fiscal controls.
    • The ERC operated without an accountant.[67]

In other words, as the site visit team had found, the technical leaders of the ERC could not effectively implement the financial management systems the university had put in place.

As a consequence of her initial findings and those of the OIG, Preston asked the university to find an accountant who could serve as the financial manager of the ERC. They did that, hiring “Buz” Smith, who was a certified public accountant. Upon arrival at the university and the ERC, he did an analysis of the accounting practices at both (ERC and university) levels and found them “wanting.” He set up a financial management system for the ERC and administered it himself, thereby separating the technical leadership from the financial leadership, to improve both functions. In addition, he analyzed the accounting systems at the University of Colorado, Boulder and put in place revisions that were designed to stand the test of time as the university grew into a major research university, which it did.

As the Colorado example and the OIG’s analysis and recommendation indicate, managing such large and complex budgets requires financial management expertise and increased guidance from NSF. As a consequence, in addition to an Administrative Manager, many centers added a financial manager who was responsible for managing the inflow and outflow of funds and reporting to NSF through the ERC’s annual report.

Preston asked Buz to train the Administrative Managers and any financial managers of the ongoing ERCs in financial management at retreats and the ERC Annual Meetings, which he began to do in the mid-1990s. Figure 9-10 shows him at an Administrative Managers retreat in the Colorado mountains in the summer of 1999 with, from left to right, Sue Lewis (USC-Integrated Media Systems ERC), Penny LeBourgoise (NC State Advanced Electronic Materials Processing ERC), Darlene Joyce (University of Minnesota Interfacial Engineering ERC), and Maryanne Hassan Risley (Duke Emerging Cardiovascular Technologies ERC). In addition, Preston asked Charles Ziegler and Tim Kashmer, staff of the NSF Division of Contracts and Agreements with cost accounting and cooperative agreement experience, respectively, to help train ERC administrative and financial managers at the ERC Annual Meetings.

Figure 9-10: ERC Administrative and financial managers on retreat.

Lesson 2: University cost sharing has to be certified by staff outside the ERC. The OIG’s analysis of cost sharing in centers made several references to inadequate documentation of cost sharing in ERCs and STCs. The outcome for the ERC Program was a requirement that all ERC annual reports include a certification of cost sharing by the university’s Sponsored Research Office. ERC annual reports always had a requirement for the submission of financial management tables, which have grown in specificity and complexity over time.[68]

Lesson 3: New centers need financial management training at startup. Preston also asked the OIG to review the accounting systems of the new Class of 1998 to be sure they were set up effectively, based on prior findings. Individual reviews were carried out at all five of these new ERCs. The reviews recommended that these and all ERCs improve record-keeping policies and procedures while reducing administrative burdens. They recommended that ERCs combine industrial membership fees into a single account, maintain reserve funds in a dedicated account, and report reserve funds to NSF. They also recommended that ERCs improve internal controls by separating responsibilities for preparing and maintaining records from responsibilities for handling cash payments.[69] As useful as this exercise was for the new ERCs and the Program, Preston decided that having the OIG visit new centers after each competition was too "intrusive and intimidating," and she did not want this to be the first encounter with NSF for new centers. Instead, she provided the necessary information to the centers using the OIG guidance in the cooperative agreements, reporting requirements, and briefings to new ERCs and to ongoing ERCs at the Program's annual meeting. In addition, she asked Charles Ziegler to join the ERC Program in an advisory role for new and ongoing ERCs. He became a member of the ERC team, and she called upon him when she or a PD was concerned about financial management at a new or ongoing ERC. Briefings on effective financial management were provided at the ERC Program annual meetings and to start-up ERCs during the on-site start-up briefings that took place beginning in the 2000s.

Lesson 4: ERCs and NSF need to monitor residual funds. The ERC Program's requirement that ERCs report funds in reserve to NSF through the annual reports and their budgets resulted in yet another finding and requirement. Tim Kashmer found that one of the Earthquake ERCs (EERCs) had funds in reserve in excess of $6.0M. As a consequence, he refused to provide the annual increment of support to the ERC until it reported to him whether those funds in reserve were encumbered or unencumbered. After much resistance on the part of the university, documentation was submitted and he finally determined that these funds were indeed encumbered and should have been paid to the many subawardees—i.e., partner universities—of this center. Apparently, these subawardees had not billed the ERC's home university to ask for transfer of funds to support their efforts. The outcome was that the home university reported in detail on subawards that would encumber any residual or reserve funds, and the partner universities were asked to bill the ERC's lead university. The residual funds were expended and the annual increment was awarded.

This triggered an OIG review of the other two EERCs in 2000. The "backstory"—perhaps one of the causes for these residuals—is that the EERCs had been transferred from the Division of Civil and Mechanical Systems to the ERC Program for funding and oversight in FY 1999. Their first reports using ERC Program reporting guidelines were submitted in FY 1999, and as a result, during the rest of FY 1999 and FY 2000 they worked to comply with the ERC Program's strict financial management guidelines. Kashmer's and the OIG's findings were useful in strengthening the EERCs' financial management, as the residuals in all three EERCs were too high.[70] As a consequence, the ERC reporting guidelines augmented the ERC financial reporting tables with information that enabled tracking of residuals available, spent in any year, and remaining at the end of that year.

Again, the NSF ERC Program staff learned from an experience with a particular center, and this resulted in a new requirement for the ERC Program: ERCs had to report reasons for residual funds in excess of 20% of the base budget. In addition, the EEC Division's Administrative Manager was directed to send a letter to each ERC asking for the amount of residual funds and a breakdown of the amounts encumbered and unencumbered. For the unencumbered funds, the ERC was required to report plans for future use. In addition, the centers were advised to encourage subawardees to submit invoices promptly and to speed up "billing back" to NSF, because large uncommitted residual balances could put the next year's funding increment at risk. This was discussed at the ERC Annual Meeting and resulted in more careful financial oversight at the center and Program levels.

Lesson 5: "Bad Deeds" of a Few Impact All.

Director of NSF-Funded Research Center Convicted of Falsifying Reports to NSF:[71] The staff of the ERC at the University of Wisconsin sent documents anonymously to Preston indicating that the reported levels of industrial involvement and support were inflated. These allegations were referred to the OIG. At the same time, the University of Wisconsin was investigating staff complaints regarding Center management. The Wisconsin ERC's Director had overstated the number of industrial members in annual reports to NSF by nearly 50 percent, with the largest misrepresentation occurring during the sixth-year renewal review. The OIG found evidence of fraudulent behavior, and in November 1998 the U.S. Attorney for the Western District of Wisconsin charged the Center Director with one count of using a false written statement to obtain money from the federal government. He pled guilty and was sentenced by a court in Wisconsin to three months in prison, beginning in February 1999, along with supervised probation and a fine of $10,000.[72] NSF "worked with the U.S. Attorney General's office to limit the former center director's receipt of assistance and benefits under federal programs and activities for 3 years."[73]

As a consequence, Preston worked with the OIG to develop new policies to help to avoid such practices in the future. The ERC reporting guidelines were revised to require certification of membership and membership fees by a university official at a level higher than the Dean. Preston, Marshall Lih (the Division Director at the time), and staff from the OIG discussed the issue in the strongest terms at the ensuing ERC annual meetings and at start-up briefings for new ERCs.

ERC overstated industry support by $6 Million: This incident involved industrial support and how it should be reported, rather than falsified memberships. In this case also, Preston received a handwritten, unsigned communication alleging that this ERC was "smoke and mirrors"—that it was not all it had been reported to be. The inquiry unearthed a fault in the reporting system that left too much ambiguity regarding industrial project-level support. Some industrial funds were deposited directly into the ERC's financial account for use by the leadership team at their discretion to fulfill the ERC's strategic goals—i.e., as direct support. Other industrial funds were deposited into departmental accounts and tied to specific faculty members and their projects—i.e., associated project funds. The ERC database guidelines had not been clear enough regarding what to include and not to include. Associated projects had been defined as support for projects that went directly to an ERC faculty member's department, not through the ERC's account, for "work that was related to and substantially supports the vision of the ERC." The words "related to" and "substantially supports" left a wide-open opportunity for exaggeration of industrial support, as the definition was too vague. This particular ERC had seized that opportunity to exaggerate its industrial support.

To deal with this ambiguity, Preston again worked with the staff of the OIG and the center directors to tighten the reporting guidelines in a way that would be fair to the centers but tight enough to preclude misinterpretation or fraud. Through a back-and-forth process, the new guidelines allowed reporting on associated projects that contributed directly to the ERC fulfilling its strategic goals and that were supported by industry or other agencies as an outcome of the ERC’s activities. This honored the center directors’ desire to appropriately report leverage of NSF funds, but with tighter restrictions on what could be included. In addition, any project reported as an associated project had to have a brief project description in Volume II of the ERC’s Annual Report, to guard against the Director claiming a faculty member’s project without his/her approval.

What is obvious from these two incidents is that the ERC Program had to be more restrictive and vigilant to guard against deliberate or unintended fraud—even if that meant becoming "more bureaucratic." This was a situation that Preston and Lih had not expected to have to deal with, but in hindsight perhaps they should have been less naïve. Another positive outcome for the Program was the dialogue between the OIG and Preston in a problem-solving mode, which established a pathway for her to turn to before problems arose, helped to strengthen the Program, and gave the OIG insights into issues that other center programs could face with post-award reporting on funding from other sources.

In summary, all of these findings and recommendations resulted in increased specificity and improved definitions in the annual data reporting tables and annual reporting requirements. As was discussed in Chapter 6, the malfeasance of two ERCs in reporting industry members and ERC projects further shaped the financial and other reporting requirements. The OIG's findings led to additional reporting requirements: accurate reporting of industrial memberships; a clear distinction between projects directly supported by ERC funds and associated projects supported by non-ERC funds but operating under the ERC's strategic plan; and reporting on achievements through the ERC's annual reports. As a result, ongoing and new centers were briefed on their administrative and financial responsibilities.

9-F Center Life Span

9-F(a)    Base Support and Partner Funding for 10-11 Years

When the ERC program was initially established, there was no expectation of "graduation" or "self-sufficiency." The initial ERC awards in 1985 were made for five years, and it was not clear if NSF support would continue beyond that point and, if so, for how long. OMB asked NSF to determine an end point for NSF funding of an ERC, after which it could compete for a second award. Industry had asked NSF to hold an early renewal review to quickly eliminate those ERCs that could not mount a strong effort within the first 2.5 years. The idea was that the savings from centers eliminated early would enable the Program to fund new and presumably better ERCs. This made sense to NSF, so the Program developed an 11-year timeline, with a renewal review in the 3rd year and another in the 6th year. This is how it worked: Initial awards were made for five years, with a potential 11-year course of funding. If a center passed its third-year renewal review, new funding levels for the fourth through eighth years were provided through an extension of the funding timeline in the ERC's cooperative agreement. If a center passed its sixth-year renewal review, its lifeline was extended for another five years, years 7 through 11, to complete the 11 years of funding in the agreement. If a center failed a renewal review, funding would be phased down by 33% in each of the last two years of its agreement—i.e., years 4-5 for a third-year renewal failure and years 7-8 for a sixth-year renewal failure. This is why the renewal review was held two years before the end of an award period: so that the centers could be phased down gradually, which would protect students from abrupt funding cuts.
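
The review-and-extension logic just described can be summarized schematically. The short Python sketch below is only an informal illustration of the life cycle as characterized in this section, not an NSF formula; the function name and return structure are invented for the illustration:

  def erc_award_term(passed_year3: bool, passed_year6: bool) -> dict:
      """Informal model of the original 11-year ERC life cycle described above."""
      if not passed_year3:
          term = 5   # initial award only
      elif not passed_year6:
          term = 8   # extended after passing the 3rd-year renewal review
      else:
          term = 11  # extended again after passing the 6th-year renewal review
      # A failed review left the existing agreement in place, with funding
      # phased down over its last two years.  (The later 1994 policy of scaling
      # down support for successfully graduating centers is not modeled here.)
      phase_down_years = [] if (passed_year3 and passed_year6) else [term - 1, term]
      return {"funded_years": term, "phase_down_years": phase_down_years}

  # Example: a center that passes its 3rd-year review but fails its 6th-year
  # review is funded through year 8, with years 7 and 8 phased down.
  print(erc_award_term(passed_year3=True, passed_year6=False))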

A successful center, having completed its full award term, could present a new proposal for consideration in competition with all new proposals. This plan was approved by the National Science Board in 1987.[74]

With the Class of 1998 and later, the life span was cut by one year because the National Science Board adopted a policy that all center awards could last only 10 years. That decision was in line with how Science and Technology Centers had been funded: a 10-year award with one renewal review in the fourth year. For ERCs, it meant one less year of NSF support at the end, which did not work so well for the ERCs, negatively impacting their total level of support and their efforts in the last few years. That funding trajectory, shown in Figure 9-11, reflects a scaling-down of support in the last two years of an ERC agreement, a policy put in place in 1994, as discussed below.

Figure 9-11: ERC Life-Span (Credit: Barbara Kenny)[75]

i.         Phasing Down NSF Funding to Encourage Self-sufficiency Planning in Last Two Years of Agreement

Phasing down funding for graduating centers: In 1990, as the ERCs in the Classes of 1985 and 1986 moved beyond their sixth-year renewal review, it became apparent that most of the center directors thought that they would be successful in competing for a second ERC if they recompeted for a new award near the end of their full term of support, then 11 years.  They had the sense that their achievements would be so compelling after a decade of NSF funding that proposals for de novo centers would not be able to compete successfully with them. Therefore, very few were looking ahead to how they would become self-sufficient in case of a failure to win an award for a second ERC. This was troubling to Preston, as she was convinced that they should prepare for that negative outcome and work with their industrial supporters and universities to ensure continued support in case the second proposal was not awarded. Because of this, the centers were required to plan for self-sufficiency in their sixth-year renewal cooperative agreements. In addition, NSF expected that they would retain the core ERC characteristics post-graduation from NSF support, assuming that 10 to 11 years of NSF support was sufficient to change the academic culture.

However, ERCs with their university and industrial advisors made the case that the culture change that NSF was seeking with an ERC award might not last after graduation and thus NSF should at least provide some limited support after 10 years to maintain the systems view and the educational missions of the ERCs—both non-traditional roles for academic research centers. This idea was rejected by NSF as contrary to the policy that centers could only receive 10 years of support.

In 1994, the agreements were modified to scale down funding in the last two years of each agreement. By this time, the Class of 1985 would receive its 11th and last year of funding, and the Class of 1986 would receive its 10th year of funding with one more to go, now at the scaled-down level. Initially, the scaled-down rate was approximately 80 percent of the prior year’s support; it was later revised to 67 percent of the prior year’s support. The purpose was to stimulate centers to gain increased non-NSF support, while the base level was designed to preserve a critical mass going forward as a smaller center. The priority use for scaled-down funds was to support graduate students with ongoing projects so they could complete their dissertations.
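The phase-down rule amounts to simple compounding arithmetic. The following minimal Python sketch illustrates it, assuming a hypothetical last-full-year award; the function name and dollar figures are invented for illustration and are not drawn from any NSF system or from any particular center’s budget.

# Minimal sketch: each of the last two years of an agreement is funded at a
# fixed fraction of the prior year's support. All figures are hypothetical.
def phase_down(last_full_year_award, rate=0.67, years=2):
    """Return the awards for the phase-down years, each at `rate` of the prior year."""
    awards = []
    prior = last_full_year_award
    for _ in range(years):
        prior = round(prior * rate, 2)
        awards.append(prior)
    return awards

# A center funded at a hypothetical $4.0M in its last full year would receive
# roughly $2.68M and then $1.80M under the 67% rule, versus $3.2M and $2.56M
# under the originally proposed 80% rule.
print(phase_down(4.0))            # [2.68, 1.8]
print(phase_down(4.0, rate=0.8))  # [3.2, 2.56]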

Outcome of recompetition policy: The FY 1993 Program Solicitation was the first ERC solicitation released that impacted ERCs that were about to reach their 11th year of support. It called for proposals for new ERCs from wholly new teams and from teams from the ERCs in the Class of 1985.

The FY 1993 language regarding recompetition is summarized as follows:

  • ERCs in the final two years of the 11-year life cycle were eligible to recompete for a new ERC award and could submit a proposal in the competition.
  • These centers could propose to continue their ongoing program, or could modify their approaches for the future to address new issues that had become important.
  • These centers needed to justify what the new proposed ERC would contribute that was new and different from the body of work already underway.[76]

The result was that three of the four ERCs in the Class of 1985 were successful in the 1993 competition and were reestablished with differing timelines of support—3, 5, or 10 years. Two of them could not successfully redirect their efforts and did not pass their third-year renewal reviews; only one, the Biotechnology Process Engineering Center (BPEC) at MIT, was granted a second award for the full 10 years and successfully completed that additional decade of support.

As a consequence of that outcome, the rather imprecise language regarding a mature center’s vision for recompeting, and the lack of self-sufficiency planning by the ERCs in the Class of 1985, the FY 1994 ERC Solicitation had the following more stringent requirements:

  • It was expected that the home institution(s) of the ERC would ensure that changes due to the ERC are facilitated and would endure after NSF support ends.
  • ERCs in the final two years of their initial eleven-year life span would be allowed to recompete and submit a proposal.
  • Recompeting centers needed to demonstrate both high-quality performance from their first award and a fresh, challenging vision for the future. NSF expected either an invigorated approach to the ERC’s research strategy, or a reorientation of the center’s vision and plans with evidence of high levels of continuing challenges, potential for continuing accomplishment, and continuing competence.[77]
  • Recompeting centers needed to significantly enhance their ongoing programs, focus on continuing work still needing to be done, build on past results, and modify their approach for the future to address new issues that have become important.
  • Recompeting centers were also allowed to propose a significantly different activity.

In the 1994 competition, none of the six ERCs in the Class of 1986 were successful in recompeting for a second ERC award. This was primarily because most proposals for wholly new ERCs presented investment opportunities to NSF to build capacity in areas emerging in the 1990s that could not be achieved by reinvesting in the Class of 1986.

The next two solicitations (1996 and 1998) treated recompeting ERCs with the same language as any new proposal, looking for value added over prior work. The outcome for these early classes was that, of the 13 ERCs eligible to recompete from the Classes of 1985 through 1988, 10 recompeted and 3 did not; only the 3 in the Class of 1985 received new awards, and of those only MIT’s BPEC was awarded a full ten-year second life under NSF support and completed it.

9-F(b)    Self-sufficiency Policy and Outcomes

While ongoing ERCs could recompete for a new life span under NSF support, as an outcome of the policies laid down in the 1990s they were expected to plan for self-sufficiency after the initial 10-year term of NSF support ended. By the 1996 Program Announcement, long-term self-sufficiency was defined specifically as: “NSF expects ERCs to become self-sustaining and to maintain the ERC culture beyond the end of their term of NSF support. By that time they will have developed an effective and productive collaboration with industrial and other partnerships. They should be prepared to continue that productive relationship with university, industrial, and other support when NSF funding ceases.”[78] Thus, NSF’s expectation was that, given the NSF, industry, and university investment to create a flourishing and productive ERC, the culture created would generate opportunities for sustained investment by the universities, industry, other NSF programs, and other government sources once NSF’s term of support was completed.

That was NSF’s perspective; but there were strong concerns raised by the ERC Directors about that policy throughout the 1990s, as they feared losing the center culture they had built for 10 to 11 years soon after graduating from ERC Program support. Preston remembers the following types of issues being raised during several annual meetings and in the ERCs’ planning documents for self-sufficiency:

  • Firms might not continue support if their support was not leveraged by NSF’s support and the prestige that it brought.
  • Faculty might not continue to participate in the center’s research because there would be little financial reward for doing so.
  • The interdisciplinary culture created by the ERC might not be a strong enough pull to keep the faculty engaged.
  • The ERC’s pre-college programs would not be sustained.
  • Universities might not contribute to faculty efforts without an obvious financial return on their investment.
  • The cuts in funds would put the center’s positions for management and industrial collaboration in jeopardy.

To better understand the graduating and graduated ERCs’ planning processes for self-sufficiency and early post-award outcomes, Preston supported two studies by the Stanford Research Institute, one in 1997[79] and another in 2000.[80] There were 12 ERCs in the cohort studied at the time of the first study: the two ERCs from the Class of 1985 that had graduated (Columbia and Maryland) and ten that were ramping down in the last two years of their agreements—five in the Class of 1986, three in the Class of 1987, and three in the Class of 1988.

The findings of the first study confirmed Preston’s concern that the early classes had not devoted much, if any, effort to planning for self-sufficiency, instead expending considerable effort in preparing new proposals. Given the outcome of the first two classes’ efforts to gain a second long-term award, the new ERC policies requiring planning for self-sufficiency, and the requirements for a new vision to recompete, the third and fourth cohorts were less optimistic and did devote more effort to planning for self-sufficiency.[81] For the first two cohorts, the lack of planning and unrealistic expectations regarding a new life span under NSF support combined to produce the following early negative impacts on the sustainability of ERC key features after NSF support ended:

  • Fewer students and faculty were involved, with less cross-disciplinary research, some research thrusts disappearing, and more individual proposals in preparation for future support.
  • Faculty had enjoyed working on interdisciplinary teams but there was little opportunity for support for team-based research.
  • One center with a long-standing tradition of interdisciplinary research did not disintegrate in this fashion, and centers that had developed contiguous working spaces also experienced less dissolution.
  • There was an increase in shorter-term, more applied research as reliance on industrial support increased.
  • Student involvement outside research decreased as providing free pizza and cookies at seminars ended, REU opportunities for students from non-ERC institutions were cut, and opportunities for outreach to underrepresented students could not be maintained.
  • Support staff were let go or supported by higher levels of the university.[82]

However, there was an important exception to these negative outcomes: the Institute for Systems Research (ISR, Class of 1985) at the University of Maryland. As a result of recompeting, the ISR received a short-term award of three years, ending in 1998. The original ISR award was credited with being “the first time we competed with the big boys and won” and with helping to spread industry interaction throughout the School of Engineering. Permanent support from the Legislature of the State of Maryland at $600,000 per year, gained at the time of the ISR’s sixth-year renewal, formed the basis for its long-term survival. These funds were used to provide part-time support to faculty who were members of the ISR. However, when it recompeted again at the end of its three-year award, it was not selected to submit a full proposal. This required fast action on the part of the University of Maryland to continue the ISR’s leadership role on campus and with industry. The center received $200,000 a year from the provost’s office for about four years to transition it to survival—at that time the only unit on campus to receive that level of support.[83]

Annual university support to the other ERCs in the Classes of 1985 through 1988 that were graduating from NSF support was as follows:

  • Class of 1985: Columbia—none, and a later study of graduated ERCs in 2010 indicated that it had disbanded by then.[84]
  • Class of 1986: $100,000 for the BYU ERC; $500,000 for the CMU Design ERC; none for the Lehigh, Ohio State, and Illinois ERCs.
  • Class of 1987: the Duke ERC received no stipend, only indirect cost return, and was disbanded by the time of the 2010 study;[85] the University of Colorado ERC received no institutional support, and in later years, when royalties were returned to that ERC, the University appropriated them for its own use and the ERC disbanded.[86]
  • Class of 1988: North Carolina State ERC received no institutional support and the later study of graduated ERCs found it had disbanded;[87] the Texas A&M ERC received no institutional support but did not disband; and the University of Minnesota pledged to match industry support to the center there.[88]

Early indications derived from the 1999 SRI study were that the later cohorts better understood the need for self-sufficiency planning, given the outcome of the first two recompetitions and the new NSF policies regarding phased-down support and self-sufficiency planning. At that time most appeared in their prime, with strong interdisciplinary interactions and academic and industrial support, and all had transition plans in place, but there was a concern that they could not retain all the ERC key features once NSF support ceased.[89]

The follow-up study by SRI in 2000, covering five graduated ERC award classes, confirmed the findings of the first study and these concerns. There was downsizing, a shift in focus toward applied research, and decreases in student involvement and educational outreach. SRI therefore found that the self-sufficiency model, in which NSF assumed that all the ERC key features would remain intact, was not an accurate reflection of the experiences of most centers in the first five classes. Outcomes varied, and SRI concluded that the most successful transitions occurred when an ERC and its partners had the following features:

  • Strong institutional support in a culture that fosters ERC-like characteristics
  • Motivated faculty, with institutional incentives that further that motivation
  • Research program that lends itself to a continued evolution at the forefront of its discipline—i.e., still “hot” and evolving
  • Close involvement of faculty with a sense of “ownership”
  • Strong support from university administrators in transition planning
  • University policies that facilitate Center-based research efforts:

                — Indirect Cost Recovery (ICR) return
                — Release time for research
                — Promotion and tenure criteria supportive of interdisciplinary research

  • Industrial support from firms in an industry that is more reliant on university-based research, closely involved in transition planning and with few alternatives to the center’s research and its graduates
  • Educational programs that are sufficiently valued within the university that they will be maintained and faculty will keep students involved.[90]

While most centers did continue to exist as financially viable entities, few appeared able to retain all or nearly all ERC-like characteristics once NSF base support came to an end. In light of this, SRI concluded that NSF essentially had two policy options: 1) continue to invest solely in new fields and new universities; or 2) provide some continued support to graduated ERCs. For the latter option, the following alternatives, many of which were suggested by ERC-associated interviewees, emerged from the study:

  • Allow existing centers to recompete with newly proposed centers on an absolutely equal footing, without a requirement that they reinvent themselves.
  • Reevaluate the length of the fixed period of support: while centers should continue to be reviewed intensely, if after the end of their first full term of NSF support they are still viable as centers of excellence, NSF should have the flexibility to continue to fund them as full ERCs representing “national assets.” Further, consider creating a separate pool of funding for the continued funding of the most successful ERCs beyond the first full term of support, so as to eliminate the competition with new ERCs, which is perhaps inherently unequal.
  • Continue, as long as is justified by review, to fund centers that are still viable at a level sufficient to support the core research and infrastructure that are most vulnerable.
  • Continue to provide to all graduated ERCs a small amount of annual funding to maintain their inputs into the ERC database and participation in ERC Program annual meetings.
  • Provide continued support and recognition to graduated centers to encourage them to continue to think of themselves as ERCs and benefit from an ongoing NSF “stamp of approval.”

Preston, Bruce Kramer (the EEC Division Director at the time), and Gene Wong (the AD/ENG at the time) considered these recommendations in light of (1) the experience of several ERC competitions in which graduating ERCs recompeted and (2) NSF’s strict policy that centers receive only one ten-year award unless they successfully competed for another. In prior competitions, ongoing ERCs had been unable to present sufficiently convincing new and invigorated directions that would shift them into new and emerging fields related to their core field. Given these considerations, there was little inclination to keep funding an ERC to “plow the same ground it had plowed” for 10 or 11 years. The momentum of the staff and the review community was to invest in new and emerging areas to continue NSF’s role as a catalyst for the creation of new fields and new industries. There was also a policy determination by the National Science Board (NSB) that a graduated center of any program could not retain NSF’s imprimatur without continued funding once it had graduated, and there was no inclination on the part of the NSB staff to change that policy.

By 2005, Preston was still interested in the outcomes for graduated ERCs, so she asked one of the ERC PDs, Vilas Mujumdar, to carry out an internal survey of the ERC Directors. Ten of the 16 graduated ERCs responded: 8 filled out the survey, one reported that it had closed two years after graduation (Minnesota[91]), and one directed Mujumdar to look at the center’s website. Thus, the effective response rate was 50 percent. The results, which included information from that one ERC’s website, were presented to the ERC community at the November 2005 ERC Annual Meeting and indicated the following[92]:

  • Post-graduation support ranged from $0.5M to $25M, from a range of sources (the ISR at the University of Maryland’s support level was $25M).
  • Most of the research was migrating toward industry-driven research and some was an extension of ongoing research, becoming broader and more diverse; all carried out fundamental research.
  • 8 of 9 retained their systems perspective, 1 became more basic.
  • 7 of 9 retained their university education programs, but only 2 retained pre-college RET and student outreach programs.
  • Most were happy to be relieved of reporting, diversity requirements, paperwork, and internal, inter-departmental politics.
  • Most missed the sustained focus, which allowed creation of multidisciplinary interaction, the ability to respond to new ideas, and special research programs.

Those that responded to the survey had the following suggestions to strengthen the ERC model:

  • ERCs should carry out more industrially relevant research.
  • There should be less emphasis on publishing for academics.
  • The ERC Program should:
    1. encourage more flexibility in strategic planning of research;
    2. provide baseline support to graduated centers that are active in the field;
    3. promote fewer mandatory program requirements; and
    4. allow the REU program to support foreign students.
  • Continued support from university administration should be required.
  • Maintaining a culture of innovation is important.

The outcome was similar to that of the previous studies, indicating that there was still a desire to have NSF support graduated ERCs to some extent. Again, for the reasons given above, this was not acted upon at NSF. Preston continued to encourage university administrators to plan more aggressively for sustaining the basic administrative support of graduated ERCs and to provide some transition support for research and education. The data from the responses indicated that five of the eight responding ERCs received between $225,000 and $4.0M from their universities, with most in the range of $500,000 to $800,000. During the private meetings with industry at site visits of these maturing ERCs, she emphasized the joint investment NSF and industry had made to build these platforms for industrially important research and stressed that industry needed to continue that support once NSF’s funding ceased. NSF’s governing philosophy was to be a catalyst for change, not a sustained funder.

Because the response rate to the Mujumdar survey was low and there was some concern that the ERCs may not have responded as openly to an NSF staff member as they would to a contractor, Preston commissioned a new study of graduated ERCs. In 2009, she asked Court Lewis, President of SciTech Communications, to carry out another survey, which would gather more thorough information about the implementation of self-sufficiency planning and the status of the graduated ERCs and their support levels. They decided the study should be led by Jim Williams, who had been the Executive Director of the Carnegie Mellon Data Storage Systems ERC, because she and Lewis thought the ERCs would respond even more candidly to Jim, as he was “one of them.”

The cohort surveyed was the 33 graduated ERCs from the Classes of 1985 through 1999. In response to a suggestion made during beta testing of a web-based questionnaire, it was decided that there would be two surveys: one easy to respond to, and a follow-up for those willing to provide more detailed information. The first survey was formatted to elicit the greatest possible response by framing the questions so that most of them could be answered fairly quickly (yes/no, multiple choice, or brief responses). The second was sent to those who indicated their willingness to provide additional explanations, allowing the authors to gain deeper insight into the experiences of graduated ERCs. The response rate was over 70 percent: 26 of the cohort of 33 responded to the initial short questionnaire, and 14 of the 19 recipients of the longer follow-up questionnaire responded. The respondents were center directors from all classes, across all fields of technology.[93]

The responses indicated that 82 percent (27) continued to exist as functioning units on campus. Only six had disbanded following graduation: the ERCs at Columbia, Purdue, and MIT BPEC from the Class of 1985 (the latter was merged into the BPEC Class of 1994); Duke and Colorado (Class of 1987); and North Carolina State (Class of 1988). Noting that the disbanded ERCs were in the earlier classes, the authors surmised this might have been due to poor planning for self-sufficiency or to waning faculty interest in their fields as newer areas emerged on campus.

Most importantly, a substantial majority of the graduated and self-sustaining ERCs had maintained most of the ERC key features. Eighty-five percent of them retained their cross-disciplinary research; 83 percent continued to have the integration of research, education, and industrial collaboration as their primary rationale; 82 percent continued to involve undergraduates; and 78 percent continued to conduct systems-type research. Other features were still retained, but at lower levels:

  • Proof-of-concept testbeds, strong industrial and academic support, and center-related curricula and degree programs – 60 percent
  • Pre-college programs – 67 percent
  • Multi-university collaboration – 71 percent.[94]

The authors noted that it is laudable that such a high percentage of graduated ERCs continued to be self-sustaining with strong ERC characteristics. However, funding remained a challenge for most, with resulting impacts on their programs:

  • 80% experienced reduced funding.
  • Funding varied widely from center to center: university support ranged from $50K to $1.0M, and total support ranged from $0.3M to $50M, with a mean of $6M and a median of $2.0M reflecting the large variation in funding levels.[95]
  • 57% eliminated the education director and 31% eliminated the industrial liaison officer.
  • Most integrated administration and financial management into one position.
  • 62% reduced the number of faculty, 57% reduced graduate students, and 70% reduced undergraduates.
  • 50% reduced minority outreach programs.[96]

The range of total Center support by Class was the following:

Class of 1985:   $9.0M – $14.0M

Class of 1986:   $0.5M – $8.0M

Class of 1987:   $0.0M

Class of 1988:   $1.4M – $2.5M

Class of 1990:   $5.0M – $50.0M* (*Mississippi State ERC and its expansion)

Class of 1995:   $2.0M – $5.0M

Class of 1996:   $1.7M – $4.0M

Class of 1998:   $0.7M – $5.3M

Class of 1999:   $0.3M (Vanderbilt Bioengineering Education ERC)[97]

The authors observed that serious and careful development and implementation of a self-sufficiency plan was key to survival post-ERC Program support. Factors contributing to success were:

  • Broad involvement of faculty, staff, industrial partners, and university administration in transition planning
  • Institutional factors—degree of university commitment, whether the center is prized, and whether policies are supportive of cross-disciplinary research and education
  • Education program sufficiently valued by faculty and students that it will be maintained
  • Commitment and interest of a core group of faculty
  • Active industrial support and continuation of industrial membership and industrial advisory board guidance
  • Effective implementation of a realistic transition strategy that builds on and enhances the center’s strengths
  • Quality of leadership of the management team.[98]

Overall, 100 percent of those responding to the survey said being an ERC was “worth it all.” They pointed out that the best aspects of life as an NSF ERC were:

  • Teaming and visionary projects beyond the capabilities of single investigators
  • Funding to provide flexibility to start projects quickly
  • Fostering a cross-disciplinary culture, systems approach, and industry orientation
  • Integrating research and education
  • Opportunity to develop the next generation of engineering leaders.

The worst aspects of life as an ERC were:

  • Dealing with the interdepartmental processes and academic politics
  • Dealing with the burdensome amount of reporting and bureaucratic oversight internally and externally
  • Overall intensity of NSF interaction causing stressed-out faculty and center leadership
  • Constantly changing reporting guidelines.

The best aspects of survival as a graduated ERC were:

  • Absence of NSF’s reporting, oversight, and micro-management
  • Continued vigor among faculty to pursue team and interdisciplinary aspects
  • Recognition from industry and continuation of industrial partnerships
  • Gratification of seeing their technology making an impact in the marketplace
  • Changing culture of the university toward systems-oriented multidisciplinary research
  • Continued success and growth as a major university research center.

The worst aspects of post-NSF life as a center were:

  • Having to constantly justify the center internally
  • Difficulty in obtaining funds and inability to get long-term funding opportunities
  • Inability to continue to support educational outreach
  • Reduction in the number of companies willing to become members
  • Absence of NSF’s attention in observing progress and helping to address difficulties.[99]

The preceding sections on reporting (9-D(a)) and performance assessment (9-D(c)) described how the post-award oversight system grew in specificity, which most likely contributed to its burdensome nature. The OIG-driven need to verify the validity of industrial memberships and cost sharing certainly added to that bureaucratic burden. Nonetheless, given the strength of the disciplinary, single-investigator culture in academic engineering, even with increased support outside the ERC Program for work at the interfaces of disciplines, it is likely that the features the graduated ERCs prized most—cross-disciplinary team culture, systems research, and industrial collaboration—would not have been achieved to the depth and breadth that they were without rigorous post-award oversight and a degree of micro-management, through tools such as the SWOT analysis, to ensure that ERC leadership teams paid attention to their weaknesses.

9-G       Configurations of ERCs Over Time

As was described in Chapters 2 and 3, the ERC Program began life under some degree of sustained “attack” from the long-entrenched disciplinary culture both within and outside NSF. Even with strong support from NSF’s top leadership, the Program was an experiment that was under intense pressure to prove itself and succeed. Consequently, Program management began in a somewhat cautious, conservative mode. The “tough love” described in section 9-A(b) only gradually evolved toward a more nurturing, developmentally inspired guidance and management style. In consonance with that evolution from “tighter” toward “looser,” the configuration of ERCs evolved over time from a generally single-university model, with at most one partner university in a subordinate role, to a multi-university configuration encompassing several or many institutions as full partners.

9-G(a)    Gen-1 (Classes of 1985-1990): Mostly Monocultures

In the five year-classes of the first-generation ERCs, from 1985 through 1990, a total of 21 centers were established. Of these, 14 were strictly single-institution ERCs. Five centers had an “affiliate” university, usually geographically nearby or at least in the same state. Two, Duke University’s Emerging Cardiovascular Technologies ERC (Class of 1987) and the Advanced Electronic Materials Processing Center at North Carolina State University (Class of 1988), were affiliated with “other North Carolina universities,” although the degree of involvement of these other institutions in the ERC was not clearly defined. In all seven of these cases, the lead university had fiduciary responsibility and control; it was clearly “their ERC,” and the participation of the affiliate university or universities was limited in scope and in funding, for both faculty and research. In one or two cases, this subordinate role on the part of the affiliate institution led to friction between the two participants that was evident even to NSF Program managers.

9-G(b)    Gen-2 and -3 (1994-2006 and 2008-2017): Broadening Partnerships

The expansion of individual ERCs to include more than just the awardee university was initially not a requirement, even into the beginning of the Gen-2 period that began with the Class of 1994/95. However, the benefits of doing so were starting to be evident in the form of greater leveraging of Program funds: i.e., more faculty and facilities from other institutions could be brought to bear on elements of the ERC’s mission, and more students could be exposed to the ERC Program’s integration of research and education and its focus on engineered systems in partnership with industry. None of the three ERCs in the Class of 1995 were multi-institutional, but of the five centers in the Class of 1996, one—the Center for Environmentally Benign Semiconductor Manufacturing, headquartered at the University of Arizona—broke new ground with a total of five “core partners” (six, if MIT’s Lincoln Laboratory is counted separately). In fact, the CEBSM’s Associate Director was Prof. Rafael Reif of MIT (who later became the President of MIT).

The next Gen-2 class of ERCs comprised the five centers in the Class of 1998.[100] All of these centers had at least one core partner. One, the Computer-Integrated Surgical Systems and Technology (CISST) ERC, led by Johns Hopkins University, was notable for including among its partners three hospitals that conducted medical research. The Center for Power Electronics Systems (CPES), with Virginia Tech as lead, included as a core partner the University of Puerto Rico at Mayaguez (UPRM). This was the vanguard of a rapidly emerging trend to include minority-serving institutions among ERC core partners; in fact, UPRM, with a large and excellent college of engineering, became a popular go-to university for those ERCs seeking a Hispanic-serving partner. In addition, with 40% of all engineering students at UPRM being women, the university was and is an excellent source of high-quality female engineering students for ERC research and education programs.[101] These partnerships were stimulated by program guidance that encouraged multi-institutional partnerships to broaden impacts and greater inclusion of institutions serving populations underrepresented in engineering. Not only did multi-institution ERCs become standard, but so also did the inclusion of a wide range of minority-serving and female-serving institutions among the core partners.

The logistics, and the politics, of operating an ERC with a number of nearly co-equal partners could be challenging. Initially, the tendency was to manage unilaterally from the top down—that is, from the lead institution and its director. It was simpler. But that approach quickly began to be democratized, as NSF and site reviewers pushed the lead institutions to include the core partners as fully as possible in center-wide strategic planning and even center management. There were several practical reasons for doing so, including coordination among the participants in the various research thrusts, collection of center-wide “indicators” data, coordination of student exchanges and inter-campus course linkages, and participation in meetings with NSF and industry.

The main challenge was communication. By the late 1990s, several technologies were available to facilitate cross-institutional communication—not only teleconferencing but also closed-circuit and interactive TV and the internet, via email and websites. Long-distance phone toll rates, once exorbitantly high, became commoditized and inexpensive. Some centers set up distance learning facilities in which remote classrooms and even remote team teaching could be conducted.[102] Centers such as CPES at Virginia Tech instituted weekly teleconferences among key leadership. A new leadership position, Campus Director (sometimes called Associate Director), became common, with each core partner having a “branch manager” at the level just below the overall Center Director.

With the new level of management, distributed across generally three or more institutions, came a new level of management complexity. The allocation of general funds was broadly established in the cooperative agreement with each institution. But associated project funding from industry and supplemental funding from NSF were issues that needed to be addressed separately. Project selection and termination were often hot-button issues that could engender turf-protective stances on the part of partner leaders. Interaction with the core partner institution administrations on matters such as promotion and tenure, student course credit, use of facilities, and indirect cost often led to complications. “Broader” centers meant, by and large, bigger and better centers, but also meant decentralized entities that were harder to manage.

9-H       NSF Staff Leadership and Structure: “The Courage to Change the World”

Based on a survey of the literature on leadership, Kerry Hannon of the New York Times described leaders as people who are willing to “fight for a vision, take risks, and push boundaries to change the way we see the world.” They have common characteristics, ranging from innovative thinking to an ability to build trust among those who follow them, to strong confidence and a stubborn devotion to their dream.[103] It is not an exaggeration to say that the leaders of the ERC Program at NSF and the leaders of the ERCs on campuses across the country exemplified these characteristics. More specifically, they had the ability to:

  • Achieve the ERC Program’s or the ERC’s vision and evolve it over time, given changing economic conditions;
  • Protect and evolve it in light of constantly changing NSF management personnel or NSF’s oversight system;
  • Inspire faculty, students, ERC leadership staff, industry, or NSF staff to go beyond business as usual to meet the collaboration and partnership goals of the program;
  • Take risks, learn from the decisions, and improve leadership;
  • Work vertically and horizontally in NSF and with universities and industry;
  • Manage the various subcomponents of the program or ERC to an integrated whole;
  • Combine knowledge of fundamentals with experience in interdisciplinary partnerships and a commitment to advance systems technology;
  • Augment the technical skills of engineering graduates with systems-level experience and knowledge of industrial practices; and
  • Use knowledge of innovation, innovation ecosystems, and the role of research and technology to lead ERCs to achieve significant impacts on economic development.

The following is a history of the ERC Program team at NSF over the years. See Chapter 2, sections 2-B and 2-C, for additional information on this subject.

9-H(a)   1984–1990 (Start-Up: “Lean and Mean”)

These were the start-up years, the time when the ERC Program was initiated and was developing new systems to support its complex mission. As described in Chapter 2, the Program was housed initially in the Office of Cross-Disciplinary Research (OCDR), which became a division by the end of the decade. The start-up OCDR team was lean, committed to the mission of the ERC Program, enthused about the impact it could have on the Nation, and essentially a band of “revolutionaries” who looked forward to changing the culture of academic engineering and returning it to its roots in technological innovation, which would require working across disciplines and experimenting with new technologies.

The characterization of this team as “revolutionaries” connotes their devotion to this mission and their commitment to break new ground to ensure that the ERC funds did not support business as usual in academic engineering. In turn, that reflected their commitment to rewarding new ERCs that could break ground in establishing this new culture, rather than supporting proposed or even funded teams that were more committed to the basic-science, disciplinary culture that prevailed in academic engineering at the time. Fulfilling that mission was harder for non-OCDR staff, because some were either more committed to a discipline or found it difficult to cut off poorly performing ERCs (post-award performance oversight was not a part of the traditional NSF culture). During this brief 7-year period, the Program funded 18 centers, set up its new proposal review and funding systems, and established its post-award oversight and assessment systems. The Program team also collaborated with industry to begin strategic planning at the ERC level.

The program team for these seven years is listed below. (Status given is as of April 2019.)

Office/Division Directors:

  • Pete Mayfield (1984–1987), Retired 1987, and now deceased
  • Marshall Lih (1987–1999), Retired 2011

Deputy Director/Program Manager: Lynn Preston (1984–2013), Retired 2014

ERC Program Managers/Leaders:  Mayfield, Lih, and Preston

Program Administrative Staff:

  • Sherry Scott (1984–1994), Retired
  • Darlene Suggs (1986–2008), Retired

Program Consultants:

  • Ann Becker, Ann Becker Associates (1986–2010)
  • Courtland S. Lewis, SciTech Communications, LLC (1990–present)

ERC Program Directors:

  • Frederick Betz (1986–1999), Retired
  • Tapan Mukherjee (1985–2004), Retired, and now deceased
  • Joe Mathias (1987–1992), Deceased
  • Howard Moraff (1985–1988), Retired
  • Radhakisan “Kishan” Baheti (part time, 1985–2000)

Pete Mayfield served as the leader of OCDR and the manager of the ERC Program until his retirement in 1987. Marshall M. Lih served as the Office/Division Director until 1999; initially he too managed the ERC Program, but he turned that role over to Lynn Preston in the late 1980s. Preston served as the Deputy Office/Division Director and initially as a co-manager of the ERC Program with Mayfield and Lih. Her efforts in developing post-award oversight systems and other management innovations were recognized, leading to her appointment as the manager of the Program in the late 1980s, as Lih took on full-time management of the Division. During this time, two ERC PDs were recruited from the disciplinary divisions supporting electrical engineering; one served full-time (Howard Moraff, PD for the UC-Santa Barbara ERC), and the other (Kishan Baheti, ISR PD) served part-time. During this time, the number of PDs from the disciplinary divisions was not expanded further because most were committed to an NSF business-as-usual culture, rather than to the new ERC culture.

The composition of the team reflects the need for staff who had experience working across disciplines to establish new interdisciplinary efforts (Mayfield, Preston, Mukherjee, and Moraff) and experience joining academic engineering with industry to advance technology (Betz and Mathias). Sherry Scott was an exemplary administrative manager who was able to put in place the new administrative systems needed for the review, funding, and oversight of these complex efforts in a short period of time. She hired Darlene Suggs to help her and trained Darlene to succeed her in case she was transferred to another division or retired, both of which happened following this startup period.

Preston worked closely with Fred Betz, an ERC PD who was interested in the management of technology, in guiding the ERCs in strategic research planning and testbed development. From Joe Mathias, who had been a VP for Research at Sperry Corporation, the team learned more about the realities of industrial collaboration with academic engineers. Preston, Betz, and Mukherjee developed the format for the post-award oversight site visits as an outcome of several of the early site visits. (Figure 9-12 depicts most of the members of that original ERC team.)

Figure 9-12: Key members of the early ERC Program team

9-H(b)   1991–2000 (Expansion and Developmental Guidance)

During this ten-year period the number of centers grew to 25, the NSF ERC team expanded to include more full-time ERC PDs in the ERC’s home division and part-time PDs from other divisions in ENG, and more analytic staff were added. The shift in the post-award oversight from “lean and mean” to a more nurturing and developmental system took place in this time period. The oversight system moved away from the more “aggressive” pass/fail approach in the start-up years. The new system evolved over time and rested on strong post-award oversight of the ERCs and feedback from site visit teams and ERC PDs to encourage growth and development, enhanced by the inclusion of the SWOT analysis tool used by industry, as discussed earlier.

There was also a developing culture of the ERC family – NSF staff and ERC leaders and staff learning from each other at the ERC Annual Meetings how to improve their efforts. Leaders were emerging among the teams of ERC Industrial Liaison Officers and Education Program leaders. This knowledge led to the ERC Best Practices Manual, which was written by ERC leaders and edited by a new consultant to the Program, Courtland S. Lewis. He organized and managed the writing process, edited the manual, and in 1996 developed a new website outside of NSF to house the Manual and other ERC-generated documents: www.erc-assoc.org.[104] Assessment and evaluation at the Program level were required, and two new staff members with evaluation expertise, Linda Parker and William Neufeld, were added to the NSF team. Parker contributed significantly to the post-award oversight system and to the knowledge of effective ERCs through her support of various evaluations of ERC impacts.

Marshall Lih established a culture that encouraged innovation and experimentation; he did not micromanage the ERC team and every innovation did not require his approval. Rather, he was there to provide a sounding-board about important new ideas in post-award oversight and strategic planning, providing sound judgement and imparting the culture of win/win rather than a zero-sum game, which had been the culture prevalent in the 1960s and 1970s.

i.         Program Team 1985 – 2000

The ERC Team during these years included: (Status given is as of April 2019.)

Division Directors:

  • Lewis W. (Pete) Mayfield (1985–1987), Retired
  • Marshall Lih (1987–1999), left EEC
  • Bruce Kramer (1999–2004)

Deputy Division Director: Lynn Preston (1984–2014), Retired

ERC Program Leaders:

  • Pete Mayfield (1985-1987), retired, now deceased
  • Marshall M. Lih (1987-1988)
  • Lynn Preston (1988–2013), Retired

Program Evaluation:

  • Linda Parker (1993–2007), Retired
  • William Neufeld (1994–1995), Left EEC

Program Management Staff

  • George Brosseau (database) (1984–1995), Retired 1995, now deceased
  • Mary Poats (1990–1995)

Program Support Staff

  • Sherry Scott (1984–1994), Retired
  • Darlene Suggs (1986–2008), Retired

Program Education Manager

  • Mary Poats (1992–2007)

Program Consultant/Contractors

  • Ann Becker, Ann Becker Associates (1986–2010)
  • Courtland Lewis, SciTech Communications, LLC (1990–present)

ERC Program Directors from ERC Program’s Home Division:

  • Frederick Betz (1986–1999), Retired
  • Cheryl Cathey (1996–2000), Left NSF for industry
  • Christina Gabriel (1991–1994), Left NSF for academia
  • John Hurt (1994–2006), Moved on to develop Partnerships for Innovation Program, later retired from NSF, now deceased
  • Jay Lee (1992–1998), Left NSF for United Technologies Research Center; since 2000 leads two I/UCRCs in academe
  • Theresa Maldonado (2000–2001), Returned to academe
  • James Mink, EEC (1999–2000), Left to join ECCS
  • Tapan Mukherjee (1985–2004), Retired, now deceased
  • Joy Pauschke (1994–2000), Left EEC to manage the Earthquake Hazards Mitigation Program.

ERC PDs from outside the ERC’s division who served part-time:

  • Deborah Crawford, Division of Electrical, Communications, and Cyber Systems (ECCS) (1996–2000), Left NSF for academe
  • Mita Desai, Directorate for Computer and Information Science and Engineering (2000–2001), Left NSF for DARPA
  • Larry Goldberg, ECCS (1995–2003)
  • Frederick Heineken, Biological Engineering Systems (1995–2009), Retired, now deceased
  • Dan Herr, Semiconductor Research Corporation (1995–2005)
  • Rajinder Khosla, ECCS (1998–2010), Retired and serves as a strategic planning and management expert for two current NC State ERCs
  • George K. Lea, ECCS (1996–2000), Retired
  • James Mink, ECCS and EEC (2000–2004), Retired from NSF, now deceased

ii.      Exemplar PDs

This section provides links to personal reflections by two ERC Program Directors from the 1991–2000 period: Cheryl Cathey and Rajinder Khosla.

9-H(c)    2001–2014 (“Life Coaches”)

In the following period, the ERC PD role evolved further into what might be called “Life Coaches.” During these 13 years, the oversight of the ERC Program at the Division level changed considerably from prior periods, which had been characterized by long-term, stable relationships between Preston and her team on the one hand and the EEC Division Director on the other. In this period there were four Division Directors—three of whom were on rotation from academe, which was consistent with the staffing practices for division management throughout NSF at the time. Preston served as the Deputy Director (Centers) of the division until her retirement in 2014.

The three Division Directors from academe included two who had held senior positions in academic administration and management of faculty: Gary Gabriel, Academic Vice Provost and Dean of Undergraduate Education and Associate Dean of Engineering at RPI; and Allen Soyster, Dean of Engineering at Northeastern University. That type of experience proved useful in providing a sounding board for the impact of the ERC Program’s policies in academe as well as guidance to Preston as she dealt with the removal of a director of one ERC by its home university and the consequent impact on the ERC.  

i.         Program Team (Status given is as of April 2019)

Division Directors:  

  • Bruce Kramer (1999–2004) 
  • Gary Gabriel (2004–2006), Returned to academe  
  • Al Soyster (2006–2010), Returned to academe 
  • Theresa Maldonado (2010–2014), Returned to academe 

Deputy Director:  

  • Lynn Preston (1985–2014), Retired 

ERC Program Leader:  

  • Lynn Preston (1988–2013) 
  • Eduardo Misawa (2013–2014) 
  • Keith Roper (2014–2015) 

Program Management: 

  • Barbara Kenny (2006–2013) (Post-award oversight after 2008), Retired 
  • Sharon Middledorf (2006–2013), Retired 
  • Victoria Kwasiborski (2009–2014) (Post-award oversight), Left NSF and the U.S. workforce 

Program Administration: 

  • Darlene Suggs (1986–2008), Retired 
  • Shalika Walton (2009–present) 
  • Marshall Horner (2010–2015), Promoted within EEC 
  • LaTanya Sanders-Peak (2009–present) 

Program Evaluation: 

  • Linda Parker (1993–2007), Retired 

Program Education: 

  • Esther Bolding (REU) (2002–2014), Retired 
  • Mary Poats (REU and RET) (1992–present) 
  • Carole Reed (ERC Education Programs) (2011-2019), Left EEC to serve as a Program Director in the ENG Division of Chemical, Biological, Environmental, and Transport Systems (CBET) 

Program Innovation Ecosystem: 

  • Deborah Jackson (2009–present) 

Program Consultant/Contractor: 

  • Ann Becker, Ann Becker Associates (1986–2010) 
  • Courtland Lewis, SciTech Communications, LLC (1990–present) 

Full-Time ERC PDs: 

  • Daniel DeKee (2008–2012), Returned to academe 
  • John Hurt (1994–2000), Moved on to develop Partnerships for Innovation Program, later retired from NSF, now deceased
  • Deborah Jackson (2006–present) 
  • Barbara Kenny (2006–2013), Moved on to Lead the NSF Partnership for Innovations Program, later retired 
  • Bruce Kramer (2003–2005)  
  • Carmiña Londono (2014–2017), Moved to ECCS 
  • Eduardo Misawa (2013–present)  
  • Vilas Mujumdar (2005–2008), Retired 
  • Sohi Rastegar (2001–2006), Moved on to lead the EFRI Program  
  • Carol Reed (2011–2015), Moved to CBET  
  • Keith Roper (2009–2015), Returned to academe 
  • Aspa Zerva (2000–2001), Returned to academe 

Part-Time ERC PDs: 

  • Fil Bartoli, ECCS (2003–2007), Left NSF for academe and returned in 2014 to lead ECCS 
  • Dominique Dagenais, ECCS (2010–2018)
  • Leon Esterowitz, CBET (2006–present) 
  • Rick Fragaszy, CMMI (2001–2002)
  • Larry Goldberg, ECCS (1995–2003) 
  • Theresa Good, CBET (2010–2012) 
  • Marc Ingber, CBET (2006–2007), Returned to academe 
  • Bruce Hamilton, CBET (2010–present)
  • Rajinder Khosla, ECS (1998–2010), Retired  
  • Bruce Kramer, CMMI (2005–present)
  • Steve Nelson, Atmospheric Science (2006–2014), Retired 
  • Judy Raper, CBET (2007–2008), Returned to academe in Australia 
  • Carol Reed, CBET (2015–present)  
  • Glenn Schraeder, CBET (2005–2006), Returned to academe
ii.      Exemplar PDs

This section offers a personal reflection by one ERC Program Director from the 2001–2014 period, Dr. Barbara Kenny.  

Reflections on Being an ERC Program Director 

I worked as a Program Director for the Engineering Research Centers program for eight years, from 2006 to 2014. Of my various responsibilities as a Program Director, I enjoyed the site visits the most. I enjoyed putting the site visit team together and finding the right people to give the best feedback to NSF and the center. It was fun to interact with such knowledgeable people. I enjoyed seeing the center operations in person—touring the labs, talking to the faculty, interacting with the students, and learning about the cutting-edge research going on within a particular center.  

Another aspect I enjoyed was the ability to help the ERCs overcome some of the obstacles they faced. Enabling good technical feedback from the world-class experts on the site visit team was one aspect of this. Another was working on the NSF side to solve any bureaucratic issues the centers ran into. The reporting requirements were complex and I tried to facilitate the communication between NSF and the centers such that they knew exactly what we wanted and we only asked for information we truly needed.  

Intellectually, I enjoyed the breadth of research topics that the ERC program supported. As part of the ERC team, I worked on assembling and leading merit review panels to review pre-proposals and full proposals and I always learned a lot about the latest technology trends. A challenging aspect was to find the best possible reviewers: ones who knew the technology area, had no conflicts of interest, took the time to write thoughtful and accurate reviews, were available to participate on the selected panel dates, and participated well as part of the panel discussion group. Meeting all five of those criteria was often difficult, particularly in narrowly focused technology topic areas, where the most well-known experts were usually conflicted in one way or another and thus couldn’t serve. This was probably the most challenging part of the job. 

The ERC Program has long been considered one of the “crown jewels” of NSF programs and it was a privilege to serve as a Program Director. 

Barbara Kenny, Ph.D. 

June 29, 2018 

9-I        ERC Program Management vis-à-vis Higher NSF Management

Managing and leading the ERC Program within an agency like NSF was like starting up and leading a small business within a corporate structure. It required a blend of passion, risk taking, innovation, and courage, plus the ability to communicate the goals and achievements of the Program to higher levels of management, gain their buy-in, and champion the Program through myriad changes over the years, while also protecting the Program from those above who might want to take it over and/or change it to suit their own interests—a tall order.

In the start-up years, Pete Mayfield and Marshall Lih did the “heavy lifting” required to develop and protect the Program and work with higher levels of NSF management. Fortunately, the initiation of the Program was championed and enabled by a team of upper-level managers committed to the ERC Program’s success: Nam Suh (1984–1988) and John White (1988–1991), who served successively as Assistant Director (AD) for the Directorate for Engineering (ENG), and Erich Bloch, who served as the Director of NSF from 1984 to 1990. Suh and Bloch had served on the NAE committee that crafted the mission and goals for the ERC Program, and White was on leave from Georgia Tech, where he had led an NSF Industry/University Cooperative Research Center. All wanted to help the ERC Program succeed in its mission to alter the strictly single-investigator culture of academe and contribute knowledge, technology, and engineering graduates to strengthen the competitive position of US industry.

Preston was in a leadership development mode during those early years, learning and managing many aspects of the Program, which she was then assigned to lead and manage more broadly in 1988. She worked closely with Suh, Bloch, and White on the program management innovations needed to help ensure that the ERC Program met its goals, such as the new funding and post-award oversight mechanisms, including mentoring ERC PDs. These interactions provided a base for her role in leading the ERC Program.

By the early 1990s the ERC Program had established itself as a leader for change in NSF and academe. The Directors of NSF in the 1990s were less closely associated with the ERC Program but still championed its goals and successes—Neal Lane being the strongest supporter. By this stage, as NSF became less horizontal in its structure, direct interaction with the Director’s office gave way to more frequent interaction with the Assistant Director for Engineering. Subsequently, Preston and Marshall Lih interfaced most directly with the new AD/ENG, Joe Bordogna, who served in that role from 1991 to 1996, when he moved up to serve as the Deputy Director of NSF.

The dynamic of negotiation with Erich Bloch consisted of back-and-forth “arguments” to better understand each other’s goals and requirements, a style that Bloch had learned at IBM in New York and with which Preston was quite comfortable, having grown up in northern New Jersey. Lih added the nuances of a win/win approach, which proved valuable over the long run, as program management is not a zero-sum game (I win, you lose); rather, it requires direction, feedback, and a degree of accommodation to ensure that both sides gain from the outcome.

Starting in the 1990s, NSF became larger, more hierarchical, and more bureaucratic. Successive Assistant Directors for Engineering and their Deputies became more involved in the decision process to determine awards and higher levels of NSF became more involved in reviewing and authorizing those awards, given their size. These changes are discussed in section 9-C(b)ii, ERC Proposal Review System, above.

Preston left NSF in 2014. The ENG AD at that time, Pramod Khargonekar, an electrical engineer from the University of Florida, where he had been Dean of Engineering, commissioned the National Academy of Engineering in 2015 to study the ERC Program and recommend any needed new directions. The NAE conducted several data-gathering meetings and a symposium in 2016 to gather views and input from the broad engineering community. In 2017 the study committee produced a report that presented a new vision for multidisciplinary academic research centers, such as the ERCs, focusing on “the convergence of knowledge from formerly separate engineering disciplines in technology development, the sciences, as well as the emerging best practices in engineering education, team research, and the deliberate nurturing of innovation.”[105] The key word was convergence, which led to a concept of “convergent ERCs,” larger than the current ERCs and focusing on grand-challenge-like problems that address high-impact societal or technological needs, rather than industrial competitiveness. At the time of writing (2019), these convergent centers—in effect, Gen-4 ERCs—were just being formulated.

9-J        Community-building and Program Communications

An ancillary aspect of ERC Program management was, and is, a variety of activities aimed at strengthening the sense of esprit de corps within the Program, along with communications efforts intended to improve the centers' awareness of each other's challenges and achievements across the Program.

9-J(a)     Annual Meetings and Retreats

Beginning in 1986, the ERC Program began holding Program-wide annual meetings of all the centers and their key staff with NSF Program staff. For the first several years these meetings were organized by a contractor, Ann Becker & Associates, and featured presentations by NSF Program leadership and selected ERC participants. In early 1990 a two-day symposium entitled, “The ERCs: A Partnership for Competitiveness,” was held in Washington, DC, to review the accomplishments of the ERCs, the lessons learned to date, and the future outlook for the Program. The symposium was opened by NSF Assistant Director for Engineering, John White. Speakers included W. Dale Compton, who had chaired the original National Academy of Engineering panel that formulated Guidelines for ERCs, as well as various Center Directors, students, and industrial partners. A comprehensive report of the symposium was subsequently published.[106]

Annual meetings continued to be held in the ensuing years, usually at hotel conference centers in the Washington, DC area, but on one occasion at the University of Colorado-Boulder, hosted by the ERC that was located there. These meetings gradually expanded to cover parts of three days, with an evening welcoming reception and a day and a half or more following. Closed meetings of the various staff groupings across the ERCs began to be added on, beginning with the Administrative Directors and eventually expanding to Center Directors, Industrial Liaison Officers, Education Directors, and Research Thrust Leaders. At the 1996 meeting, representatives of the Student Leadership Councils met together for the first time; these SLC retreats continued annually after that. The closed meetings of the ADs and, soon after, the ILOs, eventually led to separate summer retreats, conducted at or near a host ERC.

The annual meetings expanded further through the addition of breakout sessions for specific staff groupings, focusing on particular topics of interest to that group and led by a moderator chosen from among the ERC staff. The breakouts alternated with plenary sessions featuring a keynote speaker and presentations by NSF and ERC Program leadership and others on various topics of broad, ERC-wide interest. From 1990 to 1997, various NSF Program Directors organized the meeting program. By 1998, the program for the annual meeting was being developed and organized by Courtland Lewis, of SciTech Communications LLC, in consultation with Lynn Preston; Ann Becker & Associates managed logistics and hotel arrangements for the meeting in collaboration with Mr. Lewis and the NSF staff.

These annual meetings were developed expressly to be of use to the ERC staff in doing their jobs well.[107] Although lengthy and involving extra effort on the part of many of the ERC staff to manage breakouts and develop presentations, they came to be appreciated and even enjoyed by the ERCs for the collegiality and camaraderie that developed across staff groupings, the sharing of experiences and achievements, and the often-inspiring presentations given by outside speakers.[108] Over time, by establishing a culture of sharing and trust, the meetings became instrumental in building a strong sense of community among the ERCs—what Program Leader Preston came to refer to, without exaggeration, as “the ERC Family.”

9-J(b)     Creating a Shared “Social Ecosystem”: The ERC Family

The ERC Program is highly unusual among government programs, not only for its longevity but also for the sense of shared mission and cohesion among its grantees. The sense of an "ERC Family" emerged gradually over time as a result of innovative program management practices, described throughout this chapter, that encouraged an ethos of communication, cooperation, and information-sharing, rather than competition, among the ERCs. Put more formally, the ERC Family was and is a type of "social ecosystem," deliberately planned, constructed, and managed.

i.         Role of the Program Leader

Given the size and visibility of the ERC Program, the Program Leader's position can be described as a kind of "super Program Director." This was especially true of Lynn Preston, a Senior Executive Service officer who served as Program Leader from 1988 through 2013 and was concurrently EEC Deputy Division Director for Centers during all that time; she had considerable authority and influence within NSF. Her long tenure in the role translated into consistency in policies, deep familiarity with the mission and history of the Program, the ability to experiment with management approaches and adjust them as warranted, and, overall, an unusual degree of long-term stability in Program leadership. That stability in turn gave the ERCs clarity about NSF's practices and expectations.

It might be said that Preston's management style was shaped by her reflections on her experiences as one of the very few non-support-staff women in a male-dominated NSF culture. Rather than seeing growth and innovation as arising from competition and conflict, she encouraged collaboration and learning from mistakes as the path to improving and excelling. Having been closely involved in the birth of the Program in 1983–85—as Deputy Program Leader under Peter Mayfield (see Chapter 2) and subsequently as Program Leader throughout the first 28 years of its life—she felt a personal connection to and identification with it that led her to nurture the Program in ways that were unusual in engineering programs at the time. For example, she transitioned the Program from a binary pass-or-fail culture to a developmental culture with strong performance standards, "punishments," and rewards. The development of a sense of the "ERC Family" across all the ERCs and Program staff was an outgrowth of her style of management.

ii.      Role of PDs

The role of ERC Program Directors has been described from various perspectives in previous chapters, beginning with section 2-C(f), which described the initial "invention of a new role for NSF Program Directors" in the ERC Program. Essentially, those chosen to serve as ERC PDs had to have relevant education and experience and took an active, hands-on approach to interaction with their assigned center(s). They were and are involved in guiding the center throughout its lifetime under ERC Program funding, not only in the periodic reviews and site visits but also as sounding boards when center leadership have questions or concerns.[109] They serve as liaisons between the center leadership and NSF Program and Division leadership. Although they are supportive of their center(s) and, to some extent, function as cheerleaders for them within the Program and NSF more broadly, they are expected to practice "tough love" as well as provide the developmental guidance described in section 9-A(b) and objectively carry out annual and renewal reviews.

iii.    Role of ERC Leaders as Peer Mentors

Especially in the early years of the Program, when ERCs were often unique research units on their campus, they were breaking new ground in many ways—financially, administratively, organizationally in their involvement with industry, and academically in both research and education. To some extent this iconoclastic quality continues today, especially at universities that have not previously hosted an ERC, even though university research centers have become more commonplace (partly as a result of the success of the ERC Program).

Consequently, center-level ERC staff have always needed advice and counsel from those with relevant ERC experience. This is provided to a great extent by NSF PDs, but a more direct source of such mentoring is peers at other ERCs. Initially this kind of support was provided informally, through interpersonal contacts made at the annual meetings and elsewhere; but after the establishment of the ERC Consultancies for ADs, ILOs, and Education Directors, such support became more direct and in-depth.[110] In addition, staff-group breakout sessions at the annual meetings, and especially pre-meeting workshops for new ERCs given by experienced ERC-level staff, provided invaluable "teaching" opportunities. Monthly teleconferences held for a number of years by these staff groupings provided another avenue for informal training.

iv.     Role of Communications Consultant as Informal “Ombudsman”

Despite the high level of mutual trust that prevailed among the ERCs and NSF leadership, occasionally center staff at all levels, especially in relatively new centers, had questions or concerns that they were uncomfortable about sharing with NSF leadership—and even with their center’s PD. They felt a need for someone knowledgeable about NSF’s policies and directions but who was not actually “NSF,” with whom they could discuss sometimes sensitive matters and ask for opinions or advice.

Over time, this role gradually came to be filled by the longtime ERC Program communications consultant, Court Lewis, employed externally to NSF with his own consulting firm, SciTech Communications. In part because of his close involvement with staff across the ERCs in developing the annual meeting program, as well as in working with them on chapters of the ERC Best Practices Manual (see next section), Lewis developed a friendly rapport with many of the staff. Over time it became known that he could be trusted to provide this kind of “middle-man” perspective without divulging the nature of the conversation to others, including NSF. Thus, he began to function as an occasional and informal “ombudsman” within the Program, although he did not advertise that role. Being conscious of the risks of misinformation, if Lewis did not know the answers to questions he was asked, he said as much.

Lewis let Lynn Preston know that these kinds of interactions were occurring, and that if he ever learned of something serious or problematic enough to warrant NSF management attention, he would alert her to that situation. Preston gave Lewis her tacit approval to continue serving informally in that role. Aside from issues related to the burden of preparing annual reports and the impacts of the recompetition and self-sufficiency policies, no other serious matters were ever brought to his attention. However, these interactions did seem to add in some small measure to the overall sense of a functional “ERC Family” and were therefore useful.

9-J(c)                 The ERC Best Practices Manual

Several previous sections in this chapter have referred to the ERC Best Practices Manual. This document, first published in 1996, has been an invaluable resource, first for the ERC participants themselves (including students), secondly for university faculty planning to propose an ERC, thirdly for ERC industrial partners, and finally for other U.S. and foreign government agencies considering establishing university-industry research centers similar to ERCs. Housed on the ERC Association website at http://erc-assoc.org/best_practices/best-practices-manual, it includes chapters prepared by and for the various staff groupings within each ERC. The chapters are written in a “how-to” style intended to explain to new and even existing staff in detail the procedures, rules, expectations, goals, and pitfalls of all aspects of carrying out their role.

The Manual was first compiled under a contract to ERC Program communications contractor Court Lewis, who convened six working groups of selected ERC staff at NSF headquarters to draft the various chapters, each typically over a two-day period.[111] Lewis served as Project Director and overall editor. Chapters were drafted by the task groups, edited by Lewis, reviewed by NSF, and then finalized by Lewis and an Authoring Committee consisting of the chairs and co-chairs of the six working groups.

Over a period covering most of 1996, the following chapters were prepared:

Chapter 1: Introduction (written by Lewis)

Chapter 2: Center Leadership and Strategic Direction

Chapter 3: Research Management

Chapter 4: Education Programs

Chapter 5: Industrial Collaboration and Technology Transfer

Chapter 6: Administrative Management

Chapter 7: The NSF/ERC Interface

In subsequent years, two more chapters were added:

Chapter 8: Student Leadership Councils (2002)

Chapter 9: Multi-University Centers (2006)

All of these chapters have periodically undergone wholesale revision as the practices pertinent to each staff function have evolved. Those revisions have been less formal than the original effort, conducted largely by email with selected task group members. They have been initiated and directed by the communications consultant, Court Lewis, in coordination with NSF on working group selection and outline development—often with the assistance of subcontractors who managed the process, with oversight, guidance, and final editing by Lewis.

In 2011–2014, several chapters of the Best Practices Manual were updated. However, in contrast to the earlier process of team writing and updating, by this time it was becoming increasingly difficult to focus the ADs, Education Directors, and ILOs on the time-consuming task of providing input and synthesizing it into chapter drafts. For this reason, Janice Brickley, former AD of the recently graduated CASA ERC, was tasked with leading an effort to update the Administrative Management chapter, working with Court Lewis. Erik Sander at the University of Florida was funded to prepare a revised edition of the Industrial Collaboration chapter, based on experience gained from the consultancy he led and input from the ILOs in place at that time. In addition, Carole Reed, an NSF Program Director (2011–2019), brought the leaders of the ERC education programs together via teleconference to update the chapter on education. The response to the latter effort was less than enthusiastic, so in 2014 the education program leaders were brought to NSF for a retreat led by Anne Donnelly,[112] who had chaired the working group that prepared the first version of the education chapter. She gathered their input and updated the chapter.

In 2014 Lewis, using the services of the ERC Association site's programmer, developed a custom wiki-like system for updating Best Practices chapters. The system was first applied to the Administrative Management chapter, which had just been updated by Janice Brickley as SciTech Communications' subcontractor in charge of the revision. The new system allowed designated site administrators (in this case, the ADs) to edit the chapter directly online and/or post comments. Ms. Brickley was engaged as the moderator, reviewing and approving suggested updates. The wiki approach eliminated the need for long gaps between full-scale chapter updates and for organizing working groups to accomplish them, making the updating process more timely and cost-efficient. However, it does depend on a commitment by the ADs to periodically review the chapter for needed updates as their practices evolve. Experience showed that this commitment was generally short-lived and required periodic reminders from the moderator to engage the ADs and produce updates.

Following the successful “beta test” of the wiki structure on the ADs chapter, late in 2014 the same wiki structure was applied to all other Best Practices chapters.

9-K       External Information Dissemination

The ERC Association website at www.erc-assoc.org is a means of sharing Program-wide as well as center-specific news and information on many topics, both within the Program and with the public at large. It serves as a useful conduit for NSF to provide the ERCs with necessary information such as proposal preparation instructions, information on supplementary funding opportunities, guidelines for student competitions, annual meeting registration and planning, etc. It also provides general information on the Program itself and on the range of centers, both current and graduated, and their locations and missions. There are main navigation tabs for the Program, Centers, Achievements, Research, Innovation Ecosystem, Education Programs, and the Best Practices Manual. Each of these has a number of submenus with information of interest to a wide range of audiences.

Website traffic is tracked through a Google Analytics account, which shows that the site garners an average of 500 to 600 users per week, year-round, with large spikes during calls for proposals and before biennial Program meetings. Users come mainly from the U.S. but are located around the world, with India often presenting the second-highest number of users, followed by the U.K., China, and Canada.

Information is disseminated externally from the ERC Program to a number of audiences through a variety of means and media, in addition to the ERC Association website. The EEC Division’s web page on nsf.gov, at https://www.nsf.gov/div/index.jsp?div=EEC, offers links to programs and funding opportunities including those for ERCs. Achievements of ERCs are collected through the centers’ annual reports and on an ad hoc basis throughout the year by the communications contractor, SciTech Communications, organized, written up, and posted on the Achievements Showcase at http://erc-assoc.org/content/achievements-showcase. Until 2012, these were winnowed down each year to the highest-impact examples in research, education, technology transfer, and infrastructure development through an iterative process involving NSF ERC Program and Division leadership and the communications contractor, and were then posted on NSF’s web page for the Government Performance and Results Act (GPRA). The Engineering Directorate then selected the “best of the best” Directorate-wide achievements for forwarding to NSF-wide selection for inclusion in a GPRA report to Congress. A few of the ERC achievements were generally included in this report.

These achievements provide the “raw material” for use by ERC Program and EEC Division officials in making presentations to higher levels of NSF, at Congressional briefings, and at professional conferences as the need arises. The communications contractor, Court Lewis, often assisted with building these presentations as requested. Two examples of external presentations are a briefing on the ERC Program hosted by the American Chemical Society on Capitol Hill in 2012 (see Figure 9-13)[113] and a talk given by NSF AD for Engineering Pramod Khargonekar to the National Academy of Engineering in 2015.[114]

Figure 9-13: Lynn Preston was one of a panel of ERC participants who briefed Congresspeople and staffers on the ERC Program in February 2012. (Credit: C. Lewis)

The individual ERCs themselves often host or participate in events that disseminate information to the public or other audiences about the ERC Program. These include participation in science fairs, national competitions such as FIRST Robotics and iGEM, and center open houses. Centers also often mount poster sessions at professional conferences. Some have had booths for multiple years at conferences such as the International Consumer Electronics Show (CES), held annually in Las Vegas;[115] and at SACNAS.[116]

9-L       Diversity Policy, Strategy, and Results

Diversity in race, ethnicity, gender, and physical ability was always expected in the faculty, staff, and students of ERCs, to strengthen the engineering workforce and provide open opportunities to all those with a talent for engineering. Industry had urged NSF early on to broaden the pipeline and strengthen the diversity of the engineering workforce, and this was taken to heart in the early stages of the Program. The initial approach was to require diversity and collect demographic data—in other words, to encourage compliance by measuring results.

These requirements became more structured over time. Starting with the Gen-2 ERC solicitations for the class of 1994–1995, diversity goals were explicitly required. By 2004—at the request of the Deputy Director of NSF, Joe Bordogna—ERCs were required to prepare strategic plans for diversity with their associated departments and deans. Noting the generally lower levels of diversity at most of the ERC lead and core partner universities, NSF encouraged ERCs, in the 2004 solicitation and beyond, to form partnerships with universities that served groups predominantly underrepresented in engineering or with NSF's diversity initiative awardees.

Figures 9-14 and 9-15 show a progression of ERC diversity requirements over time.[117]

Figure 9-14: Progression of ERC diversity requirements during the 1980s and early 1990s.

Figure 9-15: Progression of ERC diversity requirements from the early 1990s through the late 2000s.

Preston prepared the slide in Figure 9-16 to reflect the lack of diversity she found when working with the Gen-1 ERCs. On site visits to these ERCs in the early years, she often found herself to be the only woman in the briefing room (aside from the ERC's administrative staff and a few female students, who were often foreign) and the only woman when meeting with the faculty and industrial members.

Figure 9-16: In the earliest ERCs, most participants were white or Asian males. Few women, African Americans, or Latinos were represented on the teams.

9-L(a)    Evolution of Requirements

By 2012 the diversity in ERCs had clearly improved, as shown in Figure 9-17. These improvements resulted from an evolution in diversity requirements as part of the ERC Program Solicitations from 1985 through 2013.

Figure 9-17: This composite image, made for the 2012 ERC Program annual meeting, reflects a much broader diversity among ERC participants.  (Credit: SciTech Communications)

Gen-1 ERCs: Classes of 1985–1990[118],[119]

  • Participation by women, minorities, and the physically handicapped was strongly encouraged through specific wording in solicitations, reporting requirements and data collection, and merit review criteria.

Gen-2 ERCs: Classes of 1994–1995 through 2000[120],[121],[122]

  • Special emphasis on enhancing the Nation’s supply of engineers was added by expecting ERCs to include a significant number of women, underrepresented minorities, and persons with disabilities at all levels of the ERC team.
  • Solicitations NSF 97-5 and NSF 98-68 pointedly referred to expectations for “Teams diverse in gender, race and ethnicity” and suggested formation of partnerships with universities and colleges with predominantly underrepresented student bodies—the first time outreach was referred to as a way to enhance the diversity of an ERC.
  • Increased data collection post-award, with status presentations at the ERC Annual Meetings, plus special sessions devoted to increasing diversity

Gen-2 ERCs: Classes of 2003–2006[123],[124]

  • Solicitation NSF 02-24 stated that “NSF expects the leadership, faculty, and students involved in an ERC to be diverse in gender, race and ethnicity. Diversity is expected of the participants from the lead and any core partner institutions and it will be enhanced through affiliations with minority or women’s institutions, either as core partners or outreach affiliations.”
  • NSF 04-570 specified the same requirements as above, plus: “ERCs prepare and execute diversity strategic plans with goals and intended actions and they report annually on impact, and form collaborative partnerships with NSF Diversity Awardees.”
    • This solicitation also encouraged synergistic partnerships with NSF’s diversity program awardees, such as the Louis Stokes Alliances for Minority Participation. Ongoing ERCs were also encouraged to form these connections. NSF staff and ERC staff made a serious effort to form these partnerships, but very few students involved with these awardees availed themselves of the opportunity to spend a summer in ERC research. Consequently, the requirement was dropped by 2008.
    • Requirement that all ongoing ERCs operate with a diversity strategic plan that provides plans for enhanced diversity and annual reports on progress.
      • This requirement was requested by Joe Bordogna, the Deputy Director of NSF, who wanted to use ERCs as a “forcing function” for increased diversity in engineering schools and academe in general. Preston initially objected; her view was that he should make that a requirement for single-investigator awards or for universities as a whole, because it is the departments, not centers, that hire faculty and admit students. She argued that ERCs already had strong diversity and she did not want to hold them accountable for formalized plans whose success would depend on further increases in the diversity of faculty and students—which, again, would be governed by decisions made by the university and its departments, not its centers. She thought the requirement was too “heavy-handed,” but it nevertheless stood.
  • Increased data collection post-award, with status presentations at the ERC Annual Meetings, plus special sessions devoted to increasing diversity.

Specifically, by 2004 the new diversity policy had the requirements shown below:

ENGINEERING RESEARCH CENTERS PROGRAM DIVERSITY POLICY (2004)

  • All Centers will operate with strategic plans that include goals, milestones, actions and impacts to increase diversity at all levels to exceed national engineering-wide averages.
  • All Centers will form sustained partnerships with affiliated university Deans and Department chairs to enable this enhancement.
  • All Centers will develop core partner or outreach connections with predominantly female and underrepresented minority institutions.
  • All Centers will develop outreach connections with at least one LSAMP and one or more AGEP, TCUP, CREST awardees (long-term REUs and bridge fellowships).[125]
  • All Centers will operate diversity oriented REUs and pre-college programs focused on diversity involving teachers and students.
  • In compliance with federal law, no quotas or set-asides based on gender, race or ethnicity are allowed. No numerical goals can be used; quantification of impacts will be reported.

Gen-3 ERCs: Classes of 2008–2012

  • The Gen-3 Solicitations required:
    • One core partner that served groups predominantly underrepresented in engineering, and pre-college education partners that had diverse student bodies.
    • The ERC will rest on a culture of inclusion through the involvement of a diverse body of faculty and students.
    • A multicultural environment through involvement of foreign partner institution(s), with the expectation that foreign faculty respect the diversity of the U.S. team.
    • Due to the ruling in Supreme Court Case Gratz v. Bollinger, 539 U.S. 244 (2003), the NSF OGC required that proposals could not include numerical projections. However, annual reports were allowed to include quantitative information on the demographics of the ERC’s personnel, benchmarked against engineering-wide averages.
    • The ERC will provide and operate with a diversity strategic plan.
    • Diversity Directors were required in Gen-3 ERCs.

9-L(b)     Diversity Policy Results

Figure 9-18 (a and b) shows the funded ERCs in 2013 and their core partners with significant diversity, represented by a large blue circle, as well as outreach partners, marked by an orange triangle and color-coded by their diversity emphasis (see the legend).[126] Figures 9-18a and 9-18b both show the same ERCs on the left but list different outreach partners on the right-hand side; together the charts show the entire picture of diversity relationships present in 2013.

Figure 9-18a: ERC diversity core partners and outreach partners, 2013.

Figure 9-18b: ERC diversity core partners and outreach partners, 2013. (Credit: SciTech Communications)

One of those 2013 centers, the ERC for Revolutionary Metallic Biomaterials, which started in 2008, was led by North Carolina Agricultural and Technical State University (NCAT), a Historically Black College and University (HBCU). NCAT was the first HBCU to receive a major center award through the ERC Program.

Did the requirements for greater diversity, a diversity strategic plan, and diversity-focused outreach produce results for the diversity of ERCs and, potentially, for the diversity of the engineering workforce? Ultimately, the requirements produced outstanding diversity in ERCs across all demographic groups in comparison to nationwide engineering student data. In 1993, ERC data showed that only 4 percent of the ERC faculty, 14 percent of the graduate students, and 29 percent of the undergraduates were women. There was clearly a strong interest in engineering on the part of women undergraduates, but the male-dominated reality of engineering in the 1990s did not provide a welcoming environment for women to continue their academic careers, even in ERCs. However, by 2012 the climate for women in ERCs had changed, with significant involvement at all levels, as shown in Figure 9-19: 23 percent of the ERC faculty were women, in comparison to 10 percent in engineering programs across the country. That difference was consistent for all levels of graduate students and for undergraduates.[127] However, there was still a “turnoff” for female undergraduates when it came time to enter graduate school. The reasons for this are not well understood.

Figure 9-19: Participation of women in ERCs, 2009–2012. (Credit: ICF)

In 1993, ERC data showed that underrepresented racial minorities (predominantly African Americans) represented only 1 percent of the faculty, 2 percent of the graduate students, and 8 percent of the undergraduates. Again, there was a stronger interest in engineering among the undergraduates, but the climate of engineering in the 1990s did not provide a welcoming environment for them to continue their academic careers, even in ERCs. However, by 2012 the climate for African Americans in ERCs had changed, with significant involvement at all levels, as shown in Figure 9-20: the share of ERC faculty grew to 8 percent, in comparison to around 4 percent in engineering programs across the country.[128] That difference held for all levels of graduate students and for undergraduates. However, as in the case of women, there was still a “turnoff” for undergraduates when it came time to enter graduate school.

Figure 9-20: Participation of underrepresented minorities in ERCs, 2009–2012. (Credit: ICF)

As shown in Figure 9-21, the same patterns exist for Hispanics at all levels between 1993 and 2012, but at much lower participation rates for undergraduate students (13% in 2012), compared to the rates for women (40%) and underrepresented racial minorities (26%) in that year.[129] Note, however, that beyond the undergraduate level the rates for Hispanics and underrepresented racial minorities were similar.

Figure 9-21: Participation of Hispanics in ERCs, 2009–2012. (Credit: ICF)

To determine whether ERCs were building a climate of inclusion for all groups, the ERC Program asked the 16 ERCs in the Classes of 2003 to 2011 to carry out SWOT analyses of their diversity climate. In 2012, Dr. Brooke Coley, an ERC AAAS Fellow, constructed and carried out the survey, with OMB approval. The SWOT analyses were focused below the center level, addressing diversity at the center director, faculty, student, and staff levels. The findings are summarized in Figure 9-22. At the 2012 ERC Annual Meeting, Coley presented the results of the SWOT analysis and her plans for continuing the study “To examine the impact of the ERCs on transforming institutional culture as it relates to women, underrepresented minorities and persons with disabilities in terms of creating a climate of inclusion…”[130]

Figure 9-22: Findings of a Strengths, Weaknesses, Opportunities, and Threats analysis conducted in ERCs in 2012. (Credit: Brooke Coley)

Taken together, these results point to significant improvements in ERC climates for inclusion at all levels except postdocs. They also point to the recommendation that all ERCs should have a designated Diversity Director position, and to concerns that industry members have more to offer ERCs regarding industry diversity programs than the centers are taking advantage of. An important threat is the increased involvement of personnel at all levels from foreign backgrounds who, instead of integrating, tend to form self-segregated “cliques” of students and faculty from their home countries. This is especially important because ERC faculty who have come to maturity in this country, whether immigrants or native born, have had a long history with people from diverse backgrounds. If foreign faculty without this experience rise too quickly to leadership positions in engineering and retain too much of the discriminatory practices of their home countries, it can pose a threat to improving diversity and creating climates of inclusion not only in ERCs but also throughout U.S. engineering schools.

9-M      ERC Program Evaluations/Assessments

Given the high visibility of the ERC Program, evaluations and assessments of the Program’s concept and operations began almost immediately after its launch. These studies and their findings are summarized in chronological order in this section.

9-M(a)   NAE/NRC (1985–1987)

It was the National Academy of Engineering that, in 1983 and 1984, had begun the process of planning for a new NSF program that culminated in the blueprint for ERCs.[131] In keeping with this stewardship, the Academy conducted a number of early studies through the Cross-Disciplinary Engineering Research Committee of the National Research Council on various specific aspects of the ERC Program. These included:

  • Report of a Workshop on Information Exchange Among Engineering Research Centers and Industry (1985)
  • Evaluation of the Engineering Research Centers (1985-86)
  • Assessment of the Engineering Research Centers Selection Process (1987)

Eventually, in 1988, the NAE conducted a broader assessment that was requested by Erich Bloch, the Director of NSF at the time, to determine the progress of the ERC Program in achieving its goals.[132] This assessment found that:

  1. The mission of the ERC Program is at least as important to the Nation’s engineering schools and industries today as it was when the program was first designed.
  2. The Program is achieving the objectives set for it in the NAE’s Guidelines Report.
  3. Certain problems broadly affect the Program. First among these is funding, which strongly bears on scope and quality.

The report recommended:

  • To Congress and the Administration that:
    • a renewed effort be made to achieve the original funding targets of the Program.
  • To the director of NSF that:
    • the ERC Program continue to be managed as a distinct program with a unique mission and that it not be subsumed under other programs;
    • if funding constraints continue, first priority be given to growing the already established ERCs to their original award sizes; only then should new ERCs be established; and that
    • NSF should continue to increase the representation of industry in its various advisory groups.
  • To the assistant director of engineering at NSF that:
    • suggestions to add to or change the original mission of ERCs be resisted;
    • a preproposal process be established to ease the costs of proposal preparation;
    • the selection and review process be thoroughly examined, particularly in regard to consistency of reviews from year to year, composition of review teams, and the purpose of “informal” annual reviews; and
    • if NSF chooses to target areas of technology for future ERCs, it do so in a way that does not preempt the possibility of worthwhile proposals in other areas.

The most far-reaching of these recommendations—to increase funding of ongoing ERCs in lieu of making new ERCs—resulted in a hiatus in the creation of new ERCs for three years between the start of the Class of 1990 and the Class of 1994.

9-M(b)   GAO (1987)

Congress requested that the General Accounting Office conduct a study during 1986–1987 to determine whether the Program was meeting its goals and whether Program leadership had maintained its planned mode of “active” management with respect to the centers. The study included a survey of funded ERCs and analyses of current management practices at NSF. The GAO survey[133] of funded ERCs in the Classes of 1985 and 1986 was described earlier in this chapter (see section 9-D(c), for example) as well as in Chapter 3. The main focus of the GAO survey team was on the evaluation and assessment of the performance of ERCs. In this regard the GAO’s input was very influential in establishing the guidelines by which the centers were reviewed and the process for doing so.

As was described in section 3-A(h), the main findings were that: (1) research quality was the most important criterion in selecting centers, with a proposal’s contribution to industrial competitiveness and education following in importance; and (2) NSF had an effective post-award oversight system through on-campus site visits using outside peer reviewers to evaluate ERCs, but it was too early in the process to determine the strengths and weaknesses of the ERC approach. The GAO also found that (3) a wide range of industries participated in the program and planned to continue their support; industry believed that the quality of the research was the most important reason for sponsoring an ERC; and it was too early to determine the program’s impact on engineering education, since firms had not yet had time to hire ERC graduates. Finally, (4) industry voiced a need for a way to strengthen their input to and influence on the ERCs’ research agendas.

9-M(c)   Program-level SWOT Analyses (1998)

In 1998, to complete the circle of input from center-level SWOT analyses provided by ERCs’ industrial members and students, the ERC Program devoted the ERC Annual Meeting to a Program-level SWOT analysis by the leadership groups from the ERCs. The groups were asked to focus on the key features (vision and planning, research, education and educational outreach, and industrial collaboration), NSF oversight system (performance review, reporting, and database), and NSF support strategy (levels, leverage, life span, and self-sufficiency). Before the meeting, during the summer of 1998, the ERCs were asked to carry out their own analyses so they would be prepared for informed participation during the October meeting. The findings of the SWOT analysis carried out at the October meeting are summarized as follows:

Strengths:

  • The ERC concept is a powerful model that has substantially changed the culture of academic engineering research and education, and it is maintained through substantial financial support and effective post-award oversight by NSF.

Weaknesses:

  • Program resources are not well matched to expectations and reporting and data collection are time-consuming.
    • ERC Program’s response was to continue high expectations for performance information, useful for post-award review teams and Program reporting to NSF and Congress.
  • The mission is so broad that staff are stretched thin.
    • ERC Program’s response was to increase the required number and expected expertise of non-faculty members of the leadership teams, in order to provide more expertise in industrial collaboration, education, and financial management at the ERC level.
  • In some cases, the NSF staff assigned to oversee a center’s operations are not sufficiently knowledgeable about the technical aspects of the field and lack significant knowledge of industry.
    • ERC Program’s response was to broaden the technical expertise of ERC PDs by increasing the number of ERC PDs, drawing them on a part-time basis from other NSF divisions to provide sufficient technical oversight. To strengthen input on industrial collaboration, NSF added more industrial members to the review teams and later formed the Industrial Consultancy, referenced in Chapter 6, Section 6-E(f).
  • Many host universities struggle with the ERC concept in cultures that are dominated by traditional disciplines and single-investigator research, which can jeopardize junior faculty involved in ERC research.
    • ERC Program’s response was to increase the interaction of site visit teams with deans and department chairs to make it clear that they had to support and facilitate the interdisciplinary culture of their ERC or its success would be jeopardized.
    • A host university that stood in the way of its ERC’s success often contributed to the failure of the ERC at renewal, or to a wariness on the part of NSF about recommending a second ERC at that university.
  • The process of graduation of an ERC from NSF support causes turbulence at ERCs in their later years, jeopardizing research, education, and the ERC concept itself.
    • ERC Program’s response was to set up a system of gradual reduction in NSF support in its last two years to stimulate better financial management and self-sufficiency planning. In addition, the site visit teams and NSF staff met with university administrators and industry members during the sixth-year renewal review to put them on notice that graduation was not far off and the university and the industrial members should step up funding to support the ERC’s transition to self-sufficiency.
    • Graduated-center directors came to subsequent ERC Annual Meetings to coach centers nearing graduation on how to survive.
    • ERC Program’s response was to carry out analyses of the actual process of graduation and of whether those programs were in fact jeopardized, as discussed below.

Opportunities

  • Most of the opportunities were derived from the weaknesses and will not be addressed again here.
  • Intellectual property rights should be addressed by asking each center’s ILO to prepare a model IP policy.
    • Subsequent special meetings of the ILOs, their member firms, and NSF staff addressed this issue and provided some guidance.
  • ERC base funding for ERCs approaching self-sufficiency should be expanded through increased foreign, state, and university funding.
    • NSF viewed this as a responsibility of each ERC. Beyond the added discussions at the sixth-year renewal reviews, no new actions were initiated.

Threats

  • The threats concentrated on (1) the fear of losing their own ERCs and the ERC concept after graduation from NSF support, and (2) the burden of data collection, reporting, and preparation of annual reports—issues that were addressed above.

9-M(d)   ERC Program Internal “Assessment of Benefits and Outcomes” (1997)

In 1997, the ERC Program’s own Program Evaluation Specialist, Dr. Linda Parker, reported on a study of two key aspects of the ERC Program: ERC-industry interaction and the effectiveness of former ERC graduate students in all sectors of employment.[134] The primary intent of this two-part study was to examine the extent to which the Program was making progress toward its goals. The purposes of the “ERC-Industry Interaction” study were to (1) examine the patterns of interaction that had emerged between ERCs and industry; (2) determine which types of interaction were most useful to industry and brought firms the greatest benefits; and (3) assess the value of these to the companies. The purpose of the “ERC Graduate Effectiveness Study” was to examine (1) the extent to which master’s and doctoral graduates with substantial ERC experience are more effective than their peers; (2) what the graduates did while at an ERC; and (3) the impact of ERC activities on the graduates’ effectiveness on the job.

It was found that interaction with ERCs provided 90% of member companies with a wide range of benefits. Among the highest-valued were the employment of ERC students and graduates, gains in intellectual property (product/process development or improvement, patents, and copyrights), and access to specialized equipment and facilities. Nearly a quarter of all firms reported having developed a new product or process as a result of their interaction with an ERC.

A majority of the firms surveyed indicated that their ERC involvement had influenced their firm’s research agenda. Two-thirds of the corporate representatives reported that their firm’s competitiveness had increased as a result of benefits received and the level reached 80% for firms involved with an ERC for eight to ten years. Finally, corporate personnel in firms hiring ERC students or graduates rated these employees as more productive and effective engineers than peers in the same firms. These are the basic impacts that the Program was designed to achieve.

The second part of the study found that graduate degree recipients with ERC experience took the knowledge, skills, capabilities, and techniques they learned in ERCs with them to their subsequent jobs. They continued working in ways they learned in ERCs—in interdisciplinary teams and by engaging in industry-university collaboration to advance technology. ERC firms employing ERC students and graduates valued this result of ERC interaction more than any other type of benefit. Supervisors and other industry representatives of firms employing ERC graduates judged the ERC-trained employees to be superior to non-ERC employees on a number of key performance dimensions. For example, at least 85% of their supervisors rated ERC graduates as better than their peers in overall preparedness, contribution to technical work, and depth of technical understanding.

9-M(e)  Impact of ERCs on Academic Culture (2001)

This NSF-sponsored study by a group of researchers affiliated with SRI International focused on the degree to which ERCs had “produced or contributed to changes in institutional and cultural norms of academic engineering research, education, and technology transfer in the universities that host ERCs” by examining institutional and cultural changes both in the units directly involved in the ERC and in the university more broadly.[135]

The study focused on 17 ERCs that had been in existence for the 10 years prior to the start of the study in 1999. Study findings included the following:

  • The “engineered systems” concept was understood in a variety of different ways across the centers, depending partly on the nature of the center’s research (system-development-focused versus more theoretical or conceptual), and was often difficult to implement. In all cases, the concept had little impact beyond the ERC itself.
  • Most ERCs valued the strategic research planning process strongly enough that they continued using it as a key management tool even post-graduation from the ERC Program. However, it had little impact on the larger university’s planning.
  • ERCs contributed significantly to the development and acceptance of interdisciplinary research and education at each of the 16 institutions hosting the ERC. Participating faculty found that the interdisciplinary orientation had contributed positively to their research, and host universities had in many cases begun emphasizing interdisciplinary research centers as a way to promote their excellence in specific research areas and to seek funding. Finally, the interdisciplinary research performed at ERCs usually led to changes in promotion and tenure policies, but often not without a considerable degree of “education” of P&T committees on the part of ERC Directors.
  • Allocation of the sizable indirect cost recovery (ICR) funds attributable to ERC activities was often a contentious issue in the ERC’s relationship with other campus units.
  • Education was the area in which ERCs were found to have had the most widespread impacts on the 16 university campuses studied. In nearly every case, at least some changes in the direction of increased interdisciplinary exposure, team-based research experience, industry interaction, and/or undergraduate involvement in research during the academic year were attributed, at least in part, to the models set forth by the new curricula, courses, and other educational programs and activities initiated by the ERCs. In some cases these impacts were campus-wide. Enrollment in ERC-developed interdisciplinary courses by students not otherwise directly exposed to the ERC, even students from other institutions, often served as a multiplier of the number of students the ERC was able to influence directly. This was even more true of new degree programs initiated through the influence of the ERC. The impact on undergraduates was greatest; but graduate students also benefitted from experience on cross-disciplinary research teams and greater exposure to industry, which tended to recruit heavily from ERC-experienced graduate students.
  • The ERC Program had major impacts on how universities interacted with industry, bringing closer collaboration between universities and firms throughout the 1980s, as reflected in increased R&D funds supplied by industry to academe and the spread of university-industry-government cooperative R&D centers. ERCs had modest impacts on the formulation of university intellectual property rights policies across the host campuses, partly because university offices of sponsored research projects often were unfamiliar with industry practices; but interactions with ERCs and their industry partners in many cases broke new ground for universities in this area.
  • By the time of the study, the main features and goals of the ERC Program had begun making inroads into academe from many different directions, so the study authors found it difficult to definitively assess the overall impact of ERCs on the culture of their host university. Results and impacts, while evident, were widely dispersed across institutional types, so that patterns were difficult to discern. “Although this study identified many positive impacts of ERCs on their host institutions, it would be incorrect to speak of wholesale change in the structures, activities, or norms of academic research, education, and technology transfer, whether on the part of the university or of colleges of engineering which are the immediate organizational homes of ERCs.”[136]

9-M(f)    NSF Committee of Visitors

Starting in 1990, the ERC Program received reviews and recommendations every three years from division-level Committees of Visitors. These reviews focused on the pre-award and post-award review processes, program impacts, and program leadership and management. Since these findings and recommendations were confidential to NSF, they cannot be included in the ERC History.

9-M(g)   Designing Next-generation ERCs: Insights from Worldwide Practice (2007)

Between July 2006 and March 2007, researchers with the Science and Technology Policy Institute conducted case studies, under NSF contract, of over 40 research centers in China, South Korea, Japan, England, Ireland, Germany, and Belgium. The aim was to inform ERC Program management of the policies and practices with which these centers operate, as input into the planning for Gen-3 ERCs. The studies focused on the following attributes of these centers:

  • Vision and program-level practices
  • Center-level planning, organization, and management
  • Industry and other external partnerships
  • International partnerships
  • Engineering education.

The resulting report[137] provided an overview of practices worldwide with respect to ERC-like centers. The overall finding was that the ERC Program, when compared to the centers abroad, is unique in its numerous mission elements and in the relative “rigidity” of its approaches to these missions, which included funding strategy, requirements and benchmarks, and lifespan. Although the foreign centers typically included several or all of the ERC features as part of their mission, their funding agencies tended to prioritize those mission elements and to vary their emphasis upon them over time according to technological area, economic conditions, and other external factors. According to the study authors, these centers “…employ a clear vision of the relative importance of their multiple missions. With this clarity comes program-level flexibility that we recommend the NSF ERC program consider.”[138] It should be noted that the ERC Program was the only one to require pre-college education.

9-M(h)  Designing Gen-3 ERCs: Recommendations by ERCs (2008)

Much of the FY 2008 ERC Program annual meeting, held in November 2007, was devoted to a series of workshops aimed at “Designing the Future ERC.” (See Chapter 3, section 3-C(a) and the linked file on the “Process for Designing the Future ERC” working meeting.) The recommendations of those workshops were summarized in an internal report, “ERC Key Features: Designing the Next-Generation ERC: Report from the 2007 Annual Meeting.”[139]

The objective of this exercise was to discuss and identify strengths and shortcomings of a broadly pre-defined set of combined Gen-2 and Gen-3 ERC key features.[140]  The overall finding emerging from the workshop effort was that the existing set of combined Gen-2/Gen-3 ERC key features was generally excellent and should be retained, with some modifications to the characteristics of a few features.

Figure 9-23 shows the number of the 11 breakout groups recommending that a feature be retained or deleted. No new key features were recommended for addition to the existing set.

Figure 9-23: Results of the analysis regarding recommendations to retain or delete current ERC key features in future-generation ERCs. [Source: NSF]

The most significant weakness identified by the participants reflected a concern that NSF asks the ERCs to do too many things relative to the funding levels of the Program; the requirement for extensive annual reporting was singled out in particular. Nearly half of the groups recommended deleting pre-college education as a key feature. Taken as a whole, the Strengths and Weaknesses identified by participants reflect an appreciation for flexibility on the part of NSF, since there may be many ways of meeting Program goals.

9-M(i)    ERC Innovations (Products, Processes, and Startups) (2010)

This study, first carried out in 2007 and updated in 2010, was described in some detail in Chapter 11, section 11-C(a)iii.[141] After tracking down well over a hundred commercialized outcomes of the ERCs over a 25-year period, the researchers estimated their combined economic impact to be in the range of $50–$75 billion, an impressive and surprisingly large return on NSF’s roughly $1B investment at that time. The valuation was an estimate derived by adding together many “ranges of estimates,” since precise market figures were usually difficult to obtain for many reasons, and since in many cases the ERC-derived technologies were essential components of larger systems and products.

One revelation of the study was that the results of ERC technology development in many cases took years to come to fruition as they traveled “downstream” through different licensees and corporate startups, buyouts, and mergers.

9-M(j)    Post-graduation Studies

Three studies of note have been conducted regarding the status and fortunes of ERCs after graduation from the ERC Program. These were described in detail in section 9-F(b). Figure 9-24 presents an overview of the status of all ERCs as of December 2019, when 83% of all successfully graduated ERCs were still functioning as centers with some degree of “ERC-ness,” in that a group of faculty continued to pursue cross-disciplinary research on engineered systems with the collaboration and support of industry. All three studies focused on the graduated centers’ sustainability as ERC-like entities.

Figure 9-24: Status of ERCs [Credit: SciTech Communications]

9-N       Lessons Learned

The ERC Program broke new ground at NSF in many ways. It pioneered new features and processes for managing government-funded university research centers that served as an example for agencies throughout the Nation and the world. The main lessons learned in managing this program have been described throughout this chapter and will be summarized here.

Above all, big risks often lead to big rewards, although of course there are dangers inherent in taking those risks, and government agencies are themselves inherently risk-averse. But from the outset in the mid-1980s, in its pursuit of “culture change” in academic engineering, the ERC Program took chances and made management innovations that have contributed, collectively, to its long-running success. These innovations include:

  • The length of the award—initially 11 years and later 10 years—to give centers the room to take risks and the time needed to achieve ambitious goals
  • The funding instrument, a cooperative agreement specifying performance expectations and goals (which ran counter to the technical specificity of a contract and the relative open-endedness of a grant)
  • The type and frequency of reviews, with site visit teams that included non-NSF peer reviewers and were kept as consistent as possible over time, and with the results of reviews potentially leading to loss of funding
  • An insistence on true cross-disciplinarity in the center’s research, with faculty committed to a vision of a next-generation engineered system
  • A requirement that undergraduates be included as full members of the research teams
  • Allowing centers to recompete for a new award in the same field but with a different focus
  • An emphasis on long-range strategic research planning, including the development of a standard but customizable graphical planning tool
  • Detailed reporting and database requirements and guidelines.

Other unusual characteristics of the ERC Program from a management standpoint include:

  • Aligned with the specific performance criteria, a major emphasis on quantitative data collection and reporting—rigorous, but systematized as much as possible to reduce the overhead burden on centers
  • The development of a collaborative community of effort (the “ERC Family”) through interactive annual meetings, collaboration grants, and tools such as an extensive website and a best practices manual, as opposed to inter-center competition
  • Consistent and committed top Program leadership (not always possible, but a major benefit when it can be achieved).

Other important lessons learned from the ERC Program experience that can contribute to the success of government-funded centers programs include:

  • Continual evolution of every major aspect of program management is key to the program’s continued success.
  • Just as it is vital for center directors to have the support of academic administrators and, in industry-funded centers, the active support of industry leaders (i.e., champions), it is crucial for the program itself to have support at the highest levels of the agency.
  • Nevertheless, selection and renewal decisions should remain as close as possible to the program level, involving program staff.
  • Mentoring of new center staff by experienced center staff from existing centers by such means as “consultancies,” new-ERC start-up briefings, and the like, is highly useful.
  • Be sure that reporting of industry participation and financial support (if applicable) is clearly defined and verifiable.
  • Requirements relating to university cost sharing must be clearly and unambiguously stated.
  • Supplemental funding beyond the base funding is a way to provide flexibility in pursuing desirable goals and initiatives that may arise, such as in education, testbeds, equipment purchases, and collaboration with other entities.
  • Key roles in a center where the budget and reporting functions are demanding are an administrative manager and a financial manager—either one person or, preferably, two.
  • Self-sufficiency leverages the government’s investment in centers. Early emphasis on self-sufficiency planning helps to ensure the center’s continued success after program funding ends.
  • After government program funding ends, even in continuing centers the features that tend to be cut back include education outreach (especially precollege, if any), center support staff, university support, and student involvement apart from research.
  • With pressure from industrial partners, whose influence now becomes stronger, there is often a tendency to revert to more project-focused research.
  • If the construct for a center is robust and there is strong support from industry and the university administration, along with opportunities for other government awards, a center can be self-sustaining for several years and can even expand beyond its original size and scope.

[1] NSF (2007). Audit of NSF Practices to Oversee and Manage its Research Center Programs (Appendices II and IV), Office of the Inspector General. Arlington, VA: National Science Foundation, OIG 08-2-002, November 14, 2007. https://www.nsf.gov/oig/_pdf/08-2-002_Research_Center_Programs.pdf

[2] See Chapter 3, Section 3-A(c), Post-Award Oversight, for additional discussion of these processes.

[3] GAO (1988). Engineering Research Centers: NSF Program Management and Industry Sponsorship. Report to Congressional Requesters (August 1988). Report No. GAO/RCED-88-177. Washington, D.C.: General Accounting Office, p. 22.

[4] Revised 4-10-2013.

[5] QRC (1998). Centers data provided to NSF.

[6] NSF (1984). Program Announcement, Engineering Research Centers, Fiscal Year 1985, April 1984. Directorate for Engineering, National Science Foundation.

[7] NSF (1985). Program Announcement, Engineering Research Centers, Fiscal Year 1986, April 1985. Directorate for Engineering, National Science Foundation.

[8] NSF (1993). Program Announcement, Engineering Research Centers, Fiscal Year 1994, April 1993. Directorate for Engineering, National Science Foundation, NSF 93-41.

[9] NSF (1994). Program Announcement, Engineering Research Centers, Fiscal Year 1995, April 1994. Directorate for Engineering, National Science Foundation, NSF 94-150.

[10] NSF (1997). Program Announcement, Engineering Research Centers, Fiscal Year 1998, April 1997. Directorate for Engineering, National Science Foundation, NSF 97-5.

[11] NSF (1998). Program Announcement, Engineering Research Centers, Fiscal Year 1999, April 1998. Directorate for Engineering, National Science Foundation, NSF 98-146.

[12] NSF (2002). Program Announcement, Engineering Research Centers, Fiscal Year 2003, April 2002. Directorate for Engineering, National Science Foundation, NSF 02-24. 

[13] NSF (2004). Program Announcement, Engineering Research Centers, Fiscal Year 2005, April 2004. Directorate for Engineering, National Science Foundation, NSF 04-570.

[14] NSF (2007). Program Announcement, Engineering Research Centers, Fiscal Year 2008, April 2007. Directorate for Engineering, National Science Foundation, NSF 07-521.

[15] NSF (2009). Program Announcement, Engineering Research Centers, Fiscal Year 2010, April 2009. Directorate for Engineering, National Science Foundation, NSF 09-545-537.

[16] NSF (2011). Program Announcement, Engineering Research Centers, Fiscal Year 2012, April 2011. Directorate for Engineering, National Science Foundation, NSF 11-537.

[17] NSF (2013). Program Announcement, Engineering Research Centers, Fiscal Year 2014, April 2013. Directorate for Engineering, National Science Foundation, NSF 13-560.

[18] NSF (1984), op. cit. and NSF (1985), op. cit.

[19] NSF (1993), op. cit.

[20] NSF (1997), op. cit.

[21] NSF (1998), op. cit.

[22] Often termed Administrative Director (AD). These titles were and are interchangeable in ERCs.

[23] NSF (2002), op. cit. and NSF (2004), op. cit.

[24] NSF (2007), op. cit., NSF (2009), op. cit., NSF (2011), op. cit., and NSF (2013), op. cit.

[25] NSF (1984), op. cit. and NSF (1985), op. cit.

[26] NSF (1994), op. cit., NSF (1997), op. cit., and NSF (1998), op. cit.

[27] NSF (2002), op. cit. and NSF (2004), op. cit.

[28] NSF (2007), op. cit., NSF (2009), op. cit., NSF (2011), op. cit., and NSF (2013), op. cit.

[29] NSF (1984), op. cit. and NSF (1985), op. cit.

[30] NSF (1994), op. cit., NSF (1997), op. cit., and NSF (1998), op. cit.

[31] NSF (2002), op. cit. and NSF (2004), op. cit.

[32] NSF (2007), op. cit., NSF (2009), op. cit., NSF (2011), op. cit., and NSF (2013), op. cit.

[33] GAO (1988). Engineering Research Centers: NSF Program Management and Industry Sponsorship—Report to Congressional Requesters (GAO/RCED-88-177, August 1988). Washington, DC: General Accounting Office, pp. 15-16.

[34] Preston, Lynn (1998). COI Memo for Review of ERC Full Proposals for NSF 98-146.

[35] Ibid., pp. 2-3.

[36] ERC 2006 Agreement (General Program Terms and Conditions) Final.

[37] Kenny, Barbara (2008). “Class of 2008 Administrative Directors Training.” Arlington, VA: National Science Foundation, Nov. 6, 2008.

[38] Brickley, Janice (2011). “2011 ERC AD Training, Part IV: AD Perspective.” Presented at the National Science Foundation, Nov. 4, 2011.

[39] NSF (2006). ERC Best Practices Manual. Ch. 9, Multi-University Centers, Sec. 9.1, Introduction and Overview. http://erc-assoc.org/best_practices/91-introduction-and-overview

[40] GAO (1988). Engineering Research Centers: NSF Program Management and Industry Sponsorship. Report to Congressional Requesters (GAO/RCED-88-177, August 1988). Washington, DC: General Accounting Office, p. 23

[41] Ibid. p. 26.

[42] NSF (2002). Goals and Features of an Engineering Research Center (ERC) and Guidelines for Preparing Annual Reports on Progress and Plans for Continuing Support and Renewal Proposals. Rev. January 7, 2002.

[43] QRC became a division of Macro International in 2007. Macro was later acquired by ICF, which was replaced as the ERC data contractor by Creative Business Solutions, Inc. (CBS) in 2019.

[44] Preston, Lynn (1999). ERC Program Review and Oversight System, slides 17-20.

[45] ICF (2013). ERC End of Year Slides 2012 (Dec 12, 2013).

[46] NSF (1991). The ERCs: A Partnership for Competitiveness, Report of a Symposium, February 28-March 1, 1990 (NSF 91-9), Directorate for Engineering, Engineering Centers Division. Washington, DC: National Science Foundation, p. 14.

[47] GAO (1988). Op. cit., p. 26.

[48] NSF (2002), op. cit.

[49] NSF (2013), op. cit.

[50] FY 2013 Guidelines for an Engineering Research Center Annual Review Site Visit.

[51] See Sections 7-C(d) and 7-D(d)iv for descriptions of the SLCs and their formation.

[52] NSF (2007). Op. cit.

[53] Year 1 funding, ERC Class of 1985; and 1986 funding of ERC Classes of 1985 and 1986

[54] https://www.nsf.gov/pubs/2004/nsf04570/nsf04570.htm

[55] From 2011 ICF data: All ERC Slides Revised, November 2012, slide 20.

[56] From 2013 ICF data: ERC_EOY_2013_slides_compiled, op. cit., slide 22.

[57] NSF Award no. EEC-0646678

[58] This testbed supplement example writeup was developed in May 2018 in collaboration with Russ Taylor, Director of CISST.

[59] See https://www.nsf.gov/awardsearch/showAward?AWD_ID=0121989&HistoricalAwards=false

[60] Ibid.

[61] https://platt.gatech.edu/drplatt.html

[62] https://www.nsf.gov/publications/pub_summ.jsp?ods_key=nsf00119

[63] https://nanohub.org/

[64] Ibid., homepage.

[65] Obama, Barack (2009). Speech on economic stimulus given in Denver, Colorado, February 1, 2009. https://www.cbsnews.com/news/transcript-obama-remarks-at-stimulus-signing/.

[66] Obama, Barack (2009). Speech given at the National Academy of Sciences, Apr. 27, 2009. http://www.sciencemag.org/news/2009/04/obama-academy-iv-speech-text

[67] Office of Inspector General (2004). Articles on NSF Supported Centers, from Semi-Annual Reports, October 1993 through September 2003. Semiannual Report No. 10. Arlington, VA: National Science Foundation. pp 7-8.

[68] Office of Inspector General (2004). Articles on NSF Supported Centers, from Semi-Annual Reports, October 1993 through September 2003. Semiannual Report No. 11, April 1, 1994 – September 30, 1994. Arlington, VA: National Science Foundation. pp. 8 and 38.

[69] Office of Inspector General (1999). Semi-Annual Report to Congress, September 1999. Arlington, VA: National Science Foundation. pp. 4-5, 8

[70] Office of Inspector General (2004). Articles on NSF Supported Centers, from Semi-Annual Reports, October 1993 through September 2003. Semiannual Report to Congress, September 2000. Arlington, VA: National Science Foundation. pp. 17-18

[71] Office of Inspector General (2004). Articles on NSF Supported Centers, from Semi-Annual Reports, October 1993 through September 2003. Semiannual Report March 1999. Arlington, VA: National Science Foundation. pp. 7-8.

[72] This level of disclosure is permissible in the History because the case became public and details of the charges and sentence appeared in the press in Wisconsin. See https://digital.bentley.umich.edu/midaily/mdp.39015071755016/211

[73] Ibid.

[74] NSB (1987). Approved Minutes, Open Session, 272nd Meeting, National Science Board (NSB-87-50).

[75] Kenny, Barbara (2012), op. cit. Slide 4.

[76] NSF (1993). ERC Program Solicitation NSF 93-41, op. cit., p. 2.

[77] NSF (1994). ERC Program Announcement NSF 94-150, op. cit., p. 2.

[78] NSF (1997). Engineering Research Centers: Partnerships with Industry and Academe for Next-Generation Advances in Knowledge, Technology, and Education. Program Announcement NSF 97-5, p. 2.

[79] Ailes, Catherine P., J. David Roessner and H. Robert Coward (1999). Documenting Center Graduation Paths, First Year Report, Arlington, VA: SRI International, May 1999.

[80] Ailes, Catherine P., J. David Roessner and H. Robert Coward (2000). Documenting Center Graduation Paths, Second Year Report. Arlington, VA: SRI International, May 2000.

[81] Ailes (1999), op. cit., p. 3.

[82] Ibid, p. 4.

[83] Ibid, p. 12.

[84] Williams, James E. Jr., and Courtland S. Lewis (2010). Post-Graduation Status of National Science Foundation Engineering Research Centers, Report of a Survey of Graduated ERCs. Melbourne FL, SciTech Communications LLC, January 2010, p. 12.

[85] Ibid., p. 11.

[86] Ibid.

[87] Ibid.

[88] Ailes, op. cit., pp. 16-99.

[89] Ibid, p. 4.

[90] Ailes, Catherine P., J. David Roessner, H. Roberts Coward (2000). Documenting Center Graduation Paths, Second Year Report. op. cit., pp. 6-7.

[91] It should be noted that when the industrial members of the Minnesota Center for Interfacial Engineering realized it had officially shut down, they asked the University administration to resurrect it. They merged the faculty from the ERC with a Materials Research Lab and the combined effort is still in operation (2019) as the Industrial Partnership for Interfacial and Materials Engineering (iPrime).

[92] Mujumdar, Vilas (2005). “Graduated Engineering Research Centers—Feedback & Analysis.” Presentation at the ERC Program Annual Meeting, Bethesda MD, November 17-19, 2005. http://erc-assoc.org/sites/default/files/download-files/Graduated%20ERCs%20-%20Feedback.ppt

[93] Williams and Lewis (2010), op. cit., pp. 9-10.

[94] Ibid., p. 13.

[95] Ibid., p. 20.

[96] Williams, James E. (2009). “Study of the Sustainability of Graduated ERCs.” Presented at ERC Program Annual Meeting, Bethesda MD, November 20, 2009, slides 10-11.

[97] Williams and Lewis, op. cit., p. 12.

[98] Ibid., p. 17.

[99] Ibid., pp. 28-29.

[100] Technically, the next class comprised the three Earthquake Engineering Research Centers, formed in 1997 in a different division but “adopted” by the ERC Program in 1999. These were all highly multidisciplinary.

[101] Cruz-Pol, Sandra and José Colom-Ustáriz (2002). A Case Study: High Percentages of Women in Engineering College at UPRM. Paper presented at WEPAN 2002 Conference, June 8-11, 2002, San Juan, Puerto Rico. http://ece.uprm.edu/~pol/pdf/WepanHighPerc.pdf

[102] See, for example, NSF (1995). Highlights of Engineering Research Centers Education Programs (NSF 95-56). Arlington, VA: National Science Foundation, pp. 64-66.

[103] Hannon, Kerry (2018). “Visionaries with the Courage to Change the World.” New York Times, May 27, 2018. p. 2.  https://www.nytimes.com/2018/05/24/us/visionaries-change-the-world.html

[104] Housing the website of an NSF program on a server outside of nsf.gov was a novel concept at the time, when the web itself was fairly new. This was deemed appropriate because the content of the Best Practices Manual was written by members of the ERC community external to NSF. It was also more efficient, because this was a complex website requiring frequent changes and updates, which could best be made by Lewis with little delay. NSF’s own website was managed by NSF’s Office of Legislative and Public Affairs; all changes would have needed to be submitted for review and would have taken considerably more time to implement. Special approval for this new website was sought and received from the NSF General Counsel. Subsequently, the use of outside servers was adopted by other programs within NSF.

[105] National Academies of Sciences, Engineering, and Medicine (2017). A New Vision for Center-Based Engineering Research. Washington, DC: The National Academies Press. https://doi.org/10.17226/24767

[106] NSF (1991). The ERCs: A Partnership for Competitiveness. Report of a Symposium, February 28-March 1, 1990 (NSF 91-9). Washington, DC: National Science Foundation.

[107] After 2012, following the stepping-down of ERC Program Leader Lynn Preston, the meetings became biennial, alternating with summer retreats for ERC staff groupings in the alternate years.

[108] Examples of annual meeting keynote speakers include: VLSI pioneer and photonics entrepreneur Carver Mead (2000); The World Is Flat author Thomas Friedman (2005); Vivek Paul, President & CEO of Wipro Technologies (2006); William Haseltine, founder & CEO of Human Genome Sciences, Inc. (2006); inventor extraordinaire Dean Kamen (2007); Ethernet inventor Bob Metcalfe (2008); and Nicholas Donofrio, Executive Vice President for Innovation & Technology, IBM (2009).

[109] Those PDs who have been “rotators” at NSF have had, of course, a shorter involvement with their centers, given their typically two-year term at NSF before returning to their home institution.

[110] In the early 1990s a short-lived “ERC Association” consisting of the ERC Directors was formed and met a few times. However, it was superseded by lengthy closed meetings of the Directors at ERC annual meetings.

[111] The members of the six original working groups are listed at http://erc-assoc.org/best_practices/appendix-working-group-members

[112] Dr. Donnelly was the Associate Director for Education and Outreach at the University of Florida’s Particle Engineering Research Center and one of the leaders in shaping that role at ERCs. Currently she is the Director of the Center for Undergraduate Research at the University of Florida. She is profiled in section 7-E(a).

[113] Preston, Lynn (2012). “Engineering Research Centers Program: Two+ Decades of Productivity & Innovation…and More to Come.” Presentation given at the ACS Science & the Congress Project on Engineering Research Centers: Seeding Innovation and Jobs, Washington, DC, February 17, 2012.

[114] Khargonekar, Pramod (2015). “The Future of Center-Based Multidisciplinary Engineering Research.” Presentation given to the National Academies Committee on the Future of Center-Based, Multidisciplinary Engineering Research, Washington, DC, December 14, 2015.

[115] See, for example, http://erc-assoc.org/achievements/assist-nanosystems-center-presents-2013-consumer-electronics-show

[116] The Society for Advancement of Chicanos/Hispanics and Native Americans in Science, annual meetings

[117] Preston, Lynn (2013). “Engineering Research Centers: Presentation to the CEOSE” (Committee on Equal Opportunities in Science and Engineering), May 1, 2013. Directorate for Engineering, National Science Foundation. Slides 4-5.

[118] NSF (1984). Program Announcement, Engineering Research Centers, Fiscal Year 1985, April 1984. Directorate for Engineering, National Science Foundation, NSF 84-X.

[119] NSF (1985). Program Announcement, Engineering Research Centers, Fiscal Year 1986, April 1985. Directorate for Engineering, National Science Foundation, NSF 85-X.

[120] NSF (1994). Program Announcement, Engineering Research Centers, Fiscal Year 1995, April 1994, op. cit.

[121] NSF (1997). Program Announcement, Engineering Research Centers, Fiscal Year 1998, April 1997, op. cit.

[122] NSF (1998). Program Announcement, Engineering Research Centers, Fiscal Year 1999, April 1998, op. cit.

[123] NSF (2002). Program Announcement, Engineering Research Centers, Fiscal Year 2003, April 2002, op. cit.

[124] NSF (2004). Program Announcement, Engineering Research Centers, Fiscal Year 2005, April 2004, op. cit.

[125] LSAMP = Louis B. Stokes Alliance for Minority Participation; AGEP = Alliances for Graduate Education and the Professoriate; TCUP = Tribal Colleges and Universities Program; CREST = Centers of Research Excellence in Science and Technology.

[126] Ibid., slides 9-10.

[127] Ibid., slide 11.

[128] Ibid., slide 13.

[129] Ibid., slide 12.

[130] Coley, Brooke (2012). “A Study of the ERC Impact on Diversity.” Presentation at the ERC Annual Meeting, Bethesda, Maryland, November 2012.

[131] National Academy of Engineering (1984). Guidelines for Engineering Research Centers: A Report to the National Science Foundation. Washington, DC: National Academy Press. [https://doi.org/10.17226/19472]

[132] National Academy of Engineering (1989). Assessment of the National Science Foundation’s Engineering Research Centers Program: A Report for the National Science Foundation by the National Academy of Engineering. Washington, D.C.: National Academy of Engineering. https://www.nap.edu/read/19054/chapter/1

[133] GAO (1988), op. cit.

[134] Parker, Linda (1997). The ERC Program: An Assessment of Benefits and Outcomes (NSF 98-40). Arlington, VA: National Science Foundation. https://www.nsf.gov/pubs/1998/nsf9840/nsf9840.htm

[135] Ailes, Catherine P., Irwin Feller, and H. Roberts Coward (2001). The Impact of Engineering Research Centers on Institutional and Cultural Change in Participating Universities: Final Report. Report to the National Science Foundation, Engineering Education and Centers Division. Arlington, VA: Science and Technology Policy Program, SRI International, June 2001. P. iii. http://erc-assoc.org/sites/default/files/studies_reports/ERC%20Cultural%20Impact_SRI%202001.pdf

[136] Ibid., p. x.

[137] Lal, Bhavya, Craig Boardman, Nate Deshmukh Towery, and Jamie Link (2007). Designing the Next Generation of NSF Engineering Research Centers: Insights from Worldwide Practice. Washington DC: Science and Technology Policy Institute, November 2007. http://erc-assoc.org/sites/default/files/topics/ERC_Benchmarking_Report_Foreign_Centers.doc

[138] Ibid., p. 4.

[139] McLaughlin, David and Courtland Lewis (2008). ERC Key Features: Designing the Next-Generation ERC: Report from the 2007 Annual Meeting. Report to NSF, January 2008. http://erc-assoc.org/sites/default/files/studies_reports/Key%20Features%20of%20the%20Next%20Generation%20ERC-Final.pdf

[140] Although Gen-3 ERC key features had already been defined and published in an NSF program announcement (NSF 07-521), the first class of Gen-3 ERCs (2008) would be funded the following summer of 2008. Input from this report would be used to inform the next program announcement, for FY 2010 awards.

[141] Lewis, Courtland S. (2010). Innovations: ERC-Generated Commercialized Products, Processes, and Startups. Report to the National Science Foundation, Engineering Directorate, Engineering Education and Centers Division. Melbourne, FL: SciTech Communications, February 2010. http://erc-assoc.org/sites/default/files/topics/ERC_INNOVATIONS_2010_reprint.pdf