Research Impact

Update for Bentley Research Council. February 2019.
This source is available online at http://purao.us/research-impact-resources/
Compiled by Sandeep Purao
Contributors: Boeri, Herr, Kirsch, LaFarge, Ledley, Markus, Purao (sub-committee)

Key Points (based on conversations within the sub-committee)  

  1. Emphasis on “scholarship” (a broad range)
  2. Impact beyond academia (see also Spring symposium theme from Lynne and Sandeep)
  3. Let’s look at impact in addition to quality
  4. Let’s develop a suite of metrics

Pathways

  1. Culture of Impact including Communication and Incentives (Institutional-level)
  2. Faculty Efforts, Mentorship and Engagement (Individual-level effort)
  3. Collaboration and Integration (Cross-discipline and External Stakeholder)
  4. Metrics (Alternative and Multiple, countables and not so)

Resources

1. What does Impact of Research Mean? From Emerald Publishing 2018
http://www.emeraldgrouppublishing.com/authors/impact/index.htm
This is not a research publication. Instead, it is from a publishing house aimed at scholarly publications. It has a nice concise description of how to think about research impact within / outside academe. It also contains a number of useful how-to’s for researchers and institutions.

2. Time to make business school research more relevant. HBR 2018 
https://hbr.org/2018/07/its-time-to-make-business-school-research-more-relevant
The article identifies several problems, including HARKing (hypothesizing after the results are known), Lost in Translation, and Lost before Translation. For example, Lost before Translation points out the need for dialog with managers and stakeholders before the research is conceived and carried out.

3. Scholarly Impact: A pluralistic conceptualization. AOM Learning and Education 2015
https://journals.aom.org/doi/10.5465/amle.2014.0121
The authors suggest clarifying strategic direction in terms of which stakeholders we are trying to affect and why, how future scholars are trained, and how faculty performance management systems are designed and implemented. A pluralist conceptualization of scholarly impact can increase motivation for engaged scholarship and design-science research that is more conducive to actionable knowledge, as opposed to exclusively career-focused advances.

4. The spirit of science and socially responsible scholarship. MOR 2013 
https://onlinelibrary.wiley.com/doi/abs/10.1111/more.12035
One simple point – the more we write and talk about it, the more attention it will get. The editorial essay also laments that we are failing the spirit of science that underlies our scholarly mission.

5. London School of Economics Impact Blog

This is a wonderful and frequently updated blog that points to several interesting sources. In addition to regular posts, it also contains several “e-collections” aimed at specific aspects. I found some of these to be particularly relevant for our discussions and am adding them below as additional resources.

6. Building a culture of impact from the LSE Impact blog

Achieving impact “is a messy, iterative, pragmatic and unpredictable challenge whose methodology cannot be tightly prescribed.” Based on an analysis of case studies, the author suggests these four pointers:

  1. Focus on collaboration, co-creation and an iterative approach – plan for impact from the beginning of your research project.
  2. Emphasize local scholarship to build the credibility of the research results – a specific strategy to strengthen local relationships ensures that they will last long after the project has ended.
  3. Networking is crucial – saying yes to every opportunity to speak can be exhausting, but it broadens the range of debates researchers can follow and thus the range of people reached by their messages and advice.
  4. It’s all about the quality of the evidence – providing a credible database can be just as influential as providing tailored policy advice.

7. Now is the time to update our understanding of scientific impact in light of open scholarship from the LSE Blog

The authors make three key points. First, impact comes from collaboration (across disciplines): “meaningful insights can often only be generated by a complex combination of expertise.” The example cited is hyper-authorship: hundreds of authors on a single piece. Second, impact comes in different shapes: “Scholars that are investing time and effort in alternative products or even public goods … face disadvantages when it comes to the assessment of their work and ultimately career progression.” Third, impact is dynamic: “Traditional impact measures fail to capture the dynamic nature of novel scholarly products. For many they are not considered citable.”

8. Despite becoming increasingly institutionalised, there remains a lack of discourse about research metrics among much of academia. From the LSE Blog

Here are some excerpts: Most participants in the study were concerned about the limitations of evaluative metrics and the extent of their use. They talked about how metrics should not be used, how they are not comparable, how they encourage gaming behaviour, and so on. Academics have accepted metrics as standards of evaluation and are “thinking with indicators” … In other words, participants are following rules that are implicit and embedded in everyday academic life. Yet, when they are asked to reflect on the use of metrics, they articulate their limitations and inappropriateness. The author develops two concepts:

  1. Evaluation complacency — being complacent about the achievements measured by evaluative metrics, without feeling the need to reflect on the limitations and shortcomings of those metrics.
  2. Evaluation inertia — having no tendency to reflect on, critique, or change the existing standards of research evaluation, including the use of metrics, because, for example, the system encourages the chasing of metrics as a goal.
The author suggests bringing in ideas about “ideal speech and public discourse” – I’d like to suggest that we bring the discourse about metrics into our everyday academic lives. The understanding of metrics should not be limited to specialists. There is an urgent need for everyone who uses and is affected by these metrics to be fully informed and to engage in conversations in an open and public discourse.

9. Impact of Research: A Guide for Business Schools from AACSB 2012
This was suggested by Fred L. as well as Vicki L.
https://www.aacsb.edu/-/media/aacsb/publications/research-reports/impact-of-research-exploratory-study.ashx?la=en
1. Align research expectations with the school’s mission. Here is an example they provide:

The Schroeder Family School of Business Administration at the University of Evansville is a school that emphasizes teaching above all else, but its mission also draws attention to the important role of research: The mission of the Schroeder Family School of Business Administration is to provide a life-transforming, high quality, innovative business education within a liberal arts and sciences framework. The school’s faculty engages in the creation of knowledge through scholarship and provides its students with experiential learning and a global perspective that will enable them to engage the world as informed and ethical business professionals.

2. Define expectations (of impact – added by SP) at the school level (not the individual).

3. Engage faculty. Here is an example: The Business School of Universitaet Mannheim described its process of faculty engagement as follows: The Business School created a working group … presided by the associate dean for research. The group represents six of the school’s seven academic areas (Accounting and Taxation; Banking, Finance and Insurance; Economic and Business Education; Information Systems; Management; Marketing; and Operations Management) and different academic ranks (professors and research assistants) in the faculty. It conducted in-depth discussions on a variety of issues related to the pilot study and drew up a statement of impact expectations.

The report also elaborates some ideas about possible measures, emphasizing the need for a collection of metrics and approaches (instead of a single measure/approach) and for attention to multiple stakeholders.
Fred L. identified the following key points from this source (email shared 19 Nov). Two passages seem particularly interesting:
“An important question to address at the beginning of the exercise is thus how the mission of the school (or the university with which it is affiliated) drives decision-making about the kind of impact the school desires. For example:
• Does the institution desire to impact the development of theoretical models that deepen the relevant profession’s understanding of the disciplines under study? (In finance, examples of this would be Black-Scholes Option Pricing, CAPM, Modern Portfolio Theory, etc.)
• Does the institution desire to demonstrate application of business theory so as to inform academics and practitioners as to how to apply business principles more effectively?
• Does the institution desire to demonstrate pedagogical practices that produce enhanced student learning opportunities?
• Does the institution desire to create interdisciplinary paradigms that students can apply in varied and new situations (the liberal learning model)?
• Does the institution value consulting or other forms of external engagement with the community or professions? What is the relation of such activities to other research expectations?”
Also: “Equally important are the potential uses for insights from the study among external stakeholders. Research impact is clearly about more than just discovery. Much like innovation is more than just invention, great research without effective dissemination to the right audiences is likely to fall short of its potential impact. In fact, several schools participating in the exploratory study noted that they discovered new opportunities to share research findings with stakeholder groups who might benefit.”

10. Levitt, Ruth, et al. Technical Report: Assessing the Impact of Arts and Humanities Research at the University of Cambridge. Rand Corporation, 2010. From Ranjoo Herr
Levitt et al.’s 2010 report investigating the impact of research at the University of Cambridge, UK, identifies eight categories of impact for “arts and humanities” research: teaching, academic impact, impact on policy, impact on practice, impact on public knowledge, impact on preservation of heritage, impact on leisure and entertainment, and economic impacts.

11. Extracts from a brief report compiled by Ranjoo H. 2018. (with comments from SP)
The humanities typically encompass literature, history, and philosophy. The research “impact” of the humanities is particularly difficult to gauge, as the language of impact presupposes assumptions that are at odds with the proper domain of the humanities. In particular, it implies that knowledge is valuable when it has practical utility. Although the humanities have formed an essential part of the university curriculum since its creation, the tendency to value knowledge for its practical utility began in the U.S. when the Morrill Act (1862) established the land-grant universities, which taught agriculture, science and engineering. (We may think of Business vs. Arts/Humanities vs. STEM – and consider what impact means in each before connecting these ideas to the mission and priorities of Bentley.) Additional references suggested:
Bulaitis, Zoe. Measuring Impact in the Humanities. Palgrave Communications 3/7 (2017): 1-10.
Busl, Gretchen. Humanities Research Is Groundbreaking, Life-Changing, And Ignored. The Guardian (Oct 19, 2015), https://www.theguardian.com/higher-education-network/2015/oct/19/humanities-research-is-groundbreaking-life-changing-and-ignored.
Hazelkorn, Ellen. Making an Impact: New Directions for Arts and Humanities Research. Arts & Humanities in Higher Education 14/1 (2015): 25-44.

12. How can we boost the impact of publications? Try better writing. PNAS 2019. Contributed by Fred L. (extracts below).
“Our results suggest that writing more with the reader in mind produces more citations, regardless of career stage or where you aim to publish. Of course, there are important caveats. Article content and the context in which it was written can determine how influential an article is, regardless of writing style. Furthermore, condensing writing to a set of quantifiable components does not encapsulate everything that is good or bad about writing, a challenge that is difficult, if not impossible, to overcome entirely.
“Yet our analysis is a first step toward understanding the benefit of writing with the reader in mind and gives some initial clues regarding what good writing in science can achieve. Although more citations do not necessarily translate to greater research impact, more citations do suggest a broader readership and may assist with greater knowledge transfer among peers and disciplines, greater research translation to industry, and greater uptake of research by the media, educators, and the broader community.”

13. Speaking engagements – a possible measure – From Miriam B. 
Other than the typical measures on Google Scholar and other online views, an important measure is how many times authors are invited to a speaking engagement because of their research work. Of course, this means keeping up with degree works—although updating degree works can be time-consuming. Maybe we can collect this information in another way.

14. The Research Excellence Framework UK 2014
http://www.ref.ac.uk/2014/
http://www.ref.ac.uk/2014/media/ref/content/pub/REF%20Brief%20Guide%202014.pdf
Comments from Fred
This is a highly informative model for thinking about research assessment, which has been developed in the UK. Two things are of particular interest:
1. This has been a highly reflective effort, with a number of trial runs and a lot published about the thought process and process.
2. Perhaps the most interesting part is the recognition of three complementary levels of excellence: Output, Impact, and Environment.
Some resources to emphasize from Sandeep
The REF defines ‘impact’ as “an effect on, change or benefit to the economy, society, culture, public policy or services, health, the environment or quality of life, beyond academia.”
Each university was asked to prepare an impact statement as well as case studies. As an example, consider Warwick, which scored fairly high on the assessment. The overall impact document it prepared, including links to the impact statement and multiple case studies, is here:
https://results.ref.ac.uk/(S(ds2srqu2dlamxsmq5tupv1n4))/Submissions/Impact/1420

15. Penfield, T., Baker, M.J., Scoble, R. and Wykes, M.C., 2014. Assessment, evaluations, and definitions of research impact: A review. Research Evaluation, 23(1), pp.21-32.
In the context of the UK Research Excellence Framework, we can also look at this paper, which specifically focuses on the “Impact” component of the REF. The paper traces several initiatives and describes the evolution of the impact concern within academia. The authors identify important challenges to measuring impact, including: (a) time lag, (b) the developmental nature of impact, (c) attribution, which can be difficult at times, (d) knowledge creep, such as gradual accumulation, and (e) gathering evidence to show impact. The authors therefore suggest different approaches to understanding impact, such as metrics, narratives, testimonies, and citations outside academia.

16. Lyytinen, K., Rowe, F. and McGuire, C., 2018. Improving IS Research Relevance for Practitioners: The Role of Knowledge Networks. ICIS. – From Lynne M.
The paper develops a model that draws on absorptive capacity theory and emphasizes the role of knowledge networks. It identifies different kinds of knowledge generated by IS researchers – factual, technical, cognitive, explanatory, and ethical – and suggests that concepts from absorptive capacity such as identification, assimilation, transformation and exploitation can be useful for understanding how to improve relevance. The abstract notes that “the application of the model to field data also reveals why consultants have become and are likely to remain successful knowledge brokers between different knowledge communities within IS domain. We note the narrow band of influence that IS community bears on IT professional’s knowledge practices.”

17. Galletta et al. 2019. If Practice Makes Perfect, Where do we Stand? Communications of the AIS. Forthcoming – From Lynne M.
This paper reports on the Senior Scholars panel at ICIS 2018. It provides history and comparisons across several research/professional bodies, describes practices, and laments the lack of practice focus in IS research. It discusses impact and funding models, revisits the REF, and points to possibilities for tenure practices and metrics. Although the paper is written from an IS perspective, the comparisons and general pointers it provides are useful for discussion and for considering how we may wish to change our culture, practices, and measures.

18. From the Scholarly Kitchen blog – ImpactStory – From Lynne M.

The post describes this: Impactstory (http://impactstory.org/) recently announced a new tool in development. Called “Get The Research” and aimed at serving the general public rather than an audience of scholars and specialists, it promises to provide a new level of accessibility (in multiple senses of the term) to published scholarship.
This appears to be still in the works – See here: https://gettheresearch.org/

19. San Francisco Declaration on Research Assessments (DORA) – From Fred L.

Important to note is this: The Journal Impact Factor, as calculated by Thomson Reuters, was originally created as a tool to help librarians identify journals to purchase, not as a measure of the scientific quality of research in an article. So, the declaration makes this general recommendation: Do not use journal-based metrics, such as Journal Impact Factors, as a surrogate measure of the quality of individual research articles, to assess an individual scientist’s contributions, or in hiring, promotion, or funding decisions.

20. The Leiden Manifesto – From Fred L. 
https://www.nature.com/news/bibliometrics-the-leiden-manifesto-for-research-metrics-1.17351
The paper questions what it calls the “impact-factor obsession” and suggests ten principles for better evaluation practices. Examples include: 1) Quantitative evaluation should support qualitative, expert assessment. 2) Measure performance against the research missions of the institution, group or researcher. 3) Protect excellence in locally relevant research.

21. Altmetric – From Fred L.
https://www.altmetric.com/products/explorer-for-institutions/
A different way of assessing your research by examining ‘who is talking about your research.’ The site describes it as “an intuitive platform that enables you to monitor the online activity surrounding academic research.” This can be an interesting approach, although, as Fred mentioned to me in his email, like any measure it may be susceptible to some gaming / manipulation.

22. Boyer’s Definitions of Scholarship – from Vicki L.
Boyer, E.L., 1990. Scholarship reconsidered: Priorities of the professoriate. Princeton University Press, 3175 Princeton Pike, Lawrenceville, NJ 08648.
From Vicki L. – We have been using Boyer’s definitions of scholarship fairly heavily as guidance in defining areas of possible impact. I believe he is cited in the AACSB white paper. The categories suggested include the scholarship of discovery, application, integration, and teaching. Bentley used these categories in the last AACSB report to discuss quality and impact as seen by each department.

Comments about Boyer from Gesa

Glassick, Charles E. “Boyer’s Expanded Definition of Scholarship, the Standards for Assessing Scholarship, and the Elusiveness of the Scholarship of Teaching.” Academic Medicine. 75.9 (2000). https://pdfs.semanticscholar.org/8449/5824183f7038f902466ef94b93f237fc7c9a.pdf

This concise article offers a summary of Boyer’s work and methodology. The last section speaks to the challenges of documenting the impact of scholarship of teaching. Glassick was one of the contributors to Boyer’s study and speaks with deep knowledge on this matter.

Defining Scholarship: Boyer’s 4 Models and the new digital scholarship: A Faculty Conversation. March 13, 2014 at Northeastern. This is a slide show with some useful examples offered for each kind of scholarship. https://www.northeastern.edu/cpsfacultycentral/wp-content/uploads/2013/03/Defining-Scholarship-with-Boyers-Four-Areas-of-Scholarship-Explored-and-the-New-Digital-Scholarship-A-Faculty-Conversation.pdf

Comments from Fred 

Bentley should (and I believe, does) formally recognize the broad scope of excellent scholarship that our colleagues produce in many different disciplines, ranging from traditional publications in peer-reviewed, professional journals, to books, blogs, commentaries, policy papers, creative/artistic works, performances, presentations, and – importantly – contributions to business and social practice. This begins by using the word “scholarship” instead of “research.” My sense is that this was the purpose of the exercise several years ago, when departments were asked to provide commentary on what was considered scholarly output in their disciplines. Our department focused on the conventions regarding authorship, team-based research, the genre of natural science publications, the role and rigor of (small) subspecialty journals as well as broad (we use the term interdisciplinary) science journals.

23. Shulman James, Executive VP of American Council of Learned Societies from Gesa
“College and Beyond II: The 21st Century, Measuring the Impact of Liberal Arts Education.” This article discusses a Mellon-funded research study that is examining how to measure the impact of a liberal arts education. This is a student- and education-focused study. A brief synopsis follows. https://mellon.org/research/college-and-beyond-ii/

In 2018, under the auspices of The Mellon Research Forum on the Value of a Liberal Arts Education, the Foundation has supported the launch of a significant new research initiative in collaboration with the Inter-university Consortium for Political and Social Research (ICPSR) at the University of Michigan. College and Beyond II: The 21st Century will gather data to support research concerning the impacts on, and outcomes of, students who pursue a liberal arts education at a wide range of colleges and universities. The effort will be informed by a series of research reviews commissioned by the Foundation in 2017 on what we know about outcomes of a liberal arts education in a range of rubrics, from health and democratic participation to cognitive skill development and economic returns. These essays by experts in the field indicate significant gaps in our understanding of such outcomes, as well as ways in which thoughtfully designed social science and humanities research could begin to remedy them.

24. Hazelkorn on Making an Impact – New Directions for Arts and Humanities – from Gesa
Hazelkorn, Ellen. Making an Impact: New Directions for Arts and Humanities Research. Arts and Humanities in Higher Education. 14.1, 2015. Perspectives from Ireland, the Netherlands and Norway. https://journals.sagepub.com/doi/abs/10.1177/1474022214533891

Abstract. The severity of the global economic crisis has put the spotlight firmly on measuring academic and research performance and productivity and assessing its contribution, value, impact and benefit. While, traditionally, research output and impact were measured by peer-publications and citations, there is increased emphasis on a ‘market-driven approach’, which favours the bio-, medical and technological sciences, and has helped reinforce a disciplinary hierarchy in which arts and humanities research has struggled for attention. This article charts the changing policy environment across Ireland, the Netherlands and Norway. It draws on evidence from the HERAVALUE project that studied how different stakeholders value arts and humanities research; almost 100 interviews were conducted with representatives from the academy, policymakers and civil society in these three countries. Although the arts and culture have played a distinctive nation-forming role, and continue to do so, each country has adopted very different policy responses towards arts and humanities research.

25. From the Federation of the Humanities and Social Sciences, a Canadian perspective – from Gesa
The Federation for the Humanities and Social Sciences offers a Canadian perspective. http://www.ideas-idees.ca/issues/research-impact

The Federation released an updated statement, “Humanities, Social Sciences and Arts Research: A Framework for Identifying Impacts and Indicators” (May 2014). http://www.ideas-idees.ca/sites/default/files/2014-05-05-impact-project-update-en.pdf

Simone Tulumello summarizes and comments on the statement here: http://www.ideas-idees.ca/blog/archive/201410

The document first defines impact along five dimensions, then proposes indicators for measuring each:

* Impacts on Scholarship – measured through bibliometric indicators, downloads from OA repositories, citations, prizes and awards, reputation as measured by surveys, …;
* Impacts on Capacities through teaching and mentoring – measured through surveys for students and colleagues, employer surveys, …;
* Impacts on Economy – measured through advisory roles, resulting revenue opportunities, income derived from patents, consulting contracts, …;
* Impacts on Society and Culture – measured through number and quality of partnership with community groups, media coverage of research, engagement with the public, …;
* Impacts on Practice and Policy – measured through citations in government documents, consulting for government, commissioned reports, ….

Two personal thoughts. On the one hand, this document is an important benchmark for future studies: going beyond impact defined as a simple count of citations within the closed world of academia should indeed be a priority for research policy. On the other hand, the attempt to mix academic dimensions with wider impacts on society, economy, and policy-making should be regarded as an important one. Future debates should thus focus on how the various dimensions should be balanced and on how objective indicators and subjective ones (surveys) can be brought into dialogue.

26. Forget miracles – how can we gauge the real impact of science?
https://www.elsevier.com/connect/forget-miracles-how-can-we-gauge-the-real-impact-of-science
Summary of Funders’ Summit in early 2019

“The intellectual and practical efforts to understand, assess and improve impact are pretty much in their infancy,” he said. When gauging the value and social impact of science, he explained, we need to go beyond simplistic categories like “basic versus applied” and familiar quantitative measures like number of publications and patents or monetary ROI. Because of the complexity of scientific research and the system that supports it, these issues need more sophisticated tools and policies, he said. “Impact is achieved in a complex systems context,” he explained. “Yet science policy lacks theories, methods and tools for understanding and assessing impact in such a context. … “How do you know if you’re asking good questions – whether you’re asking questions that will lead to the outcome you want to achieve? How do you know if you’re getting that right?”

See also Bozeman, B & Sarewitz, D 2011, ‘Public Value Mapping and Science Policy Evaluation’ Minerva, vol. 49, no. 1, pp. 1-23. https://doi.org/10.1007/s11024-011-9161-7

This chart is from a 2007 paper Prof. Daniel Sarewitz co-authored in Elsevier’s journal Environmental Science and Policy. It visualizes a method to gauge whether research is relevant to those who need it.

27. Facilitating inter-disciplinary research as a way towards impact 

From Sandeep – https://www.nap.edu/catalog/11153/facilitating-interdisciplinary-research

I bring this up because I believe that “inter-disciplinary” research often forces us to reflect deeply about the kinds of issues we are discussing.

From Fred – The “impact” of scholarship should extend beyond the academic community. Too often, the term “impact” (as in impact factor) is used as a measure of “quality.” The concepts are distinct. By impact, we should mean, variously: having a work disseminated to a target audience; having it recognized by a target audience (scholarly, media, social media, public, a specific demographic group, etc.); making a substantive contribution to a dialectic in academics or society; having it serve as a stepping stone for continued investigation or understanding; having an impact on business practice; having an impact on societal practice; or even, sometimes, effective self-expression. To understand what should be in this list, we would have to repeat the exercise we did several years ago and ask departments to take a broader view of the potential impact of their work. Perhaps this even needs to be done at the level of individual scholars or even individual pieces of work.

28. What is Research Impact for a Private institution like Bentley 

We are Bentley – private – but also part of the landscape of higher ed / research. Looking at higher ed in the US, the first wave of institutions was funded by philanthropists; the next came from “land grants” as public investment, and now there are sea-grant and space-grant programs as well. Can we learn from land-grant institutions about impact beyond academia? See https://landgrantimpacts.org/

29. Should we develop new Metrics? 

From Fred

Metrics can never capture the nuance of what an individual investigator may identify as the value or impact of their work. Nevertheless, metrics can measure things that are meaningful and can fulfill two very different missions:

(1) Assessment. This is not simply an outside requirement (though it is often presented as such). Rather, it behooves the institution, and each of us as individuals, to be able to demonstrate the outstanding work our colleagues are doing across the school. If the medium is an assessment metric, we should embrace this, alongside qualitative or even subjective estimates of the value or impact of our work that we believe to be more valid.

(2) Incentive. Metrics are powerful instruments for incentivizing ourselves to continuously improve our output and achieve our personal and institutional goals. This is important for many of our faculty, and for the institution as a whole. Metrics should be seen as aspirational, not remedial or punitive. We should embrace metrics that measure what we want to achieve, not only the intrinsic structures of our disciplines.

What Bentley needs to do is develop a suite of metrics that represents the various outputs of our colleagues and the multidimensional impacts that we would like to achieve as individuals and as an institution. It is not my place to propose what these metrics should be outside of my own discipline and personal interests; that is exactly the point. Rather than debating whether to measure impact, we should ask each individual what they would like to achieve with their scholarship, and how they will know whether they are achieving it.

Discussion Points

  1. What do we want to achieve from a Bentley perspective? Culture / Practice / Metrics / Infrastructure – How do we connect to the Bentley mission?
  2. Should we move to “Metrics” or talk about other aspects first? The temptation is to jump straight to measures, but would that be appropriate?
  3. What can / should the Research Council do for bringing this opportunity / these opportunities to the Administration / Faculty?
  4. Next steps for this sub-committee – Deeper investigation of .. ?

 

Why Now?

IPM Department Statement from 2014. Link