Responsible Innovator Lecture Series

The lecture series is designed to initiate a new conversation around the responsibility of innovators as they imagine, design, build, and deploy new technologies. Distinguished innovators and academics from around the world share lessons from their own projects, helping engineers and scientists understand the complexities surrounding new technologies.

Katina Michael

Katina Michael BIT, MTransCrimPrev, PhD is a professor at Arizona State University, a Senior Global Futures Scientist in the Global Futures Laboratory and has a joint appointment in the School for the Future of Innovation in Society and School of Computing and Augmented Intelligence. She is the director of the Society Policy Engineering Collective (SPEC) and the Founding Editor-in-Chief of the IEEE Transactions on Technology and Society.

Designing an Implantable Therapeutic Device for US Soldiers

The ADvanced Acclimation and Protection Tool for Environmental Readiness (ADAPTER) aims to develop a travel adapter for the human body: an implantable or ingestible bioelectronic carrier. The therapeutic device aims to improve sleep cycles and diminish traveler’s diarrhea in US soldiers. As an ELSI panelist in this Defense Advanced Research Projects Agency (DARPA) program, I will discuss the importance of anticipating ethical, legal and social implications in the design and development of a complex socio-technical system. The ELSI panelist acts as an interventionist, raising guiding questions, rather than providing concrete answers, that stakeholders at large should consider or address at critical stages of the development cycle. While an ELSI panelist cannot pre-judge the ethics of cutting-edge technologies for which there are limited histories or experiences to rely on, they can certainly anticipate challenges through thought experiments and scenario planning.

JULY 28, 2022

Joel Fischer

Joel Fischer is a professor at the School of Computer Science, University of Nottingham, UK. His research takes a human-centred view on AI-infused technologies to understand and support human activities and reasoning. His research approach is multidisciplinary, drawing on ethnography, participatory design, prototyping, and studies of technology deployments, often with an ethnomethodological and conversation analytic lens. He is currently a Co-Investigator and Research Director on the UKRI Trustworthy Autonomous Systems TAS Hub.

Trustworthy Autonomous Systems:

Why do we need them?

In this talk I will provide an overview of the research within the UKRI Trustworthy Autonomous Systems (TAS) Hub, providing some motivating real-world examples and perspectives to frame a hopefully productive sense of the term TAS. We take a broad view on Autonomous Systems (AS); we view AS as systems involving software applications, machines, and people, that are able to take actions with little or no human supervision (see https://www.tas.ac.uk/our-definitions/).
Some Autonomous Systems are already pervasive in society (e.g., algorithmic decision-making) while others are nascent (e.g., autonomous vehicles); and while there are many potential benefits, we unfortunately too often witness wide-ranging negative consequences when AS ‘go wrong’, from downgrading A-Level results, to spreading hate speech, to wrongful convictions, to fatal accidents. We need expertise spanning a wide range of disciplines to tackle the challenges societies face, including computer science and engineering, the social sciences and humanities, and law and regulation. I will present some of the research within the TAS programme that is starting to address these challenges.

AUGUST 11, 2022

Ibo van de Poel

Ibo van de Poel is Anthoni van Leeuwenhoek Professor in Ethics and Technology at Delft University of Technology, The Netherlands. His research focuses on value change, the ethics of disruptive technologies, the ethics of technological risks, design for values, responsible innovation, and moral responsibility. He currently holds an ERC Advanced Grant on Design for changing values: a theory of value change in sociotechnical systems.

Ethics and technology 2.0

Responsible Innovation and Design for Values

The ethics of technology has evolved from criticizing technologies for their undesirable social effects after the fact towards an approach that proactively addresses relevant moral issues during the research and design phases of new technologies. Responsible Innovation and Design for Values are two recent approaches that fit this general development. After a brief introduction to responsible innovation, I will explain the main ideas behind the Design for Values approach, which aims to integrate values of moral importance into the design of new technologies from the start. I will explain how this approach might help translate general moral values into design requirements for new technologies and how it might help deal with conflicting values. I will touch on several examples and illustrations during the talk.

AUGUST 25, 2022

John Zerilli

John Zerilli is a philosopher with particular interests in cognitive science, artificial intelligence, and the law. He is currently a Leverhulme Fellow at the University of Oxford, a Research Associate in the Oxford Institute for Ethics in AI, and an Associate Fellow in the Centre for the Future of Intelligence at the University of Cambridge. His two most recent books are The Adaptable Mind (Oxford University Press, 2020) and A Citizen’s Guide to Artificial Intelligence (MIT Press, 2021).

C.P. Snow’s Two Cultures revisited

The AI revolution provides a neat illustration of C.P. Snow’s idea of “two cultures” and a timely opportunity to reflect on why a large gap between the cultures of the hard and soft sciences persists. I take aim at the attitude among some computer scientists (and many undergraduates, indeed possibly the wider public) that ethics is all fluff. And I’d like to do that by first suggesting why the attitude exists and then diagnosing why it’s wrong. It exists because there’s a definite difference between science and non-science that has to do with the theoretical posits of hard vs. soft sciences. The downbeat attitude to ethics and the soft sciences is mistaken because while there’s a clear difference between the sciences and the humanities, the wrong conclusion is too easy to draw. Ethics isn’t easy, it’s actually hard, and it’s hard in two ways: it’s hard in a technical sense (as the debates over fair algorithms and various incompatibility theorems illustrate); and it’s hard in a nontechnical sense, requiring skills which aren’t evenly distributed in the population. Indeed, the skills required may be about as (un)evenly distributed as mathematical and other STEM skills.

SEPTEMBER 8, 2022

Anil Gupta

Professor Anil K. Gupta is visiting faculty at the Indian Institute of Management, Ahmedabad, from which he retired in 2017, and at IIT Bombay, NIPER-A, and AcSIR. He is also the founder of the Honey Bee Network, the National Innovation Foundation, Nifindia.org, SRISTI.org, and GIAN.org; a Fellow of the National Academy of Agricultural Sciences (NAAS), the World Academy of Art and Science, California, and INSA; an Honorary Fellow of ISAE; and a CSIR Bhatnagar Fellow, 2018-21. His book, Grassroots Innovation: Minds on the Margin Are Not Marginal Minds, received the Best Business Book Award at the Tata Literature Live! festival in 2016.

Transcending Limits of Inclusive Imagination

Responsible and Reciprocal Innovation Ecosystem

The limits of technologies, institutions and policies can often circumscribe our imagination and initiatives towards the inclusive design of social engagement. The developmental potential of youth, local/indigenous communities, women, and other disadvantaged sections of society remains underexplored. We merely adjust and adapt to these limits rather than transcending them. That explains increasing inequality, the lack of fairness in access, assurance, and ability of the delivery and demand system, and subdued policies for harnessing the power of grassroots innovations.
In my talk, prepared in association with my colleagues in the 35-year-old Honey Bee Network (HBN), I argue for several changes in the developmental paradigm. Norms of reciprocity, responsibility and respect towards knowledge-rich and often economically poor people require revisiting. Students and scholars tapping people’s knowledge in various domains still do not share their findings with the communities in an easily understandable manner. Their guides sign their theses declaring that “all acknowledgements due have been made” while keeping people anonymous and unaware of the use to which their data was put. The sharing of benefits, consultancy income, and other material gains from such data or its commercial application still remains very rare.
How do we then design a paradigm in which our academic responsibility and reciprocity towards the communities we work with become more respectful of ethical and moral norms of fairness and justice? How do we change the criteria of research and developmental priorities so that the unmet needs of disadvantaged sections of society do not remain persistently neglected? How do we overcome our fascination with very modest gains in social change, and expand opportunities for those creative people who are trying to bring about innovative transformation, like David Unaipon? How many such Davids got risk capital, or grants, for taking their ideas forward and helping bridge the social divide?

SEPTEMBER 22, 2022

Chandran Nair

Chandran Nair is the Founder and CEO of the Global Institute For Tomorrow (GIFT), an independent pan-Asian think tank based in Hong Kong and Kuala Lumpur focused on advancing a deeper understanding of global issues including the shift of economic and political influence from the West to Asia, the dynamic relationship between business and society, and the reshaping of the rules of global capitalism.  He is a Member of the Executive Committee of The Club of Rome, a member of WEF’s Global Agenda Council on Governance for Sustainability and Experts Forum, as well as a Fellow of the Royal Society of Arts.  

The Future is Biological, Not Digital

Now Innovate!

Digital technology is today widely understood to be humanity’s most transformative creation, and advances in innovation are viewed as inevitably changing the way the world operates for the better. But this is a myopic view of humanity, the planet and the future. The next era of innovation requires a rejection of this imminent surrender to a dystopian digital future.
If the global pandemic has taught us any lessons, surely the most important is the need to recognise that in the next chapter of human awareness and development “the future is biological and not digital”. Human beings will need to resist the march of the algorithm and its capture of human societies. This will require bold innovations.
Chandran Nair will outline his views on why the future is biological and not digital, bringing together the existential threat of climate change with the future of work, the role of purposeful technology, and food security issues. He will outline why we need to move from our obsession with the Industrial Revolution 4.0 (IR4.0) to Insured Resilience 1.0 (IR1.0), based on the scientific reality that the future hinges on managing the biological realm of our existence.

OCTOBER 6, 2022

Andy Stirling

Andy Stirling is Professor of Science and Technology Policy at the Science Policy Research Unit at the University of Sussex, where he co-directed the ‘STEPS Centre’ for sixteen years. Working on issues of power, uncertainty and diversity in science and technology (especially around energy and biotech), he has served on a number of UK, EU and wider governmental advisory committees, including (presently) as a lead author for the Intergovernmental Science-Policy Platform on Biodiversity and Ecosystem Services (IPBES).

Questioning Responsibility

uncertainty, participation, and sustainability in the governance of emerging innovation

Steep gradients of power and privilege drive turbulent ebbs and flows in the politics of research and innovation. Contrasting idioms around ‘sustainable development’, ‘ethical research’, ‘smart solutions’, ‘clean technology’, ‘inclusive engagement’, ‘precautionary regulation’ and ‘responsible innovation’ vie for attention and traction. Variously associated with competing disciplines, cultures and institutional interests across different policy ‘markets’, each follows others in disparate ways, enjoying brief episodes of ‘mainstream’ status in the language and practice of particular settings.

Despite many significant differences, all these ostensibly divergent idioms share a crucial trait in common. Each provides an arena within which incumbent and subaltern interests contend to shape onward developments. By modulating processes of churn, privilege can be leveraged to ratchet contingent gains or neutralise losses. As a result, a series of inconvenient contradictions are concealed in ways that help justify the most powerfully favoured orientations for change. These cliental pressures are intensified by policy patronage and academic incentives for ‘impact’ and ‘relevance’.

Resulting rhetorics (or body-language) of supposedly singular definitively ‘scientific’, ‘legitimate’ and/or ‘responsible’ orientations for policy are refuted by persistently intractable ambiguities, uncertainties and ignorance. The reality in research and innovation remains one of irreducibly open-ended scope for a plurality of equally conditionally-valid political choices. Yet for brief periods, until their currency becomes tarnished, the concealment of these actualities by this bewildering succession of vocabularies and methods supports mainstream incumbent directions for research and innovation in specific fields, as if they were uniquely technically-resolved, expert-accredited or “evidence based”, “pro innovation” “ways forward” for a “sound scientific” “public good”. If discourses and practices around ‘responsible research and innovation’ are to live up both to the name and to the widely aspired-to progressive function, then they must openly and directly challenge (rather than evade) these dynamics of power and privilege. When on a steep gradient, balance is maintained by bias. Rather than seeking to supersede or mediate other idioms, ways must be found to embed the legal and institutional traction gained by rare previous episodes of precious critical influence.

For instance, multiple national and international legal instruments around the ‘precautionary principle’ and ‘participatory deliberation’ offer essential load-bearing resources in this continuing struggle. Yet if ‘responsibility’ is represented as different from (rather than subsuming of) these idioms, then it will simply provide a pretext to side-line these gains and so erode progress made in earlier windows of opportunity. Across diverse fields like artificial intelligence, zero-carbon energy, sustainable agriculture, resource management and public health, the implications could hardly be more practical or profound.

OCTOBER 20, 2022