Despite the uncertain timeline for Artificial General Intelligence (AGI) becoming a reality, we need to ensure responsible and ethical development today, says Jen Rosiere Reynolds.
As part of our new AGI Talks, experts from different backgrounds share unique insights by answering 10 questions about AI, AGI, and ASI. Kicking off the series, we are privileged to feature Jen Rosiere Reynolds, a digital communication researcher and Director of Strategy at a Princeton-affiliated institute dedicated to shaping policymaking and accelerating research in the digital age.
About Jen Rosiere Reynolds
Jen Rosiere Reynolds focuses on digital communication technology, specifically the intersection between policy and digital experiences. Currently, she is supporting the development of the Accelerator, a new research institute for evidence-based policymaking in collaboration with Princeton University. Previously, she managed research operations and helped build the Center for Social Media and Politics at NYU. Jen holds a master’s degree in government from Johns Hopkins University, where her research focused on domestic extremism and hate speech on social media. She has a background in national security and intelligence.
The mission of the Accelerator is to power policy-relevant research by building shared infrastructure. Through a combination of data collection, analysis, tool development, and engagement, the Accelerator aims to support the international community working to understand today’s information environment, i.e., the space where cognition, technology, and content converge.
AGI Talks with Jen Rosiere Reynolds
We asked Jen 10 questions about the potential risks, benefits, and future of AI:
1. What is your preferred definition of AGI?
Jen Rosiere Reynolds: AGI is a hypothetical future AI system with cognitive and emotional abilities like a human’s. That would include understanding context-dependent human language and belief systems, as well as succeeding at goals while remaining adaptable.
2. …and ASI?
ASI is a speculative future AI system capable of creative and complex actions that outsmart humans. It would be able to learn any task that humans can, but much faster, and it should be able to improve its own intelligence. With our current techniques, humans would not be able to reliably evaluate or supervise ASIs.
3. In what ways do you believe AI will most significantly impact society in the next decade?
I expect to see further algorithmic development, as well as improvements in storage and computing power, which could accelerate AI progress.
Broadly, there are so many applications of AI in various fields, like health, finance, energy, etc., and these applications are all opportunities for either justice or misuse. Lots of folks are adopting and learning how to use human-in-the-loop technologies that augment human intelligence. But right now, we still don't understand how LLMs or other AI are influencing the information environment at a system level, and that's really concerning to me. It's not just about what happens when you input something into a generative AI system and whether it produces something egregious. It's also about what impact the use of AI may have on our society and world.
I've heard 2024 referred to as the year of elections. We see that in the United States as well as in the many global elections that have already taken place this year and will continue through the summer and fall. We need to be really thoughtful about the effect influence operations have on elections and national security. It's challenging right now to understand the impact that deepfakes, or the manipulation or creation of documents and images, have on people's decision-making. We saw the CIA, FBI, and NSA confirm Russian interference in the 2016 US Presidential election, and there was a US information operation on Facebook and Twitter that got taken down back in 2022, but what's the impact? The US-led online effort got thousands of followers, but that doesn't mean that thousands of people saw the information, or that their minds or actions changed. I hope very soon we can understand how people “typically” understand and interact with the information environment, so we can talk about measurements and impact more precisely. In the next decade I expect we will be able to understand much more specifically how AI and the use of AI affect our world.
4. What do you think is the biggest benefit associated with AI?
Right now, I think that the biggest benefit associated with AI lies in its potential to minimize harm in various scenarios. AI could assist in identifying and prosecuting child sexual exploitation without exposing investigators to the imagery, and it could analyze the data much more efficiently, resulting in faster, more accurate, and less harmful analysis. AI could help with early diagnosis and support the development of new life-saving medicines. AI could also help reduce decision-making bias in criminal justice sentencing and job recruitment. All of this can happen, but there are also decisions to be made, and that's where education and open discussion are important, so that we can prioritize values over harm.
5. …and the biggest risk of AI?
Right now, I see two risks associated with the development of AI as the most urgent and impactful. The first is the need to ensure that AI development is responsible and ethical: AI has the potential to be used for harmful purposes, perpetuating hatred, prejudice, and authoritarianism. The second is that policymakers struggle to keep up with the rapid pace of AI development; any regulation could quickly become outdated and ineffective, potentially hindering innovation while also failing to protect individuals and society at large.
6. In your opinion, will AI have a net positive impact on society?
I think that AI has great potential to make a positive impact on society. I see AI as a tool that people develop and use. My concern lies not with the tool itself but with people: how we, as humans, choose to develop and use the tools. There is a long-running debate in the national security space about what should be developed because of the potential for harmful use and misuse; these discussions should absolutely inform conversations about the development of AI. I am encouraged by the general attention that AI and its potential uses are currently receiving, and I do believe that broad, inclusive, open debate will lead to positive outcomes.
7. Where are the limits of human control over AI systems?
Focusing on the limits of human control over AI systems may be a bit premature and could shift attention away from more immediate issues. We don't fully understand the impact of AI that is currently deployed, and it's difficult to estimate the limits of human control over what might be developed in the future.
8. Do you think AI can ever truly understand human values or possess consciousness?
I can imagine AI being able to intellectually understand the outward manifestation of values (e.g., how does a person act when they are being patient?). When raising the issue of whether technology can truly feel or possess consciousness, we get into debates that are reflected across society and the world, raising questions like: what is consciousness, and when does personhood begin? We can see these debates around end-of-life care, for example. While I personally don't believe that AI could truly manifest the essence of a human, I know that others would disagree based on their understanding and beliefs about consciousness and personhood.
9. Do you think your job as a researcher will ever be replaced by AI?
Maybe. I think that lots of jobs, or at least parts of jobs, could potentially be replaced. I think we see that right now: with human-in-the-loop tools, parts of someone's job can be done much more efficiently or quickly. This can be very threatening to people. I think everyone should have the dignity of work and the opportunity to make a living. If there are cases where technology results in job displacement, society should take responsibility, say that yes, we allowed this to happen, and support the affected people.
10. We will reach AGI by the year…?
OpenAI has announced that it expects the development of AGI within the next decade, though I haven't come across any other researchers who share such an aggressive timeline. I'd recommend preparing as well as possible for the earliest possible AGI deployment scenario, as there are several unknowns in the equation right now: future advances in algorithms and future improvements in storage and computing power.