Responsibility and safety
Exploring the promise and risks of a future with more capable AI
Imagine a future in which we regularly interact with an array of advanced artificial intelligence (AI) assistants, and in which millions of assistants communicate with each other on our behalf. These experiences and interactions may soon become part of our everyday reality.
General-purpose foundation models are paving the way for increasingly capable AI assistants. Able to plan and execute a wide range of actions in line with a person's goals, they could add enormous value to people's lives and to society, serving as creative partners, research analysts, educational tutors, life planners, and more.
They could also usher in a new phase of human interaction with artificial intelligence. This is why it's so important to think proactively about what this world could look like, and to help steer responsible decision-making and beneficial outcomes ahead of time.
Our new paper is the first systematic treatment of the ethical and societal questions that advanced AI assistants raise for users, developers, and the societies they're integrated into, and it provides significant new insights into the potential impact of this technology.
We cover topics such as value alignment, safety and misuse, the impact on the economy, the environment, the information sphere, access and opportunity, and more.
This is one of our largest ethics foresight projects to date. Bringing together a wide range of experts, we examined and mapped the new technical and moral landscape of a future populated by AI assistants, and characterized the opportunities and risks society might face. Here we outline some of our key takeaways.
A profound impact on users and society
Illustration of the potential for AI assistants to impact research, education, creative tasks, and planning.
Advanced AI assistants could have a profound impact on users and society, and be integrated into most aspects of people's lives. For example, people may ask them to book a vacation, manage their social time, or perform other life tasks. If deployed at scale, AI assistants could affect the way people approach work, education, creative projects, hobbies, and social interaction.
Over time, AI assistants could also influence the goals people pursue and their path of personal development, through the information and advice they give and the actions they take. Ultimately, this raises important questions about how people interact with this technology and how it can best support their goals and aspirations.
The importance of aligning with human values
Illustration showing that AI assistants should be able to understand human preferences and values.
AI assistants will likely have a significant level of autonomy for planning and executing sequences of tasks across a range of domains. Because of this, AI assistants present novel challenges around safety, alignment, and misuse.
Greater autonomy comes with a greater risk of accidents caused by unclear or misinterpreted instructions and a greater risk of assistants taking actions that are inconsistent with the user’s values and interests.
More autonomous AI assistants could also enable high-impact forms of misuse, such as spreading disinformation or engaging in cyberattacks. To address these possible risks, we argue that limits must be set on this technology, and that the values of advanced AI assistants must better align with human values and be compatible with wider societal ideals and standards.
Communicating in natural language
Illustration of an AI assistant and a person communicating in a human-like way.
Because advanced AI assistants can communicate fluently in natural language, their written output and voices may become hard to distinguish from those of humans.
This development opens up a complex set of questions around trust, privacy, anthropomorphism, and appropriate human relationships with AI: How can we make sure users can reliably identify AI assistants and stay in control of their interactions with them? What can be done to ensure users aren't unduly influenced or misled over time?
Safeguards, such as those around privacy, need to be put in place to address these risks. Importantly, people's relationships with AI assistants must preserve the user's autonomy, support their ability to flourish, and not rely on emotional or material dependence.
Cooperation and coordination to meet human preferences
Illustration of how interactions between AI assistants and humans will create various network effects.
If this technology becomes widely available and is deployed at scale, advanced AI assistants will need to interact with each other, and with users and non-users alike. To help avoid collective action problems, these assistants must be able to cooperate effectively.
For example, thousands of assistants might try to book the same service for their users at the same time, potentially crashing the system. In an ideal scenario, these AI assistants would instead coordinate on behalf of human users and the service providers involved to discover common ground that better meets different people's preferences and needs.
Given how useful this technology may become, it's also important that no one is excluded. AI assistants should be broadly accessible and designed with the needs of different users and non-users in mind.
More evaluations and foresight are needed
Illustration showing the importance of evaluations at multiple levels for understanding AI assistants.
AI assistants could display novel capabilities and use tools in new ways that are challenging to foresee, making it hard to anticipate the risks associated with their deployment. To help manage such risks, we need to engage in foresight practices based on comprehensive tests and evaluations.
Our previous research on evaluating the social and ethical risks of generative AI identified some gaps in traditional model evaluation methods, and we encourage much more research in this space.
For example, comprehensive evaluations that address both the effects of human-computer interactions and the wider impacts on society could help researchers understand how AI assistants interact with users, non-users, and society as part of a broader network. In turn, these insights could inform better mitigations and responsible decision-making.
Building the future we want
We may be facing a new era of technological and societal transformation, inspired by the development of advanced AI assistants. The choices we make today, as researchers, developers, policymakers, and members of the public, will guide how this technology develops and is deployed across society.
We hope that our paper will serve as a springboard for further coordination and cooperation to collectively shape the kind of beneficial AI assistants we'd all like to see in the world.