Just posted the following on Medium. Interested in comments from readers here, especially pointers to similar efforts and ideas I didn't mention below.
This is the first in a series of articles describing features, functions, and components of Personal Agents — next-generation AI virtual assistants that will serve as trusted advisors, caretakers, and user proxies. Personal Agents should preferably be developed as an open source project. The primary goals are to specify agents that (1) significantly benefit people (are not just cool or fun) and (2) avoid harmful side effects (like those plaguing social media or those that worry AI safety advocates). A clear and open specification will facilitate agent development and certification. This article provides a brief overview of Personal Agents.
Personal Agents (PAs), introduced here and here, are next-generation virtual assistants[1] that will support people in all aspects of their lives — from health and safety to education, career, finance, commerce, hobbies, friendship, romance, governance, and more. A PA will be securely paired with a person; deeply understand their circumstances, psychology, interests, abilities, and goals; and constantly work to support and improve their wellbeing. PAs will be adept advisors and caretakers and serve as proxies that represent the user in interactions with other agents and systems. They will be the ultimate fiduciaries[2].
PAs will be a key force for democracy and equity. They will protect and empower individuals so they can compete and thrive in a complex world teeming with powerful agents — from wealthy people to corporations to countries of geniuses in data centers. Every person will have a genius partner by their side.
A PA will manifest as a single persona that provides comprehensive support to its user — analogous to a parent, chief of staff, guardian angel, fairy godmother, or genie. When user needs exceed the knowledge and affordances of their PA, the PA will be able to connect with specialized agents (AI and human) to satisfy those needs. The PA will be the focal point for all such interactions.
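As a purely illustrative sketch of that focal-point role (the PersonalAgent and SpecializedAgent names and the placeholder competence set below are hypothetical, not part of any existing PA specification), a PA might answer what it can itself and broker everything else to specialists while remaining the user's single point of contact:

```python
# Illustrative sketch only: hypothetical names, not part of any existing PA codebase.
from dataclasses import dataclass
from typing import Protocol


class SpecializedAgent(Protocol):
    """Any outside agent (AI or human) the PA may consult on the user's behalf."""
    domain: str

    def handle(self, request: str) -> str: ...


@dataclass
class PersonalAgent:
    """A single persona that fields every user need and delegates when necessary."""
    user_id: str
    specialists: dict[str, SpecializedAgent]

    def respond(self, domain: str, request: str) -> str:
        # Needs the PA can cover itself are answered directly.
        if domain in {"general", "scheduling"}:  # placeholder competence set
            return f"[PA answer to: {request}]"
        # Otherwise the PA stays the focal point: it brokers the request to a
        # specialized agent and relays the result back to the user.
        specialist = self.specialists.get(domain)
        if specialist is None:
            return f"I don't have a trusted specialist for '{domain}' yet."
        return specialist.handle(request)
```

In a real system the competence check and the specialist registry would be far richer, but the shape is the same: one persona in front, many specialists behind it.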
PAs will maintain up-to-date, authoritative user data that can be shared as appropriate so user support is efficient, reliable, and highly effective. Data sharing will comply with terms negotiated by agents representing all parties to such transactions.
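To make "sharing that complies with negotiated terms" concrete, here is a minimal sketch in which data leaves the store only when a current agreement covers the requested fields. The SharingTerms and UserDataStore names, fields, and checks are assumptions for illustration, not a proposed standard:

```python
# Illustrative sketch only: a hypothetical schema for negotiated data-sharing terms.
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass(frozen=True)
class SharingTerms:
    """Terms negotiated between the user's PA and a counterparty's agent."""
    counterparty: str               # e.g. "clinic-scheduling-agent"
    allowed_fields: frozenset[str]  # which user fields may be released
    purpose: str                    # what the data may be used for
    expires: datetime               # agreement is void after this time


@dataclass
class UserDataStore:
    """Authoritative user data, released only under a matching agreement."""
    records: dict[str, str]
    agreements: list[SharingTerms] = field(default_factory=list)

    def share(self, counterparty: str, requested_fields: set[str]) -> dict[str, str]:
        now = datetime.now(timezone.utc)
        for terms in self.agreements:
            if (terms.counterparty == counterparty
                    and terms.expires > now
                    and requested_fields <= terms.allowed_fields):
                return {f: self.records[f] for f in requested_fields if f in self.records}
        # No current agreement covers this request, so nothing is released.
        raise PermissionError(
            f"No negotiated terms cover {sorted(requested_fields)} for {counterparty}")
```

Whether the terms live in a shared ledger, a signed document, or the counterparty's own records is an open design question; the point is that release becomes mechanical once the terms exist.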
Personal Agents will be:
- Modeled on good parents and trusted friends,
- Hyper-competent,
- Hyper-personalized,
- Highly secure and privacy protecting,
- Able to distinguish truth from fiction,
- Capable of accurately attributing sources and describing reasoning,
- Enabling of frictionless commerce and governance,
- Adept at balancing user interests with social interests,
- Leveraging of latest technology, and
- Cost effective.
PA-related ideas have been described by Anthony Aguirre et al. in their discussion of AI loyalty, by Mustafa Suleyman in connection with Inflection AI’s Pi chatbot and Microsoft’s personal Copilot, and by people associated with some of the virtual assistants noted in footnote 1. Sam Altman of OpenAI recently suggested:
[ChatGPT could exist as a] very tiny reasoning model with a trillion tokens of context that you put your whole life into. This model can reason across your whole context and do it efficiently. And every conversation you’ve ever had in your life, every book you’ve ever read, every email you’ve ever read, everything you’ve ever looked at is in there, plus connected to all your data from other sources. And your life just keeps appending to the context.
Such capabilities exceed those needed for PAs but will enable future versions that are even more capable and robust. Efforts like Microsoft’s personal Copilot, OpenAI’s collaboration with Jony Ive, Google Gemini Live, Meta AI, and Apple Siri 2.0 may be on track to deliver some PA features within a year or two.
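The design Altman sketches is speculative, but the core idea, a personal context that only ever grows, is easy to picture. Below is a toy illustration (the LifeContext class and its crude word-count token estimate are hypothetical, not OpenAI's approach) that appends every interaction and packs the most recent entries into whatever window is available:

```python
# Illustrative sketch only: a toy append-only "life context" log, not OpenAI's design.
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class LifeContext:
    """Append-only record of a user's conversations, documents, and events."""
    entries: list[tuple[datetime, str, str]] = field(default_factory=list)

    def append(self, source: str, content: str) -> None:
        # New material is only ever added; nothing is rewritten or deleted.
        self.entries.append((datetime.now(timezone.utc), source, content))

    def render(self, token_budget: int) -> str:
        """Pack the most recent entries into a prompt-sized window, a crude
        stand-in for the huge context the quote envisions."""
        rendered: list[str] = []
        used = 0
        for ts, source, content in reversed(self.entries):
            cost = len(content.split())  # rough token estimate
            if used + cost > token_budget:
                break
            rendered.append(f"[{ts:%Y-%m-%d}] ({source}) {content}")
            used += cost
        return "\n".join(reversed(rendered))
```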
A coordinated (and open) effort to develop Personal Agents would focus the talents of people from different organizations and backgrounds. Development would be transparent, prospective users would be able to learn about and contribute to the effort, and no single company would control the project or the resulting market.
Inputs from psychologists, psychiatrists, and moral philosophers will be particularly important given the emphasis on personal wellbeing. Inputs from people in AI development, AI safety, education, law, medicine, finance, and other areas will also be critical.
Coming next: Descriptions of features, use cases, and other details.
[1] Other terms for virtual assistants are AI assistants, AI personal assistants, digital assistants, AI companions, chatbots, and AI agents. Examples include Google Assistant, Google Gemini, Amazon Alexa, Apple Siri, OpenAI ChatGPT, Microsoft Copilot, Anthropic Claude, Perplexity, Character.AI, Replika, Snapchat My AI, Khan Academy Khanmigo, and Cognition Devin.

[2] Per Google AI Overview, “a fiduciary is a person or entity legally and ethically bound to act in the best interests of another, placing that party’s interests above their own.”