Conversational AI technology has made significant advancements over the last eighteen months. As a consequence, conversational agents designed to pursue targeted influence objectives are likely to be deployed in the near future. Sometimes referred to as the "AI Manipulation Problem," the emerging risk is that consumers will unwittingly engage in real-time dialog with predatory AI agents that can skillfully persuade them to buy particular products or believe particular pieces of misinformation, or fool them into revealing sensitive personal data. Many users perceive current systems such as ChatGPT and LaMDA as safe because they are primarily text-based, but the industry is already shifting towards real-time voice and photorealistic digital personas that look, move, and express themselves like real people. This will enable the deployment of agenda-driven Virtual Spokespeople (VSPs) that will be highly persuasive through real-time adaptive influence. This paper explores the manipulative tactics likely to be deployed through conversational AI agents, the unique threats such agents pose to the epistemic agency of human users, and the emerging need for policymakers to protect against the most likely predatory practices.