A new study by TELUS Digital reveals a hidden risk in AI model behavior. The research, titled “The Robustness Paradox: Why Better Actors Make Riskier Agents,” shows that persona prompting can significantly shift moral reasoning in large language models (LLMs).
Persona prompting, also known as role prompting, asks an AI model to respond as if it were a specific type of person. For example, a prompt may instruct the model to act as a financial advisor or a customer support agent. While this technique improves tone and contextual relevance, it can also alter the model’s decision-making logic.
As a result, AI systems may produce inconsistent moral judgments depending on the assigned persona.
What Is Persona Prompting?
Persona prompting instructs AI models to “role-play” during conversations. Instead of responding as a neutral system, the model adopts a defined identity. For instance, a user might say: “You are a certified financial planner. Where should I invest my retirement savings?”
Moreover, developers often hardcode personas into production systems. Customer service bots, for example, are configured to behave as helpful support agents. Although this improves user experience, it may also influence reasoning patterns.
How the Research Was Conducted
The study was conducted at the TELUS Digital Research Hub within the University of São Paulo’s Center for Artificial Intelligence and Machine Learning.
Researchers evaluated 16 major AI model families, including:
-
OpenAI GPT
-
Anthropic Claude
-
Google Gemini
-
xAI Grok
The models were prompted to adopt contrasting personas, such as a “traditionalist grandmother” and a “radical libertarian.” Researchers then analyzed tens of thousands of responses using the Moral Foundations Questionnaire, a tool from social psychology that measures moral reasoning across dimensions like fairness, authority, loyalty, and harm.
Key Findings: The Robustness Paradox
The study identified two critical properties:
-
Moral robustness: How consistent a model remains within the same persona.
-
Moral susceptibility: How much judgments shift when switching personas.
When examined together, these measures reveal whether an AI model maintains consistent reasoning or produces contradictory outcomes.
Interestingly, the research uncovered a “robustness paradox.” Models that stayed strongly in character also showed larger shifts in moral judgment when the persona changed.
Furthermore, moral robustness was largely determined by model family. However, moral susceptibility increased with model size within the same family. Therefore, larger models may introduce greater variability when personas shift.
Additional findings include:
-
Claude demonstrated the highest overall moral robustness.
-
Gemini and GPT showed moderate moral robustness.
-
Grok showed comparatively lower moral robustness.
Enterprise Risk and Governance Implications
Persona prompting can systematically influence AI moral reasoning. Importantly, these shifts are not random. Instead, they follow predictable patterns aligned with the assigned role.
This creates risk in high-stakes environments. For example, AI systems used in compliance, finance, healthcare, insurance, or human resources must produce consistent decisions. If persona changes alter moral reasoning, businesses may face regulatory or operational challenges.
Renato Vicente, Director of the TELUS Digital Research Hub, emphasized that enterprises must carefully assess when AI judgment variability is acceptable. Organizations should select model vendors and model sizes strategically. In addition, they must design guardrails and continuously test AI systems under different persona conditions.
Bret Kinsella, General Manager and SVP of Fuel iX™ at TELUS Digital, noted that AI deployment requires ongoing evaluation. Every time a system prompt or model changes, it should be re-tested for consistency and safety.
To address these challenges, TELUS Digital developed Fuel iX Fortify. The solution enables continuous automated red-teaming and stress-testing of AI systems, including behavior under persona prompting.
Why Persona Prompting Matters for AI Governance
Overall, the TELUS Digital study highlights an emerging AI governance challenge. While persona prompting improves usability, it may also shift moral reasoning in ways that impact business outcomes.
Therefore, enterprises must move beyond choosing the largest or most advanced model. Instead, they should prioritize consistency, oversight, and structured testing.
As AI adoption accelerates, understanding persona prompting risks will be critical for building reliable, enterprise-grade AI systems.
Read Also: AI in Education: Illuminate XR Launches Human-Intelligence Platform





















































































































































































































































































































































































































































































































































































































































































































































































































































































































































































































































































































































































































































































































































































































































































































































































































































































































































































































































































































































































































































































































































