BNY builds “AI for everyone, everywhere” with OpenAI


TL;DR

BNY adopted an 'AI for everyone, everywhere' strategy after ChatGPT's launch, creating an AI Hub and the Eliza platform to train employees and safely deploy more than 125 use cases. By integrating governance directly into its tooling, the firm trained 99% of its workforce, fostering innovation and trust in a systemically important institution.

Key Takeaways

  • BNY implemented a centralized AI platform called Eliza to democratize AI access, training 99% of employees and enabling over 125 live use cases with governance built-in.
  • The firm's governance model includes cross-disciplinary review boards and integrates oversight into Eliza, balancing innovation with accountability to maintain trust in financial operations.
  • Initiatives like 'Make AI a Habit Month' and hackathons empowered employees, leading to a cultural shift where teams collaborate on AI projects, reducing tasks like legal review by 75%.
  • BNY leverages OpenAI models and deep research for advanced agents, turning enterprise knowledge into autonomous workflows while extending existing risk frameworks for AI governance.

Tags

ChatGPT, API

When ChatGPT launched in late 2022, BNY made a decisive move to embrace generative AI across the enterprise. Rather than limiting experimentation to a few technologists, the firm created a centralized AI Hub, launched an internal AI deployment and education platform called Eliza, and trained its employees on responsible AI use. 

“Our mantra is ‘AI for everyone, everywhere, and in everything,’” says Sarthak Pattanaik, Chief Data and AI Officer at BNY. “This technology is too transformative, and we decided to take a platform-based approach for execution.”

That platform now supports over 125 live use cases, with 20,000 employees actively building agents.

From its start, Eliza was designed not just as a tool, but as a system of work, pairing BNY’s governance rigor with leading models—including OpenAI frontier models—to help employees build safely and confidently. 

“We’re not building side projects,” Pattanaik says. “We’re changing how the bank works.”

Maintaining trust in a systemically important institution

BNY plays a systemically important role in the global economy, managing, moving, and safeguarding assets, data, and cash across more than 100 markets. As one of the world’s largest financial institutions, with more than $57.8 trillion in assets under custody and/or administration, trust is non-negotiable. 

“We are much like the circulatory system of the global financial services ecosystem,” says Pattanaik. “And from that perspective, we must ensure trust is built into everything we do.”

With that level of responsibility, deploying AI couldn’t be an afterthought or a side experiment. BNY needed an approach that balanced innovation with accountability.

“A lot of folks could have said, you have such a huge responsibility, maybe we’ll wait and see what happens with AI,” says Pattanaik. “We believe AI is going to be like the operating system of technology going forward.”


Scaling AI safely through governance by design

Key to Eliza’s success is a governance model that supports scale without slowing experimentation. “Some might see AI governance as a barrier, but in our experience, it’s been an enabler,” says Watt Wanapha, Deputy General Counsel and Chief Technology Counsel. “Good governance has allowed us to move much more quickly.”

At BNY, there are several cross-disciplinary groups that meet regularly to review and consider new AI use cases:

  • A data use review board, which brings together cross-functional leaders in intellectual property rights, cybersecurity, engineering, data, privacy, third-party relationships, and others.
  • An AI release board, which aligns similar teams plus additional groups to review initiatives before they are deployed into production. 
  • The Enterprise AI Council, providing senior oversight and policy alignment across the firm.

Insight from the data use review board flows daily to the AI Council, which then evaluates high-impact or novel scenarios. “We had to iterate as we went along,” Wanapha notes. “As our use cases expand, and as the models shift, we have to constantly evaluate AI projects to maintain accuracy.” 

What makes BNY’s approach different is how governance is fully integrated into the tooling. Within Eliza, all prompting, agent development, model selection, and sharing happens inside a governed environment. 

“Eliza embeds governance at the system level,” Wanapha explains. “It standardizes permissions, security, and oversight across all models and tools, ensuring every workflow meets the same level of protection.”
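The quote above describes governance enforced at the system level: permissions, oversight, and telemetry applied uniformly to every model call. As a purely illustrative sketch of that pattern, the names below (`GovernedSession`, the model allowlist, the audit log) are assumptions for illustration, not Eliza's actual internals:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical sketch: GovernedSession and its fields are illustrative
# assumptions, not BNY's actual Eliza implementation.

@dataclass
class GovernedSession:
    user: str
    approved_models: set           # models this user is cleared to call
    audit_log: list = field(default_factory=list)

    def call_model(self, model: str, prompt: str) -> str:
        # Permission check happens before any model is reached.
        if model not in self.approved_models:
            raise PermissionError(f"{self.user} is not approved for {model}")
        # Telemetry is recorded on every call, not bolted on afterward.
        self.audit_log.append({
            "user": self.user,
            "model": model,
            "timestamp": datetime.now(timezone.utc).isoformat(),
        })
        return f"[response from {model}]"   # stand-in for a real API call

session = GovernedSession(user="analyst-01", approved_models={"gpt-4o"})
print(session.call_model("gpt-4o", "Summarize this vendor contract."))
```

The design point is that a workflow cannot bypass the permission check or the audit trail, because both live inside the only code path that reaches a model.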

Empowering every employee through training and community 

At BNY, governance isn't just about oversight; it’s how employees engage with AI every day. Eliza enforces responsible use by design. All employees complete mandatory training before they can use it, and that foundation is reinforced with additional trainings, tools, challenges, and community support. The company now has 99% of its workforce trained on generative AI, with many more advanced enablement opportunities available. 

“We introduced a number of different learning solutions to meet people where they are and to bring them along on the journey,” says Michelle O’Reilly, Global Head of Talent.

One standout initiative: Make AI a Habit Month, a daily series of seven-minute trainings designed to build confidence in prompting, agent building, and peer sharing. “From this month, we saw a 46% increase in the number of agents people were building,” notes O’Reilly.

This enablement model has unlocked a broader cultural shift. “People feel empowered to solve problems themselves,” says Pattanaik. “We’re seeing a culture shift in how teams operate.” 

That culture shows up in events like bank-wide hackathons, where teams from Legal, Sales, and Engineering build side-by-side. “We had a recent hackathon in Sales,” says Ed Fandrey, Head of Sales and Relationship Management. “There were no IT or tech folks present, but everyone felt like a developer.”


Unlocking firmwide impact from early use case learnings 

The first wave of agents built in Eliza, in collaboration with the AI Hub and different BNY departments, showed how quickly teams could turn ideas into impact:

  • Contract Review Assistant: Reduces legal review time by 75%, from four hours to one, across more than 3,000 vendor agreements each year.
  • People Business Partner Agent: Provides fast answers about benefits and policies, cutting manual requests and improving consistency and accuracy.

These early projects sparked a cultural shift. “Before, collaboration meant more meetings,” says O’Reilly. “Today, it means experimenting together, sharing prompts, testing agents, and learning by doing.” That mindset created a flywheel of innovation, with one team’s agent often becoming another’s foundation.

Built for controlled autonomy, Eliza initially allowed only private agent builds. Now, agents created by certain teams and roles can be shared with up to ten colleagues, fueling reuse and scale. The result: more than 125 AI tools in production across every major business line, including:

  • Lead Recommendation Engine: Surfaces client-relevant insights and opportunities for teams to propose and discuss.
  • Metrics Agent: Summarizes learning platform usage and performance with permission-aware access.
  • Risk Insights Agent: Uses deep research to surface emerging risk signals across portfolios, helping analysts act before issues escalate.

Eliza also introduced the concept of advanced AI agents—what BNY calls “digital employees”—with identities, access controls, and dedicated workflows. Digital employees handle everything from payment instruction validation to code security enhancements. 

“Now, instead of handling certain tasks in the first instance, the role of the human operator is to be the trainer or the nurturer of the digital employee,” Pattanaik says.
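The "digital employee" concept above pairs an identity and explicit access controls with a dedicated workflow, so the agent's scope is as auditable as a human operator's. A minimal sketch of that idea, where `DigitalEmployee` and every field name are hypothetical illustrations rather than BNY's actual schema:

```python
from dataclasses import dataclass

# Illustrative only: "DigitalEmployee" and its fields are assumptions,
# not BNY's real digital-employee definition.

@dataclass(frozen=True)
class DigitalEmployee:
    identity: str              # unique ID, audited like a human user's
    allowed_actions: frozenset # explicit access controls
    workflow: tuple            # dedicated, reviewable task sequence

    def can(self, action: str) -> bool:
        return action in self.allowed_actions

validator = DigitalEmployee(
    identity="de-payments-001",
    allowed_actions=frozenset({"validate_payment_instruction"}),
    workflow=("receive", "validate_payment_instruction", "escalate_on_mismatch"),
)
assert validator.can("validate_payment_instruction")
assert not validator.can("release_funds")   # out of scope by design
```

Making the record frozen and the action set explicit means broadening an agent's scope requires a deliberate, reviewable change rather than a silent runtime mutation.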


Turning enterprise knowledge into autonomous workflows with deep research and agents

A select group at BNY is experimenting with ChatGPT Enterprise, equipping teams with capabilities like deep research to explore new ways of working with AI. 

Deep research enables multi-step reasoning across internal and external data, powering use cases like risk modeling, scenario planning, and strategic decision-making. 

“I use it daily,” says Watt Wanapha, Deputy General Counsel. “If I’m tackling a novel legal question, I use deep research as my thought partner to help me evaluate whether there are questions I’m not asking.”

For client-facing teams, deep research is also reshaping how they prepare for conversations and strategic planning. Paired with agents, those insights could be acted on instantly, triggering follow-ups, drafting outreach, or scheduling next steps directly within client systems. 

Together with Eliza’s orchestrator layer, these advancements form the foundation for autonomous digital employees built with permissioning, oversight, and telemetry at the core. And the next frontier is already in view. 

“We continue to mature beyond knowledge extraction and reasoning,” says Pattanaik. “It’s about connecting the dots across the organization to innovate on new products, personalized for our clients.”

Lessons for AI leaders: Build it in, don’t bolt it on

BNY’s governance strategy offers a blueprint for enterprise AI teams navigating secure environments:

  • Leverage existing risk frameworks: Instead of creating generative AI-specific governance from scratch, BNY extended its mature legal and compliance processes to cover new use cases.
  • Create shared responsibility: Cross-functional councils review AI use cases, ensuring domain-specific risks are considered in real-time.
  • Make governance visible and accessible: Eliza’s interface enforces tagging, telemetry, approval flows, and access controls without burdening end users with manual steps.
  • Invest in culture and consistency: Nearly 99% of employees have completed responsible AI training and received Eliza access. “Unless you already know how the AI and how the platform works, you're not going to be able to really think about the risks and also the possibilities,” Wanapha notes.
  • Build with the right partner: “With AI, we are all encountering new questions that have not been answered,” says Wanapha. “So it's very important to have the right partner and an open channel of communication.”

The combination of in-house accountability and external partnership continues to be a key enabler of growth. “It’s a great mix,” says Pattanaik, “of the research OpenAI provides and the purposeful business case BNY provides.”

Power your institution with advanced intelligence

See how OpenAI can help your organization scale AI securely and responsibly.
