Fairplay’s Kareem Saleh on private sector data maturity

Adam Willems · Fintech · Sep. 4, 2025 · 12 min read

Hiring, acquisitions, retirement planning, and more depend on a financial system underpinned by dependable government data; with those systems under fire, can the private sector backstop with certainty? 

The fidelity of U.S. economic data and forecasting is in question after the President fired Bureau of Labor Statistics Commissioner Erika McEntarfer for publishing allegedly “rigged” jobs figures — there is no evidence to back this allegation — and moved to fire Fed Governor Lisa Cook for similarly spurious reasons, a dismissal Cook is now disputing in the DC District Court.

Nerves are understandable. Previous attempts by presidents in other countries to sway economic data in line with their political agendas have resulted in runaway inflation, business uncertainty, and, often, capital flight. Things may not have reached that level in the United States — much will depend on how litigation against the White House unfolds — but financial institutions that rely on the trustworthiness and consistency of public data are already looking for alternative ways to make sense of borrower health and the overall state of affairs.

In an interview with Fintech Nexus, Kareem Saleh, Founder & CEO of Fairplay, an AI-enabled solution for lending-model and agentic verification, indicated that businesses have private data sources they can use to triangulate public data, though they may come at a price.

The following has been edited for length and clarity.

Fairplay sits at the nexus of lending, fraud detection, and compliance. How much does publicly issued data affect Fairplay’s work, and that of its clients, and has the politicization of this data affected your work or your clients’ work?

For decades, financial institutions have relied on government-published statistics as the “gold standard” inputs for risk and forecasting models. But as confidence in official numbers erodes, that trust becomes a vulnerability. Models that rely on government data may be carrying hidden risks that few institutions fully appreciate.

For example, Fannie Mae and Freddie Mac use the Ten-Year Home Price Appreciation data published by HUD in their underwriting models. If that number starts to diverge from reality, then nearly $2 trillion in annual mortgage originations will be based on assumptions that don’t reflect the truth. Many lenders rely on data points like the Consumer Price Index, the unemployment rate, housing starts, or average household income to calibrate credit policy, forecast delinquency rates, and determine where to allocate capital. Commercial and small-business lenders use sector-specific government data — such as retail sales figures, manufacturing output, or agricultural yield statistics — to make judgments about borrower health, collateral values, and repayment capacity.

In short, most financial institutions make a practice of trusting government data in one form or another. To be sure, the government has historically revised the statistics it publishes, and lenders are accustomed to small adjustments over time. But what’s new — and far more destabilizing — is the possibility that the underlying numbers may be consistently biased, delayed, or politicized in ways that could fundamentally distort the risk models that keep our financial system running.

What technological workarounds are available regarding the fidelity of public data, in case its reliability changes?

Financial institutions will have to apply enhanced risk management practices to the government data they rely on. There are ways to test whether official numbers are distorted. The most effective approach is reconciliation: comparing government figures with independent or proprietary data sources.

  • For example, if the Bureau of Labor Statistics reports unemployment at 4%, does your customer base show the same trend — or are you seeing something different in loan performance or account activity?
  • If the CPI is reported at 1%, do your deposit inflows, spending data, or merchant volumes tell a consistent story?
  • Use alternative datasets — such as payroll data from ADP or Paychex — to triangulate labor market conditions. These private releases often come out ahead of the government reports, and many institutions already use them to “nowcast” the real state of the economy.
  • Where possible, develop internal benchmarks from your customer data to validate or challenge official numbers.

Now, if the fidelity of government-published data changes, it could reshape the competitive landscape. Large banks with deep proprietary datasets might adapt more easily, while smaller players could lose access to a common reference point they can’t readily replace.

The takeaway is that government data may still serve as a baseline, but financial institutions need to adopt a “trust but verify” posture — treating official statistics as one input among many, not the single source of truth.
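
To make the reconciliation approach concrete, here is a minimal sketch in Python of the kind of check described above: compare an official series against an internally derived proxy and flag periods where the two diverge. The data, field names, and tolerance are hypothetical illustrations, not Fairplay’s methodology or any institution’s actual benchmark.

```python
# Minimal "trust but verify" reconciliation sketch (hypothetical data).
from dataclasses import dataclass

@dataclass
class Reading:
    period: str       # e.g. "2025-08"
    official: float   # government-published figure, in percent
    internal: float   # internally derived proxy, in percent

def flag_divergences(readings: list[Reading], tolerance_pp: float = 0.5) -> list[str]:
    """Return periods where official and internal figures diverge
    by more than `tolerance_pp` percentage points."""
    flagged = []
    for r in readings:
        gap = abs(r.official - r.internal)
        if gap > tolerance_pp:
            flagged.append(f"{r.period}: official={r.official:.1f}%, "
                           f"internal={r.internal:.1f}%, gap={gap:.1f}pp")
    return flagged

if __name__ == "__main__":
    # Hypothetical series: BLS-style unemployment vs. a proxy inferred
    # from delinquency and account activity in a lender's own book.
    series = [
        Reading("2025-06", official=4.1, internal=4.2),
        Reading("2025-07", official=4.0, internal=4.3),
        Reading("2025-08", official=4.0, internal=4.9),  # gap worth investigating
    ]
    for alert in flag_divergences(series):
        print("Investigate:", alert)
```

In practice, the internal proxy would come from loan performance, deposit flows, or merchant volumes, and a flagged period would trigger deeper review rather than automatic rejection of the official number.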

What happens when easily verifiable public information is no longer as easily verifiable or as trusted and institutions have to use these workarounds? What does that do to earlier-stage players who don’t have commas in their bank account the way a bulge-bracket bank does?

It puts them at a competitive disadvantage. Because now you need the bandwidth, you need the human capital to do that kind of triangulation, or to do that validation of the government data, and then it requires some amount of financial capital to go acquire those data sources. 

Some financial institutions — like the big ones — may already be buying some of this data, or they may have pre-existing relationships with some of these data providers. So the cost to them may be the human capital of doing the reconciliation, but they’re not necessarily coming out of pocket any more than they already are today, whereas some of these newer players are going to struggle with that.

Some of these workarounds are up in the air — such as for cash-flow analysis — because they depend on open-banking statutes and unimpeded access to bank customers’ data. Things seem less dire than they did a few weeks ago, but does this dynamic aggravate the situation? 

It certainly could. On the one hand, most of the cash-flow data providers are providing really granular consumer-level information, not the kind of macro-level information that the government provides. But of course, if you’re pulling a lot of consumer information, you can paint a picture at the population level of what’s going on with respect to things like consumer income and spending patterns. So it does seem like this potential undermining of the government statistics is happening at a time when some of the relationships between the private-sector consumers and providers of the data are themselves strained.

The workaround that everybody talks about with respect to cash-flow data is going back to screen scraping, and the problem with screen scraping is that it’s tremendously insecure and tremendously vulnerable to scammers and fraudsters. On the one hand, you might think you’re saving money by reverting to screen scraping; on the other hand, you may be exposing yourself and your customers to a whole other set of potentially devastating financial risks.

Have regulatory shifts since the beginning of the year substantively changed the compliance requirements Fairplay solves for? If so, what are those changes?

I would say the compliance burden has shifted, not vanished. Federal supervision and enforcement have cooled somewhat, yet the laws remain on the books, and enforcement energy has moved to the states — especially New York, California, Massachusetts, and Maryland — and to the plaintiffs’ bar. A lot of these laws provide for private rights of action. Consequently, banks have kept compliance practices largely business-as-usual.

At the same time, the administration has not rolled back any of the model validation guidance that was put in place after the financial crisis. With the emergence of alternative data and AI, of which the administration is very supportive, you still want to know that those models are accurate and stable and robust, and that they’re not going to precipitate a new financial crisis, especially given that we’re in a very choppy macro-environment when you think about tariffs and inflation and other geopolitical risks. We haven’t really observed a material change, because at the end of the day, people still want to know that their models work. And if your model is missing some population, that’s not just a fair-lending compliance issue, that’s a model-quality issue.

It sounds like there’s a privatization of enforcement that’s happening by having plaintiffs function as their own overseers of these laws. Assuming that takes some time to kick into action, have you seen businesses already cognizant of or preparing for that?

Well, the pointy end of the spear has really been the states, and you’ve seen this reverse brain drain from the [Consumer Financial Protection] Bureau into the big states. So New York has hired a bunch of former Bureau people. California just appointed Armen Meyer as the head of consumer protection. Massachusetts just brought this case against Earnest. Maryland just basically said that their Fair Housing Act incorporates disparate impact. So the first movers have been the states who have moved very, very quickly. 

Plaintiffs’ attorneys will take more time, because they have to go out and identify plaintiffs. They have to sift through all those plaintiffs to make sure that they’re bringing the claims most likely to succeed. Then they have to draft those complaints and file them, and then financial institutions move to dismiss them, and they have to demonstrate that they’ve got enough evidence to survive a motion to dismiss. So I think we’re probably in the second or third inning of the states moving, and probably still in the first inning of the private plaintiffs’ attorneys.

But four years is a long time, and you’ve got plaintiffs’ attorneys who are pretty expert in bringing these kinds of claims. You know, there was a lot of hullabaloo last year when claims were brought against Navy Federal. Expect to see a lot more of that to come — especially, by the way, because every day brings some new headline about an AI gone wrong.

There’s this thing called the AI Incident [Database], which logs news reports from people who claim that they’re being harmed by AI and agentic systems. The curve is hyperbolic in terms of the number of allegations, and it’s not just financial services, but also industries that are adjacent to financial services. You see it a lot in insurance, for example, with claims-administration cases now being brought against Allstate. We’re still very, very early on in plaintiffs’ attorneys gearing up to fill the gap that’s been left by the retreat in supervision and enforcement at the federal level.

Since you mentioned the agentic side of things, I’m curious about Fairplay’s own agentic solutions, and how you go about deploying those services while also being cognizant of the potential liability that they represent as an unprecedented and oftentimes unproven technology.

We’ve invested a lot of time and effort over the course of the last six or seven months building an agentic assurance product that attempts to answer: How do we know that this agent does what it claims to do? Are its logic and reasoning traceable and explainable? How will we know if it starts to drift outside the scope of its agency? What kind of testing and red-teaming needs to be done?

You need to make sure that you’re not injecting anything into the prompts (the inputs) that would cause the agent to do crazy stuff. And you also need guardrails to make sure that the agent isn’t going to say crazy stuff to consumers, leak PII, etc. So what we’re seeing is that the model validation financial institutions used to do has to be done on steroids, because before they were doing it on deterministic models, and these agentic systems are largely built on non-deterministic models. Plus, in many cases, they are replacing humans in ways that the traditional models didn’t. And so you have not only the risks that arise from the use of the AI system, but also the risks that arise from essentially managing a human, because you’ve got to control for those risks too.
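
As one illustration of the guardrails he describes, the following hypothetical sketch screens an agent’s reply before it reaches a consumer, checking for likely PII leakage and for drift outside an approved scope. The patterns, topic list, and function names are assumptions for illustration; a production system would rely on far more sophisticated classifiers and red-teaming.

```python
# Hypothetical output-guardrail sketch: scan an agent reply before release.
import re

# Illustrative patterns only; real PII detection is far more involved.
PII_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}
APPROVED_TOPICS = ("onboarding", "identity verification", "document upload")

def check_agent_reply(reply: str, topic: str) -> list[str]:
    """Return guardrail violations; an empty list means the reply may ship."""
    violations = []
    for name, pattern in PII_PATTERNS.items():
        if pattern.search(reply):
            violations.append(f"possible {name} leak")
    if topic not in APPROVED_TOPICS:
        violations.append(f"agent drifted outside approved scope: {topic!r}")
    return violations

if __name__ == "__main__":
    reply = "Your SSN 123-45-6789 is confirmed; next, upload your ID."
    print(check_agent_reply(reply, topic="identity verification"))
    # -> ['possible ssn leak']  -- block and escalate rather than send
```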

So there’s the question of who’s ultimately liable if these agents cause some sort of perceived harm.

All of these agents, by the way, are built on top of these foundation models. And the foundation model providers are not always totally transparent about how they were trained, how they were stress-tested, and how they were de-biased, and so there are a series of legal questions that are raised by the use of these AI agents. If they run a bank off a cliff or do some harm to consumers, where does the buck stop? 

I think most people will say that, at least in banking, it’s ultimately the banks’ responsibility, because they’re the ones who are sourcing these technologies from these third parties, and so they have an obligation to do their due diligence on the third parties. But the banks themselves sometimes struggle to get the information they need from the third parties. You know, OpenAI and [Anthropic] and Google are, on the one hand, very sophisticated. On the other hand, if you’re a middle-market bank, you don’t really have a lot of negotiating leverage vis-à-vis companies like that.

Last question: I know the MIT NANDA Program recently came out with a report suggesting enterprises are struggling to see agentic projects deliver ROI or address the use cases they wanted them to address. Especially given the added liability concerns, is there a similar dynamic at play in financial services?

There is, for sure. Some of that relates to limitations of the technology. But if you look at that MIT study, a big part of it also is that the institutions’ data is already very disparate across multiple systems. It’s not generally clean or in a format that can be readily consumed by an agent to allow for fine-tuning and training of the agent, and their infrastructure — with respect to things like access permissions and controls — is not in a place that can permit an agent to operate autonomously.

Where you have seen successes is in narrowly scoped use cases, and generally in what a friend of mine at Bank of America calls “no regrets” applications: back-office applications where they’re not touching a consumer, or things like fraud, which is not heavily regulated and is also generally regarded as important to the safety and soundness of the bank. And so, yes, we haven’t seen the ROI of a lot of the AI investments in areas like underwriting, which are heavily regulated, or servicing, where you are talking to consumers and you’ve got to be really careful that you’re not saying crazy stuff to consumers or treating one class of consumers differently than another. At the same time, I think we are seeing more success in things like onboarding, where it’s like, Okay, is this person who they really say they are? The success rates there have been much, much higher.

So we’re starting with anti-money laundering and KYB/KYC, and we have aspirations to grow to other parts of the customer journey, but right now, to maximize the odds of success, we are starting with these narrower use cases, which are often not customer-facing. The behavior of the agent is well-defined, the ability to evaluate the agent’s effectiveness against some sort of ground truth is well-established, and the consequences of agentic failure are lower.
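
As a sketch of what evaluating an agent against ground truth might look like in a narrow use case such as KYC review, the following hypothetical example compares an agent’s flag decisions with human-adjudicated labels and computes simple precision and recall. The names and data are illustrative assumptions, not Fairplay’s product.

```python
# Hypothetical agent-vs-ground-truth evaluation for a KYC review agent.
def evaluate(decisions: list[bool], labels: list[bool]) -> dict[str, float]:
    """Compare agent decisions (True = flag for review) with ground truth."""
    tp = sum(d and l for d, l in zip(decisions, labels))      # true positives
    fp = sum(d and not l for d, l in zip(decisions, labels))  # false positives
    fn = sum(not d and l for d, l in zip(decisions, labels))  # false negatives
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return {"precision": precision, "recall": recall}

if __name__ == "__main__":
    agent_flags = [True, False, True, True, False]    # agent's decisions
    adjudicated = [True, False, False, True, False]   # human-reviewed labels
    print(evaluate(agent_flags, adjudicated))
    # -> {'precision': 0.666..., 'recall': 1.0}
```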

Adam Willems

Adam is an experienced writer, researcher, and reporter whose work has been featured in publications such as WIRED, The Baffler, and more. Earlier in his career, he was the Head of User Research and Communications at Kite, a Delhi, India-based fintech startup, and worked as a researcher for Pushkin Industries, Malcolm Gladwell’s podcast studio. Adam is a graduate of Yale University and Union Theological Seminary. Adam also works as a local reporter in Seattle covering culture and sports.
Tags: agentic AI, AI in financial services, AI model validation, Bureau of Labor Statistics controversy, economic forecasting, FairPlay, fintech compliance, government data reliability, Kareem Saleh, private sector data