Founders and the Future Dispatch: Responsible AI in an Age of Acceleration

Lindy Mockovak · Aug. 27, 2025 · 4 min read

Tech leaders debated how to bake responsibility into AI, not just as a compliance afterthought but as a fundamental competitive edge.

Artificial intelligence is breathing new life into San Francisco, drawing comparisons to the dot-com days of Silicon Valley. Investment is pouring into startups, founders are scrambling like it’s 1999, and beneath it all, AI is transforming how people live and work. Yet the speed of change has raised urgent questions: How do we embed values into AI? Who ensures responsibility is not sacrificed for commercial gain?

This is the tension that took center stage at Founders and the Future, a recent event hosted by Founder Social Club and Leave Normal Behind. Set against the backdrop of the Bay Bridge and Coit Tower, founders, investors, and academics debated how to balance innovation with responsibility.

Responsible Innovation as a Practice, Not a Goal

The panel agreed that “responsible innovation” cannot be treated as a fixed outcome. Instead, it must be a continuous methodology that startups apply from the earliest stages of development. Rather than relying on abstract guidelines, responsibility should be built into workflows, mindsets, and decision-making. This often means involving compliance and risk experts early in product design, which may slow development initially but creates stronger foundations of trust in the long term.

Morgan McKenney, former CEO of Provenance Blockchain Foundation, echoed this point, sharing that a mindset of “creative abrasion,” early collaboration, and diversity of thinking are critical.

“You must bring in your compliance and your design partners early in the process…design with people who see the risks, who see the consumer, and the blue sky thinker,” she said.

Without trust, panelists argued, the pace of innovation will not be sustainable.

Data Ownership as a Human Right

Data rights emerged as one of the most pressing concerns. Much of today’s AI depends on vast datasets, yet individuals rarely control how their data is collected or used. Paul Allen, CEO of Soar.com and co-founder of Ancestry.com, reminded the room: “You are a human being. Your data belongs to you, the human being. It shouldn’t be controlled by the government. It shouldn’t be controlled by corporations.”

Several states, including Colorado, are considering legislation to increase consumers’ ownership of their personal data, a movement that reframes data ownership as a human rights issue. The debate underscored the stakes: If AI systems require intimate personal data to function, individuals must have agency over their “digital DNA” rather than ceding control to corporations or governments. Yet the present-day reality that companies and governments own and regulate consumer data drew considerable discussion and concern from the panelists. Until ownership shifts, AI will continue to be built on foundations outside users’ control.

Building Trusted AI

Trust surfaced again in the discussion of AI adoption. Surveys, including a 2025 KPMG report, show that while a majority of people use AI, fewer than half say they trust it. Bridging this gap requires building AI that is not only powerful but also transparent, fair, and accountable. This means including communities most affected by technology in the design process, adopting clear principles for how models are trained, and educating users on how AI interacts with their data. Responsible AI, the panel concluded, must be human-centered: designed to empower people rather than erode their autonomy.

Incentives and the Race Ahead

The conversation repeatedly turned to the systemic forces shaping AI. Competitive pressure, particularly with China’s rapid advances, makes slowing down unrealistic. With trillions of dollars flowing into AI development, panelists acknowledged that responsibility will often be the first thing sacrificed in the push to scale. The challenge, then, is to reframe responsibility as a competitive advantage rather than compliance overhead. 

Gaurab Bansal, Executive Director of Responsible Innovation Labs, argued that just as manufacturing once shifted to compete on quality, the AI industry must learn to compete on responsibility. That cultural and behavioral change, while difficult, will determine whether AI evolves in ways that truly benefit society.

Designing the Future Now

The paradox of this moment is that we’re witnessing extraordinary opportunity paired with extraordinary risk. AI promises breakthroughs in productivity, medicine, and human potential. However, without thoughtful governance, incentives, and design, it also threatens to deepen inequality, entrench bias, and strip individuals of agency.

The question is no longer whether AI will shape the future, but how. As John Waley, co-founder at Inception Studio and professor at Stanford University, said: “There’s no stopping this runaway train. Given this is happening, how do we do this in a responsible way?”

Responsibility cannot be bolted on after the fact. It must be embedded from the start: in how companies structure their teams, in how governments legislate, and in how individuals insist on owning their data and their choices.

Moderator Rob Fajardo, founder of Leave Normal Behind, reminded the room: “We’re designing the future in the present moment.”

    Lindy Mockovak is a social impact and sustainability strategist and communicator with over 20 years of experience. She lives in San Francisco, CA.

Tags
AI governance, AI regulation, AI transparency, AI trust, data ownership, ethical AI, Founder Social Club, Gaurab Bansal, human-centered AI, Inception Studio, John Waley, Leave Normal Behind, Morgan McKenney, Paul Allen, Provenance Blockchain Foundation, responsible AI, Responsible Innovation Labs, Soar.com