I did two things last weekend that had me thinking. First, I downloaded a 30+ page guide to “modern fundraising” detailing how a system can assign scores, automate interactions, and decide which donors are worth engagement. Later that evening I watched Black Mirror’s “Nosedive”—the popular episode in which every interaction is rated out of five stars and scores determine where people live, who will talk to them, and whether they get invited to a coworker’s wedding. At first, the overlap was hard to see. Then it smacked me in the face.
Because let’s be honest: we’re not that far off. We may not hand out ratings in real time through a glowing contact lens, but we’re doing something just as unsettling. We’re assigning value to people—silently, algorithmically, and without consent. In fundraising, we call it scoring. It’s marketed as a smart, savvy use of data—whether we gather it ourselves or buy it from someone else—and praised as a way to steward limited resources. But what it actually does is sort donors before a relationship ever begins. It creates a hierarchy of worthiness and tells us who matters before we’ve even said hello. Our attention follows the algorithm’s lead, with little thought given to who the system is quietly telling us to ignore.
That’s the part we don’t talk about. The posture we adopt when we score isn’t just analytical; it’s moral. We act like we’re being efficient, but what we’re really doing is pre-determining whose generosity we want to invite and whose we don’t. We assume the best people will rise to the top, the way cream does in old-fashioned milk. But generosity doesn’t float on predictive models. It shows up in the margins. It catches us off guard. This isn’t about big data or whether we should all unplug our tech. It’s about what happens when we build systems that reward probability and eliminate surprise.
I first told the story of Olive Cooke in my book. She was a retired woman in the UK who received thousands of appeals each year—many from charities she had never supported. While doing my research, I couldn’t find a single fundraiser who had ever spoken with her. She was, in every way, a donor who didn’t make the list. Scored, segmented, and never meaningfully engaged. After her tragic death, the ripple effects were massive: public outrage, new legislation, and a wave of hand-wringing across the sector. But the damage ran deeper. Her death reminded us what happens when efficiency replaces empathy—and it cast a shadow over the integrity of our work.
Perform For The Machine
Let’s talk about “Nosedive.”
It’s the Black Mirror episode where every interaction is scored in real time, and your rating determines everything—housing, job prospects, social access, even how nice the barista is to you. Bryce Dallas Howard plays Lacie, a woman who performs relentless niceness to boost her social score. She smiles through her teeth, posts pastel-filtered photos, and practices polite giggles in the mirror. Eventually it all implodes. Her score tanks, she’s socially exiled, and the episode ends with her screaming—joyfully—for the first time.
What makes “Nosedive” hit so hard isn’t the technology. It’s how familiar the behavior is. Lacie isn’t broken; she’s calibrated. And we’ve all seen that calibration at work in our sector. We optimize for opens and clicks. We filter lists by engagement scores. We tweak our messaging to match modeled donor preferences. Slowly, silently, we begin to perform not for the donor, but for the machine. We shape our tone, our timing, and our outreach all in the name of being data-informed. But let’s be honest: it’s the same performance Lacie was doing—just dressed up in warm and fuzzy nonprofit language.
And here’s the real twist: scoring doesn’t just shape our behavior. It shapes our imagination. Over time, we stop thinking about what might be possible with a donor and start reacting only to what the algorithm says is probable. Our fundraising becomes reactive, constrained, safe. We don’t chase surprise. We chase predictability. But if “Nosedive” teaches us anything, it’s that living for the score slowly kills your capacity for risk, joy, and authenticity. And fundraising without those things? It might raise money, but it won’t mean anything.
Surveillance Philanthropy
Let’s drop the euphemisms. Donor scoring isn’t just smarter segmentation. It’s surveillance with a nicer interface. No, we’re not hacking webcams or tracking donor GPS signals; but we are scraping behavior, assigning value, and determining access to relationship based on algorithmic predictions. That’s not engagement; that’s governance. And it should make us uncomfortable.
Shoshana Zuboff coined the term surveillance capitalism to describe how tech platforms extract data to predict and influence behavior. It’s about control, not connection. And, while nonprofits like to believe we’re immune to these dynamics, we’ve imported the logic wholesale. We use scoring tools to decide who’s worth a call. We run models that tell us who deserves attention. CRMs push lists to the top, and everyone else quietly vanishes. It’s not malicious. But it is mechanical. And it replaces human discernment with machine judgment.
The most dangerous part? We’ve normalized it. We don’t even notice when we stop listening. We just follow the logic. It’s efficient. It’s measurable. It feels smart. But, the moment a donor’s value is determined before a conversation happens, we’ve abandoned relationship in favor of prediction. We’re no longer co-creating generosity—we’re managing a behavioral portfolio. And, at that point, it’s not just the donor who’s being watched. It’s us, playing a role we never signed up for: agents of a system that would rather score people than trust them.
We like to think the nonprofit sector exists outside of the cultural problems we critique. But let’s be honest: we’ve imported the logic of the scoring society without much resistance. Scholars like Danielle Citron and Frank Pasquale describe this new social order as one where individuals are “assessed, profiled, categorized, and scored” in ways that quietly govern access to opportunity. And it’s not just governments doing the scoring. It’s platforms. It’s predictive models. It’s us.
When nonprofits adopt donor scoring systems, we’re not just responding to a trend—we’re helping normalize it. We’re reinforcing the idea that people should be ranked before they’re welcomed—that relationships should be earned with data, and that generosity must prove itself before it gets trusted. These are not neutral choices. They shape how people engage. They shape how we treat each other. And they quietly shift the role of the nonprofit from relationship-builder to reputational gatekeeper.
It’s worth asking: do we want to be another node in a society that filters humans by risk score and behavioral predictions? Or do we want to be a place where people are met as they are—not as their data suggests they might be? Because, if we don’t draw that line, we’re not just using the tools of the scoring society. We’re becoming part of its infrastructure.
But the danger isn’t just predictive error; it’s quiet erasure. Donor scoring systems don’t announce who’s been filtered out. There’s no flag that says, “This person didn’t make the list.” Instead, people are removed from cultivation streams, dropped from outreach, or never invited in the first place. They’re excluded, not because they opted out, but because someone—or something—opted out for them. And the damage compounds silently. A donor never gets called. A relationship never forms. A gift never moves. And nobody notices, because the system did exactly what it was designed to do: optimize for certainty and remove the rest.
The Gift Doesn’t Want To Be Scored
Let’s be clear: the gift doesn’t want to be scored. It doesn’t want to be ranked, filtered, or put through a matrix. It doesn’t want to be predicted. The gift wants to move. It wants to create relationship. It wants to arrive unreasonably, unpredictably, without the polite permission of a CRM dashboard. And, when we start designing fundraising systems around who might give instead of who already does, we’re not just being strategic—we’re betraying the ethic that makes this work matter.
Scoring undermines the three fundamentals of the gift. First, it halts movement—because the system tells us to wait for a green light before we act. Second, it fractures relationship—because we’re not meeting people as they are; we’re sorting them before we’ve even made eye contact. And third, it strips away freedom—because generosity becomes conditional on passing a test you never knew you were taking. This isn’t a philosophical quibble. It’s a structural failure. A gift that only counts when it’s predicted isn’t a gift.
The defenders of donor scoring will say, “It’s just a tool—it’s how you use it that matters.” But that misses the deeper point. Tools aren’t neutral—they reflect the beliefs that built them and reinforce the values we bring to them. Donor scoring teaches us to see people not as complex, surprising, or generous, but as probabilities to manage or dismiss. That belief shapes behavior. It narrows our posture toward control, not curiosity. It rewards certainty, not trust. And, if the gift can’t be risky or surprising or free, then maybe we’re not fundraising anymore. Maybe it’s something else entirely.
Here’s the truth: scoring doesn’t scale trust. It replaces it. And, once we’ve outsourced trust to an algorithm, we’ve already forfeited the very thing we’re supposed to protect—the human act of choosing to give, freely and unpredictably, in relationship with another. That’s what makes the gift different. That’s what makes it powerful. And no model, no matter how sophisticated, can substitute for that exchange.
This isn’t a call to ditch your database. It’s a call to ask better questions about the systems we rely on—and the postures they invite. Do our tools deepen trust, or do they just manage risk? Do they help us build real relationships, or simply measure potential? Do they leave room for people to surprise us, or are we only listening to those we’ve already tagged as worth our time?
To my fellow fundraisers: the companies profiting from these scoring systems are only doing so because we’re buying. If we stopped—if we trusted our instincts, valued relationships, cultivated discernment, and let the gift play its role—those systems would lose their market. And maybe we’d recover something more important than predictive accuracy: our capacity to be moved, surprised, and in real relationship. Because, if we can’t make space for that, then what exactly are we doing?
“Nosedive” resonated for a reason. When I searched Reddit, I found thread after thread of people saying how close it all felt—how easily they could imagine the scoring society unfolding in real time. It didn’t feel like fiction. It felt like a mirror—a warning. And it made me wonder: what if a future Black Mirror episode wasn’t about a social media dystopia or a corporate rating system—but about a nonprofit? What if a donor’s experience—scored, filtered, and quietly ignored—became the next cultural punchline? Would we recognize ourselves? Or would we still be telling each other it’s just good strategy?
-, Founder, Responsive Fundraising

Writing Projects
Read our latest article in The Chronicle of Philanthropy.
Read our latest article in The Giving Review.
Order your copy of The War for Fundraising Talent.
Order your copy of The Fundraising Reader.
Job Opportunities
Regional Partner Engagement Officer, Texas - Every Home, Colorado Springs. Learn more here.
Upcoming Events
Free Webinars
Responsive Fundraising’s First Sensemaking Framework. TBD.
Speaking Engagements
Bridge of Hope National Conference, Lancaster, PA, Thursday, October 2.
You had me at “surprise” and “relationship.” We survive on individual donations that average $50. We can’t afford to treat everyone with possibility… and yet we are often surprised by the generosity when we have conversations. That’s where the scoring system often gets it wrong. I’m not going to lie—with millions in our donor database, it’s daunting to start. We do have to use some segmenting.
The problem is not that fundraisers are scoring. It's that they are using the wrong scores. Read my book Cracking Generosity.