DonorVoice

Fundraising

Reston, VA 2,229 followers

The Behavioral Science Fundraising Agency

About us

DonorVoice is The Behavioral Science Fundraising Agency. We help clients turn donor-centric from slogan into strategy, using science to unravel human decision making. We are equal parts social scientists and fundraisers who know that science is only useful if it is applied. We make copy, design and journeys better.

Website
http://www.thedonorvoice.com
Industry
Fundraising
Company size
11-50 employees
Headquarters
Reston, VA
Type
Partnership
Specialties
behavioral science, fundraising, segmentation, donor journeys, insights, statistical modeling, copywriting, behavioral design, and surveys

Updates

Activists Aren't Donors, Even If They Donate

Activists and donors are different. And while activists often donate, that initial act of activism (e.g., the lead-gen petition signing) should not be thought of as a stepping stone to donor status. Rather, it should be treated as an indicator of the type of person you're interacting with, one with very different beliefs and emotions from the person who will donate but not take activist actions. And if you hope to make your activist someone who also gives money, you'll want very different messaging. Here's an example of how donors and activists differ in thinking about global poverty.

Does your financial ask of the activist focus on their beliefs, or is it more geared toward a donor ask? The latter won't work (well) on the former. Does your subsequent "journey" become a one-size-fits-all for activists who also give money and donors who aren't activists? These are likely very different human beings, and you're better off with that assumption than the less supported (but more common) one that lumps the two together.

A note about emotion. Outrage is the emotion activists feel or express at any situation that lacks equity or fairness; that is their moral lens on the world. However, these same activists have higher levels of sympathy than the donors. But sympathy isn't what motivates them, and as it turns out, neither is outrage. They feel it, yes, but that is different from saying it causes helping behavior. The one emotion that does cause the activist to act in the name of systemic inequity or unfairness is gratitude. Outrage doesn't motivate, it describes. Having that same outraged activist take a moment to see and appreciate their own lot in life and be grateful for it (not embarrassed or shamed by it) is the better way to motivate action in the name of those suffering under political or institutional failings.

Activists aren't donors, even if they engage in the act of donating.

Don't Make Persuasion Even Harder Than It Already Is

Jack Trout was a TV ad man who helped pioneer the brand positioning concept and famously quipped, "If your assignment is to change people's minds, don't accept the assignment." Sage advice.

A huge study of TV's effectiveness at changing people's attitudes and beliefs about social issues found it's a waste of time and money. The study was a randomized experiment with 31,404 voters who saw three weeks' worth of interest group ads on either immigration or transgender non-discrimination. Participants completed surveys pre- and post-exposure measuring a variety of attitudes and beliefs on the respective topic. The results for the LGBTQ ad reflect the same pattern seen in the immigration ad tests. The gist:

--People definitely saw the ads (first set of bars, Recall).
--They retained facts shared in the ads.
--The impact of the ads on attitudes and beliefs was incredibly small. If I were a 50 on a 0-to-100 attitude scale, the ads moved me up (positively) to 50.5, on average.
--The tiny effect was very short-lived, not even lasting a day after the ads stopped running.
--In a twist, people could recall the ads weeks after exposure. It's just that the ads had negligible effect.

Persuasion is tough, but not all media is the same. A similar experiment played out with face-to-face conversations (at residential houses), showing a marked, longer-lasting effect on attitude change.

Persuasion is tough, but you make it much harder with a one-size-fits-nobody approach, which is precisely what this experiment used. The only chance you have to persuade is by starting where people are. This is why the face-to-face effort had more success: the conversation allowed for back-and-forth dialogue. How do you replicate this in mail, TV or digital channels? You need different ads tied to the innate ways people orient toward and think about the world. The ad needs to speak to them in a way that gets them to listen and process rather than tune out.

One of the immigration ads was, per its creators (an advocacy org), designed to "use narrative persuasion paired with moral framing through American values of family, hard work and freedom to increase support for immigrants". Great, a good idea, if only that line of thinking were extended to the realization that moral framing won't work for everyone. We know Conscientious people (findable and targetable in every media channel) are much more likely to respond to this work/freedom frame than Agreeable or Open people.

Persuasion is tough, but you do it with at least one if not both hands tied behind your back if you take a one-size-fits-all approach. This random nth-ing of the world produces results full of false negatives; the ads likely worked differently for different people. But all this tailored thinking needs to be brought to the beginning of the assignment. Or just follow Jack Trout's advice and punt.

From Talk to Action: The ROI of Measuring Donor Experience

You can talk about donor experience, but unless you're regularly measuring it, that's all it'll ever be. But what's the ROI? Is it a just-believe thing? Good and no, respectively.

These are experimental results comparing asking for customer feedback after an interaction versus not asking. The "not" is likely your org's standard operating procedure. The experiment had several groups:

--Control: didn't receive a feedback request after the interaction.

The group that received a feedback request breaks into three subgroups, either naturally occurring or experimentally altered:

--Mere Solicitation Effect group: received the request but didn't give feedback.
--Mere Measurement Effect group: received and replied to the feedback request.
--Open-ended Positives group: an experimental survey condition in which the first survey question prompted people, open-endedly, to share any positives about the interaction. This group is compared to the group whose survey did not include that first, positive prompt.

The spend variable (Y axis) is spend in the 30-day period after the feedback request (or, for the control group, the lack of one). The researchers ran a similar experiment looking at spend over a full year after the single request for feedback and found a statistically significant lift. I've added in the percentage change, and even if your mileage varies, there's a hell of a lot of room between these gaudy percentages and your typical, failed A/B test.

Why does asking for feedback get people to spend/give more?

--The act of asking for feedback suggests you care (mere solicitation effect).
--Giving feedback causes people to subconsciously self-evaluate and perceive themselves as more committed to or interested in the cause (mere measurement effect).
--Soliciting open-ended positive feedback creates positively biased memories of an experience (positive solicitation effect).

Even customers who rated the experience as "poor" were positively influenced by the open-ended positive prompt, spending 55% more than those who also rated the experience as poor but didn't get it.

But these results undersell the value of feedback, even with their eye-popping lift. How so? There's more upside to be had if you are responsive to the feedback. Measuring lifts; responding lifts more.

All DonorVoice agency-of-record clients are using our enterprise-level feedback system and platform. This includes automated, tailored replies to those who give feedback. We're still using business rules, Boolean logic and templates for the auto-generated replies (a rough sketch of that kind of rules layer follows this post), but generative AI opens up a world of possibility for much more tailored, conversational responses. You should be asking for more than just transactional experience data. Collecting Commitment (our measure of brand loyalty) and Identity means you can tailor the number of communications and the content of them.
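A minimal sketch of what a rules-plus-templates auto-reply layer can look like, with hypothetical field names, thresholds and copy (illustrative only, not the actual DonorVoice platform logic):

```typescript
// Hypothetical feedback record and rule-based reply selection.
// Field names, thresholds and template copy are illustrative only.
interface Feedback {
  rating: number;          // 1 (poor) to 5 (excellent)
  mentionedStaff: boolean; // did the open-ended comment mention a staff member?
  openEndText: string;
}

const templates = {
  apology: "We're sorry we fell short. A team member will follow up within two business days.",
  thanksStaff: "Thank you! We'll pass your kind words along to the team member you mentioned.",
  thanks: "Thanks for the feedback. It genuinely shapes how we work with supporters like you.",
};

function pickReply(fb: Feedback): string {
  // Boolean business rules evaluated in priority order.
  if (fb.rating <= 2) return templates.apology;
  if (fb.rating >= 4 && fb.mentionedStaff) return templates.thanksStaff;
  return templates.thanks;
}

// Example: a middling rating with no staff mention gets the generic thank-you.
console.log(pickReply({ rating: 3, mentionedStaff: false, openEndText: "Quick and easy." }));
```

A generative AI version would swap the canned templates for a prompt conditioned on the rating and the open-ended text, which is the "more tailored, conversational" upgrade the post points to.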

What's Better, Ambiguous or Clear?

Do you prefer noisy and ambiguous or clear and explicit? Doubt anyone would say the former, but the sector, ironically, relies almost exclusively on noisy and ambiguous data. The person:

--Clicked. Did they click out of idle curiosity or with intent? Did the context (e.g., time of day or mood) impact the click decision?
--Donated online/offline. Was it guilt-induced or internally motivated? Does the person intend to give again? Was the reply form confusing?

We rely on mouse and pen movement with little to no understanding of why. We're not alone. Spotify wonders why someone skipped a recommended song. The skipping behavior is ambiguous. How does the machine in the recommender engine interpret the skip? Maybe that song would have been perfect if the person was exercising, but not for mellowing out on a Sunday. Maybe it was an accident and they meant to hit the like button.

These last-mile behaviors are a form of donor feedback. But we need to marry the messy with the less messy to clarify. We need different feedback, the kind that only the user can provide.

Put a Canary in Your Donate Page Coal Mine

There are three categories of explicit feedback to create a better donor experience.

--Clarifying behavioral signals: The list here is endless. Why did the person exit the donate page without finishing? We place code on the donate form to trigger a modal window when the person moves their mouse to the top of the browser to exit (a minimal sketch of that trigger follows this post). The survey asks why they're leaving with a couple of structured questions. This is the canary in the coal mine: I don't need a lot of people giving me this feedback to have some useful insight into this otherwise dark hole.
--Collecting feature feedback: I'm surprised every time I ask the online CRM form providers if they do user research on layout, design and features. The answer is always no, preceded by blank stares. It takes very few humans giving qualitative feedback on your form or home page to identify a myriad of low-cost, high-upside fixes.
--Understanding user context: Did the person donate on a whim or because a friend asked? Do they have a strong, personal connection? Second-gift conversion would be aided immensely by knowing the answer.

But how do we design these requests for feedback so they're not annoying or intrusive for the donor? Here are three design considerations.

--Framing: People are more likely to give feedback if it's framed as collaborative, meaning it'll help both the giver (donor) and the receiver (charity) versus just the latter.
--Indirect asking: It can be more effective to instruct donors on how to give feedback rather than asking directly. For example, after an online donation the confirmation page can suggest that if they'd like to give feedback on the experience, they can click and do so. This is subtly but importantly better than asking the person to do it directly.
--Timing: Ask right after the interaction or don't bother.
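The exit-intent trigger mentioned above is a small piece of front-end code. A minimal sketch, assuming the page already contains a modal element with id "exit-survey-modal" holding the one-or-two-question survey (the element id and structure are assumptions, not DonorVoice's actual implementation):

```typescript
// Minimal exit-intent sketch: reveal a short "why are you leaving?" survey
// when the cursor leaves through the top of the viewport on the donate form.
// Assumes an element with id="exit-survey-modal"; all names are illustrative.
let surveyShown = false;

function showExitSurvey(): void {
  const modal = document.getElementById("exit-survey-modal");
  if (modal) {
    modal.style.display = "block"; // show the pre-built modal with a couple of structured questions
  }
}

document.addEventListener("mouseout", (event: MouseEvent) => {
  // relatedTarget is null when the cursor leaves the document entirely;
  // clientY <= 0 means it exited through the top (toward the tab bar or close button).
  if (!surveyShown && event.relatedTarget === null && event.clientY <= 0) {
    surveyShown = true;
    showExitSurvey();
  }
});
```

Firing the prompt only once per visit keeps it from becoming the annoying, intrusive experience the design considerations above warn against.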

What's Worse, Actual Misinformation or Claims of It?

The World Economic Forum named misinformation as the top global risk, higher than war. I think it's fair to say the WEF/Davos crowd ain't exactly a representative slice of America or any other country. Maybe they know more than us, maybe their ranking is useful, or maybe they are just parroting the times. Note the explosion in "misinformation" search volume. And yeah, generative AI didn't exist as a commodity in 2016, but the internet and social media sure did.

Speaking of 2016, researchers analyzed the actual impact of the Russian attempts to influence the election. Here's what they found:

--All those Russian bots on Twitter had 70% of their exposure reaching only 1% of Twitter users.
--That 1% was overwhelmingly Republican. So, if the point was to sway the election to Trump, this is akin to giving coupons to people who were going to buy anyway.
--The number of exposures to the Russian troll accounts was exceeded hundreds of times over by non-troll content. A drop in the ocean.
--The final nail in this particular misinformation coffin: the exposure had zero effect on whether people voted and for whom.

So, misinformation isn't really the issue; the impact of it is. And it's always useful to put the volume in context against the always exponentially larger firehose of other content we consume. So, is the Google trendline matching the problem or out of proportion to it?

The problem of mis/disinformation, fake news as it were, has been around forever. Julius Caesar's adopted son Octavian secured his position as the leader of Rome after his father's death by spreading lies about Mark Antony, his challenger, being a traitor.

All the research on mis/disinformation, which we used to just call lies, suggests it doesn't work to sway opinions or beliefs. Even factual information rarely sways opinion; the persuasion business is a fool's errand. But lying about things that have a tiny ripple effect on behavior can work, in theory. In Australia voting is compulsory, but a misinformation campaign claimed it wasn't for a referendum vote last year. Did it work? Hard to know; turnout was higher than in 2022, but it's hard to prove a negative. Still, that sort of lie is the kind that can influence at the margins.

How about the constant drumbeat of misinformation claims? Might this cacophony help explain the dramatic decline in levels of institutional trust? The actual impact of misinformation might be akin to a cold, but the incessant claim that everything is fake or could be fake might be more terminal to the social fabric.

AI Imagery, Creepy or Credible?

Imagine a world where, for a modest fee, you get an AI-created image that looks real. No more expensive photo shoots or logistical nightmares, just a perfect image ready to go. AI-generated content is poised to revolutionize advertising, bringing cost savings and convenience that's hard to beat. But, and it's a big but, we need to tread carefully.

A hot-off-the-press study explores how these synthetic faces impact charitable giving, and it's not all smooth sailing. When donors know a face is AI-generated, as in the image to the left, their willingness to donate drops. This happens because of:

--Reduced empathy: Knowing a face is fake reduces empathy. Empathy is a powerful driver of donations; when it's lacking, so too are the donations.
--Reduced emotion perception: AI faces also make it harder for people to perceive the emotions they're meant to convey, further dampening the emotional impact on empathy.
--Reduced guilt: Less empathy means less anticipatory guilt, the feeling that you should help to avoid feeling bad.

But the rules can change:

--Transparency and motives: Being upfront about AI use can restore giving when the reason is to protect the real person's privacy or to preserve valuable resources. Talking about saving money doesn't sit well with people.
--Disasters: When real images are hard to come by (e.g., natural disasters), people are more accepting of AI-generated ones. The urgency and need override the skepticism, and AI images can be as effective as real ones.

Kiki Koutmeridou offers these practical takeaways for charities considering AI imagery:

--Disclose when images are AI-generated and emphasize the ethical reasons.
--Avoid cost-cutting as a motive for AI imagery.
--Use it in emergency situations.

The Secret to Evoking Emotions in Writing: Why 'Crying' Beats 'Sad' Every Time

"She was sad." Boooorrrrring. Talk about telling, not showing. Something as important as emotion, and yet most of the time we reduce it to the lowest common denominator, the literal use of the word: angry, sad, happy.

How about this instead: "she was crying." It shows, it describes. And for most of you, it likely invoked the sadness emotion. You instinctively and knowingly associate the verb "crying" with sadness. These emotion actions are not just any old actions or any old verbs. They convey meaning tied to our personal, shared experience. These words have superpowers: they activate internal feelings and states. For example, "justice" is readily understood and easily invokes the joy or frustration or dismay tied to a jury verdict. This abstract linkage of words to emotions is no different from concrete words like "pen", which immediately conjures up an image of what a pen looks like.

But their superpower doesn't stop there. It turns out that these abstract concepts are mentally processed faster than concrete words. "Crying" beats "sad" for mental processing ease and association. Creating mental ease is important if you want to write well and tell stories well.

And emotion in writing matters. But, as we've written 1,000 times (make it 1,001), people don't give because you make them feel sad. They give because they think giving will make them feel better. Emotion is the goal, not the cause, of the behavior. I can't stress this enough, as it's one of the more important, ignored bits of effective fundraising. Anyhooooo, back to our regularly scheduled program.

We've got a large list of action words linked to emotions that we use for writing and designing fundraising. Here is a snippet that you should keep as a reference. Do a search-and-replace on the simple, too-literal emotion terms and try to tell a story with the action word instead. The top action words for each emotion are listed, as are top synonyms for certain action words (e.g., scream/shout/yell/shriek).

Inflation Messaging

Giving USA reported that charitable giving increased by 1.9%, but after adjusting for inflation, it's down 2.1%. The cost side of your P&L has likely gone up, whether you're a charity or an agency. Is this a macro, negative externality you're stuck dealing with, or might this be a time to raise "prices"?

If you raise your ask amount, explain it, and it's perceived to be fair, it increases satisfaction. So, what influences fairness judgements?

1) If the increase is because of increased costs, it's perceived as more fair.
2) If those costs are externally imposed, it tilts further into the fairness camp.
3) And if the organization is perceived as a good actor/benevolent, more fair still.

This is a trifecta of perceived fairness that will be showered on a nonprofit's explanation for why it's asking for more.

We ran a test for a food bank client that changed the tick-box messaging from asking donors to cover the 3% transaction fee to instead cover the 13% increase in food costs, along with a bit more rationale. The test (Variant B) won, increasing conversion by 9% and getting more people to opt to cover 13% (vs. the 3% transaction fee), so a higher average gift too. Rationale matters because it determines perceived fairness.
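As a side note on the headline numbers: the two Giving USA figures quoted above jointly imply an inflation rate of roughly 4% for the period, since real growth is nominal growth deflated by inflation:

\[
1 + g_{\text{real}} = \frac{1 + g_{\text{nominal}}}{1 + \pi}
\quad\Rightarrow\quad
\pi = \frac{1.019}{0.979} - 1 \approx 4.1\%
\]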

All Fundraisers Must Be Brand Builders

The industry trendline is well worn: fewer and fewer people giving. The saving grace has been that those left are giving more. This is hardly a winning strategy. Your only path to sustained growth comes from increasing the percentage of prospects who know you.

There's a huge chasm in likelihood to donate and lifetime value between "know you" and "heard of you". Those who know you will cite your brand name when asked, open-endedly, to name a charity in your category. Those who've only heard of you can't. The lifetime value of the former is 50%-150% higher.

Since the vast majority of charities have little to no brand spend budget, they must rely on fundraising spend to do that job. Chances are your direct response campaigns are aimed at the same people who already know you.

1) That's problem #1: lack of exposure to your prospect universe.
2) Problem #2 is that your direct response fundraising likely falls short on creating memory associations with your brand name.

You wouldn't be alone. For years, Miller beer advertising during sporting events helped Budweiser. Why? Bud had created such a strong mental link between its brand and the occasion (watching sports) that competitor spend accrued to it. People literally watched Miller beer ads during sporting events and thought they were Bud's.

The key to building "know you" memory associations is multi-faceted, but suffice to say, repeating your brand name is one of the ingredients. And there is turbo-charger value in saying and writing it. This research shows the relationship between the number of times the brand is mentioned in a communication and its impact on the likelihood of recalling your brand with only a generic category prompt.

Digital advertising is well suited to carry a mandate of 2-3 brand mentions with a mix of aural and visual, placing you at the top of the black line. It's also worth noting there's very little brand gain in direct mail from writing the brand name over and over, per the blue line. And what this doesn't account for is the downside of an appeal that feels disjointed and reads like the opposite of a personal letter, with the org name repeated 7 times, yielding likely lower conversion and no upside in brand recall.

What's in a name? Apparently, the answer is "more" if you can combine sight and sound.

The Anatomy of a Successful Conversation

Ever had a conversation where the other person dominated talk time? How about one where they didn't ask you any questions? Contrast that with a balanced conversation, where the two of you talked about equally and the sequence alternated: you, other person, you, other person, etc. Both parties are listening and responding accordingly, often ending their response with a question or implicitly inviting a response. Not all good conversations look this way, but if it looks this way, it's good.

Humans who might have these types of conversations naturally seemingly get lobotomized once a phone script is in front of them and it's now their job to talk with others.

We have a telefundraising company, @DVCalling, that makes tens of thousands of calls a month and has thousands of conversations. All of these ask the supporter to consider becoming a monthly donor. About 1 in 10 sign up over the phone. Many who don't donate over the phone do so online in the days after the call, per match-back analysis. In short, call success is not getting people to say "yes"; it's fostering high-quality interactions that build mental equity. When the "yes" happens is secondary, a by-product of the real job.

We've been using GPT (on a private server) to analyze the dynamics of calls by feeding it batches of transcripts from calls that produced a sign-up versus those that didn't. Here's a visualization of the conversation flow, built on two metrics (a rough sketch of computing them from a transcript follows this post):

--Turn taking: how many turns are in the conversation.
--Turn ratio: how long each turn is, the length of each colored bar. This is normalized to assume both calls are 3 minutes.

There are three clear dynamics:

1) The agent and supporter take an almost identical number of turns. In the unsuccessful calls, the agent takes more turns.
2) The ratio of talk time is almost a 50/50 split. In the unsuccessful calls the agent talks a bit more.
3) In successful calls the agent asks a question sooner.

Some other noteworthy features distinguishing success from failure:

--The successful calls have a slightly more positive sentiment score.
--Successful calls have slightly less politeness. Yes, you read that correctly. The polite language in successful calls is more natural and flows with the conversation. Unsuccessful calls have more polite language that likely feels a bit gratuitous.

A few other noteworthy characteristics, informed by what we know about long-term success (i.e., sustainers who stick around):

1) If the call is pressurizing, you lose. The "you" is the charity, because you probably pay for the new sustainer but they quit. This means we're light on objection handling. Other phone scripts I've seen have a laundry list of ways to convert "no's" to "yes's".
2) One ask (maybe two, depending on call dynamics). "Best practice" often abides by the idea that a "no" isn't a "no" until you get three of them. No means no. In no other world would we repeat the ask in rapid-fire succession, other than maybe a young child pestering their parent.
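The two flow metrics above (turn count and share of talk time) can be computed directly from a transcript before any GPT pass. A minimal sketch, assuming transcripts arrive as ordered, speaker-labelled turns and using word count as a rough proxy for talk time (the data structure and field names are assumptions, not the actual DVCalling pipeline):

```typescript
// Compute per-speaker turn counts and share of talk time from a call transcript.
// Word count stands in for talk time; the structure here is illustrative only.
interface Turn {
  speaker: "agent" | "supporter";
  text: string;
}

interface CallStats {
  turns: Record<"agent" | "supporter", number>;
  talkShare: Record<"agent" | "supporter", number>; // fraction of total words spoken
}

function analyzeCall(transcript: Turn[]): CallStats {
  const turns = { agent: 0, supporter: 0 };
  const words = { agent: 0, supporter: 0 };

  for (const turn of transcript) {
    turns[turn.speaker] += 1;
    words[turn.speaker] += turn.text.trim().split(/\s+/).length;
  }

  const totalWords = words.agent + words.supporter || 1; // guard against an empty transcript
  return {
    turns,
    talkShare: {
      agent: words.agent / totalWords,
      supporter: words.supporter / totalWords,
    },
  };
}

// A successful call should show near-equal turn counts and a roughly 50/50 talk share.
const stats = analyzeCall([
  { speaker: "agent", text: "Hi, is this Sam? Thanks so much for your gift last spring." },
  { speaker: "supporter", text: "Oh, you're welcome. Happy to help." },
  { speaker: "agent", text: "What made you decide to give in the first place?" },
  { speaker: "supporter", text: "My sister went through the program, so it feels personal to me." },
]);
console.log(stats);
```

Question timing (the third dynamic) could be added the same way, for example by recording the index of the first agent turn whose text ends in a question mark.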
