Keeping It Safe with AI Outreach

AI brings a lot of potential benefits, but also new risks. Drips Co-Founder and CEO A.C. Evans explores how to keep yourself safe while using AI for customer engagement.

The growth of artificial intelligence has always caused both hope and fear. Cautionary tales like Isaac Asimov’s I, Robot and pop culture like the Terminator movies show that society has been nervous about AI for a long time.

Even more present these days is the fear that AI will replace jobs and create unintended risks as the technology becomes part of our lives.

On the other hand, we’ve seen countless examples of AI unlocking productivity potential and giving us new ways of working and solving problems. Most recently, ChatGPT entered the public eye with a bang and made the potential benefits of generative AI more accessible to the masses and enterprises alike.

With these two perspectives in mind, how does Drips think about the risks and rewards of AI? And how, specifically, do we use AI to deliver Conversations as a Service?

Generative AI Versus Natural Language Understanding

AI is a wide and complex field, with many types of AI for different purposes. (For a helpful breakdown of these types, check out this piece from McKinsey.) For now, let’s focus on just two: generative AI and natural language understanding (NLU).

Generative AI is a branch of deep learning capable of generating new content, such as text, images, or music, mimicking human-like creativity and expression. It uses probability to learn patterns from existing data and create novel output that goes beyond the data it was given.

Let’s test your reasoning (don’t worry, it’s an easy test). Repeat after me:

  • Red, red, red, blue
  • Red, red, red, blue
  • Red, red, red, (blank)

I’m willing to bet you knew that the next color I would say would “probably” be blue. You know this because your brain is using probabilistic reasoning. Tools like ChatGPT reason in a similar way.

Generative AI has remarkable capabilities. However, its ability to create novel output also means you can never predict with 100% certainty what it will say. The alternative to probabilistic reasoning is deterministic reasoning: only a deterministic model lets you know exactly what it will say, every time. We’ll circle back to this soon.
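
To make the difference concrete, here’s a minimal, purely illustrative sketch (not how any production system actually works): a probabilistic model samples its answer from a distribution, while a deterministic lookup returns the same pre-set answer every time.

```python
import random

# Probabilistic: sample the next "word" from learned frequencies.
# Real generative models are far more sophisticated, but the key
# property is the same: output is drawn from a distribution, so you
# can never be 100% sure what comes out.
next_color_probs = {"blue": 0.95, "red": 0.04, "green": 0.01}

def probabilistic_next(probs):
    colors, weights = zip(*probs.items())
    return random.choices(colors, weights=weights, k=1)[0]

# Deterministic: a fixed mapping always returns the same answer.
approved_answers = {("red", "red", "red"): "blue"}

def deterministic_next(pattern):
    return approved_answers[pattern]

print(probabilistic_next(next_color_probs))       # usually "blue", but not always
print(deterministic_next(("red", "red", "red")))  # always "blue"
```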

The unpredictable nature of generative AI raises concerns that it could spread misinformation or use an inappropriate tone when interacting with humans. It could create a poor user experience, ignore brand guidelines, or even break state or federal rules and laws if it hasn’t been trained on them for a specific use case.

Now onto our second type of AI. Natural language understanding (NLU) is a branch of natural language processing (NLP), a general term for technology that analyzes natural human language. NLU seeks to comprehend and interpret our language in a way that is both meaningful and contextually relevant. It analyzes both the content and emotional sentiment of messages in a human-like way.

Drips’ CaaS platform uses NLU, focusing on intent recognition in inbound messages. When it comes to sending out replies, we don’t use generative AI, which comes with the risks mentioned above. (More on our approach shortly.)

Building Trust Through Safe AI Practices

Many companies are hesitant to use generative AI for two-way customer interactions because they’re worried about the customer experience. This is justified, at least when it comes to current versions of AI tools like ChatGPT. Even though ChatGPT is amazing, it isn’t yet ready to interact directly with consumers on behalf of major enterprises. It’s highly likely to say something off-brand or inaccurate (have you heard of hallucinations?), causing major UX headaches and potential legal risk.

Luckily, that isn’t the only way to use AI for two-way conversations at scale.

How Drips Keeps It Safe While Delivering AI Outreach

As I hinted above, one of the main reasons that Drips is a safe AI outreach platform is our natural language understanding (NLU) focus. We leverage large language models (LLMs) just like ChatGPT does. But we use them to understand inbound messages from customers instead of generating live and unique responses that weren’t pre-vetted.

When texting customers for administrative or marketing use cases, unpredictable is unacceptable. The risk of erroneous messages is just too high.

We use our best-in-class NLU to understand what consumers need, then send a matching response from a scripting package pre-approved by our client. These responses are crafted as a joint effort between our team of conversational outbound experts and our client’s brand and legal experts. This matching of customer intent to a vetted response is done in a deterministic way that avoids the risks of probabilistic models.
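
As a rough sketch of what that pattern looks like (the intent labels, responses, and keyword matching below are invented for illustration, and the real NLU models are far more capable than keyword checks), the probabilistic step is confined to understanding the inbound message, while the outbound reply is a deterministic lookup of vetted copy:

```python
# Illustrative sketch only, not Drips' actual code: intent recognition
# feeds a deterministic lookup of pre-approved copy, so nothing
# unvetted ever goes out.
PRE_APPROVED_RESPONSES = {
    "reschedule": "No problem! What day works better for you?",
    "opt_out": "You're unsubscribed and won't receive further texts.",
    "fallback": "Thanks for your message. An agent will follow up shortly.",
}

def classify_intent(text):
    # Stand-in for a real NLU model, which classifies inbound
    # messages far more robustly than keyword matching.
    lowered = text.lower()
    if "stop" in lowered or "unsubscribe" in lowered:
        return "opt_out"
    if "another day" in lowered or "reschedule" in lowered:
        return "reschedule"
    return "fallback"

def reply_to(inbound_text):
    intent = classify_intent(inbound_text)   # probabilistic step (understanding)
    return PRE_APPROVED_RESPONSES[intent]    # deterministic step (vetted reply)

print(reply_to("Can we do another day? Tuesday is bad for me."))
```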

This gives your organization the best of both worlds. You get to leverage AI and ML via NLU while sending only quality-assured, brand-approved responses.

What’s more, you get the Drips Rules Engine (DRE), our rules-based, policy-as-code compliance solution. DRE understands outreach rules based on state and federal laws, best practices, and your own brand guidelines. This intelligence enables us to make sure the right messages go out at the right time. And even more importantly, DRE can prevent the wrong messages from going out at the wrong time.
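
Here’s a simplified, hypothetical example of what policy-as-code can look like in practice. The states, hours, and rule values below are made up for illustration and are not actual DRE rules or legal guidance; the point is that when rules live in code, every send decision is consistent and auditable.

```python
from datetime import datetime, time
from zoneinfo import ZoneInfo

# Hypothetical contact windows keyed by state. Values are examples
# only, not actual legal requirements or Drips configuration.
CONTACT_WINDOWS = {
    "default": (time(8, 0), time(21, 0)),
    "FL": (time(8, 0), time(20, 0)),  # assume a stricter state window
}

def can_send_now(state, tz_name):
    """Return True if a text is allowed to go out right now."""
    start, end = CONTACT_WINDOWS.get(state, CONTACT_WINDOWS["default"])
    local_now = datetime.now(ZoneInfo(tz_name)).time()
    return start <= local_now <= end

print(can_send_now("FL", "America/New_York"))
```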

We've managed over 1.5 billion conversations with this approach, including for some of the biggest names in some of the most regulated industries out there. We’ve put a ton of guardrails in place so that conversations can zig and zag along with customers… while always staying on message and on track to a valuable outcome.

The Risks of Staying with Traditional Outbound

We get that new customer-facing technology can be nerve-wracking. However, even the old ways aren’t risk-free. Far from it.

Here are just a few ways in which a solution like Drips could address risk factors and create a more trustworthy audience connection.

Risks of Chatbots

Chatbots can potentially complete the same tasks as human agents, leading to faster responses at lower cost. While they can intercept and deflect tickets, their limitations often result in a mechanical and overall poor user experience. Not only do they struggle to understand complex queries, but they also lack empathetic replies. Most of all, no one wants to chat with a bot (unless, of course, it feels human).

Risks of One-Way SMS Blasts

One-way SMS blasts come with risks of their own. Even though these aren’t two-way conversations, you can still run into trouble when it comes to TCPA consent revocation. Many of these systems rely on set opt-out prompts like “Text STOP to end.” However, if customers want to opt out, they might reply with “No way!” or “I’m not Mary” instead of the specific keywords the system requires to unsubscribe. TCPA rules are clear that systems must accept any reasonable method of opting out. Complaints can be raised if users’ non-standard opt-out requests aren’t recognized, and the burden of fighting those FCC complaints falls on the enterprises sending the messages, leading to more risk and extra cost.
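
A toy example makes the gap clear. The keyword and phrase lists below are invented for illustration; the point is that a fixed-keyword filter misses reasonable opt-out language that an intent-based approach would catch.

```python
# Toy illustration of why keyword-only opt-out handling falls short.
KEYWORDS = {"stop", "unsubscribe", "quit", "end", "cancel"}

def keyword_opt_out(text):
    return any(word in KEYWORDS for word in text.lower().split())

# Real NLU would classify revocation intent; this short phrase list
# merely stands in for that idea.
REVOCATION_PHRASES = ["no way", "i'm not mary", "wrong number", "leave me alone"]

def intent_opt_out(text):
    lowered = text.lower()
    return keyword_opt_out(text) or any(p in lowered for p in REVOCATION_PHRASES)

print(keyword_opt_out("No way!"))   # False: the keyword filter misses it
print(intent_opt_out("No way!"))    # True: the intent-style check catches it
```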

Risks of Human Agents

When it comes to customer service, human agents are often considered the gold standard. It’s true that human-initiated outreach is exempt from many of the regulations that apply to automated messaging. But humans are, well, only human. Different agents may respond differently in similar situations, leading to an uneven customer experience. Human agents are also not always at their computers, and staffing outside of business hours is a particular challenge. Finally, human agents may not always stay on message and follow every compliance rule. Especially in complex industries like healthcare, finance, and insurance, even the best agents are likely to slip up now and again.

Ensuring human contact center agents stay on message requires complex and expensive training and monitoring. That also means big problems if you need to scale quickly by bringing on more agents or third parties.

How Drips Can Resolve Compliance Headaches

Drips’ CaaS excels where these traditional methods fall short. Unlike chatbots, our technology understands complex queries and can deliver empathetic responses in a human-like way. When it comes to one-way SMS blasts and opt-outs, our consent management features can recognize hundreds of thousands of unique phrases customers use to unsubscribe (e.g., “This isn’t John” or “Please remove me from this campaign”), bolstering your TCPA compliance efforts. As for human fallibility, CaaS is a scalable alternative that maintains quality assurance with pre-defined, vetted, and approved responses.

Safe, Quality, Scalable AI Outreach

When we talk with compliance and legal teams, they are rightfully concerned about incorporating AI into outreach efforts. But once they understand the details of how Drips safely drives outcomes, they actually prefer our solution over human agents texting or chatbots.

We deliver the best aspects of AI while keeping compliance front and center. Our unique approach and experience have allowed us to safely navigate complex conversations over 1.5 billion times.

If you’re looking for a way to innovate with AI that can overcome compliance concerns, let’s talk today.

