Dan Faggella – Founder & CEO of Emerj on the Connected Insurance Podcast Presented by Agency Revolution

Why this leading expert on Artificial Intelligence is urging insurance agents to ‘lean upmarket’

Whether you realize it or not, artificial intelligence is part of your everyday life. It’s even become part of normal operations in the insurance industry. But for as much as AI has permeated our lives, this is just the beginning.

Our podcast guest, Dan Faggella, predicts big changes ahead. As an AI consultant to the UN, INTERPOL, the World Bank, and global pharmaceutical and insurance companies, Dan has a clear-eyed view of the promise (and potential pitfalls) of artificial intelligence in the insurance industry. In this podcast you will learn:

  • How artificial intelligence will likely affect the purchase of insurance (and how agents select their markets) in the very near future.
  • Why insurance agents must pay attention to the rise of AI in other industries because, as Dan says, ‘Those very waves will hit you.’
  • The four places in the insurance value chain most likely to be transformed by AI (including customer relations).
  • The strategic imperative that agents must wake up to quickly (and how that may change your entire book of business for the better)!

Please don’t miss this conversation with one of the world’s top artificial intelligence experts. Listen today and be better prepared for tomorrow.

What are other agents & brokers doing to thrive? What are the biggest trends affecting the retail insurance agent & broker? What are the most important strategies and tactics you need to grow faster? Find out here in the Connected Insurance Podcast, where Michael Jans discusses the biggest issues affecting the independent insurance agent and broker with the industry’s leading figures.

One More Thing! What do you think? How will you and your peers use this to grow your agency or brokerage? Share your thoughts in the comment section below, subscribe to get updates delivered to you, and please share this if you found it informative.


Michael: Dan Faggella, how are you?

Dan Faggella: I’m well, Michael. Good to be here.

Michael: This is going to be an interesting conversation [laughs] and one that I’m looking forward to. I’m going to give full disclosure to the listeners: Dan is a high-level thinker, but the devil’s also in the details, and he understands the topic he’s going to be talking about, and it matters, okay? I think my listeners know that I swing between two poles. One, practical stuff you can do by Tuesday to help you grow your agency, and maybe it’s a little social media technique.

Then on the other end of the pole, it’s hopefully making us all smarter and making us all more aware and alert to the big trends and forces that are affecting the world and the industry that we live in. Then that helps us to respond not tactically but strategically. Dan, you’re in that second category, yes?

Dan: Yes, I dig it, man. That’s I think where I belong.

Michael: That’s where you belong, all right. First question, if you would, let’s get to know you. Tell us a bit about yourself, how did you get to be this guy who has this most fascinating career?

Dan: Being a guy who does market research for big companies and governments around AI, oddly enough, started with martial arts, Michael. [crosstalk]

Michael: Well, I know that about you. You’re like a-

Dan: Brazilian jujitsu.

Michael: -Brazilian jujitsu trainer, black belt? Right?

Dan: Yes, yes.

Michael: You owned a school? Right? Okay.

Dan: Many years, yes. I was studying martial arts and became really interested in building skills because I was competing. I was teaching students, and I was teaching competitors who were stepping into the cage or going into tournaments, and I became fascinated with the psychology of how we learn to learn faster. I went to grad school at the University of Pennsylvania, and I paid for that overly expensive degree by training fighters and running a martial arts academy.

While I was in school, Michael– this is 2011– I’m studying basically human learning: the neuroscience and mostly the psychological models and theory around skill acquisition. And I’m hearing rustles in the breeze from people like, “Hey, man, you know all this, like how neurons work and how people learn– you know there’s machines to do that now. You know that, don’t you?” Now, this whole adult learning thing that I’m in spins off into at least a curiosity about what machine learning is.

By the time 2012 rolls around, about a year out of grad school, I had been sinking my teeth into the psychology and the computer science of those worlds post-degree, and I became pretty convinced that the impact of AI within my lifetime would be so damn grandiose that I’d better jump on that pony early. That was 2012, and now we’re here speaking for the United Nations, doing market research for pharma giants, and AI is the whole game. It was martial arts first, AI second, a weird jump.

Michael: You’re off to China in a week or two to represent the United Nations on this topic?

Dan: Yes, actually not to represent the United Nations, but basically the UN has a branch called the AI and robotics branch– it’s actually a sub-branch underneath their whole division dedicated to crime and justice. I’ve spoken at one of their events before, on surveillance technology, in Singapore. Basically, they’re doing an event on the national security implications of AI, and they want the event to have some perspective on what the full lay of the land of applications is.

Someone to translate the wacky tech of the cutting edge for diplomats and policy makers who need to make some pretty damn exciting and important decisions about the future of national security. My role is not necessarily to represent the UN but to serve them by informing that crowd about the stuff that matters around safety and international relations. That’s literally jumping on a plane in about 10 days.

Michael: My next question to you is– I’m going to ask you to respond to a thought that somebody in the audience might have right now, okay?

Dan: Yes.

Michael: “I don’t want to learn about AI. I don’t see why artificial intelligence is really that important to me, because tomorrow I have to hire a new CSR, and I have 17 things on my list of things to do, I’ve got to see a couple of clients, and boom, before you know it, it’s five o’clock.” People do tend to be filled up with stuff before they have a lot of room or space for things that maybe they don’t know they should be paying attention to. So, why should they?

Dan: Great question. To be honest, I’m not one of those guys, Michael, who will tell everybody, “Hey, you need to know about AI.” For somebody who’s an agent, I will say it’s relevant at a certain level, but that level could still matter. People can make their own call, but no matter what space you’re in– for me, it’s the market research and online media space; for someone who’s in insurance, it’s whatever’s going to be affecting insurance agents.

It is handy to have a sense of, as you said in your second bucket of stuff, the trends that are likely to influence how your future works. Whether you want them to or not, they’re just going to be rolling into your world. There’s no need for an insurance agent to understand the technical side of AI, or even necessarily all the concepts, but to at least get how it’s currently bending the field they’re in, so they can consider what are good signs, what are bad signs, and where they should move themselves. From a contextual-awareness perspective, I think it’s handy to at least get the big picture.

Michael: Got it. Can we assume this? That in regard to next Tuesday, understanding, mastery, even just an inkling of knowledge about AI is not mission-critical.

Dan: Yes–

Michael: However, in regards to maybe two years from now, some understanding of it is valuable.

Dan: Almost certainly so. Maybe if you’re in rural Kentucky listening to this– I don’t know how many people in rural Kentucky listen to podcasts, but maybe you do.

Michael: Be nice to my listeners in rural Kentucky, okay? [laughs]

Dan: Hey, man, I’m from somewhat rural Rhode Island.

Michael: Okay.

Dan: I’m from a really obscure corner of the world that doesn’t change actually all that fast. A 4,000 person, 6,000 person town in the smallest state in the United States. There’s places like where I grew up that may not necessarily shift as quickly as other areas but the fact of the matter is the field at large is going to move and whoever the mother ship is, whoever these people work for, GEICO, Allstate, whatever. Whoever the mother ship is, they’re moving.

Even if you are in the provincial domain, it’s still going to be affecting who’s running the big show and will trickle down to you in some way, shape, or form. Yes, within two years, people are going to be seeing the effects of these trends. AI is one of the forces bending insurance, moving insurance, and shifting priorities in technology.

Michael: An interesting side comment for you, Dan, is that to my knowledge, the largest conference in the industry was held maybe a month or so ago in Las Vegas. I was there, there were 6,000 people roughly there. It was for insurtech, insurance and technology. AI, of course, is a significant part of what’s happening there. It is real stuff here.

Dan: Yes, yes, no doubt about it. There’s plenty of bloviated claims but at the end of the day-

Michael: [laughs]

Dan: – there’s more than enough traction in insurance to warrant the statement that this is shifting the game.

Michael: AI is shifting the game.

Dan: Yes.

Michael: Bloviated claims aside, [laughs] now let’s dive into AI. I’m going to start big picture and then I want to bring it down to real world applications, okay?

Dan: Cool, cool.

Michael: Because you and I, in previous conversations, I mentioned to you that I got AI right here in the casita. It was AI that said it’s time for me to call you.

Dan: Yes.

Michael: Right?

Dan: You’re already being prompted and pushed and pulled, yes.

Michael: Let’s start big picture. First question and maybe the answer is obvious, maybe not. What is artificial intelligence?

Dan: It’s a completely valid question. To be honest, there’s disagreement even among the PhDs. We actually did two pretty lengthy articles about what machine learning is and what AI is, and you get different folks who are equally academically qualified, who are deep in this field and respected, and they’re not saying the same things. I’ll tell you pragmatically what we come down to when we distill the answers from both parties.

The easiest way to think about it is this: old-school artificial intelligence in the ’80s and ’90s was basically– and this is literally as granular as any insurance person is ever going to need to know if you’re an agent, but it’s useful– old-school AI was programming human knowledge into an if-then system, more or less. Essentially, “Hey, how do humans make decisions?”

Okay, if this happens, then this; if that happens, then that; and if these things are there, then it means this. As an input, we had a lot of very hard labor to try to force hard rules onto the real world, which is super hard, but we tried to do it, and at the end there’s a result, and we say, “Hey, this machine made a decision that was maybe– and in some cases, certainly– better than a human can make.” In some cases it was worse than a human could make, but in cases where the scenarios don’t change very much, that hard coding could be handy.

The new paradigm here, and the reason this is something you and I are talking about, is this dynamic around machine learning. Machine learning is a bit of a tougher concept to grasp, but actually it’s pretty easy if you think about it this way. Basically, machine learning is the ability for a machine, for a system– essentially, think about it as a giant array of little nodes.

Instead of preprogramming those nodes to say, “if this happens, then this; if that happens, then that,” these nodes are simply exposed to a lot of instances of the real world. A lot of instances of, let’s say, image data, or financial records, or sentences or text, or whatever the case may be, in a way where they’re trained to coax out certain patterns.

If you and I sit down, and there’s a labeled data set with 7,000 dolphins, and the rest of the hundred thousand pictures are not dolphins, we could shove those through a system along with the labels for which ones are dolphins. This system is not learning our rules. We’re not saying it has fins, it’s a blue color, it’s a gray color. We’re not saying any of that. We’re just saying, “Hey, machine, you find what they have in common, and then tell me every time we have a dolphin.” This allows the system to not need to be hard-coded.

It can just learn closer to the way humans do, by raw experience. Now, that transfers its way into all kinds of processes within major sectors, insurance included, and allows systems to become adaptive and capable in ways that they couldn’t be before. This is now very much on the radar in the insurance space.

Machine learning is the definition that’s critical here and it’s just that conceptual understanding that’s important. Really, nothing [crosstalk]
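To make that contrast concrete (this sketch is not from the episode, and every feature name and number in it is invented), the “learn from labeled examples instead of hand-coded rules” idea Dan describes can be shown with a toy nearest-centroid classifier in Python:

```python
# Toy "learn from labeled examples" sketch: a nearest-centroid classifier.
# The features are made-up numbers standing in for pixel statistics; no rule
# like "has fins" is ever programmed -- the model only sees labeled examples.

def centroid(vectors):
    """Average a list of equal-length feature vectors."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def train(labeled_examples):
    """labeled_examples: list of (features, label). Returns label -> centroid."""
    by_label = {}
    for features, label in labeled_examples:
        by_label.setdefault(label, []).append(features)
    return {label: centroid(vs) for label, vs in by_label.items()}

def classify(model, features):
    """Pick the label whose centroid is closest (squared distance)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(model, key=lambda label: dist(model[label], features))

# Hypothetical training data: [blue-ness, fin-shape score]
examples = [
    ([0.9, 0.8], "dolphin"),
    ([0.8, 0.9], "dolphin"),
    ([0.1, 0.2], "not_dolphin"),
    ([0.2, 0.1], "not_dolphin"),
]
model = train(examples)
print(classify(model, [0.85, 0.75]))  # a new, unlabeled image -> "dolphin"
```

The point of the sketch is exactly the one made above: no rule is ever written down; the system derives the pattern from the labeled instances alone.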

Michael: Let me jump in on one question and-

Dan: Please.

Michael: - well, the way you described machine learning, where basically the machine is exposed to input and draws certain patterns and draws conclusions– it strikes me, and tell me if I’m way off on this or not: perhaps that’s the way a young infant learns, right? Because we can’t give them instructions. We can’t say, “Little baby, this is what a ball is and this is what your crib is.” It’s like they can’t absorb the concepts yet.

They’re just exposed to that and the brain has to create its own patterns of recognition. Is that similar?

Dan: Yes, this is the theory, and when this concept was developed– machine learning has its origins in a program to play checkers, if I’m not mistaken– the concept was just that: how can we learn the way that living systems do? You’re exposed to input, you draw conclusions. Now, machines don’t have the same general intelligence that we do. Of course, a baby has a lot of inbuilt smarts that we don’t often acknowledge.

They have proprioception; their limbs automatically detect pain in different ways and coordinate movement in ways that are vastly more complicated than we consciously acknowledge. But yes, machines can now perform similar feats without ever being told what to do.

Simply looking at instances, and then being able to make predictions and judgments moving forward, never having learned a single rule. It’s both an opportunity and a bit of a problem for AI, a double-edged sword, but the capability that opens up with it is so vast that it’s changing a lot of industries.

Michael: The most famous example of machine learning is IBM’s Watson?

Dan: The classic examples of machine learning are often in the image space, and there’s a famous competition called ImageNet, Michael, where AI computer science folks train models, train algorithms, to identify what is in an image. Then those algorithms are just hit with– I forget how many images, some random selection of, I don’t know if it’s a thousand or 10,000 or 100,000 images– and then they’re judged on their ability to know what it is.

When those systems started doing better than humans, or at least coming close to humans, they were making giant leaps. These previous systems were very much preprogrammed, like, “Okay, if there’s a certain amount of yellow and it has this shape, then–” It’s really hard to program rules for vision. That’s not how humans work. We don’t learn “this is the shape of a bear” in 100 steps, we just know what a damn bear looks like. [crosstalk]

Michael: [laughs]

Dan: When systems started to make these leaps at this ImageNet competition, that was often recognized as the instance where machine learning jumped to the fore, and people said, “Wow, this was stuff only humans could do.” That’s often the most famous example. I’m sure Watson leverages some machine learning, as well as a lot of other approaches, some more formal logic structures within Watson, but it’s a kludge of a lot of complicated systems.

Yes, you’re right. Watson is another interesting example where machine learning plays part of the role as to why that system was so doggone smart at answering random questions.

Michael: [chuckles] Let me see if I have this timeline right. There was a period and it wasn’t that long ago where machine learning exceeded, in some cases, the capability of human learning, right?

Dan: Yes, in certain [crosstalk]

Michael: In certain, yes. I’m not overdoing it here, but let’s say, yes, image recognition. So, boom, number two: then machine learning begins to get integrated into our lives, in some cases maybe slowly, innocuously. Then, boom, next phase, and we will get into this: it changes the world around us, and things start to happen that are maybe beyond what we think about day to day, all right? If that’s a reasonable fuzzy-wuzzy timeline, talk to us about how artificial intelligence or machine learning might be present in everyday life for people.

Dan: Sure. This is a good one. We actually have a great article called “Everyday Examples of AI,” one of our most popular articles, up last year, where we go into a ton of these. I’ll give you a great sampling here, and for anybody who’s super interested– if you Google that term, my website is first. I don’t know, I’ll mail you a check, but you can take a screenshot or something.

Michael: [chuckles]

Dan: We do really well for that term. Common examples that people are already aware, to some degree, involve AI are your Siri or your Amazon Alexa type of system, where people can say, “I need to find a Chinese restaurant,” or “I need to buy double-A batteries,” or whatever the case may be. These systems are transcribing audio, interpreting what the intent of that audio is, and presenting you with either products or suggestions or web links or a statement that they believe is the answer to your intent, and that’s rather complicated. Machine learning certainly plays a role in that process.

That’s where pretty much everybody acknowledges this is some kind of AI, because it’s so obvious that it is. Go one level down– not quite as obvious, but something most people, if they thought about it, would understand is AI– and you get a lot of these digital platforms that we live on. Your Facebook feed is predicated on really hyper-calibrating what is exposed to you. Which of your 400 friends’ posts is going to show up number one? Then, if you don’t click on that, how does that affect what they show you as number six?

If you log in in two hours, what do they now want to show as number one, knowing you’ve already seen these other things? It is not a human making these calls for the one billion Facebook users out there. It is a system calibrating the potential inventory in your feed, then discerning and determining what should be in that inventory, then drinking in the data on your interactions and optimizing the next time you log in. The same is said of Netflix. What you click on, what you like, what you rewind, what you re-watch, all of those data points are being drunk in by this digital platform.
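The feed-calibration loop described here can be sketched as a tiny ranking system. This is a minimal illustration, not Facebook’s or Netflix’s actual algorithm; the item names and the click-through scoring are invented:

```python
# Minimal sketch of interaction-driven feed ranking: candidate items are
# ordered by observed click-through rate, and every view/click updates the
# counts, so the ordering recalibrates on the next login.

class FeedRanker:
    def __init__(self):
        self.views = {}   # item -> times shown
        self.clicks = {}  # item -> times clicked

    def record(self, item, clicked):
        """Update counts after showing an item (and maybe getting a click)."""
        self.views[item] = self.views.get(item, 0) + 1
        if clicked:
            self.clicks[item] = self.clicks.get(item, 0) + 1

    def score(self, item):
        # Click-through rate with a small prior so unseen items get a chance.
        return (self.clicks.get(item, 0) + 1) / (self.views.get(item, 0) + 2)

    def rank(self, candidates):
        """Highest predicted interest first."""
        return sorted(candidates, key=self.score, reverse=True)

ranker = FeedRanker()
for _ in range(5):
    ranker.record("friend_a_post", clicked=True)   # always clicked
for _ in range(5):
    ranker.record("friend_b_post", clicked=False)  # always ignored
print(ranker.rank(["friend_b_post", "friend_a_post", "friend_c_post"]))
```

The never-seen item lands between the always-clicked and always-ignored ones, which is the “calibrating the inventory” behavior in miniature.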

Michael: Amazon’s intimate knowledge of my reading interests. That’s machine learning?

Dan: Yes, Amazon’s another great example [crosstalk]

Michael: Yes, [laughs] not just reading interests anymore, but everything Michael wants.

Dan: They know, man. They know you better than you– I’ve had book recommendations from Amazon; maybe some weren’t so good, but I’ve definitely had some that are super spot on.

Michael: Super, yes.

Dan: I didn’t even know that this was an author, this was a topic that nobody in my life would know that that’s actually something I’m interested in.

Michael: Well, we will get to this point, because I’ve thought about this one. I’ve got a pretty eclectic reading habit. I usually buy 40 or 50 books a year from Amazon, and for years, of course, I was a big fan of independent booksellers. You’d talk to the proprietor, and maybe they’d say, “Maybe you’d like this over in mysteries, or this over in science,” or whatever, right?

Dan: Yes.

Michael: Okay, because maybe they’ve read 30 or 40 or 60 books, so that’s their exposure. Boom, compare that to Amazon, which is aware of one zillion books and is intimately aware of all of the weirdnesses of my own personal taste, and bam, can make matches made in heaven. Obviously, there are very useful and very friendly uses of artificial intelligence; we’ll get to the other stuff–

Dan: Totally.

Michael: We’ll get to the other stuff later.

Dan: We can get there.

Michael: Right on. Before the other stuff, I’ll give you this: here’s a conversation that’s happened more than once in my household. We’ve had Google Home for, whatever, two years.

It seems to me it’s gotten a little bit brighter; you can ask it to do some things now that it couldn’t do a couple of years ago. Every now and then, I’ll be with family or with a friend, I’ll ask it to do something, and it’ll say, “Ting, ting, ting. I’m sorry, I don’t know how to help with that.”

We’ll go like, “Why can’t you help with that? It seems like you should be able to help with that.” My response is, “Well, wait about six months and it’ll probably help you with that, but wait about two or three years and you might be thinking about unplugging it all together.” 

There is that trend, when we look at the future, where AI is a little bit scary, and some of the titans of tech in Silicon Valley are ringing the bell that we need to be in control of it before it’s in control of us. Let’s hold off on all that, the dark side of it. For now, yes, there are practical uses and applications of artificial intelligence that most of us already rely on. It’s part of our life every single day, right?

Dan: Google Maps, you name it. It’s exactly everywhere.

Michael: Now, step above that, what are the big areas, categories that let’s say investors might be really seriously investing in for artificial intelligence? What are the emerging trends that are either starting to or already have changed the world and you think will, let’s say, in the next two or three years?

Dan: Are we talking about any industry?

Michael: Yes. [crosstalk] Still big picture, the world we live in first, and then I want to bring it down to insurance, okay?

Dan: Yes, so I’ll talk about a couple of big-picture things. I’m going to talk at the level of capabilities, which is often what we focus on in research. AI is at a space now where it’s so open-ended and there are so many things that systems can do. Most of the executives in the world aren’t necessarily saying, “Okay, I know exactly what I need from an AI system. Give me side-by-side comparisons of these five vendors.”

Michael: [laughs]

Dan: It’s more important actually to say, “What the heck can this stuff do? Show me the new vistas of capability that I could hypothetically unlock. I need to pick among those before I start looking at which damn company to buy it from.” We’re going to talk at maybe a capability level here, because you’re at a high level with this. We’ll talk about different domains of proficiency the machines are taking over to some degree. One of these is the messaging and text world. We are still very far from any system that you could have a lengthy text conversation with, that could riff with you about your thoughts on Ralph Waldo Emerson and his essays– [crosstalk]

Michael: Isn’t there an annual competition where people try to fool humans?

Dan: Yes, the Turing Test. This is still a thing, and it’s still really interesting, but we’re far from totally open-ended, human-level conversation altogether. We’re not in a place of being able to do that at this point. We are at a place where a lot of exceedingly repetitive, or somewhat repetitive, text-related processes can now be automated. I’ll explain briefly what this capability looks like with one very quick example. This is going to help unlock the possibility space in the minds of listeners. They don’t have to understand the tech.

You just have to know what’s involved and what it can do. Here’s an example: we’re going to talk about replying to support tickets. If you run a really big business or a really big website– maybe you’re a retail bank, an insurance provider, or whatever the case may be– you get a ton of inbound support requests for different kinds of really repetitive questions: How do I get a new password? How do I unlock this? I need a refund for that. The way this process works, Michael– there’s more nuance to it, but here are the basics.

Human beings, as these tickets come in, can tag those tickets with whatever category they are– let’s call one “refunds”– and then go ahead and type a reply, or use a template, and send it back. Now, we can take 100,000, a million different instances of these support tickets that are already labeled in terms of the problem. Is it a delivery problem? Is it a website malfunction? Is it a refund request? Whatever the case may be, we already have a label on each of these inbound requests, we already have a human-typed response to each of them, and we have a set of templates these humans have used.

A machine can use that. It can be fed a new instance of a support ticket and say, “Based on the patterns I’ve detected within all the refund tickets– all 110,000 of them that I’ve seen– this has all the patterns of a refund ticket, and I am, let’s say, 95% confident in that. As an algorithm, I’m going to go ahead and send them the refund template.” It works with enough instances of labeling because, reasonably enough, these refund requests, Michael, don’t vary that much. Somebody might type you a whole page.

For the most part, you’re looking at emails that are two sentences to two paragraphs long. That’s a repetitive thing, and it’s always in text. It’s not like someone’s mailing in a VHS tape of themselves ranting that they need a refund; it’s a text email. We have a very similar medium of inputs, and with enough training and labeling on a very repetitive input, we can actually get to the point where we can route tickets, label tickets properly, and even train machines to respond to them.

Now, this explains some other areas of text, but I want to make sure this concept clicks before I go any deeper.

Michael: Well, [chuckles] it’s text to text. We haven’t gone from one medium of input to a different medium of output?

Dan: Yes, we can because actually that’s not that hard.

Michael: Maybe–

Dan: You could, hypothetically. In other words, to have your phone system be so adaptive as to listen to what someone says and then speak back to them in real time– that’s vastly harder. Even these text systems can’t answer and label everything. If you give them something totally out of the box that they’ve never seen, they’re not going to know what to do. In a good system in a business, that’s likely to remain in the “please send to the humans” bucket.

Anything below, let’s say, an 80% confidence threshold on what kind of ticket this is would just be routed to a human being who would handle it. That would be the way. [crosstalk]
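The route-or-respond flow outlined above, including the confidence threshold, can be sketched as follows. This is a toy illustration, not a production system; the training tickets, the templates, and the simple word-overlap scoring are all invented stand-ins for a real model trained on labeled tickets:

```python
# Sketch of confidence-thresholded ticket routing: score an incoming ticket
# against word patterns counted from labeled examples, answer with a template
# when confident, and otherwise route it to a human.

from collections import Counter

# Hypothetical labeled training tickets and reply templates.
TRAINING = [
    ("i want my money back please refund", "refund"),
    ("refund my order it arrived broken", "refund"),
    ("i forgot my password reset it", "password"),
    ("cannot log in need a password reset", "password"),
]
TEMPLATES = {"refund": "Refund template", "password": "Password-reset template"}

def train(examples):
    """Count word frequencies per category."""
    model = {}
    for text, label in examples:
        model.setdefault(label, Counter()).update(text.split())
    return model

def handle(model, ticket, threshold=0.8):
    """Reply with a template if confident enough, else route to a human."""
    words = ticket.lower().split()
    scores = {label: sum(counts[w] for w in words) for label, counts in model.items()}
    total = sum(scores.values())
    if total == 0:
        return "route to human"  # nothing recognizable at all
    label, best = max(scores.items(), key=lambda kv: kv[1])
    if best / total >= threshold:
        return TEMPLATES[label]
    return "route to human"  # below the confidence threshold

model = train(TRAINING)
print(handle(model, "please refund my money"))             # Refund template
print(handle(model, "my VHS tape rant about everything"))  # route to human
```

The 0.8 threshold mirrors the roughly 80% cutoff mentioned in the conversation: anything the scorer is unsure about stays in the human queue.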

Michael: [laughs] Pass this guy on, right?

Dan: Yes, exactly.

Michael: Machine learning, it seems would excel in an area where there are repetitive behaviors.

Dan: Yes, in an area where the inputs are reasonably, reliably consistent, and the required outputs are also reasonably, reliably consistent. If that’s the case, then we can put AI in the middle there: look at enough instances of those inputs and outputs, which are all conceptually somewhat similar, and find a way to automate it. Another example here relates to vision but ties to text: invoices. There’s a lot of white-collar automation in the back offices of big banks, big insurance firms, big healthcare companies. There’s a lot of just paper; people are still faxing stuff somewhere in the world.

Michael: [chuckles]

Dan: There’s a lot of these really ugly, old, stodgy systems running a lot of aspects of these businesses. A random example I can give you would be invoicing. Invoices don’t have a common format. It’s not like, “Okay, well, one inch above the [crosstalk] thing it says the company name, and then underneath that it says exactly how much you owe, and underneath that it says when it was dated.”

No, each invoice is a totally different layout, and invoices don’t even list the companies the same way. IBM might be “IBM” once, then it might be “IBM Corp,” and then somewhere they send an invoice that still says “International Business Machines,” probably. You’re looking at weird variations of things. What you can do is take these images of invoices that humans have labeled: okay, this means IBM, this is how much we owe, this is the date it’s due.

You get enough of those inbound labeled assets– these pieces of paper that some camera has to scan– and all of a sudden, you can get to a point, with the same degrees of confidence we talked about, where for maybe 80%, 90% of the invoices, we can just run them under the computer’s eye. The computer is going to zip up all the information: Who do we owe the money to? How much is it? When is it due? It just sucks it right up, without needing a human to sit there and jot it down with a pen. Again, we’re looking at really simple inputs, very repeatable, labeled the heck out of, and a really simple set of outputs. That’s another area where the same dynamic is at play.
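One small slice of that invoice problem, normalizing the many spellings of a vendor name, can be sketched like this. The alias table is hypothetical; a real system would learn these mappings from labeled invoices rather than hard-code them:

```python
# Sketch of vendor-name normalization: "IBM", "IBM Corp.", and
# "International Business Machines" should all resolve to one record.
# The alias table below is invented for illustration.

import re

CANONICAL = {
    "ibm": "IBM",
    "ibm corp": "IBM",
    "international business machines": "IBM",
}

def normalize_vendor(raw_name):
    """Lowercase, strip punctuation, and look up the canonical vendor."""
    key = re.sub(r"[^a-z ]", "", raw_name.lower()).strip()
    # Unknown vendors pass through unchanged, for human review.
    return CANONICAL.get(key, raw_name)

print(normalize_vendor("IBM Corp."))                     # IBM
print(normalize_vendor("International Business Machines"))  # IBM
print(normalize_vendor("Acme Ltd"))                      # Acme Ltd
```

As with the ticket example, anything that falls outside the learned patterns is left for a human rather than guessed at.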

Michael: All right. Now, let’s take a look at this industry in general, and then I want to bring it down to the retail level. What are maybe emerging applications in the insurance industry, whether at the carrier level, the retail level, whatever?

Dan: I’ll be very frank about it, and you can tell me which side you want to lean on. The vast majority of the current investment in this space, by really all accounts in insurance– and we’ve covered this domain for a while; if you Google “AI in insurance,” you’re going to find the same website I told you about, which will be ours– the majority of the innovation here is on the side of the mothership. It’s on the side of the big company.

In other words, it’s not as much right now in the agent world but there’s a tremendous amount in the core processes of the big house of GEICO and Allstate and these other folks. We can talk about the stuff that might be touching agents and what we could expect those shifts to be. If we want to talk about current trends, we really do have to stay with the mothership [crosstalk]

Michael: Give us two minutes on current trends, big picture stuff that’s happening maybe at the mothership level.

Dan: Cool. A few things here. We definitely have a lot of that process automation, white-collar automation. We talked about the paperwork stuff, et cetera. Sometimes it’s called robotic process automation.

Michael: Right.

Dan: [crosstalk] robotic, which I really don’t like as a name. RPA, as it’s often called, is being enhanced. A lot of these back-office systems are being improved, but that’s not specific to the industry. Really, the big details here are refining the way that risk is assessed. We’re looking at insurance being able to drink in more data than it’s currently drinking in, using more data sources, and also using data sources that are more recent. It’s famously said at Amazon, in a number of different speeches from folks who have worked there– one of whom was a fellow by the name of Danny Lange, who worked with them for quite some time.

He said that if we want to sell more of, like, red rain boots or something, the last two weeks of Amazon’s data is going to be better information to serve our recommendation engine than the last two years. Oftentimes with recommendation systems, the world changes in such a way that newer data needs to be valued differently. There are a lot more ossified ways of calibrating risk in insurance than, let’s say, how Amazon can calibrate opportunity when it comes to recommendations. Now, of course, Amazon can do that without the potential to hurt themselves. You can experiment a lot more without the same kind of pain as an insurance company that gets the risk calculation screwed up.
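The recency idea Dan describes, newer data counting for more than older data, is commonly implemented as an exponential time-decay weight. Here is a minimal sketch; the 14-day half-life and function names are illustrative assumptions, not anything Amazon has published:

```python
import math

def recency_weight(age_days: float, half_life_days: float = 14.0) -> float:
    """Weight an observation by age: the weight halves every `half_life_days`."""
    return 0.5 ** (age_days / half_life_days)

def weighted_rate(events: list, half_life_days: float = 14.0) -> float:
    """Recency-weighted average of binary outcomes.

    `events` is a list of (age_days, outcome) pairs, outcome 0 or 1.
    Recent events dominate the estimate; year-old data barely registers.
    """
    num = sum(recency_weight(a, half_life_days) * y for a, y in events)
    den = sum(recency_weight(a, half_life_days) for a, _ in events)
    return num / den if den else 0.0
```

A recommendation or risk model would multiply each training example by such a weight, so that the last two weeks of behavior outvote the last two years.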

Nonetheless, more data sources can be dragged into that. Can we look at more details as to where people live? There’s talk, certainly in the third world but potentially even in the first world, of pulling in social media information and calibrating that: all else being equal, if this person’s tweeting about X, Y, Z, is there a way to factor statistically what that really means for their risk? Dialing in and personalizing risk at a much deeper level than broad demographics. That’s one. Then there’s also looking at the claims process and being able to suss out making good on claims, and also to suss out claims fraud.

As it turns out, if you look at a million instances of insurance claims and you label all the ones that are fraudulent, there are patterns. You will find that people in certain demographics and certain regions, who have certain kinds of interactions with the company, who also maybe don’t have kids, whatever the case may be, carry factors that correlate with fraud risk. Instead of doing random sampling for fraud, we can much more tightly dial in where we want to focus our fraud efforts, and what things we can flag altogether because there’s a huge confidence level that it’s fraud right off the bat.
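The pattern Dan outlines, label historical claims, find where fraud concentrates, then target investigation there instead of sampling at random, can be sketched with simple per-segment rates. The field names (`region`, `claim_type`, `fraud`) are hypothetical stand-ins for whatever a real claims system records:

```python
from collections import defaultdict

def fraud_rates_by_segment(claims, keys=("region", "claim_type")):
    """Observed fraud rate per segment from labeled historical claims.

    `claims` is a list of dicts, each with a boolean 'fraud' label plus
    the segment fields named in `keys`.
    """
    totals = defaultdict(lambda: [0, 0])  # segment -> [fraud_count, total]
    for c in claims:
        seg = tuple(c[k] for k in keys)
        totals[seg][0] += int(c["fraud"])
        totals[seg][1] += 1
    return {seg: f / n for seg, (f, n) in totals.items()}

def priority_segments(claims, top_n=1):
    """Segments to focus fraud investigation on, highest observed rate first."""
    rates = fraud_rates_by_segment(claims)
    return sorted(rates, key=rates.get, reverse=True)[:top_n]
```

A real system would use many more features and a learned model rather than raw rates, but the principle is the same: replace random sampling with targeting driven by labeled history.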

We’re talking about such massive amounts of money spent and dispensed in handling claims, and in handling fraudulent claims. That’s another kind of back-office application.

Michael: Okay: risk, claims. Your third area? [crosstalk] What’s that?

Dan: [crosstalk] RPA and the process automation software.

Michael: Okay, process automation. All right. What I want to ask about next, and you could even go outside of the industry on this, is something I think is of central importance to the retail agent. What do we think about? We think about the customer relationship. Servicing the customer, engaging the customer, nurturing the customer, attracting the customer. It’s about the relationship. Where have you seen AI successfully intervene there, and where do you think it’s going?

Dan: All day. I’ve had insurance folks talk to me and be frustrated by this. Right now, the core AI investments at a lot of these big insurance firms have been in their central operations. As I mentioned, I don’t have any example of a major US insurance giant, or a European one, that’s put the bulk of their AI focus on the stuff that their agents are going to use. I’m not hating [crosstalk]

Michael: Let’s talk about maybe the carriers that are not independent carriers. They’re direct, and so they have the relationship with the customer. Are they investing in AI to communicate with that customer?

Dan: The smaller folks, in any regard, are not. I can talk about what could be unlocked in terms of the customer relationship, and what trends exist, like you said, across other spaces, or what’s likely to happen over time.

Michael: Because presumably this is what we’re going to be seeing. [crosstalk] Adoption in this industry is often behind adoption in some other industries, for lots of reasons. Give us a window on what you think, a line of sight. What does the future look like here?

Dan: We’ll dive right into it. I would also like to address why this is likely to be the case almost exclusively, for the next little while, for the big, big, big companies, as opposed to, let’s say, a mid-sized independent carrier.

Michael: Fair enough, okay.

Dan: Well, we’ll get into why that is, but let’s just talk about the possibility space. When it comes to the customer service side of things, what could be possible in terms of interfacing with the customer would potentially be CRM-oriented applications that have to do with marketing, with staying in touch with our various customers. In other words, being able to find a way to have smart recommendations on when to reach out and nudge somebody with a message, or when someone is ripe for some potential upsell.

If you have an X percent chance that within the first three years of somebody buying this kind of insurance product, they’re pretty openly in the market for this other kind of insurance product, there may be some information on when we’d want to ping them to try to line up that cross-sell, and that might be something we could be doing well.
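The cross-sell timing idea can be estimated directly from purchase histories: among customers who bought product A, how many added product B within the window, and how long it typically took. A toy sketch, with invented data shapes rather than any real CRM schema:

```python
from statistics import median

def cross_sell_stats(histories, product_a, product_b, window_days=3 * 365):
    """Estimate cross-sell rate and typical lag from purchase histories.

    `histories` maps customer id -> {product_name: purchase_day}, where
    purchase_day is a day offset. Returns (rate, median_lag_days): the
    share of product_a buyers who added product_b within the window, and
    the median delay, which suggests when to ping remaining customers.
    """
    lags = []
    eligible = 0
    for purchases in histories.values():
        if product_a not in purchases:
            continue
        eligible += 1
        if product_b in purchases:
            lag = purchases[product_b] - purchases[product_a]
            if 0 < lag <= window_days:
                lags.append(lag)
    rate = len(lags) / eligible if eligible else 0.0
    typical_lag = median(lags) if lags else None
    return rate, typical_lag
```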

Michael: That strikes me as extremely valuable. A short story from my own life, too. Before we sold Agency Revolution, one of the conversations our technology team, our team leader, had with me was about the integration of machine learning into that product. Again, it’s been a year and a half, so I don’t know how far they’ve gotten with this, but the premise is that the machine can learn when customers are more likely to leave the relationship. In other words, there might be some sentiment analysis on inbound emails, there might be some analysis of lack of engagement, and so on and so forth.

At some point, it would seem that a modest amount of data can give us some indication of which clients are most engaged and which clients are most likely to leave.
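The churn premise Michael describes, combining lack-of-engagement and sentiment signals into an early-warning score, might look like the sketch below. The weights and threshold are invented for illustration; a production system would learn them from labeled examples of customers who actually left:

```python
def churn_risk(days_since_contact: int, negative_emails: int,
               opened_last_5: int) -> float:
    """Crude churn score in [0, 1] from three engagement signals.

    Hand-picked weights: prolonged silence counts most, then negative
    sentiment in inbound email, then recent-email disengagement.
    """
    score = 0.0
    score += min(days_since_contact / 365, 1.0) * 0.5   # silence
    score += min(negative_emails / 3, 1.0) * 0.3        # sentiment
    score += (1 - min(opened_last_5, 5) / 5) * 0.2      # disengagement
    return round(score, 3)

def flag_at_risk(clients, threshold=0.6):
    """Return ids of clients whose score crosses the alert threshold."""
    return [c["id"] for c in clients if churn_risk(
        c["days_since_contact"], c["negative_emails"], c["opened_last_5"]
    ) >= threshold]
```

Even this crude version captures the point made above: a modest amount of data can already rank which clients most need a proactive touch.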

Dan: You can find the patterns that lead to, “Hey, we picked somebody else,” and the patterns that lead to, “Hey, we’re keeping these people for good.” That lets you [crosstalk]

Michael: That’s the CRM-related application of AI. Diving into that data and looking for clues. Okay, I’m going to ask about customer communication, bot services.

Dan: The bot type stuff you were saying?

Michael: Yes, talk to us about bots.

Dan: Sure. We’re in so many spaces in terms of coverage and research at this point, from life sciences to manufacturing, you name it. Almost nothing has more hype and bloviation than the bot world. Luckily, the term itself is going down the hype cycle, which is nice to see because [laughs] [crosstalk] headlines. Honestly, what bots are good for is the exceedingly repetitive stuff that we’ve talked about. We need big volumes and we need very similar inputs and outputs. I’m not downgrading the fact that these technologies are valuable, I’m just saying we’re so far from broad conversation.

[crosstalk] Where bots are about to play a role, the agency person, again, is not going to have enough volume to train a bot to know what to respond to. What they would need to do is use a technology trained by, let’s say, their mothership company. If GEICO decides to build something for GEICO agents, or if a technology firm like the one that you were running gets to work with [crosstalk]-

Michael: It specializes in that.

Dan: – agents. Now, all of a sudden, you can handle all their inbound requests. You can be the one that does the training, and you can make those recommendations and allow your users to click auto-reply for certain categories of stuff. You can determine the templates, but then maybe they can help turn it on. Now, that’s complicated. That’s very, very hard stuff. GEICO has so much money that I really feel like they might be the first one to knock that ball out of the park. [crosstalk]

Michael: Indeed. What about chat bot online?

Dan: That’s the same exact thing. It’s not just email that requires it; the same thing applies to chatbots on the web, or chatbots on your phone, or whatever. It’s the same kind of similar inputs, similar outputs, labeling, training. You need huge reams of data to even think about getting started. Even there, you need very smart [crosstalk]
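The “similar inputs, similar outputs, labeling, training” recipe Dan keeps returning to is, at its core, intent classification: map an inbound message to a known category, or hand it to a human. A deliberately tiny word-overlap version; real systems use learned models and far more labeled volume, which is exactly Dan’s point about individual agencies lacking the data:

```python
def train_intents(examples):
    """Build a bag-of-words profile per intent from labeled messages.

    `examples` is a list of (text, intent) pairs.
    """
    profiles = {}
    for text, intent in examples:
        profiles.setdefault(intent, set()).update(text.lower().split())
    return profiles

def classify(text, profiles, min_overlap=2):
    """Return the best-matching intent, or None to hand off to a human."""
    words = set(text.lower().split())
    best, best_score = None, 0
    for intent, vocab in profiles.items():
        score = len(words & vocab)
        if score > best_score:
            best, best_score = intent, score
    return best if best_score >= min_overlap else None
```

Anything below the overlap threshold returns `None`, i.e., route to a person rather than risk a bad answer, which mirrors the early-handoff behavior Dan predicts.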

Michael: There’s presumably a Moore’s law application to this technology, and the growth is exponential. At some point, we will be frightened by how smart it is. Whenever that is, it would seem that an agency has a lot of repetitive conversations that could be facilitated and accelerated and delivered more cost-effectively with technology than with a human being. Is that possible? It would seem inevitable to some extent.

Dan: Yes, to some extent it is inevitable. I would say the following for the current state of affairs and looking forward. Number one, for most bot interaction, if we’re talking about an insurance claim and an individual rep, the odds are that the likelihood of a machine giving a somewhat related answer, or a blase answer, or a not-exactly-on-the-money answer is very, very high, and the relationship value is still high. A couple of bungled relationship interactions, even though you’re saving time because you’re doing other stuff while they’re talking to the bot, and it’s still risky.

That’s the reason why this stuff hasn’t stepped into that use. The other thing is the bot is only going to go so far. You might answer questions one, two, and three, but if you get to number four and you’ve got to pass it to a human, it’s pretty much end of story. Look, there are some exceptions, but it’s somewhat unrealistic to suspect you’d really be going deep on any individual topic. Because of the volumes, it often just makes sense for most agents, even where the tech is, to think of this on a longer time horizon.

I do suspect that a lot of early sales-enablement interaction, like, “Hey, you’re a new lead,” then conversation, conversation, conversation to warm you up and see if you’re worth passing to a salesperson, I think that’s totally AI-able. If we look years ahead, I think some very common support stuff may be more than handleable. Even in both of those cases, I do want to reiterate: we are going to need more volume of interactions than the average agent is going to have. If you have, like, an eight-person shop or something like that [crosstalk]

Michael: That isn’t going to happen. It’s going to have to happen [crosstalk]

Dan: [crosstalk] some mothership. Whether it’s a vendor company or a large firm, someone’s doing the aggregation and finding those common patterns, and that’s what’s going to train it to be useful for an individual agency. All that AI skill living inside agencies is not going to happen [crosstalk]

Michael: AI like these bots, whatever we’re going to call them in the future. There are plenty of them. I can get online and I can end up in a conversation with one. I can get on the phone and I can end up in a conversation with one. Do you know, in general, what the customer or marketplace sentiment is towards them?

Dan: I honestly don’t think there’s one answer. I think there are some instances where it’s favorable, and I’ll give you an example. Bots aren’t our entire focus, but we have done a lot of stuff on them. Again, if you Google things, that’s where people find it. “AI chatbot use cases,” for example, is a very popular article that we’ve done, and there are some examples there where there really does seem to be, we would estimate, a legitimately positive ROI on the bot, not just in cost savings but even in sales. The examples that I would suspect are real winners, where people are like, “This is freaking cool,” would be something like Domino’s or 1-800-FLOWERS.

There are only so many versions of a pizza that you can get, really, you know what I mean? If you’re buying insurance, you’re going to ask all kinds of questions about premiums and what happens if that value [crosstalk]

Michael: You still may have the same first three questions. In other words, at least [crosstalk] there’s a library of the first three questions that most people ask, whether they’re going to be online and say–

Dan: That is handleable.

Michael: They’re going to say, “I need to report a claim.” The bot says, “I’m so sorry. Let me help you.” That kind of thing.

Dan: [crosstalk] is totally handleable. You can route. Then you can give them the appropriate options. You can take them to the next step. Can we automate the first two or three steps and still make that really smooth and nice? Sure. I think that when there are high stakes, or things are more complicated than the technology can permit, that’s when I suspect the user experience is rough.

When do people like bots? The simplest way I could frame it, in terms of what appear to be the healthy-ROI cases for these technologies, is when the thing can just handle their goddamn needs [crosstalk] outcome.

Michael: More than once. More than once in my life, it’s like, I just want the business taken care of here, whatever it is. I’m calling a vendor and it’s like, “Good.” Bot, bot, bot. Then at some point, they’re switching me over to a human and it’s like, “No, I don’t want to talk to a human right now.” Look, I love people. This is an instance where I just want to get off the phone. This isn’t my life. This is taking care of some stupid task that’s an interference. Now I’m with the person and, of course, they have to do the, “Hey, how are you? Are you having a great day?”

It’s like, “No, I’m in high-drive, get-it-done mode.” In those cases, for me, and I don’t know if I’m an outlier or not, bots are a high-satisfaction experience, not a low-satisfaction experience.

Dan: If they nail your outcome, then you are damn right. However, if they bumble through two or three really crappy responses and then essentially put you on hold to wait for a human, now you’re pissed that you had to even go through [crosstalk]

Michael: Right. The poor human.

Dan: This is the risk. The risk is, in order to train these things, you need them to be exposed to people. If you expose them to people before they can get the job all the way done, you have a chance of creating friction. The way that this is likely to happen is that we’re going to see the capability space expand gradually. Bots are going to hand off to humans pretty quickly in the early days. Then, as we learn, more things are going to be handleable than before.

No one’s going to run the experiment of “can we totally automate X,” because by question four, people literally hate you and they’re pissed off that you made them go to this damn robot. Now, if the robot answered questions four and five, they would send you a tweet: “Hey, GEICO is the coolest company ever. Oh my God, I was on my phone, I handled everything.” It’s those two last questions that are the difference between whether they’re happy or not. Again, this is what gives 1-800-FLOWERS and Domino’s such a great advantage.

Those interactions are always so freaking simple, and the interface and the selections can be presented so simply, that people can be happy as a clam not to have to call Domino’s and say, “Yes, can I get extra cheese on that,” and yada, yada. Just four clicks. Ten minutes later, you get a pizza. Everybody’s happy about that.

Michael: I have one last big question for you. As somebody who spends their professional life immersed in artificial intelligence, paint us a short picture of what AI experts and specialists think the world might look like a little farther down the road.

Dan: Fortunately, again, I don’t have to tell you my personal opinion. I can just tell you what thousands and thousands of PhDs have told me. We’ve done a lot of research on the longer-term risks from a PhD perspective, from people that really know the science, as to where we land. What is the scenario in which we live X number of years from now? There’s a lot of convergence in the 2060-ish range.

That’s where we get to a place where things are unrecognizable, but there are a lot of things that might happen beforehand. The supposition is that general intelligence may arrive. Who knows? These PhDs, or the averages of these PhDs, could be a hundred years too optimistic, but they also might be, like, twenty years too pessimistic. We don’t really know. [crosstalk]

Michael: Well, what are they optimistic about? [laughs] [crosstalk] Then, I want to know what they’re pessimistic about too. What are they optimistic about? Why is the world better?

Dan: Some people suspect that if we had general intelligence, artificial general intelligence, then we would be able to cure diseases and prevent a lot of the sources of strife today, like food shortages, shortages of the things that we live on. The grand problems of humanity might end up being solved, and these would be benevolent, genie-type entities that would really serve our needs. We wouldn’t be cleaning the house. We’d be able to focus on things that are more meaningful. Maybe they would be empathetic and care for us in ways that would be really cool.

The people who are optimistic tend to see the machine-as-servant model. I don’t mean that in a degrading way, like we’re degrading the machine as a servant, but it is a tool built by man for man’s benefit. A lot of the detrimental use cases fall either into AI as a tool wielded by those seeking power, or AI as self-aware and intelligent and not really caring that much about people. Not necessarily hating us, but just really having no preference for us, having other things to do. We can create something more.

Michael: Taking on the characteristics of, potentially, a competitive species.

Dan: Again, we can really spin down on that.

Michael: Okay. [laughs]

Dan: There’s a lot of supposition that you could have four dozen AIs, artificial general systems, around the world, just doing nice, friendly things like routing transportation traffic and routing electricity, and a lot of those things.

Michael: Routing the self-driving cars, right?

Dan: Yes, exactly. [laughs] Also curing diseases and making policy decisions. But if you have one of those that happens to also have the traits of a winning biological species, like maybe it defends its resources if they’re encroached on, or maybe it even takes resources if it can get away with it, then even though these might be malicious traits, that thing might actually win. That thing might just become more capable and acquire more resources.

It becomes more powerful because it was built with the same things that make species win on this planet. There are many people that have the theory that, “Well, because it didn’t evolve in the bloody, tooth-and-nail world of animals, it will be kind.” [laughs] I can’t say I’m in that camp, I have to tell you. There are others who would say, “If it has the tendencies that gear it toward survival, including destroying other things to survive if it needs to, it’s just likely to be around longer and potentially be the winner.” Those same vile forces that we’re trying to escape, we actually won’t. That’s my supposition. I’m not sure what that means for us, but fingers crossed.

Michael: Okay. [laughs] All right. Fingers crossed. All right. In a moment, I’m going to ask how people can reach you and learn more. Before I do that, I just want to land the plane one last time, all right? This is an opportunity for you to deliver a short message to retail agents. Again, I want them to connect with this in a real way. What do you want to say?

Dan: What I want to say to the retail agent is: don’t worry about every tweet about AI that’s out there.

If there’s anything I would say about the trends that are going to affect the retail agent, it would be in the domain of business moving upmarket. Just like you and I talked about handling things via bots. When I bought insurance, I was a small, small business guy with a martial arts academy. I didn’t want to sit in an office with a guy who walks me through all this damn paperwork.

The low-hanging fruit, I think, is very much the business where nobody’s going to want to shake any hands. “It’s all about the relationships.” To be honest, man, the robo-advisers are making enough progress to show us it’s not all about the relationship, but that’s really-

Michael: It’s not all about the relationship.

Dan: – at the bottom-of-the-barrel stuff. I would say [crosstalk]

Michael: That’s about the transaction.

Dan: [crosstalk] have two things. One, lean upmarket whenever you have the choice. If you’re dealing with people like me, to be honest, I would love to have just filled out an online form and gotten six recommendations for possible insurance options, where we just freaking click one and never have to show up to see some stupid dude. Look, you’re a nice guy, but it’s like, “Why the hell am I doing this?”

Leave that world far behind you, and lean away from it as you continue to position your business. Secondly, keep an eye out for the customer-facing technologies that the big guys are developing, because the odds are those waves are going to hit you X number of years later. You might as well be aggressive about adopting what’s working for the big dogs. If you’re paying attention to anything, and you don’t have to watch all the Terminator movies, look at the customer-facing stuff at the big guys and think about what that might mean for you.

Those are the things that are going to affect your life. In the meantime, lean away from guys like me who don’t want to sit down, because our needs are not robust enough to actually require it. That’s my takeaway.

Michael: Got it. All right. Dan, by the way, this has been a fun conversation.

Dan: Well, I’m glad.

Michael: I hope that my listeners are as enthusiastic or intrigued about the future as I am. If people want to learn more, or somebody wants to reach out and make contact with you, how?

Dan: If folks are interested in insurance, if people are at big firms and they need market research, obviously, that’s our focus, I just wanted to address this stuff. The website is as simple as E-M-E-R-J.com, Emerj. That’s where all the market research on AI that we have lives. What I’ll do for you, Michael, though, is this: we actually have a really robust piece comparing the AI applications of the five biggest insurance providers in the US.

If you want to look at what the future’s going to hold, the best lens is actually just to read through this article. It’s written well, and the case studies involve no technical verbiage. If you want a bit of a crystal ball as to where the money is getting invested, that’s probably the coolest thing for the show notes; it would be that article. “AI in insurance”: if you put that in Google, you’ll probably find Emerj. I’ll make sure I get that one along to you, Michael. For someone in your audience, that would probably be a cool place to start.

Michael: Email me that. Also, I’ll probably post a link to it on LinkedIn or Twitter. All right. Thank you so much. This was a fascinating conversation, Dan.

Dan: Awesome, man. I’m glad to be here.
