The Cyber Periscope
Extending Vision in Cyber Risk & Claims
Introducing Wiley's The Cyber Periscope podcast, hosted by Pamela L. Signorello, a seasoned partner in Wiley’s Cyber Insurance Practice. Like a periscope – two mirrors aligned to reveal blind spots – this series looks around corners with professionals from across the cyber insurance community. Pam and her guests share their perspectives on cyber risks and opportunities, creating a collective lens into the emerging trends implicating cyber insurance.
To receive future episodes of The Cyber Periscope, subscribe on the platform of your choosing:
Episode 3: Beyond the Malware: The Leadership Test Hiding Inside Every Ransomware Event
February 4, 2026
What determines whether a ransomware event becomes a controlled disruption or a full-blown crisis? In this episode of The Cyber Periscope, Pam Signorello sits down with Dr. Stephen Boyce, CEO and President of The Cyber Doctor, and Don Wyper, COO of DigitalMint, to explore why ransomware outcomes diverge so dramatically. From leadership readiness to the shrinking role of reputational risk, they break down how decisions made months before an attack, and in the pressure cooker moments after, ultimately define the fallout.
Transcript
Transcript not yet available.
Episode 2: AI’s Role in the Evolution of Cyber Threats
December 16, 2025
What happens when the familiar faces of trusted clients, coworkers, and peers turn out to be AI-generated imposters? In this episode of The Cyber Periscope, Pam Signorello sits down with Nate Lovett of Wiley's Cyber Insurance Practice to dissect the rise of AI-driven cybercrime.
From multimillion-dollar deepfake scams to automated ransomware and hyper-targeted phishing, AI has introduced a range of new, rapidly evolving threats for insurers to navigate. Together, Pam and Nate explore why organizational defenses are lagging, how insurers are adapting policy language, and what this next wave of digital crime means for risk management.
Transcript
Pam Signorello: The finance team filed into the small conference room just after 8:00 AM. Coffee cups, laptops, half-finished bagels, a routine meeting with the CFO before the workday really got started. The CFO was already on the video screen, along with several senior colleagues. Everyone on screen looked a bit tired. One rubbed his eyes, another shuffled papers. The CFO, her tone slightly clipped, the way it always was on tight deadlines, said the company was moving forward with the confidential transaction. A large funds transfer needed to be executed that morning – no delays, no chatter outside the room. The team on site exchanged nervous glances, but the CFO's face was steady on the screen. She asked the controller to share her screen. Everyone watched as the controller logged into the banking portal. The CFO nodded along, eyes flicking between the camera and something off [00:01:00] screen, as if lawyers or bankers were waiting just outside the frame. The numbers were dictated. The destination account confirmed. The CFO said, "Proceed." The controller clicked submit, the screen flashed, transfer successful. That was the moment when every person in the room exhaled because the pressure of the moment had broken. But none of them, not one, had actually spoken to their CFO or to anyone else on that screen. Every face, every voice, every breath in that video had been a deepfake, and more than $25 million had just left the company's account.
Artificial intelligence is rewriting the rules of cyber risk in real time. Today, we are diving into how insurers [00:02:00] can keep pace with a threat landscape that has started writing its own scripts. I'm joined by Nate Lovett, a seasoned cyber insurance coverage attorney at Wiley, here to help us decode this new frontier. Welcome to The Cyber Periscope, Nate.
Nate Lovett: Yeah, thanks for having me. That was a pretty chilling intro.
Pam: You know what? It's a true story.
Nate: No way.
Pam: Yeah, it is. I mean, I may have taken a couple of liberties, but more or less, true story. And not an old one, 2024. I mean, the truth is, artificial intelligence is transforming a lot of things all at once. On the one hand, it's empowering fraud detection. On the other hand though, it's powering new threats like deepfakes and other AI-driven scams. In the insurance context, it's being used to help underwriters underwrite cyber risks. But the first thing I'm curious about, Nate, is what trends are you seeing in cyber claims involving AI and what are some of the most concerning [00:03:00] AI-enabled cyber threats that insurers should have on their radar, in your opinion?
Nate: Yeah, so it's becoming abundantly clear that the cyber threat landscape will be among the many things that are touched or transformed by AI in the coming years. And there are a few, what I think are overarching factors that will likely be at play, and the first is just the sheer scale and speed of cyber attacks. So take ransomware, for example. Threat actors are using AI to automate ransomware attacks, and this is done by spotting vulnerabilities in companies' networks, writing malicious code or malicious scripts. AI is also allowing threat actor groups to execute multiple attacks at the same time. So that's what we mean by the scale and the speed. There's also what I think is a lower barrier to entry. In the past and in the current landscape, we've seen the ransomware-as-a-service model where developers sell ransomware code or malware to other hackers, [00:04:00] and those hackers then deploy the attacks, and that's led to just an uptick in the number of incidents that we're seeing. And now with AI, unsophisticated parties can write code and carry out attacks that will likely lead to an increase in the frequency of these types of events.
Pam: So, more to come in terms of the use of AI in the execution of ransomware attacks, in all likelihood.
Nate: Yeah, threat actors are always looking for new ways to leverage these attacks, and I think AI is unfortunately the next thing that they will focus on or harness. Other than that, in terms of the scale and the speed, some of the other trends that are likely on the horizon and what we've seen bubbling up already are just the highly personalized schemes. So automated spear phishing, for example, which I'll get into what that means in a little bit. Also, deepfake exploitation. So, similar to the example that you provided at the outset, this is using voice fraud or video fraud to carry out different types of attacks.
Pam: Right, certainly. Alright, [00:05:00] so you mentioned some important concepts like generative AI, deepfakes, automated spear phishing. Can you break those down a bit for some of our newer listeners?
Nate: Yeah, sure. So I'll walk through those three at a high level. So first, generative AI – it's a type of artificial intelligence that creates new content. So think text, images, audio, video, and it does so by learning patterns from existing data. So one of the most classic or prevalent forms of generative AI is ChatGPT or other types of chatbots that produce newer, novel outputs. So they ingest all this information, they receive prompts from a user, and it spits out the response based on what it has learned from the data that it has ingested and analyzed. Going back to the ransomware example quickly, I have heard of threat actors using [00:06:00] AI chatbots to negotiate ransom payments. So they're using generative AI in that sense, in terms of how they are carrying out their schemes and how they're negotiating their extortion demands against the companies that they're targeting.
Pam: That's interesting. And what about deepfakes? Can you dive a little bit deeper? No pun intended, on what those are.
Nate: Yeah, so deepfakes are AI-generated media. So think video, audio, or images that show a person or persons doing or saying something that they never did. So when I hear deepfake in terms of the way this plays out in the real world, I think about this in two different buckets. The first bucket is social engineering or e-crime. This is similar to the example provided at the top of the show with the Zoom video transaction, where folks at the company thought they were engaging with higher-level management and [00:07:00] went through with a significant transaction under the impression that they were dealing with actual humans, their actual colleagues, when it turns out this was all fake or imposters, so to speak. The second bucket is the extortion-related deepfakes. So think about an example where a threat actor generates an AI image, a fake image, an intimate photo of you or someone within your organization, and says, unless you pay this extortion demand, we're going to publish this. This obviously has reputational ramifications and is not something that you want to be disclosed, so they might come to you with a demand to pay in exchange for a commitment to delete or destroy the image. So that's something that we actually have seen in our work.
Pam: And then, how about automated spear phishing – using personal details about the victim? Can you give us a little bit more on that and [00:08:00] how that plays out?
Nate: So a lot of our listeners are probably familiar with the concept of phishing, and threat actors are now using AI to make the phishing schemes or phishing campaigns more targeted and more sophisticated. So AI tools can gather troves of public information from social media, LinkedIn, and company websites. And the threat actors are trying to leverage or use this information to learn about their targets, so that the targets, when they receive a communication, they don't have as many red flags that often make someone pause and think, wait, is this a legitimate email that I received? Is this a legitimate person on the other end of this phone call?
Pam: It's believable.
Nate: Right, exactly. The voice sounds like the voice of somebody you know because the threat actor has used a YouTube clip or found other public speaking clips of this person to crib their voice to make a very targeted and highly sophisticated [00:09:00] clip to send to a third party. So it's really trying to prey on human trust, and I think a general inclination to help, and often a sense of urgency.
Pam: That makes perfect sense. If someone calls or texts me and, let's say, mentions the name of my dog, I mean, I'm not active on social media, but if I were, then presumably they could get that information from there. And if they start building a profile and using that in their efforts to get information from me, they've now developed some level of trust on my part. My guard is down, I'm vulnerable now.
Nate: Yeah, I think that's exactly right. And they're also using a nickname that might not be widely known to others, or finding these little hooks that, like I said, make the red flags kind of stay away. In addition to the more targeted schemes, AI will likely also allow threat actors to cast a wider net with their phishing efforts. And by doing so, they increase the chances of success and finding someone who might not have their guard up, so to speak, or be able to tell that the communication [00:10:00] is actually a phishing attempt.
Pam: Yeah, that makes sense. I think, as a practical matter, we have to start questioning more of what we see and what we hear because AI is really being used in these sophisticated attacks more and more. And we can't count on AI to solve all of our problems in the defense against these tools. But of course, on the upside, AI is being used defensively for predictive anomaly spotting as well as faster triage and containment. So in those respects, the adaptive use of AI by the good guys really has the potential to secure significant loss reductions. And I know I'm not the first person to use this term, but it seems clear that we are in an "AI arms race" where attackers are leveraging AI to move faster and [00:11:00] breach mitigation, or the use of AI in breach mitigation, has to evolve to match pace. Is breach mitigation matching pace?
Nate: At this point, no. I think we're still in the early phases of this. We saw a report from Experian, a 2025 report, that said only 37% of companies are currently using generative AI to fight fraud. And compare that to the 72% of business leaders who expect AI-generated fraud and deepfakes to be among their top operational challenges by 2026. So this was in the same report, so we have 37% of companies who are actually using it, and then you have 72% of leaders who expect deepfakes and different types of AI fraud to be among a top operational challenge. So hopefully going forward, these two concepts will be more aligned.
Pam: Right. Yeah. So it sounds like from what you're saying that the gap [00:12:00] between threat acceleration and organizational readiness may be actually widening.
Nate: And I think people are hesitant to put things in place before they really know the extent of the exposure, which often is a sensible approach, but it might put you behind the eight ball, so to speak.
Pam: Sure. According to Microsoft's Digital Defense Report for 2025, this is so interesting to me: AI-generated phishing emails achieved a 54% click-through rate compared with 12% for human-written phishing lures. So AI-generated phishing emails are nearly four and a half times as effective as human-written phishing emails. So, wow.
Nate: Yeah, that's a little bit disturbing.
Pam: So the challenge here obviously is building AI models with secure code, free from security flaws [00:13:00] and vulnerabilities, that can spot cyber intrusions before they happen so folks can deploy countermeasures. So essentially, we're looking to AI to help us be right a hundred percent of the time. Which seems reasonable and likely, right?
Nate: Yeah, exactly.
Pam: So if you had to bet on the next big AI crime wave, what would it be?
Nate: That's a good question, but I don't think it's really one thing as much as it is an increase in sophistication of the current threat landscape. So threat actors are always looking for a leg up. Take ransomware, for example. When these attacks first started happening, it was based on data encryption. So the threat actor would deploy the malware to lock up systems so that the company couldn't operate its business, and it would try to compel folks to make a payment in exchange for a key to unlock their data and systems. When companies got more resilient and had better backups and [00:14:00] did not need the key in order to restore their data, threat actors got smart and started to not only encrypt data at the outset, but also exfiltrate data. So they would go in, they would access or exfiltrate information that they collected within the system and also encrypt because, in this case, even if a company has the wherewithal or the means to restore the data from backups and doesn't need to pay in order to get a key, the threat actor is also dangling the fact that they exfiltrated six terabytes of its data over its head and saying, hey, we're going to publish this unless you pay us.
They're looking for ways to put companies' feet to the fire, so to speak. So they're always trying to figure out a way to stay one step ahead, and AI, unfortunately, allows attacks to be more personalized, as we talked about, be more convincing, and this includes the sophisticated phishing [00:15:00] schemes, the deepfakes that we've mentioned, and automated large-scale attacks. I don't think it's one thing as much as it is an increase in what the landscape currently is.
Pam: Are cyber policies built for AI risk, and how is policy language evolving to address AI-related threats?
Nate: So I don't think you can go to a cyber conference right now and not get a panel that's talking about this exact question, but I'll hit it based on my thinking about it. So, in traditional non-cyber liability policies, insurers are starting to introduce absolute AI exclusions. And then, within cyber specifically, AI is turbocharging the familiar risks for carriers. So there's a lot of thoughtful conversations being had within cyber insurers about how to respond to this, and they will likely consider adding endorsements or sublimits that directly address AI fraud, synthetic identity, and deepfake impersonation events. They'll likely also [00:16:00] consider offering products with dedicated AI coverages, but part and parcel with this, I think the underwriting process will be more robust, or we'll have questions that are tailored specifically to the new AI landscape. So that might include warranties regarding AI-related security protocols. So, think today's MFA requirements, there might be something similar for AI-related protocols and what you have in place to prevent AI-tailored fraud.
And then similarly, there might be questions surrounding voice validation requirements. So in the past, in the fraudulent instruction context, there has been the out-of-band authentication requirement or the verification requirement, essentially meaning if you receive an email from somebody and they're asking you to wire funds, before wiring funds, you need to reach out to them by a means other than email. So pick up the phone and call that person to verify or validate the [00:17:00] request. And if you don't do so, coverage is not going to apply if you haven't satisfied that verification requirement. So there could be something similar down the road in terms of the voice validation, but it's just interesting because when you receive a call from somebody, what's the process that's going to take place to validate that the person you're speaking with is actually them?
Pam: Yeah, that all makes good sense. In terms of the policy wording, it's tricky, right? Because AI is embedded in almost everything now. So I think there's just going to need to be a lot of clarity around the coverage and the limitations. I think that'll be really important, and it'll be interesting to see how the language evolves in cyber policies in the market as they address AI with more specificity. Is there any emerging coverage case law relevant to AI-related claims?
Nate: So, like a lot of our discussion today, we're at the very early stages [00:18:00] of all this. Courts are just starting to see disputes involving machine-generated impersonation, and at least at this point, I'm not aware of any decisions in the AI context in terms of a first-party event. This also comes with a lack of precedent, and with that, an unpredictability in coverage outcomes, which is not necessarily new for the cyber space in general, given its lack of substantive case law. But where we have seen AI in the litigation context, and where we will continue to see it, is in the consumer privacy litigation, which I know you work on quite a bit as well. So we're in the midst of this wave of website tracking litigation, which is based on a company's use of website analytics technology. And these are typically pixels or cookies that are used for advertising and marketing purposes.
And the genesis of these cases is that the use of tracking technologies, [00:19:00] if done without the website visitors' consent, is akin to wiretapping or eavesdropping and violates common law invasion of privacy or privacy statutes like the Federal Wiretap Act and the California Invasion of Privacy Act. But with the hyperfocus on AI, companies are looking for ways to implement AI tools into their day-to-day operations. So it's almost certain that the surge of privacy-related litigation is going to continue. And where we have seen AI and privacy litigation so far is through the use of AI-powered chatbots. So these are chatbots that are serving as customer service agents or virtual customer service agents. They assist with returns and exchanges, for example, or they're operating in the background and providing human agents with real-time guidance during customer service interactions with customers. And these lawsuits are typically alleging that the third-party AI technology is intercepting [00:20:00] and recording the customer communications without the customer's consent.
Pam: So more of the same, in terms of the tracking litigation and the causes of action that plaintiffs are exploiting to pursue those claims, but now with kind of AI-generated mechanisms behind the tracking.
Nate: Yeah, exactly. I think it's just going to be a new variation on the same theme. We have the pixels and the chatbots and the session replay and the different types of website tracking claims now, and they're just going to have this new component of AI-focused business practices that the plaintiffs' bar will likely go after, in terms of this next phase.
Pam: What are the top two to three things that you think claims professionals or coverage attorneys should be learning right now to stay relevant?
Nate: I think the key really is to stay hyper-informed of developments [00:21:00] in this space. I mean, this is moving at such a rapid pace. A few years ago, we all knew about AI. We didn't know how it was going to rear its head, so to speak, in our day-to-day, but at this point, you can't go a day without reading an article about AI or going to a conference that doesn't have AI on a panel. So it's on everybody's mind. It's really moving at this rapid pace. So the key is just not getting left behind. So I think there's a few different ways to try to stay on top of it. So Google Alerts are a good practice. Taking the time to read up on new issues, attend conferences, and speak with others in your network, and taking the time, I think, is important. We all, when we start our day, we really just usually jump right into it, start responding to emails, start working on reports, start taking calls. But I think if there's a way to carve out 30 minutes at the start of your day to read Law360 or read different articles or read different opinion pieces, I think [00:22:00] that's going to be really helpful just to stay on top of things and not get left behind.
Pam: Yeah, that's excellent advice. I think you're right on, staying plugged in and connected with the cyber community, I think, is going to be invaluable. I also think your insights are invaluable, so I know that your practice has you deeply embedded with these issues on a daily basis, so I really appreciate you taking the time to share your experience with me today. Thanks so much, Nate.
Nate: Yeah, thanks for having me. This was fun.
Pam: Thank you for joining me on The Cyber Periscope. Just as a periscope works by aligning two mirrors, this podcast works because professionals like you are willing to share your experiences, insights, and perspectives. Together, we create a clearer, wider view of the risks and opportunities that lie ahead. If you found today's conversation helpful, I encourage you to keep the dialogue going with your colleagues because the more we connect our lenses, the fewer blind spots we'll face [00:23:00] as an industry. I'm Pam Signorello, and I look forward to building our collective periscope with you next time.
Episode 1: Phishing Claims Test “Direct” Causation Chain
November 6, 2025
Welcome to the premiere episode of The Cyber Periscope, where Pam Signorello sits down with Bill Knauss, special counsel in Wiley's Insurance Practice, to discuss the 2025 NetDiligence Cyber Claims Study, the evolving landscape of cyber risk, the outsized impact of ransomware, and a pivotal court decision challenging the concept of “direct” causation in a cyber insurance policy. View the court case reviewed here.
Transcript
Pam Signorello: Welcome to The Cyber Periscope, the podcast where cyber insurance professionals join forces to see around corners together. A periscope, at its core, is two mirrors aligned to see what can't be seen directly. That image reflects what we do in this industry. Each of us brings our own experience, knowledge, and perspective, but it's when we turn our lenses toward each other's that the real clarity emerges. In a world where threat actors evolve daily [00:00:30] and surprises are the enemy of sound claims handling, The Cyber Periscope is our shared tool. Here, we bring together voices from across the cyber insurance community to extend our vision, eliminate blind spots, and better protect those we serve. I'm Pam Signorello, a partner in Wiley's Insurance Practice Group, and I invite you to join me in building the periscope of all periscopes, a collective view of cyber risk, liability, and opportunity. [00:01:00] Bill Knauss, welcome to The Cyber Periscope.
Bill Knauss: Hey Pam, thanks for having me.
Pam: Thanks so much for joining. Before we dive into today's conversation, it's time for our "First Notice of Fun." This is where we shake things up with a quick game to get to know you a little better.
Bill: I'm excited.
Pam: Yeah. And hopefully make you laugh.
Bill: Okay.
Pam: I'm definitely going to laugh. Think of it as your own playful version of a claims first notice of loss. Except here, nobody loses.
Bill: Okay. Good.
Pam: You ready?
Bill: Yes.
Pam: Alright. Your game. You didn't know anything about this.
Bill: I don't know anything about this.
Pam: That's what's going to make it fun. Your game is word association.
Bill: Okay.
Pam: Okay. So I'll say 10 words, one at a time. You tell me the very first word that comes to mind. No overthinking.
Bill: Okay.
Pam: Ready?
Bill: What happens at the end?
Pam: We all win.
Bill: We all win. [00:02:00] The world just gains a little bit of insight into how my brain works.
Pam: Okay. Exactly.
Bill: Good.
Pam: All right. You ready?
Bill: Yeah.
Pam: Phishing
Bill: Line.
Pam: Even though phishing is "ph"?
Bill: I was thinking of the animal.
Pam: And that's fine. All right. Firewall.
Bill: Firewall. Firewall. Bricks.
Pam: Insurance.
Bill: Money.
Pam: Password.
Bill: Joke. And I could explain that one.
Pam: Go for it.
Bill: Okay, so I was driving to work, this is probably two or three years ago, and was listening to the radio, and they were talking about the joke of the year. And the joke of the year – I'm going to botch it here, but was there was a little, Billy was going to school and had to come [00:03:00] up with a password for his email account and was just struggling over and over because all of the passwords kept getting rejected because they were too long. And so he asked his teacher, well, I can't think of anything else that works that's short enough. And the teacher said, well, what are you trying to input? He said, well, I was trying to use Snow White and the Seven Dwarves because that was the only thing I could think of that was eight characters.
Pam: You're a dad, huh?
Bill: Kind of botched it.
Pam: Yeah, that's a dad joke.
Bill: That is a dad joke, but I thought it was funny.
Pam: That's a good one, though. I like it. I like it. Alright, moving on.
Bill: Okay.
Pam: Coffee.
Bill: Daily.
Pam: Malware.
Bill: Evil.
Pam: Vacation.
Bill: Fun.
Pam: Not [00:04:00] evil.
Bill: Not evil. Yes.
Pam: Encryption.
Bill: Encryption...secure.
Pam: Claims.
Bill: Occupation.
Pam: Last one.
Bill: Okay.
Pam: Dog.
Bill: Hairy. I am allergic to dogs.
Pam: That explains it.
Bill: My fears are realized.
Pam: Alright, that wraps up today's "First Notice of Fun." Thanks for playing along, Bill. I think our listeners are going to enjoy your answers almost as much as I did. Now let's get into the heart of the episode. NetDiligence recently issued its 2025 Cyber Claims Study, 77 pages of solid cyber claims info. I know I look forward to its release every year. How about you?
Bill: Yeah, it's always interesting to read because you see the high profile data security incidents in [00:05:00] the news and then doing what I do every day in the world of cyber insurance claims. I have my view of the world, but it's always so interesting to see totally zoomed out how huge the cyber insurance industry is and what sort of risks are developing, growing, how expensive they are. It's just a massive risk and I'm glad that we have an insurance product that addresses that risk.
Pam: Awesome. Surprise. Ransomware continues to make up a disproportionate amount of cyber claims, both in terms of number and average cost per incident. I'm sure you didn't see that coming.
Bill: No. Ransomware has been at the top of the list for a couple years now. Definitely no surprise there. I think one of the big factors that drives that [00:06:00] average cost for ransomware is that it's a first-party and a third-party risk, meaning that an insured can incur its own costs due to a ransomware event and then also be subject to third-party liability in terms of claims, lawsuits, demands, regulatory actions that may come as a result of the ransomware incident. Yeah, no surprise there.
Pam: Yeah, I'm sure we'll have ample opportunity to talk about ransomware in future episodes. The other causes of loss, rounding out the top four though, include business email compromise, which comes in second in terms of number and cost, wire transfer fraud, and hacking. And because each one of those categories often does involve social engineering, I thought we'd spend some time today talking about an important decision issued this year in an insurance coverage case [00:07:00] involving both social engineering and computer fraud coverage. I think you know the one.
Bill: Yep. Office of the Special Deputy Receiver v. Hartford Fire Insurance Company, et al.
Pam: Right. And we'll include a link to the case in our show notes, but before we dig into that decision, for the benefit of our newer listeners, do you want to give a 20 second description of social engineering?
Bill: Sure. So like the name implies, social engineering involves the use of social or psychological manipulation to obtain information or access from someone that then can be used illicitly. Common social engineering methods that you may have heard of are things like phishing, smishing, and vishing, and even spear phishing, which we'll talk about in a minute here. The idea is that a threat actor lures a victim to click a malicious link [00:08:00] or disclose information by email, text, or otherwise, and then uses that information for their criminal purposes.
Pam: Perfect. Yeah. Social engineering is often considered one of the most effective and dangerous cyber attack methods, particularly because it actually bypasses traditional security tools, right? Because the target itself grants the access to the threat actor and it's the threat actor's engagement of an otherwise innocent person in the fraud that can complicate a cyber coverage analysis. Which brings us to the case at hand. In the Office of the Special Deputy Receiver case, the Federal District Court for the Northern District of Illinois addressed the issue of direct loss in the context of computer fraud coverage. And more specifically, the question before the court was whether loss is incurred as "a direct result" of a computer [00:09:00] crime when there is some intervening human element. In that case, it was an employee effectuating a transfer of money instead of say, a hacker installing code that forced the transfer.
Bill: Yeah. The question of "direct" or the interpretation of the word "direct" comes up a lot in cyber insurance. It's a word that we see all the time in cyber insurance policies. And just to give a little bit more context, the incident in question here involves a spear phishing attack against the CFO of a company. A spear phishing attack targets a specific individual, in this case, a CFO, who presumably had access or knowledge of money that the cybercriminal could then try to take. So in this case, the CFO [00:10:00] did fall victim to a spear phishing attack and his credentials were compromised. So the cybercriminal gained access to the CFO's email account. The cybercriminal set up email forwarding rules, and then, now impersonating the CFO, got employees of the company to issue payments from accounts belonging to a couple of insolvent entities that the insured was managing during liquidation.
So that's the factual background of the case. There were two insuring agreements under the cyber insurance policy in question here. One of those insuring agreements was for social engineering fraud and the other was for computer fraud. And the cyber insurance carrier recognized coverage [00:11:00] for social engineering fraud but declined coverage for computer fraud. And this is where the discussion of "direct" comes in, because the carrier said, okay, well, the policy language for computer fraud requires that the transfer of money happen as a direct result of data entry by the cybercriminal. And the carrier said, okay, well, that didn't happen here because yes, the CFO was duped. Yes, there were email forwarding rules set up. Yes, the cybercriminal sent emails from the CFO's account. But ultimately it was the employees of the insured company that issued the payments from the accounts belonging to the insolvent entities.
Pam: One [00:12:00] of the main takeaways from the court on the direct loss issue really is that the concept of direct loss does not require, right, that the underlying computer fraud be the sole cause of the loss, so long as it was in fact a direct cause of the loss.
Bill: Yeah. Well, there is differing authority between jurisdictions on the question of direct loss. A direct loss could mean proximate causation, and there's authority for that. There are other courts that use a narrower definition of the word "direct." In other words, that "direct" means no intervening causation, immediate cause and effect. And the insurance carrier cited an Illinois appellate case called RBC [00:13:00] Mortgage for the proposition that a direct loss is much narrower than a proximately caused loss. But in this case, the court found RBC Mortgage to be distinguishable and ruled that the loss was a direct result of computer crimes, even though it was an employee of the insured and not the cybercriminal that triggered the payment. So yeah, in this case, the court did find that there was a direct cause and used an interpretation of "direct" more along the lines of proximate cause.
Pam: It almost begs the question why the word “direct” is in the policy.
Bill: Yeah, certainly.
Pam: Not why it's in the policy, but why a court would read it out of the policy, in a sense.
Bill: Yeah, certainly there's other authority that supports the idea that "direct" means exactly what it says – direct, no intervening causation.
Pam: Another important issue that the [00:14:00] court decided in this case is one relating to mutual exclusivity, specifically whether the social engineering coverage, under which the carrier here did in fact pay its sublimit toward this claim, is somehow mutually exclusive from the computer fraud coverage.
Bill: Right, so based on the specific language in the cyber policy, the court held that the two insuring agreements under which the insured sought coverage were not mutually exclusive. The policy divided social engineering coverage into two parts, Part A and Part B, and the insurance carrier recognized coverage for social engineering under Part A, which is where the insured organization receives fraudulent instructions from a cybercriminal. The policy specified that part B of the social engineering coverage could not include a claim that was also covered under the computer fraud coverage. But the policy did not [00:15:00] include the same limitation about Part A, namely, the policy did not indicate that Part A of the social engineering coverage could not include a claim that was also covered under the computer fraud coverage. So because the policy was explicit that Part B of the social engineering coverage was mutually exclusive from the computer fraud coverage, but was silent as to Part A, the court inferred that there was no mutual exclusivity between Part A of social engineering coverage and computer fraud coverage.
Pam: So for a claims handler analyzing potential coverage for one of these matters, or something along the lines of one of these fact scenarios, a necessary practice, right, is to consider all potentially relevant insuring agreements and not rest after finding that perhaps one may have been [00:16:00] triggered.
Bill: Yeah. Yeah, and I think this is why we as coverage attorneys probably find ourselves in the position of drafting 20-page coverage letters, because otherwise you can find yourself looking back after the fact, discussing coverage under a part of the policy that may not have been immediately obvious, instead of addressing each and every insuring agreement that may potentially apply upfront, putting those positions out there, and giving the insured a chance to review and respond to them.
Pam: Because you and I live all things cyber insurance coverage every day, we maybe sometimes lose sight of the relative newness of cyber insurance. But the truth is cyber insurance is a relatively novel product, and as a [00:17:00] result, there aren't yet a lot of court decisions interpreting the coverages provided.
Bill: Absolutely. We're still in a state of play where every decision necessarily deserves our focused attention, particularly as cyber threats are rapidly maturing and evolving, and the language and scope of cyber policies are doing the same in tandem.
Pam: Bill, thanks so much for talking through the Office of the Special Deputy Receiver decision with me today. Although, as with all cases, the decision was of course fact-specific, it adds to the small but growing body of case law in this context.
Bill: It does. And really, each decision issued in the cyber context now can be influential in shaping the understanding of the scope of these policies for insurers, policyholders, and brokers. It was great to join you today.
Pam: Thank you for joining me on The Cyber Periscope. Just as a periscope works by aligning [00:18:00] two mirrors, this podcast works because professionals like you are willing to share your experiences, insights, and perspectives. Together we create a clearer, wider view of the risks and opportunities that lie ahead. If you found today's conversation helpful, I encourage you to keep the dialogue going with your colleagues. Because the more we connect our lenses, the fewer blind spots we'll face as an industry. I'm Pam Signorello, and I look forward to building our collective periscope with you next time.

