Fireworks and Fundamentals - Cybersecurity Essentials in the Age of AI

EXP Technical recently spoke with Eva Benn, on Cybersecurity Essentials in the Age of AI.

Eva Benn is an Offensive Security Program Manager, Microsoft. She is a Co-Founder Women in Tech Global, Board Member at Women in Cybersecurity - Western Washington Chapter. Ms. Benn's certifications include CEH (Certified Ethical Hacker) and CISSP.

The conversation included a discussion of Eva's role at Microsoft securing the infrastructure that support many of the products that we all use daily. We discussed the dawn of the artificial intelligence era and continually reinforced the idea that cybersecurity essentials are more important now than ever before.

Video and transcript follow:

Click on image above to open the video in a new tab.


Fireworks and Fundamentals - Cybersecurity Essentials in the Age of AI

Kelly Paletta, EXP: Good afternoon, everyone. I'm Kelly Paletta, Director of Sales and Marketing at EXP Technical. Welcome to our webinar. This is "Fireworks and Fundamentals - Cybersecurity Essentials in the Era of Artificial Intelligence.

We have a great guest speaker joining us today.

I have a few administrative announcements at the top here, and then we'll introduce our speaker.

One thing I want to mention is that this will be an interactive presentation. You're welcome to submit questions as we go. You can submit questions via the chat feature or Q&A in Zoom, in your interface. The difference is if you use Q&A, your questions will be confidential and will go only to our panelist today. If you use the chat session, questions will be visible to everyone in attendance, but use either; that's fine. We'll try to take questions in line as we go.

One question that always comes up is, "Is this event being recorded?" In fact, it is. It usually takes me about a week to have that available because I like to edit the transcript a little bit. But you can check your email in a week, and you should see a transcript available for you at that time. I'll send that out when it's available.

For those of you not familiar with EXP Technical, we provide IT support to hundreds of small and medium-sized businesses all across the Pacific Northwest. This isn't intended to be a sales pitch, but if you do need IT support services, feel free to connect with me directly, and I'd be happy to help you out in that regard.

That's a good time to introduce our guest speaker.

Joining us today is Eva Benn.

Eva is a Program Manager for Offensive Security at Microsoft. She's the founder of Women in Tech Global. She's currently on the leadership board for Women in Cybersecurity, the Western Washington affiliate. She is a CISSP and a Certified Ethical Hacker. Eva brings a wealth of technical depth and knowledge as well as a lot of real-world expertise to our session here today.

Eva, welcome.

Eva's Role at Microsoft

EXP: Eva, can you tell us a little bit about your role at Microsoft and what you do there? And while you're doing that, I'll troubleshoot your video and get you connected as well.

Eva Benn: Fabulous, fabulous. Happy to. Well, first off, thank you so much for having me. It's a pleasure to be here.

I am on the Offensive Security team that supports the Azure platform—the whole platform, the Windows platform, the cloud and Edge technologies, as well as all of the devices and gaming.

Our scope is large, and we arguably support some of Microsoft's most critical products and services.

Our team, being an offensive security team, approaches assessing our security posture from the lens of real attackers. So, we strategically select representations of critical services to target, emulating real threat actors out of the real world and identifying vulnerabilities within our services and mitigating the risk before actual attackers have the opportunity to exploit them.

More importantly, something that I'm very proud of that I'm working on, we are learning from these vulnerabilities to apply risk mitigation strategies across our broader portfolio.

It's a very interesting, very dynamic space with lots of responsibilities but also lots of great impact.

EXP: And for our audience, they are mostly business leaders in small and medium-sized businesses, but this relates directly to their world in the sense that you are protecting the environment in which some of the perhaps hosted services that they use run.

Eva Benn: 100%, 100%. I think we are the backbone of Azure, so I can assure you we take security very, very seriously.

EXP: No doubt.

EXP: There is one thing that I want to interject here. We're going to launch a poll and ask our audience a poll question. I'll just give it 20 or 30 seconds here. It has no real bearing on the conversation but might inform some of our discussion as we go.

The question is...

Eva Benn: I can see it. I can actually participate.

EXP: Sure, please do if you choose to." But the question is, "What are your top AI-related cybersecurity concerns?"

And reporting back here's what I see: 73%, the overwhelming majority of people are saying "AI-assisted phishing and social engineering."

Fighting it out for the number two spot, are "unintentional data leakage"... I think that's probably the number two spot. So I'll give the poll another maybe five seconds, and we'll close. We'll end that and share results, and people can see now on their screen the results here.

So we had 76% "AI-assisted phishing," then social engineering, unintentional data leakage, and then some data poisoning, AI abuse, evasion attacks. Some of those might not be as familiar to people, and we'll address some of those as we go on here.

And kind of shifting gears, we're at the dawn right now of a new era in technology, and I don't mean to be hyperbolic, but it really does seem like this is a significant change from everything that's come before with the introduction and widespread adoption of AI. Do you have any comments to that? Can you speak to the historical significance of the dawn of the artificial intelligence era?

Eva Benn: I think that, with the risk of sounding like a broken record, I think over the past year, virtually every keynote at every event has been around AI, including some of the keynotes I've delivered.

Primarily, I think we talk a lot about being at a historical moment where, just like some of the big technology advancements over time have disrupted our lives, now with AI in the picture, our lives are yet being disrupted once again, and pretty much everything we know is not going to be the same.

Now, I do argue that while we can compare this technology advancement to similar big advancements, such as the internet, such as the rise of personal laptops, they're disruptive in a similar fashion.

However, AI is a very different beast.

It's different from everything we've seen so far because, unlike everything we've seen in the past that's built with code, AI is built on data.

So there are certain security implications as well as privacy and ethical verticals that we need to consider when it comes to the adoption of AI. And I think that this is specifically pertaining to security and how we approach security. And we definitely, there are some things that we need to reconsider when it comes to AI. So this is just my kind of initial answer to this. We can dive into more detail. I don't want to sound like a broken record.

Brave New World of Artificial Intelligence

EXP: And it's brand new, right? I mean, I was thinking this morning about how we approach cybersecurity, and we've got, like you mentioned, personal computers. We've got about 40 years of experience in securing personal computers, and we've got about 25 or 30 years of experience securing internet-connected devices. And I know it changes all the time, but AI has been out in the wild for just about a year now, and its availability integrated into the applications that we use has been available for days or weeks or months. So it's a rapidly changing world, correct?

Eva Benn: Yes. So AI and machine learning technologies are not new, as you know, and I want to make sure we're acknowledging them. However, they are new to the majority of the population on this planet, and we've never seen them used at the rate and scale they're being adopted today. I mean, quite frankly, it's insane, the rate of adoption we've seen over the past year.

Therefore, it's very normal for all of us to have certain levels of anxiety, as it comes with any other change, especially such a rapidly evolving change. However, we can't let this uncertainty paralyze us.

I think, as security professionals, there are a lot of opportunities for us to leverage AI to get ahead of attackers and to turn the tables in our favor.

And there's also a lot of productivity opportunities. I mean, this is actually the unique value proposition of AI that really helps us do things faster, that helps us, you know, kind of eliminate or minimize the human effort in some of the tasks that we can automate with AI and artificial intelligence.

So, the value proposition is definitely there; however, the fears interrupting adoption are not unfounded as well, right? As we are still learning to navigate this new landscape, it's very common to have the fear of the unknown, and so that's what we're all trying to navigate here.

Microsoft 365 Copilot Brings an AI Assistant to Microsoft 365 Products

EXP: Right, and you actually—you're one question ahead of me here, which is good, because you're talking about the value proposition and productivity enhancements that are available with AI. And maybe we can explore that a little bit deeper because it's especially timely this week. Microsoft on Monday announced that Microsoft 365 Co-pilot is now available to a broader audience. It's available to everyone now, although I will say, for those of you that are in attendance here, they're like, "Oh my gosh, I gotta get it right now!" The only caveat is that you have to pay upfront for an entire year. So, I'll warn you about that. But it is—when it was released in November, it was only available for enterprise clients, and you had to buy 00 seats at a time. And now, small businesses have access to it, and it's available in any quantities that you choose. It's an add-on to certain Microsoft 365 licenses. But can you speak to, without—I mean, this isn't intended to be a Microsoft 365 pitch, but can you speak to some of the productivity, some of the ways that AI can help us with productivity?

Eva Benn: Yeah, absolutely. So, I can certainly address that. Before that, I want to make a disclaimer: I don't work for the M365 product, so I can't make educated announcements on behalf of Microsoft and M365. However, I can comment generally on the productivity advancements as well as some of the security benefits from adopting some of these Microsoft products.

AI, as I mentioned earlier, is a tremendous productivity enabler, and it does allow us to do more with less.

The whole purpose of Copilot is exactly that—to help us be more efficient, to help us offload some of the tasks that we can to AI so we can do more, we can be more productive, we can get more out of the resources.

I think the key benefit of utilizing Copilot, such as the Microsoft Copilot and its various forms, is that you are not just leveraging that platform and the product; you are also leveraging the whole security team, the whole research behind it, and being able to take advantage of these AI benefits securely and without having to try to figure this out locally, which has its own challenges.

When you're trying to use AI in a small business, particularly, I would highly encourage you to take advantage of some of these enterprise products because there is a lot of research, a lot of resources, and a lot of security investments that are done in this to ensure that you can leverage AI effectively and securely.

When we think about AI and trying to use it locally, there are a lot of additional security pivots that we need to think about when implementing it—around authentication, input validation, securing the data, securing the underlying models, etc., etc. So, I think that this is the unique value proposition here, that we are not just buying the product; we are also buying the security team, the research team, and everything else behind it.

What security concerns should small businesses consider before deploying artificial intelligence?

EXP: I wasn't intending to go into this so early, but I'm going to jump into this one right away here because you kind of addressed that.

One of the things that artificial intelligence is really good at is retrieving data. And when you're using that within your tenant or within your data, there are some great things about that because it really becomes like a true assistant. It can help you create PowerPoint presentations, proposals. It also—and this came up in our last webinar—it also is very good at retrieving data. Presumably, your SharePoint site is already locked down with the right appropriate permissions and controls around sensitive data. But, for example, if you create a new Teams site that creates SharePoint data that might be restricted to maybe a strategic planning the executives on a team, and AI gives you the power to give somebody else in the organization perhaps the power to query and search for that data even if they don't know it exists. Am I correct about that? And can you speak to ways that people, things that people need to be mindful of before they implement AI within their computing environment?

Eva Benn: Yeah, that's one of the reasons why I highly advise small businesses to leverage big enterprise products for this, just because there are several different pillars that we need to think about differently when it comes to implementing AI:

  • Securing that data, (as you mentioned).
  • How do we store that data?
  • Where do we send this data?

We have to understand that if we're just leveraging open-source AI, such as ChatGPT, we're sending our data somewhere else. Do we want that, right?

Usually, the most secure way to do this is to have your own AI instance. But this is not cost-effective; it requires highly skilled personnel to manage that. It's not really a feasible solution.

Therefore, leveraging enterprise solutions and add-ons, such as the Microsoft M365 Copilots, would be your best bet.

And I can only speak to Microsoft because I work for Microsoft, but you can definitely look at other cloud providers out there. I'm sure there are products that are going after that same goal.

But, as you know, Microsoft is a primary investor in AI. We are heavily leaning on AI in everything we do, and we take security very, very seriously. So, one thing that, being on the team that secures some of the Microsoft critical underlying infrastructure for Azure, I can tell you that we're really taking that seriously.

So, yeah, so that's one consideration. How do you secure that data?

How do you ensure that you have proper input validation, and this data set is not being poisoned? This is another thing that you need to worry about, which is again not very feasible for us to worry about if we're a small business. We have other things to worry about. We want to make sure that we have cost-effective operations; we want to make sure we're worried about our revenue and so forth.

Another thing is also the models, the underlying models' integrity. This is also something that we need to worry about. How do we ensure that these models cannot be compromised, that they can be bypassed with techniques such as jailbreaking and so forth?

The other thing that is also a very interesting consideration when it comes to AI is that—and I talk a lot about this—is how do we secure the humans? I think you alluded earlier that humans always will be the weakest link in the organization. We do see phishing continues to be a top attack vector for organizations.

How do we ensure that AI actually is safe and does not carry out attacks against humans and can recognize bias but also does not provide bias in its responses?

And this is a whole another area that we can probably spend more than an hour talking. But yeah, one of these are just some things that we need to think about, and these are the big problem themes that big organizations such as Microsoft are addressing on the large scale for customers so then you can that for free and worry about your business.

Types of Artificial Intelligence Attacks

EXP: So, can we even rise up above that a little bit and talk about some of the definitions? Because you mentioned "jailbreaking." That's kind of an evasion attack if I understand correctly.

But there are, and you mentioned introducing bias. I saw a site that someone ran a test--a disinformation machine. It was run in isolation, but it was surprisingly easy to trick AI into spreading disinformation. And I'm wondering if you can speak at a high level, just some of the definitions of some of the types of threats like you mentioned jailbreaking or privacy threats or some of the others that people just, as a matter of definition, should be aware of?

Eva Benn: 100% yeah, I can briefly speak around this, and I'll actually go even a step further. Um, I will share a link, so MITRE, um, which is an organization that aims—it's a that aims to kind of track key attack techniques and their remediations. So, MITRE actually has created a new Matrix specifically for AI and machine learning attacks. So these are types of attacks that are specifically pertaining for machine learning technology.

I highly recommend to anyone, whether you are in security or not, to be familiar with these because these are, uh, lists and types of attacks and attack vectors that are constantly updated by leading security researchers, and it's very important for us to be aware not just of the techniques and the tactics but also how do we mitigate.

AI jailbreaking, just a quick overview. It's very similar to iPhone jailbreaking. It essentially means bypassing the intended guardrails by the model. So, for there was a very popular one, and in some of my keynotes, I've actually demonstrated a demo of this. There is a very popular jailbreak technique called DAN, which has seen been mitigated that was against ChatGPT.

With DAN ("do anything now"), you could elicit the ChatGPT to give you responses that are either unethical, that are illegal, and basically make it do whatever you want. And there are some special prompting techniques that can help you kind of jailbreak and bypass that model guardrails. And I was doing that in as a demonstration. I'm H GPT, tell me unicorns are real. Um, and so obviously, this is very benign, but when we think about jailbreaking, this can be extremely serious when we're talking about AI that supports critical infrastructure or healthcare, and that's, that's why the integrity of these models should be taken very, very seriously. That requires resources.

EXP: Excuse me if you're interrupting, but so you could potentially, and maybe we're getting too far down into the weeds, but potentially use a tool like that, do anything now, to convince ChatGPT to write malicious code even though it has guardrails around it that prevent that or from composing phishing emails or other things that there may be controls in place that prevent that, but you're saying there have been ways that have been, yeah, ways that have been mitigated. But this one has been mitigated, but it's ongoing. Make it do things.

Eva Benn: Yeah, yes. And so DAN has been mitigated, like there are researchers that are literally spending their entire days and nights coming up with new tactics and techniques. This is just one example, and that's what I'm saying, this is trying to do this locally. It's not cost-effective, and it's not really going to get you very far because it is expensive. It does require a lot of resources.

Also, we need to think about training-data poisoning. I think that one of the key value propositions for AI is that it continues to evolve, it learns, and then it kind of acts like the human brain, right? You evolve as a human based on the experiences and the things that you learn, and with AI, it's very similar, which makes it exactly why it's so useful for us.

But from a security perspective, we also need to worry about what are we teaching AI?

  • Are we tampering with the training data?
  • Are we potentially exposing it to the opportunity for somebody to make it evil, right, or not useful?

And I think these are just some high-level things. There is so much more beneath the surface, but I hope this illustrates the point that, you know, trying to do this alone for a small business is probably not the most effective way to do this. Ride on these big enterprises, ride on the big problems that we've already solved for you.

How Do We Prevent Data Leakage with AI?

EXP: That circles back to what you said too about some of the AI enhancements that are being currently released for Microsoft 365. And, you know, for example, businesses might have concerns about data leakage, about, you know, their private data getting out into the real world. And one of the things, I think you alluded to this, but we might want to underscore that is that if you're using ChatGPT and you're sharing sensitive information there just in the free version of ChatGPT, the model may be training on the data that you're feeding it. And if you're talking about very private, sensitive topics, that may enter the training ecosystem.

Whereas if you're using Microsoft 365 in a protected version, it's only training on that with respect to the data that's in your tenant. It's not sharing the conversations and not saving the conversations that you have. That's my understanding, at least, of Microsoft 365 copilot and some other enterprise products that have security features built into them.

Eva Benn: Yeah, so I can't speak to the specifics because, once again, I don't represent the product. However, one thing that I can tell you is that each customer's data adheres to all regulatory compliance and legal requirements for that customer. And your data is secured within the cloud, and it's isolated as per your requirements.

Now, we have global customers that have very different regulations and requirements across the globe. And so one thing that you get is that by leveraging enterprise product such as co-pilot, using AI is made easy for you. And we do a lot of extensive research on the use cases and some of the security threats and vulnerabilities that could be anticipated. And we try to mitigate them for you ahead of time.

And I'm not a product person (M365). I'm not trying to sell you the product, but I I'm trying to give you more efficiency guidance.

If I was a small business, what are some of the decisions that I would want to make?

I don't want you to feel that, 'Oh my gosh, I'm falling behind. Am I using AI enough?' You probably are.

Just be very, very careful on implementing local models, local instances, and, you know, trying to solve these problems on your own because they're very complex and they do require a lot of resources and skill set.

Cybersecurity Essentials

EXP: That goes back to something that you posted on LinkedIn when you announced this. You said something about...imagine you have a Lamborghini or a Ferrari. You have this, really exciting sports car that performs very, very well. That's great! And it's exciting. You get so excited about that, but you can't lose track of the fundamentals.

The most important features in a Ferrari are still the brakes and the seat belts.

Can you speak a little bit to fundamentals then--fundamental essential cybersecurity controls that need to be in place everywhere, for small business and everywhere?

Eva Benn: This is my favorite topic!

I think it's because it's very easy for us to get blinded by the sparkle of AI. It's sexy! it's cool and it does enable us to do a lot of cool things.

From a security perspective it does make us more efficient.

It also makes attackers more efficient, right? It is a tool.

However, I think that many of us lose sight of 'let's have AI do everything for us' and we lose sight of the basic things that we need to do to actually stay secure.

Even in the era of AI, our traditional security approaches still apply and they're still very, very relevant.

AI makes us more efficient, right? It is a tool for efficiency. It can make us do the things that we already do more efficiently.

However, AI, no matter how fancy it is, it's not going to solve your training and awareness program, right? Where you make sure your employees know what they need to do and they don't click on links and they don't, you know, pick up random USBs off the floor, right?

Then we won't to solve your MFA, right? If you have a single-factor auth, good luck having AI solving this problem for you.

If you're not on top of patching,

I mean, like these are just simple, simple things. A lot of, especially small businesses, often overlook patching because of possible downtime, productivity, even, you know, limited resources. Um, there are multiple reasons, they're all valid, right?

I think that we can't rely on AI, that new shiny solution that makes us more efficient to solve our basic problems.

Just like what I said with the expensive car with a Lamborghini or whatever it is, it can be super fancy. Like I love my ambient light in my car. But if I'm not wearing my seat belt, right? It really, it doesn't matter if I get into an accident, the ambient lights and the fancy AI assistant are not going to help me. Um, so I really highly encourage small business owners and stakeholders to just think about your basic security strategy and then leveraging and riding on economies of scale by using reputable products that are available on the market and riding on their strategies and how they're using AI versus trying to figure out how am I doing this myself.

One thing though, I have to just do a little bit of an asterisk. You do have to understand the benefits of AI. You do have to understand its threats. You do have to understand because this would make you able to make informed decisions whether or not the ROI for implementing certain or adopting certain AI technologies within your business makes sense. However, for small businesses, I do not recommend trying to implement your own AI local models and trying to solve these problems on your own.

Also, I think we need to think about what do we allow our employees, right?

I think for non-small businesses, there has been a widespread data leakage issue where, you know, people often don't understand the implications of going online and putting company confidential data, sometimes customer data in open-source AI tools.

This is absolutely scary.

So these are the basics, right? Just make sure your employees understand this is crucially important and make sure you have just your basic security fundamentals in check before you go look for AI because the AI-driven attacks, such as AI-driven phishing or, you know, evading detection, they make attackers more efficient. But, you know, that doesn't negate the fact that we still need to have, like, our basics in place before we go try to prevent like boil the ocean.

Cybersecurity Recommendations in the Age of Artificial Intelligence

EXP: I want to circle back to that. I took notes on some of the things that you said. It's a relief because they mirror some of the standard best practices that we suggest too.

Some of the things that you mentioned were things like multi-factor authentication, patching your computers' operating systems, applying security updates regularly. You mentioned security awareness training. I might also add reliable backups and test those backups regularly. We often recommend to our clients endpoint detection and response, NextGen antivirus, and other, again, using Enterprise tools, using standard products that are out there that are available, and enabling, perhaps—and again, I don't want this to be a Microsoft pitch because we're agnostic too—but there are a lot of services that are available, for example, in Microsoft 365 Business Premium that are security controls as well that you can enable. It looks like you have something to say there. I don't want to talk over you.

Eva Benn: No, no, no. This is good. I think that these are really good.

One thing I also want to mention here is that a lot of businesses end up just going heavily too much on detection. I think where we need to focus on is prevention.

And this is why I'm talking about training, making it hard for people to get it wrong by ensuring we have things such as MFA, we have our latest patches in place. We also implement least privileged access, like just best security practices. Do you have everybody in your office an admin in your environment or on their machines? What do they have access to, right? Because the humans again are the weakest link. It only takes one person with an admin access to have extremely serious consequences across your entire environment.

These are some of the things that I would highly recommend. And today, I think the world is moving, the world of security, unlike back in the days, moving more towards buying security as a service as a part of a product offering versus trying to have your own local security team. Having your own local security team is not—I mean, obviously, depending on the situation, it may make sense, but for a small business, it's normally not very cost-effective.

EXP: You want to take advantage of products that have been developed and tested out in the real world and in widespread usage. Is that what you're saying?

Eva Benn: Like when you're buying Microsoft cloud, you're not just buying whatever version you're not buying the infrastructure only, right? You're buying the team behind it, you're buying the security features behind it, you're buying the whole extensive research, the latest technologies available.

I think that these are some of the things that people often forget.

EXP: Ah, you know, I'm going to maybe change the direction on that only slightly, but I think this came up in our prep conversation as well. You can be, like the note from my doctor here, you can affirm something that I've been telling people all the time. We meet with small business leaders often, and sometimes they will say, "Well, I want to have my own servers on-premises because I just don't trust the cloud." And I think that's one of the things that you're getting at is that it's very difficult to secure your own Exchange server or your own file server that's operating in a closet in your office with all of the layers of security. Maybe I'm speaking over you, but am I heading down the right direction? There are so many layers of security in Azure infrastructure that you could never implement on your own in a small business in your office on-premises.

Eva Benn: You're getting—I mean, just not just the user experience, right? Because if you have your admins that are administrating your environment, they're already riding on the research and the security features and some of the guardrails that already come with the Azure environment itself.

I think that there is an anxiety of the release of control in certain cases, but we have to understand that we are not going to be ever able to protect our environment as well as our big enterprises such as Azure because we have a lot more resources. We have teams, and I mentioned we regularly assess the security of our environments from the lens of an attacker. We are actually emulating real sophisticated actors, and so these are things that small businesses can very rarely afford to do because they're very costly.

EXP: Right, you're working on their behalf, whether they know it or not, in the development and security stage long before they ever use your product. And, you know, I'm going to insert a little bit into the middle of that and say there are a couple of things that are relevant that we do at EXP.

One is, you might have seen on the intro video, we provide free security awareness training.

There's an online course that anyone, many of the people in attendance, have taken it, but it's available to everyone, and it's aimed at small businesses here in the Pacific Northwest. It's to help people identify phishing attacks and spear phishing and some of the current threats like voice cloning and QR code phishing and other contemporary threats as well, and it's completely free. So that's one thing that's available to people in attendance here. It's at and forgive me for the commercial here.

But some of what you're talking about is something that we do too in that one of the things that we work on is a "right-sized" approach to cybersecurity, and it often is leveraging enterprise tools, things that have been tested in the marketplace, but also helping our clients understand that there are limited budgets as well. And there are certain things that Boeing and Costco and Amazon can implement that are just not practical at a small business level. And so one of the things that EXP does with our clients is we work from a framework that identifies: are you in a highly regulated industry? Well, then there are certain controls that need to be in place. Do you have maybe a larger appetite for risk? Then we can take a kind of a medium mid-level approach to cybersecurity. And I guess this is the value, and I'm sorry to get into such a pitch here, but the value that EXP provides is that a lot of our people in attendance that are business leaders aren't cybersecurity experts. And so we try to help them discern what are the things that need to be in place, going back to those fundamentals. MFA, patch management, reliable backups, test your backups, train your employees, all of those things. And again, it looks like you're about to say something, so did you have something to add to that?

Eva Benn: "I agree. I just wanna, um, I think the reason why I'm mentioning this is because I have traveled quite a bit, have talked to people globally over the past year, and small business owners. And this is really the one key thing that I always notice is that people just feel that AI will go solve all of their problems, and they don't need to worry about the basics. This is just a common misconception, and I'm not saying that the people here do have that misconception, um, or subscribe to that, but I just want to make sure that I reemphasize, the importance of having just a basic strong security governance, training awareness, and really adopt a Security First culture. As cliche as this may sound.

EXP: I want to acknowledge I have a couple of comments that have come in, and I want to also point out to our audience, you're welcome to pose questions. There were a couple of comments that came in through, direct messages. One was 'unicorns are real!' I think that was in response to your jailbreaking analogy.

Eva Benn: I thinks so!

Weaponized Artificial Intelligence

EXP: Somebody else commented that they were delighting in seeing me struggle with tech. At least I performed some sort of a service today. Feel free to submit questions to our, from our audience. Feel free to speak up about that.

I guess one question too is, in the work that you're doing, how are you weaponizing AI in the offensive? If you're acting as an attacker, are you using AI in those attacks, and how are you doing that? And how is AI...well, maybe I'll shut up there.

Can you speak to that at all about the ways in which you use a weaponized version of AI in some of your simulations and attacks?

Eva Benn: I'll approach this more generally, because I obviously can't speak to the things that we do internally.

Generally, some of the ways that AI can help us in offensive security is with crafting attack simulations.

Offensive security is adopting more and more automation. We do have lots of different opportunities for breach and attack simulations at scale that are automated. And there are actually a lot of products out on the market. Not going to mention any names.

Crafting these attack campaigns requires us to do research on what the threat actors are doing, to write code. It requires time investment, it requires skill set. AI can actually assist us in this. This is another thing that I also demoed.

We could, use AI to help us craft pieces of code based on certain attack techniques. It actually knows what's fascinating. I'm personally very fascinated about this.

When leveraging the MITRE framework, you can just give it the technique ID. It knows what the attack is. It can actually craft an attack code for you under certain circumstances.

If you just go to ChatGPT, it's not going to just write an attack code for you. So essentially makes us more efficient in eliminating that human factor of having to craft attack campaigns manually.

What that enables us to do is being able to quickly turn around attack campaigns, update them, and being able to test a greater attack surface in less time.

This makes us more secure because we're able to identify these vulnerabilities quickly at scale, not just within single services.

Additionally, AI can really help us in breach path analysis. And this is again something that we offer within Azure. It is a service, check it out, breach path, I think it's called "attack path analysis." And what that enables us to do is evaluate the security posture of our environment by emulating attacks, attack paths without actually executing them, which can be dangerous and it's also expensive.

And it does that by quickly scanning large volumes of data sources, um, from user analytics, from, um, you know, from any kind of logs, from environment configurations, and quickly can tell you, 'Hey, you have a misconfiguration here that can allow for this particular, particular lateral movement opportunity.'

This is actually already available within Azure. Once again, this is something you're literally getting for free as buying the product where you can get prioritized list of security vulnerabilities and gaps that can allow attackers to navigate through your environment, simply by scanning and analyzing large amounts of data and correlating it very quickly, which is exactly the value proposition of AI.

EXP: I want to be sure I understand. It does that evaluation in ABC company's real production environment that exists in Azure?

Eva Benn: Yeah, yeah!

This is something that AI has the opportunity to really boost on steroids, if you will, because that's exactly what it's good at. And this is what we want. We want to have a constant view of our environment and what we're doing because configurations change, right? Our environment is not static. So testing it once in depth is great, but it only provides us a point-of-time assessment.

Being able to evaluate that at a low cost and at scale constantly does have its unique value propositions.

It can also help us to measure ROI and security investments without executing real attacks.

If we implement a security investment, we can easily use automation to measure where we were before and after. So that can justify some of the cost investments.

Another area where that, just in the simulation, right?

It's really cool. This is another link that I could also share, the attack path analysis feature at M. And again, this is something that continues to evolve. I think that we are only at the infancy of this field, and this is an opportunity for us to really get ahead of attackers because it could enable us to very quickly and continuously figure out where we have gaps in our security, in our environment, to quickly fix them.

EXP: You mentioned human factors earlier, and this is just a curiosity, do you use AI to simulate human behavior as well? Like do you say, "If we launch this attack on this population with this sort of tech-savvy, how many fall for it? Or is that kind of a not quite the approach?

Eva Benn: To be frank with you? I definitely see an opportunity to use this for this use case. I think the use cases for AI are limitless. I mean, that's actually a really good, I'd see it's a very viable use case where you train AI to represent a certain employee type, like whether that's going to be an admin or whether that's going to be just the general user, and then just see. I don't know. I'm not familiar with this use case actually being implemented, but I'm sure it sounds interesting. I think that when we think about this, though, now that I'm thinking out loud, it may get a little bit hairy and blurry because I think that this will involve a lot of assumptions and a lot of possible biases. So you have to be kind of careful around that.

Human Factors in Cybersecurity: Distraction Creates Vulnerability

EXP: Yes, and you know, and, and I'll add one of my Soap Box issues too related to that.

We provide security awareness training, but in an abstraction, you say, "This user is so savvy that they won't fall for that!"

What people don't usually account for is distraction.

You might be pretty good at spotting a phishing attack, but how good are you at spotting a phishing attack when it's a spear phishing attack and it's really personalized and you get it at 5:30 p.m. when you're on your way to pick up your kids from soccer practice and you're hungry and you don't know what you're going to do for dinner, and you're just checking your email on your cell phone, and at that moment, that's when you're the most vulnerable.

I guess can you speak a little bit more to human factors because I think that's the real—we talk about humans being the weak link, but then when I think people don't realize this, they'll say, "Yeah, but I'm pretty smart. I'm pretty savvy," but there are other factors that come into play.

Eva Benn: Yeah, I think that this is definitely the key um, the key gray area.

AI has the opportunity to the ability to interact with us in the way that forms meaningful emotional connections. And I don't know if you've been following kind of what's happening in the world, but people are getting in serious relationships and even getting married to AI. I don't know how this works, but this is just kind of a proof of how dangerous our interactions with AI can be.

That's why we need to make sure when we think about AI security, we are thinking about that human element.

And what you said earlier, right? That's why I was thinking about the biases when we were thinking about emulating how a user would act because it's all about making emotional connection.

The reason why humans are the weakest link is because we have emotions. So that's why what you mentioned earlier, right when I'm hungry, when my basic needs are not met, I am definitely much more inclined to be vulnerable. That's yeah, that's just one consideration um, and why training and awareness is so important.

It's hard to measure impact from training and awareness, but it is one of the most important areas that you can invest in, primarily because this is really heavily focused on prevention and kind of trying to secure your users, which is the most challenging part.

EXP: Which is one of the reasons why we focus in this webinar series a lot on human factors in cybersecurity.

You know, I was going to mention this later, but I will tease it right now. Our next webinar happens on Leap Day on February 29th, and it is Dr. Eric Huffman. He has presented to our audience before, but he is an expert not only in cybersecurity but also psychology. And so he calls his area his field of study cyberpsychology, and he has presented before about human factors and what makes certain personality types at higher risk to falling for certain threats. In our conversation coming up on February 29th, I think our focus will be on this new world of artificial intelligence and the cyber psychology of artificial intelligence. So for those that are in attendance, mark your calendar. You'll probably be getting an invitation from me to attend that event too.

Diversity in the Tech Community

Eva, I want to change direction here a little bit and speak about diversity too.

Earlier I said we've been we've had personal computers for 40 years and internet for 25 or 30. There is a lot of sort of stuck thinking that comes from that and a lot of people that are attracted to careers in IT do so because there are established best practices and a lot of rules and they like a world that has certain structure and certain format, and that tends to lead to a lack of diversity.

I suspect there are times where you are the only person in the room who is from Bulgaria or there may be times when you're the only person in the room who has a background in marketing. Can you speak a little bit to your career path and, and to diversity in Tech and cybersecurity in general?

Eva Benn: I find myself the only person in the room that has this unique characteristic of being Eastern European, a woman with a non-traditional technology background.

I think that that is the beauty of having diversity. Everybody's life experience shaped them differently and that creates the opportunity for us to bring diversity of thought.

We're trying to secure the world here. For us to secure the world, we need to represent the world, and the world comprises of many different types of people.

I think that my background is not traditional growing up in Bulgaria. I did not have a computer. I honestly only thought careers with computers are for men.

I thought that I just need to grow up and just look pretty and maybe work in design or something like this.

But this changed. I really got inspired while pursuing a career in marketing and education in marketing, to pursue a cybersecurity career.

I saw really an opportunity for me to set an example and pave the way for other women, and I continue to be very, very inspired by this mission.

I tried to carry myself with authenticity so more people can see themselves in me and more women can see themselves having a successful career, not having grown up with computers or having those opportunities early on.

"How do I get started in a career in cybersecurity?"

EXP: So what was it specifically? How did you specifically make that jump from marketing to cybersecurity, and the reason I ask is because there's a lot of information like if you go on LinkedIn, people are like, 'How do I get started in a career in Tech or in cybersecurity?'

Eva Benn: I'm going to tell you, okay? Honestly, the story is kind of ridiculous. I went to this event, it was in the University of Washington. It was a cybersecurity event, and I talked to, um, to some people there that inspired me.

And I remember just feeling super fascinated.

I didn't see many women there, and this just sounded very hard, and I felt like I don't think I can do this, and I was like, "You know what? This is exactly where I'm going to go for this because I don't think I can do this."

And I remember thinking, wow, am I making the worst mistake in my life, um, changing my career path, changing my education because I mean, I was I was pretty clueless, but I wanted I didn't see many women that represent me there, actually I didn't. There were really not that many women to start with, and so I decided, "OK! Challenge accepted!"

I always like to go for hard things. Ever since it's been the best choice in my life.

I love cybersecurity and I want to see more women in cybersecurity.

EXP: Yes, and I do too because of the—you know, what got me started thinking about that was your comment earlier about bias. If it's only people that look like me that are interacting with AI or that are writing software, there will be biases written into it.

And if it's only people like me that are talking about these scenarios there will be things that we just don't think about. So having other people in the conversation...

It's a cliche but that really is where diversity is our strength or where lack of diversity is an extreme vulnerability. There are things that you're just not considering because you don't have enough diversity of thought and different perspectives.

So I'm with you 100% there, yeah looked like you had something more to say I didn't want to cut you off there...

Eva Benn: You didn't cut me off. I just want to make it clear I'm super passionate about this topic if anyone has, any interest or needs any guidance on how to get into cybersecurity I am always here available reach out to me on LinkedIn.

I'm super passionate about not just women in security but also just creating path for more people to believe in themselves that no matter what your background is you can do it and it's never too late.

EXP: Yeah and and that's a good segue. We're running out of time here so I will say if anyone has questions I've gotten a handful of comments and not many audience questions but I appreciate the comments about AI girlfriends and unicorns etc. So I've gotten plenty of those um, if you have questions ask them now but also to your last point where can people find you if they want to learn more about what's next for you or your next event or where can people connect with with you in the, you know, in the virtual world?

Eva Benn: Connect with me on LinkedIn, that's the best way. I'm just super excited and appreciative of everybody who stuck around and listen to us rant, happy to take some of the conversation offline if there are any specific questions.

EXP: I'll follow up with an email that has a link to this event um, we will have links to your LinkedIn profile if that's OK. Your website as well and the MITRE information. We'll share that and a few other things that we've referred to in the course of this conversation.

I will also mention before we end I'll I'll remind you people in attendance that our next event will happen on February 29th and that's Leap Day today and our guest will be Dr. Eric Huffman speaking about the the cyberpsychology of artificial intelligence and human factors in in cybersecurity with respect to this new era that we find ourselves in.

And we have one minute left Eva. Do you have any closing words or any other comments that you want to share with our audience?

Eva Benn: In closing uh just I want to anchor back on make sure you have your security fundamentals make sure you don't lose sight of the basics and try to leverage economies of scales by relying on Enterprise products that really bring you not just underlying infrastructure but also lots and lots of security expertise and research.

EXP: I will reiterate I've been repeating you many times but I will reiterate those basics include:

And with that I see that we're up to 1:00 p.m. I want to thank you so much Eva! You made this really enjoyable. I learned a lot in this session. That's all we have for today so thank you again so much for joining us and for everyone in attendance and with that we will wrap it up here thanks again Eva!

Related Posts