Cybersecurity Mentors Podcast

Interview with Evan Reiser: Founder & CEO of Abnormal AI

Cybersecurity Mentors Season 4 Episode 3

In this episode of the Cybersecurity Mentors Podcast, John, Steve, and Evan Reiser, Founder & CEO of Abnormal AI, discuss Evan's journey from a gaming enthusiast to a leader in cybersecurity. We explore the evolution of email security, the impact of AI on the industry, and the importance of mentorship and continuous learning. Evan shares valuable lessons from his early startups, the significance of asking questions, and how to prepare for an AI-driven future in cybersecurity. The conversation emphasizes the need for curiosity, accountability, and the ability to adapt to new technologies.


Check out our Networking is King Course: How to Build a Career Through Real Connections

Steve:

Could you teach me? First learn stand, then learn fly. Nature rule, Daniel-san, not mine.

Evan:

I know what you're trying to do. I'm trying to free your mind, Neo, but I can only show you the door. You're the one that has to walk through it.

Steve:

What is the most inspiring thing I ever said to you? "Don't be an idiot." It changed my life.

John:

Welcome back to the Cybersecurity Mentors Podcast. On this episode, we're speaking with Evan Reiser, the CEO and founder of Abnormal AI. Evan and I got to know each other when I was on his podcast, the Enterprise AI Defenders podcast, and I had an awesome time. It was actually my first podcast ever. Evan invited me on, and he said I did a decent job; he was like, "You've only done this one time before?"

John:

But it helped inspire us to aspire to build a podcast of our own. I was like, hey, maybe we can do this thing. So thanks, Evan, for coming on. We have a lot of cool things to talk about, and we appreciate you taking the time.

Evan:

Yeah, thank you so much for having me. Great to see you, John, and excited to talk to you and Steve. Absolutely.

Steve:

Thank you. Well, let's just get started with a brief intro. Tell us a little bit about yourself, how you got started, how you ended up where you are now, and we can go from there.

Evan:

Yeah, sounds good. I'll try to give the short version, since the long version is not super interesting. I've always been a computer nerd and a kind of entrepreneur of sorts. Abnormal is my fourth startup, so I've been building companies and businesses and products for, I don't know, forever. My background is actually not in cybersecurity, so in some ways I'm a horrible guest for your show, but there are probably other people who are just getting started, so hopefully I can share some of my experience. I spent most of my career doing behavioral ad targeting, so, you know, all the annoying ads that follow you around the internet and steal your data to personalize ads and websites.

Evan:

Probably my fault, so I apologize. Now I'm the founder and CEO of Abnormal AI. We're trying to stop some crime and do things a little more worthy than getting people to click on ads. Like I said, I've been an entrepreneur for a while; Abnormal is my fourth company. My last company was bought by Twitter, where I ran the ads targeting and applied machine learning teams, so that's been most of my background in machine learning and AI. About eight years ago I started Abnormal. The idea was to take some of the same technologies to understand people and behavior and context, and use that to stop the next generation of cyber attacks. We're about a thousand-person company today. About one out of four Fortune 500 companies are Abnormal customers, and we've been the fastest-growing private cybersecurity company. As I've gotten more into this with great people like you, John, I just feel more passionate about trying to fight the good fight and also helping to inspire the next generation of cybersecurity defenders.

John:

Absolutely, very cool. The one startup that stood out was Gamer Nook. So how did that come about? It sounded cool, and it sounded like you learned a lot from that experience. So what was that company?

Evan:

Well, calling it a company was probably a bit of an overstatement, but it was the first tech business I started. I've always been a gamer; actually, I got into computers because I love video games. I went to school for computer engineering because I thought it was about building computers, which was a hobby and a small business I had in high school. It turned out computer engineering is really electrical engineering with extra calculus. I got into software engineering because I wanted to build games and tools to track scores on games. After I graduated, I had a normal, boring job that lasted about 18 months, and I was like, I can go start my own company. The thing I was really passionate about was gaming, so we tried to create the MySpace of gaming. It was actually a very popular product, a very popular website. We had probably over 100,000 users in the first year.

Evan:

But the problem was it was just me and my buddy building everything. We built the entire web application and the database to run it. We had so many users, and ads were very immature in the early 2000s, so we couldn't actually support the server costs. We had two servers, and they cost more than all of our ad revenue. So we ended up shutting it down, despite it being a popular product, because it was a horrible business. We made no money, but it was a lot of fun, and that's kind of how I got into building. How I got into ad targeting, actually, is that we built a lot of custom software to try to find relevant ads, to increase the value of getting people to our website. This was before real-time bidding and more modern ad technology. But that was the first thing that got me from general software engineering and application development into ads and behavioral targeting.

Steve:

Were there any lessons you learned in your early startups, just going through that experience?

Evan:

Oh, yeah. If you look at my resume, it's basically a history of what not to do in entrepreneurship. With that first company, I actually had a lot of fun, although we spent so much money on those servers that I was dead broke and sleeping on my friend's couch. So I think the first big lesson from that company was that a great product is not a great business, right? You have to understand what the business model is. You can have an awesome product that your customers love, but if you can't make enough money to pay your bills, it's not a great business. And I think I deceived myself a little bit. Everyone said, oh, as long as your website stats are up and to the right, you know, all the exponential growth curves, it's like, we must be doing something right. We never thought, well, where does this end up and how does this work? So I think that was a big lesson for me.

Evan:

I started another company after that, which was basically a luxury Groupon. That was a business, and we raised probably $50 million of venture capital money. We had millions of subscribers, and it did make money, but it turned out the long-term economics were not super profitable. It became increasingly costly to acquire new users and decreasingly profitable to monetize them through the daily deals. So that was a big lesson on unit economics. And I think the other big lesson along the way was making sure you're in a market that is big. You can have an awesome product that is profitable, but if you only have 20 customers who really care, it can only get so big, and you can only have so much impact in the world.

Evan:

I think the biggest lesson for me, post the Twitter acquisition, was learning how to work as a team.

Evan:

I'd always been kind of a one-man show. I was the programmer and the accountant and the marketer and the website optimizer and the security team and the IT team, and I kind of prided myself on doing everything myself. Honestly, there was a little bit of ego there, just trying to prove I could do it myself. So just realizing that I'm not the smartest person in the room, and that if you want to do something impressive, you've got to really work as a team. I didn't like working at Twitter for many reasons, but I did have a lot of professional growth there, learning to work with teams of 10 people and 100 people and 400 people. That was a big wake-up call for me as a professional, and I've learned a lot since starting Abnormal. But those are some early lessons in entrepreneurship that took me 10 years to learn.

John:

Hopefully some listeners can learn those in 10 months instead. So I think that leads us up to email security and AI, right? You were dabbling in behavioral modeling a little bit with the ad marketing, but how did the "hey, let's throw AI and email security together" idea start? It sounds cool now, but back then it wasn't necessarily obvious how it was going to work. So how did this come to mind?

Evan:

Yeah, so Abnormal Security is like a behavioral security platform, most known for email security; that's what most of our customers buy from us. When we started the company, we actually didn't realize we would become an email security company. The idea was to take some of the same behavioral modeling, contextual analysis, and decision-making technology we built at Twitter and past companies, and the idea was that if you could apply those technologies to enterprise users rather than consumers, there were more standardized ways to get data, you could build more robust behavioral models, and you could do good stuff. So we started the company. It was really more of a technology vision than a product vision, which I don't recommend; it's good to start with a very clear use case. But the one smart thing we did is, before we actually wrote any code, we went to go talk to over 100 CIOs and CISOs of big and small companies, and we said: here are some trends with AI. We think models are going to be more powerful, data is going to be cheaper and more standardized. Here are things you can do, here's the platform we're trying to build, and here are the types of capabilities that are possible, making behavioral and contextual decisions. What are some use cases that you think we can apply this to?

Evan:

And the very specific question we asked, I think, was effective. We talked to both CIOs and CISOs, and we said: what's a problem that, if we could come back at the end of the year and have solved really well for you, by your own definition, you would pay us a quarter million dollars a year to go do? And that's a very qualifying question, because no one's going to pay a quarter million dollars for a problem that doesn't really matter, right? It's got to be really high pain, high priority. When we talked to these customers, probably 20% of the use cases were outside of security, but 80% were inside security: fraud and social engineering and phishing and identity security, maybe DLP. And the most common thing we heard was around email security, and we said, well, that's weird, there are a lot of email security companies. Why is that a problem? Why do you care about that? We learned a lot from customers, and I knew nothing about cybersecurity. On my first call, someone talked about a SOC, and in the background I was Googling "cybersecurity socks," right, like things you wear on your feet. I was such an idiot. But I did learn a lot along the way. We kept asking customers: why is that? You buy from all these other security gateway companies, and they claim they sell to everyone, so why are these problems still here?

Evan:

And the trend we uncovered, which was probably obvious to you guys but I think not to a lot of entrepreneurs, was that email attacks had shifted from these spray-and-pray, campaign-type attacks, which can be very easily stopped with threat intelligence or heuristics and rules, to these more personalized attacks, specifically social engineering and business email compromise. A lot of the things you would look for in a conventional email attack (a bad URL, a bad IP address, maybe some keywords in the text, links, attachments) didn't exist in a social engineering attack. It was a unique email address and a unique message every time, so conventional technology that focused on threat intelligence and heuristics wasn't effective at stopping that. So we had this idea: imagine, like every competitor in the world claims, that you had perfect threat intelligence, meaning every attack that's ever been seen before could be stopped. How do you stop the things that still get through? How do you stop the things that have never been seen before?

Evan:

So we took this idea of behavioral modeling and said, okay, we don't have a lot of data on the never-before-seen attacks, by definition, so we can't build a machine learning model to predict those. But we do have a ton of data on normal behavior: if we integrate into Microsoft and Google and Salesforce and Workday, we have tons of data about what normal patterns of behavior look like. So rather than trying to model the unknown stuff, let's model what known good looks like, and then, when we see things that don't look known good, we'll do some risk assessment and assume they're bad. That was our unique approach, and it pulled us into email security. We focused primarily on email security for the first couple of years, and since then we've branched out to other security applications that are still focused on trying to find abnormal patterns of behavior.
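The "model known good, then flag deviations" approach Evan describes can be sketched in a few lines. Everything below (the class, the signals, the weights) is an illustrative toy for this write-up, not Abnormal's actual system:

```python
from collections import defaultdict

class BehaviorBaseline:
    """Toy 'known good' model: remembers which senders and sending
    domains each recipient normally hears from."""

    def __init__(self):
        self.seen_senders = defaultdict(set)   # recipient -> senders seen before
        self.seen_domains = set()              # domains observed in normal traffic

    def observe(self, sender: str, recipient: str) -> None:
        """Train on an email known to be legitimate."""
        self.seen_senders[recipient].add(sender)
        self.seen_domains.add(sender.split("@")[-1])

    def risk_score(self, sender: str, recipient: str, urgent_language: bool) -> float:
        """Score how far an incoming message deviates from the baseline.
        Higher means less like 'known good'."""
        score = 0.0
        if sender not in self.seen_senders[recipient]:
            score += 0.5                       # never-before-seen correspondent
        if sender.split("@")[-1] not in self.seen_domains:
            score += 0.3                       # never-before-seen domain
        if urgent_language:
            score += 0.2                       # classic social-engineering signal
        return round(score, 2)

baseline = BehaviorBaseline()
baseline.observe("cfo@acme.com", "steve@acme.com")

# A routine email from a known correspondent scores low...
print(baseline.risk_score("cfo@acme.com", "steve@acme.com", urgent_language=False))  # 0.0
# ...while a lookalike-domain message with urgent wording scores high.
print(baseline.risk_score("cfo@acrne.com", "steve@acme.com", urgent_language=True))  # 1.0
```

A real system would use far richer signals (relationships, content, timing) and learned models rather than hand-set weights, but the shape is the same: train on normal, score the deviation.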

John:

Yeah, makes sense. I've been living it; we've lived and breathed it, right, Steve? We've had the whole "why can't this stop these emails from getting through?" thing.

Evan:

Right. A short anecdote, because I knew nothing about cybersecurity. So many people ask me, well, how did you build a product that works if you don't know anything about cybersecurity? And my secret strategy was to not be scared to ask dumb questions. I would talk to all these customers and just say, hey, what's the last email attack that got through? How did you stop it? How did you find it? And people would just tell you.

Evan:

The answers were like, oh well, if you just do a DNS lookup on this thing, or you look at that contact field, that email address is the real attacker's email address, so if you look at that signal you can find all the other similar ones. People would just tell us all the secret answers. Then John would say a third thing and Steve would say a fourth thing, and you just keep asking people: well, how did it get through, and how did you find it as a person? Then, obviously, you can get AI to replicate some of those human investigations, but do it in real time. So one thing that really worked well for us early on was just not being afraid to ask the dumb questions. Sometimes there's a lot of wisdom you can pull out of seemingly simple answers.

John:

Yeah. Well, I mean, I think that's a good lesson for anybody: just be humble, right? Don't assume that you have all the answers and don't need to ask those questions, because you might discover gold just from asking the simple questions.

Evan:

Yeah, I promise you, probably most of your listeners know a lot more about security than I did when I started. So I think if they bring that humility and that curiosity, and a little critical thinking, they'll have a huge impact in the world. Absolutely.

Steve:

So, when Abnormal was founded: first of all, how did you come up with the name? I like it. And two, how big was the team? Was it just you and someone else, or what did that look like?

Evan:

Yeah, so when we started the company, we had seven people as the founding team. We did grow very quickly, but it was the seven of us, and none of us had ever worked in cybersecurity, or even, I think, on a cybersecurity project. I was probably the dumbest of the seven; the other folks were all applied machine learning experts I had actually worked with in the past, building similar technology for different applications. The name behind Abnormal is kind of funny. When we started the company, we thought we should pick a very responsible, reasonable name, and it was very boring, like, I don't know, Granite Security, something that sounds very stable. We didn't know exactly what we were going to do, but we knew it was going to be an AI company, so our logo was this brain with a circle around it, which is obviously what you do as an AI company. We showed this to one of our mentors, and he said, ooh, I hate the name, that sounds so boring, but I like the logo. It reminds me of this old movie called Young Frankenstein, where there was this brain in a bottle labeled "Abby Normal." You should call yourself Abnormal, because that's kind of what you do: you're looking for abnormal patterns of behavior. I heard that and I was like, well, it's kind of silly, because you get the abnormal business plan and your abnormal executive team, but it also describes what we do.

Evan:

So we originally called the company Abnormal AI, and that was our name in 2018. We would go into meetings with CISOs and talk about AI back in 2018, 2019, and we literally got laughed out of the room several times. I had CISOs hang up on me: "Hey, you guys seem like you're reading too much sci-fi. You sound like you're building Skynet. This AI thing is kind of a joke. Good luck, but call me when you have 20 customers first." So we actually changed the name, probably in the first six months, from Abnormal AI to Abnormal Security, and we didn't talk about AI and machine learning at all, because this was obviously before ChatGPT and it meant a different thing. We actually recently renamed the company, reverting to the original name, Abnormal AI. So that's the history of that story. It's a little bit silly.

John:

So you talked about 2018, and you guys were ahead of the game. Now, I didn't go to RSA this year, but all I heard from people who came back was that AI is everywhere, in everything. We all know it's the buzzword, but there's some truth to it, obviously. So what do you see as the evolution? One of the things you told me that was really intriguing is how you guys are automating a lot of things, taking advantage of the technology as much as possible, not being afraid of it, but also eating your own dog food: whatever you're giving to customers, you're also utilizing yourselves in different ways. So if you had a crystal ball (and luckily, you guys were ahead of the game), what do you think this looks like in the next two, three years?

Evan:

Yeah, I would say I'm probably more on the AI acceleration side, where I think the future is going to get way crazier, way faster than we think. And the three of us have a slightly unique view, where we can see some of this in action. So I think AI is going to change lots of stuff; it's probably a less contrarian view these days. From a cybersecurity perspective, it's slowly changing the threat landscape. We're going to have more criminals. There are people today who don't know how to be a cyber criminal; they'd love to be, but they're not technical enough. Now, with things like ChatGPT, you don't have to write English to send a phishing email, and you don't have to know how accounts payable works to do fraud. So we'll have more criminals, and the criminals are going to have more scalability in their attacks: what used to take an hour to send, now takes 10 seconds. I think the most scary part is that even the average large language model is capable of inventing attack ideas, or social engineering ideas, that are way beyond what smart offensive researchers can even dream up. So the threat landscape is going to change.

Evan:

Obviously, the tools needed for defense are going to change too, because the attacks will also happen faster. They're less human-powered and more automated, whether through AI or good scripting. If you look at the paradigm for cybersecurity today, a lot of it's based on the idea that we monitor lots of stuff: we suck up all the data, we guess what the new attacks are, we pop up an alert, and some smart human looks at that and makes a decision. In the future, if the attacks are automated, you can have the world's best FBI investigator, but by the time that alert pops up and they read the thing, the attack's done, right? So we're going to have to have more automation on the defense side.

Evan:

So I just think the nature of security architecture will also have to change. And maybe the third big trend is that AI is obviously going to change IT architecture, and historically, security architecture has always followed IT architecture. So what happens when you have all these AI agents running around? How do your AI agents do 2FA? If humans are accessing things at one request per second, what happens when the AI does it at a thousand requests per second? How do you know what's appropriate for your HR agent to do? There's some contextual consideration you have to apply to all of these requests that may not be possible with our current authorization models, right?
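Evan's request-rate point can be made concrete with a toy sketch. Everything here (the class, the one-second window, the five-requests-per-second "human" threshold) is an assumption of this example, not anything an actual product ships:

```python
from collections import deque

class RateAnomalyDetector:
    """Toy per-identity rate check: flags an identity making far more
    requests per second than a human plausibly could."""

    def __init__(self, window_seconds: float = 1.0, human_rate_limit: int = 5):
        self.window = window_seconds
        self.limit = human_rate_limit
        self.recent = deque()              # timestamps inside the sliding window

    def request(self, now: float) -> bool:
        """Record a request at time `now`; return True if the rate is anomalous."""
        self.recent.append(now)
        while self.recent and self.recent[0] <= now - self.window:
            self.recent.popleft()          # drop timestamps outside the window
        return len(self.recent) > self.limit

human = RateAnomalyDetector()
print(any(human.request(float(t)) for t in range(10)))           # False: one click per second

agent = RateAnomalyDetector()
print(any(agent.request(100.0 + i / 1000) for i in range(100)))  # True: 100 requests in 0.1s
```

A real authorization layer would need much more context than raw rate (who the agent is, what it is allowed to touch, what "normal" looks like for that identity), which is exactly the gap Evan is pointing at.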

Evan:

So the nature of how businesses and people use technology is totally changing, and that requires an overhaul of IT architecture; security will have to follow. I think we're in a paradigm for the cybersecurity industry that was built more for the past. In the future, I think there's an opportunity for the three of us, and hopefully a lot of our listeners, to go help upgrade quickly into whatever the new generation of defense is.

John:

Yeah. I've been asked this kind of question generally about the threat of AI, and I'm like, well, we're already using AI. We've been using AI, and hopefully our AI is better than the bad guys' AI. It's AI versus AI already. So yeah, it will continue to be an arms race, but I do think it's a force multiplier. With automation, it can be a force multiplier for bad or for good, so you need to be aware of that for sure. But it's going to be interesting. Stay tuned, right?

Evan:

Buckle up. Exciting and scary times, yeah, for sure.

Steve:

Absolutely. Even now, I mean, just with ChatGPT, what it can do for you, how it can help you. I'll be honest, I use it every day, multiple times a day, just to help me. We've had a couple of episodes where John has pulled up his screen and asked AI to write him a script to do XYZ, and in seconds, boom, you have something ready to go. And for someone like myself, I think about all the good things it could do, but I also think about all the bad things it can do, like what you were just talking about, Evan. It makes me a little paranoid, like, man, this is something just waiting to explode and revolutionize what we know today, especially in cyber.

Evan:

Yeah, I mean, it's extremely powerful technology. All technologies are tools, and tools can be used for good or evil. If you look at nuclear energy, it's the same thing: you can create some of the best things in the world and the worst things in the world. The different thing with AI is that it's a little bit less controllable. Anyone can just go download some open-source model onto their laptop, and God knows what that thing can do.

Evan:

I think there is a bit of asymmetry between the attackers and the defenders. The advantage the attackers have is that they're very quick to adopt new technologies, because, for all three of us, we have to protect the enterprise and make sure we make no mistakes. They can make a thousand mistakes rolling something out, so they're always going to adopt these technologies faster.

Evan:

I think the advantage the defenders will have in the long run is that we're going to have the data advantage and the context advantage. There are some things that are knowable inside the enterprise that aren't knowable outside it. I can have my personal AI listening to everything I do and everything I say, really understanding my personality and my behavior. So if some AI avatar says, "Hi, Mr. John," we'll know for sure that's not how Evan and John talk to each other; we've known each other for a while. So I think there will be an asymmetric data advantage, but we have to adopt and evolve our defensive tools to take advantage of those data assets inside the company, and we have to move quickly to make sure the attackers don't win with their time advantage, their first-mover advantage.

John:

Yeah, very cool.

Steve:

Absolutely. So, we were talking about Abnormal, and there was a point in time where you shifted to email security, even though that wasn't the original plan. What are some of the other things you guys are branching out to and doing now that are not so much email security, or maybe some things you have coming up for the future?

Evan:

Yeah. So there are some obvious things adjacent to email security. We're doing generative phishing training, where we almost take our behavioral profile and invert it to create a red-team profile: we look at the real attacks and send a training example that is personalized for the user. If they fail, we give them personalized coaching. So it's not just a script that sends emails; it actually understands the person, their role, their job, and gives them personalized advice in a very friendly way. There are adjacencies like that that we've launched. Then there are things a little further outside of email security but still around behavioral security. We have an identity threat detection and response product, where we look at the behavior of how people use enterprise applications like Microsoft 365, and we say, hey, Steve is authenticated, but this is working a lot differently than how Steve normally works. That's a weird web browser, right? Why is Steve using a PowerShell script to access something he has access to through Okta? That just doesn't look like Steve. You know what, we're going to kick him out. So we're monitoring the behavior of how people use applications and then blocking access, terminating sessions, things like that. That's what we do today.
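The "authenticated, but not working the way Steve normally works" idea can be sketched as a toy per-user baseline. The class name, signals, and threshold below are illustrative assumptions for this write-up, not the real product:

```python
from collections import Counter

class IdentityProfile:
    """Toy behavioral profile for one user: counts how often each
    (client, country) access pattern has been seen before."""

    def __init__(self, user: str):
        self.user = user
        self.patterns = Counter()

    def observe(self, client: str, country: str) -> None:
        """Record one normal, legitimate access."""
        self.patterns[(client, country)] += 1

    def is_anomalous(self, client: str, country: str) -> bool:
        """Flag an authenticated session whose access pattern was never
        seen before for this user (only once a baseline exists)."""
        total = sum(self.patterns.values())
        return total >= 10 and self.patterns[(client, country)] == 0

steve = IdentityProfile("steve")
for _ in range(20):                                # Steve's normal working pattern
    steve.observe("Edge/Windows", "US")

print(steve.is_anomalous("Edge/Windows", "US"))    # False: looks like Steve
print(steve.is_anomalous("PowerShell", "RO"))      # True: valid token, wrong behavior
```

The point of the example is that the session is fully authenticated in both cases; it is the deviation from the baseline, not the credentials, that triggers the response (blocking access, terminating the session).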

Evan:

I think the future for us is thinking more about how we build solutions for the future. Cybersecurity is very focused on protecting infrastructure; we're trying to focus on how you protect people. The industry is very focused on using threat intelligence and known heuristics or known patterns to stop the next attack; we're trying to be very focused on being experts in identity, context, and behavior to try to stop the new attacks.

Evan:

In terms of what products we'll launch in the future, I'm not totally sure, but a big trend is obviously AI and the enterprise adoption of AI agents. My belief is that we don't want to secure AI agents the same way we secure other infrastructure; you've got to treat it like you're securing people. These agents will work and behave like people, and they're going to have similar access patterns and ways of working that are, I'd say, more similar to people than to some new enterprise application being rolled out.

Evan:

So I think that's the opportunity for us: expanding beyond protecting people to protecting identities, whether they're people or service principals or AI agents. I think there's some unified platform that'll be important there, using behavior and context to look for abnormal patterns and stop them. And then there's maybe something a little further out, but that might get a little science-fictiony, so I'll pause there, unless you guys want me to keep going.

Steve:

Oh, that's great. I mean, that just gives our listeners an idea of not just where you guys are going, but where others could go as well. And just the idea of behavioral security, it's a game changer, right? The example you just gave about me: hey, Steve is doing this, Steve is doing that, why is he accessing this through a PowerShell script when he has access to the tool? All of that plays a huge part in what everyday people do. When you have someone working in finance and someone working as a janitor, they both still have some level of access to the organization, to systems, whatever. But if you can tie it to their roles and what they usually do, kind of a baseline, I mean, that's next level already.

Evan:

Yeah, and it feels intuitive — this is actually what human analysts do today, right? The reason I'm very optimistic, not just for Abnormal but for us as an industry, is that if you look at any of these big public breaches — I won't name names — what's actually instructive is that after they happen, some humans go through a bunch of logs and data sets, and at some point they say, oh, this is what happened. In hindsight, we probably could have seen that coming. Well, if that takes a human eight weeks to figure out, at some point in the future an AI can do it in eight milliseconds. We have the data — and I'm sure you guys have seen this in your own experience — even a junior analyst looking at some logs goes, that's weird, this doesn't look right. I'd say for eight out of ten of these breaches, you look at the activity — even what you already have logged — and any human looking at it says this just doesn't seem right. There's no way John is using the Yandex web browser from Eastern Europe to send an invoice to someone he's never talked to. It just doesn't make any sense.

Evan:

So I think we all have this human intuition — or call it common-sense intuition — about what seems weird, and if you can codify that and train AI to apply that same judgment, it can apply human-like accuracy.

Evan:

Probably a lot higher, actually — but at machine speeds. So that concept should apply to many of the use cases we have today. For all of us working in security: what percentage of our time is spent doing things machines are probably better at? We look at a lot of logs, a lot of files, a lot of alerts. I don't know if it's 1% or 99%, but a lot of that can probably be done better with current or future AI technologies. So the other benefit of the AI transformation of security is that it's going to take on more of the mundane work, and all of us will be able to spend more time on higher-leverage, strategic things as part of our day jobs.
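The behavioral baselining discussed here can be sketched in a few lines — a toy illustration only, not Abnormal's actual approach; the event values, user names, and z-score threshold are all made-up assumptions:

```python
from collections import defaultdict
from statistics import mean, stdev

def build_baselines(events):
    """events: list of (user, feature_value) pairs, e.g. daily login counts.
    Returns each user's (mean, stdev) over their own history."""
    per_user = defaultdict(list)
    for user, value in events:
        per_user[user].append(value)
    return {
        user: (mean(vals), stdev(vals) if len(vals) > 1 else 1.0)
        for user, vals in per_user.items()
    }

def is_abnormal(baselines, user, value, z_threshold=3.0):
    """Flag an event whose z-score against the user's own baseline is extreme."""
    mu, sigma = baselines.get(user, (0.0, 1.0))
    z = abs(value - mu) / (sigma or 1.0)
    return z > z_threshold

history = [("john", 5), ("john", 6), ("john", 4), ("john", 5)]
baselines = build_baselines(history)
print(is_abnormal(baselines, "john", 5))   # → False (a normal day)
print(is_abnormal(baselines, "john", 40))  # → True (a burst of odd activity)
```

A real system would track many features per identity — login geography, devices, sending patterns — and update the baselines continuously rather than learning them once from fixed history, but the "compare against that identity's own normal" idea is the same.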

Steve:

So I was going to ask — because we do mentorship for people starting out in cybersecurity, and sometimes I get asked: well, I want to get into the field now, before AI takes over all the security jobs. Do you see that happening at any point in time?

Evan:

So, AI taking over all the jobs — I mean, on what time horizon? Probably, right? If you look at all the jobs a thousand years ago, they were way different than today, so there will always be some evolution. Technology always helps us do more things better. In the short term, there's just some stuff that's not worth having AI go do. There are a lot of things that will still be required of humans — you've got to build trust and make judgments. I don't think any CEO wants to call an AI agent and say, hey, can you promise this new thing is secure? Can you help us think about security strategies to unlock this new business opportunity? There's always going to be a role for people.

Evan:

I think my advice would be: one, everyone should really be leaning into these technologies. And I'll just use ChatGPT as the example — it's amazing. It's probably one of the first technologies that will teach you how to use it. If you say, hey, here's my job, here's what I want to learn, talk to me like a personal coach and every day give me a little advice to encourage me to do stuff — it will train you and coach you probably better than anyone else.

Evan:

You know, I'll take screenshots of my calendar, send them to it, and say, hey, how can I save an hour this week using ChatGPT — and it gives you great examples. So everyone should be learning these tools. If your work doesn't allow it, then do it at home, because this is the future. The other thing is that I think most knowledge workers — really everyone — are going to effectively become managers: managers of a bunch of AI agents.

Evan:

So the project management, coordination, prioritization, and operational management skills — those are things that were probably less valuable in IC roles in the past.

Evan:

They become more valuable as everyone gets their own team of AI interns helping them do their job.

Evan:

So if you're trying to prepare for that AI future in cybersecurity, those are at least two good places to start. And just get familiar with the common AI-centric tools — whether it's Gemini or ChatGPT or Abnormal or whatever ServiceNow has — because seeing how these things are applied will open your mind to ideas for new applications. I think very soon, in the next couple of years, it'll be very easy for even a non-technical person to build their own little AI application to help them with whatever their job is at work.

Evan:

So I think we're all going to become managers and, while not quite software engineers, all of us will be able to take technology and deploy it to solve the problems we're facing. Those would be the three areas I'd focus on if I were just graduating school, trying to get into cybersecurity, and wanting to make sure I'm prepared for the AI world.

:

Yeah, one thing I've thought about — I think those are really good points, because with a lot of the grunt work we do, or have done, with all the logs, you're still validating a lot of what the tool might be doing, just to make sure you feel confident in it. And there's probably a confidence level for all these tools: okay, this tool's false-positive rate is minimal; this one you've got to double-check, maybe 50-50. But those are going to get better — they're going to have better efficacy. And it's funny, because I like being able to validate and still not lose the skills, right?

:

So one thing I worry about is that we're going to get to a point where it's like, hey, my AI interns have got this — and then you lose the ability to go do that work because you don't need to do it anymore. But I love being able to still jump back in: hey, let's go figure out what happened, let's go dig into it. And it really is true to what you said earlier — I have that sense of what's normal, what's normal, this is weird.

:

And you build that up over years across your whole enterprise, knowing that even if something is weird, it's normal-weird, right? But definitely AI and these tools can help you do that much faster. If you give it a ton of logs and say, hey, look at all these logs and help me find the weird faster — you might still do some validation, but it can do that way better than you can. You've just got to utilize those tools effectively and not be scared of them. Don't be scared they're necessarily going to take your job; the job is going to change and evolve. Be ready to evolve with it, and start now. Like you said — go ahead and start now.

Evan:

I think everyone in every job needs to understand how things work one or two layers of abstraction below their day-to-day work. If you look at the best managers in the world, they're always able to go a couple of clicks down, right?

Evan:

The best software engineering managers understand how to build software. The best CFOs can still get into the spreadsheets and check your math. So it's going to be important for us to maintain those skills. But you don't have to go all the way down as far as you used to — memory allocation, for instance, used to be super important; these days, probably not, right? It's good to think about which skills become more and less valuable in the age of AI.

Evan:

Right — if we were hiring a security analyst five years ago, we'd probably have had a lot more bias toward technical experience, knowledge, maybe even raw intelligence. Today that's actually changing a lot. I think the skills that are more valuable in the future are things that are easier to learn and practice: curiosity, asking really good questions, first-principles thinking, critical thinking, the ability to iterate and riff and jam. When you're using these AI tools, those are really important. At least at our company, some of the most productive people are the ones who are really good at asking questions, and they apply that in their ChatGPT sessions.

Evan:

So I think the exciting thing for people today is — what's the word — it's now easier to get into the field, but those core, innate skills matter a lot more, and they're more accessible. People with high agency and high curiosity are going to outperform people with ten years of career experience. That's exciting. If I were just entering the workforce today, I'd focus on the core things: curiosity, accountability, responsibility, communication. Those are things all of us can develop and learn. You don't need ten years of SIEM experience anymore to be effective.

:

Yeah, I like that. I mean, think about when YouTube came out — now it's YouTube University, I can watch videos on everything. That was not around when I got into cybersecurity. There was Google, but you couldn't find everything you wanted to know how to do in cybersecurity. Now you have, like you said, a coach, a mentor, a trainer for all the things you don't know, and you can really get it simplified to your level. Take advantage of this awesome time — being able to learn faster, at your own pace, from wherever you are. I guess what I'm saying is you don't have an excuse not to learn and level up, other than time and energy.

Evan:

Right — you've just got to take that time and energy to invest in yourself. And even developing your own growth mindset and your own drive is going to be a valuable skill by itself. We have, I don't know, probably 50 security analysts in our company, and what we look for — we don't really look for technical skills. We look for people who have, again, curiosity and agency. They want to learn, they show up with a positive mindset, they try to learn from their peers.

Evan:

Those things become more and more valuable over time. And to what you said, John — I'd encourage everyone: go sign up for ChatGPT for free, create a custom GPT, and give it custom instructions. Say, hey, you are my personal cybersecurity coach. Every day I want you to help me get one percent better. I want you to provoke me and educate me and challenge me, but also keep me on my own learning path. And talk to that thing every day. Don't treat it like a Google search engine — treat it like a virtual assistant that's trying to guide you and coach you. I'm sure if we all did that for our jobs, we'd outperform whatever our default path is.
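The custom-GPT coach described here could be configured with instructions along these lines — illustrative wording only, not a quote from the episode, and the "SOC analyst" role is a placeholder to swap for your own:

```text
You are my personal cybersecurity coach. I work as a SOC analyst
and want to get 1% better every day.

Each day:
- Give me one small, concrete exercise or concept to study.
- Ask me one probing question about yesterday's topic.
- Challenge my assumptions, but keep me on a consistent learning path.
- Track what we've covered so the difficulty ramps up over time.
```

The key design choice is exactly what Evan says: standing instructions that make it behave like a coach across sessions, rather than a search box you query once.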

:

Yeah, I need to go make a CISO one right now.

Steve:

I know, I need to go make one too. No, that's great advice. And I really liked what you said about where the future roles are — project management, management overall — because that makes total sense, and I hadn't really thought about it until you brought it up. Those are skills you can start working on now that will help you in the future, because AI is moving very quickly. So I like that a lot. For our listeners, make sure you jot that down — that's a little golden nugget for this episode.

Evan:

Yeah, cause it's it's. You know we're probably already there, but if not, like it'll happen a year or two. Like you know, you're going to show up at work. You're gonna have like 10 AI agents that will basically do whatever you want, and probably better than you can do. What are they going to do, right? So how do?

Evan:

How do you understand the higher-altitude problem? How do you assign the work? How do you give feedback on the work? How do you inspect it to make sure the agent did it the right way? Kind of like what you said, John — trust but verify. Those are skills you wouldn't normally build until you moved into supervision and management, but now you'll need them early on. If you can build those even before your first job, you're going to be running circles around some of the older employees — if you get those skills and stay up to date with these AI tools.

:

That's a cool point — I hadn't really thought about it that way. One question we like to ask everybody, since the theme of the podcast is mentorship: how has that played out in your career and your life? Have you had mentors along the way, and have you been able to mentor others?

Evan:

Yeah — yes, I've had a ton of different mentors. I do think there's this archetypal concept of the one mentor who's always around and calls you on the weekends. That rarely exists, right? My model of mentorship is to have the mindset that you can learn stuff from anyone — the CISO, the CEO, the janitor. People know things you can learn; you've got to engage and ask questions. And there's no perfect mentor — everyone's got a bag of strengths and weaknesses. When I think about my mentors, there are probably 50 people I'd say give me some level of mentorship. I don't take anyone as the perfect role model. I look at people and try to assess: what are their unique strengths and unique experiences — and also, maybe, some of their weaknesses — and I try to pull just the strengths. Take any celebrity mentor: they're a mixed bag, some good stuff and some bad stuff. You've got to pull the right stuff.

Evan:

And my advice for getting advice: when you talk to people, everyone will have advice for you, and they'll tell you, hey, here's what I did. The really helpful thing when listening to advice is remembering that whoever your mentors are, they're going to give you advice about their environments — hey, this worked for me when I was at this company. So it's important to ask: what were the conditions of that environment that enabled that advice to be successful? What I've found is that a lot of times people give advice without really understanding whether their world is similar to your world. The advice isn't always copy-paste — but sometimes it is, right? If it's, hey, I was struggling with this thing, here's how I was thinking about it, and here's the environment I was in — and that's the environment you're in now — it's probably really good advice.

Evan:

So always listen, always consider people's input, but use your own judgment and discernment to figure out what should apply to you, and when — and just remember everyone's flawed, but there are things we can learn from everyone. That's my personal strategy for consuming advice. And I'd say any success I've had doesn't come from raw talent — it comes from being thoughtful and shrewd about seeking a lot of advice, and thoughtful and shrewd about consuming it.

:

Cool, say that question again.

Evan:

That was a great question — when you're asking about advice, ask what conditions of the environment made that advice, or that approach, work. You should understand what the environment was like and whether it's similar to the one you're in.

Evan:

A silly example: I could ask the CEO of a Fortune 100 company, hey, how did you drive business growth last year? How did you organize your calendar and your time? But if I'm in a ten-person company, what worked for that person in that role has no relevance to me. It's probably, by default, bad advice for me, because I'm not a Fortune 100 CEO. That one's obvious, but sometimes we forget. We talk to people we respect, and it's hard to know whether what they're suggesting is applicable to the stage and environment we're in. So it's good to always consider where the advice is coming from and what conditions made it appropriate or successful — or, said more simply, take everything with a grain of salt.

:

Yeah, absolutely.

Steve:

Yeah, that's great advice. So we're going to do a quick lightning round, because we're almost out of time. Just quick questions — first thing that comes to mind. Is there one book or podcast that you often recommend to others?

:

And it can't be his own — besides his own podcast.

Steve:

Yeah, it doesn't have to be security — just something you enjoy and would recommend to others. Book or podcast.

Evan:

So I read a lot of science fiction books, and the specific book I'll recommend is House of Suns, by Alastair Reynolds. I like that the author is an astrophysicist, and it's kind of hard sci-fi.

Evan:

I like reading about the future because it helps me envision what's possible. We get stuck in the present — but any crazy idea about future technology, some science fiction author already wrote it; the technology just wasn't there yet for it to apply to our world. We're in this very unique window in time where the technology is almost there. So it's good to remind ourselves what's possible, to dream about the future, and to be optimistic and ambitious enough to actually achieve those things — versus being too rooted in the past and the present, which is how our human brains work. So I like that particular book for those reasons, but I love sci-fi.

Steve:

Very cool, I like that. And what's one mistake that taught you a lot in your overall career?

Evan:

How much time do we have left? Because I've got a lot of mistakes.

Steve:

It's just the first one that comes to mind.

Evan:

A lot of people look at my career and say, I want to copy your career. I'm like, don't copy my career — you could do my 20-year career in half the time it took me.

Evan:

The biggest mistake I made early in my career was getting too proud of being able to teach myself things. I taught myself to program, taught myself to start a business, taught myself how to build a webpage. There's some ego there, right? And it's way cheaper and easier and faster to just learn from other people. It took me probably 15 years of my career to really see that — I was so good at teaching myself things that I ignored it. Once I got the humility to ask questions and learn from others, I probably had more career growth in the last five or ten years than in the first 15. So drop the ego and see what you can learn from other people. I think you'll be surprised.

:

That's great. Did it happen naturally? How did you come to the point where you said, okay, I can't learn everything myself — I've got to ask for help sometimes?

Evan:

So it's a bit of an embarrassing story. My startup was acquired by this other company, and I had a mentor there. I was working on my product, and he came to me and said, hey, you're working on your thing, but we've got 20% of our company working on 2% of our revenue. It's kind of a waste, right? And I was like, screw you, man — why did you even buy this company? And the guy was great, because he just told me: Evan, you've got a huge ego, man. You're so focused on proving yourself that you're not thinking about how you can help the company, and you're not even trying to listen or learn from anyone else. And I was like, F you, man. And then, for some reason, a week later it soaked in. I was like, damn, he's kind of right.

Evan:

So I spent the next three months thinking, hey, I'm here anyway — they bought my company — let me just try to learn from these people. They're obviously smart. In those three months I learned more from two or three guys at that company than I had in three years of running my own company by myself. It was such a wake-up call. I remember, three or six months later, thinking: I've wasted my life — what am I doing? So it's a bit embarrassing, because I was a high-ego person back then. Hopefully I've rotated in the other direction. But that was a really big wake-up call, and that conversation set my life and career on a different trajectory.

:

Very cool.

Steve:

Last one If you were not in the position you are now in cybersecurity with Abnormal, where would you be or where would you like to be?

Evan:

I have a lot of personal interest in how AI is going to change how we work and how we live, so I just can't imagine anything outside of AI. I think it's the most consequential technology in civilization, and we're at the most interesting time in the history of humanity. AI may be the last big invention we have as a civilization. So I don't know exactly what, but it would be something in AI — maybe thinking about how we safely and securely use AI, or how we make it easier for AI to be applied in more thoughtful ways.

Evan:

A lot of AI applications today are like building horseless carriages: you took a horse-drawn carriage and swapped out the horse for an engine. We've been bolting AI onto the old world. There's an opportunity to really reimagine, or reinvent, how things should work in an AI-native world. The nature of software, the nature of computers, how we live our lives — it's all going to fundamentally change. So something in that category — I don't know, I haven't really thought about it, but that's where I have a lot of passion. Or I'd make some video games, which would be fun.

:

I was going to say — I thought you were going to say video games. Maybe when I retire, yeah. Last question, then: what's your favorite video game? Is there a favorite one right now that you're able to play, or one that made you think, this is why I want to get into video games?

Evan:

So I mean, I've played all the video games, but over the last 10 years I've probably played less than I wanted to. These days I really only play one game, which is Path of Exile — a very complex game, which I love.

:

Anything that requires spreadsheets and Python scripts to win — that's my game, that's what I love. I haven't heard of this game. I used to play a lot, but now, you know, busy — I've got a family. But I do try to play occasionally. Very cool.

Steve:

I'm basically, like, a part-time professional Path of Exile player. I've got to look this up, I've got to look it up.

:

Yep. Well, Evan, this has been great, man. Really, thank you for coming on. Lots of really cool topics we covered, and thanks for sharing your history, your lessons learned, coming through the fray, and all the things you guys have going on now and for the future. I think this was a really good episode, so thank you again. John, great to see you guys.

Steve:

Thank you for having me, and let's do it again soon. Absolutely. All right, with that, we're out. Thank you for tuning in to today's episode of the Cybersecurity Mentors Podcast.

:

Remember to subscribe to our podcast on your favorite platform so you get all the episodes. Join us next time as we continue to unlock the secrets of cybersecurity mentorship.

Steve:

Do you have questions or topics you'd like us to cover, or do you want to share your journey? Join us on Discord at Cybersecurity Mentors Podcast and follow us on LinkedIn. We'd love to hear from you. Until next time. I'm John Hoyt and I'm Steve Higuretta. Thank you for listening.