Hello everyone, this is Tom Muren and I'm here with another Tom, Thomas Kinsella, who is the co-founder and chief customer officer of Tynes. G'day Thomas, how are you? I am great, thanks so much for having me on, it's a pleasure. Once upon a time, a friend of mine who now runs a cybersecurity startup said to me, and this was quite a long time ago, that in 10 years there'd be a whole lot of automation in cybersecurity, and there'd be a whole lot of jobs that just disappear, and the industry, from a jobs perspective, would be decimated. But that's not the way the world's turned out at all. But I guess with the rise of generative AI, it is possible that that may still happen. So I guess we'll start off with:
Well, what does Tynes do? Yeah, Tynes is an automation platform and we provide a platform for our customers to build, to run and to monitor their most important workflows. Your co-founder, Owen, he went and did a demo with Patrick, which I edited. Yeah, he did. It was not a huge amount of time ago.
But it was long enough ago that the idea of generative AI or AI technologies didn't really come into my mind as I was editing it. Maybe that's a year and a half or two years or something like that. Whereas nowadays, the first thought would be, OK, how would I try and integrate AI into that? Like, would it be useful for anything? And I guess that's the kind of...
experiments you've been doing, perhaps? Yeah, experiments and building. And I think the truth is there are probably 100 different ideas on how you can integrate AI, and if you look at the dozens or hundreds of security companies out there, all of them are integrating AI in loads of different ways. We've got a very high bar, so we tried, I'd say, probably 20 or 30 different implementations of AI. There's one thing that AI is very good at, and that's demoing. It can be extremely impressive. That first time you saw, you know, ChatGPT, you're like, wow. And then you think, oh, well, actually, what are the practical implications of this? We did the exact same thing at Tynes. We built a whole load of prototypes, like, this is great, and then we're like, oh, actually, maybe this isn't quite fit for purpose. What we've landed on is, I suppose, some of the most natural things that we felt, but we weren't quite prepared to ship them immediately. So the first, most natural way that people think they could use AI, especially in an automation platform, is to help you build, right? So there's a huge barrier to, as I said earlier, the analyst automating their own job: lowering the barrier to entry means allowing somebody to describe, using natural language, what they want to do or how they want to manipulate data, and letting them integrate that into the platform.
That's one way that we've got it up and running. That's free for everybody: if you try our community edition, it works immediately. So is that like the idea that I, Tom, have a problem, I understand the problem really well, I understand what I want to do, I just don't have the familiarity with Python or Perl or whatever, and it's kind of like an iterative process with whatever the AI technology is, to say, here's what I want to do, show me some code. And then you would run that and then go, okay, I understand the problem well enough to know that this is not working, let me go back and sort of rehash it again, until you get to the place where you're happy? Is that the... Pretty much exactly that. So the way this build time runs is that you select the data that you want and you describe what you want the output to look like. So the classic example: if you've got, like, VirusTotal and a whole lot of engines that have said something is malicious, the data's ugly as all hell. It's in a really nasty format.
AI is really good at writing code to parse out the relevant information. If you, Tom, describe, hey, I want to parse out this relevant piece of information, it's going to be pretty damn accurate. And the beauty about this, and this is, again, the power of automation, but the beauty about it is that...
what you're seeing is: you're seeing the original data, you're seeing your, you know, prompt that you've given it, you're able to see the code, and then you're able to see the output. And what you've got now is actually like a self-documenting process, where you've got something that has been built and you can tell whether or not it's, you know, working as described, based on: this is the output that I wanted it to deliver. And if you've got it wrong, you can say, oh, actually I want it in this particular format, or tweak this, or add this. But that raw action is used for, like, hundreds and hundreds of different really simple use cases: extracting high vulnerabilities over X, extracting all users, adding this, you know, changing the keys. One of the hardest things in automation, but in cybersecurity in general, is stitching together two different platforms that expect two different data types. So, like, normalizing alert data and sending it to one source is actually really trivial. And same with, you know, writing regex. Writing regex by hand is horrific, really painful. Describing, using natural language, what data you want to extract and letting AI write that regex? Trivial. So using that to empower the analyst is how we've done it.
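To make that concrete, here is a minimal sketch of the kind of code an AI assistant might generate from prompts like "pull out the engines that flagged this as malicious" and "extract every CVE ID from this alert text". The field names and sample data are illustrative assumptions for the sketch, not VirusTotal's exact schema or Tynes's internal format.

```python
import json
import re

# Illustrative VirusTotal-style verdict data; the field names are
# assumptions for this sketch, not the exact VirusTotal schema.
raw = json.loads("""
{
  "data": {
    "attributes": {
      "last_analysis_results": {
        "EngineA": {"category": "malicious", "result": "Trojan.Gen"},
        "EngineB": {"category": "harmless", "result": null},
        "EngineC": {"category": "malicious", "result": "Win32.Agent"}
      }
    }
  }
}
""")

# "Parse out the relevant piece of information": only the engines
# that called the sample malicious, in a clean, normalized list.
results = raw["data"]["attributes"]["last_analysis_results"]
malicious = [
    {"engine": name, "verdict": detail["result"]}
    for name, detail in results.items()
    if detail["category"] == "malicious"
]
print(malicious)

# Natural language: "extract every CVE identifier from this alert",
# with the AI writing the regex so the analyst doesn't have to.
alert_text = "Host exploited via CVE-2023-4863 and CVE-2021-44228."
cve_pattern = re.compile(r"CVE-\d{4}-\d{4,7}")
print(cve_pattern.findall(alert_text))
```

The point isn't the code itself; it's that the analyst only writes the description and checks the output against what they wanted.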
Yeah, yeah. I always think whenever someone mentions regex, that old saying that if you've got a problem that you need to solve with regex, you now have two problems. Yeah, exactly. And so I guess, and I think that is really quite true, but you're saying that in this case, the AI helps you with the regex. So I guess the way you've been describing it, it feels like to me these are the sorts of problems
that, if you were in a SOC, might sort of rise up to your most senior analyst, who can whip it out in 10 minutes because they've been doing it for 20 years. And so this is like pushing that work down to the more junior people. And is it teaching them at the same time? Yeah, it is. It's not only that. Like, certainly when I was running a SOC team and we had a security engineering team who were writing these regexes, it would take them 10 minutes, but they've got a backlog of 50 things they want to do. And God knows they don't want to be writing another regex for the person who's already asked them for regexes and now actually needs to tweak one. So it's getting rid of that dependency on those senior engineers, to allow them to focus on, you know, building the fun threat intel platform that they actually want to be working on. But yeah, it does absolutely teach the analyst as well. Like, they can see the input, they can see
the output. And again, if you're writing scripts, the scripts usually sit on your own, you know, machine. If you're building in an automation platform, you've got something that's shared, and you can actually see the workflows that somebody else has built. The beauty of Tynes, obviously, is you may not want your intern being able to go in and edit those, or maybe you've got change control and, like, Git-style story versioning or workflow versioning, so they can see it, they can suggest changes, all that sort of stuff. But you're exposing them to other processes, seeing what they've done. And then the ultimate goal, and it's capable of it now, but it's way further down the line, the ultimate thing is it shouldn't just be building for one person. You should be building services for other team members, right?
and even other teams. So you should be building a process where, if you've got something to analyze a URL or to find out the owner of a device, first of all, build that. Then say, hey, maybe somebody else can call this workflow from another workflow, or maybe somebody else can build a UI on top of this so that they can bulk analyze, or maybe somebody else can even build an API. And so that's where all of this is going. And the last part is, you know, making sure that this can run all the time. So that's building, running and monitoring, so that you know when this is not working. But yeah, it's really just enabling that person to build something that really would take a senior engineer.
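As a rough illustration of that "workflow as a service" idea, here is a minimal sketch in which a reusable analyze_url step can be called directly by another workflow or exposed to other teams over HTTP. The function name, endpoint, and scoring logic are hypothetical stand-ins, not Tynes's actual workflow interface.

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

def analyze_url(url: str) -> dict:
    """Reusable building block: written once, then callable from other
    workflows, a UI, or an API. The logic here is a hypothetical stub."""
    suspicious = any(marker in url for marker in ("login", "verify", "gift"))
    return {"url": url, "suspicious": suspicious}

# Another workflow calling the same step directly, in bulk:
bulk_results = [analyze_url(u) for u in
                ("https://example.com", "http://verify-account.example")]

# The same step exposed as an API so other teams can call it:
@app.route("/analyze", methods=["POST"])
def analyze_endpoint():
    body = request.get_json(force=True)
    return jsonify(analyze_url(body["url"]))

if __name__ == "__main__":
    print(bulk_results)
    app.run(port=8080)
```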
Right, right, right. So another way that I've heard AI described is that it's like an intern. It can do stuff and it can flag things for a human, but in real organizations, you don't trust interns either. So they're useful, but this seems like quite a different use case. What are the other ways that you've found AI useful? Other than as a sort of partner, I guess.
Yes. So I think that the first thing that I said was like what we described as like that build time, right? So that's like as you're building, it's allowing you to build alongside it. The second way is actually like runtime. And we've seen this be successful in a bunch of companies already. So I think there's a bunch of threat intel companies that are doing a good job. AI is really good at, well, like...
like large language models, at, you know, summarizing, repackaging, reformatting in, like, a chat interface, giving you the information that you want, but just in a slightly nicer format. Or also, you know, doing the basic job of analyzing text and seeing, is this good? Is this bad? Does this look like it's written by a human, et cetera? Or is this potentially malicious? Is this a gift card text message, something that I should be worried about, or is this actually our CEO? So the second way we've really enabled it is we've actually given people access to an underlying, like, raw LLM. And I'll talk later about the security concerns around that and how we've handled them. But there's a lot of companies, as I said, that have given people that ability to summarize threat intel, or analyze an email, hey, is this good or bad, and provide some recommendations. We said those are very smart ways of doing it. But actually, all of that is a relatively simple prompt that is not necessarily custom to your environment, where it's just accessing plain information. There's nothing complicated about it. And that actually works well. Very well. Yeah, it works extremely well. So we're saying, actually, we can expand beyond that. If you want to analyze vulnerabilities, or if you want to explain, like I'm five, to a developer how to fix this vulnerability, or if you want to provide recommendations to a GRC team on something, or if you want to provide a summary to the CEO or an executive summary for an incident, all that sort of stuff is just a slightly different prompt. So we've given raw access to an underlying LLM, where you can, within your workflow, decide how you want to call it. And it means that the use cases are pretty much unlimited. That actually sounds brilliant for a SOC team. Like, all the people who love working in a SOC hate writing reports. So if you can get something to do, like, you know, 85% maybe.
It doesn't even have to be perfect. And that's it. Like, the challenge is, again, and this is where the beauty of the shareability of the platform comes in, that, yeah, it doesn't have to be perfect, but a lot of the time they don't even know where to start. So if you are able to say, actually, hey, here's a prompt, here's how we've seen some other people do this, this is the output. And now you can edit it to your heart's content, but you know that it's going to be in the format and structure that you want.
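For flavor, here is a minimal sketch of how "a slightly different prompt" over the same incident record could serve different audiences. The incident fields and the call_llm helper are hypothetical; the helper stands in for whatever raw LLM action the workflow exposes.

```python
# Hypothetical stand-in for the platform's raw LLM action; wire this
# to whatever model access your workflow actually provides.
def call_llm(prompt: str) -> str:
    raise NotImplementedError("connect to your LLM action here")

# Illustrative incident record built up by earlier workflow steps.
incident = {
    "id": "INC-1234",
    "summary": "Credential phishing email delivered to 12 mailboxes",
    "actions_taken": ["purged emails", "reset 2 compromised accounts"],
}

# Same data, different audiences: each use case is just a different prompt.
prompts = {
    "exec_summary": ("Write a three-sentence executive summary of this "
                     f"incident for the CEO, avoiding jargon:\n{incident}"),
    "dev_fix": ("Explain, like I'm five, to a developer how to remediate "
                f"the root cause of this incident:\n{incident}"),
}

for audience, prompt in prompts.items():
    print(f"--- {audience} ---\n{prompt}\n")
    # report = call_llm(prompt)  # uncomment once wired to a model
```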
I seem to remember from the demo that Tynes had, like, decision nodes. My recollection is that it was a very deterministic algorithm where, you know, you crunch numbers, crunch, crunch, crunch, and if it was above 0.7, you did this, and if it was below, you did that. Are you trying to use LLMs or anything to try and make that fuzzier? Or is that a kind of place you really want to be reliable, because you want to be able to say for sure, this is why we made that decision at that point in time?
Yeah, so we do have the ability to do that and to score, and we've seen some customers do that in those trigger nodes, where they can generate a risk score and see whether or not it's good or bad. But what we've seen, and I kind of alluded to this earlier, is that with LLMs it's incredibly easy to make something cool, but it's hard to make something robust and efficient. If you think about a security workflow, and it could be a standard CSPM or phishing email, or it could be an EDR alert or an XDR alert or something like that, those workflows, they need to be consistent. They need to be tested. They need to be observable. They need to be fast. They need to be cheap. They need to be deterministic enough that if it goes right, you're like, okay, this is exactly why, and if it goes wrong, this is exactly why. To the extent that, and this isn't mine, there's a fantastic VC, Sarah Guo, who's come up with a little bit of a framework around this, but it's that if you're isolating somebody's host, or if you're, you know, doing an RCA on an incident, you don't want to be saying, well, actually, we don't really know why this black box decided this was good or this was bad. Did you say an RCA? Oh, sorry. Yeah. Root cause analysis for an incident. So if you're doing a retroactive review of, like, what went right or what went wrong, or if you're trying to explain to your legal team, well, if you're at that stage, you're already in it. You can and you should build up confidence, and this is, again, the beauty: build up confidence in something. If you're confident after, you know, 20, 30, 50, 100, a thousand runs, hey, then you can say, all right, I'm confident. But before that, you probably want to observe it and have human eyes on it. I think that's the challenge with, if you look at, and not to knock them, but there's a bunch of security copilots, Microsoft's is probably the most famous one out there right now, where they've got a workflow engine which is relying 100% on AI, where they're saying we can remediate incidents 100% and take all those steps in that workflow. And even they're saying it's not reliable. Even they're saying, well, actually, you might want to observe every single step as it goes. And it costs, like, boatloads of money to do it.
Whereas, I suppose, using AI sensibly and reasonably, where you can actually reason with the input and reason with the output and change it so that it reflects your risk tolerance, that's kind of where you need to, that's where we believe it needs to be at the moment. They're just not trustworthy just yet. Yeah, yeah. It seems to me that if you're explaining something to management, you need something better than just AI said yes or AI said no. Yeah.
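To illustrate that contrast, here is a minimal sketch of a deterministic decision node of the kind described: a score can come from an LLM or any other model, but the branch itself stays an explicit, auditable comparison, so a run can always explain exactly why it went down a given path. The threshold, field names, and audit format are illustrative assumptions.

```python
import json
from datetime import datetime, timezone

RISK_THRESHOLD = 0.7  # explicit, auditable branch point

def decide(alert: dict, risk_score: float) -> dict:
    """Deterministic decision node: the score may come from an LLM or
    another model, but the branch is a plain, explainable comparison."""
    escalate = risk_score > RISK_THRESHOLD
    # Record exactly why this decision was made, for any later RCA.
    return {
        "alert_id": alert["id"],
        "risk_score": risk_score,
        "threshold": RISK_THRESHOLD,
        "action": "isolate_host" if escalate else "close_as_benign",
        "reason": (f"score {risk_score} {'>' if escalate else '<='} "
                   f"threshold {RISK_THRESHOLD}"),
        "decided_at": datetime.now(timezone.utc).isoformat(),
    }

audit_record = decide({"id": "ALERT-42"}, risk_score=0.82)
print(json.dumps(audit_record, indent=2))
```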
Yeah, a black box can be painted like a silver bullet, but that still doesn't make it the silver bullet, I suppose. One thing you mentioned earlier that I wanted to pick up on was that you said something about protecting customer data. So how do you do that? My kind of view from a far distance is that, of the modern LLMs, some you can run locally and some need to be cloud-based. How's Tynes implementing that?
Yeah, so while we were developing our AI features, we asked a whole lot of CISOs in our network: what are your concerns about LLMs, and do you believe the hype? And I think there's generally a little bit of a trough of disillusionment right now, but they had a lot of concerns around privacy and security. And not just them: a lot of their legal teams, and their own customer agreements with their respective customers, prohibited them from using certain LLMs, especially if data was going to be used to train
Other models? Yeah. The way we've implemented it in Tynes, we've kind of designed it with privacy in mind from the ground up. So we run the language models, in this case using AWS Bedrock, so it's Anthropic's Claude and a few other models, but they're all within our own infrastructure. And the beauty is they're tenant-scoped, they're private, they're in your region, they're stateless, and there's no networking, no training, no storage, no logging. So from a security point of view and from a data privacy point of view, from talking to our own internal and external counsel, we don't even have to add another subprocessor to our data processing agreements. We're able to just turn them on. Now, we haven't just turned them on, because lots of people obviously have natural concerns about it. But it does mean that when we go to
any of our customers or any prospects and say, hey, this is how we're using it, they're like, oh, actually, that seems to answer pretty much all of our questions. And it's a really nice answer, rather than saying, oh, we're sending it out to this company over here, and we don't know where they're processing it, and we trust that they're not logging it, et cetera.
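For reference, here is a minimal sketch of what invoking a Claude model through AWS Bedrock inside your own account and region looks like with boto3. The region, model ID, and prompt are illustrative choices, and this is the generic AWS API rather than Tynes's internal setup.

```python
import json
import boto3

# Bedrock keeps inference inside your own AWS account and region;
# the region and model ID below are illustrative choices.
client = boto3.client("bedrock-runtime", region_name="eu-west-1")

body = {
    "anthropic_version": "bedrock-2023-05-31",
    "max_tokens": 300,
    "messages": [
        {"role": "user",
         "content": "Summarize this phishing incident for an executive: ..."}
    ],
}

response = client.invoke_model(
    modelId="anthropic.claude-3-sonnet-20240229-v1:0",
    body=json.dumps(body),
)
result = json.loads(response["body"].read())
print(result["content"][0]["text"])
```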
So it sounds to me that you're overall relatively positive about AI, and it is not going to take away all the cybersecurity jobs. No, yeah, extremely positive about it. Definitely not going to take away a lot of cybersecurity jobs. And as I said earlier, the better you get at detecting, the more you have to respond to. I think there's going to be a never-ending challenge in security. We're going to get better and we will have more visibility, but we'll grow. And I think, I suppose, for Tynes and workflow engines in general, I think there could be multiple winners in this space, but there's a huge opportunity for people to use AI to improve their security posture and improve their security operations teams as well. So yeah, I'm bullish. Well, Thomas, Thomas Kinsella, co-founder and chief customer officer at Tynes, thanks for both an interesting, illuminating and optimistic talk about what you can use AI for. Thank you. Thanks very much, Tom.