Video: Active Risk Migration Overview | Duration: 908s | Summary: Active Risk Migration Overview | Chapters: Introduction to Active Risk (20s), Active Risk Modeling (87s), Vulnerability Scoring Algorithm (250s), Exploitability and Risk (397s), Active Risk Assessment (491s), Active Risk Resources (591s), Concluding Active Risk (727s)
Transcript for "Active Risk Migration Overview": Hi, everyone. Welcome to today's webinar, where we'll be diving into Active Risk. My name is Joel Alcon. I'm a product marketer here at Rapid7, and I'm joined by Neeti Sharma from our product management team. Today, Neeti will walk us through best practices, exactly what Active Risk is, how you can make that shift, and how your scores may change moving from the Real Risk methodology to Active Risk. Before we get started, just for level setting: Real Risk was our legacy scoring methodology, introduced a couple of years ago, and we've had several customers make the shift to Active Risk. So this webinar is aimed at providing clarity and best practices as you make that shift. Neeti, take it away. Thank you, Joel. Let's continue talking about interesting things: Active Risk and Real Risk. Let me walk you through the constituents of Active Risk. What does Active Risk really mean? Let me start sharing my screen here. Alright. Active Risk today accounts for CVSS scores, but what it has in addition, and what makes it really stand out in the industry, is the real-time element and the predictive threat intelligence. That intelligence comes from sources like Metasploit, Project Sonar, our Heisenberg honeypots, AttackerKB, ExploitDB, and a lot more. Now, the way our model is built, and I'm going to oversimplify this because we all love simple things: this is our Rapid7 vulnerability risk assessment approach. Our model has three elements. One is the capability factor, another is the intent factor, and when the two combine we get what we call a multiplicative effect. Let me walk you through this in a little more detail.
So when a vulnerability has a known exploit, for example it appears in ExploitDB or it's curated in Metasploit or other frameworks that we use, we increase the risk score by up to 10%. Similarly, we have what we call an intent factor. When a vulnerability is being used in targeted attacks and campaigns, confirmation that we get through our intelligence feeds and our predictive intelligence, we see the risk score go up by another 10%. The real power lies when these two things work together. An example: if I have a base vulnerability with a score of 650 and it has a known exploit, we raise the score to 715. And if it also has predictive threat intelligence associated with it, where we see the vulnerability has been part of targeted campaigns, the score goes up to 787. But the final score you see for something like this is not 787; it's the multiplicative effect. When both factors are present, meaning the vulnerability is exploitable and it's also being used by adversaries, say APT21 is using it as part of a targeted campaign, we push the score up exponentially, and that makes sense. This is how our model enhances the exploitability impact. Let me give you the bottom line up front. Anything under active exploitation in the wild will score at or close to 1,000, which is maximum priority. Vulnerabilities where exploit code is used in targeted attacks land somewhere in the mid-800s, which can also be critical priority. Individual vulnerabilities with public exploit code sit in the higher critical range.
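The arithmetic Neeti walks through above (650 becomes 715 with a known exploit, then 787 with confirmed campaign use, with scores capped at 1,000) can be sketched in a few lines. This is a minimal illustration only: the actual Active Risk algorithm is proprietary, and the 10% bumps, the rounding, and the 1,000 cap here are assumptions reconstructed from the numbers quoted in the talk.

```python
# Minimal sketch of the two-factor scoring described in the webinar.
# NOT Rapid7's actual algorithm: the 10% bumps, rounding, and 1,000 cap
# are assumptions taken from the example numbers quoted in the talk.

MAX_SCORE = 1000  # maximum priority, e.g. active exploitation in the wild

def bump(score: int, pct: int = 10) -> int:
    """Raise a score by pct percent, rounding to the nearest integer."""
    return (score * (100 + pct) + 50) // 100

def active_risk_sketch(base: int, known_exploit: bool = False,
                       targeted_campaigns: bool = False) -> int:
    """Apply the capability and intent factors multiplicatively to a base score."""
    score = base
    if known_exploit:           # capability factor: public exploit code exists
        score = bump(score)     # e.g. 650 -> 715
    if targeted_campaigns:      # intent factor: confirmed use in campaigns
        score = bump(score)     # e.g. 715 -> 787
    return min(score, MAX_SCORE)

print(active_risk_sketch(650))                        # 650 (base score only)
print(active_risk_sketch(650, known_exploit=True))    # 715
print(active_risk_sketch(650, known_exploit=True,
                         targeted_campaigns=True))    # 787
```

Applying both factors multiplies the bumps rather than adding them, which matches the "multiplicative effect" described above: 650 × 1.1 × 1.1 ≈ 787 rather than 650 + 10% + 10% = 780.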
Security updates covering multiple vulnerabilities, and individual vulnerabilities with public exploit code, sit slightly lower than that. That was the assessment we came up with, and how we got there is the fun part. We looked at different unique customer environments and their datasets, and we ran the algorithm to compare our findings. Did this really align? Here's an example: a vulnerability we scored at 837 that was used in targeted attacks, and scores of 802 for vulnerabilities that had ExploitDB references and were part of a Metasploit module. We ran these case studies across multiple customers, and we arrived at the same bottom line I just summarized. What we also found, which was really important to us, was an understanding of the score differences. The Active Risk algorithm was assigning scores of 837, or in that same bucket, for known exploit code plus confirmed use in targeted attacks. Those vulnerabilities had higher-risk characteristics, like server-side vulnerabilities in popular frameworks, which justifies why they need to score higher, and they had crossed the threshold from theoretical to actually exploited in the wild. Those are the key characteristics associated with scores in the mid-800s. Scores around 800 were more about technical exploitation techniques being publicly available, with otherwise similar characteristics. So we found the scoring algorithm to be very consistent with the way we think about exploitation.
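As a reading aid, the evidence tiers and approximate score bands described above can be collected into a small lookup. The band descriptions are paraphrased from the ranges quoted in the talk; they are not published Active Risk thresholds.

```python
# Evidence tiers described in the webinar, highest priority first.
# The bands are approximations of the ranges quoted in the talk,
# not official Active Risk score thresholds.
EVIDENCE_TIERS = [
    ("active exploitation in the wild", "at or near 1,000 (maximum priority)"),
    ("exploit code used in targeted attacks", "mid-800s (critical)"),
    ("public exploit code, single vulnerability", "higher critical range"),
    ("security update with multiple vulnerabilities", "slightly lower"),
]

def expected_band(evidence: str) -> str:
    """Return the approximate Active Risk band for an evidence tier."""
    for tier, band in EVIDENCE_TIERS:
        if tier == evidence:
            return band
    raise KeyError(evidence)

print(expected_band("exploit code used in targeted attacks"))
```

The ordering is the point: more concrete evidence of attacker capability and intent maps to a higher band, which is the pattern the case studies below confirm.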
Now, we often get this question from customers: can you tell me how your exploitability factor really works? Do I need to add more weight to it? What we suggest is that additional weighting is unnecessary, because the exploitability factor and the predictive threat intel factor work together so well that we are automatically prioritizing exploitable vulnerabilities. When I say "never" I get a little cautious, but this is a situation where you should never see a vulnerability that is publicly exploitable sitting at a score of 500 or 600. We are accounting for those things. So that's how our algorithm works, and now I'll share another interesting case study we ran in other environments. We often get this question, and it's a good one: how would my overall risk change when I move from a legacy risk scoring algorithm to Active Risk? There's no single answer, no one size fits all, because every environment is unique and the vulnerabilities in your environment are not the same as anyone else's. But let me pause here and go back to one point. The intent of Active Risk is to ensure that we are not understating any of your risk. The objective is not about overstating risk; it's about not understating it. For example, in this particular dataset we actually see a reduction from Real Risk to Active Risk for a lot of vulnerabilities, with many 600s becoming 500s. Then there are cases where we raise the score, and it's justified: we had a Cisco Jabber CVE with a CVSS score of 8.8 that was exploitable and was also being used in a targeted campaign. It was understated here, right?
So we had it at 636, and that's one score that goes up to 786. Similarly, in this other example, the Real Risk algorithm was calculating a score of 642. We want to make sure we're not understating that, and the new score is 1,000, because this vulnerability is being actively exploited in the wild today. This is an example of how we came up with this algorithm. What I really want to make clear is that this is not a blanket statement that your risk scores will increase. We've seen reductions in risk scores for some customers, and we've seen customers with increases, but the intent here, and the commitment at Rapid7, is to surface the most important risks you should be addressing. We're trying to stay true to our mission there. As for useful resources, we have a very interesting comparison that walks through important vulnerabilities. For each one, you can get a reference to what the Real Risk score was and why we see a higher Active Risk score, because with new research and the evolving threat landscape, things have changed for that vulnerability. There are many other examples there as well. We also have a detailed deep dive on what Active Risk is and what its benefits are; it's an excellent white paper for adoption. So along with these resources, the comparison of Real Risk and Active Risk is available for your team to review and understand, with references that give you a rough sense of how Active Risk scores have changed.
And we also have that excellent white paper that helps you dig into the algorithm, its constituents, and its implications. We've also been working on interesting roadmap enhancements for Active Risk, which will include EPSS and flexible scoring customization. Those are all on the Active Risk roadmap, not our legacy roadmap. Now, I may be biased, and I do have a favorite algorithm here: I think Active Risk is an excellent algorithm to use. But if you have questions or concerns, our team is available to answer anything you may have, so please feel encouraged to set up sessions with us to talk through this. At the end of the webinar, we will post the links for the white paper and the comparisons. So, yeah, that's where Active Risk is; it's definitely leading the game here. I'll pass it back to Joel. Yeah, thank you so much, Neeti. Some key takeaways there, as you said: Active Risk is the methodology where we are focusing all our efforts. Any updates, all the ways we are enhancing our risk methodology and our ability to help you prioritize your risk, are focused on Active Risk. So you may be in a situation where you make that shift to Active Risk while there's an upcoming update, and that's okay; again, all our efforts are focused on the Active Risk scoring methodology. There's a lot of information in those documents you shared, Neeti, so folks can really understand how vulnerabilities scored under the legacy Real Risk methodology map over to the new one. Again, with questions and concerns, please do check out those key resources, and we're always here to provide assistance as needed. Thank you so much, Neeti.
And, everyone, we'll see you at the next webinar. Thank you.