Mercor says it was hit by cyberattack tied to compromise of LiteLLM (techcrunch.com)
nope1000 13 hours ago [-]
> The incident also prompted LiteLLM to make changes to its compliance processes, including shifting from controversial startup Delve to Vanta for compliance certifications.

This is pretty funny.

The leaked Excel sheet of Delve's customers is basically a shortlist of targets for hackers to try now. Not that they necessarily have bad security, but you can play the odds.

_pdp_ 11 hours ago [-]
I am not defending Delve or anything, and I hope they get what they deserve, but there is no correlation between SOC2 certification and the actual cyber capability of a company. SOC2 and ISO27001 are just compliance, and frankly most of it is BS.
Lucasoato 3 hours ago [-]
I went through SOC2 Type I and II. I’d say that most of that stuff is necessary, like splitting environments and so on. That doesn’t mean it’s anything close to sufficient to avoid being hacked.

It’s a framework to give you the direction, then if employees are careless (or even malicious), no security standard is complete enough to protect a company.

nope1000 7 hours ago [-]
Sure, it's certainly not perfect, and a lot of the documentation is something you just write for the audit and never look at again, but that's why I'm saying play the odds. The average Delve customer startup might be less secure than the average startup that has to justify its processes to a real auditor.
coldstartops 8 hours ago [-]
Personally, I use them as frameworks to justify management processes.

A) I tie the cybersecurity activities first to revenue-enabling business outcomes (unblocked contracts), and second to reduced risk (which people respond to less when deciding where to spend the buck).

B) With the political capital from point A), I actually operate a cybersecurity program and justify DevSecOps artefacts, threat modeling, incident response exercises, etc.

SOC2 reports and ISO27k certificates are more like a standardized way of communicating the org's activities to outside people, and of getting an external party to vet that the org doesn't bulls*t too much. But at the end of the day, the organization is responsible for keeping its house in order.

aitchnyu 10 hours ago [-]
Delve and Emdash. Are there more products or companies with similar names?
edgineer 8 hours ago [-]
Polsia (AI slop backwards)
snapcaster 6 hours ago [-]
Some of it is, but things like "your stage/dev and production environments should be completely isolated from each other" are valid, and most tech companies get lazy on this front.
parliament32 3 hours ago [-]
It was never about cyber capability. It's a liability transfer framework.

If a service provider has a control that says "we use firewalls on all network access points, and configure those firewalls to CIS benchmark whatever", and a third-party signs off with "yes we checked, they have the firewalls, and they're configured properly", you now have two parties you can sue when a security incident caused by lack of firewalls causes you material damage.

Your org's cyber insurance will also go down if you can say "all our vendors have third-party attested compliance, and we do annual compliance reviews".

latchkey 2 hours ago [-]
According to SemiAnalysis, it is akin to getting an FAA certification.

https://x.com/HotAisle/status/2035062702587232458

sandeepkd 6 hours ago [-]
Yes, they may be BS in certain cases, but they're still better than nothing. They at least make companies consider the questions instead of claiming unawareness, and most importantly they facilitate incremental improvement.
sebmellen 10 hours ago [-]
It might feel like BS, and I'm inclined to agree with you because of the security theater aspect. (For example, Mercor had their verification done by what appears to be a legitimate audit firm.)

But it's not useless. It still forces you to go through a very useful exercise of risk modeling and preparation that you most likely won't do without a formal program.

cj 10 hours ago [-]
If your goal is to maximize your posture against cyber threats, spending your time on SOC 2 compliance with Vanta (or similar) is a waste of time if you consider the amount of time spent compared to security gained.

It's incredibly easy to get SOC 2 audited and still have terrible security.

> forces you to go through a very useful exercise of risk modeling

Have you actually done this in Vanta, though? You would have to go out of your way to do it in a manner that actually adds significant value to your security posture.

(I don't think SOC/ISO are a waste of time. We do it at our company, but for reasons that have nothing to do with security)

mikeocool 9 hours ago [-]
Probably the most useful aspect of SOC2 is that it gives the technical side of the business an easy excuse for spending time and money on security, which in a startup environment is not always easy otherwise (i.e. "we have to dedicate time to updating our out-of-date dependencies, otherwise we'll fail SOC2").

If you do it well, a startup can go through SOC2 and use it as an opportunity to put together a reasonable cybersecurity practice. Though, yeah, one does not actually beget the other: you can also very easily get a SOC2 report with minimal findings while having a really bad cybersecurity practice.

sersi 6 hours ago [-]
That's exactly what I've done in the past. We had to be SOC2 and PCI DSS compliant (high volume, so it couldn't be done through an SAQ). I wouldn't say the auditor helped much in improving our security posture, but it allowed me to justify some changes and improvements that did help a lot.
sunir 8 hours ago [-]
It doesn't force you to go through risk modelling, because by now most SOC2 platforms have templates where you just fill in the blanks and sign off. Conversely, the auditors are paid by the company, so their incentive is to pass the audit so the client can get what it wants.

Because there's no adversarial pressure as a check and balance to the security, and AICPA is clearly just happy to take the fees, it's a hollow shirt. It's like this scene from The Big Short. https://youtu.be/mwdo17GT6sg?si=Hzada9JcdIPfdyFN&t=140

As usual, it's only people that care that force positive change. The companies that want good security will have good security. Customers who want good security will demand good security.

gibolt 8 hours ago [-]
Having been through SOC2, it doesn't mean a company is rock solid, but it definitely makes the company button up loose ends, if taken seriously.
jacquesm 10 hours ago [-]
The main use of these certs is to give people that actually want to do their job a stick to hit their bosses with.
CafeRacer 8 hours ago [-]
I genuinely wonder if anyone has had success landing gigs at Mercor.
tankenmate 7 hours ago [-]
Given their AI "hiring / onboarding" process, all I can say is: couldn't have happened to a nicer company.
ffsoftboiled 5 hours ago [-]
I know of a couple people. It was a pretty miserable experience.
bombcar 7 hours ago [-]
The way to get a gig at Mercor is to hack their LLM so that it inserts you as already hired.
robshippr 4 hours ago [-]
Second major supply chain compromise in a week, after the axios npm attack. Forty minutes and 500k machines affected. SOC2 won't catch this. The real question is whether your CI pipeline would have flagged a dependency change that happened between your last build and the one going to prod. Most teams have no visibility into that window at all.
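A minimal sketch of the kind of gate the comment describes, assuming a simplified npm package-lock.json v2/v3 layout (the `lockfile_versions` and `diff_dependencies` helpers are hypothetical, not any real CI tool):

```python
import json

def lockfile_versions(lock_text: str) -> dict:
    """Map package name -> pinned version from a (simplified)
    package-lock.json v2/v3 'packages' section."""
    lock = json.loads(lock_text)
    return {
        name.removeprefix("node_modules/"): meta.get("version")
        for name, meta in lock.get("packages", {}).items()
        if name  # the "" key is the root project entry, skip it
    }

def diff_dependencies(old_lock: str, new_lock: str) -> list:
    """Return (name, old_version, new_version) for every package whose
    pinned version changed between two builds -- the window a CI gate
    would need to inspect before promoting a build to prod."""
    old, new = lockfile_versions(old_lock), lockfile_versions(new_lock)
    return sorted(
        (name, old.get(name), new.get(name))
        for name in set(old) | set(new)
        if old.get(name) != new.get(name)
    )
```

Diffing the lockfile from the last green build against the one about to ship would at least surface an unexpected version bump in that window, even if it can't tell you whether the new version is malicious.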
yieldcrv 23 minutes ago [-]
> SOC2 won't catch this

Cybersecurity professionals and their certification treadmill crack me up because of this

They get paid less and require more certifications to be marketable, all to simply show actual "computer wizards" where all the blind spots are.

sharadov 3 hours ago [-]
Could not have happened to a more usurious company.
arwhatever 1 hour ago [-]
LinkedIn itself is far from great, but this seems like a good thread to share my LinkedIn tip of creating a job alert using a search query like

rust embedded NOT lensa NOT jobot NOT alignerr NOT mercor NOT “crossing hurdles”

n1tro_lab 7 hours ago [-]
The malicious LiteLLM versions were live for 40 minutes. Wiz estimates 500,000 machines were affected. LiteLLM is present in 36% of cloud environments. Forty minutes was enough.
cat-whisperer 2 hours ago [-]
all leaks are tied together
aservus 13 hours ago [-]
This is a good reminder that any tool handling sensitive data — even internal ones — needs to be transparent about where data goes. The assumption that SaaS tools protect your data is getting harder to defend.
lukewarm707 12 hours ago [-]
I use LLMs to read the privacy policies that are too long to read. They guarantee almost nothing, unless you go out of your way to get an SLA.
ashishb 14 hours ago [-]
[flagged]
lmc 13 hours ago [-]
Docker is not a strong security boundary and shouldn't be used to sandbox like this

https://cloud.google.com/blog/products/gcp/exploring-contain...

ashishb 13 hours ago [-]
Compared to what? Which one is superior?

Running npm on your dev machine? Or running npm inside Docker?

I would always prefer the latter but would love to know what your approach to security is that's better than running npm inside Docker.

lmc 13 hours ago [-]
By all means, run your npm in docker, but please stop telling others it's a secure way to do so.
ashishb 12 hours ago [-]
I only said it is a defense-in-depth measure.

I definitely want to know how it is worse than running npm directly on the host.

habinero 11 hours ago [-]
Those aren't the only options, my dude.
ashishb 11 hours ago [-]
And what are good options that you use and that work on Linux as well as Mac OS?
lmc 13 hours ago [-]
ashishb 12 hours ago [-]
So the worst case is that you are back to running npm on your host. Right?
dns_snek 10 hours ago [-]
99% of this is inapplicable to this discussion because it's about misconfigurations.

Escapes:

- privileged mode (misconfiguration, not default or common)

- excessive capabilities (same)

- CAP_SYS_ADMIN (same)

- CAP_SYS_PTRACE (same)

- DAC_READ_SEARCH (same)

- Docker socket exposure (same)

- sensitive host path mounts (same)

- CVE-2022-0847 (valid. https://www.docker.com/blog/vulnerability-alert-avoiding-dir...)

- CVE-2022-0185 (mitigated by default Docker config, requires misconfiguration of capabilities)

- CVE-2021-22555 (mitigated by default Docker config, requires misconfiguration of seccomp filters)

default seccomp filters in docker: https://docs.docker.com/engine/security/seccomp/#significant...

privileges that are dropped: https://docs.docker.com/engine/containers/run/#runtime-privi...

---

I'll add this: containers aren't as strong a security boundary as VMs, but even so, a successful attack now requires infection of the container AND a concurrent container-escape vulnerability. That's a really high bar; someone would need to burn a 0-day on it.

The bar right now is really, really low - blocking post-install scripts seems to be treated as "good enough" by most. Using a container-based sandbox is going to be infinitely better than not using one at all, and container-based solutions have a much easier time integrating with other tools and IDEs which is important for adoption. The usability and resource consumption trade-off that comes with VMs is pretty bad.

Just don't commit any of the mortal sins of container misconfiguration: don't mount the Docker socket inside the container (tempting when you're trying to build container images inside a container!), don't use --privileged, and don't mount any host paths other than the project folder.
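As an illustration of those "mortal sins", here's a hedged sketch of a pre-flight lint for a `docker run` argv. The `lint_docker_args` helper and its deny-lists are hypothetical and deliberately non-exhaustive; they only cover the misconfigurations named above:

```python
# Illustrative deny-lists, not a complete hardening policy.
FORBIDDEN_FLAGS = {"--privileged"}
SENSITIVE_MOUNT_SOURCES = {"/var/run/docker.sock", "/", "/etc", "/home", "/root"}

def lint_docker_args(argv: list) -> list:
    """Return warnings for the container-misconfiguration sins discussed
    above: --privileged, the Docker socket, and sensitive host mounts."""
    warnings = []
    for i, arg in enumerate(argv):
        if arg in FORBIDDEN_FLAGS:
            warnings.append(f"forbidden flag: {arg}")
        # -v/--volume takes "host_src:container_dst[:opts]" as the next token
        if arg in ("-v", "--volume") and i + 1 < len(argv):
            host_src = argv[i + 1].split(":", 1)[0]
            if host_src in SENSITIVE_MOUNT_SOURCES:
                warnings.append(f"sensitive host mount: {host_src}")
    return warnings
```

A project-folder bind mount like `-v ./app:/work` passes cleanly, while `--privileged` or mounting the Docker socket gets flagged before the sandbox ever starts.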

kajman 6 hours ago [-]
I don't think it's crazy to imagine a misconfigured production environment. But the same examples of how "containers aren't really secure" keep coming up, and as you mention, they're very amateur sins to commit.

AFAIK a comprehensive SELinux policy (like Red Hat ships) set to enforce will also prevent quite a few file accesses or modifications from escapes.

EE84M3i 12 hours ago [-]
Confusingly, Docker now has a product called "Docker Sandboxes" [1] which claims to use "microVMs" for sandboxing (separate VM per "agent"), so it's unclear to me if those rely on the same trust boundaries that traditional docker containers do (namespaces, seccomp, capabilities, etc), or if they expect the VM to be the trust boundary.

[1]: https://www.docker.com/products/docker-sandboxes/

notachatbot123 14 hours ago [-]
[flagged]
ashishb 13 hours ago [-]
What makes you think that?

You can see from the commit history that ~10% of the code was written by agents.

Rest was all written by me.

Unlike other criticisms of the project, this one feels personal as it is objectively incorrect.

bengale 13 hours ago [-]
All these commenters just yell AI about every post and comment on here now. They have a worse hit rate than a blind marksman.