That one's always a good read, particularly the discussion of the tension between 100% coverage testing and defensive programming. We go for maximum defensive programming, so there are huge numbers of code paths that can't be exercised in testing but that will prevent things running off into the weeds if something does manage to trigger them. Another organisation, in contrast, had a client who required 100% code coverage in testing, so they spent six months removing all the non-testable defensive code from their code base.
dang 2 days ago [-]
That's a classic! (https://news.ycombinator.com/item?id=46306724) ... but too far from this particular topic to make sense on the list - otherwise we'd probably have to add all SQLite stories, which are legion.
mickrich345 2 days ago [-]
I never read this article by the C developers before.
It's so odd to read a level-headed C vs. Rust take on the internet.
I'm not sure I buy this from a technical perspective. Rust already meets almost all of the criteria laid out at the end of this post. By all means keep using C if you like it, but the rust team has done an excellent job over the last few years addressing most of these issues.
> - Rust needs to mature a little more, stop changing so fast, and move further toward being old and boring.
Rust moves at a pretty glacial pace these days. Slower than C++ for sure. There haven't been any big, significant changes to the language since async. Code that compiles today should compile indefinitely. (And the rust compiler authors check this on every release, by recompiling basically everything in crates.io to make sure.)
> - Rust needs to demonstrate that it can be used to create general-purpose libraries that are callable from all other programming languages.
Rust matches C in this regard. You can import and export C functions from rust very easily. The consumer of the foreign function interface has no idea they're calling rust and not C.
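For example, here's a minimal sketch of exporting a C-callable symbol from Rust (the function name is invented for illustration):

```rust
// Hypothetical example: a Rust function exported with a C ABI.
// A C caller would declare it as: int32_t add_checked(int32_t, int32_t);
#[no_mangle]
pub extern "C" fn add_checked(a: i32, b: i32) -> i32 {
    // Saturating math instead of C's undefined signed-overflow behavior.
    a.saturating_add(b)
}

fn main() {
    // Callable from Rust too; a C consumer just sees an ordinary symbol.
    println!("{}", add_checked(2, 3));
}
```

Built as a cdylib or staticlib, this links like any C library, and a tool like cbindgen can generate the matching header.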
> - Rust needs to demonstrate that it can produce object code that works on obscure embedded devices, including devices that lack an operating system.
Rust works pretty well on raw / embedded hardware via #![no_std]. There are a few obscure architectures supported by gcc but not llvm (and by extension not rust). But it generally works great. I'd love to know what the real blocker platforms are (if any).
> - Rust needs to pick up the necessary tooling that enables one to do 100% branch coverage testing of the compiled binaries.
Uh, I think this is possible today? RustRover (IntelliJ) can certainly produce coverage reports. This doesn't feel out of reach.
> - Rust needs a mechanism to recover gracefully from OOM errors.
True. You can override the global allocator for a program and use that to detect OOM. But recovering from OOM in general is tricky. I personally wish rust's handling of allocators looked more like zig.
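A rough sketch of that override, assuming a simple byte budget (CappedAlloc, the 64 MiB cap, and the names are all invented; real OOM recovery is much hairier than this):

```rust
use std::alloc::{GlobalAlloc, Layout, System};
use std::sync::atomic::{AtomicUsize, Ordering};

// Hypothetical wrapper allocator that refuses allocations past a fixed cap,
// so fallible call sites (e.g. Vec::try_reserve) see failure instead of
// the process aborting.
struct CappedAlloc;

static ALLOCATED: AtomicUsize = AtomicUsize::new(0);
const CAP: usize = 64 * 1024 * 1024; // assumed 64 MiB budget

unsafe impl GlobalAlloc for CappedAlloc {
    unsafe fn alloc(&self, layout: Layout) -> *mut u8 {
        // fetch_add returns the old total; old + size is the new total.
        if ALLOCATED.fetch_add(layout.size(), Ordering::Relaxed) + layout.size() > CAP {
            ALLOCATED.fetch_sub(layout.size(), Ordering::Relaxed);
            return std::ptr::null_mut(); // null signals allocation failure
        }
        System.alloc(layout)
    }
    unsafe fn dealloc(&self, ptr: *mut u8, layout: Layout) {
        ALLOCATED.fetch_sub(layout.size(), Ordering::Relaxed);
        System.dealloc(ptr, layout)
    }
}

#[global_allocator]
static A: CappedAlloc = CappedAlloc;

fn main() {
    let mut v: Vec<u8> = Vec::new();
    // try_reserve surfaces the failed allocation as a Result
    // instead of aborting the process.
    match v.try_reserve(CAP * 2) {
        Ok(_) => println!("reserved"),
        Err(_) => println!("allocation failed, recovered gracefully"),
    }
}
```

The catch is that this only helps at call sites that use the fallible APIs; an ordinary Vec::push that hits the cap still aborts, which is part of why OOM recovery in Rust remains tricky.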
> - Rust needs to demonstrate that it can do the kinds of work that C does in SQLite without a significant speed penalty.
Rust and C are pretty much even when it comes to performance. Rust binaries are often a bit bigger though.
thethirdone 2 days ago [-]
The criteria were laid out in 2019 [0]. It was less clear then.
> If you are a "rustacean" and feel that Rust already meets the preconditions listed above, and that SQLite should be recoded in Rust, then you are welcomed and encouraged to contact the SQLite developers privately and argue your case.
It seems like the criteria are less of things the SQLite developers are claiming Rust can't do and more that they are non-negotiable properties that need to be considered before even bringing the idea of a rust version to the team.
I think it is at least arguable that Rust does not meet the requirements. And they did explicitly invite private argument if you feel differently.
Ah, I assumed the page was written recently due to this message at the bottom:
>> This page was last updated on 2025-05-09 15:56:17Z <<
> I think it is at least arguable that Rust does not meet the requirements
Absolutely. The lack of clean OOM handling alone might be a dealbreaker for sqlite. And I suspect sqlite builds for some weird platforms that rustc doesn't support.
But I find it pretty weird reading comments about how rust needs to prove it performs similarly to C. Benchmarks are just a google search away, folks.
> And they did explicitly invite private argument if you feel differently.
Never.
It's not up to me what language sqlite is written in. Emailing the sqlite authors to tell them to rewrite their code in a different language would be incredibly rude. They can write sqlite in whatever language they want. My only real choice is whether or not I want to use their code.
oguz-ismail2 2 days ago [-]
> RustRover (IntelliJ) can certainly produce coverage reports.
Yes! A quick google brings up cargo-llvm-cov[1], which is a rust wrapper around llvm source code coverage. It has an unstable --branch flag for branch coverage, but branch coverage currently has some language-level limitations[2].
I’d imagine this will go a bit like the Rust rewrite of sudo etc.: despite the memory safety advantages, at least towards the start it still ends up more fragile, because the incumbent has years of testing and fixing behind it.
fulafel 3 days ago [-]
They're not aiming at replacing SQLite-in-C with SQLite-in-Rust, they're doing this so they can implement more additional functionality faster than with C's chainsaw-juggling-act semantics and the inability to access the proprietary SQLite test suite.
IMHO breaking free of SQLite's proprietary test suite is a bigger driver than C vs Rust. Turso's Limbo announcement says exactly that: they couldn't confidently make large architectural changes without access to the tests. The rewrite lets them build Deterministic Simulation Testing from scratch, which they argue can exceed SQLite's reliability by simulating unlikely scenarios and reproducing failures deterministically.
pseudohadamard 2 days ago [-]
Having seen way too many "we're going to rewrite $xyz but make it BETTERER!!" projects, I don't give this one much chance of success. SQLite is a high-quality product with a quarter-century of development history and huge amounts of testing effort, both by the devs and via public use. So this let's-reinvent-it-in-Rust effort will have to beat an already very good product with a staggering amount of development and testing behind it, and if the devs do manage to get through it all, they'll end up with about the same thing but written in a language that many of the SQLite targets don't work with. I just can't see this going anywhere outside of hardcore Rust devotees who want to use a Rust SQLite even though it still hasn't got past the fixer-upper stage.
adamzwasserman 2 days ago [-]
fragmede is correct.
I needed SQLite as a central system DB but couldn't live with single-writer. So I built a facade that can target SQLite, Postgres, or Turso's Rust rewrite through one API.
The useful part: mirroring. The facade writes to two backends simultaneously so I can diff SQLite vs Turso behavior and catch divergences before production. When something differs, I either file upstream or add an equalizing shim.
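A toy sketch of the mirroring idea (all names here are invented; the real facade presumably wraps actual SQLite/Turso drivers rather than this in-memory stand-in):

```rust
// Invented trait standing in for a real database driver.
trait Backend {
    fn execute(&mut self, sql: &str) -> Result<Vec<String>, String>;
}

// Facade that writes to a primary and a shadow backend and diffs the results.
struct Facade<A: Backend, B: Backend> {
    primary: A,
    shadow: B,
}

impl<A: Backend, B: Backend> Facade<A, B> {
    fn execute(&mut self, sql: &str) -> Result<Vec<String>, String> {
        let a = self.primary.execute(sql);
        let b = self.shadow.execute(sql);
        if a != b {
            // In real life: file upstream or add an equalizing shim.
            eprintln!("divergence on {:?}: {:?} vs {:?}", sql, a, b);
        }
        a // the primary's answer is authoritative
    }
}

// Tiny in-memory stand-in so the sketch runs on its own.
struct Mem(Vec<String>);
impl Backend for Mem {
    fn execute(&mut self, sql: &str) -> Result<Vec<String>, String> {
        match sql.strip_prefix("INSERT ") {
            Some(v) => { self.0.push(v.to_string()); Ok(vec![]) }
            None => Ok(self.0.clone()),
        }
    }
}

fn main() {
    let mut f = Facade { primary: Mem(Vec::new()), shadow: Mem(Vec::new()) };
    f.execute("INSERT hello").unwrap();
    println!("{:?}", f.execute("SELECT").unwrap());
}
```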
Concurrent writes already working is a reasonable definition of success. It's why I'm using it.
pseudohadamard 1 days ago [-]
How common is this as a use case though? I wouldn't normally expect to see "SQLite" and "central system DB" in the same sentence. SQL Server, Postgres, 'Orable, MySQL, DB2, but not really something targeted for small-footprint lightweight use.
adamzwasserman 15 hours ago [-]
SQLite is battle-tested in production at massive scale. Discord handles millions of concurrent users with SQLite clusters. WhatsApp served 900 million users before Facebook acquired them, running on SQLite for message storage. The "lightweight" perception is outdated.
Who knows, maybe 5 years from now, you will say to yourself: that crazy wasn't so crazy after all!
fragmede 2 days ago [-]
How do you want to define success for this project relative to SQLite? Because they already have concurrent writes working for their rust implementation. It's currently marked experimental, but it does already work. And for a lot of people, that's all they want or need.
> IMHO breaking free of SQLite's proprietary test suite is a bigger driver than C vs Rust.
I don't understand this claim, given the breadth and depth of SQLite's public domain TCL Tests. Can someone explain to me how this isn't pure FUD?
"There are 51445 distinct test cases, but many of the test cases are parameterized and run multiple times (with different parameters) so that on a full test run millions of separate tests are performed." - https://sqlite.org/testing.html
lmm 2 days ago [-]
The test suite that the actual SQLite developers use to develop SQLite is not open-source. 51445 open-source test cases is a big number but doesn't really mean much, particularly given that evidently the SQLite developers themselves don't consider it enough to provide adequate coverage.
adamzwasserman 2 days ago [-]
SQLite's test suite is infamously gigantic. It has two parts: the public TCL tests you're referencing, and a much larger proprietary test suite that's 100x bigger and covers all the edge cases that actually matter in production. The public tests are tiny compared to what SQLite actually runs internally.
wredcoll 2 days ago [-]
...why does sqlite have proprietary tests??
structural 2 days ago [-]
It allows the code to be fully public domain, so you can use it anywhere, while very strongly discouraging random people from forking it, patching it, etc. Even still, the tests that are most applicable to ensuring that SQLite has been built correctly on a new compiler/architecture/environment are made open source (this is great!) while those that ensure that SQLite has been implemented correctly are proprietary (you only need these if you wanted to extend SQLite's functionality to do something different).
This allows for a business model for the authors to provide contracted support for the product, and keeping SQLite as a product/brand without having to compete with an army of consultants wanting to compete and make money off of their product, startups wanting to fork it, rename it, and sell it to you, etc.
It's pretty smart and has, for a quarter century, resulted in a high quality piece of software that is sustainable to produce and maintain.
ragall 2 days ago [-]
So that enhancements are only practical by hiring the original team.
einsteinx2 3 days ago [-]
The irony is if they only had the public domain tests, no one would complain even though it would mean the exact same number of open source tests.
dullcrisp 2 days ago [-]
That’s like if I gave you half the dictionary and then said it’s ironic that if there really weren’t any letters after “M” you wouldn’t be complaining.
Ar-Curunir 2 days ago [-]
There are also non-public tests.
digitalPhonix 3 days ago [-]
The next bullet point:
> 2. The TH3 test harness is a set of proprietary tests…
CharlesW 3 days ago [-]
Of course, but how does that make the allegation not FUD?
digitalPhonix 2 days ago [-]
I’m confused, the statement is that SQLite has a proprietary test suite? It does. Where’s the FUD?
Turso tried to add features to SQLite in libSQL but there were bugs/divergent behaviour that they couldn’t reconcile without the full test suite.
thisislife2 3 days ago [-]
In other words, they are creating their own database and hitching on to the SQLite brand to market it. (That's fine though).
dlisboa 3 days ago [-]
I think it's fair to say they tried using SQLite but apparently had to bail out. Their use case is a distributed DBaaS with local-first semantics, they started out with SQLite and only now seem to be pivoting to "SQLite-compatible".
Building off of that into a SQLite-compatible DB doesn't seem to me as trying to piggyback on the brand. They have no other option as their product was SQLite to begin with.
IshKebab 3 days ago [-]
No that's completely incorrect. It's compatible with SQLite, not just in the same spirit:
> SQLite compatibility for SQL dialect, file formats, and the C API
That doesn't seem very fair. It's still beta and clearly far from finished. And they do call out the compromises - they have a whole page about how they are not yet fully compatible:
I don't think that's fine at all, it's quite a shitty thing to do honestly and I'm not surprised it's a VC-backed company doing it.
fragmede 2 days ago [-]
How would you do it then?
shimman 2 days ago [-]
I probably wouldn't ride the coattails of another open source project that provides hundreds of billions in value for free annually to anyone on this Earth in order to make a quick buck. IDK, I have morals, and it seems if you want VC funding you must lack them.
It's no different than the hucksters that take public domain books and slut them up in order to make some coin peddling smut.
groundzeros2015 3 days ago [-]
Without the test suite, isn’t it even more likely to have stability problems?
9rx 3 days ago [-]
Not likely. The alternative was for them to modify SQLite without the test suite and no obvious indication of what they would need to do to try to fill in the gaps. Modifying SQLite with its full test suite would be the best choice, of course, but one that is apparently[1] not on the table for them. Since they have to reimagine the test suite either way, they believe they can do a better job if the tests are written alongside a new codebase.
And I expect they are right. Trying to test a codebase after the fact never goes well.
[1] With the kind of investment backing they have you'd think they'd be able to reach some kind of licensing deal, but who knows.
dizhn 2 days ago [-]
I don't get this. In their own rust implementation they have to write and use their own tests, and they still don't have access to the proprietary sqlite tests. So their implementation will necessarily be whatever they implement + whatever passes their tests. Same as it would be if they forked sqlite in C. (Plus they would have the open source tests). Am I missing something?
9rx 2 days ago [-]
You are missing that HN accounts needlessly overthink everything, perhaps?
Otherwise, I doubt it. They have to write the tests again no matter what. Given that, there is no downside to reimplementing it while they are at it. All while there is a big upside to doing that: Trying to test something after the implementation is already written never ends well.
That does not guarantee that their approach will succeed. It is a hard problem no matter how you slice it. But trying to reverse-engineer the tests for the C version, now that all the knowledge of what went into them in the first place is lost, is all but guaranteed to fail. Rewriting the implementation and tests in parallel increases the chances of success.
dlisboa 3 days ago [-]
Maybe. It's hard to know what kind of issues that test suite covers. If memory safety is the main source of instability for the C implementation then the Rust implementation won't be too affected without the test suite. Same if it focuses a lot on compatibility with niche embedded platforms and different OSes, which Turso won't care to lose.
"Stability" is a word that means different things for different use cases.
groundzeros2015 3 days ago [-]
Coverage is described on the SQLite website
0x457 3 days ago [-]
Turso has its own test suite that's in the repo.
groundzeros2015 3 days ago [-]
but the other one has decades of engineering effort and is based on real world problems
0x457 2 days ago [-]
But the other one is not available to most and SQLite itself is "open-source" not "open-contributions" so extending SQLite is pretty much impossible at scale:
- no way to merge upstream
- no way to run the full test suite to be sure everything is tiptop
groundzeros2015 9 hours ago [-]
Indeed. That is a problem for developing SQLite without having an agreement with the developers. That is not a problem for me choosing to use SQLite in my stack.
SpecialistK 3 days ago [-]
Of all the projects which may benefit from a rewrite or re-imagining in a memory-safe language, I'm really puzzled why it's heavily-tested, near-universally-deployed software such as sudo (use OpenBSD's doas instead?), the coreutils, and sqlite.
dizhn 2 days ago [-]
Doas supports a subset of sudo functionality by design. Your comment is exactly what I said when I first heard about the rust linux utils thing. The best they can do is have new bugs.
Havoc 2 days ago [-]
I don't think there is a big picture plan. It requires that someone care both about rust and the thing
...which is a pretty arbitrary combination
tracker1 3 days ago [-]
I definitely wouldn't be surprised by bugs and/or compatibility issues over time. Especially in the near term. I'm mixed, but somewhat enthusiastic on Turso's efforts to create client-server options and replication.
In the past I've reached for FirebirdSQL when I needed local + external databases and wanted to limit the technology spread... In the use case, as long as transactions synched up even once a week it was enough for the disparate remote connections/systems. I'm honestly surprised it isn't used more. That said, SQLite is more universal and lighter overall.
adamzwasserman 3 days ago [-]
Building a production app on Turso now. No bugs or compatibility issues so far. The sqlite API isn't fully implemented yet, so I wrote a declarative facade that backfills the missing implementations and parallels writes to both Turso and native sqlite: gives me integrity checking and fallback while the implementation matures
zbentley 3 days ago [-]
Isn’t the rust rewrite deployed as part of some fairly significant Linux distros these days?
That’s hearsay that I haven’t dug into, so I may well be wrong.
ktimespi 3 days ago [-]
Ubuntu is deploying it in a non-LTS release, and they're trying to get the bugs out of the way is what I'm hearing
It looks like some parts are open source and others not. Does anyone know more about the backstory? (It looks like one is a custom program that generates fuzz tests. Do they sell it to other SQL engines?)
Rendello 3 days ago [-]
The CoRecursive episode with SQLite creator D. Richard Hipp goes through it. I've linked to the part of the transcript that covers it, the key quote being:
> We still maintain the first one, the TCL tests. They’re still maintained. They’re still out there in the public. They’re part of the source tree. Anybody can download the source code and run my test and run all those. They don’t provide 100% test coverage but they do test all the features very thoroughly. The 100% MCD tests, that’s called TH3. That’s proprietary. I had the idea that we would sell those tests to avionics manufacturers and make money that way. We’ve sold exactly zero copies of that so that didn’t really work out. It did work out really well for us in that it keeps our product really solid and it enables us to turn around new features and new bug fixes very fast.
but if you want the compliance paperwork, you pay for it
shimman 2 days ago [-]
Yeah but what about the poor VC startups that want to rat fuck the commons? Why won't anyone think of them?
dodomodo 3 days ago [-]
Useful if you need to validate that the database runs properly on your embedded platform, possibly with its custom io and sync primitives.
rantingdemon 3 days ago [-]
This is very shallow for a supposed deep dive.
I'm not ready to entertain Turso as an alternative to something that is as battle tested as Sqlite.
floren 3 days ago [-]
> This is very shallow for a supposed deep dive.
I think it's time for a new law of headlines: anything labeled a "deep dive" isn't.
maxbond 3 days ago [-]
My law of headlines is, "don't take them too seriously, don't develop too many expectations about the article, skim the article (or the comments) to know what it is about and whether it is worth your time".
HelloNurse 2 days ago [-]
Taking feature lists and plans at face value is offensively shallow; the typical Rust fan arrogance pattern can be an explanation (if the Rust rewrite is "better", it doesn't have to be compatible with the rest of the world who uses the actual C SQLite).
geodel 3 days ago [-]
Perhaps these are for deep divers who discuss Apple Watch deep-diving features rather than actual deep diving.
attractivechaos 2 days ago [-]
Yeah, I was expecting performance benchmarks, detailed feature comparisons, analysis of binary/extension compatibility, etc.
ndiddy 3 days ago [-]
The thing that worries me the most about Turso is that rather than the small, stable team running SQLite, Turso is a VC backed startup trying to capitalize on the AI boom. I can easily see how SQLite's development is sustainable, but not Turso's. They're currently trying to grow their userbase as quickly as possible with their free open source offering, but when they have investors breathing down their necks asking about how they're going to get 100x returns I'm not sure how long that'll last. VCs generally expect companies they invest in to grow to $100 million in revenue in 5-10 years. If your use of their technology doesn't help them get there, you should expect to be rugpulled at some point.
hu3 3 days ago [-]
I too am wary of VC incentives but:
1) It's MIT licensed. Including the test suite which is something lacking in SQLite:
They do have a test suite that's private, which I understand to be more about testing for different hardware; they sell access to that to companies that want SQLite to work on their custom embedded hardware, details here: https://sqlite.org/th3.html
> SQLite Test Harness #3 (hereafter "TH3") is one of three test harnesses used for testing SQLite.
MobiusHorizons 3 days ago [-]
> 2) They have a paid cloud option to drive income from:
I’ve been confused by this for a while. What is it competing with? Surely not SQLite; being client-server defeats all the latency benefits. I feel it would be considered as an alternative to cloud Postgres offerings, and it seems unlikely they could compete on features. Genuinely curious, but is there any sensible use case for this product, or do they just catch people who read SQLite was good on Hacker News but didn’t understand any of the why?
3eb7988a1663 2 days ago [-]
The thing that cooks my noodle: who are these insane people who want to beta test a new database? Yes, all databases could have world-destroying data loss/corruption, but I have significantly more confidence in a player that has been on the market for many years.
IshKebab 3 days ago [-]
The article talks about this. If you have a project that starts small and an in-process DB is fine, but you end up needing to scale up then you don't have to switch DBs.
lelanthran 2 days ago [-]
That's a valid, but very tiny, use case.
After all, if you can tell in advance that you might hit the limits of SQLite, you'd simply start with postgresql on day one, not with a new unproven DB vendor whose product hasn't been through the trial by fire that existing DBs have.
gizzlon 2 days ago [-]
So the use case is: I started with SQLite, but now I have too many terabytes to fit on one server? That seems.. very uncommon.
And since moving it out of process, or even to another network, is going to make it much, much slower, you're going to need a rewrite anyway.
IshKebab 2 days ago [-]
I think it's more like you started with SQLite and now you need concurrent writes, replication, sharding, etc. etc. - all the stuff that the "big" databases like PostgreSQL provide.
MobiusHorizons 2 days ago [-]
Thanks. Serves me right for commenting without reading the article.
lelanthran 2 days ago [-]
> Genuinely curious, but is there any sensible use case for this product
Looking at the comments each time this product comes up, Rust is apparently the selling point for many, including the dev team themselves.
g947o 3 days ago [-]
Elasticsearch was licensed under Apache 2.0 until they switched.
That says enough.
tcfhgj 3 days ago [-]
to AGPL3?
cozzyd 2 days ago [-]
Are there any VC-funded open source projects that didn't attempt rug pulls? (There must be, right?)
imiric 2 days ago [-]
Grafana has been a pretty good steward of OSS. Whether you like their products or not, they've been able to balance the OSS and commercial offerings fairly well.
cozzyd 2 days ago [-]
Yeah that's something I actually use quite a bit!
sophacles 2 days ago [-]
Whether or not they attempt rug pulls, or other slimy measures to extort money from entrenched users... these VC-backed OSS startups have given us some nice things. People fork the permissively licensed code when the scumbuckets get too smelly, and the company goes on to irrelevancy while people use the actually-OSS version.
curuinor 2 days ago [-]
metabase.com, but metabase is intended for business analyst types and is AGPL, with shenanigans for embedding and an enterprise edition thing
EdwardDiego 2 days ago [-]
Man, I've seen the SQL Metabase emits, it's not great. Like, doing a massive join across 10 tables and selecting all the columns from all the tables - to only return the average of one column from one table.
iamrobertismo 3 days ago [-]
The MIT licensing makes this even less trustworthy. I can imagine a major cloud or fly.io just proprietary-forking them as a service, as cloud providers have done for years.
bigstrat2003 3 days ago [-]
So what? The MIT licensed original will still be there, you don't lose out on anything if that happens. And also, SQLite itself is public domain, so by your logic we shouldn't trust SQLite either. Which is crazy.
iamrobertismo 3 days ago [-]
I don't understand your reply here. Database startups have always had the consistent issue of cloud providers providing managed solutions without contributing back. It is why many moved to or use the AGPLv3 and why there was the whole SSPL controversy in the first place. Running a successful open source database startup is not trivial. None of this applies to SQLite.
MobiusHorizons 2 days ago [-]
I think the point is that that sounds like a potential problem for turso, but it’s not really a problem for everyone else unless some sort of vendor lockin would prevent using open source alternatives. But given the strong compatibility story with the SQLite file format implied already that just doesn’t seem credible.
sam_lowry_ 3 days ago [-]
> test suite which is something lacking in SQLite
You must be kidding. Last time I checked, sqlite was mostly extensive test suites.
jzebedee 3 days ago [-]
It's covered in the article. The full SQLite test suite isn't open source, so you (the third party) don't have the same confidence in your modifications as the SQLite team does.
j16sdiz 2 days ago [-]
1. Only if you modify it. There is a free test suite, and you can license the non-free test suite.
2. Compared to SQLite's test suite, the tests in Turso are just a kids' toy.
HAMSHAMA 3 days ago [-]
I think they meant that the test suite is not open source. You’re right that it is extensive.
koverstreet 3 days ago [-]
Yeah, that's not a good environment for this kind of engineering. You need long term stability for a project like this, slow incremental development with a long term plan, and that's antithetical to VC culture.
On the other hand, Rust code and the culture of writing Rust leads to far more modularity, so maybe some useful stuff will come of it even if the startup fails.
I have been excited to see real work on databases in Rust, there are massive opportunities there.
saidnooneever 2 days ago [-]
Where do you see these opportunities? I didn't see a lot of issues personally that Rust would handle better than C in this domain. Care to elaborate? (genuinely curious!)
Personally I see more benefit in Rust for, e.g., ORMs and layers that talk to the database. (Those are often useful to have in such an ecosystem so you can use the database safely and sanely, like python or so but then, you know, fast and secure.)
koakuma-chan 2 days ago [-]
You need to be crazy to use an ORM. I personally think that even SQL is redundant. I would like to see a high quality embedded database written in Rust.
koverstreet 2 days ago [-]
Yep, exactly this.
It's painful having to switch to another language to talk to the database, and ORMs are the worst kind of leaky abstractions. With Rust, we've finally got a systems language that's expressive enough to do a really good job with the API to an embedded database.
The only thing that's really missing is language support for properly ergonomic Cap'n Proto support - Swift has stuff in this vein already. That'd mean serializable ~native types with no serialization/deserialization overhead, and it's applicable to a lot of things; Swift developed the support so they could do proper dynamically linked libraries (including handling version skew).
If I might plug my project yet again (as if I don't do that enough :) - bcachefs has a high quality embedded database at its core, and one of the dreams has always been to turn that into a real general purpose database. Much of the remaining stuff for making it truly general purpose is stuff that we're going to want sooner or later for the filesystem anyways, and while it's all C today I've done a ton of work on refactoring and modernizing the codebase to hopefully make a Rust conversion tractable, in the not too distant future.
(Basically, with the cleanup attribute in modern C, you can do pseudo RAII that's good enough to eliminate goto error handling in most code. That's been the big obstacle to transitioning a C codebase to be "close enough" to what the Rust version would look like to make the conversion mostly syntactic, not a rewrite, and that work is mostly done in bcachefs).
The database project is very pie in the sky, but if the project gets big enough (it's been growing, slowly but steadily), that's the dream. One of them, anyways.
A big obstacle towards codebases that we can continue to understand, maintain and continue to improve over the next 100 years is giant monorepos, and anything we can do to split giant monorepos apart into smaller, cleaner reusable components is pure gold.
speed_spread 2 days ago [-]
I vaguely remember a crate doing a RocksDB kind of thing?
g947o 3 days ago [-]
I was excited about this for a second until seeing your comment.
Unless you are Amazon which has the resources to maintain a fork (which is questionable by itself with all the layoffs), you probably shouldn't touch this.
CodingJeebus 3 days ago [-]
Completely agree, I'm looking at pretty much all software this way nowadays.
We've all been around long enough to know that "free" VC-backed software always means "free... until it's in our interest to charge for it". And yet users will still complain about the rugpull in 2026, no matter how many times they've been through it. "Fool me once, shame on you"
akagusu 2 days ago [-]
I've lost the count of how many times people were fooled by VC backed companies in this forum.
mhh__ 3 days ago [-]
Some lessons about the modern distaste for copyleft here IMO
pelasaco 2 days ago [-]
So the idea is to rewrite it in Rust and drop SQLite? I mean, maybe that’s just how things evolve. But it feels like every project is only a few vibe-coding sessions away from getting rewritten in $LANGUAGE. And I can’t help wondering whether that’s hurting a sustainable open-source ecosystem.
SQLite is a good example: the author built a small ecosystem around it and managed to make a living from open source. Thanks to the author's effort, we have a small surface area, extreme stability, and a relentless focus on correctness.
If we keep rewarding novelty over stewardship, we’ll lose more “SQLite-like” projects—stable cores that entire ecosystems depend on.
This reflects my experience. I also experienced very bad memory leaks when using libSQL for large write jobs. Haven't tried tursodatabase yet, but my impression from the confusing number of packages in the Turso ecosystem is that it's not ready for primetime yet.
sauercrowd 3 days ago [-]
> ... most of which can be fixed by a rewrite in Rust
huh? That is clearly not the case. Memory bugs - sure. But not having a public test suite, not accepting public contributions, weakly typed columns and the lack of concurrency have nothing to do with the language. They're governance decisions, that's it.
>I see this situation through the prism of the innovator's dilemma: the incumbent is not willing to sacrifice a part of its market to evolve, so we need a new player to come and innovate.
I don't think the innovator's dilemma quite applies in the open source world. Projects are tools, that's it. Preserving a project for the sake of preserving it isn't a good idea.
If people need to run a sqlite db in these exotic places, shedding it means someone else has to build their own tool now that can do it. Sqlite has decided that they care about that, so they support it, so they can't use rust. Seems sound.
Projects coming and going is a good thing in open source, not a bug.
jayd16 3 days ago [-]
Maybe they're saying a rewrite part solves the governance issues not the rust part.
overfeed 3 days ago [-]
That'd be an interesting attitude towards governance for a VC-funded startup with -- I presume -- VC-controlled board seats.
rendaw 3 days ago [-]
I know I've seen multiple bug reports in open source projects with "well we can't fix this because it'd break things for existing users." Maybe it's a bad thing, but why do you think this doesn't happen?
jancsika 3 days ago [-]
> lack of concurrency has nothing to do with the language
That's an extraordinary claim for any C codebase.
Unless it ships with code enabling concurrency that is commented out, we should assume that "concurrency in C ain't easy" was a factor in that design choice.
w-m 3 days ago [-]
At the current rate of progress I'm wondering how long it will take for llm agents to be able to rewrite/translate complete projects into another language. SQLite may not be the best candidate, due to the hidden test suite. But CPython or Clang or binutils or...
The RIIR-benchmark: rewrite CPython in Rust, pass the complete test suite, no performance regressions, $100 budget. How far away are we there, a couple months? A few years? Or is it a completely ill-posed problem, due to the test suite being tied to the implementation language?
bathtub365 3 days ago [-]
What’s the point?
w-m 3 days ago [-]
A clearly defined/testable long-horizon task: demonstrating the capability of planning and executing projects that overrun current LLMs' context windows by several orders of magnitude.
Single-issue coding benchmarks are getting saturated, and I'm wondering when we'll get to a point where coding agents will be able to tackle some long-running projects. Greenfield projects are hard to benchmark. So creating code or porting code from one language to another for an established project with a good test suite should make for an interesting benchmark, no?
vindin 15 hours ago [-]
Why? Just because it’s in rust doesn’t make it a better product
9rx 3 days ago [-]
> A database that can scale from in-process to networked is badly needed
From what I’ve read there’s a pretty sizable performance gap between SQLite and pglite (with SQLite being much faster).
I’m excited to see things improve though. Having a more traditional database, with more features and less historical weirdness on the client would be really cool.
Did you actually click the link? pglite aims to be embeddable just like sqlite.
roywiggins 3 days ago [-]
pglite runs in wasm so it should be possible to embed it where you want, like sqlite?
duped 3 days ago [-]
Why would I want wasm for an embedded database? It's not a feature, quite an anti-feature frankly.
edit: it looks like pglite is only useful for web apps
9rx 3 days ago [-]
> it looks like pglite is only useful for web apps
Where, other than web apps (herein meaning network-attached database servers that, more often than not, though not strictly so, use HTTP as the transport layer), is there meaningful risk of bumping up against SQLite write contention? If your mobile/desktop app has that problem, it is much more likely that you have a fundamental design issue than a scaling problem.
duped 3 days ago [-]
I'm not sure I follow what that has to do with embedding a database into your application
9rx 3 days ago [-]
In a networked environment, which includes the web, it is typical to expose your database over the network. In the olden days clients started speaking SQL over the network, but there are a number of pitfalls to this approach. SQL was designed for use on mainframes, which, understandably, does not translate to the constraints of the network very well.
To alleviate the pressure of those pitfalls, we started adding middle databases (oft called web apps, API services, REST services, etc.) that proxied the database through protocols better suited to the realities and limitations of the network. Clients were then updated to use the middle database, allowing the hacks required to make SQL usable over the network to be centralized in one spot, greatly reducing the management burden.
But having two database servers is pretty silly when you think about it. Especially when the "backend" database's protocol isn't suitable for the network[1]. Enter the realization that if you use something like SQLite, you don't need another, separate database server. You can have one database server[2] that speaks a network-friendly API. Except SQLite itself has a number of limitations that make it poorly suited to being the backing engine of your network-first DBMS.
That is what the article is about — Pointing out those limitations, and how Turso plans to overcome them. If your use case isn't "web app", SQLite is already going to do the job just fine.
[1] After all, if it were suited for networks, you wouldn't need the middle service. Clients would already be talking to that database directly instead.
[2] As in one logical database server. In practice, you may use a cluster of servers to provide that logical representation.
duped 2 days ago [-]
Are you an AI?
evertheylen 3 days ago [-]
Not really, I used it to develop against a "real" postgres database for a node backend app. It worked fine and made it pretty easy to spin up a development/CI environment anywhere you want. Only when inserting large amounts of data do you start to notice it is slower than native postgres. I had to stop using it because we required the postgis extension (although there is some movement on that front!).
groundzeros2015 3 days ago [-]
You don’t want another server but you do want networking?
3 days ago [-]
tln 3 days ago [-]
Where is the "networked mode" in Turso? Turso's readme and docs do not mention anything like this
I hate to be negative, but where is the deep dive? This is a shallow overview of Turso's features and some of the motivation behind it. Am I missing something?
eviks 3 days ago [-]
It's longer than a tweet
IshKebab 3 days ago [-]
Good article though it kind of stopped just when I thought the deep dive was about to start.
ezekiel68 2 days ago [-]
Could you be any _less_ specific in your criticism?
TeaVMFan 3 days ago [-]
For the Java ecosystem, H2 fills this gap nicely, easily handling both in-memory and remote JDBC access:
I'm working on a Django app. This would make production deployment a bit easier.
Also sad that the test suite isn't open source. Would help drive development of the new DB...
js-j 2 days ago [-]
I'm pretty sceptical, to say the least
cracki 2 days ago [-]
Stop rewriting everything in Rust.
pelasaco 2 days ago [-]
vibe-coding everything in Rust, you mean.
3 days ago [-]
sgammon 3 days ago [-]
Wow what a terrible and misleading article
yunohn 3 days ago [-]
What a breath of fresh air to read a blog not written by AI, with actual human learnings and opinions. Thanks for the write up!
pelasaco 2 days ago [-]
a blog not written by AI, about a project written in AI. It is just matter of time. We just need AI to read the article, and then the full circle is complete.
fragmede 2 days ago [-]
Was this written by an LLM?
yunohn 2 days ago [-]
Nope, just a human encouraging any attempt by others to publish things without AI assistance.
Harsh to see HN disapproved of my positivity, lesson learned I guess…
gethly 3 days ago [-]
let's play a little game known as "count the unsafe"
Turso is an in-process SQL database, compatible with SQLite - https://news.ycombinator.com/item?id=46677583 - Jan 2026 (102 comments)
Beyond the SQLite single-writer limitation with concurrent writes - https://news.ycombinator.com/item?id=45508462 - Oct 2025 (70 comments)
An adventure in writing compatible systems - https://news.ycombinator.com/item?id=45059888 - Aug 2025 (12 comments)
Introducing the first alpha of Turso: The next evolution of SQLite - https://news.ycombinator.com/item?id=44433997 - July 2025 (11 comments)
Working on databases from prison - https://news.ycombinator.com/item?id=44288937 - June 2025 (534 comments)
Turso SQLite Offline Sync Public Beta - https://news.ycombinator.com/item?id=43535943 - March 2025 (67 comments)
We will rewrite SQLite. And we are going all-in - https://news.ycombinator.com/item?id=42781161 - Jan 2025 (3 comments)
Limbo: A complete rewrite of SQLite in Rust - https://news.ycombinator.com/item?id=42378843 - Dec 2024 (232 comments)
https://sqlite.org/whyc.html
> - Rust needs to mature a little more, stop changing so fast, and move further toward being old and boring.
Rust moves at a pretty glacial pace these days. Slower than C++ for sure. There haven't been any big, significant changes to the language since async. Code that compiles today should compile indefinitely. (And the rust compiler authors check this on every release, by recompiling basically everything in crates.io to make sure.)
> - Rust needs to demonstrate that it can be used to create general-purpose libraries that are callable from all other programming languages.
Rust matches C in this regard. You can import & export C functions from rust very easily. The consumer of the foreign function interface has no idea they're calling rust and not C.
> - Rust needs to demonstrate that it can produce object code that works on obscure embedded devices, including devices that lack an operating system.
Rust works pretty well on raw / embedded hardware via #![no_std]. There are a few obscure architectures supported by gcc but not llvm (and by extension rust). But it generally works great. I'd love to know what the real blocker platforms are (if any).
> - Rust needs to pick up the necessary tooling that enables one to do 100% branch coverage testing of the compiled binaries.
Uh, I think this is possible today? RustRover (IntelliJ) can certainly produce coverage reports. This doesn't feel out of reach.
> - Rust needs a mechanism to recover gracefully from OOM errors.
True. You can override the global allocator for a program and use that to detect OOM. But recovering from OOM in general is tricky. I personally wish rust's handling of allocators looked more like zig.
> - Rust needs to demonstrate that it can do the kinds of work that C does in SQLite without a significant speed penalty.
Rust and C are pretty much even when it comes to performance. Rust binaries are often a bit bigger though.
> If you are a "rustacean" and feel that Rust already meets the preconditions listed above, and that SQLite should be recoded in Rust, then you are welcomed and encouraged to contact the SQLite developers privately and argue your case.
It seems like the criteria are less things the SQLite developers claim Rust can't do, and more non-negotiable properties that need to be considered before even bringing the idea of a Rust version to the team.
I think it is at least arguable that Rust does not meet the requirements. And they did explicitly invite private argument if you feel differently.
0: https://web.archive.org/web/20190423143433/https://sqlite.or...
>> This page was last updated on 2025-05-09 15:56:17Z <<
> I think it is at least arguable that Rust does not meet the requirements
Absolutely. The lack of clean OOM handling alone might be a dealbreaker for sqlite. And I suspect sqlite builds for some weird platforms that rustc doesn't support.
But I find it pretty weird reading comments about how rust needs to prove it performs similarly to C. Benchmarks are just a google search away, folks.
> And they did explicitly invite private argument if you feel differently.
Never.
It's not up to me what language sqlite is written in. Emailing the sqlite authors to tell them to rewrite their code in a different language would be incredibly rude. They can write sqlite in whatever language they want. My only real choice is whether or not I want to use their code.
See <https://sqlite.org/testing.html#statement_versus_branch_cove...>. Does Rustrover produce branch coverage reports?
[1] https://github.com/taiki-e/cargo-llvm-cov
[2] https://github.com/rust-lang/rust/issues/124118
See the features and roadmap at https://github.com/tursodatabase/turso
I needed SQLite as a central system DB but couldn't live with single-writer. So I built a facade that can target SQLite, Postgres, or Turso's Rust rewrite through one API. The useful part: mirroring. The facade writes to two backends simultaneously so I can diff SQLite vs Turso behavior and catch divergences before production. When something differs, I either file upstream or add an equalizing shim. Concurrent writes already working is a reasonable definition of success. It's why I'm using it.
Who knows, maybe 5 years from now, you will say to yourself: that crazy idea wasn't so crazy after all!
https://turso.tech/blog/beyond-the-single-writer-limitation-...
I don't understand this claim, given the breadth and depth of SQLite's public domain TCL Tests. Can someone explain to me how this isn't pure FUD?
"There are 51445 distinct test cases, but many of the test cases are parameterized and run multiple times (with different parameters) so that on a full test run millions of separate tests are performed." - https://sqlite.org/testing.html
This allows for a business model where the authors provide contracted support for the product, keeping SQLite as a product/brand without having to compete with an army of consultants wanting to make money off of their product, startups wanting to fork it, rename it, and sell it to you, etc.
It's pretty smart and has, for a quarter century, resulted in a high quality piece of software that is sustainable to produce and maintain.
> 2. The TH3 test harness is a set of proprietary tests…
Turso tried to add features to SQLite in libsqlite but there were bugs/divergent behaviour that they couldn’t reconcile without the full test suite.
Building off of that into a SQLite-compatible DB doesn't seem to me as trying to piggyback on the brand. They have no other option as their product was SQLite to begin with.
> SQLite compatibility for SQL dialect, file formats, and the C API
https://github.com/tursodatabase/turso/blob/main/COMPAT.md
It's no different than the hucksters that take public domain books and slut them up in order to make some coin peddling smut.
And I expect they are right. Trying to test a codebase after the fact never goes well.
[1] With the kind of investment backing they have you'd think they'd be able to reach some kind of licensing deal, but who knows.
Otherwise, I doubt it. They have to write the tests again no matter what. Given that, there is no downside to reimplementing it while they are at it. All while there is a big upside to doing that: Trying to test something after the implementation is already written never ends well.
That does not guarantee that their approach will succeed. It is a hard problem no matter how you slice it. But trying to reverse engineer the tests for the C version, now that all knowledge of what went into it in the first place is lost, is all but guaranteed to fail. Testing after the fact never ends well. Rewriting the implementation and tests in parallel increases the chances of success.
"Stability" is a word that means different things for different use cases.
- no way to merge upstream
- no way to run the full test suite to be sure everything is tiptop
...which is a pretty arbitrary combination
In the past I've reached for FirebirdSQL when I needed local + external databases and wanted to limit the technology spread... In the use case, as long as transactions synched up even once a week it was enough for the disparate remote connections/systems. I'm honestly surprised it isn't used more. That said, SQLite is more universal and lighter overall.
That’s hearsay that I haven’t dug into, so I may well be wrong.
It looks like some parts are open source and others not. Does anyone know more about the backstory? (It looks like one is a custom program that generates fuzz tests. Do they sell it to other SQL engines?)
> We still maintain the first one, the TCL tests. They’re still maintained. They’re still out there in the public. They’re part of the source tree. Anybody can download the source code and run my test and run all those. They don’t provide 100% test coverage but they do test all the features very thoroughly. The 100% MCD tests, that’s called TH3. That’s proprietary. I had the idea that we would sell those tests to avionics manufacturers and make money that way. We’ve sold exactly zero copies of that so that didn’t really work out. It did work out really well for us in that it keeps our product really solid and it enables us to turn around new features and new bug fixes very fast.
https://corecursive.com/066-sqlite-with-richard-hipp/#testin...
it's free
but if you want the compliance paperwork, you pay for it
I'm not ready to entertain Turso as an alternative to something that is as battle tested as Sqlite.
I think it's time for a new law of headlines: anything labeled a "deep dive" isn't.
1) It's MIT licensed. Including the test suite which is something lacking in SQLite:
https://github.com/tursodatabase/turso
2) They have a paid cloud option to drive income from:
https://turso.tech/pricing
That's not entirely true. SQLite has a TON of tests that are part of the public domain project: https://github.com/sqlite/sqlite/tree/master/test
They do have a test suite that's private which I understand to be more about testing for different hardware - they sell access to that for companies that want SQLite to work on their custom embedded hardware, details here: https://sqlite.org/th3.html
> SQLite Test Harness #3 (hereafter "TH3") is one of three test harnesses used for testing SQLite.
I’ve been confused by this for a while. What is it competing with? Surely not SQLite; being client-server defeats all the latency benefits. I feel it would be considered as an alternative to cloud Postgres offerings, and it seems unlikely they could compete on features. Genuinely curious, but is there any sensible use case for this product, or do they just catch people who read on Hacker News that SQLite was good, but didn't understand any of the why?
After all, if you can tell in advance that you might hit the limits of SQLite, you'd simply start with postgresql on day one, not with a new unproven DB vendor whose product hasn't been through the trial by fire that existing DBs have.
And since moving it out of process, and even to another network, is going to make it much much much slower. You're going to need a rewrite anyway
Looking at the comments each time this product comes up, Rust is apparently the selling point for many, including the dev team themselves.
That says enough.
You must be kidding. Last time I checked, sqlite was mostly extensive test suites.
2. Compared to the tests in SQLite, the tests in Turso are just a kid's toy.
On the other hand, Rust code and the culture of writing Rust leads to far more modularity, so maybe some useful stuff will come of it even if the startup fails.
I have been excited to see real work on databases in Rust, there are massive opportunities there.
Personally I see more benefit in Rust as, for example, an ORM and the layers that talk to the database (those are often useful to have in such an ecosystem so you can use the database safely and sanely, like Python but, you know, fast and secure).
It's painful having to switch to another language to talk to the database, and ORMs are the worst kind of leaky abstractions. With Rust, we've finally got a systems language that's expressive enough to do a really good job with the API to an embedded database.
The only thing that's really missing is language support for properly ergonomic Cap'n Proto support - Swift has stuff in this vein already. That'd mean serializable ~native types with no serialization/deserialization overhead, and it's applicable to a lot of things; Swift developed the support so they could do proper dynamically linked libraries (including handling version skew).
If I might plug my project yet again (as if I don't do that enough :) - bcachefs has a high quality embedded database at its core, and one of the dreams has always been to turn that into a real general purpose database. Much of the remaining stuff for making it truly general purpose is stuff that we're going to want sooner or later for the filesystem anyways, and while it's all C today I've done a ton of work on refactoring and modernizing the codebase to hopefully make a Rust conversion tractable, in the not too distant future.
(Basically, with the cleanup attribute in modern C, you can do pseudo RAII that's good enough to eliminate goto error handling in most code. That's been the big obstacle to transitioning a C codebase to be "close enough" to what the Rust version would look like to make the conversion mostly syntactic, not a rewrite, and that work is mostly done in bcachefs).
The database project is very pie in the sky, but if the project gets big enough (it's been growing, slowly but steadily), that's the dream. One of them, anyways.
Why not Postgres? https://pglite.dev
Edit: https://pglite.dev/benchmarks actually not looking too bad.. I might have something new to try!
https://frequal.com/java/TheBestDatabase.html
https://github.com/search?q=repo%3Atursodatabase%2Fturso%20u...