Normally, I'm not a fan of putting the date on a post. However, in this case, the fact that Stonebraker's article was published in 2010 makes it more impressive given the developments over the last 15 years - in which we've relearned the value of consistency (and the fact that it can scale more than people were imagining).
As someone not very deep into databases, these papers were fun to read.
nine_k 1 day ago [-]
In short: eventual consistency is insufficient in many real-world error scenarios which are outside the CAP theorem. Go for full consistency where possible, which covers more practical cases than normally assumed.
candiddevmike 1 day ago [-]
But full consistency isn't web scale! There are a lot of times where full consistency with some kind of cache in front of it has the same client quirks as eventual consistency, though.
As always, the answer is "it depends".
sgarland 1 day ago [-]
Did you unironically use the term “web scale” in reference to a database?
I think we try too hard to solve problems that we do not even have yet. It is much better to build a simple system that is correct than a messy one that never goes down. I see people writing bad code because they are afraid of the network breaking. We should just let the database do its job.
pyrolistical 23 hours ago [-]
But consistency is a choice we need to make when deciding how to use a database.
By letting the database do its job, you let someone else make the trade-off for you
thayne 15 hours ago [-]
> Hence, in my opinion, one is much better off giving up P rather than sacrificing C. (In a LAN environment, I think one should choose CA rather than AP).
That isn't how it works. The only way to completely avoid network partitions is to not have a network, i.e. not have a distributed system. Sure, partitions are rare, but unless your network is flawless and your nodes never go down, they do happen sometimes, and when they do, you have to choose between consistency and availability.
That said, in most cases consistency is probably what you want.
Probably needs a (2010) label. Great article, though.
wippler 1 day ago [-]
FYI. This was written in 2010 although it feels relevant even now. Didn't catch it until the mention of Amazon SimpleDB.
redwood 1 day ago [-]
Indeed, I was wondering for a moment if Amazon had decided to double down on it
redwood 1 day ago [-]
This is why the winning distributed systems optimize for CP. It's worth preserving consistency at the expense of rare availability losses, particularly on cloud infrastructure
pyrolistical 23 hours ago [-]
Also, giving up on availability doesn't imply we will be down for a long time. It's just that some requests might get dropped. As long as the client knows the request was rejected, it can retry later.
oooyay 1 day ago [-]
A lot of these kinds of discussions tend to wipe away all the nuance around why you would or wouldn't care about consistency. Most of the answer has to do with software architecture and some of it has to do with use cases.
belter 1 day ago [-]
The 2010 date is really important here. And Stonebraker is thinking about local database systems and was a bit upset by the NoSQL movement's push at the time.
And he is making a mistake in claiming that partitions are "exceedingly rare". Again, he is not thinking about a globally distributed cloud spanning continents.
Remember also that "partition" is not "yes or no" but rather a latency threshold. If the network is connected but a call now takes 30 seconds instead of milliseconds, that is probably a partition.
convolvatron 15 hours ago [-]
This is likely wrong. The issue with partitions is that we can no longer communicate at all, thus we can't end up in the same state. If we have poor performance, that's certainly something that's worth putting machinery in to adapt to, but it's not at all in the same class as 'I can't talk to you and I don't know what you're doing at all' from a correctness standpoint.
edit: yeah ok, since failure detection is necessarily driven by timers, then sure. The trade-off we're making is between the interval during which we're unable to make progress vs. the upheaval caused by announcing a failure.
anonymars 14 hours ago [-]
Yeah, I glossed over a few steps. There's likely a latency threshold beyond which you should abort, and then it is a partition (after all, that's what TCP is doing under the hood if it sends a packet and doesn't get a response).
One should be so lucky to have an operation fail immediately, rather than lumber on until it times out (holding resources hostage all the while)!
majormajor 1 day ago [-]
> And he is making a mistake in claiming that partitions are "exceedingly rare". Again, he is not thinking about a globally distributed cloud spanning continents.
Any time an AWS region or AZ goes down we see a lot of popular services go nearly-completely-down. And it's generally fine.
One thing I appreciate about AWS is that (operating "live" in just a single AZ or even single region) I've seen far fewer partition-causing networking hiccups than when my coworkers and I were responsible for wiring and tuning our own networks for our own hardware in datacenters.
hobs 1 day ago [-]
I would say quite the opposite - most businesses have little need for eventual consistency, and at a small scale it's not even a requirement for any database you would reasonably use; way more than 90% of companies don't need eventual consistency.
belter 1 day ago [-]
No. The real world is full of eventual consistency, and we simply operate around it. :-)
Think about a supermarket: If the store is open 24/7, prices change constantly, and some items still have the old price tag until shelves get refreshed. The system converges over time.
Or airlines: They must overbook, because if they wait for perfect certainty, planes fly half empty. They accept inconsistency and correct later with compensation.
Even banking works this way. All database books have the usual “you can’t debit twice, so you need transactions”…bullshit. But think of a money transfer across banks and possibly across countries? Not globally atomic...
What if you transfer money to an account that was closed an hour ago in another system? The transfer doesn’t instantly fail everywhere. It’s posted as credit/debit, then reconciliation runs later, and you eventually get a reversal.
Same with stock markets: Trades happen continuously, but final clearing and settlement occur after the fact.
And technically DNS is eventually consistent by design. You update a record, but the world sees it gradually as caches expire. Yet the internet works.
Distributed systems aren’t broken when they’re eventually consistent. They’re mirroring how real systems work: commit locally, reconcile globally, compensate when needed.
sethev 15 hours ago [-]
These analogies (except for DNS, perhaps) aren't very illuminating on the difference between a CP system and an AP system in the CAP sense, though. In banking, there are multiple parties involved. Each of those parties is likely running a CP system for their transactions (almost guaranteed). Same with stock exchanges - you can look up Martin Thompson's work for a public glimpse of how these systems work (LMAX and Aeron are systems related to this).
These examples are closer to control loops, where a decision is made and then carried out or finalized later. This kind of "eventual consistency" is pervasive but also significantly easier to reason about than what people usually mean by that term when talking about a distributed database, for example.
To expand on the 24/7 grocery store example: if the database with prices is consistent, you will always know what the current price is supposed to be. If the database is eventually consistent, you may get inconsistent answers about the current price that have to be resolved in the code somehow. That's way harder to reason about than "the price changed, but the tag hasn't been hung yet". The first of those, professional software engineers struggle to deal with correctly; the second, anyone can understand.
hobs 18 hours ago [-]
None of the systems you describe are the 90% of businesses - grocery, airlines, banking, stock markets, DNS - they are all modeling huge systems with very active logistics compared to most businesses. I still don't agree with you at all.
Banks across countries - again, not a problem most businesses ever have to deal with.
awesome_dude 1 day ago [-]
> Even banking works this way. All database books have the usual “you can’t debit twice, so you need transactions”…bullshit. But think of a money transfer across banks and possibly across countries? Not globally atomic..
Banking is my "go to" anology when it comes to eventual consistency because 1: We use banking almost universally the same ways, and 2: we understand fully the eventual consistency employed (even though we don't think about it)
Allow me to elaborate.
When I was younger we had "cheque books" which meant that I could write a cheque (or check if you're American) and give it to someone in lieu of cash, they would take the cheque to the bank, and, after a period of time their bank would deposit funds into their account, and my bank would debit funds from mine - that delay is eventual consistency.
That /style/ of banking might be gone for some people, but the principle remains the same: this very second my bank account is showing me two "balances", the "current" balance and the "available" balance. Those two numbers are not equal, but they will /eventually/ be consistent.
The reason they are not consistent is that I have used my debit card, which is really a credit arrangement that my bank has negotiated with Visa or Mastercard, etc. I have paid for some goods/services with my debit card, Visa has guaranteed the merchant that they will be paid (with some exceptions), and Visa has placed a hold on the balance of my account for the amount.
At some point - it might be overnight, it might be in a few days - there will be a reconciliation where actual money will be paid by my bank to Visa to settle the account, and Visa will pay the merchant's bank some money to settle the debt.
Once that reconciliation takes place to everyone's satisfaction, my account balances will be consistent.
kukkeliskuu 24 hours ago [-]
I have been working on payment systems and it seems that in almost all discussions about transactions, people talk about toy versions of bank transactions that have very little to do with what actually happens.
You don't even need to talk about credit cards to have multiple kinds of accounts (internal bank accounts for payment settlement etc.), multiple involved systems, batch processes, reconciliation etc. Having a single atomic database transaction is not realistic at all.
On the other hand, the toy transaction example might be useful for people to understand basic concepts of transactions.
mrkeen 23 hours ago [-]
And then they take that toy transaction model and think that they're on ACID when they're not.
Are you stepping out of SQL to write application logic? You probably broke ACID. Begin a transaction, read a value (n), do a calculation (n+1), write it back and commit: The DB cannot see that you did (+1). All it knows is that you're trying to write a 6. If someone else wrote a 6 or a 7 in the meantime, then your transaction may have 'meant' (+0) or (-1).
Same problem when running at a reduced isolation level (which you probably are). If you do two reads in your 'transaction', the first read can be at state 1, and the second read can be at state 2.
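As a minimal sketch of that difference (hypothetical Counters table; the same shape applies to any read-modify-write):

-- Read in one statement, compute the new value outside the database, write back a constant.
-- Under the default READ COMMITTED, two clients can both read 5 and both write back 6 (a lost update).
DECLARE @n INT;
BEGIN TRANSACTION;
SELECT @n = Value FROM Counters WHERE Id = 1;
-- ... application computes @n + 1 ...
UPDATE Counters SET Value = @n + 1 WHERE Id = 1;
COMMIT;

-- The same intent expressed where the database can see it:
-- concurrent increments serialize on the row and none are lost.
UPDATE Counters SET Value = Value + 1 WHERE Id = 1;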
I think more conversations about the single "fully consistent" db approach should start with it not being fit-for-purpose - even without considering that it can't address soft-modification (which you should recognise as a need immediately whenever someone brings up soft-delete) or two-generals (i.e. consistency with a partner - you and VISA don't live in the same MySql instance, do you? Or to put it in moron words - partitions between your DB and VISA's DB "don't happen often" (they happen always!))
fishstamp82 20 hours ago [-]
RE: "All it knows is that you're trying to write a 6. If someone else wrote a 6 or a 7 in the meantime, then your transaction may have 'meant' (+0) or (-1)."
This is not how it works at all. This is called a dirty write and is by default prevented by ACID-compliant databases, no matter the isolation level. The second transaction's commit will be rejected by the transaction manager.
Even if you start a transaction from your application, that still does not change this.
mrkeen 14 hours ago [-]
I have no problem with ACID the concept. It's a great ideal to strive towards. I'm sure your favourite RDBMS does a fine job of it. If you send it a single SQL string, it will probably behave well no matter how many other callers are sending it SQL strings (as long as the statements are grouped appropriately with BEGIN/COMMIT).
I'm just pointing out two ways in which you can make your system non-ACID.
1) Leave it on the default isolation level (READ_COMMITTED):
You have ten accounts, which sum to $100. You know your code cannot create or destroy money, only move it around. If no other thread is currently moving money, you will always see it sum to $100. However, if another thread moves money (e.g. from account 9 to account 1) while your summation is in progress, you will undercount the money. Perfectly legal in READ_COMMITTED. You made a clean read of account 1, kept going, and by the time you reach account 9, you READ_ what the other thread _COMMITTED. Nothing dirty about it, you under-reported money for no other reason than your transactions being less-than-Isolated. You can then take that SUM and cleanly write it elsewhere. Not dirty, just wrong.
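Roughly, the interleaving looks like this (a sketch against a hypothetical Accounts table of ten rows at $10 each, reading one account per statement under READ COMMITTED):

-- Session A: summing the accounts one statement at a time
BEGIN TRANSACTION;
SELECT Balance FROM Accounts WHERE Id = 1;  -- sees $10 (before the transfer)
-- ... reads accounts 2 through 8, $10 each ...

-- Session B, meanwhile: move $5 from account 9 to account 1, and commit
BEGIN TRANSACTION;
UPDATE Accounts SET Balance = Balance - 5 WHERE Id = 9;
UPDATE Accounts SET Balance = Balance + 5 WHERE Id = 1;
COMMIT;

-- Session A, continuing:
SELECT Balance FROM Accounts WHERE Id = 9;  -- sees $5, the value B just committed
COMMIT;
-- A's total is $95, not $100: account 1 was read before the move, account 9 after it.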
2) Use an ORM like LINQ. (Assume FULL ISOLATION - even though you probably don't have it)
If you were to withdraw money from the largest account, split it into two parts, and deposit it into two random accounts, you could do it ACID-compliantly with this SQL snippet:
SELECT @bigBalance = Max(Balance) FROM MyAccounts
SELECT @part1 = @bigBalance / 2;
SELECT @part2 = @bigBalance - @part1;
..
-- Only showing one of the deposits for brevity
UPDATE MyAccounts
SET Balance = Balance + @part1
WHERE Id IN (
    SELECT TOP 1 Id
    FROM MyAccounts
    ORDER BY NewId()
);
Under a single thread it will preserve money. Under multiple threads it will preserve money (as long as BEGIN and COMMIT are included ofc.). Perfectly ACID. But who wants to write SQL? Here's a snippet from the equivalent C#/EF/LINQ program:
// Split the balance in two
var onePart = maxAccount.Balance / 2;
var otherPart = maxAccount.Balance - onePart;
// Move one half
maxAccount.Balance -= onePart;
recipient1.Balance += onePart;
// Move the other half
maxAccount.Balance -= otherPart;
recipient2.Balance += otherPart;
Now the RDBMS couldn't manage this transactionally even if it wanted to. By the final lines, 'otherPart' is no longer "half of the balance of the biggest account", it's a number like 1144 or 1845. The RDBMS thinks it's just writing a constant and can't connect it back to its READ site:
info: 1/31/2026 17:30:57.906 RelationalEventId.CommandExecuted[20101] (Microsoft.EntityFrameworkCore.Database.Command)
Executed DbCommand (7ms) [Parameters=[@p1='a49f1b75-4510-4375-35f5-08de60e61cdd', @p0='1845'], CommandType='Text', CommandTimeout='30']
SET NOCOUNT ON;
UPDATE [MyAccounts] SET [Balance] = @p0
WHERE [Id] = @p1;
SELECT @@ROWCOUNT;
arter45 21 hours ago [-]
I don't have a lot of payment experience, but AFAIK actual payment systems work in an append-only fashion, which makes concurrency management easier since you're just adding a new row with (timestamp, from, to, value, currency, status) or something similar. However, how can you efficiently check for overdrafts in this model? You'd have to periodically sum up transactions to find the sender's balance and compare it to a known threshold.
Is this how things are usually done in your business domain?
mrkeen 14 hours ago [-]
> how can you efficiently check for overdrafts in this model?
You already laid the groundwork for this to be done efficiently: "actual payment systems work in an append-only fashion"
If you can't alter the past, it's trivial to maintain your rolling sums to compare against. Each new transaction through the system only needs to mutate the source and destination balances of that individual transaction.
If you know everyone's balance as of 10 seconds ago, you don't need to consider any of the 5 million transactions that happened before 10 seconds ago.
(If your system allowed you to alter the past and edit arbitrary transactions in the past, you could never trust your rolling sums, and you'd be back to summing up everything for every operation.)
arter45 13 hours ago [-]
So you're saying each line records the new value of the source and destination balance, rather than just the sum that is being exchanged?
mrkeen 11 hours ago [-]
No.
At the beginning of time, all your accounts will have their starting value.
When the first transaction (from,to,value) happens, you will do one overdraft check, and if it's good, you will do 1 addition and 1 subtraction, and two of the accounts will have a new value.
On the millionth transaction, you will do one overdraft check, and if it's good, you will do 1 addition and 1 subtraction, and two of the accounts will have a new value.
At no point will you need to do more than one check & one add & one sub per arriving transaction.
(The append-only property is what allows this: the next state is only ever a single, cheap step from the current state. But if someone insists upon mutating history, the current state is no longer valid because it no longer represents the history that led up to it, so it cannot be used to generate the next state - you need to throw it all away and regenerate the current/next states, starting from 0 and replaying every transaction again.)
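A minimal sketch of that shape (hypothetical Ledger and Accounts tables, T-SQL flavour, with the overdraft check and both balance updates in the same database transaction as the append):

-- @from, @to, @value are the fields of the arriving transaction
BEGIN TRANSACTION;
IF (SELECT Balance FROM Accounts WITH (UPDLOCK) WHERE Id = @from) >= @value
BEGIN
    -- append the immutable ledger entry
    INSERT INTO Ledger (Ts, FromAccount, ToAccount, Value)
    VALUES (SYSUTCDATETIME(), @from, @to, @value);

    -- maintain the two rolling balances: one subtraction, one addition
    UPDATE Accounts SET Balance = Balance - @value WHERE Id = @from;
    UPDATE Accounts SET Balance = Balance + @value WHERE Id = @to;

    COMMIT;
END
ELSE
    ROLLBACK;  -- it would overdraw the source account, so reject it

The balances never require a scan of the ledger; each arriving transaction costs one check, one insert, and two single-row updates.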
arter45 10 hours ago [-]
Ok so basically you have a Transactions table as well as a separate Accounts table which stores balances, and every time Alice wishes to pay Bob, a (database) transaction appends an entry to the Transactions table and updates the balance in Accounts only if the sender's balance is ok? Something like an "INSERT INTO…SELECT"?
awesome_dude 10 hours ago [-]
The rolling balance is a "projection"
Your bank statement has the event (A deposit or withdrawal) with details, and to one side the bank will say, your balance after this event can be calculated to be $FOO
The balance isn't a part of the event, it's a calculation based on the (cached) balance known from the previous event.
Further, your bank statements are (typically) for the calendar month, or whatever. They start with the balance brought forward from the previous statement (a snapshot)
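As a sketch of that projection (hypothetical LedgerEntries table holding signed amounts per account; @openingBalance, @accountId and @statementStart are illustrative parameters):

-- Running balance per statement line, derived from the events rather than stored on them
SELECT Ts,
       Amount,
       @openingBalance + SUM(Amount) OVER (ORDER BY Ts
                                           ROWS UNBOUNDED PRECEDING) AS BalanceAfter
FROM LedgerEntries
WHERE AccountId = @accountId
  AND Ts >= @statementStart  -- everything earlier is covered by the brought-forward snapshot
ORDER BY Ts;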
awesome_dude 23 hours ago [-]
The point is to show people who don't realise it that they have been dealing with eventual consistency all along - that it's right there in their lives and they already understand it.
You're right that I go into too much detail (maybe I got carried away with the HN audience :-) and you are right that multiple accounts are something else that people generally already understand and that demonstrates further eventual-consistency principles.
kukkeliskuu 22 hours ago [-]
I wasn't criticizing you, just making the point that when people talk about toy example bank transactions, they usually just want to introduce the basic concepts. And I think that's ok, but I would prefer that they also mention that the real operations are complex.
I modified my comment above: by multiple types of accounts I meant that banks have various accounts for settlement with other banks etc., even in the common payment case.
awesome_dude 10 hours ago [-]
My bad, I didn't mean to sound too upset, but I do get a bit "trigger happy" from time to time
da_chicken 24 hours ago [-]
No, this is confusing how the financial institutions operate as a business with how the data store that backs those institutions operates as a technology.
You can certainly operate your financial system with a double entry register and delayed reconciliation due to the use of credit and the nature of various forms of promissory notes, but you're going to want the data store behind the scenes to be fully consistent with recording those transactions regardless of how long they might take to reconcile. If you don't know that your register is consistent, what are you even reconciling against?
What you're arguing is akin to arguing that because computers store data in volatile RAM and that data will often differ from what is on disk, that you shouldn't have to worry about file system consistency or the integrity of private address spaces. After all, they aren't going to match anyways.
awesome_dude 23 hours ago [-]
No.
I clearly state
> analogy (sorry about the initial misspell) when it comes to eventual consistency because 1: We use banking almost universally the same ways, and 2: we understand fully the eventual consistency employed (even though we don't think about it)
The point is, you understand that your bank account is eventually consistent, and I have given an explanation of instances of eventual consistency that you already usually know and understand.
You make the mistake of thinking about something else (the back-end storage, the double-entry bookkeeping).
"What goes around comes around" (2005) https://web.eecs.umich.edu/~mozafari/fall2015/eecs584/papers...
"What Goes Around Comes Around... And Around..." (2024) https://dl.acm.org/doi/pdf/10.1145/3685980.3685984 (ft Andy Pavlo)
Or video (2025) https://www.youtube.com/watch?v=8Woy5I511L8