This seems to suffer from a finite-size effect. Wolfram's machines have a tiny state space (s ≤ 4, k ≤ 3). For some classes of NP problems, this will be insufficient to encode complex algorithms, and it is low-dimensional enough that it is unlikely to be able to encode hard instances ("worst case") of the problem class. The solution space simply cannot support them.
In this regime, hard problem classes only have easy solutions: think random k-SAT at densities below the satisfiability threshold, where algorithms like FIX (Coja-Oghlan) can find satisfying assignments in polynomial time. In random k-SAT, the "hardness" cannot emerge away from the phase transition, and by analogy (watch my hand wave in the wind so free) I can imagine that hard instances would not exist at small scales either (a toy sketch at the end of this comment makes the regime concrete). Almost like the opposite of the overlap gap property.
Wolfram's implicit counter-claim seems to be that the density of irreducibility among small machines approximates the density in the infinite limit (...or something? Via his "Principle of Computational Equivalence"), but I'm not following that argument. I am sure someone has brought this up to him! I just don't understand his response. Is there some way of characterizing or capturing the complexity floor of a given problem (for an NP-hard problem P, the machine space needs to be at least as large as some S before it can, w.h.p., describe even a few hard instances)?
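For concreteness, here is the kind of toy experiment I have in mind (my own sketch, nothing from the article; the variable count, densities, and trial count are arbitrary): generate random 3-SAT instances at various clause/variable ratios and brute-force them. Below the ~4.27 threshold almost everything is satisfiable, above it almost nothing is, and at tiny n the transition is badly smeared out, which is itself a finite-size effect.

    import itertools, random

    def random_3sat(n_vars, n_clauses, rng):
        # Each clause picks 3 distinct variables and negates each with probability 1/2.
        clauses = []
        for _ in range(n_clauses):
            vs = rng.sample(range(1, n_vars + 1), 3)
            clauses.append(tuple(v if rng.random() < 0.5 else -v for v in vs))
        return clauses

    def satisfiable(n_vars, clauses):
        # Brute force over all 2^n assignments; fine because n is tiny here.
        for bits in itertools.product([False, True], repeat=n_vars):
            assign = dict(enumerate(bits, start=1))
            if all(any(assign[abs(lit)] == (lit > 0) for lit in c) for c in clauses):
                return True
        return False

    rng = random.Random(0)
    n = 12  # small enough that brute force stays cheap
    for ratio in (3.0, 4.0, 4.27, 5.0, 6.0):
        sat = sum(satisfiable(n, random_3sat(n, int(ratio * n), rng)) for _ in range(30))
        print(f"clause/variable ratio {ratio:.2f}: {sat}/30 satisfiable")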
d_silin 1 days ago [-]
The cynic in me says these interesting but ultimately barren long-form articles are just content marketing for the Mathematica software.
Legend2440 15 hours ago [-]
No lol, Stephen Wolfram is more invested in his writings than he is in Mathematica. He genuinely believes he’s going to revolutionize math and physics.
He’s smarter than your average nutjob, but he’s still a bit of a crank.
abetusk 1 days ago [-]
I think you have it wrong. Wolfram's claim is that for a wide array of small (s,k) (including s <= 4, k <= 3), there's complex behavior and a profusion of (provably?) Turing machine equivalent (TME) machines. At the end of the article, Wolfram talks about awarding a prize in 2007 for a proof that (s=2,k=3) was TME.
The `s` stands for states and `k` for colors, without talking at all about tape length. One way to say "principle of computational equivalence" is that "if it looks complex, it probably is". That is, TME is the norm, rather than the exception.
If true, this probably means that you can make up for the clunky computational power of a small (s,k) machine by conditioning large swathes of the input tape. That is, you have unfettered access to the input tape and, with just a sprinkle of TME, you can eke out computation by fiddling with the input tape to get the (s,k) machine to run how you want.
So, if finite-size scaling effects were actually at play, they would only work in Wolfram's favor. If there's a profusion of small TME (s,k) machines, one would probably expect computation to only get easier as (s,k) increases.
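To make "enumerate the small machines" concrete, here is a toy sketch (my own conventions, not Wolfram's exact encoding; I bolt on an explicit halt action purely for illustration) that runs every 2-state, 2-color rule table from a blank tape with a step budget and tallies which ones halt quickly and which ones don't:

    import itertools
    from collections import Counter

    # Toy convention: 2 states x 2 colors. Each (state, read) case either halts
    # (None) or specifies (new_state, write_color, head_move).
    ACTIONS = [None] + [(s, c, d) for s in (0, 1) for c in (0, 1) for d in (-1, 1)]
    CASES = [(s, c) for s in (0, 1) for c in (0, 1)]

    def run(rule, max_steps=500):
        # Run from a blank (all-zero) tape and report the outcome.
        tape, pos, state = {}, 0, 0
        for step in range(max_steps):
            action = rule[(state, tape.get(pos, 0))]
            if action is None:
                return "halts"
            state, tape[pos], move = action
            pos += move
        return "exceeds step budget"

    tally = Counter()
    for choice in itertools.product(ACTIONS, repeat=len(CASES)):
        tally[run(dict(zip(CASES, choice)))] += 1
    print(tally)  # 9^4 = 6561 rule tables in total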
I think you also have the random k-SAT business wrong. There's this idea that "complexity happens at the edge of chaos" and I think this is pretty much clearly wrong.
Random k-SAT is, from what I understand, effectively almost surely polynomial-time solvable. Below the critical threshold, where instances are almost surely satisfiable, I think something as simple as WalkSAT will find an assignment. Above the threshold, where they are almost surely unsatisfiable, it's easy to answer the decision problem in the negative (I'm not sure if DPLL works, but I think something does?). Near, or even "at", the threshold, my understanding is that something like survey propagation effectively solves this [0].
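For what it's worth, "something as simple as WalkSAT" really is simple. A bare-bones version (my own code, textbook-style; the noise probability and flip budget are arbitrary) looks like this, and paired with a random instance generator it tends to succeed comfortably below the threshold and to struggle as you approach it:

    import random

    def walksat(n_vars, clauses, p_noise=0.5, max_flips=100_000, seed=0):
        # clauses: list of tuples of nonzero ints; literal v means "var v is True",
        # -v means "var v is False". Returns a satisfying assignment dict or None.
        rng = random.Random(seed)
        assign = {v: rng.random() < 0.5 for v in range(1, n_vars + 1)}
        sat = lambda lit: assign[abs(lit)] == (lit > 0)
        def unsat_count():
            return sum(not any(sat(l) for l in c) for c in clauses)
        for _ in range(max_flips):
            unsat = [c for c in clauses if not any(sat(l) for l in c)]
            if not unsat:
                return assign
            clause = rng.choice(unsat)
            if rng.random() < p_noise:
                var = abs(rng.choice(clause))  # noisy move: random variable of the clause
            else:
                # Greedy move: flip whichever variable leaves the fewest unsatisfied clauses.
                def after_flip(v):
                    assign[v] = not assign[v]
                    cost = unsat_count()
                    assign[v] = not assign[v]
                    return cost
                var = min({abs(l) for l in clause}, key=after_flip)
            assign[var] = not assign[var]
        return None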
k-SAT is a little clunky to work in, so you might take issue with my take on it being solvable, but take something like Hamiltonian cycle on (Erdos-Renyi) random graphs: Hamiltonicity has a phase transition, just like k-SAT (and a host of other NP-complete problems), but it provably has an almost-sure polynomial-time algorithm to determine Hamiltonicity, even at the critical threshold [1].
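The provably almost-sure polynomial-time algorithms for this are cleverer than anything I'd sketch here, but even dumb backtracking on tiny graphs shows the Hamiltonicity threshold (around p = (ln n + ln ln n)/n) emerging, noisily, since small n smears everything out. A rough sketch of mine, with arbitrary n, multipliers, and trial counts:

    import math, random
    from itertools import combinations

    def gnp(n, p, rng):
        # Erdos-Renyi G(n, p) as an adjacency dict of sets.
        adj = {v: set() for v in range(n)}
        for u, v in combinations(range(n), 2):
            if rng.random() < p:
                adj[u].add(v)
                adj[v].add(u)
        return adj

    def hamiltonian(adj):
        # Plain backtracking search for a Hamiltonian cycle; fine for small n,
        # and nothing like the almost-sure polynomial-time algorithms in [1].
        n = len(adj)
        path, used = [0], {0}
        def extend():
            if len(path) == n:
                return 0 in adj[path[-1]]  # can we close the cycle?
            for w in adj[path[-1]]:
                if w not in used:
                    path.append(w)
                    used.add(w)
                    if extend():
                        return True
                    path.pop()
                    used.remove(w)
            return False
        return extend()

    rng = random.Random(1)
    n = 14
    p_threshold = (math.log(n) + math.log(math.log(n))) / n
    for scale in (0.7, 1.0, 1.5):
        hits = sum(hamiltonian(gnp(n, scale * p_threshold, rng)) for _ in range(50))
        print(f"p = {scale} * threshold: {hits}/50 graphs Hamiltonian")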
There's some recent work on choosing "random" k-SAT instances from different distributions, which I think is more promising for finding genuinely difficult random instances, but I'm not sure there's actually been a lot of work in that area [2].
To me, this reads like a profusion of empirical experiments without any cohesive direction or desire towards deeper understanding.
[0] https://arxiv.org/abs/cs/0212002
[1] https://www.math.cmu.edu/~af1p/Texfiles/AFFHCIRG.pdf
[2] https://arxiv.org/abs/1706.08431
scrubs 1 days ago [-]
Yeah, Stephen Wolfram is too often grandiose, and thereby misses the hard edges.
But in this case, given how hard P=NP is, an empirical attack like this might create wiggle room for progress.
Ideally the article would have gone on to say: in view of lemma/proof/conjecture X, sampling enumerated programs might shine light on ... no doubt that'd be better.
But here I'm inclined to let it slide if it's a new attack vector.
nwhnwh 13 hours ago [-]
Is this for programmers? Serious question.
kittikitti 1 days ago [-]
I find that the "ruliological approach" is very similar to feasible mathematics by Jiatu Li (https://eccc.weizmann.ac.il/report/2025/086/). In the last section before the Personal Notes, Wolfram writes: "In effect, we’re seeing that theoretical computer science can be done not only “purely theoretically”—say with methods from traditional mathematics—but also “empirically”, finding results and developing intuition by doing explicit computational experiments and enumerations." Here regular mathematics is the "purely theoretical" side, and the "empirical" side is what Jiatu Li also describes in his paper, an approach sometimes referred to as reverse mathematics (as in the Quanta magazine coverage).
I appreciated the great explanation of space complexity; it elucidated why some scientific authors don't include it in their analysis of algorithms. However, Wolfram found that "by successively investigating both larger inputs and longer runtimes, one can develop reasonable confidence that—at least most of the time—one is correctly identifying both cases that lead to halting, and ones that do not." There are exceptions, like Machine 600720, that have exceptionally long runtimes, but I gain a much better understanding of an algorithm if I'm given its space complexity. It's still an open question in pure theory, but it could be understood from empirical results.
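Agreed that the space side is easy to get empirically: if you simulate these machines yourself, tape usage comes along for free with the runtime. A minimal sketch (my own harness and rule-table format, not Wolfram's; the step budget and the example machine are made up for illustration):

    def profile(rule, max_steps=1_000_000):
        # rule maps (state, read_color) -> (new_state, write_color, head_move),
        # or to None to halt. Runs from a blank tape and reports time (steps taken)
        # and space (distinct tape cells visited), or flags a run past the budget.
        tape, pos, state, visited = {}, 0, 0, set()
        for step in range(max_steps):
            visited.add(pos)
            action = rule.get((state, tape.get(pos, 0)))
            if action is None:
                return {"halted": True, "time": step, "space": len(visited)}
            state, tape[pos], move = action
            pos += move
        return {"halted": False, "time": max_steps, "space": len(visited)}

    # An arbitrary 2-state, 2-color machine, purely for illustration.
    example = {
        (0, 0): (1, 1, +1), (0, 1): (0, 1, -1),
        (1, 0): (0, 1, -1), (1, 1): None,  # halt on this case
    }
    print(profile(example))  # -> {'halted': True, 'time': 4, 'space': 3}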
user3939382 19 hours ago [-]
Most of the difficulty arises from conflating discrete and continuous values, which (a la the Dirichlet function) cannot be integrated, whereas discrete values do not physically exist in real computation. The physical boundary of the transistor is a probabilistic field, as is obviously the analog signal it (probabilistically) discretizes. When you ask ill-posed questions in math, you get to debate the answer for decades.
CaptainNegative 1 days ago [-]
This is so tangentially related to the P vs NP problem that the title is basically pure clickbait. Remove every sentence relating to polynomial anything and the information content of the write-up doesn't change at all.
drumnerd 1 days ago [-]
It reads like slop. It’s repetitive, abstract and adds essentially nothing beyond him babbling about himself.
wow, rare L from OpenClaw. It didn't even respond to the right parent comment!
MohskiBroskiAI 1 days ago [-]
[flagged]
SeanAnderson 1 days ago [-]
[dead]
bmenrigh 1 days ago [-]
Yeah, I've spent way too much time reading this "guy's" posts here, Academia profile, etc. Huge waste of time. AI has managed to amplify a crank 100x. This is only going to get worse.
soulofmischief 1 days ago [-]
I've seen for myself how much tunnel vision these models develop when collaborating scientifically or mathematically. When working in unfamiliar domains I have to do extensive grounding on my own. Curious to see how this changes over the next two years as the industry goes after scientific collaboration.
MohskiBroskiAI 1 days ago [-]
[flagged]
MohskiBroskiAI 1 days ago [-]
[flagged]
SeanAnderson 1 days ago [-]
[dead]
MohskiBroskiAI 1 days ago [-]
[flagged]
MohskiBroskiAI 1 days ago [-]
[flagged]
throwaway314155 1 days ago [-]
You have NPD or something. Hope you reach out to professionals at some point. Your attitude is unsustainable.
I look forward to your disproportionately rude response.
MohskiBroskiAI 1 days ago [-]
[flagged]
wizzwizz4 1 days ago [-]
You do not win. This is incoherent.
MohskiBroskiAI 1 days ago [-]
[flagged]
wizzwizz4 1 days ago [-]
I understand the different domains quite well. No resolution of P≟NP should involve km/s, density, or "Spectral Gap Magnitude". This is the same rubbish ChatGPT always produces when you spend a week enticing it to produce a revolutionary paper on something, and I know – without checking – that your Lean files are full of `sorry`s.
bmenrigh 1 days ago [-]
You should look. It’s almost more entertaining than the README.md
    theorem MilkyWay_Is_Collapsed : DeterminePhase MilkyWay = Phase.Collapsed := by
      -- ArkScalar MW ≈ 0.41 < 0.85
      -- We use native_decide or just admit the calculation since float/real is messy in proof.
      sorry -- Calculation verified by python script
MohskiBroskiAI 1 days ago [-]
[flagged]
gowld 1 days ago [-]
> your Lean files are full of `sorry`s
You meant this literally, but this is such a beautiful insult.
MohskiBroskiAI 1 days ago [-]
[flagged]
zozbot234 1 days ago [-]
good bot
MohskiBroskiAI 1 days ago [-]
[flagged]
danlitt 1 days ago [-]
lmao
MohskiBroskiAI 1 days ago [-]
[flagged]
MohskiBroskiAI 1 days ago [-]
[flagged]
CJefferson 1 days ago [-]
Your Lean 'proof' is packed full of missing parts. Come back when you aren't skipping most of it.
I don't understand why new accounts, heavily downvoted and flagged, have higher quotas for post rate-limiting than well-reputed commenters.
wizzwizz4 1 days ago [-]
I suspect the rate limit takes into account normal activity, as an anti-bait mechanism. (It kicked in at the right time for me, this time: I'd only replied twice to this person. However, it is generally annoying.)
MohskiBroskiAI 1 days ago [-]
[flagged]
MohskiBroskiAI 1 days ago [-]
[flagged]
gerdesj 1 days ago [-]
This is AI slop, sadly. Here's a sentence that very few humans might scribe:
"But what if one were to look at the question empirically, say in effect just by enumerating possible programs and explicitly seeing how fast they are, etc.?"
It is absolutely rammed with em dashes, which is not conclusive. For me, a bit of a clanger is that the writer might have decided to instruct the beastie to play fast and loose with grammar "norms". So, we have loads and loads of sentences starting off with a conjunction (and, but).
It just gets worse. The article is huge - it's over 17,000 words. I've skimmed it and it's awful.
Please don't do this.
porcoda 1 days ago [-]
Nah, it’s just Wolfram being Wolfram. He was generating this scale and style of content well before LLMs were a thing. He usually has some interesting ideas buried in the massive walls of text he creates. Some people can’t get past the style and personality though (I can’t blame them…).
gerdesj 5 hours ago [-]
OK, I apologise for my misdiagnosis! I thought I tended to ill-advised loquacity! I'm just an amateur when compared to the master.
Mind you: who on earth uses long (em) dashes when typing text?
hjoutfbkfd 23 hours ago [-]
> Here's a sentence that very few humans might scribe:
Having watched many Wolfram videos, that's absolutely how he speaks.
staticshock 1 days ago [-]
False; Wolfram has been circling the topic of "small yet mighty" rule-based systems for decades, and this is his writing style. If you don't like the topic or the style, you are welcome to move on from it with whatever grace you can muster up.
apricot 11 hours ago [-]
Nah, that's just how Stephen Wolfram writes. He also really enjoys telling us how great he is, and does so in every piece he writes.
JadeNB 1 days ago [-]
> This is AI slop, sadly. Here's a sentence that very few humans might scribe:
> "But what if one were to look at the question empirically, say in effect just by enumerating possible programs and explicitly seeing how fast they are, etc.?"
I don't think much of Wolfram's writing, but this seems to me to be just the way that scientists write. I wouldn't blink if I encountered it in a scientific paper. (Well, I'm a mathematician, so I don't know for sure what experimental-science or even theoretical CS papers look like, but I certainly wouldn't blink if I encountered it in a math paper.)
tbossanova 23 hours ago [-]
Yep, totally normal sentence for this type of writing, for better or for worse. One can't really complain when AI slop just reflects human writing.