tilde-he · 3 months
Text
Thanks! Hmm... I'm not sure that dropping this (P4) allows quite the branching that you were looking for in the OP, because omitting that axiom allows a number to have multiple predecessors, while I think what you were looking for in the OP is to allow a number to have multiple successors? This is why I suggested making S a relation symbol (with an extra condition so that there is always at least one successor) rather than a function symbol. Either change could be interesting though.
How might we define addition if numbers can have multiple successors? One question is whether we want addition to still be a function, or whether it too should be multi-valued. If addition is to be single-valued, then I suppose that instead of x+S(y) = S(x+y), we would say something about some logical combination of the statements S(y,z), S(x+y,w), and x+z=w? Maybe, "for all z such that S(y,z), there exists a w such that S(x+y,w) and x+z=w"? Or, a more neutral and weaker statement, "(for all x,y) there exist z,w such that x+z=w, S(x+y,w), and S(y,z)". Oh, wait, if we are assuming that addition is a function, then having x+z=w leaves no separate choice of what w to use... So, I guess it would just be a combination of the statements S(y,z) and S(x+y,x+z). So, either "for all x,y, there exists a z such that S(y,z) and S(x+y,x+z)", or "for all x,y, for all z, if S(y,z), then S(x+y,x+z)". I'm not sure which of these would be more interesting.
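(To write those two candidates out a bit more carefully, reading S(a,b) as "b is a successor of a", and keeping + as an ordinary function symbol:)

```latex
% the two candidate addition axioms sketched above, with S a successor relation
\[ \forall x\,\forall y\,\forall z\;\bigl(S(y,z)\rightarrow S(x+y,\,x+z)\bigr)
   \quad\text{(for every successor of } y\text{)} \]
\[ \forall x\,\forall y\,\exists z\;\bigl(S(y,z)\wedge S(x+y,\,x+z)\bigr)
   \quad\text{(for at least one successor of } y\text{)} \]
```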
There should be divergent number lines. Sure you have the good old fashioned one, two, three… etc. but after three you can choose to go to four or you could go to grond and continue onwards from there, each line branching out on their own twisting ways.
27 notes · View notes
tilde-he · 10 months
Text
While a lot of the Urbit stuff seems, uh, unreasonable, the idea of an OS of sorts written for a mathematically specified machine model, where the OS shouldn’t change much, and where one could transfer this from computer to computer as long as there is an emulator for the machine model written for the particular hardware, seems fairly appealing.
The need to separately re-establish all of one’s programs and settings when moving to a new device seems like a waste, and like it shouldn’t be that way. (Of course, there are VMs which can emulate some other physically existing hardware, but then you have all these different possible emulated hardwares, and... well, it seems like there is more complexity that way, and more edge cases where some aspects of the hardware might not be emulated quite correctly.)
But the specific stack of 3 weird programming languages that the project uses, with their deliberately weird-sounding names for things, ugh.
Why?
Also, I feel like, ideally, a project for such an “eternal OS” would use formal methods to make machine-verifiable proofs of various properties of the OS, like what seL4 has.
Also, the machine model should probably resemble real hardware at least a little bit more than Urbit’s does.
6 notes · View notes
tilde-he · 11 months
Text
How about we use A \cap B to stand for (A \cap B) \ne \emptyset, and, idk, A \not\cap B for A \cap B = \emptyset?
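(If one wanted to actually set that up in a LaTeX document, something like this might do; the macro names here are just placeholders:)

```latex
% hypothetical macro names; wrapping in \mathrel makes them space like relations
\newcommand{\meets}{\mathrel{\cap}}        % A \meets B  for  A \cap B \ne \emptyset
\newcommand{\nmeets}{\mathrel{\not\cap}}   % A \nmeets B for  A \cap B = \emptyset
```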
We need a standard symbol for denoting the binary relation of two sets being disjoint or non-disjoint, just as we have for subsets. Writing A ∩ B = ∅ or A ∩ B ≠ ∅ makes me sad each time I have to, and I'm getting too old to waste precious emotional energy on this nonsense.
56 notes · View notes
tilde-he · 1 year
Text
1) To elaborate on that “minds are finite” comment: insofar as human behavior can be described in a materialist way, then, assuming currently fairly-accepted ideas about physics and information content, humans have (or at least can be effectively modeled as having) finite information capacity, and therefore facts about their behavior aren’t strictly speaking “undecidable”, as they could simply be looked up in an enormous lookup table (where the lookup table includes external influences as part of the keys).
Of course, that amount of information is bonkers huge, so maybe one could make a variation on the “human behavior is undecidable” claim which is actually true (using e.g. an appropriate limitation on the size of the TM that is supposed to do the deciding, appropriate in light of the information capacity of a human).
2) But yes, as mentioned, people don’t actually seem to optimize for a utility function in any super-meaningful way (of course, if the utility function is a function of actions and not just outcomes, then *some* utility function will be optimized by any history of actions, but that’s why I said “in any super-meaningful way”).
3) And, I agree that if people did have utility functions, there doesn’t appear to be anything fundamental preventing one from knowing them, and, well...
(no longer numbered) It’s unclear to me whether the fact that we don’t appear to quite match with a utility function (in a meaningful way) is or isn’t important for any of the like, existentially meaningful questions.
But, as you mention, the family of hypothetical questions of the form “what would one do given choice X” would certainly be enough to determine one’s utility function, if one had one, to any degree of accuracy. And I don’t think there being a fact-of-the-matter as to “what one would choose given choice X (or, in scenario X)”, for all choices (scenarios) X, would be any problem for meaning. This is the kind of thing a utility function would encode, so the answers to these questions would specify a utility function if one had one, and vice versa, and so I think this family of answers could be regarded as a very loose generalization of a utility function? (I’m allowing for facts-of-the-matter as to “what one would choose” to include things like probabilistic mixes of different responses.) And continuing to act according to such a generalization-of-a-utility-function is not something that makes one less of a person, so much as just, basically, a tautology? One will do as one would do if one were to be in the situations one will be in.
Would comprehending some information which entirely encodes all of this about oneself in any way inherently conflict with any important personhood-things? This isn’t entirely clear to me, but I feel like the answer is probably “no”?
I’m a little bit reminded of a scene I saw screenshots of from “Adventure Time”, wherein a character remarks on how they know precisely what it is that they want, and, while that character was like, evil and such, I don’t think them understanding themself in that way seemed like any kind of obstruction to their personhood..? Though, “fictional evidence isn’t”, so perhaps that’s just irrelevant.
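(A toy illustration of the “a finite history doesn’t pin down a unique utility function” point from the post below; the options and numbers here are, of course, made up:)

```python
# Two utility functions that are both perfectly consistent with the same finite
# history of observed (chosen, rejected) pairs, but which disagree about a
# choice that hasn't come up yet.
observed = [("apple", "banana"), ("banana", "cherry")]

u1 = {"apple": 3, "banana": 2, "cherry": 1, "durian": 0}
u2 = {"apple": 3, "banana": 2, "cherry": 1, "durian": 9}

def fits(u):
    return all(u[chosen] > u[rejected] for chosen, rejected in observed)

print(fits(u1), fits(u2))                                      # True True
print(u1["durian"] > u1["apple"], u2["durian"] > u2["apple"])  # False True
```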
You can fit a utility function to a preference ordering, but not to a finite preference ordering. (EDIT: I mean a finite preference ordering doesn't determine a unique utility function) Everyone's utility function is undefined by the choices they've made so far. It's always possible in the future to make choices that change which utility function is the best fit to your choices, including your past choices. So what you really wanted, why you really did something, is a statement about an infinite sequence of future actions. That puts it in the same kind of statement as stuff like the Gödel string, or consistency of a formal system. Claiming someone has a particular utility function is a claim that infinite future choices will not deviate from it, just as consistency is a claim that a proof search will never find an inconsistency. So it's the kind of thing where you can have incompleteness or undecidability.
Humans are trivially undecidable in a certain sense, in that they can emulate Turing machines and so there's no program that solves the halting problem for a human emulating a Turing machine. But still it's interesting to apply this perspective to a human's "revealed preferences". Not that you can't infer someone's preferences, it's definitely an approachable modeling problem, but you can't ever achieve certainty here; they can always just, like, start behaving in a different way if a certain Turing machine halts, which makes their past actions part of a different pattern, and therefore you fit different "revealed preferences" to it.
And I think if you apply that perspective to yourself... the point I think is to give up on the idea that you have a true utility function you need to discover. Like if you want to limit yourself and turn yourself into basically a chess-playing program, sure, you can fit something to your past actions and feelings and commit to following it forever. But that's not discovering your true utility function, that's making an infinite sequence of future choices by a certain method, and doing that is what makes it your true utility function... except that's never complete, cause you can stop at any time.
36 notes · View notes
tilde-he · 1 year
Note
No, I’m saying they should always go in the direction we currently call “counterclockwise”, in both the southern and northern hemispheres (and on the equator), and the reasonable-but-insufficient justification of “but clockwise is the way sundials go” only works in the northern hemisphere.
They should all go ccw because that’s the direction angles go.
So, seeing as sundials in the southern hemisphere go counterclockwise, I guess there's even less reason in the southern hemisphere for clocks to go clockwise? I guess countries in the southern hemisphere just got stuck with the northern hemisphere's convention that was set for clocks, of going the wrong way around (clockwise) instead of the correct forward direction for angles (ccw)?
what should clocks do on the equator, just tremble in terror?
22 notes · View notes
tilde-he · 1 year
Text
For some reason I didn’t respond to this at the time. (Aside: I really wish I could set the blog I actually post with to be my “main blog”.) And, yes, we could imagine a setup where there’s the “main thing”, which has finitely many states, and then just have a counter alongside it, but, like, why? It isn’t like e.g. sorting algorithms (or whatever other kind of algorithm) are restricted to only working sensibly on lists of finite length.
I don’t see why whatever process is responsible for human reasoning couldn’t be scaled up to larger context sizes without there being at some point a fundamental break in the nature of the being there.
I guess, in the “become unrecognizable”, you are thinking of the “recognizable” part as being like a (very large) finite state machine which is acting like the state machine of a Turing machine head as it acts on an infinite tape. But why not instead imagine the person normally as being more analogous to the process including what happens on the tape, where the tape has a fixed limited size (responsible for e.g. working memory, short-term memory, and long-term memory all having finite capacity) (and where this limited size of the tape is much larger than the number of possible states of the head), and then just imagine what happens if you remove the cap on the size of the tape?
In principle (and not just in this world with physics as it appears to be), what should make it impossible for someone to enjoy some chocolate that reminds them of a childhood memory, while they are also working on factorizing 3^^^^^3 + 7?
Like, what’s the obstacle?
learn-tilde-ath said: yeah, but what if there were a universe/laws-of-physics where arbitrarily large amounts of information can be stored in a fixed amount of space? In such a universe, perhaps there could be true immortality.
one could imagine Conway’s Life where every cell can be subdivided into smaller games of Life to arbitrary depth, allowing a “fixed size” entity to approach infinite computation asymptotically, but the entity is still forced to “grow” or die, either by terminating or repeating previous states; it might not count as death to gradually transform into an entity that is causally linked with your previous self but utterly unrecognisable to it but it definitely sounds deathish.
34 notes · View notes
tilde-he · 1 year
Text
I’m unsure about “Kelly Criterion only applies when you know the odds”.
Like, sure, “exactly the Kelly Criterion” probably only makes sense under the assumption that you know what the odds are, I guess.
But suppose you are playing a variant of the “multi-armed bandit” game, where you can choose how much to bet on each pull of an arm (with a certain minimum bet size, I guess). I would expect some kind of variant of Kelly to apply here? Of course, in that case, I guess to define that game you would have to assume some probability distribution over what each of the arms has for its payout distribution... But perhaps one could look at, like, some sort of “worst case” for what that probability distribution might be? Uh... I’m not really sure of that. Well, suppose you are required to make all N pulls even if it seems likely that all the arms have negative expected return, so that, assuming some of the arms are better, you would be likely to eventually find this out, thereby ruling out some of the distributions from the “worst case compatible with what is observed so far”? idk.
(And, of course, I am considering this variant on the multi-armed bandit here as potentially working as a metaphor for other decisions in life and such.)
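(A rough simulation sketch of that bet-sizing idea, for a single even-money “arm” with unknown win probability: the bettor estimates the probability from observed outcomes and bets the Kelly fraction for the estimate. The specific numbers are made up, and this isn’t meant as a serious model:)

```python
import random

random.seed(0)
true_p = 0.55              # the arm's actual win probability, unknown to the bettor
wins, losses = 1, 1        # pseudo-counts acting as a weak prior on p
bankroll = 100.0
min_bet = 1.0              # the "certain minimum bet size" from above

for _ in range(10_000):
    p_hat = wins / (wins + losses)
    kelly_fraction = max(0.0, 2 * p_hat - 1)   # f* = p - q for an even-money bet
    bet = min(max(min_bet, kelly_fraction * bankroll), bankroll)
    if random.random() < true_p:
        bankroll += bet
        wins += 1
    else:
        bankroll -= bet
        losses += 1
    if bankroll <= 0:
        break              # being forced to keep making the minimum bet can ruin you

print(round(bankroll, 2))
```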
it's fun to talk about the kelly criterion and i guess we're doing it cause of stuff sbf and wo said about it but im skeptical it's related to the downfall of alameda
like you can do some model-based calculation that if you exceed the kelly bets you're putting yourself at risk of losing everything
but when i read about the subprime mortgage crisis and about the fall of LTCM, their risk models were wrong. They failed in ways that the models said were impossible. So it's not about the model-based safety of the kelly criterion vs the model-based risk of exceeding it rly i dont think
you can calculate really low risk based on like the standard 1/√n decay of variation when averaging n uncorrelated things. But then the normally-uncorrelated stuff falls together in a market downturn
I wouldn't have much confidence generalizing from anecdotes, but alan greenspan says in his memoirs he thinks this is like the biggest issue, that risk models need to explicitly model two "phases", so i get some confidence from someone who knows much more than me saying something similar
so when i see some crypto traders go under during a crypto crash? And general downturn cause of a money vacuum? I don't think this is because of some huge risk they knowingly accepted, I would guess it's because their risk models they used to make a lot of money back when money was free told them there was no chance of losing all their money
20 notes · View notes
tilde-he · 1 year
Text
I think one idea I’ve seen which is tangentially related is the idea of automatically finding cycles in wants, like [“X would be willing to provide A in exchange for B”, “Y would be willing to provide B in exchange for C”, “Z would be willing to provide C in exchange for A”] (except on a much larger scale, and without necessarily being single cycles, but multiple interconnected cycles).
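(A rough sketch of what that cycle-finding could look like, using networkx; the offers here are made up. Each “provide X in exchange for Y” offer becomes a directed edge Y → X, so a directed cycle is a set of offers that can all be satisfied at once:)

```python
import networkx as nx

# (provides, wants) pairs; all hypothetical
offers = [
    ("art commission", "tutoring"),
    ("tutoring", "proofreading"),
    ("proofreading", "art commission"),
    ("advertising", "tutoring"),       # this one stays unmatched below
]

G = nx.DiGraph()
G.add_edges_from((wants, provides) for provides, wants in offers)

for cycle in nx.simple_cycles(G):
    print(cycle)   # e.g. ['tutoring', 'art commission', 'proofreading']
```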
Perhaps if one started with some kinds of digital goods, like, art commissions, uh, recommendations/advertisements, tutoring services, and other things which could be initially done at a non-professional level over the internet, it could be possible? And then like, if you didn’t require all the chains to be closed before completing a cycle, you could create representations of debts?
Like, if you start with just a system to facilitate barter, and then only later add in some more abstract unit(s) of account (the debts), and then significantly later some people begin to exchange this unit / these units of account for, like, government-endorsed currencies?
idk
cryptocurrency does at least have a simple value proposition: it's a speculative asset / ponzi scheme and so there's a ready supply of people willing to exchange real money for it in the hope of getting rich / selling it to some other sucker later, which keeps the whole ecosystem growing (and collapsing, and growing again and then collapsing, and so on).
but if a debt trading market is "real money done better but without the backing of a sovereign government", what's the use case for it that drives adoption?
you can't use it to pay your taxes, you might use it to avoid taxes but that's obviously a lightning rod for trouble, and while you would happily use it to buy things there's no situation in which you would rather accept it over real money when selling things unless those things are illegal or socially sanctioned in some way (drugs, sex, assassinations, and so on) such that regular payment providers keep their distance.
so you can seed a crypto trading market by taking real money and giving away magic beans but you would need to seed a debt trading market by doing the exact opposite: giving people goods and services worth real money in exchange for magic beans from them, which is obviously a less appealing proposition for an investor.
(now you could say that some venture capitalists are indirectly doing this, for example subsidising grocery delivery services that lose money in the hope of building market share for some future enterprise that might be worth something one day, but that seems more like a semi-fraudulent quirk of the low interest rate environment rather than any kind of sound business plan).
one possibility is to bootstrap the market by selling intangible things such that early adopters aren't running the risk of bankruptcy, for example you could "sell" a web comic or podcast, in which case there's a progression from people giving you "likes" and other forms of attention that cannot be redeemed for cash, to people subscribing to your patreon or hitting your tip jar, where perhaps debt trading could offer a smooth gradient across that spectrum from "I owe you a notional favour" to "I am literally buying you a coffee".
but it does seem like that use case could be achieved by simpler methods, like the clever infrastructure isn't really contributing much to this specific example and is only there in the opportunistic hope that the market would grow into something more than that (without being exploited for criminal ends or promptly shut down by the SEC).
12 notes · View notes
tilde-he · 1 year
Text
What if, instead of conditioning on x >= s, you take, like, a million samples and keep the 50 largest values?
At the least, if you took 5 million samples from both distributions, and then took the 100 largest values from the union of those two sets, then (assuming at least some of the values came from the distribution which tends to produce smaller outputs) these 100 largest values will probably all be pretty close to one another (even though the ones that came from the tends-to-be-smaller distribution probably tend to be on the lower end of the 100).
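(A quick numpy sketch of that; the means and sample sizes here are arbitrary:)

```python
import numpy as np

rng = np.random.default_rng(0)
a = rng.normal(0.5, 1.0, 5_000_000)    # distribution with the larger mean
b = rng.normal(0.0, 1.0, 5_000_000)    # distribution with the smaller mean

combined = np.concatenate([a, b])
is_from_b = np.concatenate([np.zeros(len(a), bool), np.ones(len(b), bool)])

top = np.argsort(combined)[-100:]                 # indices of the 100 largest values
print(combined[top].min(), combined[top].max())   # all extreme, tightly clustered
print(int(is_from_b[top].sum()))                  # the smaller-mean group still shows up, just less often
```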
you would expect female CEOs and politicians to be as power hungry and sociopathic as male politicians and CEOs even if women on average were not, simply because of selection effects on what is a very small group.
90 notes · View notes
tilde-he · 2 years
Note
If it would immediately bring about such a ceasefire, and even if it wouldn’t (unless maybe if it would somehow prolong the war or something?), then no, it wouldn’t be wrong to kill Hitler?
Something cannot be both immoral and “worth it”. If it is morally worth it, then it isn’t, in the context, immoral.
If anyone is currently having a genocide be done on their command, it is ok to kill them. (Exception: it was one more immoral thing for Hitler to kill himself. The correct thing for him to do would have been to give himself up to the Allies and allow himself to be executed for his crimes.)
it's true that political assassination is bad, but it also seems sort of ghoulish to spell it out in the context of war. it's like, killing "soldiers"? fine, good, do it as much as you want! killing random civilians who were resisting, or just living in a city you bombed? eh well it's bad but what did you think would happen, it's war. killing a SPECIFIC person who you were specifically targeting because they're important? how dare you. "people" vs "faceless mooks"
killing anyone is bad, but "soldiers" being people who have donned a specific costume and said "I will try to kill you while you try to kill me" are at least aware of the game being played and (conscripts aside) voluntarily choosing to play it.
killing civilians is obviously a crime and one of the reasons why war sucks so much ass, if you could simply lock the soldiers from both sides in a sealed chamber and let them sort it out between them then that would obviously be preferable!
deliberately attempting to kill a specific person (or the person standing next to them, or whoever had the misfortune to vaguely resemble them) is murder! like, a pretty basic crime! the stuff of hoodlums and gangsters! it doesn't magically become okay just because it's being done by gangsters paid by the government.
launching an aggressive war is still the ultimate crime that underlies so many other crimes, but assassinations are carried out even in "peacetime" too, and that doesn't make them any better.
45 notes · View notes
tilde-he · 2 years
Text
There we go! ( : ] )
I need to create a pentagonal tiling...
64 notes · View notes
tilde-he · 2 years
Text
Huh, while every node here has 5 neighbors, and it seems like the balls around any two nodes are equivalent, this is still different from the hyperbolic tiling depicted above it. In the tiling of hyperbolic space with pentagons, each pentagon has 5 neighbors, and in the dual, the places where the lines meet, each of those vertices has 4 neighbors; while in the graph you’ve made, although each node has 5 neighbors, the dual graph is such that some of the dual regions have 4 neighbors while others have 3.
The tiling you’ve produced here seems to be a periodic tiling of the Euclidean plane, made (depending on how you look at it) of either irregular pentagons, or a combination of squares and triangles.
I need to create a pentagonal tiling...
64 notes · View notes
tilde-he · 2 years
Text
Are you looking only for things which tile the flat plane (or flat higher-dimensional spaces), or do you want to include, like, hyperbolic stuff? My thought would be “connected graphs with bounded degree where, at any vertex, for any radius, the ball (with respect to distance on the graph) centered at that vertex is isomorphic to the ball of the same radius centered at any other vertex”. I would think that specifying what it looks like up to some radius would then be enough to pin down the whole thing?
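(A small sketch of checking that “all balls of a given radius look the same” condition with networkx, on a finite stand-in (a toroidal grid, so there are no boundary effects):)

```python
import networkx as nx

G = nx.grid_2d_graph(10, 10, periodic=True)   # a 10x10 torus as a finite stand-in

def ball(G, v, r):
    # induced subgraph on all vertices within graph distance r of v
    return G.subgraph(nx.single_source_shortest_path_length(G, v, cutoff=r))

balls = [ball(G, v, 2) for v in G.nodes]
print(all(nx.is_isomorphic(balls[0], b) for b in balls[1:]))   # True on the torus
```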
I think you can grow grids like crystals from simple rules -- I mean obviously you can, they're tiling patterns, but I mean rules about connections rather than shapes per se.
26 notes · View notes
tilde-he · 2 years
Text
I think “most west I’ve been” can be given a good definition. While the fact that longitude values wrap around does make longitude values not linearly ordered (in any way where one direction of east/west is always increasing and the other always decreasing), it does not prevent there being a cyclic order on these values, and of course there is such a cyclic order. The longitude values, of course, correspond to points on a circle. For any interval on a circle, unless the interval is either empty or the entire circle, there are two endpoints of this interval. And, within that interval, there is a clear linear order compatible with the cyclic order on the whole circle. So, where you have the cyclic order being defined by the “west” direction, for any proper interval, there is a specific endpoint which is “the west endpoint”, and, if it is a closed interval (as it is in this application), it is “the westmost point of the interval”.
(Generally, for any cyclic order, if you cut it at any point, you can produce a corresponding linear order on the complement of that point. If you take any two points outside of an interval, the linear orders induced by cutting at those two points will, when restricted to the interval, produce the same linear order on the interval.)
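(A quick sketch of actually computing that: cut the circle of longitudes at the largest empty gap, and the “westmost” point is the point just east of the cut. This assumes the visited points don’t cover the whole circle; the sample longitudes below are made up:)

```python
def westmost(longitudes):
    # longitudes in degrees, in [-180, 180), increasing eastward
    xs = sorted(longitudes)
    gaps = [(xs[(i + 1) % len(xs)] - xs[i]) % 360 for i in range(len(xs))]
    i = max(range(len(xs)), key=gaps.__getitem__)   # the largest empty gap follows xs[i]
    return xs[(i + 1) % len(xs)]                    # the point just east of that gap

print(westmost([-122.5, -80.1, 10.2]))    # -122.5
print(westmost([-170.0, 175.0, -160.0]))  # 175.0: wrapping past the antimeridian is handled
```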
Furthest north I've ever been: Seattle, specifically the path running around Green Lake
Furthest west I've ever been: Golden Gate Bridge, or more probably some road leading to the Golden Gate Bridge
Furthest south I've ever been: US Virgin Islands, probably wherever I went scuba diving
Furthest east I've ever been: ferry ride to the UK Virgin Islands
183 notes · View notes
tilde-he · 2 years
Text
actually wait, this one actually has the lyrics (from Enter Sandman) sung in the style of Mr. Sandman, and is more cohesive (and is also from 12 years ago, unlike the other one, which was from 2 years ago)
[YouTube embed]
it’s pretty good! I kinda wish there was a version without the introductory talking bit.
do you think the Sandman is really sick of that song
105 notes · View notes
tilde-he · 2 years
Text
Yes (and here is one such mashup):
[YouTube embed]
Though, it only has the lyrics from one. I was sort-of expecting something that would go back and forth between the two lyrics, but I guess that would be more difficult.
It is not the only mashup of those two songs together that has been made, nor even the only one made by the person who made this one.
do you think the Sandman is really sick of that song
105 notes · View notes
tilde-he · 2 years
Text
While sortition (in the sense of “pick a random ballot from those cast and the candidate on that ballot wins”) has the significant issue of being less convincing regarding legitimacy, it is one of the only systems where one’s incentives are always to vote exactly one’s preference. (Others include: “two of the candidates are picked at random, everyone’s ballot is interpreted as a vote for whichever of them they ranked higher, and whichever of the two gets more votes wins”, and stuff like that.)
If it weren’t for the appearance-of-legitimacy thing (which is one of the most important parts of voting, so setting it aside is setting aside a lot), I think I might favor sortition.
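(The whole mechanism, as a sketch; this is also, basically, why honest voting is optimal under it: your ballot only matters in the event that it is the one drawn, and in that event it elects exactly whoever you wrote down. The ballots below are made up:)

```python
import random

ballots = ["A", "A", "B", "C", "A", "B"]   # each voter's single honest favorite
winner = random.choice(ballots)            # draw one ballot uniformly at random
print(winner)
```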
learn-tilde-ath said: “Ranked choice” meaning the general category, or the specific system popularized under that name?
I'm thinking of instant-runoff voting as used in Australia, but surely almost any system is an improvement on first past the post as used in America.
32 notes · View notes