#frequentist
vizthedatum · 1 year
Text
Statisticians are really funny when trying to make the case for Bayesian approaches versus frequentist approaches versus a combination approach versus a "let's just scrap it altogether and put it in some black box model" approach.
1 note · View note
jennamoran · 4 months
Text
frequentist stochastic virtue theory is the idea that when you are unable to decide on the correct course of action, you should first commit a series of small moral errors to increase the odds that your next action will be a virtuous one. In recent years a Bayesian stochastic virtue theory model that "warms up" to moral decisions instead by performing a series of minimally virtuous acts has become more prominent, with one wit among the frequentist stochastic virtue theorists (Holstein, 2014) declaring its practice his "preferred moral error."
1K notes · View notes
functional-discourse · 6 months
Text
this isn't why i would call myself a rationalist to other people (that would be my pretty much 100% endorsement of a human's guide to words and the simple truth and a few other sequences) but I do think the right way to analyze arguments is the sort-of-frequentist perspective of, "what results does the logic of this argument produce in which epistemic contexts?"
22 notes · View notes
sunshines-child · 4 months
Text
Aphrodisiac
After so long, I finally found the motivation to write more of my Mafia AU

The first thing you'll see when you turn a corner is a place called "Eros' Arrow". Pink light fills the darkness that surrounds it, spilling from the windows in a flood. Even if you stand a mile from it, you'll feel the beats of music slither their way into your bones and lewd sounds wrap your heart in ropes. You will go in, no matter what your brain says. There's a poison, aphrodisiac, maybe, that claws its way out from the building and fills your lungs. You will walk in. You will order a drink. You will enter a room with a boy, a girl, whatever, and you will wrap your arms around their hips and listen to their unwilling moans all throughout the night, as they entertain you, feeling the filth of your actions burn across their skin. You won't notice. You will keep coming. It's the rules.

—

Nico knows the rules well. He's one of the longest-serving workers here; the rules have wormed their way into his head, and he's thoroughly convinced he'll never forget them until a rope is wound around his neck. He tells the people unlucky enough to have to work here all the rules. They'll be lucky if they find someone who pays well. He's had regulars since his first year of work, and he isn't surprised if they continue to come. It doesn't matter if they're young, doesn't matter if they're old. Doesn't matter if they're just another homeless man who scraped enough money to visit once, it doesn't matter if it's the "King" of Olympus himself. (Zeus Xenios is a frequentist here. Everybody is wary of him. He might pay well, but if he gets too close with one person here his wife will have their head by the end of the month.)

Nico closes the door with a resounding click. Last person of the day. He doesn't bother to know their name. When he walks out, he's greeted with amber eyes, flickering red with the pink lights that flood the rooms. His eyes have a sharp glint to them. Something of a madman.

Or maybe a predator. Eros. The owner.

"Angel, there's a man waiting for you in 419." The sugarcoated voice and nickname disgust him.

"I'm finished with today's people. I'm going home."

"Get in there. I'll pay you extra."

Nico wants to curse him out and leave right then and there, but he needs the money. Olympus is too fucking expensive. He pulls together the same alluring look he puts on for every client, and he walks in.

He's greeted with the greenest eyes he's ever seen. They aren't hazed over from the…thing that seeps out of Eros' Arrow. They're sharp as a dagger, a glint in them Nico can see from the filtered light that covers the room. It looks familiar. It looks like hers.
10 notes · View notes
Note
I re-read the Iwa Gai collection bc ur most recent fic got me in the mood but I completely forgot about what happened in “Surprise” and omggg I need more. When are they going to see one another again? Will Gai be allowed in the village bc his baby is gunna be a leaf citizen? Is Tenzo going to be the god-parent? Lord knows he will do a better job than Jiraiya
I really should do something about that one of these days XD Gai deserves to see his partner, but unfortunately for him he won’t really get to be a part of the kiddo’s life till after the fourth great shinobi war
Which is about a year after the kid is born maybe XD
But omg yes. Tenzo would be the ‘naming parent’ (which i believe is what Jiraiya actually was) because he accidentally gives Kakashi the perfect name and he is never going to live it down
Gai would not be allowed in the village until the alliance is made for the war cuz… well, enemy shinobi. Tsunade’s being very understanding by not kicking Kakashi out of the village (though tbh she knows he’s more loyal to his friends and her than the actual village and that’s sort of why he’s her right hand. He’ll stand up to the elders like she will)
But Gai would make frequentist visits outside of the village (with Yamato’s help, because he’s weak for his senpai) to check in on Kakashi.
I think he’d really panic when he gets a letter from Yamato cuz it’s not an easy birth (which is only one of many reasons i only have Kakashi have one kid. Other reasons being the dysphoria is way too strong. No thank you. If they want more a surrogate would be needed) and Kakashi needs recovery time.
But gosh once the war is over all bets are off. That is his kid and you will have to fight him to keep him from seeing her adorable little face
9 notes · View notes
I'm thinking of the "don't let a small problem stop you" advice for all sorts of important activities, but when it comes to Bayesian vs frequentist statistics I'd think twice about using it
32 notes · View notes
eggshellsareneat · 9 months
Text
Four men and four women walk into a bar. The bartender explains that it is a busy night, and so there is only a table for six and a table for two left unfilled.
"Well let's choose randomly", man 1 says. "I've got a die in my car, so we can roll to see if we'll have two men sit at the table, two women, or a woman and a man"
"You're falling for a classic probabilistic misconception", woman 1 replies. "Although there are three distinct combinations, they are not equally likely. There could be a woman and a man, or a man and a woman"
"To be truly random, we'd need to choose a random combination of our group. 8 choose 2, or 28 possible combinations" man 2 said.
"We can flip a coin for each of us. If heads, he or she sits at the table for six. If tails, he or she sits at the table for two." woman 2 said.
"But in what order do we flip the coins?" Man 3 asked. "It would be egregious to flip for the men first, and women second, as it would be far more likely for the men to flip tails than the women".
The bartender, hearing the commotion, offered a solution: "Here's what you should do. I've got a particle in the back. It emits quantum fluctuations based on the fundamental randomness of physics. Use this in your combinatorics".
"That particle isn't random" Woman 3 replied. "The next sequence of bits is 10010"
The bartender was shocked. "Why, you're right" he exclaimed. "But that's just a lucky guess".
"Not for me, I'm a frequentist statistician." Man 4 responded. "p<0.05, ergo correlation"
"And besides, the pure randomness does not help us with its algorithmic implementation" woman 4 remarked.
Unfortunately for all parties involved, they had walked into the bar where all probabilities are 50%, regardless of Bayesian fundamentals. Thus, the squabble could have been avoided.
9 notes · View notes
max1461 · 1 year
Text
Yeah, I just don't buy that expected value is a good universal basis for rational decision making; it's too weak.
[What follows is a heuristic argument based on my understanding of probability; if I'm wrong in any technical ways, please correct me]
Expected value makes sense as a way to reason about procedures that are going to be (or at least can be) repeated a comparable number of times to the reciprocal of the probabilities involved. So like, if you have some game wherein I receive $10 with probability 1/5, and $0 with probability 4/5, the expected return is 10*1/5 + 0*4/5 = 2 dollars. If I can repeat this game n times, therefore, I am likely to win 2n dollars. Let's say you offer to let me play this game for $1 a go. If you plan on letting me play as many times as I want, then that's a good deal: if I play n times, then in the limit, I'll make n dollars. But if you only let me play once, that's a bad deal: I'll lose my $1 and win $0 with probability 4/5. So expected value being a useful metric for rational decision making relies on actually being able to repeat the given procedure enough times that the limiting behavior (which expected value tries to characterize) starts to come into play. This is related, as mentioned, to the reciprocal of the probabilities involved.
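The contrast the paragraph draws between repeated and one-shot play is easy to make concrete with a quick simulation. This is an illustrative sketch using the post's numbers ($10 with probability 1/5, $1 per play); the seed is arbitrary:

```python
import random

random.seed(0)

def play():
    """One round of the game: win $10 with probability 1/5, otherwise $0."""
    return 10 if random.random() < 0.2 else 0

# Repeated play at $1 per round: average net profit per round
# approaches the expected value 10 * (1/5) minus the $1 fee = $1.
n = 100_000
net = sum(play() - 1 for _ in range(n))
print(net / n)  # close to 1.0

# A single play: four times out of five you just lose your dollar.
losses = sum(1 for _ in range(n) if play() == 0)
print(losses / n)  # close to 0.8
```

The long-run average converges on the expected value only because the number of repetitions dwarfs the reciprocal of the probability involved (1/5), which is exactly the condition the post identifies.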
This is also my intuition for the solution to Pascal's mugging: if you offer me the chance to win 10^100 utility with probability 10^(-99), and 0 utility otherwise, that's an expected return of 10 utility. If you want to charge me 1 utility to take this bet, under an expected-utility-maximizing decision theory, I should take it. But I wouldn't take that bet, because if I did, what would realistically happen is that I'd just lose my 1 utility. Because the probabilities involved are so low, even if you let me repeat this as many times as I wanted, there'd be no way to repeat it enough to get the limiting behavior to come into play.
This is why I sometimes say I believe in a stronger than normal version of causal decision theory, a causal decision theory which maximizes expected value only in limited contexts and which works with some stronger criterion of "expectation" (here in the intuitive sense) the rest of the time. But I don't know how to formalize this stronger "expectation" criterion.
Anyway, I don't know much about the Bayesian vs. frequentist debate, but this seems like a somewhat more frequentist way of thinking about things, which lines up with my gut intuition that frequentism is obviously correct and Bayesianism obviously nonsense.
43 notes · View notes
Text
Frequentists leave just enough milk in the box so they don't have to be the ones to throw it away
26 notes · View notes
bayesic-bitch · 7 months
Text
Here's a weird thing about statistics for me: Philosophically the Bayesian argument seems way stronger than the frequentist argument. I don't find the distinction between "genuine randomness" and "uncertainty" to be at all principled. And frequentist proofs only show what bound you have on estimating a value correctly if a particular value is true, which doesn't actually tell you anything if you don't know the value. The whole formalization of error bounds in the frequentist setting feels deeply wrong to me.
And yet, despite feeling philosophically correct, it feels like Bayesian approaches are incredibly difficult to verify mathematically. Because everything in the bayesian setting is too complex to calculate and needs to be approximated, but there's not a great notion of "approximately correct" in the bayesian setting. What you really want is some guarantee like "under this approximation, your approximate posterior should have a KL divergence from the true posterior of no more than epsilon", but these kinds of bounds are way less common than frequentist error bounds. I've never seen a single bayesian regret bound for bandits or reinforcement learning. I don't think I've even seen a bayesian regret bound in ML, despite how frequently the framework is used there. Which is bizarre considering that the mathematical formalization of bayesian learning is so much more straightforward than the frequentist formalization.
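For univariate Gaussians, the KL divergence the post wants bounds on has a closed form, so the quantity in a guarantee like "KL from the true posterior is at most epsilon" can at least be written down. A minimal sketch (the numbers are made up; no actual bound from the literature is claimed here):

```python
import math

def kl_gauss(mu_q, sd_q, mu_p, sd_p):
    """KL(q || p) between univariate Gaussians, in nats (standard closed form)."""
    return (math.log(sd_p / sd_q)
            + (sd_q ** 2 + (mu_q - mu_p) ** 2) / (2 * sd_p ** 2)
            - 0.5)

# Identical distributions: zero divergence.
print(kl_gauss(0.0, 1.0, 0.0, 1.0))  # 0.0

# An approximate posterior whose mean is slightly off the true one.
print(kl_gauss(0.1, 1.0, 0.0, 1.0))  # 0.005
```

The hard part, as the post says, is not evaluating this quantity for known distributions but proving it is small when the true posterior is intractable.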
3 notes · View notes
rachelraygifs · 1 year
Text
I’m a frequentist* not a bayesian so maybe I’m already biased but literally anyone who talks about their “priors” in real life should be shot out of a cannon
3 notes · View notes
skluug · 1 year
Text
Jerry: No, I'm not gonna tell her about your shrinkage. Besides, I think frequentists know about shrinkage.
George: How do frequentists know about shrinkage? Elaine! Get in here! Do frequentists know about shrinkage?
Elaine: What do you mean, like laundry?
George: No.
Jerry: Like when a Bayesian estimates an effect size... with an informative prior...
Elaine: It shrinks?
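The statistical shrinkage the bit leans on, a posterior estimate pulled toward an informative prior, can be sketched with a normal-normal conjugate update (the numbers here are invented for illustration):

```python
# Normal-normal conjugate update: the posterior mean is a
# precision-weighted average of the prior mean and the raw estimate,
# so a noisy effect-size estimate "shrinks" toward the prior.

prior_mean, prior_var = 0.0, 0.25   # informative prior: effects are small
mle, se = 1.2, 0.5                  # raw effect estimate and its standard error

w = (1 / prior_var) / (1 / prior_var + 1 / se ** 2)
posterior_mean = w * prior_mean + (1 - w) * mle
print(posterior_mean)  # 0.6: the estimate shrinks halfway toward zero
```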
3 notes · View notes
tototavros · 2 years
Text
thinking of pirsig’s lila in comparison to the sequences/hpmor and pirsig/yud generally
because of who he is, pirsig interlaces the pulp in with the philosophy, and spends a hundred pages building up terms and vibes so that eventually when it comes to a bunch of big philosophical problems, he can cut through them all with his ways of looking at things 
pirsig’s beef with boasian anthropology parallels yud’s fixation on frequentists, and both disputes frame, in their questions and in their answers, their metaphysics (pirsig goes on about how cultures evolve and the important mixed euro/plains indian aspect of america’s culture (cf. cowboys); yud, how to think about knowledge and certainty)
both of them also have a deep respect for a jamesian view of language; pirsig cites the squirrel example from What Pragmatism Means, which i think yud would agree with
both of them are also closely knit to, but not exactly part of, “hippy living”, but maybe that’s just philosophizing in the Fourth Great Awakening
4 notes · View notes
sutrala · 19 days
Link
GoDaddy's Hivemind platform transforms experimentation by streamlining processes, mitigating statistical errors, and enhancing quality with metric categorization, frequentist analysis, automated conclusions, and an integrated badging system, fostering a data-centric culture that...
0 notes
cuttle-cards · 3 months
Text
I will survive
You are considering whether to undergo a surgery that would drastically improve your quality of life… but has a 50% survival rate. Don’t worry though, your surgeon’s past 20 patients have survived the procedure and done well! Bearing that in mind, how safe is the procedure?
The answer depends on how we interpret the information presented, and on the way we analyze it. For example, using simple probability we might say: if the odds of survival are 50%, it doesn’t matter how many times the outcome went one way or another; it’s a 50/50 chance every time. Proponents of this view might accuse surgical optimists of falling prey to the Gambler’s Fallacy, where you’re incorrectly persuaded that a ‘hot streak’ changes your odds.
But unlike a coin toss or dice roll, the probability of surviving a surgery isn’t independent or identically distributed. That is to say, the odds of success can change based on prior events (e.g. as surgeons improve with practice). Further, the 50% survival rate is presumably something that was calculated from statistical data, rather than figured out analytically from pure theory. When considering whether the sun has just exploded, we don’t say “well it either did or it didn’t, so that’s 2 options; it’s a 50/50 chance the sun just exploded”. We look at historical precedent.
Frequentist Statistics are a way of comparing observed data to a hypothesis. You essentially put forward a hypothesis, e.g. “the chance of surviving the surgery is 50%”, and then calculate “if that were true, what are the odds that I observed the following data”, i.e. in this case the odds of 20 consecutive successful surgeries. If the chance of survival is really 50%, the odds of 20 consecutive successful surgeries are practically zero (0.5^20, about one in a million), therefore we might reject the purported 50% survival rate. When a student of probability flips 20 heads in a row, they advise you not to be fooled into thinking a fair coin cares how it was flipped last time. When a frequentist flips 20 heads in a row, they tell you the coin isn’t fair.
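That tail probability fits in two lines of stdlib Python (a sketch of the post's arithmetic, nothing more):

```python
# Under H0 "survival chance is 50%", the probability of observing
# 20 successes in 20 independent surgeries:
p_value = 0.5 ** 20
print(p_value)  # about 9.5e-07, far below any usual significance threshold
```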
But that’s as much as the frequentist can say. The odds look better than 50/50, but by how much? Enter Bayesian Statistics. Here we combine the prior probability of an event with new data to compute an updated (posterior) probability. It is not enough to say that the prior odds were 50%; we need to know how many surgeries were counted in that 50% measurement. So if there were 100 surgeries done previously and 50 patients lived and 50 died, then after the new 20 surgeries, we could say 70/120 surgeries were successful, so the new odds of survival are 58.3%. Not great, but definitely better. Or if there were only 10 surgeries measured before and 5 patients lived, the new odds are 25/30 = 83.3%. Much better!
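The pooled figures above are exactly what a Beta-Binomial update produces if each past patient is treated as one pseudo-count. The post doesn't frame it this way explicitly, so take this as one way to formalize its arithmetic:

```python
def posterior_survival(prior_alive, prior_dead, new_alive, new_dead):
    """Posterior mean of Beta(prior_alive + new_alive, prior_dead + new_dead)."""
    alive = prior_alive + new_alive
    dead = prior_dead + new_dead
    return alive / (alive + dead)

# 100 prior surgeries (50 lived, 50 died), plus 20 new successes: 70/120
print(posterior_survival(50, 50, 20, 0))  # ~0.583
# Only 10 prior surgeries (5 lived), plus 20 new successes: 25/30
print(posterior_survival(5, 5, 20, 0))    # ~0.833
```

The smaller the prior sample, the more the 20 new successes move the estimate, which is the post's point.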
All of this still neglects to account for a variety of factors that don’t fit neatly into statistical formulae (at least with the given info). Maybe there’s been a recent innovation in this procedure. Maybe this doctor is much better than average. Maybe the last 20 patients were super humans with much better odds of surviving this surgery than you would have (hope it’s not that one).
Sometimes problems that look simple on the surface will surprise you with their depth. Sometimes things that appear objective are in fact wildly subject to personal interpretation. Sometimes you join us for Wednesday Night Cuttle tonight at 8:30pm EST and find the odds are ever in your favor.
0 notes
Text
Mastering Statistical Analysis in Data Science
In the realm of data science, statistical analysis is the compass that guides the extraction of meaningful insights from the vast sea of data. As organizations increasingly rely on data-driven decision-making, mastering statistical analysis becomes paramount for aspiring and seasoned data scientists alike. In this article, we will unravel the significance of statistical analysis in data science and explore how enrolling in a data science course can pave the way for mastering this crucial skill.
1. The Foundation of Informed Decision-Making:
Statistical analysis serves as the bedrock for making informed decisions in data science. It involves the application of mathematical models and techniques to analyze patterns, trends, and relationships within data. Whether it's predicting future outcomes, identifying correlations, or testing hypotheses, statistical analysis empowers data scientists to derive actionable insights that drive business strategies.
A comprehensive data science course is designed to provide a solid foundation in statistical concepts. From probability theory to hypothesis testing, these courses ensure that individuals acquire the necessary skills to conduct robust statistical analyses.
2. Descriptive Statistics:
Descriptive statistics forms the starting point for any statistical analysis. It involves summarizing and presenting data in a meaningful way, providing a snapshot of key characteristics. Data scientists use measures such as mean, median, and standard deviation to describe the central tendency and variability of data.
In a data science training program, individuals learn not only the theoretical underpinnings of descriptive statistics but also gain hands-on experience in applying these concepts to real-world datasets. This practical exposure is essential for mastering the art of descriptive statistics.
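As a minimal illustration of the measures named above, using Python's standard library and made-up numbers:

```python
import statistics

data = [12, 15, 14, 10, 18, 55, 13]  # note the single outlier (55)

print(statistics.mean(data))    # ~19.57: the mean is pulled up by the outlier
print(statistics.median(data))  # 14: the median resists it
print(statistics.stdev(data))   # sample standard deviation (variability)
```

The gap between mean and median here is itself a quick diagnostic for skewed data.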
3. Inferential Statistics:
While descriptive statistics summarize data, inferential statistics draw conclusions and make predictions about a population based on a sample of data. Techniques such as regression analysis, analysis of variance (ANOVA), and chi-square tests fall under the umbrella of inferential statistics. These methods enable data scientists to extrapolate findings from a limited dataset to make broader inferences.
A robust data science course delves into the intricacies of inferential statistics, equipping individuals with the knowledge and skills to draw meaningful conclusions from data samples and make informed predictions.
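A chi-square goodness-of-fit test, one of the techniques named above, can be computed by hand in a few lines. The counts are invented; the critical value 11.07 is the standard chi-square table entry for 5 degrees of freedom at the 5% level:

```python
# Chi-square goodness-of-fit: are these 60 die rolls consistent
# with a fair die?
observed = [8, 9, 19, 6, 8, 10]
expected = [10] * 6

chi2 = sum((o - e) ** 2 / e for o, e in zip(observed, expected))
print(chi2)  # 10.6: below the 5% critical value of 11.07 for 5 df,
             # so we fail to reject fairness (despite that suspicious 19)
```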
4. Bayesian Statistics:
Bayesian statistics is gaining prominence in data science for its ability to incorporate prior knowledge and update beliefs based on new evidence. This approach is particularly valuable when dealing with uncertainty and making decisions in complex, dynamic environments.
In a data science training program, individuals have the opportunity to explore Bayesian statistics and understand how it complements traditional frequentist approaches. This broadens their toolkit and enhances their ability to tackle a diverse range of analytical challenges.
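The "update beliefs based on new evidence" idea reduces to one application of Bayes' rule. A sketch with hypothetical numbers:

```python
# One Bayesian update on a discrete hypothesis: how much should
# observing a single head raise our belief that a coin is biased?
prior_biased = 0.1            # prior: 10% chance the coin is biased
p_head_if_biased = 0.8
p_head_if_fair = 0.5

# Total probability of the evidence, then Bayes' rule.
evidence = (prior_biased * p_head_if_biased
            + (1 - prior_biased) * p_head_if_fair)
posterior_biased = prior_biased * p_head_if_biased / evidence
print(posterior_biased)  # ~0.151, up from the prior of 0.10
```

Feeding the posterior back in as the next prior is what lets this approach accumulate evidence over a stream of observations.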
5. Machine Learning and Statistical Analysis:
In the age of machine learning, statistical analysis plays a pivotal role in the development and evaluation of models. Statistical techniques underpin the algorithms that power machine learning models, and data scientists leverage statistical metrics to assess model performance.
A well-structured data science course not only covers the fundamentals of statistical analysis but also integrates these concepts into the context of machine learning. This holistic approach enables individuals to bridge the gap between traditional statistical methods and cutting-edge machine learning techniques.
6. Real-world Applications:
Mastering statistical analysis in data science goes beyond theoretical knowledge; it requires the ability to apply statistical concepts to real-world problems. A data science course with a strong practical component provides individuals with hands-on experience in using statistical tools and techniques to solve industry-specific challenges.
These practical applications are instrumental in developing the problem-solving skills necessary for a successful career in data science. By working on real-world projects, individuals can translate theoretical knowledge into actionable insights.
7. Continuous Improvement and Lifelong Learning:
The field of data science is dynamic, with new techniques and methodologies continually emerging. Mastering statistical analysis is an ongoing journey that requires a commitment to continuous improvement and lifelong learning. Enrolling in a data science course not only provides a solid foundation but also instills a mindset of continuous development.
Data science training programs often incorporate modules on advanced statistical techniques and emerging trends, ensuring that individuals stay current with the evolving landscape of statistical analysis in data science.
In conclusion, mastering statistical analysis is a cornerstone of success in the field of data science. Enrolling in a data science course provides individuals with the structured learning path and practical experience needed to navigate the intricacies of statistical analysis. As data continues to be the driving force behind decision-making, the ability to unravel the data tapestry through statistical analysis will remain a key differentiator for data scientists in the ever-evolving landscape of data science.
0 notes