The interactionist view of reality suggests a completely different picture of communication: communication means to overlap. Because we are no longer inside our bodies, we are no longer submarines, no longer tanks sending encrypted messages that have to be decoded; we are two bodies, different, but at the same time in the same world. We find our connection not in the privacy of our mental inner world… We succeed in communicating when our worlds handshake, when your world, for a moment, is made of the same things my world is made of. That's why we so often find that to achieve true communication we have to live together, eat together, walk together, dance together, do things together, because that is the way to make our worlds, which are no longer inner worlds, overlap. In fact, some time ago I was asked to give a talk about communication and I started with two slides. In one slide there was the orthodox view: two people staring at each other and sending messages. In the second slide, two people were looking in the same direction (not at each other); they were watching the same thing. And that, in my view, is true communication: to perceive the same world and therefore to be made of the same stuff, which is different from having the same meaning in two private inner worlds.
On participatory realism
This does not mean, of course, that we live in an imagined world where anything we wish can become reality. Humans do not have an exclusive, nor even a privileged, claim to the status of physical observers. It is not likely that whatever processes drive the emergence of spacetime or sexual reproduction will simply go away if we ask them firmly, nor can we even imagine a reality where that would be the case. It is still the case, however, that our minds and bodies have the capability to project their structure onto observable physical reality, as they do in the classic quantum experiment or in our attempts to build a niche that fits our biological needs and extends our cognitive abilities. It is still the case, if this view is in essence correct, that the question of what there is in the world cannot be separated entirely from the question of how we come to know about it.
This admittedly implies a huge shift in our understanding of both physical reality and the status of scientific knowledge. However, it in no way contradicts the reality of the world or the validity of scientific knowledge. Participatory realism recognizes that there is a world out there; it simply insists that we participate in building it. Because the process of inference it describes is still constrained by the structure and states of the target system, it can still account for scientific knowledge as a valid representation of causal structure in the natural world. The only reason it seems so groundbreaking is that it clashes with the intuition that our knowledge is an objective, true representation of the world as it is, detached from the interactions we have with it. But as we have seen here, we have no reason to believe such a representation could even exist, let alone be cognitively accessible to a species of ultrasocial apes.
Everything Is Computation
These days bring a tremendous amount of significant scientific news, and it is hard to say which development has the highest significance. Climate models indicate that we are past crucial tipping points and are irrevocably headed for a new, difficult age for our civilization. Mark Van Raamsdonk expands on the work of Brian Swingle and Juan Maldacena, and demonstrates how we can abolish the idea of spacetime in favor of a discrete tensor network, thus opening the way for a unified theory of physics. Bruce Conklin, George Church and others have given us CRISPR, a technology that holds the promise of simple and ubiquitous gene editing. Deep learning is starting to tell us how hierarchies of interconnected feature detectors can autonomously form a model of the world, learn to solve problems, and recognize speech, images and video.
It is perhaps equally important to notice where we lack progress: sociology fails to teach us how societies work, philosophy seems to have become barren and infertile, the economic sciences seem ill-equipped to inform our economic and fiscal policies, psychology does not comprehend the logic of our psyche, and neuroscience tells us where things happen in the brain, but largely not what they are.
In my view, the 20th century’s most important addition to understanding the world is not positivist science, computer technology, spaceflight, or the foundational theories of physics. It is the notion of computation. Computation, at its core, and as informally described as possible, is very simple: every observation yields a set of discernible differences.
These, we call information. If the observation corresponds to a system that can change its state, we can describe these state changes. If we identify regularity in these state changes, we are looking at a computational system. If the regularity is completely described, we call this system an algorithm. Once a system can perform conditional state transitions and revisit earlier states, it becomes almost impossible to stop it from performing arbitrary computation. In the infinite case, that is, if we allow it to make an unbounded number of state transitions and use unbounded storage for the states, it becomes a Turing Machine, or a Lambda Calculus, or a Post machine, or one of the many other, mutually equivalent formalisms that capture universal computation.
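To make the notion concrete, here is a minimal sketch of such a computational system: a transition rule applied repeatedly to a state. This is my own illustration, not the essay's; the `run` helper and the choice of the Collatz map as the example rule are assumptions for demonstration.

```python
# A minimal computational system: states plus a transition rule.
# Illustrative sketch only; the names and the example rule are invented.

def run(transition, state, steps):
    """Apply a transition rule repeatedly, yielding each visited state."""
    for _ in range(steps):
        yield state
        state = transition(state)

# A conditional state transition: the Collatz map branches on the
# current state, the kind of regularity described above.
def collatz(n):
    return n // 2 if n % 2 == 0 else 3 * n + 1

print(list(run(collatz, 27, 12)))  # a trajectory through state space
```

Bound the number of steps and states, and this is a finite computation; allow unbounded steps and storage, and rule systems of this kind reach universal computation.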
Computational terms rephrase the idea of "causality," something philosophers have struggled with for centuries: causality is the transition from one state of a computational system to the next. They also replace the concept of "mechanism" in mechanistic, or naturalistic, philosophy. Computationalism is the new mechanism, and unlike its predecessor, it is not fraught with misleading intuitions of moving parts.
Computation is different from mathematics. Mathematics turns out to be the domain of formal languages, and is mostly undecidable, which is just another word for saying uncomputable (since decision making and proving are alternative words for computation, too). All our explorations into mathematics are computational ones, though. To compute means to actually do all the work, to move from one state to the next.
Computation changes our idea of knowledge: instead of treating it as justified true belief, knowledge describes a local minimum in capturing regularities between observables. Knowledge is almost never static, but progresses along a gradient through a state space of possible world views. We will no longer aspire to teach our children the truth, because like us, they will never stop changing their minds. We will teach them how to productively change their minds, how to explore the never-ending land of insight.
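As a toy illustration of knowledge as a local minimum (my own sketch; the observations, the one-parameter model, and the learning rate are all invented), a belief can be driven along the error gradient until it settles into a view that captures the regularity in the observables:

```python
# Hedged sketch: "knowledge" as a local minimum in capturing regularities.
# A single parameter w is nudged down the gradient of squared prediction
# error; the data and hyperparameters are invented for illustration.

observations = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2)]  # (input, observed output)

w = 0.0                      # the current "world view": output ≈ w * input
for _ in range(200):         # never static: keep moving along the gradient
    grad = sum(2 * (w * x - y) * x for x, y in observations)
    w -= 0.01 * grad         # revise the belief in the error-reducing direction

print(round(w, 3))           # settles near 2.0: a local minimum, not "the truth"
```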
A growing number of physicists understand that the universe is not mathematical, but computational, and that physics is in the business of finding an algorithm that can reproduce our observations. The switch away from uncomputable, mathematical notions (such as continuous space) makes progress possible. Climate science, molecular genetics, and AI are computational sciences. Sociology, psychology, and neuroscience are not: they still seem to be confused by the apparent dichotomy between mechanism (rigid, moving parts) and the objects of their study. They are looking for social, behavioral, chemical, or neural regularities, where they should be looking for computational ones.
Everything is computation.
The questions raised by interaction, conceptual revision and hypothesis formation pose numerous challenges to a theory of intelligence. The frame problem exposes a key limitation of deductive systems based on classical logic, namely that laws with open-ended sets of exceptions cannot be adequately captured; and the ‘common sense’ law of inertia, which encapsulates the fact that most actions do not alter most properties of most entities, is just such an open-ended law. For Dennett (1984), the frame problem represents a deep challenge to counterfactual reasoning. Along similar lines, Fodor (2000) considers the frame problem a matter of how we represent modality, which is to say the informational encapsulation of contingency, possibility and necessity. For Brandom (2010: 79), this is transformed into the question of “doxastic updating”, of how an agent updates their beliefs in order to accommodate real-time interaction, and it motivates a position he calls “pragmatic AI”, which untethers itself from formal logic in an attempt to address the issue. The Bayesian riposte is to dispense with symbolic reasoning altogether, embracing an inductive scheme in which prior belief is incrementally updated on the basis of feedback – a continuous back-propagation of error fed into a generative model at every instance. This is precisely the strategy taken by deep learning, which evades the question of modality altogether, offering what Pearl (2018) critiques as a “model blind” approach to inference.
https://aestheticmanagement.com/writing/pointless-topology-figuring-space-in-computation-and-cognition/
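As a minimal sketch of the Bayesian scheme described in the passage above (my own illustration; the two-hypothesis coin setting and all names are invented, and it is far simpler than the generative models deep learning actually uses):

```python
# Incremental Bayesian updating: a prior belief over hypotheses is
# revised after each observation. The coin example is invented.

hypotheses = {"fair": 0.5, "biased": 0.9}   # P(heads | hypothesis)
belief = {"fair": 0.5, "biased": 0.5}       # prior over the hypotheses

def update(belief, observed_heads):
    """One step of doxastic updating: posterior is likelihood times prior."""
    def likelihood(h):
        return hypotheses[h] if observed_heads else 1 - hypotheses[h]
    unnormalized = {h: likelihood(h) * p for h, p in belief.items()}
    total = sum(unnormalized.values())
    return {h: p / total for h, p in unnormalized.items()}

for flip in [True, True, True, False, True]:  # the feedback stream
    belief = update(belief, flip)

print(belief)  # belief shifts toward the hypothesis that explains the data
```

Note how the scheme never represents possibility or necessity explicitly; it only redistributes degrees of belief, which is precisely the evasion of modality the passage describes.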
The effects of interaction, understood as a mode of interference, and its feedback upon mental models of the world are driven by sensitivities to information. By comparison, disciplinary decadence can be described, geometrically, as an absolute fixing of a Euclidean site for certain knowledge types; through a practice of self-bordering, it amounts to a condition of informational desensitization. The degree of transformation of mental models corresponds to the receptivity to re-cognize “signals” or “alerts”, as Ramon Amaro and Murad Khan have written in their outline of an expanded picture of interpellation, beyond its pejorative guise, as that which underwrites the self-transformational opportunity of updating mental models (Amaro/Khan 2020). From a topological perspective, the updating of mental models is akin to recognizing new conditions of situatedness: absent a static or a priori site from which to think, cognitive transformation is equivalent to the construction of other locales for embedding thought.
https://aestheticmanagement.com/writing/pointless-topology-figuring-space-in-computation-and-cognition/
Well, to be quite frank about it, I’m always on the basic-drives side, because I just think that’s the side that’s being neglected […] and it’s the side that I want to get some responses on that are far more satisfactory than any I’m yet receiving. As I said, the recent context that has been organising this […] has been these whole friendly-AI people. And people there just say, ‘Okay, there are these basic AI drives, these Omohundro drives, but of course, that’s not enough.’ You know, it just worries me, at an irresponsible conceptual level, that people are trying to bring in a lot of stuff on what seems to me very flaky ethically […]. They say, like, ‘I’m just not confident that an AI whose ethical substructure is based upon maximising its own intelligence is gonna be something that I’m going to be happy to have around.’ That’s basically the line.
And I do think that that is strictly analogous to the construction of our political [..] arguments as well. It’s a sense from people that we simply cannot trust that some entity, some institution, that is trying to optimise its own performance is something that is doing what we want it to do.
Well, that is something I would say, but honestly, I think we could pause on that claim and just take a step back and say that this question [… is] about immanent and transcendent impulses. And the question is, how much can you do with immanent impulses? It seems to me a very crucial discussion in all of these domains, between people who say you’re just not gonna get far enough with immanent impulses, you need some kind of transcendent claim – you need to have corporate social responsibility, you need to have friendly AI, you need to have some extrinsic structure of moral guidance on these self-perpetuating, self-augmenting processes – and, on the other side, a constituency that I think is quite small that is saying, well, how far can we actually get by just building things up out of these impulses that are completely intrinsic to self-augmenting processes, that come out of the most basic type of vibrant, cybernetic arrangement? That will give us a whole lot of stuff, will give us impulses […] we might not be happy with [what we end up with], but it’s certainly not the case that you have some wicked fact-value distinction that says values have to be ported in from outside.
You’re gonna get values coming from the process just because it is a self-augmenting, self-cultivating, self-perpetuating process; it must have a set of consistent parameters that, on the AI side, we can now call basic AI drives or ‘Omohundro drives’. So that’s the fundamental topic that I’m willing to put out here now.
https://twitter.com/wkqrlxfrwtku/status/1665803669814784000