#computer chip
littleguymart · 11 months
Photo
Tumblr media
(source)
245 notes · View notes
sewercentipede · 1 year
Text
Tumblr media
yuri.nosho
66 notes · View notes
Text
Tumblr media Tumblr media Tumblr media Tumblr media Tumblr media Tumblr media Tumblr media Tumblr media Tumblr media Tumblr media
A stimboard for our William Afton/Springtrap (Five Nights at Freddy's) fictive! With themes of horror robotics, rabbits, and things that remind him of his daughter!
~💖 Bebe
X | X | X
X | X | X
X | X | X
32 notes · View notes
krjpalmer · 1 year
Photo
Tumblr media
BYTE July 1998
This issue’s cover story took the “millennium bug” seriously, though it stopped short of joining those online who had egged one another on to insisting that all electrical power would go out at the stroke of midnight, never to be turned back on, and that whatever was actually being done to address that impending existential crisis wasn’t enough. BYTE itself, though, wouldn’t survive to the millennium, or even to the next month. According to Wikipedia, McGraw-Hill sold its publishing arm due to declining advertising sales (BYTE was much thinner this year than it had once been), and the company that bought it immediately shut the long-running magazine down, although its web site apparently endured for another decade, on a subscription basis for most of that time.
25 notes · View notes
onyxzoe · 1 year
Text
Tumblr media Tumblr media Tumblr media Tumblr media Tumblr media Tumblr media Tumblr media Tumblr media
9 notes · View notes
forgottengenius · 4 months
Text
What is the computer chip implanted in the human brain?
Tumblr media
Billionaire businessman Elon Musk has announced that his company Neuralink has implanted a computer chip in a human brain for the first time. Musk made the announcement on his social media platform X, writing that the American neurotechnology company Neuralink “has implanted a computer chip in the brain of its first human, and the person is recovering.” He added that “initial results have been encouraging, with increased neuron activity observed.” Musk said the chip will let people operate mobile phones, computers, and every other electronic device with their thoughts, and that it will initially be implanted in the brains of people who have lost the use of their hands and feet. He did not reveal further details of the experiment, the identity of the first recipient, or the technology involved, though he did say that Neuralink’s first product is called Telepathy.
Tumblr media
Neuralink, the company of Twitter owner Elon Musk, was founded in 2016 with the aim of developing computerized chips that can be implanted in the human brain and body to protect people from disease. In 2020 the company tested one of its chips by implanting it in animals’ brains, and the implanted animals were later presented to the world. The company sought permission from US health authorities for human trials, and in May 2023 Neuralink was granted that permission. In a tweet, Neuralink claimed that the US Food and Drug Administration (FDA) had approved trials of the computerized chip in the human brain.
What is the computer chip implanted in the human brain? The chip Neuralink has built is about the size of a small coin and extremely thin; implanted in the brain of a living being, it can be linked to a smartphone over a wireless connection. According to the report, the chip is meant to help address paralysis, anxiety, depression, severe joint pain, spinal pain, severe brain impairment, blindness, loss of hearing and speech, insomnia, seizures, and other conditions. Like a mobile phone’s SIM card, the chip is built around a system that connects it to a smartphone by signal, so that data can be loaded onto the chip and retrieved from it through the phone. The chip is also supposed to record human thoughts and preserve memories; memories stored on the chip could be replayed at any time on a computer or phone, bringing days gone by onto the screen as data.
Courtesy: Dawn News
0 notes
gardenfractals · 4 months
Text
youtube
0 notes
sacz21 · 5 months
Text
Tumblr media
The billion-plus internet-connected devices — including the computer chips that collect, store, and share data — make up the “Internet of Things” (IoT). IoT simplifies the operation of everything from capsule-sized medical devices to giant military equipment, measuring real-time quantities such as temperature, weather, speed, wind flow, heart rate, and blood flow.
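The collect-save-share loop described above can be sketched in a few lines. This is a minimal, hypothetical example — the device name and field names are illustrative, not any particular IoT platform’s schema:

```python
import json
import time

def make_reading(device_id, sensor, value, unit):
    """Package one sensor measurement the way a small IoT device might,
    before publishing it (e.g. over MQTT or HTTP). All field names here
    are illustrative, not any real platform's schema."""
    return {
        "device_id": device_id,
        "sensor": sensor,
        "value": value,
        "unit": unit,
        "timestamp": int(time.time()),  # when the reading was taken
    }

# A temperature reading, serialized for sharing with a server:
payload = json.dumps(make_reading("thermo-01", "temperature", 21.5, "C"))
```

The same pattern scales from a heart-rate monitor to an anemometer: only the `sensor`, `value`, and `unit` fields change.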
0 notes
gamecode · 2 years
Text
Tumblr media
16K notes · View notes
swagliostro · 17 days
Text
Tumblr media
Sketch page of Chip! (That I colored digitally with a mouse since I still need a new stylus </3)
316 notes · View notes
machinezlove · 3 months
Text
Tumblr media
I would do unspeakable things to her
504 notes · View notes
Text
The disenshittified internet starts with loyal "user agents"
Tumblr media
I'm in TARTU, ESTONIA! Overcoming the Enshittocene (TOMORROW, May 8, 6PM, Prima Vista Literary Festival keynote, University of Tartu Library, Struwe 1). AI, copyright and creative workers' labor rights (May 10, 8AM: Science Fiction Research Association talk, Institute of Foreign Languages and Cultures building, Lossi 3, lobby). A talk for hackers on seizing the means of computation (May 10, 3PM, University of Tartu Delta Centre, Narva 18, room 1037).
Tumblr media
There's one overwhelmingly common mistake that people make about enshittification: assuming that the contagion is the result of the Great Forces of History, or that it is the inevitable end-point of any kind of for-profit online world.
In other words, they class enshittification as an ideological phenomenon, rather than as a material phenomenon. Corporate leaders have always felt the impulse to enshittify their offerings, shifting value from end users, business customers and their own workers to their shareholders. The decades of largely enshittification-free online services were not the product of corporate leaders with better ideas or purer hearts. Those years were the result of constraints on the mediocre sociopaths who would trade our wellbeing and happiness for their own, constraints that forced them to act better than they do today, even if they were not any better:
https://pluralistic.net/2024/04/24/naming-names/#prabhakar-raghavan
Corporate leaders' moments of good leadership didn't come from morals, they came from fear. Fear that a competitor would take away a disgruntled customer or worker. Fear that a regulator would punish the company so severely that all gains from cheating would be wiped out. Fear that a rival technology – alternative clients, tracker blockers, third-party mods and plugins – would emerge that permanently severed the company's relationship with their customers. Fears that key workers in their impossible-to-replace workforce would leave for a job somewhere else rather than participate in the enshittification of the services they worked so hard to build:
https://pluralistic.net/2024/04/22/kargo-kult-kaptialism/#dont-buy-it
When those constraints melted away – thanks to decades of official tolerance for monopolies, which led to regulatory capture and victory over the tech workforce – the same mediocre sociopaths found themselves able to pursue their most enshittificatory impulses without fear.
The effects of this are all around us. In This Is Your Phone On Feminism, the great Maria Farrell describes how audiences at her lectures profess both love for their smartphones and mistrust for them. Farrell says, "We love our phones, but we do not trust them. And love without trust is the definition of an abusive relationship":
https://conversationalist.org/2019/09/13/feminism-explains-our-toxic-relationships-with-our-smartphones/
I (re)discovered this Farrell quote in a paper by Robin Berjon, who recently co-authored a magnificent paper with Farrell entitled "We Need to Rewild the Internet":
https://www.noemamag.com/we-need-to-rewild-the-internet/
The new Berjon paper is narrower in scope, but still packed with material examples of the way the internet goes wrong and how it can be put right. It's called "The Fiduciary Duties of User Agents":
https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3827421
In "Fiduciary Duties," Berjon focuses on the technical term "user agent," which is how web browsers are described in formal standards documents. This notion of a "user agent" is a holdover from a more civilized age, when technologists tried to figure out how to build a new digital space where technology served users.
A web browser that's a "user agent" is a comforting thought. An agent's job is to serve you and your interests. When you tell it to fetch a web-page, your agent should figure out how to get that page, make sense of the code that's embedded in it, and render the page in a way that represents its best guess of how you'd like the page seen.
For example, the user agent might judge that you'd like it to block ads. More than half of all web users have installed ad-blockers, constituting the largest consumer boycott in human history:
https://doc.searls.com/2023/11/11/how-is-the-worlds-biggest-boycott-doing/
Your user agent might judge that the colors on the page are outside your visual range. Maybe you're colorblind, in which case, the user agent could shift the gamut of the colors away from the colors chosen by the page's creator and into a set that suits you better:
https://dankaminsky.com/dankam/
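A user agent making that judgment might apply a simple channel remap before rendering. This is a toy sketch of the idea, not DanKam's actual algorithm (which works in hue space); the 0.3/0.7 blend weights are arbitrary assumptions:

```python
def shift_for_protanopia(rgb):
    """Naive compensation for red-weak vision: blend part of the red
    signal into the blue channel, so red/green contrasts the viewer
    can't see survive as red/blue contrasts they can. rgb is an
    (r, g, b) tuple of 0-255 ints."""
    r, g, b = rgb
    new_b = min(255, int(0.7 * b + 0.3 * r))
    return (r, g, new_b)

# Pure red and pure green, hard for a protanope to tell apart,
# now differ in the blue channel as well:
print(shift_for_protanopia((255, 0, 0)))  # (255, 0, 76)
print(shift_for_protanopia((0, 255, 0)))  # (0, 255, 0)
```

A real agent would run a transform like this over every pixel (or every CSS color) on the page, on the user's instruction, regardless of what the page's author intended.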
Or maybe you (like me) have a low-vision disability that makes low-contrast type difficult to impossible to read, and maybe the page's creator is a thoughtless dolt who's chosen light grey-on-white type, or maybe they've fallen prey to the absurd urban legend that not-quite-black type is somehow more legible than actual black type:
https://uxplanet.org/basicdesign-never-use-pure-black-in-typography-36138a3327a6
The user agent is loyal to you. Even when you want something the page's creator didn't consider – even when you want something the page's creator violently objects to – your user agent acts on your behalf and delivers your desires, as best as it can.
Now – as Berjon points out – you might not know exactly what you want. Like, you know that you want the privacy guarantees of TLS (the difference between "http" and "https") but not really understand the internal cryptographic mysteries involved. Your user agent might detect evidence of shenanigans indicating that your session isn't secure, and choose not to show you the web-page you requested.
This is only superficially paradoxical. Yes, you asked your browser for a web-page. Yes, the browser defied your request and declined to show you that page. But you also asked your browser to protect you from security defects, and your browser made a judgment call and decided that security trumped delivery of the page. No paradox needed.
But of course, the person who designed your user agent/browser can't anticipate all the ways this contradiction might arise. Like, maybe you're trying to access your own website, and you know that the security problem the browser has detected is the result of your own forgetful failure to renew your site's cryptographic certificate. At that point, you can tell your browser, "Thanks for having my back, pal, but actually this time it's fine. Stand down and show me that webpage."
That's your user agent serving you, too.
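In code, that judgment call and its override might look like this minimal sketch using Python's standard-library `ssl` module. The `user_override` flag is hypothetical — it stands in for the "stand down and show me the page" button, not any real browser's API:

```python
import ssl

def make_tls_context(user_override=False):
    """Default posture: strict verification -- the agent refuses pages
    whose certificates don't check out. With user_override=True, the
    user has told the agent to stand down for this one connection."""
    ctx = ssl.create_default_context()   # CERT_REQUIRED, hostname checks on
    if user_override:
        ctx.check_hostname = False       # must be disabled first...
        ctx.verify_mode = ssl.CERT_NONE  # ...before verification is dropped
    return ctx
```

A real browser would scope the override to a single site and a single session, so one forgotten certificate renewal doesn't quietly disable protection everywhere else.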
User agents can be well-designed or they can be poorly made. The fact that a user agent is designed to act in accord with your desires doesn't mean that it always will. A software agent, like a human agent, is not infallible.
However – and this is the key – if a user agent thwarts your desire due to a fault, that is fundamentally different from a user agent that thwarts your desires because it is designed to serve the interests of someone else, even when that is detrimental to your own interests.
A "faithless" user agent is utterly different from a "clumsy" user agent, and faithless user agents have become the norm. Indeed, as crude early internet clients progressed in sophistication, they grew increasingly treacherous. Most non-browser tools are designed for treachery.
A smart speaker or voice assistant routes all your requests through its manufacturer's servers and uses this to build a nonconsensual surveillance dossier on you. Smart speakers and voice assistants even secretly record your speech and route it to the manufacturer's subcontractors, whether or not you're explicitly interacting with them:
https://www.sciencealert.com/creepy-new-amazon-patent-would-mean-alexa-records-everything-you-say-from-now-on
By design, apps and in-app browsers seek to thwart your preferences regarding surveillance and tracking. An app will even try to figure out if you're using a VPN to obscure your location from its maker, and snitch you out with its guess about your true location.
Mobile phones assign persistent tracking IDs to their owners and transmit them without permission (to its credit, Apple recently switched to an opt-in system for transmitting these IDs) (but to its detriment, Apple offers no opt-out from its own tracking, and actively lies about the very existence of this tracking):
https://pluralistic.net/2022/11/14/luxury-surveillance/#liar-liar
An Android device running Chrome and sitting inert, with no user interaction, transmits location data to Google every five minutes. This is the "resting heartbeat" of surveillance for an Android device. Ask that device to do any work for you and its pulse quickens, until it is emitting a nearly continuous stream of information about your activities to Google:
https://digitalcontentnext.org/blog/2018/08/21/google-data-collection-research/
These faithless user agents both reflect and enable enshittification. The locked-down nature of the hardware and operating systems for Android and iOS devices means that manufacturers – and their business partners – have an arsenal of legal weapons they can use to block anyone who gives you a tool to modify the device's behavior. These weapons are generically referred to as "IP rights" which are, broadly speaking, the right to control the conduct of a company's critics, customers and competitors:
https://locusmag.com/2020/09/cory-doctorow-ip/
A canny tech company can design their products so that any modification that puts the user's interests above its shareholders' is illegal, a violation of its copyright, patent, trademark, trade secrets, contracts, terms of service, nondisclosure, noncompete, most favored nation, or anticircumvention rights. Wrap your product in the right mix of IP, and its faithless betrayals acquire the force of law.
This is – in Jay Freeman's memorable phrase – "felony contempt of business model." While more than half of all web users have installed an ad-blocker, thus overriding the manufacturer's defaults to make their browser a more loyal agent, no app users have modified their apps with ad-blockers.
The first step of making such a blocker, reverse-engineering the app, creates criminal liability under Section 1201 of the Digital Millennium Copyright Act, with a maximum penalty of five years in prison and a $500,000 fine. An app is just a web-page skinned in sufficient IP to make it a felony to add an ad-blocker to it (no wonder every company wants to coerce you into using its app, rather than its website).
If you know that increasing the invasiveness of the ads on your web-page could trigger mass installations of ad-blockers by your users, it becomes irrational and self-defeating to ramp up your ads' invasiveness. The possibility of interoperability acts as a constraint on tech bosses' impulse to enshittify their products.
The shift to platforms dominated by treacherous user agents – apps, mobile ecosystems, walled gardens – weakens or removes that constraint. As your ability to discipline your agent so that it serves you wanes, the temptation to turn your user agent against you grows, and enshittification follows.
This has been tacitly understood by technologists since the web's earliest days and has been reaffirmed even as enshittification increased. Berjon quotes extensively from "The Internet Is For End-Users," AKA Internet Architecture Board RFC 8890:
Defining the user agent role in standards also creates a virtuous cycle; it allows multiple implementations, allowing end users to switch between them with relatively low costs (…). This creates an incentive for implementers to consider the users' needs carefully, which are often reflected into the defining standards. The resulting ecosystem has many remaining problems, but a distinguished user agent role provides an opportunity to improve it.
And the W3C's Technical Architecture Group echoes these sentiments in "Web Platform Design Principles," which articulates a "Priority of Constituencies" that is supposed to be central to the W3C's mission:
User needs come before the needs of web page authors, which come before the needs of user agent implementors, which come before the needs of specification writers, which come before theoretical purity.
https://w3ctag.github.io/design-principles/
But the W3C's commitment to faithful agents is contingent on its own members' commitment to these principles. In 2017, the W3C finalized "EME," a standard for blocking mods that interact with streaming videos. Nominally aimed at preventing copyright infringement, EME also prevents users from choosing to add accessibility add-ons beyond the ones the streaming service permits. These services may support closed captioning and additional narration of visual elements, but they block tools that adapt video for color-blind users or prevent strobe effects that trigger seizures in users with photosensitive epilepsy.
The fight over EME was the most contentious struggle in the W3C's history, in which the organization's leadership had to decide whether to honor the "priority of constituencies" and make a standard that allowed users to override manufacturers, or whether to facilitate the creation of faithless agents specifically designed to thwart users' desires on behalf of manufacturers:
https://www.eff.org/deeplinks/2017/09/open-letter-w3c-director-ceo-team-and-membership
This fight was settled in favor of a handful of extremely large and powerful companies, over the objections of a broad collection of smaller firms, nonprofits representing users, academics and other parties agitating for a web built on faithful agents. This coincided with the W3C's operating budget becoming entirely dependent on the very large sums its largest corporate members paid.
W3C membership is on a sliding scale, based on a member's size. Nominally, the W3C is a one-member, one-vote organization, but when a highly concentrated collection of very high-value members flex their muscles, W3C leadership seemingly perceived an existential risk to the organization, and opted to sacrifice the faithfulness of user agents in service to the anti-user priorities of its largest members.
For W3C's largest corporate members, the fight was absolutely worth it. The W3C's EME standard transformed the web, making it impossible to ship a fully featured web-browser without securing permission – and a paid license – from one of the cartel of companies that dominate the internet. In effect, Big Tech used the W3C to secure the right to decide who would compete with them in future, and how:
https://blog.samuelmaddock.com/posts/the-end-of-indie-web-browsers/
Enshittification arises when the everyday mediocre sociopaths who run tech companies are freed from the constraints that act against them. When the web – and its browsers – were a big, contented, diverse, competitive space, it was harder for tech companies to collude to capture standards bodies like the W3C to secure even more dominance. As the web turned into Tom Eastman's "five giant websites filled with screenshots of text from the other four," that kind of collusion became much easier:
https://pluralistic.net/2023/04/18/cursed-are-the-sausagemakers/#how-the-parties-get-to-yes
In arguing for faithful agents, Berjon associates himself with the group of scholars, regulators and activists who call for user agents to serve as "information fiduciaries." Mostly, information fiduciaries come up in the context of user privacy, with the idea that entities that hold a user's data would have the obligation to put the user's interests ahead of their own. Think of a lawyer's fiduciary duty in respect of their clients, to give advice that reflects the client's best interests, even when that conflicts with the lawyer's own self-interest. For example, a lawyer who believes that settling a case is the best course of action for a client is required to tell them so, even if keeping the case going would generate more billings for the lawyer and their firm.
For a user agent to be faithful, it must be your fiduciary. It must put your interests ahead of the interests of the entity that made it or operates it. Browsers, email clients, and other internet software that served as a fiduciary would do things like automatically blocking tracking (which most email clients don't do, especially webmail clients made by companies like Google, who also sell advertising and tracking).
Berjon contemplates a legally mandated fiduciary duty, citing Lindsey Barrett's "Confiding in Con Men":
https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3354129
He describes a fiduciary duty as a remedy for the enforcement failures of the EU's GDPR, a solidly written, and dismally enforced, privacy law. A legally backstopped duty for agents to be fiduciaries would also help us distinguish good and bad forms of "innovation" – innovation in ways of thwarting a user's will is always bad.
Now, the tech giants insist that they are already fiduciaries, and that when they thwart a user's request, that's more like blocking access to a page where the encryption has been compromised than like HAL9000's "I can't let you do that, Dave." For example, when Louis Barclay created "Unfollow Everything," he (and his enthusiastic users) found that automating the process of unfollowing every account on Facebook made their use of the service significantly better:
https://slate.com/technology/2021/10/facebook-unfollow-everything-cease-desist.html
When Facebook shut the service down with blood-curdling legal threats, they insisted that they were simply protecting users from themselves. Sure, this browser automation tool – which just automatically clicked links on Facebook's own settings pages – seemed to do what the users wanted. But what if the user interface changed? What if so many users added this feature to Facebook without Facebook's permission that they overwhelmed Facebook's (presumably tiny and fragile) servers and crashed the system?
These arguments have lately resurfaced with Ethan Zuckerman and Knight First Amendment Institute's lawsuit to clarify that "Unfollow Everything 2.0" is legal and doesn't violate any of those "felony contempt of business model" laws:
https://pluralistic.net/2024/05/02/kaiju-v-kaiju/
Sure, Zuckerman seems like a good guy, but what if he makes a mistake and his automation tool does something you don't want? You, the Facebook user, are also a nice guy, but let's face it, you're also a naive dolt and you can't be trusted to make decisions for yourself. Those decisions can only be made by Facebook, whom we can rely upon to exercise its authority wisely.
Other versions of this argument surfaced in the debate over the EU's decision to mandate interoperability for end-to-end encrypted (E2EE) messaging through the Digital Markets Act (DMA), which would let you switch from, say, Whatsapp to Signal and still send messages to your Whatsapp contacts.
There are some good arguments that this could go horribly awry. If it is rushed, or internally sabotaged by the EU's state security services who loathe the privacy that comes from encrypted messaging, it could expose billions of people to serious risks.
But that's not the only argument that DMA opponents made: they also argued that even if interoperable messaging worked perfectly and had no security breaches, it would still be bad for users, because this would make it impossible for tech giants like Meta, Google and Apple to spy on message traffic (if not its content) and identify likely coordinated harassment campaigns. This is literally the identical argument the NSA made in support of its "metadata" mass-surveillance program: "Reading your messages might violate your privacy, but watching your messages doesn't."
This is obvious nonsense, so its proponents need an equally obviously intellectually dishonest way to defend it. When called on the absurdity of "protecting" users by spying on them against their will, they simply shake their heads and say, "You just can't understand the burdens of running a service with hundreds of millions or billions of users, and if I even tried to explain these issues to you, I would divulge secrets that I'm legally and ethically bound to keep. And even if I could tell you, you wouldn't understand, because anyone who doesn't work for a Big Tech company is a naive dolt who can't be trusted to understand how the world works (much like our users)."
Not coincidentally, this is also literally the same argument the NSA makes in support of mass surveillance, and there's a very useful name for it: scalesplaining.
Now, it's totally true that every one of us is capable of lapses in judgment that put us, and the people connected to us, at risk (my own parents gave their genome to the pseudoscience genetic surveillance company 23andme, which means they have my genome, too). A true information fiduciary shouldn't automatically deliver everything the user asks for. When the agent perceives that the user is about to put themselves in harm's way, it should throw up a roadblock and explain the risks to the user.
But the system should also let the user override it.
This is a contentious statement in information security circles. Users can be "socially engineered" (tricked), and even the most sophisticated users are vulnerable to this:
https://pluralistic.net/2024/02/05/cyber-dunning-kruger/#swiss-cheese-security
The only way to be certain a user won't be tricked into taking a course of action is to forbid that course of action under any circumstances. If there is any means by which a user can flip the "are you very sure?" circuit-breaker back on, then the user can be tricked into using that means.
This is absolutely true. As you read these words, all over the world, vulnerable people are being tricked into speaking the very specific set of directives that cause a suspicious bank-teller to authorize a transfer or cash withdrawal that will result in their life's savings being stolen by a scammer:
https://www.thecut.com/article/amazon-scam-call-ftc-arrest-warrants.html
We keep making it harder for bank customers to make large transfers, but so long as it is possible to make such a transfer, the scammers have the means, motive and opportunity to discover how the process works, and they will go on to trick their victims into invoking that process.
Beyond a certain point, making it harder for bank depositors to harm themselves creates a world in which people who aren't being scammed find it nearly impossible to draw out a lot of cash for an emergency and where scam artists know exactly how to manage the trick. After all, non-scammers only rarely experience emergencies and thus have no opportunity to become practiced in navigating all the anti-fraud checks, while the fraudster gets to run through them several times per day, until they know them even better than the bank staff do.
This is broadly true of any system intended to control users at scale – beyond a certain point, additional security measures are trivially surmounted hurdles for dedicated bad actors and nearly insurmountable hurdles for their victims:
https://pluralistic.net/2022/08/07/como-is-infosec/
At this point, we've had a couple of decades' worth of experience with technological "walled gardens" in which corporate executives get to override their users' decisions about how the system should work, even when that means reaching into the users' own computer and compelling it to thwart the user's desire. The record is inarguable: while companies often use those walls to lock bad guys out of the system, they also use the walls to lock their users in, so that they'll be easy pickings for the tech company that owns the system:
https://pluralistic.net/2023/02/05/battery-vampire/#drained
This is neatly predicted by enshittification's theory of constraints: when a company can override your choices, it will be irresistibly tempted to do so for its own benefit, and to your detriment.
What's more, the mere possibility that you can override the way the system works acts as a disciplining force on corporate executives, forcing them to reckon with your priorities even when these are counter to their shareholders' interests. If Facebook is genuinely worried that an "Unfollow Everything" script will break its servers, it can solve that by giving users an unfollow everything button of its own design. But so long as Facebook can sue anyone who makes an "Unfollow Everything" tool, they have no reason to give their users such a button, because it would give them more control over their Facebook experience, including the controls needed to use Facebook less.
It's been more than 20 years since Seth Schoen and I got a demo of Microsoft's first "trusted computing" system, with its "remote attestations," which would let remote servers demand and receive accurate information about what kind of computer you were using and what software was running on it.
This could be beneficial to the user – you could send a "remote attestation" to a third party you trusted and ask, "Hey, do you think my computer is infected with malicious software?" Since the trusted computing system produced its report on your computer using a sealed, separate processor that the user couldn't directly interact with, any malicious code you were infected with would not be able to forge this attestation.
But this remote attestation feature could also be used to allow Microsoft to block you from opening a Word document with LibreOffice, Apple Pages, or Google Docs, or it could be used to allow a website to refuse to send you pages if you were running an ad-blocker. In other words, it could transform your information fiduciary into a faithless agent.
Seth proposed an answer to this: "owner override," a hardware switch that would allow you to force your computer to lie on your behalf, when that was beneficial to you, for example, by insisting that you were using Microsoft Word to open a document when you were really using Apple Pages:
https://web.archive.org/web/20021004125515/http://vitanuova.loyalty.org/2002-07-05.html
Seth wasn't naive. He knew that such a system could be exploited by scammers and used to harm users. But Seth calculated – correctly! – that the risks of having a key to let yourself out of the walled garden were less than being stuck in a walled garden where some corporate executive got to decide whether and when you could leave.
Tech executives never stopped questing after a way to turn your user agent from a fiduciary into a traitor. Last year, Google toyed with the idea of adding remote attestation to web browsers, which would let services refuse to interact with you if they thought you were using an ad blocker:
https://pluralistic.net/2023/08/02/self-incrimination/#wei-bai-bai
The reasoning for this was incredible: by adding remote attestation to browsers, they'd be creating "feature parity" with apps – that is, they'd be making it as practical for your browser to betray you as it is for your apps to do so (note that this is the same justification that the W3C gave for creating EME, the treacherous user agent in your browser – "streaming services won't allow you to access movies with your browser unless your browser is as enshittifiable and authoritarian as an app").
Technologists who work for giant tech companies can come up with endless scalesplaining explanations for why their bosses, and not you, should decide how your computer works. They're wrong. Your computer should do what you tell it to do:
https://www.eff.org/deeplinks/2023/08/your-computer-should-say-what-you-tell-it-say-1
These people can kid themselves that they're only taking away your power and handing it to their boss because they have your best interests at heart. As Upton Sinclair told us, it's impossible to get someone to understand something when their paycheck depends on them not understanding it.
The only way to get a tech boss to consistently treat you well is to ensure that if they stop, you can quit. Anything less is a one-way ticket to enshittification.
If you'd like an essay-formatted version of this post to read or share, here's a link to it on pluralistic.net, my surveillance-free, ad-free, tracker-free blog:
https://pluralistic.net/2024/05/07/treacherous-computing/#rewilding-the-internet
Image: Cryteria (modified) https://commons.wikimedia.org/wiki/File:HAL9000.svg
CC BY 3.0 https://creativecommons.org/licenses/by/3.0/deed.en
340 notes · View notes
krjpalmer · 1 year
Photo
BYTE December 1997
This issue isn’t available on the Internet Archive. It did go to some lengths to talk up the amazing new architecture of a chip being developed by Intel and Hewlett-Packard. The magazine was a little less enthusiastic about “network computers.”
4 notes · View notes
redactedcrowart · 7 months
Text
regret (maybe you shouldn't have fucking panini pressed your mancrush, dipshit)
381 notes · View notes
forgottengenius · 9 months
Text
Computer chip cleared for testing in the human brain
The American neurotechnology company Neuralink has claimed that the US health authorities have granted it permission to test a computer chip in the human brain. Neuralink, founded in 2016, belongs to Twitter owner Elon Musk; the company's aim is to develop computerized chips that can be implanted in the human brain and body to protect humankind from disease. In 2020 the company tested its chip by implanting it in animals' brains, and the chip-implanted animals were then presented to the world. The company had sought permission from the US health authorities to test the chip on humans, and that permission has now been granted. In a tweet, Neuralink claimed that the US Food and Drug Administration (FDA) has approved testing of its computerized chip in the human brain.
On the same subject, the news agency Associated Press (AP) reported that although the health authority shared no further details, it confirmed that Elon Musk's company has been granted permission to test the computerized chip. According to Neuralink, the trial program for implanting the chip in human brains will be carried out in cooperation with the US health authorities, and further details will be released soon. The company says recruitment for the trial program has not yet begun, but more information will be made public shortly. It is expected that within the next few months Neuralink will recruit a limited number of volunteers, implant the computerized chip in their brains, and begin the trial program. The chip Neuralink has built is the size of a small coin and extremely thin; it can be implanted in the brain of any living being and linked to a smartphone over a wireless system.

The chip is claimed to provide immediate help with paralysis, anxiety, depression, severe joint pain, spinal pain, severe brain injury and failure, blindness, loss of hearing and speech, insomnia, seizures, and other diseases and problems. Like a mobile phone's SIM card, the chip is built on a system that uses signals to connect it to a smartphone; through the phone, data can then be added to the chip and retrieved from it. The chip will also record a person's thoughts and preserve their memory. Memory stored on the chip could be replayed at any time on a computer or mobile phone, and days gone by could be brought up on screen as data.
Courtesy of Dawn News
0 notes
puppetmaster13u · 2 months
Text
Prompt 260
You know what could be a really funny and fun crossover? Especially with my constant dragon AUs? Danny Phantom and Wakfu. 
See, Danny and Tucker have decided to reincarnate together, almost like a vacation after reaching the age of 100. But see, they let Sam choose where they were going to be hanging out for the next however long, for fun! 
And see, she saw a world full of plants and life and an utter asshole trying to destroy a plant-person city what the fuck- and she tosses their souls into that world right away. 
Now see, they didn’t really have any request about their reincarnation except that they were able to stick together. And what better way than for them to come from the same egg! Okay so maybe they arrived a little early- thank you Clockwork- but it’ll take a while to hatch anyway. 
Okay, so maybe they’ve hatched alone, but that’s also fine! Sure for literal babies probably not, but they know how to do stuff- mostly. They’ll figure it out! Besides, Tucker is a freaking dragon now, and that’s so cool! And Danny has these cool wing-horns and mini portals too! Sure it’s currently only for this world, but still! 
It’ll be fun, and honestly they know how to survive a desert! Mostly!
78 notes · View notes