geocitiesdig · 3 months
From the CD "Animation Factory - Essential Collection" (/ESS1.ISO/computer/computers), via diskmaster.textfiles.com.
beelzemon-bm · 3 months
listen, i've been seeing a lot about palworld and about how people want "better alternatives" to pokemon. if you want a monster taming/collecting based game that's more mature than your average pokemon game, i'm telling you, cyber sleuth and hacker's memory are Right there. they're even turn based like pokemon. sure they take some getting used to, especially if you didn't grow up on the digimon story formula, but that's okay. there's more out there than just palworld with its weird knockoff designs, and more people just need to be nicer to digimon if you ask me. especially after years of it getting shit on regularly for "just being a knockoff" when it has its own world and characters to explore.
messing around with creating more and less "realistic" "screenshots". text from Cosmos, 1984
ultraeditus · 2 years
Large files and big data are among the many inevitabilities that come with today's computing. If you've ever had the misfortune of encountering a file so freakishly large that your poor text editor crashes when trying to open it, we feel your pain. We've been there. If you've read the Stack Overflow threads we posted above, you'll know that people inevitably answer the question with examples of text editors that do open those large files. Well, large files up to some threshold.
containmentbreach · 2 years
i was talking about this on discord earlier but why have we as autistics abandoned the term special interest...it feels like everyone just uses hyperfixation now which is also a good word but i always saw the two as having different meanings. a hyperfixation is something you get intensely focused on for like a couple of weeks or months before the interest either fades or just becomes less intense. a special interest is something that like...pays rent in your brain. even if you aren't thinking about it at the time it's a part of your identity. it's something you love learning about or know everything about or just are overjoyed by. obviously people aren't always going to experience their special interests the same but i stand by the idea that they are different from hyperfixations. they're two unique kinds of consuming things. justice for special interests
queenlua · 3 months
i'm committing Adverb Crimes in the club tonight lads
ordophilosophicus · 2 years
If I didn't have 10 million projects, I'd start doing Death Note video essays. For how popular the series is, there is so much left to overanalyze.
- Why we collectively ignore Beyond Birthday, and why he is actually important
- Death Note and the problem of names
- Watari/Wammy: the most underrated and probably creepiest character in the whole of Death Note
- Why nobody talks about the shinigami world and Ryuk's theft of that one notebook
- Death Note noir [an in-depth analysis of how DN utilizes noir stereotypes]
baekuras · 1 year
randomly at 1am being hit with the need to go dream up animatics again for my ocs
they always consume so much time, but by god is it fun to watch them when they're done. difficult choices to be made
seat-safety-switch · 11 months
I think we all have that family member we don’t want to see at the big functions. Family reunions. Family cookouts. Family feuds with a rival family over the distribution of methamphetamine in the tri-state area. Family funerals. Family violent reprisals shattering the peace of an otherwise calm suburban neighbourhood which had thought that it could just ignore the fact that a predator was living in their midst. Family bowling night.
In the past, I’ve told you about my squabbles with my incredibly rich cousin from the old country, Blyat Safety-Switch. He drapes himself in the most exquisite clothes, flies first class everywhere, and pays a person to wash his car for him. That last one is a little confusing to me, but I’m assured that very rich people opt to forego the simple pleasures of the hands in order to attend more business meetings. Yeah, he’s a real dick, and I don’t care if my mom reads this one and phones me up super angry in the morning. He’s not in this story.
Closer to home is what I would call the “less extended” section of the clan. When I was a young kid, my older cousin Mort and I would work on RC cars together, only to inevitably crank up the current a little bit too far and blow up the primitive batteries of the era. Sure, you could break open disposable cameras, remove the flash capacitor, and improve your thirty-foot times, but we had no money, and the tourists had long stopped coming by our neighbourhood once that issue of Time Magazine about our living conditions faded into memory. Mort and I had to come up with a new source of energy, and unfortunately at that time Mort had gotten the internet.
You see, back then the internet was full of bullshit. Not like today, where that bullshit is precision-engineered by an interlocking matrix of advanced computers and bad-faith foreign operatives. No, back then it was just made by bored people. Or at least mentally ill ones. Mort would download all these textfiles from BBSes, and then he would excitedly tell me about the things he had found with the help of his trusty Apple III+. You can make napalm in a microwave using styrofoam. You can make napalm in a microwave using gasoline and a kitchen sponge. You can clean a microwave before your parents come home using a combination of lemon juice, baking soda, and a toothbrush.
So what happened to Mort? He’s gainfully employed now, at a complicated office job that involves using Excel to save all of humanity. In other words, he’s still a sucker. And if he comes to Maw-Maw’s cookout this weekend, he still owes me a Tamiya Lunchbox.
geocitiesdig · 3 months
From the CD "Animation Factory - Essential Collection" (/ESS1.ISO/computer/computers), via diskmaster.textfiles.com.
beelzemon-bm · 11 months
been tossing around a very very very self indulgent blog thats just me working through trauma and some weird personal problems via a renamon standin for myself with digimon world ds as a base bcs Nostalgia?
and if any of ygs would be interested in that lemme know, ik its a bit out of my norm but ive been doing a lot of retrospective thinking lately and just wanna take a hammer to my art style and make something really weird and artsy using my comfort game as a base to explore those feelings?
colorized BBS logo
starbage · 3 months
so many solutions for how I did things in the game are objectively dumb as hell. That thing is being held together by spaghetti code and prayers.
some favorite examples:
the item menu is just one long chain of if-statements (at the very least, it DOES utilize labels to jump to the end once a match is found)
The pause menu is just a choice box with a different position; it’s all common events from there.
The changing titlescreen at the end of every playthrough is done by reading/writing to a textfile because I was too stupid to figure out how else to store the information. I had to figure out this method like a week before launch because the old way I did it was incredibly broken….
canmom · 9 months
yarr harr, fiddle de dee [more on piracy networks]
being a pirate is all right to be...
I didn't really intend this post as an overview of all the major methods of piracy. But... since a couple of alternatives have been mentioned in the comments... let me infodump talk a little about 1. Usenet and 2. direct peer-to-peer systems like Gnutella and Soulseek. How they work, what their advantages are on a system level, how convenient they are for the user, that kind of thing.
(Also a bit at the end about decentralised hash table driven networks like IPFS and Freenet, and the torrent indexer BTDigg).
Usenet
First Usenet! Usenet actually predates the web, it's one of the oldest ways people communicated on the internet. Essentially it's somewhere between a mailing list and a forum (more accurately, a BBS - BBSes were like forums you had to phone, to put it very crudely, and predate the internet as such).
On Usenet, it worked like this. You would subscribe to a newsgroup, which would have a hierarchical name like rec.arts.sf.tv.babylon5.moderated (for talking about your favourite TV show, Babylon 5) or alt.transgendered (for talking about trans shit circa 1992). You could send messages to the newsgroup, which would then be copied between the various Usenet servers, where other people could download them using a 'news reader' program. If one of the Usenet servers went down, the others acted as a backup. Usenet was a set of protocols rather than a service as such; it was up to the server owners which other servers they would sync with.
Usenet is only designed to send text information. In theory. Back in the day, when the internet was slow, this was generally exactly what people sent. Which didn't stop people posting, say, porn... in ASCII form. (for the sake of rigour, that textfile's probably from some random BBS, idk if that one ever got posted to Usenet). The maximum size of a Usenet post ('article', in traditional language) depends on the server, but it's usually less than a megabyte, which does not allow for much.
As the internet took off, use of Usenet in the traditional way declined. Usenet got flooded with new users (an event named 'Eternal September'; September was traditionally when a cohort of students would start at university and thus gain access to Usenet, causing an influx of new users who didn't know the norms) and superseded by the web. But it didn't get shut down or anything - how could it? It's a protocol; as long as at least one person is running a Usenet server, Usenet exists.
But while Usenet may be nigh-unusable as a discussion forum now thanks to the overwhelming amount of spam, people found another use for the infrastructure. Suppose you have a binary file - an encoded movie, for example. You can encode that into ASCII strings using Base64 or similar methods, split it up into small chunks, and post the whole lot onto Usenet, where it will get synchronised across the network. Then, somewhere on the web, you publish a list of all the Usenet posts and their position in the file. This generally uses the NZB format. A suitable newsreader can then take that NZB file and request all the relevant pieces from a Usenet server and assemble them into a file.
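To make the scheme concrete, here's a toy sketch of splitting a binary into ASCII 'articles' plus an ordered index. All names here are hypothetical; real uploads typically use yEnc rather than base64, and a real NZB is an XML file listing segments, not a Python list:

```python
import base64

ARTICLE_SIZE = 500_000  # bytes of raw data per 'article' (assumed; servers cap posts well under 1 MB)

def encode_articles(data: bytes):
    """Split a binary into base64 'articles' plus an ordered index of message-ids."""
    articles = {}
    index = []  # stands in for the NZB: which posts, in what order
    for seg, start in enumerate(range(0, len(data), ARTICLE_SIZE)):
        msg_id = f"<part{seg}@example.invalid>"  # hypothetical message-id
        articles[msg_id] = base64.b64encode(data[start:start + ARTICLE_SIZE]).decode("ascii")
        index.append(msg_id)
    return articles, index

def decode_articles(articles, index):
    """Fetch the listed articles in order and reassemble the original binary."""
    return b"".join(base64.b64decode(articles[msg_id]) for msg_id in index)
```

The point is just that the 'file' on Usenet is nothing but a pile of ordinary text posts, and the index is what turns them back into a movie.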
NZB sites are similar to torrent trackers in that they don't directly host pirated content, but tell you where to get it. Similar to torrent trackers, some are closed and some are open. However, rather than downloading the file piecemeal from whoever has a copy as in a torrent, you are downloading it piecemeal from a big central server farm. Since these servers are expensive to run, access to Usenet is usually a paid service.
For this to work, you need the Usenet servers to hold onto the data for long enough for people to get it. Generally speaking, the way it works is that the server has a certain storage buffer; when it runs out of space, it starts overwriting old files. So there's an average length of time until an old file gets deleted, known as the 'retention time'. For archival purposes, that's how long you've got; if you want to keep something on Usenet after that, upload it again.
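A minimal sketch of that retention behaviour, assuming a simple 'expire oldest first' policy (the class and names are made up for illustration):

```python
from collections import OrderedDict

class RetentionStore:
    """Toy Usenet-style buffer: fixed capacity, oldest articles overwritten first."""

    def __init__(self, capacity_bytes):
        self.capacity = capacity_bytes
        self.used = 0
        self.articles = OrderedDict()  # insertion order doubles as age

    def post(self, msg_id, body: bytes):
        self.articles[msg_id] = body
        self.used += len(body)
        while self.used > self.capacity:  # out of space: expire the oldest article
            _, old = self.articles.popitem(last=False)
            self.used -= len(old)

    def fetch(self, msg_id):
        return self.articles.get(msg_id)  # None once retention has expired it
```

The 'retention time' falls out of this: divide the buffer size by the daily upload rate and that's roughly how long an article survives.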
As a system for file distribution... well, it's flawed, because it was never really designed as a file sharing system, but somehow it works. The operator of a Usenet server has to keep tens of petabytes of storage, to hold onto all the data on the Usenet network for a retention period of years, including the hundreds of terabytes uploaded daily, much of which is spam; it also needs to fetch it reliably and quickly for users, when the files are spread across the stream of data in random places. That's quite a system engineering challenge! Not surprisingly, data sometimes ends up corrupted. There is also a certain amount of overhead associated with encoding to ASCII and including parity checks to avoid corruption, but it's not terribly severe. In practice... if you have access to Usenet and know your way to a decent NZB site, I remember it generally working pretty well. Sometimes there's stuff on Usenet that's hard to find on other sources.
Like torrents, Usenet offers a degree of redundancy. Suppose there's a copyrighted file on Usenet server A, and it gets a DMCA notice and complies. But it's still on Usenet servers B, C and D, and so the (ostensible) copyright holder has to go and DMCA them as well. However, it's less redundant, since there are fewer Usenet servers, and operating one is so much more involved. I think if the authorities really wanted to crush Usenet as a functional file distribution system, they'd have an easier time of it than destroying torrents. Probably the major reason they don't is that Usenet is now a fairly niche system, so the cost/benefit ratio would be limited.
In terms of security for users, compared to direct peer to peer services, downloading from Usenet has the advantage of not broadcasting your IP on the network. Assuming the server implements TLS (any modern service should), if you don't use a VPN, your ISP will be able to see that you connected to a Usenet server, but not what you downloaded.
In practice?
for torrenting, if you use public trackers you definitely 100% want a VPN. Media companies operate sniffers which will connect to the torrent swarm and keep track of what IP addresses connect. Then, they will tell your ISP 'hey, someone is seeding our copyrighted movie on xyz IP, tell them to stop'. At this point, your ISP will usually send you a threatening email on a first offence and maybe cut off your internet on a second. Usually this is a slap-on-the-wrist sort of punishment, ISPs really don't care that much, and they will reconnect you if you say sorry... but you can sidestep that completely with a VPN. at that point the sniffer can only see the VPN's IP address, which is useless to them.
for Usenet, the threat model is more niche. There's no law against connecting to Usenet, and to my knowledge, Usenet servers don't really pay attention to anyone downloading copyrighted material from their servers (after all, there's no way they don't know the main reason people are uploading terabytes of binary data every day lmao). But if you want to be sure the Usenet server doesn't ever see your IP address, and your ISP doesn't know you connected to Usenet, you can use a VPN.
(In general I would recommend a VPN any time you're pirating or doing anything you don't want your IP to be associated with. Better safe than sorry.)
What about speed? This rather depends on your choice of Usenet provider, how close it is to you, and what rate limits they impose, but in practice it's really good since it's built on incredibly robust, pre-web infrastructure; this is one of the biggest advantages of Usenet. For torrents, by contrast... it really depends on the swarm. A well seeded torrent can let you use your whole bandwidth, but sometimes you get unlucky and the only seed is on the other side of the planet and you can only get about 10kB/s off them.
So, in short, what's better, Usenet or BitTorrent? The answer is really It Depends, but there's no reason not to use both, because some stuff is easier to find on torrents (most anime fansub groups tend to go for torrent releases) and some stuff is easier to find on Usenet (e.g. if it's so old that the torrents are all dead). In the great hierarchy of piracy exclusivity, Usenet sits somewhere between private and public torrent trackers.
For Usenet, you will need to figure out where to find those NZBs. Many NZB sites require registration/payment to access the NZB listing, and some require you to be invited. However, it's easier to get into an NZB site than getting on a private torrent tracker, and requires less work once you're in to stay in.
Honestly? It surprises me that Usenet hasn't been subject to heavier suppression, since it's relatively centralised. It's got some measure of resilience, since Usenet servers are distributed around the world, and if they started ordering ISPs to block noncomplying Usenet servers, people would start using VPNs, proxies would spring up; it would go back to the familiar whack-a-mole game.
I speculate the only reason it's not more popular is the barrier to entry is just very slightly higher than torrents. Like, free always beats paid, even though in practice torrents cost the price of a VPN sub. Idk.
(You might say it requires technical know-how... but is 'go on the NZB indexer to download an NZB and then download a file from Usenet' really so much more abstruse than 'go on the tracker to download a torrent and then download a file from the swarm'?)
direct peer to peer (gnutella, soulseek, xdcc, etc.)
In a torrent, the file is split into small chunks, and you download pieces of your file from everyone who has a copy. This is fantastic for propagation of the file across a network because as soon as you have just one piece, you can start passing it on to other users. And it's great for downloading, since you can connect to lots of different seeds at once.
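A toy model of that chunked download, with hypothetical peers represented as dicts mapping piece index to bytes (real BitTorrent adds piece hashing, rarest-first selection, and so on):

```python
def swarm_download(piece_count, peers):
    """Fetch each piece from any peer that has it, then reassemble the file."""
    pieces = {}
    for i in range(piece_count):
        for peer in peers:  # peers: list of dicts, piece index -> bytes
            if i in peer:
                pieces[i] = peer[i]
                break
    if len(pieces) != piece_count:
        raise RuntimeError("swarm has no source for some piece")
    return b"".join(pieces[i] for i in range(piece_count))
```

Note that no single peer needs the whole file; as long as every piece exists somewhere in the swarm, the download completes.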
However, there is another form of peer to peer which is a lot simpler. You provide some means to find another person who has your file, and they send you the file directly.
This is the basis that LimeWire worked on. LimeWire used two protocols under the hood, one of them BitTorrent, the other a protocol called Gnutella. When the US government ordered LimeWire shut down, the company sent out a patch to LimeWire users that made the program delete itself. But both these protocols are still functioning. (In fact there's even an 'unofficial' fork of the LimeWire code that you can use.)
After LimeWire was shut down, Gnutella declined, but it didn't disappear by any means. The network is designed to be searchable, so you can send out a query like 'does anyone have a file whose name contains the string "Akira"' and this will spread out across the network, and you will get a list of people with copies of Akira, or the Akira soundtrack, and so on. So there's no need for indexers or trackers, the whole system is distributed. That said, you are relying on the user to tell the truth about the contents of the file. Gnutella has some algorithmic tricks to make scanning the network more efficient, though not to the same degree as DHTs in torrents. (DHTs can be fast because they are looking for one computer, the appointed tracker, based on a hash of the file contents. Tell me if you wanna know about DHTs, they're a fascinating subject lol).
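Here's a rough sketch of that flooding search with a TTL, on a toy network (topology and names invented for illustration; real Gnutella layers more machinery on top of this):

```python
from collections import deque

def flood_search(graph, files, start, query, ttl):
    """Breadth-first flood: each hop decrements the TTL; returns nodes with a match."""
    hits = []
    seen = {start}
    frontier = deque([(start, ttl)])
    while frontier:
        node, t = frontier.popleft()
        if any(query in name for name in files.get(node, [])):
            hits.append(node)  # this node answers the query
        if t == 0:
            continue  # TTL exhausted: the query spreads no further from here
        for neigh in graph.get(node, []):
            if neigh not in seen:
                seen.add(neigh)
                frontier.append((neigh, t - 1))
    return hits
```

The TTL is what keeps a query from flooding the entire network; it also means a file more hops away than your TTL simply won't be found.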
Gnutella is not the only direct file sharing protocol. Another way you can introduce 'person who wants a file' and 'person who has a file' is to have a central server which everyone connects to, often providing a chatroom function along with coordinating connections.
This can be as simple as an IRC server. Certain IRC clients (by no means all) support a protocol called XDCC, which lets you send files to another user. This has been used by, for example, anime fansub groups - it's not really true anymore, but there was a time when the major anime fansub groups operated XDCC bots, and if you wanted their subs, you had to go on their IRC and write a command to the bot to send them to you.
XDCC honestly sucked though. It was slow if you didn't live near the XDCC bot, the connection would often crap out mid-download and you'd have to manually resume (thankfully it was smart enough not to start over from the beginning), and of course, it is fiddly to go on a server and type a bunch of IRC commands. It also put the onus of maintaining distribution entirely on the fansub group - your group ran out of money or went defunct and shut down its XDCC bot? Tough luck. That said, it was good for getting old stuff that didn't have a torrent available.
Then there's Soulseek! Soulseek is a network that can be accessed using a handful of clients. It is relatively centralised - there are two major Soulseek servers - and they operate a variety of chat rooms, primarily for discussing music.
To get on Soulseek you simply register a username, and you mark at least one folder for sharing. There doesn't have to be anything in it, but a lot of users have it set so that they won't share anything unless you're sharing a certain amount of data yourself.
You can search the network and get a list of users who have matching files, or browse through a specific user's folder. Each user can set up their own policy about upload speed caps and so on. If you find something you want to download, you can queue it up. The files will be downloaded in order.
One interesting quirk of Soulseek is that the uploader will be notified (not like a push notification, but you see a list of who's downloading/downloaded your files). So occasionally someone will notice you downloading and send you a friendly message.
Soulseek is very oriented towards music. Officially, its purpose is to help promote unsigned artists, not to infringe copyright; in practice it's primarily a place for music nerds to hang out and share their collections. And although it's faced a bit of legal heat, it seems to be getting by just fine.
However, there's no rule that you can only share music. A lot of people share films etc. There's really no telling what will be on Soulseek.
Since Soulseek is 1-to-1 connections only, it's often pretty slow, but it's often a good bet if you can't find something anywhere else, especially if that something is music. In terms of resilience, the reliance on a single central server to connect people to peers is a huge problem - that's what killed Napster back in the day. If the Soulseek server was shut down, that would be game over... unless someone else set up a replacement and told all the clients where to connect. And yet, somehow it's gotten away with it so far!
In terms of accessibility, it's very easy: just download a client, pick a name and password, and share a few gigs (for example: some movies you torrented) and you're good.
In terms of safety, your IP is not directly visible in the client, but any user who connects directly to you would be able to find it out with a small amount of effort. I'm not aware of any cases of IP sniffers being used on Soulseek, but I would recommend a VPN all the same to cover your bases - better safe than sorry.
Besides the public networks like Soulseek and Gnutella, there are smaller-scale, secret networks that also work on direct connection basis, e.g. on university LANs, using software such as DC++. I cannot give you any advice on getting access to these, you just have to know the right person.
Is that all the ways you can possibly pirate? Nah, but I think that's the main ones.
Now for some more niche shit that's more about the kind of 'future of piracy' type questions in the OP, like, can the points of failure be removed..?
IPFS
Since I talked a little above about DHTs for torrents, I should maybe spare a few words about this thing. Currently on the internet you specify the address of a certain computer connected to the network using an IP address. (Well, typically the first step is to use the DNS to get an IP address.) IPFS is based on the idea of 'content-based addressing' instead; like torrents, it specifies a file using a hash of the content.
This leads to a 'distributed file system'; the ins and outs are fairly complicated but it has several layers of querying. You can broadcast that you want a particular chunk of data to "nearby" nodes; if that fails to get a hit, you can query a DHT which directs you to someone who has a list of sources.
In part, the idea is to create a censorship-resistant network: if a node is removed, the data may still be available on other nodes. However, it makes no claim to outright permanence, and data that is not requested is gradually flushed from nodes by garbage collection. If you host a node, you can 'pin' data so it won't be deleted, or you can pay someone else to do that on their node. (There's also some cryptocurrency blockchain rubbish that is supposed to offer more genuine permanence.)
IPFS is supposed to act as a replacement for the web, according to its designers. This is questionable. Most of what we do on the web right now is impossible on IPFS. However, I happen to like static sites, and it's semi-good at that. It is, sympathetically, very immature; I remember reading one very frustrated author writing about how hard it was to deploy a site to IPFS, although that was some years ago and matters seem to have improved a bit since then.
I said 'semi-good'. Since the address of your site changes every time you update it, you will end up putting multiple redundant copies of your site onto the network at different hashes (though the old hashes will gradually disappear). You can set a DNS entry that points to the most recent IPFS address of your site, and rely on that propagating across the DNS servers. Or, there's a special mutable distributed name service on the IPFS network based around public/private key crypto; basically you use a hash of your public key as the address and that returns a link to the latest version of your site signed with your private key.
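As a very loose illustration of that key-based naming idea - and to be clear, this is NOT how IPNS is actually implemented; in particular I'm using HMAC as a stand-in for a real public-key signature, which loses the public verifiability that makes the real thing work:

```python
import hashlib
import hmac

def name_for(pubkey: bytes) -> str:
    """The stable address is a hash of the public key, not of the content."""
    return hashlib.sha256(pubkey).hexdigest()

def publish(records, pubkey, secret, cid):
    """Point the stable name at the latest content hash, with a 'signature'.

    'records' stands in for the distributed name service; 'cid' is the
    content hash of the newest version of the site.
    """
    sig = hmac.new(secret, cid.encode(), hashlib.sha256).hexdigest()
    records[name_for(pubkey)] = (cid, sig)

def resolve(records, pubkey, secret):
    """Look up the name, check the 'signature', return the latest content hash."""
    cid, sig = records[name_for(pubkey)]
    expected = hmac.new(secret, cid.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        raise ValueError("record not signed by the key owner")
    return cid
```

The address never changes even though the content it points at does - that's the whole trick.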
Goddamn that's a lot to try to summarise.
Does it really resist censorship? Sorta. If a file is popular enough to propagate across the network, it's hard to censor it. If there's only one node with it, it's no stronger than any other website. If you wanted to use it as a long-term distributed archive, it's arguably worse than torrents, because data that's not pinned is automatically flushed out of the network.
It's growing, if fairly slowly. You can announce and share stuff on it. It has been used to bypass various kinds of web censorship now and then. Cloudflare set up a bunch of IPFS nodes on their network last year. But honestly? Right now it's one of those projects that is mostly used by tech nerds to talk to other tech nerds. And unfortunately, it seems to have caught a mild infection of cryptocurrency bullshit as well. Thankfully none of that is necessary.
What about piracy? Is this useful for our nefarious purposes? Well, sort of. Libgen has released all its books on IPFS; there is apparently an effort to upload the content of ZLib to IPFS as well, under the umbrella of 'Anna's Archive' which is a meta-search engine for LibGen, SciHub and a backup of ZLib. By nature of IPFS, you can't put the actual libgen index site on it (since it constantly changes as new books are uploaded, and dynamic serverside features like search are impossible on IPFS). But books are an ideal fit for IPFS since they're usually pretty small.
For larger files, they are apparently split into 256kiB chunks and hashed individually. The IPFS address links to a file containing a list of chunk hashes, or potentially a list of lists of chunk hashes in a tree structure. (Similar to using a magnet link to acquire a torrent file; the short hash finds you a longer list of hashes. Technically, it's all done with Merkle trees, the same data structure used in torrents).
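A sketch of that chunk-hashing idea (simplified: real IPFS builds a Merkle DAG with per-block metadata and a configurable layout, not this bare tree):

```python
import hashlib

CHUNK = 256 * 1024  # 256 KiB, the chunk size described above

def merkle_root(data: bytes, fanout=4) -> str:
    """Hash fixed-size chunks, then hash lists of hashes upward to a single root."""
    # Leaf level: one SHA-256 per 256 KiB chunk (empty input gets one empty-chunk leaf)
    level = [hashlib.sha256(data[i:i + CHUNK]).digest()
             for i in range(0, len(data), CHUNK)] or [hashlib.sha256(b"").digest()]
    # Interior levels: each node is the hash of up to `fanout` child hashes
    while len(level) > 1:
        level = [hashlib.sha256(b"".join(level[i:i + fanout])).digest()
                 for i in range(0, len(level), fanout)]
    return level[0].hex()
```

Because identical chunks hash identically, two files that happen to share a 256 KiB chunk share that leaf, which is exactly the 'chunks don't belong to one file' property mentioned below.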
One interesting consequence of this design is that the chunks don't necessarily 'belong' to a particular file. If you're very lucky, some of your chunks will already be established on the network. This also further muddies the waters of whether a particular user is holding onto copyrighted data or not, since a particular hash/block might belong to both the tree of some copyrighted file and the tree of some non-copyrighted file. Isn't that fun?
The other question I had was about hash collisions. These are effectively impossible with the SHA-256 hash used by default on IPFS, which produces a 256-bit address: by the birthday bound, you'd need on the order of 2^128 (about 10^38) chunks on the network before a collision became likely, which is never going to happen. Of course, a 256-bit address can't distinguish every possible chunk - the number of 256-kibibyte strings is around 4.5 * 10^631305 - but only a vanishingly tiny fraction of those will ever actually be distributed on IPFS. Though, given that number, it seems a bit unlikely that two files will ever actually have shared chunks. But who knows, files aren't just random data so maybe now and then, there will be the same quarter-megabyte in two different places.
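If you want to play with those numbers yourself, a quick back-of-the-envelope in Python (the 50% figure is the standard birthday-bound approximation for random inputs):

```python
import math

HASH_BITS = 256              # SHA-256 address size
CHUNK_BITS = 256 * 1024 * 8  # bits in one 256 KiB chunk

# Size of the space of possible chunks, expressed as a power of ten.
chunk_space_log10 = CHUNK_BITS * math.log10(2)  # ~631305.7, i.e. ~4.5e631305 chunks

# Birthday bound: roughly 1.1774 * sqrt(2^HASH_BITS) random chunks are needed
# before there's a 50% chance of any two sharing a SHA-256 hash.
n_50 = 1.1774 * 2.0 ** (HASH_BITS / 2)  # ~4e38 chunks

print(f"possible 256 KiB chunks: ~10^{chunk_space_log10:.0f}")
print(f"chunks for a 50% collision chance: ~{n_50:.2e}")
```

~10^38 chunks at a quarter megabyte each is astronomically more data than will ever exist, which is why the network can treat hashes as unique names.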
That said, for sharing large files, IPFS doesn't fundamentally offer a huge advantage over BitTorrent with DHT. If a lot of people are trying to download a file over IPFS, you will potentially see similar dynamics to a torrent swarm, where chunks spread out across the network. Instead of 'seeding' you have 'pinning'.
It's an interesting technology though, I'll be curious to see where it goes. And I strongly hope 'where it goes' is not 'increasingly taken over by cryptocurrency bullshit'.
In terms of security, an IPFS node is not anonymous. It's about as secure as torrents. Just like torrents, the DHT keeps a list of all the nodes that have a file. So if you run an IPFS node, it would be easy to sniff out whether you are hosting a copyrighted file on IPFS. That said, you can relatively safely download from IPFS without running a node or sharing anything, since public web gateways (like the one at ipfs.io) can fetch data for you. Although - if you fetch a file via a gateway (or any other site that provides IPFS access over http), the gateway will gain a copy of the file and temporarily provide it. So it's not entirely tantamount to leeching - although given the level of traffic on the public gateways I can't imagine stuff lasts very long on there.
Freenet Hyphanet
Freenet (officially renamed to Hyphanet last month, but most widely known as Freenet) is another, somewhat older, content-based addressing distributed file store built around a DHT. The difference between IPFS and Freenet is that Freenet prioritises anonymity over speed. Like in IPFS, the data is split into chunks - but on Freenet, the file is spread out redundantly across multiple different nodes immediately, not when they download it, and is duplicated further whenever it's downloaded.
Unlike torrents and IPFS, looking up a file causes it to spread out across the network, instead of referring you to an IP address. Your request is routed around the network using hashes in the usual DHT way. If it runs into the file, it comes back, writing copies at each step along the way. If a node runs out of space it overwrites the chunks that haven't been touched in a while. So if you get a file back, you don't know where it came from. The only IP addresses you know are your neighbours in the network.
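A toy version of that 'copies written at each step on the way back' behaviour, with the route flattened to a simple list of node stores (real Freenet routing, role-swapping and expiry are far more involved):

```python
def freenet_fetch(path, key):
    """Walk the request path until some node has the chunk; cache it on the way back.

    'path' is a list of dicts acting as per-node stores; path[0] is the requester.
    """
    for i, node in enumerate(path):
        if key in node:
            data = node[key]
            for prev in path[:i]:
                prev[key] = data  # every hop on the return route keeps a copy
            return data
    return None  # request died without finding the chunk
```

This is why popular data on Freenet gets faster and more redundant the more it's requested, and why the requester never learns which node originally held it.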
There's a lot of complicated and clever stuff about how the nodes swap roles and identities in the network to gradually converge towards an efficient structure while maintaining that degree of anonymity.
Much like IPFS, data on Freenet is not guaranteed to last forever. If there's a lot of demand, it will stick around - but if no nodes request the file for a while, it will gradually get flushed out.
As well as content-based hashing, the same algorithm can be used for routing to a cryptographic signature, which lets you define a semi-mutable 'subspace' (you can add new files later which will show up when the key is queried). In fact a whole lot of stuff seems to be built on this, including chat services and even a Usenet-like forum with a somewhat complex 'web of trust' anti-spam system.
If you use your computer as a Freenet node, you will necessarily be hosting whatever happens to route through it. Freenet is used for much shadier shit than piracy. As far as safety, the cops are trying to crack it, though probably copyrighted stuff is lower on their priority list than e.g. CSAM.
Is Freenet used for piracy? If it is, I can't find much about it on a cursory search. The major problem it has is latency. It's slow to look stuff up, and slow to download it since it has to be copied to every node between you and the source. The level of privacy it provides is just not necessary for everyday torrenting, where a VPN suffices.
BTDigg
Up above I lamented the lack of discoverability on BitTorrent. There is no way to really search the BitTorrent network if you don't know exactly the file you want. This comes with advantages (it's really fast; DHT queries can be directed to exactly the right node rather than spreading across the network as in Gnutella) but it means BitTorrent is dependent on external indices to know what's available on the network and where to look for it.
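That 'directed to exactly the right node' part is what makes a DHT lookup fast. Mainline is based on Kademlia, where node IDs and keys live in the same space and each hop forwards the query to the known peer whose ID is closest to the key under XOR distance. Here's a minimal sketch of just the distance metric and neighbour choice (real Kademlia tracks buckets of peers and queries several in parallel):

```python
def xor_distance(a: bytes, b: bytes) -> int:
    """Kademlia-style distance: XOR the two IDs and compare as integers."""
    return int.from_bytes(a, "big") ^ int.from_bytes(b, "big")


def closest_peer(peers, target: bytes) -> bytes:
    """Pick the known peer whose ID is XOR-closest to the target key."""
    return min(peers, key=lambda pid: xor_distance(pid, target))
```

Each hop roughly halves the remaining distance to the key, so a lookup reaches the responsible node in O(log n) hops - no flooding, and no need to ask nodes that can't possibly hold the answer.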
While I was checking I had everything right about IPFS, I learned there is a site called BTDigg (wikipedia) which maintains a database of torrents known from the Mainline DHT (the primary DHT used by BitTorrent). Essentially, when you use a magnet link to download a torrent file, you query the DHT to find a node that has the full .torrent file, which tells you what you need to download to get the actual content of the torrent. BTDigg has been running a scraper which notes magnet links coming through its part of the DHT and collects the corresponding .torrent files; it stores metadata and magnet links in a database that is text-searchable.
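For reference, a magnet link is mostly just that infohash in a URL wrapper - the 40-hex-character `xt=urn:btih:...` field is the key you'd look up in the DHT. Here's a small sketch of pulling the pieces out with the standard library (real links can carry more fields than this, and tracker URLs arrive percent-encoded):

```python
from urllib.parse import urlparse, parse_qs


def parse_magnet(uri: str) -> dict:
    """Extract the infohash, display name, and trackers from a magnet link."""
    params = parse_qs(urlparse(uri).query)
    xt = params["xt"][0]  # e.g. "urn:btih:<40-hex-char infohash>"
    if not xt.startswith("urn:btih:"):
        raise ValueError("not a BitTorrent magnet link")
    return {
        "infohash": xt[len("urn:btih:"):].lower(),  # the DHT lookup key
        "name": params.get("dn", [None])[0],        # display name, optional
        "trackers": params.get("tr", []),           # tracker URLs, optional
    }
```

Given only the infohash, a client can find peers via the DHT and fetch the full .torrent metadata from them - which is exactly the traffic BTDigg's scraper is watching for.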
This database isn't hosted on the BitTorrent network, so it's as vulnerable to takedown as any other tracker, but it does function as a kind of backup record of what torrents exist if the original tracker has gone. So give that a try if the other sites fail.
Say something about TOR?
I've mentioned VPNs a bunch, but what about TOR? tl;dr: don't use TOR for most forms of piracy.
I'm not gonna talk about TOR in detail beyond to say I wouldn't recommend using TOR for piracy for a few reasons:
TOR doesn't protect you if you're using torrents. Due to the way the BitTorrent protocol works, your IP will leak to the tracker/DHT. So there's literally no point to using TOR.
If that's not enough to deter you, TOR is slow. It's not designed for massive file transfers and it's already under heavy use. Torrents would strain it much further.
If you want an anonymisation network designed with torrents in mind, instead consider I2P. Using a supported torrent client (right now p much just Vuze and its fork BiglyBT - I would recommend the latter), you can connect to a torrent swarm that exists purely inside the I2P network. That will protect you from IP sniffers, at the cost of reducing the pool of seeds you can reach. (It also might be slower in general thanks to the onion routing, not sure.)
What's the future of piracy?
So far the history of piracy has been defined by churn. Services and networks grow popular, then get shut down. But the demand continues to exist and sooner or later, they are replaced. Techniques are refined.
It would be nice to imagine that somewhere up here comes the final, unbeatable piracy technology. It should be... fast, accessible, easy to navigate, reliably anonymous, persistent, and too widespread and ~rhizomatic~ to effectively stamp out. At that point, when 'copies of art' can no longer function as a scarce commodity, what happens? Can it finally be decoupled from the ghoulish hand of capital? Well, if we ever find out, it will be in a very different world to this one.
Right now, BitTorrent seems the closest candidate. The persistent weaknesses: the need for indexers and trackers, the lack of IP anonymity, and the potential for torrents to die out. Also a lot of people see it as intimidating - there's a bunch of jargon (seeds, swarms, magnet links, trackers, peers, leeches, DHT) which is pretty simple in practice (click link, get thing) but presents a barrier to entry compared to googling 'watch x online free'.
Anyway, really the thing to do is, continue to pirate by any and all means available. Don't put it all in one basket, you know? Fortunately, humanity is waaaay ahead of me on that one.
do what you want 'cos a pirate is free you are a pirate