Ablaze is the strongest and oldest of the band and behaves like a proud older brother towards his bandmates (not like JD back then). He's 39.
He is half Rock and half Pop Troll, which makes him a Rock-Pop Troll among his friends.
Ablaze is a calm and chill man, which makes him like a typical dad who allows absolutely everything! During performances, he gains energy from the music and behaves teasingly during the choreography.
He's just a cool and loyal Troll.
He owns and can play an electric guitar.
He can literally sleep anywhere (even standing, with a smile on his face ;])
He reads his close friends perfectly and always knows how they're doing.
He likes to ruffle the hair of friends he passes or stands next to (Trickee and Hype run into this especially often)
Ablaze really likes to hear about the activities of his friends, memorizing the smallest details. He has a very good memory.
If something goes wrong, or there is some tension in the band, Ablaze takes over and tries to restore harmony and stability in the band (he succeeds)
Hype
Hype is the leader of the band and the spirit of the company. He's 29.
He successfully works out compromises if there are any conflicts between his bandmates.
Hype is a truly caring Troll whose actions match his words, and he motivates his band with infectious enthusiasm.
He wants his friends to feel not only recognized, but also supported, not just heard, but understood.
He coordinates meetings with the band and schedules concerts.
He has his own mug with the inscription "#1 Leader" which Boom gave him.
He often hangs out with Boom, and gave him the nickname "RainBro"
Hype has a habit of straightening his hair forward. That's why his hair is bent forward.
He has a clearly silly, goofy side that comes across as funny in a good way.
Hype and Ablaze are responsible for the idea and lyrics of the song.
Boom
Boom is the designer and is responsible for the costumes for the performance. He's 28.
He's gay (canon), and he wasn't afraid to open up to his friends about it.
Boom is relaxed, cordial, and sensitive to the feelings of others, and he appreciates harmony.
Boom picks up easily on other people's emotions and helps them find their balance. He can even sense how others are feeling just by listening to their voices.
He likes to spend time with his friends doing casual, fun things.
Boom will always find the right compliment to soften a heart that's ready to call his risky behavior irresponsible or reckless.
It's easy to make him laugh. He often laughs at Trickee's jokes.
He pricks up his ears toward wherever someone is talking so he can join the conversation.
He is afraid to make mistakes when he creates costume designs.
He loves cocoa, exactly the kind that Ablaze makes.
Trickee
Trickee is the youngest of the band and a funny Troll. He's 26, just like Branch.
Nothing tires him out faster than talking about dry facts or harsh reality.
Trickee is responsible for the choreography, which he does very well.
He has shorter hair than his bandmates.
Trickee removes his glasses and pulls down his hair only when he is asleep or in order to walk freely.
He often does risky things, which forces his bandmates to look out for him and protect him from accidents.
Trickee, being the youngest, always hangs out with Ablaze, feeling protected and a little like a kid beside an older Troll.
He loves almost everything Ablaze cooks! It's easy to convince Trickee of anything (especially for Hype)
He keeps several spare pairs of glasses, since he breaks them fairly often, even by accident.
He has a habit of suddenly hugging friends from behind and sniffing their nice hair (this is normal for them x])
My latest column for Locus Magazine is "What Kind of Bubble is AI?" All economic bubbles are hugely destructive, but some of them leave behind wreckage that can be salvaged for useful purposes, while others leave nothing behind but ashes:
Think about some 21st century bubbles. The dotcom bubble was a terrible tragedy, one that drained the coffers of pension funds and other institutional investors and wiped out retail investors who were gulled by Superbowl Ads. But there was a lot left behind after the dotcoms were wiped out: cheap servers, office furniture and space, but far more importantly, a generation of young people who'd been trained as web makers, leaving nontechnical degree programs to learn HTML, perl and python. This created a whole cohort of technologists from non-technical backgrounds, a first in technological history. Many of these people became the vanguard of a more inclusive and humane tech development movement, and they were able to make interesting and useful services and products in an environment where raw materials – compute, bandwidth, space and talent – were available at firesale prices.
Contrast this with the crypto bubble. It, too, destroyed the fortunes of institutional and individual investors through fraud and Superbowl Ads. It, too, lured in nontechnical people to learn esoteric disciplines at investor expense. But apart from a smattering of Rust programmers, the main residue of crypto is bad digital art and worse Austrian economics.
Or think of Worldcom vs Enron. Both bubbles were built on pure fraud, but Enron's fraud left nothing behind but a string of suspicious deaths. By contrast, Worldcom's fraud was a Big Store con that required laying a ton of fiber that is still in the ground to this day, and is being bought and used at pennies on the dollar.
AI is definitely a bubble. As I write in the column, if you fly into SFO and rent a car and drive north to San Francisco or south to Silicon Valley, every single billboard is advertising an "AI" startup, many of which are not even using anything that can be remotely characterized as AI. That's amazing, considering what a meaningless buzzword AI already is.
So which kind of bubble is AI? When it pops, will something useful be left behind, or will it go away altogether? To be sure, there's a legion of technologists who are learning Tensorflow and Pytorch. These nominally open source tools are bound, respectively, to Google and Facebook's AI environments:
But if those environments go away, those programming skills become a lot less useful. Live, large-scale Big Tech AI projects are shockingly expensive to run. Some of their costs are fixed – collecting, labeling and processing training data – but the running costs for each query are prodigious. There's a massive primary energy bill for the servers, a nearly as large energy bill for the chillers, and a titanic wage bill for the specialized technical staff involved.
Once investor subsidies dry up, will the real-world, non-hyperbolic applications for AI be enough to cover these running costs? AI applications can be plotted on a 2X2 grid whose axes are "value" (how much customers will pay for them) and "risk tolerance" (how perfect the product needs to be).
Charging teenaged D&D players $10/month for an image generator that creates epic illustrations of their characters fighting monsters is low value and very risk tolerant (teenagers aren't overly worried about six-fingered swordspeople with three pupils in each eye). Charging scammy spamfarms $500/month for a text generator that spits out dull, search-algorithm-pleasing narratives to appear over recipes is likewise low-value and highly risk tolerant (your customer doesn't care if the text is nonsense). Charging visually impaired people $100/month for an app that plays a text-to-speech description of anything they point their cameras at is low-value and moderately risk tolerant ("that's your blue shirt" when it's green is not a big deal, while "the street is safe to cross" when it's not is a much bigger one).
Morgan Stanley doesn't talk about the trillions the AI industry will be worth some day because of these applications. These are just spinoffs from the main event, a collection of extremely high-value applications. Think of self-driving cars or radiology bots that analyze chest x-rays and characterize masses as cancerous or noncancerous.
These are high value – but only if they are also risk-tolerant. The pitch for self-driving cars is "fire most drivers and replace them with 'humans in the loop' who intervene at critical junctures." That's the risk-tolerant version of self-driving cars, and it's a failure. More than $100b has been incinerated chasing self-driving cars, and cars are nowhere near driving themselves:
Quite the reverse, in fact. Cruise was just forced to quit the field after one of their cars maimed a woman – a pedestrian who had not opted into being part of a high-risk AI experiment – and dragged her body 20 feet through the streets of San Francisco. Afterwards, it emerged that Cruise had replaced the single low-waged driver who would normally be paid to operate a taxi with 1.5 high-waged skilled technicians who remotely oversaw each of its vehicles:
The self-driving pitch isn't that your car will correct your own human errors (like an alarm that sounds when you activate your turn signal while someone is in your blind-spot). Self-driving isn't about using automation to augment human skill – it's about replacing humans. There's no business case for spending hundreds of billions on better safety systems for cars (there's a human case for it, though!). The only way the price-tag justifies itself is if paid drivers can be fired and replaced with software that costs less than their wages.
What about radiologists? Radiologists certainly make mistakes from time to time, and if there's a computer vision system that makes different mistakes than the sort that humans make, they could be a cheap way of generating second opinions that trigger re-examination by a human radiologist. But no AI investor thinks their return will come from selling hospitals that reduce the number of X-rays each radiologist processes every day, as a second-opinion-generating system would. Rather, the value of AI radiologists comes from firing most of your human radiologists and replacing them with software whose judgments are cursorily double-checked by a human whose "automation blindness" will turn them into an OK-button-mashing automaton:
The profit-generating pitch for high-value AI applications lies in creating "reverse centaurs": humans who serve as appendages for automation that operates at a speed and scale that is unrelated to the capacity or needs of the worker:
But unless these high-value applications are intrinsically risk-tolerant, they are poor candidates for automation. Cruise was able to nonconsensually enlist the population of San Francisco in an experimental murderbot development program thanks to the vast sums of money sloshing around the industry. Some of this money funds the inevitabilist narrative that self-driving cars are coming, it's only a matter of when, not if, and so SF had better get in the autonomous vehicle or get run over by the forces of history.
Once the bubble pops (all bubbles pop), AI applications will have to rise or fall on their actual merits, not their promise. The odds are stacked against the long-term survival of high-value, risk-intolerant AI applications.
The problem for AI is that while there are a lot of risk-tolerant applications, they're almost all low-value; while nearly all the high-value applications are risk-intolerant. Once AI has to be profitable – once investors withdraw their subsidies from money-losing ventures – the risk-tolerant applications need to be sufficient to run those tremendously expensive servers in those brutally expensive data-centers tended by exceptionally expensive technical workers.
If they aren't, then the business case for running those servers goes away, and so do the servers – and so do all those risk-tolerant, low-value applications. It doesn't matter if helping blind people make sense of their surroundings is socially beneficial. It doesn't matter if teenaged gamers love their epic character art. It doesn't even matter how horny scammers are for generating AI nonsense SEO websites:
These applications are all riding on the coattails of the big AI models that are being built and operated at a loss in the hope of someday becoming profitable. If they remain unprofitable long enough, the private sector will no longer pay to operate them.
Now, there are smaller models, models that stand alone and run on commodity hardware. These would persist even after the AI bubble bursts, because most of their costs are setup costs that have already been borne by the well-funded companies who created them. These models are limited, of course, though the communities that have formed around them have pushed those limits in surprising ways, far beyond their original manufacturers' beliefs about their capacity. These communities will continue to push those limits for as long as they find the models useful.
These standalone, "toy" models are derived from the big models, though. When the AI bubble bursts and the private sector no longer subsidizes mass-scale model creation, it will cease to spin out more sophisticated models that run on commodity hardware (it's possible that Federated learning and other techniques for spreading out the work of making large-scale models will fill the gap).
So what kind of bubble is the AI bubble? What will we salvage from its wreckage? Perhaps the communities who've invested in becoming experts in Pytorch and Tensorflow will wrestle them away from their corporate masters and make them generally useful. Certainly, a lot of people will have gained skills in applying statistical techniques.
But there will also be a lot of unsalvageable wreckage. As big AI models get integrated into the processes of the productive economy, AI becomes a source of systemic risk. The only thing worse than having an automated process that is rendered dangerous or erratic based on AI integration is to have that process fail entirely because the AI suddenly disappeared, a collapse that is too precipitous for former AI customers to engineer a soft landing for their systems.
This is a blind spot in our policymakers' debates about AI. The smart policymakers are asking questions about fairness, algorithmic bias, and fraud. The foolish policymakers are ensnared in fantasies about "AI safety," AKA "Will the chatbot become a superintelligence that turns the whole human race into paperclips?"
But no one is asking, "What will we do if" – when – "the AI bubble pops and most of this stuff disappears overnight?"
If you'd like an essay-formatted version of this post to read or share, here's a link to it on pluralistic.net, my surveillance-free, ad-free, tracker-free blog:
Studio execs love plausible sentence generators because they have a workflow that looks exactly like a writer-exec dynamic, only without any eye-rolling at the stupid “notes” the exec gives the writer.
All an exec wants is to bark out “Hey, nerd, make me another E.T., except make the hero a dog, and set it on Mars.” After the writer faithfully produces this script, the exec can say, “OK, put a love interest in the second act, and give me a big gunfight at the climax,” and the writer dutifully makes the changes.
This is exactly how prompting an LLM works.
A writer and a studio exec are lost in the desert, dying of thirst.
Just as they are about to perish, they come upon an oasis, with a cool sparkling pool of water.
The writer drops to their knees and thanks the fates for saving their lives.
But then, the studio exec unzips his pants, pulls out his cock and starts pissing in the water.
“What the fuck are you doing?” the writer demands.
“Don’t worry,” the exec says, “I’m making it better.”
- Everything Made By an AI Is In the Public Domain: The US Copyright Office offers creative workers a powerful labor protective
THIS IS THE LAST DAY FOR MY KICKSTARTER for the audiobook for "The Internet Con: How To Seize the Means of Computation," a Big Tech disassembly manual to disenshittify the web and make a new, good internet to succeed the old, good internet. It's a DRM-free book, which means Audible won't carry it, so this crowdfunder is essential. Back now to get the audio, Verso hardcover and ebook:
http://seizethemeansofcomputation.org
Going to Burning Man? Catch me on Tuesday at 2:40pm on the Center Camp Stage for a talk about enshittification and how to reverse it; on Wednesday at noon, I'm hosting Dr Patrick Ball at Liminal Labs (6:15/F) for a talk on using statistics to prove high-level culpability in the recruitment of child soldiers.
On September 6 at 7pm, I'll be hosting Naomi Klein at the LA Public Library for the launch of Doppelganger.
On September 12 at 7pm, I'll be at Toronto's Another Story Bookshop with my new book The Internet Con: How to Seize the Means of Computation.
Hiii so I absolutely love your lockdown series, can't believe I've only just found it❣
Do you know of anyone else who writes cillian x reader or cillian x oc. They're so hard to find as there's lots of tommy fics that get tagged as Cillian x reader. TIA ❤
Hello lovely anon! Thank you so much for this, I’m delighted that you’re enjoying the lockdown madness 🥰 It always tickles me when other people like those goofs in love because they have a very special place in my heart ♥️
So, the Cillian writing fandom is quite small (but mighty!) but below are the folk I know for sure write stories for him. Where they have them, I’m linking you to their Cillian masterlists but most of these folk also write for Tommy Shelby and other Cillian characters too.
✨ @peakyscillian -> masterlist <- gotta start with my BESTIE who writes some of the most epic Cillian smut in the land. Her Date series and Bend The Rules are brilliant, please go and check out her work 🤍
✨ @look-at-the-soul -> masterlist <- Mar writes beautiful Cillian stories and has a long running series (that I really MUST catch up on! 🙈)
✨ @gypsy-girl-08 -> masterlist <- one of the doyennes of the Cillian literary fandom 🤭 So many amazing (and smutty!) stories including a staggering number of series.
✨ @queenshelby -> masterlist <- another long running Cillian writer with a penchant for smut (though often with age gaps, just fyi, in case that’s not your particular jam)
✨ @cillspropertea -> masterlist <- Annie writes beautiful, heartfelt stories for Cill. She hasn’t been as active lately but she has some great stuff in her back catalogue!
✨ @creativepawsworld -> masterlist <- I’m quite new to Scarlett’s wonderful work but I’m so glad we crossed paths because she’s great! Especially check out her young 90s Cillian story (another I need to catch up on 🤦🏼♀️).
✨ @missymurphy1985 -> masterlist <- one of the OGs of the Cillian writing world, and although she is no longer producing new works, I cannot recommend her back catalogue highly enough. One of the first writers I started reading when I joined as a lurker 🤭
EDIT: Since Oppenheimer lots of new people have started writing for Cillian. Below are just a few of them.
✨ @mypoisonedvine -> masterlist <- jd writes some of the best one shots I’ve come across for such a big range of Cillian characters (including himself). She only just started writing for him in July and has already (as of 6 sept) written over 100k words for him! Word of warning though, lots of her stories are DARK AF so please read the warnings and engage only where you’re comfortable. It’s your job as a reader to curate your consumption.
✨ @burnyouwithacigarettelighter -> masterlist <- another newbie to the Cillian world and I would never have guessed she’d only just started writing smut! Check her out!
✨ @floralcyanidee / @floralcyanide -> masterlist <- her main is currently shadowbanned so adding both accounts here. Great smut and a lovely new find for the Cillian collection!
I’m sure I’ve forgotten people and if I have, I’m sorry guys, it’s not deliberate! I’m just stupid! 🙈 Please do feel free to add yourself or your friends to this list and help an anon out finding some quality Cillian content 🙌🏼 xx