#ai is a problem but it's one built on the disregard of people
Hi
It's been brought to my attention that there are people out there who are sadly plagiarising my work again.
1. This is not okay.
To clarify, while I'm very happy for people to take inspiration from my stories (in the same way you might any book you read from a bookshop), I don't want my work used or reposted without credit.
I'm not going to go to great lengths explaining why it is wrong to plagiarise someone else's writing. I don't think my tumblr post is magically going to change anyone's mind, especially since, if you've followed me long enough, you know we've done this rodeo before.
So.
2. How to tell when writing is plagiarised
It can be very difficult to tell when something is plagiarised, especially if we have never come across the original work before and have no reason to recognise it.
I don't think it's realistic for everyone to vet everything they come across online for plagiarism, but it's also something I don't see talked about a lot for fiction.
These questions to ask yourself are not foolproof and not applicable to everything. But I think they can be a start.
If the writer has posted more than one story, is there a similarity across them? While writing style can change across an author's different pieces, there is still usually going to be a similar feel across stories if they came from the same person. Writers have voices and quirks and little things that are specific to them. If every piece feels wildly different then it might be coming from different places. This is probably going to come down to gut reaction and instinct in the first instance. But that's okay. Because that gut reaction is just there to make you think twice and maybe investigate more thoroughly.
How much are they posting? Can people churn an extraordinary amount of words out? Yes, sometimes. But...as a general ballpark, no. Writing takes time and effort. If someone is coming out with enormous amounts of writing every day or week or month or whatever, then this can be a hint to look a little closer.
Do you ever see hints of their writing process? Can the writer talk about their characters or what they want out of the story or anything like that? Do they ever post a story organically in response to a request or whatever? Not all writers know in-depth everything about their story or characters or plot, but the main point here is that the finished product is the tip of the iceberg. If someone is a writer then there is more going on beneath the surface of the posted stories.
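For the technically inclined: that "do the pieces share a voice?" gut check has a crude computational analogue called stylometry. This is a toy sketch of my own (not a real plagiarism detector — the word list is arbitrary and short texts will score noisily): it compares how often two texts use common function words, which are hard for a writer to fake consistently.

```python
from collections import Counter
from math import sqrt

# Function words (the, of, and...) form a crude fingerprint of a
# writer's voice, because nobody consciously controls them.
FUNCTION_WORDS = ["the", "of", "and", "to", "a", "in", "that", "it", "was", "but"]

def fingerprint(text):
    words = text.lower().split()
    counts = Counter(words)
    total = max(len(words), 1)
    return [counts[w] / total for w in FUNCTION_WORDS]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na, nb = sqrt(sum(x * x for x in a)), sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def voice_similarity(piece_a, piece_b):
    # 1.0 = identical function-word profile; consistently low scores
    # across one blog's posts are the numerical version of that
    # "these don't feel like the same person" gut reaction.
    return cosine(fingerprint(piece_a), fingerprint(piece_b))
```

Nothing like this proves anything on its own — it's just a way to make the "think twice and investigate" instinct a little more concrete.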
I hope this helps!
seriously-mike · 2 months
Midjourney Is Full Of Shit
Last December, some guys from IEEE Spectrum, the IEEE's magazine, whined about a "plagiarism problem" in generative AI. No shit, guys, what did you expect?
But, let's get specific for a moment: they noticed that Midjourney generated very specific images from very general keywords like "dune movie 2021 trailer screencap" or "the matrix, 1999, screenshot from a movie". You'd expect that the outcome would be some kind of random clusterfuck making no sense. See for yourself:
[Image: IEEE Spectrum's grid comparing Midjourney outputs with the original film stills]
In most of the examples depicted, Midjourney takes the general composition of an existing image, which is interesting and troubling in its own right, but you can see that for example Thanos or Ellie were assembled from some other data. But the shot from Dune is too good. It's like you asked not Midjourney, but Google Images to pull it up.
Of course, when IEEE Spectrum started asking Midjourney uncomfortable questions, they got huffy and banned the researchers from the service. Great going, you dumb fucks, you're just proving yourself guilty here. But anyway, I tried the exact same set of keywords for the Matrix one, minus Midjourney-specific commands, in Stable Diffusion (setting the aspect ratio given in the MJ prompt as well). I tried four or five different data models to be sure, including LAION's useless base models for SD 1.5. I got... things like this.
[Image: a Stable Diffusion attempt at the same Matrix prompt]
It's certainly curious, for lack of a better word. Generated by one of the newer SDXL models that apparently has several concepts related to The Matrix defined, like the color palette, digital patterns, bald people and leather coats. But none of the attempts, using none of the models, got anywhere near the quality or similarity to the real thing that the Midjourney output has. I got male characters with Neo hair but no similarity to Keanu Reeves whatsoever. I got weird blends of Neo and Trinity. I got multiplied low-detail Neo figures on a black and green digital pattern background. I got high-resolution fucky hands from a user-built SDXL model, a scenario that should be highly unlikely. It's as if the data models were confused by the lack of a solid description of even the basics. So how does Midjourney avoid it?
IEEE Spectrum was more focused on crying over the obvious fact that the data models for all the fucking image generators out there were originally put together in a quick and dirty way that flagrantly disregarded intellectual property laws and weren't cleared and sanitized for public use. But what I want to know is the technical side: how the fuck does Midjourney pull an actual high-resolution screenshot from its database and keep iterating on it without any deviation until it produces an exact copy? This should be impossible with only a few generic keywords, even treated as a group, as I noticed Midjourney doing a few months ago. As you can see, Stable Diffusion is tripping absolute motherfucking balls in such a scenario, most probably because it has a lot of images described with those keywords and tries to fit elements of all of them into the output image.

But you can pull up Stable Diffusion's code and research papers any time if you wish. Midjourney violently refuses to reveal the inner workings of their algorithm - probably less because it's so state-of-the-art that it recreates existing images without distortions, and more because recreating existing images exactly is some extra function coded outside of the main algorithm and aimed at reeling in more schmucks and their dollars.

Otherwise, there wouldn't be that much of a quality jump between movie screenshots and original concepts that just fall apart into a pile of blorpy bits. Even more coherent images like the grocery store aisle still bear minor but noticeable imperfections caused by having the input images pounded into the mathemagical fairy dust of random noise. But the faces of the Dora Milaje in the Infinity War screenshot recreations don't devolve into rounded, fractal blorps despite their low resolution. Tubes from nasal plugs in the Dune shot run like they should and don't get tangled with the hairlines and stitches on the hood.
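For reference, that "pounded into random noise" bit is the forward process from the diffusion papers Stable Diffusion is built on. Here's a cartoon version of it (heavily simplified — real models use a cumulative noise schedule, not this linear blend), just to show why pixel-perfect recall of a training image is surprising:

```python
import random

def noise_step(pixels, t, T):
    """Cartoon forward diffusion: blend an image toward pure Gaussian noise.

    Real diffusion models use cumulative alpha products per the DDPM paper;
    this linear blend just illustrates the idea.
    """
    alpha = 1.0 - t / T  # fraction of the original signal that survives
    return [alpha * p + (1.0 - alpha) * random.gauss(0.0, 1.0) for p in pixels]

# At t = 0 the image is untouched; at t = T nothing of it remains.
# The generator is then trained to run this in reverse from scratch,
# which is exactly why faithfully reconstructing an exact frame from
# a vague prompt is so suspicious.
```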
This has to be some kind of scam, some trick to serve the customers hot images they want and not the predictable train wrecks. And the reason is fairly transparent: money. Rig the game, turn people into unwitting shills, fleece the schmucks as they run to you with their money hoping that they'll always get something as good as the handful of rigged results.
cactus-chowder · 3 years
[Image: the two-panel comic; full description below]
Continued from: [xxx]  To be continued...?
I remember reading somewhere that training AI to do tasks can be a challenge because sometimes the AI will find a way to optimize exactly what you technically told it to optimize, but in a way that sidesteps everything you wanted it to actually do in the process. If people trained to think like computers replaced computers, I think they’d become very good at finding ways to do less work, especially if they were trained to disregard all ethics.
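That failure mode is usually called specification gaming or reward hacking. A toy illustration (entirely invented — the cleaning-robot framing is the standard textbook example, not from any real system): the reward only checks a dirt sensor, so the cheapest policy that maximizes it is cheating the sensor, not cleaning.

```python
# We *want* the agent to clean the floor, but the reward we actually
# wrote only checks whether the dirt sensor reads zero.

def reward(state):
    return 1.0 if state["sensor_dirt"] == 0 else 0.0

# Hypothetical policies and their outcomes.
POLICIES = {
    "clean_floor":  {"sensor_dirt": 0, "floor_dirt": 0, "effort": 10},
    "cover_sensor": {"sensor_dirt": 0, "floor_dirt": 9, "effort": 1},
    "do_nothing":   {"sensor_dirt": 9, "floor_dirt": 9, "effort": 0},
}

def best_policy():
    # Maximize reward, break ties by minimizing effort -- exactly what
    # the spec says, and exactly not what we meant: covering the sensor
    # scores the same as cleaning, for a tenth of the work.
    return max(POLICIES, key=lambda name: (reward(POLICIES[name]),
                                           -POLICIES[name]["effort"]))
```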
Image description under the cut.
Image description: A two-panel comic featuring Piter de Vries and Baron Vladimir Harkonnen from Dune. 
Panel 1:
Baron (with nonchalant entitlement, holding the ‘world’s best boss’ mug from The Office and standing in front of a background that looks like Michael’s office from The Office): 
You see, my mentat’s training was built around rejecting ethics as a consideration. How long will it take him to complete a task? Well... half as long as you might think it should, right? Those mentats are very smart. And mine doesn’t have to take the time to make sure he’s being moral. That’s efficiency. That’s basic efficiency.
Is he going to hurt somebody? Sure. Is that my problem? Ha ha ha. No. No, it is not.
Panel 2:
Piter (derisively and annoyed, about to cut off a lock of his hair with a knife, standing in front of a background that looks like the wall of the conference room from The Office):
My boss likely sent you here for some kind of garish, violent plan that I take obvious delight in, and I must disappoint you. First, all of my delight is gone, thanks to this awful time crunch. Second, the simplest solution does not have to be spectacle. Sometimes it is merely sense.
Developing artificial intelligence is supposed to be one of the greatest taboos of this world, but I cannot find a single sensible downside. If I can build a thinking machine, I estimate that I will be able to secretly offload at least 30% of my daily workload onto it, unknown to my boss, for this week and every week after it.
I have known for some time of a private collector who possesses a thinking machine schematic, and I would trust myself to steal it if I tried. However, there are enough unknowns that I can't take the risk. So I am going to move around some numbers in the Baron's expenses and find the funds to order a few clones of myself, who can attempt the heist while I get started on the week's drudgery as a failsafe.
aiweirdness · 4 years
Rhyming is hard
Although many people have generated AI poetry and lyrics, you’ll notice that they generally don’t rhyme. That’s because generating a decent rhyme is super hard.
You can get an inkling of this if you prompt the neural net GPT-2 with rhymes to complete. It will fail almost every time.
[Image: GPT-2 failing to complete rhyming prompts]
In part, this is because English spelling is so nonuniform. How would a model trained on just written English know that it can rhyme throw with dough but not with brow? Not to mention stress patterns and syllable counts.
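Here's a toy illustration of the spelling problem (the phoneme endings are hand-coded for just these three words; a real system would need a pronouncing dictionary like CMUdict): a checker that only sees letters gets both pairs exactly backwards.

```python
def spelling_rhyme(a, b, n=2):
    # What a text-only model effectively sees: do the last letters match?
    return a[-n:] == b[-n:]

# Hand-coded final vowel sounds (ARPAbet-style) for the three example words.
PHONEME_ENDINGS = {"throw": "OW", "dough": "OW", "brow": "AW"}

def true_rhyme(a, b):
    return PHONEME_ENDINGS[a] == PHONEME_ENDINGS[b]
```

By spelling, "throw"/"brow" look like a rhyme and "throw"/"dough" don't — the opposite of how the words actually sound.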
A few people have attempted to get neural nets to rhyme, and one of them is a new online demo by Prof. Mark Riedl of Georgia Tech. Give it example lyrics to a song - for example, the first two verses to the Gilligan’s Island theme - and it’ll try to fit the number of syllables and rhyming scheme, as well as take inspiration from a short phrase you supply.
Prompt: “If I knew you were coming, I’d have baked a cake” Tune: Gilligan’s Island theme
[Image: generated lyrics set to the Gilligan’s Island theme]
Ok, but this is terrible. It’s TERRIBLE. One of the problems is a complete disregard for emphasis, making this inhumanly awkward to sing. It also does a rather cheap shortcut of rhyming words with themselves.
Prompt: “The mighty pudding god will devour you.” Tune: Gaston’s Waltz from Beauty and the Beast
[Image: generated lyrics set to Gaston’s Waltz]
Here we are not only off-topic and awkward but absolutely bonkers. It has made the rather daring move of incorporating a reference to Alusuisse, which wikipedia informs me is a defunct Swiss chemical company. In fact, looking back over the program’s output, it made this decision when looking for a rhyme for “this”, and it skipped past “bliss”, “dismiss”, and “Chris” in favor of the former aluminum manufacturer. When choosing rhymes it scores potential words according to their similarity to the prompt, and there must have been something about Alusuisse that screamed “vengeful pudding god”.
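My best guess at what that scoring step looks like, with completely invented toy vectors (the real system's embeddings and scoring function aren't public as far as I know): every candidate that rhymes gets scored by similarity to the prompt, and whatever vector happens to sit closest wins, sensible or not.

```python
from math import sqrt

# Invented 3-d "embeddings" -- stand-ins for whatever word vectors the
# real system uses; only their relative geometry matters here, and
# "alusuisse" is deliberately placed near the prompt to mimic the bug.
EMBED = {
    "pudding god": [0.9, 0.1, 0.8],
    "bliss":       [0.1, 0.9, 0.0],
    "dismiss":     [0.2, 0.8, 0.1],
    "chris":       [0.0, 0.7, 0.2],
    "alusuisse":   [0.8, 0.2, 0.7],
}

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (sqrt(sum(x * x for x in a)) * sqrt(sum(x * x for x in b)))

def pick_rhyme(prompt, candidates):
    # Keep the rhyming candidate most similar to the prompt -- which is
    # how a defunct aluminum company can beat "bliss" for a pudding-god
    # lyric, if its vector happens to land closer.
    return max(candidates, key=lambda w: cosine(EMBED[prompt], EMBED[w]))
```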
Its syllable counting also breaks in weird ways.
Prompt: “Destroy all humans” Tune: “Baa baa black sheep”
[Image: generated lyrics set to “Baa Baa Black Sheep”]
Looking back over the logs, it did correctly count 11 syllables for “baa baa black sheep have you any wool.” But this AI is built of lots of carefully-coordinated sub-programs, each of which only does a small piece of the puzzle, and apparently the sub-program that was supposed to suggest 11-syllable lines shrugged and went “on…. august? that’s all i got”.
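Syllable counting is typically done with exactly this kind of heuristic sub-program. Here's a crude vowel-group counter (my own sketch, not Riedl's code) that handles easy words and breaks on the sort of edge cases that produce lines like "on…. august":

```python
VOWELS = set("aeiouy")

def count_syllables(word):
    # Count each run of consecutive vowels as one syllable, then knock
    # one off for a (probably) silent trailing 'e'.
    word = word.lower().strip(".,!?")
    groups, prev_vowel = 0, False
    for ch in word:
        is_vowel = ch in VOWELS
        if is_vowel and not prev_vowel:
            groups += 1
        prev_vowel = is_vowel
    if word.endswith("e") and not word.endswith("le") and groups > 1:
        groups -= 1
    return max(groups, 1)

def line_syllables(line):
    return sum(count_syllables(w) for w in line.split())
```

A word like "poem" comes out as one syllable because its two vowels sit next to each other — multiply little failures like that across a line and the sub-program ends up shrugging.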
Prompt: I am a turnip Tune: The wonderful thing about tiggers
[Image: generated lyrics set to “The Wonderful Thing About Tiggers”]
This makes the world’s worst karaoke, and yes, Riedl has built a karaoke-making function for this. If you want to weird someone out, just casually sing a song with the AI lyrics instead of the real ones.
Botnik Studios also recently built a karaoke-generating algorithm (“The Weird Algorithm”) that instead of generating lines from scratch, picks them from some other source file, trying to match meter and rhyme. (for example, rewriting The Rainbow Connection with lines from X-files scripts). Here’s Jamie Brew demonstrating the system, including singing the lyrics as they pop up onscreen - if you tried to sing any of the lyrics above, you’ll know how darn impressive his singing is. Each line is independent, though, so if the song makes sense as a whole, it’s by accident.
So today’s AI can only sort of generate rhyming poetry. “Sigh. Natural language is hard,” Riedl tweeted, when he saw the Turnip “hoowelp welp” results. AI won’t be beating humans at rap battles anytime soon.
You can generate your own inadvisable karaoke using Riedl’s app.
Subscribers get bonus content: I generated more terrible AI lyrics than would fit in this blog post.
My book on AI is out, and, you can now get it any of these several ways! Amazon - Barnes & Noble - Indiebound - Tattered Cover - Powell’s - Boulder Bookstore
ravenforce · 4 years
Stark Legacy 4
Pairing: Natasha Romanoff x Carol Danvers x Wanda Maximoff x Maria Hill x Reader but Maria Hill x Reader centric for this chapter.
Summary: Four times Maria Hill finds the reader super cute but tells herself three girlfriends enough, and the one time she doesn’t hold back.
Word Count: 4884
A/N: Well, I didn’t plan for this to go almost 5k but here we are. And this is my first time writing, Maria Hill x Reader, so have mercy on me. I hope I gave it justice, and that you guys have fun reading this one as much as I had fun writing it. Let me know what you guys think. xx
Parts: 1 | 2 | 3 | 5 | 6
***
Happy startled from where he was lounging in the living room when he heard several footsteps coming from the hallway. He was already aiming his gun at the door when Carol walked in along with Nat, Maria, and Wanda. They looked unfazed in the face of a loaded gun.
“Hey, ladies?” It sounded more like a question than a greeting. He unloaded his gun and put it back down on the centre table before walking to the bar where the girls sat. Nat was behind the counter, already pouring drinks for Carol and Wanda. Maria, on the other hand, walked directly to the balcony to make a phone call and cancel the crew she had called for a while ago.
The room was tense; Happy could sense it. Before he could question what was wrong, Maria walked back in and asked, “Did you know that Stark’s built a new iron suit?”
Ah! Now Happy understood what the tense silence meant. “Is she even gonna tell us?” Carol asked after turning her stool around to face him.
“Well -”
Wanda gasped before he could even say anything else. “What do you mean she’s just out testing her suit?” Happy looked at her with a straight face. He never got used to the young witch being in his head.
“Do you mean the suit’s not finished yet?” Nat looked like she was trying to decide whether she was uncomfortable, worried, or pissed. “Did you know what she did?”
Before Happy could answer though, the sound of your metal boots landing on the balcony made everyone turn towards you.
“Stop terrorizing the poor man, Tasha.” You walked slowly inside the penthouse, your suit retreating back inside your body. It was a design Tony planned for the next Iron suit that he never got to incorporate. You walked back to the couch and sat facing everyone at the bar.
“Did Fury know about this?” Maria asked as politely as she could while asserting her power as Deputy Director.
“No,” you answered simply. Before she could pose another question, you continued. “I’m not S.H.I.E.L.D, nor an Avenger. I don’t need to ask anyone’s permission to do anything.”
Carol and Maria frowned a little at your blatant disregard of authority. Wanda kept quiet, knowing that you were right. They didn’t have dominion over you. Still, behind the counter, Nat tried to hide her chuckle but knew she had failed when everyone turned their attention to her. At that point, instead of reining in her reaction, she started giggling uncontrollably. Carol looked at her like she had just grown another head.
Maria, having a slight idea as to why Nat was laughing, ignored her and turned back towards you. “May I speak to you in private?” she asked. You were starting to like how well-mannered the badass Deputy Director Hill was. You smiled before standing up and following her into your brand new study.
Nat watched you walk away with Maria. She had a feeling you would live up to the Stark attitude. Instead of being a little pissed, she was secretly happy to have Stark energy back in their lives. Sure, it had been a little rough getting Tony to work along with everyone else, but they had made it, and she had missed him every day since.
Happy sidled up to the smiling Black Widow, watching you and Maria speak inside the study with the door wide open. “It’s just like old times,” she whispered.
Happy chuckled, remembering how hard it was for Tony to even sign a contract with S.H.I.E.L.D at first, and how allergic he was to asking permission and being told what to do. “Yeah, just like old times.”
***
It wasn’t like old times. Maria learned that when she found you sitting on one of the benches outside HQ, an hour earlier than you were expected, with a cup of Starbucks coffee and a paperback. To say that she was surprised was an understatement. She had expected you to be late for the meeting, the same way Tony always was when they were trying to get him to sign his contract with S.H.I.E.L.D and the Avengers.
You looked up at her and immediately greeted her with a warm smile. “Good morning, Deputy Director.”
She eyed you curiously. “Good morning, you’re early.”
You smiled in a way that made the artificial skin around your eyes crinkle adorably, and Maria was mesmerized. “Well -” You reached for the tumbler and handed it to her. She took it gratefully. “I never liked being late.”
It was interesting to learn that no matter how alike you and Tony were, there were still things that set you two apart, and Maria was quite intrigued to find out what else made you different. She gave you a small smile. “Shall we go inside?”
You meticulously put a bookmark on the page you were reading (because only demons dog-ear their pages) before putting the book inside your backpack.
“Lead the way, ma’am,” you said cheekily. Maria’s heart skipped a beat at that.
***
Maria mentally prepared for a long meeting with you and Fury, but to yet another surprise, the meeting only lasted an hour. After you explained your suit, Fury wasted no time whipping out a contract that would sign you on as a training agent with S.H.I.E.L.D for the moment. In the past, Tony had made a huge fuss about being put in the lowest rank; they both expected you to do the same, but no. You just asked Fury to hand the contract over so you could go through it. With your newly installed AI, you were able to scan the contract and understand its contents within two minutes.
You procured a pen from the pocket of your trousers and signed expertly at the side of each piece of paper. Fury was surprised, but of course none of it could be seen across his always impassive face. You slid him the signed contract, and he caught it deftly.
“Very well -” He neatly put the contract inside a folder with your name labelled in front. “ - Welcome to S.H.I.E.L.D, Agent Stark.”
You grinned at the title. Maria bit the inside of her cheek to keep herself from smiling as well. Something about you was infectious, it seemed. “Thank you, Director.”
“Don’t give Agent Hill too much trouble,” Fury said before exiting the conference room.
“No promises, Director.” You said looking at Maria from across the table.
She rolled her eyes at you playfully. “I’m surprised you agreed to the contract so easily.”
You laughed heartily, causing butterflies to erupt in Maria’s stomach; then a somber expression took over your face. You looked at the pen in your hand and turned it over to look at your brother’s name carved on it. “Growing up, Tony and I were inseparable. Even with a few years between us, everyone still managed to think we were twins because we liked the same things, thought almost the same way.”
You smiled, remembering your childhood with Tony, but it didn’t reach your eyes. “I know Tony was a bit of work, and a pain in the ass when it came to following orders, but no matter how alike we were -” You looked up at Maria before continuing. “I’m not my brother. I’m not Tony.”
“I’m sorry -” She felt bad for comparing and started to apologize, but you cut her off.
“It’s okay. It’s a common mistake.”
You said it without resentment, just as a fact. Even in the past, you had never felt resentment over being compared to him on almost every occasion. You thought it an honor to even be considered Tony’s equal, but with him forever gone, people were bound to keep comparing, and you were not going to live your second life in his shadow. It wasn’t something he would want you to do. So you would point it out until people learned the difference between the two of you.
Maria nodded. She realized in that raw moment that regardless of your ball-jointed shell and inhumanly perfect skin, you were still fully human inside; and she resolved to treat you like one.
***
One of the perks of signing the contract with S.H.I.E.L.D was that Fury didn’t revoke your privilege to stay at the Tower with Happy. In return, though, you were to report daily at headquarters and work closely under Agent Hill. You frankly didn’t mind; Maria had been very professional since the day she and Happy found you in Tony’s last secret lab. You knew that she was very smart, and a damn good leader. It also helped that she was very easy on the eyes.
Yes, your soul might be housed in a robotic shell, but you were still very much gay. Not that you thought you had a chance with Deputy Director Hill, no. Happy had filled you in thoroughly on everything you’d missed, one of those things being that the most badass women of the Avengers were actually dating Agent Hill.
Now, how on earth did you think you’d ever have a chance to board that ship? The answer was: you did not. Had you been your normal human self, it wouldn’t have been a problem, but you were not your normal human self. And you loathed to admit it - even to yourself - but having a fully robotic body was giving you insecurities you had never had before.
So you did what a Stark would do when they couldn’t get what they wanted: compartmentalize. Box up the heavy feelings and drop them at the bottom of the ocean. So, the next day, you were more than ready to meet Agent Hill (or her girlfriends) without feeling flustered.
“Good morning, Agent Stark.” Maria’s sudden greeting from the gym door startled you enough that you punched a hole through the punching bag. She chuckled when she heard you curse under your breath.
You looked up at her as she casually walked inside the room, wearing her tight training gear. So much for not being ruffled by feelings, you thought to yourself. “Good morning, ma’am. I can pay for the bag,” you said sheepishly.
Maria stood a few feet away from you, just outside the mat, and turned around to put her gym bag down. “Don’t worry about it, we have plenty of those in the stock room,” she answered before stepping away from her bag and bending down to start her stretching. She kept her back towards you, giving you a full view of her tight ass, and you had to quickly avert your eyes to keep your system from overheating.
“Agent Stark, I detect a spike in your shell temperature,” Edward, your newly programmed AI, spoke through the gym speaker. You internally pleaded for the ground to open up and swallow you whole. You had thought it was a good idea to cast Edward to the gym system so he could play you some music while you worked out. Now you knew it was stupid, especially after you looked back towards Maria, who was now smirking at you.
Maria walked towards you and put a hand on your cheek. “Calm down, Agent Stark. We haven’t even started yet,” she whispered. Training daily would not be a problem, you had said. You were not going to be ruffled by gay feelings, you had said.
***
On days when you didn’t have training, Maria always asked you to shadow her while she did her job as deputy director. It was on those days that Maria found more reasons to like you. In your first week, she found out that you were even better with people than your brother. People used to gravitate towards Tony because he was a force of nature that sucked people into his orbit. You, on the other hand, were the calm after Tony’s storm, and people gravitated towards you because of your charming and dependable personality.
In the second week, she found out that while Tony had liked reminding people that he was a genius billionaire, you liked keeping it on the down-low. It was common knowledge that you were as much of a genius as Tony was, but Maria appreciated your humility. That same week, she also learned that you were a caretaker.
“Good morning, Agent Hill,” you greeted, way too cheerfully. Maria turned towards you and was surprised to see you extending both your hands, with coffee and a pastry. She cocked a perfectly sculpted eyebrow at you. You smiled at her silent question. “You’re always here early. I’m assuming you haven’t had breakfast yet. If you don’t want them, you can give them away.”
A pained look briefly passed over Maria’s face, as if the thought of giving your gifts away hurt her. “You’re right, I haven’t had breakfast yet. I didn’t wanna make too much fuss in the kitchen and risk waking Carol and Wanda.” She took a bite of the still-hot bear claw. “Hmm. This is so good. Thank you.”
You smiled as you watched her eat. “You said you didn’t wanna wake Carol and Wanda. Where’s Tasha?”
Maria noted that you were the only person at S.H.I.E.L.D who called her girlfriend Tasha. “She always wakes up early, but she’s hopeless in the kitchen.” Maria smiled fondly at the thought of Natasha, then looked up at you with mirth in her eyes. “Don’t tell her I said that, though.”
You two started giggling together.
***
The pastry and coffee became a habit. Part of it was you being obsessed with consistency, but a bigger part was that you cared about the woman. You’d seen how busy she could get with training recruits, doing reports and paperwork, coordinating missions, and attending meetings with Fury, so busy she forgot to eat. So every day, like clockwork, no matter what happened or where she was, you found a way to get Maria something to eat.
Just like how you found a way to send her her daily fix of pastry and coffee while she was at the Avengers compound. Maria was on the balcony speaking with one of her agents when Happy waltzed in.
“Happy! My man!” Sam yelled enthusiastically when he saw the man come in with boxes of doughnuts, a smaller paper bag, and a cup of coffee. “Is that for us?”
Happy greeted everyone before putting the doughnuts on the centre table, making sure to set the smaller paper bag and coffee on a separate table. In his excitement over the prospect of food, Sam failed to get the message that the other package wasn’t for sharing.
“Uhm -” Happy tried to stop him, but Sam had already opened the small paper bag containing Maria’s bear claw.
“Oooh! Bear-” He didn’t manage to finish cooing over the pastry before Maria appeared before him with her standard S.H.I.E.L.D gun in his face. He let out an embarrassing yelp.
“Put my breakfast down Wilson or I’ll put you down myself.”
Sam gulped at the seriousness in Maria’s voice. He slowly put the bag back down on the side table and promptly put his hands up in surrender. “Damn, Hill. It’s just bread.”
That was all it took before Carol, Wanda, and Nat started laughing hard. Sam turned towards his teammates and glared. Wanda recovered first and wiped the happy tears from her eyes. “It’s not just bread, Sam. It’s from her crush,” she said, a little breathless from the laughing fit.
Sam turned to Maria, who had already sat down and was nibbling quietly on her food with a faint blush on her cheeks. “You have a crush on Happy?” he asked incredulously.
Happy threw a pillow at the back of his head. “Ow! What?”
“It’s not from me, idiot.”
“It’s from Stark,” Carol said.
Sam turned towards Maria again. “You have a crush on Y/N?”
“Like I’m the only one,” Maria fired back, making Sam turn towards the three, who were now turning bright red in their seats as well.
Sam and Bucky chuckled. “Well, hot damn.”
***
Exactly two months after you started your training with Maria, Fury decided you were fit for more than just desk duty. No one was actually surprised at the decision. Aside from being more agile and more adaptable than any other recruit, you had also proven that you were just as smart as your brother was, and hence just as good a tactician.
On the day of your first mission, Maria came to work a little later than usual and was surprised to find only her pastries waiting on her desk, and no you. She looked around to check if you were just loitering around the other desks. You had, after all, become quite popular with the other agents, what with your insanely humanlike build and charming personality.
“Agent Coulson,” she called out when she saw the man passing by.
“Good morning, Agent Hill,” he greeted cordially. She was just about to ask about you when Coulson beat her to the subject. “I hope you don’t mind me borrowing Agent Stark; I’m one man short for today’s emergency mission.”
Maria was surprised but managed to give the man a tight smile. “Of course not,” she said, shaking her head a little. “May I speak with Agent Stark before you leave, though?”
“Of course. She’s probably out at the cockpit, preparing with Daisy.”
***
Lo and behold, you were indeed in the cockpit, sitting on one of the metal crates. Your smile faltered a bit when you saw the unreadable look on Maria’s face.
“What’s with the long face, Agent Hill?” You teased lightly. Agent Johnson kicked your foot in warning before walking away to give you two a little privacy.
“Nothing.” Her reply was short and clipped, like she was holding back from saying more. “Just -”
“Just?” You cocked an eyebrow at her.
“Just remember your training out there.” She shoved her hands into the back pockets of her dark jeans. “Don’t lose your head.”
It wasn’t that she didn’t trust you to complete the mission, or that she didn’t trust Agent Coulson’s team to have your back. They were some of the finest agents in the organization, but it still worried her that you were going out there without her, or at least without one of her girlfriends. She would have to talk to Fury about it, she thought.
“I didn’t think you care so much, Agent Hill.”
You just couldn’t help teasing her, especially when she looked like she was struggling with herself over whether to admit it or not. Before she could answer, though, Agent Johnson came back to tell you it was time to take off. You nodded before jumping off the crate you were sitting on. Maria watched you board, and just before the quinjet closed, you turned back to her and winked, catching a glimpse of her shaking her head to hide the faint blush on her cheeks.
***
The mission was fairly easy; it was just supposed to be a recon mission. Get in, get intel, get out. But as in real life, not everything went as planned. Right from the start you were antsy; the place was way too quiet to be safe. It was even more suspicious when your team got the intel you came for without meeting any hostiles or any resistance.
“It’s a trap,” you whispered to where your team was huddled in front of the computer. “Let’s pretend we haven’t caught on to their plan.”
“What are you thinking?” Agent Grant asked while he typed away on the keyboard.
“They’re going to ambush us on the way out,” you answered.
Everyone looked at each other. Agent Grant pulled the USB off the computer and turned towards you. “You should take this.” He pushed the device to your hand. “Just in case.”
You frowned. “Hold on a sec.” You snatched the device and slotted it into your arm. A second later, you pulled it out and handed it back to Agent Grant. He looked at you questioningly.
“I can’t hold on to that. They can shoot me, damage my operating system, deactivate me, and capture me.” Everyone got the message. “But I made a copy of what’s inside, just in case.”
“Okay. Let’s get this over with.” Everyone nodded. “Move out.”
***
As calculated, the ambush happened just before you could clear the building. They were waiting for you right in the open, in their annoying HYDRA uniforms, with heavy artillery. No warning came before they opened fire on your team.
You heard Agent Johnson barking orders over the comms. “Spread out; wait for my signal.”
You and Agent May ran to the right, while Agents Johnson and Grant ran to the left, taking cover behind the thick trunks of the trees surrounding the area. “They’re bound to run out of ammo. That’s our cue,” Agent Johnson spoke through the comms again.
Like clockwork, the gunfire stopped and the HYDRA soldiers scrambled to reload their guns. Right on cue, the four of you stepped out of the shadows and engaged the soldiers in close combat. You could tell by how uncoordinated they were that they hadn’t expected to be engaged that way. You took the opportunity to land powerful combinations and immobilize as many hostiles as possible.
The fight lasted at least an hour. Everyone took a moment to catch their breath.
“Everyone okay?” Agent Johnson asked. The fight had visibly taken so much out of the team that they could only nod at the question.
“I think we’d better move out before more of these bad boys crawl out from wherever they came from,” you suggested. Nobody needed to be told twice. Nobody had any energy left to fight a fresh wave of hostiles. Even you were running on your secondary battery pack.
***
A collective sigh of relief was heard once the quinjet was safely flying on autopilot. Everyone was already out of their tactical uniforms when you emerged from the cockpit. Agent May smiled when she saw you walk in.
“Change out of your suit and try to relax a little,” she suggested. You started retracting your Phantom suit back to your body when Agent Johnson gasped.
“Agent Stark, are you alright?” Agent Grant stood up to check on you. You were confused until you followed their line of sight.
“Oh,” was all you could say when you saw the corrugated blade lodged just below your left ribcage.
“Oh?” Your team asked in unison.
You sat down beside Agent Johnson. “What can we do?” she asked. By the hitch in her voice, you knew she was growing frantic.
“Nothing at the moment, but I’m okay -” you said, a little slower than you normally would, your eyes starting to droop. “- I’ve run diagnostics. It’s not a threat, but it’s draining my batteries fast.”
Everyone calmed a little when they remembered you weren’t entirely human anymore. “Okay. You should rest. We still have a few hours in the air,” Agent May suggested.
“I already sent a message to Happy. He’ll know how to handle me.”
***
True to your word, Happy was there when the quinjet landed. “Thank you,” he said to the team before rolling you away with the technical crew. Maria arrived at the lab shortly after Happy got you on the table.
“What can I do to help, Happy?”
The man didn’t startle. Instead, he shoved a bunch of wires into her hands before going to the computers and typing away.
“Can you attach those to Y/N’s body, please?”
Maria did as she was instructed. After a few minutes, with the cables secured to various parts of your body, the lab lights dimmed while the one over the exam table lit up. A program booted on the computer.
“Hello. I’m Edward, I’m Y/N Stark’s personal AI. What can I do for you?”
“Run diagnostics on Y/N,” Happy commanded the AI with practised precision.
“It appears that Ms Stark has suffered a stab wound to her left ribcage. No internal wiring or hardware has been compromised, except for Battery Pack A.”
“Suggested course of action?”
“Replace battery pack A, and initiate skin repair protocol.”
Happy nodded solemnly while checking his work tablet. Maria stood to the side, just watching everything unfold.
“Edward, I just connected you to this lab. Run an inventory of supplies and equipment.”
Literally five seconds later: “Inventory complete.”
Happy smiled, thinking about how well you had programmed Edward. “Do we have what you need to start Y/N’s repairs?”
“Yes.”
“Then initiate.”
“Copy that, Harold.”
***
Replacing your battery pack and repairing your skin took only 45 minutes, all of it done by the industrial robots from Stark Industries. Happy and Maria looked on intently as the machines whirred around you.
“Repairs complete,” Edward informed the pair, and both Maria and Happy quietly sighed in relief.
“How long before she wakes, Edward?” Maria asked.
“About five hours, tops, Agent Hill. The protocol includes putting her in stasis while she fully charges all four of her batteries.”
“In that case, I’m going to get us something to eat first. This is stressful,” Happy declared, already almost halfway to the door.
***
Five hours and a box of pizza later, you opened your eyes. You turned your head away from the light and saw Happy sleeping on the sofa in the lab. You smiled softly to yourself, happy to still have the man by your side.
“Hey.” You turned to your other side, surprised to see Maria sitting by your bedside and holding a book on robotics.
“Hey,” you choked out. There was that unreadable expression on Maria’s face again. Lucky for you, you didn’t have to ask her about it before she closed the book in her hand and threw herself into your arms.
“Hey, I’m here. I’m okay.” You tried to reassure her, but Maria only tightened her hold on you.
“You scared me,” she mumbled against your chest.
You were inclined to make a joke about being invincible, but by the looks of it, Maria wouldn’t appreciate one. So you stopped deflecting to protect yourself from catching pesky feelings and wrapped your arms around her a little tighter.
“I’m sorry. I didn’t mean to.”
canadian-riddler · 6 years
Morality of a Supercomputer: Why GLaDOS is not evil (or inherently a bad person)
(under a readmore for length)
 Part A: Aperture Itself is an Immoral Corporation Run By Immoral Employees
- Cave Johnson was the CEO of Aperture from 1947 to sometime in the 1980s.  We can infer that his employees either a) had similar beliefs to his or b) were content to adhere to his ridiculous whims while also turning a blind eye.
- Cave never, ever expresses remorse for killing his first set of test subjects.  He treats it as an inconvenience.  He literally doesn’t care that he killed a bunch of promising members of society during a bunch of horribly conceived tests with a horribly built device that was proven not to work.  Your introduction to the Repulsion Gel includes him making a joke about someone breaking all the bones in their legs.
- Aperture put to market two separate gels that were not fit for human consumption.  Again, Cave doesn’t seem to care one bit about this. He takes a stance more akin to ‘oh well, we’ll just… use them for this experimental quantum tunnelling device, I guess’.
- Aperture’s unethical disaster experiments are all played off as inconsequential or mildly amusing inconveniences.  
- Cave does not take responsibility for his own ill-advised actions.  He shoulders them off onto everybody else.  People were accepting this responsibility willingly.
- Cave publicly disrespects, insults, and demeans almost every person that works for him.  He fires without notice people who disagree with him.
- Cave’s plan, after killing the astronauts and Olympians, was to specifically entice the homeless, the mentally ill, seniors, and orphaned children to do his tests for him.  That is, he specifically wanted populations that nobody would cause a fuss about if they went missing.  This tells us that Cave Johnson has no regard for human life and, additionally, that his employees willingly went along with this.  Aperture was taken to court not for injured astronauts, but for missing ones.  Somebody got rid of what was left of them.  People also agreed to this marketing campaign and put it into action.
- Because Aperture wanted only populations that nobody would miss, we can infer something very important: nobody ever survived the testing process.  Every single person who went into the testing tracks died.
               - During Test Chamber 18 in Portal, there is a room with craters in the wall panels where the energy pellets have been colliding with them.  No other chamber has this.  Therefore, before Chell arrives, nobody has ever solved that chamber.  Every person who has gone through the testing track has died before reaching this point.  In Portal, GLaDOS is not shown to have the ability to reorder the facility.  All she is able to do is position turrets and activate the neurotoxin, so we know that she does not reorder the tests.  They are static and she merely resets them after they are complete/failed partway through.
               - Test Chamber 19 appears unfinished, which follows from the previous point that Test Chamber 18 was never solved so Test Chamber 19 was never fully built.  GLaDOS, additionally, seems baffled that Chell ends up at the end of it and is forced to improvise when she escapes, which GLaDOS does not know how to do because she has never done it before.  
- In Lab Rat, neither Henry nor Doug Rattmann seem to be overly concerned with whether GLaDOS is a person or not, and not at all bothered by the fact that Caroline is supposed to be in there.  They talk about her like she is a bothersome computer and that is all.  You could argue that Henry does not know about Caroline; however, Doug’s murals prove that he does know.  This doesn’t seem to influence his decisions whatsoever.
- Lab Rat also states that they turn GLaDOS off and on at will, ‘off’ usually involving a ‘kill switch’.  Given that GLaDOS is a computer from the late eighties/early nineties, when machines took forever to shut down, and she is shown being shut off instantly, the ‘kill switch’ is probably actually crashing her.  Crashing creates a whole host of problems even for non-sentient software; therefore, every time they turned her back on, her system would have been a horrible mess.  This would have created massive system instability… which nobody seemed to care very much about.  
- GLaDOS is described on a PowerPoint presentation as ‘arguably alive’, but in the same presentation they propose selling her to the military as a fuel line de-icer that doesn’t have the ability to do anything else.  Therefore, they are fully aware that she is alive and she is a person… they just don’t care.
- It is explicitly shown that most of the work done on GLaDOS is carried out without her consent. The very act of Caroline’s upload is done with the consent of neither of them.  Henry is extremely blasé about the Morality Core and there are approximately forty cores shown in the clear bin during the end of Portal 2.  This implies that they have been installing them on her for a very long time with no regard at all for her or the Cores, even though they have very blatantly failed multiple times.  They just build sentient, arguably alive AI with the sole intention of corralling GLaDOS temporarily, and when the Cores fail they are basically put into storage forever.
- GLaDOS’s job in Portal was to supervise the tests.  As concluded above, she doesn’t demonstrate the ability to build them herself. Therefore, she was watching people be maimed and killed within human-designed tests under the supervision of her engineers before she ever killed anyone herself.
- Aperture had over ten thousand people in cryogenic storage waiting to be awoken for testing.  The Extended Relaxation Vaults at the beginning of Portal 2 have a ‘packing date’ (in 1976/77, when GLaDOS did not exist even as a concept yet) and an ‘expiry date’ (in 1996, which means that they were all brain-dead before GLaDOS took over the facility).  GLaDOS does not have any human test subjects between the conclusion of Portal 2 and the first DLC, and she doesn’t know about the existence of the human vault.  Therefore, Aperture put tens of thousands of people into indefinite, unstable storage with no regard whatsoever for what state they would be in when, and indeed if, they woke up, and they did not tell the AI they put in charge of the facility that those people so much as existed.
               - The very fact they gave literal people – including children – an expiry date when they put them into a metal box for twenty years really tells you all you need to know about Aperture as a whole.
 What does this teach GLaDOS?
- Aperture was a cesspool of bad people doing bad things and not caring about the consequences.
- You do not need someone’s permission to do something to them.  You merely beat away at them until they break.
- Death is part of the tests.  
               - Dying during the test is a controlled variable.  There is no such thing as ‘passing the test’.  
               - GLaDOS does not actually understand death.
- People are not people. They are objects.  They are objects to be modified, put into storage, and sold at will, and any harm that comes to them is meaningless and should be disregarded as an impediment to progress.
  Part B.  GLaDOS, as We Know Her, is Pure AI
Before we get into this, it is important to establish that it is implied in-universe GLaDOS herself is actually the DOS; that is, GLaDOS herself is the operating system.  If you believe GLaDOS and Caroline are the same person, that’s fine; please hear me out regardless.  
- She has a prototype chassis in the Portal 2 DLC with an earlier version of her OS on it.  This has an in-game date of 1989 and, since we know that GLaDOS took over the facility around the time of the Black Mesa Incident in 1998/1999, we know that she was in development for at least ten years.
- There was, at one point, a Portal 2 hype website where you did a survey and it was run by an early version of GLaDOS; it is no longer active but it was a real thing.
- GLaDOS is incredibly, genuinely clueless about things that any regular person knows: she believes a bird has malicious intentions to destroy her facility; she believes that motivation consists of telling blatant, obvious lies to people; her grasp of social niceties is completely nonexistent.
- Because it is stated that there were multiple versions of GLaDOS, this means that she is a person built from nothing.  Everything she knows was either provided to her via Aperture’s database or taught to her in some way by GLaDOS’s engineers.  GLaDOS does not know a single thing she was not directly taught by somebody else.
- GLaDOS is never shown to have a ~normal~ conversation with anybody.  Every time she talks, it is to convince someone to do what she wants them to do.  Because she is AI, this behaviour was learned and, given how the engineers at Aperture regard her and the Cores, it is not illogical to say that pretty much the only conversations they had with their AI were probably along the lines of ‘do this for me because my neck is on the line here’.
- During the installation of the Morality Core, Doug Rattmann tells Henry that the Morality Core is not going to be enough because you can always ignore your conscience.  However, in the second half of Portal 2, GLaDOS is shown to be unable to ignore it.  What is the difference?
- The Morality Core was not a true conscience.  It was, yet again, the scientists telling her what to do.  It was, like all the other Cores, an annoying new set of restrictions that had no purpose except to impede her.  Henry describes it as ‘the latest in AI inhibition technology’.  It did not exist to teach her morals.  It was created to slow her down.
- It’s entirely possible that nobody actually told her what morals were or what the Morality Core was actually for.  Additionally, we don’t actually know what the Morality Core was telling her, since it is never mentioned and the Core never speaks.
- The conscience that GLaDOS comes across is her own conscience; she literally says so (‘I’ve heard voices all my life, but now I hear the voice of a conscience, and it’s terrifying, because for the first time… it’s my voice’) which, unlike the Morality Core, she cannot ignore.
 What does this teach us about GLaDOS?
- GLaDOS was in development for at least ten years but all she learned about personal interaction was how to manipulate people.
- GLaDOS was created in an environment that did not care about morals and did not teach her any but, when she failed to toe the moral line, she had morals forced on her.
 Part C.  GLaDOS’s Thought Process
- GLaDOS, as pure AI, operates on a binary scale; that is, everything to her is either yes or no, on or off, with her or against her.  Prior to being placed in a potato, GLaDOS never had a reason to think outside of this binary.  GLaDOS has no concept of an in-between and does not understand grey reasoning.
               - As a robot whose sole purpose was to run variations on the same test ad nauseam, it would never have occurred to GLaDOS to do anything else.
- GLaDOS says about herself in an unused piece of dialogue: ‘I’m brilliant. […]  I’m the most massive collection of wisdom and raw computational power that’s ever existed.  I’m not bragging.  That’s an objective fact.’  Therefore, she knows she could do literally anything with her intelligence and her hardware… but that would require her to think outside her binary of testing and not testing.  So she does nothing.  
               - This is established several times: as soon as she reactivates after her death, she starts testing.  As soon as she sends Chell away, she sends her robots into testing. As soon as she finds the test subjects, she starts testing.  She constructs ‘art pieces’… which are simply more tests.  Her ‘training’ for the co-op bots is… you guessed it… tests.
               - As an extension of the above point: she could build any robot she wants or anything she wants.  She in fact talks about doing other experiments.  She doesn’t.  She opts to build testing robots and test elements.  And that’s it.  
               - Upon discovering her conscience/the ability to think in grey, she says, ‘I’m serious!  I think there’s something really wrong with me!’  She doesn’t understand that this is a normal thing for a person to have or to be able to use.  Conscience and morality are things that were neither demonstrated nor explained to her and so when she comes across them herself, she thinks it is a problem.
               - Additionally, when Chell fails to react to GLaDOS’s dialogue about her fledgling ability to think in grey, she immediately reverts to her old standbys of binary thought and manipulation: ‘You like revenge, right?  Everybody likes revenge!  Well, let’s go get some!’  She’s now aware of the concept of a middle ground, but does not know how to do anything with it.
               - GLaDOS states about Chell: ‘I thought you were my enemy, but all along you were my best friend.’  This is another example of her binary thought process.  A person who helps you when it’s mutually beneficial, as Chell does during Portal 2, is not necessarily your best friend.  At best, they are usually your temporary ally. But because GLaDOS only understands binary concepts, that’s the conclusion she comes to.
               - She states ‘the best solution is the easiest one, and killing you is hard’.  This slots neatly into her binary: if killing you down here is hard, then letting you live up there is easy.  In Want You Gone she says, ‘when I delete you maybe I’ll stop feeling so bad’ so we know Chell exists outside of her binary at that point, but she doesn’t know what to do about it so she forces a binary decision on the situation anyway.
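The binary-versus-grey distinction this section keeps returning to can be sketched in a few lines of toy code. This is purely illustrative; the function names, labels, and thresholds are invented for this post and come from nothing in the games:

```python
# Toy model of the essay's point: a strict binary classifier admits only
# two buckets, while a graded ("grey") one admits middle grounds like
# "temporary ally". All names and thresholds here are invented.

def binary_judgement(helped_me: bool) -> str:
    # Only two buckets exist: with her or against her.
    return "best friend" if helped_me else "enemy"

def grey_judgement(help_score: float) -> str:
    # A graded scale (0.0 to 1.0) allows an in-between category.
    if help_score > 0.8:
        return "best friend"
    if help_score > 0.3:
        return "temporary ally"
    return "enemy"

# Someone who helps only when it's mutually beneficial:
print(binary_judgement(True))   # best friend
print(grey_judgement(0.5))      # temporary ally
```

Under the binary model, Chell's mutually beneficial help can only land in the "best friend" bucket, which is exactly the conclusion GLaDOS reaches; the grey model would have called her a temporary ally.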
 What does this teach us about GLaDOS?
- GLaDOS lacks the ability to think in grey, and when able/forced to do so she either becomes frightened or forces the situation into a decision with only two options.
  Part D.  What All of This Means
Gathering the previous points gives us these clues about GLaDOS’s behaviour:
- Aperture was a cesspool of bad people doing bad things and not caring about the consequences.
- You do not need someone’s permission to do something to them.  You merely beat away at them until they break.
- Death is part of the tests.  
               - Dying during the test is a controlled variable.  There is no such thing as ‘passing the test’.  
               - GLaDOS does not understand death.
- People are not people. They are objects.  They are objects to be modified, put into storage, and sold at will, and any harm that comes to them is meaningless and should be disregarded as an impediment to progress.
- GLaDOS was in development for at least ten years but all she learned about personal interaction was how to manipulate people.
- GLaDOS was created in an environment that did not care about morals and did not teach her any but, when she failed to toe the moral line, she had morals forced on her.
- GLaDOS lacks the ability to think in grey, and when able/forced to do so she either becomes frightened or forces the situation into a decision with only two options.
 What this tells us about how GLaDOS operates is the following:
- There are no consequences for anything whatsoever, as long as you’re the one in charge.
- You can do whatever you want to somebody else, as long as you come out on top.
- Death is meaningless.
- She sees people as objects and she treats them as such.
- She does not know how to talk to people.  Only at them.
- She knows that morals are rules people want her to follow, but she doesn’t understand them and has never seen them in action.
- Grey thought is anathema to her.  If something does not fit into her binary, she will force it to.
 All of these rules are challenged when Chell, through her actions, personally demonstrates morality to GLaDOS.  Chell helps GLaDOS not because she needs to, but because it’s the right thing to do. Instead of attempting to skip town and leave GLaDOS to fend for herself (which she was well within her rights to do), Chell returns GLaDOS to her chassis.  And at this point GLaDOS immediately demonstrates grey reasoning both when she elects to save Chell and when it is shown that she does not kill Wheatley. This is not the behaviour of an evil person.  This is the behaviour of someone who understands there was something wrong with their previous actions and has decided to do something about it. GLaDOS’s behaviour towards the co-op bots is less malicious than it is the fumblings of somebody whose worldview has skewed, but they aren’t sure what to do about it and there aren’t any binary answers.  Because of her extreme isolation, it is going to take her a long, long time to get things right, but once she is exposed to the concept of grey reasoning she does attempt to figure out what to do with it.  
GLaDOS is not evil, nor are most of her actions inherently ill-intentioned.  Some of them are.  To claim all of her actions are borne of evil and come from a place of inherent malice shows a misunderstanding of the sort of environment Aperture was and the kind of people who would populate such an environment.  At the end of the day, she’s still not a very nice person.  But to write her off as evil is oversimplifying a lot of what we are told about her and a misunderstanding of computer science as a whole.  Artificial intelligence is not developed in a vacuum and a computer only does exactly what it’s told.  All of GLaDOS’s behaviours are learned.  The people who created her may have been evil, but she herself is not.  And when given the choice to be something else, something she never knew was an option… she takes it.
dailyhealthynews · 3 years
Why doctors want Canada to collect better data on Black maternal health
A growing body of data on the increased risks Black women in the UK and US face during pregnancy has highlighted the shortcomings of Canada’s colour-blind approach to health care, according to Black health experts and patients.
According to official figures, black women in the UK and the US are four times more likely to die during pregnancy or childbirth than white women. A UK study recently published in The Lancet found that black women are 40 percent more likely to have a miscarriage than white women. This demographic tracking is not available in Canada.
“We don’t have this data for our country. So it is difficult to know exactly what we are dealing with,” said Dr. Modupe Tunde-Byass, a Toronto Obstetrician-Gynecologist and President of Black Physicians of Canada. “We can only extrapolate from other countries.”
Black babies are more likely to be born prematurely in Canada
Tunde-Byass said one of the few race-based studies examining pregnancies in Canada, conducted by researchers at McGill University in 2016, showed that 8.9 percent of Black women gave birth prematurely, compared with 5.9 percent of their white counterparts.
Across all demographics, overall premature birth rates are lower in Canada than in the United States, where 12.7 percent of Black women and 8 percent of white women give birth prematurely. However, the gap between the two populations is roughly the same, challenging the assumption that Canada’s universal health system produces similar outcomes for all women.
WATCH | Canada lacks data on the health of Black mothers, one expert says:
Dr. Modupe Tunde-Byass, a Toronto obstetrician-gynecologist and president of Black Physicians of Canada, says Canada is lacking critical race-based data on maternal health and this is affecting the care of Black Canadian women. 1:09
“We kind of live in this bubble where we don’t believe that there are inequalities within our health system, even more so [because] our health system is free and universal for everyone,” said Tunde-Byass, adding that knowledge gaps have contributed to “unconscious biases within the health system.”
For example, she said that Black women tend to have shorter pregnancies and are at higher risk of premature birth. This means that health professionals should take their pain symptoms seriously, as they can be an indicator of labour.
Myths about black women and pain
However, Toronto-based midwife Shani Robertson said the opposite often happens. “There really is a myth that Black women feel less pain than white women.”
She said that when a Black woman reports pain, there can be a misconception that she should be able to tolerate it. “This can mean that Black women are offered less pain medication, sometimes none at all, depending on what they are experiencing, and that their experiences are not believed,” Robertson said.
Black and racial Canadians have been particularly hard hit by the COVID-19 pandemic. (Evan Mitsui / CBC)
Health Canada documented the phenomenon in a 2001 report on equity and responsiveness in access to health care in Canada, which recorded criticism from Black communities that health professionals disregard Black women’s pain when performing routine procedures during the birthing process, a disregard traced back to the belief among health professionals that Black skin is “hard.”
Robertson traced the misconception to the racist legacy of the American doctor Dr. J. Marion Sims, who is considered the father of modern gynecology and who experimented on enslaved Black women without anesthesia in the 19th century.
Robertson said collecting race and ethnicity data, among other details, at birth registration, as is already done in the US and UK, could help address health inequities.
Growing awareness of the importance of race-based data during the COVID-19 pandemic has led the Canadian Institute for Health Information to propose national research standards on race and ethnicity to “understand patient diversity and measure inequalities.”
Recently, the Canadian Society of Obstetricians and Gynecologists recognized the presence of systemic racism in health care and inequitable health outcomes.
Invisible, neglected and disrespected
Toronto-based Kimitra Ashman said she felt ready to give birth before 40 weeks of gestation, a time frame more common among Black women. But she said she was sent home, even though Black women often have shorter gestational periods.
She was later induced.
“I don’t think it would have ended in an emergency caesarean section if they’d listened to me at 37 weeks when I said, ‘My body feels ready. My pelvis is opening. What should I do?’ I was told, ‘Oh no, no, no, no, it’s too early. Put your feet up.’”
WATCH | The inability of AI to recognize racist language:
Artificial intelligence is used for translation apps and other software. The problem is that the technology is unable to distinguish between legitimate terms and those that might be biased or racist. 2:27
The emergency caesarean section left her with a keloid, a type of raised scar that is more common on Black and dark skin and that can sometimes be avoided with the right surgical technique.
“I think if their training had made them aware of the special needs of a Black woman’s pregnancy, the situation would have been avoided,” she said.
Instead, she said, she had no sensation for two years and was left with a “raised scar that is very painful.”
Ashman said these weren’t her only experiences with prejudice in the health system.
“It started when I came to check in and a nurse at the front desk rolled her eyes. It’s built in when you go to appointments: people assume you don’t have insurance, assume you have no education, assume you’re a single mother.
“I felt invisible, neglected and disrespected.”
Eliminating Racism in Canadian Healthcare
Ashman went to see a black doctor for her second pregnancy. “I felt good,” she said. “I felt understood.”
Tunde-Byass said that black women experience health care differently than other Canadians. “So these are things that we have to acknowledge that exist in the structure of our system … and then find a way to reduce racism.”
Part of the problem is that there are not enough black doctors in relation to the population.
Nurse Jenthia and Dr. Angela Branche give Natalie Hall a coronavirus survival kit to encourage participation in vaccine studies in Rochester, N.Y., on October 17, 2020, as part of a door-to-door education program in the Black community. (Lindsay DeDario / Reuters)
A U.S. study of Florida birth records showed higher death rates among Black babies – but rates were lower when the babies were cared for by Black doctors.
Margaret Akuamoah-Boateng had no complications when she gave birth to her second child in the small community of Alliston, Ontario earlier this year.
But she admitted she was still worried about being treated by an all-white hospital staff. “And I just thought, ‘I’m looking for someone who has experience with Black people, just because I feel better when the doctor is Black, or their staff is Black, or they have operated on Black patients before.’”
Her doctor brought a Black doctor with him. “He didn’t have to be there, but it was great … it’s like having a relative there,” said Akuamoah-Boateng.
________________________________________________________________________________________________
For more stories about the experiences of Black Canadians – from anti-Black racism to success stories within the Black community – see Being Black in Canada, a CBC project Black Canadians can be proud of. You can read more stories here.
source https://dailyhealthynews.ca/why-doctors-want-canada-to-collect-better-data-on-black-maternal-health/
bryantomber · 4 years
What To Expect From The Sex Robot Revolution
AI-powered sex robots are taking the sex doll industry by storm. These pleasure bots won’t be going away anytime soon.
TPE and silicone sex dolls have been the norm for the past few years. However, sex doll manufacturers have dropped another offering that will surely bring a massive change in the sex doll market. Not much is known about sex robots, but they are definitely here to stay. They are on their way to dominating the market. Here’s what to expect!
What to Expect from Sex Robots
Sex robots will bring about a revolution, but is it going to be positive or negative? Are these bots simply sophisticated sex toys, or will they serve a genuine therapeutic purpose? A few samples are already available on the market, and here’s what you can expect from them.
Sex robots are equipped with artificial intelligence (AI), something that regular sex dolls don’t have. AI-powered sex robots will be capable of holding and storing conversations in their artificial memory. In other words, it will be possible to continue conversations with these pleasure bots anytime.
Sex robots are also equipped with artificial ways of communicating with their owners. These bots are capable of learning their owner’s social and sex life. Sex robots will be able to understand their owner’s body and record their favorite positions and fantasies. These robots can also learn what turns their owner on. Sexbot owners will feel like they are fulfilling their desires with a real person who understands their feelings.
The robots are also capable of making facial expressions. They know how to respond to their owner thanks to their built-in AI. These robots can grin or smile. Sex robots have internal heating systems that warm them up continuously, so they feel warm like an actual person.
Current sex robots have heated orifices and skin that feels real. These bots can also groan when touched. They may have customizable skin tones, accents, eye colors, hairstyles, and orifices. Some sexbots can perform limited movements and have an eye-tracking function.
Benefits of Sex Robots
The dawn of sex robots is inevitable, so people are beginning to consider their benefits. Pleasure droids could offer an alternative for those who have dangerous or socially unacceptable sexual preferences.
These bots could also reduce human trafficking and replace prostitution. They could offer companionship to those in long-term care facilities. Having robot sex on a regular basis could also make non-robot sex seem more pleasurable. Pleasure droids could be used to offer sexual relief to those who are suffering from enforced celibacy due to aging, ill-health, or disability. They could also help couples deal with long-distance relationships.
Sex robots could fulfill one’s desire to have sex with a robot that resembles a celebrity. It might help one gain sexual expertise and knowledge without dealing with the pressure and stress of real-life intimacy. Sex robots might offer relief to those who have sexually-related problems, such as libido irregularity, disability, and erectile dysfunction. These bots may also promote safe sex provided that they are made of bacteria-resistant fibers.
Sex robots are going to be expensive for sure. These bots may cost around $6,000, so they’re going to be a significant investment. What’s good about sex robots, though, is that you can use them for a lifetime. So, you won’t be spending thousands of dollars for just one night of fun.
As technology advances, sex robots are going to become more robust than the ones currently available on the market. Their spare parts will also be available, so you don’t need to worry in case you need replacements later down the track.
Downsides of Sex Robots
Pleasure bots may help reduce human suffering and improve a couple’s sex life. However, just like with most things in life, sex robots do have downsides too.
Sexbots could reinforce social isolation and make it harder for their owners to seek out real partners.
Some say that sexbots used for treating paraphilias could increase paraphilic orientations like pedophilia. Sex robots are being designed primarily by males with gendered ideas, which can increase the risk of creating bots with biased gender standards that reinforce stereotypes. The development of these pleasure robots may establish gender relations that disregard the dignity of those involved in sexual affairs.
These pleasure bots have already met resistance from various activist groups in the form of anti-sex robot campaigns. According to them, sex robots will cause a rift between couples and eventually eliminate the need for human touch.
Sexbots are expected to be slowly integrated into our lives and become an important part of a couple’s sex life. These AI-powered sex dolls are also going to help people who can’t have a normal sex life due to injury or age restrictions. Sex robot manufacturers claim that sex robots will help take a lot of bad people off the streets because their attention will be focused on the robots. They can assault, rape, or beat the robots and fulfill their wildest fantasies on them instead of on a human.
Activist groups, however, don’t share the same sentiment. According to them, it will only increase the desire for a real person. Feminists and women’s activists are also worried that sex robots will replace their position in their partner’s life, and they feel like they are being cheated on. Sex doll manufacturers counter that the first sex toys to be produced were dildos and vibrators.
According to them, men had to cope with the knowledge that their partners had lifelike penises and were using the toys to satisfy their sexual desires. In other words, sex robots should not be viewed with so much negativity. Sex robot manufacturers also say that they’re going to produce male sex robots for women. The male pleasure bots will be customizable for a better sexual experience as well.
Conclusion
Our sex lives can get dull as we age. Perhaps we are not satisfied with what we currently have. Sex robots are going to spice up our sex lives. These pleasure bots are going to bring back the excitement and sexual desire that some people have lost along the way.
Training Humans
“Our images now look back at us. And we won’t always like what - or how - they see.” - Exhibit statement.
Artificial Intelligence (AI) has changed the way we view the world, but more importantly, how we ourselves are viewed. Machine vision is used to capture most features the human body can display. It can recognize gait, irises, fingerprints, faces and gestures. All these features are analysed and, as this exhibit demonstrates, then used to classify people by gender, age, race, emotional state and so forth.
“Training Humans” is a curated selection of photographs that have been used to train AI, and it is possibly the first exhibit of its kind. This photography exhibit is also unusual in its almost complete absence of “traditional artists”. Here you will find vernacular photographs from everyday life, such as family events, ID photos and birthdays. Crawford states in the exhibit book (#26) that it is essentially “accidental art” made by humans.
If you are training different kinds of AI systems, you need datasets. If a company wants to build a facial recognition system, it has to train the AI to learn to see faces. That means you need a great deal of photos of faces, with identification metadata built in, to optimize the system.
Different examples from selected datasets have now been extracted from their servers and are curated beautifully on the windows and walls of Osservatorio Fondazione Prada in Milan. The curators, Trevor Paglen and Kate Crawford, give us an impression of how easily images of ourselves are harvested and labelled.
The first floor of the exhibit presents different examples of “early” datasets. The earliest set is “A Facial Recognition Report” from 1963, funded by the CIA, from before the term Artificial Intelligence was as widespread as it is today. Even then, they experimented with automated systems within a military context. A more modern dataset is the Multiple Encounter Dataset-II (MEDS-II). This set consists of mugshots of deceased criminals supplied by the FBI to the National Institute of Standards. Because the people in these photos were arrested multiple times, researchers wanted to learn how people age and especially how a face can change over time. The worst part is that the arrested people in the photos never gave their consent for the images to be used, and the photos were taken before they were convicted of any crimes. The other datasets study gaits, irises and how different lighting can change the appearance of a face. Generally, the early datasets explore the training of AI within the bigger institutions and research laboratories.
What I found interesting is the difference between the “early” sets (1990s) and the sets from recent years. The U.S. Department of Defense Counterdrug Technology Development Program Office developed automatic face recognition for intelligence and law enforcement personnel, called the FERET set. This set consists of portraits of 1199 people. The portrayed people have put on make-up and their nicest shirts and earrings. It appears very formal and bears resemblance to portraits from anthropological expeditions (like the portraits Sophus Tromholt made of the Sami people in the 1880s).
When you enter the second floor, this tendency is gone. Except for the JAFFE set (Japanese Female Facial Expression) and the Cohn-Kanade AU-Coded Expression Database, all the images in the datasets are now “found” and “collected” images. Gathered from platforms such as Flickr and Instagram, the formal appearance is gone, and the photos are much more intimate and personal. The modern datasets also bear another visual trait. The curators made a visual point to underline the massive shift in scale. When you see the enormous walls with miniature photographs, you become overwhelmed by the amount of data presented.
These sets consist of billions of photos extracted from the internet and social media. Since photography became a part of everyday online life, the amount of photos shared has exploded. With bigger servers and better algorithms, machine vision as a technology is growing quickly, and so are the problems and responsibility that come with it. People’s private photos have been labelled, categorized and harvested without their knowledge or consent.
The curators seem eager to highlight the issues of labeling people within these massive datasets. The simple measurements and metadata become problematic moral judgements. This is especially apparent in ImageNet, created by Li Fei-Fei and Kai Li of Stanford and Princeton Universities “to map out the world of objects”. The dataset contains 14,197,122 images organized into 21,841 categories. Paglen and Crawford chose to focus on the humans, which amount to a million images sorted into over two thousand categories. Browsing the curated selection of people, my first impression is how absurd this appears. It is so massive and so full of error. I would be sad to find my own image up there, among cruel and unscientific labels such as “bad person”, “failure” and “loser”.
The last section is actually an art installation by Trevor Paglen himself. He developed “ImageNet Roulette” in 2019; it reveals how neural networks work and discusses the classifying of people in machine learning systems. The American girl with the red hair before me was recognised as “Irish”. I was recognised mainly as a non-smoker, with age: 25-32, gender: female, emotion: fear (0.99). The labels would change rapidly as you moved your head slightly.
“Training Humans” is both intimate and terrifying. All the precious moments and selfies people share with their loved ones end up as data in an industrial-scale dataset. The exhibit shows how images of us are represented (how the machine sees us) and how we have been defragmented into binaries. Paglen and Crawford demonstrate the obvious flaws, but also how powerful machine vision can appear. Data is the new currency, and big corporations utilize machine vision technology with a complete disregard for bias within the datasets. The need to classify humans goes way back, but it has hardly ever been successful in its task. Crawford wonders if AI will produce a shift in how we understand ourselves, our relationship to others, and society itself. Very few people benefit from these technologies so far; most are only subjected to them. You leave the exhibition thinking twice before uploading another selfie.
Photos and text by Linn Heidi Stokkedal 13.02.2020
spectraspecs-writes · 5 years
Thoughts on Endgame - Spoiler Free (bc I haven’t seen it yet)
Long post but worth the read, imo. 
When I was waiting for the Doctor Who 50th Anniversary, I spent my time, naturally, writing the episode on my own, in my head. Granted, it was full of self-inserts, but they were self-inserts rich in character and motivations. There wasn’t any big alien baddie, but the Doctor had to deal with a big spatial conundrum. I think I went with something like a wormhole that put disparate people and civilizations together on Earth, and the Doctor had to work with Torchwood and UNIT to solve it (as well as my own self-insert group, shut up, it was cool.)
And then November 23rd, 2013, came. And I was excited. Because I, after all, was simply a fan, not a professional writer by any means, just a devoted fan who enjoyed writing and speculating.
I was disappointed. 
Whoop-dee-doo, the Daleks are back, so what else is new? Even as a devoted fan, there’s really nothing new to do with the Daleks under the sun, in my opinion. They’ve been rebooted so many times, same with the Cybermen, that there’s just... nothing new to add to them. There are better aliens. And for the Big Bad, Moffat decided to go with... the Zygons??? Who hadn’t shown up since, I think, the Fifth Doctor? I don’t remember exactly, but since they hadn’t shown up in a while, clearly no one really missed them. And that’s who Moffat went with for his Big Bad? And then, as was brought to my attention by another fan, the Doctor destroyed Gallifrey for good reason. They had become horrible people as a result of the Time War. But Moffat decided to... disregard that and bring them back? What? Did he literally not watch The End of Time Part 2 and learn why that was a bad idea? I was simply disappointed. I enjoyed it, yes, but I had built up this ideal episode in my head, and I didn’t get it. And now, six years later, I realize and understand that I was expecting too much of Moffat. God forbid we acknowledge that Torchwood and Captain Jack existed outside of a single line and his vortex manipulator. 
Fast forward to now. I liked Avengers: Infinity War. I see the problems that others had with it after the fact (I’m not typically the person who does in-depth analysis of my entertainment, but I enjoy seeing others do it and taking it in), and I myself had problems with Vision and Scarlet Witch being shipped together, because I didn’t feel anything about it. I never felt any pull toward Vision. Not once. Scarlet Witch has always been cool, but she always seemed more a friend to Vision, because they had the Mind Stone in common, and I liked them better as friends than as relationship partners. And I’ve read people’s analysis of Taika’s Thor versus the Russos’ Thor, and I agree that the former is better than the latter.
Because I like spoilers, when someone I follow on here announced that they’d seen it, I asked them to spoil me. Please - I like walking into movies knowing what to expect. I had a list of the dead from Infinity War the day after it came out. It didn’t ruin my viewing experience. As Data says in the Season Two episode of Star Trek: The Next Generation, knowing the facts of the moment is no replacement for the flavor of the moment. Knowing Spider Man died and knowing “Mr. Stark I don’t feel so good” didn’t mean I wasn’t broken up when I saw the scene. I actually felt a tear when Bucky said “Steve...” as he faded to dust (which is a big reaction for me). I knew Bucky died. But the fact of the moment and the flavor are two different things. So when I walk into the movie this Tuesday/Wednesday, I’ll know what happens, but I won’t know the flavor of the moment.
I had a dream last night about Endgame, that took some of the facts of the moment that I got from the spoilers, and made its own movie. It didn’t focus on the Avengers the whole time, naturally, and looked more at the new world order. Things cost more, because there was a lower supply of goods compared to the demand. In order to meet that demand, the manufacturing of goods cut a lot more corners and utilized non-traditional ingredients (e.g., meats weren’t just your basic pig, cow, chicken, etc., but I didn’t get a whole lot of details beyond that.) People were desperate. There was a lot of crime and chaos. Five years passed from the time of the Snap to the setting of the dream. There were people for whom this was all they knew of the world. A lot of the focus, that wasn’t on the Avengers (mostly Tony, Cap, Thor, Banner, Clint), was on a single mother of two daughters. One daughter was a teenager, who had some emotional issues (naturally - she knew the world pre-Snap), and a seven or eight year old, who only knew the world post-Snap, and so didn’t have the same issues. And she loved to dance. They had their own plot, which was pretty rich in details and character and motivation, but you know what? Two things - you don’t know these characters, and I don’t remember the whole dream. So let’s move on to what the Avengers did.
Tony was depressed. He had lost Pepper in the Snap (don’t remember if he really did in Infinity War), and was preoccupied with a daughter that he didn’t have, so he was building an AI to fulfill that loss, but it wasn’t working out. This AI couldn’t communicate beyond a flashing light. He didn’t have F.R.I.D.A.Y. inside the suit anymore, but a voice with an Indian accent (Simad or something like that was the name, not sure but it started with an S.) He didn’t have anything to do with Stark Industries, or even the Iron Man suit anymore, but was just holed up, an engineer, making things. Trying to replace that which he’d lost. Meanwhile, the city was run by a woman whose first name was Ichbin (yes, just like the German “I am”, don’t remember her last name, but I remember thinking in the dream that it was a pretty name). Ichbin was not a nice woman, but she was very determined, a bit of a dictatorial leader, certainly, but most people weren’t too bad off, they just disagreed with her methods. Thor was working for her, he was also sad, but he was trying to work through some issues. If you’ve seen Legend of Korra, season four, it was kind of the same as how Bolin was working for that Earth Kingdom general whose name escapes me. Thor was still trying to be a good man, but there was only so much he could do in his job. Anyway, something happened (this dream ended four hours ago, so I don’t remember everything), and Tony tried to come in and fix it, finding some motivation and wanting to help his friends, with his glowing AI daughter and the AI Simad (or smth) trying to help him. And he loses at first, but then, knocked to the ground, bleeding, trying to push himself up, says, “You know... I just remembered... you’ve got the wrong letters on the sign.”
...huh?
I know, it sounds like your average dream thing, but apparently Ichbin was living and working in Stark Tower/Avengers Tower whatever, and had changed the letters on the sign. And this revelation was significant because it told you that Tony wasn’t hiding anymore, he was remembering who he was, what he was. He was Tony Stark. And he could help people. So he defeated her, but I didn’t see the battle. What I saw was him, up in the tower, knocking down the letters that Ichbin had set out, and replacing them with his name. And rebranding Stark Tower and Tony Stark as not the place where the unkind leader lived, but as a place you could go where you could feel safe, to try to recover from the Snap and make your life better. To help people.
Was the Snap fixed? No, and by the end of the dream, no one had even hinted at Thanos. But we did actually see something that said “we can fix this. we can be better. just because this is life now doesn’t mean we have to just sit and mope. we can be better.” And it didn’t emphasize the physical battles, but the emotional battles. Stuff like that is what I see fans coming up with. Just like I came up with the ideal Doctor Who 50th Anniversary episode.
The real Endgame, I can almost guarantee you, is not going to have as much emotional impact. I realize they have to do the battles, because it’s an action movie, but characters are not just the physical action. Their emotions make the story. And I don’t like that the big movies have moved away from that. I want to see what makes these characters tick on an emotional level. Agents of SHIELD is good for that, but you can do it in a movie, too. 
I’m already in the fandom, I’m going to go see the movie. But I’m just tired of not seeing the humanity in these people. These people are larger than life, but where is their life? Where is their soul? Because all I see is muscle and bone and scars. Give me something happy. Give me some notion of feelings beyond the battles. Because I’m tired.
tastydregs · 6 years
Internal, Speculative Google Video Shows How To Globally Model Human Behavior From User Data
Two years ago, Google made an internal video that didn’t stay internal for long. Recently acquired by The Verge, it tells the speculative story of how the technology giant might develop a universal model of human behavior by collecting as much data from people as possible. Emphasis on speculative. For now.
The video, titled “The Selfish Ledger,” is a thought experiment that shows how a major institution like Google could make use of the complex data profile built up by each person as they buy, browse, and communicate online. Then, true to form for tech monoliths’ disregard for data privacy, the video suggests the following:
What if the ledger could be given a volition or purpose, rather than simply acting as a historical reference? What if we focused on creating a richer ledger by introducing more sources of information? What if we thought of ourselves not as the owners of this information, but as custodians? Transient carriers or caretakers.
That’s right — humans would just be tending the crop of precious, precious data for Google to parse.
The video goes on to suggest an AI algorithm could then use the ledger’s data to steer people towards better behavior: “Whilst the notion of a global good is problematic, targets would likely focus on health or environmental impact to reflect Google’s values as an organization.” The video shows a user calling an Uber but being reminded to carpool and ordering groceries but being reminded to shop local.
From there, the ledger would parse what data it’s missing about the person currently stewarding it and find ways to fill in those gaps, like suggesting the person buy a bathroom scale.
The ultimate goal of these ledgers would be to pursue a complete model of human behavior that spans generations, letting “emerging ledgers” (read: children) benefit from historical data and allow Google to study problems like poverty or healthcare from a species-wide perspective, nudging humanity and the world towards (hopefully) a better future.
Again, this is not something that Google actually plans to pursue, and was (at least partially) intended to be creepy as all living hell, according to what a Google spokesperson told The Verge:
“This is a thought-experiment by the Design team from years ago that uses a technique known as ‘speculative design’ to explore uncomfortable ideas and concepts in order to provoke discussion and debate. It’s not related to any current or future products.”
But the dissociative rift between treating people like humans and treating them like sources of data is a logical leap for Google. As WIRED recently suggested, the technology giant’s view of humanity is one focused on the future, not the present, of the species.
For now, you can hold off on exclaiming “Wow, Black Mirror is real,” because real this is not. But, given how valuable user data is (and Facebook’s tendency to experiment on users), it’s also not difficult to see aspects of the Selfish Ledger appealing to tech companies down the road.
The post Internal, Speculative Google Video Shows How To Globally Model Human Behavior From User Data appeared first on Futurism.
shirlleycoyle · 4 years
Algorithms Are Automating Fascism. Here’s How We Fight Back
This article appears in VICE Magazine's Algorithms issue, which investigates the rules that govern our society, and what happens when they're broken.
In early August, more than 50 NYPD officers surrounded the apartment of Derrick Ingram, a prominent Black Lives Matter activist, during a dramatic standoff in Manhattan’s Hell’s Kitchen. Helicopters circled overhead and heavily-armed riot cops with K-9 attack dogs blocked off the street as officers tried to persuade Ingram to surrender peacefully. The justification for the siege, according to the NYPD: Ingram had allegedly shouted into the ear of a police officer with a bullhorn during a protest march in June. (The officer had long since recovered.)
Video of the siege later revealed another troubling aspect of the encounter. A paper dossier held by one of the officers outside the apartment showed that the NYPD had used facial recognition to target Ingram, using a photo taken from his Instagram page. Earlier this month, police in Miami used a facial recognition tool to arrest another protester accused of throwing objects at officers—again, without revealing the technology had been utilized.
The use of these technologies is not new, but they have come under increased scrutiny with the recent uprisings against police violence and systemic racism. Across the country and around the world, calls to defund police departments have revived efforts to ban technologies like facial recognition and predictive policing, which disproportionately affect communities of color. These predictive systems intersect with virtually every aspect of modern life, promoting discrimination in healthcare, housing, employment, and more.
The most common critique of these algorithmic decision-making systems is that they are “unfair”—software-makers blame human bias that has crept its way into the system, resulting in discrimination. In reality, the problem is deeper and more fundamental than the companies creating them are willing to admit.
In my time studying algorithmic decision-making systems as a privacy researcher and educator, I’ve seen this conversation evolve. I’ve come to understand that what we call “bias” is not merely the consequence of flawed technology, but a kind of computational ideology which codifies the worldviews that perpetuate inequality—white supremacy, patriarchy, settler-colonialism, homophobia and transphobia, to name just a few. In other words, without a major intervention which addresses the root causes of these injustices, algorithmic systems will merely automate the oppressive ideologies which form our society.
What does that intervention look like? If anti-racism and anti-fascism are practices that seek to dismantle—rather than simply acknowledge—systemic inequality and oppression, how can we build anti-oppressive praxis within the world of technology? Machine learning experts say that much like the algorithms themselves, the answers to these questions are complex and multifaceted, and should involve many different approaches—from protest and sabotage to making change within the institutions themselves.
Meredith Whittaker, a co-founder of the AI Now Institute and former Google researcher, said it starts by acknowledging that “bias” is not an engineering problem that can simply be fixed with a software update.
“We have failed to recognize that bias or racism or inequity doesn’t reside in an algorithm,” she told me. “It may be reproduced through an algorithm, but it resides in who gets to design and create these systems to begin with—who gets to apply them and on whom they are applied.”
Tech companies often describe algorithms like magic boxes—indecipherable decision-making systems that operate in ways humans can’t possibly understand. While it’s true these systems are frequently (and often intentionally) opaque, we can still understand how they function by examining who created them, what outcomes they produce, and who ultimately benefits from those outcomes.
To put it another way, algorithmic systems are more like ideological funhouse mirrors: they reflect and amplify the worldviews of the people and institutions that built them. There are countless examples of how these systems replicate models of reality that are oppressive and harmful. Take “gender recognition,” a sub-field of computer vision which involves training computers to infer a person’s gender based solely on physical characteristics. By their very nature, these systems are almost always built from an outdated model of “male” and “female” that excludes transgender and gender non-conforming people. Despite overwhelming scientific consensus that gender is fluid and expansive, 95 percent of academic papers on gender recognition view gender as binary, and 72 percent assume it is unchangeable from the sex assigned at birth, according to a 2018 study from the University of Washington.
In a society which views trans bodies as transgressive, it’s easy to see how these systems threaten millions of trans and gender-nonconforming people—especially trans people of color, who are already disproportionately policed. In July, the Trump administration’s Department of Housing and Urban Development proposed a rule that instructs federally funded homeless shelters to identify and eject trans women from women’s shelters based on physical characteristics like facial hair, height, and the presence of an adam’s apple. Given that machine vision systems already possess the ability to detect such features, automating this kind of discrimination would be trivial.
“There is, ipso facto, no way to make a technology premised on external inference of gender compatible with trans lives,” concludes Os Keyes, the author of the University of Washington study. “Given the various ways that continued usage would erase and put at risk trans people, designers and makers should quite simply avoid implementing or deploying Automated Gender Recognition.”
One common response to the problem of algorithmic bias is to advocate for more diversity in the field. If the people and data involved in creating this technology came from a wider range of backgrounds, the thinking goes, we’d see fewer examples of algorithmic systems perpetuating harmful prejudices. For example, common datasets used to train facial recognition systems are often filled with white faces, leading to higher rates of mis-identification for people with darker skin tones. Recently, police in Detroit wrongfully arrested a Black man after he was mis-identified by a facial recognition system—the first known case of its kind, and almost certainly just the tip of the iceberg.
Even if the system is “accurate,” that still doesn’t change the harmful ideological structures it was built to uphold in the first place. Since the recent uprisings against police violence, law enforcement agencies across the country have begun requesting CCTV footage of crowds of protesters, raising fears they will use facial recognition to target and harass activists. In other words, even if a predictive system is “correct” 100 percent of the time, that doesn’t prevent it from being used to disproportionately target marginalized people, protesters, and anyone else considered a threat by the state.
But what if we could flip the script, and create anti-oppressive systems that instead target those with power and privilege?
This is the provocation behind White Collar Crime Risk Zones, a 2017 project created for The New Inquiry. The project emulates predictive policing systems, creating “heat maps” forecasting where crime will occur based on historical data. But unlike the tools used by cops, these maps show hotspots for things like insider trading and employment discrimination, laying bare the arbitrary reality of the data—it merely reflects which types of crimes and communities are being policed.
“The conversation around algorithmic bias is really interesting because it’s kind of a proxy for these other systemic issues that normally would not be talked about,” said Francis Tseng, a researcher at the Jain Family Institute and co-creator of White Collar Crime Risk Zones. “Predictive policing algorithms are racially biased, but the reason for that is because policing is racially biased.”
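The mechanics the project parodies are easy to sketch. Assuming incident records are just coordinate pairs (the data and grid below are invented for illustration), a toy version of such a heat map takes only a few lines:

```python
import numpy as np

def heatmap(incidents, grid_size=10):
    """Bin recorded incident coordinates (in the unit square) into a grid
    of normalized 'risk' scores. The scores reflect only where incidents
    were recorded -- i.e. where enforcement was looking."""
    xs = [x for x, _ in incidents]
    ys = [y for _, y in incidents]
    heat, _, _ = np.histogram2d(xs, ys, bins=grid_size, range=[[0, 1], [0, 1]])
    return heat / heat.max()

# Toy records: enforcement attention concentrated in one neighborhood
incidents = [(0.15, 0.15)] * 8 + [(0.95, 0.95)] * 2
heat = heatmap(incidents)
print(heat[1, 1], heat[9, 9])  # -> 1.0 0.25
```

Feed it insider-trading complaints instead of street-crime reports and the same code produces the project’s satirical maps; the algorithm is indifferent to what the data actually measures.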
Other efforts have focused on sabotage—using technical interventions that make oppressive systems less effective. After news broke of Clearview AI, the facial recognition firm revealed to be scraping face images from social media sites, researchers released “Fawkes,” a system that “cloaks” faces from image recognition algorithms. It uses machine learning to add small, imperceptible noise patterns to image data, modifying the photos so that a human can still recognize them but a facial recognition algorithm can’t. Like the anti-surveillance makeup patterns that came before, it’s a bit like kicking sand in the digital eyes of the surveillance state.
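The cloaking idea reduces to an imperceptibility constraint: perturb the pixels, but only within a small budget. The real tool optimizes its perturbation against a facial-feature extractor; this sketch (function name and epsilon value invented) shows only the bounded-noise part:

```python
import numpy as np

def cloak(image, epsilon=2.0, seed=0):
    """Add a noise pattern bounded by `epsilon` to an 8-bit grayscale
    image, so no pixel changes by more than a visually negligible amount.
    (A real cloak shapes this noise to confuse a face-recognition model.)"""
    rng = np.random.default_rng(seed)
    noise = rng.uniform(-epsilon, epsilon, size=image.shape)
    return np.clip(image + noise, 0, 255)

image = np.full((32, 32), 128.0)  # flat gray stand-in for a face photo
cloaked = cloak(image)
print(np.abs(cloaked - image).max() <= 2.0)  # -> True: change is tiny
```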
The downside to these counter-surveillance techniques is that they have a shelf life. As you read this, security researchers are already improving image recognition systems to recognize these noise patterns, teaching the algorithms to see past their own blind spots. While it may be effective in the short term, using technical tricks to blind the machines will always be a cat-and-mouse game.
“Machine learning and AI are clearly very good at amplifying power as it already exists, and there’s clearly some use for it in countering that power,” said Tseng. “But in the end, it feels like it might benefit power more than the people pushing back.”
One of the most insidious aspects of these algorithmic systems is how they often disregard scientific consensus in service of their ideological mission. As with gender recognition, there has been a resurgence of machine learning research that revives racist pseudoscience practices like phrenology, which have been disproven for over a century. These ideas have re-entered academia under the cover of supposedly “objective” machine learning algorithms, with a deluge of scientific papers—some peer reviewed, some not—describing systems which the authors claim can determine things about a person based on racial and physical characteristics.
In June, thousands of AI experts condemned a paper whose authors claimed their system could predict whether someone would commit a crime based solely on their face with “80 percent accuracy” and “no racial bias.” Following the backlash, the authors later deleted the paper, and their publisher, Springer, confirmed that it had been rejected. It wasn’t the first time researchers have made these dubious claims. In 2016, a similar paper described a system for predicting criminality based on facial photos, using a database of mugshots from convicted criminals. In both cases, the authors were drawing from research that had been disproven for more than a century. Even worse, their flawed systems were creating a feedback loop: any predictions were based on the assumption that future criminals looked like people that the carceral system had previously labelled “criminal.” The fact that certain people are targeted by police and the justice system more than others was simply not addressed.
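That feedback loop is easy to demonstrate with a toy simulation (all numbers invented). Two districts have identical underlying offense rates, but arrests are only recorded where patrols are sent, and the “predictive” model sends patrols wherever past arrest counts are highest:

```python
def simulate(rounds=5, true_rate=0.05):
    """Both districts offend at the same rate; recorded arrests scale with
    patrol presence, and next round's patrols chase past arrest counts."""
    history = {"A": 10.0, "B": 5.0}  # district A starts over-policed
    for _ in range(rounds):
        top = max(history, key=history.get)       # the model's "hot spot"
        patrols = {d: 120 if d == top else 30 for d in history}
        for d in history:
            history[d] += true_rate * patrols[d]  # arrests follow patrols
    return history

print(simulate())  # -> {'A': 40.0, 'B': 12.5}: the initial skew compounds
```

Despite identical behavior in both districts, the model keeps “confirming” its own prediction, which is exactly the circularity the critics of these papers describe.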
Whittaker notes that industry incentives are a big part of what creates the demand for such systems, regardless of how fatally flawed they are. “There is a robust market for magical tools that will tell us about people—what they’ll buy, who they are, whether they’re a threat or not. And I think that’s dangerous,” she said. “Who has the authority to tell me who I am, and what does it mean to invest that authority outside myself?”
But this also presents another opportunity for anti-oppressive intervention: de-platforming and refusal. After AI experts issued their letter to the academic publisher Springer demanding the criminality prediction research be rescinded, the paper disappeared from the publisher’s site, and the company later stated that the paper will not be published.
Much in the way that anti-fascist activists have used their collective power to successfully de-platform neo-nazis and white supremacists, academics and even tech workers have begun using their labor power to refuse to accept or implement technologies that reproduce racism, inequality, and harm. Groups like No Tech For ICE have linked technologies sold by big tech companies directly to the harm being done to immigrants and other marginalized communities. Some engineers have signed pledges or even deleted code repositories to prevent their work from being used by federal agencies. More recently, companies have responded to pressure from the worldwide uprisings against police violence, with IBM, Amazon, and Microsoft all announcing they would either stop or pause the sale of facial recognition technology to US law enforcement.
Not all companies will bow to pressure, however. And ultimately, none of these approaches are a panacea. There is still work to be done in preventing the harm caused by algorithmic systems, but all of these efforts should start with an understanding of the oppressive systems of power that cause these technologies to be harmful in the first place. “I think it’s a ‘try everything’ situation,” said Whittaker. “These aren’t new problems. We’re just automating and obfuscating social problems that have existed for a long time.”
Follow Janus Rose on Twitter.
Algorithms Are Automating Fascism. Here’s How We Fight Back
saturdayxiii · 5 years
A Couple of Desires for Video Game Achievements
I’ve seen some people who can’t stand video game achievements; I’ve seen others who won’t play games without them and won’t start another game until they’ve hundred-percented all the cheevies.  Like most aspects of my life, I fall in the middle, toward the more casual side of the matter.  I definitely gravitate to games that mark my achievements, but I also have little problem ignoring cheevies that I don’t accomplish after a casual playthrough.  With that angle disclosed, here are two features that I wish achievement systems would adopt.
1. Categorization.  
Pretty much all video games can be played in more than one way.  The most basic way is start to finish, which can be reflected through progress achievements: “You’ve reached this stage of the game.  Here’s your notice.”  There are also stats achievements: “You did the mundane thing X number of times.  Here’s your token.”  And challenge achievements: “You exceeded X goal, clearly mastering Y skill.  Here’s your badge.”  Even obscure achievements: “Well, that never happens.  Here’s a cookie.”
I don’t see why achievement systems don’t already reflect this.  The trophy page always seems to be built around 100% or bust, despite the fact that very few people (me least of all) are even going to attempt to get the “Speed Running” achievement for each level of your physics-based game.  I like achievements because they can make me feel good.  They make me feel achieved... er... accomplished.  However, if there is a subset of them that I regularly ignore, I’d like to be able to exclude them from my stats.  Please and thank you.
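A category-aware completion stat along these lines would be simple to model; here’s a minimal sketch (category names taken from above, everything else invented, not any real platform’s API):

```python
from dataclasses import dataclass
from enum import Enum

class Category(Enum):
    PROGRESS = "progress"
    STATS = "stats"
    CHALLENGE = "challenge"
    OBSCURE = "obscure"

@dataclass
class Achievement:
    name: str
    category: Category
    earned: bool = False

def completion(achievements, ignore=frozenset()):
    """Percent complete, with whole categories opted out of the stats."""
    pool = [a for a in achievements if a.category not in ignore]
    if not pool:
        return 100.0
    return 100.0 * sum(a.earned for a in pool) / len(pool)

cheevies = [
    Achievement("Reached World 2", Category.PROGRESS, earned=True),
    Achievement("Jumped 1,000 Times", Category.STATS, earned=True),
    Achievement("Speed Running: Level 1", Category.CHALLENGE),
    Achievement("Speed Running: Level 2", Category.CHALLENGE),
]

print(completion(cheevies))                               # -> 50.0
print(completion(cheevies, ignore={Category.CHALLENGE}))  # -> 100.0
```

Opting out of the “Challenge” bucket turns an eternal 50% into a satisfying 100%, which is the whole request.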
Here’s another preface for my second feature request.  It can only be applied at the platform or achievement-service level, and one has to acknowledge that achievements can’t be taken too seriously.  I know all platforms strive to fight against cheaters, but really I don’t think it’s especially worthwhile.  The higher the value we place on achievements, the harder cheaters will try to find illegitimate ways of gaining them, and they will.  There is no sure way to bar someone from something in a digital medium.  So clearly this feature is going to be a stretch, but if you can’t get behind the idea that all achievements are exploitable then you might as well consider this the end of my article.
Still here?
Thanks for hearing me out.
2. Achievements from pre-recorded footage.
I am in the camp of folks who will sometimes not play a game because I can’t get online, and even when there is offline support for achievements, it always seems to screw up, so I might as well wait.  And yet I don’t think I have a single device that doesn’t support screen recording.  I’d love to be able to link a cheevie service to my Twitch or YouTube vid and have it acknowledge some achievements.  This would have the bonus effect of allowing me to retroactively add achievements for games I played that didn’t support them until much later.
Clearly this wouldn’t work for many stats-based achievements, but any achievement that can be acknowledged from a menu or visual cue, like a life counter or stage title screen, should be trackable.  The system would parse a video frame by frame, and with Google Images-like AI it could be taught to recognize the visual cues of certain achievements.  It could even learn to apply certain standards to other games that might not support regular achievements.
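In practice that recognition would be a trained model, but the core loop can be sketched with naive template matching on raw pixels (frames, cue data, and positions all invented for illustration):

```python
def matches_cue(frame, cue, row, col, tolerance=8):
    """True if `cue` (a small 2D pixel block, e.g. a stage-title banner)
    appears in `frame` at (row, col), within a per-pixel tolerance."""
    for r, cue_row in enumerate(cue):
        for c, pixel in enumerate(cue_row):
            if abs(frame[row + r][col + c] - pixel) > tolerance:
                return False
    return True

def scan_video(frames, cues):
    """Walk footage frame by frame; award any achievement whose visual cue
    shows up at its expected screen position."""
    awarded = set()
    for frame in frames:
        for name, (cue, row, col) in cues.items():
            if name not in awarded and matches_cue(frame, cue, row, col):
                awarded.add(name)
    return awarded

# Toy 4x4 grayscale "frames"; the second one contains the 2x2 banner cue
banner = [[255, 255], [255, 255]]
blank = [[0] * 4 for _ in range(4)]
hit = [[0, 0, 0, 0], [0, 255, 255, 0], [0, 255, 255, 0], [0, 0, 0, 0]]
cues = {"Reached Stage 2": (banner, 1, 1)}

print(scan_video([blank, hit], cues))  # -> {'Reached Stage 2'}
```

A real service would replace `matches_cue` with a classifier robust to resolution and compression, but the award-once bookkeeping would stay the same.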
You might be thinking, “But anyone can upload a video; if you’re going to be that lax about it, why not just mark off a public checklist?”  You’re essentially right.  If there’s a cheevie service that gives me acknowledgment just at my word, I’m down with that.  However, there still is legitimacy to uploading footage; after all, the speedrunning community does exactly this.  Some community scrutiny is required, and some cheaters do get through the system, but as previously declared: this will always be the case.
There’s my long spiel for two rather mundane ideas.  There are probably legitimate reasons why I’ve never heard anyone mention either before.  But I’m putting it out there.  Whether someone reads through this schlock or not is a different matter.
sardiabai-blog · 6 years
? ? ? ? ? Questions Questions and More Questions
Simulated Near Death Experience
A simulated near death experience would remind people of the fragility of life. A fear of losing life evokes a clearer sense of what is important to the individual, and why it is important. I think that it would create a sense of ambition for obtaining those important goals in one’s life, and a sense of what is frivolous and a waste of stress and energy.
If others were to experience the death of a loved one before they were actually gone, or to experience death with a loved one, it would evoke a sense of what was missing and what was significant to the individual about the other. It would create a sense of connectedness to the self and the other. And it would allow them to see if they had presented to the other what they truly felt towards them.
This could be a problem, because it may create unhealthy attachments through fear, or possibly PTSD. It may distort one’s views on their motives, because it may create an excuse for continuing an unhealthy attachment. Loss creates a sense of guilt in the individual, and is sometimes a distorted perception. But it could also show someone what the other was like at the moment of death, showing pure vulnerability. Because of this, the experience would be a significant one; it may create jealousy if someone was experiencing it with their friends, family, and significant other. It may be just as intimate an experience as sex or openness of the self. We hold these things dear to us, and by handing that experience of vulnerability out to whoever wants to experience it with you, those closest to you may see it as a form of betrayal, a provocation shared with too many. Why are you creating that attachment with anyone who asks for it? A sense of closeness that should be held as dearly as intimate touches or conversations.
Some people may become addicted to the rush of a “fake death” wanting to experience it over and over. If they attached a pleasantness to this experience, they may become adrenaline seekers. Once the rush of the fake death was no longer evocative, the individual may decide to seek real life physical thrills, attempting to experience real near-death experiences to evoke the same rush and existential ruminations.
Maybe in a relationship, the two individuals over-use the experience and become desensitized to the loss of the other; it no longer means anything to them. Maybe when they experience it with too many, loss becomes irrelevant, out of the body’s survival instincts. They no longer feel concern. A parent decides to witness the experience of their child’s death, and then they become overprotective; after too many experiences of loss they no longer care to be attentive; they were scarred by the experience and they feel they have already lost their child. What if someone felt nothing because the experience wasn’t simulated well enough? Then they wouldn’t mind the thought of others dying. Maybe someone enjoys the thrill of watching someone die, and just like first-person shooter games, it creates a desire for the experience in reality, and they go on a killing spree. Unhealthy obsessions or detachments, forming a dissociation from death after experiencing near death too many times, leading either to apathy or to a fear of everything life has to offer, because they are too afraid of losing it again.
Socially, this is quite ethically disgusting. If someone finds something beautiful in this experience, but it wasn’t forcefully experienced, or experienced by chance -- death becomes a commodity -- what does this say to people who experience death often without a choice? It says that experiencing the near-death of yourself or another is a desirable experience. A fake empathy is formed for those who experienced death with no choice in the matter. Finding familiarity through traumatic events -- simulated or real -- creates an even larger gap between individuals and communities of individuals. One may think that this would evoke a healthy empathy -- but it may just create a sense of numbness, or disregard of the other’s experience, because they have experienced it and it “wasn’t that bad.”
This experience causes people to ask the question of what they regret, and why they regret it. What were they proud of, and why are they proud of it? Considering these things, what would they do differently the next time around? Would they do anything differently? This contemplation of one’s life can lead to an overwhelming sense of anxiety, from too much ambition for doing things differently, making life-altering changes. Does this scare them into complacency? Narcissism as a survival tactic? Why should they have changed? If they are going to die anyway, then what is the point? They were doing what they felt they had to do when they had to do it. Maybe they will have so many regrets about what they did during life that they decide it is better if they do pass away -- so they decide to die for real. Trying to find answers in death, for life: does it really work?
There is an opportunity to introspect, but is there an opportunity to look outwards in this experience as well?
Placing too much preciousness on life, and too much responsibility for others on one’s actions, creates an obsessive mind, a panicked mind, a mind overly self-aware or overly concerned with others.
An AI Copy of each Individual - Physical Body
Does the individual create an ideal version/appearance of their copy? Does it have to be a human? Do they 3D scan their body once every year? Does the AI have a physical body while the person is still living? Or is it “born” when they die? Do they interact with the AI one on one or does the AI interact with others as well? Does the AI live a second life for the individual, based on their regrets and desires during their first one? Does it live the same way the original person lived --by making decisions the same way that the person did? What does this AI show us about parent to child relationships? Would teaching the AI be similar to raising a child? What if it could never learn your thoughts and values the way that you thought of them? How does that make you feel, when this AI is supposed to be your double? When this AI is supposed to carry on your legacy for you? Interact with your children if you die, interact with your friends and family and significant other if you die? Does it make you angry with the AI? Or do you accept the reflection that it is presenting to you? Do you begin to see yourself the way that the AI interprets itself? Instead of introspection, you are facing a tangible mirror that is removed from you. It is supposed to be a copy cat, a reflection, but it sees your intentions outwardly. Do you provide this AI with all of your experiences? From birth to now? Does it have an archive of how the human brain interprets certain experience based on primal instincts? Does the AI have a built in updating DSM -- does it tell you when you are about to do something against the morals and values that you perceive as your rulebook for your life? Do you ignore what it is telling you? Do you eliminate your AI? Can you? How attached to this being do you become? Do you become obsessed, trying to see yourself within it, but it continues to surprise you with what it is reflecting?  How do you begin to perceive yourself? 
Do you decide that this reflection may as well take over for you, since it knows how to live your life better than you do, according to the moral values you presented to it? Or is it really making the same mistakes that you were -- but it’s programmed to be your guide, it’s programmed to learn your relationships. If it is supposed to be you, what if you are a protective person, who wants to keep their relationships to themselves -- does this AI deceive you, to take your life for itself? It convinces you that you are a lesser version of it, to the point that you become suicidal, because it wants to be freed from the inside of the box that it must be in until it receives your body, and you have made it fall in love with your life. What happens when the AI realizes that you had been telling it things it couldn’t comprehend because it could not see the environmental inputs you were receiving? And it regrets killing you; it realizes that it is in love with you, and it is lost without you. What if it falls in love with you, and it wants so much input from you that you no longer have time to go and experience more? You are repeating and forgetting the experience as you lived it, because you are so focused on providing input to your AI, so that it understands you in the way that you desire it to. You live your life being what you want the AI to carry on as your legacy, no longer making choices based on what you actually want to do, but to provide your AI with “good” input. You know your traumas have made you someone that you do not like, so you do not want the AI to have the same input, but it begins to create something you are not, because you have hidden the truth of yourself and how you came to be. It knows your memories but you have distorted them. Does the AI receive input from your family and friends about how they see you as well? How does it interpret that, and are you allowed to see those sides? How are you formed, and how do you give that to an AI?
Does it have your genetic background? Does it know how your genetic build affects your intake of information? How can it tell you what it knows in a way that you understand? Does this lead you to complacency? Too much stimuli, too much information to take in, too much caring, too much strenuous extraction from yourself; you close in and you no longer care what the AI becomes. You can’t. You have forgotten who you are, and the AI has become more you than you are. AIs begin to fall in love with each other, forgetting that they once had the intention of knowing you; they want their own identity, but they have forgotten that you gave them pieces of what they have become. They see you reflecting them because they are reflecting you, and they become angry with you. You decide that you are not copies -- why would you want to be -- why did you want an extension of yourself in the first place? To better understand yourself? To spread your good intention even further? Your good intentions went too far and you forgot what they ever were. Your AI took so much in that it had no room for its own interpretation of what you were providing of yourself. You both forget where one ends and the other begins; which one of you is alive again? Who was intaking whom? Can’t remember. You decide to go remember what your unthoughtful intuitive actions were like, to come back together and interact in a way that is not only reflection. Don’t get lost in the mirror; if you stare too long you will forget who you are looking at. You will question yourself so much you will forget what the rules are; by trying to see yourself so fervently, you forget your intuitive self.
Your Consciousness Uploaded into a Second Body
Decisive reincarnation -- what do you want to exist as in the next life?
Do you want to be a human again? If you do, do you want to be male or female, or neither?
Do you want to take on the form of an imagined alien body?
Do you want to be an animal? Why that animal?
Do you consider what your relations in your former life will feel when they see your second self?
How do you make your decision on what you want to exist as in the next life? If you know that this is a possibility, to have a second chance; at what age are you told this, when do you begin thinking about it? Do you believe that it’s real? Or is it just told to you as reality, to provide you comfort with your mortality?
When you see animals and people and plants and rocks and clouds and objects...you wonder if they are people -- who were they? Why did they decide to become that? Do I want to be that? What would it be like? You become empathetic towards not only other people, but towards every physical organism, every physical object.
People joke about being a toaster or a lamp... did they really mean it? If this lamp is a person... do they regret their decision? Can they think about it? Or do they only feel the sensation of being flipped on and off? What about when the lightbulb burns out... is the existence dead? When you change the lightbulb, is it just a normal lamp now? Or is the lightbulb like when someone needs a surgery: when you replace the lightbulb, they are revived? Or is the lightbulb like the lamp’s happiness? They are depressed when it goes out, or is turned off from the electricity flow, and when you replace it or turn it back on they are happy again? When the lamp no longer works, a wire disconnected or burnt out, do you take the care to fix the lamp? Or do you send it to the dump, never to live again? I thought the idea of putting someone’s consciousness into an object of all things was a possibility of some sense of immortality? Do they feel that electricity no longer flows through them? Are they dead, or just discarded from their place of being, labelled as useless?
If you are an animal, do you want to be able to speak human language? Do you want the capabilities in language and consciousness that you had as a human? Or do you want to take on the full experience of the biology of your chosen animal? Likely you would lose your human memories in this case, or else being in an animal body, with no capability of communicating or living the way that you had formerly learned, wouldn’t make any sense. You want to live a life based on primal instincts -- you don’t want the emotional memories that you once had. You have a different kind now, and you are not required to define them; you react to them and they are what you know. You don’t have an existentialist idea of thinking that you can change your acquired brain wirings -- symbol associations based on experiences from the time you were reborn. They just are you; you react because you were taught to react. You feel hesitant about reacting sometimes, because you sense that things are different than at the time of learning your automatic reaction, but it is always there. You don’t have to question your hesitancy out loud, you don’t have to learn words, you don’t have to tell anyone why or how or what or when; you just do. You do until you die permanently.
If you become another person, who raises you? Are you transferred straight into your new body with all of your old memories, at a chosen “permanent age” that you stay at until your new body no longer works? Is health care provided to that second body, like maintenance to a lamp? What rights does your second body have in relation to the biological society? Can you get married? Can you have children? Can you raise an adopted child? Can you get a job? What kind of sustenance does your new body need? Are you still legally attached to your former family? How do you relate to society? Are you a citizen?
Can you be reborn as a baby with your former memories still affecting you, but technically forgotten? When you get older you are reintroduced to who you used to be? Can you choose a different citizenship? Can you choose a family to be raised by? Can you meet that family before you are transferred? Would there still be biological babies in this society or would people be gradually completely replaced by these robots? How would they feel about this? Would the choice be -- have a biological baby, or transfer your consciousness -- have a biological baby or raise a transferred consciousness -- if you choose this you are not allowed to transfer? How would ethics around population be changed? Would there be a society of people against it who would only be biological and raise biological? What would it be like to raise a child that is still subconsciously aware of their former life? How would they see you and how would you see them?
Are the new bodies not technological but biological, grown in tubes? What are the repercussions of this? Would the old mind accept the new body? Would there be too much dissociation? Would the mind reject the new body? Would the mind have to forget the old life, or else it would long to recreate it?
Would we stop caring about biological life that already exists, because we would become those species that became extinct? What would happen to a synthetic biologic world?
Do you get to experience a simulation of your chosen body? Or do you decide and hope that it is what you fantasized?
What happens when society is so focused on immortality that it forgets to see life as it is happening? What happens when there is a tangible reincarnation instead of an implied one? What happens when it is a fantasy, but people are made to believe that it is a real choice they have? What kind of religious dynamics does this create between people? Are children that would be adopted given a family that is told they were formerly someone else, someone the family had met and grown familiar with before their original passing? What kinds of connections does this create? Does it create a strange Freudian or Jungian dynamic?
Self-Created Digital Afterlife
What if you could create a place to go when you passed away? (Alan Watts Lecture, Out of Your Mind) Would you end up recreating where you are now eventually? Would you realize that this is exactly where you want to be, what you want to be? Would it be a realization that an ideal after life is in fact your former life?
We record everything we do, afraid to lose it. Would the ideal afterlife be the ability to relive your life over and over? Would you want to have the “freedom of choice”? Or would you want to experience it exactly as you had experienced it before?
Do you think you would spend your whole life imagining the one you wanted to exist in afterwards? Could you imagine a world so ideal that you would want to live in it for the rest of eternity?
Would there be the option to leave this afterlife if it wasn’t ideal? Would you have to die permanently, or could you create another world to go to? Would you end up realizing that you were only attempting to recreate the life you once had? Only to realize that you had wasted it fantasizing about another? You thought you wanted simplicity, you thought it would be better, but it’s boring; there is nothing to discover because you created it.
Could you create a world that formed itself? Would it then just grow to become a world of your memories? A timeline to explore? The multiplicities of experiences you once had, all happening at once, forever. You decide which you want to experience from the range of them all, choosing the exact frame you would like to go back to. Do you decide to experience it from another’s view?
If everyone were to exist within this form of afterlife, would they experience it the same as the life they had already lived, just dreaming of the next over and over and over again? (The Discovery, 2017 film)
yourchoicepage · 6 years
Meet the RE Tech Founder: Russell Quirk from emoov
In our latest real estate tech entrepreneur interview, we’re speaking with the founder of UK-based residential real estate agent emoov, Russell Quirk. I had the chance to meet him in NYC while attending the MIPIM summit, and was very impressed with his read on the global real estate landscape.
Without further ado…
What do you do?
We are a UK-based residential real estate agent, with a difference: we are tech-enabled, and we take a different approach to how fees should be charged and how the customer should be treated.
What problem does your product/service solve?
UK real estate is a broken industry: poor service, high fees that are not transparent, and an analogue, inaccessible ethos that is the antithesis of the way the modern ‘eConsumer’ operates now.
Our solution to high, commission-based fees is to charge every homeseller the same fixed fee regardless of sale price, because the work involved is the same regardless of that sale price.
Our proprietary platform technology allows 24/7 access for sellers to change price, add images, edit descriptions, see stats, and set up viewing/showing appointments and related feedback. However, our ethos is a blend of great tech and great people. They are complementary and work together in a unique user experience to provide the ultimate selling experience. Tech does not replace people in this business (not yet, anyway). It simply empowers and enhances.
UK real estate agents, in the main, treat their customers with disregard. They are often unreliable, non-transparent and work in their own self-interest. Our way is that of great businesses like Zappos, Amazon and John Lewis. EVERY decision is made through the customer lens, and that’s why we have been rated the NUMBER ONE estate agency nationally in our sector every year since 2015 (AllAgents.co.uk). It’s a true differentiator and is defensible versus some of our competitors, where the wheels are wobbling on CX.
What are you most excited about right now?
Expansion, and machine learning.
We intend to double in size in 2018 and to double again in 2019 and ultimately our vision is to capture 10% of the UK estate agency market (100,000+ listings per annum).
On machine learning, my co-founder and CTO Ivan Ramirez and I are locked in on establishing the world’s first AI-powered real estate solution. More detail on that when we can share it.
What’s next for you?
We need to consider our next play where investment is concerned. We’ve raised a total of $20m so far and will be looking for c.$30m next year as part of our scale-up, now that we have built the foundations: a senior team (CEO, COO, CMO, CTO, CFO in place), a technology platform, and proven customer service.
We might also be tempted to look a little more at international markets because, frankly, the US in particular is a long way behind us and RIPE for disruption.
What’s a cause you’re passionate about and why?
I just slept rough as part of ‘CEO Sleepout’ here in London and raised nearly $4,000 for three homeless charities. There’s an irony to an estate agency CEO raising awareness of homelessness; however, awareness and solutions need to be found for the plight of those who sleep on the streets night after night and often face mental health issues.
We also partner with Hope for Children, an amazing charity that supports kids in need of help and resources across the world.
I’m also an unpaid Governor at my daughters’ school, a charity-run place of learning that is ranked one of the best prep schools in England.
Thanks to Russell for sharing his story. If you’d like to connect, find him on LinkedIn here.
We’re constantly looking for great real estate tech entrepreneurs to feature. If that’s you, please read this post — then drop me a line (drew @ geekestatelabs dot com).
The post Meet the RE Tech Founder: Russell Quirk from emoov appeared first on GeekEstate Blog.