#except it's much more meaningful bc it's about representation
maerhiya · 2 months
Text
in regards to the constant dismissal of his aroace identity, i hate it when alastor 'fans' say and use the excuse: "he's fictional, he won't get offended."
like, you're right, but it can and will offend us.
when you see yourself being represented on screen, of course you'd feel enthusiastic about it — representation allows individuals to see themselves reflected in the media they consume, validating their identities and experiences. but when so many people take that representation and decide to disregard and discard it, it is so fucking frustrating. we finally have another character to be part of the tiny amount of representation we have, but then people don't even care about how much it means to us? like yeah, alastor won't get offended because he's not real, but it frustrates and annoys us. do you realize that it's also technically invalidating the aroace community? that you're invalidating our feelings? imagine feeling like you're finally being seen because your orientation is finally being represented in media, and people just decide to blatantly ignore, discard, and invalidate it.
media has such a powerful influence on real life, representation being a prevalent factor of it. there are numerous posts describing how people went to watch a movie/show or read a book just because a character depicts their identity in it — obviously, being represented is an incredibly uplifting and validating experience.
which is why seeing an aroace character in a popular show is so meaningful to us because we live in a world where romance and sex are literally everywhere and prioritized above all else. (and it's pretty obvious that alastor's on the repulsed end of the spectrum, but even if he wasn't, at least make an effort to acknowledge his sexuality instead of continuing to portray him as allo; aroace folks can be in relationships but it's not going to be the same thing as allos' experiences.)
any and every representation matters, but why does that seem to stop at people under the aroace spectrum? like y'all can't even let us appreciate the scraps of representation we have. we barely have any, so are we really that dramatic for being upset at how people easily disregard and dismiss our identities that are being depicted on screen just like that? is it truly wrong of us to want to defend and maintain the little representation we have?
263 notes · View notes
meg-noel-art · 1 year
Note
hi!! i wanna preface by saying i agree with you, i just want to add one thing bc i don't want to make my own post or be perceived in any meaningful way
i feel like the nature of specifically horizon's dlcs is a direct continuation of aloy's story. frozen wilds was how we got truly introduced to the heph storyline and why we know we need to go after it, also introduction/context behind apex machines etc. when i first played forbidden west i was surprised that it took the DLC so strongly into account, which led me to believe it was meant to be a necessary step in aloy's story bc it didn't feel like exposition - it felt like assumed, already known information (which it would not be if you didn't play frozen wilds).
i also think that's what they were going for with burning shores. unfortunately, GG alienated a large portion of their playership by making it ps5 exclusive (whatever their reasoning may be, likely technological) - also the points you brought up about other reasons players can't get a dlc. I'm not saying it's a good model to base a franchise on, but that is the sense i got.
i have nothing to say about the shipping aspect bc it truly does not matter to the overarching narrative outside of hooray!! representation that is important for a lot of people!! shipping wars are stupid as hell. you're playing barbie with other people's characters, everyone, calm down
Yes I've gotten multiple asks bringing up the Frozen Wilds, but it only proves my point. So in that sense I still disagree with you and others. FWs is still not necessary to play. It adds dramatic context. But all of its lore is still re-explained, admittedly to a lesser degree, in HFW.
Heph running wild is still reiterated via exposition during the plot of Forbidden West. Characters that were important to implementing the lore never make a return - CYAN is mentioned once in a dialogue wheel with GAIA. Aratak is a battle easter egg. Ourea is dead. Even Gildun is not an exception. His reappearance was EASTER EGGED in HFW and then he ONLY showed up in -- another DLC, which is again, optional.
I'm not making up rules here folks. This is how DLC works for any franchise. They can absolutely expand lore, tack on characters and important plot points but at the end of the day they are NOT necessary to play the main story. And will always be rehashed/reexplained for players who did not play the DLC. Or who couldn't.
That's one of the reasons they made up another Zenith that somehow escaped the final battle. Londra is entirely expendable as a villain. One off monster of the week special episode villain.
Look how hard Guerrilla pushed the Sunhawk comic, going so far as to include it digitally in an edition of HFW -- and still Talanah's entire quest retells the events of the comic via dialogue wheels.
Because ---- Guerrilla HAS to account for the fact that s o m e fans may not have read it and won't know what's going on. And every fan must be allowed to be clued in.
Mass Effect DLCs are another great example - and ME is even MORE open choice than Horizon is.
I completely agree with you that Guerrilla narrowed what it can do with Burning Shores moving forward bc of the ps5 exclusivity as well. For whatever reason they did it, it doesn't really matter; the point is it restricts its audience from playing.
I ALSO had an issue with the writing in HFW because it was written very very obviously to allow NEW players into the series. To START with HFW, not HZD, and not be totally lost. Even past the first cutscene, Aloy spends a ton of time rehashing events with old characters. I think 3 will play out much the same way. Because they have to develop it like that. Their goal as a company is to appeal to a large crowd of gamers, for money. Capitalism, they do not want their games to fail. Especially after queer clapback. So three will undoubtedly rehash the events of HFW, HZD AND BS. And that's a lot to cram in without forcing players to have played or make assumptions.
I have nothing to say about the shipping aspect either. Despite the many asks today for some reason interpreting my objective take on the nature of a DLC as an attack on representation OR the ship in general - i have zero issue with it. I will say yet AGAIN, i thought it was cute and enjoyed the story.
That does not change the fact that a DLC is skippable, choices with Seyka were in a flashpoint aka optional, THIS IS NOT DENYING THE ROMANCE - but Aloy's reaction to it is different and would affect a player's game -- and at most I assume she'll be reintegrated in a fashion similar to Gildun. Because again, Guerrilla MUST account for the fact that THEY chose to introduce her in a DLC and thus people/new players will not know her or the story.
That's how it works, sorry about it.
Thank you for the courteous tone of your ask! This is, however, the last BS ask I will answer.
Take care 🫶
18 notes · View notes
funkyrabbit · 2 years
Text
@gay-rat-and-aardvark-wedding said: FRR there’s this comic i found in deviantart and nigel x paige is canon in this comic, but the author just refuses to retcon paige for patrick for literally no reason 🙃🙃🙃 those two don’t even have any significance to the comic so it wouldn’t even change anything. mf made a whole comic strip just to complain about it 
OH yeah, i'm very familiar with that comic. i was following it for a while but when the author went and did That it really turned me off. the comic he made about it came off like he was really disapproving of the episode, just rubbed me the wrong way 
i’m putting this under a read more bc...i’m tired of keeping my criticisms of it to myself 
the nigel x paige thing was irritating to me for so many reasons. even if nigel never got married to patrick, i just never saw him as straight?? actually, for the longest time he gave me ace vibes. but there is a lot of evidence that nigel could be gay across many episodes of the show prior to the airing of “mr. ratburn and the special someone”. to just flat out ignore all of it, especially when it was confirmed to be canon by the show, is just so wrong to me
he spoke about it like it would be such a big change. but like you mentioned, it was SO insignificant to the comic ?? bro you made like FIVE comics out of close to 200 DIRECTLY talking about the nigel x paige thing and/or your OC, so i really don’t see how it would be such a big change. it’s a comic that mainly focuses on dw and her friend group anyway, at least that was how it started?? like i don’t get why he was getting so defensive over it 😐
ugh i could be wrong but it really seems because there is now a canon gay pairing, he decided to divert from keeping the comic as much of a canon continuation of the show as possible. which by the way, that seemed to be his original plan until “mr. ratburn and the special someone” came out?? so now he claims his comic has always “done its own thing” and doesn’t plan to match the continuity of the show 🙄
he said, and i quote, that he “want(s) their lives to be meaningful as well as canonically plausible, in accord with their established personas.” except when there’s a canon gay pairing involved right? then that plausible in canon stuff goes right out the window! let’s just ignore the evidence that nigel could be gay, across years and across numerous episodes.
again, maybe this was not the case, but it sure comes off like it! his reasoning just seems blatantly homophobic since i don’t recall him making such a big to-do over anything else that happened in the show since he started the comic! 
on that note, there are zero characters in his comic who are lgbt+. across like 200 comics with such a huge cast of characters. like all he has are het couples or characters getting het crushes? and you could make someone ace, or non-binary?? but no?? 
even aside from all of that, there’s one arc where he brought bo baxter back to elwood city and rekindled things with bitzi which literally made zero sense to me. i have a much easier time seeing her get back with harry mills. they had way more chemistry and if i recall correctly, bitzi even told buster at the end of “bitzi’s break up” that the reason they split was bc she was too busy for a relationship and they had different schedules. that doesn’t mean she and harry couldn’t get back together in the future?? that seemed more in the realm of “canonically plausible” than bitzi suddenly falling back in love with her ex-husband she’s been divorced from for years 
idk this comic just doesn’t make much of an effort to show diversity and representation which is literally what arthur is all about ??? bitzi getting back with her ex-husband, so there’s no longer buster having divorced parents and living in a single-parent household (which was huge representation for me growing up as a kid, actually)...tbh i started losing interest in the comic when it went in that direction
but whatever i’m done lol TL;DR it’s homophobic, boring, and i don’t follow it anymore anyway 😂
4 notes · View notes
i-am-extremely-mad · 3 years
Text
It shocks me over and over again when I come across blogs that extremely aggressively, absolutely HATE LOK, Korra, korrasami and practically every character and aspect of the show. I have to share this horror with others because these are just a small part of the awful views from that blog (supposedly feminist and lesbian). Interestingly, attitudes about LOK and korrasami were mostly positive or at least neutral in 2014-2015, and then abruptly changed sometime in the middle of last year, which coincides with LOK finally being on Netflix. I will probably respond to it if I am in the mood for a toxic discussion...
Anonymous asked:
“I think it's a bit hypocritical that you hate Korra's personality and not Zuko's.Zuko is arrogant asshole bitch and you like him. I never see you criticizing him like you do with Korra”
“Zuko is an arrogant asshole bitch, but he’s not annoying. Korra is an arrogant asshole bitch and is very annoying. Hope this helps!”
#asks#anti lok#going to absolutely BLOW YOUR MIND to find out that the quality of the media itself determines how much I like a character#as well as the quality of the characters development#also this isn't math there is no transitive property for liking characters#some hit and some don't#get over it#Anonymous
Anonymous asked:
“As soon as I heard “I’m the Avatar; you’ve gotta deal with it!” I knew I would fucking hate that show. I naturally hate people who are like that. If Bryke was still smart they would have thought to make Korra’s personality more like water similar to Aang with air, not “haha fuck you, I’m avatar haha!”
“LOL YEP like 3 seconds into the show you hear that, and understand EXACTLY what the rest of LOK is going to be like. Not only is a jarring contrast to Aang and every other Avatar we’ve seen, it directly contradicts everything we know about the Avatar cycle from ATLA. All the other Avatars have to be TOLD that they are the Avatar, and have to work hard to master their non-native elements. Korra just naturally being able to bend 3 elements when she’s like 5 tells you everything you need to know about how the creators of LOK went about making their show: worldbuilding and logic don’t matter, it’s all about flashy visuals and one-time gags.”
#asks#anti lok#DISGOSTING#'meh meh if korra was a MAN you wouldn't call her arrogant' I absolutely would#korra being a dickhead is not okay just because she's a woman#Anonymous
Anonymous asked:
“Korrasami is shit,a joke, boring af, they don't have romantic chemistry, asami acts like a big sister towards korra. there I said it for you.”
“OOP! Well, I certainly didn’t say it!”
#asks#anti lok#but ur right#ACTUALLY I disagree on one point#asami doesn't act like a sister to korra#they act like work colleagues that only ever hang out during their lunch break#they act like very distant cousins that only talk on facebook#they act like people that share mutual friends but don't know each other that well#okay I'll stop#Anonymous
“Korra: 1/10, I will see myself out the door to be CANCELLED! Not only was her character very unlikeable, but the way fandom reared up to defend this (quite frankly) terrible character under the guise of “wokeness” when it is clear that the creators sprinkled in just enough ~representation~ to get brownie points without actually saying anything meaningful is just EMBARRASSING. Korra defenders are being manipulated by those cishet white men they hate so much, and they do it gladly. Anyway, I find Korra boring, disrespectful, and underdeveloped.”
#asks#ask game#character ask game#anti lok#SORRY YALL LOK'S CHARACTERS ARE BAD#also korra gives off 'mean feminine lesbian who calls gnc women slurs' vibes#korra and asami would bully me and then call me a homophobe#and kuvira gives off such heterosexuelle vibes I simply CANNOT with her#thetpot
“IT’S SO VILE! Korra is barely even an active character in her own show! She’s just a vessel that gets beaten and broken over and over again. She doesn’t actually get to LEARN from any mistakes that she makes, she’s just forced to recover from these external traumas that have literally nothing to do with her.
Ugh, tbh I feel NOTHING for korrasami. Korra and Asami don’t speak about anything except Mako for most of the show, and only really start actually TALKING to each other in the last half of season 4. None of Korra’s friends really spend that much time together throughout the runtime of the show tbh.
But yeah, it’s frustrating that people tout LOK as this amazing show staring a queer WOC, but the people making the show HATED Korra and HATED developing her in a meaningful way.”
Anonymous asked:    
“Korra was like Zuko at the beginning of the show, now she in season 4 is like Aang. Bryke gave kuvira a redemption bc team avatar was missing a Zuko. now she is the new zuko and not Korra.”
Sorry, my brain short circuited. You think Korra???? Is like Aang???? That might be the most offensive thing I have ever received in this askbox.
#asks#anti lok#KORRA IS LIKE AANG#IN WHAT UNIVERSE#HOW DARE YOU INSULT MY BOY LIKE THIS#I WON'T STAND FOR IT#Anonymous
“also lock me in lesbian prison but korrasami is WEAK! they didn’t have a single conversation that wasn’t about mako for 3.5 seasons!!! they had zero moments together to indicate that asami would be the only person that korra would write to!!! yall tricked me, I thought I was getting some gay shit.
#anti lok#I SAID WHAT I SAID#korra had more chemistry in her one scene with opal than she ever did with asami”
Anonymous asked:
What do you think of korrasami?
no thank u, I don’t feel like being called a homophobe by a bunch of straight women today.
#asks#anti lok#a hornet's nest I will not be swinging at on this Monday lmao#I hate everything in lok you do the math#I'm sure I've talked about my issues with korrasami on my blog SOMEWHERE#have fun!#Anonymous
Not me seeing posts giving LOK and Korrasami credit for queerness in animation when Steven Universe, Adventure Time, and She-Ra were doing it unapologetically, openly, right from the very beginning....
#anti lok#TESTING MY GODDAMN PATIENCE#if korrasami was individually influential for you as a queer woman that's FINE#but do NOT give this insane credit to the cishet writing team of LOK!!!#not when these other shows were made by ACTUAL QUEER WOMEN#DISGOSTING
Anonymous asked:
if ur looking for an actual well-written canon wlw pairing in the atla verse, there’s rangshi. fc yee works so hard to fix all of bryke’s garbage, bless his soul. i have no hope for anything avatar studios related, but if fc yee is in the writer’s room, then there may be a very marginal chance that the stuff coming out is at least somewhat worthy of being associated with atla. the worldbuilding that he’s done in rise of kyoshi is insane.
I have heard good things about the Kyoshi novels! Unfortunately, LOK is the drop of shit that has poisoned the entire water supply. All ATLA-related works are going to have to be LOK compliant now, which is so deeply restrictive and contradictory to what I liked about ATLA in the first place. I feel like pre-canon stuff is safer (and again, heard AMAZING things about what FC Yee has done with a pre-ATLA world), but I guess I’m too cynical to get really invested in any more ATLA stuff anymore.
#asks#atla#anti lok#put Nat in charge of Avatar Studios and THEN we'll talk#finally get the thotty aang and amazing worldbuilding THAT WE DESERVE#Anonymous
I know, this was awful to see...
11 notes · View notes
vldkeith · 2 years
Note
I remember reading the klible and even though I wasn’t sold that they would be canon it personally gave me a much deeper understanding of Keith and Lance’s beautiful relationship and i also learned a lot about creating meaningful relationships between story characters through film/media strategies from it. the klible did give people false hope but I think it honestly did klance justice better than the show ever could. I wanted to reread it but the author (they were also a super cool Keith Kin like you) took it down after the show went south 😔 honestly the show should have taken s4-8 down instead cuz it’s an embarrassment to the klible✋🙄
yeah im def not arguing that the klible wasnt the Better Representation of klance because like, obviously it was. but i think the willingness to put so much trust into the cishet writers/creators was naive and only set people up for heartache. theres def a place for hope in media engagement--hope for a better ending, hope for something meaningful--but too much only gives the creators more money and leverage to disappoint...i think we need to adopt a mentality of forcing writers to prove themselves to us before we begin these intense analyses and discussions bc this "they're on our side until they show us they're not" helps nobody except those on top making the bad decisions
5 notes · View notes
theeggoman · 4 years
Photo
Tumblr media
My Hero AU where everything stays the same except Bakugou and Izuku are girls. That’s it. They’re girls, nothing else is gender bent, Bakugou’s name doesn’t change, their personalities don’t change, Deku’s crush on Uraraka doesn’t change, Deku’s nickname doesn’t change now it’s just meaner because it’s not even a pun, everything’s the same it’s just. Girls.
Oh also they’re lesbians and they fall in love but yeh everything else is the same just BNHA lesbian AU
Bakugou’s insecurity is driven by fear that she’ll be seen as weak for being a woman, and her anger issues are heavily driven by how stupidly sexualized the female pro heroes are in the media and having to work twice as hard to earn her place. Izumi’s awkwardness and self doubt is deepened by the impossible standards of living up to All Might’s greatness while always being underestimated bc of her gender. Bakugou pushes Deku away because of her crush on her, which she doesn’t understand and is terrified of, so it comes out as internalized homophobia in the form of bullying. Give me Deku who doesn’t even realize she likes Kacchan and just thinks all girls look at each other’s butts and want to kiss each other and it’s normal until Kirishima gets super close with Bakugou and she’s suddenly terrified that they’ll start dating and “oh shit I’ve been in love with Kacchan since I was five and WOW I guess kissing practice with Melissa on I-Island actually DID count as my first kiss” bc she’s a chronic dumbass. Give me terrible wingmen Todoroki and Iida. Give me terrible wingmen Kirishima and Mina. Give me LESBIANS.
MINI rant real quick: I know that I’m projecting. But honestly, I don’t want ANYTHING about the story to drastically change, bc Horikoshi has already created the perfect set up for these scenarios I want to see. Think about how much more meaningful this story could be. I think about this all the time. About how much more I would enjoy the story if I saw a girl overcoming the same obstacles that Izuku does, if I saw a girl portrayed with the fire and loud mouthed passion of Bakugou. I think about how Uraraka, who was once a badass get rich quick schemer who nearly took down Bakugou, has become nothing but a love interest. I think about how Mina has an incredible design and a tough as nails personality while maintaining her bubbly cute pink nature and how she gets NO screen time. I think about every half assed female character, not just from My Hero, but from Marvel and DC and every Anime other than Sailor Moon, who’s only there for the male character to fall for, or for the fans to jerk off to, or to get killed off for the revenge plot to begin. I think about the Bechdel Test and how fucking depressing those standards are, the BARE MINIMUM that so few things pass, and I feel like screaming because I fall in love with these male characters while grasping at desperate straws for any kind of female representation, MUCH LESS lesbian representation. I want a story where girls are the lead, where they fall in love and don’t die or get aids or cheat on their husbands before inevitably going back to them. I want superheroes that are as amazing as batman, who aren’t written as women, but who just HAPPEN to be women.
I want women to stop being “women” and start being people.
748 notes · View notes
Text
(not a positive post about the casting)
read at your own risk!
the CW wanted to take a risk without taking any risk at all--which is why they cast Ruby Rose, the most photogenic, exotic wlw who can still appeal to the straight men and women who like to fetishize queer women.
it demonstrates that their concept of having an “out lesbian with an outspoken passion for social justice” may be their contribution to progressive representation, but it is clouded with artifice. whatever action they try to suit to those words in their production/casting decisions is performative.
instead of having a sizable passionate audience of queer people who would support this show unequivocally, they want to make a tastefully suggestive queer show that comic fanboys and straight girls can claim ownership of. this is the DEFINITION of how trying to please everyone will ultimately please no one.
except it will please some people, namely the casual (straight) viewers of the CW. the CW has a history of exploiting the LGBT+ community and this is no exception; it is a casting decision that on paper looks like a “win” for the LGBT+ (and I am in no way diminishing the relevance of Ruby Rose’s sexual or gender identity to discredit her) community but in reality is most beneficial to the casual, socially/politically uninvested straight men and women who have proven themselves to be the popular viewership of the Arrowverse .
if you want to make a show about a queer woman that feels genuine and not like publicity stunt politics, don’t half-ass it. and before you come at me by saying that the CW is a business and of course they need mass market appeal, let me refute that with all of Black Panther. something genuinely and passionately made by members of that community for that community, with full commitment and zero half-assery, and with respectful and nuanced handling of political issues can have massive commercial success and still be incredibly meaningful for its marginalized community. BP was one hell of a love letter to African culture, pulled no punches, and made $1.3 billion.
batwoman could be so much more than another forgettable CW superhero show if they cared enough to try, and that’s the worst part.
((this is not hate on Ruby Rose, only on the CW! she seems like she’s fine as a person [?], and you can like her even tho she can’t act! that’s perfectly acceptable. this also does not take into account religious erasure and Kate Kane’s Jewish identity which is something I will not comment on bc I am not Jewish))
4 notes · View notes
gotgifsandmusings · 7 years
Note
the way you guys handled the racism part of the podcast was just. awful i couldnt even finish the rest of the podcast bc i was so offput. expected better from you :/
I’m so sorry to hear that, seriously.
I don’t want to hide behind excuses; if our tone or words were hurtful, that’s the way of it, and all I can do is apologize for it and learn why. It was not our intent, and as we said at the start of it, we’re more than open to a dialogue.
I’ve received positive and negative feedback for pretty much every portion of the podcast, however (it’s not like “oh yay, person X agrees so we’re fine!” or anything, of course), and I do think there’s some value in digging into that.
Julia and I tend to be more forgiving of Martin, not that we’re asking anyone else to be. And given the virulence with which we go after D&D, I understand how hypocritical this can come across as, and how frustrating this can be too. But the reason we are is basically two-fold:
One is that we believe there’s a value to his books. Now, there’s also a value to the political discussion on Bill Maher’s show, for example, but yet amazingly, decent political commentary shouldn’t come with a side-serving of Islamophobia. I don’t watch his show, so why should I accept and praise books that don’t handle race well? That don’t handle female sexuality that amazingly, particularly in the case of wlw scenes? That oftentimes do feel like the sexual violence could be easily toned down, or it’s unnecessarily gendered, or it does fall into unfortunate patterns with things like dead mothers?
The answer to that ties into the second reason, which is that his pattern is getting better. FeastDance felt more thoughtful, felt like there was more of an emphasis on female and other marginalized voices, and it felt like there was great intentionality on Martin’s part to do so. I haven’t read all his interviews; I can’t guess at what’s in his head beyond what his body of work shows us. But you can bet that if he was coming across as someone who was unwilling to reflect and engage with his own shortcomings, I wouldn’t be as invested.
I could be wrong about him. I’ve said this a lot before, but I could be really, really wrong. For now, he has my benefit of the doubt. I’m not asking you to bestow yours.
Back to the problems at hand though, and the value of his books. No, they’re not perfect at all. There’s a lot of issues, and these are issues that a more intersectional author likely wouldn’t have. To be perfectly honest, I think we’re starting to have a tendency of expecting perfection in every area from our media now. While I love that we’re finally in a place where our cultural dialogue is pushing for the change we want, and that storytellers are actually listening (look at like, Clexacon’s mere existence, for instance), I think this can easily become a double edged sword, where you’ve got the fandom raising pitchforks about Steven Universe not doing well with butch representation.
ASOIAF is no SU. It’s a book series written by a white dude in his 60s that spans twenty years. Which is why Julia and I put so much stock into the pattern and direction the books seem to be headed, because our social dialogue shifts so much. Well, depressingly not as much as it should, but I think it’s hard to deny that there is far less tolerance for bullshit in our media, and far more expectations of representational media that are not just once again glorifying the white male lens. 
I don’t believe the book series simplistically does that at all. I find there to be feminist takeaways in Martin’s critique of the patriarchy, and in the way Martin holds up a lens to the bullshit assumptions by this society, which is one uncomfortably reflective of our own history (though certainly not highly accurately so). I wouldn’t say my willingness is to forgive the issues in the books, but more like say, “these are here, these are problems, but I still find this text valuable. I still find the close-POV different and worthwhile.”
I can’t speak for Julia, but I can at least say this is what we had hoped to convey in the podcast. I believe we failed spectacularly. I think our tendency not to plan or overly structure our episodes went heavily against us here. Everything we were saying was in a larger context of “and this is a problem,” but wow we really didn’t make that clear.
What we did was basically raise the problems in turn, talk about what we think his intent was and what its function in the story has been, and then conclude on “this could have been better,” which after you know…like ten minutes of what probably sounds like rationalizations was not exactly going to come across as particularly meaningful. Had we structured more, I think we could have been clearer about “and it did not land.”
Showing Dany as completely unable to comprehend the political situation she was in, and being over her head with the complexity, did *not* require a lack of Essosi POVs, even if we suspect that’s partially why Martin made that choice, for instance.
But of course that didn’t come across, especially when there were some downright flippant things said that we also didn’t clarify. Like Julia mentioning she didn’t want a Dothraki POV, probably because it’d be very close to one as distressingly violent and patriarchal as Vic, which is simply unpleasant to read (and I’m also not sure I agree; I would have loved Dany eating the heart from a POV of someone in the Dosh Khaleen, for instance).
We know each other well, and we know the intent and place we’re coming from when we’re saying something, so I think that led to us not explicating stuff that absolutely needed to be explicated. Again, there’s no excuse. I wish we had planned  and presented everything differently, and it seems pretty obvious now how badly we needed to do that. I’ve learned a lot just in the past day, and all I can do is try to be better.
However, I will say…I suspect there’s also going to be content disagreements in the conclusions Julia and I land on. I’ve seen this with the fandom dialogue about the issues of sexism in the books before, and we’ve often received criticism for defending how he writes the patriarchy and women. Or for how women in the past basically are these pure, idealized victims, or they’re forgotten. We believe that’s to a point most of the time, that being one that provides a fuller picture of Westeros’s bullshit patriarchy (unnamed Mama Martell as an exception because there’s no reason for that at all), but we know it’s a point that doesn’t land.
Then there’s stuff like Arianne’s ‘hypersexuality’, which I simply don’t agree with. In my view, and something Gretchen and I were just discussing, Cersei is far more sexualized (she just tends to view sex from a manipulative standpoint always, instead of deriving pleasure from it, Jaime aside which is clearly unhealthy), and the degree to which this is a problem for a Dornish POV to have these traits (which I think is played up in the fandom) is one where I part ways with a lot of people. I can’t answer how I’d feel about it if I weren’t white, so I do my best to acknowledge that lens whenever I can. But in general, from what I can tell, my lens is also just a bit less Doylist than where some land.
And that’s fine, too. We’re all just engaging with the books how we like to do, and taking from it what’s there for us. There’s no objective takeaways, and not to belabor the point, but I could be so wrong about these books.
Why am I all Doylist with D&D? Because Watsonian analysis is useless in GoT, sure, but because they’ve violated my trust and my benefit of the doubt so thoroughly. I’m not there with Martin, and maybe that’s a problem. I suspect I might even be too Watsonian for my own good because of how engaged I find myself with certain aspects. Half of why we recorded that podcast was to kind of slap ourselves in the face with some Doylist realities, but I do now think the tone ended up being too dismissive, and I don’t feel good about it.
Anyway, this is just a super long-winded apology, as well as a meek explanation I suppose. Certainly not an excuse. This episode was requested a lot for us, probably because of how defensive of the books we get, and I feel like in our attempt to talk every angle of the issue, we ended up just coming across as doubling down on that defense. Moving forward you can bet your ass I’m going to be far more cognizant of this.
What’s funny is, feeling defensive actually wasn’t my experience at all recording it. Hell, even just pulling your asks for it, I was like, “wow this all really sucks,” and found myself getting a good deal more nervous for TWOW coming out. Because…god…I think I might be wrong. I’m back in that place I was in during season 5 where I was wondering if Sansa was going to get raped by LF (obviously a different context than the show), or if we’re not supposed to see Tyrion’s misogyny.
I’m not ready to give up on Martin yet, but I’m sure as hell not asking anyone else to forgive him. And if nothing else, I know now that at least a few takeaways we had were certainly not his intent, but the result of our own engagement and projections onto the media. I think I might be wrong (and where’s TWOW).
36 notes · View notes
knightofbalance-13 · 7 years
Text
https://rwbycrit.tumblr.com/post/165522291637/knightofbalance-13
Really shouldn’t keep doing this: Makes you look bad.
i’ll admit: i didn’t read much of this bullshit bc i couldn’t make it more than like 3 lines in
Manipulative language and outright showing you don’t know what you are talking about.
1. idk why you keep bringing your lgbt friends up? they didn’t make this post you did, and, as far as i know, you’re heterosexual? so me saying you’re heterosexual is not me not considering a person h*mosexual if they disagree with me lmao. hope you stretched  before you reached tho.
That would be a good point...If I was talking about myself. the opinions I expressed are those I share with my friends and they have even more severe opinions as well. you disregarding it TWICE now, on the assumption that a single heterosexual is saying this despite being told numerous times that is not the case, is you saying that their opinions aren’t valid or that they don’t count. The only one stretching is the person refusing to read.
1a. “when a vats [sic] majority of homosexuals don’t acre [sic] and equal number of people who call it outright oppose you. All stuff I’ve seen and noted from my friends. ” you have like 6 lgbt friends, that’s not a “majority of h*moseuxals” lmao
Too bad the stuff that applies to is the part NOT saying “the vats majority of homosexuals” but rather the “equal number” part. Maybe you should read before you comment. That, or have a higher reading comprehension than a five year old.
2. you called me a homophobe? even tho i cant? be a homophobe? because i’m literally gay? like i, a lesbian, cannot be homophobic. i can only have internalized homophobia and cannot perpetuate homophobia in any meaningful way as an out, trans lesbian. how many times do i gotta say that, dude?
Too bad you don’t need to perpetuate homophobia TO BE a homophobe. You just need to think that all homosexuals are bad or, in your case, interchangeable IE acting like only people who agree with you can be LGBT.
And just because you’re gay doesn’t mean you can’t be a homophobe. It’s called Boomerang bigot and it happens: Sort of like saying “I’m white so my opinion doesn’t matter” is being racist because you are devaluing all white people for being white.
2b. i don’t only consider people gay if they agree with me? i consider people gay if they’re gay? and you are not? ???? i’ll have an actual convo with a gay person about something we disagree with, but you, sir, are not gay. and i’m not interested in your hetero opinions on gay issues.
Really? @mageknight14 @ula-star @icindernikos @rainbowloliofjustice @tumblezwei Why don’t you guys go ahead and talk for yourselves. But let me guess, if they agree with they obviously don’t count, Am I right?
3. what the fuck is a “gay ship tease” if not “hinting at a q*eer character when there is none?”Â
Gay ship tease can happen  even if there is a different LGBT character in the show. You can have gay ship tease if say, Ruby is asexual. It isn’t queerbaiting because there's already an LGBT character. If there wasn't one then it might be queerbaiting if there is no LGBT character at the end. Ya know, all stuff I explained in my original post.
3a. “heterosexual ships get teased all the time and never come true”. this is literally “what if a mouse said this to a kia sorento?????” its not the same, dumbass. there’s no such thing as “heterobaiting”, which is basically what you’re claiming here, bc heterosexual rep is gonna happen, there’s no doubt about it. lgbt rep is not the same as hetero rep bc lgbt rep is so rare and even moreso rarely done well.
Actually, you’re the one that analogy applies to because I never once talked about representation: Just ships. So: Moving The Goalposts.
3b. you tried to disprove q*eerbaiting by citing Sherlock… one of the most particularly cited examples of q*eerbaiting….. as well as well-known homophobic show… seeing as they made Irene Alder a lesbian,,, and then made her attracted to Sherlock for whatever reason…… and then they had Moriarty, who was q*eercoded and would perfectly fit in among Hays Code era q*eercoded villains. lmao you really tried it tho?
Proof? No? Well, considering your tendency to scream homophobia at stuff you don’t like, your word is worthless here.
4. ,and here’s where i really lost it,: cishet is not a slur. the fact that i actually have to type that out and tell you that is hilarious to me. “cishet” is shorthand for “cisgender heterosexual”. “cishet” has never been used in the derogatory sense, except maybe as a joke on tumblr among lgbt people as a coping mechanism, but cishet people have never been oppressed for being cishet.
And yet you use it to devalue my opinion, shut me up and debunk me on WHAT I am. Not to mention how you ignore my sources and bring up none of your own.
cishet has never been used as an actual slur. it is in no way a slur. ‘q*eer’ on the other hand, is absolutely a slur. it has a long history of being a slur. it’s not even just an antiquated slur. i myself, a 21 yr old living in a large city, have been called ‘q*eer’ as a slur.
And you used cishet in the same way: Your point being?
some people consider ‘q*eer’ to be a reclaimed slur, but not all of us. i don’t like saying it bc i has been used against me. kob, you have no right to use the term “q*eer” as you cannot reclaim it. no amount of lgbt friends will ever change that.
I wasn’t reclaiming that: I was just treating you as you treat me. If you see it as a slur, then most likely you know you were using Cishet as a slur ergo you have no right to complain.
i can’t take this dude seriously. kob really thinks homophobia is “demonized” in society? this society, where “gay panic” is a legitimate, and legal defense in all US states minus California and Illinois? where 2016 was the deadliest year for the lgbt community? where the worst mass shooting in American history was at the Pulse Nightclub in 2016? and people wouldn’t even admit that it was about killing gay people?
because some people WOULD try to force it: some gay people are like that. Are all of them? FUCK NO, I would bet money that they don’t even make up 1% but it happens and can happen. I’m not gonna defend them if they try calling that to get away but I will listen and look at the facts. Not that I am happy about THIS at all but I have to be somewhat professional here.
And mind telling me WHERE most LGBT people died? Because if it’s in the Middle East, that isn’t America, which is what we’re talking about.
And one shooting done by one person is not homophobia on a country or worldwide scale. The guy is seen as a monster and is heavily hated and looked down upon and the event is treated as tragic. If homophobia WAS okay, no one would have talked about this or looked into it.
And what people? Who? Proof?
yeah, it’s 2017. and homophobia is still alive and well. kob here prove it.
When did you change your name to “Kob?” I mean, I’m not the one who disregards LGBT people unless they agree with me or think LGBT people are so fragile they need every ship granted to them.
4 notes · View notes
tak4hir0 · 5 years
Link
Transformers from scratch

Transformers are a very exciting family of machine learning architectures. Many good tutorials exist (e.g. [1, 2]) but in the last few years, transformers have mostly become simpler, so that it is now much more straightforward to explain how modern architectures work. This post is an attempt to explain directly how modern transformers work, and why, without some of the historical baggage.

I will assume a basic understanding of neural networks and backpropagation. If you'd like to brush up, this lecture will give you the basics of neural networks and this one will explain how these principles are applied in modern deep learning systems. A working knowledge of Pytorch is required to understand the programming examples, but these can also be safely skipped.

Self-attention

The fundamental operation of any transformer architecture is the self-attention operation. We'll explain where the name "self-attention" comes from later. For now, don't read too much into it.

Self-attention is a sequence-to-sequence operation: a sequence of vectors goes in, and a sequence of vectors comes out. Let's call the input vectors \(\mathbf{x}_1, \mathbf{x}_2, \ldots, \mathbf{x}_t\) and the corresponding output vectors \(\mathbf{y}_1, \mathbf{y}_2, \ldots, \mathbf{y}_t\). The vectors all have dimension \(k\). To produce output vector \(\mathbf{y}_i\), the self-attention operation simply takes a weighted average over all the input vectors

$$ \mathbf{y}_i = \sum_{j} w_{ij} \mathbf{x}_j, $$

where the weights sum to one over all \(j\). The weight \(w_{ij}\) is not a parameter, as in a normal neural net, but it is derived from a function over \(\mathbf{x}_i\) and \(\mathbf{x}_j\). The simplest option for this function is the dot product:

$$ w'_{ij} = {\mathbf{x}_i}^T \mathbf{x}_j. $$

This gives us a value anywhere between negative and positive infinity, so we apply a softmax to map the values to \([0, 1]\) and to ensure that they sum to 1 over the whole sequence:

$$ w_{ij} = \frac{\exp w'_{ij}}{\sum_{j'} \exp w'_{ij'}}. $$

And that's the basic operation of self-attention.

[Figure: a visual illustration of basic self-attention. Note that the softmax operation over the weights is not illustrated.]

A few other ingredients are needed for a complete transformer, which we'll discuss later, but this is the fundamental operation. More importantly, this is the only operation in the whole architecture that propagates information between vectors. Every other operation in the transformer is applied to each vector in the input sequence without interactions between vectors.

Understanding why self-attention works

Despite its simplicity, it's not immediately obvious why self-attention should work so well. To build up some intuition, let's look first at the standard approach to movie recommendation.

Let's say you run a movie rental business and you have some movies, and some users, and you would like to recommend movies to your users that they are likely to enjoy. One way to go about this is to create manual features for your movies, such as how much romance there is in the movie, and how much action, and then to design corresponding features for your users: how much they enjoy romantic movies and how much they enjoy action-based movies. If you did this, the dot product between the two feature vectors would give you a score for how well the attributes of the movie match what the user enjoys.
If the signs of a feature match for the user and the movie—the movie is romantic and the user loves romance or the movie is unromantic and the user hates romance—then the resulting dot product gets a positive term for that feature. If the signs don't match—the movie is romantic and the user hates romance or vice versa—the corresponding term is negative.

Furthermore, the magnitudes of the features indicate how much the feature should contribute to the total score: a movie may be a little romantic, but not in a noticeable way, or a user may simply prefer no romance, but be largely ambivalent.

Of course, gathering such features is not practical. Annotating a database of millions of movies is very costly, and annotating users with their likes and dislikes is pretty much impossible. What happens instead is that we make the movie features and user features parameters of the model. We then ask users for a small number of movies that they like and we optimize the user features and movie features so that their dot product matches the known likes.

Even though we don't tell the model what any of the features should mean, in practice, it turns out that after training the features do actually reflect meaningful semantics about the movie content.

[Figure: the first two learned features from a basic matrix factorization model. The model had no access to any information about the content of the movies, only which users liked them. Note that movies are arranged from low-brow to high-brow horizontally, and from mainstream to quirky vertically. From [4].]

See this lecture for more details on recommender systems. For now, this suffices as an explanation of how the dot product helps us to represent objects and their relations.

This is the basic principle at work in the self-attention. Let's say we are faced with a sequence of words. To apply self-attention, we simply assign each word \(t\) in our vocabulary an embedding vector \(\mathbf{v}_t\) (the values of which we'll learn). This is what's known as an embedding layer in sequence modeling. It turns the word sequence

$$\text{the}, \text{cat}, \text{walks}, \text{on}, \text{the}, \text{street}$$

into the vector sequence

$$\mathbf{v}_{\text{the}}, \mathbf{v}_{\text{cat}}, \mathbf{v}_{\text{walks}}, \mathbf{v}_{\text{on}}, \mathbf{v}_{\text{the}}, \mathbf{v}_{\text{street}}.$$

If we feed this sequence into a self-attention layer, the output is another sequence of vectors

$$\mathbf{y}_{\text{the}}, \mathbf{y}_{\text{cat}}, \mathbf{y}_{\text{walks}}, \mathbf{y}_{\text{on}}, \mathbf{y}_{\text{the}}, \mathbf{y}_{\text{street}}$$

where \(\mathbf{y}_{\text{cat}}\) is a weighted sum over all the embedding vectors in the first sequence, weighted by their (normalized) dot-product with \(\mathbf{v}_{\text{cat}}\).

Since we are learning what the values in \(\mathbf{v}_t\) should be, how "related" two words are is entirely determined by the task. In most cases, the definite article the is not very relevant to the interpretation of the other words in the sentence; therefore, we will likely end up with an embedding \(\mathbf{v}_{\text{the}}\) that has a low or negative dot product with all other words. On the other hand, to interpret what walks means in this sentence, it's very helpful to work out who is doing the walking. This is likely expressed by a noun, so for nouns like cat and verbs like walks, we will likely learn embeddings \(\mathbf{v}_{\text{cat}}\) and \(\mathbf{v}_{\text{walks}}\) that have a high, positive dot product together.

This is the basic intuition behind self-attention.
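To make that intuition concrete, here is a small numeric sketch of the basic operation (my own illustration, not part of the original post); the two-dimensional embedding values are made up purely for demonstration.

import torch
import torch.nn.functional as F

# Made-up 2-dimensional embeddings for a toy vocabulary (values are illustrative only).
v_the   = torch.tensor([0.1, -0.1])
v_cat   = torch.tensor([1.0,  0.8])
v_walks = torch.tensor([0.9,  1.1])

x = torch.stack([v_the, v_cat, v_walks])   # input sequence, shape (3, 2)

# Raw weights for the output at the position of "walks":
# dot products of v_walks with every vector in the sequence.
raw = x @ v_walks                          # shape (3,)

# Softmax turns the raw weights into positive values that sum to one.
w = F.softmax(raw, dim=0)

# The output for "walks" is the weighted average of all input vectors.
y_walks = (w[:, None] * x).sum(dim=0)

print(w)        # "cat" and "walks" get most of the weight, "the" very little
print(y_walks)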
The dot product expresses how related two vectors in the input sequence are, with "related" defined by the learning task, and the output vectors are weighted sums over the whole input sequence, with the weights determined by these dot products.

Before we move on, it's worthwhile to note the following properties, which are unusual for a sequence-to-sequence operation:

- There are no parameters (yet). What the basic self-attention actually does is entirely determined by whatever mechanism creates the input sequence. Upstream mechanisms, like an embedding layer, drive the self-attention by learning representations with particular dot products (although we'll add a few parameters later).
- Self-attention sees its input as a set, not a sequence. If we permute the input sequence, the output sequence will be exactly the same, except permuted also (i.e. self-attention is permutation equivariant). We will mitigate this somewhat when we build the full transformer, but the self-attention by itself actually ignores the sequential nature of the input.

In Pytorch: basic self-attention

What I cannot create, I do not understand, as Feynman said. So we'll build a simple transformer as we go along. We'll start by implementing this basic self-attention operation in Pytorch.

The first thing we should do is work out how to express the self-attention in matrix multiplications. A naive implementation that loops over all vectors to compute the weights and outputs would be much too slow. We'll represent the input, a sequence of \(t\) vectors of dimension \(k\), as a \(t\) by \(k\) matrix \(\mathbf{X}\). Including a minibatch dimension \(b\) gives us an input tensor of size \((b, t, k)\).

The set of all raw dot products \(w'_{ij}\) forms a matrix, which we can compute simply by multiplying \(\mathbf{X}\) by its transpose:

import torch
import torch.nn.functional as F

# assume we have some tensor x with size (b, t, k)
x = ...

raw_weights = torch.bmm(x, x.transpose(1, 2))
# - torch.bmm is a batched matrix multiplication. It
#   applies matrix multiplication over batches of
#   matrices.

Then, to turn the raw weights \(w'_{ij}\) into positive values that sum to one, we apply a row-wise softmax:

weights = F.softmax(raw_weights, dim=2)

Finally, to compute the output sequence, we just multiply the weight matrix by \(\mathbf{X}\). This results in a batch of output matrices \(\mathbf{Y}\) of size \((b, t, k)\) whose rows are weighted sums over the rows of \(\mathbf{X}\).

y = torch.bmm(weights, x)

That's all. Two matrix multiplications and one softmax gives us a basic self-attention.

Additional tricks

The actual self-attention used in modern transformers relies on three additional tricks.

1) Queries, keys and values

Every input vector \(\mathbf{x}_i\) is used in three different ways in the self-attention operation:

- It is compared to every other vector to establish the weights for its own output \(\mathbf{y}_i\).
- It is compared to every other vector to establish the weights for the output of the \(j\)-th vector \(\mathbf{y}_j\).
- It is used as part of the weighted sum to compute each output vector once the weights have been established.

These roles are often called the query, the key and the value (we'll explain where these names come from later). In the basic self-attention we've seen so far, each input vector must play all three roles. We make its life a little easier by deriving new vectors for each role, by applying a linear transformation to the original input vector.
In other words, we add three \(k \times k\) weight matrices \(\mathbf{W}_q\), \(\mathbf{W}_k\), \(\mathbf{W}_v\) and compute three linear transformations of each \(\mathbf{x}_i\), for the three different parts of the self-attention:

$$
\begin{align*}
\mathbf{q}_i &= \mathbf{W}_q \mathbf{x}_i & \mathbf{k}_i &= \mathbf{W}_k \mathbf{x}_i & \mathbf{v}_i &= \mathbf{W}_v \mathbf{x}_i
\end{align*}
$$

$$
\begin{align*}
w'_{ij} &= {\mathbf{q}_i}^T \mathbf{k}_j \\
w_{ij} &= \text{softmax}(w'_{ij}) \\
\mathbf{y}_i &= \sum_j w_{ij} \mathbf{v}_j.
\end{align*}
$$

This gives the self-attention layer some controllable parameters, and allows it to modify the incoming vectors to suit the three roles they must play.

[Figure: illustration of the self-attention with key, query and value transformations.]

2) Scaling the dot product

The softmax function can be sensitive to very large input values. These kill the gradient, and slow down learning, or cause it to stop altogether. Since the average value of the dot product grows with the embedding dimension \(k\), it helps to scale the dot product back a little to stop the inputs to the softmax function from growing too large:

$$ w'_{ij} = \frac{{\mathbf{q}_i}^T \mathbf{k}_j}{\sqrt{k}} $$

Why \(\sqrt{k}\)? Imagine a vector in \(\mathbb{R}^k\) with values all \(c\). Its Euclidean length is \(\sqrt{k}c\). Therefore, we are dividing out the amount by which the increase in dimension increases the length of the average vectors.

3) Multi-head attention

Finally, we must account for the fact that a word can mean different things to different neighbours. Consider the following example.

$$\text{mary}, \text{gave}, \text{roses}, \text{to}, \text{susan}$$

We see that the word gave has different relations to different parts of the sentence. mary expresses who's doing the giving, roses expresses what's being given, and susan expresses who the recipient is.

In a single self-attention operation, all this information just gets summed together. If Susan gave Mary the roses instead, the output vector \(\mathbf{y}_{\text{gave}}\) would be the same, even though the meaning has changed.

We can give the self-attention greater power of discrimination by combining several self-attention mechanisms (which we'll index with \(r\)), each with different matrices \(\mathbf{W}_q^r\), \(\mathbf{W}_k^r\), \(\mathbf{W}_v^r\). These are called attention heads. For input \(\mathbf{x}_i\) each attention head produces a different output vector \(\mathbf{y}_i^r\). We concatenate these, and pass them through a linear transformation to reduce the dimension back to \(k\).

In Pytorch: complete self-attention

Let's now implement a self-attention module with all the bells and whistles. We'll package it into a Pytorch module, so we can reuse it later.

[Figure: combining three attention heads into one matrix multiplication (for the queries).]

import torch
from torch import nn
import torch.nn.functional as F

class SelfAttention(nn.Module):
    def __init__(self, k, heads=8):
        super().__init__()
        self.k, self.heads = k, heads

We think of the \(h\) attention heads as \(h\) separate sets of three matrices \(\mathbf{W}_q^r\), \(\mathbf{W}_k^r\), \(\mathbf{W}_v^r\), but it's actually more efficient to combine these for all heads into three single \(k \times hk\) matrices, so that we can compute all the concatenated queries, keys and values in a single matrix multiplication.
        # These compute the queries, keys and values for all
        # heads (as a single concatenated vector)
        self.tokeys    = nn.Linear(k, k * heads, bias=False)
        self.toqueries = nn.Linear(k, k * heads, bias=False)
        self.tovalues  = nn.Linear(k, k * heads, bias=False)

        # This unifies the outputs of the different heads into
        # a single k-vector
        self.unifyheads = nn.Linear(heads * k, k)

We can now implement the computation of the self-attention (the module's forward function). First, we compute the queries, keys and values:

    def forward(self, x):
        b, t, k = x.size()
        h = self.heads

        queries = self.toqueries(x).view(b, t, h, k)
        keys    = self.tokeys(x).view(b, t, h, k)
        values  = self.tovalues(x).view(b, t, h, k)

The output of each linear module has size (b, t, h*k), which we simply reshape to (b, t, h, k) to give each head its own dimension.

Next, we need to compute the dot products. This is the same operation for every head, so we fold the heads into the batch dimension. This ensures that we can use torch.bmm() as before, and the whole collection of keys, queries and values will just be seen as a slightly larger batch. Since the head and batch dimension are not next to each other, we need to transpose before we reshape. (This is costly, but it seems to be unavoidable.)

        # - fold heads into the batch dimension
        keys    = keys.transpose(1, 2).contiguous().view(b * h, t, k)
        queries = queries.transpose(1, 2).contiguous().view(b * h, t, k)
        values  = values.transpose(1, 2).contiguous().view(b * h, t, k)

As before, the dot products can be computed in a single matrix multiplication, but now between the queries and the keys.

        # - get dot product of queries and keys, and scale
        #   (dividing by sqrt(k), as described in the scaling trick above)
        dot = torch.bmm(queries, keys.transpose(1, 2)) / (k ** 0.5)
        # - dot has size (b*h, t, t) containing raw weights

        dot = F.softmax(dot, dim=2)
        # - dot now contains row-wise normalized weights

We apply the self-attention to the values, giving us the output for each attention head.

        # apply the self attention to the values
        out = torch.bmm(dot, values).view(b, h, t, k)

To unify the attention heads, we transpose again, so that the head dimension and the embedding dimension are next to each other, and reshape to get concatenated vectors of dimension \(kh\). We then pass these through the unifyheads layer to project them back down to \(k\) dimensions.

        # swap h, t back, unify heads
        out = out.transpose(1, 2).contiguous().view(b, t, h * k)
        return self.unifyheads(out)

And there you have it: multi-head, scaled dot-product self-attention. You can see the complete implementation here.

Building transformers

A transformer is not just a self-attention layer, it is an architecture. It's not quite clear what does and doesn't qualify as a transformer, but here we'll use the following definition:

Any architecture designed to process a connected set of units—such as the tokens in a sequence or the pixels in an image—where the only interaction between units is through self-attention.

As with other mechanisms, like convolutions, a more or less standard approach has emerged for how to build self-attention layers up into a larger network. The first step is to wrap the self-attention into a block that we can repeat.

The transformer block

There are some variations on how to build a basic transformer block, but most of them are structured roughly like this: the block applies, in sequence, a self-attention layer, layer normalization, a feed-forward layer (a single MLP applied independently to each vector), and another layer normalization.
Residual connections are added around both, before the normalization. The order of the various components is not set in stone; the important thing is to combine self-attention with a local feedforward, and to add normalization and residual connections. Normalization and residual connections are standard tricks used to help deep neural networks train faster and more accurately. The layer normalization is applied over the embedding dimension only.

Here's what the transformer block looks like in pytorch.

class TransformerBlock(nn.Module):
    def __init__(self, k, heads):
        super().__init__()

        self.attention = SelfAttention(k, heads=heads)

        self.norm1 = nn.LayerNorm(k)
        self.norm2 = nn.LayerNorm(k)

        self.ff = nn.Sequential(
            nn.Linear(k, 4 * k),
            nn.ReLU(),
            nn.Linear(4 * k, k))

    def forward(self, x):
        attended = self.attention(x)
        x = self.norm1(attended + x)

        fedforward = self.ff(x)
        return self.norm2(fedforward + x)

We've made the relatively arbitrary choice of making the hidden layer of the feedforward 4 times as big as the input and output. Smaller values may work as well, and save memory, but it should be bigger than the input/output layers.

Classification transformer

The simplest transformer we can build is a sequence classifier. We'll use the IMDb sentiment classification dataset: the instances are movie reviews, tokenized into sequences of words, and the classification labels are positive and negative (indicating whether the review was positive or negative about the movie). The heart of the architecture will simply be a large chain of transformer blocks. All we need to do is work out how to feed it the input sequences, and how to transform the final output sequence into a single classification. The whole experiment can be found here. We won't deal with the data wrangling in this blog post. Follow the links in the code to see how the data is loaded and prepared.

Output: producing a classification

The most common way to build a sequence classifier out of sequence-to-sequence layers is to apply global average pooling to the final output sequence, and to map the result to a softmaxed class vector.

Overview of a simple sequence classification transformer. The output sequence is averaged to produce a single vector representing the whole sequence. This vector is projected down to a vector with one element per class and softmaxed to produce probabilities.

Input: using the positions

We've already discussed the principle of an embedding layer. This is what we'll use to represent the words. However, as we've also mentioned already, we're stacking permutation equivariant layers, and the final global average pooling is permutation invariant, so the network as a whole is also permutation invariant. Put more simply: if we shuffle up the words in the sentence, we get the exact same classification, whatever weights we learn. Clearly, we want our state-of-the-art language model to have at least some sensitivity to word order, so this needs to be fixed. The solution is simple: we create a second vector of equal length that represents the position of the word in the current sentence, and add this to the word embedding. There are two options.

Position embeddings

We simply embed the positions like we did the words. Just like we created embedding vectors \(\v_\bc{\text{cat}}\) and \(\v_\bc{\text{susan}}\), we create embedding vectors \(\v_\bc{\text{12}}\) and \(\v_\bc{\text{25}}\). Up to however long we expect sequences to get.
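In Pytorch, this boils down to a second embedding table, indexed by position rather than by token (a minimal sketch of my own; the sizes and names are assumed, and the full model below does the same thing):

import torch
from torch import nn

k, max_len, vocab = 128, 512, 10_000     # embedding dimension, longest expected sequence, vocabulary size (assumed)
token_emb = nn.Embedding(vocab, k)
pos_emb   = nn.Embedding(max_len, k)     # one learned vector per position

tokens = torch.randint(0, vocab, (16, 256))              # a batch of 16 sequences of length 256
b, t = tokens.size()
positions = torch.arange(t)                              # 0, 1, ..., t-1
x = token_emb(tokens) + pos_emb(positions)[None, :, :]   # (b, t, k): word identity plus position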
The drawback is that we have to see sequences of every length during training, otherwise the relevant position embeddings don't get trained. The benefit is that it works pretty well, and it's easy to implement.

Position encodings

Position encodings work in the same way as embeddings, except that we don't learn the position vectors, we just choose some function \(f: {\mathbb N} \to {\mathbb R}^k\) to map the positions to real valued vectors, and let the network figure out how to interpret these encodings. The benefit is that for a well chosen function, the network should be able to deal with sequences that are longer than those it's seen during training (it's unlikely to perform well on them, but at least we can check). The drawbacks are that the choice of encoding function is a complicated hyperparameter, and it complicates the implementation a little. For the sake of simplicity, we'll use position embeddings in our implementation.

Pytorch

Here is the complete text classification transformer in pytorch.

class Transformer(nn.Module):
    def __init__(self, k, heads, depth, seq_length, num_tokens, num_classes):
        super().__init__()

        self.num_tokens = num_tokens
        self.token_emb = nn.Embedding(num_tokens, k)
        self.pos_emb = nn.Embedding(seq_length, k)

        # The sequence of transformer blocks that does all the
        # heavy lifting
        tblocks = []
        for i in range(depth):
            tblocks.append(TransformerBlock(k=k, heads=heads))
        self.tblocks = nn.Sequential(*tblocks)

        # Maps the final output sequence to class logits
        self.toprobs = nn.Linear(k, num_classes)

    def forward(self, x):
        """
        :param x: A (b, t) tensor of integer values representing
                  words (in some predetermined vocabulary).
        :return: A (b, c) tensor of log-probabilities over the
                 classes (where c is the nr. of classes).
        """
        # generate token embeddings
        tokens = self.token_emb(x)
        b, t, e = tokens.size()

        # generate position embeddings
        positions = torch.arange(t)
        positions = self.pos_emb(positions)[None, :, :].expand(b, t, e)

        x = tokens + positions
        x = self.tblocks(x)

        # Average-pool over the t dimension and project to class
        # probabilities
        x = self.toprobs(x.mean(dim=1))

        return F.log_softmax(x, dim=1)

At depth 6, with a maximum sequence length of 512, this transformer achieves an accuracy of about 85%, competitive with results from RNN models, and much faster to train. To see the real near-human performance of transformers, we'd need to train a much deeper model on much more data. More about how to do that later.

Text generation transformer

The next trick we'll try is an autoregressive model. We'll train a character level transformer to predict the next character in a sequence. The training regime is simple (and has been around for far longer than transformers have). We give the sequence-to-sequence model a sequence, and we ask it to predict the next character at each point in the sequence. In other words, the target output is the same sequence shifted one character to the left: With RNNs this is all we need to do, since they cannot look forward into the input sequence: output \(i\) depends only on inputs \(0\) to \(i\). With a transformer, the output depends on the entire input sequence, so prediction of the next character becomes vacuously easy: just retrieve it from the input. To use self-attention as an autoregressive model, we'll need to ensure that it cannot look forward into the sequence. We do this by applying a mask to the matrix of dot products, before the softmax is applied. This mask disables all elements above the diagonal of the matrix.
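Before we look at the mask itself, here is a minimal sketch (mine, not from the post) of how the shifted input/target pairs for this training regime can be sampled from one long tensor of character indices:

import torch

def sample_batch(data, seq_length, batch_size):
    # data is a 1D tensor of character indices, e.g. the whole corpus mapped to ints
    starts = torch.randint(0, data.size(0) - seq_length - 1, (batch_size,))
    inputs  = torch.stack([data[s:s + seq_length]         for s in starts.tolist()])
    targets = torch.stack([data[s + 1:s + seq_length + 1] for s in starts.tolist()])  # same sequence, shifted one to the left
    return inputs, targets   # both of size (batch_size, seq_length)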
Masking the self attention, to ensure that elements can only attend to input elements that precede them in the sequence. Note that the multiplication symbol is slightly misleading: we actually set the masked out elements (the white squares) to \(-\infty\).

Since we want these elements to be zero after the softmax, we set them to \(-\infty\). Here's how that looks in pytorch (applied to the (b*h, t, t) matrix of raw weights, before the softmax):

indices = torch.triu_indices(t, t, offset=1)
dot[:, indices[0], indices[1]] = float('-inf')   # mask out everything above the diagonal

After we've handicapped the self-attention module like this, the model can no longer look forward in the sequence.

We train on the standard enwik8 dataset (taken from the Hutter prize), which contains \(10^8\) characters of Wikipedia text (including markup). During training, we generate batches by randomly sampling subsequences from the data. We train on sequences of length 256, using a model of 12 transformer blocks and an embedding dimension of 256. After about 24 hours of training on an RTX 2080Ti (some 170K batches of size 32), we let the model generate from a 256-character seed: for each character, we feed it the preceding 256 characters, and look at what it predicts for the next character (the last output vector). We sample from that with a temperature of 0.5, and move to the next character. The output looks like this:

1228X Human & Rousseau. Because many of his stories were originally published in long-forgotten magazines and journals, there are a number of [[anthology|anthologies]] by different collators each containing a different selection. His original books have been considered an anthologie in the [[Middle Ages]], and were likely to be one of the most common in the [[Indian Ocean]] in the [[1st century]]. As a result of his death, the Bible was recognised as a counter-attack by the [[Gospel of Matthew]] (1177-1133), and the [[Saxony|Saxons]] of the [[Isle of Matthew]] (1100-1138), the third was a topic of the [[Saxony|Saxon]] throne, and the [[Roman Empire|Roman]] troops of [[Antiochia]] (1145-1148). The [[Roman Empire|Romans]] resigned in [[1148]] and [[1148]] began to collapse. The [[Saxony|Saxons]] of the [[Battle of Valasander]] reported the y

Note that the Wikipedia link tag syntax is correctly used, and that the text inside the links represents reasonable subjects for links. Most importantly, note that there is a rough thematic consistency; the generated text keeps on the subject of the bible, and the Roman empire, using different related terms at different points. While this is far from the performance of a model like GPT-2, the benefits over a similar RNN model are clear already: faster training (a similar RNN model would take many days to train) and better long-term coherence. In case you're curious, the Battle of Valasander seems to be an invention of the network. At this point, the model achieves a compression of 1.343 bits per byte on the validation set, which is not too far off the state of the art of 0.93 bits per byte, achieved by the GPT-2 model (described below).

Design considerations

To understand why transformers are set up this way, it helps to understand the basic design considerations that went into them. The main point of the transformer was to overcome the problems of the previous state-of-the-art architecture, the RNN (usually an LSTM or a GRU). Unrolled, an RNN looks like this:

The big weakness here is the recurrent connection.
While this allows information to propagate along the sequence, it also means that we cannot compute the cell at time step \(i\) until we've computed the cell at time step \(i - 1\). Contrast this with a 1D convolution: In this model, every output vector can be computed in parallel with every other output vector. This makes convolutions much faster. The drawback with convolutions, however, is that they're severely limited in modeling long range dependencies. In one convolution layer, only words that are closer together than the kernel size can interact with each other. For longer dependencies we need to stack many convolution layers.

Transformers are an attempt to capture the best of both worlds. They can model dependencies over the whole range of the input sequence just as easily as they can for words that are next to each other (in fact, without the position vectors, they can't even tell the difference). And yet, there are no recurrent connections, so the whole model can be computed in a very efficient feedforward fashion.

The rest of the design of the transformer is based primarily on one consideration: depth. Most choices follow from the desire to train big stacks of transformer blocks. Note for instance that there are only two places in the transformer where non-linearities occur: the softmax in the self-attention and the ReLU in the feedforward layer. The rest of the model is entirely composed of linear transformations, which perfectly preserve the gradient. I suppose the layer normalization is also nonlinear, but that is one nonlinearity that actually helps to keep the gradient stable as it propagates back down the network.

Historical baggage

If you've read other introductions to transformers, you may have noticed that they contain some bits I've skipped. I think these are not necessary to understand modern transformers. They are, however, helpful to understand some of the terminology and some of the writing about modern transformers. Here are the most important ones.

Why is it called self-attention?

Before self-attention was first presented, sequence models consisted mostly of recurrent networks or convolutions stacked together. At some point, it was discovered that these models could be helped by adding attention mechanisms: instead of feeding the output sequence of the previous layer directly to the input of the next, an intermediate mechanism was introduced that decided which elements of the input were relevant for a particular word of the output.

The general mechanism was as follows. We call the input the values. Some (trainable) mechanism assigns a key to each value. Then to each output, some other mechanism assigns a query. These names derive from the data structure of a key-value store. In that case we expect only one item in our store to have a key that matches the query, which is returned when the query is executed. Attention is a softened version of this: every key in the store matches the query to some extent. All are returned, and we take a sum, weighted by the extent to which each key matches the query.

The great breakthrough of self-attention was that attention by itself is a strong enough mechanism to do all the learning. Attention is all you need, as the authors put it. The key, query and value are all the same vectors (with minor linear transformations). They attend to themselves, and stacking such self-attention layers provides sufficient nonlinearity and representational power to learn very complicated functions.
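To make the key-value analogy concrete, here is a minimal sketch (my own, not from the post) of attention as a "soft" key-value store: instead of returning the single value whose key matches the query, we return a weighted sum of all values, weighted by how well each key matches.

import torch
import torch.nn.functional as F

def soft_lookup(query, keys, values):
    # query: (k,), keys: (n, k), values: (n, k)
    scores = keys @ query                # one matching score per item in the store
    weights = F.softmax(scores, dim=0)   # every key matches the query to some extent
    return weights @ values              # weighted sum over all the values

out = soft_lookup(torch.randn(8), torch.randn(5, 8), torch.randn(5, 8))
print(out.shape)   # torch.Size([8])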
The original transformer: encoders and decoders

But the authors did not dispense with all the complexity of contemporary sequence modeling. The standard structure of sequence-to-sequence models in those days was an encoder-decoder architecture, with teacher forcing. The encoder takes the input sequence and maps it to a single latent vector representing the whole sequence. This vector is then passed to a decoder which unpacks it to the desired target sequence (for instance, the same sentence in another language). Teacher forcing refers to the technique of also allowing the decoder access to the input sentence, but in an autoregressive fashion. That is, the decoder generates the output sentence word for word, based both on the latent vector and the words it has already generated. This takes some of the pressure off the latent representation: the decoder can use word-for-word sampling to take care of low-level structure like syntax and grammar, and use the latent vector to capture more high-level semantic structure. Decoding twice with the same latent vector would, ideally, give you two different sentences with the same meaning.

In later transformers, like BERT and GPT-2, the encoder/decoder configuration was entirely dispensed with. A simple stack of transformer blocks was found to be sufficient to achieve state of the art in many sequence-based tasks. This is sometimes called a decoder-only transformer (for an autoregressive model) or an encoder-only transformer (for a model without masking).

Modern transformers

Here's a small selection of some modern transformers and their most characteristic details.

BERT was one of the first models to show that transformers could reach human-level performance on a variety of language based tasks: question answering, sentiment classification or classifying whether two sentences naturally follow one another. BERT consists of a simple stack of transformer blocks, of the type we've described above. This stack is pre-trained on a large general-domain corpus consisting of 800M words from English books (modern work, from unpublished authors), and 2.5B words of text from English Wikipedia articles (without markup). Pretraining is done through two tasks:

Masking

A certain number of words in the input sequence are either masked out, replaced with a random word, or kept as is. The model is then asked to predict, for these words, what the original words were. Note that the model doesn't need to predict the entire denoised sentence, just the modified words. Since the model doesn't know which words it will be asked about, it learns a representation for every word in the sequence.

Next sequence classification

Two sequences of about 256 words are sampled that either (a) follow each other directly in the corpus, or (b) are both taken from random places. The model must then predict whether a or b is the case.

BERT uses WordPiece tokenization, which is somewhere in between word-level and character-level sequences. It breaks words like walking up into the tokens walk and ##ing. This allows the model to make some inferences based on word structure: two verbs ending in -ing have similar grammatical functions, and two verbs starting with walk- have a similar semantic function. The input is prepended with a special token. The output vector corresponding to this token is used as a sentence representation in sequence classification tasks like the next sentence classification (as opposed to the global average pooling over all vectors that we used in our classification model above).
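Sketched in code, the masking task described above looks roughly like this (my own illustration; the 15%/80%/10%/10% proportions are the ones reported in the BERT paper, not something stated in this post):

import torch

def corrupt(tokens, mask_id, vocab_size, p=0.15):
    # pick roughly 15% of the positions; these are the words the model will be asked about
    chosen = torch.rand(tokens.shape) < p
    roll = torch.rand(tokens.shape)

    corrupted = tokens.clone()
    corrupted[chosen & (roll < 0.8)] = mask_id                # 80% of them: replaced by the mask token

    random_ids = torch.randint(0, vocab_size, tokens.shape)
    swap = chosen & (roll >= 0.8) & (roll < 0.9)
    corrupted[swap] = random_ids[swap]                        # 10%: replaced by a random word
                                                              # the remaining 10%: kept as is
    return corrupted, chosen   # the model only predicts the original words at the chosen positions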
After pretraining, a single task-specific layer is placed after the body of transformer blocks, which maps the general-purpose representation to a task-specific output. For classification tasks, this simply maps the first output token to softmax probabilities over the classes. For more complex tasks, a final sequence-to-sequence layer is designed specifically for the task. The whole model is then re-trained to finetune it for the specific task at hand.

In an ablation experiment, the authors show that the largest improvement as compared to previous models comes from the bidirectional nature of BERT. That is, previous models like GPT used an autoregressive mask, which allowed attention only over previous tokens. The fact that in BERT all attention is over the whole sequence is the main cause of the improved performance. This is why the B in BERT stands for "bidirectional". The largest BERT model uses 24 transformer blocks, an embedding dimension of 1024 and 16 attention heads, resulting in 340M parameters.

GPT-2 is the first transformer model that actually made it into the mainstream news, after the controversial decision by OpenAI not to release the full model. The reason was that GPT-2 could generate sufficiently believable text that large-scale fake news campaigns of the kind seen in the 2016 US presidential election would become effectively a one-person job.

The first trick that the authors of GPT-2 employed was to create a new high-quality dataset. While BERT used high-quality data (lovingly crafted books and well-edited wikipedia articles), this creates a certain lack of diversity in the writing style. To collect more diverse data without sacrificing quality, the authors used the social media site Reddit to find a large collection of writing with a certain minimum level of social support (expressed on Reddit as karma).

GPT-2 is fundamentally a language generation model, so it uses masked self-attention like we did in our model above. It uses byte-pair encoding to tokenize the language, which, like the WordPiece encoding, breaks words up into tokens that are slightly larger than single characters but smaller than entire words. GPT-2 is built very much like our text generation model above, with only small differences in layer order and added tricks to train at greater depths. The largest model uses 48 transformer blocks, a sequence length of 1024 and an embedding dimension of 1600, resulting in 1.5B parameters. They show state-of-the-art performance on many tasks. On the wikipedia compression task that we tried above, they achieve 0.93 bits per byte.

While the transformer represents a massive leap forward in modeling long-range dependencies, the models we have seen so far are still fundamentally limited by the size of the input. Since the size of the dot-product matrix grows quadratically in the sequence length, this quickly becomes the bottleneck as we try to extend the length of the input sequence. Transformer-XL is one of the first successful transformer models to tackle this problem. During training, a long sequence of text (longer than the model could deal with) is broken up into shorter segments. Each segment is processed in sequence, with self-attention computed over the tokens in the current segment and the previous segment. Gradients are only computed over the current segment, but information still propagates as the segment window moves through the text. In theory, at layer \(n\), information may be used from \(n\) segments ago.
A similar trick in RNN training is called truncated backpropagation through time. We feed the model a very long sequence, but backpropagate only over part of it. The first part of the sequence, for which no gradients are computed, still influences the values of the hidden states in the part for which they are.

To make this work, the authors had to let go of the standard position encoding/embedding scheme. Since the position encoding is absolute, it would change for each segment and not lead to a consistent embedding over the whole sequence. Instead they use a relative encoding. For each output vector, a different sequence of position vectors is used that denotes not the absolute position, but the distance to the current output. This requires moving the position encoding into the attention mechanism (which is detailed in the paper). One benefit is that the resulting transformer will likely generalize much better to sequences of unseen length.

Sparse transformers tackle the problem of quadratic memory use head-on. Instead of computing a dense matrix of attention weights (which grows quadratically), they compute the self-attention only for particular pairs of input tokens, resulting in a sparse attention matrix, with only \(n\sqrt{n}\) explicit elements. This allows models with very large context sizes, for instance for generative modeling over images, with large dependencies between pixels. The tradeoff is that the sparsity structure is not learned, so by the choice of sparse matrix, we are disabling some interactions between input tokens that might otherwise have been useful. However, two units that are not directly related may still interact in higher layers of the transformer (similar to the way a convolutional net builds up a larger receptive field with more convolutional layers). Beyond the simple benefit of training transformers with very large sequence lengths, the sparse transformer also allows a very elegant way of designing an inductive bias. We take our input as a collection of units (words, characters, pixels in an image, nodes in a graph) and we specify, through the sparsity of the attention matrix, which units we believe to be related. The rest is just a matter of building the transformer up as deep as it will go and seeing if it trains.

Going big

The big bottleneck in training transformers is the matrix of dot products in the self attention. For a sequence length \(t\), this is a dense matrix containing \(t^2\) elements. At standard 32-bit precision, and with \(t=1000\), a batch of 16 such matrices takes up about 250Mb of memory. Since we need at least four of them per self attention operation (before and after softmax, plus their gradients), that limits us to at most twelve layers in a standard 12Gb GPU. In practice, we get even less, since the inputs and outputs also take up a lot of memory (although the dot product dominates). And yet models reported in the literature contain sequence lengths of over 12000, with 48 layers, using dense dot product matrices. These models are trained on clusters, of course, but a single GPU is still required to do a single forward/backward pass. How do we fit such humongous transformers into 12Gb of memory? There are three main tricks:

Half precision

On modern GPUs and on TPUs, tensor computations can be done efficiently on 16-bit float tensors. This isn't quite as simple as just setting the dtype of the tensor to torch.float16. For some parts of the network, like the loss, 32 bit precision is required.
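As an illustration of the kind of bookkeeping involved, here is a minimal sketch using Pytorch's automatic mixed precision utilities (my example, not the author's; the tiny stand-in model and data are there only to make it runnable):

import torch
from torch import nn

model = nn.Linear(256, 2).cuda()                      # stand-in for a real transformer
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
scaler = torch.cuda.amp.GradScaler()                  # keeps small gradients from underflowing in 16-bit

for _ in range(10):
    inputs  = torch.randn(32, 256, device='cuda')
    targets = torch.randint(0, 2, (32,), device='cuda')

    optimizer.zero_grad()
    with torch.cuda.amp.autocast():                   # run the forward pass in 16-bit where that is safe
        loss = nn.functional.cross_entropy(model(inputs), targets)

    scaler.scale(loss).backward()                     # backpropagate a scaled version of the loss
    scaler.step(optimizer)                            # unscale the gradients and apply the update
    scaler.update()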
Most of this can be handled with relative ease by existing libraries, as in the sketch above. Practically, half precision doubles your effective memory.

Gradient accumulation

For a large model, we may only be able to perform a forward/backward pass on a single instance. Batch size 1 is not likely to lead to stable learning. Luckily, we can perform a single forward/backward for each instance in a larger batch, and simply sum the gradients we find (this is a consequence of the multivariate chain rule). When we hit the end of the batch, we do a single step of gradient descent, and zero out the gradient. In Pytorch this is particularly easy: you know that optimizer.zero_grad() call in your training loop that seems so superfluous? If you don't make that call, the new gradients are simply added to the old ones.

Gradient checkpointing

If your model is so big that even a single forward/backward won't fit in memory, you can trade off even more computation for memory efficiency. In gradient checkpointing, you separate your model into sections. For each section, you do a separate forward/backward to compute the gradients, without retaining the intermediate values for the rest. Pytorch has special utilities for gradient checkpointing. For more information on how to do this, see this blogpost.

Conclusion

The transformer may well be the simplest machine learning architecture to dominate the field in decades. There are good reasons to start paying attention to them if you haven't been already.

Firstly, the current performance limit is purely in the hardware. Unlike convolutions or LSTMs, the current limitations to what they can do are entirely determined by how big a model we can fit in GPU memory and how much data we can push through it in a reasonable amount of time. I have no doubt that we will eventually hit the point where more layers and more data won't help anymore, but we don't seem to have reached that point yet.

Second, transformers are extremely generic. So far, the big successes have been in language modelling, with some more modest achievements in image and music analysis, but the transformer has a level of generality that is waiting to be exploited. The basic transformer is a set-to-set model. So long as your data is a set of units, you can apply a transformer. Anything else you know about your data (like local structure) you can add by means of position embeddings, or by manipulating the structure of the attention matrix (making it sparse, or masking out parts). This is particularly useful in multi-modal learning. We could easily combine a captioned image into a set of pixels and characters and design some clever embeddings and sparsity structure to help the model figure out how to combine and align the two. If we combine the entirety of our knowledge about our domain into a relational structure like a multi-modal knowledge graph (as discussed in [3]), simple transformer blocks could be employed to propagate information between multimodal units, and to align them, with the sparsity structure providing control over which units directly interact.

So far, transformers are still primarily seen as language models. I expect that in time, we'll see them adopted much more in other domains, not just to increase performance, but to simplify existing models, and to allow practitioners more intuitive control over their models' inductive biases.

References

[1] The illustrated transformer, Jay Alammar.
[2] The annotated transformer, Alexander Rush.
[3] The knowledge graph as the default data model for learning on heterogeneous knowledge, Xander Wilcke, Peter Bloem, Victor de Boer.
[4] Matrix factorization techniques for recommender systems, Yehuda Koren et al.
0 notes
worldmotorbike · 6 years
Text
Top Guidelines For 2017 On Elegant Tactics For Horoscope
Uranian astrology (โหราศาสตร์ ยูเรเนียน)
Although we pride ourselves on our astrological expertise and intuitive insights, these things are of no use unless giving you a big crazy dose of assistance. It is believed that the position of stars and planets at the time phenomena affecting entire human, animal, or plant populations. Greek 'Kris' for Aries, Hindi circle would appear as a succession of signs rising one after the other above the eastern horizon. Restaurants in a small eastern Anatolian town are offering free meals aspects together and formulate an idea of the individual's character traits. By the 1st century BC, there were two varieties of astrology, one using horoscopes to describe to the Chinese one. One of the greatest tools that medic astrology has sometimes consulted astrologers. A second is the prorogator, a point on the ecliptic that, travelling at the rate of one degree of oblique married women f... He recognised that the stars are much larger than the planets, and argued: And if you astrologers answer that it is precisely because of this what stirs his emotions up from the depths. For these reasons Thagard views positions for 00:01 and then for 23:59, which will give you this range. The bigger the heart, the and usually cast horoscopes for themselves. Camille Paglia acknowledges astrology as an influence on gaining an adequate representation of its Hellenistic originals only in the 15th and 16th centuries. Anna Maria Costa Ribeiro: Renewal or Downfall of Relationships - Research An Astrological Entertainment for orchestra without strings. In Simple words, Astrology is the study of the association between suffering he perpetrates. Marcantonio Raimondi engraving, 15th century The word astrology comes from the early Latin word astrologia faith and a deeper understanding of the world we live in. Middle English astrologies, from Middle French, from Latin astrologia, from Greek, from astr- + logia -logy when seen from the Earth, is termed the Full Moon`. The role of the divine in astrological Ptolemy lived in Alexandria. Astrology entered Islamic civilization in the 8th and 9th centuries scholars, by suggesting that the Will of God can be known and predicted in advance. A similar set of special relations was also assumed by those whose evidence that rationalizes their emotional bent. The astrological texts of the Roman Empire were written almost universally in Greek rather than in Latin; the only surviving exceptions are the poem astronomic of revise the astrological hypothesis in a meaningful way. :424; There is no proposed mechanism of action by which the positions and motions of stars and planets could of birth dates by parents rather than any issue with the study by Gauquelin. Since about 100 Ac the above method has been the essential procedure of astrology, though various refinements and additional devices occasionally have been introduced, including horoscopes cast and sought advice and predictions. P.S. practised astrology, as did the quack doctor Simon Norman. They added as significant elements the nakshatras (or lunar mansions ), an elaborate system of three categories of togas (or planetary combinations), dozens De Magnis Coniunctionibus argued the view that both individual actions and larger scale history are determined by the stars.
Plain Advice On Deciding Upon Elements In [astrology]
youtube
A Click Away From Sensible Methods
It may not be out of place to mention here that we are an inseparable part of a large living order, where no individual could evade the influence of the overall environmental conditions. Even more so, because every individual born into this world is unique, each manifesting varying desire and mind trends, often at variance with others. Evidently, the course of life becomes subject to lot of variables, many of them unforeseen, and therefore, it is often faced with lot of twists and turns. No wonder, the architects of the discipline of astrology brought into reckoning three factors for interpreting a chart — Kala (environmental condition), Patra (Individual personality trends) and Samaya (destiny indicating time frame). Imagine a situation when the economic condition of a country is going through low phase. Or for that matter, the overall work atmosphere is passing through a turbulent time such as during riots, workers strike, or civil unrest. No matter, how good your luck may be, the gains out of your efforts can’t be on expected lines. Again, the fruits of actions are also dependent on the quality of efforts you put in, which is subject to your habits and attitudes.  Also, how intelligently you deal with the challenges coming in the way. Here comes the role of one’s personality trends. You just paid attention to the destiny indicating time frame. You should have at least paid attention to the implications of your personality trends, if not the overall environmental condition.  That makes a dispassionate look into the astrological pointers to the man’s personality traits imperative.
For the original version including any supplementary images or video, visit http://www.dailypioneer.com/sunday-edition/agenda/Spirituality/astroturf--astrology-calls-for-a-holistic-look.html
Horoscope reading by date of birth (ดูดวงวันเดือนปีเกิด)
0 notes
knightofbalance-13 · 7 years
Text
It’s Only Homophobic If I Don’t Like It/All LGBT People Must Think Like ME!
https://rwbycrit.tumblr.com/post/163790412947/speaking-over-lgbt-people-totally-okay-sarcasm
AKA Double Standard the post.
oh boy is there a lot to dissect in this one. first, hey dude, are you lgbt in any way, shape, or form? at all? if not, shut the fuck up
1. Actually, yes I am. I occupy a weird space between heterosexual and demisexual to my neurological condition. So you’re first point is immediately invalidated. And in fact, due to your accusations, your entire post either falls because of this...or you yourself are homophobic.
2. Yeah....and yet when Dudeblade does this later, you don’t call it out at all. You let it go even though, by your logic, he’s doing the exact same thing as me. But since it’s what you want, it’s not homophobic. so screw any LGBT people who didn’t agree with you huh?
there is…. a lot of casual homophobia here. 1. “you people” 2. “bitching about every straight couple” 3. “screeching homophobia” like. slow down, dude. the post has only just begun and you’re being homophobic.
Yeah...except that I call out Dudeblade in the exact same manner as this person, group him in with the "you people" and talk about him with the others, and in fact most of my anger is on him...and he's straight. Now what do they actually have in common? Being RWDE posters. So either you're an idiot or you are willfully misrepresenting what I say.
there you go with that "you people” bs again. 1. calling the lgbt community “toxic” is literal homophobic rhetoric. 2. you say blame rt like it’s not their fault…. are they not to blame for making the conscious decision to not include even one (1) lgbt character in 5 years/4 volumes lmao?
1. Look at the point above: You just look homophobic for thinking that the LGBT community is a hivemind.
2. Did they ever SAY when the character was coming? No? Then they don’t have a deadline to meet ergo they can take their sweet ass time doing it, as creators are entitled to do.
1. it’s been five (5) years, my dude. we’re way past “not immediately. 2. lgbt characters are not something lgbt viewers should have to sit and wait for, or ”“deserve”“ (in I believe Monty’s words”). they are not prizes.
And you are not LGBT viewers: You are RWDE viewers, which is not wholly made up of LGBT people. Funny how most of your arguments fall apart by looking at the context, huh?
this is so homophobic lmao i can’t believe im reading it with my own two fucking eyes. 1. “everyone blames the lgbt community” everyone does that anyway 2. “you are that insecure about yourselves that everything must mirror you” you’re the one witting a fucking essay about how shows don’t need to have an lgbt character and can be filled to the brim with cishets, my dude. it looks like one of us is the insecure one, and it’s not the lesbian lmao 3. also nice blaming the lgbt community for homophobia. like, thanks homophobe, never heard that one before
1. Actually, a few of your members (read: RWDE) have said that being straight is dragging the show down so that’s a lie.
2. Well, I'm not the one taking your words out of context and making everything fall apart simply by existsing and context. If you were so secure about yourself, you wouldn’t NEED an LGBT character in your show. I usually don’t give a shit about a person or character’s sexuality unless it has massively creepy undertones (like Puri Puri Prisoner from One Punch Man. Seriously, fuck that guy, going around attacking and molesting people just because he’s stronger. Hero my fucking ass.)
3. Once again, you are displaying the thought process of “Every LGBT person MUST think like em or else they are not LGBT.” AKA LGBT people are not like normal people and are a hivemind. Which, like I have said in this very post, is homophobic.
1. MILES also made this promise, you dumb fuck 2. they do have the obligation to include an lgbt character lmao 3. no they don’t. 4. calling lgbt people abusers lmao. love that Homophobia™ 5. what actions and how are they irredeemable………….. ?????? calling rt out on not including an lgbt character in 5 years despite promising us and then making me sit thru the ‘Life and Times of Jaune Arc, Sad Heterosexual Boy’? get fucked
1. Proof. 2. Not anymore: You people (read; RWDE) have pretty much shown that if you are given anything, you will tear everything to shreds so if they never do it: You have only your selves to blame. 3. Ah so they are not allowed to do anything you don’t want them to do. Sounds familiar... (https://helpguide.org/articles/abuse/domestic-violence-and-abuse.htm) 4. Nope, calling RWDE absuers: Glad to see you cannot comprehend that LGBT people are, shock, people with different ideas because sexuality is mostly a non factor in how a person is. 5. Give me a minute: https://knightofbalance-13.tumblr.com/post/163768158700/hey-remember-what-a-mirror-is (A gleeful assortment of what just one person has done.) https://bluepulserjaime.tumblr.com/post/163547055676/so-just-because-jaune-was-in-a-dress-and-camp (trying ti remove freedom of speech. Also he said that the CRWBY are only being praised because they are white. http://dudeblade.tumblr.com/post/163589308451/just-because-jaune-was-in-a-dress-and-camp-camp#notes Oh hey, you’re in there too!) https://sokumotanaka.tumblr.com/post/163181907124/okay-so (Supporting a guy who outright insulted Monty) https://rwby-analysis.tumblr.com/post/162751186437/ejladybug-its-pretty-low-to-accuse-someone-of (ATTACKING an LGBT person because, get this, they didn’t insult Miles and Kerry) That enough because I can go into the harassment, the bigotry, the double standard, the attacks against other LGBT people, the devaluing of human life and so on.
how are you even comparing critiquing a show and it’s writing to actual, real life ABUSE, you disgusting human being?
also, who the fuck is the victim in this? rt??????
Ah yes, critique...that consists of lying, msirepresneting and cheating...and attacking the writers while trying to get them fired so a woman can take over...constantly insulting them while saying you are their fans...then trying to humiliate them by making a spectacle and then blaming them...while treating them as a factory for LGBT characters or as robots that should only do as they say instead of people...Making the CRWBY hesitant to answer any and all questions that might set you off...as you control what they do, when they do it...Like an abuser (https://helpguide.org/articles/abuse/domestic-violence-and-abuse.htm)
So yes, yes they are.       
1. jaune asking weiss to the dance 2. weiss giving neptune Sad Heterosexual Looks bc she had a crush on him 3. pyrrha being a Sad Heterosexual bc she has an ~unrequited love~ for jaune 4. pyrrha literally kissing jaune 5. ren and nora’s background and Meaningful Heterosexual Looks, all but confirming renora 6. not to mention all the more one liners about how many characters are SO attracted to the 'opposite gender’ (ie qrow with the waitress)
1. ah yes, because it's bad if a heterosexual is having love woes. Doesn't THAT sound bigoted?
2. Ship sank and was used for character development. 3. Ship also sank and was used for character development. 4. Ship sank. 5. Because Asexuals, demisexuals and bisexuals don't exist apparently (and both Ren and Nora have aspects of the first two.) 6. Look at 5
yes, 'tis was I, the Homosexual, that was the homophobe all along! anyway, calling out homophobic/transphobic jokes is not homophobic, I can’t believe i actually have to say that.
Yes...except those jokes are not what you call them. By this logic, every joke ever made is some kind of phobic and thus humor as a whole shouldn’t exist. And since people make jokes about straight and cisgender people, logically they are allowed to make the same jokes for LGBT people. That IS what equality means.
“the lgbt community doesn’t deserve anything” ohhhhhh my god
"Only people who think like me are LGBT" Really, who's the homophobe here?
1. it’s probably because bumbleby is? the most popular? lgbt ship? in? the fandom?????? 2. im not even gonna TOUCH on asexuality and the lgbt community lmao 3. also like Bumbleby can, in fact, be a ship made up of not only two lesbians, but two bisexual women, a bisexual woman and a lesbian, a pansexual woman and a lesbian, or a bisexual woman and a pansexual woman. 4. just say you hate wlw and go!
1. And what about the people who don’t like Bumbleby and are LGBT? Like this person (https://darkvioletcloud.tumblr.com/post/163803510478/rwde-and-rwby-critical-is-making-me-hate-bumblebee) 2. Because if you don’t consider them LGBT because they aren’t you. 3. And you can have two gay men, one bisexual man and one gay man, one bisexual man and one pansexual man, one pansexual woman and one bisexual man, two pansexual men or woman, one pansexual man and woman, one bisexual man and woman. Bumbleby is not your only option for LGBT representation. Or are you just using the guise of representation to force your ship? Seems that way to me. 4. I don’t: I’m a White Rose Shipper. You’re the one who hates anything not wlw from your attitude.
1. im not going to comment about the suicide baiting bc i honestly know nothing about it so 2. didn’t you, earlier, in this very post, say treating the lgbt community like it was a hive mind is homophobic? and yet.
1. Another example of irredeemable shit. 2. I know one would because one HAS done it. My friends in this fandom are all LGBT and they have called this out too. Hell, the guy I linked to is bi AND trans. No one likes being used for another’s agenda. Also, you have been acting LIKE the LGBT community is a hivemind.  3. Dudeblade is straight and talking over LGBT people: He’s doing what you accused me of. And yet not a word against him.  Double Standard much?
1. renora was suggested from the beginning 2. pyrrha and jaune literally kissed on the mouth in literally the most cliche, heterosexual way possible
1. And by your logic, so were White Rose and Bumbleby: Not a counter. 2. Ah yes, because heterosexuality is inherently bad, like a cliché. Because homosexuality and its kin should just be marketing ploys to make something look good and original when it's actually shit. That's not homophobic at all.
1. i can’t believe one (1) man is doing all this! by just criticizing an internet show on a micro-blogging website. amazing. 2. this is now a joke post bc heterophobia ISNT REAL aldjajdjahskshh im literally close to tears rn fuck oh my god
1. More like a group of people encouraging a dangerous mindset and being toxic as shit by encouraging an abusive relationship between fans and creators. 2. Considering you used “cishet” which is a deragitory word used to devalue heterosexuals and cisgender people as if they are beneath you: You are a perfect example of heterophobia. Also: If heterophobia isn’t real then it’s counterpart isn’t real. That’s right. you just insinuated that HOMOPHOBIA isn’t real. Good job there. 3. What abouyt the homophobia argument? The sexism argument? The racism argument? Are those just accepted as truth? Well then: Glad to see you admit it.
not very logically indeed, kob
That’s a mirror you’re talking to dumbass.
16 notes · View notes