#lgbm
bitchboynoy · 4 months
Text
This was the first draft of Franky and Loid’s visit to the BDSM club.
The version I have now is more in character. I don’t think Loid would be this snobbish about sex, I don’t think Franky could trick Loid into going anywhere, Europe doesn’t have US tipping culture, etc.
But! I had a lot of fun writing this so I wanted to share anyway
Franky and Loid play off of each other really well and are really fun to write. In the future I’d like to write them platonically having sex bc idk I think it would be funny 🤷
Enjoy!
Loid regrets every scrap of trust he’s ever given Franky as he opens the doors of what can only be a sex club. Loid had guessed the address was going to take him to the seedier part of town, but Franky had laughed it off with a “Hey, trust me, a change of pace will be nice!”
His casual tone had Loid hoping that perhaps it was just a hole-in-the-wall bar with some unpleasant neighbors, but he now recognizes that the thought was foolishly optimistic. Apparently so was his assumption that he was merely entering a sex club, as he gives the bouncer the passphrase and Franky’s name and is led to a private table overlooking the center of the club.
This place must be a fetish sex club if the unusual devices, elaborate knots, and shiny leather are anything to go by. Loid gives the space a passing glance to ensure there are no threats and then focuses intently on not returning the occasional curious glance his stiff posture earns.
He arrives at the table Franky had reserved and gives the waitress a polite nod of thanks as he goes to sit down. She stays and looks at him expectantly until he realizes she is expecting a tip, and he quickly pulls out his wallet and gives her what is apparently the appropriate amount of dalcs.
“Oh good, you made it.” Franky says as Loid sits down primly, crossing his legs and shutting his eyes so he won’t glare at Franky as the other man attempts (poorly) to hide his amusement at Loid’s flustered interaction with the waitress.
“Franky, why the fuck are we here?” Loid asks, deciding that hiding his glare is a politeness Franky does not currently deserve.
“What do you mean?” Franky says with an affected air of casualness. “I thought a change of pace would be nice! Don’t you get tired of meeting at the same three places?”
Loid crosses his arms and stares evenly at Franky. The silent treatment is a sure-fire way to wear him down. It’s usually better to let Franky explain himself instead of succumbing to his goading.
“Fine, fine,” Franky starts, not letting Loid’s silent treatment stretch out long. “I honestly wasn’t even sure if you would show up.”
Loid’s face shifts with affront at the insult to his professionalism. He moves to smooth it out, but Franky catches the expression before it’s concealed.
“No offense!” Franky continues. “I know you would do anything for your mission, which is why I’m serious about the change of pace!”
Franky looks at him expectantly as he gestures to the activities below them. Loid is lost so he nods for Franky to keep going.
“Look,” Franky says, “I’m probably the closest thing you have to a friend and I’m honestly a little worried about you.”
The concern would almost be touching if Loid weren’t so confused and they weren’t in a fucking sex club.
“WISE is working you to the bone, and you’ve been managing so far, but it’s only a matter of time before something gives. When it does, someone is going to get seriously hurt, and you’ll be lucky if it isn’t you.”
Loid tips his head to concede the point. Since Operation Strix started there has scarcely been a moment where he isn’t Twilight the superspy or the head of the Forger household. Even his sparse free time is spent understanding Anya’s TV shows, reading up on parenting, and learning new recipes. He can’t recall the last time he wasn’t actively doing his job or holding up a front.
“So why are we here?” Loid asks, Franky’s explanation not connecting with their location.
“You need to find a way to let go, if you don’t you’re going to explode.” Franky pantomimes Loid’s explosion with his hands and little booming noises. “This is just one option of many.”
Franky points to a leather-clad woman flogging the man spread across her lap. Loid quickly looks away as the man lets out what he can only assume is a breathy moan, though it is lost in the noise of the crowd and the music.
“Is that the only reason you brought me here? No new intel or secret sources?” Loid asks, not bothering to hide his frustration. “No secret plots that would absolutely require me being here?”
“Well no, but I figured this-” Franky gestures around them and Loid knows better than to look this time. “-would kickstart your imagination, if only to help you think of something else that would help.”
Franky coughs, and Loid knows him well enough to recognize he’s hiding a snicker. Loid gives him a pointed look.
“I also thought it would be funny.” Franky admits before continuing. “I’m serious though! You need to find an outlet to relax!”
“Is that all?” Loid asks, straightening his posture. Franky looks disappointed but nods.
“Thank you for your concern, but I can take care of myself.” Loid stands up and buttons his suit jacket. “Good night, Franky. Let’s not do this again.”
Loid does not storm out, because he is an adult with self-control, but he does firmly reject another waitress’s offer of assistance with more harshness than necessary. Loid’s damned if he spends another fucking cent in this place, and he’s not getting tricked again, by Franky or anyone else.
2 notes · View notes
selkiecoded · 2 months
Text
i like legally blonde the musical :)
4 notes · View notes
scatterbrain-lion · 1 year
Text
tabular kaggle problems are so lame, you can always guarantee a top 10 result with just:
- baseline xgboost, catboost, lgbm
- import optuna to finetune the models’ hyperparameters and the ensemble weights
- profit
if you wanna go higher:
- take someone else’s notebook and add that model onto the ensemble
- wait for someone to outdo your model
- take their model and throw it in the ensemble too
- profit
i haven’t done any of these but if i had the patience i’d do it like 2-3 times just for the competition achievement
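(a rough sketch of that recipe, for anyone curious; synthetic data stands in for a real kaggle table, and the weights are tuned on a single validation split rather than proper out-of-fold predictions)

import numpy as np
import optuna
from catboost import CatBoostRegressor
from lightgbm import LGBMRegressor
from sklearn.model_selection import train_test_split
from xgboost import XGBRegressor

# synthetic placeholder data; swap in the real competition table
X, y = np.random.rand(1000, 10), np.random.rand(1000)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, random_state=0)

# step 1: baseline boosters with near-default params
models = [
    XGBRegressor(n_estimators=500).fit(X_tr, y_tr),
    CatBoostRegressor(iterations=500, verbose=0).fit(X_tr, y_tr),
    LGBMRegressor(n_estimators=500).fit(X_tr, y_tr),
]
preds = np.column_stack([m.predict(X_val) for m in models])

# step 2: let optuna search the blend weights
def objective(trial):
    w = np.array([trial.suggest_float(f"w{i}", 0, 1) for i in range(3)])
    w /= w.sum() + 1e-12  # normalize so the blend is a weighted average
    return float(np.mean((preds @ w - y_val) ** 2))  # validation MSE

study = optuna.create_study(direction="minimize")
study.optimize(objective, n_trials=200)
print(study.best_params)  # step 3: profit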
0 notes
nyagga · 7 years
Text
whom else saw lego batman???
4 notes · View notes
thegirlthatsketches · 7 years
Photo
I never thought I’d actually call a clown a favorite character....
I just saw this movie yesterday and I really enjoyed it. If you haven’t seen it yet, go see it, or once it comes to DVD, go buy it. The movie is great.
1 note · View note
albadelcesar · 4 years
Photo
at Playa Las Vistas - Tenerife https://www.instagram.com/p/CGNsVS-lgBM/?igshid=58sgu8d96jyx
1 note · View note
hustlerose · 6 years
Text
LGBM/S
54 notes · View notes
ijtsrd · 3 years
Photo
Drug Review Sentiment Analysis using Boosting Algorithms
by Sumit Mishra "Drug Review Sentiment Analysis using Boosting Algorithms"
Published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-5 | Issue-4 , June 2021,
URL: https://www.ijtsrd.com/papers/ijtsrd42429.pdf
Paper URL: https://www.ijtsrd.com/computer-science/artificial-intelligence/42429/drug-review-sentiment-analysis-using-boosting-algorithms/sumit-mishra
internationaljournalsinengineering, callforpaperengineering, ugcapprovedengineeringjournal
Sentiment analysis of reviews is important for understanding the positive or negative effect of some process, using the reviews people write after the experience. This study performs sentiment analysis of drug reviews given by patients after usage, using boosting algorithms in machine learning. The dataset used provides patient reviews of specific drugs, along with the condition the patient is suffering from and a 10-star patient rating reflecting patient satisfaction. Exploratory data analysis is carried out to gain more insight and engineer features, and preprocessing is done to get the data ready. The sentiment of each review is assigned according to the drug's rating. To classify the reviews as positive or negative, three classification models are trained (LightGBM, XGBoost, and CatBoost) and the feature importance is plotted. The results show that LightGBM is the best-performing boosting algorithm, with an accuracy of 88.89%.
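As a rough illustration of the pipeline the abstract describes (the CSV layout, the rating threshold for a positive label, and the TF-IDF features are all assumptions; the paper's actual feature engineering may differ):

import pandas as pd
from lightgbm import LGBMClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# hypothetical file with 'review' text and a 1-10 'rating' column
df = pd.read_csv("drug_reviews.csv")

# derive the sentiment label from the star rating, as the abstract describes
df["sentiment"] = (df["rating"] >= 6).astype(int)  # threshold is an assumption

# bag-of-words features standing in for the paper's feature engineering
X = TfidfVectorizer(max_features=5000, stop_words="english").fit_transform(df["review"])
X_tr, X_te, y_tr, y_te = train_test_split(X, df["sentiment"], random_state=0)

clf = LGBMClassifier().fit(X_tr, y_tr)
print("accuracy:", accuracy_score(y_te, clf.predict(X_te)))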
0 notes
selkiecoded · 7 months
Text
it is a legitimate problem by the way. my brain associates lgbm with utter obsession. ive listened to the soundtrack in full twice today.
4 notes · View notes
thewebexpansion · 3 years
Photo
A professional appearance and well-strategised branding will help the company build trust with consumers, potential clients and customers. People are more likely to do business with a company that has a polished and professional portrayal. Get in touch with us : WhatsApp Now.. +919477765460 #website #websitedevelopment #thewebexpansion #businessdevelopment #appdevelopment #onlinepresence #branding #onlinebranding #webdevelopment #websitedesigning #business #businessgrowth #ecommerce #ecommercebusiness #onlinebusiness #corporatedesign #businesscards #logowork #branddesign https://www.instagram.com/p/CRT_uv-LgbM/?utm_medium=tumblr
0 notes
watchilove · 3 years
Photo
When history triggers an idea for a unique gift

Caesar crossed the Rubicon. Alexander ventured into Asia Minor. Pancho Villa rode into Texas. The reason why the story of Pancho Villa resonated so uniquely with the creators behind The Unnamed Society is not just because it veers off the beaten path. Villa’s was a legend born at a time of great technological innovation: he was two years old when Alexander Graham Bell made the first phone call, and barely eleven when George Eastman invented the Kodak camera. By the time Pancho took center stage as a key player in the Mexican revolution, the power of communication through sound and image – of ‘the here and now immortalized’ – sparked a change that was both inevitable and unstoppable. There was no turning back the clock. Yet without these two aligning with a third invention that had already gained some momentum, there would arguably never have been a Pancho Villa. Samuel Colt had already been dead sixteen years when Pancho was born in 1878, but his invention, a ‘revolving gun design’, had already reshaped the American West and was beginning to shape the rest of the world… https://www.instagram.com/p/CIiEGA-LgBm/?igshid=h36645letnwd
0 notes
ispyspookymansion · 7 years
Text
.
0 notes
howlongisthedata · 4 years
Text
Deploy ML Models On AWS Lambda - Code Along
Introduction
The site Analytics Vidhya put out an article titled Deploy Machine Learning Models on AWS Lambda on Medium on August 23, 2019.  
I thought this was a great read, and thought I might do a code along and share some of my thoughts and takeaways!  
[ Photo by Patrick Brinksma on Unsplash ]
You should be able to follow along with my article and their article at the same time (I would read both), and my article should help you through theirs. Their section numbers don’t make sense (they use 2.3 twice), so I chose to use my own section numbers (hopefully you don’t get too confused there).
Let’s get going!
Preface - Things I Learned The Hard Way
You need to use Linux! 
This is a big one.  I went through all the instructions using a Windows OS thinking it would all work the same, but numpy on Windows is not the same as numpy on Linux, and AWS Lambda is using Linux.
After Windows didn’t work I switched to an Ubuntu VirtualBox VM only to watch my model call fail again!  (Note that the issue was actually with how I was submitting my data via cURL, and not Ubuntu, but more on that to come).  At the time I was able to deploy a simple function using my Ubuntu VM, but it seemed that numpy was still doing something weird.  I thought maybe the issue was that I needed to use AWS Linux (which is actually different from regular Ubuntu).
To use AWS Linux, you have a few options (from my research):
Deploy an EC2 instance and do everything from the EC2 instance
Get an AWS Linux docker image
Use Amazon Linux WorkSpace
I went with option #3.  You could try the other options and let me know how it goes.  Here is an article on starting an Amazon Linux WorkSpace:
https://aws.amazon.com/blogs/aws/new-amazon-linux-workspaces/
It takes a while for the workspace to become available, but if you’re willing to wait, I think it’s a nice way to get started.  It’s also a nice option if you’re running Windows or Mac and you don’t have a LOT of local memory (more on that in a minute).
Funny thing is that full circle, I don’t think I needed to use Amazon Linux.  I think the problem was with how I was submitting my cURL data.  So in the end you could probably use any flavor of Linux, but when I finally got it working I was using Amazon WorkSpace, so everything is centered around Amazon Linux WorkSpace (all the steps should be roughly the same for Ubuntu, just replace yum with apt or apt-get and there are a few other changes that should be clear).
Another nice thing about running all of this from Amazon WorkSpace is that it also highlights another service from AWS.  And again,  it’s also a nice option if you’re running Windows or Mac and you don’t have a LOT of local memory to run a VM.  VMs basically live in RAM, so if your machine doesn’t have a nice amount of RAM, your VM experience is going to be ssslllooowwwww.  My laptop has ~24GB of RAM, so my VM experience is very smooth.  
Watch out for typos!
I reiterate this over and over throughout the article, and I tried to highlight some of the major ones that could leave you pulling your hair out. 
Use CloudWatch for debugging!
If you’re at the finish line and everything is working but the final model call, you can view the logs of your model calls under CloudWatch.  You can add print statements to your main.py script and this should help you find bugs.  I had to do this A LOT and test almost every part of my code to find the final issue (which was a doozy).
Okay, let’s go!
#1 - Build a Model with Libraries SHAP and LightGBM.
The article starts off by building and saving a model.  To do this, they grab an open dataset from the package SHAP and use LightGBM to build their model.
Funny enough, SHAP (SHapley Additive exPlanations) is a unified approach to explain the output of any machine learning model.  It happens to also have datasets that you can play with.  SHAP has a lot of dependencies, so it might be easier to just grab any play dataset rather than installing SHAP if you’ve never installed SHAP before.  On the other hand, it’s a great package to be aware of, so in that sense it’s also worth installing (although it’s kind of overkill for our needs here).
Here is the full code to build and save a model to disk using their sample code:
import shap
import lightgbm as lgbm

# grab the Adult census dataset that ships with SHAP
data, labels = shap.datasets.adult()

params = {'objective': 'binary',
          'boosting_type': 'gbdt',  # note: 'booster_type' in the original article is not a LightGBM parameter
          'max_depth': 6,
          'learning_rate': 0.05,
          'metric': 'auc'}

dtrain = lgbm.Dataset(data, labels)

model = lgbm.train(params, dtrain, num_boost_round=100, valid_sets=[dtrain])

model.save_model('saved_adult_model.txt')
Technically you can do this part on any OS, either way I suggest creating a Python venv.
Note that I’m using AWS Linux, which is RedHat based so I’ll be using yum instead of apt or apt-get.  First I need python3, so I’ll run:
sudo yum -y install python3
Then I created a new folder, and within that new folder I created a Python venv.
python3 -m venv Lambda_PyVenv
To activate your Python venv just type the following:
source Lambda_PyVenv/bin/activate
Now we need shap and lightgbm.  I had some trouble installing shap, so I had to run the following first:
sudo yum install python-devel
sudo yum install libevent-devel
sudo easy_install gevent
Which I got from:
https://stackoverflow.com/questions/11094718/error-command-gcc-failed-with-exit-status-1-while-installing-eventlet
Next I ran:
pip3 install shap
pip3 install lightgbm
And finally I could run:
python3 create_model.py
Note: For creating and editing files I use Atom (if you don’t know how to install Atom, check out the documentation and just run the few lines of install commands from the command line).
#2 - Install the Serverless Framework
Next thing you need to do is open a new terminal and install the Serverless Framework on your Linux OS machine.  
Something else to keep in mind, if you try to test out npm and serverless from Atom or Sublime consoles, it’s possible you might hit errors.  You might want to use your normal terminal from here on out!
Here is a nice article on how to install npm using AWS Linux:
https://tecadmin.net/install-latest-nodejs-and-npm-on-centos/
In short, you need to run:
sudo yum install -y gcc-c++ make
curl -sL https://rpm.nodesource.com/setup_10.x | sudo -E bash -
sudo yum install nodejs
Then to install serverless I just needed to run 
sudo npm install -g serverless
Don’t forget to use sudo!
And then I had serverless running on my Linux VM!
#3 - Creating an S3 Bucket and Loading Your Model
The next step is to load your model to S3.  The article does this by command line.  Another way to do this is to use the AWS Web Console.  If you read the first quarter of my article Deep Learning Hyperparameter Optimization using Hyperopt on Amazon Sagemaker, I go through a simple way of creating an S3 bucket and loading data (or in this case, loading a model).
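If you’d rather stay on the command line like the original article does, the whole step boils down to something like the following (this assumes the bucket name used in my main.py later; the mb call creates the bucket if you don’t have one yet):

aws s3 mb s3://lambda-example-mjy1
aws s3 cp saved_adult_model.txt s3://lambda-example-mjy1/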
#4 - Create the main.py File
Their final main.py file didn’t totally work for me; it was missing some important stuff.  In the end my final main.py file looked something like:
import boto3
import lightgbm
import numpy as np

def get_model():
    # pull the saved model from S3 into Lambda's writable /tmp directory
    bucket = boto3.resource('s3').Bucket('lambda-example-mjy1')
    bucket.download_file('saved_adult_model.txt', '/tmp/test_model.txt')
    model = lightgbm.Booster(model_file='/tmp/test_model.txt')
    return model

def predict(event):
    samplestr = event['body']
    print("samplestr: ", samplestr)
    print("samplestr str:", str(samplestr))
    # parse the comma-separated string into a 1x12 numpy array
    sampnp = np.fromstring(str(samplestr), dtype=int, sep=',').reshape(1, 12)
    print("sampnp: ", sampnp)
    model = get_model()
    result = model.predict(sampnp)
    return result

def lambda_handler(event, context):
    result1 = predict(event)
    result = str(result1[0])  # body must be a string, or API Gateway returns errors
    return {'statusCode': 200, 'headers': {'Content-Type': 'application/json'}, 'body': result}
I had to include the ‘headers’ section and the body.  The body needed to be returned as a string in order to not get ‘internal server error’ type errors.
You’ll also notice that I’m taking my input and converting it from a string to a 2d numpy array.  I was feeding a 2d array, similar to the instructions, and this was causing an error.  This was the error that caused me to leave Ubuntu for AWS Linux, because I thought the error was still related to the OS when it wasn’t.  Passing something other than a string will do weird things.  It’s better to pass a string and then just convert the string to any format required.
(I can’t stress this enough, this caused me 1.5 days of pain).   
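As a quick sanity check before involving serverless at all, you can call lambda_handler directly with a fake event (a minimal smoke test; it assumes your AWS credentials are configured and the S3 bucket above exists):

# quick local smoke test for main.py; the event shape mimics what API Gateway sends
from main import lambda_handler

fake_event = {'body': '39,7,13,4,1,0,4,1,2174,0,4,33'}  # 12 comma-separated features
response = lambda_handler(fake_event, None)
print(response)  # expect statusCode 200 and a probability string in 'body'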
#5 - YML File Changes
Next we need to create our .yml file, which is required by the serverless framework and gives it the necessary instructions.
For the most part I was able to use their YML file, but below are a few properties I needed to change:
runtime: python3.7

region: us-east-1

deploymentBucket:
  name: lambda-example-mjy1

iamRoleStatements:
  - Effect: Allow
    Action:
      - s3:GetObject
    Resource:
      - "arn:aws:s3:::lambda-example-mjy1/*"
WARNING!!!
s3.GetObject should be s3:GetObject
Don’t get that wrong and spend hours on a blog typo like I did!
Tumblr media
If you need to find your region, you can use the following link as a resource:
https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Concepts.RegionsAndAvailabilityZones.html
Also, make sure the region you specify is the same region that your S3 bucket is located in.
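Since this section only shows the properties I changed, here is roughly what the assembled serverless.yml might look like.  Note that the service name, function name, handler path, and HTTP event are assumptions pieced together from the package name, the invoke command, and the cURL endpoint later in this post:

service: test-deploy

provider:
  name: aws
  runtime: python3.7
  region: us-east-1
  deploymentBucket:
    name: lambda-example-mjy1
  iamRoleStatements:
    - Effect: Allow
      Action:
        - s3:GetObject
      Resource:
        - "arn:aws:s3:::lambda-example-mjy1/*"

functions:
  lgbm-lambda:            # matches the -f flag in the invoke command below
    handler: main.lambda_handler
    events:
      - http:
          path: predictadult   # matches the /dev/predictadult cURL endpoint
          method: post

plugins:
  - serverless-python-requirements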
#6 - Test What We Have Built Locally
Before you test locally, I think you need your requirements.txt file.  For now you could just run:
pip freeze > requirements.txt
Now, if you’re like me and tried to run it locally at this point, you probably hit an error because they never walked you through how to add the serverless-python-requirements plugin!
I found a nice article that walks you through this step:
https://serverless.com/blog/serverless-python-packaging/
You want to follow the section that states ...
“ Our last step before deploying is to add the serverless-python-requirements plugin. Create a package.json file for saving your node dependencies. Accept the defaults, then install the plugin: “
In short, you want to:
submit ‘npm init’ from command line from your project directory
I gave my package the same name as my service name: ‘test-deploy’
The rest I just left blank i.e. just hit enter
Last prompt just enter ‘y’
Finally you can enter ‘sudo npm install --save serverless-python-requirements’ and everything so far should be a success.
Don’t forget to use sudo!
BUT WE’RE NOT DONE!!! (almost there...)
Now we need to do the following (if you haven’t already):
pip3 install boto3
pip3 install awscli
run “aws configure” and enter our Access Key ID and Secret Access Key
Okay, now if we have our main.py file ready to go, we can test locally (’again’) using the following command:
serverless invoke local -f lgbm-lambda --path data.txt
Where our data.txt file contains the sample passed as a string (since our main.py above expects a comma-separated string in the body, not the 2d array from the original article):

{"body": "39,7,13,4,1,0,4,1,2174,0,40,39"}
Also, beware of typos!!!!
Up to this point, make sure to look for any typos, like your model having the wrong name in main.py.  If you’re using their main.py file and saved your model with the name “saved_adult_model.txt”, then the model name in main.py should be “saved_adult_model.txt”.  Watch out for plenty of typos, especially if you’re following along with their code! They didn’t necessarily clean everything :(.  If you get stuck, there is a great video at the end of this blog that could help with typo checking.
#7 - The Moment We’ve All Been Waiting For, AWS Lambda Deployment!
Now we need to review our requirements.txt file and remove anything that doesn’t need to be in there.  Why?  Well, this is very important because AWS Lambda has a 262MB storage limit!  That’s pretty small!
If you don’t slim down your requirements.txt you’ll probably hit storage limit errors.
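For reference, the only third-party imports in main.py are boto3, lightgbm, and numpy, and boto3 already ships with the Lambda Python runtime, so a slimmed-down requirements.txt could plausibly be as small as:

lightgbm
numpy
scipy

(scipy tags along because lightgbm imports it; shap is only needed at training time, not at inference.)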
With that in mind, I recommend taking only the files and folders needed and moving them to some final folder location where you’ll run “serverless deploy”.  You should only need the following:
main.py  
package.json
requirements.txt
serverless.yaml
node_modules
package-lock.json
Now from command line within our final folder, we just run ..
serverless deploy
If all goes well, this should deploy everything!  This might take a minute, in which case get some coffee and enjoy watching AWS work for you!
And if everything went as planned, then we should be able to test our live deployment using cURL!
curl -X POST https://xxxxxxxxxx.execute-api.xx-xxxx-1.amazonaws.com/dev/predictadult -d "39,7,13,4,1,0,4,1,2174,0,4,33"
Hopefully in the end you receive a model response like I did!
Resources:
https://medium.com/analytics-vidhya/deploy-machine-learning-models-on-aws-lambda-5969b11616bf
https://serverless.com/framework/docs/providers/aws/guide/installation/
https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Concepts.RegionsAndAvailabilityZones.html
https://serverless.com/blog/serverless-python-packaging/
And here is a great live demo from PyData Berlin!
[embedded YouTube video]
0 notes
eurekakinginc · 5 years
Photo
"[D] Predicting Fantasy Football Points for German Bundesliga (kicker.de)"- Detail: Hey guys. This years fantasy football season is about to start. So I decided to do a little project and try to predict each players points for the upcoming season. At first I searched for other people who have done similar stuff. Most of it was in US Sports (NFL, NBA, MLB). One reason imo was the availability of data. Other projects on football were done for week to week predicitons, and mostly in the middle of the season. This is not what I am going for.Speaking of data availability. I had to put a lot of effort into getting historic fantasy points. But eventually I managed to obtain them by scraping the website (kicker.de) and recalculating the points from season 00/01 onwards (makes about ~9k datapoints). As for features I have, name, club, position and age. The target value is obv the fantasy score by the end of the season. The scores is heavily influenced by Rating (1.0-6, German school grading system. 1.0 best, 6 worst) and is done by editors of the newspaper. This is a human perception rating, not a statistic derived one!I used mainly sklearn. My metric to optimize is mean-squared-error (MSE) and I tried several linear regression methods (Lasso, ElasticNet and BayesianRidge). I got results around 2000 MSE. Some hyperparameter tuning later I got to ~1650.Then I thought of giving LGBM a go, but it was actually worse, even with hyperparameter tuning. I thought about trying LSTM, but I think the dataset is way to small for that.To be hones I am DISAPPOINTED with the results. ~1650 MSE is about 41 points of error on AVERAGE. Thats alot. So my next idea was to analyse if the model is bad on all the data, but good at particular areas. All this was done for the 2018/19 season.​left-to-right (Goalkeeper, Defenders, Midfield, Forward)This is the mean-error per position. GK seems to be really hard to predict. One thing thats common knowledge is that from all regular players goalies tend to score a lot of points on average. But ofc there is just one per team playing and they are less likely to get injured or even be left out because of fatigue. Defenders seem the easiest, Midfielders slightly worse and Forwards even more. Those positions are kind of reasonable good, but still not in the are were I would be confident.​y = mean error, x=amount of playersFor this one I put the data into bins of points (-75,250, 25). Real in this context is the actual points that were scored and predicted, the predicted one. So I basically wanted to see the point distribution of the real points and my predicted points. And as it turns out the model is putting way to many players in the (26,50) range. Bot distributions kind of have the same shape, but the spread for the predicted ones is not big enough.​y = mean error, x= bins of pointsSo my idea was, maybe the model is good in some point range. And schockingly, where most of our data for predictions is, there is most of our error. The model is quite good and predicting the top end players (150+ points) and quite good to sort out the garbage( <0 points), but inbetween its quite horrible.​x = teams , y = mean errorWe see now the error for all the teams, two suprising points are the very accurate ones. Augsburg and Nürnberg are in the bottom half of the table, no suprise. Nürnberg was a promoted team and got relegated instantly, Augsburg survived but also fought against relegation all season long. 
On the other end Borussia Dortmund played a quite good season, fighting for the Championship with Bayern. The big error I assume is because of several factors. First they bought some players that performed quite well with no historic data (Witsel, Paco, Hakimi) or just very few (Sancho, Akanji, Delaney). Secondly they performed all quite well and above league average. So teamwise there doesn't seem to be much insight imo.​https://i.redd.it/qo8wegslged31.pngSo I was interested if there is maybe some sort of correlation between age and points. Like very young players rarely score points, beginning to mid-late 20s is the prime. With eventual bumps and with the 30s they are declining. But as the boxplots suggest, the data is pretty much all over the place. We have huge variances right from the beginning. It really tones down in the 30s. Age also seems like a pretty bad feature to consider.Additionally, I plotted some age/pts graphs for specific players. There are veterans, even on world class level (Neuer), but also from midtable teams, that never made it to international level. Also players with injury issues (Reus, Bender) or people who seemed like good international material and then completely vanished. Also some One-Hit-Wonders. Just have a peek. I put them in an imgur album to not make this post even bigger (https://imgur.com/a/2kSVXdm)So my question. What would be your ideas to improve the performance? More data? Feature Engineering (but what?). Maybe train a regressor per position?P.S: if there is enough interest in the data and you guys wanna play around with it yourself I can publish it.. Caption by thecluelessguy90. Posted By: www.eurekaking.com
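(a quick sketch of the "regressor per position" idea from the last paragraph, with hypothetical column names standing in for the scraped kicker.de data:)

import pandas as pd
from sklearn.linear_model import BayesianRidge

# hypothetical layout of the scraped data: one row per player-season
df = pd.read_csv("kicker_fantasy.csv")  # columns: name, club, position, age, points

# fit an independent regressor for each position group
models = {}
for pos, grp in df.groupby("position"):
    X = pd.get_dummies(grp[["age", "club"]], columns=["club"])  # one-hot clubs, keep age numeric
    models[pos] = BayesianRidge().fit(X, grp["points"])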
0 notes